API Reference

This section provides comprehensive documentation for all differentiable CT operators in diffct. Each function is implemented as a PyTorch autograd Function, enabling seamless gradient computation through the CT reconstruction pipeline.

Overview

The diffct library provides six differentiable operators, a forward projector and a backprojector for each of three geometry types:

  • Parallel Beam (2D): Traditional parallel-beam CT geometry

  • Fan Beam (2D): Fan-beam geometry with configurable source-detector setup

  • Cone Beam (3D): Full 3D cone-beam geometry for volumetric reconstruction

Each geometry type includes both forward projection and backprojection operators that are fully differentiable and CUDA-accelerated.
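For quick reference, all six operators live in diffct.differentiable and are invoked through their static apply method, as is standard for PyTorch autograd Functions:

>>> from diffct.differentiable import (
...     ParallelProjectorFunction, ParallelBackprojectorFunction,
...     FanProjectorFunction, FanBackprojectorFunction,
...     ConeProjectorFunction, ConeBackprojectorFunction,
... )
>>> # Each operator is called as SomeFunction.apply(*args); the exact
>>> # argument lists are documented per class below.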

Parallel Beam Operators

The parallel beam geometry assumes parallel X-ray beams, as used in synchrotron CT and first-generation medical scanners.

class diffct.differentiable.ParallelProjectorFunction(*args, **kwargs)[source]

Bases: Function

Summary

PyTorch autograd function for differentiable 2D parallel beam forward projection.

Notes

Provides a differentiable interface to the CUDA-accelerated Siddon ray-tracing method with interpolation for parallel beam CT geometry. The forward pass computes the sinogram from a 2D image using parallel beam geometry. The backward pass computes gradients using the adjoint backprojection operation. Requires CUDA-capable hardware and a properly configured CUDA environment; all input tensors must reside on the same CUDA device.

Examples

>>> import torch
>>> from diffct.differentiable import ParallelProjectorFunction
>>>
>>> # Create a 2D image with gradient tracking
>>> image = torch.randn(128, 128, device='cuda', requires_grad=True)
>>> # Define projection parameters
>>> angles = torch.linspace(0, torch.pi, 180, device='cuda')
>>> num_detectors = 128
>>> detector_spacing = 1.0
>>> # Compute forward projection
>>> projector = ParallelProjectorFunction.apply
>>> sinogram = projector(image, angles, num_detectors, detector_spacing)
>>> # Compute loss and gradients
>>> loss = sinogram.sum()
>>> loss.backward()
>>> print(f"Gradient shape: {image.grad.shape}")  # (128, 128)
static backward(ctx, grad_sinogram)[source]

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
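To make this contract concrete, the following is a minimal toy autograd Function (a simple scaling op, not part of diffct) that follows the same pattern as the operators in this section: one gradient returned per forward() input, with None for non-tensor inputs:

>>> import torch
>>> class Scale(torch.autograd.Function):
...     @staticmethod
...     def forward(ctx, x, factor):
...         ctx.factor = factor  # stash non-tensor state for backward
...         return x * factor
...     @staticmethod
...     def backward(ctx, grad_output):
...         # One return value per forward() input: a gradient for x,
...         # None for the non-tensor factor argument.
...         return grad_output * ctx.factor, None
>>> x = torch.ones(3, requires_grad=True)
>>> Scale.apply(x, 2.0).sum().backward()
>>> print(x.grad)  # tensor([2., 2., 2.])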

static forward(ctx, image, angles, num_detectors, detector_spacing=1.0, voxel_spacing=1.0)[source]

Compute the 2D parallel beam forward projection (Radon transform) of an image using CUDA acceleration.

Parameters:
  • image (torch.Tensor) – 2D input image tensor of shape (H, W), must be on a CUDA device and of type float32.

  • angles (torch.Tensor) – 1D tensor of projection angles in radians, shape (num_angles,), must be on the same CUDA device as image.

  • num_detectors (int) – Number of detector elements in the sinogram (columns).

  • detector_spacing (float, optional) – Physical spacing between detector elements (default: 1.0).

  • voxel_spacing (float, optional) – Physical size of one voxel (in same units as detector_spacing, default: 1.0).

Returns:

sinogram – 2D tensor of shape (num_angles, num_detectors) containing the forward projection (sinogram) on the same device as image.

Return type:

torch.Tensor

Notes

  • All input tensors must be on the same CUDA device.

  • The operation is fully differentiable and supports autograd.

  • Uses the Siddon ray-tracing method with bilinear interpolation for accurate sampling.

Examples

>>> image = torch.randn(128, 128, device='cuda', requires_grad=True)
>>> angles = torch.linspace(0, torch.pi, 180, device='cuda')
>>> sinogram = ParallelProjectorFunction.apply(
...     image, angles, 128, 1.0
... )
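Because the backward pass is the adjoint backprojection, a useful sanity check is the dot-product (adjoint) test: for projector A and backprojector B, <Ax, y> should approximately equal <x, By>. A minimal sketch using ParallelBackprojectorFunction (documented below); agreement is approximate since the discrete operators are matched rather than exact adjoints:

>>> x = torch.randn(128, 128, device='cuda')
>>> y = torch.randn(180, 128, device='cuda')
>>> angles = torch.linspace(0, torch.pi, 180, device='cuda')
>>> Ax = ParallelProjectorFunction.apply(x, angles, 128, 1.0)
>>> By = ParallelBackprojectorFunction.apply(y, angles, 1.0, 128, 128)
>>> lhs = torch.dot(Ax.flatten(), y.flatten())
>>> rhs = torch.dot(x.flatten(), By.flatten())
>>> print(lhs.item(), rhs.item())  # should agree to within discretization error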
class diffct.differentiable.ParallelBackprojectorFunction(*args, **kwargs)[source]

Bases: Function

Summary

PyTorch autograd function for differentiable 2D parallel beam backprojection.

Notes

Provides a differentiable interface to the CUDA-accelerated Siddon ray-tracing method with interpolation for parallel beam backprojection. The forward pass computes a 2D reconstruction from sinogram data using parallel beam backprojection, and the backward pass computes gradients via forward projection as the adjoint operation. Requires CUDA-capable hardware and consistent device placement.

Examples

>>> import torch
>>> from diffct.differentiable import ParallelBackprojectorFunction
>>>
>>> sinogram = torch.randn(180, 128, device='cuda', requires_grad=True)
>>> angles = torch.linspace(0, torch.pi, 180, device='cuda')
>>> recon = ParallelBackprojectorFunction.apply(sinogram, angles, 1.0, 128, 128)
>>> loss = recon.sum()
>>> loss.backward()
>>> print(sinogram.grad.shape)  # (180, 128)
static backward(ctx, grad_output)[source]

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, sinogram, angles, detector_spacing=1.0, H=128, W=128, voxel_spacing=1.0)[source]

Compute the 2D parallel beam backprojection (adjoint Radon transform) of a sinogram using CUDA acceleration.

Parameters:
  • sinogram (torch.Tensor) – 2D input sinogram tensor of shape (num_angles, num_detectors), must be on a CUDA device and of type float32.

  • angles (torch.Tensor) – 1D tensor of projection angles in radians, shape (num_angles,), must be on the same CUDA device as sinogram.

  • detector_spacing (float, optional) – Physical spacing between detector elements (default: 1.0).

  • H (int, optional) – Height of the output reconstruction image (default: 128).

  • W (int, optional) – Width of the output reconstruction image (default: 128).

  • voxel_spacing (float, optional) – Physical size of one voxel (in same units as detector_spacing, default: 1.0).

Returns:

reco – 2D tensor of shape (H, W) containing the reconstructed image on the same device as sinogram.

Return type:

torch.Tensor

Notes

  • All input tensors must be on the same CUDA device.

  • The operation is fully differentiable and supports autograd.

  • Uses the Siddon ray-tracing method with bilinear interpolation for accurate sampling.

Examples

>>> sinogram = torch.randn(180, 128, device='cuda', requires_grad=True)
>>> angles = torch.linspace(0, torch.pi, 180, device='cuda')
>>> reco = ParallelBackprojectorFunction.apply(
...     sinogram, angles, 1.0, 128, 128
... )
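Because both operators are differentiable, they compose directly with standard PyTorch optimizers. A minimal iterative-reconstruction sketch (an assumed workflow built from the API above, with random data standing in for a measured sinogram):

>>> angles = torch.linspace(0, torch.pi, 180, device='cuda')
>>> sino_meas = torch.randn(180, 128, device='cuda')  # stand-in for measured data
>>> img = torch.zeros(128, 128, device='cuda', requires_grad=True)
>>> opt = torch.optim.Adam([img], lr=1e-2)
>>> for _ in range(200):
...     opt.zero_grad()
...     sino_est = ParallelProjectorFunction.apply(img, angles, 128, 1.0)
...     loss = torch.nn.functional.mse_loss(sino_est, sino_meas)
...     loss.backward()
...     opt.step()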

Fan Beam Operators

Fan beam geometry uses a point X-ray source with a fan-shaped beam, typical of medical CT scanners.

class diffct.differentiable.FanProjectorFunction(*args, **kwargs)[source]

Bases: Function

Summary

PyTorch autograd function for differentiable 2D fan beam forward projection.

Notes

Provides a differentiable interface to the CUDA-accelerated Siddon ray-tracing method with interpolation for fan beam geometry, where rays diverge from a point X-ray source to a linear detector array. The forward pass computes sinograms using divergent beam geometry, and the backward pass computes gradients via adjoint backprojection.

Examples

>>> import torch
>>> from diffct.differentiable import FanProjectorFunction
>>>
>>> image = torch.randn(256, 256, device='cuda', requires_grad=True)
>>> angles = torch.linspace(0, 2 * torch.pi, 360, device='cuda')
>>> sinogram = FanProjectorFunction.apply(image, angles, 512, 1.0, 1500.0, 1000.0)
>>> loss = sinogram.sum()
>>> loss.backward()
>>> print(image.grad.shape)  # (256, 256)
static backward(ctx, grad_sinogram)[source]

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, image, angles, num_detectors, detector_spacing, sdd, sid, voxel_spacing=1.0)[source]

Compute the 2D fan beam forward projection of an image using CUDA acceleration.

Parameters:
  • image (torch.Tensor) – 2D input image tensor of shape (H, W), must be on a CUDA device and of type float32.

  • angles (torch.Tensor) – 1D tensor of projection angles in radians, shape (num_angles,), must be on the same CUDA device as image.

  • num_detectors (int) – Number of detector elements in the sinogram (columns).

  • detector_spacing (float) – Physical spacing between detector elements.

  • sdd (float) – Source-to-Detector Distance (SDD). The total distance from the X-ray source to the detector, passing through the isocenter.

  • sid (float) – Source-to-Isocenter Distance (SID). The distance from the X-ray source to the center of rotation (isocenter).

  • voxel_spacing (float, optional) – Physical size of one voxel (in same units as detector_spacing, sdd, sid, default: 1.0).

Returns:

sinogram – 2D tensor of shape (num_angles, num_detectors) containing the fan beam sinogram on the same device as image.

Return type:

torch.Tensor

Notes

  • All input tensors must be on the same CUDA device.

  • The operation is fully differentiable and supports autograd.

  • Fan beam geometry uses divergent rays from a point source to the detector.

  • Uses the Siddon ray-tracing method with bilinear interpolation for accurate sampling.

Examples

>>> image = torch.randn(256, 256, device='cuda', requires_grad=True)
>>> angles = torch.linspace(0, 2 * torch.pi, 360, device='cuda')
>>> sinogram = FanProjectorFunction.apply(
...     image, angles, 512, 1.0, 1500.0, 1000.0
... )
class diffct.differentiable.FanBackprojectorFunction(*args, **kwargs)[source]

Bases: Function

Summary

PyTorch autograd function for differentiable 2D fan beam backprojection.

Notes

Provides a differentiable interface to the CUDA-accelerated Siddon ray-tracing method with interpolation for fan beam backprojection. Implements the adjoint of the fan beam projection operator, distributing sinogram values back into the reconstruction volume along divergent ray paths. The forward pass computes reconstruction from sinogram data, and the backward pass computes gradients via forward projection.

Examples

>>> import torch
>>> from diffct.differentiable import FanBackprojectorFunction
>>>
>>> sinogram = torch.randn(360, 512, device='cuda', requires_grad=True)
>>> angles = torch.linspace(0, 2 * torch.pi, 360, device='cuda')
>>> recon = FanBackprojectorFunction.apply(sinogram, angles, 1.0, 256, 256, 1500.0, 1000.0)
>>> loss = recon.sum()
>>> loss.backward()
>>> print(sinogram.grad.shape)  # (360, 512)
static backward(ctx, grad_output)[source]

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, sinogram, angles, detector_spacing, H, W, sdd, sid, voxel_spacing=1.0)[source]

Compute the 2D fan beam backprojection of a sinogram using CUDA acceleration.

Parameters:
  • sinogram (torch.Tensor) – 2D input fan beam sinogram tensor of shape (num_angles, num_detectors), must be on a CUDA device and of type float32.

  • angles (torch.Tensor) – 1D tensor of projection angles in radians, shape (num_angles,), must be on the same CUDA device as sinogram.

  • detector_spacing (float) – Physical spacing between detector elements.

  • H (int) – Height of the output reconstruction image.

  • W (int) – Width of the output reconstruction image.

  • sdd (float) – Source-to-Detector Distance (SDD). The total distance from the X-ray source to the detector, passing through the isocenter.

  • sid (float) – Source-to-Isocenter Distance (SID). The distance from the X-ray source to the center of rotation (isocenter).

  • voxel_spacing (float, optional) – Physical size of one voxel (in same units as detector_spacing, sdd, sid, default: 1.0).

Returns:

reco – 2D tensor of shape (H, W) containing the reconstructed image on the same device as sinogram.

Return type:

torch.Tensor

Notes

  • All input tensors must be on the same CUDA device.

  • The operation is fully differentiable and supports autograd.

  • Fan beam geometry uses divergent rays from a point source to the detector.

  • Uses the Siddon ray-tracing method with bilinear interpolation for accurate sampling.

Examples

>>> sinogram = torch.randn(360, 512, device='cuda', requires_grad=True)
>>> angles = torch.linspace(0, 2 * torch.pi, 360, device='cuda')
>>> reco = FanBackprojectorFunction.apply(
...     sinogram, angles, 1.0, 256, 256, 1500.0, 1000.0
... )
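When choosing sdd and sid, note that the geometric magnification at the isocenter is sdd / sid, so as a rule of thumb the detector should span the magnified object. A quick sanity check for the values used above (a heuristic, not a library requirement):

>>> sdd, sid = 1500.0, 1000.0
>>> magnification = sdd / sid                     # 1.5 at the isocenter
>>> object_extent = 256 * 1.0                     # image width * voxel_spacing
>>> needed = magnification * object_extent / 1.0  # divided by detector_spacing
>>> print(needed)  # 384.0 elements needed; the 512 used above is sufficient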

Cone Beam Operators

Cone beam geometry extends fan beam to 3D with a cone-shaped X-ray beam for volumetric reconstruction.

class diffct.differentiable.ConeProjectorFunction(*args, **kwargs)[source]

Bases: Function

Summary

PyTorch autograd function for differentiable 3D cone beam forward projection.

Notes

Provides a differentiable interface to the CUDA-accelerated Siddon ray-tracing method with interpolation for 3D cone beam geometry. Rays emanate from a point X-ray source to a 2D detector array, capturing volumetric projection data. The forward pass computes 3D projections, and the backward pass computes gradients via adjoint 3D backprojection. Requires significant GPU memory.

Examples

>>> import torch
>>> from diffct.differentiable import ConeProjectorFunction
>>>
>>> volume = torch.randn(128, 128, 128, device='cuda', requires_grad=True)
>>> angles = torch.linspace(0, 2 * torch.pi, 360, device='cuda')
>>> projections = ConeProjectorFunction.apply(volume, angles, 256, 256, 1.0, 1.0, 1500.0, 1000.0)
>>> loss = projections.sum()
>>> loss.backward()
>>> print(volume.grad.shape)  # (128, 128, 128)
static backward(ctx, grad_sinogram)[source]

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, volume, angles, det_u, det_v, du, dv, sdd, sid, voxel_spacing=1.0)[source]

Compute the 3D cone beam forward projection of a volume using CUDA acceleration.

Parameters:
  • volume (torch.Tensor) – 3D input volume tensor of shape (D, H, W), must be on a CUDA device and of type float32.

  • angles (torch.Tensor) – 1D tensor of projection angles in radians, shape (num_views,), must be on the same CUDA device as volume.

  • det_u (int) – Number of detector elements along the u-axis (width).

  • det_v (int) – Number of detector elements along the v-axis (height).

  • du (float) – Physical spacing between detector elements along the u-axis.

  • dv (float) – Physical spacing between detector elements along the v-axis.

  • sdd (float) – Source-to-Detector Distance (SDD). The total distance from the X-ray source to the detector, passing through the isocenter.

  • sid (float) – Source-to-Isocenter Distance (SID). The distance from the X-ray source to the center of rotation (isocenter).

  • voxel_spacing (float, optional) – Physical size of one voxel (in same units as du, dv, sdd, sid, default: 1.0).

Returns:

sino – 3D tensor of shape (num_views, det_u, det_v) containing the cone beam projections on the same device as volume.

Return type:

torch.Tensor

Notes

  • All input tensors must be on the same CUDA device.

  • The operation is fully differentiable and supports autograd.

  • Cone beam geometry uses a point source and a 2D detector array.

  • Uses the Siddon ray-tracing method with trilinear interpolation for accurate 3D sampling.

Examples

>>> volume = torch.randn(128, 128, 128, device='cuda', requires_grad=True)
>>> angles = torch.linspace(0, 2 * torch.pi, 360, device='cuda')
>>> sino = ConeProjectorFunction.apply(
...     volume, angles, 256, 256, 1.0, 1.0, 1500.0, 1000.0
... )
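Because 3D problems are large, it can help to estimate memory before launching. A rough float32 budget for the tensors in the example above (tensor plus its gradient, excluding any internal workspace the CUDA kernels may allocate):

>>> vol_elems = 128 * 128 * 128
>>> sino_elems = 360 * 256 * 256
>>> gb = 4 * 2 * (vol_elems + sino_elems) / 1e9  # float32 bytes, x2 for gradients
>>> print(f"~{gb:.2f} GB")  # ~0.21 GB before CUDA workspace and autograd state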
class diffct.differentiable.ConeBackprojectorFunction(*args, **kwargs)[source]

Bases: Function

Summary

PyTorch autograd function for differentiable 3D cone beam backprojection.

Notes

Provides a differentiable interface to the CUDA-accelerated Siddon ray-tracing method with interpolation for 3D cone beam backprojection. The forward pass computes a 3D reconstruction from cone beam projection data using backprojection as the adjoint operation. The backward pass computes gradients via 3D cone beam forward projection. Requires CUDA-capable hardware and consistent device placement.

This operation can be memory- and compute-intensive due to the 3D geometry. Consider gradient checkpointing (see the sketch after the example below), smaller volumes, or distributed computation for large-scale applications, and ensure sufficient GPU memory is available.

Examples

>>> import torch
>>> from diffct.differentiable import ConeBackprojectorFunction
>>>
>>> projections = torch.randn(360, 256, 256, device='cuda', requires_grad=True)
>>> angles = torch.linspace(0, 2 * torch.pi, 360, device='cuda')
>>> D, H, W = 128, 128, 128
>>> du, dv = 1.0, 1.0
>>> sdd, sid = 1500.0, 1000.0
>>> backprojector = ConeBackprojectorFunction.apply
>>> volume = backprojector(projections, angles, D, H, W, du, dv, sdd, sid)
>>> loss = volume.sum()
>>> loss.backward()
>>> print(f"Projection gradient shape: {projections.grad.shape}")  # (360, 256, 256)
static backward(ctx, grad_output)[source]

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, sinogram, angles, D, H, W, du, dv, sdd, sid, voxel_spacing=1.0)[source]

Compute the 3D cone beam backprojection of a projection sinogram using CUDA acceleration.

Parameters:
  • sinogram (torch.Tensor) – 3D input cone beam projection tensor of shape (num_views, det_u, det_v), must be on a CUDA device and of type float32.

  • angles (torch.Tensor) – 1D tensor of projection angles in radians, shape (num_views,), must be on the same CUDA device as sinogram.

  • D (int) – Depth (z-dimension) of the output reconstruction volume.

  • H (int) – Height (y-dimension) of the output reconstruction volume.

  • W (int) – Width (x-dimension) of the output reconstruction volume.

  • du (float) – Physical spacing between detector elements along the u-axis.

  • dv (float) – Physical spacing between detector elements along the v-axis.

  • sdd (float) – Source-to-Detector Distance (SDD). The total distance from the X-ray source to the detector, passing through the isocenter.

  • sid (float) – Source-to-Isocenter Distance (SID). The distance from the X-ray source to the center of rotation (isocenter).

  • voxel_spacing (float, optional) – Physical size of one voxel (in same units as du, dv, sdd, sid, default: 1.0).

Returns:

vol – 3D tensor of shape (D, H, W) containing the reconstructed volume on the same device as sinogram.

Return type:

torch.Tensor

Notes

  • All input tensors must be on the same CUDA device.

  • The operation is fully differentiable and supports autograd.

  • Cone beam geometry uses a point source and a 2D detector array.

  • Uses the Siddon ray-tracing method with trilinear interpolation for accurate 3D sampling.

Examples

>>> projections = torch.randn(360, 256, 256, device='cuda', requires_grad=True)
>>> angles = torch.linspace(0, 2 * torch.pi, 360, device='cuda')
>>> vol = ConeBackprojectorFunction.apply(
...     projections, angles, 128, 128, 128, 1.0, 1.0, 1500.0, 1000.0
... )

Usage Notes

Memory Management:

  • All operators work with GPU tensors for optimal performance.

  • Ensure sufficient GPU memory for your problem size.

  • Use torch.cuda.empty_cache() if encountering memory issues.

Gradient Computation:

  • All operators support automatic differentiation.

  • Gradients flow through both the projection and backprojection operators.

  • Set requires_grad=True on input tensors to enable gradients.

Performance Considerations:

  • Use contiguous tensors for optimal memory access.

  • Consider batch processing for multiple reconstructions.

  • Profile your code to identify bottlenecks.
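A small housekeeping sketch combining the memory and performance pointers above (assumes img is an image tensor from one of the examples earlier; not diffct-specific):

>>> img = img.contiguous()       # ensure a contiguous memory layout
>>> torch.cuda.empty_cache()     # release cached, unused GPU memory blocks
>>> print(torch.cuda.memory_allocated() / 1e9, "GB currently allocated")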

Coordinate Systems:

  • Image/volume coordinates: (0, 0) at the top-left corner.

  • Detector coordinates: centered at the detector array center.

  • Rotation: counter-clockwise around the z-axis (right-hand rule).