Patient's Motion During Acquisition
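
Throughout this tutorial we use Koma's default scanner, whose parameters are printed below. As a minimal setup sketch (assuming the KomaMRI package is loaded, as in the previous tutorials):

using KomaMRI    # simulator, phantoms, and sequence utilities
sys = Scanner()  # default scanner; its parameters are listed below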

Scanner
  B0: Float64 1.5
  B1: Float64 1.0e-5
  Gmax: Float64 0.06
  Smax: Int64 500
  ADC_Δt: Float64 2.0e-6
  seq_Δt: Float64 1.0e-5
  GR_Δt: Float64 1.0e-5
  RF_Δt: Float64 1.0e-6
  RF_ring_down_T: Float64 2.0e-5
  RF_dead_time_T: Float64 0.0001
  ADC_dead_time_T: Float64 1.0e-5

It is also interesting to study the effect of the patient's motion during an MRI scan. For this, Koma provides the ability to attach a motion model (motion <: MotionModel) to the phantom. In this tutorial, we will show how to add a SimpleMotion model to a 2D brain phantom.

First, let's load the 2D brain phantom used in the previous tutorials:

obj = brain_phantom2D()

The SimpleMotion model includes a list of SimpleMotionType's, enabling the mixing and matching of simple motions. These are Translation, Rotation, and HeartBeat, and their periodic versions PeriodicTranslation, PeriodicRotation, and PeriodicHeartBeat.
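
For example, several motion types can be combined into a single SimpleMotion, as in the sketch below. This is only an illustration: passing several types in one call and the Rotation keywords (pitch, roll, yaw, in degrees) are assumptions based on the package documentation and may differ between Koma versions.

motion_example = SimpleMotion(
    Translation(t_start=0.0, t_end=100e-3, dx=1e-2, dy=0.0, dz=0.0),       # 1 cm shift in x
    Rotation(t_start=100e-3, t_end=200e-3, pitch=0.0, roll=0.0, yaw=15.0)  # assumed keywords: 15° in-plane rotation
)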

Head Translation

In this example, we will add a Translation of 2 cm in x, with a duration of 200 ms (v = 0.1 m/s):

obj.motion = SimpleMotion(
    Translation(t_start=0.0, t_end=200e-3, dx=2e-2, dy=0.0, dz=0.0)
)
"../assets/5-phantom1.html"
6-dimensional AxisArray{ComplexF32,6,...} with axes:
    :x, (0.0:2.323232421875:230.00000976562498) mm
    :y, (0.0:2.3232322692871095:229.99999465942383) mm
    :z, (0.0:1.0:0.0) mm
    :echos, 1:1
    :coils, 1:1
    :repetitions, 1:1
And data, a 100×100×1×1×1×1 Array{ComplexF32, 6}:
[:, :, 1, 1, 1, 1] =
  0.00551863-0.000530932im  …  -0.00206839-0.00193437im
 -0.00535088+0.000569484im      0.00205934+0.00178631im
  0.00545877-0.000529996im     -0.00197194-0.0015981im
 -0.00546044+0.000483705im      0.00188833+0.00145388im
  0.00548641-0.00042416im      -0.00187737-0.00133552im
 -0.00552133+0.000378931im  …   0.00182819+0.00124202im
   0.0055954-0.000323264im     -0.00177445-0.00114462im
 -0.00564226+0.00026447im       0.00173688+0.00104105im
  0.00573426-0.000198874im      -0.0016697-0.0009231im
  -0.0058207+0.000122468im      0.00164347+0.000844061im
            ⋮               ⋱  
  -0.0222539-0.0609785im        -0.0729578-0.0460124im
  -0.0567521-0.0230377im        -0.0222853-0.0152211im
  0.00770675+0.00919452im       -0.0155111-0.000427792im
 -0.00158783-0.011557im         0.00993224+0.00652902im
  0.00933849-0.0035104im    …  -0.00838639+0.00180487im
 -0.00576856-0.000742455im      0.00373514+0.00389723im
  0.00548184-0.000237293im     -0.00283761-0.00291014im
 -0.00542876+0.000440916im      0.00247522+0.0024715im
  0.00544188-0.000522013im     -0.00226567-0.0021729im

If we simulate an EPI sequence with an acquisition duration (183.989 ms) comparable to the motion's duration (200 ms), we will observe motion-induced artifacts in the reconstructed image.

"../assets/5-recon1.html"

The severity of the artifacts can vary depending on the acquisition duration and $k$-space trajectory.

Motion-Corrected Reconstruction

To correct for the motion-induced artifacts, we can perform a motion-corrected reconstruction. This can be achieved by multiplying each sample of the acquired signal $S(t)$ by a phase shift $\Delta\phi_{\mathrm{corr}}$ proportional to the displacement $\boldsymbol{u}(t)$ [Godenschweger, 2016]:

\[S_{\mathrm{MC}}\left(t\right)=S\left(t\right)\cdot\mathrm{e}^{\mathrm{i}\Delta\phi_{\mathrm{corr}}}=S\left(t\right)\cdot\mathrm{e}^{\mathrm{i}2\pi\boldsymbol{k}\left(t\right)\cdot\boldsymbol{u}\left(t\right)}\]

In practice, we would need to estimate or measure the motion before performing a motion-corrected reconstruction, but for this example, we will directly use the displacement functions $\boldsymbol{u}(\boldsymbol{x}, t)$ defined by obj.motion::SimpleMotion. Since translations are rigid motions ($\boldsymbol{u}(\boldsymbol{x}, t)=\boldsymbol{u}(t)$, i.e., no position dependence), we can obtain the required displacements by evaluating $\boldsymbol{u}(\boldsymbol{x}=\boldsymbol{0},\ t=t_{\mathrm{adc}})$.

sample_times = get_adc_sampling_times(seq1)  # ADC sampling times t_adc
displacements = hcat(get_spin_coords(obj.motion, [0.0], [0.0], [0.0], sample_times)...)  # coordinates of a spin at the origin, equal to u(t) for this rigid translation
"../assets/5-displacements.html"

We can now get the necessary phase shift for each sample:

_, kspace = get_kspace(seq1)                  # k-space trajectory at the ADC sampling times
ΔΦ = 2π*sum(kspace .* displacements, dims=2)  # Δϕ_corr = 2π k(t)⋅u(t) for every sample
10000×1 Matrix{Float64}:
  -2.2169475934261254
  -2.1734857088856887
  -2.1299691879523697
  -2.0863980306261674
  -2.042772236907082
  -1.9990918067951147
  -1.9553567402902488
  -1.9115670373925155
  -1.8677226981018997
  -1.8238237224184013
   ⋮
 -20.817605789660078
 -21.320395829342214
 -21.823240505415836
 -22.32613981788374
 -22.829093766744524
 -23.33210235199679
 -23.835165573643348
 -24.338283431681386
 -24.841455926113696

And apply it to the acquired signal to correct its phase:

acq1.kdata[1] .*= exp.(im*ΔΦ)  # multiply each sample by exp(iΔΦ) to correct its phase
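
Reconstructing again from the phase-corrected data then yields the motion-corrected image (a sketch, reusing acq1 and the reconParams assumed in the earlier sketch):

image2 = reconstruction(acq1, reconParams)  # motion-corrected reconstruction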
"../assets/5-recon2.html"

Finally, we compare the original image and the motion-corrected reconstruction:


This page was generated using Literate.jl.