.. _ch_mne:

The minimum-norm current estimates
==================================

.. NOTE: part of this file is included in doc/overview/implementation.rst.
   Changes here are reflected there. If you want to link to this content, link
   to :ref:`ch_mne` to link to that section of the implementation.rst page.
   The next line is a target for :start-after: so we can omit the title from
   the include:
   inverse-begin-content

This section describes the mathematical details of the calculation of
minimum-norm estimates. In the Bayesian sense, the ensuing current
distribution is the maximum a posteriori (MAP) estimate under the following
assumptions:

- The viable locations of the currents are constrained to the cortex.
  Optionally, the current orientations can be fixed to be normal to the
  cortical mantle.

- The amplitudes of the currents have a Gaussian prior distribution with a
  known source covariance matrix.

- The measured data contain additive noise with a Gaussian distribution with a
  known covariance matrix. The noise is not correlated over time.

Computing the inverse operator is accomplished using
:func:`mne.minimum_norm.make_inverse_operator` and
:func:`mne.minimum_norm.apply_inverse`. The use of these functions is presented
in the tutorial :ref:`tut-inverse-methods`.
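
For orientation, a minimal usage sketch is shown below. The file names are
hypothetical placeholders; any :class:`mne.Evoked`, :class:`mne.Forward`, and
:class:`mne.Covariance` objects of your own will do:

.. code-block:: python

    import mne
    from mne.minimum_norm import make_inverse_operator, apply_inverse

    # Hypothetical file names -- substitute your own data
    evoked = mne.read_evokeds("sample-ave.fif", condition=0)  # measured data x(t)
    fwd = mne.read_forward_solution("sample-fwd.fif")         # gain matrix G
    noise_cov = mne.read_cov("sample-cov.fif")                # noise covariance C

    inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
    stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")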

The linear inverse operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The measured data in the source estimation procedure consist of MEG and EEG
data, recorded on a total of :math:`N` channels. The task is to estimate a
total of :math:`Q` strengths of sources located on the cortical mantle. If the
number of source locations is :math:`P`, then :math:`Q = P` for
fixed-orientation sources and :math:`Q = 3P` if the source orientations are
unconstrained. The regularized linear inverse operator following from the
regularized maximum likelihood of the above probabilistic model is given by
the :math:`Q \times N` matrix

.. math::    M = R' G^\top (G R' G^\top + C)^{-1}\ ,

where :math:`G` is the gain matrix relating the source strengths to the
measured MEG/EEG data, :math:`C` is the data noise-covariance matrix, and
:math:`R'` is the source covariance matrix. The dimensions of these matrices
are :math:`N \times Q`, :math:`N \times N`, and :math:`Q \times Q`,
respectively. The :math:`Q \times 1` source-strength vector is obtained by
multiplying the :math:`N \times 1` data vector by :math:`M`.

The expected value of the current amplitudes at time *t* is then given by
:math:`\hat{j}(t) = Mx(t)`, where :math:`x(t)` is a vector containing the
measured MEG and EEG data values at time *t*.

For computational convenience, the linear inverse operator is
not computed explicitly. See :ref:`mne_solution` for mathematical
details, and :ref:`CIHCFJEI` for a detailed example.
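
Although the operator is never formed explicitly in practice, the definition
above is easy to state numerically. A toy NumPy sketch, with random stand-ins
for :math:`G`, :math:`C`, and :math:`R'` (not real data):

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    N, Q = 60, 300                     # channels, source strengths (toy sizes)
    G = rng.standard_normal((N, Q))    # gain matrix
    C = np.eye(N)                      # noise covariance (toy: identity)
    R_prime = np.eye(Q)                # source covariance R'

    # M = R' G^T (G R' G^T + C)^{-1}, the Q x N inverse operator
    M = R_prime @ G.T @ np.linalg.inv(G @ R_prime @ G.T + C)

    x = rng.standard_normal(N)         # measurement vector at one time point
    j_hat = M @ x                      # expected current amplitudes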

.. _mne_regularization:

Regularization
~~~~~~~~~~~~~~

The a priori variance of the currents is, in practice, unknown. We can express
this by writing :math:`R' = R/ \lambda^2 = R \lambda^{-2}`, which yields the
inverse operator

.. math::
   :name: inv_m

    M &= R' G^\top (G R' G^\top + C)^{-1} \\
      &= R \lambda^{-2} G^\top (G R \lambda^{-2} G^\top + C)^{-1} \\
      &= R \lambda^{-2} G^\top \lambda^2 (G R G^\top + \lambda^2 C)^{-1} \\
      &= R G^\top (G R G^\top + \lambda^2 C)^{-1}\ ,

where the unknown current amplitude is now interpreted in terms of the
regularization parameter :math:`\lambda^2`. Larger :math:`\lambda^2` values
correspond to spatially smoother and weaker current amplitudes, whereas smaller
:math:`\lambda^2` values lead to the opposite.

We can also arrive at the regularized linear inverse operator by minimizing a
cost function :math:`S` with respect to the estimated current :math:`\hat{j}`
(given the measurement vector :math:`x` at any given time :math:`t`) as

.. math::

    \min_\hat{j} \Bigl\{ S \Bigr\} &= \min_\hat{j} \Bigl\{ \tilde{e}^\top \tilde{e} + \lambda^2 \hat{j}^\top R^{-1} \hat{j} \Bigr\} \\
                                   &= \min_\hat{j} \Bigl\{ (x - G\hat{j})^\top C^{-1} (x - G\hat{j}) + \lambda^2 \hat{j}^\top R^{-1} \hat{j} \Bigr\} \,

where the first term consists of the difference between the whitened measured
data (see :ref:`whitening_and_scaling`) and those predicted by the model, while
the second term is a weighted norm of the current estimate. It is seen that,
with increasing :math:`\lambda^2`, the source term receives more weight and a
larger discrepancy between the measured and predicted data is tolerated.

.. _whitening_and_scaling:

Whitening and scaling
~~~~~~~~~~~~~~~~~~~~~

The MNE software employs data whitening so that a 'whitened' inverse operator
assumes the form

.. math::    \tilde{M} = M C^{^1/_2} = R \tilde{G}^\top (\tilde{G} R \tilde{G}^\top + \lambda^2 I)^{-1}\ ,
   :name: inv_m_tilde

where

.. math:: \tilde{G} = C^{-^1/_2}G
   :name: inv_g_tilde

is the spatially whitened gain matrix. We arrive at the whitened inverse
operator equation :eq:`inv_m_tilde` by making the substitution for
:math:`G` from :eq:`inv_g_tilde` in :eq:`inv_m` as

.. math::

    \tilde{M} = M C^{^1/_2} &= R G^\top (G R G^\top + \lambda^2 C)^{-1} C^{^1/_2} \\
                             &= R \tilde{G}^\top C^{^1/_2} (C^{^1/_2} \tilde{G} R \tilde{G}^\top C^{^1/_2} + \lambda^2 C)^{-1} C^{^1/_2} \\
                             &= R \tilde{G}^\top C^{^1/_2} (C^{^1/_2} (\tilde{G} R \tilde{G}^\top + \lambda^2 I) C^{^1/_2})^{-1} C^{^1/_2} \\
                             &= R \tilde{G}^\top C^{^1/_2} C^{-^1/_2} (\tilde{G} R \tilde{G}^\top + \lambda^2 I)^{-1} C^{-^1/_2} C^{^1/_2} \\
                             &= R \tilde{G}^\top (\tilde{G} R \tilde{G}^\top + \lambda^2 I)^{-1}\ .

The expected current values are

.. math::
   :name: inv_j_hat_t

    \hat{j}(t) &= Mx(t) \\
               &= M C^{^1/_2} C^{-^1/_2} x(t) \\
               &= \tilde{M} \tilde{x}(t)

knowing :eq:`inv_m_tilde` and taking

.. math::
   :name: inv_tilde_x_t

    \tilde{x}(t) = C^{-^1/_2}x(t)

as the whitened measurement vector at time *t*. The spatial
whitening operator :math:`C^{-^1/_2}` is obtained with the help of the
eigenvalue decomposition
:math:`C = U_C \Lambda_C^2 U_C^\top` as :math:`C^{-^1/_2} = \Lambda_C^{-1} U_C^\top`.
In the MNE software the noise-covariance matrix is stored as the one applying
to raw data. To reflect the decrease of noise due to averaging, this matrix,
:math:`C_0`, is scaled by the number of averages, :math:`L`, *i.e.*, :math:`C =
C_0 / L`.
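
A small NumPy sketch of this whitening step, using a toy covariance;
MNE-Python's internal code path additionally handles projections and rank
deficiency:

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    N, L = 60, 100                               # channels, number of averages
    C0 = np.cov(rng.standard_normal((N, 1000)))  # toy raw-data noise covariance
    C = C0 / L                                   # covariance after averaging

    # C = U_C Lambda_C^2 U_C^T  ->  C^{-1/2} = Lambda_C^{-1} U_C^T
    evals, U_C = np.linalg.eigh(C)
    whitener = np.diag(1.0 / np.sqrt(evals)) @ U_C.T

    x = rng.standard_normal(N)                   # measurement at one time point
    x_tilde = whitener @ x                       # whitened measurement
    # Sanity check: the whitened covariance is (close to) the identity
    assert np.allclose(whitener @ C @ whitener.T, np.eye(N), atol=1e-10)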

.. note::
   When EEG data are included, the gain matrix :math:`G` needs to be average
   referenced when computing the linear inverse operator :math:`M`. This is
   incorporated when creating the spatial whitening operator
   :math:`C^{-^1/_2}`, which includes any projectors on the data. An EEG
   average reference (set using a projector) is mandatory for source modeling
   and is checked when the inverse operator is calculated.

As shown above, regularization of the inverse solution is equivalent to a
change in the variance of the current amplitudes in the Bayesian *a priori*
distribution.

A convenient choice for the source-covariance matrix :math:`R` is such that
:math:`\text{trace}(\tilde{G} R \tilde{G}^\top) / \text{trace}(I) = 1`. With this
choice we can approximate :math:`\lambda^2 \sim 1/\rm{SNR}^2`, where SNR is the
(amplitude) signal-to-noise ratio of the whitened data.
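
In MNE-Python this convention surfaces as the ``lambda2`` argument of
:func:`mne.minimum_norm.apply_inverse`, conventionally set from an assumed
SNR, e.g.:

.. code-block:: python

    from mne.minimum_norm import apply_inverse

    snr = 3.0                 # assumed amplitude SNR of the whitened data
    lambda2 = 1.0 / snr ** 2  # lambda^2 ~ 1 / SNR^2

    # `evoked` and `inv` as in the earlier sketch
    stc = apply_inverse(evoked, inv, lambda2=lambda2, method="MNE")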

.. note::
   The definition of the signal-to-noise ratio / :math:`\lambda^2` relationship
   given above works nicely for the whitened forward solution. In the
   un-whitened case scaling with the trace ratio :math:`\text{trace}(GRG^\top) /
   \text{trace}(C)` does not make sense, since the diagonal elements summed
   have, in general, different units of measure. For example, the MEG data are
   expressed in T or T/m whereas the unit of EEG is volts.

See :ref:`tut-compute-covariance` for an example of noise-covariance
computation and whitening.

.. _cov_regularization_math:

Regularization of the noise-covariance matrix
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since a finite amount of data is usually available to compute an estimate of
the noise-covariance matrix :math:`C`, the smallest eigenvalues of its
estimate are usually inaccurate and smaller than the true eigenvalues.
Depending on the seriousness of this problem, the following quantities can be
affected:

- The model data predicted by the current estimate,

- Estimates of signal-to-noise ratios, which lead to estimates of the required
  regularization, see :ref:`mne_regularization`,

- The estimated current values, and

- The noise-normalized estimates, see :ref:`noise_normalization`.

Fortunately, the latter two are the least likely to be affected, thanks to the
regularization of the estimates. However, in some cases, especially the EEG
part of the noise-covariance matrix estimate can be deficient, *i.e.*, it may
possess very small eigenvalues, and thus regularization of the noise-covariance
matrix is advisable.

Historically, the MNE software has accomplished the regularization by replacing
a noise-covariance matrix estimate :math:`C` with

.. math::    C' = C + \sum_k {\varepsilon_k \bar{\sigma_k}^2 I^{(k)}}\ ,

where the index :math:`k` goes across the different channel groups (MEG planar
gradiometers, MEG axial gradiometers and magnetometers, and EEG),
:math:`\varepsilon_k` are the corresponding regularization factors,
:math:`\bar{\sigma_k}` are the average variances across the channel groups, and
:math:`I^{(k)}` are diagonal matrices containing ones at the positions
corresponding to the channels contained in each channel group.
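
In MNE-Python this style of regularization is available through
:func:`mne.cov.regularize`; a hedged sketch (the factors shown are just the
function defaults, not recommendations for any particular dataset):

.. code-block:: python

    import mne

    # `noise_cov` and `evoked` as in the earlier sketch
    noise_cov_reg = mne.cov.regularize(
        noise_cov, evoked.info, mag=0.1, grad=0.1, eeg=0.1
    )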

See :ref:`plot_compute_covariance_howto` for details on computing and
regularizing the channel covariance matrix.

.. _mne_solution:

Computation of the solution
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The most straightforward approach to calculate the MNE is to employ the
expression of the original or whitened inverse operator directly. However, for
computational convenience we prefer to take another route, which employs the
singular-value decomposition (SVD) of the matrix

.. math::
   :name: inv_a

    A &= \tilde{G} R^{^1/_2} \\
      &= U \Lambda V^\top

where the superscript :math:`^1/_2` indicates a square root of :math:`R`. For a
diagonal matrix, one simply takes the square root of :math:`R` while in the
more general case one can use the Cholesky factorization :math:`R = R_C R_C^\top`
and thus :math:`R^{^1/_2} = R_C`.

Combining the SVD from :eq:`inv_a` with the inverse equation :eq:`inv_m` it is
easy to show that

.. math::
   :name: inv_m_tilde_svd

    \tilde{M} &= R \tilde{G}^\top (\tilde{G} R \tilde{G}^\top + \lambda^2 I)^{-1} \\
              &= R^{^1/_2} A^\top (A A^\top + \lambda^2 I)^{-1} \\
              &= R^{^1/_2} V \Lambda U^\top (U \Lambda V^\top V \Lambda U^\top + \lambda^2 I)^{-1} \\
              &= R^{^1/_2} V \Lambda U^\top (U (\Lambda^2 + \lambda^2 I) U^\top)^{-1} \\
              &= R^{^1/_2} V \Lambda U^\top U (\Lambda^2 + \lambda^2 I)^{-1} U^\top \\
              &= R^{^1/_2} V \Lambda (\Lambda^2 + \lambda^2 I)^{-1} U^\top \\
              &= R^{^1/_2} V \Gamma U^\top

where the elements of the diagonal matrix :math:`\Gamma` are simply

.. `reginv` in our code:

.. math::
   :name: inv_gamma_k

    \gamma_k = \frac{\lambda_k}{\lambda_k^2 + \lambda^2}\ .

From our expected current equation :eq:`inv_j_hat_t` and our whitened
measurement equation :eq:`inv_tilde_x_t`, if we take

.. math::
   :name: inv_w_t

    w(t) &= U^\top \tilde{x}(t) \\
         &= U^\top C^{-^1/_2} x(t)\ ,

we can see that the expression for the expected current is just

.. math::
   :name: inv_j_hat_t_svd

    \hat{j}(t) &= R^{^1/_2} V \Gamma w(t) \\
               &= \sum_k {\bar{v_k} \gamma_k w_k(t)}\ ,

where :math:`\bar{v_k} = R^{^1/_2} v_k`, with :math:`v_k` being the
:math:`k` th column of :math:`V`. It is thus seen that the current estimate is
a weighted sum of the "weighted" eigenleads :math:`\bar{v_k}`.

It is easy to see that :math:`w(t) \propto \sqrt{L}`. To maintain the relation
:math:`\text{trace}(\tilde{G} R \tilde{G}^\top) / \text{trace}(I) = 1` when
:math:`L` changes we must have :math:`R \propto 1/L`. With this approach,
:math:`\lambda_k` is independent of :math:`L` and, for fixed :math:`\lambda`,
we see directly that :math:`\hat{j}(t)` is independent of :math:`L`.
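
The whole SVD route condenses into a few lines of NumPy. A toy sketch, with a
random matrix standing in for the whitened gain :math:`\tilde{G}`:

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    n_ch, n_src = 60, 300
    G_tilde = rng.standard_normal((n_ch, n_src))  # whitened gain (toy stand-in)
    R_sqrt = np.eye(n_src)                        # R^{1/2} (toy: identity)
    lam2 = 1.0 / 9.0                              # lambda^2, e.g. from SNR = 3

    # A = G-tilde R^{1/2} = U Lambda V^T (thin SVD)
    U, lams, Vt = np.linalg.svd(G_tilde @ R_sqrt, full_matrices=False)
    gamma = lams / (lams ** 2 + lam2)  # reginv: gamma_k = lam_k/(lam_k^2+lam^2)

    x_tilde = rng.standard_normal(n_ch)    # whitened measurement at one time
    w = U.T @ x_tilde                      # w(t) = U^T x-tilde(t)
    j_hat = R_sqrt @ (Vt.T @ (gamma * w))  # j-hat(t) = R^{1/2} V Gamma w(t)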

The minimum-norm estimate is computed using this procedure in
:func:`mne.minimum_norm.make_inverse_operator`, and its usage is illustrated
in :ref:`CIHCFJEI`.

.. _noise_normalization:

Noise normalization
~~~~~~~~~~~~~~~~~~~

Noise normalization serves three purposes:

- It converts the expected current value into a dimensionless statistical test
  variable. Thus the resulting time and location dependent values are often
  referred to as dynamic statistical parameter maps (dSPM).

- It reduces the location bias of the estimates. In particular, the tendency of
  the MNE to prefer superficial currents is eliminated.

- The width of the point-spread function becomes less dependent on the source
  location on the cortical mantle. The point-spread is defined as the MNE
  resulting from the signals coming from a point current source (a current
  dipole) located at a certain point on the cortex.

In practice, noise normalization is implemented as a division by the square
root of the estimated variance of each voxel. In computing these noise
normalization factors, it's convenient to reuse our "weighted eigenleads"
definition from equation :eq:`inv_j_hat_t_svd` in matrix form as

.. math::
   :name: inv_eigenleads_weighted

    \bar{V} = R^{^1/_2} V\ .

dSPM
----

Noise-normalized linear estimates introduced by Dale et al.
:footcite:`DaleEtAl1999` require division of the expected current amplitude by
its standard deviation. In practice, this requires the computation of the
diagonal elements of the following matrix, using the SVD equation
:eq:`inv_m_tilde_svd` and :eq:`inv_eigenleads_weighted`:

.. math::

    M C M^\top &= M C^{^1/_2} C^{^1/_2} M^\top \\
            &= \tilde{M} \tilde{M}^\top \\
            &= R^{^1/_2} V \Gamma U^\top U \Gamma V^\top R^{^1/_2} \\
            &= \bar{V} \Gamma^2 \bar{V}^\top\ .

Because we only care about the diagonal entries here, we can find the
variances for each source as

.. math::

    \sigma_k^2 = \gamma_k^2

Under the conditions expressed at the end of :ref:`mne_solution`, it
follows that the *t*-statistic values associated with fixed-orientation
sources are thus proportional to :math:`\sqrt{L}` while the *F*-statistic
employed with free-orientation sources is proportional to :math:`L`,
correspondingly.

.. note::
   The MNE software usually computes the *square roots* of the F-statistic to
   be displayed on the inflated cortical surfaces. These are also proportional
   to :math:`\sqrt{L}`.
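
Continuing the toy SVD sketch from :ref:`mne_solution`, the dSPM normalization
amounts to dividing each source estimate by the square root of the
corresponding diagonal entry of :math:`\bar{V} \Gamma^2 \bar{V}^\top`:

.. code-block:: python

    # Reuses R_sqrt, Vt, gamma, and j_hat from the SVD sketch above
    V_bar = R_sqrt @ Vt.T                    # weighted eigenleads V-bar
    source_var = (V_bar ** 2) @ gamma ** 2   # diag(V-bar Gamma^2 V-bar^T)
    j_dspm = j_hat / np.sqrt(source_var)     # noise-normalized (dSPM) estimate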

sLORETA
-------

sLORETA :footcite:`Pascual-Marqui2002` estimates the current variances as the
diagonal entries of the resolution matrix, which is the product of the inverse
and forward operators. In other words, the diagonal entries of (using
:eq:`inv_m_tilde_svd`, :eq:`inv_g_tilde`, and :eq:`inv_a`)

.. math::

    M G &= M C^{^1/_2} C^{-^1/_2} G \\
        &= \tilde{M} \tilde{G} \\
        &= R^{^1/_2} V \Gamma U^\top \tilde{G} R^{^1/_2} R^{-^1/_2} \\
        &= R^{^1/_2} V \Gamma U^\top U \Lambda V^\top R^{-^1/_2} \\
        &= R^{^1/_2} V \Gamma U^\top U \Lambda V^\top R^{^1/_2} R^{-1} \\
        &= \bar{V} \Gamma U^\top U \Lambda \bar{V}^\top R^{-1} \\
        &= \bar{V} \Gamma \Lambda \bar{V}^\top R^{-1}\ .

Because :math:`R` is diagonal and we only care about the diagonal entries,
we can find our variance estimates as

.. math::

    \sigma_k^2 &= \gamma_k \lambda_k R_{k,k}^{-1} \\
               &= \left(\frac{\lambda_k}{(\lambda_k^2 + \lambda^2)}\right) \left(\frac{\lambda_k}{1}\right) \left(\frac{1}{\lambda^2}\right) \\
               &= \frac{\lambda_k^2}{(\lambda_k^2 + \lambda^2) \lambda^2} \\
               &= \left(\frac{\lambda_k^2}{(\lambda_k^2 + \lambda^2)^2}\right) \left(\frac{\lambda^2 + \lambda_k^2}{\lambda^2}\right) \\
               &= \left(\frac{\lambda_k}{\lambda_k^2 + \lambda^2}\right)^2 \left(1 + \frac{\lambda_k^2}{\lambda^2}\right) \\
               &= \gamma_k^2 \left(1 + \frac{\lambda_k^2}{\lambda^2}\right)\ .
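
As a quick numerical check of the algebra, the un-simplified and simplified
closed forms agree (reusing ``gamma``, ``lams``, and ``lam2`` from the toy SVD
sketch above):

.. code-block:: python

    import numpy as np

    # gamma_k lambda_k / lambda^2 equals gamma_k^2 (1 + lambda_k^2 / lambda^2)
    sigma2_unsimplified = gamma * lams / lam2
    sigma2_simplified = gamma ** 2 * (1.0 + lams ** 2 / lam2)
    assert np.allclose(sigma2_unsimplified, sigma2_simplified)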

eLORETA
~~~~~~~

While dSPM and sLORETA solve for noise normalization weights
:math:`\sigma^2_k` that are applied to standard minimum-norm estimates
:math:`\hat{j}(t)`, eLORETA :footcite:`Pascual-Marqui2011` instead solves for
a source covariance matrix :math:`R` that achieves zero localization bias. For
fixed-orientation solutions the resulting matrix :math:`R` will be a diagonal
matrix, and for free-orientation solutions it will be a block-diagonal matrix
with :math:`3 \times 3` blocks.

.. In https://royalsocietypublishing.org/doi/full/10.1098/rsta.2011.0081
.. eq. 2.10 (classical min norm), their values map onto our values as:
..
.. - α=λ²
.. - W=R⁻¹ (pos semidef weight matrix)
.. - K=G
.. - ϕ=x
.. - C=H
..

Eq. 2.13 in :footcite:`Pascual-Marqui2011` states that the following system
of equations can be used to find the weights,
:math:`\forall i \in \{1, ..., P\}` (note that here we represent the equations
from that paper using our notation):

.. math:: r_i = \left[ G_i^\top \left( GRG^\top + \lambda^2C \right)^{-1} G_i \right] ^{-^1/_2}

An iterative algorithm can then be used to find the values of the weights
:math:`r_i` that satisfy these equations:

1. Initialize identity weights.
2. Compute :math:`N= \left( GRG^\top + \lambda^2C \right)^{-1}`.
3. Holding :math:`N` fixed, compute new weights :math:`r_i = \left[ G_i^\top N G_i \right]^{-^1/_2}`.
4. Using new weights, go to step (2) until convergence.

In particular, for step (2) we can use our substitution from :eq:`inv_g_tilde`
as:

.. math::

    N &= (G R G^\top + \lambda^2 C)^{-1} \\
      &= (C^{^1/_2} \tilde{G} R \tilde{G}^\top C^{^1/_2} + \lambda^2 C)^{-1} \\
      &= (C^{^1/_2} (\tilde{G} R \tilde{G}^\top + \lambda^2 I) C^{^1/_2})^{-1} \\
      &= C^{-^1/_2} (\tilde{G} R \tilde{G}^\top + \lambda^2 I)^{-1} C^{-^1/_2}\ .

Then defining :math:`\tilde{N}` as the whitened version of :math:`N`, i.e.,
the regularized pseudoinverse of :math:`\tilde{G}R\tilde{G}^\top`, we can
compute :math:`N` as:

.. math::

    N &= C^{-^1/_2} (U_{\tilde{G}R\tilde{G}^\top} \Lambda_{\tilde{G}R\tilde{G}^\top} V_{\tilde{G}R\tilde{G}^\top}^\top + \lambda^2 I)^{-1} C^{-^1/_2} \\
      &= C^{-^1/_2} (U_{\tilde{G}R\tilde{G}^\top} (\Lambda_{\tilde{G}R\tilde{G}^\top} + \lambda^2 I) V_{\tilde{G}R\tilde{G}^\top}^\top)^{-1} C^{-^1/_2} \\
      &= C^{-^1/_2} V_{\tilde{G}R\tilde{G}^\top} (\Lambda_{\tilde{G}R\tilde{G}^\top} + \lambda^2 I)^{-1} U_{\tilde{G}R\tilde{G}^\top}^\top C^{-^1/_2} \\
      &= C^{-^1/_2} \tilde{N} C^{-^1/_2}\ .

In step (3) we left- and right-multiply with subsets of :math:`G`, but making
the substitution :eq:`inv_g_tilde` we see that we equivalently compute:

.. math::

    r_i &= \left[ G_i^\top N G_i \right]^{-^1/_2} \\
        &= \left[ (C^{^1/_2} \tilde{G}_i)^\top N C^{^1/_2} \tilde{G}_i \right]^{-^1/_2} \\
        &= \left[ \tilde{G}_i^\top C^{^1/_2} N C^{^1/_2} \tilde{G}_i \right]^{-^1/_2} \\
        &= \left[ \tilde{G}_i^\top C^{^1/_2} C^{-^1/_2} \tilde{N} C^{-^1/_2} C^{^1/_2} \tilde{G}_i \right]^{-^1/_2} \\
        &= \left[ \tilde{G}_i^\top \tilde{N} \tilde{G}_i \right]^{-^1/_2}\ .

For convenience, we thus never need to compute :math:`N` itself but can instead
compute the whitened version :math:`\tilde{N}`.
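
A toy NumPy sketch of steps (1)-(4) in this whitened form, assuming fixed
orientations and a diagonal :math:`R` (a real implementation must also handle
free orientations and normalization details):

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    n_ch, n_src = 60, 300
    G_tilde = rng.standard_normal((n_ch, n_src))  # whitened gain (toy stand-in)
    lam2 = 1.0 / 9.0

    r = np.ones(n_src)                     # 1. initialize identity weights
    for _ in range(20):
        # 2. N-tilde = (G~ R G~^T + lam^2 I)^{-1}, the whitened version of N
        N_tilde = np.linalg.inv((G_tilde * r) @ G_tilde.T + lam2 * np.eye(n_ch))
        # 3. r_i = [G~_i^T N-tilde G~_i]^{-1/2} for each source i
        r_new = 1.0 / np.sqrt(np.einsum("ki,kl,li->i", G_tilde, N_tilde, G_tilde))
        if np.allclose(r_new, r, rtol=1e-6):
            break                          # 4. stop at convergence
        r = r_new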

Predicted data
~~~~~~~~~~~~~~

Under noiseless conditions the SNR is infinite and thus leads to
:math:`\lambda^2 = 0` and the minimum-norm estimate explains the measured data
perfectly. Under realistic conditions, however, :math:`\lambda^2 > 0` and there
is a misfit between the measured data and those predicted by the MNE. Comparing
the predicted data, here denoted by :math:`\hat{x}(t)`, with the measured data
can give valuable insight into the correctness of the regularization applied.

In the SVD approach we easily find

.. math::    \hat{x}(t) = G \hat{j}(t) = C^{^1/_2} U \Pi w(t)\ ,

where the diagonal matrix :math:`\Pi` has elements :math:`\pi_k = \lambda_k
\gamma_k`. The predicted data are thus expressed as a weighted sum of the
'recolored eigenfields' in :math:`C^{^1/_2} U`.
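
Continuing the toy SVD sketch, the whitened predicted data and a simple
relative misfit measure:

.. code-block:: python

    # Reuses U, lams, gamma, w, and x_tilde from the SVD sketch above
    pi = lams * gamma                      # pi_k = lambda_k gamma_k
    x_tilde_hat = U @ (pi * w)             # whitened predicted data
    misfit = np.linalg.norm(x_tilde - x_tilde_hat) / np.linalg.norm(x_tilde)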

Cortical patch statistics
~~~~~~~~~~~~~~~~~~~~~~~~~

If the ``add_dist=True`` option was used in source space creation,
the source space file will contain
Cortical Patch Statistics (CPS) for each vertex of the cortical surface. The
CPS provide information about the source space point closest to each vertex as
well as the distance from the vertex to this source space point. The vertices
for which a given source space point is the nearest one define the cortical
patch associated with the source space point. Once these data are available, it
is straightforward to compute the following cortical patch statistics for each
source location :math:`d`:

- The average over the normals at the vertices in a patch,
  :math:`\bar{n_d}`,

- The areas of the patches, :math:`A_d`, and

- The average deviation of the vertex normals in a patch from their average,
  :math:`\sigma_d`, given in degrees.

The ``use_cps`` parameter in :func:`mne.convert_forward_solution` and
:func:`mne.minimum_norm.make_inverse_operator` controls whether cortical patch
statistics (CPS) are used to define normal orientations or not (see
:ref:`CHDBBCEJ`).
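
A hedged usage sketch (hypothetical subject and file names):

.. code-block:: python

    import mne

    # `add_dist=True` (the default) stores the inter-vertex distance data
    src = mne.setup_source_space("sample", spacing="oct6", add_dist=True)
    fwd = mne.read_forward_solution("sample-fwd.fif")
    fwd = mne.convert_forward_solution(fwd, surf_ori=True, use_cps=True)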

.. _inverse_orientation_constraints:

Orientation constraints
~~~~~~~~~~~~~~~~~~~~~~~

The principal sources of MEG and EEG signals are generally believed to be
postsynaptic currents in the cortical pyramidal neurons. Since the net primary
current associated with these microscopic events is oriented normal to the
cortical mantle, it is reasonable to use the cortical normal orientation as a
constraint in source estimation. In addition to allowing completely free source
orientations, the MNE software implements three orientation constraints based
on the surface normal data (a usage sketch follows the list):

- Source orientation can be rigidly fixed to the surface normal direction by
  specifying ``fixed=True`` in :func:`mne.minimum_norm.make_inverse_operator`.
  If cortical patch statistics are available, the average
  normal over each patch, :math:`\bar{n_d}`, is used to define the source
  orientation. Otherwise, the vertex normal at the source space location is
  employed.

- A *location independent or fixed loose orientation constraint* (fLOC) can be
  employed by specifying ``fixed=False`` and ``loose=1.0`` when
  calling :func:`mne.minimum_norm.make_inverse_operator` (see
  :ref:`plot_dipole_orientations_fLOC_orientations`).
  In this approach, a source coordinate
  system based on the local surface orientation at the source location is
  employed. By default, the three columns of the gain matrix G, associated with
  a given source location, are the fields of unit dipoles pointing in the
  directions of the :math:`x`, :math:`y`, and :math:`z` axes of the coordinate
  system employed in the forward calculation (usually the :ref:`MEG head
  coordinate frame <head_device_coords>`). For fLOC the orientation is changed
  so that the first two source components lie in the plane normal to the
  surface normal at the source location and the third component is aligned with
  it. Thereafter, the variance of the source components tangential to the
  cortical surface is reduced by a factor defined by the ``loose`` parameter.

- A *variable loose orientation constraint* (vLOC) can be employed by
  specifying ``fixed=False`` and the ``loose`` parameter when calling
  :func:`mne.minimum_norm.make_inverse_operator` (see
  :ref:`plot_dipole_orientations_vLOC_orientations`). This
  is similar to *fLOC* except that the value given with the ``loose``
  parameter will be multiplied by :math:`\sigma_d`, defined above.
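
The usage sketch referenced above, with objects as in the earlier sketches;
one call per constraint:

.. code-block:: python

    from mne.minimum_norm import make_inverse_operator

    inv_fixed = make_inverse_operator(evoked.info, fwd, noise_cov,
                                      fixed=True, use_cps=True)   # fixed normal
    inv_free = make_inverse_operator(evoked.info, fwd, noise_cov,
                                     fixed=False, loose=1.0)      # free
    inv_loose = make_inverse_operator(evoked.info, fwd, noise_cov,
                                      fixed=False, loose=0.2)     # loose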

Depth weighting
~~~~~~~~~~~~~~~

The minimum-norm estimates have a bias towards superficial currents. This
tendency can be alleviated by adjusting the source covariance matrix :math:`R`
to favor deeper source locations. In the depth weighting scheme employed in MNE
analyze, the elements of :math:`R` corresponding to the :math:`p` th source
location are scaled by a factor

.. math::    f_p = (g_{1p}^\top g_{1p} + g_{2p}^\top g_{2p} + g_{3p}^\top g_{3p})^{-\gamma}\ ,

where :math:`g_{1p}`, :math:`g_{2p}`, and :math:`g_{3p}` are the three columns
of :math:`G` corresponding to source location :math:`p` and :math:`\gamma` is
the order of the depth weighting, which is specified via the ``depth`` option
in :func:`mne.minimum_norm.make_inverse_operator`.
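
A toy NumPy sketch of the depth-weighting factors, with a random gain matrix
as a stand-in (``0.8`` mirrors the common ``depth=0.8`` default for
:math:`\gamma`):

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    n_ch, n_src = 60, 100
    G = rng.standard_normal((n_ch, 3 * n_src))  # free-orientation gain
    gamma_order = 0.8                           # depth-weighting order gamma

    # f_p = (g_1p^T g_1p + g_2p^T g_2p + g_3p^T g_3p)^{-gamma}
    col_norms2 = np.sum(G ** 2, axis=0)         # g^T g for every column
    f = np.sum(col_norms2.reshape(n_src, 3), axis=1) ** -gamma_order
    R_diag = np.repeat(f, 3)   # depth-weighted diagonal of R (3 per source)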

Effective number of averages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is often the case that the epoch to be analyzed is a linear combination over
conditions rather than one of the original averages computed. As stated above,
the noise-covariance matrix computed is originally one corresponding to raw
data. Therefore, it has to be scaled correctly to correspond to the actual or
effective number of epochs in the condition to be analyzed. In general, we have

.. math::    C = C_0 / L_{eff}

where :math:`L_{eff}` is the effective number of averages. To calculate
:math:`L_{eff}` for an arbitrary linear combination of conditions

.. math::    y(t) = \sum_{i = 1}^n {w_i x_i(t)}

we make use of the fact that the noise-covariance matrix is

.. math::    C_y = \sum_{i = 1}^n {w_i^2 C_{x_i}} = C_0 \sum_{i = 1}^n {w_i^2 / L_i}\ ,

which leads to

.. math::    1 / L_{eff} = \sum_{i = 1}^n {w_i^2 / L_i}

An important special case of the above is a weighted average, where

.. math::    w_i = L_i / \sum_{i = 1}^n {L_i}

and, therefore,

.. math::    L_{eff} = \sum_{i = 1}^n {L_i}

Instead of a weighted average, one often computes a weighted sum, the simplest
case being a difference or sum of two categories. For a difference :math:`w_1 =
1` and :math:`w_2 = -1` and thus

.. math::    1 / L_{eff} = 1 / L_1 + 1 / L_2

or

.. math::    L_{eff} = \frac{L_1 L_2}{L_1 + L_2}

Interestingly, the same holds for a sum, where :math:`w_1 = w_2 = 1`.
Generalizing, for any combination of sums and differences, where :math:`w_i =
1` or :math:`w_i = -1`, :math:`i = 1 \dotso n`, we have

.. math::    1 / L_{eff} = \sum_{i = 1}^n {1/{L_i}}
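
These relations are simple enough to capture in a small helper; a sketch
(``n_ave_effective`` is a hypothetical name, not an MNE-Python function):

.. code-block:: python

    def n_ave_effective(weights, n_aves):
        """Effective number of averages for y(t) = sum_i w_i x_i(t).

        Implements 1 / L_eff = sum_i w_i**2 / L_i from above.
        """
        return 1.0 / sum(w ** 2 / L for w, L in zip(weights, n_aves))

    # A difference of two conditions averaged over 40 and 60 epochs:
    print(n_ave_effective([1, -1], [40, 60]))  # 24.0 = (40 * 60) / (40 + 60)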

.. target for :end-before: inverse-end-content