
Multisource Least-squares Migration (MLSM)

In order to suppress crosstalk noise to an acceptable level when the number of multiple sources $ S$ is large, I solve equation [*] in the least-squares sense (Dai and Schuster, 2009; Dai et al., 2009). That is, the objective function is defined as

$\displaystyle f(\textbf{m})=\frac{1}{2}\vert\vert\textbf{d}-\textbf{Lm}\vert\vert^{2}+\frac{1}{2}\lambda\vert\vert\textbf{m}-\textbf{m}_{apr}\vert\vert^{2},$ (7)

so that an optimal $ \textbf{m}$ is sought that minimizes the objective function in equation (7). In equation (7), Tikhonov regularization (Tikhonov and Arsenin, 1977) is used, and $ \lambda$ is the regularization parameter, determined by trial and error. Smoothness constraints in the form of second-order derivatives of the model function can expedite convergence (Kühl and Sacchi, 2003) and partly overcome the problems associated with errors in the velocity model.
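As a concrete illustration, the objective function above can be evaluated as in the following minimal sketch. The modeling operator $ \textbf{L}$ is stood in by a dense NumPy matrix (in practice it is a wave-equation Born modeling operator applied on the fly), and all function and variable names are hypothetical:

```python
import numpy as np

def objective(m, d, L, lam, m_apr):
    """Evaluate 0.5*||d - L m||^2 + 0.5*lambda*||m - m_apr||^2 (equation 7)."""
    residual = d - L @ m     # data misfit term
    penalty = m - m_apr      # Tikhonov regularization term
    return 0.5 * (residual @ residual) + 0.5 * lam * (penalty @ penalty)
```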

With the assumption that nothing is known about $ \textbf{m}$ , $ \textbf{m}_{apr}$ is set to zero. The model $ \textbf{m}$ that minimizes equation (7) can be found by a gradient-type optimization method

$\displaystyle \textbf{m}^{(k+1)}=\textbf{m}^{(k)}-\alpha \textbf{F}(\textbf{L}^{T}(\textbf{Lm}^{(k)}-\textbf{d})+\lambda\textbf{m}^{(k)}),$ (8)

where $ \textbf{L}^{T}(\textbf{Lm}^{(k)}-\textbf{d})+\lambda\textbf{m}^{(k)}$ is the gradient, $ \textbf {F}$ is a preconditioning matrix, and $ \alpha$ is the step length. Because the forward modeling and migration operators are linear and adjoint to each other, the analytical step-length formula can be used. Alternatively, to improve the robustness of the MLSM algorithm, a quadratic line search is carried out with the current model and two trial models. In this study, I use the conjugate gradient (CG) method, which generally converges faster than the steepest descent method. Moreover, static encoding is used, where the encoding functions are the same at every iteration, to reduce the I/O cost. Boonyasiriwat and Schuster (2010) showed that dynamic encoding, where the encoding functions are changed at every iteration, is more effective in 3D multisource full waveform inversion, so dynamic encoding results are presented as well. To ensure the convergence of MLSM, the migration velocity should be close to the true velocity model.
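The update in equation (8), together with the analytical step length and the quadratic line search described above, can be sketched as follows. This is an illustrative implementation under simplifying assumptions: $ \textbf{L}$ and $ \textbf {F}$ are dense matrices, $ \textbf{m}_{apr}=0$ , and all names are hypothetical:

```python
import numpy as np

def mlsm_gradient_step(m, d, L, lam, F, alpha=None):
    """One iteration of equation (8) with m_apr = 0."""
    g = L.T @ (L @ m - d) + lam * m   # gradient of equation (7)
    p = F @ g                         # preconditioned search direction
    if alpha is None:
        # Analytical step length: exact minimizer of f(m - alpha*p) along p,
        # available because L is linear (the Hessian is L^T L + lam*I).
        Lp = L @ p
        alpha = (g @ p) / (Lp @ Lp + lam * (p @ p))
    return m - alpha * p

def quadratic_step_length(f, a1, a2):
    """Quadratic line search: fit a parabola through f(0), f(a1), f(a2)
    (the current model and two trial models) and return its minimizer."""
    f0, f1, f2 = f(0.0), f(a1), f(a2)
    denom = a1 * a2 * (a1 - a2)
    c2 = (a2 * (f1 - f0) - a1 * (f2 - f0)) / denom        # quadratic coeff.
    c1 = (a1**2 * (f2 - f0) - a2**2 * (f1 - f0)) / denom  # linear coeff.
    return -c1 / (2.0 * c2)                               # parabola vertex
```

In production the preconditioned direction would feed a CG update rather than plain gradient descent, but the step-length logic is the same.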


Wei Dai 2013-07-10