Results

Figure 2.2: (a) The 2D SEG/EAGE salt model and (b) the associated reflectivity model, where the red star denotes a source at $ X=2.725$ km and the attached yellow line denotes the receiver aperture of this source. (c) The CSG from this source. (d) This CSG corrupted by bandlimited incoherent noise such that SNR = 10 dB.
The proposed method of lsmmfs is tested on the 2D SEG/EAGE salt model, of size $ n_x\times n_z = 640\times 150$ , with a grid spacing of 9.144 m. The velocity and reflectivity models are shown in Figures 2.2(a) and (b), respectively.

The following parameters are chosen to emulate a marine acquisition geometry: shot interval = 18.288 m, receiver interval = 9.144 m, near offset = 45.72 m, line length = 2 km. The number, Mga, of supergathers dividing up all $ \gls{Stot}=304$ sources varies among 1, 2, 4, and 8. A Ricker wavelet with a 32 Hz peak frequency is used as the source wavelet, and 160 frequency channels equally divide the frequency range from 0 to 80 Hz, as exemplified alongside equation 2.27. With the true velocity and reflectivity models, a CSG for the source and receivers depicted in Figures 2.2(a, b) is generated by split-step forward modeling; an example is presented in Figure 2.2(c). To probe noise robustness, I contaminate the CSG's with random noise whose spectrum is flat below 80 Hz, at levels yielding SNR = 10, 20, and 30 dB. Figure 2.2(d) shows a contaminated version of (c). The noisy CSG's are Wiener filtered before being migrated. The migration velocity is the smoothed velocity model shown in Figure 2.4(a), obtained by applying a $ 3\times 3$ boxcar filter to the true velocity model of Figure 2.2(a).
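The noise-contamination step can be sketched as follows. This is a minimal illustration, not the code used in the experiments; the function name and interface are my own, and the band limit and SNR are passed in as parameters.

```python
import numpy as np

def add_bandlimited_noise(csg, dt, f_max, snr_db, rng=None):
    """Contaminate a CSG (nt x nr samples) with random noise whose spectrum
    is flat below f_max, scaled to a target SNR in dB. (A sketch; the
    function name and interface are illustrative.)"""
    rng = np.random.default_rng(rng)
    nt = csg.shape[0]
    noise = rng.standard_normal(csg.shape)
    # Band-limit the noise: zero every frequency component above f_max.
    spec = np.fft.rfft(noise, axis=0)
    spec[np.fft.rfftfreq(nt, d=dt) > f_max] = 0.0
    noise = np.fft.irfft(spec, n=nt, axis=0)
    # Scale the noise power so that 10*log10(P_signal/P_noise) = snr_db.
    scale = np.sqrt(np.mean(csg**2) / (np.mean(noise**2) * 10**(snr_db / 10)))
    return csg + scale * noise
```

Because the scaling is applied after band-limiting, the realized SNR matches the target exactly.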

Figure 2.3: Normalized model error (a-c) and normalized objective function (d) for various SNRs as a function of iteration number, shown as solid curves color coded by Mga, when minimizing by hybrid CG. To save space, the legends shown in (a) and (c) apply to all of (a-c). Regarding the black horizontal solid and dashed lines in (a-c), the dash-dot curves in (b), and the symbols $ \medbullet$ , $ \blacksquare$ , $ \blacktriangle$ , and $ \circ$ in (a-c), see the text for details. The short labels `4(c)' through `4(i)' indicate the panels of Figure 2.4 in which the corresponding migration images are shown.
\includegraphics[width=6in]{fig/convrgCurvesCG}

Figure 2.4: Reflectivity distributions obtained by various methods using the smoothed velocity model (a), for various settings of Mga and of the iteration number Kit, where applicable. The SNR of the CSG's is 30 dB for (b,c) and 10 dB for (d-i). Panels (c-i) are referred to in Figures 2.3(a) and (c), respectively.
\includegraphics[width=6in]{fig/migr_images_REGU_CG}

As the LSM iterations proceed, the trial reflectivity model is updated and surpasses the standard migration image in quality, as demonstrated in Figures 2.3 and 2.4. For comparison, migration with subsampled CSG's (`Subsmpl Mig') is also considered, an alternative means of data reduction and speedup. To yield a speedup of about 8 (see Figure 2.5), comparable to that of my proposed method, the subsampling ratio of `Subsmpl Mig' is chosen as $ 1/8$ . As indicated by the black dashed horizontal lines in Figures 2.3(a-c), the model error of `Subsmpl Mig' always exceeds that of standard migration, indicated by the black solid horizontal lines. As shown in Figure 2.4(g), the image produced by `Subsmpl Mig' contains many artifacts that are disruptive because their spatial frequencies and locations are similar to those of the reflectors.

Several features in Figure 2.3 are worth commenting on. Larger Mga and higher SNR lead to smaller model error and better convergence. The oscillations in the objective function in panel (d) are the expected behavior of hybrid CG. The objective function is consistently reduced by CG within every $ K_{CGit}=3$ updates, but can increase when newly encoded supergathers are presented, because the previous optimization efforts were targeted at reducing a differently parameterized objective function. As the iterations proceed, however, the envelope of the oscillatory objective function still decreases, validating the robust performance of hybrid CG.
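This restart-and-re-encode control flow can be illustrated on a toy problem. The sketch below uses a small random linear least-squares problem in place of the wave-equation migration operator, and a random row subset in place of supergather re-encoding; every $K_{CGit}=3$ CG updates, a fresh encoding replaces the objective and CG restarts. All names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
A = rng.standard_normal((n, p))   # stand-in for the full modeling operator
m_true = rng.standard_normal(p)
d = A @ m_true                    # noiseless "data"

def encode():
    """Return a freshly 'encoded' operator/data pair (illustrative only)."""
    rows = rng.choice(n, size=n // 2, replace=False)
    return A[rows], d[rows]

m = np.zeros(p)
k_cgit = 3                        # CG updates per encoding, as in the text
objective = []
for outer in range(10):           # outer iterations = re-encodings
    Ak, dk = encode()
    g_prev, s = None, None        # restart CG for the new objective
    for _ in range(k_cgit):
        r = Ak @ m - dk
        g = Ak.T @ r              # gradient of 0.5*||Ak m - dk||^2
        if g @ g < 1e-12:
            break
        objective.append(r @ r)
        if g_prev is None:
            beta = 0.0
        else:                     # Polak-Ribiere+ conjugacy coefficient
            beta = max(0.0, g @ (g - g_prev) / (g_prev @ g_prev))
        s = -g + (beta * s if s is not None else 0.0)
        alpha = -(g @ s) / ((Ak @ s) @ (Ak @ s))  # exact line search
        m += alpha * s
        g_prev = g
# Within each group of k_cgit updates the objective decreases; it can jump
# when a new encoding arrives, but its envelope decays over the iterations.
```

Running this reproduces the qualitative behavior of panel (d): monotone descent within each encoding, with possible jumps at re-encoding.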

In terms of model error, the least-squares method can surpass standard migration in as few as two iterations (see, for example, the $ \circ$ symbol at iteration 2 on the cyan curve in Figure 2.3(a)). This estimate, however, is too optimistic, even though I have made sure to minimize the model error of the standard migration image as $ \underset{\alpha}{\min}{\Vert\alpha \breve{{\bf {m}}} - {\bf {m}}\Vert^2}$ , where $ \breve{{\bf {m}}}$ is the migration image and $ {\bf {m}}$ is the true model. The reason is that a standard migration image tends to be smooth, with its high-frequency components suppressed, so its model error can be large. The image obtained by lsmmfs, on the other hand, tends to be sharper, matching the true model better in terms of the $ L_2$ -norm of the model error. The downside, however, is ringy noise, as evident in the corresponding reflectivity images shown in Figures 2.4(e) and (f). That is why it makes sense to involve human subjects in judging the quality of the resulting images.
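The optimal scaling $\alpha$ has the closed form $\alpha = \langle\breve{\bf m},{\bf m}\rangle/\langle\breve{\bf m},\breve{\bf m}\rangle$, so the normalized model error used here can be computed in a few lines (a sketch; the function name is mine):

```python
import numpy as np

def normalized_model_error(m_img, m_true):
    """Normalized model error after optimal scaling: alpha minimizes
    ||alpha*m_img - m_true||^2, giving alpha = <m_img,m_true>/<m_img,m_img>."""
    alpha = np.dot(m_img, m_true) / np.dot(m_img, m_img)
    return np.linalg.norm(alpha * m_img - m_true) / np.linalg.norm(m_true)
```

An image proportional to the true model scores zero; an image orthogonal to it scores one, regardless of amplitude.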

The break-even points, where the image quality of lsmmfs is comparable to that of standard migration, are indicated by the symbols $ \medbullet$ , $ \blacksquare$ , and $ \blacktriangle$ in Figures 2.3(a-c). Three images corresponding to such points are shown in Figures 2.4(c), (h), and (i). In equating the quality of these images with that of the standard migration image, shown in Figure 2.4(d), tradeoffs are made. Figures 2.4(c), (h), and (i) contain some residual high-frequency noise, especially at shallow depths, but this noise is quite distinct from the reflectors and thus hardly affects the dominant features. On the other hand, the resolution of Figures 2.4(c), (h), and (i) is better than that of Figure 2.4(d). It is on the basis of these two factors that I choose the break-even points in visual quality. Once the abscissae, or $ K_{it}$ 's, of these break-even points are known, I calculate from equation C.3 the relative computational cost, or rather its reciprocal, termed the `gain in computational efficiency', which is plotted in Figure 2.5. Here we see that, for the parameter settings and the model under study, nearly an order of magnitude of speedup can be achieved.


Figure 2.5: Gain in computational efficiency, plotted as functions of \gls{Mga} (labeled atop).

One may raise the concern that, due to the frequency selection scheme, even a dozen iterations of dynamic encoding hardly give each source a chance to exhaust its spectrum. For example, take $ \gls{Mga}=2$ and $ \gls{Kit}=10$ ; then $ S = \gls{Stot}/\gls{Mga}= 304/2 = 152$ , so at any one iteration each source is assigned only $ 1/152$ of the available frequency channels. With 10 iterations, a source can cover at best a mere $ 10/152$ of its spectrum. In light of this analysis, the apparently good performance of the frequency selection scheme seems rather counterintuitive. To address this concern, I maintain that with least-squares iterations, sources no longer act in straightforward linear superposition as they do in standard migration. Rather, they act cooperatively, and through this collaboration between sources the model is effectively illuminated by a wider range of the spectrum than that provided by stacking migrations.
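The arithmetic behind this best-case bound can be written out directly (values taken from the example above):

```python
# Best-case spectral coverage of a single source after Kit iterations of
# dynamic encoding, assuming no frequency channel is drawn twice.
S_tot, Mga, Kit = 304, 2, 10
S = S_tot // Mga                 # sources per supergather: 152
per_iteration = 1 / S            # fraction of the channels one source gets
coverage = Kit * per_iteration   # at best 10/152 of the spectrum
```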

To test this idea, I examine the convergence performance of IS, to which frequency selection encoding with multiple sources also applies. Figure 2.3(b) includes the convergence curves for IS (the dash-dot curves), plotted as prescribed at the end of Appendix C, and Figure 2.4(b) shows a migration image of IS obtained at the same computational cost as Figure 2.4(c). Evidently, with this amount of computation, IS beats standard migration in terms of neither model error nor the quality of the migration image. The explanation for this phenomenon is precisely the concern raised earlier, aided by the realization that random frequency assignments rarely produce a smooth spectrum; fluctuations in the spectrum are likely, and non-smoothness in the spectrum corresponds to ringiness in the time domain. Therefore the migration image is always inferior to the standard migration image. Contrasting IS and lsmmfs, one can see the essential role that least-squares updates play in this frequency selection multisource method. Additional insights are reaped from a comparison study conducted in Appendix D, where I show that iterative refinement likely leads to better solutions than migration does.

Figure 2.6: The 3D SEG/EAGE salt velocity model, in m/s, sliced at (left panel) x=6.7 km and (right panel) z=1.98 km.
\includegraphics[width=6in]{fig/3Dvelo_slices}

Figure 2.7: (a,b) The reflectivity model, and images obtained by (c,d) standard shot-record prestack split-step migration and (e,f) the proposed lsmmfs with 1 supergather at the $ 50^\textrm{th}$ iteration of steepest descent.
\includegraphics[width=6in]{fig/3Dreflec_compar_slices}
To test the viability of the frequency selection multisource method in processing 3D data, I use a 3D SEG/EAGE salt model, of size $ n_x \times n_y \times n_z = 672 \times 672 \times 185$ , with a grid interval of 20 m. Slices of the velocity model are depicted in Figure 2.6. There is one receiver at each grid point, and $ S_{tot} = 64 \times 64 = 4096$ sources are equally distributed on the surface. A Ricker wavelet with a 16 Hz peak frequency is used as the source wavelet, and $ n_\omega$ = 360 frequency channels equally divide the frequency range from 0 to 40 Hz, as exemplified alongside equation 2.27. Here, a fixed acquisition geometry of both the sources and receivers is assumed, as the aim of this study is to test whether the frequency selection multisource method can work on either land or marine 3D data.

Note that in this case the number of sources $ S_{tot}$ far exceeds the number of available frequency channels $ n_{\omega}$ . If $ S = \gls{Stot}/ \gls{Mga} > n_{\omega}$ , then assignment of non-overlapping source spectra is not possible unless only a small number of sources are turned on at a time, a practice that would discard much useful information. Here I allow overlapping source spectra: if $ S \gg n_\omega$ , each frequency channel is shared among $ S/n_\omega$ sources. This assignment can be implemented, for example, by randomly drawing $ S/n_\omega$ source indices in turn without replacement for each frequency. In addition, a random polarity of $ \pm 1$ is assigned to each source, to reduce the crosstalk among sources sharing a frequency. A comparison of this method with standard migration is given in Figure 2.7, where 50 steepest-descent updates of lsmmfs with one supergather yield a result comparable to standard migration. Based on the computational cost, equation C.5 gives a speedup of $ 2S / (4 K_{it} -1) = 2\times 4096/199 \cong 41$ . On the I/O side, as analyzed at the end of the Theory section, the proposed method requires either $ \epsilon\times$ or $ (2+\epsilon)\times$ the I/O cost of the standard approach, depending on whether or not the data have already been transformed into the frequency domain.
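The shared-frequency assignment with random polarities can be sketched as follows. The function name and interface are illustrative, not the code used in the experiments; a random permutation split into $n_\omega$ chunks realizes drawing source indices in turn without replacement.

```python
import numpy as np

def assign_shared_frequencies(S, n_omega, rng=None):
    """Sketch of the frequency-sharing assignment described in the text:
    when S >> n_omega, each frequency channel is shared among about
    S/n_omega sources, drawn in turn without replacement, and every source
    receives a random +/-1 polarity to reduce crosstalk among sources
    sharing a channel. (Name and interface are illustrative.)"""
    rng = np.random.default_rng(rng)
    # A random permutation split into n_omega chunks draws ~S/n_omega
    # source indices without replacement for each frequency channel.
    channels = np.array_split(rng.permutation(S), n_omega)
    polarity = rng.choice([-1, 1], size=S)  # random source polarities
    return channels, polarity
```

With $S=4096$ and $n_\omega=360$, each channel ends up shared by 11 or 12 sources, and every source appears in exactly one channel.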

Yunsong Huang 2013-09-22