The following parameters are chosen to emulate a marine acquisition geometry: shot interval = 18.288 m, receiver interval = 9.144 m, near offset = 45.72 m, and line length = 2 km. The number Mga of supergathers that divide up all the sources is varied over 1, 2, 4, and 8. A Ricker wavelet with a 32 Hz peak frequency is used as the source wavelet, and 160 frequency channels equally divide the frequency range from 0 to 80 Hz, as exemplified alongside equation 2.27. With the true velocity and reflectivity models, a CSG for the source and receivers depicted in Figure 2.2(a, b) is generated using split-step forward modeling and is presented in Figure 2.2(c). To probe robustness against noise, I contaminate the CSG's with various levels of random noise having a flat spectrum below 80 Hz, yielding SNR = 10, 20, and 30 dB. Figure 2.2(d) shows a contaminated version of (c). The noisy CSG's are Wiener filtered before being migrated. The smoothed velocity model shown in Figure 2.4(a) is used as the migration velocity; it is obtained by applying a boxcar filter to the true velocity model shown in Figure 2.2(a).
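As a concrete illustration of the noise contamination step, the following sketch adds random noise with a flat spectrum below 80 Hz to a synthetic CSG so that a prescribed SNR is reached. The array shapes, the sampling interval, and the SNR definition (signal power over noise power, in dB) are assumptions made for this example, not specifications taken from the text.

    import numpy as np

    def add_bandlimited_noise(csg, dt, snr_db, fmax=80.0, rng=None):
        # Contaminate a CSG of shape (nt, nrec) with random noise whose
        # spectrum is flat below fmax (Hz), scaled to reach snr_db (dB).
        rng = np.random.default_rng() if rng is None else rng
        nt, nrec = csg.shape

        # White noise, band-limited by zeroing spectral components above fmax.
        noise = rng.standard_normal((nt, nrec))
        freqs = np.fft.rfftfreq(nt, d=dt)
        spec = np.fft.rfft(noise, axis=0)
        spec[freqs > fmax, :] = 0.0
        noise = np.fft.irfft(spec, n=nt, axis=0)

        # Scale the noise so that 10*log10(P_signal / P_noise) = snr_db.
        scale = np.sqrt(np.mean(csg**2) / (np.mean(noise**2) * 10.0**(snr_db / 10.0)))
        return csg + scale * noise

    # Example: a record sampled at 4 ms, contaminated to SNR = 20 dB.
    # csg_noisy = add_bandlimited_noise(csg_clean, dt=0.004, snr_db=20.0)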
As the LSM iterations proceed, the trial reflectivity model is updated and eventually surpasses the standard migration image in quality, as demonstrated in Figures 2.3 and 2.4. For comparison, migration with subsampled CSG's (`Subsmpl Mig') is also considered, as an alternative means of data reduction and speedup. To yield a speedup of around 8 (see Figure 2.5), comparable to that of my proposed method, the subsampling ratio of `Subsmpl Mig' is chosen as 1/8. As indicated by the black dashed horizontal lines in Figures 2.3(a-c), the model error of `Subsmpl Mig' always exceeds that of standard migration, indicated by the black solid horizontal lines. As shown in Figure 2.4(g), the image produced by `Subsmpl Mig' contains many artifacts that are disruptive because they are similar in spatial frequency and location to the reflectors.
Several features in Figure 2.3 are worth commenting on. Larger Mga and higher SNR lead to smaller model error and better convergence. The oscillations of the objective function in panel (d) are the expected behavior of hybrid CG: the objective function is consistently reduced by CG within each block of updates, but increases when a newly encoded supergather is presented, because the previous optimization effort was aimed at reducing a differently parameterized objective function. As the iterations proceed, however, the envelope of the oscillatory objective function still decreases, validating the robust performance of hybrid CG.
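To make this behavior concrete, here is a self-contained toy example in which a blended linear system is re-encoded every few updates; a steepest-descent step stands in for the CG update, and the matrix sizes, the +/-1 encoding, and the choice of 4 supergathers are illustrative assumptions of mine, not the actual lsmmfs implementation. The encoded misfit decreases within each block of updates, may jump when the encoding changes, and its envelope trends downward.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear problem L m = d standing in for migration; rows play the role of shots.
    n_shots, n_model = 64, 32
    L = rng.standard_normal((n_shots, n_model))
    m_true = rng.standard_normal(n_model)
    d = L @ m_true

    def encode(L, d, n_super, rng):
        # Blend the shots into n_super supergathers with random +/-1 encoding.
        E = rng.choice([-1.0, 1.0], size=(n_super, n_shots))
        return E @ L, E @ d

    m = np.zeros(n_model)
    enc_misfit = []                                  # oscillatory objective, cf. Figure 2.3(d)
    for outer in range(10):                          # re-encode at the start of each block
        Lk, dk = encode(L, d, n_super=4, rng=rng)
        enc_misfit.append(np.linalg.norm(dk - Lk @ m))   # may jump upward here
        for _ in range(5):                           # gradient updates on the fixed encoding
            g = Lk.T @ (dk - Lk @ m)                 # negative gradient of 0.5*||Lk m - dk||^2
            alpha = (g @ g) / (np.linalg.norm(Lk @ g)**2 + 1e-12)
            m += alpha * g
            enc_misfit.append(np.linalg.norm(dk - Lk @ m))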
In terms of model error, the least-squares method can surpass standard migration in as few as two iterations (see, for example, the point at iteration 2 on the cyan curve in Figure 2.3(a)). This estimate, however, is too optimistic, even though I have made sure to minimize the model error of the standard migration image over a scale factor, i.e., as min_a ||a*m_mig - m||, where m_mig is the migration image, m is the true model, and a is the optimal scaling. The reason is that a standard migration image tends to be smooth, with its high-frequency components suppressed, so its model error can be large. On the other hand, the image obtained by lsmmfs tends to be sharper, matching the true model better in terms of the L2 norm of the model error. The downside, however, is ringy noise, as evident in the corresponding reflectivity images shown in Figure 2.4(e) and (f). That is why it makes sense to involve human subjects in judging the quality of the resulting images.
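As a sketch of how such a scaled model error can be computed, the snippet below finds the least-squares optimal scale factor for an image and reports the relative error; the normalization by the norm of the true model and the variable names are my assumptions.

    import numpy as np

    def scaled_model_error(m_img, m_true):
        # Relative model error min over a of ||a*m_img - m_true|| / ||m_true||,
        # where a is the least-squares optimal scale factor.
        x = m_img.ravel()
        y = m_true.ravel()
        a = (x @ y) / (x @ x)                        # optimal scaling of the image
        return np.linalg.norm(a * x - y) / np.linalg.norm(y)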
The break-even points, where the image quality of lsmmfs becomes comparable to that of standard migration, are marked by symbols in Figure 2.3(a-c). Three images corresponding to such points are shown in Figure 2.4(c), (h) and (i). To equate the quality of these images with that of the standard migration image, shown in Figure 2.4(d), tradeoffs are made. In Figure 2.4(c), (h) and (i) there is some residual high-frequency noise, especially at shallow depths, but this noise is quite distinct from the reflectors and thus hardly affects the dominant features. On the other hand, the resolution of Figure 2.4(c), (h) and (i) is better than that of Figure 2.4(d). It is on the basis of these two factors that I choose the break-even points in visual quality. Once the abscissae, i.e., the break-even iteration numbers, are known, I calculate from equation C.3 the relative computational cost, or its reciprocal, termed the `gain in computational efficiency', which is plotted in Figure 2.5. Here we see that, for the parameter settings and the model under study, nearly an order of magnitude of speedup can be achieved.
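Equation C.3 is not reproduced here, but the bookkeeping behind Figure 2.5 can be sketched as follows, under the simplifying assumptions that cost is proportional to the number of gathers migrated, that each lsmmfs iteration processes Mga encoded supergathers, and that one least-squares iteration costs roughly twice a plain migration of the same gather (one forward plus one adjoint computation). This is a stand-in for equation C.3, not the exact expression.

    def efficiency_gain(n_breakeven, m_ga, s_tot, iter_cost_ratio=2.0):
        # Gain in computational efficiency = 1 / relative cost, where the
        # relative cost compares n_breakeven lsmmfs iterations (m_ga encoded
        # supergathers each, at iter_cost_ratio times a plain migration)
        # against the s_tot shot migrations of the standard method.
        relative_cost = n_breakeven * m_ga * iter_cost_ratio / s_tot
        return 1.0 / relative_cost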
One may raise the concern that, owing to the frequency selection scheme, even a dozen iterations of dynamic encoding hardly give each source the chance to exhaust its spectrum. For example, with Mga supergathers and Stot sources, the sources of a supergather share the available frequency channels, so at any one iteration each source is assigned only a fraction of roughly Mga/Stot of the channels. With 10 iterations, in the best scenario a source can cover only about ten times that fraction of its spectrum, which is still small when Stot is large. In light of this analysis, the apparently good performance of the frequency selection scheme seems rather counterintuitive. To address this concern, I maintain that, thanks to the least-squares iterations, sources no longer act in straightforward linear superposition as they do in standard migration. Rather, they act cooperatively, and through this collaboration between sources the model is effectively illuminated by a wider range of the spectrum than stacked migrations would provide.
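The arithmetic behind this concern can be made explicit. Assuming the available frequency channels are split evenly and disjointly among the Stot/Mga sources of a supergather, and that no channel is repeated for a given source across re-encodings (the best case), the per-iteration and cumulative spectral coverage of a single source are as follows; the even, disjoint split and the shot count used in the comment are illustrative assumptions.

    def spectral_coverage(s_tot, m_ga, n_iter):
        # Fraction of its spectrum one source can cover, assuming the channels
        # of a supergather are split evenly and disjointly among its
        # s_tot / m_ga sources and never repeated across n_iter re-encodings.
        per_iteration = m_ga / s_tot                 # fraction of channels per source
        best_case = min(1.0, n_iter * per_iteration)
        return per_iteration, best_case

    # Illustration: roughly 110 shots on a 2 km line at 18.288 m spacing.
    # spectral_coverage(s_tot=110, m_ga=1, n_iter=10) -> (~0.009, ~0.09)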
To test this idea, I examine the convergence performance of IS, to which the multisource frequency selection encoding applies as well. Figure 2.3(b) includes its convergence curves (the dash-dot curves are for IS), plotted as prescribed at the end of Appendix C, and Figure 2.4(b) shows a migration image of IS obtained at the same computational cost as that of Figure 2.4(c). Evidently, with this amount of computation, IS does not beat standard migration in terms of either model error or the quality of the migration image. The explanation for this behavior is precisely the concern raised earlier, together with the realization that random frequency assignments rarely produce a smooth spectrum; fluctuations in the spectrum are likely, and non-smoothness in the spectrum corresponds to ringiness in the time domain. Therefore the IS image is always inferior to the standard migration image. Contrasting IS and lsmmfs, one can see the essential role that the least-squares updates play in this frequency-selection multisource method. Additional insights are gained from the comparison study conducted in Appendix D, where I show that iterative refinement likely leads to better solutions than migration does.
Note that in this case the number of sources, Stot, is far greater than the number of available frequency channels, Nf. If Stot > Nf, then an assignment of non-overlapping source spectra is not possible unless only a small number of sources are turned on at a time, a practice that would discard much useful information. Here I therefore allow overlapping source spectra, so that each frequency channel is shared among several sources. This assignment can be implemented, for example, by randomly drawing source indices in turn, without replacement, and assigning them to each frequency. In addition, a random polarity is assigned to each source in order to reduce the crosstalk among sources sharing a frequency. A comparison of this method with standard migration is given in Figure 2.7, where 50 steepest-descent updates of lsmmfs in one supergather yield a result comparable to standard migration. Equation C.5 then gives the corresponding speedup based on the computational cost. On the I/O side, as analyzed at the end of the Theory section, the proposed method requires only a fraction of the I/O cost of the standard approach, the fraction depending on whether or not the data have already been transformed into the frequency domain.
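A minimal sketch of such an assignment is given below, under the assumptions that each source receives exactly one frequency channel per encoding and that the channels are dealt out as evenly as possible by drawing source indices in random order without replacement; the function and variable names are mine.

    import numpy as np

    def assign_frequencies(s_tot, n_freq, rng=None):
        # Assign one frequency channel and a random +/-1 polarity to each of
        # s_tot sources, letting channels be shared when s_tot > n_freq.
        rng = np.random.default_rng() if rng is None else rng
        order = rng.permutation(s_tot)               # draw source indices without replacement
        freq_of_source = np.empty(s_tot, dtype=int)
        freq_of_source[order] = np.arange(s_tot) % n_freq   # deal channels out in turn
        polarity = rng.choice([-1, 1], size=s_tot)   # random polarity to reduce crosstalk
        return freq_of_source, polarity

    # Example (illustrative numbers): 480 sources sharing 160 channels means
    # each channel is shared by 3 sources.
    # freqs, pols = assign_frequencies(s_tot=480, n_freq=160)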