For large datasets, 3D prestack wave equation migration is a computationally expensive procedure. Its workload is proportional to both the number of shots in the survey and the cost of solving the 3D wave equation for a given velocity model; for iterative methods, it is further proportional to the number of iterations needed for acceptable convergence.
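To make the scaling concrete, the sketch below expresses the workload as the product of these three factors; the function name and the example numbers are illustrative placeholders, not values from this chapter.

```python
# Toy cost model: workload grows with shots, iterations, and per-solve cost.
def migration_workload(n_shots, n_iterations, cost_per_solve):
    """Total cost ~ (shots) x (iterations) x (one 3D wave-equation solve)."""
    return n_shots * n_iterations * cost_per_solve

# Hypothetical example: 10,000 shots, 30 iterations, unit cost per solve.
print(migration_workload(10_000, 30, 1.0))  # 300000.0 solve-equivalents
```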
Morton and Ober (1998) proposed reducing this workload by migrating one blended supergather rather than migrating individual shot gathers separately. Here, the supergather is computed by summing a number of shot gathers, each encoded by correlation with a distinct random time series; the time series are approximately orthogonal to one another. The migration image is then formed by applying a decoding migration operator whose imaging condition is tuned to unravel the simultaneous sum of encoded shots. Applying this operator to the supergather produces a migration image of good quality only if the number of iterations is sufficiently large; in fact, their results did not show a clear computational advantage over conventional wave equation migration.
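The following is a minimal sketch of this blending idea, assuming shot gathers stored as NumPy arrays of shape (n_time, n_receivers); the function name `blend_random_codes` is mine, not Morton and Ober's.

```python
import numpy as np

def blend_random_codes(shot_gathers, code_len, seed=0):
    """Encode each shot gather with its own random time series and sum
    the results into one supergather. Finite random series are only
    approximately orthogonal, which is the source of residual crosstalk.
    """
    rng = np.random.default_rng(seed)
    n_t, n_r = shot_gathers[0].shape
    supergather = np.zeros((n_t + code_len - 1, n_r))
    codes = []
    for gather in shot_gathers:
        code = rng.standard_normal(code_len)
        codes.append(code)
        for r in range(n_r):
            # Convolution with the code; correlation, as in the text,
            # is convolution with the time-reversed code.
            supergather[:, r] += np.convolve(gather[:, r], code)
    return supergather, codes
```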
To mitigate problems associated with the wavelets of long random time series, Jing et al. (2000) and Krebs et al. (2009) proposed a polarity encoder that randomly multiplies shot gathers by either +1 or -1. For phase-encoded multisource migration, Jing et al. (2000) empirically concluded that the crosstalk term was adequately suppressed when six shot gathers were encoded, summed together, and migrated. Krebs et al. (2009), in turn, empirically found that using this strategy with full waveform inversion (FWI) produced acceptable velocity tomograms at a cost saving of at least one order of magnitude. In one of the few departures from random encoding, Gao et al. (2010) used a deterministic encoding to determine the shot scale factors that gave the most significant update to the velocity model for a specified composite source. Another form of deterministic encoding is plane-wave decomposition (see, e.g., Whitmore and Garing, 1993; Duquet et al., 2001; Zhang et al., 2003), which also aims at reducing data volume; using this method, Vigh and Starr (2008) obtained speedups ranging from threefold to tenfold. Other groups, such as Virieux and Operto (2009), Ben-Hadj-Ali et al. (2009, 2011), Dai and Schuster (2009), and Boonyasiriwat and Schuster (2010), reported similar cost savings for FWI or least squares migration, using somewhat different encoding recipes, alone or in combination: random time shifting, frequency selection, source selection, amplitude encoding, and/or spatial randomization of the source locations. In a related inversion scheme, Tang (2009) used random phase encoding of simultaneous sources to efficiently compute the Hessian for iterative least squares migration. Almost all of these schemes aim to approximate orthogonality between different encoders in as few iterations as possible.
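A polarity encoder of the kind Jing et al. (2000) and Krebs et al. (2009) describe reduces, in a sketch, to the following (the function name is mine):

```python
import numpy as np

def blend_polarity(shot_gathers, seed=0):
    """Random-polarity encoding: multiply each shot gather by +1 or -1
    at random, then stack the encoded gathers into one supergather."""
    rng = np.random.default_rng(seed)
    polarities = rng.choice([-1.0, 1.0], size=len(shot_gathers))
    supergather = sum(p * g for p, g in zip(polarities, shot_gathers))
    return supergather, polarities
```

On migration, the cross terms between shots tend toward zero because the random polarities decorrelate them; the six-gather figure cited above is Jing et al.'s empirical threshold for adequate suppression.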
Is there an encoding scheme that exactly satisfies this orthogonality condition? The answer is yes. The frequency-division multiplexing (FDM) scheme from the communications industry can be used to assign each shot gather a unique set of frequencies. Careful assignment ensures that frequencies do not overlap from one shot gather to the next, thereby eliminating the crosstalk. Just as important, FDM also mitigates the acquisition crosstalk noise associated with a marine geometry.
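A minimal sketch of such an assignment, again assuming shot gathers as NumPy arrays: each shot keeps only its own comb of discrete frequency bins, so the spectral supports are disjoint and the crosstalk between any two encoded gathers is exactly zero (the function name and comb layout are illustrative choices, not necessarily the assignment used later in this chapter).

```python
import numpy as np

def fdm_blend(shot_gathers):
    """Assign shot i every len(shot_gathers)-th frequency bin (offset i),
    then sum the band-limited gathers into one supergather."""
    n = len(shot_gathers)
    n_t, n_r = shot_gathers[0].shape
    n_f = n_t // 2 + 1                         # number of rfft bins
    super_spec = np.zeros((n_f, n_r), dtype=complex)
    for i, gather in enumerate(shot_gathers):
        spec = np.fft.rfft(gather, axis=0)
        mask = np.zeros(n_f, dtype=bool)
        mask[i::n] = True                      # frequency comb unique to shot i
        super_spec += spec * mask[:, None]
    return np.fft.irfft(super_spec, n=n_t, axis=0)
```

Decoding shot i amounts to reapplying its mask in the frequency domain; because the masks never overlap, no iterations are needed to suppress crosstalk.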
Marine acquisition crosstalk is defined as the migration noise caused by the mismatch between the modeled traces and the recorded traces. In a marine survey, the recorded traces are live only over a moving swath of hydrophones, whereas the generated finite-difference traces are live everywhere. This mismatch induces large residuals in the data misfit function, leading to large artifacts in the FWI or migration images. As will be discussed later, the FDM strategy eliminates this problem. The downside of this strategy, however, is the reduced resolving power of the seismic illumination. To enhance the resolving power, I use the least squares migration (LSM) method (Nemeth et al., 1999; Duquet et al., 2000; Tang and Biondi, 2009), redrawing each shot gather's unique frequency fingerprint every three conjugate gradient (CG) updates. The resulting migration algorithm for encoded data can be more than an order of magnitude faster than conventional migration while producing nearly the same image quality.
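The overall iteration can be sketched as the loop below, with the migration engine left abstract: `blend` and `cg_update` are hypothetical callables standing in for the FDM blending and for one CG step of LSM, respectively.

```python
import numpy as np

def encoded_lsm(shot_gathers, blend, cg_update, n_outer, seed=0):
    """LSM with re-encoding: each outer pass redraws the shots' frequency
    fingerprints, blends a fresh supergather, and runs three CG updates
    warm-started from the current image estimate.

    blend(shot_gathers, offsets) -> supergather
    cg_update(supergather, image) -> updated image (one CG step)
    """
    rng = np.random.default_rng(seed)
    image = None
    for _ in range(n_outer):
        offsets = rng.permutation(len(shot_gathers))  # new fingerprints
        supergather = blend(shot_gathers, offsets)
        for _ in range(3):                            # three CG updates per code
            image = cg_update(supergather, image)
    return image
```

Redrawing the fingerprints means that, over many outer passes, every shot is eventually illuminated by the full bandwidth, which restores the resolving power lost to the frequency split.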
The rest of this chapter is organized as follows. The theory section develops frequency-division encoding, shows how it removes the crosstalk in migrating supergathers, and discusses the I/O implications for computing systems. The method section, supplemented by appendices, defines the objective function for the frequency-division multisource algorithm, discusses the implications for optimization, and derives the computational complexity. Numerical results for both the 2D and 3D SEG/EAGE salt models are presented next: the 2D model is used to generate synthetic data emulating a marine survey, and the 3D model is used to test the viability of the proposed technique on 3D data. The final section presents a summary and discussion.