Tying seismic sections to well picks typically requires estimating a source wavelet from a sonic log. Few want to repeat this effort for each dataset recorded with different parameters. Each frequency of a source wavelet can have a different amplitude and phase. A bulk phase rotation or amplitude scaling does not change with frequency and cannot make one arbitrary wavelet look like another. Instead, we should estimate distinct wavelets directly from cross-correlations of all intersecting datasets.
A collection of independent seismic surveys, whether 2D lines or 4D vintages, may intersect each other without their reflections overlaying well. Seismic attributes differ between surveys and complicate any stratigraphic interpretation, particularly of thin beds. We want to apply wavelet corrections to each of these datasets so that intersections tie as well as possible.
One approach was implemented independently by David Mackidd of EnCana and by Bishop and Nunns [1]. It cross-correlates traces from different surveys near intersection points, then extracts time shifts, phase rotations, and scale factors from these cross-correlations by least squares. Bulk phase rotations require care with phase unwrapping, and time shifts are not independent of phase. These limited corrections may be robust in the presence of defective spectra, but at the cost of ignoring arbitrary changes of phase and amplitude with frequency.
Instead, we take the approach of Henry and Mellman [3], and invert cross-correlations directly for arbitrary wavelets. This problem is very similar to the more-familiar problem of surface-consistent deconvolution (Levin [4]), which solves for prediction-error filters. I have implemented a variation on both. I solve the same wavelet equalization problem as Henry and Mellman, but with a non-quadratic objective function more like surface-consistent deconvolution, and with the Gauss-Newton optimization of Harlan [2].
Let us index independent seismic lines or datasets by indices i and j. Certain pairs of lines cross at intersections indexed by k. Each k will index a triplet of indices {k, i(k), j(k)}. At each of these intersections k, a representative trace d_i^k(t) from line i will be cross-correlated with a trace d_j^k(t) from line j, each as a function of time t. Let us define the cross-correlation c_k(τ) as a function of lag time τ by

    c_k(τ) ≡ ∫ d_i^k(t) d_j^k(t + τ) dt                    (1)
           = [d_i^k(−t) * d_j^k(t)](τ),                    (2)
    where i ≡ i(k)                                         (3)
    and j ≡ j(k),                                          (4)

and where * denotes convolution over time.
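As a concrete illustration of this definition, the cross-correlation over a finite lag window can be computed directly with NumPy (the trace names, lengths, and lag window here are illustrative, not from the text):

```python
import numpy as np

# Illustrative traces d_i, d_j from lines i and j at intersection k,
# sampled on a common time axis (names and lengths are assumptions).
rng = np.random.default_rng(0)
d_i = rng.standard_normal(500)
d_j = rng.standard_normal(500)

# c_k(tau) = sum over t of d_i(t) * d_j(t + tau), for |tau| <= max_lag.
max_lag = 50
n = len(d_i)
c_k = np.array([np.dot(d_i[max(0, -tau):n - max(0, tau)],
                       d_j[max(0, tau):n - max(0, -tau)])
                for tau in range(-max_lag, max_lag + 1)])
# c_k[max_lag] is the zero-lag correlation.
```

In practice these correlations would be averaged over several nearby traces, as noted below.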
Typically cross-correlations will be averaged from multiple traces, but I will assume a single trace in the derivation of a solution.
Assume that a data trace d_i^k(t) is a convolution of a short wavelet w_i(t) that is specific to the line i with a reflectivity r_k(t) that is specific to the intersection point k. Also assume additive noise n_i^k(t) that is specific to the trace:

    d_i^k(t) = [w_i * r_k](t) + n_i^k(t)                   (5)
             ≡ ∫ w_i(u) r_k(t − u) du + n_i^k(t),          (6)

and similarly for the intersecting trace

    d_j^k(t) = [w_j * r_k](t) + n_j^k(t).                  (7)
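A minimal sketch of this convolutional model, with an illustrative wavelet, white reflectivity, and noise level (all values are assumptions for demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-ins (not from the text): a short wavelet w_i for
# line i, a white reflectivity r_k at intersection k, and noise n_ik.
w_i = np.array([0.2, 1.0, -0.6, 0.1])
r_k = rng.standard_normal(300)
n_ik = 0.05 * rng.standard_normal(300 + len(w_i) - 1)

# d_i^k(t) = [w_i * r_k](t) + n_i^k(t)
d_ik = np.convolve(w_i, r_k) + n_ik
```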
Assume noise is uncorrelated with anything else:
    [n_i^k(−t) * n_j^k(t)](τ) ≈ 0 for i ≠ j,               (8)
    [n_i^k(−t) * r_k(t)](τ) ≈ 0,                           (9)
    [n_i^k(−t) * w_j(t)](τ) ≈ 0,                           (10)
    [n_i^k(−t) * (w_j * r_k)(t)](τ) ≈ 0.                   (11)
A correlation of two traces can then be rewritten as
    c_k(τ) = [d_i^k(−t) * d_j^k(t)](τ)                                     (12)
           = [(w_i * r_k + n_i^k)(−t) * (w_j * r_k + n_j^k)(t)](τ)         (13)
           ≈ [(w_i * r_k)(−t) * (w_j * r_k)(t)](τ)                         (14)
           = [w_i(−t) * w_j(t) * r_k(−t) * r_k(t)](τ)                      (15)
           ≈ a_k [w_i(−t) * w_j(t)](τ),                                    (16)

if the reflectivity is approximately white, with an autocorrelation

    [r_k(−t) * r_k(t)](τ) ≈ a_k δ(τ).                                      (17)
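A quick numerical check of this factorization: for a long white reflectivity and negligible noise, the normalized cross-correlation of two modeled traces approaches the cross-correlation of their wavelets (the wavelets and lengths below are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary short wavelets for two lines, and a long white reflectivity.
w_i = np.array([0.3, 1.0, -0.5])
w_j = np.array([1.0, -0.8, 0.2, 0.1])
N = 200_000
r_k = rng.standard_normal(N)          # white: autocorrelation ~ N * delta

# Noise omitted for clarity; both traces share the same reflectivity.
d_i = np.convolve(w_i, r_k, mode="same")
d_j = np.convolve(w_j, r_k, mode="same")

def xcorr(a, b, max_lag):
    """c(tau) = sum over t of a(t) b(t + tau), for |tau| <= max_lag."""
    n = len(a)
    return np.array([np.dot(a[max(0, -tau):n - max(0, tau)],
                            b[max(0, tau):n - max(0, -tau)])
                     for tau in range(-max_lag, max_lag + 1)])

max_lag = 6
c_data = xcorr(d_i, d_j, max_lag) / N   # normalize by reflectivity strength

# Wavelet cross-correlation on the same lag axis, zero-padded at the ends.
full = np.correlate(w_j, w_i, mode="full")   # lags -(len(w_i)-1)..len(w_j)-1
c_wave = np.concatenate([np.zeros(max_lag - (len(w_i) - 1)), full,
                         np.zeros(max_lag - (len(w_j) - 1))])
# c_data should now approximate c_wave, lag by lag.
```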
I prefer to use longer wavelets to avoid a separate parameterization
of time shifts. I directly minimize errors
in cross-correlations modeled from the estimated wavelets.
The objective function finds the collection of wavelets {w_i} and scale factors {a_k} that best minimize

    F({w_i}, {a_k}) = Σ_k Σ_τ { c_k(τ) − a_k [w_{i(k)}(−t) * w_{j(k)}(t)](τ) }².   (18)
If reflectivities have reasonably consistent strengths, then one can safely set all scale factors to 1. This implementation makes this assumption.
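With all scale factors fixed at 1, the objective function can be sketched as a sum of squared residuals between observed and modeled cross-correlations (the data structures here are hypothetical, chosen only for illustration):

```python
import numpy as np

def objective(wavelets, correlations, pairs, max_lag):
    """Sum of squared residuals between observed cross-correlations and
    those modeled from the current wavelets, with all scale factors set
    to 1.  (Hypothetical data structures, for illustration only.)

    wavelets:     dict line -> wavelet array, all of one length L
    correlations: dict k -> observed c_k over lags -max_lag..max_lag
    pairs:        dict k -> (i, j), the lines crossing at intersection k
    Assumes max_lag <= L - 1.
    """
    total = 0.0
    for k, (i, j) in pairs.items():
        w_i, w_j = wavelets[i], wavelets[j]
        L = len(w_i)
        # Modeled c_k(tau) = sum over u of w_i(u) w_j(u + tau).
        full = np.correlate(w_j, w_i, mode="full")   # lags -(L-1)..(L-1)
        modeled = full[L - 1 - max_lag:L + max_lag]
        total += float(np.sum((correlations[k] - modeled) ** 2))
    return total
```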
This objective function is a non-quadratic function of the wavelets w_i, but it is very amenable to a Gauss-Newton algorithm that iteratively approximates the objective function as a quadratic.
Initialize all wavelets w_i to delta functions. Linearize the perturbation Δc_k of the modeled cross-correlations with respect to perturbed wavelets w_i + Δw_i and w_j + Δw_j:

    c_k(τ) + Δc_k(τ) ≈ a_k [(w_i + Δw_i)(−t) * (w_j + Δw_j)(t)](τ)                   (19)
                     ≈ a_k [w_i(−t)*w_j(t) + Δw_i(−t)*w_j(t) + w_i(−t)*Δw_j(t)](τ),  (20)
    so Δc_k(τ) ≈ a_k [Δw_i(−t) * w_j(t) + w_i(−t) * Δw_j(t)](τ).                     (21)

Each iteration solves this linearized system for the perturbations Δw_i in the least-squares sense, then updates the wavelets.
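The whole Gauss-Newton loop can be sketched as follows, under simplifying assumptions: scale factors set to 1, one intersection plus each line's autocorrelation, observed correlations synthesized from known wavelets, and a damped linearized solve with a crude backtracking step (all names and sizes are illustrative):

```python
import numpy as np

L = 9                                        # assumed wavelet length
pairs = {0: (0, 1), 1: (0, 0), 2: (1, 1)}    # one crossing, plus autocorrelations

def wav_xcorr(w_a, w_b):
    """(w_a correlated with w_b)(tau) = sum over u of w_a(u) w_b(u + tau),
    over lags tau = -(L-1) .. (L-1)."""
    return np.correlate(w_b, w_a, mode="full")

# Hypothetical "true" wavelets synthesize the observed correlations.
t = np.arange(L)
true = {0: np.exp(-0.5 * ((t - 3) / 1.5) ** 2),
        1: np.exp(-0.5 * ((t - 5) / 2.0) ** 2)}
obs = {k: wav_xcorr(true[i], true[j]) for k, (i, j) in pairs.items()}

def total_resid(w):
    return sum(float(np.sum((obs[k] - wav_xcorr(w[i], w[j])) ** 2))
               for k, (i, j) in pairs.items())

# Initialize all wavelets to delta functions.
w = {0: np.zeros(L), 1: np.zeros(L)}
w[0][0] = w[1][0] = 1.0
r0 = total_resid(w)

n_lines = 2
for _ in range(25):
    rows, rhs = [], []
    for k, (i, j) in pairs.items():
        resid = obs[k] - wav_xcorr(w[i], w[j])
        for m, tau in enumerate(range(-(L - 1), L)):
            # Linearized perturbation: a change dw_i(u) moves the modeled
            # correlation at lag tau by w_j(u+tau), and dw_j(u) by w_i(u-tau).
            row = np.zeros(n_lines * L)
            for u in range(L):
                if 0 <= u + tau < L:
                    row[i * L + u] += w[j][u + tau]
                if 0 <= u - tau < L:
                    row[j * L + u] += w[i][u - tau]
            rows.append(row)
            rhs.append(resid[m])
    A, b = np.array(rows), np.array(rhs)
    # Damped least squares for the stacked wavelet perturbations.
    dw = np.linalg.solve(A.T @ A + 1e-4 * np.eye(n_lines * L), A.T @ b)
    # Crude backtracking keeps the residual from increasing.
    step, base = 1.0, total_resid(w)
    while step > 1e-6:
        trial = {i: w[i] + step * dw[i * L:(i + 1) * L] for i in w}
        if total_resid(trial) <= base:
            w = trial
            break
        step *= 0.5
```

Cross-correlations alone leave an overall shift and sign ambiguity between the wavelets, so the loop is judged by how well the modeled correlations fit the observed ones, not by exact recovery of the hypothetical true wavelets.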
We can imagine using estimated wavelets directly for least-squares deconvolution of each line. This would however unnecessarily attempt to whiten the frequency spectra. Most likely, the data are already whitened as much as desired. We also do not want to compute inverse wavelets explicitly, which would amplify some otherwise very weak noisy frequencies.
To avoid unnecessary alteration of the spectral color, we want to reconvolve all deconvolved traces with the wavelet of a preferred reference line. This reference line may also be one that has been carefully tied to sonic logs or formation tops by synthetic seismograms. If more than one line is constrained, then I share the same wavelet for all of them during optimization. The estimated wavelet for these constrained lines is set aside as the reference wavelet.
For each of the unconstrained lines I solve for a transfer function mapping the estimated wavelet to the reference wavelet, again with least-squares damping. Finally the estimated transfer functions are convolved with the original data, combining the effects of deconvolution and reconvolution.
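The damped least-squares transfer function can be sketched as a small linear solve (the function name and damping value are assumptions):

```python
import numpy as np

def transfer_function(w_est, w_ref, nf, damp=1e-3):
    """Damped least-squares filter f of length nf with f * w_est ~ w_ref.

    Convolving a line's data with f combines deconvolution of its
    estimated wavelet with reconvolution by the reference wavelet.
    (Function name and damping value are assumptions for this sketch.)
    """
    n_out = len(w_est) + nf - 1
    # Convolution matrix: (C f)(t) = sum over u of f(u) w_est(t - u).
    C = np.zeros((n_out, nf))
    for u in range(nf):
        C[u:u + len(w_est), u] = w_est
    target = np.zeros(n_out)
    target[:len(w_ref)] = w_ref
    # Damped normal equations.
    return np.linalg.solve(C.T @ C + damp * np.eye(nf), C.T @ target)
```

The resulting filter would then be applied to every trace of the line, e.g. with np.convolve(f, trace).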
We can also use the estimated wavelets for each line to extract simpler corrections, such as a bulk time shift, instantaneous phase rotation, and a scale factor, as did Bishop and Nunns [1]. These limited corrections should be more stable and less sensitive to noise. Extracting phase corrections from estimated wavelets is also more robust than fitting phase shifts picked from the data. But for any frequencies shared by intersecting lines, we know we can do better and detect more arbitrary differences.
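One hedged way to reduce an estimated wavelet to such bulk corrections: take the time shift from the lag of the envelope peak, the phase rotation from the instantaneous phase there, and the scale from the wavelet's energy (this particular recipe is an assumption, not the text's prescription):

```python
import numpy as np

def bulk_corrections(w, dt=0.004):
    """Reduce an estimated wavelet to a bulk time shift, a constant
    phase rotation, and a scale factor.  (This recipe is an assumption;
    dt is an assumed sample interval in seconds.)"""
    n = len(w)
    # Analytic signal via the FFT (what scipy.signal.hilbert computes).
    spectrum = np.fft.fft(w)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h)
    envelope = np.abs(analytic)
    peak = int(np.argmax(envelope))
    time_shift = peak * dt                    # lag of the envelope peak
    phase = float(np.angle(analytic[peak]))   # instantaneous phase there
    scale = float(np.sqrt(np.sum(w ** 2)))    # square-root energy
    return time_shift, phase, scale
```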
It is possible that certain lines will contain frequencies not present in any intersecting lines. These isolated frequencies will be absent from cross-correlations. A damped least-squares fit to cross-correlations will omit these frequencies from the estimated wavelet. To retain these frequencies, we can include autocorrelations as well as cross-correlations in the optimizations. A final correction to a reference wavelet may remove the isolated frequencies again.