An algorithm for 3D residual static corrections
William S. Harlan (based on ideas from Greg Lazear)
1996
This algorithm is designed to obtain enough information
in one pass through 3D surface seismic data to estimate reflection
static corrections. During this pass,
a running buffer retains traces whose sources and receivers are
close to the next expected trace.
Each new trace is cross-correlated with traces in the buffer, and
the cross-correlations are saved in a disk file. The necessary
static corrections are optimized iteratively from the
saved correlation functions, and corrections are applied
in a second and final pass through the data. This note
uses a notation which might clarify the details.
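As a rough illustration of that first pass, here is a sketch in Python (NumPy assumed). Everything named here — `read_traces`, `is_nearby`, the buffer size, and the output format — is a placeholder for whatever a real implementation would use; only the structure (a running buffer, one correlation per nearby pair of traces, correlations written to disk) follows the note.

```python
from collections import deque
import numpy as np

def first_pass(read_traces, is_nearby, corr_file, max_lag, buffer_size=200):
    """One pass through the data: correlate each new trace against a running
    buffer of recent traces whose sources and receivers are close to it,
    and append the correlations to a disk file for later optimization."""
    buffer = deque(maxlen=buffer_size)           # recently read traces
    with open(corr_file, "wb") as out:
        for header, trace in read_traces():      # header carries source/receiver coordinates
            for prev_header, prev_trace in buffer:
                if not is_nearby(header, prev_header):
                    continue                     # distance limits, as in (3) below
                full = np.correlate(trace, prev_trace, mode="full")
                zero = len(prev_trace) - 1       # index of zero lag in "full"
                corr = full[zero - max_lag: zero + max_lag + 1]
                np.save(out, corr)               # keys identifying the pair would be saved too
            buffer.append((header, trace))
```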
Assume that surface seismic data
amplitudes $d_i^j(t)$
were recorded over time $t$ for many
pairs of 3D ground positions $\vec{x}_i$ for sources and
$\vec{x}^j$ for receivers.
Sources are indexed with subscripts, and receivers with superscripts.
The time axis has been stretched
according to some “normal moveout” which is
a function of the source-receiver offset $|\vec{x}_i - \vec{x}^j|$
and the midpoint $\frac{1}{2}(\vec{x}_i + \vec{x}^j)$. After such
adjustments, nearby traces are assumed to resemble each other
within a time window $t_1 \le t \le t_2$, except for
time-invariant, surface-consistent source shifts $a_i$
and receiver shifts $b^j$:

$$d_i^j(t - a_i - b^j) \approx d_k^l(t - a_k - b^l) \quad \text{for } t_1 \le t \le t_2 . \tag{1}$$
The approximation is expected to worsen as the collection of
traces includes sources and receivers farther apart.
The time window should be as deep as possible and should include
strong reflections.
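To make the model (1) concrete, here is a minimal synthetic sketch (Python with NumPy; the trace length, sample interval, and shift values are arbitrary assumptions, not part of the note): two traces built from the same reflection series differ only by their source-plus-receiver shifts, so removing each trace's own shift makes them match except near the edges.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, dt = 500, 0.004                    # samples and sample interval (s); arbitrary
w = rng.normal(size=nt)                # stand-in for a moveout-corrected reflection series

def advance(trace, dt, shift):
    """Return trace evaluated at (t + shift); positive shift moves events earlier."""
    t = np.arange(len(trace)) * dt
    return np.interp(t + shift, t, trace, left=0.0, right=0.0)

a_i, b_j = 0.012, -0.008               # hypothetical source and receiver shifts (s)
a_k, b_l = -0.004, 0.016

d_ij = advance(w, dt, a_i + b_j)       # d_i^j(t) ~ w(t + a_i + b_j), as in (1)
d_kl = advance(w, dt, a_k + b_l)

# Evaluating each trace at (t - a - b) removes its shift and recovers w:
resid = advance(d_ij, dt, -(a_i + b_j)) - advance(d_kl, dt, -(a_k + b_l))
print(np.max(np.abs(resid[10:-10])))   # tiny in the interior; edges suffer from truncation
```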
Define a cross-correlation $f_{ik}^{jl}(\tau)$
for specific pairs
of nearby sources and receivers:

$$f_{ik}^{jl}(\tau) \equiv \int_{t_1}^{t_2} d_i^j(t)\, d_k^l(t + \tau)\, dt . \tag{2}$$
These crosscorrelations are calculated only between traces whose
sources, receivers, and/or midpoints are within a
limited distance of each other:
$$|\vec{x}_i - \vec{x}_k| \le \Delta_s , \qquad |\vec{x}^j - \vec{x}^l| \le \Delta_r , \qquad \big|\tfrac{1}{2}(\vec{x}_i + \vec{x}^j) - \tfrac{1}{2}(\vec{x}_k + \vec{x}^l)\big| \le \Delta_m . \tag{3}$$
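A small sketch of (2) and (3) in Python with NumPy follows. The function names, the sample-index window `[i1, i2)`, and the use of one distance limit per criterion are assumptions for illustration, not part of the note.

```python
import numpy as np

def crosscorrelation(d1, d2, i1, i2, max_lag):
    """Windowed crosscorrelation of eq. (2): f(lag) = sum_t d1(t) * d2(t + lag),
    with t restricted to the sample window [i1, i2)."""
    assert i1 - max_lag >= 0 and i2 + max_lag <= len(d2)   # keep slices in bounds
    lags = np.arange(-max_lag, max_lag + 1)
    f = np.array([np.dot(d1[i1:i2], d2[i1 + lag:i2 + lag]) for lag in lags])
    return lags, f

def nearby(src1, rcv1, src2, rcv2, d_src, d_rcv, d_mid):
    """Distance tests of eq. (3) on sources, receivers, and midpoints."""
    mid1, mid2 = (src1 + rcv1) / 2.0, (src2 + rcv2) / 2.0
    return (np.linalg.norm(src1 - src2) <= d_src
            and np.linalg.norm(rcv1 - rcv2) <= d_rcv
            and np.linalg.norm(mid1 - mid2) <= d_mid)
```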
To the extent that our data model (1) is correct, then
$$\arg\max_{\tau}\, f_{ik}^{jl}(\tau) \approx (a_i - a_k) + (b^j - b^l) . \tag{4}$$
The following algorithm suggests itself. Pick lags $\hat\tau_{ik}^{jl}$
that maximize the correlations, then solve a least-squares
problem:

$$\min_{\{a_i\},\,\{b^j\}} \sum_{i,j,k,l} \Big[ \hat\tau_{ik}^{jl} - (a_i - a_k) - (b^j - b^l) \Big]^2 , \quad \text{where } \hat\tau_{ik}^{jl} \equiv \arg\max_{\tau}\, f_{ik}^{jl}(\tau) . \tag{5}$$
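A compact sketch of the least-squares problem (5), assuming integer source and receiver indices and a list of picked lags; `numpy.linalg.lstsq` returns the minimum-norm solution, which settles the constant that the shift differences leave undetermined.

```python
import numpy as np

def solve_statics(picks, nsrc, nrcv):
    """Solve the least-squares problem (5).

    picks: iterable of (i, k, j, l, lag), where lag is the picked peak of f_ik^jl.
    Returns source shifts a (length nsrc) and receiver shifts b (length nrcv).
    """
    picks = list(picks)
    A = np.zeros((len(picks), nsrc + nrcv))
    rhs = np.zeros(len(picks))
    for row, (i, k, j, l, lag) in enumerate(picks):
        A[row, i] += 1.0            # + a_i
        A[row, k] -= 1.0            # - a_k
        A[row, nsrc + j] += 1.0     # + b^j
        A[row, nsrc + l] -= 1.0     # - b^l
        rhs[row] = lag
    # lstsq returns the minimum-norm solution; only differences of shifts are
    # constrained, so an overall constant in each set remains undetermined.
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x[:nsrc], x[nsrc:]
```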
The above algorithm requires that a great many noisy cross-correlations
be picked for possibly unreliable peaks. Instead, we can prepare
and model cross-correlations of different partial sums of the data.
Some information will be lost, but the sums will improve the
signal-to-noise ratio of the correlations and will improve
the reliability of the picked lags at correlation peaks.
Several partial sums are possible. The most common is probably
a crosscorrelation of each trace with a stack of those traces
that share a particular midpoint, which by linearity is a sum of
pairwise crosscorrelations. (Remember that the nearby-midpoint
constraint in (3) applies to all crosscorrelations.)

$$g_i^j(\tau) \equiv \sum_{k,l} f_{ik}^{jl}(\tau) , \quad \text{summed over nearby traces } d_k^l \text{ that share the midpoint of } d_i^j . \tag{6}$$
In addition we could calculate the following sums of
all crosscorrelations sharing a trace
with a particular source or receiver:

$$g_i(\tau) \equiv \sum_{j,k,l} f_{ik}^{jl}(\tau) \qquad \text{and} \qquad g^j(\tau) \equiv \sum_{i,k,l} f_{ik}^{jl}(\tau) , \tag{7}$$
or also calculate the sums of all crosscorrelations sharing traces
with a particular pair of sources or receivers:

$$g_{ik}(\tau) \equiv \sum_{j,l} f_{ik}^{jl}(\tau) \qquad \text{and} \qquad g^{jl}(\tau) \equiv \sum_{i,k} f_{ik}^{jl}(\tau) . \tag{8}$$
Again we limit sums to nearby traces as in (3).
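One way to build these sums from the correlations saved on disk is a set of accumulators keyed by trace, source, receiver, and pair. A sketch follows (Python with NumPy); the names and in-memory dictionaries are assumptions, and all correlations are assumed to share one lag axis symmetric about zero, so a reversed copy serves for the swapped pair.

```python
from collections import defaultdict
import numpy as np

def accumulate_sums(pair_correlations, nlags):
    """Build the partial sums (6)-(8) from saved pair correlations.

    pair_correlations yields (i, k, j, l, f), where f is f_ik^jl sampled on a
    lag axis symmetric about zero, so that f_ki^lj(tau) = f_ik^jl(-tau) = f[::-1].
    """
    zeros = lambda: np.zeros(nlags)
    g_trace = defaultdict(zeros)      # eq. (6), keyed by trace (i, j)
    g_src = defaultdict(zeros)        # eq. (7), keyed by source i
    g_rcv = defaultdict(zeros)        # eq. (7), keyed by receiver j
    g_srcpair = defaultdict(zeros)    # eq. (8), keyed by source pair (i, k)
    g_rcvpair = defaultdict(zeros)    # eq. (8), keyed by receiver pair (j, l)
    for i, k, j, l, f in pair_correlations:
        rev = f[::-1]                 # the same correlation seen from the other trace
        g_trace[(i, j)] += f
        g_trace[(k, l)] += rev
        g_src[i] += f
        g_src[k] += rev
        g_rcv[j] += f
        g_rcv[l] += rev
        g_srcpair[(i, k)] += f
        g_rcvpair[(j, l)] += f
    return g_trace, g_src, g_rcv, g_srcpair, g_rcvpair
```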
We can examine the lags over a limited range
$-\tau_{\max} \le \tau \le \tau_{\max}$ and
find the peaks of any of these summed crosscorrelations:

$$\hat\tau_i^j \equiv \arg\max_{|\tau| \le \tau_{\max}} g_i^j(\tau) , \qquad \hat\tau_i \equiv \arg\max_{|\tau| \le \tau_{\max}} g_i(\tau) , \qquad \hat\tau^{jl} \equiv \arg\max_{|\tau| \le \tau_{\max}} g^{jl}(\tau) , \quad \text{and so on.} \tag{9}$$
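A small peak picker for (9) might look like the following (Python with NumPy); the parabolic refinement of the integer-lag maximum is a common convenience, not something the note prescribes.

```python
import numpy as np

def pick_peak(lags, g):
    """Pick the lag maximizing a summed crosscorrelation, eq. (9), refined to a
    fraction of a lag step by fitting a parabola through the three samples
    around the discrete maximum."""
    n = int(np.argmax(g))
    if 0 < n < len(g) - 1:
        denom = g[n - 1] - 2.0 * g[n] + g[n + 1]
        if denom < 0.0:                               # strict local maximum
            frac = 0.5 * (g[n - 1] - g[n + 1]) / denom
            return float(lags[n]) + frac * float(lags[1] - lags[0])
    return float(lags[n])
```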
The peaks of the crosscorrelations should ideally coincide
with sums of surface-consistent shifts.
To fit any or all of these picked peaks,
find source shifts $a_i$
and receiver shifts $b^j$ whose sums fit
the picked lags in a least-squares sense:

$$\min_{\{a_i\},\,\{b^j\}} \Big[ \sum_{i,j} \big( \hat\tau_i^j - a_i - b^j \big)^2 + \sum_i \big( \hat\tau_i - a_i \big)^2 + \sum_j \big( \hat\tau^j - b^j \big)^2 + \sum_{i,k} \big( \hat\tau_{ik} - a_i + a_k \big)^2 + \sum_{j,l} \big( \hat\tau^{jl} - b^j + b^l \big)^2 \Big] . \tag{10}$$
The simplest algorithm would use only the first of these sums.
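For that simplest variant, one practical approach (an assumption on my part, not a prescription of the note) is coordinate descent on the first sum in (10): alternately average the residual lags over each source and over each receiver.

```python
import numpy as np

def decompose(picks, nsrc, nrcv, nsweeps=10):
    """Fit picked lags tau_hat ~ a_i + b_j by alternating averages,
    i.e. coordinate descent on the first sum of eq. (10).

    picks: iterable of (i, j, tau_hat) from the midpoint-stack correlations (6).
    """
    picks = list(picks)
    a = np.zeros(nsrc)
    b = np.zeros(nrcv)
    for _ in range(nsweeps):
        num, den = np.zeros(nsrc), np.zeros(nsrc)
        for i, j, tau in picks:
            num[i] += tau - b[j]       # residual attributed to source i
            den[i] += 1.0
        a = np.where(den > 0, num / np.maximum(den, 1.0), a)
        num, den = np.zeros(nrcv), np.zeros(nrcv)
        for i, j, tau in picks:
            num[j] += tau - a[i]       # residual attributed to receiver j
            den[j] += 1.0
        b = np.where(den > 0, num / np.maximum(den, 1.0), b)
    return a, b
```

The decomposition only determines the sums $a_i + b^j$; a constant can move freely between the two sets of shifts, so one convention is to remove the mean from one set and add it to the other.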
We could also maximize summed crosscorrelations more directly,
but the non-linearity is more inconvenient:
$$\max_{\{a_i\},\,\{b^j\}} \sum_{i,j} g_i^j\big( a_i + b^j \big) . \tag{11}$$
The original data are corrected implicitly with these shifts:
$$\tilde d_i^j(t) \equiv d_i^j(t - a_i - b^j) . \tag{12}$$
We could begin the algorithm again with these corrected
data, improving the ability to pick correlation peaks. In practice
we need not correct the data explicitly. We only
adjust the original crosscorrelations:
$$\tilde f_{ik}^{jl}(\tau) \equiv f_{ik}^{jl}\big( \tau + (a_i - a_k) + (b^j - b^l) \big) . \tag{13}$$
We can then recalculate the sums (6),
(7), and (8) and
repeat the rest of the algorithm.
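In sampled form, the adjustment (13) is just a shift of each stored correlation along its lag axis by the nearest-sample value of the current shift estimates; a sketch (Python with NumPy, names assumed):

```python
import numpy as np

def adjust_correlation(f, shift_samples):
    """Shift a stored crosscorrelation along its lag axis as in eq. (13):
    returns f_tilde with f_tilde[n] = f[n + shift_samples], zero-filled at the
    ends (assumes |shift_samples| < len(f)), so lags already explained by the
    current shifts move toward zero."""
    out = np.zeros_like(f)
    if shift_samples >= 0:
        out[:len(f) - shift_samples] = f[shift_samples:]
    else:
        out[-shift_samples:] = f[:len(f) + shift_samples]
    return out
```

After every stored pair correlation has been shifted by its $(a_i - a_k) + (b^j - b^l)$, the sums (6), (7), and (8) are rebuilt, new residual lags are picked, and the resulting corrections are added to the running totals.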
After convergence, the data are explicitly corrected with the
accumulated shifts.
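The final correction of (12) can honor fractional-sample shifts; one way (an implementation choice, not specified in the note) is a phase shift in the frequency domain:

```python
import numpy as np

def apply_static(trace, dt, shift):
    """Apply the correction of eq. (12): return d(t - shift), where shift = a_i + b_j.
    The FFT phase shift honors fractions of a sample but wraps circularly,
    so traces should be padded or tapered in practice."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    spectrum = np.fft.rfft(trace)
    return np.fft.irfft(spectrum * np.exp(-2j * np.pi * freqs * shift), n)
```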