Because the attenuation given by Equation 1 is based on energy loss per oscillation, and for a given traveltime high frequencies undergo more oscillations than low frequencies, the higher frequencies are preferentially attenuated and the wavelet shape changes with time. A variety of techniques exist by which Q can be measured from field seismic data, all based on this phenomenon. For a source wavelet that is symmetric in time, the peak frequency is also the centroid frequency, and the instantaneous frequency from complex analysis may be used (LeBlanc et al., 1991; Panda et al., 1994). Other methods include rise-time analysis (Tarif and Bourbie, 1987), spectral ratios (Jacobson et al., 1981), inverse Q filtering (Gelius, 1987), and wavelet modeling in the time or frequency domain (Jannsen et al., 1985).
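Equation 1 is not reproduced here, but assuming it takes the common constant-Q form A(f, t) = A0 exp(-pi f t / Q), the preferential loss of high frequencies for a fixed traveltime can be sketched as follows (the function name and the example values of t and Q are illustrative, not from the original):

```python
import numpy as np

def attenuate(amp0, freq_hz, traveltime_s, Q):
    """Amplitude after propagation, assuming Equation 1 takes the common
    constant-Q form A = A0 * exp(-pi * f * t / Q)."""
    return amp0 * np.exp(-np.pi * freq_hz * traveltime_s / Q)

# For the same traveltime, a 200 Hz component is attenuated far more than
# a 20 Hz component, so the wavelet's peak frequency drifts lower with time.
t, Q = 1.0, 100.0            # hypothetical traveltime and quality factor
low = attenuate(1.0, 20.0, t, Q)
high = attenuate(1.0, 200.0, t, Q)
```

Because the decay exponent scales linearly with frequency, the spectrum is reshaped rather than uniformly scaled, which is what makes the frequency content a measurable proxy for Q.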
A significant obstacle in modeling Q by any of these methods is interference between layers that are separated by less than a pulse length in arrival time. This interference produces a composite signal with a frequency content lower than that of the individual contributing signals. The magnitude of the effect depends on the relative reflection strengths and layer thicknesses and is in general unpredictable. To mitigate this effect, we analyze data over a range of time and space (0.064 s and 20 traces). A larger range increases the stability and accuracy of the inversion at the expense of resolution. We assume that in the particular case of the very homogeneous sediments of the Blake Ridge, the sedimentary structure, and therefore the interference, is consistent from window to window. The interference will affect the absolute frequency content, but relative changes may still be accurately observed.
The method we chose for inverting for Q is spectral modeling, similar to that of Jannsen et al. (1985). With this method, we may quickly compute synthetic spectra and include all the data within a window without first having to align the phases or compute individual arrival times. Several traces are analyzed at once (Fig. 4A) by first computing a sliding spectrum; that is, the spectrum is computed about each time sample using a windowed, cosine-tapered portion of the trace. The window length, 0.064 s, is 4.8 times the period of the dominant frequency in the SCS source, and twice the length of the entire wavelet (Fig. 2). At each time sample the spectra from all twenty traces in the group are averaged and plotted horizontally, resulting in the time vs. frequency plot shown in Figure 4B.
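The sliding-spectrum computation might be sketched as below. This is an assumption-laden illustration, not the authors' code: a Hann taper stands in for the unspecified cosine taper, edge handling uses zero padding, and the array shapes and names are invented.

```python
import numpy as np

def sliding_spectrum(traces, win_len, dt):
    """Average amplitude spectrum about each time sample, over all traces.

    traces: array of shape (n_traces, n_samples), e.g. (20, n_samples)
    win_len: window length in samples (0.064 s / dt in the text)
    Returns (freqs, spec) where spec has shape (n_samples, n_freqs),
    suitable for a time vs. frequency plot like Figure 4B.
    """
    n_traces, n_samples = traces.shape
    half = win_len // 2
    taper = np.hanning(win_len)                 # cosine taper (Hann assumed)
    freqs = np.fft.rfftfreq(win_len, dt)
    spec = np.zeros((n_samples, freqs.size))
    padded = np.pad(traces, ((0, 0), (half, half)))  # zero-pad the edges
    for i in range(n_samples):
        seg = padded[:, i:i + win_len] * taper       # windowed, tapered portion
        # average the amplitude spectra of all traces in the group
        spec[i] = np.abs(np.fft.rfft(seg, axis=1)).mean(axis=0)
    return freqs, spec
```

Averaging amplitude spectra (rather than traces) avoids any need to align phases across the group, consistent with the stated advantage of the method.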
Based on a series of these plots across the SCS profile, we chose arrival times corresponding to selected events (Fig. 4C). To enhance the performance of the inversion, the events selected met several criteria. They were far enough apart in time to allow some attenuation to occur, while still maintaining as much vertical resolution as possible. They also corresponded to higher amplitude events to improve the signal-to-random-noise ratio. Finally, the chosen events were also important for the geologic interpretation. Layer C contains the relatively reflective upper sediments rich in CaCO3, layer H represents the remainder of the HSZ section from which hydrates were found or inferred, layer B contains the gassy sediments between the bottom-simulating reflector (BSR) and a possible relict BSR, and layer G spans the possibly gassy sediments below. Only the spectra observed at the selected times (Fig. 4C, black curves) were used in the inversion.
This inversion is actually an iterative forward modeling, using the technique of very fast simulated annealing (VFSA; Ingber, 1989), a well-established, robust technique for small nonlinear problems (Sen and Stoffa, 1991). The starting temperature and cooling rate were chosen such that convergence occurred after ~25,000 forward modelings. Although the problem may be linearized and solved much faster by working in the log spectral domain, this amplifies the noise at low points in the spectrum, resulting in a less robust comparison of synthetic and observed spectra. The forward modeling is quite fast on modern desktop computers (~15 s for 25,000 forward modelings), so speed was sacrificed and the nonlinear, but more robust, comparison was used.
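A minimal VFSA sketch, after Ingber (1989), is given below. The schedule constants t0 and c, the bound handling, and the iteration count used in the demonstration are assumptions for illustration, not the authors' settings; the temperature-dependent, heavy-tailed move generation is the defining feature of VFSA.

```python
import numpy as np

def vfsa(objective, lo, hi, t0=1.0, c=1.0, n_iter=25000, seed=0):
    """Very fast simulated annealing sketch (after Ingber, 1989).

    objective: function of a parameter vector, returning a scalar misfit
    lo, hi: per-parameter bounds; t0, c: cooling-schedule constants (assumed)
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi)
    fx = objective(x)
    best_x, best_f = x.copy(), fx
    ndim = x.size
    for k in range(1, n_iter + 1):
        T = t0 * np.exp(-c * k ** (1.0 / ndim))     # VFSA cooling schedule
        u = rng.uniform(size=ndim)
        # Ingber's generating density: heavy-tailed steps that narrow as T drops
        y = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2.0 * u - 1.0) - 1.0)
        x_new = np.clip(x + y * (hi - lo), lo, hi)
        f_new = objective(x_new)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if f_new < fx or rng.uniform() < np.exp(-(f_new - fx) / max(T, 1e-12)):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f
```

Even late in the schedule the generating density occasionally proposes large jumps, which is what makes VFSA robust for small nonlinear problems of this kind.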
The forward modeling consisted of a direct implementation of Equation 1 for each frequency as it propagates through each layer, using the spectrum at the water bottom as a reference. We assume that the selected times correspond to boundaries of homogeneous, laterally invariant layers, each of which has a constant Q. Input times are two-way for reflection seismograms. For each frequency sample in the reference spectrum, an amplitude decay for each constant-time-thickness layer was computed using Equation 1. The decay terms were applied to each layer successively down through the section, yielding a predicted amplitude spectrum at each of the chosen interface times. The predicted spectra were individually normalized and subtracted from the individually normalized observed spectra. The absolute value of this difference was used as the objective function, the quantity minimized by VFSA. The spectra were computed and subtracted in the linear, not log, domain, and because the amplitude spectrum is independent of phase, no phase computations were needed. The objective function was computed for all layer spectra simultaneously, not sequentially (Fig. 4C, gray curves). This yields a more realistic overall solution but may also result in one layer being modeled with a slightly higher Q (high frequencies higher than observed) if a lower layer also has high-amplitude high frequencies. The Q profile that results in the minimum difference between predicted and observed spectra is taken to be the best estimate (Fig. 4D, bold).
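The layer-by-layer decay and the simultaneous objective function might be sketched as follows, again assuming the constant-Q decay exp(-pi f dt / Q) for Equation 1; function names and the handling of the two-way layer times are illustrative.

```python
import numpy as np

def q_forward(ref_spec, freqs, interface_times, layer_Q):
    """Propagate a reference amplitude spectrum down through constant-Q layers.

    ref_spec: amplitude spectrum at the water bottom (the reference)
    interface_times: two-way times of the chosen interfaces, top to bottom
    layer_Q: one Q per layer (len(interface_times) - 1 values)
    Returns one individually normalized predicted spectrum per interface.
    """
    spec = np.asarray(ref_spec, float).copy()
    predicted = []
    for dt, Q in zip(np.diff(interface_times), layer_Q):
        spec = spec * np.exp(-np.pi * freqs * dt / Q)  # decay across this layer
        predicted.append(spec / spec.max())            # individually normalized
    return predicted

def objective(predicted, observed):
    """Sum of absolute differences between normalized predicted and observed
    spectra, over all layers simultaneously, in the linear (not log) domain."""
    return sum(np.abs(p - o / o.max()).sum() for p, o in zip(predicted, observed))
```

Because only amplitude spectra are compared, phase never enters the computation, matching the stated advantage of working with spectra rather than waveforms.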
One advantage of VFSA is that the large number of models and computed objective function values can be used to estimate the uncertainty in the final model (Sen and Stoffa, 1991). When the minimum objective function lies at the bottom of a steep-sided trough, a small change in the model corresponds to a large misfit, so the uncertainties are small. Conversely, when the minimum lies at the bottom of a broad basin, a small change in the model produces a much smaller change in misfit, so the uncertainties are large. We define the uncertainty as the width of the objective function at a height 1% of the way from the minimum to the maximum objective function value. For the Q profile in Figure 4D, this results in an uncertainty range in Q on the order of several hundred for each layer. The uncertainty can be significantly reduced if we sacrifice vertical resolution and invert for a single value of Q (bold, dashed line in Fig. 4D) using the same suite of observed spectra. The same 1% criterion here results in a much smaller uncertainty, as shown by the relatively narrow "error" bars on the inversion (fine, dashed lines in Fig. 4D).
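The 1% width criterion can be illustrated in one dimension as below. This is a sketch: the actual criterion is evaluated over the suite of models and misfits sampled during the VFSA run, and the synthetic trough and basin shapes here are invented for demonstration.

```python
import numpy as np

def uncertainty_width(models, misfits, frac=0.01):
    """Width of the sampled objective function at a height `frac` of the way
    from the minimum to the maximum misfit: the spread of all sampled models
    whose misfit falls below that threshold (1-D illustration)."""
    models, misfits = np.asarray(models, float), np.asarray(misfits, float)
    threshold = misfits.min() + frac * (misfits.max() - misfits.min())
    accepted = models[misfits <= threshold]
    return accepted.max() - accepted.min()

# A steep-sided trough (quadratic) gives a narrow width; a broad, flat-bottomed
# basin (quartic) gives a wide one, i.e. a larger uncertainty.
Q_trial = np.linspace(0.0, 100.0, 1001)
w_steep = uncertainty_width(Q_trial, (Q_trial - 50.0) ** 2)
w_broad = uncertainty_width(Q_trial, (Q_trial - 50.0) ** 4)
```

The same machinery explains why inverting for a single Q shrinks the uncertainty: with fewer free parameters, the misfit rises more steeply away from the minimum, so the 1% width narrows.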