A Look Into Seismic Data Processing
During almost 36 years in the oil and gas industry, I've witnessed big leaps in technology. We started writing reports on typewriters in the late 1980s, and two years later we were experimenting with Freelance and Storyboard (predecessors of Excel and PowerPoint) on the first PC we had in our office. The only two departments that had computers at that time were seismic data processing and new technologies. However, those computers ran very specific software on a very particular operating system. In the case of seismic data processing, the operating system was the Virtual Memory System (VMS), running the DISCO software.
In my last two articles, I talked about hydrocarbon exploration and the role of geophysics in it. This time, I would like to talk about seismic data processing, my professional area of expertise, and the key role it plays in hydrocarbon exploration. It is fascinating how things work in nature and how we can use that knowledge to investigate what we cannot see; in this case, the Earth's subsurface. To perform seismic data processing, we need to understand three main factors:
First, the conceptual internal structure of the Earth and the surface processes that shape the landscape (tectonics, sedimentology); rock types, their composition, and how they are created and transformed by temperature and pressure according to the rock cycle; and the petroleum system and its components, since a rock's physical properties affect how elastic waves propagate through it.
Second, the theory of elastic wave propagation through the Earth (Hooke's law, Snell's law, and the principles of Fermat and Huygens, among others) and the behavior of waves traveling through the different layers of the Earth. We must be able to understand the structure and composition of the subsurface and have a model of how waves travel down and come back up when they reach an interface whose reflection coefficient is strong enough to produce a reflection. That reflection provides information from which we can estimate different attributes and, through them, several physical properties of the rocks. We also need to understand the common depth point methodology, which is the best technique for improving the signal-to-noise ratio.
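To make the reflection idea concrete, here is a minimal sketch (not taken from any particular processing package) of the normal-incidence reflection coefficient, computed from the acoustic impedance contrast between two layers; the velocity and density values used are purely illustrative:

```python
import numpy as np

def normal_incidence_reflection(v1, rho1, v2, rho2):
    """Reflection coefficient for a wave hitting an interface at normal
    incidence, from the acoustic impedance contrast of the two layers."""
    z1 = rho1 * v1  # impedance of the upper layer
    z2 = rho2 * v2  # impedance of the lower layer
    return (z2 - z1) / (z2 + z1)

# Illustrative values only: a slower, less dense layer below a faster one
r = normal_incidence_reflection(v1=2400.0, rho1=2.35, v2=2100.0, rho2=2.05)
print(f"Reflection coefficient: {r:.3f}")
```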
Third, digital signal analysis and data organization: how seismic data processing software works and how to build processing sequences (successions of processing programs planned to build subsurface images). Here, we need to understand seismic data acquisition techniques, from the target we are trying to image and the kind of acquisition (land, marine, transition zone, 2D, 3D, etc.) to the recording format and the size of the data in bytes, so that we can plan the computational resources to be used during the life of the project.
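As a simple illustration of the planning side, the sketch below estimates the raw trace-data volume of a hypothetical survey from its acquisition parameters; the shot count, channel count and 32-bit sample size are assumptions for the example, not values from any real project:

```python
def survey_size_bytes(n_shots, channels_per_shot, record_length_s,
                      sample_interval_ms, bytes_per_sample=4):
    """Rough size of the raw trace samples for a survey (headers excluded)."""
    samples_per_trace = int(record_length_s * 1000 / sample_interval_ms) + 1
    n_traces = n_shots * channels_per_shot
    return n_traces * samples_per_trace * bytes_per_sample

# Hypothetical 3D survey: 50,000 shots, 5,000 live channels per shot,
# 6 s records sampled every 2 ms, 4-byte (32-bit) samples.
size_tb = survey_size_bytes(50_000, 5_000, 6.0, 2.0) / 1e12
print(f"Raw trace data: ~{size_tb:.1f} TB")
```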
One Definition of Seismic Data Processing
In my school days, data processing was defined as the part of seismic exploration responsible for improving the signal-to-noise ratio of seismic information and presenting it appropriately for interpretation. This definition is still valid. However, we have seen a great deal of improvement in seismic data acquisition and processing.
A typical, basic processing sequence is composed of several stages, with each step having a particular goal. It starts with loading the data into the processing system and converting it from the field recording format (normally SEG-D) to the processing system's format. We need to ensure quality control of the data at every step, verifying the improvement each step brings. Field data is generally recorded in the shot domain, which means that all the information belonging to a shot is recorded as a single composite record. The first stage is generally performed in that domain.
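Conceptually, once the data is in the processing system's format, each shot record can be thought of as a two-dimensional array of traces by time samples, plus its headers. The sketch below is a hypothetical, simplified representation of that organization, not any vendor's internal format:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class ShotGather:
    """One field record: all traces belonging to a single shot."""
    shot_id: int
    sample_interval_ms: float
    traces: np.ndarray                            # shape: (n_channels, n_samples)
    headers: dict = field(default_factory=dict)   # e.g. source/receiver numbers

# Toy record: 240 channels, 3 s at 2 ms (1501 samples), filled with zeros
gather = ShotGather(shot_id=101, sample_interval_ms=2.0,
                    traces=np.zeros((240, 1501), dtype=np.float32))
print(gather.traces.shape)
```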
The first stage, called preprocessing or data conditioning, is aimed at compensating for all the effects of field data acquisition, starting with the geometry definition. This is a very important process in which we load the positioning information, generally in SPS or UKOOA formats. It gives us the coordinates (x, y and z) of each shot point and each receiver and all the information needed to calculate datum corrections. The stage then compensates for amplitude decay, attenuates noise, addresses bandwidth reduction, attenuates multiple energy, and reduces the data to the reference datum (for onshore data).
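As one concrete example of these corrections, the amplitude decay caused by geometric (spherical) spreading is often compensated by scaling each sample by a power of its travel time. The following is a minimal sketch of such a time-power gain; the exponent and the traces-by-samples array layout are assumptions made for illustration, not a specific production algorithm:

```python
import numpy as np

def spreading_gain(gather, sample_interval_ms, power=2.0):
    """Scale each sample by time**power to compensate geometric spreading.

    gather: 2D array of traces, shape (n_traces, n_samples).
    """
    n_samples = gather.shape[1]
    t = np.arange(1, n_samples + 1) * sample_interval_ms / 1000.0  # seconds
    return gather * t**power  # gain broadcast along the time axis

# Usage on a toy gather of random noise
gained = spreading_gain(np.random.randn(48, 1501).astype(np.float32), 2.0)
```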
The second stage involves velocity estimation and residual noise and residual multiple attenuation. In this stage, the data is sorted into what used to be called the Common Depth Point (CDP) domain and is now called the Common Midpoint (CMP) domain. All of this is in preparation for seismic migration. CMP is the technique that ensures we sample the same point in the subsurface several times (redundancy), with different source-receiver configurations. In the CMP sort, we select from each shot the receivers belonging to the same CMP, according to the acquisition geometry defined in the first stage. This allows us to apply the normal moveout correction (which flattens the hyperbolic travel-time trajectories) and to estimate a velocity field, stacking the CMP gathers to QC how well the velocities perform.
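The normal moveout correction follows the hyperbolic travel-time relation t(x) = sqrt(t0^2 + x^2/v^2): the amplitude recorded at offset x and time t(x) is mapped back to its zero-offset time t0 before stacking. The sketch below applies this to a single trace for a single trial velocity; the function name and the use of simple linear interpolation are my own simplifications for illustration:

```python
import numpy as np

def nmo_correct_trace(trace, offset, velocity, dt):
    """Map a trace at a given offset back to zero-offset time (NMO).

    trace: amplitudes sampled every dt seconds; offset in m; velocity in m/s.
    """
    n = len(trace)
    t0 = np.arange(n) * dt                        # desired zero-offset times
    tx = np.sqrt(t0**2 + (offset / velocity)**2)  # times where they were recorded
    # Linear interpolation: read the amplitude recorded at tx for each t0
    return np.interp(tx, t0, trace, left=0.0, right=0.0)
```

Applying the same correction to every trace in a CMP gather and summing the corrected traces produces the stack used to QC the velocity field.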
The last stage covers velocity model building and the repositioning of events to their true positions (seismic migration). This part of seismic data processing has evolved considerably over the years. Velocity model building (VMB) is a very important process because the accuracy of the velocity estimation determines how good an image of the subsurface we can build. In the beginning, velocity model building was done with semblance analysis or constant velocity stacks, just to stack the data, as migration was done by hand. Now, VMB is done with full-waveform inversion and migration is performed prestack in the depth domain.
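For readers curious about the classical semblance analysis mentioned above: semblance measures how coherently the traces of an NMO-corrected CMP gather align for a trial velocity, and the stacking velocity is picked where semblance peaks. Below is a minimal sketch of that measure, assuming the corrected gather is already available as a traces-by-samples array; scanning many trial velocities builds the panel used for picking:

```python
import numpy as np

def semblance(corrected_gather, window=11):
    """Semblance of an NMO-corrected CMP gather, per time sample.

    corrected_gather: (n_traces, n_samples) array corrected with a trial
    velocity. Values near 1 mean the traces align (good velocity).
    """
    n_traces = corrected_gather.shape[0]
    num = corrected_gather.sum(axis=0) ** 2        # (sum of amplitudes)^2
    den = (corrected_gather ** 2).sum(axis=0)      # sum of squared amplitudes
    kernel = np.ones(window)
    # Smooth numerator and denominator over a short time window
    num_s = np.convolve(num, kernel, mode="same")
    den_s = np.convolve(den, kernel, mode="same") * n_traces
    out = np.zeros_like(num_s)
    np.divide(num_s, den_s, out=out, where=den_s > 0)
    return out
```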
Computational evolution has allowed the processing of big data, with better programs and many algorithms to obtain better subsurface images: from 24 channels to thousands of channels per shot, and from 2D images migrated poststack in real time to big 3D volumes migrated in depth and corrected for anisotropy, which help interpreters find prospects in more complex areas.
By Gerardo Clemente | Independent Contributor
Wed, 12/07/2022 - 12:00