Digital processing of satellite images requires a high-performance digital workstation with specialized hardware and software. Many processing systems and professional packages for image processing and analysis are available. The digital processing modules of these systems provide four categories of functions: 1. preliminary image processing; 2. image enhancement; 3. image transformation; 4. image classification and analysis.

1. Preliminary image processing

Remote sensing programs used to study the environment require validation of reflectance data against physical field measurements, correction of sensor errors and atmospheric effects, and rectification of the image to a reference frame. Preliminary processing functions precede image analysis and information extraction and fall into radiometric calibration, atmospheric correction and geometric correction. Radiometric and atmospheric corrections fix data affected by faulty sensor operation or by atmospheric influences, so that what is analyzed is truly the radiation reflected by objects and recorded by the sensor. Geometric corrections remove geometric distortions due to variations in Earth-sensor geometry and reference the data to a coordinate system (e.g., latitude, longitude) on the Earth's surface.
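As a minimal sketch of the radiometric calibration step, raw digital numbers (DN) are commonly converted to at-sensor radiance with a per-band linear model, L = gain × DN + offset. The gain and offset values below are invented for illustration; real values come from the sensor metadata supplied by the distribution centre.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Apply a per-band linear radiometric calibration: L = gain * DN + offset."""
    return gain * dn.astype(np.float64) + offset

# Illustrative 2x2 patch of raw digital numbers (8-bit quantization)
dn = np.array([[10, 50], [120, 255]], dtype=np.uint8)
# Hypothetical calibration coefficients for one band
radiance = dn_to_radiance(dn, gain=0.1, offset=1.0)
```

The same linear form is applied band by band, each band having its own coefficients.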

Images recorded over the ocean provide an instantaneous or short-lived view of the phenomena occurring at its surface, so previously validated data are absolutely necessary, as there is no means of verifying the results afterwards. Three levels of correction are indispensable: radiometric calibration, atmospheric correction and geometric rectification against a network of landmarks.

Acquisition constraints, related to the power available on board the satellite and to the environment in which remote sensors operate, cause the progressive degradation of instrument responses. Some satellites carry on-board calibration systems that measure this drift by targeting a reference: a special lamp for detectors operating in the visible spectrum, or a black body of known temperature for thermal measurements. This calibration is performed by the centre that delivers the satellite records, based on construction and sensor parameter data.

The heterogeneous composition of the atmosphere and variations in air mass disturb the electromagnetic radiation on its path between the radiation source and the Earth on the one hand, and between the Earth and the satellite on the other. Absorption, reflection or scattering of the signal matters to different degrees depending on the band considered. These problems can be addressed in three ways:

a. Performing field measurements during the recordings. The method consists in establishing a model that relates the data recorded on board the satellite to the field measurements. The quality of the corrections depends on the number and location of the measurements; this implies a measurement campaign for each image, since the corrections are valid for a single record.

b. Near-infrared band analysis. This method corrects images captured by multi-band sensors in the visible range and is used in the absence of meteorological or field data. It rests on the principle that variations observed in the near-infrared band are due only to atmospheric disturbances. It allows pixel-by-pixel correction, and thus a fine interpretation of the data, but requires extremely dark pixels in the image (areas of calm, unpolluted water, or deep shadows).

c. General methods for correcting atmospheric influence. These methods are offered for routine processing by the bodies that distribute remote sensing records. The algorithms transform the radiance of the original image (the energy reflected by both the atmosphere and the Earth's surface) into a reflectance image (only the energy reflected by the Earth's surface). To make this transformation tractable, a catalogue of atmospheric correction functions is used, pre-computed for different standard atmospheres, aerosol types, zenith angles and altitudes.

Geometric corrections are required because the raw image resulting from satellite recording is affected by deformations. Some are systematic, i.e. related to the recording system; others are accidental, due to random causes. Systematic errors can be removed by automatic correction programs, applied at the data acquisition and distribution centres. This category includes: deformations due to the Earth's rotation, panoramic distortion, the obliquity of the scan lines with respect to the nadir line, variations in the speed of the oscillating scan mirror, and the north-south inclination of the orbit. Accidental errors are mainly related to uncontrollable movements of the satellite about its orbital track and about the local vertical, which may cause altitude variations during the recording. These deformations can only be removed by means of control points selected on the ground and dedicated programs run in data analysis and interpretation laboratories.

Sources of geometric error in pixel positioning:
1. Rotation of the Earth during the satellite's motion
2. Panoramic distortion (pixels at the ends of a scan line are larger than those near the nadir)
3. Curvature of the Earth (which introduces further distortions)
4. Motion of the satellite while the detectors are receiving scan data
5. Changes in the altitude, speed and attitude of the satellite.
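Rectification by ground control points can be sketched as a least-squares fit of a 2-D affine transform mapping image (row, column) coordinates to map (x, y) coordinates. The control-point coordinates and the helper name `fit_affine` below are invented for the example; operational systems use more control points and often higher-order polynomial or rigorous sensor models.

```python
import numpy as np

def fit_affine(src, dst):
    """Fit an affine transform from image coords to map coords.

    src: (n, 2) image coordinates; dst: (n, 2) map coordinates; needs n >= 3.
    Returns a (3, 2) coefficient matrix for [row, col, 1] @ coeffs -> [x, y].
    """
    n = src.shape[0]
    a = np.hstack([src, np.ones((n, 1))])          # design matrix [row, col, 1]
    coeffs, *_ = np.linalg.lstsq(a, dst, rcond=None)
    return coeffs

# Four invented control points: map coords are a pure scale + shift of image coords
src = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])
dst = src * 30.0 + np.array([500.0, 200.0])
coeffs = fit_affine(src, dst)

# Re-project the control points through the fitted transform
mapped = np.hstack([src, np.ones((4, 1))]) @ coeffs
```

With more than three control points the system is overdetermined, and the least-squares residuals give a direct measure of the rectification accuracy.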

2. Image enhancement techniques

Image enhancement functions are designed to improve the appearance of the image for visual interpretation and analysis. Examples include contrast enhancement (linear stretching, logarithmic or exponential stretching, histogram equalization), which increases the tonal differences between adjacent targets in the image, and spatial filtering, which emphasizes selected features (e.g., delimiting boundaries with Laplacian filters: channels, roads, etc.). When the spectral dynamics (contrast) of an image is weak, it needs to be improved. In practice, a number of contrast-manipulation techniques are used to obtain an image modified for a specific application rather than the original one. All of the numerical methods used have the advantage of minimal information loss. There are two approaches: operating in the spatial domain and operating in the frequency domain. Spatial-domain processing is applied directly in the image plane by manipulating its pixels, while frequency-domain techniques are based on modifying the modulation transfer function of the image.

Before any interpretation and numerical processing, an assessment of the data is required. This begins with a statistical treatment of the values in the image: computing the main parameters, the mean, the standard deviation and the histogram. The histogram allows the shape of the distribution to be appreciated; given the heterogeneity of the image, experienced operators would retain one pixel in ten rather than all of them. If the histogram concentrates most values into a narrow range, the overall contrast of the image is poor; that range can be broadened by redistributing the values among classes.

Image smoothing is used to mitigate parasitic effects in the digital image caused by poor performance of the sampling system or of the transmission channel. Techniques exist in both the spatial and the frequency domain. In the spatial domain, neighbourhood averaging is used, together with a thresholding step; in the frequency domain a low-pass filter is applied via the Fourier transform, attenuating the high frequencies that carry the disturbances. Smoothing by averaging the pixels in a neighbourhood tends, however, to blur the details of the image. Since averaging is analogous to integration, differentiation has the opposite effect and increases the sharpness of the image; the gradient method is used for this.
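A linear contrast stretch, the simplest of the enhancements above, can be sketched as remapping the band's observed minimum and maximum onto the full 0-255 display range, widening a histogram concentrated in a narrow interval. The pixel values below are invented for illustration.

```python
import numpy as np

def linear_stretch(band):
    """Remap [min, max] of the band linearly onto the 0-255 display range."""
    lo, hi = band.min(), band.max()
    return ((band - lo) / (hi - lo) * 255.0).round().astype(np.uint8)

# A low-contrast band whose values occupy only the 60..120 interval
band = np.array([[60.0, 90.0], [100.0, 120.0]])
stretched = linear_stretch(band)
```

After stretching, the darkest input pixel maps to 0 and the brightest to 255, so the full tonal range of the display is used.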

3. Image transformations

Image transformation functions are conceptually similar to image enhancement functions, but while contrast enhancement applies to a single-channel image, image transformation involves the combined processing of multiple spectral bands. Arithmetic operations (subtraction, addition, multiplication, division) are applied to the digital numbers that characterize each pixel of a multispectral image, creating new images that better highlight certain objects (multiplicative contrast enhancement; transformations based on intensity, hue and colour saturation). This category includes vegetation indices: quantitative measures based on the digital image values that attempt to estimate the biomass or vigour of vegetation. It also includes data compression techniques such as Principal Component Analysis, which removes the correlation between the channels of the new records and represents the information in multispectral records more efficiently.
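The best-known vegetation index built by band arithmetic is NDVI = (NIR − Red) / (NIR + Red): healthy vegetation reflects strongly in the near infrared and absorbs red light, so NDVI approaches 1 over dense canopy and drops toward 0 (or below) over bare soil and water. The reflectance values below are invented, not taken from a real scene.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red)

nir = np.array([0.5, 0.4, 0.1])   # near-infrared reflectance of three pixels
red = np.array([0.1, 0.3, 0.1])   # red reflectance of the same pixels
index = ndvi(nir, red)
```

The first pixel (high NIR, low red) scores high, typical of vigorous vegetation; the last (equal NIR and red) scores zero, typical of non-vegetated surfaces.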

4. Classification and interpretation of the image

Image classification and analysis functions are used to identify and classify the pixels belonging to a particular object in the image. Classification is usually performed on multispectral data sets, assigning each pixel in the image to a particular class based on known statistical characteristics or on the pixel's brightness value. There is a wide variety of classification methods, but they divide into unsupervised and supervised classifications. In the first case, identification is based only on the physical properties or spatial configuration of the objects in the image (useful for unknown or hardly accessible areas). In the second case, the decision criteria of the classification are established with reference to a prior partitioning of the radiance values: the object classes are chosen, and the radiance values and statistical parameters of each class (mean, range of variation, standard deviation, etc.) are measured or extracted from catalogues. For this purpose, radiometric field measurements at sampling sites characteristic of each class are ideal; these are performed concurrently with the passage of the satellite and corrected for atmospheric and instrumental transfer.
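A toy sketch of supervised classification is the minimum-distance-to-means rule: each class is summarised by the mean spectrum of its training pixels, and every image pixel is assigned to the nearest mean. The class spectra and labels below are invented for illustration; operational classifiers also use the per-class variance (e.g., maximum likelihood).

```python
import numpy as np

def classify_min_distance(pixels, class_means):
    """Assign each pixel to the class with the nearest mean spectrum.

    pixels: (n, bands) array; class_means: (k, bands) array.
    Returns the class index of each pixel.
    """
    # Euclidean distance from every pixel to every class mean
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1)

class_means = np.array([[0.1, 0.4],    # class 0, e.g. "vegetation"
                        [0.3, 0.1]])   # class 1, e.g. "water"
pixels = np.array([[0.12, 0.38],       # close to the vegetation mean
                   [0.28, 0.12]])      # close to the water mean
labels = classify_min_distance(pixels, class_means)
```

For unsupervised classification the same distance computation appears inside a clustering loop (e.g., k-means), with the class means estimated from the data instead of from training sites.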

Multisource digital data integration

Digital processing allows the integration of data from different sources, multi-temporal and acquired with different sensors (analogue, photogrammetric, remote sensing), a method widely used for data interpretation and analysis. The goal is to extract information more easily, more precisely and in greater quantity. Satellite sensor resolution allows combined or alternative applications with photogrammetric digital cameras up to a scale of 1:1,000. Multispectral colour images serve for spectral data collection, while panchromatic images provide good spatial resolution. An excellent example is the combination of multispectral optical data, which provide surface-cover information, with radar images, which highlight the structural details of the area. Multisensor data integration requires a perfect geometric overlap of the images, referenced in a coordinate system or against a reference image (map). For example, the digital terrain model obtained by photogrammetry can be integrated with multispectral data and used in image classification to correct the effects of variations in altitude and slope, thereby increasing the accuracy of the classification. By combining data of different types and from different sources in a digital environment, the potential for information extraction increases considerably. Hence the concept of the Geographic Information System (GIS), defined as a geographic database with specific methodology for storing, retrieving, analyzing, displaying, extracting, processing and updating data that can be located on the terrestrial surface using a reference system. Its purpose is to provide a wide range of information on the land and the environment by spatially locating the phenomena studied.
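The fusion of a sharp panchromatic band with coarser multispectral bands can be sketched with the Brovey transform, a simple pan-sharpening method: each colour band is scaled by the ratio of the panchromatic intensity to the sum of the colour bands (all bands assumed already resampled to the same grid). The pixel values below are invented for illustration.

```python
import numpy as np

def brovey_fuse(bands, pan):
    """Brovey-transform pan-sharpening.

    bands: (3, h, w) multispectral bands; pan: (h, w) panchromatic band.
    Each band is rescaled so the fused bands sum to the panchromatic value.
    """
    total = bands.sum(axis=0)
    return bands * (pan / total)[None, :, :]

# A single pixel: three colour bands summing to 1.0, a brighter pan value
bands = np.array([[[0.2]], [[0.3]], [[0.5]]])
pan = np.array([[2.0]])
fused = brovey_fuse(bands, pan)
```

The fused bands preserve the colour ratios of the multispectral data while taking their overall brightness, and hence the spatial detail, from the panchromatic band.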
To do this, a GIS must provide the following main functions: input, validation and encoding of data; data management; easy retrieval and querying; data processing to obtain thematic or synthesis information; and decoding of the information obtained and its presentation in an accessible and expressive form: images, thematic maps, tables, etc. These functions are performed by specialized subsystems that are part of the GIS. With analogue technologies, the maps, processed images and photo-interpreted overlays used are scanned or digitized at a resolution high enough to preserve the precision of the scale at which they were made, and entered into the system's memory. With digital and hybrid technologies, all operations can be carried out within the GIS, so each final processing step is stored as it is produced. A few examples of digital imaging software: ERDAS IMAGINE, ArcGIS, PCI Geomatica (EASI/PACE), ER Mapper, ENVI (Environment for Visualizing Images).

