The back-end of SNAP is written in Java, but there is good news for Python enthusiasts: a Python interface to the Java API is provided through a module called snappy. In my previous tutorial, I showed you how to install snappy on your machine and get set up for development work. Before we move on to the coding, let me mention a few points about the SNAP API.
Snappy does not have comprehensive API docs of its own, unlike the Java API, so it is a good idea to consult the Java API docs; just make sure you are looking at the right version. Python snappy is essentially an interface to the Java API, bound through the jpy module. Most importantly, SNAP implements the Graph Processing Framework (GPF). With traditional software you have to perform a lot of intermediate I/O operations, and multi-threading is difficult or sometimes impossible to implement, so processing tends to be slow. GPF overcomes this by expressing the work as nodes connected by directed edges with no cycles, i.e. a Directed Acyclic Graph (DAG). Every node in the graph represents an operation (a processor) applied to the data stream coming from the immediately preceding node and passed to the immediately following node, unless it is a terminal node. As a result, intermediate products can be kept in memory, and output is only written at the final node of the entire process.
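To make the DAG idea concrete, here is a toy plain-Python sketch (not snappy; the "operators" are just placeholder functions) showing how chained nodes pass data along in memory, with I/O happening only at the terminal node:

```python
# A toy processing chain illustrating the DAG idea behind GPF:
# each node transforms the output of the node before it, in memory,
# and only the terminal node produces output.

def apply_orbit(product):
    # placeholder operator: tag the product with the step it went through
    return product + ['orbit-applied']

def calibrate(product):
    # another placeholder operator in the chain
    return product + ['calibrated']

def write(product):
    # terminal node: the only place where output leaves memory
    return ','.join(product)

# Read -> Apply-Orbit-File -> Calibration -> Write, as a linear DAG
source = ['raw-scene']
result = write(calibrate(apply_orbit(source)))
print(result)  # raw-scene,orbit-applied,calibrated
```

Real GPF graphs work the same way, except each node is a SNAP operator and the edges can fan out into more complex (but still acyclic) shapes.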

Photo Credit: ESA
Usually, a graph looks like the one below. This example graph represents Ellipsoid Correction using the geolocation grid.
<graph id="someGraphId">
  <version>1.0</version>
  <node id="someNodeId">
    <operator>Ellipsoid-Correction-GG</operator>
    <sources>
      <source>${source}</source>
    </sources>
    <parameters>
      <sourceBands>string,string,string,...</sourceBands>
      <imgResamplingMethod>string</imgResamplingMethod>
      <mapProjection>string</mapProjection>
    </parameters>
  </node>
</graph>
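If you would rather generate such graph files from Python than write them by hand, the standard library is enough. A minimal sketch (the operator and parameter names mirror the example above; the ${source} placeholder is left as-is for gpt to substitute):

```python
import xml.etree.ElementTree as ET

def build_graph_xml(operator, parameters,
                    graph_id='someGraphId', node_id='someNodeId'):
    """Build a single-node SNAP GPF graph as an XML string."""
    graph = ET.Element('graph', id=graph_id)
    ET.SubElement(graph, 'version').text = '1.0'
    node = ET.SubElement(graph, 'node', id=node_id)
    ET.SubElement(node, 'operator').text = operator
    sources = ET.SubElement(node, 'sources')
    # leave the ${source} placeholder for gpt to resolve at run time
    ET.SubElement(sources, 'source').text = '${source}'
    params = ET.SubElement(node, 'parameters')
    for name, value in parameters.items():
        ET.SubElement(params, name).text = value
    return ET.tostring(graph, encoding='unicode')

xml_text = build_graph_xml('Ellipsoid-Correction-GG', {
    'imgResamplingMethod': 'BILINEAR_INTERPOLATION',
    'mapProjection': 'WGS84(DD)',
})
print(xml_text)
```

The resulting string can be saved to a .xml file and passed to gpt on the command line.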
You can get the list of all the operators available in SNAP by running gpt -h from your /snap/bin directory. Similarly, the parameters and other information for a particular operator can be obtained with gpt Ellipsoid-Correction-GG -h, where Ellipsoid-Correction-GG is the name of the operator.
Follow this tutorial if you are more comfortable using the GUI. Here, we will be coding with the snappy module, following similar steps to those we saw in that tutorial.
import snappy
from snappy import ProductIO, GPF
import os

HashMap = snappy.jpy.get_type('java.util.HashMap')

# Load snappy operators
GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()

input = 'data'
output = 'output'


def write_product(data, file_path, format=None):
    # Allowed formats:
    # GeoTIFF-BigTIFF, HDF5, Snaphu, BEAM-DIMAP, GeoTIFF+XML, PolSARPro,
    # NetCDF-CF, NetCDF-BEAM, ENVI, JP2, Generic Binary BSQ, Gamma, CSV,
    # NetCDF4-CF, GeoTIFF, NetCDF4-BEAM
    ProductIO.writeProduct(data, file_path, format if format else 'BEAM-DIMAP')


def apply_orbit_file(data, datestamp):
    params = HashMap()
    orbit = GPF.createProduct('Apply-Orbit-File', params, data)
    # write_product(orbit, os.path.join(output, '{}_Orb'.format(datestamp)))
    return orbit


def do_calibration(orbit, datestamp):
    params = HashMap()
    params.put('outputSigmaBand', False)
    params.put('outputGammaBand', False)
    params.put('outputBetaBand', True)
    calibration = GPF.createProduct('Calibration', params, orbit)
    # write_product(calibration, os.path.join(output, '{}_Orb_Cal'.format(datestamp)))
    return calibration


def perform_multilook(calibration, datestamp, range_look_number=3, azimuth_look_number=3):
    params = HashMap()
    params.put('nRgLooks', range_look_number)
    params.put('nAzLooks', azimuth_look_number)
    params.put('outputIntensity', True)
    params.put('grSquarePixel', True)
    multilook = GPF.createProduct('Multilook', params, calibration)
    # write_product(multilook, os.path.join(output, '{}_Orb_Cal_ML'.format(datestamp)))
    return multilook


def perform_terrain_flattening(multilook, datestamp):
    params = HashMap()
    params.put('demName', 'SRTM 1Sec HGT')
    params.put('demResamplingMethod', 'BICUBIC_INTERPOLATION')
    params.put('oversamplingMultiple', 1.5)
    params.put('additionalOverlap', 0.1)
    terrain = GPF.createProduct('Terrain-Flattening', params, multilook)
    # write_product(terrain, os.path.join(output, '{}_Orb_Cal_ML_TF'.format(datestamp)))
    return terrain


def dem_coregistration(terrain, datestamp):
    params = HashMap()
    params.put('demName', 'SRTM 1Sec HGT')
    params.put('demResamplingMethod', 'BICUBIC_INTERPOLATION')
    # Not sure if BILINEAR_INTERPOLATION would produce anything different;
    # worth checking.
    params.put('resamplingType', 'BICUBIC_INTERPOLATION')
    params.put('tileExtensionPercent', 100)
    params.put('maskOutAreaWithoutElevation', True)
    coregistered = GPF.createProduct('DEM-Assisted-Coregistration', params, terrain)
    write_product(coregistered, os.path.join(output, '{}_Orb_Cal_ML_TF_Stack'.format(datestamp)))
    return coregistered


def speckle_reduction(data, datestamp):
    params = HashMap()
    params.put('filter', 'Lee Sigma')
    params.put('enl', 4.0)
    params.put('numLooksStr', '4')
    params.put('windowSize', '9x9')
    params.put('sigmaStr', '0.9')
    params.put('targetWindowSizeStr', '5x5')
    speckle = GPF.createProduct('Speckle-Filter', params, data)
    # write_product(speckle, os.path.join(output, '{}_Orb_Cal_ML_TF_Stack_Spk'.format(datestamp)))
    return speckle


def ellipsoid_correction(speckle, datestamp):
    params = HashMap()
    params.put('imgResamplingMethod', 'BILINEAR_INTERPOLATION')
    params.put('mapProjection', 'WGS84(DD)')
    ec = GPF.createProduct('Ellipsoid-Correction-GG', params, speckle)
    write_product(ec, os.path.join(output, '{}_Orb_Cal_ML_TF_Stack_Spk_EC'.format(datestamp)))
    write_product(ec, os.path.join(output, '{}_Orb_Cal_ML_TF_Stack_Spk_EC'.format(datestamp)), format='GeoTIFF')
    return ec


def process(file):
    data = ProductIO.readProduct(os.path.join(input, file))
    # Get the acquisition start time from the file name (the fifth
    # underscore-separated field) and keep the first 8 characters as YYYYmmdd.
    datestamp = file.split('_')[4][:8]
    orbit = apply_orbit_file(data, datestamp)
    calibration = do_calibration(orbit, datestamp)
    multilook = perform_multilook(calibration, datestamp)
    terrain = perform_terrain_flattening(multilook, datestamp)
    # coregistered = dem_coregistration(terrain, datestamp)
    speckle = speckle_reduction(terrain, datestamp)
    final = ellipsoid_correction(speckle, datestamp)
    print('finished')
    return True


def set_path():
    path = os.path.join(os.getcwd())
    os.chdir(path)
    return path


def main():
    path = set_path()
    files = [f for f in os.listdir(input) if f.endswith('.zip')]
    for file in files:
        status = process(file)


if __name__ == '__main__':
    main()
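The datestamp extraction in process() relies on the Sentinel-1 product naming convention, where the fifth underscore-separated field is the acquisition start time. A standalone sketch (the product name below is a made-up but format-valid example):

```python
def extract_datestamp(product_name):
    """Extract YYYYmmdd from a Sentinel-1 product name.

    The fifth underscore-separated field is the acquisition start
    time (YYYYmmddTHHMMSS); its first 8 characters are the date.
    """
    return product_name.split('_')[4][:8]

# A made-up but format-valid Sentinel-1 GRD product name:
name = 'S1A_IW_GRDH_1SDV_20180611T140410_20180611T140435_022323_026A3B_154B.zip'
print(extract_datestamp(name))  # 20180611
```

Note that this only works for standard product names; if you rename your downloaded .zip files, the split-based indexing will break.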
Hrishikesh
I would like to thank you for such wonderful information in simple language. Currently, I am working on a project where I am using Doppler frequency analysis to analyze the shift of ship targets in Sentinel-1 data. As the number of targets is large, I want to use the SNAP-Python API. For the code, I need the following:
1. Satellite (SAR) to target slant range
2. Angle of incidence when the SAR is looking at the target
3. Velocity of the satellite (SAR) at the time of the Sentinel-1 image
4. Azimuth line heading for the given data (with respect to north)
It is very difficult to find relevant information in the API documentation, so it would be appreciated if you could help me.
Thanks.
biplovbhandari
I would suggest going through various papers and looking up the ESA forum for relevant information and a way forward.