IDL Toolchain - Gizra/IBLI GitHub Wiki

The IDL scripts assume that the eMODIS NDVI files have already been downloaded correctly. These downloads include:

  • The monthly bulk download from http://earlywarning.usgs.gov/fews/downloads/ : this is mostly for historical archive building (and processing).
  • Individual dekadal files (only for even period numbers) from http://earlywarning.usgs.gov/fews/africa/web/imgbrowsc2.php?extent=eazd : these individual files are key for the operational processing. Allow about 2-3 days for the latest dekad image to become available; for example, the 1-10 January dekad typically appears on 12 or 13 January. We have no hard data on it, but the updated (re-filtered) data for previous dekads seem to arrive roughly 8 hours after the latest dekad is added, so updating at midday on the 3rd, 13th, and 23rd of each month should be fine. Of key importance for downloading is that USGS applies temporal filtering to the NDVI series to reduce cloud contamination (and other atmospheric noise). Temporal filtering implies, however, that the latest data are not yet fully filtered. Therefore, all data less than 1.5 months old need to be re-downloaded each time a new composite becomes available.
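The download schedule above can be sketched in code. The following Python snippet (an illustration only; the actual downloading is done outside the IDL toolchain) lists the dekad start dates that fall inside the ~1.5-month re-download window, using 45 days as a stand-in for "1.5 months":

```python
from datetime import date, timedelta

def dekad_start_dates(since, until):
    """Return the start date of every dekad (the 1st, 11th and 21st
    of each month) that falls within [since, until]."""
    d = date(since.year, since.month, 1)
    out = []
    while d <= until:
        for day in (1, 11, 21):
            start = d.replace(day=day)
            if since <= start <= until:
                out.append(start)
        # advance to the first day of the next month
        d = (d.replace(day=28) + timedelta(days=4)).replace(day=1)
    return out

def redownload_list(today):
    """Dekads less than ~1.5 months (here: 45 days) old, whose
    temporally filtered values may still change and must be fetched again."""
    return dekad_start_dates(today - timedelta(days=45), today)
```

For instance, running the update at midday on 13 March would re-fetch the five dekads starting 1, 11 and 21 February and 1 and 11 March.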

Please note that we are in the process of making the IDL scripts more operational. For that reason, the newer (more operational) versions have "_OP" at the end of the file name.

IDL scripts:

  • eMODIS_ORGANIZE_OP.pro : takes the monthly zipped files, unzips them, and subsets the Kenya part of the East Africa window. The output is written as plain 8-bit binary files, one for every dekad, organized in a folder per year. An ENVI header (a text file describing how the data are organized) is also written for each file. Note that only the even periods are processed; these correspond to days 1-10, 11-20, and 21-end of each month.
  • eMODIS_ORGANIZE_INDIVIDUAL_FILES_OP.pro : checks which zipped TIFF files are in the individualFiles folder (for the individual dekadal files), opens them all, subsets to the Kenya window, and writes output (as above). Because of the filtering described above, the most recent composites still change; the newest files of the past month must therefore be re-downloaded before running this program. The script assumes that all relevant data to process are present in the individualFiles folder, which means you should: regularly delete old composites that are already processed (and/or part of month files), and make sure that, prior to execution, the latest files have been updated to the most recent filtered version.
  • eMODIS_createBIL.pro : creates one large BIL file (band interleaved by line) of all eMODIS data for the Kenya window. The purpose is to have all data together in one file, which facilitates rapid processing afterwards. Note that this program is the main bottleneck in terms of processing speed.
  • ZNORMBIL_8BIT : (programmed by Michele Meroni, JRC). Computes the pixel-level z-scores of the time series. Mean and SD are computed using only data in the specified range (for eMODIS we currently use 2001-2011, with the rationale of intercalibration with AVHRR). During computation it is checked whether all values in the time series are valid (within minVal-maxVal) and whether the overall variability (95th-5th percentile) is high enough (see description in the file). If not, the result for that pixel/dekad is NaN (Not a Number). Besides a z-scored BIL stack (four times as large as the previous one, due to the floating-point output), a number of other outputs are generated (historical average, standard deviation, and a diagnostics file).
  • AGGREGATE_Z.pro : for each administrative division, the z-scores (for each dekad) are aggregated and written to a CSV file. Besides the BIL stack with z-scores (output of the previous program), two other inputs are required: a raster image with division ID codes that has precisely the same geometry as the NDVI raster data, and a diagnostics file. Although the diagnostics file is an output of the previous step, we consistently use an old diagnostics file created earlier for a shorter time series; in this way, our output stays consistent with previous updates.
  • CUMULATE_Z_PER_DIVISION.pro : takes the output of the previous step and cumulates the division-aggregated dekadal z-scores over time to obtain the czNDVI for the LRLD and SRSD seasons. The result is written to a CSV file.
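To clarify the BIL layout that eMODIS_createBIL.pro produces, here is a minimal Python sketch (not the IDL code itself) of band-interleaved-by-line stacking: for every image row, the corresponding row of every dekad is written consecutively, so reading one row-block later retrieves that row's values for all dekads in a single contiguous read.

```python
import io

def build_bil(band_files, rows, cols):
    """Interleave single-band 8-bit rasters into one BIL stack.

    band_files: file-like objects, each holding rows*cols bytes
    (one eMODIS dekad each, in chronological order).
    Output order: row 0 of band 0, row 0 of band 1, ...,
    row 1 of band 0, row 1 of band 1, and so on.
    """
    out = io.BytesIO()
    for r in range(rows):
        for f in band_files:
            f.seek(r * cols)          # start of row r in this band
            out.write(f.read(cols))   # append that row to the stack
    return out.getvalue()
```

Because every dekad file must be re-read once per output row, this row-by-row interleaving is I/O heavy, which is consistent with the note above that this step is the main processing bottleneck.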
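The per-pixel checks in ZNORMBIL_8BIT can be illustrated with a small Python sketch. The structure (validity check, 95-5% variability check, then z-scoring against the baseline years) follows the description above, but the threshold values `min_val`, `max_val`, and `min_range` are placeholders here, not the script's actual defaults:

```python
import math
import statistics

def znorm_series(series, base_idx, min_val=0, max_val=250, min_range=10):
    """Z-score one pixel's NDVI time series against a baseline period.

    series: raw 8-bit values for one pixel over all dekads;
    base_idx: indices of the baseline dekads (e.g. the 2001-2011 range).
    If any value is invalid, or the 95th-5th percentile spread of the
    baseline is too small, the whole pixel is flagged NaN.
    """
    if any(v < min_val or v > max_val for v in series):
        return [float('nan')] * len(series)
    base = sorted(series[i] for i in base_idx)
    p5 = base[int(0.05 * (len(base) - 1))]    # crude 5th percentile
    p95 = base[int(0.95 * (len(base) - 1))]   # crude 95th percentile
    if p95 - p5 < min_range:
        return [float('nan')] * len(series)
    mu = statistics.mean(series[i] for i in base_idx)
    sd = statistics.stdev(series[i] for i in base_idx)
    return [(v - mu) / sd for v in series]
```

Note that the mean and SD come only from the baseline indices, so dekads outside 2001-2011 are standardized against that fixed reference, which is what makes later updates comparable.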
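The aggregation step in AGGREGATE_Z.pro pairs each z-score pixel with the division ID at the same position, which is why the ID raster must share the exact geometry of the NDVI data. A hedged Python sketch of that pairing for one dekad (assuming a simple per-division mean; the IDL program may weight or mask differently):

```python
import math

def aggregate_z(z_pixels, id_pixels):
    """Average z-scores per administrative division for one dekad.

    z_pixels: per-pixel z-scores for one dekad;
    id_pixels: division ID per pixel, in the same pixel order
    (same raster geometry). NaN pixels are skipped.
    """
    sums, counts = {}, {}
    for z, div in zip(z_pixels, id_pixels):
        if math.isnan(z):
            continue
        sums[div] = sums.get(div, 0.0) + z
        counts[div] = counts.get(div, 0) + 1
    return {div: sums[div] / counts[div] for div in sums}
```

Skipping NaN pixels means divisions with many invalid pixels are averaged over fewer observations, which is one reason the diagnostics file matters for interpreting the output.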
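Finally, the cumulation in CUMULATE_Z_PER_DIVISION.pro is a running sum of the division-aggregated z-scores over the dekads of a season. A minimal Python illustration, where the dekad labels and the choice of season window (LRLD or SRSD) are assumptions supplied by the caller, not hard-coded as in the IDL program:

```python
def cumulate_z(z_by_dekad, season_dekads):
    """Running sum of division-aggregated z-scores over one season.

    z_by_dekad: dekad label -> aggregated z-score for one division;
    season_dekads: ordered dekad labels of the season window.
    Returns the cumulative czNDVI after each dekad of the season.
    """
    total, out = 0.0, []
    for d in season_dekads:
        total += z_by_dekad[d]
        out.append(total)
    return out
```

A season of three dekads with z-scores 1.0, -0.5 and 0.5 would thus yield the czNDVI trace 1.0, 0.5, 1.0.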