API for pipelines - rodekruis/IBF-system GitHub Wiki
This documentation is meant to make clear to pipeline owners how to call the IBF-system API correctly.
- This is relevant both for internal (510) pipeline owners and for external pipeline owners.
- It goes beyond how to accomplish a single successful API call: it aims to make clear which complete "set" of API calls is needed for a correctly working IBF-portal. This differs per disaster type.
- It also aims to make clear which different scenarios there are. These also differ per disaster type. There is always at least 'trigger' vs. 'no trigger', but for example the disaster type 'typhoon' also distinguishes 'event below trigger', 'event without landfall', etc.
The overview flowchart describes the structure of a pipeline run. The implementation details vary per disaster type and are detailed in the text below.
```mermaid
flowchart
classDef wrap text-wrap: wrap;
A(start pipeline)
B[detect events from data sources like GloFAS stations, ECMWF, and river gauges]:::wrap
C{how many events?}
D[upload indicators for no events<br/><br/>/admin-area-dynamic-data/exposure]:::wrap
E[upload indicators per admin level, and lead time<br/><br/>/admin-area-dynamic-data/exposure]:::wrap
F[upload disaster type indicators like glofas_stations, and typhoon track<br/><br/>/event/alerts-per-leadtime<br/>/typhoon-track]:::wrap
G[upload non-event data like river gauges, and raster files<br/><br/>/point-data/dynamic<br/>/admin-area-dynamic-data/raster<br/>/lines-data/exposure-status]:::wrap
H[inform the portal that the upload is complete<br/><br/>/event/process]:::wrap
I(stop pipeline)
A --> B
B --> C
C -->|zero| D
C -->|non-zero| E
subgraph one["<div style="width:20em; height:10em; display:flex; justify-content: flex-start; align-items:flex-start;">for each event</div>"]
E --> F
end
D --> G
F --> G
G --> H
H --> I
```
## Floods

- Start a loop per GloFAS station, which is agreed to be the definition of an event for floods. It is theoretically possible to change the event definition (to e.g. river basin), but this should be agreed with the IBF dev team beforehand.
- Per station/event:
  - Execute the following logic to identify the `leadTime` (between `0-day` and `7-day`) to use in API calls for this event:
    - Determine the 1st `leadTime` above the trigger threshold.
      - If found: also determine whether a warning threshold is already exceeded earlier, and remember this for the `POST /event/alerts-per-leadtime` call (see 2. below).
    - If nowhere above the trigger threshold, determine the 1st `leadTime` above any threshold (so only applicable if multiple thresholds are defined for the country).
    - If nowhere above any threshold, continue with the next station/event (there are some API calls to do after the event loop, see below).
  - Upload exposure data per admin area for this event via `POST /admin-area-dynamic-data/exposure` (see 1. below).
  - Upload alert data per `leadTime` for this event via `POST /event/alerts-per-leadtime` (see 2. below).
  - Upload GloFAS station data for this event via `POST /point-data/dynamic` (see 3. below).
- After the event loop:
  - [Only if no stations above any threshold] Upload `no alert` exposure data via `POST /admin-area-dynamic-data/exposure` (see 1. below).
  - Upload GloFAS station data for all other stations via `POST /point-data/dynamic` (see 3. below).
  - Upload the flood extent raster per `leadTime` via `POST /admin-area-dynamic-data/raster/{disasterType}` (see 4. below).
  - Inform the API that the upload is complete and it can process events and send notifications, via `POST /events/process` (see 5. below).
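The station-level logic above can be sketched as follows. This is a hypothetical illustration, not code from an actual pipeline; the forecast/threshold representation and the return shape are assumptions.

```python
# Hypothetical sketch: determine the leadTime to use for one GloFAS station/event.
# `forecasts` maps leadTime strings to forecast values; thresholds are per station.

def determine_lead_time(forecasts, trigger_threshold, warning_thresholds):
    """Return (leadTime, earlier_warning_lead_times), or (None, []) if no alert."""
    lead_times = [f"{day}-day" for day in range(8)]  # 0-day .. 7-day
    # 1. find the first leadTime above the trigger threshold
    for i, lt in enumerate(lead_times):
        if forecasts.get(lt, 0) >= trigger_threshold:
            # remember earlier leadTimes that already exceed a warning threshold,
            # for the POST /event/alerts-per-leadtime call
            earlier = [
                e for e in lead_times[:i]
                if any(forecasts.get(e, 0) >= w for w in warning_thresholds)
            ]
            return lt, earlier
    # 2. otherwise, the first leadTime above any (warning) threshold
    for lt in lead_times:
        if any(forecasts.get(lt, 0) >= w for w in warning_thresholds):
            return lt, []
    # 3. nowhere above any threshold: skip this station/event
    return None, []
```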
Scenarios for floods:
- No event
- Trigger event
- Medium-warning event
- Low-warning event
- Warning-to-trigger event, which exceeds a warning threshold at an earlier lead time than it exceeds the trigger threshold
- Ongoing event: any of the above events, but with a lead time of 0 days
- Multi-event: any combination of the above events
1. `POST /admin-area-dynamic-data/exposure` > example
   - This is the main API call, which uploads dynamic data per admin area.
   - It is called once for every `eventName`, which is agreed to be defined per GloFAS station.
   - Within the event loop, start a loop over every `adminLevel` defined in IBF-system settings.
   - Within this loop, make an API call per `dynamicIndicator` defined in IBF-system settings.
   - E.g. with 2 `events`, 3 `adminLevels` and 5 `dynamicIndicators`, this means 2 * 3 * 5 = 30 API calls.
   - Always include the dynamic indicator `forecast_severity`:
     - It has value 0 for no alert.
     - It has value 0.3 for a low warning, 0.7 for a medium warning and 1 for a trigger. These relate to predefined thresholds set in IBF-system settings.
     - This should be the case for exactly those admin areas mapped to GloFAS stations with `eapAlertClass` = `min`/`med` (see the info on the `point-data/dynamic` API call below).
   - Also include the dynamic indicator `forecast_trigger`:
     - This indicates a trigger (value 1) or not (value 0). Other values are not allowed.
     - Currently `forecast_trigger=1` should exactly align with `forecast_severity=1`.
     - Theoretically this can be disconnected: `forecast_severity=1` with `forecast_trigger=0` will then indicate a 'High warning'.
     - The API call with this indicator should always be made last.
     - If this layer is missing, `false` is assumed.
   - Also always include the main exposure indicator (called `mainExposureIndicator` in `disasters.json`), e.g. `population_affected`.
   - Optionally include any other dynamic indicators, e.g. `windspeed` or `population_affected_u5`.
   - No alert scenario: this endpoint is also used to indicate a `no alert` scenario. Call this endpoint after the event loop, with `eventName=null` and `leadTime=1-day`, and for each `adminLevel`:
     - Upload the `forecast_severity` layer with value 0 for all admin areas.
     - Upload the `forecast_trigger` layer with value 0 for all admin areas.
     - Upload the `population_affected` layer with value 0 for all admin areas.
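The events × adminLevels × dynamicIndicators loop above can be sketched as follows. This is a hypothetical illustration: the payload field names are assumptions, and the real request schema should be taken from the endpoint's example.

```python
# Hypothetical sketch of enumerating the exposure upload calls for one event:
# one call per adminLevel per dynamicIndicator, with forecast_trigger last.

def build_exposure_payloads(event_name, lead_time, admin_levels, indicators,
                            values_per_indicator, country="UGA", disaster="floods"):
    payloads = []
    # forecast_trigger must be uploaded last, as described above
    ordered = sorted(indicators, key=lambda i: i == "forecast_trigger")
    for admin_level in admin_levels:
        for indicator in ordered:
            payloads.append({
                "countryCodeISO3": country,      # assumed field names
                "disasterType": disaster,
                "eventName": event_name,
                "leadTime": lead_time,
                "adminLevel": admin_level,
                "dynamicIndicator": indicator,
                "exposurePlaceCodes": values_per_indicator[indicator],
            })
    return payloads
```

With 3 admin levels and 3 indicators this yields 9 payloads per event, matching the 2 * 3 * 5 = 30 arithmetic in the text.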
2. `POST /event/alerts-per-leadtime` > example
   - This API call is used to indicate, per event, for which leadTimes warnings and/or triggers are determined (between 0-day and 7-day).
   - This functionality is only used for `Floods` at the moment.
   - The `forecastAlert` property per `leadTime` is used to signify no alert vs. any alert (trigger or warning).
   - The `forecastTrigger` property per `leadTime` is used to distinguish trigger alerts from warning alerts. This means it should be `false` for a warning and `true` for a trigger.
   - Use these properties to indicate a `warning-to-trigger` scenario, where an observed trigger already exceeds a warning threshold earlier. Do so by setting `forecastAlert=true` and `forecastTrigger=false` for the `leadTimes` where a warning is exceeded and the trigger is not. For the `leadTimes` where the trigger is exceeded, both are `true`.
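A warning-to-trigger body could be built as sketched below. This is a hypothetical illustration assuming that once a threshold is exceeded it stays exceeded for later leadTimes; the property names come from the text, the rest of the schema from assumptions.

```python
# Hypothetical sketch: alerts-per-leadtime entries for a warning-to-trigger
# scenario, e.g. warning exceeded from 3-day and trigger from 5-day onwards.

def alerts_per_lead_time(first_warning_day, first_trigger_day, max_day=7):
    alerts = []
    for day in range(max_day + 1):
        warning = first_warning_day is not None and day >= first_warning_day
        trigger = first_trigger_day is not None and day >= first_trigger_day
        alerts.append({
            "leadTime": f"{day}-day",
            "forecastAlert": warning or trigger,  # any alert (warning or trigger)
            "forecastTrigger": trigger,           # true only for trigger alerts
        })
    return alerts
```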
3. `POST /point-data/dynamic` > example
   - Used for uploading GloFAS station dynamic data.
   - For GloFAS data, a separate API call must be made for each `key`: `forecastLevel`, `forecastReturnPeriod`, `triggerLevel`, `eapAlertClass`.
   - `eapAlertClass` must be one of the values `no`, `min`, `med` or `max`, where the first means no alert, the last means a trigger alert, and the middle categories represent lower thresholds (only use these if multiple thresholds are defined for the country). These values should align with `forecast_severity` = 0/0.3/0.7/1 in the exposure endpoint.
   - Note that the pipeline can decide per country on what basis/logic to put stations in a certain `eapAlertClass`, e.g. based on `Glofas probability` (currently the case for Zambia) or on `Return period/water discharge level` (currently the case for all other countries).
   - This API call is made per event/station within the event loop.
   - Additionally, 1 more API call with all remaining stations is made after the event loop. In case of no alerts, this means all stations are in this call.
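The required alignment between `eapAlertClass` and `forecast_severity` can be captured in a small lookup, as sketched below (an illustration of the mapping stated above, not code from the IBF system).

```python
# The eapAlertClass <-> forecast_severity alignment described above.

EAP_ALERT_CLASS_TO_SEVERITY = {
    "no": 0,     # no alert
    "min": 0.3,  # low warning (only if multiple thresholds defined)
    "med": 0.7,  # medium warning (only if multiple thresholds defined)
    "max": 1,    # trigger alert
}

def severity_for_station(eap_alert_class):
    # reject values outside the four allowed categories
    if eap_alert_class not in EAP_ALERT_CLASS_TO_SEVERITY:
        raise ValueError(f"invalid eapAlertClass: {eap_alert_class}")
    return EAP_ALERT_CLASS_TO_SEVERITY[eap_alert_class]
```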
4. `POST /admin-area-dynamic-data/raster/{disasterType}` > example
   - Used for uploading the flood extent rasters.
   - The filename should be `flood_extent_<leadTime>_<countryCodeISO3>.tif`.
   - Note that multiple events with the same `leadTime` should be aggregated into one .tif first. This implies the aggregation and API calls should be done after the loop over events.
   - Flood extents are only calculated for triggers, not for warnings.
   - The pipeline should also upload an empty raster for all `no alert` and all `warning` leadTimes, as otherwise Geoserver will keep showing the last available data from previous pipeline runs for those `leadTimes`. This means that in case of no alerts, an empty raster is uploaded for every `leadTime`.
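The grouping of events by `leadTime` and the filename convention can be sketched as follows (a hypothetical helper; the actual raster merging itself is out of scope here).

```python
# Hypothetical sketch: group flood-extent uploads by leadTime, since events
# sharing a leadTime must be merged into one .tif before uploading.

from collections import defaultdict

def raster_filename(lead_time, country_code_iso3):
    # filename convention from the documentation above
    return f"flood_extent_{lead_time}_{country_code_iso3}.tif"

def rasters_to_upload(events, country_code_iso3):
    """events: list of (eventName, leadTime). Returns {filename: [eventNames]}."""
    by_lead_time = defaultdict(list)
    for event_name, lead_time in events:
        by_lead_time[lead_time].append(event_name)
    return {
        raster_filename(lt, country_code_iso3): names
        for lt, names in by_lead_time.items()
    }
```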
5. `POST /events/process` > example
   - This call starts event processing for the given `country` and `disasterType`.
   - This involves updating pre-existing events, creating new events, closing old events and, if applicable, sending notifications.
   - This call must always be made, whether there are currently alerts or not.
## Typhoon

- The typhoon pipeline also runs per event, but these events are defined from the data source and not by the pipeline itself.
- The pipeline can contain logic to skip certain events, because they do not meet certain thresholds, and continue with the next.
- Per event:
  - Upload exposure data per admin area for this event via `POST /admin-area-dynamic-data/exposure` (see 1. below).
  - Upload typhoon track data via `POST /typhoon-track` (see 2. below).
- Note that there are multiple relevant types of events. In mock these are defined as:
  - `eventTrigger` (trigger)
  - `eventNoTrigger` (warning)
  - `eventNoLandfall` (track will not make landfall; can be either trigger or warning)
  - `eventNoLandfallYet` (track is too far away to predict whether it will make landfall; can be either trigger or warning)
  - `eventAfterLandfall` (ongoing event; can be either trigger or warning)
- After the event loop:
  - [Only if no events] Upload `no events` exposure data via `POST /admin-area-dynamic-data/exposure` (see 1. below).
  - Inform the API that the upload is complete and it can process events and send notifications, via `POST /events/process` (see 3. below).
1. `POST /admin-area-dynamic-data/exposure` > example
   - See the `floods` entry of this endpoint above.
   - `mainExposureIndicator` = `houses_affected`
   - Trigger events are uploaded with `forecast_severity=1` and `forecast_trigger=1`.
   - Warning events are uploaded with `forecast_severity=1` and `forecast_trigger=0`.
   - Indicate the `no events` scenario with `eventName=null` and `leadTime=72-hour`: upload `forecast_severity=0`, `forecast_trigger=0` and `houses_affected` with any value for all admin areas.
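The three severity/trigger combinations above can be summarized in a lookup, sketched below. The scenario labels (`trigger`, `warning`, `no_event`) are hypothetical; only the indicator values come from the text.

```python
# Sketch of the forecast_severity / forecast_trigger combinations for typhoon
# exposure uploads. The scenario names are illustrative labels, not API values.

def typhoon_severity_trigger(scenario):
    combos = {
        "trigger": {"forecast_severity": 1, "forecast_trigger": 1},
        "warning": {"forecast_severity": 1, "forecast_trigger": 0},
        # no_event: uploaded with eventName=null and leadTime=72-hour
        "no_event": {"forecast_severity": 0, "forecast_trigger": 0},
    }
    return combos[scenario]
```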
2. `POST /typhoon-track` > example
3. `POST /events/process` > example
   - This call starts event processing for the given `country` and `disasterType`.
   - This involves updating pre-existing events, creating new events, closing old events and, if applicable, sending notifications.
   - This call must always be made, whether there are currently alerts or not.
Scenarios for typhoon:
- No events
- Trigger event
- Warning event
- Event without landfall
- Event where landfall cannot be determined yet
- Event after landfall (ongoing event)
- Multi-event
## Flash floods

- The pipeline defines events by parent admin area (so e.g. district/admin level 2 in Malawi), with `eventName=<parentAdminAreaName>`.
- Per potential event / parent admin area:
  - Establish the worst `leadTime` in the configured lead time window (currently 0-hour to 48-hour for all configured countries).
  - Establish whether the forecast exceeds the (currently single) severity threshold at this `leadTime`. If not, skip this event and continue with the next.
  - Establish whether it is a trigger event or a warning event. How to decide this can in principle be configured per country:
    - E.g. in Malawi it is based on `leadTime`: if <= 12-hour it is a `trigger`, otherwise a `warning`.
    - E.g. in Ethiopia it is configured to always be a `warning`.
    - Instead of `leadTime` it could also be based on severity or accuracy thresholds.
  - Upload exposure data per admin area for this event via `POST /admin-area-dynamic-data/exposure` (see 1. below).
  - If configured, upload exposure data per point asset (schools, etc.) for this event via `POST /point-data/dynamic` (see 2. below).
  - If configured, upload exposure data for roads (lines) and buildings (polygons) for this event via `POST /lines-data/exposure-status` (see 3. below).
- After the event loop:
  - [Only if no events] Upload `no alert` exposure data via `POST /admin-area-dynamic-data/exposure` (see 1. below).
  - Upload relevant rasters (`flood depth` and/or `rainfall forecast`) per `leadTime` via `POST /admin-area-dynamic-data/raster/{disasterType}` (see 4. below).
  - If configured, upload river gauge data (this is not event-related) via `POST /point-data/dynamic` (see 2. below).
  - Inform the API that the upload is complete and it can process events and send notifications, via `POST /events/process` (see 5. below).
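The per-country trigger-vs-warning decision above can be sketched as follows. The 12-hour cutoff for Malawi and the always-warning rule for Ethiopia come from the text; the configuration shape and country handling are assumptions.

```python
# Hypothetical sketch of the flash-flood trigger/warning classification.

def lead_time_hours(lead_time):
    # e.g. "12-hour" -> 12
    return int(lead_time.split("-")[0])

def classify_event(country_code_iso3, lead_time):
    if country_code_iso3 == "ETH":
        return "warning"  # Ethiopia: configured to always be a warning
    if country_code_iso3 == "MWI":
        # Malawi: trigger if within 12 hours, warning otherwise
        return "trigger" if lead_time_hours(lead_time) <= 12 else "warning"
    raise ValueError(f"no flash-flood rule configured for {country_code_iso3}")
```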
1. `POST /admin-area-dynamic-data/exposure` > example
   - See the `floods` entry of this endpoint above.
   - Trigger events are uploaded with `forecast_severity=1` and `forecast_trigger=1`.
   - Warning events are uploaded with `forecast_severity=1` and `forecast_trigger=0`.
   - Indicate no alerts with `eventName=null` and `leadTime=1-hour`: upload `forecast_severity=0`, `forecast_trigger=0` and `population_affected` with any value for all admin areas.
2. `POST /point-data/dynamic` > example
   - Call this to upload dynamic attributes of point assets, per `pointDataCategory` and optionally per `leadTime`.
   - The `fid`s have to match the `fid`s of the initial (static) set of point asset data.
   - Used for uploading `exposure` of schools/health sites/waterpoints:
     - with `key=exposure` and `pointDataCategory` = schools/health sites/waterpoints
     - Only exposed `fid`s are uploaded, not the non-exposed ones.
   - Also used for uploading dynamic attributes of `gauges`:
     - use `pointDataCategory=gauges`
     - and upload 3 values per gauge, with `key=waterLevel/waterLevelPrevious/waterLevelReference`
3. `POST /lines-data/exposure-status` > example
   - Call this to upload a set of exposed line assets (in practice `roads` and `buildings`), per `leadTime`.
   - The set of `exposedFids` has to match the `fid`s of the initial (static) set of line asset data.
   - Only exposed `fid`s are uploaded, not the non-exposed ones.
4. `POST /admin-area-dynamic-data/raster/{disasterType}` > example
   - The filename for 'flood depth' is defined as `flood_extent_<leadTime>_<countryCodeISO3>.tif`.
   - The filename for 'rainfall forecast' is defined as `rainfall_forecast_<leadTime>_<countryCodeISO3>.tif`.
   - If there are 2 events with the same `leadTime`, they have to be combined into the same tif file first. Therefore this call is made outside of the event loop.
   - The rasters should always be uploaded with the same bounding box (typically nation-wide), even if the event area is smaller.
5. `POST /events/process` > example
   - This call starts event processing for the given `country` and `disasterType`.
   - This involves updating pre-existing events, creating new events, closing old events and, if applicable, sending notifications.
   - This call must always be made, whether there are currently alerts or not.
Scenarios for flash floods:
- No event
- Warning event
- Trigger event
- Ongoing event
- Multi-event
Additional notes:
- After an event has started, the pipeline will keep uploading it with `leadTime=0-hour` for 5 days, without changing the forecast values any more. This means the event stays visible in the portal long enough to still act upon.
  - If during this period of 5 days a new or stronger forecast is identified in the same event area with `leadTime` > 0-hour, this trumps the ongoing event.
  - The pipeline owner could deviate from the number 5, but shouldn't without agreeing first.
- NOTE: it is agreed that warnings are only uploaded for a `leadTime` of more than 12 hours. The pipeline owner could deviate from this, but shouldn't without agreeing first.
- Only `leadTimes` of 1-12/15/18/21/24/48 hours are defined. If the pipeline calculates an unavailable `leadTime`, it is rounded down.
- Non-exposed Traditional Authorities within an exposed district/event are also uploaded, with `forecast_severity=0` and `population_affected` present.
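The round-down rule for undefined `leadTimes` can be sketched as follows (a hypothetical helper; the set of defined hours comes from the note above).

```python
# Sketch: round a calculated leadTime down to the nearest defined leadTime.
# Only 1-12, 15, 18, 21, 24 and 48 hours are defined for flash floods.

DEFINED_HOURS = list(range(1, 13)) + [15, 18, 21, 24, 48]

def round_down_lead_time(hours):
    candidates = [h for h in DEFINED_HOURS if h <= hours]
    if not candidates:
        return None  # below the smallest defined leadTime
    return f"{max(candidates)}-hour"
```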
## Drought

- The pipeline defines events by region and by season, as configured by `droughtSeasonRegions` in `countries.json`.
  - Each country is divided into 1 or more regions, each of which has 1 or 2 rainy seasons.
  - If it does not rain (enough) in a rainy season, that implies a drought `event`.
  - `eventName` should be in the format `<season_name>_<region_name>`, e.g. `MAM_National`.
- Per region and season:
  - Only start forecasting at most 3 months in advance. If the start of the season is further away, there is no event; continue with the next.
  - The `leadTime` is set by the number of months until the start of the season. If the season starts in March and it is now January, `leadTime=2-month`.
  - Establish whether a warning or trigger is predicted for the first month of the season. If so, set `forecast_severity=1` for the appropriate areas. If not, set `forecast_severity=0` for all areas.
  - Only include the admin areas that are part of this region, as documented by `droughtRegions` in `countries.json`.
  - Upload exposure data per admin area for this event via `POST /admin-area-dynamic-data/exposure` (see 1. below).
- After the event loop:
  - [Only if no events] Upload `no alert` exposure data via `POST /admin-area-dynamic-data/exposure` (see 1. below).
  - Upload the rainfall forecast raster per `leadTime` via `POST /admin-area-dynamic-data/raster/{disasterType}` (see 2. below).
  - Inform the API that the upload is complete and it can process events and send notifications, via `POST /events/process` (see 3. below).
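The month-based `leadTime` calculation above can be sketched as follows (a hypothetical helper built from the rules in the text).

```python
# Hypothetical sketch of the drought leadTime: months until the season starts,
# forecasting at most 3 months in advance.

def drought_lead_time(current_month, season_start_month):
    """Months are 1-12. Returns e.g. '2-month', or None if too far away."""
    months_ahead = (season_start_month - current_month) % 12
    if months_ahead > 3:
        return None  # season more than 3 months away: no event yet
    return f"{months_ahead}-month"
```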
1. `POST /admin-area-dynamic-data/exposure` > example
   - See the `floods` entry of this endpoint above.
   - Trigger events are uploaded with `forecast_severity=1` and `forecast_trigger=1`. It should be configurable per country whether the pipeline uploads triggers or warnings.
   - Warning events are uploaded with `forecast_severity=1` and `forecast_trigger=0`.
   - Indicate no alerts with `eventName=null` and the same `leadTime` as calculated in the event scenario: upload `forecast_severity=0`, `forecast_trigger=0` and `population_affected` with any value for all admin areas.
2. `POST /admin-area-dynamic-data/raster/{disasterType}` > example
   - The filename is defined as `rain_rp_<leadTime>_<countryCodeISO3>.tif`.
   - If there are 2 events with the same `leadTime`, they have to be combined into the same tif file first. Therefore this call is made outside of the event loop.
3. `POST /events/process` > example
   - This call starts event processing for the given `country` and `disasterType`.
   - This involves updating pre-existing events, creating new events, closing old events and, if applicable, sending notifications.
   - This call must always be made, whether there are currently alerts or not.
Scenarios for drought:
- No events
- Trigger events: raise a trigger event for every season possible, based on the current (or specified) calendar month.
- Warning events: raise a warning event for every season possible, based on the current (or specified) calendar month.
- Ongoing event: TO COMPLETE
## Malaria

- Events are (hackily) defined by `leadTime`: `0-month`, `1-month` and `2-month`.
- So use the `leadTime` also as `eventName`, such that the API can interpret it as different events.
- Warning events are not allowed. So set `forecast_trigger` equal to `forecast_severity` (either both 0 or both 1).
## Frequency of forecasts

The frequency of forecasts depends on the disaster type:

| disaster type | frequency     |
| ------------- | ------------- |
| flood         | daily         |
| malaria       | monthly       |
| drought       | monthly       |
| flash flood   | every 6 hours |
| typhoon       | every 6 hours |
> [!WARNING]
> The update frequency of a pipeline cannot be unilaterally changed, as there is code in the API and portal that is hard-coded on this.

If there is another update from the pipeline within the same period, it will replace the existing data from that period. So for floods, a 2nd upload on the same day will replace the 1st one; for drought, a second upload in the same month, etc.
> [!IMPORTANT]
> Every endpoint contains an optional `date` attribute, in timestamp format.

- If not set, it is assumed that the moment of upload is also the time period this data is about.
- This does not always suffice, however. For example, the input data for a monthly pipeline may be supposed to be available on the 25th of September, but be late. The pipeline can keep checking daily for availability, but if the data only becomes available on October 2nd and the pipeline then processes and uploads it, the IBF-system should know that this data is about September and not about October.
- This can be done by filling the `date` attribute with any timestamp in September. In principle, day and time do not matter. Obviously, day does matter for daily pipelines and time does matter for 6-hourly pipelines.
- The `date` attribute is also important for development/testing/simulation/demo purposes, where you may want to mock a different moment of upload, or simulate 7 daily uploads in quick succession by each time filling the `date` attribute with the next day.
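For the late-September example above, the `date` attribute could be derived as sketched below (a hypothetical helper; the exact timestamp format accepted by the API should be checked against the endpoint examples).

```python
# Hypothetical sketch: build a `date` attribute for a monthly pipeline whose
# September data only arrives in October. Any timestamp in September works,
# since for monthly pipelines only the year and month matter.

from datetime import datetime, timezone

def date_attribute(data_year, data_month):
    # first day of the data month, as an ISO 8601 timestamp
    return datetime(data_year, data_month, 1, tzinfo=timezone.utc).isoformat()
```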