telecon notes - hapi-server/data-specification Wiki


Priority action items:

  1. Bob - provide draft for #136. Attempt to close issue at this meeting.
  2. All - Pick next issue to close out.

Other issues:

  1. Bob - update on the items he said would be complete by this meeting
  2. Bob - have the finalized tutorial posted. Discuss having one of us present at a monthly session
  3. Bob - issue of needing AWS and more permanent hosting of some HAPI servers and services
  4. Sandy - update on logo revisions
  5. Sandy - update on website revisions
  6. Sandy - report on GitHub discussions option for general communication with users
  7. Jon - update on SPASE issue (should we make a copy and create our own DOI if no action in another month?)
  8. Jeremy - show web app that uses HAPI data
  9. Jon and Sandy - discussion of GAMERA HAPI server




We won't be able to get to all of these and can push to the next telecon or a splinter meeting as needed.

  1. Update on issues needed to get to the 3.1 release. Assign action items. Put a report on each action item in the agenda for the next telecon.

    Updates to get to the 3.1 release:

    • 3.1 release: Items #117 and #136 required for 3.1. All other items downgraded to medium priority and saved for next release.

    • If folks have something ‘done’, speak now if you want it in 3.1

    • Next week, #117 and #136 will report out: #117 = ‘way to generically expose any existing complex metadata associated with a dataset’, #136 = ‘way for each dataset to indicate the max allowed time span for a request’

    • After next week, 1 week for everyone to review, then release 3.1 spec.

  2. Discuss how we avoid having things "fall off" like the discussion of the web page revisions and the logo.

  3. Jon: "One thing the server bug on CDAWeb makes me realize is that people will need a place to report things like that for any server. Servers can have contact info, but not all do. Plus, the summer school shows that people might use a persistent Slack channel for HAPI as a way to get quick help / pointers. Anything that can help people get “unstuck” quickly when they try something new will probably be very useful in aiding adoption."

    (2 & 3) Use of Slack or similar for internal and/or external use: Sandy will look into GitHub’s “Discussion” and Wiki features and report back next week

  4. CDAWeb issue. Bernie says Jenn is fixing the software for that. No additional discussion needed.

  5. Discuss logo. Shrink ‘API’ to 2/3rd size and nestle closer to the ‘H’. Go to 2-color: API as dark blue. Sandy will email designers.

  6. Jeremy's crawler code and relationship to solution to problems given in summer school. Jeremy will look at summer school code that Bob will post soon

  7. Bob's updates to - Now has links for verifying and viewing server ping tests.


This telecon was to hear from the VirES implementors about their experience adding HAPI as a service to their data system.

VirES server can send gzip as requested by the client (via the right HTTP header) ==> we need to check our clients – do they properly trigger compression request?
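As a quick client-side sanity check (a sketch, not any particular HAPI client's code): compression is triggered by sending the `Accept-Encoding: gzip` request header, and the body must then be decompressed when the server responds with `Content-Encoding: gzip`. Here the network exchange is simulated with the `gzip` module:

```python
import gzip

def decode_hapi_body(body: bytes, content_encoding: str) -> str:
    """Decompress a HAPI response body if the server honored the
    client's 'Accept-Encoding: gzip' request header."""
    if content_encoding == "gzip":
        body = gzip.decompress(body)
    return body.decode("utf-8")

# Simulated exchange: the server gzips a CSV payload because the
# client advertised gzip support.
csv_payload = "2021-01-01T00:00:00Z,1.0\n2021-01-01T00:01:00Z,2.0\n"
wire_bytes = gzip.compress(csv_payload.encode("utf-8"))
assert decode_hapi_body(wire_bytes, "gzip") == csv_payload
assert decode_hapi_body(b"plain text", "") == "plain text"
```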

HAPI spec needs to address special floating point values - in the JSON output, the special values `NaN`, `Infinity`, `-Infinity` could be used instead.

The issue is that CSV would need to have the string 'NaN', and binary would have an IEEE NaN value.
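A minimal sketch (not tied to any particular HAPI implementation) of the three representations discussed: the literal string 'NaN' in CSV, an IEEE 754 NaN in binary, and the fact that strict JSON has no NaN literal at all (Python will only emit one as a non-standard extension):

```python
import json
import math
import struct

# CSV: the value travels as the literal string 'NaN'
csv_cell = "NaN"
assert math.isnan(float(csv_cell))

# Binary: an IEEE 754 double NaN survives a pack/unpack round trip
packed = struct.pack("<d", float("nan"))
assert math.isnan(struct.unpack("<d", packed)[0])

# JSON: strict JSON forbids NaN/Infinity; json.dumps only emits them
# when allow_nan=True (the default), which is a non-standard extension
try:
    json.dumps(float("nan"), allow_nan=False)
    strict_json_allows_nan = True
except ValueError:
    strict_json_allows_nan = False
assert not strict_json_allows_nan
```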

The spec currently says nothing about text encoding. We have a ticket on this and it is almost done. We are using UTF-8 with null-terminated strings.

HAPI 1201 error is hard to implement – the header leaves before the data! Note that an HTTP 1.1 chunked response can be interrupted to tell the client something is wrong. (The main benefit of chunking is the integrity check – it gives you a way to tell the client that something went wrong.)

Question: Are you doing your own chunking? Ans: Django is relied upon for chunking – pass it an iterator and it does the rest (same with gzipping)
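A sketch of the pattern described above (pass the framework an iterator and it handles the chunking). The error-marker line is an invented convention, not spec language; the point is that once data flows, the only way to signal failure is by how the chunked body ends:

```python
def stream_csv(records):
    """Sketch of a chunked HAPI data stream: the HTTP status and HAPI
    header are already sent by the time data flows, so a mid-stream
    failure cannot be reported as a HAPI 1201 status - the stream can
    only be ended abnormally."""
    try:
        for time, value in records:
            yield f"{time},{value}\n"
    except RuntimeError as exc:  # hypothetical backend failure
        # Invented convention: emit a marker so the client can tell
        # the stream was truncated, then stop.
        yield f"# ERROR: {exc}\n"

def flaky_source():
    yield ("2021-01-01T00:00:00Z", 1.0)
    raise RuntimeError("backend read failed")

chunks = list(stream_csv(flaky_source()))
assert len(chunks) == 2
assert chunks[-1].startswith("# ERROR")
```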

traceability – what if the data is updated; how to indicate this to clients? Currently, this is hard since HAPI does not report what went into the data. We could consider a new endpoint: hapi/provenance

We will add a link to the VirES HAPI server GitHub project on our list of known implementations.


R and Julia client: Pete Reily from Predictive Science.

need to look at OpenAPI and get HAPI registered there.

need to look at OGC/EDR - a standard in Earth Science for delivering data.

from Bobby: Rich Baldwin at NOAA is asking about a comparison between OGC EDR and HAPI.

  1. OGC EDR is at
  2. API docs for it are here:
  3. and here:
  • New HAPI logo coming -- Jon and Sandy are leading this.
  • Sandy will be presenting upcoming SuperMag HAPI server
  • Jon will be presenting the HAPI server and its 3.1 features


TOC mechanism: Bob has a nice solution for this:

cd /tmp; pip install pyppeteer; git clone; git clone; cd data-specification/hapi-dev; /tmp/biedit/biedit -p 9000 -o

Note that pyppeteer is optional (PDF output only). Also note that if you are working on a different branch, you would need to switch to that branch - the above command puts you on the master branch.

(Usage note: don't leave the TOC checkbox checked all the time - it triggers per-keystroke updates.)

SPASE at the Heliophysics Data Portal now lists HAPI URLs. Bob could update the page so that the CDAWeb info link points to the SPASE URL.

Lots of work on the additionalMetadata item in the info response. See that ticket for the latest.


Overview of results from the IHDEA meeting:

  1. once per month, we will have a HAPI telecon devoted to IHDEA members as part of our role representing Working Group 5 (devoted to HAPI) within IHDEA. This will be the first Tuesday of the month at 9am Eastern time, which is more internationally friendly, at least for Europeans, and it's at least not the middle of sleeping time for Japan.
  2. Jon is now coordinating IHDEA WG 3 on coordinate frames, and that group will try to come up with a schema and instances of coordinate frame definitions; there will be several meetings throughout the year; Jim Lewis already asked for access to all master CDFs at CDAWeb to get started cataloging the frame names in use
  3. there was some talk about adding images to HAPI - this is complex; IHDEA folks expressed interest in keeping HAPI simple, emphasizing that this is one of its main strengths; there is the EPN-TAP protocol and the CSO interface which can serve images, so maybe those are enough; IHDEA folks also suggested maybe an IHAPI interface - something separate for images

Update on the HAPI paper - Bob needs edits by Friday; Jon to add COSPAR recommended standard language and reference to COSPAR Space Weather Panel Resolution on Data Access (from 2018):

Referee also wanted update on ability to deal with image data. Bob to just say this is a possibility but is not in there right now.

Discussion of image handling in HAPI

  1. if we don't return numbers, this should be via another endpoint ('hapi/references' or similar? this needs thought)
  2. could possibly also serve event lists - but those can have repeat events at the same time - this is at odds with HAPI time series data
  3. we need to try this and see if it's worth it

For coordinate frames, and to support the full machine interpretability of vector data in HAPI, the following changes will be added to the spec:

  1. add a 'coordinateSchema' optional entry so that each dataset can specify a machine readable schema for interpreting coordinate frame names
  2. add an optional item to a parameter to indicate that it is a vector quantity, and this element will indicate the coordinate name (to be interpreted according to the 'coordinateSchema') and also a 'componentType' which is an enum of 'cartesian', 'cylindrical' or 'spherical'
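A hedged sketch of what an info response might look like with those two proposed additions. The keys 'coordinateSchema' and 'componentType' come from the discussion above; 'coordinateFrame', the example URL, and the exact nesting are placeholders, not settled spec language:

```python
# Hypothetical info-response fragment; only 'coordinateSchema' and
# 'componentType' come from the proposal above - everything else here
# is invented for illustration.
info_fragment = {
    "coordinateSchema": "https://example.org/frames/schema-v1",  # placeholder URL
    "parameters": [
        {
            "name": "B_gse",
            "type": "double",
            "size": [3],
            "units": "nT",
            "coordinateFrame": "GSE",      # name interpreted via coordinateSchema
            "componentType": "cartesian",  # enum: cartesian | cylindrical | spherical
        }
    ],
}

assert info_fragment["parameters"][0]["componentType"] in (
    "cartesian", "cylindrical", "spherical")
```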

Discussion on custom parameters: possibly add a section to the 'hapi/capabilities' response:

    "capabilities": {
      "optionalParameters": [
        { "name": "c_avg",
          "restriction": { "type": "double", "range": [0, "inf"] },
          "default": 0,
          "description": "average according to the number of seconds given for the value of this parameter" },
        { "name": "x_subtractBackground",
          "restriction": { "type": "string", "enumeration": ["yes", "no"] },
          "default": "yes",
          "description": "subtract the background from the data?" },
        { "name": "c_qualityFlagFilter",
          "restriction": { "type": "int", "range": [0, 4] },
          "default": 0,
          "description": "quality level to accept, with 0=best quality, 4=worst" }
      ]
    }

Discussion on SuperMAG - might be good to get something working soon, rather than fit SuperMAG intricacies into HAPI mechanisms


demo of test site for SuperMAG HAPI interface by Sandy Antunes; several options for SuperMAG data (baseline subtraction or not, etc); different ways to handle this - possibly use different prefixes, but danger is proliferation of HAPI Server URLs with confusion about which one is for what data; alternately could be done with additional request parameters, possibly non-standard ones

There is likely a need for HAPI to support additional request parameters.

There are two types of new request parameters that might be needed.

  1. parameters that any server might want to support, but do require some effort to implement; examples: time averaging filter; spike removal filter; possibly a parameter value constraint option, although this is getting really complex! These parameters would have a prefix to indicate that they are optional, additional parameters, but if servers want to implement them, they should use the existing names and syntax and meaning (all time averaging should use the same request parameter and should behave the same on the server)
  2. parameters that are truly custom to one server or even one dataset; these have a prefix of x_

There should be a way to convey the presence of and also the meaning of any additional (standard or custom) parameters in the capabilities file.


We talked about serving images - see today's entry for issue #116 for more discussion.

Sandy showed what he's doing for SuperMAG to add a HAPI server there. He's created additional prefix elements in the URL before the /hapi/ part of the server URL to allow for the combinatorics of options for SuperMAG data. There are many, but the two options discussed were:

  1. baseline = daily, yearly or none
  2. give_data_as_offset_from_baseline = true, false (I think Sandy called this "delta")

In this case, some of the combinatorics for data options could be eliminated if SuperMAG would also allow its baseline data to be released as a separate dataset. We can ask about this, but it would still be worth it to look at ways to support extensions to the standard HAPI query parameters.

Extensions to HAPI query parameters could be described in the capabilities endpoint. They would have to be simple and fully described. You could envision enumerated options as in option1={A,B,C} or numeric options like 0.0 <= option2 <= 10.0 (expressed in JSON syntax in the capabilities). There would need to be a description for each element, and units for any numeric quantities.
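A sketch of how a client or server might validate such extension parameters against capabilities-style descriptions. The description layout (the 'type', 'values', 'min'/'max' keys) is invented here for illustration; only the option1/option2 shapes come from the text above:

```python
# Invented capabilities-style descriptions for the two example options.
EXTENSION_OPTIONS = {
    "option1": {"type": "enum", "values": ["A", "B", "C"],
                "description": "example enumerated option"},
    "option2": {"type": "double", "min": 0.0, "max": 10.0,
                "units": "s", "description": "example numeric option"},
}

def validate_option(name, value):
    """Accept a request option only if the capabilities describe it
    and the value satisfies its restriction."""
    desc = EXTENSION_OPTIONS.get(name)
    if desc is None:
        return False  # undescribed extension parameter
    if desc["type"] == "enum":
        return value in desc["values"]
    if desc["type"] == "double":
        return desc["min"] <= float(value) <= desc["max"]
    return False

assert validate_option("option1", "B")
assert validate_option("option2", "9.5")
assert not validate_option("option2", "12.5")
assert not validate_option("x_unknown", "1")
```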

Sandy will make his existing server publicly available for testing, and Jeremy will try it out. Bob will make his Intermagnet development server available too, and we can compare them next week to see how to proceed.


For issue 115 ( ) the SPASE group and the IHDEA group are planning to come up with a way to identify coordinate frames in a standard way. An IHDEA working group on this has actually existed for a year already, but nothing has been done yet. There is potential for leveraging SPICE-based techniques, but SPICE does not actually have a naming convention for frames. Several folks have their own conventions, but none has gotten wide adoption. There are a few standard papers on coordinate frames that people use as the basis of their conventions.

We talked about allowing non-standard schemas for 'units' and also for 'coordinateFrameName' elements. As long as a reference to the specification is also listed, we thought it would be OK for people to specify a schema that we did not explicitly list. There was talk of listing all custom schemas in the 'about' endpoint, but then Bernie suggested and we agreed that it really belongs in the 'info' response (close to the use of the new schema name), which is where you need it anyway. A ticket has been opened for this.

Next week, Sandy will present progress towards a SuperMAG HAPI server. SuperMAG present some challenges since it currently presents data in a way somewhat orthogonal to HAPI (each station is not a dataset, but HAPI tends to think of them that way), and also there are lots of options or flags that the SuperMAG native access mechanism exposes.

We briefly talked about how HAPI could possibly be used in a cloud-based setting. This is being explored for SuperMAG, and then also for possibly model output data. Model data may have variable grid structure, so the data structures are changing shape at each time point. HAPI does not currently support this.

Finally we talked about using HAPI for images. See ticket #116. Bob and Sandy are both interested in this, so we can talk about it next time too.

Agenda for next meeting:

  • quick status update on HAPI paper and any HAPI presentations
  • Sandy to present on SuperMAG (15-20 minutes, plus discussion)
  • overview of a sample coordinate frame schema (written by JonV and based on CDAWeb info) - this is just a toy version of a real schema, but we can reference it and it shows people the basics of what would be needed for a more full-scoped coordinate frame naming standard.
  • talk about HAPI for images
  • review of outstanding, high-priority tickets


talk about 3.1 issues and priorities


no meeting


To discuss:

Bob - will make webinar on client usage of HAPI for science users after the paper comes out

Eric: "target specific users; scientists versus data providers"

Closed the ticket on user identity management; this is ready to be finalized after confirmation by Aaron and others at CDAWeb.

These are some older tickets with no champion, so we reviewed them to see if anyone wants to revive them; otherwise they should be closed.

  • (servers emit HTML) -- agreed to close with comments about the other solution (use a self-documenting REST style)
  • -- agreed to keep open with low priority for now

Wed 1pm Eastern is next ticket review meeting

Next regular meeting will be June 14.


  1. HAPI 3.0.0 on zenodo: communities are SPA and PyHC
  2. HAPI paper submitted to JGR
  3. Jeremy: what about clients if servers go to 3.0.0? How do clients negotiate with the server which version to use?

What about a capabilities object that identifies other versions of the spec:

otherVersions: [
         { "version": "2.1",
           "url": "" },
         { "version": "3.0",
           "url": "" }
]

The above approach is possibly non-standard - Bob found this reference with 4 methods.

Need an issue for this - not for before 3.1
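A sketch of how a client might use an otherVersions-style list for negotiation: pick the newest spec version that both sides support. This is just one of the possible methods, not an adopted mechanism:

```python
def negotiate(server_versions, client_supported):
    """Pick the newest mutually supported spec version.

    server_versions: list of {'version': str, 'url': str} entries,
    shaped like the otherVersions fragment above.
    """
    candidates = [v["version"] for v in server_versions
                  if v["version"] in client_supported]
    if not candidates:
        return None
    # Compare numerically so '3.0' > '2.1' (string compare would too,
    # but '10.0' would sort wrong as a string).
    return max(candidates, key=lambda v: tuple(int(p) for p in v.split(".")))

server = [{"version": "2.1", "url": ""}, {"version": "3.0", "url": ""}]
assert negotiate(server, {"2.0", "2.1", "3.0"}) == "3.0"
assert negotiate(server, {"2.0", "2.1"}) == "2.1"
assert negotiate(server, {"2.0"}) is None
```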

Note: some things in 3.0 that will be deprecated.

Clients: Python and Matlab: not tested yet against 3.0; SPEDAS - not yet at 3.0; Eric will mention to SPEDAS team

Sample 3.0 data:



  1. ok to close issue #107 (it's attached to the 3.0 release); create new ticket for longer term web site updates
  2. wording discussion for issue #77 on keyword normalization: best to use the latest language in the spec, but include the older style keywords to indicate that they are deprecated; have a block at the start of the 3.0 spec indicating the big changes
  3. how to improve landing pages so that people who don't know anything about HAPI can get started.
  4. COSPAR drafts / updates due Dec 7.

Landing page improvement ideas:

a. better page (the content at comes from what is checked in to the project associated with . The actual server is at Amazon, but the landing page gets pulled / served from GitHub.)

b. improve the user interface of the "HAPI Server Explorer" (or whatever Bob wants to call it), running at . Ideas: add verbs to dropdown menus; add an intro paragraph; include the name (HAPI Server Explorer, or equivalent) at the top of the page; create a set of slides to explain the usage, or maybe a video (use Camtasia for making videos, or maybe OBS)

At the Heliophysics Data Portal, maybe have two links for a HAPI-accessible data set: one to use the HAPI data, and one with info about how to use HAPI. Aaron emphasized the need to let people know that this service is available and how to use it.

Action items

  1. Jon - close issue #107 about (after changing link to
  2. Jon - create new ticket for better intro site - a "Getting Started" page for brand new HAPI users.
  3. Jon - look into updates for the main page
  4. Bob - continue with spec updates for issue #77
  5. Bob - revisit / restart the HAPI paper; submit to JGR, Space Science Reviews, Advances in Space Research
  6. Jeremy - keep thinking about sub-group for server mods
  7. Jon - check with Masha on COSPAR group status
  8. Bobby - maybe include HAPI (and SPASE) on COSPAR presentation

Next meeting is: Dec 7 - this is during AGU, but it keeps us checking up on action items.


went through all issues to categorize by milestone (3.0 or 3.0+)

Action items:

  1. Jon to review pull request for issue #94
  2. Jon to check with Eric on MMS units status
  3. change units pull request to only have units specs with good online info
  4. Bob to write up spec changes for keyword normalization
  5. all to read related issues #82 #83 and #87 for discussion next week
  6. Jon to ask Jeremy about starting server extensions working group


  1. How to describe access to a dataset via HAPI in SPASE via the <AccessURL> element?

There was a lot of discussion about this - the solution we proposed last time was debated much more than I anticipated. SPASE is not clear about the intent of <AccessURL>, so we debated responding using HTML versus JSON.

Two tickets were opened after the discussion: 101 (use HTML) and 102 (add links to make HAPI more truly REST-ful)


  1. Jon gave summary of meeting with Beatriz Martinez at EASC; HAPI server coming for Cluster Science Archive (CSA), but they are re-doing the server and will be done in January - no changes to metadata schema, so Bob and Jeremy will look at their metadata to see how it maps to SPASE; they are interested in using Bob's server initially as a living (actually used) example and then create their own implementation eventually; they were interested in any Java components we might be able to offer to help
  2. discussion about how best to incorporate HAPI info in the AccessURL of a SPASE record. The info response is not the right thing, since that is very computer-centric, so the current thought is to use the top-level URL for the HAPI server and then reference the dataset ID in the ProductKey element of AccessURL.
  3. Nand asked us if we were really making a difference and suggested it is time to zoom out and ask larger questions about impact and relevance. We need to push adoption more by getting tutorials / quick starts out there. We need to finish the HAPI paper.
  4. IHDEA meeting is 19-22 Oct; agenda still being formed - people need to submit talks now since that's how the agenda will be formed; see this link:


  1. Bob to review CDA metadata with Jeremy ahead of Dec 2 meeting with Beatriz
  2. Jon to look at adding quick start link to HAPI server home page
  3. Jon to look through SPASE records to see which ones would need HAPI access info
  4. Jon to add HAPI access info to agenda of next SPASE meeting (likely to be on Thursday Oct 15)
  5. all: submit your IHDEA talk now


  1. for Issue 94 (server info page), we decided to use the endpoint, and added a publisherCitation optional element. This is ready to go into the spec. Additional items which are more dynamic belong on a different endpoint, discussion of which belongs in another issue.
  2. broken links now fixed on
  3. updates from Bob: generic HAPI server being updated to work on a Windows server. Supported operating systems for this generic server are: Unix, Mac, Windows, Raspberry Pi, Docker
  4. HAPI client being upgraded to chunk up requests for longer time ranges; discussion of caching, which is linked to this capability due to need to
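The request-chunking idea in item 4 can be sketched as splitting one long time range into smaller sub-requests (e.g. day-sized pieces that align with cache boundaries). This is illustrative only; real clients may pick different boundaries:

```python
from datetime import datetime, timedelta

def split_time_range(start, stop, chunk=timedelta(days=1)):
    """Break one long HAPI request into smaller (start, stop) pairs.
    Day-aligned pieces are cache-friendly because repeated requests
    for overlapping ranges hit the same sub-intervals."""
    t0 = datetime.fromisoformat(start)
    t1 = datetime.fromisoformat(stop)
    pieces = []
    while t0 < t1:
        t_next = min(t0 + chunk, t1)
        pieces.append((t0.isoformat() + "Z", t_next.isoformat() + "Z"))
        t0 = t_next
    return pieces

chunks = split_time_range("2021-01-01T00:00:00", "2021-01-03T12:00:00")
assert len(chunks) == 3
assert chunks[0] == ("2021-01-01T00:00:00Z", "2021-01-02T00:00:00Z")
assert chunks[-1] == ("2021-01-03T00:00:00Z", "2021-01-03T12:00:00Z")
```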

Action: Bob will update the spec with the new "about" endpoint; others will review his pull request and can make suggestions.

Action: Jon to incorporate time-varying changes into the spec (different pull request) ahead of the IHDEA meeting in October.


  1. opened new issue #98 regarding how a server can indicate the ability to handle parallel requests from the same client; note that sometimes it's hard to tell how many requests are from a single client if you have multiple servers behind a load balancer
  2. AMDA server is using HAPI now based on Bob's node-js front end; no public API could be found yet at AMDA (ask about this!); a new version of the node-js server is about to be released; the AMDA folks asked about a "HAPI inside" label or logo. Several will look into this (the PyHC group has a logo design process underway). Several responded to Genot's questions.
  3. need to finalize and close issue about HAPI error codes for time ranges
  4. no meeting next week


  1. closed issue #95 (should stop=None be interpreted by server to be the stopDate for the dataset?) since this can be handled in client code
  2. action item for all: comment on issue 97 about time error code clarification
  3. Bob presented the case for more complete info about each HAPI server (all.txt is very plain right now). We need a schema for the server list, and it would also be great to have an endpoint whereby servers can emit their own info (presumably using the same schema element). Bob has already created issue #94, and he will use that to come up with a schema for server details.
  4. Bob to update web page on the HAPI main page - hopefully he can talk about recent web page updates next week
  5. Jeremy gave demo of Sparkline ( capability he has added to his Autoplot-based HAPI server, which can be added to other servers if they want a quick visualization capability



  1. release 2.1.1 is pretty much ready to go; procedural questions:
    • retain a running changelog versus just changes for most recent version (maybe each major release has running changelog);
    • the pull request mechanism works as long as people work on the same branch for modifications
    • need to copy in the contents to the 2.1.1 directory
  2. decide which changes are key for resolving and including in 3.0; see this list:
  3. meeting this Wednesday 9-11:30am with ESAC about HAPI server
  4. GEM meeting this week
  5. AGU abstracts due July 29; ; some interesting sessions:
  6. Helio HackWeek - maybe a short presentation about uniform data access to participants?
  7. ISWAT sign up - still needs doing:
  8. Bob - updating the IDL client - needs installation instructions to be added by Scott; also making more user friendly; better landing page for list of active servers; schema for server list; Bob to report / demo next week.



  1. release discussion: 3 issues (small clarifications) to go into 2.1.1; then stamp this out; then copy to hapi-dev to begin work for 3.0; make sure there are no version 3.0 issues contaminating the 2.1.1 release.
  2. ISWAT cluster sub team? General agreement to join. Separate paper for COSPAR meeting (different from Bob's current draft); include Batiste Cecconi and Arnaud Masson
  3. HAPI web site improvements needed
  4. paper: journal is for Space Weather (SPASE is in here), or maybe Space Science Reviews? (same as SPEDAS paper)
  5. Heliophysics 2050 workshop - looking for white papers; need one for Data Environment:
  6. around the room - action item updates; new servers:
    • Arnaud asking for time to meet for ESAC server; Bob's generic server an option
    • no news from InSook; PPI node needs more support or other project leadership

Remember to do for 2.1.1 release: add leading comments about key clarifications.



  1. read Bob's draft paper - see his email dated 2020-06-22
  2. see previous action items
  3. optional: read the recent SunPy paper:
  4. side issue: move URI template wiki and effort to a separate GitHub project; make it more useful by implementing in more languages -- translate the JavaCC file to antlr, then to other languages? Summer of Code project? find JavaCC code for URI template parsing (Jon)


Today's call was a review of tasks needed to focus on pushing HAPI forward. Here is what we came up with, in order of importance.

  1. finish spec updates
  2. get SuperMAG HAPI server up and running
  3. get PDS HAPI server up and running
  4. finish and publish Bob's paper on HAPI - the spec and its uses, showing wide adoption in Heliophysics and also at the PDS/PPI node
  5. status dashboard for existing servers
  6. integration with SpacePy
  7. continued and even more coordination with PyHC projects

For the next few months, we will try monthly telecons of 30 minutes to keep momentum going. The link to these notes will be included in the weekly telecon notice.

Action items from today:

  1. Bob to email Rob about SuperMAG (done)
  2. Bob to send link to paper (done)
  3. Jeremy to contact InSook
  4. Jon - get busy with those spec updates!


  1. walk-through of Jeremy's stand-alone Java client (no dependency on Autoplot); currently offers low level access to an iterator of records as they stream in; server oriented methods are static methods; very low level options expose the JSON content of the response; higher level methods allow conceptual access that isolates you from the actual JSON content (which also insulates users from potential changes in the JSON)
  2. discussion of issue:77 on some naming changes to remove a few quirks - see the issue for details


  1. Looked over issues
  2. talked briefly about availability info; some tickets related to this already
  3. Jon will present at the Python meeting on Wednesday.
  4. Jeremy will present Java client next time: May 11
  5. presentation / discussion of SuperMAG HAPI interface targeted for June 1.


  1. Jeremy is starting a Java client, mostly coded and looking for people he could work with.
  2. talked a bit about client identities, to support Super Mag server Bob Weigel is working on.

next meeting

  1. "Time Series Data" vs "Time Ordered Data". Shing says that Time Series Data implies F(T) where F is an array of scalars.


  1. Chris L from LASP presented their HAPI server, and plans for the next version.


  1. meeting was planned but was cancelled.


  1. Jeremy has been working with In-Sook on the UCLA server.
  2. Bob Weigel has been working to have the verifier check on version numbers.


  1. in 2.1.0, need an update now about labels and units so the verifier can be updated
  2. we need a 2.1.0 official release (at the right time point - to allow proper differencing)
  3. changelog entries need to link to individual commits or diffs
  4. examples need to be added to clarify units and labels:
The spec doesn't describe very well what to do for the label (or for the units) of multi-dimensional arrays.
For a 1-D array, it seems clear enough: the label (or units) can be a scalar (that applies to all elements
in the array) or an array of values with one string per data element  (and the length must match the size
of the 1-D array).
"name": "Hplus_velocity",
"description": "velocity vector for H+ plasma",
"type": "double",
"size": [3],
"label": "plasma_velocity",
"units": "km/s"
For a two-dimensional (or higher) array, we should allow for the units and the label to be a scalar that
can then apply to the entire multi-dimensional object:
"name": "velocities",
"description": "two velocity vectors for different plasma species",
"type": "double",
"size": [2,3],
"label": "plasma_velocity",
"units": "km/s"
The idea of having an array parameter is that all these elements have a strong "sameness" about them,
so expecting the units to be the same is reasonable.
Note: for an array of two vector velocities, the size should be [2,3] instead of [3,2] since the
fastest changing index is at the end of the array.
Note that the ordering is not ambiguous for things like a [2,2] because the spec indicates that the
later size elements are the fastest changing.
See this:
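The "last index varies fastest" ordering can be seen by flattening a size [2,3] parameter the way the record stream lays it out, so each complete vector is contiguous (a sketch, independent of any particular client):

```python
# For size [2,3] (two vectors of three components), the record stream
# orders values with the LAST index varying fastest.
size = [2, 3]
flat = ["v{}{}".format(i, j) for i in range(size[0]) for j in range(size[1])]
assert flat == ["v00", "v01", "v02", "v10", "v11", "v12"]

# So the first three values form vector 0 and the next three vector 1.
vectors = [flat[0:3], flat[3:6]]
assert vectors[0] == ["v00", "v01", "v02"]
assert vectors[1] == ["v10", "v11", "v12"]
```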

You could also give a label for each dimension:
"name": "velocities",
"description": "two velocity vectors measured from different look directions",
"type": "double",
"size": [2,3],
"label": [["species index"], ["vector component"]],
"units": "km/s"
Each label in this case applies to the entire dimension.
But the values still all have the same units.  It's hard to think of a case where the units would be
different - otherwise, why is it an array?
You could also label each vector component:
"name": "velocities",
"description": "two velocity vectors measured from different look directions",
"type": "double",
"size": [2,3],
"label": [["species index"], ["Vx", "Vy", "Vz"]],
"units": "km/s"
Or, you might want to label everything:
"name": "velocities",
"description": "two velocity vectors measured from different look directions",
"type": "double",
"size": [2,3],
"label": [["H+ velocity", "O+ velocity"], ["Vx", "Vy", "Vz"]],
"units": "km/s"

The units behave in a similar way, in that a scalar unit string is broadcast to all elements in its dimension,
but an array of string values is applied element by element. One use case for this would be for
vectors specified as R, theta, phi values.
add example with different units (R, theta, phi, for example)
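Following the note above, one sketch of what the per-component units case might look like (the parameter name 'position_rtp' and description are invented; the array broadcasting mirrors the label examples):

```python
# Hypothetical parameter with per-component units: an array of unit
# strings is applied element by element, just like an array of labels.
position_rtp = {
    "name": "position_rtp",  # invented name
    "description": "spacecraft position in spherical coordinates",
    "type": "double",
    "size": [3],
    "units": ["km", "degrees", "degrees"],  # R, theta, phi
    "label": ["R", "theta", "phi"],
}

# An array of unit strings must match the length of the dimension it
# describes, one unit per component.
assert len(position_rtp["units"]) == position_rtp["size"][0]
assert len(position_rtp["label"]) == position_rtp["size"][0]
```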


meeting agenda

  1. brief report on Jon's UCLA visit - I will tag-up with In Sook a few more times via phone call over the next few months; she had some fairly complex data and had some issues fitting it into HAPI; the PDS group there needs help converting PDS3 data into CDF
  2. going through outstanding issues identified by Bob: PDF problem, nulls in bin ranges, null in labels
  3. next telecon meeting: Feb. 12, 1pm

Notes for the UCLA HAPI server: what about using the VOTable tools already in use; leverage Eric on a different floor! also some PDS3-to-CDF tools at CDAWeb! Jeremy to work with In Sook - possibly loop Jon in for a few discussions

MAVEN SWEA (Solar Wind Electron Analyzer) data is in CDAWeb too; the elevations vary for the first 8 energies only, and then are fixed for the remaining 56 energies; HAPI could capture these elevations as a separate variable in the header.

Action: Jon to find out how the team views this data; PI at LASP is listed in metadata

Action: explore how image pixel references could also be captured using these bins

Action: remake HAPI 2.1.0 PDF and see if it fixed the Github renderer; ask around - is this broken?

Action: add phrase about bins content that having both centers and ranges is also OK.

Action: need more bins examples since this is one of the most complex parts of the spec

Action: Bob to write up a description about allowing null ranges when only some bins have centers but no ranges (if no bins have ranges, then just don't include a ranges object); Jon will review the writeup

Action: for integral channels, explain that you still need to put a (very high, but just high enough) bin boundary

Action: add write-up for time-varying bins and for header references

Action: Jeremy to meet with In Sook Moon at UCLA to help along the HAPI effort there; see above too

Action: fix time string length for datashop Cassini MAG dataset (Bob noticed this)

Action: Doodle poll on new meeting time (and alternate meeting week of Feb 10)


  1. Discussion about CDAWeb server's approach to ordering of parameters in the request; can CDAWeb be 2.1 compliant? Nand to consider this soon.
  2. In looking at CHANGELOG: need to clarify changes in CHANGELOG: add numbers; categorize as to effect on servers
  3. AGU updates: Amazon lambdas could be useful

Plans for this year:

  1. get spec ready for 3.0 release; Jon to come up with a reading list for the "what to put in 3.0" discussion on Jan 27
  2. check up on PDS server - Jon to visit UCLA in January - Jeremy participating via telecon?
  3. Python client - spruce up docs and packaging; make sure other libraries use this as a lower-layer element
  4. training for scientists - tutorials at meetings; tutorial telecon captured as video and crib sheets - borrow this technique from Eric Grimes (Jon to ask for assistance)!
  5. status and continuity of LASP server
  6. paper out on the 3.0 spec; Bob has an early draft he will send around

Discussion about data and DOIs; CDAWeb will acquire DOIs via SPASE; can retrieve data now using DOI or CDAWeb ID or SPASE ID; waiting for missions to coordinate DOI assignment; should HAPI offer a more generic data query capability for other IDs? There are also open questions about versioning and provenance. HAPI will use this standard when it becomes available.

Next meeting: Jan 27 to decide about issues to include for HAPI 3.0


  1. IHDEA meeting update: the verifier is very popular; the new ability to handle time-varying bins was presented; HAPI is now accepted as the interoperable way to deliver time series data; ESDC (the ESAC Science Data Centre, where ESAC is the European Space Astronomy Centre) is planning to adopt HAPI - they are waiting for a lull in activity, and we will coordinate with them starting around the new calendar year; CDPP is also planning an implementation
  2. is having problems in some browsers because of its certificate and https issues

Action items:

  1. Jeremy to fix the certificate issue
  2. Jeremy to prepare a demo of Autoplot using SAMP for something other than granule access. SAMP can deliver das2 endpoints, and could similarly expose HAPI endpoints (either at the dataset level or probably also at the whole-server level)
  3. Bob to give a demo of the generic server capability (it is now installable via pip)
  4. Jon to update the spec with all recently (conceptually) approved changes, including: time-varying bins, references in the header, a cleaning up of the usage of id versus dataset, etc; changing time.min and time.max to start and stop in the request interface (keep but deprecate the older terms)
  5. PyHC meeting in two weeks - Jon and Aaron to attend; others will participate online; ensuring a sensible, common data access mechanism within the emerging library is of particular interest to the HAPI crowd
  6. next telecon on Nov 18


  1. The two new features (time-varying bins and references in the info header) have both been tried on live demo servers, and seem to be working well. See Ticket #83. These are ready to be written up in version 3 of the spec.
  2. units - For HAPI 3.0, we would also like to add "unitsSchema" as an optional dataset attribute. This would allow data providers to specify what convention is to be used for interpreting the units strings in the metadata (i.e., info header). As mentioned in Ticket #81, which is about this topic, conventions like UDUNITS2 are suitable for this, and they satisfy case 1 and case 2 described in Ticket #83. There also needs to be a way to specify which version of the schema is in play, and we decided to start with a rough version identifier, such as "udunits2" rather than something very specific like "udunits2.2.26", since that would be harder for clients to manage when there are minor version changes. The other example is the units from AstroPy, which are apparently part of the core AstroPy package, now at version 3.2.1, so that using AstroPy-compliant units in HAPI metadata could be indicated using a "unitsSchema" of "astropy3". Rather than force people to choose a units schema from a list, we will describe the ones commonly in use and provide recommendations for how to come up with the appropriate schema name. If clients do not recognize the unitsSchema, they will just ignore it. Note that each dataset specifies its own unitsSchema (individual parameters do not).
  3. other news: WHPI (Whole Heliosphere and Planetary Interactions) is attempting to make plasma data from all relevant Heliophysics missions and models accessible. There's a meeting next September, and ideas are floating around now to help make this happen. HAO has money to work on this. This would be a great chance to point out that HAPI was designed exactly for this problem, and to try to get some traction with and support from this group.
  4. other news: Cluster data is going to be mirrored at CDAWeb, where the default option is to present it in its converted CDF form (ISTP-compliant) and serve it via the usual CDAWeb conventions, including HAPI.
  5. upcoming meetings:
    1. Aaron headed to Big Data meeting for NAS - he is looking for ideas and slides
    2. IHDEA - Jon will present the latest updates to HAPI
    3. PyHC meeting; Aaron and Jon to attend; Bobby attending remotely and has ideas he wants advanced
    4. AGU - relevant sessions are Monday (IN11E - Tools and Databases in Solar and Planetary Big Data) and Thursday (SH41C - Python for Solar and Space Physics)
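The unitsSchema idea from item 2 above might look like this in an info header (the dataset details are illustrative; "km s-1" is valid UDUNITS2 syntax):

```json
{
  "HAPI": "3.0",
  "status": {"code": 1200, "message": "OK"},
  "startDate": "2000-01-01T00:00:00Z",
  "stopDate": "2010-01-01T00:00:00Z",
  "unitsSchema": "udunits2",
  "parameters": [
    {"name": "Time", "type": "isotime", "units": "UTC", "length": 24, "fill": null},
    {"name": "proton_speed", "type": "double", "units": "km s-1", "fill": "-1e31"}
  ]
}
```

A client that understands UDUNITS2 can then parse "km s-1" mechanically; one that does not simply ignores the unitsSchema attribute.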


This call was to give a quick status update from the sub-group working on references and time-varying parameters. A few suggestions were logged in Ticket 82.

Aaron also commented about maintaining a focus on implementations, and having something ready for people who want to implement a HAPI server but want to just drop in a pre-existing, generic server that can distribute their data using HAPI. We also talked about future connections to NSF efforts, such as the SuperMAG effort that is underway, which we hope will bolster those ties. Madrigal would also be a useful connection to make.


  • Eric will send Bob a note to update the HAPI main web page about the IDL client in SPEDAS.
  • Time-varying bins virtual hack-a-thon is this Thursday; iron out spec changes and implications
  • upcoming meetings: AGU (PySPEDAS poster will mention HAPI, Jeremy is in Python session, Jon has 2 abstracts on HAPI), also the IHDEA meeting in October - present time-varying bins update to ESA contingent


agenda: Jeremy's presentation on Das2 server options

Das2 servers have flags for individual datasets that grew out of the original use for Das2 servers, which was as a somewhat internal protocol between a client and server written by one developer, who understood what all the "secret" options were and could use them to optimize the data transfer for what the client needed. Jeremy advised against this kind of behind-the-scenes options proliferation.

Because the ensuing discussion led to significant interest in adding optional capabilities to HAPI servers, the bulk of the content for this telecon is captured in Issue:79

See that issue for details about adding server processing options.

Action items:

  • Bob to present about the FunTech server
  • Jon to follow up on server implementers
  • need examples of capabilities modifications to support binning, interpolation, and spike removal



Focus for the future:

  • more complete example package showing people how to access typical dataset using multiple clients
  • paper describing HAPI - Bob W. to send around draft; options: JGR, Space Phys. Rev


  • Bob: send around client test suggestions
  • all test clients per Bob's directions
  • Jon: check with Nand and Doug about server status
  • Jeremy: prepare demo of Das2 dataset options management (only retrieve finest resolution, etc)
  • Jon: straw-man examples of binning, interpolation and de-spiking
  • all: bring servers up to new spec


We decided to proceed with the release of 2.1.0. The one thing left to do is update the spec to reflect the resolution of Issue #69 about how to handle a user request for parameters that are not in the same order as what is in the metadata. We are also adding an additional error code (1411 - out of order or duplicate parameters).


Suggestions - update the "server" nomenclature in the spec to reflect intent: this is the full server and URL prefix of the top server location / entry point. After the first example (http://server/hapi/data?id=alpha&time.min=2016-07-13) clarify that "server" includes the hostname and possible prefix path to the HAPI service.
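As a concrete illustration of the nomenclature suggestion (using only the example URL above), the pieces can be pulled apart with Python's standard library; this is a sketch, not part of any client:

```python
from urllib.parse import urlsplit, parse_qs

# Example request from the spec text above.
url = "http://server/hapi/data?id=alpha&time.min=2016-07-13"

parts = urlsplit(url)
query = parse_qs(parts.query)

# Per the suggestion, "server" means scheme + hostname + any prefix path,
# up to and including the HAPI entry point (here "/hapi").
hapi_root = parts.scheme + "://" + parts.netloc + parts.path.rsplit("/", 1)[0]

print(hapi_root)  # http://server/hapi
```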

Lots of questions about prescribing the order of returned parameters - Nand: this can add confusion when there is no header in the response (then you have to consult the info to see what you got in the response). The differences are focused on client expectations (return what I ask for) versus a data-centric perspective (the data exists and will be returned with as few changes as possible - no re-orderings). Jon will discuss with Bob and Jeremy and bring a suggestion to the next telecon.


topics discussed

  1. Duration (issue #75, now closed) is tied to time-varying bins, so the explanation is not in the spec document, but on a separate implementation page until the time-varying bins are figured out.
  2. Need one last look at all changes since last release:
  3. Need to make a changelog with diffs for key updates; roll up typos, etc, into one item
  4. Bob looked at timeseries data from earthquakes (including electric field values). He said their standard seems pretty easy to map to HAPI - we have similar elements and a similar approach (of course the details differ); he will send some links; the timeseries link is the one for data.
  5. normalizing ids and labels for version 3.0 (See discussion below)
  6. coordinating efforts on Python HDEE proposals

Considerations for Normalizing the use of descriptive labels (see below for details)

For parameters:

  • id - machine-readable ID with limited characters (no spaces or odd characters), e.g. BX_GSE
  • label - short, human-readable version of the ID; spaces OK, e.g. "Bx in GSE coordinates"
  • description - up to a paragraph of information about the parameter or dataset; think figure caption; same level of info as in a SPASE record

SPASE analogs are: parameter key, name, description (the main thing is to have them correspond one-to-one with SPASE, and maybe others?)

Relationship to resourceURL? If this is present, then 'description' is obtainable there.

For catalog entries (each entry is a dataset): currently, each dataset has: id (required), title (optional). Suggestions for 3.0:

  A. each dataset has: id (required); optional: label, description, start, stop, cadence
  B. have a verbose flag on the catalog request that generates a full, parameter-level catalog of all datasets; if this is supported, advertise the catalog verbosity option in capabilities
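A sketch of the richer catalog entry idea; id and title are the current spec fields, while description, start, stop, and cadence are the proposed additions (all dataset values here are invented):

```json
{
  "HAPI": "3.0",
  "status": {"code": 1200, "message": "OK"},
  "catalog": [
    {
      "id": "ACE_MAG",
      "title": "ACE magnetometer data",
      "description": "16-second magnetic field vectors in GSE coordinates.",
      "start": "1997-09-02T00:00:00Z",
      "stop": "2019-01-01T00:00:00Z",
      "cadence": "PT16S"
    }
  ]
}
```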

Does this make HAPI too much of a registry? Original idea was to let discovery focus be outside HAPI. It makes HAPI usable in other contexts.


Discussion about the generic server Bob W. is creating:

The server has multiple installation methods, one of which is a Docker image. This option has drawbacks, since it's hard to edit files inside Docker (you have to ssh into the Docker VM and then use whatever primitive OS tools are available, like vi or nano). So after someone configures their server, they could build a Docker image, but it might not be too useful as a delivery mechanism in that form. Unless, that is, the server config file were kept external, with the Docker image told about it at startup. Bob will look into allowing the run option for the Docker image to take a URL argument pointing to an external config file.

Volunteers are needed to try out Bob's method and see how easy or hard it is to build the back end components to feed the HAPI front end.

What is also needed is a GUI mechanism for building that back end. Building this part could be a separate open source project.

NOAA Space weather week is coming up; this is a good time to connect with both the science and operations / developer side of the house at NOAA. Also, the archiving side (NGDC) and the realtime side (NOAA Space Weather Prediction Center) will both be there, and they have separate mandates that don't mix often. Jon will contact Larry P. and Bob S. to see about connecting with NOAA people about using HAPI for their archive and real-time data

Specification updates

Jon is planning to put some revamped TIMED/GUVI data behind a HAPI server, and one issue is that each measurement needs to be correlated with a lat/lon on the Earth. We need a way to associate data columns with support-info columns, like lat/lon. Also, sometimes the lat/lon may be fixed, or partly fixed, i.e., changing every few years (when the ground magnetometer station is moved). Options are:

  1. just have a column that repeats the same value (this is the default now, and probably until HAPI V3)
  2. the header could list all the options for a slowly varying quantity, and also provide labels for each value, and then the data column could reference the label and only repeat that instead of the entire set of values; this is a kind of built-in compression
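Option 2 could be sketched roughly as follows; note that "valueOptions" and the label-reference mechanism are entirely hypothetical, invented here for illustration only:

```json
{
  "name": "station_latlon",
  "type": "double",
  "units": "degrees",
  "size": [2],
  "valueOptions": {
    "site_2001": [34.72, -112.45],
    "site_2006": [34.70, -112.40]
  }
}
```

The data column would then carry the short label ("site_2001") instead of repeating the lat/lon pair on every record.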

We should look at how Earth science organizes data products that need lat/lon registration.

Next steps for HAPI - better on-boarding process for people who want to adopt HAPI. Groups so far that have done this are CCMC and Fundamental Technologies (PDS/PPI sub node in Kansas).

We need to make our documentation have more of a flow or be more organized and cookbook oriented.

There are still a lot of outstanding open issues on the spec document. These need to be cleaned out. Most are documentation clarifications, but two are larger issues. The biggest one is handling "mode changes" (bin values that change with time). This is issue 71. Jeremy, Jon and Bob need to meet separately to try their latest approach as outlined in the issue.



attendees: Jon, Jeremy, Todd, Chris, Eric

  1. Happy New Year everyone; we are missing our NASA colleagues and hoping they can get back in there soon
  2. is this meeting time OK for the upcoming year? will do a poll later to see if this time is OK
  3. EGU - abstracts due Thursday; Tom is going from LASP; no session identified for data environment topics; no one else likely to go
  4. iSWA HAPI Server is up and Jeremy reports that it performs well; Jon to ask CCMC to advertise it more on their main page
  5. Masha from the CCMC mentioned at the AGU that HAPI was approved by COSPAR and that we should form a group about it before the next COSPAR meeting in March; Jon to follow up with her about this, since it was a hurried conversation in the poster hall; the COSPAR approval of SPASE is still in process pending some clarifications, possibly related to how SPASE and HAPI interact
  6. is the URI template mechanism a part of SPASE? Todd thinks it can be listed in the AccessURL
  7. discussion about creating a "drop-in server"; we need to first define this more clearly; some kind of ready-to-run mechanism to support the use case where a provider does not already have a server that can be modified; definitely it should provide proper HAPI parameter parsing and a secure environment; maybe these parts could be done in multiple languages (NodeJS, Python, Java) to give people options. Bob's server is coming along nicely and could be made into something installable via NPM (installer / repository specific to JavaScript); maybe we can start a group project for this effort; there are some datasets at APL to which we could try applying the generic server: TIMED data (time series of atmospheric retrievals and images) and also SuperMAG (which has some strict user registration and data usage acknowledgement requirements)
  8. client work - need to keep bolstering the Python client to make sure it is industrial strength Python; Bob is working this - does he need / want help? this will hopefully end up in the Heliophysics Python library
  9. next meeting: Jan. 28 (since 21 is Federal holiday)


Post-AGU meeting discussion about AGU - with Bob and Jeremy and Larry Brown: 1. Bob wants more issues closed, especially bug ones; the ambiguity of cadence is a key one. 2. other AGU news: charter in the works for IHDEA (International Heliophysics Data Environment Alliance)


  1. meeting reports from various events: IVOA - Jon V.; very short - astronomers have preliminary interest in HAPI; contact is Ada Nebot; ADASS - anyone go to this?; EarthCube RCN - Jon V.; HelioPython - Aaron, Bob, others
  2. Update on time-varying bins - not much news yet
  3. server status check, including LASP; development so far on Github at LASP site
  4. AGU Plans

Meeting updates: IVOA - interest in HAPI and our experience; only a preliminary connection - further dialog needed; interest in re-using existing standards, such as Apache AVRO

Meeting updates: Python meeting - presentations from contributing libraries and other existing libraries in terms of practices and structure; possible HDEE call for exploring, e.g., library governance; Bob and Aaron met with NGDC (Eric Keane and Rob Redmond), who have their own APIs (spider, and 2 others since then, now another); the API is mostly for internal use within web-page plotting and for access to their own database; DSCOVR and GOES products; most of their products are already in CDAWeb; SWPC real-time data is separate, and they only expose files for security reasons - thus it would need a wrapper; question: what is the latency with iSWA at CCMC? if low, then probably good enough; could ask CCMC to cover more products; a group at ONERA (French radiation belt group, Sebastien Bourdarie) is also building a HAPI server - eventually using Python Django - would they be willing to contribute it as open source?

Overview of LASP HAPI server from Chris Lindholm; it will be generic as a LaTiS server - if users can set up their data to fit into the LaTiS framework, then the data can be served via HAPI. More at AGU, including public HAPI server. Functional programming (Scala) being used.

Next meeting (after the AGU): January 7, 2019


  1. report on International Heliophysics Data Environment Alliance (IHDEA) - meeting at ESAC (archive for all ESA missions); Arnoud Masson; enabling cross-agency interoperability; public site is at with dev and info mailing lists
  2. upcoming meetings: NASA HQ Data and Computing across all SMD, IVOA (Nov 7-9), Python Meeting at LASP, ADASS, EarthCube
  3. connection with NOAA being sought (Bob Weigel working this with Aaron); need to prime the discussion with the right NOAA people before Space Weather Week (April 1-5, 2019 in Boulder)
  4. Update from LASP (Doug Lindholm) - code for a Scala-based, somewhat modularized HAPI server available at which might be demonstrated next time
  5. Update on Python client - able to push data directly to Autoplot; lots of other features for a demo next time
  6. next telecon - Nov. 19; topics include more on Python client and possibly some on the LASP HAPI server


topics covered:

Upcoming meetings:

  1. Python meeting in Boulder: Aaron and Bob attending; Aaron (with Alex DeWolfe) is coordinating Python library development for Heliophysics
  2. Astrophysics Data Analysis conference in College Park, MD
  3. NSF EarthCube RCN meeting at NJIT: Jon going; let Aaron know if you want to be invited to attend

Python client: Bob has a basic package installer working and a Jupyter notebook;

Specification updates: Jeremy and Jon presented ideas for dealing with issue:71 about constants in the header and about time-varying header elements; suggestion from Todd and Bob: use native JSON reference capability; possibly also have our own reference syntax when using a parameter value as time-varying bin values
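The native-JSON-reference idea might borrow JSON Schema's "$ref"/JSON Pointer convention; this is a sketch for discussion, not an agreed syntax (the parameter names and the "definitions" section are invented):

```json
{
  "parameters": [
    {"name": "Time", "type": "isotime", "units": "UTC", "length": 24},
    {
      "name": "spectrum",
      "type": "double",
      "size": [3],
      "bins": [
        {"name": "energy", "units": "eV",
         "centers": {"$ref": "#/definitions/energyTable"}}
      ]
    }
  ],
  "definitions": {
    "energyTable": [10.0, 20.0, 40.0]
  }
}
```

For time-varying bins, the same pointer mechanism could reference a parameter's values instead of a static table.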

Action items:

  1. Jon and Jeremy - revise the suggestion for issue 71 to use native JSON refs
  2. Bob and everyone - find more Python helpers


topics covered

linking parameters in the header: this relates to issue 71; there has not been much work on this yet; issue 71 now has a write-up of some options; Jeremy will explore some of them in the next few days

email lists: for now, we will just use the hapi-dev list for most communications; we can use hapi-news occasionally, but that should include instructions for getting on the hapi-dev list, since that is still going to be the priority list for a while

NOAA data: would it make sense to have NOAA data via a pass-through HAPI server (written outside of NOAA)? we should interact with NOAA some, especially at next year's Space Weather Week, when developers and scientists are all available

server updates:
APL: JUNO data going to be put behind a HAPI server
Iowa: Autoplot bug fixes; das2 server codebase is shared with hapi server codebase, and a setting determines if the das2 server is also a hapi server; decided dataset by dataset within a das2 server
LASP: development underway for HAPI server, which will be part of the LATiS version 3 effort; work is all being done on Github and so the codebase will be usable by others interested in serving data via HAPI or LATiS
PDS/PPI: server is up and running; CAPS data available - more testing needed; any dataset in PDS4 can be easily added to the HAPI server
CDAWeb: Nand's server still running OK; saw some accesses from APL; problematic variables being removed

client updates:
Nand is working on Java client - this could be coordinated with Jeremy and Larry Brown
VisualBasic client for MS Excel is going slowly at APL; high school intern will continue this fall

action items

Todd - send Jon and Aaron the email addresses on the hapi-news and hapi-dev distribution lists
Jon - send something to HAPI-news occasionally to keep people up to date on development
Jon - test the hapi-dev list using the WebEx meeting setup tool to see if everyone will get the WebEx invite
Jon - email Alex DeWolfe about adding more data formatting discussion to this Friday's Python telecon
Jon - work with Aaron to touch base with the CCMC people for a status update on their server and we're especially interested in any feedback they have regarding the specification
Jeremy - work on implementing something for linking variables and/or header items
Jeremy and Bob - remove time library dependence from Python client; look into Jupyter notebook as a demo for how to use Python client to interact with a HAPI server


AGU sessions - planning for multiple sessions; SPEDAS training after Mini-GEM (and poster in Cecconi's session)

Oct 2,3,4 Python for Space Physics at LASP; presentations on existing capabilities; architecture discussion and layout; Alex DeWolfe coordinating; she also has mailing list and telecons every other Friday

Jon - send Alex D. a note about Python integration of HAPI; jump in on upcoming Python telecon
Jon - write up summary of discussion on reference variables and include in issue 71, then notify everyone

next telecon: August 20


COSPAR summary - news from Todd

SPASE and HAPI put forward in resolutions recommending their use as standards

AGU submission possibilities

Jeremy will submit to this session by Baptiste Cecconi:
IN044: Interoperable tools and databases in Planetary Sciences and Heliophysics

Bob is thinking about this session:
IN007: ASCII Data for Public Access

Jon will put a HAPI specification poster in this session: IN042: Integrating Data and Services in the Earth, Space and Environmental Sciences across Community, National and International Boundaries

Bobby and Bernie will not create a HAPI-specific poster, but can support a CDAWeb description on another HAPI poster, which should also include Nand.

Doug will present the HAPI-fied version of LaTiS at the AGU as well, session is still TBD.

Next telecon will be July 30
topics to include:
issue 71:
updates from various servers (CDAWeb, PDS/PPI, GMU, UIowa, APL, LASP, and maybe the CCMC developers)


Note: next telecon is in one week (July 23) in order to have a short tag-up on AGU abstract submissions.

The two action items from today are:

A. peruse the AGU session list and think about what HAPI abstracts we can submit. There are multiple options:

  1. multiple posters: a poster on the Spec, one on clients, one on servers
  2. one poster on all of these (spec, clients, servers)
  3. other permutations: one on the spec and servers; then one more for clients

There's a session by Baptiste Cecconi:

There's also a Heliophysics Python session by Alex DeWolfe:

B. take a look at issue 71 - it's about how to handle constant parameters or references in the header and in the data.

URL is:

Be ready to talk about this at the next telecon

Server updates: The CDAWeb HAPI server is going to use Nand's approach for the foreseeable future: (We did not talk about this, but it uses https (encrypted), which has to be considered when mixing with regular http (non-encrypted) sites.)

CCMC - Aaron can check with them soon to see how they're doing

PDS - Todd not on the call (COSPAR); will get an update next time

LASP - Doug says funding all set up and work is starting / progressing

Client updates

  • Autoplot and the MIDL HAPI client were presented at the MOP meeting last week. About 20 people attended the tutorial. A few scientists are starting to see the value of having one access method across data centers.
  • SPEDAS tutorial held at GEM meeting. Another planned for Sunday evening after mini-GEM at AGU. A part of this will be about HAPI, so Eric was fine with a HAPI representative helping out with or being present for that part of the annual SPEDAS tutorial. We hope to have CCMC and PDS and maybe LASP online with HAPI by then!
  • At APL, some interns are going to attempt a fully Excel-based HAPI client, or at least some mechanism that can produce more regularized CSV files that can be opened easily in Excel.



  1. news from Bob on updates to the verifier; the new verifier is just a pass-through to his own site at GMU

  2. also from Bob - update on the generic server; a few tweaks to the docs and it's ready to start advertising in 2 weeks

  3. transitioning to actually serve HAPI content

Jeremy: change documentation so that it points to working examples on

  4. feedback about Jeremy's proposal for constant parameters

Jeremy's proposal for constant elements in the header or data:

Lots of discussion about exactly how to arrange references in the header. Should there be a more generic way to link variables - i.e., treat even the constant elements as a kind of parameter, and then just have them linked in the header, like CDF does? Or should we keep header variables different from time-varying data parameters?


Today's discussion: PDS HAPI server is up and running. Send issues to Todd. The rest of today's discussion was mostly about how to handle data with unusual bins, such as 3D data that in addition to a regular grid of bins along each dimension, somehow also has a separate grid of bin values that applies to a specific slice or face of the data. This is a MAVEN dataset, and Todd will be sending around more info about it for next week.

We will have another telecon next week, May 14, and then take off the week of the 21st, since that is the TESS meeting week.


  • upcoming meetings:
  1. EGU is this week; HAPI poster is on Wednesday, presented by Baptiste
  2. CCMC meeting (Friday session) is devoted to comparing interoperability mechanisms and has international participation
  3. TESS meeting - no updates - registration is open
  • in applying HAPI to Cassini data, scientists wanted to be able to manipulate and combine the data, doing more than just presenting what is in the file; MIDL does this because it knows what type of science data it is dealing with - effectively, it has more metadata, so it can make a particle data object (with look directions, or pitch angles, etc); the way to have HAPI support this with the current spec would be to add custom metadata extensions (allowed by the spec) that would let a client know more about a dataset; this led to discussion about using HAPI to capture more

  • status updates: incremental progress on server development

  • Jeremy and Eric are working on supporting caching using the If-Modified-Since HTTP request header mechanism; Jeremy has a draft document out about how to do this; Autoplot can already do caching, and adding the If-Modified-Since to Jeremy's test server did not take too long (a few hours of Python modifications). Eric is working on adding caching to SPEDAS - he is planning to use daily chunking of data (same as Jeremy).
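A minimal sketch of the If-Modified-Since logic with daily chunks (the helper name is hypothetical; only the header formatting and freshness comparison are shown, not the actual HTTP exchange):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def cache_is_fresh(cached_last_modified, server_last_modified):
    # The cached chunk is still good if the server's copy has not been
    # modified after the version we stored.
    return (parsedate_to_datetime(server_last_modified)
            <= parsedate_to_datetime(cached_last_modified))

# Timestamp stored when the daily chunk was first fetched; the client sends
# it back in the If-Modified-Since request header, and an HTTP 304 response
# means the cached chunk can be reused as-is.
cached = format_datetime(datetime(2018, 4, 1, tzinfo=timezone.utc), usegmt=True)
request_headers = {"If-Modified-Since": cached}
print(request_headers)
```

When the server's Last-Modified is not granular (or absent), a simple age-in-cache rule can serve as the fallback refresh policy.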

  • discussion about a generic server - see next paragraph

Generic Server Ideas

Jeremy and Jon want to start a group development of a generic server that is independent of current servers, many of which are modifications of existing, historically motivated servers, and since HAPI is being added as a secondary delivery mechanism, these modified servers are not suitable as generic examples. Also, it would be good to focus on web security in the design from the beginning. So we envision a 2-level system with a front-end that manages incoming requests, and also returns the response. The front end is completely generic and re-usable and as the outward facing element, it is made to be very secure. The back end deals with the data management needed to fulfill the request. It should be made able to handle data arrangements that are nearly HAPI-ready, such as a static HAPI site that has files and metadata as fixed entities (and the back end knows how to subset them, etc).

The back-end could be made generic if the data center can provide three elements of functionality:

  1. the ability to read a dataset for a given time range and bring it into an internal data structure of that data center's choosing (QDataset for Autoplot, ITableWithTime for MIDL, something similar for CDF programmers). This capability is something each data center will possibly have already.
  2. the ability to subset this internal data model by parameters or by time
  3. the ability to turn this internal data model into a HAPI-specific structure that the back end knows about (and is essentially a HAPI-based data model with the right metadata).

If a server can provide these 3 things, the back-end code can handle the rest of the HAPI-specific processing.
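The three capabilities could be sketched as an interface like this (all names are hypothetical; the toy dict-based implementation just shows the flow, not a real data center's internal model):

```python
from abc import ABC, abstractmethod

class HapiBackend(ABC):
    """The three capabilities a data center supplies to the generic back end."""

    @abstractmethod
    def read(self, dataset, start, stop):
        """Read a time range into the center's own internal data model."""

    @abstractmethod
    def subset(self, data, parameters):
        """Subset the internal model by parameter (and/or time)."""

    @abstractmethod
    def to_hapi(self, data):
        """Convert the internal model into a HAPI-shaped structure."""

class DictBackend(HapiBackend):
    """Toy internal model: a dict of column name -> list of values."""

    def read(self, dataset, start, stop):
        return {"Time": ["2018-01-01T00:00:00Z"], "Bx": [1.5], "By": [0.2]}

    def subset(self, data, parameters):
        keep = ["Time"] + parameters  # Time always comes first in HAPI output
        return {k: v for k, v in data.items() if k in keep}

    def to_hapi(self, data):
        return {"parameters": list(data), "data": list(zip(*data.values()))}
```

The generic front end would then call read, subset, and to_hapi in turn and handle everything else (request parsing, streaming, error codes).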

Doug mentioned that this essentially reflects the design layout of LISIRD, and some of the code is already on Github; the upcoming development will likely be another Github project within the current hapi Github project. The generic server should not be tied to a single institution's code base, but we can certainly pull ideas from existing implementations.

Jon wants to get Rob Barnes and Bob Schaefer involved, since they both have relevant data that we can try to make available through HAPI, and as we do this, we could also spend a little extra time to create a generic server like the one outlined above. Schaefer's data is interesting since it is ITM data with higher dimensionality, and this would demonstrate that HAPI can be used for ITM data.

Bob Weigel (not on the call today) needs to also be heavily involved in the design and implementation of this generic capability, since he has expressed an interest in it for a long time already.

action items

  1. create a feature request for overlay metadata to identify specific data types; this topic is related to time-varying metadata, so it could be incorporated into any updates to the spec
  2. write up ideas about generic server and create feature request (or update existing one).
  3. next meeting is Monday, April 16, when Rick M. from CCMC will demo his HAPI interface; no meeting on April 23 since that is the week of the CCMC meeting



  • HAPI error codes - spec document update almost done - still needs an example
  • HAPI caching in Autoplot - few small bugs before production; structured so that the cached content could be used by other clients in other languages; detection of stale cache is via the optional modification date (which is not granular) or just age in cache; maybe flesh out a common set of refresh rules on this telecon?
  • modification dates and HTTP status codes - Bob, Jeremy and Jon to talk at next week's time slot
  • CDAWeb HAPI server; Nand's is running at ; add this to servers.txt (Jon); being migrated inside CDAWeb
  • LASP - getting set up soon
  • SPEDAS - bug fixes and time format handling updates (will use regex from Github to handle YYYY-DOY formats); the validator (by Bob Weigel) may have a better tested way to parse times -- see the verifier code here: (needs leap seconds updates); also SPEDAS accepts parameter restrictions; also handles first time column OK
  • demo by Larry about MIDL4 HAPI client
  • Aaron - need long term organization mechanism
  • Jeremy, Bob, Jon to use next week's 1pm slot to talk about modification times and expiration dates
  • next telecon: March 26

Action Items:

  • HAPI web page ( needs to mention SPEDAS! (Jon)
  • discussion about posting a Java client to Github main page (Jeremy and Larry)




  • PDS PPI node - server update in progress; works in development; being pushed into Git repo for move to production environment; available for PDS4 datasets (MAVEN and LADEE now; soon Cassini and MESSENGER; migration of everything else underway too)
  • Jeremy and Bob - more generic servers; Jeremy: multi-threaded Python; Bob: node.js server in dev.
  • GSFC HAPI server - Nand has new version; also has API for HAPI input stream and output stream
  • could be some interest in making data from active missions jointly usable; stay tuned for senior review report

Switch to every 2 weeks - next telecon in March 12.

Next time - MIDL demo.


CDAWeb - JSON update still in progress

Bob and Jeremy - working on generic server and developer documentation;

the HAPI verifier - up to 2.0! ability to check JSON and binary is still in progress; ability to set timeout will be added soon

discussion about error codes: the spec points out that when no JSON is requested, only the HTTP status response is available; Bob and Nand already implemented mechanisms that do more than this, and they suggest we add to the spec so that it recommends the following for HAPI server error responses:

  1. modify the HTTP response text (not the code number) to include the HAPI-specific error code and message
  2. even for error conditions that report "not found" still return JSON content to describe the error message

Note: These are all small enough changes (and are just recommendations) so that they only trigger a version number increase to 2.0.1

Before adding it to the spec, we need to see which servers can do this, and which clients can utilize this information. We expect it is not a problem, but want to be sure. What we know already about servers: Tomcat (yes), node.js (yes), Perl(?), Python(?). About clients: curl (yes), wget (no)
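The two recommendations above can be sketched as raw HTTP output. This is a minimal illustration, not any server's actual implementation; the function name and the pairing of HTTP 404 with HAPI status 1406 ("unknown dataset id") are assumptions for the example.

```python
import json

def hapi_error_response(http_code, hapi_code, hapi_message):
    """Build a raw HTTP error response following both recommendations:
    the status line's reason text carries the HAPI error code and message,
    and the body is still HAPI JSON describing the error."""
    body = json.dumps({
        "HAPI": "2.0",
        "status": {"code": hapi_code, "message": hapi_message},
    })
    return "\r\n".join([
        "HTTP/1.1 %d HAPI %d %s" % (http_code, hapi_code, hapi_message),
        "Content-Type: application/json",
        "Content-Length: %d" % len(body),
        "",
        body,
    ])
```

A client that can only see the status line still gets the HAPI code, while a client that reads the body gets the full JSON status object.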

Next week: Eric Grimes - will demo IDL HAPI client and SPEDAS crib sheet

action items:

study the following server capabilities to implement 1 and 2 above; Jeremy (Python and Perl servers)

see how proxies affect the transmission of the JSON content when there is an HTTP 404 error; was this going to be Bob or Jeremy?

clarify the error handling section in the spec to describe the new recommendations (Jon)


discussion about streaming implication of timeouts - need statement in the spec about servers needing to meet reasonable timeout assumptions for clients; current typical values are around 2 minutes; we need to check these; must specify for time-to-first-byte and time-between-bytes

Bob's verifier currently has multiple tiers of checks; it will be switched to allow the timeout to be an input

also need to clarify expectations about multiple simultaneous requests (do servers need to be multi-threaded?); CDAWeb limits simultaneous connections for security reasons; Apache has settings to limit connections; does Tomcat?

how to clarify any confusion about streaming? record variance is the fastest changing item

make sure the spec mentions that servers can respond with "too much data", which is especially relevant if delivering data in any of the column-based formats we're considering as an optional output format

Discussion about current JSON format - there was a question about the validity of records with different types in the array for one record; JSON Lint parses this fine, claiming mixed values are OK; the JSON spec (RFC 7159) agrees;
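A quick check of the mixed-type point, using a made-up record (the values here are illustrative, not from any real dataset):

```python
import json

# One HAPI-style JSON record mixing a time string, a float, an integer,
# and a string fill value in a single array; this is valid JSON.
record = '["2017-05-01T00:00:00.000Z", 1.2, -3, "fill"]'
parsed = json.loads(record)
```

Any conforming JSON parser accepts this; the types of the individual elements are preserved.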



related topic of interest: Open Code / Source white papers

  • NASA is serious about its commitment to encourage / require open code.
  • people are encouraged to submit short statements with support or opposition or suggestions of pitfalls to avoid, etc.
  • some comments about streamlining the legal / formal release process; also documentation is time consuming
  • difference between open source project (lots of global developers contributing) versus open code (source code available, but not necessarily supporting active, joint development)
  • overlap with SPASE descriptions for publicly available resources

HAPI email list now set up

Web site improvements: minor improvements only; add dates to releases; mention the news listserv and how to subscribe; current telecon members have post capability - new members are moderated starting off; others listen only; eventually have a [email protected]; add all the logos from supporting organizations

Lessons from the AGU:

  • discussion with Arnaud Masson (Aaron's counterpart at EGU); Aaron will set up a meeting about interoperability at the right level of formality, using HAPI as an example case
  • feedback from Hayes: OK to proceed with some HAPI development

plans for the year

conference presence this year? EGU - joint abstract with Baptiste (ask about collaborators) and Arnaud, with the LASP group (Tom, etc.) supporting the presentation of the material at the meeting; Jon will write the abstract tomorrow. TESS in June - abstracts due in February (AGU-based site)

  • Jeremy: update from SPEDAS group - re-writing client for latest version
  • need to get feedback from CCMC on their server?
  • Bob: working on generic HAPI front-end server to manage HAPI requests; if a provider has a command-line way to stream data, it can be connected to the front end to make data available via HAPI; updates in a few weeks; (this would be run on existing servers at the provider site); includes validation mechanism internally

Next telecon is Jan. 22.


Action items:

  • Jon: Draft note for SPA email newsletter. Request for comments on HAPI 2.0.0; emphasize good lowest common denominator
  • Aaron: start talking with ESA; get names of telecon people
  • Todd, Jeremy, Jon: get listserv email set up at; Todd will look
  • all: keep working on implementations
  • Bobby: send AGU notes
  • all: what standards group to join or become part of: SPASE, Apache, IVOA, COSPAR


  • Nand wants someone to check the JSON output of his CDAWeb server; Bob says the verifier will eventually do a cross comparison between the CSV and JSON and binary data


Topic 1: how to capture start and stop times

Write-up proposals for handling start and stop times:

  • option 1: reserved keywords for the start time column and stop time column
  • option 2: keywords that refer to the names of the start time column and stop time columns
  • option 3: delta +/-; use units on the column to capture a duration

suggestions:

  • accumulationTimeStart / accumulationTimeStop
  • accumulationStartReference / accumulationStopReference
  • accumulationStartTimes -> name of start time column; accumulationStopTimes -> name of stop time column

comments: accumulation is too specific

measurementStartTimes / measurementStopTimes

Topic 2: what about extended request keywords? lots of issues: in capabilities (server-wide) or in info (dataset specific)?

Need a document to capture topics we've discussed and not put in the spec, but need to remember.

next meeting: Jan. 8


  • Bernie demonstrated a way for servers to indicate that data has not changed since last requested; servers emit a Last-Modified header value, and clients can include an If-Modified-Since header, to which servers can respond with a 304 "Not Modified" if nothing has changed; this is harder for a service-based approach, since these header values are supposed to relate to the actual content of the response (rather than the underlying data used to construct the response).
  • There is already an optional attribute in the HAPI info header for modificationDate, and clients can look at this and just not issue a request for data if nothing has changed (rather than issue a request and look for the 304)
  • It would take a lot of work for all servers to implement an accurate modificationDate since there could be a lot of granules to examine; for static datasets, it is easier since it does not change
  • So for now, we will not make any changes to the spec.
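The modificationDate approach the second bullet describes amounts to a simple comparison on the client side. A minimal sketch, assuming both timestamps are ISO 8601 UTC with whole-second precision (the function name is hypothetical):

```python
from datetime import datetime, timezone

def needs_refresh(cached_at, modification_date):
    """Return True if a cached response predates the dataset's
    modificationDate from the HAPI info header (both ISO 8601 UTC)."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    cached = datetime.strptime(cached_at, fmt).replace(tzinfo=timezone.utc)
    modified = datetime.strptime(modification_date, fmt).replace(tzinfo=timezone.utc)
    return modified > cached
```

If this returns False, the client can skip the data request entirely, which is cheaper than issuing a conditional request and handling a 304.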

AGU plans - still need to choose a night for the HAPI dinner - Wed. is current winner on doodle poll


  • update spec: error if you mix date format within an info header
  • next week: Bernie illustrates last-modified in info header or catalog?


action items:

  • review Bob's list of 1.0 to 2.0 changes (Jon)
  • add example to clarify the single string or array of strings for parameter units and labels (Jon)
  • update the spec document to clarify what the data stream should look like for 2D arrays when streaming JSON formatted data; the JSON mechanism of arrays of arrays is what the spec calls for
  • look into mailing list options (Jon and Jeremy)
  • keep working on implementations (everyone)


Bob showed a simplified version of the website that removed duplicate info on the GitHub developer page and the GitHub Pages web site page. He's attempting to link to to go even farther in avoiding duplication.

We still need a novice friendly landing page at

We reviewed modification to the units and label attributes within the Parameter definition in the spec. They need some tweaks:

  • add to each "In the latter case," to clarify about array values.
  • instead of referring to the one unit or label string as a scalar, just call it "a single string" since scalar sounds too numeric

Lots of discussion about Extensions to HAPI - it is captured here as we discussed it.

maybe have an area where new endpoints can appear:

  • this could serve as both "extensions" and "experimental" in that people can try out new things

Doug: dap2 - does not define extensions; it has simple query mechanism for index-based selection of data

in the CAPABILITIES description, need to capture the fact that the extensions exist:

"extensions": [ "average", "decimate" ]

Or, maybe we define some higher level functionality as part of the spec (for the data endpoint), and just make it optional.

"options": [
      "data": ["average","filter", "interpolate"]

Bob: needs examples to help us see how it works: easy one would be decimation (only include every Nth point)

Lots of different ideas:

  • this does not work well since you will want to do more than decimate - it needs to be a request parameter
  • Doug: could use function syntax: id(ACE_MAG)&stride(10)&average(60)
  • this is similar enough to regular request syntax that it is probably better to stick with one syntax

For constraints on data, recall that we are using time.min and time.max with an eye for extending this to data


We could have users stuff all their extended capability into one additional parameter (with CSV function calls with parameters to the functions)


Most people liked having extensions right on the data endpoint, but with the x_ prefix to indicate they are extensions and experimental.

  • These could be advertised in the capabilities endpoint like this:
"extensions": [
          { "name": "x_UIOWA_average",
            "description": "mid-western averages",
            "URL": ""
      "x_stride" :

Todd: we are talking about two things:

  1. additional processing done by the data endpoint (averaging, etc)
  2. different endpoints (listing coverage intervals for a dataset)

Aaron: maybe moving too fast with extensions - let's get a solid base working first

Nand had a question about mixed time resolution - he's going to ask it via email.

Add a SUPPORT email link to the main HAPI page!

  • try to use GitHub mechanism for listserv to keep track of asked questions
  • we should use the domain for listserv options


The web site is finally transitioned to show version 2.0 as the latest version. Note that this version was finalized a while ago.

The issue of mixed units was discussed again. With Todd present, we revisited the use of unitsArray and labelArray, and have decided not to add those attributes. Instead the units attributes (which is required) and the label attribute (optional) will be allowed to have two meanings. A scalar value must be used for a scalar parameter, but for array parameters, you can use either a scalar or an array. The scalar means that all array elements have the same units, and the array means you have to specify a units value for each element in the array (so the array must have the same shape as given by the size attribute). The spec will be updated so people can see if they like that. This is also very backwards compatible.
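The decided rule can be expressed as a small validator sketch. This is a hypothetical helper, not part of any HAPI client or the verifier; it just encodes the scalar-or-matching-shape logic described above:

```python
def shape_matches(units, size):
    """Recursively check that a nested list of unit strings has the
    same shape as the parameter's size attribute."""
    if not size:
        return isinstance(units, str)
    return (isinstance(units, list) and len(units) == size[0]
            and all(shape_matches(u, size[1:]) for u in units))

def units_are_valid(units, size=None):
    """Scalar parameter: units must be a single string.
    Array parameter: units may be a single string (same units for all
    elements) or an array of strings matching the shape given by size."""
    if size is None:
        return isinstance(units, str)
    return isinstance(units, str) or shape_matches(units, size)
```

The same logic would apply to the optional label attribute, since it was given the same dual meaning.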

Jeremy said the regular expression he mentioned in issue #54 (which some people tried and did not work) does indeed have a problem (with interpreting colons?) and he's looking into it.

CCMC attendees: Chiu Weygand and (I think?) Richard Mullinix

Questions from the discussion with the CCMC people:

  • what about extensions to the API? they had additional filters they wanted to allow; we mentioned the possibility of defining how people could add extensions, and then having a suggested set of optional extensions as part of the spec; it would take another working group or a dedicated effort to clarify this
  • time parsing was more difficult for them - this might end up being a common difficulty, so we should think about providing time parsing libraries in multiple languages
  • they wanted to know about subsetting the catalog and how to arrange their server URLs

We will try to have a HAPI dinner at the AGU on Tuesday, Wednesday or Thursday night. Doodle poll will be taken soon.


  • Jon: update dev spec with new definitions of parameter attributes units and label
  • Jon: Doodle poll for AGU dinner
  • Jon and Bob: figure out how best to arrange the main GitHub site and GitHub Pages site to avoid duplication


discussion about mixed units for arrays: we decided to try a unitsArray attribute on parameters to capture different units for each array dimension

also decided to add an optional label attribute for parameters, with a corresponding labelArray

Jeremy has new regular expressions for checking date format compliance - see issue #54

Add Jeremy's regular expressions (for Java (uses named field) and others) to validate allowed ISO8601 date formats.

Client and Server updates:

  • any 2.0 servers? not yet
  • ask Nand about status of CDAWeb HAPI server (Aaron)
  • alternate CDAWeb approach: Bob's server
  • datashop - eventually get Cassini APL data
  • Iowa HAPI server - Chris has it in non-public beta
  • CCMC - still working on it
  • SPEDAS - aware of and interested in; not urgent yet?
  • idl client - update from Scott imminent


a. implementation status

Chris Piker has the current spec worked into UIowa's das2 server, and Jeremy has questions about CSV from him:

Question: why NaN for CSV fill?
Answer: keeps it consistent with binary

Question: why no comments allowed in CSV?
Answer: makes readers more complex and slow

Question: how to handle progress info between client and server?
Possible Answers: two-way communication? use multiple connections to the server, one of which is for tracking progress; maybe see web-workers mechanism;
a clever option: track rough progress using the time tags in the data, since the overall time range is known!
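That clever option is easy to sketch: since the client knows the requested time range, the time tag of the most recent record gives a progress fraction. A minimal illustration (hypothetical function, whole-second ISO 8601 times assumed):

```python
from datetime import datetime

def progress(time_min, time_max, current_tag):
    """Estimate fractional progress of a streaming HAPI response from
    the time tag of the most recent record, given the requested range."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    t0 = datetime.strptime(time_min, fmt)
    t1 = datetime.strptime(time_max, fmt)
    t = datetime.strptime(current_tag, fmt)
    return (t - t0).total_seconds() / (t1 - t0).total_seconds()
```

This only gives rough progress (data density may vary across the range), but it needs no extra connection or server support.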

Question: How well defined is the CSV spec? Answer: not sure what we decided on this; Jeremy was going to look at cleaning it up?

b. Todd mentioned on the SPASE call last week about the PDS/PPI plans for HAPI servers

c. Aaron is hoping to have an HDMC meeting at some point to solidify plans

d. the Github web site has still not been changed

e. I heard back from Daniel Heynderickx, who works with data servers at ESA and wants to use HAPI

f. update from Doug Lindholm: LASP white paper sent to Aaron; Lattice extensions to implement the HAPI spec; also, a HAPI client reader implementation so Lattice could ingest data from other HAPI servers and re-serve it via a Lattice API

g. Jeremy reports that the SPEDAS group is looking at Scott Boardsen's IDL implementation;
he's hoping to convince them to expose data that's been read via SPEDAS through an IDL HAPI server (so Autoplot could read it from the server); MMS has Level 2 products only available via the IDL routines in SPEDAS


add section numbers to TOC?

next meeting: Monday, Oct 9, 1pm: status of implementations


call with Jon V and Bob Weigel

  • We are planning on redoing the web site to make it more coherent for visitors. The landing page should not be the Github page, but just the , and the README should be modified to not hyperlink to a release, but just to the markdown and the PDF and HTML, as well as to the JSON schema.

  • Use GitHub pages mechanism for the web site, possibly using Jeremy's domain "" so that this points to the README.

  • Get rid of the "versions" directory (in the structure branch) in favor of a flatter arrangement.

  • Not expose Github tags to people, since that would lead them to download the whole repository (with all older versions of the spec).


  • The SPASE group has been told about our preferred way to indicate the availability of a HAPI server within a SPASE record. There can just be an AccessURL pointing to the "info" endpoint for a particular dataset.

  • Bob showed the Matlab and Python clients he has.

  • Action items:

    • Jon:
      • rename current development version to release version
      • add updated Table of Contents
      • release version 2.0
    • Bob:
      • fix problem with JSON schema (centers and/or ranges)
      • look over the file arrangement before 2.0 is released
      • update the verifier to the latest spec (use a separate branch of the verifier code for each version?)

In subsequently looking over the HAPI specification Github page, I think we need to prepare it for long-term stability with multiple releases. The standard approach is to have one directory for each release, and then have a landing page that points to the most recent release, as well as the development version.

Jon is setting up a separate telecon later this week to propose, tweak, and settle on a directory arrangement scheme for this and subsequent releases.


How to incorporate HAPI URL into SPASE?

  • Give an info URL like this and let software figure out how to parse it
  • Just give a URL to the top of the HAPI server, and assume the SPASE ID (product key) is the dataset name in HAPI
  • Give the URL to the top of the HAPI server, but also give the HAPI dataset name (in case HAPI data server names things differently)
  • What about a data request?

Nand's request: need clarifying use case.

Two other Nand suggestions:

  1. We should always provide the header; the original reason was to be able to concatenate subsequent requests; the value of always having the header is that the data self-identifies when you save it. Discussion: communicating just the numbers is sometimes useful; the API already emphasizes a division between the header and the data; importing just the numbers with no header might be important (in Excel, for instance, or IDL using its CSVread mechanism); Conclusion: keep the option to leave off the header
  2. Precision in general and about time values. Conclusion: let the server decide. Good practice is to limit the output to the precision you (the server) actually have.


Telecon notes

  • issue 51: should time column have required name "Time"? -- decided not to require this, but to add to the spec a clarification on the importance of having an appropriately named time column (don't leave the time column name as "Unix Milliseconds" when you changed it to be UTC to meet the HAPI spec)
  • issue 40: why only string values for fill? -- decision is that it is OK to require fill values be strings; the problem is that JSON does not enforce double precision to be 8-byte IEEE floating point, so we can't rely on JavaScript or the JSON interpreter to convert the fill value ASCII into a proper numeric value; thus, we will just leave it as a string and the programming language on the client will need to do the conversion
  • issue 42: what about a request for specific parameters that is somehow empty? -- decision: treat this as an error condition; in fact this is generically an error: any optional request parameter, if present, must also have a value associated with it; since it was optional, its presence then requires a value
  • issue 46: need to clarify about the length of strings and how to use null termination in a string; the spec currently does not capture what we wanted to say; the null terminator is needed only in binary output, and only when the binary content of the string data ends before filling up the required number of bytes for that element in the binary record; so the length should NOT include any space for a null terminator; if you fill up the entire number of bytes with the string, there is no need for the terminator; if you are less than the number of bytes, then you do use a null, with arbitrary byte content padding to the required length
  • issue 49: time precision -- change spec to say that the server should emit whatever time resolution is appropriate for the data being served; servers should be able to interpret input times also down to a resolution that makes sense for the data; any resolution finer than what the server handles should result in a "start time cannot equal stop time" error; the precision the clients can handle is outside the scope of the spec, so users concerned about high time resolution should be aware of any restrictions of the clients they use.
  • email notifications: just use a listserv at APL for people who want notification of any change to the hapi-specification repository (not just issues); so far, this will be: Bob, Todd, Jeremy, Jon; no need for more complex scheme using pull requests with branching and merging (the complexity of that is warranted only with larger source code projects)
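The string-length rule from issue 46 above is easy to get wrong, so here is a minimal sketch of the agreed behavior (hypothetical helper; padding bytes may be arbitrary per the discussion, zeros are used here for simplicity):

```python
def pack_hapi_string(value, length):
    """Pack a string into a fixed-width binary field: no null terminator
    if the string exactly fills the field; otherwise a null terminator,
    then padding out to 'length' bytes (zeros here, but content is
    arbitrary per the spec discussion)."""
    raw = value.encode("ascii")
    if len(raw) > length:
        raise ValueError("string longer than declared length")
    if len(raw) == length:
        return raw                                  # exact fit: no terminator
    return raw + b"\x00" * (length - len(raw))      # null, then padding
```

Note that the declared length therefore does NOT reserve space for a terminator; a length-3 field can hold a full 3-byte string.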

Bob's Updates

  • MATLAB client:
    • hapi.m is feature complete from my perspective except for some minor changes for the binary read code.
    • hapiplot.m is feature complete from my perspective.
    • hapi.m and hapiplot.m work using data from four different HAPI servers.
    • Neither of the scripts has been systematically tested on invalid HAPI responses. Common errors are caught and other errors generally lead to exceptions. This could be improved and we'll probably add code to catch errors as we find them.
  • Python client:
    • is feature complete from my perspective. It handles CSV and binary.
    • has far fewer features than hapiplot.m. I am now certain that I don't like matplotlib.
    • Both scripts work on dataset1 from, which includes many possible types of parameters. I have not tested on data from Jon's, Nand's, and Jeremy's servers.
  • There are some issues that we'll need to discuss about the clients, related to whether there is a difference between a parameter having no size vs. size=[1]. See also a question about size on the issue tracker.
  • Verifier:
    • Mostly feature complete; I still need to post the schema that I am using at. I have a few questions for Todd about encoding conditional requirements.
    • I added a few new checks and emailed Jeremy, Nand, and Jon warning them to expect new errors and warnings.
  • Issues:
    • Hopefully I am done posting issues and questions ...
  • Specification document:
    • I made several editorial changes to the HAPI-1.2-dev document
  • Outreach:
    • Tried to do a phone call with Redmon last week. Will try again next week as I am out after Wed of this week.
    • Looked over SpacePy and figured I would wait till was complete before I emailed Morley. Will email him next week.


Discussion 1: clarity needed for multi-dimensional data when one or more dimensions does not have any 'bins' associated with it; right now, the spec pretty much says you have to have bins for all dimensions;

We settled on adding a single line to the spec: if a dimension does not represent binned data, it must still be present in the 'bins' array, but with '"centers": null' to indicate the lack of binned elements.
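The agreed rule could look like this in an info header. This is a made-up example: only the bins / centers / size relationship follows the decision above, and the other field names and values are illustrative.

```python
import json

# Hypothetical 2-D parameter: the first dimension has energy bins, the
# second is not binned, so its bins entry carries "centers": null.
parameter = {
    "name": "flux",
    "type": "double",
    "size": [4, 2],
    "bins": [
        {"name": "energy", "units": "eV", "centers": [10, 100, 1000, 10000]},
        {"name": "component", "centers": None},
    ],
}
info_fragment = json.dumps(parameter)
```

The key point is that the bins array always has one entry per dimension in size, even when a dimension is unbinned.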

Discussion 2: we need a place on the wiki to describe a common set of routines and calling parameters so that all the scripting languages can use the same names for the various types of calls.

Progress on action items from last time:

  • Scott added the IDL client to GitHub
  • Jeremy started a Java checking client
  • most of the people to be contacted have not been yet - we need more to advertise first...

Bob created basic Python and Matlab clients, creating areas for them at the top level of GitHub; these are ready for others to mess around with and add/augment as a kind of joint development.

Jeremy has a Java API checking app (verifier) also at the top level in GitHub, and also open for joint development.

Action Items:

  • Bob: email several people to ask about their interest in and potential use of the HAPI spec for their data serving interface
  • Bob: still working on basic Matlab HAPI client
  • Bob: email SpacePy people about HAPI client development and status of SpacePy
  • Jeremy: work on rudimentary server checking mechanism
  • Jon: add code to the verifier and see how it could be migrated to be or at least use a generic Java client
  • Bernie, Bobby, Nand - CDAWeb server is progressing but not done
  • Jon, Todd - start a collaborative effort to create a Python client in association with the SpacePy people
  • Jon: waiting to hear back from Daniel Heynderickx about newly released version 1.1
  • Aaron: update the CCMC people with news about version 1.1


topics discussed:

public versus private data served by HAPI: we won't make usernames and passwords part of the spec, but will have a part of the wiki devoted to implementation guidelines, where we can describe how to best serve data that has both private and public regimes.

Issue: citations - data providers will not like that HAPI obscures the source of the data; data providers won't get credit for serving their data, they won't know who is using it, and the appropriate reference won't get cited. Temporary resolution: we will add an official issue to capture the need to address this concern; for now, the SPASE record that a HAPI dataset can point to can contain a citation; ultimately, it would be great to have a DOI associated with each HAPI dataset. Also, the resourceURL or the resourceID (or both) can serve as substitutes for a more robust citation.

how many different server implementations are needed? The only viable way for lots of data to stay accessible through HAPI is if the providers who install the HAPI servers also maintain them. Unused services will fall into disrepair (like the OPeNDAP services at CDAWeb, which got little use).

Instead of creating an implementation that anyone can use (via a possibly hard-to-design interview process), maybe we focus on getting key providers to have an implementation, and we focus our energy and funding on a team that can help them understand the spec and get a sustainable HAPI installation going.

Multiple groups are working on servers that could be installed by 3rd party users, so this would give users a choice of HAPI server implementations.

We listed organizations that we hope would be interested in providing this kind of common access via a HAPI mechanism:

  • NOAA - National Weather Service (older:SWPC); Howard Singer
  • NGDC -> NCEI (SPIDR, now retired; Rob Redmon potentially interested)
  • USGS (Jeff Love)
  • Madrigal (MIT/Haystack)
  • CDAWeb - Nand Lal working on updating his server to HAPI 1.1
  • other SPDF data
  • PDS PPI Node (Todd King)
  • LASP (Doug Lindholm) LISIRD2 / Lattice Evolution to 3rd party use
  • GMU / ViRBO / TSDS
  • Univ. of Iowa - Heliophysics and planetary missions
  • APL - Heliophysics and planetary missions
  • SuperMAG
  • other ground-based magnetometers
  • European groups: VESPA, AMDA (Baptiste Cecconi), other ESA projects (Daniel Heynderickx)
  • software/tool providers:
    • SpacePy - Steve Morley, also John Niehof
    • SPEDAS - Vassilis Angelopoulos (?)

For now, we will focus on working with the set of these groups that are more internal (to the existing HAPI community), such as PDS, CDAWeb, LASP, and CCMC. After we have some success here, we can branch out to groups like NOAA, USGS, SuperMAG, and the Europeans.

Also, we need client libraries first, before HAPI becomes a compelling option, so several people will start working on those.

Action Items:

  • Bob: email several people to ask about their interest in and potential use of the HAPI spec for their data serving interface
  • Bob: work on basic Matlab HAPI client
  • Bob: email SpacePy people about HAPI client development and status of SpacePy
  • Jeremy: work on rudimentary server checking mechanism
  • Bernie, Bobby: report back with status from Nand about his updating of the CDAWeb HAPI server to meet the 1.1 spec
  • Jon, Todd - start a collaborative effort to create a Python client in association with the SpacePy people
  • Jon: email Daniel Heynderickx about newly released version 1.1
  • Aaron: update the CCMC people with news about version 1.1
  • Scott: commit IDL client to Github area



  • final review of changes to HAPI spec document for version 1.1 release
  • discussion about implementation activities based on the distributed list of proposed activities

topics discussed:

review of recent edits of spec by Todd, Bob, Jeremy, Jon

new domain available for examples; Jeremy to make our example links live soon (tonight?)

Question: should we allow HAPI servers to have additional endpoints beyond the 4 required ones in the spec? Todd: no - put them under another root url outside the hapi/ endpoints.
Bob, Jeremy, Jon : yes, but put in separate namespace under hapi/ext/ or with specified prefix (like underscore)
Answer for now: punt and push this to future version; might be good idea to allow extensions, but we need to figure out how to allow servers to advertise their extensions - it needs to be in the capabilities endpoint. Also, we need to think more about implications. We have a pretty controlled namespace now, so we don't want to dilute that. Silence in the spec for now means people will hopefully realize they are in exploratory territory.

release of new spec! now at Version 1.1.0; tag is v1.1, name is Version 1.1.0

discussion about https: we'll need to address this in the spec at some point

re-arrangement of top level documents:

  • move spec to something else besides the
  • describe all files in the including the recent versions
  • for now, indicate in the where to find the stable release versions

Action Items

  • Todd: create PDF stamp of version 1.1.0 and put in repository
  • Jon and others?: update main spec document to indicate that the live version is at the tip of the master branch - list of released versions; probably use a different name for the key spec document and put more general explanation in the
  • Jon: issues to add:
    • extension to endpoints
    • supporting for https; Let's Encrypt offers free certificates
  • Jon: create wiki to keep track of longer running issues, like the activities document or telecon notes
  • Jon: close out old issues related to release of version 1.1
  • all: consider our set of next key activities: creating personal servers, creating drop-in servers for other people, making lots of data available, creating clients in multiple languages, lists/registries of HAPI servers, integration with SPASE