May 19, 2017 (Meeting Notes)

cmi5 Meeting – May 19, 2017:

Attendees:

Andy Johnson (ADL), Art Werkenthin (RISC), Christopher Thompson (Medcom, Inc.), Clayton Miller (NextPort Engineering), David Pesce (Exputo), Dennis Hall (Learning Templates), Dinesh Varhan (IBM), Giovanni Sorentino (E-CO e-Learning Studio), Henry Ryng (InXSOL), Tom Creighton (ADL)

Tom Creighton from ADL presented the processes and procedures behind the ADL LRS Conformance Test Suite:

The slides can be found here: https://drive.google.com/open?id=0B-zSoKlPb_5PYnNzb1M3WXRyazVMUnRwd3ByR0Z1UVR5ZURv

Tom’s first slide covered the ADL ecosystem. On the left side were the three GitHub projects that existed at the time of the Test Suite work. Production began last April, prompted by two things: 1) it is very hard to keep track of conformance without some sort of testing process, and 2) the Department of Defense Instruction (DoDI) update. The DoDI is a set of requirements that DoD uses to drive its technology needs.

At this time there was a spec group and an LRS Test Group. The Test Group had already made progress on requirements and software. There was also an external BAA performer who shared a version of an LRS test, plus additional community members to include. On the right side of this slide were other apps we wanted to bring into the software.

The LRS Test has two main components: a basic user app, which lets a user view and run tests and provides user and test artifacts (logs and certificates), and a core, which is a collection of Mocha test cases. Mocha is a test platform built on JavaScript; it already existed, so we only had to create test cases for it. It allows any script to run within a test case, which let us send GETs and POSTs to an LRS, wait for the response, evaluate the response bodies and status codes, and send the results back to the front end for the user. This split lets the business processes (issuance of certificates, etc.) stay protected while the test cases remain open source and always viewable. It also lets developers pull down the tests and run them within their own development environments, which we hoped would help LRS developers with their timelines; it also works offline, within internal networks, etc.

Tom then showed a process diagram. The project has an issue tracker and uses GitHub Projects, which works like a Trello task board: tasks can be turned into issues and linked internally. We didn’t end up using it as much because we had an internal process; internal front-end requirements, certificate development, etc. were all determined internally and tracked in Trello, while all external/public issues ran through GitHub Projects.
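For readers unfamiliar with the core, a Mocha test case of this kind boils down to an HTTP request to the LRS followed by assertions on the status code and body. The sketch below is illustrative only, not code from the actual suite; the endpoint URL, credentials, and use of Node’s global fetch are assumptions made for the example:

```javascript
// Minimal sketch of a Mocha-style LRS conformance test case.
// LRS_ENDPOINT and the credentials are hypothetical; the version header and the
// shape of the GET /statements response come from the xAPI spec.
const assert = require('assert');

const LRS_ENDPOINT = 'https://lrs.example.com/xapi'; // hypothetical LRS under test
const AUTH = 'Basic ' + Buffer.from('user:pass').toString('base64');

describe('Statement Resource', function () {
  it('returns 200 and a statements array for GET /statements', async function () {
    // Send the request and wait for the response, as the core does for each test case.
    const res = await fetch(`${LRS_ENDPOINT}/statements`, {
      headers: {
        'Authorization': AUTH,
        'X-Experience-API-Version': '1.0.3'
      }
    });

    // Evaluate the response code and body; the result is reported back to the front end.
    assert.strictEqual(res.status, 200);
    const body = await res.json();
    assert.ok(Array.isArray(body.statements));
  });
});
```

Because each test is just a script like this, developers can run the same cases locally against their own LRS without going through the hosted front end.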

Lesson learned: we were tracking six different GitHub projects plus a Trello board. Yikes. Process: we would look at an issue and evaluate whether it was an obvious mistake (e.g., a test expecting error code 200 where the spec calls for 204). We would then fix it, push it to the development branch, address it in a weekly meeting, and then push it to the master branch. Any contentious issues (unclear or needing discussion) went back to the Spec Group, were discussed in GitHub or at a monthly meeting, and once consensus was reached they followed the same procedure.

We started with a suite of 1,500 tests and had to go through all of them to evaluate whether each test case was valid or not. This first part was done in a closed room, and our LRS was obviously doing great at that point, since we built it and controlled the test cases. Once we opened things up, we found there were a lot of changes needed. My suggestion is that the sooner it is public, the better.

Another problem was using GitHub (public) and Trello (ADL only). Some links weren’t obvious because issues didn’t always make it between the two, and it was hard for the community to understand why we were making changes without a corresponding GitHub issue. We made a conscious decision not to keep the Test Suite within the actual xAPI spec repository because we didn’t want to version the spec’s master branch as rapidly as the suite would clearly need. We identified those as separate projects and don’t think we can really get any lower than two, but two is way better than six.

We also had no external schedule of release dates. We had an internal test, meeting, and release cycle that was weekly, but it was unclear to anyone, internal or external, when we would push out changes; we needed clearer communication to the outside about when that happened. Our error messages weren’t awesome, although we thought they were pretty good. We linked back to the spec, a lesson learned from the SCORM Test Suite days. Sometimes this was hard because the whole message wasn’t received intact or was tough to debug, so we then included the request and response headers with each test, since there otherwise wasn’t enough information. We had a BAA (external contracted help) who used community support to pull out all of the requirements and document them, so you can go between the test cases, the conformance test suite, and the spec and find the actual requirement; each test case links back to the spec. We tried to keep weekly meetings short (15 minutes or fewer) and focused on issues and important impact dates. If there were issues, the smaller group was informed.
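As a rough illustration of that “link back to the spec and include the headers” approach (the helper, message format, and spec reference below are hypothetical, not the suite’s actual output):

```javascript
// Hypothetical helper: a failing assertion carries a spec reference plus the
// request and response headers, so a developer can debug without guessing.
const assert = require('assert');

function assertStatus(res, expected, specRef, requestHeaders) {
  const responseHeaders = Object.fromEntries(res.headers.entries());
  assert.strictEqual(
    res.status,
    expected,
    `Expected ${expected} but got ${res.status} (see ${specRef}).\n` +
    `Request headers: ${JSON.stringify(requestHeaders)}\n` +
    `Response headers: ${JSON.stringify(responseHeaders)}`
  );
}

// Usage inside a test case (spec reference is a placeholder, not a real anchor):
// assertStatus(res, 204, '<link to the relevant xAPI spec section>', requestHeaders);
```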

Even so, the error messages could have been better. We were extra critical because we started from the Test Suite itself: we first had to understand whether the test was valid, which gave us insight into what the actual error was and let us be more clear. The Conformance Test Suite isn’t just about issuing conformance; it is also meant to help developers on their path to adoption. We released the final version of the Test Suite in April. We were asked by the ADL community to slow down; we had been ready in January, to allow vendors to test everything out so they could get to conformance, and it was suggested that updates be slowed down as well, since vendors needed time and resources to keep up with constant changes. We tried to listen as much as possible.

We are rolling out a new process. We got rid of Trello; everything is organized within GitHub, in just one GitHub project – LRS-Conformance-Test-Suite, which is the Mocha core. All issues will be managed there, regardless of whether they are about the core or the UI/app. We are hopeful that over time we can direct everyone there, whether an issue comes in through email, conversation, the Help Desk, etc. We will continue to run issues up the chain to the Spec group as needed.

We had been making direct changes and edits, feeling that as ADL we could. The problem was that those updates were hard for the ADL community to detect. Now we clone (fork) the project to our own GitHub accounts and then issue a pull request against the one GitHub repository. GitHub allows rules to be enforced, such as requiring a review before a merge happens.

On the first Wednesday of the month, we will merge anything in development into staged. We will then have a week of review, during which internal and external people can pull down the code and run it against their LRS. After this feedback, if appropriate, the final merge will happen on the second Wednesday of the month, to coincide with the Conformance Test Group meeting. As long as the meeting goes well with respect to any open issues, the merge happens; if there are issues, the merge waits for the next month. Hopefully we can gain more stability and stretch the cycle out (2, 3, or 6 months).

We had the Test Suite in beta from October to April, but some vendors didn’t come to the table until things were final.

Tom then took questions:

Dennis asked about the hosting of the Test Suite. Tom answered that the Test Suite has a front end that initiates a test for you, so there is no need to pull down code or libraries; we learned from SCORM that requiring that is problematic. As for cmi5 building a Test Suite and how difficult it would be to leverage ADL’s work: the LRS conformance tests are just test cases on a platform, so cmi5 could easily use ADL’s Test Suite approach for implementing packages, etc. We can go through the details in another meeting. The front end is protected, but the process itself could be useful and public, and if there is interest, some of it could be abstracted out. Dennis said that he felt we would much rather use it as a service. The hosting might have to be on a dedicated server; ADL may be able to do this, but that would be up to ADL government leadership. ADL has no problem opening up what it can.

Art has been looking at this from a Cucumber perspective. Where does this lead? How does it generate code from plain English? Is this the right path? Art found it hard to use, and beyond a free “pointless” training, the rest is paid; he still can’t figure out the end result. Tom found that Cucumber is frustrating for anyone who knows JavaScript. It was brought up because non-coders could potentially contribute test cases, but at the end of the day it was coders bringing tests forward. We haven’t committed to anything as a group; it is one of the three choices on the table, and we can figure out at the next meeting whether we want to use it or not. There could be Cucumber-to-Mocha tools. Chris Thompson believed that Cucumber is focused on Java, not JavaScript, but it could have multiple levels, so there could be some conversion potential.
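For context on the Cucumber question: the plain-English steps do not generate code by themselves; a developer still writes a step definition for each phrase. A minimal sketch using the JavaScript implementation (cucumber-js), with a hypothetical step, endpoint, and statement, might look like this:

```javascript
// Hypothetical cucumber-js step definitions. The Gherkin text, LRS URL, and
// statement contents are illustrative assumptions, not part of any agreed test suite.
//
// Feature file (plain English):
//   Scenario: LRS rejects a statement without a version header
//     When I POST a statement without the X-Experience-API-Version header
//     Then the LRS responds with status 400

const assert = require('assert');
const { When, Then } = require('@cucumber/cucumber');

let response;

When('I POST a statement without the X-Experience-API-Version header', async function () {
  response = await fetch('https://lrs.example.com/xapi/statements', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      actor: { mbox: 'mailto:learner@example.com' },
      verb: { id: 'http://adlnet.gov/expapi/verbs/experienced' },
      object: { id: 'http://example.com/activity' }
    })
  });
});

Then('the LRS responds with status {int}', function (expected) {
  assert.strictEqual(response.status, expected);
});
```

The point of contention in the meeting maps onto this split: the feature file reads like plain English, but the step definitions are still JavaScript that a coder has to write and maintain.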

Chris asked whether there are better ways to describe a Mocha test in English before it goes out: say someone wants to do an “X test”, how does that get turned into a Mocha test? Tom said ADL had a conformance test document; no requirement came from outside of it, but people could provide feedback on each test. Chris recommended our group look at this document for style.

Art said that the only real difference would be the sequential aspect.

We then had just a couple of minutes left and decided to finish the discussion next week, when a decision could potentially be made about the direction of the Test Suite development.

Other useful links:

Mocha Test suite: https://github.com/adlnet/lrs-conformance-test-suite

Conformance Requirements: https://github.com/adlnet/xapi-lrs-conformance-requirements