[Development] Create a new import dataset type
The goal of this page is to describe the steps needed to create a new import type and its linked dataset type. The new data type is referred to as [XXX] throughout this page.
Use case
The principal steps of an import from the user's perspective:
- The user logs in
- The user opens "Import data" / "From [XXX]" in the left panel
- The user selects an archive, usually compressed
- Some metadata about the archive is displayed; if needed, the user can choose to import only part of the archive or the whole archive
- The user selects the context of the import (study, center, subject, examination, etc.)
- A job is displayed, with its progress, in the "Jobs" menu at the top of the left panel
- The datasets are visible to the user and can be found through the Solr search
The principal steps of an import from the Shanoir perspective:
- The archive is uploaded to the import microservice and kept in a "tmp" folder
- The archive goes through a security check
- The archive is unzipped
- The archive is analyzed (is it really data of type [XXX]?) and read to extract some metadata
- An extended 'importJob' object is created and sent to the front end so the user can make their choices
- The Datasets microservice receives the 'importJob' object with all the context
- New [XXX]Dataset / [XXX]DatasetAcquisition entities are created
- The file is stored (either in the PACS or on the file system) and referenced in a DatasetExpression and a DatasetFile
- The newly created dataset is indexed in Solr
Technically
[FRONT]
- Add a menu option to go to the [XXX] import page
- Add the redirection to actually go to the [XXX] import page
- Create the [XXX] import page with the data upload
- Create the [XXX] import data selector page, if necessary
- Create the [XXX] clinical context import page
- Create the [XXX] import service to call the import/datasets API
- Create a new [XXX]dataset component
- Create a new [XXX]dataset-acquisition component
- Create a new [XXX]data-model component
- Adapt the dataset-component page to better manage display/download
- Adapt the dataset-tree component to better manage display/download
[BACK/IMPORT]
In ImporterApiController, implement a new API method to receive the file (usually as a zip). In this method (a sketch follows the list below):
- Unzip the file
- Check data type consistency (first only by the extension)
- "Read" the data and extract some metadata if possible.
The idea here is that we do not want to simply "store" whatever file a user uploads, but to actually gather some metadata. This makes it possible to:
- Get some information to display in Shanoir (not just a file name)
- Check data consistency (the user can verify that they uploaded the right file)
- Check that the data really is an [XXX] dataset
At the end of this method, we create a [XXX]ImportJob object ([XXX]ImportJob.java extends ImportJob with [XXX]-specific elements) that holds all the information about the import.
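A minimal sketch of such an extended import job; the extra fields are only examples of [XXX]-specific metadata, the real ones depend on what can actually be read from the archive:

```java
// XxxImportJob.java - extends the generic ImportJob with [XXX]-specific metadata
public class XxxImportJob extends ImportJob {

    // Example metadata shown to the user and forwarded to the Datasets microservice
    private String xxxFormatVersion;

    private int numberOfDataFiles;

    public String getXxxFormatVersion() {
        return xxxFormatVersion;
    }

    public void setXxxFormatVersion(String xxxFormatVersion) {
        this.xxxFormatVersion = xxxFormatVersion;
    }

    public int getNumberOfDataFiles() {
        return numberOfDataFiles;
    }

    public void setNumberOfDataFiles(int numberOfDataFiles) {
        this.numberOfDataFiles = numberOfDataFiles;
    }
}
```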
[BACK/DATASETS]
Classes to extend
- Dataset
- DatasetAcquisition
- DatasetDTO
Classes to create:
- [XXX]DatasetDecorator
- [XXX]DatasetMapper
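A minimal sketch of the two new entities, assuming Dataset and DatasetAcquisition are JPA entities exposing a getType() method; mirror whatever the existing modality subclasses actually do for annotations, discriminators and type names:

```java
import javax.persistence.Entity;

// XxxDataset.java
@Entity
public class XxxDataset extends Dataset {

    @Override
    public String getType() {
        return "Xxx";
    }
}

// XxxDatasetAcquisition.java (separate file)
@Entity
public class XxxDatasetAcquisition extends DatasetAcquisition {

    @Override
    public String getType() {
        return "Xxx";
    }
}
```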
Implement a new method in DatasetAcquisitionApiController
Implement a new method in ImporterService
This method is responsible for creating a dataset acquisition and a dataset, and for linking them to the examination. It is also responsible for creating a "DatasetFile" and storing it appropriately on the file system.
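A minimal sketch of such an ImporterService method. The entity accessors (setExamination, setDatasets, setPath, ...), the repository and the file name and storage path are assumptions used to illustrate the responsibilities listed above, not the actual shanoir-ng API:

```java
public XxxDatasetAcquisition createXxxDatasetAcquisition(XxxImportJob importJob,
        Examination examination) throws IOException {

    // Create the acquisition and link it to the examination chosen by the user
    XxxDatasetAcquisition acquisition = new XxxDatasetAcquisition();
    acquisition.setExamination(examination);

    // Create the dataset and link it to the acquisition
    XxxDataset dataset = new XxxDataset();
    dataset.setDatasetAcquisition(acquisition);
    acquisition.setDatasets(Collections.singletonList(dataset));

    // Copy the imported file from the import work folder to its final location
    // on the file system (this example does not use the PACS);
    // "data.xxx" and the target folder are placeholders
    Path source = Paths.get(importJob.getWorkFolder()).resolve("data.xxx");
    Path target = Paths.get("/var/datasets/xxx", source.getFileName().toString());
    Files.createDirectories(target.getParent());
    Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);

    // Reference the stored file through a DatasetExpression and a DatasetFile
    DatasetExpression expression = new DatasetExpression();
    expression.setDataset(dataset);
    DatasetFile datasetFile = new DatasetFile();
    datasetFile.setDatasetExpression(expression);
    datasetFile.setPath(target.toUri().toString());
    expression.setDatasetFiles(Collections.singletonList(datasetFile));
    dataset.setDatasetExpressions(Collections.singletonList(expression));

    // Persist the whole graph (assuming a repository that cascades to datasets,
    // expressions and files)
    return datasetAcquisitionRepository.save(acquisition);
}
```

Once the acquisition is saved, the dataset can be indexed in Solr so that it appears in the search, as described in the overview above.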