Technical debt solution design

Datamodel to the backend - 10-12 days

Database thingy

To discuss:

  1. Generic solution for applying data manipulations from interactive inputs such as:
    1. Scaling all assets based on grid connections
    2. Scaling specific assets based on ID
  2. Development environment for the AnyLogic modellers?

Options for coupling to the cloudclient and for implementing edits on the datamodel:

  • Djantic makes it possible to export Django models using Pydantic's serialization logic
  • Just basic serialization? `serialized_obj = serializers.serialize('json', [obj])`
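A minimal sketch of the basic-serialization option, using Django's built-in serializers on a hypothetical Asset model (the actual model and app names in the datamodel will differ):

```python
from django.core import serializers

from main.models import Asset  # hypothetical model and app name

# Serialize a single object to JSON, as in the snippet above.
obj = Asset.objects.first()
serialized_obj = serializers.serialize("json", [obj])

# The same call works on a whole queryset.
serialized_all = serializers.serialize("json", Asset.objects.all())
```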

Development environment

Features that could be useful:

  1. A development environment that allows for relatively easy changes to the datamodel

Django's migrations make it easy to change the datamodel. Allowing schema changes through a remote tool is not desirable, because it can break a lot of things. Easy changes to the data itself are possible through the Django admin or by connecting pgAdmin to the database directly; the Django admin is the preferred option.
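Exposing a model in the Django admin only takes a few lines per model; a sketch with a hypothetical Asset model:

```python
# admin.py - a sketch; Asset and its fields are assumptions for illustration
from django.contrib import admin

from .models import Asset


@admin.register(Asset)
class AssetAdmin(admin.ModelAdmin):
    # list_display and search_fields make manual data edits easier to navigate
    list_display = ("id", "name")
    search_fields = ("name",)
```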

  2. Copy the production database to local

Create a bash script that synchronises environments but asks for a password to prevent unwanted access. There is already a script for this in Wagtail Pipit: prod_to_local.sh.

  3. A generative way of building scenarios based on math (distributions, shares, types, etc.)

Not really sure what you mean by this. Django has the possibility of adding calculated fields, called properties, on a model. You can also write a function that generates a scenario based on the parameters stored in the model.
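A sketch of such a calculated field, assuming a hypothetical GridConnection model (the model and field names are illustrative only):

```python
from django.db import models


class GridConnection(models.Model):
    capacity_kw = models.FloatField()
    utilisation = models.FloatField()  # share between 0 and 1

    @property
    def effective_capacity_kw(self) -> float:
        # Computed on access, not stored in the database.
        return self.capacity_kw * self.utilisation
```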

  4. A way of pushing locally developed model configurations

With the loaddata functionality it is possible to import data into the database. Again, updating the schema through an import is not desired, so that should remain a development action. If a conversion needs to be made between the configuration and the database, a custom Django management command should be created.
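A sketch of such a custom management command, assuming the configuration has been converted to Django's fixture format (the command name and the conversion step are assumptions):

```python
# management/commands/import_configuration.py - sketch
from django.core.management import call_command
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Import a locally developed model configuration"

    def add_arguments(self, parser):
        parser.add_argument("fixture_path")

    def handle(self, *args, **options):
        # Any configuration-to-fixture conversion would happen here,
        # before handing the result to Django's loaddata.
        call_command("loaddata", options["fixture_path"])
```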

  5. Ability to generate JSON files for local simulation with AL

With the dumpdata functionality it is easy to generate JSON files from the database; however, this does not take the dynamically generated properties into account. Those can be included with a simple API endpoint that loads the models into a dict and outputs a JSON file.
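A sketch of such an endpoint, reusing the hypothetical GridConnection model from above so the computed property ends up in the output:

```python
from django.http import JsonResponse

from .models import GridConnection  # hypothetical model


def export_grid_connections(request):
    data = [
        {
            "id": conn.id,
            "capacity_kw": conn.capacity_kw,
            # Unlike dumpdata, computed properties can be included here.
            "effective_capacity_kw": conn.effective_capacity_kw,
        }
        for conn in GridConnection.objects.all()
    ]
    return JsonResponse(data, safe=False)
```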

  6. If we implement a way of getting data (e.g. technical assumptions) from the Django DB ahead of running AL, then it should be possible to get that data from production when running locally
  7. It should be possible to connect to intermediate steps of the model chain; e.g. contact an API endpoint for each of the services (economic, upscaling, etc.)

Functional requirements

  1. Ability to map interactive inputs to specific parts of the datamodel, such as asset attributes

We can create a parent model, use generic relations, or use django-polymorphic to control which attribute an interactive input maps to (see the sketch after this list). If done well, this hierarchy of models will make it easy to fulfil the other requirements, because you can filter and group based on the model type.

  2. Filter functionality for mapping interactive inputs (conditionals)
  3. The mapping should be able to balance attribute groups (the total number of cars stays constant if the technology share changes)
  4. One interactive input should be able to map to multiple filter-and-adjust steps
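A rough sketch of how one interactive input could map to multiple filter-and-adjust steps; all names here are assumptions, and generic relations or django-polymorphic could replace the plain fields used below:

```python
from django.db import models


class InteractiveInput(models.Model):
    name = models.CharField(max_length=100)


class FilterAdjustRule(models.Model):
    """One filter-and-adjust step; an input can have several."""
    input = models.ForeignKey(
        InteractiveInput, on_delete=models.CASCADE, related_name="rules"
    )
    attribute = models.CharField(max_length=100)           # attribute to adjust
    asset_id = models.IntegerField(null=True, blank=True)  # optional ID filter

    def apply(self, value, queryset):
        # Conditional filter first, then adjust the targeted attribute.
        if self.asset_id is not None:
            queryset = queryset.filter(id=self.asset_id)
        queryset.update(**{self.attribute: value})
```

Balancing attribute groups (requirement 3) would then be an extra step after all rules of an input have been applied, e.g. rescaling the remaining technology shares so the total stays constant.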

Modules

Software architecture

Add tests - 4 days

Clean up economic module - 3 days

Configs through the CMS instead of yamls - 3 days

Overview:

  1. Cloudclient API config (contains the API key and the server URL)
  2. Cloudclient experiment config (model version, parameters, etc.)
  3. Economic config (does not fully exist yet; some values are hardcoded, but it contains the queries)
  4. Upscaling config (contains queries and the mapping of variables)
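One possible shape for this, using Wagtail's settings module so the configs live in the CMS (a sketch; the base class name varies between Wagtail versions, and the fields are assumptions based on the list above):

```python
from django.db import models
from wagtail.contrib.settings.models import BaseSiteSetting, register_setting


@register_setting
class CloudClientSettings(BaseSiteSetting):
    # Would replace the cloudclient API config yaml.
    api_key = models.CharField(max_length=255)
    server_url = models.URLField()
```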

Caching - 3 days

Clean up pepe - 2 days

Small bug fixes - 2 days

Combine ETM queries - 1 day