Development Process - uiowaSEP2024/002_ImagePro GitHub Wiki
This page contains the overview of the work completed in each of the project's sprints and the meeting minutes summarizing our team meetings.
Sprint 1
- AWS set up #20
- API research #21
- Static code analysis tool - Sonar #9
- Example code analysis
- Backend research #19
- Issue templates #1
- Git LFS #39
- Initial Pre-Commit setup with Black #3
- Initial Repo Setup #4
- Wiki page - Motivation #25
- Initial .gitignore setup #5
- Pull request template and issues #14
- Hospital viewing studies in progress #32
- Integrating code bases #16
- Frontend research #18
- Merge .github folders #27
- Merge precommit files #28
- Initial GH actions workflows - static analysis #23
- Create example Medical Imaging tool #31
- Deploy job-mon-app locally to create a job #44
- Set up AWS accounts and authentication #33
- Better project views #37
- Adding correct permissions for Amazon CDK #29
- Reorganize code base #40

We successfully set up the old project (the job monitoring app) and are able to run example jobs and track their status.
We spent this sprint learning the job-monitoring-app and getting it working in all of our environments. Because of this, no new features were added. We plan to enhance the 'Tracker' app and add new features in future sprints.
- Total points accomplished: 39 out of 53
- Backend test coverage: 96%
- CDK infra test coverage: 89%
- Frontend test coverage: 73%
- TrackerApi test coverage: 93%
- Getting workflows to pass
- We had multiple instances where our CI workflows would fail. A few problems we faced were due to permissions, out-of-date versions, and files that could not be found because of incorrect imports. We pair-programmed to fix most of these issues. When we implemented ruff as a pre-commit hook, we discovered a failing unit test in a workflow. This was difficult to track down; to resolve it, we ran tests on specific commits to see where the failure originated.
- Getting the backend (Postgres) running in some of our environments.
- Both Ivan and I came across an error with Postgres because no `postgres` role existed. We both ran the command `createuser -s postgres` to solve it.
- Finding burn-down chart tools
- We could not find a good tool to generate a burn-down chart for our sprint. We currently use Excel to create a very simple chart. We are still deciding what tool to use and how to generate these charts.
- Lower our duplicate code percentage
- Finish getting our app deployed on AWS
- Create more robust documentation for the backend, especially around postgres
- Modify Example Tool to simulate real-world product running in the system
- Communication was really effective. Everyone participated well in every meeting, coded together, asked questions, and provided help.
- Documentation of issues and processes is going well. We have extensive documentation on our processes and on the various components of our app and project. We also keep good meeting minutes.
- We need a less generic pull request template. Add more templates for chores, and comment out the example text on templates.
- We need to schedule a sprint set-up meeting where we create a sufficient number of issues and populate the next sprint's backlog. This should be done before we start the sprint. Proper tracking processes need to be documented in Standards, including labeling, adding issues to the project, dating issues at creation and at completion, and modifying milestones and other criteria at the beginning of and throughout the sprint.
- Smaller issues and pull requests: we need to break the tasks we are tackling into smaller, more manageable pieces.
- Get everyone's environment configured correctly so that we are all on the same settings and can run our applications consistently.
- Hold a sprint start-up meeting to populate the backlog and configure our tracking tools so the process moves smoothly, and ensure that all issues are configured correctly with labels, dates, etc.
- More research on the milestones that are giving us big obstacles (AWS).
Sprint 2
Dates: 2-12-24 --> 2-23-24
- #37 Better Project Views
- #47 Update Templates
- #48 ER Diagram of old job-mon-app
- #49 ER Diagram for new app
- #62 Set up Orthanc Configs for Client and Example Tool
- #73 Acceptance Testing
- #76 Additional DICOM data
- #79 Configuration for pytest-cov
- #80 Connect Server Logger to API
- #86 Implement the Update Step Functionality
- #88 Jobs need to be accessible via the Navbar
- #90 Add DocStrings for Autodocs
- #91 Change Name of JobMonitoringApp
- #94 Sonar needs to ignore Test dirs
- #99 Better error messaging on frontend
- #100 Missing Requirements
- #110 Failing event datetime test
- #63 Extend Example tool to interact with PACS via Pydicom
- #65 Refactor naming for frontend
- #66 Refactor naming for backend
- #70 Compatible logging system for example tool
- #71 New database design implementation
- #77 Dependency Upgrades
- #78 Upgrade Pydantic
- #89 Fix frontend so we can see step progress
- #95 Fix Cypress Testing
- #101 Docker Compose for Orthanc Dockers
- #106 Change Status Schema

- Currently our app runs the original job_monitoring_app, integrated with the orthanc_logger_agent, which receives mock jobs and uses the trackerapi to log them into the database and show them on the frontend.
- Added implementation to orthanc_logger_agent
- Integration between trackerapi and orthanc_logger_agent
- Refactoring of job_monitoring_app
- Points Planned: 54
- Total points accomplished: 39
- Backend test coverage: 96%
- Frontend test coverage: 74%
- TrackerApi test coverage: 94%
- Importing the trackerapi module into the orthanc_data_logging module was difficult due to the import structure of the job_monitoring_app and the structure of our project. We resolved this by modifying the import structure and our PYTHONPATH variable.
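The same class of import problem can be sketched with a throwaway package; `trackerapi_demo` and the paths below are hypothetical stand-ins, not our real module names:

```python
import os
import sys
import tempfile

# Hypothetical stand-in layout: a shared package lives outside the
# importing module's directory, so a plain `import` fails until its
# parent directory is on sys.path.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "trackerapi_demo")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("API_VERSION = '1.0'\n")

# Runtime equivalent of `export PYTHONPATH="$root:$PYTHONPATH"` in the shell.
sys.path.insert(0, root)

import trackerapi_demo  # now resolvable regardless of the working directory

print(trackerapi_demo.API_VERSION)
```

Setting PYTHONPATH in the shell or an activation script has the same effect without touching code, which is the route we took.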
- Changing the schema of the backend proved harder than expected because of all the references to the old schema throughout the project. There are still unexplained errors from our schema change that are causing issues. This is high priority and needs to be explored ASAP.
- A lot of bugs with the job_monitoring_app, many of which take a long time to debug due to our lack of familiarity with the methods it uses and its lack of documentation. We are currently combing through the job_monitoring_app to create extensive documentation and comments.
- Major changes to the database schema, to be made by Zach and Audrey
- Implement the receiver agent fully, to a minimum viable product
- Despite some team members having an increase in outside obligations, we were still able to accomplish a lot this sprint, getting new functionality integrated with the old.
- Our ability to document code and make it cleaner and more readable has improved a lot. This will make our project much easier to modify as we move forward.
- Our issue creation and tracking have improved since last sprint, which was one of our main goals for this sprint.
- We discovered a lot about what worked and didn't work regarding process and acted on it, including pull request etiquette and documentation.
- Gained a much deeper understanding of the old code base as we continue to dive into it.
- We need to make smaller, more modular pull requests so that integrating new code into our main branch isn't such a hassle. We produce far fewer bugs this way and write more code at a faster pace.
- Meeting agendas are something we have not been using; they would expedite our process and make us more efficient.
- Better timeboxing of issues when creating new things to add to our project.
- Had a meeting about pull requests and understanding best practices for implementing new code (i.e., make pull requests smaller and modular).
- Put an agenda in our meeting minutes for non-scrum meetings.
Sprint 3
Dates: 2-24-24 --> 3-9-24
- #106 Change Status Schema
- #95 Fix Cypress Testing
- #89 Fix Frontend so we can see steps progress
- #113 Update jobs to studies
- #115 Make hospital table
- #116 Make provider table
- #121 Use ENV File for API key
- #124 Create PACS table
- #125 Create Product table
- #57 Dockerize Example tool
- #129 Decrease size of BrainTool Docker image
- #118 Orthanc Reciever Agent to download stable studies
- #135 hacky workaround for orthanc receiver agent
- #114 Update job configurations to study configuration
- #70 Compatible logging system for example tool
- #64 Return Results to client via Orthanc
- #58 Dockerizing Frontend and Backend
- #127 Update studies table to have new information
- #60 Create a docker compose for local development
- #77 Dependency Upgrades
- #78 Upgrade pydantic
- #65 Refactor Naming for frontend
- #66 Refactor Naming for backend
- #117 Homogenize step and event
- #126 Update user tables
Our system is able to accept connections from the hospital, process the data automatically and pass it through our mock client's product, and return the product results back to the hospital while monitoring all those steps in the system and providing live visualizations for providers and hospitals in our frontend app.
The technical improvements include:
- redesigned database (renaming of tables, new tables added)
- functional medical data management (PACS) servers
- dockerized system components (PACS)
- redesigned and integrated frontend (new icons, displays all steps and shows statuses)
- Points Planned: 45
- Points Completed: 47
- Backend test coverage: 96%
- Frontend test coverage: 74%
- TrackerApi test coverage: 94%
- Modifying the schema required major changes to the app and was difficult to do in a modular fashion. We had to design a pattern of changing the schema and app in specific steps where we add tests for new functionality, add the new functionality, modify the app so any obsolete functionality is no longer used, and then finally remove the old functionality and change the database schema again.
- Faced a lot of trouble getting the Docker containers to work together in docker compose. Ran into a lot of CORS errors but were able to resolve them by adding the needed URLs to the "allow-origins" option.
We plan to fully focus on the deployment to AWS, initially using the Docker network, which can be transitioned to Kubernetes in the following sprints. We also plan on finishing up the database redesign, which will consist of heavily modifying the users table and schema.
- Community programming enabled us to debug new code at a high rate. These sessions also served to cross-train members on other parts of the app so that everyone knows what is going on. These were also the times when we got pull requests merged, which went faster thanks to the increased cross-training and communication of the team.
- Came up with good processes for tackling complex problems by breaking them down into smaller pieces BEFORE coding and implementing code in a planned fashion.
- Some issues created are still pretty large and would be more useful broken down into smaller parts. Now that we have a better understanding of the project and where it's going, this should be easier to do.
- At our planning meeting we will focus on making smaller, more modular issues. These should preferably be 5 points at the most. More time needs to be spent by the team on estimating these issues.
Sprint 4
Dates: 3-18-24 --> 3-29-24
- #126 Update user tables
- #119 Sphinx Autodocs
- #149 Remove API Key generation from provider
- #146 Dockerize orthanc receiver
- #60 Creating Docker Compose for local development
- #58 Dockerizing Backend and Frontend
- #117 Homogenize step and event
- #127 Update studies table to have new information
- #153 Update sign up
- #152 Create admin dashboard
- #155 Base Kubernetes Deployment
- #157 Docker PreCommit Lint

Our system is able to accept connections from the hospital, process the data automatically and pass it through our mock client's product, and return the product results back to the hospital while monitoring all those steps in the system and providing live visualizations for providers and hospitals in our frontend app.
The technical improvements include:
- redesigned database (updated roles in user table from customer, provider to hospital, provider, admin)
- removal of api key generation from provider to prep it for an admin only functionality
- Sphinx autodocs to further enhance our application's documentation
- Dockerizing our receiver agent
- Refactoring our receiver agent into two separate entities: listening agent, study pipeline agent
- Points Planned: 35
- Points Completed: 21
- Backend test coverage: 97%
- Frontend test coverage: 74%
- TrackerApi test coverage: 94%
- We had a shortened amount of time to complete work this sprint, and Zach was unavailable for a portion of it, so despite planning for less work, we still fell a little short of the mark. In the future we need to communicate more about planning, and if we get into a similar situation we should pivot and adjust what we want to get done by sprint end.
We plan to finish up our planned frontend enhancements: an admin dashboard and having dropdowns on sign up to choose a hospital/provider. We also plan to focus on the deployment to AWS to Kubernetes which will require more research and practice using minikube.
- We have found a good process for working together on issues and making incremental changes while keeping a working MVP. We have continued to use this process and it is serving us well.
- With a more limited sprint, we did a really good job of setting goals that were reachable in the time available.
- After our testing plan meeting with Curt, he suggested we get our frontend coverage to 80%+.
- Create some sort of documentation to track research that didn't contribute to anything technical; that progress is not tracked, which makes it seem like we did less work than we did.
- Add more frontend testing.
- Document research on the wiki or another public tool.
Sprint 5
Dates: 3-29-24 --> 4-19-24
- #153 Update sign up
- #162 Research Kube and AWS (Ivan)
- #161 Research Kube and AWS (Audrey)
- #163 Research Kube and AWS (Zach)
- #167 Update seed.py
- #169 Update dicom conversion
- #152 Create Admin Dashboard
- #148 Remove old docusaurus documentation
- #177 Deploy frontend to AWS EKS
- #173 Deploy subsystem to ECK
- #160 Increase frontend test coverage
- #174 Kubernetes (PACS, agent, loop)
- #175 User/Developer Manual
- #176 Public Deployment
Our system is able to accept connections from the hospital, process the data automatically and pass it through our mock client's product, and return the product results back to the hospital while monitoring all those steps in the system and providing live visualizations for providers and hospitals in our frontend app.
We also have a completely functional local Dockerized environment.
The technical improvements include:
- AWS EKS cluster set up with some configurations set. Were able to deploy a piece of our system to EKS successfully
- Receiver loop and listening agent for studies separated successfully
- Points Planned: 31
- Points Completed: 41
- Backend test coverage: 97%
- Frontend test coverage: 74%
- TrackerApi test coverage: 94%
- AWS is a beast, and figuring out how to do things properly took a lot of time. Winning just took a lot of man-hours.
- Kubernetes, and getting our system properly configured, was also very difficult, with a lot of pieces. Our strategy for this and the AWS problem was to divide into two teams and conquer. This is a strategy we have been using for a while and it has worked pretty well, especially since our good communication and documentation let members cross-train on each other's parts.
- Get all of our components deployed to EKS on AWS
- Extended sprint, and we took advantage of that. Made fantastic progress with our local minikube environment and our initial deployment to AWS (EKS).
- Paired programming. We split into two teams (AWS, Kubernetes) and tackled a lot more than we would have if we had community programmed.
- We need to optimize our Dockers in Kubernetes with here files. Our organization of these isn't the best.
- Lots of trial and error happened this sprint. If we had documentation of what we tried versus didn't try, development would go a lot faster.
- Our user manual.
- Creating a user/developer manual that explains our entire app.
- Communicate what we've tried to the team, whether in Teams or in our codebase.
Sprint 6
Dates: 4-20-24 --> 5-3-24
- #180 Use ImageInsights instead of BotImage
- #179 Update repo main README
- #178 Kubernetes (study, trackerapi)
- #174 Kubernetes (pacs, agent, loop)
- #173 Deploy system to ECK
- #182 Study Job Bug
- #188 Setup persistant storage
- #189 Expose cluster to internet
- #190 Create https connection for cluster
- #191 Deploy Study System to EKS
- #192 Deploy and expose frontend via EKS
- #193 Deploy static orthanc for receiving input data
- #194 Deploy and expose backend
- #175 User/Dev Manual

Our system is able to accept connections from the hospital PACS, process the data automatically and pass it through our mock client's product, and return the product results back to the hospital while monitoring all those steps in the system and providing live visualizations for providers and hospitals in our frontend app.
This sprint we were able to successfully deploy our entire application on AWS and access/use it online.
- Points Planned: 78
- Points Completed: 75
- Backend test coverage: 97%
- Frontend test coverage: 73%
- TrackerApi test coverage: 94%
Many, many, many problems, and lots of hours spent debugging them. We had numerous DNS issues, 401 authorization issues, and other minor bugs. We often got together as a group and debugged our way through them. The number of bugs we hit in our Kubernetes configurations and AWS EKS setup is too large to go through, but together as a team we ground through all of them to create a working product.
- Pair programming was incredibly beneficial. We split into two teams and powered through AWS and kube. This sprint was mainly just a whole lot of man hours and elbow grease.
- Ideally there would have been more organization around bug handling and documenting the process during implementation, but due to time constraints we struggled with this during sprint 6.
- Document as we go, not after
- Don't wait so long to begin deployment. Deploy early and deploy often, even if it's just a small piece of the overall system.
Meeting Minutes
Members: Ivan, Zach, Audrey, Michal ~6hrs
We now have everything deployed on AWS, and our minikube local environment works with no bugs. We do still have a few bugs in our AWS environment: we realized that our frontend is making way too many calls to our backend, causing our database to crash. We have attempted to refactor some code to lessen the calls. We can successfully pull data from our Orthanc servers, spin up kube jobs, and view results on the frontend; it is just very slow, and the database sometimes crashes.
Next steps are fixing the calls to the database and creating the rest of our documentation.
Members: Ivan, Zach, Audrey, Michal ~6hrs
We spent most of the time getting our Orthanc deployed to AWS. We also attempted to create an EC2 instance for the hospital Orthanc but might take a different direction. Our minikube environment is running fully; we just need to work on some minor bugs, such as data not being removed from the internal Orthanc. We spent most of the 6 hours today debugging various things but overall made lots of great progress.
Members: Ivan, Zach, Audrey, Michal ~12hrs
We spent the whole week working on AWS and Kubernetes. Zach and Ivan have made great progress on the cluster we have in AWS. They are still working through some Fargate issues and might change course and not use Fargate due to EFS issues. Audrey and Michal have made great progress on getting the rest of our app (orthanc, study handler, trackerapi) working in minikube. As of 4-26 the whole pipeline seems to be connected and running, as we were able to see the study and its events in the database. The next step for minikube is setting up correct URLs so we can actually navigate via the frontend.
This whole week was LOTS of trial and error but a great learning experience for everyone.
Members: Ivan, Zach, Audrey, Michal ~2hrs
Agenda:
- Ivan and Zach work on AWS
- Audrey and Michal work on kubernetes
What was done:
- Ivan and Zach have made really good progress on AWS. They deployed frontend, backend, database as EC2 clusters and are working on deploying our small kube system to EKS. It is a lot of trial and error but they have made good progress working through bugs
- Michal and Audrey fixed the SonarCloud complaints on the YAML files and merged the open PR. Audrey worked on creating a kube deployment and service for PACS and got it working with no errors.
Next steps:
- Keep working on getting the whole system in EKS.
- Create kubernetes deployment/service for the receiver agent and listening agent
- Verify these new kube pods can communicate
Members: Ivan, Zach, Audrey, Michal ~2hrs
Agenda:
- Each of us create an AWS account with correct permissions
- Each of us download AWS cli
- discuss next steps
What was done:
- Discussion of this week's work.
- The EKS CLI and AWS CLI were downloaded, and Ivan created new AWS accounts for the team. We will use Fargate.
- Audrey and Michal attempted to figure out how many resources need to be allocated for the kube cluster we want to deploy to AWS. Going to test out some default values; we can always reallocate.
Next steps:
- Merge PR once we allocate resources in the manifests.
- Zach and Ivan will pair on deploying and setting up AWS
- Audrey and Michal will work on creating manifests for the study listener and study handler, and then refactoring the pvc connection so that it communicates with a PACS, and lastly refactoring the study handler so that it can be parameterized.
Members: Ivan, Zach, Audrey, Michal ~20mins
- Zach, Audrey, Ivan are stuck on implementing the ingress but verified that the backend, frontend, and database can all communicate
- We have decided to put ingress on hold for now and focus on deploying what we have to AWS
- Michal is almost done with getting his deployments working for handling the studies and communicating to the pacs servers
- Next step is to get what we have working in minikube deployed to EKS on AWS
Members: Ivan, Zach, Audrey, Michal ~1.5hrs
Agenda:
- Work on ingress
- Get the backend deployment connected to the database persistent volume
What was done:
- Ivan was able to get the backend communicating with the database and he created an ingress resource for the frontend
- Zach was troubleshooting storage on his laptop and had to restart his minikube
- Audrey was able to add ingress to her minikube cluster and can now visit the frontend via the cluster ip
To-do:
- get the ingress resource working (need to configure dns)
- connect the frontend to the backend/database
Members: Ivan, Zach, Audrey ~40mins
Agenda:
- Work on ingress
What was done:
- Met briefly to discuss our progress on kube deployments.
- Ivan was able to get all the yaml files created to spin up our database.
- Audrey found more resources for applying ingress controllers and creating ingress resources
This meeting was cut short and we are meeting tomorrow morning
Members: Ivan, Zach, Audrey, Michal ~30mins
- Prepped for our demo this morning
- Discussed priorities for this week (kube deployment)
- Audrey, Zach, and Ivan plan to meet up tomorrow to work on applying ingress
- Audrey has a PR up for updating the seed file
- Michal has a PR up for updating our dicom classifier
Members: Ivan, Zach, Audrey, Michal ~30mins
- Prepped for our demo this morning
- Discussed priorities for this week (kube deployment)
Members: Michal, Audrey, Zach, ~1.5 hours
Agenda:
- Prep for our demo
What was done:
- Discovered a bug with our dockerized brainmask tool and spent time debugging it; the bash script was not working correctly, so we reverted to just running the Python file instead of the Docker container
- Recorded our demo to prep for tomorrow's presentation
Members: Ivan, Zach, Audrey, Michal ~40mins
- Audrey has a PR up for studies page
- Michal is working on a study handler in minikube and gave a little demo on minikube and kubectl
- Ivan showed his work so far on separating the agent; however, we might put it on the back burner until we get deployment working, since we have an MVP
- Zach, Audrey, Ivan plan to meet this weekend to research more kubernetes, specifically ingress
At some point over the weekend we will record our demo and get our mvp polished up. As of right now our priorities are deployment, creating a user manual, increasing frontend test coverage, and updating our seed file to prepare for our demo.
Members: Zach, Audrey, Michal ~15mins
- Zach made good progress on dropdowns, will have PR up soon
- Audrey making good progress on studies page
- Michal continuing development with minikube
Members: Ivan, Zach, Audrey, Michal ~30mins
Sprint 4 retro notes are in the sprint 4 wiki page.
- Audrey is continuing work on updating the studies page
- Zach is continuing work on creating dropdowns on sign up
- Ivan is continuing work with separating functionality of our agent
- Michal is continuing development with minikube
Members: Ivan, Zach, Audrey, Michal ~30mins
- Ivan is continuing work on separating functionality within the receiver agent.
- Michal is continuing research on our deployment to kubernetes.
- Audrey created a PR for removal of api key generation and will start looking at implementing an admin dashboard.
- Zach will start looking at adding dropdowns for selecting a hospital/provider on signup.
Members: Ivan, Zach, Audrey, Michal ~30mins
- Ivan is working on separating our receiver agent so that we have one docker for listening and another for the study pipeline.
- Michal is continuing research on our deployment to kubernetes
- Audrey and Zach created a PR for the user table changes
- Zach is continuing work on Sphinx autodocs, we are running into a CI testing failure at the moment
- Audrey is beginning work on removing the api key generation from the provider role
Members: Ivan, Zach, Audrey, Michal ~1.5 hr
- In-depth discussion on next steps for structuring providers and API keys, and the attributes needed for the PACS table
- Zach and Audrey are continuing progress on updating the user table
- Ivan is finishing up getting the Orthanc docker working within the compose
- Zach continuing progress on Sphinx autodocs
- Michal continuing researching kubernetes and our deployment strategy
We created a diagram during this meeting. See the Diagrams 3-25-24 page in our wiki.
General Notes:
- We will want an admin view that consists of viewing all studies (with the option to choose which hospital's studies to view) and viewing all providers and their api_keys.
- We want to remove the ability to generate api_keys on the frontend. Instead, when a provider is created we will generate an api_key and hardcode it to that user. Basically, we will have one profile (provider user account) for each provider, and we will seed it. In the real world a provider provides a product, but for the scope of our project right now, providers and products are the same thing.
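A minimal sketch of what seed-time key generation could look like; the function and provider names are hypothetical, not taken from our seed.py:

```python
import secrets

# Hypothetical seed-time helper: each provider account gets exactly one
# API key, generated at creation time instead of through the frontend.
def create_provider(name: str) -> dict:
    return {
        "name": name,
        # 32 bytes of URL-safe randomness; stored once, never regenerated
        "api_key": secrets.token_urlsafe(32),
    }

providers = [create_provider(n) for n in ("provider-a", "provider-b")]
```

Generating the key in the seed script keeps it out of the UI entirely, which matches the plan of one hardcoded key per provider profile.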
- A provider will still want to view the analytics of studies performed by their product.
- A hospital user will still want to view studies they've sent.
- When a user signs up they will have the option to choose hospital or provider. If they choose hospital, they will need to select which hospital.
- A hospital can be associated with multiple PACS.
- We plan to use EKS and Fargate for deployment. This might change as we look into how to host PACS at static IP addresses.
Members: Zach, Michal, Audrey ~30 mins
- Progress made on Sphinx autodocs, can we find a way to make it look better and automate creation?
- A chore was created to delete old Teams docs
- Michal has done some research on AWS and Kubernetes and which route we should go
TODO:
- Research more into sphinx autodocs and address the issues noted above (Zach)
- Get started on remaining db changes (Zach, Audrey)
- Continue AWS research (Michal)
Members: Audrey, Michal, Ivan, Zach ~30 mins
Notes for sprint 3 retro can be found on sprint 3 wiki. We also planned out points and issues for sprint 4.
Members: Audrey, Michal, Ivan, Zach ~1.5 hours
Agenda:
- Discuss midterm presentation
- Continue working on Dockerizing
We began work on our midterm presentation. Still debugging our docker connections. They can communicate but now we are running into CORS errors.
Members: Audrey, Michal, Ivan ~2.5 hours
Agenda:
- fix error in step 4
- work on Dockerizing
We had a community programming/debugging session to work through the bugs we found in step 4 prior to spring break. Ended up having to rewrite some code and managed to fix the bug. Our receiver agent was creating too many studies and never created a deliverables directory which caused the code to error out. Running into an issue with connecting the frontend, backend, and database Docker containers together. This will need to be worked on further.
Members: Audrey, Zach ~20 mins
Agenda:
- test the job config to study config branch
Zach and Audrey did some more testing on the job-to-study config branch after reordering some migrations. Some downgrade errors were occurring, but these were fixed by adding a default value to a column.
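The failure mode can be reproduced in miniature with stdlib SQLite (our real migrations run against Postgres via Alembic; the table and column names here are made up): adding a NOT NULL column needs a default, or the migration errors out.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE studies (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO studies (id) VALUES (1)")

try:
    # No default: the existing row would get NULL in a NOT NULL column.
    conn.execute("ALTER TABLE studies ADD COLUMN status TEXT NOT NULL")
except sqlite3.OperationalError:
    pass  # fails, as our migration did

# With a default value the same change succeeds for existing rows.
conn.execute(
    "ALTER TABLE studies ADD COLUMN status TEXT NOT NULL DEFAULT 'pending'"
)
row = conn.execute("SELECT status FROM studies").fetchone()
```

In Alembic terms, this corresponds to giving the new column a `server_default` so both upgrade and downgrade can run against a populated table.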
Members: Audrey, Zach, Ivan, Michal ~2 hours
Agenda:
- go over Dockerfile for frontend
- discuss the bug we currently have in main
The Dockerfile for the frontend is good to go for now. Audrey, Ivan, and Zach paired on trying to fix the bug in main. This bug occurred when trying to create a study in the orthanc receiver agent and log it. We discovered that we were trying to slice into a dictionary, so we slightly modified some code and were able to get the agent to run. Now we are running into an error with our BrainTool, and this still needs to be looked into. Audrey worked on finishing up renaming job configurations to study configurations in our db schema; while the team tested the branch, Michal discovered that the migrations do not work if there is already data in the db. Zach and Audrey paired on this error and reordered some migrations.
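A hypothetical reconstruction of the dictionary-slicing bug and the kind of fix we applied (the data here is made up):

```python
# Dicts are not sequences, so slicing one raises a TypeError until you
# slice an ordered view of the items instead.
studies = {"study-1": "received", "study-2": "processing", "study-3": "done"}

try:
    studies[:2]  # roughly what the buggy code was doing
except TypeError:
    pass  # unhashable type: 'slice'

# The fix: materialize the items in insertion order, then slice the list.
first_two = list(studies.items())[:2]
```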
Members: Ivan, Zach, Audrey, Michal ~30 mins
Updated some wiki pages and issues to prep for our SCM meeting with Curt. Zach and Audrey are finishing up the db schema changes for studies, should hopefully have it done by EOD. Michal is working on building out the functionality for the receiver agent to connect orthanc servers and our logger. Ivan/Michal are still working on Dockerizing the rest of our application.
Members: Ivan, Zach, Audrey ~1 hr
Agenda:
- Pair on db schema changes
Zach and Audrey continued pairing on the db schema changes using the new process we discussed in Scrum this morning. It is going much better: we are making small commits and then plan to squash them. Ivan demonstrated the new Dockerfile he created for the example tool and how it builds. It is quite large, so Ivan and Michal are going to pair on making it smaller. We plan to continue working on these items over the weekend and early next week.
Members: Ivan, Michal, Zach, Audrey ~30 mins
Had an in-depth discussion on how to manage our db schema changes. We decided to do it directory by directory, since the backend can run without the frontend. Ivan gave a demo on cherry-picking and squashing commits via the git GUI in PyCharm. Ivan and Michal discussed how we can use Docker Network to create a comprehensive local development environment that will significantly speed up our development process.
Action Items:
- Watch the docker network video Ivan linked in Teams chat
Members: Zach, Audrey, Michal ~1 hr
Agenda:
- Michal demo automatic data retrieval from PACS
Overview: We each gave a brief overview of the directories we are currently working in (Audrey - frontend, Zach - backend, Michal - example tool). Zach and Audrey paired on attempting to update the db schema to rename jobs to studies. It is a lot of work and we need to figure out how to do it more modularly.
Members: Ivan, Zach, Audrey, Michal ~30mins
- Moved issues into our Sprint 3 project
- Estimated how many points we can complete this sprint, see Sprint 3 wiki
- Created two new milestones for this sprint
Members: Ivan, Zach, Audrey, Michal ~20mins
- Generated our burndown chart for sprint 2
- Zach and Audrey are still troubleshooting migrations
- Ivan and Michal will be working on fleshing out the receiver agent this week
- Zach and Audrey will continue making db schema changes
Members: Ivan, Zach, Audrey ~30mins
Details are in Sprint 2 Wiki page
Members: Ivan, Zach, Audrey ~1hr
- Got some PRs reviewed and merged
- Zach and Audrey began working on changing our event model to accept another enum role ("pending")
- Showed new frontend changes
- Closed some issues and modified start and end dates to prep for retro
Members: Ivan, Zach, Audrey ~15mins
- Ivan created/merged a PR for docstrings
- Audrey realized 2 of our Cypress tests were failing and created a bug issue
Discussed what is left to be done for the week:
- Zach/Ivan -> fixing pr 80-connect-logger-to-api
- Audrey -> frontend changes
Members: Ivan, Zach, Audrey ~30mins
- Ivan helped Zach with his environment setup and running our logger and example tool
- Zach gave us a rundown on our updated database schema courtesy of Michal
Members: Ivan, Zach, Audrey ~3hrs
- Zach created an updateEvent function and necessary routing to connect the trackerapi with our logger tool.
- Ran into tons of import issues on all of our environments, Ivan helped and gave a crash course on symlinks and setting up virtual environments (took about 2 hrs to resolve the issues)
- Audrey looked at the jobs component of the frontend to determine how the steps of the jobs were getting pulled and displayed from the backend
- We can successfully run the app and see the job configuration that was created in our logger but are getting Key Errors/500 errors when running our logger so we still have some work to do
Members: Ivan, Zach, Audrey
- Ivan worked on dockerizing the orthanc server
- Zach and Audrey will pair on connecting the logger with the trackerapi later today. Some work needs to be done to update the backend and create some more methods in the trackerapi
- Our team priority this week is to finish up connecting the logger to the trackerapi so that our MVP for sprint 2 can produce a general job configuration
Action Items:
- Figure out how to test docker images
Members: Ivan, Zach, Audrey, Michal ~1.5hrs
- Michal and Audrey worked on getting pytest to exclude certain directories
- Worked on testing presentation
- Discussed orthanc logger Ivan and Michal created
- Created a high level diagram and had a discussion on what will need to be added to our database design in order to support tech providers, hospitals, and PACS
- Michal and Zach paired on how to connect the orthanc logger to the trackerapi
Members: Ivan, Zach, Audrey 20 mins
- Ivan and Michal created a PR with the initial setup for a pyorthanc server and logging system
- We discussed our testing plan to prep for our presentation
- Zach and I will continue to work through database design, will get more info on what needs to be added in tomorrow's working meeting
Members: Ivan, Michal 4h
- Implemented Orthanc PACS servers for the internal system and the mock hospital
- Analyzed and designed the logging system for the project. Crucially, we identified different logging needs for various stakeholders:
- Hospitals do not care about the inner workings of products and want to see simpler steps
- Our clients want to know when and why their products fail. They will want tailored logging systems for their products
- Discussed the system organization and design to manage multiple components
- Implemented initial logging to the data receiver system
- Implemented initial logging to the example medical imaging product
Members: Zach, Audrey, Michal, Ivan 30 min
- Discussed the testing strategy for our project and work on the presentation
- Discussed the tasks for the sprint
- Discussed questions for the last year team - Zach is meeting with them
Members: Zach, Audrey ~1 hr
- Created a few issues regarding diagram creation and the billing feature
- Paired on creating an ER diagram of the original job-monitoring-app
Members: Zach, Audrey, Michal, Ivan ~20 mins
- Not a lot of work done over the weekend, no updates
- Discussed high level scope of sprint 2
- ER Diagrams (Zach and Audrey)
- Finish up AWS deployment and Orthanc setup (Ivan and Michal)
- Enhance example tool's logging (Ivan and Michal)
- Michal informed us he will be gone next week
- Created a high level diagram of our application
- Planning to meet later today to create user stories and community program
Members: Zach, Audrey, Michal, Ivan ~30 mins
Notes about this meeting are in the Sprint 1 Wiki
Members: Zach, Audrey, Michal, Ivan ~20 mins
- Discussed and finished up work on Sprint 1 wiki
- Determined we need a better process for issues, will discuss more during retro later today
Members: Zach, Audrey, Michal, Ivan ~2.5 hours
- Walked through SonarCloud and made sure everyone could see the dashboard
- Spent time debugging our backend environment and updating user manual
- Debugged a failing unit test in our Ruff pr and got it merged (took most of the meeting)
- Fixed instances of duplicate code
- Began working on Sprint 1 wiki and had a burn down chart discussion
Action Items:
- Finish up Sprint 1 wiki
- Finish up burn down chart generation
- Talk to Curt about raising threshold for duplicate code percentage
- Plan for sprint 2
Members: Zach, Audrey, Michal, Ivan ~2 hours
- Demonstrated mock script for creating jobs
- Set up Ivan with correct permissions in AWS
- Got ruff set up as pre-commit hook
- Went through all of our issues, filled out correct fields, and closed what needed to be closed
Goals:
- Get burndown chart tool set up (Zach)
- Deploy job-mon-tool to AWS (Team)
- Manage different dependency files (Ivan)
- Set up SonarCloud (blocked at the moment) (Audrey)
- Look into creating artifacts for test coverage (Audrey)
- Finish research on frontend (Audrey)
- Create testing plans and wrap-up for sprint 1
Members: Zach, Audrey, Michal, Ivan ~40 min
- Discussing PR for medical imaging creation tool
- This will be used to simulate interactions between hospitals and medical imaging companies
- Sonarcloud static analysis will get set up when we get permissions from Curt
- We have multiple standalone apps with different dependency files; we need a development environment that contains all requirements files, with each app linking to its own file.
- Need to investigate the app more and figure out how to run mockscript
Goals:
- Get burndown chart tool set up (Zach)
- Deploy job-mon-tool to AWS (Team)
- Manage different dependency files (Ivan)
- Set up SonarCloud (Audrey)
- Create testing plans and wrap-up for sprint 1
Members: Zach, Audrey, Michal, Ivan
- Weekend was busy for everyone, emphasizing pair programming this week to make sure we get some deliverables done
- Michal making progress on the example medical imaging product pipeline
- Get the AWS CLI installed on everyone's devices this week
- Updating rule sets on GitHub: require status checks and deployments to succeed before merging to main
Goals:
- Set up sonar or other static analysis tool
- Understanding the job workflows on job-monitoring-app and how those are set up
- Deploy job-monitoring-app to AWS by sprint end
Members: Zach, Audrey, Michal, Ivan
Topics:
- Discussed user story and pull request templates, made some modifications, merged
- Made sure everybody was able to install and run the old job-monitoring-app project
- Adding job-monitoring-app to our github repo
- Investigating job-monitoring-app, need to better understand how each part works and what the capabilities are
- Setting up AWS for our team
- Creating issues and path forward
TODO:
- Create Docker pipeline for jobs (Michal)
- Set up AWS for team (Ivan, Michal)
- Understand job-monitoring-app: what are the 'jobs', how are they input into the system, etc. (Zach, Audrey)
- Fill out issues on Github for moving forward (Team)
- Improve Wiki with standards (Team)
- Integrate job-monitoring-repo (Zach, Ivan)
Members: Zach, Ivan, Michal
Topics:
- Should we reuse job-monitoring-app and improve on it? How will we go about it?
- The plan is to integrate the job-monitoring-app and enhance it
- What AWS services are we going to need and how will we access them?
- AWS EKS/ECS, Ivan is going to set up root account
TODO:
- Create user stories and fill backlog
- Create any other templates necessary for development
- Fully run the job-monitoring-app and contact their development team member
- Set up AWS
- Integrate job-monitoring-app repo with ours
Members: Zach, Ivan, Michal, Audrey
Topics:
- Discussed the job-monitoring-app repo and how we will be doing DevSecOps and enhancing their application
- AWS, Docker, firewalls
- Delved more into what docker will be used for
TODO:
- Continue our book report research
- brainstorm user stories
- AWS
Members: Zach, Ivan, Michal, Audrey
Topics:
- discussed our project pitch for the company
- Ivan and Michal gave high level overview of what our project is
- Came up with a solid full technical stack
- discussed the use of AWS and Docker
TODO:
- Continue our book report research
- brainstorm user stories
- AWS
Members: Zach, Ivan, Michal, Audrey
Topics:
- discussed our project pitch for the company
- learned of a python api project that we could potentially use for our project
TODO:
- set up times to meet
- project pitch for class
Below are miscellaneous notes we used through our development process.
Testing Plan
We'll continue leveraging the comprehensive tests developed for our backend, frontend, and trackerapi directories within the job-monitoring-app. Currently, our testing workflows are set up to execute our test suite automatically with every PR submission. Our testing suite includes frameworks such as pytest, Jest, and Cypress. Our example-tool directory won't undergo testing, as it serves as a simulation of our client.
A significant aspect of our project involves Dockerizing our components. We plan to use Docker Compose to set up a test suite that we can run against all of our Dockerized components. As part of our deployment strategy to AWS, accompanied by Docker and Kubernetes, we will thoroughly test our deployment setup. To achieve this, we're planning to implement a CD AWS workflow in GitHub Actions. This workflow will deploy to a staging environment, conduct thorough tests, and subsequently authorize deployment to production.
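The Docker Compose smoke-test idea could look roughly like the sketch below. The service URL, health-check route, and port are assumptions for illustration, not our actual configuration:

```python
# Sketch of a Compose-based smoke test: bring the stack up, probe a
# hypothetical backend health endpoint, then tear everything down.
import subprocess
import urllib.request


def backend_is_healthy(url: str = "http://localhost:8000/health") -> bool:
    """Return True if the containerized backend answers its health check."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused / timeout means the service is not up.
        return False


def run_smoke_tests() -> None:
    # Start the stack detached, probe it, and always tear it down afterwards.
    subprocess.run(["docker", "compose", "up", "-d"], check=True)
    try:
        assert backend_is_healthy(), "backend container failed its health check"
    finally:
        subprocess.run(["docker", "compose", "down"], check=True)
```

A CI job could call `run_smoke_tests()` after building the images, which keeps the up/probe/down lifecycle in one place.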
Manual End-to-End Testing and Linting
- Environment setup: use Docker Compose to mimic production; track all manual tests in our wiki
- Hadolint: linter for Dockerfiles
- Multi-stage building: add a testing stage to the build process; the build can only proceed if the testing stage passes
- Pumba: can artificially introduce adverse network conditions (packet loss, latency, etc.) to test the robustness of Docker containers
- Docker Bench: automated tool for checking Docker containers against security benchmarks
Manual Testing Log
Template for Manual Tests
Title: [Brief title describing the test case]
Prerequisites:
- [Any prerequisites or setup required before testing]
Steps:
- [Step 1]
- [Step 2]
- [Step n]
Expected Result:
- [Describe the expected outcome of the test]
Actual Result:
- (To be filled by the tester)
Status:
- (To be filled by the tester, e.g., Passed, Failed, Blocked)
Notes:
- (Additional observations, issues encountered, or steps to replicate a problem)
Title: Dockerized Orthanc Works
Prerequisites:
- Ensure Docker is installed
Steps:
- Run create_and_run_orthanc_docker.sh
- Select Internal Orthanc when prompted
Expected Result:
- Orthanc is port forwarded to localhost:8026
- Orthanc can communicate with Example hospital orthanc
Actual Result:
- Orthanc is port forwarded to localhost:8026
- Orthanc can communicate with Example hospital orthanc
Status:
- Passing
Title: Dockerized Example tool Works
Prerequisites:
- Ensure Docker is installed
Steps:
- Run build_brain_mask_tool_docker.sh
- Run example_run_brain_mask_tool_docker.sh
Expected Result:
- Docker image is built after step 1
- Docker Container is run successfully and then destroyed after running
Actual Result:
- Docker image is built after step 1
- Docker Container is run successfully and then destroyed after running
Status:
- Passing
Initial Notes on the Job Monitoring App
trackerapi/demo/mockscript.py: shows a demo of how the process works.
- they create a TrackerAPI object with an API key
- register the TrackerAPI's job configuration
- create a TrackerJobAPI object by calling tracker.create_job (see trackerapi/trackerapi/api.py); this also seems to create a job in the database
- for each step, they call TrackerJobAPI.send_event, which creates an event in the db and returns a TrackerEventAPI object
- This file is important for seeing what items you need in the db or set up prior to sending a job to the db
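The demo flow above can be sketched with minimal stand-in classes. The real classes live in trackerapi; the names, signatures, and config fields below are our reading of the demo, not verified against the library:

```python
# Self-contained sketch of the mockscript flow: create the API object,
# register a job configuration, create a job, then send per-step events.
class TrackerEventAPI:
    def __init__(self, job_id: str, kind: str, name: str):
        self.job_id, self.kind, self.name = job_id, kind, name


class TrackerJobAPI:
    def __init__(self, api: "TrackerAPI", provider_job_id: str):
        self.api, self.provider_job_id = api, provider_job_id

    def send_event(self, kind: str, step_name: str) -> TrackerEventAPI:
        # In the real app this POSTs to the backend and inserts an event row.
        return TrackerEventAPI(self.provider_job_id, kind, step_name)


class TrackerAPI:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.configs: dict = {}

    def register_job_config(self, config: dict) -> None:
        self.configs[config["tag"]] = config

    def create_job(self, provider_job_id: str, tag: str) -> TrackerJobAPI:
        # The real call also creates the job row in the database.
        assert tag in self.configs, "job configuration must be registered first"
        return TrackerJobAPI(self, provider_job_id)


# Usage mirroring trackerapi/demo/mockscript.py (values are illustrative):
tracker = TrackerAPI(api_key="fake-key")
tracker.register_job_config({"tag": "brain_mask", "step_names": ["load", "mask"]})
job = tracker.create_job(provider_job_id="job-001", tag="brain_mask")
for step in ("load", "mask"):
    event = job.send_event(kind="complete", step_name=step)
```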
trackerapi/trackerapi/api.py:
- has the backend routes initialized in the file
- The TrackerAPI object contains its API key, base URL, and other URLs needed for HTTP requests, plus methods for making POST and GET requests to the specified backend routes. createJob() makes a POST request that creates the job in the db and returns a TrackerJobAPI object, which contains the original TrackerAPI object and the provider_job_id. Both this object and the TrackerAPI object can call sendEvent(), which creates an event in the db and returns a TrackerEventAPI object; I can't tell what that object is used for.
The backend is expecting the request to have a header named "x-api-key" with the api key as its value. The api key is then pulled from the header.
Before creating a job, the necessary data must be in the db (provider_id, job_configuration)
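For reference, a request carrying the "x-api-key" header can be built like this; the base URL and the /jobs route are placeholders, since the real URLs come from the TrackerAPI object:

```python
# Sketch of building a request with the "x-api-key" header the backend
# expects. URL and route are hypothetical placeholders.
import urllib.request


def build_create_job_request(base_url: str, api_key: str, body: bytes):
    return urllib.request.Request(
        url=f"{base_url}/jobs",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "x-api-key": api_key,  # backend pulls the API key from this header
        },
    )


req = build_create_job_request(
    "http://localhost:8000", "fake-key", b'{"provider_job_id": "job-001"}'
)
```

Note that urllib normalizes header names on storage ("x-api-key" becomes "X-api-key"), but HTTP header names are case-insensitive on the wire, so the backend lookup still works.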
backend/routers folder:
- These are relevant in that they are the urls that the trackerapi uses to make its http requests.
backend/app/dependencies.py:
- Contains the OAuth function for obtaining the requester's credentials from the cookie's 'access token' field
- this file needs a closer look
backend/app/services:
- Python files in this folder contain functions for interacting with the database. Used by dependencies.py, routers, and other files.
backend/schemas:
- FastAPI uses these schemas to automatically parse the JSON bodies of HTTP requests and populate objects. Ex: in backend/app/routers/events.py, the create_event function has a parameter event of type schemas.EventCreatePublic. That schema defines the JSON format for the event; the request body data is used to populate the object, which is then passed in as the 'event' parameter.
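The role the schemas play can be sketched with the stdlib alone. The field names below are illustrative, not the real EventCreatePublic fields; FastAPI does this parsing and validation automatically via Pydantic:

```python
# Stdlib-only sketch of what a schema does: define the JSON shape of a
# request body and populate a typed object from it.
import json
from dataclasses import dataclass


@dataclass
class EventCreatePublic:
    provider_job_id: str
    kind: str
    name: str


def parse_event_body(raw_body: str) -> EventCreatePublic:
    # FastAPI would validate types here as well, rejecting bad bodies with 422.
    data = json.loads(raw_body)
    return EventCreatePublic(**data)


event = parse_event_body(
    '{"provider_job_id": "job-001", "kind": "complete", "name": "mask"}'
)
```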
It looks like our frontend is composed of React components and tested using Jest. I believe we are all unfamiliar with that, so some outside learning will need to happen.
As far as features, provider accounts have 4 main things they can do:
- Generate reports (.csv files)
- View job analytics
- Generate API keys for jobs
- View jobs
Customer accounts have 1 main feature:
- View jobs
I see fit to keep all current features. For future enhancements to the frontend:
- Have different views for job analytics. Like a pie chart, line graph, etc and make them exportable.
- Possibly show a preview of a generated .csv report. Right now the provider has no idea what the report looks like until they download it.
- Let the provider choose which columns (?) they want in their generated report before downloading it.