IMSc Extra Homework Assignment - ftsrg-edu/ase-labs GitHub Wiki
IMSc Extra Homework Assignment: Industrial IoT
Industrial Internet of Things (IIoT) systems represent the convergence of operational technology (OT) and information technology (IT) in industrial settings, connecting physical machinery, sensors, and control systems to networked computing resources. IIoT enables real-time monitoring, predictive maintenance, process optimization, and data-driven decision making by collecting and analyzing data from throughout the manufacturing process, while maintaining the robust reliability and safety requirements essential to industrial operations. This integration creates what is often called a "smart factory" as part of Industry 4.0, where manufacturing processes become more autonomous, efficient, and adaptable.
The goal of this task is to create a domain model for IIoT systems by leveraging Refinery and modern intelligent coding assistant (e.g., Copilot) technologies.
- You may solve tasks 1 and 2 either by hand, by an intelligent coding assistant (LLM), or a combination thereof.
- Task 3 requires manual intervention with Refinery to perform semantic analysis on your domain model.
- Task 4 attempts to perform a similar semantic analysis with an LLM.
- In task 5, you'll use Refinery and an LLM together for neuro-symbolic reasoning.
You may use any intelligent coding assistant (e.g., Copilot, Tabnine, JetBrains AI), or any cloud-based (e.g., ChatGPT, Claude, Gemini) or local (e.g., Llama) LLM.
At each step, document your work, including the relevant Refinery code, graph models, the prompts you used, and the transcript of your conversation with the LLM. To include graph models in your documentation, you can use the Export button in the Graph view of the Refinery web interface to export your graph model as an image.
Task 1: Domain modeling (5 IMSc points or +1 point to final score)
Create a Refinery domain model based on the following specification:
We wish to represent a manufacturing facility's automation and monitoring system. The system is structured as follows:
A Factory contains multiple production lines and monitoring dashboards. Each production line consists of 1 to 20 workstations, which come in three types:
1. Robotic Cells
2. Assembly Stations
3. Quality Control Stations
Robotic Cells are equipped with multiple automation devices (minimum of 2), while Quality Control Stations include one or more alarms for notification purposes.
The automation infrastructure includes different types of devices:
* Actuators (for mechanical actions)
* Vision Systems (for visual inspection)
* PLCs (Programmable Logic Controllers)
PLCs play a central role in the system:
* They can manage between 1 and 10 other automation devices
* They process various types of events including alarms, status updates, and production metrics
* Each automation device can report to at most one PLC
The monitoring system features dashboards that:
* Subscribe to multiple PLCs (1 to 5)
* Log different types of events (1 to 10)
* Track alarms, status updates, and production metrics
Make sure to create a syntactically valid Refinery model. We recommend looking at the Refinery language reference and using the Refinery web interface to validate your work.
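As a starting point, a fragment of the containment hierarchy might look like the following sketch. The class and reference names here are illustrative assumptions, not a complete or authoritative solution; check the syntax against the Refinery language reference and the web interface:

```
% Sketch of part of the containment hierarchy (names are illustrative).
class Factory {
    contains ProductionLine[] productionLines
    contains Dashboard[] dashboards
}

class ProductionLine {
    contains Workstation[1..20] workstations
}

abstract class Workstation.

class RoboticCell extends Workstation {
    contains AutomationDevice[2..*] devices
}
```

The remaining workstation types, automation device types, events, and the dashboard subscriptions can be modeled analogously.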
If you wish to use an LLM for this, keep in mind that Refinery code likely was not in the training set of the LLM. Therefore, you will have to use techniques like in-context learning by providing relevant examples to make the model generate valid Refinery code. Document the prompt engineering techniques you have used. If you can make the LLM generate something close to Refinery code, it is possible you will find it easier to edit the output by hand rather than making the model fix its own errors.
Task 2: Domain-specific constraints (5 IMSc points or +1 point to final score)
In addition to the textual specification above, a domain expert has suggested the following well-formedness constraints:
- A PLC must process at least one event.
- Each automation device other than a PLC must report to a PLC attached to the same Robotic Cell.
- Each production line must have a Quality Control Station.
Modify the metamodel and/or extend it with error predicates to ensure that the constraints are satisfied.
Make sure to write syntactically valid Refinery constraints. We recommend looking at the Refinery language reference and using the Refinery web interface to validate your work.
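For illustration, error predicates for the first and third constraints might be sketched as follows. The predicate and reference names are assumptions based on a hypothetical metamodel; adapt them to your own class and reference names and validate them in the web interface:

```
% Sketch only: a PLC that processes no events is flagged as an error.
error pred idlePLC(plc) <->
    PLC(plc),
    !processes(plc, _).

% Helper: does the production line contain a Quality Control Station?
pred hasQCStation(line) <->
    workstations(line, ws),
    QualityControlStation(ws).

% A production line without a Quality Control Station is an error.
error pred missingQCStation(line) <->
    ProductionLine(line),
    !hasQCStation(line).
```

The second constraint, which relates an automation device, its PLC, and their shared Robotic Cell, will need a predicate joining the containment and reporting references.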
Task 3: Generating instance models with Refinery (5 IMSc points or +1 point to final score)
Use Refinery as a model generator to generate some consistent instance models of your metamodel.
You will need to set an appropriate model scope to control the number of objects generated per instance model.
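A scope declaration might look like the following sketch (the bounds are illustrative; tune them to your metamodel):

```
% Illustrative scope: exactly one Factory,
% and between 20 and 40 objects in total.
scope Factory = 1, node = 20..40.
```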
The Refinery online demo uses a very short timeout for model generation to reduce resource usage. If you want to run the generator with more resources, we recommend downloading and running the Refinery web interface Docker container.
Your metamodel might turn out to be syntactically valid, but unsatisfiable (having no consistent instance model). The semantic analysis in this step will help you discover this problem. Refinery might report that your problem is unsatisfiable outright, or may compute for a very long time without producing any instance models.
If your model generation is unsatisfiable, you will need to fix your metamodel, constraints, or model scope as appropriate. At the completion of this task, your metamodel should be satisfiable and you should have generated some consistent instance models.
Task 4: Generating instance models with LLMs (5 IMSc points or +1 point to final score)
Use an LLM to generate instance models of your metamodel.
The goal of this task is to make the LLM generate syntactically valid instance models of your metamodel. To do so, you will need to pass the metamodel from Task 1, the constraints from Task 2, and some examples of the Refinery instance specification language to the LLM. If you have updated your metamodel in Task 3, use the updated version.
While the Refinery language supports partial modeling to specify both open-world and closed-world modeling problems, it might be easier to make the LLM output purely positive assertions (without closing down the world). You can use the Partial/Concrete button in the Refinery web interface to close down the world automatically.
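Purely positive assertions might look like the following sketch (object names are illustrative placeholders):

```
% Positive assertions only, leaving the world open.
Factory(factory1).
productionLines(factory1, line1).
workstations(line1, qc1).
QualityControlStation(qc1).
alarms(qc1, alarm1).
Alarm(alarm1).
```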
Use Refinery to visualize the instance model if it is syntactically valid. If it is not syntactically valid, you can use Refinery and an LLM together in the next task to fix it.
Task 5: Using Refinery and an LLM together (5 IMSc points or +1 point to final score)
The goal of this task is to use Refinery together with an LLM. We propose two workflows that integrate the two tools: either process the output of the LLM with Refinery, or process the output of Refinery with an LLM. More complex workflows and feedback loops are also possible.
To complete this task, you only have to perform a single workflow.
Proposed workflow A: Process LLM output with Refinery
Take the generated instance model from Task 4 or generate a new one and copy it into Refinery.
- If there are any syntax errors, copy the list of syntax errors into the LLM conversation and try to get it to fix the syntax.
- If any well-formedness constraints indicate errors in the model (watch out for error markers in the Graph view or the Table view), describe the semantic errors to the LLM and try to get it to fix the model.
Repeat these two steps as necessary to generate a valid instance model.
Proposed workflow B: Process Refinery output with an LLM
To perform this workflow, you will need to download and use the Refinery Command-Line Interface (CLI) Docker container.
Save your metamodel and model scope into a Refinery model generation problem (.problem) file.
Use the Refinery CLI to generate a consistent Refinery instance model (.refinery) file.
This is a textual description of your instance model.
Copy the instance model into an LLM conversation, and try to get the LLM to process it. Possible use-cases include:
- Try to get the LLM to rename the objects in the instance model with semantically relevant names.
- Try to get the LLM to analyze the instance model for relevance and realism, and suggest some more relevant constraints.
- Try to get the LLM to provide a natural-language description of the instance model.
To complete this workflow, you will only need to make the LLM complete a single use-case.