Responsible AI - innovation-dandelion-hub/dandelion-hub GitHub Wiki

Responsible AI

Our approach follows Microsoft's six responsible AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.


Fairness

At Dandelion Hub, fairness is one of the core principles guiding the design and operation of our platform. To keep our AI systems fair and prevent them from perpetuating bias, we have implemented several key controls:

Default Knowledge Base

We use Azure OpenAI to maintain a default knowledge base that cannot be modified. This ensures all users have access to a consistent and verified data source.

Retrieval-Augmented Generation (RAG) with Additional Data

When users supply additional data, it affects only the session they are interacting with and never alters the default knowledge base. Additional data is therefore used contextually and temporarily, preventing it from introducing permanent bias.
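The separation described above can be sketched in a few lines: a shared, read-only default knowledge base alongside per-session extra documents. This is an illustrative model, not our actual Azure OpenAI implementation; all names below are hypothetical.

```python
# Sketch of session-scoped retrieval: the default knowledge base is
# shared and read-only, while user-supplied documents live only in the
# session that provided them. All names here are illustrative.
from types import MappingProxyType

# Read-only default knowledge base (MappingProxyType blocks mutation).
DEFAULT_KB = MappingProxyType({
    "doc1": "Verified fact A from the default knowledge base.",
    "doc2": "Verified fact B from the default knowledge base.",
})

class Session:
    """Holds extra documents for a single chat session only."""
    def __init__(self) -> None:
        self.extra_docs: dict[str, str] = {}

    def add_document(self, doc_id: str, text: str) -> None:
        self.extra_docs[doc_id] = text

    def retrieve(self, query: str) -> list[str]:
        # Naive keyword retrieval over the default KB plus session docs.
        corpus = {**DEFAULT_KB, **self.extra_docs}
        return [text for text in corpus.values()
                if query.lower() in text.lower()]

session_a = Session()
session_a.add_document("user1", "Session-only fact C about dandelions.")
session_b = Session()

print(session_a.retrieve("fact"))  # default docs plus its own extra doc
print(session_b.retrieve("fact"))  # only the default docs
```

Because `DEFAULT_KB` is exposed through a `MappingProxyType`, any attempt to write to it raises `TypeError`, mirroring the guarantee that session data can never alter the shared knowledge base.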

Data Control and Notification

Using Microsoft Copilot Studio and Azure Prompt Flow, we can implement locks when the information provided by users differs from the data stored in our verified sources. In such cases, we notify the user that the entered data cannot be fully utilized due to its lack of fidelity to official sources. This ensures decisions and analyses are based on accurate and reliable data.
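The locking behavior above can be sketched as a simple fidelity check against the verified source. The field names, tolerance, and notification wording below are illustrative assumptions, not the actual Copilot Studio or Prompt Flow configuration.

```python
# Hypothetical fidelity check: compare user-entered values against the
# verified source and lock any field that deviates beyond a tolerance.
def check_fidelity(user_data: dict, verified: dict,
                   tolerance: float = 0.0) -> dict:
    """Return locked fields plus a user-facing notification."""
    mismatches = []
    for field, value in user_data.items():
        official = verified.get(field)
        if official is None:
            continue  # no official record to compare against
        if isinstance(value, (int, float)) and isinstance(official, (int, float)):
            ok = abs(value - official) <= tolerance
        else:
            ok = value == official
        if not ok:
            mismatches.append(field)
    return {
        "locked_fields": mismatches,
        "notification": (
            "The entered data cannot be fully utilized: "
            f"{', '.join(mismatches)} do not match official sources."
        ) if mismatches else None,
    }

result = check_fidelity(
    {"population": 1000, "region": "North"},
    {"population": 950, "region": "North"},
    tolerance=10,
)
print(result["locked_fields"])  # ['population']
```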

Reliability and Safety

At Dandelion Hub, reliability and safety are crucial to keeping our systems robust and minimizing the risk of failures and vulnerabilities:

Automatic Backups

We use Azure Cosmos DB, which performs automatic backups of the data every 4 hours. This ensures there is always a recent copy of the data, reducing the risk of information loss.

Anomaly Detection

We are planning to train a model using Azure Machine Learning, or to use anomaly detection through an integration with Azure Monitor. This will let us proactively identify and respond to data anomalies, triggering additional backups and comparing the source data with what is stored in Azure Cosmos DB to verify its integrity.
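A minimal sketch of the kind of check we have in mind is a z-score outlier test: flag any reading that deviates from the mean by more than a threshold number of standard deviations. The real implementation would use Azure Machine Learning or Azure Monitor; the threshold and sample data below are illustrative assumptions.

```python
import statistics

def detect_anomalies(values: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of values whose z-score exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all values identical: nothing to flag
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

readings = [10.1, 9.9, 10.0, 10.2, 9.8, 42.0]
anomalies = detect_anomalies(readings)
if anomalies:
    # In production this would trigger an extra backup and a comparison
    # of the source data against what is stored in Azure Cosmos DB.
    print(f"Anomalous indices: {anomalies}")
```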

Privacy and Security

The privacy and security of data are fundamental at Dandelion Hub. We have implemented various measures to protect our users' information:

Access through Managed Identities

We use managed identities in Microsoft Power Platform environments to ensure that only authorized users can access the platform's data and functionalities.

Authentication and Validation

We implement authentication and validation through Microsoft Entra ID and Azure Key Vault with role-based access control (RBAC), ensuring that access to data and resources is controlled and protected with strongly secured keys.

Web Security

For the web version, access and blocking are managed through Power Pages or Azure, and website security is maintained with protocols such as TLS. This ensures that communications between users and the platform are encrypted and protected against potential cyberattacks.


Inclusiveness

At Dandelion Hub, inclusiveness is a fundamental principle. We ensure that our platform is accessible and usable by people of diverse backgrounds and abilities through several measures:

Accessible Design

The website uses a grayscale color palette with high contrast between elements. This design ensures that users with visual disabilities, such as color blindness, can navigate and use the platform with ease.
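For grayscale colors, the WCAG contrast ratio reduces to a short computation over the two luminance values. The sketch below applies the WCAG 2.x formulas and checks a pair of gray levels against the 4.5:1 AA threshold for normal text; the sample values are illustrative, not our actual palette.

```python
def relative_luminance(gray: int) -> float:
    """WCAG relative luminance for a grayscale sRGB value (0-255)."""
    c = gray / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def contrast_ratio(g1: int, g2: int) -> float:
    """WCAG contrast ratio between two grayscale values (1.0 to 21.0)."""
    l1, l2 = sorted((relative_luminance(g1), relative_luminance(g2)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text (#000) on a white background (#FFF) gives the maximum 21:1.
print(round(contrast_ratio(0, 255), 1))  # 21.0
print(contrast_ratio(0, 255) >= 4.5)     # meets WCAG AA for normal text
```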

Accessibility Tools in Copilot

Our assistant in Microsoft Copilot Studio includes text-to-speech and speech-to-text tools, facilitating access for users with hearing or motor disabilities. These tools comply with web accessibility standards (WCAG), ensuring that all users can interact with the platform effectively.


Transparency

Transparent Documentation

Our platform documentation includes the origin of the data, diagrams, and resources based on our interface, with versions in English and Spanish. It also provides a detailed guide on how to build the same solution using Microsoft Learn and Bicep as infrastructure as code (IaC).

Data Sources

In every interaction with the Copilot agent, whenever data or findings are cited, the agent responds with its consultation sources in citation formats such as APA and Harvard, among others, at the user's discretion.
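The style switch described above can be sketched as a small formatting function. Both layouts here are deliberately simplified stand-ins for the full APA and Harvard rules, and the sample source is invented for illustration.

```python
def format_citation(author: str, year: int, title: str,
                    style: str = "APA") -> str:
    """Format a source citation; APA and Harvard layouts are simplified."""
    if style.upper() == "APA":
        return f"{author} ({year}). {title}."
    if style.upper() == "HARVARD":
        return f"{author} {year}, {title}."
    raise ValueError(f"Unsupported citation style: {style}")

src = ("Smith, J.", 2023, "Urban biodiversity data")
print(format_citation(*src, style="APA"))
print(format_citation(*src, style="Harvard"))
```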


Accountability

At Dandelion Hub, we understand the importance of accountability in the development and use of artificial intelligence. To ensure our platform is used ethically and responsibly, we have implemented several mechanisms:

Reporting Issues

We have included a button on the platform that allows users to report any problems or questions they encounter. By clicking this button, users are redirected to our GitHub repository, where they can find detailed documentation on the tool and a forum for direct contact with our development team. This mechanism ensures that solutions and answers are always at hand and that any issues are addressed promptly.

Documentation and Use Cases

Our documentation includes detailed examples of how we apply the principles of responsible AI in our platform. This not only helps educate users on the ethical use of AI but also holds our team accountable for ensuring best practices are followed.

Blocking Sensitive Topics

In Microsoft Copilot Studio, we have configured our assistant so that if political topics, or topics unrelated to the tool's purpose, arise, the chat is automatically blocked and the user must restart it to continue. This keeps the platform's use appropriate and aligned with its primary purpose.
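As a rough illustration of this guardrail, the sketch below uses plain keyword matching to stand in for Copilot Studio's actual topic moderation; the blocklist and function names are hypothetical.

```python
# Hypothetical sketch of the topic guardrail: simple keyword matching
# stands in for Copilot Studio's moderation; the blocklist is illustrative.
BLOCKED_TOPICS = {"politics", "election", "political party"}

def should_block(message: str) -> bool:
    """True if the message touches a blocked topic, forcing a chat restart."""
    text = message.lower()
    return any(topic in text for topic in BLOCKED_TOPICS)

print(should_block("What political party should I vote for?"))  # True
print(should_block("How do I upload additional data?"))         # False
```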