Architectural Diagrams

Mind-For-The-Blind System Context Diagram


This System Context Diagram provides a high-level view of the Mind-For-The-Blind system, showing how it interacts with users and external systems.

The diagram shows:

The central system (Mind-For-The-Blind Mobile Application)

Primary users (actors):

  1. Visually Impaired Users – Individuals who rely on the app for assistance.

  2. Helpers/Navigators – People assisting visually impaired users through video chat.

External systems interacting with the app (sketched as interfaces after this list):

  1. Currency Detection Module – Identifies currency notes using machine learning.

  2. AI-Powered Personal Assistance – Provides voice-based responses.

  3. Video Call Module – Enables real-time communication between users and helpers.

  4. Cloud Server/Database – Stores user authentication data securely.
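As a rough illustration of this boundary, the four external systems can be modeled as interfaces the mobile app depends on. The Kotlin sketch below is hypothetical; none of these names come from the repository, and the actual app may be written in a different language:

```kotlin
// Hypothetical interfaces mirroring the four external systems in the
// context diagram; all names are illustrative, not from the codebase.

interface CurrencyDetector {
    /** Returns a denomination label for a captured note image. */
    fun detect(imageBytes: ByteArray): String
}

interface AiAssistant {
    /** Answers a user's query; the reply is read aloud by the app. */
    fun ask(query: String): String
}

interface VideoCallService {
    /** Connects a user and a helper to a shared, password-protected room. */
    fun joinRoom(roomId: String, password: String): Boolean
}

interface CloudUserStore {
    /** Persists authentication data on the cloud server/database. */
    fun saveCredentials(userId: String, credentialRef: String)
}
```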

Container Diagram


This container diagram represents the architecture of the "Mind for the Blind" mobile application.

Key Components & Their Roles:

Actors (Users)

Visually Impaired User

  • A blind user who accesses the application for various assistive services.
  • Logs into the mobile app via biometric authentication (a minimal login sketch follows this list).
  • Can use services like currency detection, AI assistant, and helper navigation.
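The wiki does not show the login implementation. On Android, biometric login is commonly built on the AndroidX `BiometricPrompt` API; the sketch below is a minimal, hypothetical version of that flow, with illustrative titles and callbacks rather than the app's actual code:

```kotlin
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import androidx.fragment.app.FragmentActivity

// Hypothetical login helper built on AndroidX BiometricPrompt; the wiki
// does not name the library the app actually uses.
fun promptBiometricLogin(activity: FragmentActivity, onSuccess: () -> Unit) {
    val executor = ContextCompat.getMainExecutor(activity)
    val callback = object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
            onSuccess()  // proceed to the app's home screen
        }
        override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
            // Fall back to another login method or announce the error via TTS.
        }
    }
    val promptInfo = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Log in to Mind-For-The-Blind")
        .setNegativeButtonText("Cancel")
        .build()
    BiometricPrompt(activity, executor, callback).authenticate(promptInfo)
}
```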

Helper

  • A sighted user who assists the blind person with navigation.
  • Logs into the app and joins a video chat room to guide the visually impaired user.
  • Helps in real-time through video communication.

System Components (Containers)

  1. Mobile App (Android/iOS)
  • The central interface through which both visually impaired users and helpers access the assistive services.

  • Provides an intuitive UI with large buttons and voice interactions.

  • Routes user actions to respective services.

  2. Database
  • Stores user information and user types (blind person or helper).

  • Manages authentication and biometric login credentials.

  3. Currency Detection Module
  • Uses a Machine Learning (ML) model to detect currency denominations from camera input.

  • Converts detected currency details into audio feedback for the user (see the announcer sketch after this list).

  4. Helper Navigation Module (Video Call Module)
  • Provides real-time video chat between the blind user and their helper.

  • Ensures secure password-based room authentication before allowing users to connect (see the room-check sketch after this list).

  • Used by blind users to navigate safely with assistance.

  5. AI Assist Mode (AI Chatbot)
  • An AI-powered chatbot that interacts with users through verbal input and output.

  • Helps users with general queries and day-to-day assistance (see the voice-loop sketch after this list).
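For the Currency Detection Module, the last step, turning the model's label into speech, can be sketched with Android's `TextToSpeech` API. The classifier itself is out of scope here; `announce()` simply assumes some ML component has already produced a denomination label:

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

// Hypothetical bridge from the ML classifier's output to audio feedback;
// the label passed to announce() would come from the detection model.
class CurrencyAnnouncer(context: Context) : TextToSpeech.OnInitListener {

    private val tts = TextToSpeech(context, this)

    override fun onInit(status: Int) {
        if (status == TextToSpeech.SUCCESS) tts.setLanguage(Locale.US)
    }

    /** Speaks the detected denomination, e.g. "Detected: 50 rupees". */
    fun announce(denominationLabel: String) {
        tts.speak(
            "Detected: $denominationLabel",
            TextToSpeech.QUEUE_FLUSH,  // a new detection replaces pending speech
            null,
            "currency-result"
        )
    }
}
```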
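For the Helper Navigation Module, the wiki only states that rooms are password-protected. One plausible shape of that check, hashing the supplied password and comparing it in constant time before connecting, is sketched below; the `Room` fields and hashing scheme are assumptions, not the app's actual design:

```kotlin
import java.security.MessageDigest

// Illustrative room gatekeeper; field names and SHA-256 are assumptions.
data class Room(val id: String, val passwordHash: ByteArray)

fun sha256(text: String): ByteArray =
    MessageDigest.getInstance("SHA-256").digest(text.toByteArray())

/** Lets a user join only if the supplied password matches the room's hash. */
fun joinRoom(room: Room, password: String, connect: (String) -> Unit): Boolean {
    // MessageDigest.isEqual compares in constant time, resisting timing attacks.
    val ok = MessageDigest.isEqual(sha256(password), room.passwordHash)
    if (ok) connect(room.id)
    return ok
}
```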
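For AI Assist Mode, the verbal round trip reduces to three steps: capture speech, query the chatbot, speak the reply. The sketch below leaves the concrete speech and chatbot APIs as injected functions because the wiki does not name them:

```kotlin
// Sketch of the AI Assist round trip: spoken question in, spoken answer out.
// All three dependencies are placeholders for whatever the app really uses.
class AiAssistMode(
    private val listenForSpeech: () -> String,   // e.g. Android SpeechRecognizer
    private val askChatbot: (String) -> String,  // e.g. a hosted chatbot endpoint
    private val speak: (String) -> Unit          // e.g. Android TextToSpeech
) {
    /** One voice interaction: hear the query, fetch a reply, read it aloud. */
    fun handleOneQuery() {
        val question = listenForSpeech()
        val answer = askChatbot(question)
        speak(answer)
    }
}
```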

System Interactions & Workflow:

User Registration & Login

  • Both visually impaired users and helpers register and log in via biometric authentication.

  • The database stores user credentials (a possible record shape is sketched below).
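The diagrams do not give a schema, but since the database distinguishes the two user types and backs biometric login, a minimal record might look like the following. Every field name here is an assumption:

```kotlin
// Assumed shape of a stored user record; the wiki only says the database
// keeps user information, a user type, and authentication credentials.
enum class UserType { VISUALLY_IMPAIRED, HELPER }

data class UserRecord(
    val userId: String,
    val displayName: String,
    val userType: UserType,
    val biometricKeyId: String  // reference to the enrolled biometric key,
                                // never the raw biometric data itself
)

// Minimal in-memory stand-in for the real cloud database.
class UserStore {
    private val users = mutableMapOf<String, UserRecord>()
    fun register(record: UserRecord) { users[record.userId] = record }
    fun lookup(userId: String): UserRecord? = users[userId]
}
```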

Accessing the Mobile App

Users log in and are presented with three main services (a dispatch sketch follows the list):

a) Currency Detection

b) Helper Navigation

c) AI Assistant Mode
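How the home screen hands off to these services is not shown in the wiki; a hypothetical dispatch, with the three menu entries as an enum and stubbed launchers standing in for real screens, could look like this:

```kotlin
// Illustrative home-screen dispatch; the enum mirrors the menu above,
// and the three launcher functions are stubs, not real app code.
enum class Service { CURRENCY_DETECTION, HELPER_NAVIGATION, AI_ASSISTANT }

fun startCurrencyDetection() { /* open the camera + ML detection screen */ }
fun startHelperNavigation() { /* ask for a room password, then join the call */ }
fun startAiAssistant() { /* begin the voice chatbot loop */ }

fun openService(choice: Service) = when (choice) {
    Service.CURRENCY_DETECTION -> startCurrencyDetection()
    Service.HELPER_NAVIGATION  -> startHelperNavigation()
    Service.AI_ASSISTANT       -> startAiAssistant()
}
```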
