🧭 Sprint 3 – AI and Navigation Enhancements
🎯 Sprint Goal
The primary goal of Sprint 3 is to significantly enhance the accessibility and usability of the Mind-For-The-Blind application by introducing voice-based interactions, improving the accuracy of the ML model, and refining the overall user experience through thorough testing and UI adjustments.
✅ Objectives
1. Integrate Voice-Based AI Assistant
- Add a conversational AI assistant for handling user queries through voice input.
- Enable speech-to-text and text-to-speech functionality for seamless communication (a speech-layer sketch follows this list).
2. Improve Voice Accessibility & App Guidance
- Implement audio prompts on screen load to assist blind users (see the screen-announcement sketch after this list).
- Ensure all critical pages provide spoken instructions and context.
3. Improve ML Model Accuracy
- Retrain or fine-tune the currency detection ML model.
- Enhance accuracy and reduce latency for real-time usage (see the model-loading sketch after this list).
4. Enable Realtime Camera Access for Currency Scanning
- Provide a responsive and low-latency camera preview.
- Ensure real-time detection and audio announcement of dollar denominations (see the camera-pipeline sketch after this list).
5. UI Refinements & Accessibility Improvements
- Improve layout, text contrast, and element spacing for accessibility.
- Ensure compatibility with screen readers and improve user flow.
6. Conduct Broader User Testing & Collect Feedback
- Involve both blind users and helpers for hands-on testing.
- Document feedback and identify areas for improvement.
7. Resolve Video Call Module Errors
- Retry the ZEGOCLOUD integration and verify whether the module can be integrated successfully.
- If integration still fails, evaluate alternative video-calling solutions.
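🧩 Implementation Sketches
The sketches below are rough starting points for the objectives above. They assume a native Android (Kotlin) client, which may differ from the team's actual stack; every class, file, and label name that is not part of the Android SDK is a placeholder.

For objective 1, Android's built-in `SpeechRecognizer` and `TextToSpeech` APIs cover the speech-to-text and text-to-speech halves of the assistant. The `VoiceAssistant` wrapper and its `onQuery` callback are illustrative; the conversational backend would plug in behind `onQuery`.

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer
import android.speech.tts.TextToSpeech

// Hypothetical wrapper that wires Android's SpeechRecognizer (speech-to-text)
// and TextToSpeech (text-to-speech) into one voice round trip.
class VoiceAssistant(context: Context, private val onQuery: (String) -> Unit) {

    private var ttsReady = false
    private val tts = TextToSpeech(context) { status ->
        ttsReady = status == TextToSpeech.SUCCESS
    }

    private val recognizer = SpeechRecognizer.createSpeechRecognizer(context).apply {
        setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle?) {
                // The recognizer's best hypothesis becomes the user's query text.
                results?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                    ?.firstOrNull()
                    ?.let(onQuery)
            }
            override fun onError(error: Int) = speak("Sorry, I didn't catch that.")
            // The remaining callbacks are not needed for this sketch.
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })
    }

    // Start listening for a spoken query (requires the RECORD_AUDIO permission).
    fun listen() {
        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
            .putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                      RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        recognizer.startListening(intent)
    }

    // Speak the assistant's reply aloud once the TTS engine is initialized.
    fun speak(text: String) {
        if (ttsReady) tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "assistant-reply")
    }
}
```

A screen would create one `VoiceAssistant`, call `listen()` on a tap or long-press gesture, and call `speak()` for each assistant reply.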
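For objectives 2 and 5, spoken guidance on screen load can be layered on top of the system screen reader: `View.announceForAccessibility` routes a short prompt through TalkBack when it is running, while per-element `contentDescription` values keep the rest of the UI screen-reader friendly. The activity name and prompt text below are placeholders.

```kotlin
import androidx.appcompat.app.AppCompatActivity

// Placeholder activity name; the same pattern applies to any screen in the app.
class ScanScreenActivity : AppCompatActivity() {

    override fun onResume() {
        super.onResume()
        // Route a short spoken prompt through the active screen reader (TalkBack)
        // as soon as the page becomes visible, so blind users always get context.
        window.decorView.announceForAccessibility(
            "Currency scanning screen. Point the camera at a bill to hear its value."
        )
    }
}
```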
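For objective 3, retraining or fine-tuning happens offline; on the device side, one common way to keep the refreshed model fast enough for real-time use, assuming it is exported to TensorFlow Lite (an assumption, not something this wiki confirms), is to memory-map the `.tflite` file and run it with several threads. The asset name, input format, and class count below are placeholders.

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import java.nio.ByteBuffer
import java.nio.channels.FileChannel

// Hypothetical loader for a retrained currency-detection model exported to
// TensorFlow Lite. Multi-threaded inference trims per-frame latency.
class CurrencyClassifier(context: Context) {

    private val interpreter: Interpreter

    init {
        // "currency_detector.tflite" is a placeholder asset name; the asset must
        // be stored uncompressed so it can be memory-mapped.
        val fd = context.assets.openFd("currency_detector.tflite")
        val model = fd.createInputStream().channel.map(
            FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength
        )
        interpreter = Interpreter(model, Interpreter.Options().setNumThreads(4))
    }

    // Runs one inference; `input` is a preprocessed camera frame and the output
    // row holds one confidence score per denomination class.
    fun classify(input: ByteBuffer): FloatArray {
        val output = Array(1) { FloatArray(NUM_CLASSES) }
        interpreter.run(input, output)
        return output[0]
    }

    companion object {
        private const val NUM_CLASSES = 7  // e.g. $1, $2, $5, $10, $20, $50, $100
    }
}
```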
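For objective 4, CameraX's `ImageAnalysis` use case with a keep-only-latest backpressure strategy gives a responsive preview while analyzing only the freshest frame. The sketch assumes CameraX as the camera layer and reuses the hypothetical `detect` hook (for example, wrapping `CurrencyClassifier.classify`) and a `speak`-style announcer from the earlier sketches.

```kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import java.util.concurrent.Executors

// Binds a live camera preview plus a per-frame analyzer. Dropping stale frames
// (STRATEGY_KEEP_ONLY_LATEST) keeps detection close to real time on slower devices.
fun startCurrencyScanning(
    activity: AppCompatActivity,
    previewView: PreviewView,
    detect: (ImageProxy) -> String?,   // hypothetical hook that runs the classifier
    announce: (String) -> Unit         // e.g. VoiceAssistant::speak from the first sketch
) {
    val analysisExecutor = Executors.newSingleThreadExecutor()
    val providerFuture = ProcessCameraProvider.getInstance(activity)

    providerFuture.addListener({
        val cameraProvider = providerFuture.get()

        // Low-latency on-screen preview.
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }

        // Analyze only the newest frame; older frames are discarded.
        val analysis = ImageAnalysis.Builder()
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
            .build()

        analysis.setAnalyzer(analysisExecutor) { frame ->
            detect(frame)?.let { label -> announce("Detected a $label bill") }
            frame.close()   // each frame must be closed or the stream stalls
        }

        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            activity, CameraSelector.DEFAULT_BACK_CAMERA, preview, analysis
        )
    }, ContextCompat.getMainExecutor(activity))
}
```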