Hardware Requirement Specification - Mir-Fahad-Abdullah/Walking-Assistant-Robot-for-Blind-People GitHub Wiki

The hardware configuration of the Walking Assistant Robot is designed to be cost-effective and lightweight while supporting real-time obstacle detection and navigation. The system is portable and energy-efficient, making it simple for visually impaired individuals to operate. Its essential components are environment-awareness sensors, a processing unit that runs the object detection algorithms, and an auditory output system that delivers voice instructions. This compact yet effective design keeps the robot operational in varied environments without being expensive or inaccessible.

The platform includes the following major components:

  • Nvidia Jetson Orin Nano (4GB) – Handles both AI processing and motor/sensor control
  • USB Camera (Logitech C270)
  • VL53L1X ToF Sensors (2x or more)
  • HC-SR04 Ultrasonic Sensors (2x)
  • MPU6050 IMU Sensor
  • L298N or TB6612FNG Motor Driver
  • DC Gear Motors with Wheels (4x or 6x)
  • USB Speaker or I2S Audio Module (MAX98357A)
  • Power Bank (5V, 3A or higher)
  • Li-ion Battery Pack
  • Buck Converter (MP2307)
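As a minimal sketch of how the listed ultrasonic sensors are typically read, the HC-SR04 reports distance as the width of an echo pulse, which is converted to centimeters using the speed of sound. The function below is a hypothetical helper (not from the project's code); on the Jetson the pulse itself would be timed via the Jetson.GPIO library.

```python
def pulse_to_distance_cm(pulse_s: float, speed_of_sound_m_s: float = 343.0) -> float:
    """Convert an HC-SR04 echo pulse width (seconds) to distance (cm).

    The echo pulse covers the round trip to the obstacle and back,
    so the one-way distance is half the total path length.
    """
    return (pulse_s * speed_of_sound_m_s * 100.0) / 2.0

# Example: a 10 ms echo pulse corresponds to roughly 1.7 m.
print(pulse_to_distance_cm(0.010))
```

In a full implementation, the trigger pin would be pulsed high for 10 µs and the echo pin timed with `time.monotonic()`; the exact pin numbers depend on the carrier-board wiring.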

Robot Architecture


The rocker-bogie suspension system, inspired by the Mars rovers, lets each wheel articulate independently and adapt to the terrain's shape, enabling the robot to climb stairs. On irregular surfaces such as stair steps, this configuration helps preserve ground contact and stability. For adequate traction on stairs, which typically have an inclination of 30° to 35°, the robot pairs high-torque DC gear motors with large, deep-tread rubberized wheels. The chassis is made of lightweight, durable PVC, which is inexpensive, easy to fabricate, and structurally strong enough for academic prototyping. Mounting the hardware and power supply close to the base keeps the center of gravity low, which enhances balance and reduces the risk of tipping.
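The motor-torque requirement implied by the 30°–35° incline can be estimated from the gravity component acting along the slope. The sketch below is an illustrative back-of-the-envelope calculation; the mass, wheel radius, and safety margin are assumed values, not project specifications.

```python
import math

def stair_torque_nm(mass_kg: float, wheel_radius_m: float,
                    incline_deg: float, n_driven_wheels: int,
                    margin: float = 1.5) -> float:
    """Estimate the torque each driven wheel needs to hold/climb a slope.

    Uses only the gravity component along the incline (F = m*g*sin(theta)),
    splits it evenly across the driven wheels, and applies a safety
    margin for rolling resistance and step edges.
    """
    force_n = mass_kg * 9.81 * math.sin(math.radians(incline_deg))
    return margin * force_n * wheel_radius_m / n_driven_wheels

# Assumed example: 3 kg robot, 5 cm wheel radius, 35° stairs, 6 driven wheels.
print(round(stair_torque_nm(3.0, 0.05, 35.0, 6), 3))
```

Even with these modest assumptions the per-wheel requirement lands near 0.2 N·m, which is why geared DC motors (rather than bare DC motors) are listed in the component table.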

Conclusion

The primary work of this project involved developing the user-robot communication interface using TTS technologies and Bangla NLP systems. To meet the requirements, the voice interface needed both natural, accurate speech and fast, reliable performance. Through optimized models, tailored speech training, and careful implementation, our solution meets the needs of blind users in Bangladesh. The project thus achieves two main goals: solving the underlying technological difficulties and improving the accessibility and safety of public spaces for blind users.
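As one possible shape for the voice-instruction layer described above, the sketch below maps a measured obstacle distance to a short Bangla phrase; the thresholds and phrases are illustrative assumptions, not the project's actual values. The returned string would then be passed to a TTS engine that supports Bangla.

```python
def obstacle_instruction(distance_cm: float) -> str:
    """Map an obstacle distance to a spoken Bangla instruction.

    Thresholds are hypothetical examples for illustration only.
    """
    if distance_cm < 50:
        return "থামুন"                 # "Stop"
    if distance_cm < 150:
        return "সাবধান, সামনে বাধা"     # "Caution, obstacle ahead"
    return "এগিয়ে যান"                 # "Go ahead"

# Example: a reading of 30 cm should trigger the stop instruction.
print(obstacle_instruction(30))
```

Separating the decision logic from the speech synthesis keeps the thresholds testable offline and lets the TTS backend be swapped without touching the navigation code.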