Disha operates on the principle of fusing advanced computer vision, artificial intelligence, and sensor technology into a comprehensive assistive tool for the visually impaired. At its core, the device uses a high-resolution camera system and depth sensors to capture detailed visual and spatial data from the user's surroundings. This data is processed by onboard AI algorithms, including Convolutional Neural Networks (CNNs) for object and facial recognition and Natural Language Processing (NLP) for voice interaction. Sensor fusion integrates data from multiple sources, such as an Inertial Measurement Unit (IMU), to give a robust picture of the user's environment and movement. Real-time analysis of this combined data lets Disha deliver contextual awareness, navigation guidance, and object identification through an intuitive voice interface; a minimal sketch of this perception loop appears at the end of this section. In essence, the device augments human perception by translating complex environmental information into actionable audio cues, empowering users with greater independence and situational awareness.

The culminating deliverable of the Disha project is a fully realized, wearable voice-assistant device that gives blind and visually impaired individuals greater autonomy. The final product will consist of a carefully engineered hardware prototype that integrates the sensor suite and processing unit within a comfortable, ergonomic design. Complementing this hardware will be an AI-driven software platform featuring robust algorithms for object and facial recognition, spatial awareness, and intuitive voice interaction. Rigorous real-world testing and validation will ensure the device's accuracy and reliability, accompanied by comprehensive user documentation and a plan for continuous software updates. Ultimately, Disha will represent a tangible, user-friendly solution, ready for manufacturing and distribution and designed to significantly improve the lives of its users.
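The sketch below illustrates one way such a perception loop could be wired together. It is not Disha's actual implementation: a pretrained torchvision detector stands in for the onboard CNN, and `read_camera_frame`, `read_depth_at`, and `read_imu_heading` are hypothetical stubs for the device's camera, depth-sensor, and IMU drivers.

```python
# Illustrative sketch of a Disha-style perception loop: detect objects
# with a CNN, fuse image position with depth and IMU heading, and phrase
# each result as a spoken cue. Sensor readers are hypothetical stubs.
import torch
import torchvision
from torchvision.models.detection import FasterRCNN_ResNet50_FPN_Weights

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]
preprocess = weights.transforms()


def read_camera_frame():
    """Hypothetical stub: returns an RGB frame tensor (C, H, W) in [0, 1]."""
    return torch.rand(3, 480, 640)


def read_depth_at(box):
    """Hypothetical stub: median depth (metres) inside a bounding box."""
    return 2.5


def read_imu_heading():
    """Hypothetical stub: current heading in degrees from the IMU."""
    return 90.0


def describe_scene(score_threshold=0.8):
    """One pass of the fused pipeline: detect, estimate range, phrase cues."""
    frame = read_camera_frame()
    with torch.no_grad():
        detections = model([preprocess(frame)])[0]
    cues = []
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if score < score_threshold:
            continue
        distance = read_depth_at(box)
        # Fuse the object's horizontal image position with the IMU heading
        # to phrase a direction (assumes roughly a 60-degree field of view).
        centre_x = ((box[0] + box[2]) / 2 / frame.shape[-1]).item()  # 0..1
        bearing = read_imu_heading() + (centre_x - 0.5) * 60
        cues.append(f"{labels[int(label)]} about {distance:.1f} metres away, "
                    f"bearing {bearing:.0f} degrees")
    return cues


if __name__ == "__main__":
    for cue in describe_scene():
        print(cue)  # a deployed device would route this through text-to-speech
```

In the actual device, the printed cues would instead be passed to the voice interface's text-to-speech engine, and the stubbed readers would be replaced by real sensor drivers.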