Modified Microsoft HoloLens to Serve People with Visual Impairments
People accustomed to seeing everything around them rarely notice how important vision is to social interaction and communication. A Microsoft research project uses AI (artificial intelligence) and AR (augmented reality) to help people with visual impairments “see” their interlocutors.
The problem faced by people with visual impairments is, of course, that they cannot see the people around them, so many of the non-verbal cues others rely on in conversation are unavailable to them.
Project Tokyo is a new attempt by Microsoft researchers to understand how AI and AR technologies can be useful to people with disabilities.
And though every case is individual, it is worth noting that virtual and voice-controlled assistants are already a boon to many people who cannot easily use a touchscreen, mouse, or keyboard.
The team, formed several years ago with the informal mission of improving access to information, began by observing participants at the Special Olympics and then held workshops with members of the blind and low-vision community. The first results of their research concerned subtle context: the background information that vision supplies in almost every situation.
“Most people have a very nuanced and detailed perception of how to interact with others. This perception is directly shaped by an understanding of who is in the room, what they are doing, what relationship they have with me, and whether or not they are relevant to me,” said Microsoft researcher Ed Cutrell. “And for blind people, many of the signals that we take for granted are simply unavailable.”
The experimental solution being explored by Project Tokyo is a modified Microsoft HoloLens (a mixed-reality headset) – without the lenses, of course.
The HoloLens is a sophisticated imaging device, and with the right software it can identify objects and people in its field of view.
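Microsoft has not published Project Tokyo’s recognition stack, but the general idea of spotting familiar faces in a camera frame can be sketched with the open-source face_recognition library; the reference photos, the known_faces gallery, and the match tolerance below are illustrative assumptions, not details from the project.

```python
# Minimal sketch of "who is in front of the camera?" using the open-source
# face_recognition library (https://github.com/ageitgey/face_recognition).
# The gallery of known people and the tolerance value are illustrative
# assumptions; Project Tokyo's actual pipeline is not public.
import face_recognition

# Hypothetical gallery: one reference photo per known person.
known_faces = {
    "Alice": face_recognition.face_encodings(
        face_recognition.load_image_file("alice.jpg"))[0],
    "Bob": face_recognition.face_encodings(
        face_recognition.load_image_file("bob.jpg"))[0],
}

def identify_faces(frame_path):
    """Return (location, name-or-None) for each face found in the frame."""
    frame = face_recognition.load_image_file(frame_path)
    locations = face_recognition.face_locations(frame)
    encodings = face_recognition.face_encodings(frame, locations)
    results = []
    for loc, enc in zip(locations, encodings):
        name = None
        for candidate, ref in known_faces.items():
            # compare_faces returns one boolean per reference encoding.
            if face_recognition.compare_faces([ref], enc, tolerance=0.6)[0]:
                name = candidate
                break
        results.append((loc, name))
    return results
```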
Users wear the device on their heads like a high-tech headband, and a custom software stack provides a set of contextual audio cues (a simplified sketch of this cue logic follows the list below):
• When a person is detected, say, four feet to the right, the headset emits a click that sounds as if it comes from that spot.
• If the person’s face is familiar, the headset plays a second signal and announces the person’s name (audible only to the user).
• If the face is unknown or poorly visible, the headset produces a sustained tone that modulates as the user turns their head towards the other person, ending with a click when the face is centered in the camera (which also means the user is facing them directly).
• For people nearby, an LED strip glows white in the direction of a detected person, and green once that person has been identified.
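The exact cue logic has not been published; the sketch below is a hypothetical reconstruction of the mapping just described, where all type names, thresholds, and signal labels are assumptions made for illustration.

```python
# Hypothetical reconstruction of the cue mapping described above.
# All types, thresholds, and signal names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    azimuth_deg: float        # direction of the face, 0 = straight ahead
    name: Optional[str]       # None if the face was not recognized
    face_centered: bool       # True when the face is centered in the camera

CENTER_TOLERANCE_DEG = 5.0    # assumed "facing them directly" threshold

def cues_for(det: Detection) -> list[str]:
    """Map one detection to the audio/LED cues described in the article."""
    # Every detected person produces a click spatialized to their direction.
    cues = [f"spatialized click from {det.azimuth_deg:+.0f} degrees"]
    if det.name is not None:
        # Familiar face: second signal plus a private name announcement.
        cues.append(f"announce name: {det.name}")
        cues.append("LED strip: green toward person")
    else:
        # Unknown face: a tone guides the user's head toward it.
        cues.append("LED strip: white toward person")
        if abs(det.azimuth_deg) <= CENTER_TOLERANCE_DEG and det.face_centered:
            cues.append("confirmation click: face centered")
        else:
            cues.append("modulated guidance tone")
    return cues

# Example: an unrecognized person roughly four feet to the user's right.
print(cues_for(Detection(azimuth_deg=45.0, name=None, face_centered=False)))
```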
Other cues are still being evaluated, but this set is just a beginning, and early experimental game sessions with blind children suggest that the devices could be extremely useful.
The work done so far is a good start, but it is clearly still in progress. Bulky, expensive hardware is not something you would want to wear all day, and different users will certainly have different needs. And what about facial expressions and gestures, street signs and menus?
Ultimately, the future of Project Tokyo, as before, will be determined by the needs of the communities it serves – communities that, unfortunately, are seldom consulted when artificial intelligence systems and other modern conveniences are developed.
image credit: 3DWorld