Modified HoloLens helps teach kids with vision impairment to navigate the social world

Growing up with blindness or low vision can be difficult for kids, not just because they can’t read the same books or play the same games as their sighted peers: vision is also a big part of social interaction and conversation. This Microsoft research project uses augmented reality to help kids with vision impairment “see” the people they’re talking with.

The challenge people with vision impairment encounter is, of course, that they can’t see the other people around them. This can prevent them from detecting and using many of the nonverbal cues sighted people use in conversation, especially if those behaviors aren’t learned at an early age.

Project Tokyo is a new effort from Microsoft in which its researchers are looking into how technologies like AI and AR can be made useful to all people, including those with disabilities. That isn’t always the case today, though it must be said that voice-powered virtual assistants are a boon to many who can’t as easily use a touchscreen or a mouse and keyboard.

The team, which started as an informal challenge to improve accessibility a few years ago, began by observing people traveling to the Special Olympics, then followed that up with workshops involving the blind and low vision community. Their primary realization was just how much subtle context sight provides in nearly all situations.

“We, as humans, have this very, very nuanced and elaborate sense of social understanding of how to interact with people — getting a sense of who is in the room, what are they doing, what is their relationship to me, how do I understand if they are relevant for me or not,” said Microsoft researcher Ed Cutrell. “And for blind people a lot of the cues that we take for granted just go away.”

In children this can be especially pronounced, as having perhaps never learned the relevant cues and behaviors, they can themselves exhibit antisocial tendencies like resting their head on a table while conversing, or not facing a person when speaking to them.

To be clear, these behaviors aren’t “problematic” in themselves, as they are just the person doing what works best for them, but they can inhibit everyday relations with sighted people, and it’s a worthwhile goal to consider how those relations can be made easier and more natural for everyone.

The experimental solution Project Tokyo has been pursuing involves a modified HoloLens, minus the lens, of course. Beyond its display, the HoloLens is a highly sophisticated imaging device that can identify objects and people when provided with the right code.

The user wears the device like a high-tech headband, and a custom software stack provides them with a set of contextual cues:

  • When a person is detected, say four feet away on the right, the headset will emit a click that sounds like it is coming from that location.
  • If the face of the person is known, a second “bump” sound is made and the person’s name announced (audible only to the user).
  • If the face is not known or can’t be seen well, a “stretching” sound is played that modulates as the user directs their head towards the other person, ending in a click when the face is centered on the camera (which also means the user is facing them directly).
  • For those nearby, an LED strip shows a white light in the direction of a person who has been detected, and a green light if they have been identified.
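The cue logic described above can be sketched roughly as follows. This is a hypothetical illustration, not Project Tokyo’s actual code; the `Detection` type, cue names, and the centering threshold are all invented for the example.

```python
# Hypothetical sketch of the cue-selection logic described in the article.
# All names and thresholds are illustrative, not Microsoft's actual code.
from dataclasses import dataclass
from typing import Optional, List


@dataclass
class Detection:
    angle_deg: float            # bearing of the person relative to the user's head
    identified: bool            # True if face recognition matched a known person
    name: Optional[str] = None  # the matched person's name, if any


def choose_cues(d: Detection, centered_tolerance_deg: float = 10.0) -> List[str]:
    """Return the audio and LED cues to emit for one detected person."""
    cues = ["spatial_click"]              # always localize the person with a click
    if d.identified:
        cues.append(f"bump:{d.name}")     # second "bump" plus name announcement
        cues.append("led_green")          # green LED: person identified
    else:
        cues.append("led_white")          # white LED: person detected, not identified
        if abs(d.angle_deg) <= centered_tolerance_deg:
            cues.append("centered_click")  # face centered on the camera
        else:
            cues.append("stretching")      # modulated sound guiding the head turn
    return cues
```

For example, an unidentified person 40 degrees to the right would trigger the spatial click, the white LED and the “stretching” guidance sound, while an identified friend directly ahead would trigger the click, the name announcement and the green LED.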

Other tools are being evaluated, but this set is a start, and based on a case study with a game 12-year-old named Theo, they could be extremely helpful.

Microsoft’s post describing the system and the team’s work with Theo and others is worth reading for the details, but essentially Theo began to learn the ins and outs of the system and in turn began to manage social situations using cues mainly used by sighted people. For instance, he learned that he can deliberately direct his attention at someone by turning his head towards them, and developed his own method of scanning the room to keep tabs on those nearby — neither one possible when one’s head is on the table.

That kind of empowerment is a good start, but this is definitely a work in progress. The bulky, expensive hardware isn’t exactly something you’d want to wear all day, and naturally different users will have different needs. What about expressions and gestures? What about signs and menus? Ultimately the future of Project Tokyo will be determined, as before, by the needs of the communities who are seldom consulted when it comes to building AI systems and other modern conveniences.

Gadgets – TechCrunch

Kids with lazy eye can be treated just by letting them watch TV on this special screen

Amblyopia, commonly called lazy eye, is a medical condition that adversely affects the eyesight of millions, but if caught early it can be cured altogether; unfortunately, treatment usually means months of wearing an eyepatch. NovaSight claims successful treatment with nothing more than an hour a day in front of its special display.

The condition occurs when the two eyes aren’t synced up in their movements. Normally both eyes focus the detail-oriented fovea of the retina on whatever object the person is attending to; in those with amblyopia, one eye doesn’t aim its fovea correctly, so the eyes fail to converge properly and vision suffers. Left untreated, the condition can lead to serious vision loss.

It can be detected early on in children, and treatment can be as simple as covering the good eye with a patch for most of the day, which forces the other eye to adjust and align itself properly. The problem is of course that this is uncomfortable and embarrassing for the kid, and using only one eye isn’t ideal for playing schoolyard games and other everyday things.


NovaSight’s innovation with CureSight is to let this alignment process happen without the eyepatch, instead selectively blurring content the child watches so that the affected eye has to do the work while the other takes a rest.

It accomplishes this with the same technology that, ironically, gave many of us double vision back in the early days of 3D: glasses with blue and red lenses.

Blue-red stereoscopy presents two slightly different versions of the same image, one tinted red and one tinted blue. Normally it would be used with slightly different parallax to produce a binocular 3D image — that’s what many of us saw in theaters or amusement park rides.

In this case, however, one of the two tinted images just has a blurry circle right where the kid is looking. The screen uses a built-in Tobii eye-tracking sensor so it knows where the circle should be; I got to test it out briefly, and the circle quickly caught up with my gaze. As a result, the affected eye, now the only one with access to the details of the image at the gaze point, has to be relied on to aim where the kid needs it to.
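The core idea, gaze-contingent degradation of one eye’s image, can be sketched in a few lines. This is an illustrative toy using NumPy, not NovaSight’s implementation; the function name, the crude mean-fill “blur,” and the radius parameter are all assumptions for the example.

```python
# Illustrative sketch of gaze-contingent blurring, not NovaSight's actual code.
import numpy as np


def blur_at_gaze(image: np.ndarray, gaze_xy: tuple, radius: int) -> np.ndarray:
    """Return a copy of `image` with a circular region around the gaze point
    replaced by its own mean value (a crude stand-in for a blur).

    A dichoptic display would show this degraded frame to the stronger eye and
    the sharp frame to the amblyopic eye, so only the weak eye carries detail
    at the point the child is looking at."""
    h, w = image.shape[:2]
    ys, xs = np.ogrid[:h, :w]            # coordinate grids for the mask
    gx, gy = gaze_xy
    mask = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius ** 2
    out = image.astype(float).copy()
    out[mask] = out[mask].mean()         # flatten detail inside the gaze circle
    return out.astype(image.dtype)
```

Each frame, the gaze coordinates from the eye tracker would be fed in and the circle re-centered, which is why the circle appeared to “catch up” with my gaze in the demo.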

The best part is that there’s no special treatment regimen or testing: kids can literally just watch YouTube or a movie using the special setup, and they get better, NovaSight claims. And it can be done at home on the kid’s schedule — always a plus.

Graphs from NovaSight website.

The company has already done some limited clinical trials that showed “significant improvement” over a 12-week period. Whether it can be relied on to completely cure the condition or if it should be paired with other established treatments will come out in further trials the company has planned.

In the meantime, however, it’s nice to see a technology like 3D displays applied to improving vision rather than promoting bad films. NovaSight has been developing and promoting its tech over the last year; it also has a product that helps diagnose vision problems using a similar application of 3D display tech. You can learn more or request additional info at its website.


Animated, interactive digital books may help kids learn better

Digital books may have a few advantages over ordinary ones when it comes to kids remembering their contents, according to a new study. Animations, especially ones keyed to verbal interactions, can significantly improve recall of story details — but they have to be done right.

The research, from psychologist Erik Thiessen at Carnegie Mellon University, evaluated the recall of 30 kids aged 3-5 after they were read either an ordinary storybook or one with animations for each page.

When asked afterwards about what they remembered, the kids who had seen the animated book tended to remember 15-20 percent more. The best results were seen when the book animated in response to the child saying or asking something about it (though this had to be done manually by the reading adult) rather than just automatically.

“Children learn best when they are more involved in the learning process,” explained Thiessen in a CMU news post. “Many digital interfaces are poorly suited to children’s learning capacities, but if we can make them better, children can learn better.”

This is not to say that all books for kids should be animated. Traditional books are always going to have their own advantages, and once you get past the picture-book stage these digital innovations don’t help much.

The point, rather, is to show that digital books can be useful and aren’t a pointless addition to a kid’s library. But it’s important that the digital features are created and tuned with an eye to improving learning, and research must be done to determine exactly how that is best accomplished.

Thiessen’s study was published in the journal Developmental Psychology.
