Modified HoloLens helps teach kids with vision impairment to navigate the social world

Growing up with blindness or low vision can be difficult for kids, and not just because they can’t read the same books or play the same games as their sighted peers: vision is also a big part of social interaction and conversation. This Microsoft research project uses augmented reality to help kids with vision impairment “see” the people they’re talking with.

The challenge people with vision impairment encounter is, of course, that they can’t see the other people around them. This can prevent them from detecting and using many of the nonverbal cues sighted people use in conversation, especially if those behaviors aren’t learned at an early age.

Project Tokyo is a new effort from Microsoft in which its researchers are looking into how technologies like AI and AR can be made useful to all people, including those with disabilities. That isn’t always a given, though it must be said that voice-powered virtual assistants are a boon to many who can’t as easily use a touchscreen or a mouse and keyboard.

The team, which started as an informal challenge to improve accessibility a few years ago, began by observing people traveling to the Special Olympics, then followed that up with workshops involving the blind and low vision community. Their primary realization was how much subtle context sight provides in nearly all situations.

“We, as humans, have this very, very nuanced and elaborate sense of social understanding of how to interact with people — getting a sense of who is in the room, what are they doing, what is their relationship to me, how do I understand if they are relevant for me or not,” said Microsoft researcher Ed Cutrell. “And for blind people a lot of the cues that we take for granted just go away.”

In children this can be especially pronounced: having perhaps never learned the relevant cues and behaviors, they may exhibit habits that read as antisocial, like resting their head on a table while conversing, or not facing a person when speaking to them.

To be clear, these behaviors aren’t “problematic” in themselves, as they are just the person doing what works best for them, but they can inhibit everyday relations with sighted people, and it’s a worthwhile goal to consider how those relations can be made easier and more natural for everyone.

The experimental solution Project Tokyo has been pursuing involves a modified HoloLens — minus the lens, of course. Even without it, the headset is a highly sophisticated imaging device that can identify objects and people if provided with the right code.

The user wears the device like a high-tech headband, and a custom software stack provides them with a set of contextual cues (roughly sketched in code after the list):

  • When a person is detected, say four feet away on the right, the headset will emit a click that sounds like it is coming from that location.
  • If the face of the person is known, a second “bump” sound is made and the person’s name announced (audible only to the user).
  • If the face is not known or can’t be seen well, a “stretching” sound is played that modulates as the user directs their head towards the other person, ending in a click when the face is centered on the camera (which also means the user is facing them directly).
  • For those nearby, an LED strip shows a white light in the direction of a person who has been detected, and a green light if they have been identified.
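
Project Tokyo’s software isn’t public, so the following is only a rough, hypothetical sketch of the cue logic described in the list above, with invented names, types and thresholds; it simply makes concrete how a single detection might map to the sounds and lights the user receives.

```python
# Hypothetical sketch of the cue logic described above; none of these names,
# types or thresholds come from Project Tokyo itself.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    angle_deg: float        # bearing of the person relative to the wearer
    distance_m: float       # estimated distance
    name: Optional[str]     # None if the face is unknown or can't be seen well
    face_centered: bool     # True when the camera is pointed straight at the face

def cues_for(person: Detection) -> List[str]:
    """Return the audio and LED cues the headset would emit for one detected person."""
    cues = [f"spatialized click from {person.angle_deg:.0f} deg, {person.distance_m:.1f} m away"]
    if person.name:
        cues.append(f"bump sound, then announce '{person.name}' (user-only audio)")
        cues.append("LED strip: green light toward person")
    else:
        cues.append("LED strip: white light toward person")
        if person.face_centered:
            cues.append("click: face centered, wearer is facing them directly")
        else:
            cues.append("stretching sound, modulated as the wearer turns toward them")
    return cues

# Example: an unknown person about four feet (1.2 m) away on the right.
print(cues_for(Detection(angle_deg=45, distance_m=1.2, name=None, face_centered=False)))
```

The real system layers spatialized audio and on-device face recognition on top of a mapping like this; the sketch only captures the decision logic.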

Other tools are being evaluated, but this set is a start, and based on a case study with a game 12-year-old named Theo, it could be extremely helpful.

Microsoft’s post describing the system and the team’s work with Theo and others is worth reading for the details, but essentially Theo began to learn the ins and outs of the system and in turn began to manage social situations using cues mainly used by sighted people. For instance, he learned that he can deliberately direct his attention at someone by turning his head towards them, and developed his own method of scanning the room to keep tabs on those nearby — neither one possible when one’s head is on the table.

That kind of empowerment is a good start, but this is definitely a work in progress. The bulky, expensive hardware isn’t exactly something you’d want to wear all day, and naturally different users will have different needs. What about expressions and gestures? What about signs and menus? Ultimately the future of Project Tokyo will be determined, as before, by the needs of the communities who are seldom consulted when it comes to building AI systems and other modern conveniences.


Amazon debuts a scale model autonomous car to teach developers machine learning

Amazon today announced AWS DeepRacer, a fully autonomous 1/18th-scale race car that aims to help developers learn machine learning. Priced at $399 but currently offered for $249, the race car lets developers get hands-on — literally — with a machine learning technique called reinforcement learning (RL).

RL takes a different approach to training models than other machine learning techniques, Amazon explained.

It’s a type of machine learning that works when an “agent” is allowed to act on a trial-and-error basis within an interactive environment. It does so using feedback from those actions to learn over time in order to reach a predetermined goal or to maximize some type of score or reward.

This makes it different from other machine learning techniques — like supervised learning, for example — as it doesn’t require any labeled training data to get started, and it can make short-term decisions while optimizing for a long-term goal.
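
To make that distinction concrete, here is a minimal, generic sketch of reinforcement learning: a tabular Q-learning agent on a toy one-dimensional “track.” It has nothing to do with DeepRacer’s or SageMaker’s actual code; it just shows an agent improving purely from trial, error and a reward signal, with no labeled data in sight.

```python
# Minimal tabular Q-learning example: an agent learns, from reward alone,
# to walk right along a 1-D "track" to reach the goal state.
import random

N_STATES = 6          # positions 0..5; reaching state 5 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose_action(state: int) -> int:
    # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        action = choose_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy should step right (+1) from every non-goal state.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

DeepRacer wraps the same feedback loop in a 3D simulator and a learned driving policy, but the principle, acting, observing a reward and updating, is the same.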

The new race car lets developers experiment with RL by applying it to autonomous driving.

Developers first get started using a virtual car and tracks in a cloud-based 3D racing simulator, powered by AWS RoboMaker. Here, they can train an autonomous driving model against a collection of predefined race tracks included with the simulator, then evaluate it virtually or download it to the real-world AWS DeepRacer car.

They can also opt to participate in the first AWS DeepRacer League at the re:Invent conference, where the car was announced. This event will take place over the next 24 hours in the AWS DeepRacer workshops and at the MGM Speedway and will involve using Amazon SageMaker, AWS RoboMaker and other AWS services.

There are six main tracks, each with a pit area, a hacker garage and two extra tracks developers can use for training and experimentation. There will also be a DJ.

The league will continue after the event, as well, with a series of live racing events starting in 2019 at AWS Global Summits worldwide. Virtual tournaments will also be hosted throughout the year, Amazon said, culminating in the AWS DeepRacer 2019 Championship Cup at re:Invent 2019.

As for the car’s hardware itself, it’s a 1/18th-scale, radio-controlled, four-wheel drive vehicle powered by an Intel Atom processor. The processor runs Ubuntu 16.04 LTS, ROS (Robot Operating System) and the Intel OpenVINO computer vision toolkit.

The car also includes a 4 megapixel camera with 1080p resolution, 802.11ac Wi-Fi, multiple USB ports and battery power that will last for about two hours.

It’s available for sale on Amazon.



New techniques teach drones to fly through small holes

Researchers at the University of Maryland are adapting the techniques used by birds and bugs to teach drones how to fly through small holes at high speeds. The technique requires only a few sensing shots to define the opening and lets the drone fly through an irregularly shaped hole with no training.

Nitin J. Sanket, Chahat Deep Singh, Kanishka Ganguly, Cornelia Fermüller, and Yiannis Aloimonos created the project, called GapFlyt, to teach drones using only simple, insect-like eyes.

The technique they used, called optical flow, recovers depth cues from a very simple monocular camera. By tracking features across successive images, the drone can tell the shape and depth of a hole based on how much each feature moves between frames: things closer to the drone shift more than things farther away, letting it separate the foreground from the background.
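
GapFlyt’s own code isn’t reproduced here, but the underlying idea can be illustrated with a generic dense optical flow computation in OpenCV, run on a synthetic scene invented for the demo: between two frames from a single moving camera, nearby objects produce larger flow vectors than distant ones, which is enough to split foreground from background.

```python
# Illustration of optical-flow-based foreground/background separation, not the
# GapFlyt implementation: pixels that move more between frames are closer.
import cv2
import numpy as np

def make_frame(near_x: int, far_x: int) -> np.ndarray:
    """Synthetic scene: a large 'near' square and a small 'far' square on a gray wall."""
    img = np.full((240, 320), 80, dtype=np.uint8)
    cv2.rectangle(img, (near_x, 140), (near_x + 60, 200), 255, -1)   # near object
    cv2.rectangle(img, (far_x, 40), (far_x + 20, 60), 180, -1)       # far object
    return img

# Simulate a small sideways camera motion: the near square shifts 12 px, the far one 3 px.
frame1 = make_frame(near_x=100, far_x=200)
frame2 = make_frame(near_x=112, far_x=203)

# Dense optical flow between the two monocular frames.
flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
magnitude = np.linalg.norm(flow, axis=2)

# Large flow means foreground; small flow means background (or the gap itself).
foreground = magnitude > 0.5 * magnitude.max()
print("foreground pixels:", int(foreground.sum()), "of", foreground.size)
```

On the real drone the flow comes from its own deliberate motion around the gap rather than a synthetic scene, but the near-moves-more principle is the same.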

As you can see in the researchers’ demo video, they created a very messy environment in which to test their system. The Bebop 2 drone, with an NVIDIA Jetson TX2 GPU on board, flits around the hole like a bee and then buzzes right through at 2 meters per second, a solid speed. The researchers also made the far wall visually similar to the closer one, showing that the technique can work in novel and cluttered situations.

The team at the University of Maryland’s Perception and Robotics Group reported that the drone was 85 percent accurate as it flew through various openings. It’s not quite as fast as Luke skirting Beggar’s Canyon back on Tatooine, but it’s an impressive start.


The BecDot is a toy that helps teach vision-impaired kids to read braille

Learning braille is a skill that, like most, is best learned at an early age by those who need it. But toddlers with vision impairment often have few or no options to do so, leaving them behind their peers academically and socially. BecDot is a toy, created by parents facing that challenge, that teaches kids braille in a fun, simple way, and is both robust and affordable.


These magical (robotic) socks teach you to dance (robotically)

As humans find themselves forced to mate with our robotic overlords, I suspect there will be some dancing. And what better way to teach us how to dance than with motors tucked into our socks? Designer Pascal Ziegler built these wild wearables to teach “dancing pairs choreography.” They’re basically vibrating socks. There is an Instructable so you can make a pair of your own.
