SimShine raises $8 million for home security cameras that use edge computing

SimShine, a computer vision startup based in Shenzhen, has raised $8 million in pre-Series A funding for SimCam, its line of home security cameras that use edge computing to keep data on-device. The funding was led by Cheetah Mobile, with participation from Skychee, Skyview Fund and Oak Pacific Investment.

Earlier this year, SimShine raised $310,095 in a crowdfunding campaign on Kickstarter. It will use its pre-Series A round for product development and hiring.

SimShine’s team started off developing computer vision and edge computing software, spending five years working with enterprise clients before launching SimCam.

The company plans to release more smart home products that use edge computing, with the ultimate goal of building an IoT platform to connect different devices, co-founder and chief marketing officer Joe Pham tells TechCrunch. SimCam currently integrates with Amazon Alexa and Google Assistant, with support for Apple HomeKit in the works.

Pham says edge computing protects users’ privacy by keeping data, including face recognition data, on the device. Because the calculations run continuously on the camera itself, it also reduces latency and false alarms (the cameras connect to Wi-Fi so customers can watch surveillance video on their smartphones). It also means customers don’t have to sign up for the subscription plans that many cloud-based home security cameras require, and it lowers the price of each device, since SimShine does not have to maintain cloud servers.
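
To make that concrete, here is a minimal, hypothetical sketch of an edge-first camera loop in the spirit Pham describes; the class and function names are ours, not SimShine’s API, and the detector stub stands in for whatever model actually runs on the camera.

```python
# Illustrative only: frames, face data and inference all stay on the device;
# the only thing that ever leaves it is a small alert event sent over Wi-Fi.
import time
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    is_known_face: bool

class OnDeviceModel:
    """Stand-in for a detector whose weights live on the camera itself."""
    def detect(self, frame) -> list[Detection]:
        return []  # real hardware would return detections computed locally

def send_alert(kind: str, timestamp: float) -> None:
    # Only a tiny event goes out; no video or biometric data is uploaded.
    print(f"alert: {kind} at {timestamp:.0f}")

def run_loop(model: OnDeviceModel, read_frame, interval_s: float = 0.2) -> None:
    while True:
        frame = read_frame()             # frame comes from the local sensor
        for det in model.detect(frame):  # inference happens on the camera, not in a cloud
            if det.label == "person" and not det.is_known_face:
                send_alert("unknown_person", time.time())
        time.sleep(interval_s)
```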


Xnor’s saltine-sized, solar-powered AI hardware redefines the edge

“If AI is so easy, why isn’t there any in this room?” asks Ali Farhadi, founder and CEO of Xnor, gesturing around the conference room overlooking Lake Union in Seattle. And it’s true — despite a handful of displays, phones, and other gadgets, the only things really capable of doing any kind of AI-type work are the phones each of us has set on the table. Yet we are always hearing about how AI is so accessible now, so flexible, so ubiquitous.

And in many cases, even the devices that can aren’t employing machine learning techniques themselves, but rather sending data off to the cloud, where it can be done more efficiently, because the processes that make up “AI” are often resource-intensive, sucking up CPU time and battery power.

That’s the problem Xnor aimed to solve, or at least mitigate, when it spun off from the Allen Institute for Artificial Intelligence in 2017. Its breakthrough was to make the execution of deep learning models on edge devices so efficient that a $5 Raspberry Pi Zero could perform state of the art computer vision processes nearly as well as a supercomputer.

The team achieved that, and Xnor’s hyper-efficient ML models are now integrated into a variety of devices and businesses. As a follow-up, the team set their sights higher — or lower, depending on your perspective.

Answering his own question on the dearth of AI-enabled devices, Farhadi pointed to the battery pack in the demo gadget the team made to show off the Pi Zero platform: “This thing right here. Power.”

Power was the bottleneck they overcame to get AI onto CPU- and power-limited devices like phones and the Pi Zero. So the team came up with a crazy goal: Why not make an AI platform that doesn’t need a battery at all? Less than a year later, they’d done it.

That thing right there performs a serious computer vision task in real time: It can detect in a fraction of a second whether and where a person, or car, or bird, or whatever, is in its field of view, and relay that information wirelessly. And it does this using the kind of power usually associated with solar-powered calculators.

The device Farhadi and hardware engineering head Saman Naderiparizi showed me is very simple — and necessarily so. A tiny camera with a 320×240 resolution, an FPGA loaded with the object recognition model, a bit of memory to handle the image and camera software, and a small solar cell. A very simple wireless setup lets it send and receive data at a very modest rate.
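
A quick back-of-the-envelope calculation shows why such a modest radio is enough: a raw 320×240 frame is far too big to stream over a low-power link, while a detection result is tiny. The byte-per-pixel and radio-rate figures below are assumptions for illustration, not published specs.

```python
# Why the device relays detections rather than video (illustrative numbers).
FRAME_W, FRAME_H = 320, 240
frame_bytes = FRAME_W * FRAME_H        # 76,800 bytes for one raw frame, assuming 8-bit grayscale
detection_bytes = 8                    # roughly: label id + bounding box + timestamp

assumed_radio_bps = 10_000             # "very modest" low-power link; this rate is an assumption
print(frame_bytes * 8 / assumed_radio_bps)      # ~61 s to push a single raw frame
print(detection_bytes * 8 / assumed_radio_bps)  # ~0.006 s to push a detection event
```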

“This thing has no power. It’s a two dollar computer with an uber-crappy camera, and it can run state of the art object recognition,” enthused Farhadi, clearly more than pleased with what the Xnor team has created.

For reference, this video from the company’s debut shows the kind of work it’s doing inside:

As long as the cell is in any kind of significant light, it will power the image processor and object recognition algorithm. It needs about a hundred millivolts coming in to work, though at lower levels it could just snap images less often.

It can run on that current alone, but of course it’s impractical to not have some kind of energy storage; to that end this demo device has a supercapacitor that stores enough energy to keep it going all night, or just when its light source is obscured.
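
Taken together, those two details suggest a simple duty-cycling policy: capture at full rate when there is enough light, back off as the input drops, and lean on the supercapacitor in the dark. The thresholds and intervals below are illustrative guesses, not Xnor's firmware.

```python
# Light-dependent duty cycling (illustrative thresholds, not Xnor's firmware).
from typing import Optional

def capture_interval_s(solar_mv: float, supercap_charged: bool) -> Optional[float]:
    if solar_mv >= 100:        # ~100 mV of input is enough to run normally, per the article
        return 1.0             # e.g. one frame per second
    if solar_mv >= 50:         # dimmer light: snap images less often
        return 10.0
    if supercap_charged:       # no usable light: run sparsely off stored energy
        return 30.0
    return None                # nothing left; sleep until light returns
```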

As a demonstration of its efficiency, suppose you equipped it with a watch battery. Naderiparizi said it could probably run on that at one frame per second for more than 30 years.
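
That figure is roughly plausible if you assume a CR2032-class watch battery holding about 220 mAh at 3 V (our assumption, not a number from Xnor):

```python
# Rough check of the "30 years at one frame per second on a watch battery" claim.
capacity_j = 0.220 * 3600 * 3.0    # ~2,376 J in a CR2032-class cell (assumed capacity)
frames = 30 * 365 * 24 * 3600      # one frame per second for 30 years
print(capacity_j / frames)         # ~2.5e-06 J, i.e. a few microjoules per frame
```

That works out to a budget of only a few microjoules per frame, the same microjoule regime mentioned below for a custom chip.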

Not a product

Of course the breakthrough isn’t really that there’s now a solar-powered smart camera. That could be useful, sure, but it’s not really what’s worth crowing about here. It’s the fact that a sophisticated deep learning model can run on a computer that costs pennies and uses less power than your phone does when it’s asleep.

“This isn’t a product,” Farhadi said of the tiny hardware platform. “It’s an enabler.”

The energy necessary for performing inference processes such as facial recognition, natural language processing, and so on puts hard limits on what can be done with them. A smart light bulb that turns on when you ask it to isn’t really a smart light bulb. It’s a board in a light bulb enclosure that relays your voice to a hub and probably a datacenter somewhere, which analyzes what you say and returns a result, turning the light on.

That’s not only convoluted, but it introduces latency and a whole spectrum of places where the process could break or be attacked. And meanwhile it requires a constant source of power or a battery!

On the other hand, imagine a camera you stick into a house plant’s pot, or stick to a wall, or set on top of the bookcase, or anything. This camera requires no more power than some light shining on it; it can recognize voice commands and analyze imagery without touching the cloud at all; it can’t really be hacked because it barely has an input at all; and its components cost maybe $10.

Only one of these things can be truly ubiquitous. Only the latter can scale to billions of devices without requiring immense investment in infrastructure.

And honestly, the latter sounds like a better bet for a ton of applications where there’s a question of privacy or latency. Would you rather have a baby monitor that streams its images to a cloud server where it’s monitored for movement? Or a baby monitor that absent an internet connection can still tell you if the kid is up and about? If they both work pretty well, the latter seems like the obvious choice. And that’s the case for numerous consumer applications.

Amazingly, the power cost of the platform isn’t anywhere near bottoming out. The FPGA used to do the computing on this demo unit isn’t particularly efficient for the processing power it provides. If they had a custom chip baked, they could get another order of magnitude or two out of it, lowering the work cost for inference to the level of microjoules. The size is more limited by the optics of the camera and the size of the antenna, which must have certain dimensions to transmit and receive radio signals.

And again, this isn’t about selling a million of these particular little widgets. As Xnor has done already with its clients, the platform and software that runs on it can be customized for individual projects or hardware. One even wanted a model to run on MIPS — so now it does.

By drastically lowering the power and space required to run a self-contained inference engine, entirely new product categories can be created. Will they be creepy? Probably. But at least they won’t have to phone home.


Virgin Galactic touches the edge of space with Mach 2.9 test flight of SpaceShipTwo

The fourth test flight of Virgin Galactic’s SpaceShipTwo took its test pilots to the very edge of space this morning, reaching just over 52 miles of altitude and a maximum speed of Mach 2.9. It’s another exciting leapfrog of the aspiring space tourism company’s previous achievements.

Takeoff was at 7:30 AM against a lovely sunrise in the Mojave.

The actual spacecraft, SpaceShipTwo, was strapped to the belly of WhiteKnightTwo (VSS Unity and VMS Eve specifically) as the latter gave it a ride up to about 45,000 feet.

At that point SpaceShipTwo ignited its rocket engine and started zooming upwards at increasing speed. The 60-second burn of the engine, 18 seconds longer than the third test flight’s, took the craft up to Mach 2.9 — quite a bit faster than before.

After that minute-long burn SpaceShipTwo deployed its “feathers,” helping slow and guide it to a controlled re-entry. It had at this point reached 271,268 feet, approximately 51.4 miles or 82.7 kilometers.
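
For anyone checking the conversion, the reported numbers line up:

```python
# Converting the reported peak altitude.
feet = 271_268
print(feet / 5_280)           # ≈ 51.4 miles
print(feet * 0.3048 / 1_000)  # ≈ 82.7 kilometers
```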

Now, space “officially” begins by international consensus at 100 km, at what’s called the Kármán line. But space-like conditions begin well before that, and a planned altitude of around 80 km was good enough for NASA to load a set of microgravity experiments onto the craft. And some have suggested the line should be 80 km instead. So while it’s debatable whether Virgin Galactic truly went to space (the company is saying so), it definitely got close enough to get a taste.

And the pilots, Mark ‘Forger’ Stucky and CJ Sturckow, are definitely astronauts.

If this flight isn’t the one that makes Virgin Galactic the first to get to space without a national space organization’s help, chances are the next one will be. I’m awaiting more images from the flight and will update this post with them as soon as they’re available.
