Google will unveil the Pixel 4 and other new hardware on October 15

Google will reveal the next Pixel in greater detail at an event happening October 15 in New York, the company confirmed via invites sent to media today. We already know the Pixel 4 will be revealed at this event, because Google has already dropped some official images and feature details for the new Android smartphone, but we’ll probably see more besides given that the invite promises “a few new things Made by Google.”

Here’s what we know so far about the Pixel 4: Everything. Well okay, not everything, but most things. Like it’ll use Google’s cool Soli radar-based gesture recognition technology, for both its updated face unlock and some motion controls. Infinite leaks have shown that it’ll have a body design that includes a single color/texture back, what looks like a three-camera rear cluster (likely a wide-angle, standard and zoom lens) and, on the XL, a 6.23-inch OLED display with a resolution of 3040×1440, plus a 90Hz mode that will make animations and scrolling smoother.


The animation Google sent out with the invites for its 2019 hardware event.

It also has rather large top and bottom bezels, a rarity for smartphones these days, but something that Google apparently felt was better than going with a notch again. Plus, it has that Soli tech and dot projectors for the new face unlock, which might require more space up top.

In terms of other hardware, there are rumors of a new Chrome OS-based Pixelbook plus new Google Home smart speakers. We could also see more of Stadia, Google’s cloud gaming service, which launches in November. Google could also show off additional surprises, including maybe Chromecast updates, or an update to Google Wifi to take advantage of the newly certified Wi-Fi 6 standard.

Basically, there could be a lot of surprises on hand even if the Pixel 4 is more or less a known quantity, and we’ll be there to bring you all the news October 15 as it happens.

How Oculus squeezed sophisticated tracking into pipsqueak hardware

Making the VR experience simple and portable was the main goal of the Oculus Quest, and it definitely accomplishes that. But going from things in the room tracking your headset to your headset tracking things in the room was a complex process. I talked with Facebook CTO Mike Schroepfer (“Schrep”) about the journey from “outside-in” to “inside-out.”

When you move your head and hands around with a VR headset and controllers, some part of the system has to track exactly where those things are at all times. There are two ways this is generally attempted.

One approach is to have sensors in the room you’re in, watching the devices and their embedded LEDs closely — looking from the outside in. The other is to have the sensors on the headset itself, which watches for signals in the room — looking from the inside out.

Both have their merits, but if you want a system to be wireless, your best bet is inside-out, since you don’t have to wirelessly send signals between the headset and the computer doing the actual position tracking, which can add hated latency to the experience.

Facebook and Oculus set a goal a few years back to achieve not just inside-out tracking, but make it as good or better than the wired systems that run on high-end PCs. And it would have to run anywhere, not just in a set scene with boundaries set by beacons or something, and do so within seconds of putting it on. The result is the impressive Quest headset, which succeeded with flying colors at this task (though it’s not much of a leap in others).

What’s impressive about it isn’t just that it can track objects around it and translate that to an accurate 3D position of itself, but that it can do so in real time on a chip with a fraction of the power of an ordinary computer.

“I’m unaware of any system that’s anywhere near this level of performance,” said Schroepfer. “In the early days there were a lot of debates about whether it would even work or not.”


The term for what the headset does is simultaneous localization and mapping, or SLAM. It basically means building a map of your environment in 3D while also figuring out where you are in that map. Naturally, robots have been doing this for some time, but they generally use specialized hardware like lidar and have a more powerful processor at their disposal. All the new headset would have to work with are ordinary cameras.
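
As a concrete (and deliberately toy) picture of that loop, here is a minimal Python sketch, assuming a flat 2D world and a dictionary of landmarks; none of the names or numbers come from Oculus. It just alternates the two halves of SLAM: localize against landmarks already in the map, then add newly observed ones using the updated pose.

```python
import numpy as np

def slam_step(pose, landmark_map, observations):
    """One illustrative SLAM iteration: localize against already-mapped
    landmarks, then extend the map with newly observed ones."""
    # Localization: each already-mapped landmark implies a pose correction
    # (a stand-in for real feature matching and optimization).
    corrections = [landmark_map[i] - obs for i, obs in observations.items()
                   if i in landmark_map]
    if corrections:
        pose = pose + np.mean(corrections, axis=0)
    # Mapping: place newly seen landmarks into the map using the updated pose.
    for i, obs in observations.items():
        if i not in landmark_map:
            landmark_map[i] = pose + obs
    return pose, landmark_map

# Toy usage: the device sees landmark 0 (already mapped) and landmark 1 (new);
# each observation is the landmark's position relative to the device.
pose = np.zeros(2)
landmark_map = {0: np.array([2.0, 0.0])}
pose, landmark_map = slam_step(pose, landmark_map,
                               {0: np.array([1.0, 0.0]), 1: np.array([0.0, 3.0])})
print(pose, landmark_map)
```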

“In a warehouse, I can make sure my lighting is right, I can put fiducials on the wall, which are markers that can help reset things if I get errors — that’s like a dramatic simplification of the problem, you know?” Schroepfer pointed out. “I’m not asking you to put fiducials up on your walls. We don’t make you put QR codes or precisely positioned GPS coordinates around your house.

“It’s never seen your living room before, and it just has to work. And in a relatively constrained computing environment — we’ve got a mobile CPU in this thing. And most of that mobile CPU is going to the content, too. The robot isn’t playing Beat Saber at the same time it’s cruising through the warehouse.”

It’s a difficult problem in multiple dimensions, then, which is why the team has been working on it for years. Ultimately several factors came together. One was simply that mobile chips became powerful enough that something like this is even possible. But Facebook can’t really take credit for that.

More important was the ongoing work in computer vision that Facebook’s AI division has been doing under the eye of Yann LeCun and others there. Machine learning models frontload a lot of the processing necessary for computer vision problems, and the resulting inference engines are lighter weight, if not necessarily well understood. Putting efficient, edge-oriented machine learning to work inched this problem closer to having a possible solution.

Most of the labor, however, went into the complex interactions of the multiple systems that interact in real time to do the SLAM work.

“I wish I could tell you it’s just this really clever formula, but there’s lots of bits to get this to work,” Schroepfer said. “For example, you have an IMU on the system, an inertial measurement unit, and that runs at a very high frequency, maybe 1000 Hz, much higher than the rest of the system [i.e. the sensors, not the processor]. But it has a lot of error. And then we run the tracker and mapper on separate threads. And actually we multi-threaded the mapper, because it’s the most expensive part [i.e. computationally]. Multi-threaded programming is a pain to begin with, but you do it across these three, and then they share data in interesting ways to make it quick.”
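
To make that structure easier to picture, here is a rough, purely illustrative Python sketch of a high-rate IMU feed, a slower mapper and a tracker thread that fuses the two. The queues, rates and the scalar “pose” are all invented for the example; this is not Oculus code.

```python
import queue
import threading
import time

imu_queue, frame_queue = queue.Queue(), queue.Queue()

def imu_loop(stop):
    # High-rate inertial samples: fast (the article cites roughly 1000 Hz) but noisy.
    while not stop.is_set():
        imu_queue.put({"gyro_z": 0.01})
        time.sleep(0.001)

def mapper_loop(stop):
    # Mapper: slower, more expensive map/vision updates that feed the tracker.
    while not stop.is_set():
        frame_queue.put({"vision_pose": 0.0})
        time.sleep(1 / 30)  # camera-rate cadence

def tracker_loop(stop):
    # Tracker: dead-reckon from IMU between frames, correct drift with vision.
    pose = 0.0
    while not stop.is_set():
        while not imu_queue.empty():
            pose += imu_queue.get()["gyro_z"] * 0.001
        try:
            frame = frame_queue.get(timeout=0.01)
            pose = 0.9 * pose + 0.1 * frame["vision_pose"]
        except queue.Empty:
            pass

stop = threading.Event()
threads = [threading.Thread(target=f, args=(stop,), daemon=True)
           for f in (imu_loop, mapper_loop, tracker_loop)]
for t in threads:
    t.start()
time.sleep(0.2)  # let the toy pipeline run briefly
stop.set()
```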

Schroepfer caught himself here; “I’d have to spend like three hours to take you through all the grungy bits.”

Part of the process was also extensive testing, for which they used a commercial motion tracking rig as ground truth. They’d track a user playing with the headset and controllers and, using the OptiTrack setup, measure the precise motions made.


Testing with the OptiTrack system.

To see how the algorithms and sensing system performed, they’d basically play back the data from that session to a simulated version of it: video of what the camera saw, data from the IMU, and any other relevant metrics. If the simulation was close to the ground truth they’d collected externally, good. If it wasn’t, the machine learning system would adjust its parameters and they’d run the simulation again. Over time the smaller, more efficient system drew closer and closer to producing the same tracking data the OptiTrack rig had recorded.
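
That replay-and-compare loop can be caricatured in a few lines. In the sketch below, which is purely illustrative, a single “gain” parameter stands in for the whole learned model, and none of the function names come from Oculus: it replays a recorded session through a simulated tracker, scores the result against externally captured ground truth, and keeps whichever parameters score best.

```python
import random

def run_simulation(recorded_session, params):
    # Replay recorded sensor data through the tracking stack with these
    # parameters and return the estimated trajectory (here: trivially scaled).
    return [params["gain"] * sample for sample in recorded_session["sensors"]]

def tracking_error(estimate, ground_truth):
    # Mean absolute deviation from the externally captured (motion-rig) poses.
    return sum(abs(e - g) for e, g in zip(estimate, ground_truth)) / len(ground_truth)

def tune(recorded_session, ground_truth, iterations=200):
    # Replay-and-adjust loop: keep whichever parameters best reproduce ground truth.
    best_params, best_err = {"gain": 1.0}, float("inf")
    for _ in range(iterations):
        candidate = {"gain": best_params["gain"] + random.uniform(-0.1, 0.1)}
        err = tracking_error(run_simulation(recorded_session, candidate), ground_truth)
        if err < best_err:
            best_params, best_err = candidate, err
    return best_params, best_err

session = {"sensors": [0.0, 1.0, 2.0, 3.0]}   # stand-in for camera + IMU logs
truth = [0.0, 0.8, 1.6, 2.4]                  # what the motion-capture rig recorded
print(tune(session, truth))
```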

Ultimately it needed to be as good or better than the standard Rift headset. Years after the original, no one would buy a headset that was a step down in any way, no matter how much cheaper it was.

“It’s one thing to say, well my error rate compared to ground truth is whatever, but how does it actually manifest in terms of the whole experience?” said Schroepfer. “As we got towards the end of development, we actually had a couple passionate Beat Saber players on the team, and they would play on the Rift and on the Quest. And the goal was, the same person should be able to get the same high score or better. That was a good way to reset our micro-metrics and say, well this is what we actually need to achieve the end experience that people want.”


It doesn’t hurt that it’s cheaper, too. Lidar is expensive enough that even auto manufacturers are careful how they implement it, and time-of-flight or structured-light approaches like Kinect also bring the cost up. Yet they massively simplify the problem, being 3D sensing tools to begin with.

“What we said was, can we get just as good without that? Because it will dramatically reduce the long term cost of this product,” he said. “When you’re talking to the computer vision team here, they’re pretty bullish on cameras with really powerful algorithms behind them being the solution to many problems. So our hope is that for the long run, for most consumer applications, it’s going to all be inside-out tracking.”

I pointed out that VR is not considered by all to be a healthy industry, and that technological solutions may not do much to solve a more multi-layered problem.

Schroepfer replied that there are basically three problems facing VR adoption: cost, friction, and content. Cost is self-explanatory, but it would be wrong to say it’s gotten a lot cheaper over the years. PlayStation VR established a low-cost entry early on, but “real” VR has remained expensive. Friction is how difficult it is to get from “open the box” to “play a game,” and it has historically been a sticking point for VR. Oculus Quest addresses both these issues quite well, being priced at $400 and, as our review noted, very easy to just pick up and use. All that computer vision work wasn’t for nothing.

Content is still thin on the ground, though. There have been some hits, like Superhot and Beat Saber, but nothing to really draw crowds to the platform (if it can be called that).

“What we’re seeing is, as we get these headsets out and in developers’ hands, people come up with all sorts of creative ideas. I think we’re in the early stages — these platforms take some time to marinate,” Schroepfer admitted. “I think everyone should be patient, it’s going to take a while. But this is the way we’re approaching it, we’re just going to keep plugging away, building better content, better experiences, better headsets as fast as we can.”


Calling all hardware startups! Apply to Hardware Battlefield @ TC Shenzhen

Got hardware? Well then, listen up, because our search continues for boundary-pushing, early-stage hardware startups to join us in Shenzhen, China for an epic opportunity: launch your startup on a global stage and compete in Hardware Battlefield at TC Shenzhen on November 11-12.

Apply here to compete in TC Hardware Battlefield 2019. Why? It’s your chance to demo your product to the top investors and technologists in the world. Hardware Battlefield, cousin to Startup Battlefield, focuses exclusively on innovative hardware because, let’s face it, it’s the backbone of technology. From enterprise solutions to agtech advancements, medical devices to consumer products — hardware startups are in the international spotlight.

If you make the cut, you’ll compete against 15 of the world’s most innovative hardware makers for bragging rights, plenty of investor love, media exposure and $25,000 in equity-free cash. Just participating in a Battlefield can change the whole trajectory of your business in the best way possible.

We chose to bring our fifth Hardware Battlefield to Shenzhen because of its outstanding track record of supporting hardware startups. The city achieves this through a combination of accelerators, rapid prototyping and world-class manufacturing. What’s more, TC Hardware Battlefield 2019 takes place as part of the larger TechCrunch Shenzhen that runs November 9-12.

Creativity and innovation know no boundaries, and that’s why we’re opening this competition to any early-stage hardware startup from any country. While we’ve seen amazing hardware in previous Battlefields — like robotic arms, food testing devices, malaria diagnostic tools, smart socks for diabetics and e-motorcycles — we can’t wait to see the next generation of hardware, so bring it on!

Meet the minimum requirements listed below, and we’ll consider your startup:

Here’s how Hardware Battlefield works. TechCrunch editors vet every qualified application and pick 15 startups to compete. Those startups receive six rigorous weeks of free coaching. Forget stage fright. You’ll be prepped and ready to step into the spotlight.

Teams have six minutes to pitch and demo their products, followed immediately by an in-depth Q&A with the judges. If you make it to the final round, you’ll repeat the process in front of a new set of judges.

The judges will name one outstanding startup the Hardware Battlefield champion. Hoist the Battlefield Cup, claim those bragging rights and the $25,000. This nerve-wracking thrill ride takes place in front of a live audience, and we capture the entire event on video and post it to our global audience on TechCrunch.

Hardware Battlefield at TC Shenzhen takes place on November 11-12. Don’t hide your hardware or miss your chance to show us — and the entire tech world — your startup magic. Apply to compete in TC Hardware Battlefield 2019, and join us in Shenzhen!

Is your company interested in sponsoring or exhibiting at Hardware Battlefield at TC Shenzhen? Contact our sponsorship sales team by filling out this form.


How Axis went from concept to shipping its Gear smart blinds hardware

Axis is selling its first product, the Axis Gear, on Amazon and direct from its own website, but that’s a relatively recent development for the four-year-old company. The idea for Gear, a $249 ($179 as of this writing, thanks to a sale) aftermarket conversion gadget that turns almost any cord-pull blinds into automated smart blinds, actually came to co-founder and CEO Trung Pham in 2014, but development didn’t begin until early the following year, and the maxim that ‘hardware is hard’ once again proved more than valid.

Pham, whose background is actually in business but who always had a penchant for tech and gadgets, originally set out to scratch his own itch and arrived at the idea for his company as a result. He was in the market for smart blinds when he moved into his first condo in Toronto, but after the budget all got eaten up on essentials like a couch, a bed and a TV, there wasn’t much left in the bank for luxuries like smart shades – especially once he found out how much they cost.

“Even though I was a techie, and I wanted automated shades, I couldn’t afford it,” Pham told me in an interview. “I went to the designer and got quoted for some really nice Hunter Douglas. And they quoted me just over $1,000 bucks a window with the motorization option. So I opted just for manual shades. A couple of months later, when it’s really hot and sunny, I’m just really noticing the heat so I go back to the designer and ask him ‘Hey can I actually get my shades motorized now, I have a little bit more money, I just want to do my living room.’ And that’s when I learned that once you have your shades installed, you actually can’t motorize them, you have to replace them with brand new shades.”

With his finance background, Pham saw an opportunity in the market that was ignored by the big legacy players, and potentially relatively easy to address with tech that wasn’t all that difficult to develop, including a relatively simple motor and the kind of wireless connectivity that’s much more readily available thanks to the smartphone component supply chain. And the market demand was there, Pham says – especially with younger homeowners spending more on their property purchases (or just renting) and having less to spare on expensive upgrades like motorized shades.

The Axis solution is relatively affordable (though its regular asking price of $249 per unit can add up depending on how many windows you’re looking to retrofit) and also doesn’t require you to replace your entire existing shades or blinds, so long as you have the type that the Gear is compatible with (which includes quite a lot of commonly available shades). There are a couple of power options, including an AC adapter for a regular outlet, or a solar bar with back-up from AA batteries in case there’s no outlet handy.

Pham explained how in early investor meetings, he would cite Dyson as an inspiration, because that company took something that was standard and considered central to its very staid industry and just removed it altogether – specifically referring to its bagless design. He sees Axis as taking a similar approach in the smart blind market, whose incumbents have too much to gain from maintaining the status quo to tackle the market the way Axis does. Plus, Pham notes, Axis has six patents filed and three granted for its specific technical approach.

“We want to own the idea of smart shades to the end consumer,” he told me. “And that’s where the focus really is. It’s a big opportunity, because you’re not just buying one doorbell or one thermostat – you’re buying multiple units. We have customers that buy one or two right away, come back and buy more, and we have customers that buy 20 right away. So our ability to sell volume to each household is very beneficial for us as a business.”

Which isn’t to say Axis isn’t interested in larger-scale commercial deployment – Pham says that there are “a lot of [commercial] players and hotels testing it,” and notes that they also “did a project in the U.S. with one of the largest developers in the country.” So far, however, the company is laser-focused on its consumer product and looking at commercial opportunities as they come inbound, with plans in future to tackle the harder work of building a proper commercial sales team. But commercial deployment could afford Axis a lot of future opportunity, especially because its product can help building managers comply with measures like the Americans with Disabilities Act by outfitting properties with the requisite number of units featuring motorized shades.

To date, Axis has been funded entirely via angel investors, along with family and friends, and through a crowdfunding project on Indiegogo, which secured its first orders. Pham says revenue and sales, along with year-over-year growth, have all been strong so far, and that the company has managed to ship “quite a few units so far,” though he declined to share specifics. The startup is about to close a small bridge round and will then look to pin down its Series A funding, as it expands its product line – with broader compatibility across window covering styles as the top priority.

 
