Amazon reportedly ramps development on Alexa-powered home robot on wheels

Bloomberg reported last April that Amazon was working on a home robot codenamed ‘Vesta’ (after the Roman goddess of the hearth and home), and now the publication says development on the project continues. The report also includes new details about the robot itself, including that it will indeed support Alexa and will have wheels to help it move around. My terrible artist’s rendering of what that could look like is above.

The plan for Vesta was apparently to release it this year, but it’s not quite ready for mass production, according to Bloomberg’s sources. And while, as with any pre-launch project, it could end up mothballed and never see the light of day, the company is said to be putting more engineering and development resources into the team working on its release.

Current prototypes of the robot are said to be about waist-high, per the report, and navigate the world using sensor-fed computer vision. It’ll come when you call thanks to the Alexa integration, per an internal demo described by Bloomberg, and should ostensibly offer the same kind of functionality you’d get with an Echo device, including calling, timers and music playback.

For other clues as to what Vesta could look like, if and when it ever launches, a good model might be Kuri, the robot developed by Bosch internal startup Mayfield Robotics which was shuttered a year ago and never made it to market. Kuri could also record video and take photos, play games and generally interact with the household.

Meanwhile, Amazon is also apparently readying a Sonos-competing high-quality Echo speaker to debut next year.


Google launches Jetpack Compose, an open-source, Kotlin-based UI development toolkit

Google today announced the first preview of Jetpack Compose, a new open-source UI toolkit for Kotlin developers who want to use a reactive programming model similar to what React Native and Vue.js offer.

Jetpack Compose is an unbundled toolkit that is part of Google’s overall Android Jetpack set of software components for Android developers, but there is no requirement to use any other Jetpack components. With Jetpack Compose, Google is essentially bringing the UI-as-code philosophy to Android development. Compose’s UI components are fully declarative and allow developers to create layouts by simply describing what the UI should look like in their code. The Compose framework will handle all the gory details of UI optimization for the developer.
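To get a sense of that declarative style, here is a minimal sketch of what a Compose UI function can look like. It assumes the androidx.compose package names used in later releases rather than the very first preview artifacts, and GreetingCard is just an illustrative name, not an API from the announcement.

```kotlin
import androidx.compose.material.MaterialTheme
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

// Describes what the UI should show for a given input; Compose takes care
// of rendering it and updating it when the input changes.
@Composable
fun GreetingCard(userName: String) {
    MaterialTheme {
        Text(text = "Hello, $userName")
    }
}
```

Wrapping the content in MaterialTheme is also where the out-of-the-box Material Design support mentioned below comes into play.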

Developers can mix and match Jetpack Compose views with those based on Android’s native APIs. Out of the box, Jetpack Compose also natively supports Google’s Material Design.

As part of today’s overall Jetpack update, Google is also launching a number of new Jetpack components and features. These range from support for building apps for Android for Cars and Android Auto to an Enterprise library that makes it easier to integrate apps with Enterprise Mobility Management solutions, as well as built-in benchmarking tools.

The standout feature, though, is probably CameraX, a new library for building camera-centric features and applications that gives developers access to essentially the same capabilities as the native Android camera app.
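For illustration, here is a rough sketch of what wiring up a camera preview with CameraX can look like. It follows the lifecycle-aware API shape of the later stable releases rather than the initial alpha, and the startCameraPreview helper is just a name chosen for this example.

```kotlin
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import androidx.fragment.app.Fragment

// Binds a back-camera preview to the fragment's lifecycle; CameraX starts
// and stops the camera automatically as the lifecycle changes.
fun Fragment.startCameraPreview(previewView: PreviewView) {
    val providerFuture = ProcessCameraProvider.getInstance(requireContext())
    providerFuture.addListener({
        val cameraProvider = providerFuture.get()
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            viewLifecycleOwner,
            CameraSelector.DEFAULT_BACK_CAMERA,
            preview
        )
    }, ContextCompat.getMainExecutor(requireContext()))
}
```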



Anaxi brings more visibility to the development process

Anaxi’s mission is to bring more transparency to the software development process. The tool, which is now live for iOS, with web and Android versions planned for the near future, connects to GitHub to give you actionable insights into the state of your projects and to help you manage your projects and issues. Support for Atlassian’s Jira is also in the works.

The new company was founded by former Apple engineering manager and Docker EVP of product development Marc Verstaen and former CodinGame CEO John Lafleur. Unsurprisingly, this new tool is all about fixing the issues these two have seen in their daily lives as developers.

“I’ve been doing software for 40 years,” Verstaen told me. “And every time it’s the same. You start with a small team and it’s fine. Then you grow and you don’t know what’s going on. It’s a black box.” While the rest of the business world now focuses on data and analytics, software development never quite reached that point. Verstaen argues that this was acceptable until 10 or 15 years ago because only software companies were doing software. But now that every company is becoming a software company, that’s not acceptable anymore.

Using Anaxi, you can easily see all issue reports and pull requests from your GitHub repositories, both public and private. But you also get visual status indicators that tell you when a project has too many blockers, for example, as well as the ability to define your own labels. You can also set due dates for issues.

One interesting aspect of Anaxi is that it doesn’t store all of this information on your phone or on a proprietary server. Instead, it only caches as little information as necessary (including your handles) and then pulls the rest of the information from GitHub as needed. That cache is encrypted on the phone, but for the most part, Anaxi simply relies on the GitHub API to pull in data when needed. There’s a bit of a trade-off here in terms of speed, but Verstaen noted that this also means you always get the most recent data and that GitHub’s API is quite fast and easy to work with.
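As a hypothetical illustration of that fetch-on-demand pattern (not Anaxi’s actual code), the sketch below pulls a repository’s open issues straight from the GitHub REST API each time they are needed, rather than persisting them on a server. The fetchOpenIssues name and the token handling are placeholders for this example.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Fetches the open issues for a repository directly from GitHub's REST API.
// Nothing is stored server-side; the caller can keep a small encrypted cache
// locally or simply re-fetch whenever fresh data is needed.
fun fetchOpenIssues(owner: String, repo: String, token: String): String {
    val url = URL("https://api.github.com/repos/$owner/$repo/issues?state=open")
    val connection = url.openConnection() as HttpURLConnection
    connection.setRequestProperty("Authorization", "token $token")
    connection.setRequestProperty("Accept", "application/vnd.github.v3+json")
    return connection.inputStream.bufferedReader().use { it.readText() } // raw JSON payload
}
```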

The service is currently available for free. The company plans to introduce pricing plans in the future, with prices based on the number of developers that use the product inside a company.



Google’s Android development studio gets a new update with visual navigation editing

Android’s development studio is getting a new update as Google rolls out Android Studio 3.2 Canary, adding new tools for visual navigation editing and Jetpack.

The new release includes build tools for the new Android App Bundle format, Snapshots, a new optimizer for smaller app code and a new way to measure an app’s impact on battery life. The Snapshots tool is baked into the Android Emulator and is geared toward getting the emulator up and running in two seconds. All of this is aimed at making Android app development easier as the company looks to woo developers, especially newer ones, into an environment that’s built around creating Android apps.

The visual navigation editing looks a bit like a flow chart, where users can move screens around and connect them. You can add new screens and position them in your flow, and under the covers the tool will manage the whole navigation stack for you. Google has increasingly worked to abstract away a lot of the complex elements of building applications, whether that’s making its machine learning framework TensorFlow more palatable by letting developers create tools using their preferred languages or trying to make it easier to build an app quickly. Visual navigation is one way to further abstract out the complex process of wiring up the different activities within an app.
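To make that concrete, here is a small sketch of how a connection drawn in the visual editor is typically triggered from code with the Jetpack Navigation component. The fragment class and the action ID are placeholders for whatever your own navigation graph defines.

```kotlin
import androidx.fragment.app.Fragment
import androidx.navigation.fragment.findNavController

class HomeFragment : Fragment() {
    // Each arrow drawn between screens in the editor becomes an action in the
    // navigation graph; navigating is just a matter of triggering its ID.
    fun openDetails() {
        findNavController().navigate(R.id.action_home_to_details)
    }
}
```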

As competition between Apple and Google continues, it’s important for Google to ensure that apps keep launching on Google Play in order to continue to drive Android device adoption. The sped-up emulator, in particular, may solve a pain point for developers who want to rapidly test parts of their apps and see how they might operate in the wild without having to wait for the app to load in an emulator or on a test device.



Nuance ends development of the Swype keyboard apps

The party is over for third-party keyboards. But hey, it was fun while it lasted. Nuance, the company that acquired veteran swipe-to-type keyboard maker Swype all the way back in 2011, shelling out a cool $100 million, has ended development of its Swype + Dragon dictation apps for Android and iOS.

The news was reported earlier by the XDA Developers blog, which spotted a Reddit post by a user and says it got confirmation from Nuance that development of both the Android and iOS apps has been discontinued. We’ve also reached out to the company with questions. A search for the Swype app on iOS now returns suggestions for rival keyboard apps.

As XDA points out, Nuance has been concentrating on its B2B business, using its speech recognition tech to power speech-to-text utilities, such as a dedicated version of its dictation product targeted at healthcare workers.

The B2B space also provides the business model that’s so often been lacking for keyboard players in the consumer space, even those with hundreds of millions of users (frankly, the typing was on the wall when major player SwiftKey took the exit route to Microsoft back in 2016).

The wider context here is that as speech recognition technologies have got better — improvements in turn made possible thanks to language models trained with data sucked up from keyboard inputs — voice interfaces can start to supplant keyboard-based input methods in more areas.

In the consumer space, Google especially has doubled down on its own Gboard keyboard (which includes a dictation feature), while Apple’s native iOS keyboard is less fully featured but does include built-in next-word prediction. So with mobile’s platform giants wading in, there’s added survival pressure on third-party keyboard app makers.

Nuance targeting its efforts at a narrow problem like patient documentation also makes sense because of the specialist nomenclature and routine procedures involved, which naturally provide a better framework for voice input accuracy than more unpredictable and/or creative environments where dictation inaccuracies might more easily creep in.

So while Siri might still suck at understanding what you’re asking, a dedicated speech-to-text engine that’s been trained on medical datasets and processes can provide compelling utility for clinicians needing to quickly capture patient notes, potentially even reducing the inaccuracies that can creep in via old handwritten ways of doing things.

Connectivity getting embedded into more and more types of devices, including things that lack screens like (many) smart speakers, also means voice interfaces are naturally getting more uplift. And Nuance has been building dictation products for cars too, for example.

Still, it’s not quite the end of the road for third-party consumer keyboard plays. VC-backed freemium keyboard app Grammarly, which last year raised a whopping $110 million and promises to improve your writing rather than just pick up typos (while keylogging everything you type to do so), has been making a lot of noise and plastering its ads all over the Internet to drive consumer uptake. (My App Store search for Swype returned an ad for Grammarly as the top result, for example.)

And while the startup is taking revenue via a set of pricing plans for a more fully featured version of its service, its privacy policy also says it uses typing data to improve its underlying algorithms and language models. So it remains to be seen what its data-mining keyboard business might evolve into (or exit to) in time.

Another consumer player, the Fleksy keyboard, also got revived last year — with a new developer team behind it, whose vision is for the keyboard to be a services platform. The team’s stated mission is to keep an independent and pro-privacy keyboard dream alive. So don’t stop typing just yet.

