Keepon, carry on

I’m back in the South Bay this week, banging away at an introduction in the hotel lobby a few minutes before our crew heads to Shoreline for Google I/O. There’s a guy behind me in a business suit and sockless loafers, taking a loud business meeting on his AirPods. It’s good to be home.

I’ve got a handful of meetings lined up with startups and VCs and then a quiet, robot-free day and a half in Santa Cruz for my birthday. Knowing I was going to be focused on this developer event all day, I made sure to line some stuff up for the week. Turns out I lined up too much stuff – which is good news for all of you.

In addition to the usual roundup and job openings, I’ve got two great interviews for you.

Two weeks back, I posted about a bit of digging around I was doing in the old MIT pages – specifically around the Leg Lab. It included this sentence, “Also, just scrolling through that list of students and faculty: Gill Pratt, Jerry Pratt, Joanna Bryson, Hugh Herr, Jonathan Hurst, among others. Boy howdy.”

After that edition of Actuator dropped, Bryson noted on Twitter,

Boy howdy?

I never worked on the robots, but I liked the lab culture / vibe & meetings.  Marc, Gill & Hugh were all welcoming & supportive (I never got time to visit Hugh’s version though). My own supervisor (Lynn Stein) didn’t really do labs or teams.

I discovered subsequent to publishing that I may well be the last person on Earth saying, “Boy Howdy” who has never served as an editor at Creem Magazine (call me). A day or two before, a gen-Z colleague was also entirely baffled by the phrase. It’s one in a growing list of archaic slang terms that have slowly insinuated themselves into my vernacular, and boy howdy, am I going to keep using it.

As far as the second (and substantially more relevant) bit of the tweet, Bryson might be the one person on my initial list who I had never actually interacted with at any point. Naturally, I asked if she’d be interested in chatting. As she noted in her tweet, she didn’t work directly with the robots themselves, but her work has plenty of overlap with that world.

Bryson currently serves as the Professor of Ethics and Technology at the Hertie School in Berlin. Prior to that, she taught at the University of Bath and served as a research fellow at Oxford and the University of Nottingham. Much of her work focuses on artificial and natural intelligence, including ethics and governance in AI.

Given all the talk around generative AI, the open letter and Geoffrey Hinton’s recent exit from Google, you couldn’t ask for better timing. Below is an excerpt from the conversation we recently had during Bryson’s office hours.

Q&A with Joanna Bryson

Image Credits: Hertie School

You must be busy with all of this generative AI news bubbling up.

I think generative AI is only part of why I’ve been especially busy. I was super, super busy from 2015 to 2020. That was when everybody was writing their policy. I also was working part-time because my partner had a job in New Jersey. That was a long way from Bath. So, I cut back to half time and was paid 30%. Because I was available, and people were like, “we need to figure out our policy,” I was getting flown everywhere. I was hardly ever at home. It seems like it’s been more busy, but I don’t know how much of that is because of [generative AI].

Part of the reason I’m going into this much detail is that for a lot of people, this is on their radar for the first time for some reason. They’re really wrapped up in the language thing. Don’t forget, in 2017, I did a language thing and people were freaked out by that too — was there racism and sexism in the word embeddings? What people are calling “generative AI” – the ChatGPT stuff – the language part of that is not that different. All the technology is not all that different. It’s about looking at a lot of exemplars and then figuring out, given a start, what things are most likely coming next. That’s very related to the word embeddings, which is for one word, but those are basically the puzzle pieces that are now getting put together by other programs.
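(A quick editorial aside for the curious: here’s a toy sketch of what Bryson is describing. All the numbers are invented, nothing comes from a real model; it just illustrates how word-embedding similarity becomes the raw material for guessing what comes next.)

```python
# Hypothetical illustration: word embeddings as "puzzle pieces" for
# next-word prediction. Vectors are made up; real models learn them.
import numpy as np

emb = {
    "dance":  np.array([0.1, 0.8, 0.3]),
    "groove": np.array([0.2, 0.7, 0.4]),
    "tax":    np.array([0.9, 0.0, 0.1]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words used in similar contexts end up with similar vectors...
print(cosine(emb["dance"], emb["groove"]))  # high
print(cosine(emb["dance"], emb["tax"]))     # low

# ...and a generative model turns scores over candidate next words
# (here, just similarity to a context vector) into probabilities.
context = emb["groove"]  # stand-in for the "given a start" part
words = list(emb)
scores = np.array([cosine(context, emb[w]) for w in words])
probs = np.exp(scores) / np.exp(scores).sum()  # softmax
for w, p in zip(words, probs):
    print(f"{w}: {p:.2f}")
```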

I write about tech for a living, so I was aware of a lot of the ethical conversations that were happening early. But I don’t think most people were. That’s a big difference. All of a sudden your aunt is calling you to ask about AI.

I’ve been doing this since the 80s, and every so often, something would happen. I remember when the web happened, and also when it won chess, when it won Go. Every so often that happens. When you’re in those moments, it’s like, “oh my gosh, now people finally get AI.” We’ve known about it since the 30s, but now we keep having these moments. Everyone was like, “oh my god, nobody could have anticipated this progress in Go.” Miles Brundage showed during his PhD that it’s actually linear. We could have predicted within the month when it was going to pass human competence.

Is there any sense in which this hype bubble feels different from previous ones?

Hertie School was one of the first places to come out with policy around generative AI. At the beginning of term, I said this new technology is going to come in, in the middle of the semester. We’ll get through it, but it’s going to be different at the end than it was at the beginning. In a way, it’s been more invisible than that. I think probably the students are using it extensively, but it isn’t as disruptive as people think, so far. […] I think part of the issue with technological change is everyone thinks it leads to unemployment, and it doesn’t.

The people who have been made most unemployed are everybody in journalism — and not by replacing them but rather by stealing their revenue source, which was advertising. It’s a little flippant, but actually there is this whole thing about telephone operators. They were replaced by simple switches. That was the period when it switched to there being more women in college than men, and it was because those were mostly women’s jobs. We got the more menial jobs that were being automated. […]

This is James Bessen’s research. Basically what happens is you bring in a technology that makes it easier to do some task, and then you wind up hiring more people for that task, because they’re each more valuable. Bank tellers were one of the early examples that people talked about, but this has been true in weaving and everything else. Then you get this increase in hiring, and then you finally satiate. At some point, there’s enough cloth, there’s enough financial services, and then any further automation brings a gradual decline in the number of people employed in that sector. But it’s not an overnight thing like people think.

You mention these conversations you were having years ago around setting guidelines. Were the ethical concerns and challenges the same as now? Or have they shifted over time?

There are two ways to answer that question: what were the real ethical concerns they knew they had? If a government is flying you out, what are they concerned about? Maybe losing economic status, maybe losing domestic face, maybe losing security. Although, a lot of the time people think of AI as the goose that laid the golden egg. They think cyber and crypto are the security, when they’re totally interdependent. They’re not the same thing, but they rely on each other.

It drove me nuts when people said, “Oh, we have to rewrite the AI because nobody had been thinking about this.” But that’s exactly how I conceived of AI for decades, when I was giving all of these people advice. I get that bias matters, but it was like if you only talked about water and didn’t worry about electricity and food. Yes, you need water, but you need electricity and food, too. People decided, “Ethics is important and what is ethics? It’s bias.” Bias is a subset of it.

What’s the electricity and what’s the food here?

One is employment and another is security. A lot of people are seeing more how their jobs are going to change this time, and they’re afraid. They shouldn’t be afraid of that so much because of the AI — which is probably going to make our jobs more interesting — but because of climate change and the kinds of economic threats we’re under. This stuff will be used as an excuse. When do people get laid off? They get laid off when the economy is bad, and technology is just an excuse there. Climate change is the ultimate challenge. The digital governance crisis is a thing, and we’re still worrying about if democracy is sustainable in a context where people have so much influence from other countries. We still have those questions, but I feel like we’re getting on top of them. We have to get on top of them as soon as possible. I think that AI and a well-governed digital ecosystem help us solve problems faster.

I’m sure you know Geoffrey Hinton. Are you sympathetic with his recent decision to quit Google?

I don’t want to criticize Geoff Hinton. He’s a friend and an absolute genius. I don’t think all the reasons for his move are public. I don’t think it’s entirely about policy, why he would make this decision. But at the same time, I really appreciate that he realizes that now is a good time to try to help people. There are a bunch of people in machine learning who are super geniuses. The best of the best are going into that. I was just talking to this very smart colleague, and we were saying that 2012 paper by Hinton et al. was the biggest deal in deep learning. He’s just a super genius. But it doesn’t matter how smart you are — we’re not going to get omniscience.

It’s about who has done the hard work and understood economic consequences. Hinton needs to sit down as I did. I went to a policy school and attended all of the seminars. It was like, “Oh, it’s really nice, the new professor keeps showing up,” but I had to learn. You have to take the time. You don’t just walk into a field and dismiss everything about it. Physicists used to do that, and now machine learning people are doing that. They add noise that may add some insight, but there are centuries of work in political science and how to govern. There’s a lot of data from the last 50 years that these guys could be looking at, instead of just guessing.

There are a lot of people who are sending up alarms now.

So, I’m very suspicious about that too. On the one hand, a bunch of us noticed there were weird things. I got into AI ethics as a PhD student at MIT, just because people walked up to me and said things that sounded completely crazy to me. I was working on a robot that didn’t work at all, and they’d say, “It would be unethical to unplug that.” There were a lot of working robots around, but they didn’t look like a person. The one that looked like a person, they thought they had an obligation to.

I asked them why, and they said, “We learned from feminism that the most unlikely things can turn out to be people.” This is motors and wires. I had multiple people say that. It’s hard to derail me. I was a programmer trying not to fail out of MIT. But after it happened enough times, I thought, this is really weird. I’d better write a paper about it, because if I think it’s weird and I’m at MIT, it must be weird. This was something not enough people were talking about, this over-identification with AI. There’s something weird going on. I had a few papers I’d put out every four years, and finally, after the first two didn’t get read, the third one I called “Robots Should Be Slaves,” and then people read it. Now all of a sudden I was an AI expert.

There was that recent open letter about AI. If pausing advancements won’t work, is there anything short-term that can be done?

There are two fundamental things. One is, we need to get back to adequately investing in government, so that the government can afford expertise. I grew up in the ’60s and ’70s, when the tax rate was 50% and people didn’t have to lock their doors. Most people say the ’90s [were] okay, so going back to Clinton-level tax rates, which we were freaked out by at the time. Given how much more efficient we are, we can probably get by with that. People have to pay their taxes and cooperate with the government. Because this was one of the last places where America was globally dominant, we’ve allowed it to be under-regulated. Regulation is about coordination. These guys are realizing you need to coordinate, and they’re like “stop everything, we need to coordinate.” There are a lot of people who know how to coordinate. There are basic things like product law. If we just set up enough enforcement in the digital sector, then we would be okay. The AI Act in the EU is like the most boring thing ever, but it’s so important, because they’re saying we noticed that digital products are products, and enforcement is particularly important when you have a system that’s automatically making decisions that affect human lives.

Image Credits: BeatBots LLC / Hideki Kozima / Marek Michalowski

Keepon groovin’

It’s an entirely unremarkable video in a number of ways. A small, yellow robot – two tennis balls formed into an unfinished snowman. Its face is boiled down to near abstraction: two widely spaced eyes stretched above a black button nose. The background is a dead gray, the kind they use to upholster cubicles.

“I Turn My Camera On” is the third track on Spoon’s fifth album, Gimme Fiction, released two years prior – practically 10 months to the day after YouTube went live. It’s the Austin-based indie band’s stripped-down take on Prince-style funk – a perfect little number that could get anyone dancing, be it human or robot. For just over three and a half minutes, Keepon grooves with a hypnotic, rhythmic bounce.

It was the perfect video for the 2007 internet and for the shiny new video site, roughly half a year after its $1.65 billion acquisition by Google. The original upload is still live, having racked up 3.6 million views over its lifetime.

A significantly higher-budget follow-up commissioned by Wired did quite well the following year, with 2.1 million views under its belt. This time, Keepon’s dance moves enticed passersby on the streets of Tokyo, with Spoon members making silent cameos throughout.

In 2013, the robot’s makers released a $40 commercial version of the research robot under the name My Keepon. A year later, the internet trail runs cold. Beatbots, the company behind the consumer model, posted a few more robots and then went silent. I know all of this because I found myself down this very specific rabbit hole the other week. I will tell you that, as of this writing, you can still pick up a secondhand model for cheap on eBay – something I’ve been extremely tempted to do for a few weeks now.

I had spoken with cofounder Marek Michalowski a handful of times during my PCMag and Engadget days, but we hadn’t talked since the Keepon salad days. Surely, he must still be doing interesting things in robotics. The short answer is: yes. Coincidentally, in light of last week’s Google-heavy edition of Actuator, it turns out he’s currently working as a product manager at Alphabet X.

I didn’t realize it when I was writing last week’s issue, but his story turns out to be a great little microcosm of what’s been happening under the Alphabet umbrella since the whole robot startup shopping spree didn’t go as planned. Here’s the whole Keepon arc in his words.

Q&A with Marek Michalowski

Let’s start with Keepon’s origin story.

I was working on my PhD in human-robot interaction at Carnegie Mellon. I was interested in this idea of rhythmic synchrony and social interaction, something that social psychologists were discovering 50 years ago in video-recorded interactions of people in normal situations. They were drawing out these charts of every little micro movement and change in direction and accent in the speech, and finding that there are these rhythms that are in sync within a particular person — but then also between people. The frequency of nodding and gesturing in a smooth interaction ends up being something like a dance. The other side of it is that when those rhythms are unhealthy or out of sync, that might be indicative of some problem in the interaction.

You were looking at how we can use robots to study social interaction, or how robots can interact with people in a more natural way?

Psychologists have observed something happening whose mechanisms we don’t really understand. Robots can be a tool for us to experiment and better understand those social rhythmic phenomena. And in the engineering problem of building better interactive robots, those kinds of rhythmic capabilities might be an important part. There’s both the science question that could be answered with the help of robots, and the engineering problem of making better robots that would benefit from an answer to that question.
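(Editorial aside: one standard way to quantify the rhythmic synchrony Michalowski describes is to cross-correlate two movement signals and look for the lag at which they line up best. Here’s a minimal sketch with invented data; it is not the lab’s actual pipeline, just the general technique.)

```python
# Toy illustration of rhythmic synchrony: find the lag at which two
# "movement" signals (e.g., head-nod amplitude over time) best align.
import numpy as np

rng = np.random.default_rng(0)
fs = 50                      # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)

# Person A nods at ~1 Hz; person B follows the same rhythm ~150 ms later.
a = np.sin(2 * np.pi * 1.0 * t) + 0.3 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 1.0 * (t - 0.15)) + 0.3 * rng.standard_normal(t.size)

# Correlation of a shifted by k samples against b, for a range of lags.
lags = np.arange(-50, 51)
corr = [np.corrcoef(a[max(0, k):t.size + min(0, k)],
                    b[max(0, -k):t.size - max(0, k)])[0, 1] for k in lags]

k_star = lags[int(np.argmax(corr))]
print(f"B trails A by ~{-k_star / fs:.2f} s")  # expect roughly 0.15 s
```

In-sync partners would show a strong correlation peak at a short, stable lag; an interaction that is “out of sync” shows a weak or drifting peak.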

The more you know about the science, the more you’re able to put that into a robot.

Into the engineering. Basically, that was the high-level interest. I was trying to figure out what’s a good robotic medium for testing that. During that PhD, I was doing sponsored research trips to Japan, and I met this gentleman named Hideki Kozima, who had been a former colleague of one of my mentors, Brian Scassellati. They had been at MIT together working on the Cog and Kismet projects. I visited Dr. Kozima, who had just recently designed and built the first versions of Keepon. He had originally been designing humanoid robots, and also had psychology research interests that he was pursuing through those robots. He had been setting up some interactions between this humanoid and children, and he noticed this was not a good foundation for naturalistic, comfortable social interactions. They’re focusing on the moving parts and the complexity.

Keepon was the first robot I recall seeing with potential applications for autism treatment. I’ve been reading a bit on ASD recently, and one of the indicators specialists look for is a lack of sustained eye contact and an inability to maintain the rhythm of conversation. With the other robot, the issue was that the kids were focused on the visible moving parts, instead of the eyes.

That’s right. With Keepon, the whole mechanism is hidden away, and it’s designed to really draw attention to those eyes, which are cameras. The nose is a microphone, and the use case here was for a researcher or therapist to be able to essentially puppeteer this robot from a distance, in the next room. Over the long term, they could observe how different children are engaging with this toy, and how those relationships develop over time.

There were two Spoon videos. The first was “I Turn My Camera On.”

I sent it to some friends, and they were like, “This is hilarious. You should put it on YouTube.” YouTube was new. This was, I think, March 2007. I actually wrote to the band’s management and said, “I’m doing this research. I used your song in this video. Is it okay if I put it up on YouTube?” The manager wrote back, like, “Oh, you know, let me check with [Britt Daniel].” They wrote back, “Nobody ever asks, thanks for asking. Go ahead and do it.”

It was the wild west back then.

It’s amazing that that video is still there and still racking up views, but within a week, it was on the front page of YouTube. I think it was a link from Boing Boing, and from there, we had a lot of incoming interest from Wired Magazine. They set up the subsequent video that we did with the band in Tokyo. On the basis of that 15 minutes of fame, there was a lot of inbound interest from other researchers at various institutions and universities around the world who were asking, “Hey, can I get one of these robots and do some research with it?” There was also some interest from toy companies, so Dr. Kozima and I started Beatbots as a way of making some more of these research robots, and then to license the Keepon IP.

[…] I was looking to relocate to San Francisco, and I had learned about this company called Bot and Dolly — I think it was from a little half-page ad in Wired Magazine. They were using robots in entertainment in a very different way, which is on film sets to hold cameras and lights and do the motion control.

They did effects for Gravity.

Yes, exactly. They were actually in the midst of doing that project. That was a really exciting and compelling use of these robots that were designed for automotive manufacturing. I reached out to them, and their studio was this amazing place filled with robots. They let me rent a room in the corner to do Beatbots stuff, and then co-invest in a machine shop that they wanted to build. I set up shop there, and over the next couple of years I became really interested in the kinds of things they were doing. At the same time, we were doing a lot of these projects, which we were talking with various toy companies about. Those are on the Beatbots website. […] You can do a lot when you’re building one research robot. You can craft it by hand and money is no object. You can buy the best motors and so forth. It’s a very different thing to put something in a toy store where the retail price is roughly four times the bill of materials.

Image Credits: BeatBots LLC / Hideki Kozima / Marek Michalowski

The more you scale, the cheaper the components get, but it’s incredibly hard to hit a $40 price point with a first-gen hardware project.

With mass commercial products, that’s the challenge: how can you reduce the number of motors, and what tricks can you do to make any given degree of freedom serve multiple purposes? We learned a lot, but also ran into physics and economics challenges.

[…] I needed to decide: do I want to push on the boundaries of robotics by making these things as inexpensively as possible? Or would I rather be in a place where you can use the best available tools and resources? That was a question I faced, but it was sort of answered for me by the opportunities that were coming up with the things that Bot and Dolly was doing.

Google acquired Bot and Dolly along with eight or so other robotics companies, including Boston Dynamics.

I took that up. That’s when the Beatbots thing was put on ice. I’ve been working on Google robotics efforts for — I guess it’s coming on nine years now. It’s been really exciting. I should say that Dr. Kozima is still working on Keepon in these research contexts. He’s a professor at Tohoku University.

News

6 River Systems’ flagship robot, Chuck, helps warehouse workers pick items faster.

Image Credits: 6 River Systems under a license.

Hands down the biggest robotics news of this week arrived at the end of last week. After announcing a massive 20% cut to its 11,600-person staff, Shopify announced that it was selling off its Shopify Logistics division to Flexport. Soon after, word got out that it had also sold off 6 River Systems to Ocado, a U.K. licensor of grocery technology.

I happened to speak to 6 River Systems cofounder Jerome Dubois about how the initial Shopify/6 River deal was different than Amazon’s Kiva purchase. Specifically, the startup made its new owner agree to continue selling the technology to third parties, rather than monopolizing it for its own 3PL needs. Hopefully the Ocado deal plays out similarly.

“We are delighted to welcome new colleagues to the Ocado family. 6 River Systems brings exciting new IP and possibilities to the wider Ocado technology estate, as well as valuable commercial and R&D expertise in non-grocery retail segments,” Ocado CEO James Matthews said in a release. “Chuck robots are currently deployed in over 100 warehouses worldwide, with more than 70 customers. We’re looking forward to supporting 6 River Systems to build on these and new relationships in the years to come.”

Locus Robotics

Image Credits: Locus Robotics

On a very related note, DHL this week announced that it will deploy another 5,000 Locus Robotics systems in its warehouses. The two companies have been working together for a bit, and the logistics giant is clearly quite pleased with how things have been going. DHL has been fairly forward-thinking on warehouse automation, including the first major purchase of Boston Dynamics’ truck unloading robot, Stretch.

Locus remains the biggest player in the space, while managing to remain independent, unlike its largest competitor, 6 River. CEO Rick Faulk recently told me that the company is planning an imminent IPO, once market forces calm down.

AMP Robotics

A sorter machine from AMP Robotics.

Recycling robotics heavyweight AMP Robotics this week announced a new investment from Microsoft’s Climate Innovation Fund, pushing its $91 million Series C up to $99 million. There has always been buzz around the role robotics could/should have in addressing climate change. The Denver-based firm is one of the startups tackling the issue head-on. It’s also a prime example of the “dirty” part of the three Ds of robotics.

“The capital is helping us scale our operations, including deploying technology solutions to retrofit existing recycling infrastructure and expanding new infrastructure based on our application of AI-powered automation,” founder and CEO Matanya Horowitz told TechCrunch this week.

Amazon Astro in front of dog on couch

Image Credits: Amazon

Business Insider has the scoop on an upcoming version of Amazon’s home robot, Astro. We’ve known for a while that the company is really banking on the product’s success. It seems like a longshot, given the checkered history of companies attempting to break into the home robotics market. iRobot is the obvious exception. Not much of an update on that deal, but the last we heard, about a month or so ago, is that regulatory concerns have a decent shot at sidelining the whole thing.

Astro is an interesting product that is currently hampered by pricing and an unconvincing feature set. It’s going to take a lot more than what’s currently on offer to turn the tide in home robots. We do know that Amazon is currently investing a ton into catching up with the likes of ChatGPT and Google on the generative AI front. Certainly, a marriage of the two makes sense. It’s easy to see how conversational AI could go a long way in a product like Astro, whose speech capabilities are currently limited.

Robot Jobs for Human People

Agility Robotics  (20+ Roles)

ANYbotics  (20+ Roles)

AWL Automation  (29 Roles)

Bear Robotics  (4 Roles)

Canvas Construction  (1 Role)

Dexterity  (34 Roles)

Formic (8 Roles)

Keybotic (2 Roles)

Neubility (20 Roles)

OTTO Motors  (23 Roles)

Prime Robotics (4 Roles)

Sanctuary AI (13 Roles)

Viam  (4 Roles)

Woven by Toyota (3 Roles)

Image Credits: Bryce Durbin/TechCrunch

Keepon, carry on by Brian Heater originally published on TechCrunch

Canon sidles up to vloggers with PowerShot V10

Who says compact cameras are dead? Well, the market does, actually; the mobile phone has all but killed off the category altogether, so it’s interesting to see Canon take another stab at the market. The PowerShot series of cameras today gained its newest family member, the V10.

The main question this product has to overcome is ‘why not just use your phone,’ and the camera nerd in me is relieved to have a good answer: Phone cameras are great, but they have to be small. With that restriction removed, Canon can do what Canon does best: Build cameras. On paper, the little camera looks great; it is built around a 1-inch CMOS sensor. That alone is a big deal; phone cameras are getting good enough now that we are right at the edge of what physics can do for us: Sensors get hot, and between all those pixels packed into a tiny space and the limits on how good and precise lenses can be, you’re butting up against the practical maximum image quality you can get.

If you want to get better from there, you need to add smarts (i.e. image processing enhancements, which are increasingly AI-powered), or shift to bigger sensors. Canon’s PowerShot V10 treats us to both: it has 14 built-in color filters and a ‘Smooth Skin’ mode to help you look your best.

The sturdy vertical body is an entirely new design for Canon – and it makes a lot of sense in a world where these cameras are more likely to be placed somewhere to film yourself than held in hand while shooting. Which is an additional point: Smartphone cameras are getting great, but a lot of this kind of content is filmed with the selfie camera, which is often lower resolution and quality than the rear-facing cameras. Canon’s V10 resolves that by making the screen flippable, so you get the best of both worlds: Full-quality camera and a preview to frame your shot.

Canon’s newest makes it possible to shoot in both vertical and horizontal orientations, matching the various social media platforms. If you want to use the camera as a webcam, streaming cam, or content production cam, it has you covered, whether you’re shooting for YouTube, TikTok, or – a favorite among home chefs – OnlyPans. Whatever else the cool kids are doing with their video, Canon hopes it can win folks over.

The camera includes two high-quality stereo microphones, with a third microphone added to aid noise reduction. It also connects with the Canon Camera Connect app to transfer videos over Wi-Fi (no more messing about with memory cards!), and the camera has USB ports to easily transfer to a computer at higher speeds, or to be used as a webcam.

The V10’s adjustable stand means you don’t have to lug a tripod with you everywhere. Neat! Image Credits: Canon

For those on the go, the camera charges over USB-C, which is a nice perk, and it has a built-in stand that makes it easy to tilt the camera up to 30 degrees. Genius; it’s as if camera manufacturers finally realized that not everybody is as excited as I am to buy tripods in all weights and sizes.

The Canon PowerShot V10 can be purchased as part of two different bundles. The standard kit includes a power cable, soft case, lens cap, windshield and wrist strap. For those who want more creative options, an advanced kit includes a cage that can be used to attach additional lighting options or hold an external microphone, etc.

Starting at $429, the camera is well-priced to be positioned as cheaper and better than a smartphone, and as a stepping stone on the way up to a full-size interchangeable-lens camera.

Canon produced a video to show how it all works, along with some sample footage from the camera.

Canon sidles up to vloggers with PowerShot V10 by Haje Jan Kamps originally published on TechCrunch

Fairphone gets its audio groove on with repairable over-ear BT headphones

Dutch social enterprise Fairphone is best known for its mission to build ethical smartphones via a brand that promises fairer wages for supply chain workers and design choices that encourage consumers to cherish and repair the hardware rather than toss last year’s model, thanks to modularity and a web shop selling replacement (and upgradeable) spare parts.

Today it’s taking the leap into a new device category — applying the same principled approach (shipping the most sustainable and fair consumer electronics it can, within limits imposed by wider industry practices that set the availability and compatibility of electronics components) to a pair of its own-design, over-ear Bluetooth headphones. The latest repairable Fairphone product is called Fairbuds XL.

Confusingly, Fairbuds XL are very much not (in-ear) earbuds. So the choice of name is evidently a bit of a pun. Fairphone tells TechCrunch there was hot debate internally over what to call the headphones. We can only imagine what other options were toyed with and rejected. But, clearly, naming a pair of over-ear headphones “buds” will cause some to howl in disbelief. Still, at least they sidestepped the obvious (yet alluring) pitfall of calling the cans Fairphones (see what we did there!). The final name choice was favored for being “unique”, as they tell it.

The Fairbuds XL are available to buy through Fairphone’s website (and select retailers) from today — retailing for €249 in the EU. (As with the company’s smartphones they’re mostly only shipping to Europe at present so US readers are out of luck, or else will have to find their own creative shipping solution.)

The repairable headphones come in a choice of two colors: Speckled black (pictured above); or speckled green with some snazzy orange detailing (shown in non-exploded view below):

Fairphone Fairbuds XL headphones in green

Image credits: Fairphone

Fairphone is remaining tight-lipped on projections of how many pairs of headphones it expects to ship. And clearly it’s taking a bit of a leap here.

That said, fans of the company may have noticed it already sells a pair of wireless earbuds (with the more vanilla name of “True Wireless Stereo Earbuds“), so it has been dabbling in the audio accessory space for a while. However, Fairbuds XL represent a new category for the device maker, according to Fairphone’s head of product management, Miquel Ballester, and audio product manager, Bob van Iersel, a more recent addition to the team. This is because — unlike with the (actual) earbuds it sells — they’re not working with off-the-shelf components for the over-ear ‘phones. Rather, they’ve designed this new audio product from scratch themselves. And designed the headphones to be easily disassembled for repairability.

As with Fairphone’s eponymous (screwdriver-friendly) smartphones, Fairbuds XL are made up of modular parts that connect to support ease of repair and promote functional longevity — furthering the core mission of shipping more sustainable electronics (vs the built-in-obsolescence industry modus operandi that quickly leads to heaps of environmentally unfriendly e-waste).

“The level of modularity is similar or even more than the phones,” says Ballester, discussing Fairbuds XL in an interview with TechCrunch. “All the [replaceable] parts will be available on the website. And [there are] more specific components that we’ve also designed to be easy to replace — those specific small parts we will not be offering at the beginning on our website. Later, based on need, and what we see on the market, we might want to make all these other parts available.”

“The headphones are built up out of nine modules slash spare parts,” continues van Iersel. “Some of these parts are really relatively simple mechanical parts. Think about the ear cushions, the headband, the hinges. And there are also more complex parts — like the right speaker module which also has the buttons, the Bluetooth chipset and whatnot. So that’s the more comprehensive spare part.

“But in theory, all of them are replaceable. So there is not one single part that defines what the base of the headphones is, let’s say. So even if your Bluetooth module breaks down for whatever reason it’s not the case that you can’t use all the other ones — it’s easy to simply order replacement parts for that and the rest of the headphones will work as they are supposed to.”

“With this product, we truly designed it from the ground up,” van Iersel also tells us. “That’s reflected in the modularity, the repairability of the design… With the headphones we saw an opportunity to enter a market which could benefit a lot from the Fairphone approach — with making products that are more repairable, more durable and really designed to last a long time. That’s what we set out to do with these headphones and what we think that we achieved as well.”

“For us, the headphones is a way to bring to the market what we do in the supply chain in the design of our products,” rejoins Ballester. “So this is kind of a proof that it can be done. And — for me — we are closing a gap in the market… There are no other companies doing, in this case headphones, as an artefact for changing the industry.”

Other elements that tick Fairphone’s core ethical electronics boxes include a “living wage” pledge applied to the headphones’ suppliers to encourage them to provide fairer working conditions for workers in their factories. There’s also fairtrade gold integrated into the Fairbuds supply chain, while recycled materials make up over 80% of the plastic weight of the headphones — with 100% recycled aluminium in structural parts, too.

Some non-recycled plastic has been used in areas where acoustic considerations are more sensitive, per van Iersel. So there’s an element of Fairphone needing to balance core product performance against sustainability targets. But Ballester says they’ll continue seeking to push the boundaries of what’s possible on the recycled materials front.

Designing such a bespoke audiophile product necessitated Fairphone bringing audio expertise in-house (to supplement its existing mobile hardware smarts). They also worked with partners on tuning the audio — touting a “signature sound” for the Fairbuds XL, as van Iersel puts it. On the audio performance side, he expresses confidence that the sound quality is reflective of what consumers can expect for over-ear headphones retailing at this price-point. (Cheaper Beats cans are in the same sort of price range, for example.)

“Fairphone did not originally have the audio expertise in house to build these kinds of projects so we made sure to partner up really well — with both hardware suppliers as well as on the software side. We have partnered to ensure the great sound quality that we managed to achieve in the end. Because, with this product, we didn’t want to just offer a ‘fair’ version of what headphones could be; but they should also be able to carry their weight when it comes to sound quality,” he tells TechCrunch.

“On the hardware side, there are different things that make up the sound quality. It’s not only the components that you put in but also how you put them in. So we’ve chosen a chipset that supports… a high-fidelity audio codec to make sure whatever you throw at it from your phone gets processed in Hi-Fi definition. And next to that we use 40 millimetre dynamic drivers, which is comparable to what anything in this category would have. But also, we made sure not just to select the right drivers but also to have our partner carefully design the acoustic chamber in which they are placed inside the headphones themselves — because that massively affects the final sound as well.”

Other sound quality considerations van Iersel says the Fairbuds’ designers have paid attention to include the clamping force of the headphones and the material for the ear cushions to ensure a proper seal. As noted above, it also worked with a third party audio calibration partner, called Sonarworks, to tune the sound.

“We developed a custom Fairphone sound signature that is part of these headphones. And it’s even something that we could carry on over to future audio products as well,” says van Iersel, noting Sonarworks created a selection of audio pre-sets Fairbuds’ users can choose from in a companion app which is launching with the headphones (both for iOS and Android).

“We also intend to reach a different or new target audience with this,” he adds. “Because with the phone there’s obviously a huge threshold for consumers to switch brands, to go for something that they don’t know yet. But we do have a lot of fans of Fairphone that like our mission and would be happy to support it but aren’t willing to take that leap to buy a smartphone. But this could be much more like the entry-level Fairphone [product].

“That’s also why we chose to develop the application not just for Androids but also for iOS. So we’re not just targeting current Fairphone customers… It’s really a product that’s meant to compete with all other headphones — and not just be seen as a Fairphone accessory because that’s absolutely not what it is.”

The need to build up the necessary expertise in a new product category goes some way to explaining why it took Fairphone a (fair) bit longer than it had originally expected (circa four months) to get the Fairbuds XL to market. But, well, hardware is hard, and its repairable Bluetooth headphones juggle both swappable mechanical (moving) parts and higher-tech chipsets, as well as shipping with the aforementioned companion apps to let users custom-tune the sound. So there’s perhaps more work involved in Fairphone delivering decent modular cans than you might think at first glance.

A range of spare parts is available to buy for the Fairbuds XL on its web shop from launch — such as new ear cap covers for a few euros or a new battery for around €20. Consumers of the product get a two-year warranty on purchase, so any component breakages in that time are likely to be covered by Fairphone (well, unless you damage the product by sitting on it or something).

Given Fairphone offers a five-year warranty on its new smartphones, a two-year warranty for Fairbuds XL may seem a little lowball for a brand that centers sustainability. And Ballester admits they had wanted to be able to offer the same five-year pledge. But he says uncertainties attached to shipping a device in a new category, and specifically needing to see how the headphones stand up to real-world daily use/abuse, led them to opt for the less risky choice of a shorter warranty at launch. He adds that they hope to be able to extend it in the future as they see how the Fairbuds perform in the wild.

One neat longevity feature he highlights is that the product has been designed so it can always function as wired headphones — meaning that, even many years hence, when it might finally be impossible to get a replacement battery for this particular Fairbuds model (even from Fairphone’s own web shop), the headphones will still function without a battery by plugging them in — at least assuming there’s a USB-C socket to hand. (Regionally at least, that’s a fairly safe bet, since EU lawmakers are pushing for USB Type-C to be the charging standard for consumer electronics.)

Talking of supportive policymaking, the EU is working on right to repair legislation that looks set to give Fairphone’s approach considerable regional uplift in the years to come. And Ballester welcomes the planned expansions to EU ecodesign legislation. Discussing this, he suggests lawmakers could go further, too — flagging the need for them to pay greater attention to consumer electronics business models and find more ways to support models that aim to sell consumers on sustainability, rather than sticking with the dirty old gadget-makers’ game of driving resource-hungry hardware upgrade cycles.

In Fairphone’s home market of the Netherlands, it’s now offering a smartphone subscription service, called Fairphone Easy, that lets users rent its handsets for a flat monthly fee which covers the cost of any necessary repairs and/or replacement. When the lease expires or the device breaks, the handsets are returned to the company for reuse (refurbishment) or else for recycling components at the end of their useful life, to maximize resource utilization and minimize e-waste.

“I think it’s two models that will have to coexist,” suggests Ballester, tracking where sustainable consumer electronics may be headed in the years to come. “You will have the type of consumers that are more convenience driven. Like ‘I am very sustainably minded but I don’t need to repair a product myself.’ [Who] will buy into a service proposition because [they] don’t get the burden of ownership. And that’s totally fair. And there will be a type of consumers that will be fine with that burden of ownership because they know they need to recycle at the end of life. They need to keep their device as long as possible. They know that they need to repair.

“So for me these two models will coexist in the future. And we’ll have to divert more convenience-driven consumers to our service propositions… And I think companies should be smart enough to create the business models that really unlock sustainability for any type of consumer. And I think service propositions have a role there.”

Fairphone gets its audio groove on with repairable over-ear BT headphones by Natasha Lomas originally published on TechCrunch

Here’s everything Google has announced at I/O so far

It’s that moment you’ve been waiting for all year: Google I/O keynote day! Google kicks off its developer conference each year with a rapid-fire stream of announcements, including many unveilings of things it’s been working on recently. Brian already kicked us off by sharing what we are expecting.

We know you don’t always have time to watch the whole two-hour presentation today, so we’re taking that on and will deliver quick hits of the biggest news from the keynote as they are announced, all in an easy-to-digest, easy-to-skim list. Here we go!

Google Maps

Google's new Immersive View for Routes feature

Image Credits: Google

Google Maps unveiled a new “Immersive View for Routes” feature in select cities. The new feature brings all of the information that a user may need into one place, including details about traffic simulations, bike lanes, complex intersections, parking and more. Read more.

Magic Editor

We’re always wanting to change something about the photo we just took, and now Google’s Magic Editor feature uses AI to make more complex edits to specific parts of photos, for example the foreground or background. It can also fill in gaps in the photo or even reposition the subject for a better-framed shot. Check it out.

PaLM 2

Image Credits: Google

Frederic has your look at PaLM 2, Google’s newest large language model (LLM). He writes “PaLM 2 will power Google’s updated Bard chat tool, the company’s competitor to OpenAI’s ChatGPT, and function as the foundation model for most of the new AI features the company is announcing today.” PaLM 2 also now features improved support for writing and debugging code. More here. Also, Kyle takes a deeper dive into PaLM 2 with a more critical look at the model through the lens of a Google-authored research paper.

Bard gets smarter

Good news: Google is not only removing its waitlist for Bard and making it available, in English, in over 180 countries and territories, but it’s also launching support for Japanese and Korean with a goal of supporting 40 languages in the near future. Also new is Bard’s ability to surface images in its responses. Find out more. In addition, Google is partnering with Adobe for some art generation capabilities via Bard. Kyle writes that “Bard users will be able to generate images via Firefly and then modify them using Express. Within Bard, users will be able to choose from templates, fonts and stock images as well as other assets from the Express library.”

Workspace

Google Workspace Icons

Image Credits: TechCrunch

Google’s Workspace suite is also getting the AI touch to make it smarter, with the addition of automatic table (but not formula) generation in Sheets and image creation in Slides and Meet. Initially, the automatic table generation is fairly simple, though Frederic notes there is more to come with regard to using AI to create formulas. The new features for Slides and Meet include the ability to type in what kind of visualization you are looking for, and the AI will create that image. Specifically for Google Meet, that means custom backgrounds. Check out more.

MusicLM

Google MusicLM

Image Credits: Google

MusicLM is Google’s new experimental AI tool that turns text into music. Kyle writes that for example, if you are hosting a dinner party, you can simply type, “soulful jazz for a dinner party” and have the tool create several versions of the song. Read more.

Stay tuned for more developments as the day “unfolds,” get it?

Read more about Google I/O 2023 on TechCrunch

Here’s everything Google has announced at I/O so far by Christine Hall originally published on TechCrunch

Google’s Find My Device network to warn about unknown AirTags with you

Shortly after last week’s joint announcement, which saw Apple and Google teaming up on Bluetooth tracker safety measures and a new specification, Google today introduced a series of improvements coming to its own Find My Device network, including proactive alerts about unknown trackers traveling with you, with support for Apple’s AirTag and others.

The news, detailed today at Google’s I/O developer conference, follows Apple and Google’s recently announced plan to lead an industry-wide initiative to draft a specification that would alert users in the case of unwanted tracking from Bluetooth devices.

The companies’ larger goal is to offer increased safety and security for their own respective user bases by making these alerts work across platforms in the same way — meaning, for example, the work Apple did to make AirTags safer following reports they were being used for stalking would also make its way to Android devices.

Today, Google is building on that announcement by noting that its own Find My Device network will soon automatically notify users if their phone detects an unknown tracker moving with them. The feature, arriving later this summer, will work with Bluetooth trackers, including Apple AirTags, and all the other trackers that are already compatible with Google’s Find My Device network.

In addition, Google is updating its Find My Device experience to make it easier to locate devices by ringing them or viewing their location on a map — even if they’re offline, it says.

This, too, will arrive later this summer, along with new support for Bluetooth trackers from Tile, Chipolo, and Pebblebee, as well as audio devices like Pixel Buds and headphones from Sony and JBL.

It seems the technology in the draft specification around trackers proposed by Apple and Google will be making its way to Android devices ahead of the production release — which is expected to arrive by year-end, the companies previously noted.

Their draft had been submitted as an Internet-Draft via a standards development organization, the Internet Engineering Task Force (IETF). Other interested parties were invited to review and comment over the next few months.

Apple and Google said other tracker makers like Samsung, Tile, Chipolo, eufy Security and Pebblebee had also expressed interest in their draft.

Read more about Google I/O 2023 on TechCrunch

Google’s Find My Device network to warn about unknown AirTags with you by Sarah Perez originally published on TechCrunch