‘Carpentry Compiler’ turns 3D models into instructions on how to build them

Even to an experienced carpenter, it may not be obvious what the best way is to build a structure they’ve designed. A new digital tool, Carpentry Compiler, provides a way forward, converting the shapes of the structure into a step-by-step guide on how to produce them. It could help your next carpentry project get off the screen and into the shop.

“If you think of both design and fabrication as programs, you can use methods from programming languages to solve problems in carpentry, which is really cool,” said project lead Adriana Schulz from the University of Washington’s computer science department, in a news release.

It sounds a bit detached from the sawdust and sweat of hands-on woodworking, but they don’t say “measure twice, cut once” for nothing. Carpentry is a cerebral process more than a physical one, and smart, efficient solutions tend to replace ones that are merely well made.

What Carpentry Compiler does is codify the rules that govern design and carpentry (what materials are available, what tools can do, and so on) and use those rules to create a solution (in terms of cuts and joins) to a problem (how to turn boards into a treehouse).

Users design in a familiar 3D model interface, as many already do, creating the desired structure out of various shapes that they can modify, divide, pierce, attach, and so on. The program then takes those shapes and determines the best way to create them from your existing stock, with the tools you have — which you can select from a list.

Need to make the roof of your treehouse but only have 2x4s? It’ll provide a recipe with that restriction. Got some plywood sheets? It’ll use those, and the leftovers contribute to the base so there’s less waste. By evaluating lots and lots of variations on how this might be accomplished, the program arrives at what it believes are the best options, and presents multiple solutions.

“If you want to make a bookcase, it will give you multiple plans to make it,” said Schulz. “One might use less material. Another one might be more precise because it uses a more precise tool. And a third one is faster, but it uses more material. All these plans make the same bookcase, but they are not identical in terms of cost. These are examples of tradeoffs that a designer could explore.”

A 24-inch 2×4 gets cut at 16 inches at a 30-degree angle.

That’s really the same kind of thing that goes on inside a woodworker’s brain: I could use that fresh sheet to make this part, and it would be easy, or I could cut those shapes from either corner and it would leave room in the middle, but that’ll be kind of a pain… That sort of thing. It can also optimize for spatial elements, if for example you wanted to pack the parts in a box, or for cost if you wanted to shave a few bucks off the project.
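The tradeoff exploration Schulz describes amounts to keeping only the plans that no other plan beats on every axis at once, i.e. computing a Pareto front. Here is a minimal sketch of that idea in Python; the plan names and cost numbers are invented for illustration, not taken from the actual tool:

```python
# Each fabrication plan has several costs (lower is better on every
# axis). A plan is worth showing the designer only if no other plan
# dominates it, i.e. is at least as good everywhere and strictly
# better somewhere. Plan data below is made up for this sketch.

plans = {
    "jigsaw-heavy": {"material": 3.0, "time": 5.0, "error": 0.5},
    "tracksaw":     {"material": 3.5, "time": 3.0, "error": 0.2},
    "chopsaw-fast": {"material": 4.0, "time": 2.0, "error": 0.2},
    "wasteful":     {"material": 5.0, "time": 4.0, "error": 0.5},
}

def dominates(a, b):
    """True if cost dict a is at least as good as b on every axis
    and strictly better on at least one."""
    return all(a[k] <= b[k] for k in a) and any(a[k] < b[k] for k in a)

def pareto_front(plans):
    """Keep only the plans no other plan dominates."""
    return {
        name: cost
        for name, cost in plans.items()
        if not any(dominates(other, cost)
                   for o, other in plans.items() if o != name)
    }

print(sorted(pareto_front(plans)))  # the "wasteful" plan drops out
```

The surviving plans are exactly the "one uses less material, another is faster" set from the bookcase example: each is best under some weighting of the costs, and the dominated plan is never worth presenting.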

Eventually the user is provided with a set of instructions specific to their set of tools. And the carpenters themselves act as the “processor,” executing operations, like “cut at this angle,” on real-world materials. In Carpentry Compiler, computer programs you!

The team presented their work at SIGGRAPH Asia last month. You can read more about the project (and learn how you can try it yourself) at its webpage.

Gadgets – TechCrunch

How Microsoft turns an obsession with detail into micron-optimized keyboards

Nestled among the many indistinguishable buildings of Microsoft’s Redmond campus, a multi-disciplinary team sharing an attention to detail that borders on fanatical is designing a keyboard… again and again and again. And one more time for good measure. Their dogged and ever-evolving dedication to “human factors” shows the amount of work that goes into making any piece of hardware truly ergonomic.

Microsoft may be known primarily for its software and services, but cast your mind back a bit and you’ll find a series of hardware advances that have redefined their respective categories:

The original Natural Keyboard was the first split-key, ergonomic keyboard, the fundamentals of which have only ever been slightly improved upon.

The Intellimouse Optical not only made the first truly popular leap away from ball-based mice, but did so in such a way that its shape and buttons still make its descendants among the best all-purpose mice on the market.

Remember me?

Although the Zune is remembered more for being a colossal boondoggle than a great music player, it was very much the latter, and I still use and marvel at the usability of my Zune HD. Yes, seriously. (Microsoft, open source the software!)

More recently, the Surface series of convertible notebooks have made bold and welcome changes to a form factor that had stagnated in the wake of Apple’s influential mid-2000s MacBook Pro designs.

Microsoft is still making hardware, of course, and in fact it has doubled down on its ability to do so with a revamped hardware lab filled with dedicated, extremely detail-oriented people who are given the tools they need to get as weird as they want — as long as it makes something better.

You don’t get something like this by aping the competition.

First, a disclosure: I may as well say at the outset that this piece was done essentially at the invitation (but not direction) of Microsoft, which offered the opportunity to visit their hardware labs in Building 87 and meet the team. I’d actually been there before a few times, but it had always been off-record and rather sanitized.

Knowing how interesting I’d found the place before, I decided I wanted to take part and share it at the risk of seeming promotional. They call this sort of thing “access journalism,” but the second part is kind of a stretch. I really just think this stuff is really cool, and companies seldom expose their design processes in the open like this. Microsoft obviously isn’t the only company to have hardware labs and facilities like this, but they’ve been in the game for a long time and have an interesting and almost too detailed process they’ve decided to be open about.

Although I spoke with perhaps a dozen Microsoft Devices people during the tour (which was still rigidly structured), only two were permitted to be on record: Edie Adams, Chief Ergonomist, and Yi-Min Huang, Principal Design and Experience Lead. But the other folks in the labs were very obliging in answering questions and happy to talk about their work. I was genuinely surprised and pleased to find people occupying niches so suited to their specialities and inclinations.

Generally speaking, the work I got to see fell into three spaces: the Human Factors Lab, focused on very exacting measurements of people themselves and how they interact with a piece of hardware; the anechoic chamber, where the sound of devices is obsessively analyzed and adjusted; and the Advanced Prototype Center, where devices and materials can go from idea to reality in minutes or hours.

The science of anthropometry

Inside the Human Factors lab, human thumbs litter the table. No, it isn’t a torture chamber — not for humans, anyway. Here the company puts its hardware to the test by measuring how human beings use it, recording not just simple metrics like words per minute on a keyboard, but high-speed stereo footage that analyzes how the skin of the hand stretches when it reaches for a mouse button down to a fraction of a millimeter.

The trend here, as elsewhere in the design process and labs, is that you can’t count anything out as a factor that increases or decreases comfort; the little things really do make a difference, and sometimes the microscopic ones.

“Feats of engineering heroics are great,” said Adams, “but they have to meet a human need. We try to cover the physical, cognitive, and emotional interactions with our products.”

(Perhaps you take this, as I did, as — in addition to a statement of purpose — a veiled reference to a certain other company whose keyboards have been in the news for other reasons. Of this later.)

The lab is a space perhaps comparable to a medium-sized restaurant, with enough room for a dozen or so people to work in the various sub-spaces set aside for different highly specific measurements. Various models of body parts have been set out on work surfaces, I suspect for my benefit.

Among them are that set of thumbs, in little cases looking like oversized lipsticks, each with a disturbing surprise inside. These are all cast from real people, ranging from the small thumb of a child to a monster that, should it have started a war with mine, I would surrender unconditionally.

Next door is a collection of ears, not only rendered in extreme detail but with different materials simulating a variety of rigidities. Some people have soft ears, you know. And next door to those is a variety of noses, eyes, and temples, each representing a different facial structure or interpupillary distance.

This menagerie of parts represents not just a continuum of sizes but a variety of backgrounds and ages. All of them come into play when creating and testing a new piece of hardware.

“We want to make sure that we have a diverse population we can draw on when we develop our products,” said Adams. When you distribute globally it is embarrassing to find that some group or another, with wider-set eyes or smaller hands, finds your product difficult to use. Inclusivity is a many-faceted gem; indeed, it has as many facets as you are willing to cut. (The Xbox Adaptive Controller, for instance, is a new and welcome one.)

In one corner stands an enormous pod that looks like Darth Vader should emerge from it. This chamber, equipped with 36 DSLR cameras, produces an unforgivingly exact reproduction of one’s head. I didn’t do it myself, but many on the team had; in fact, one eyes-and-nose combo belonged to Adams. The fellow you see pictured there also works in the lab; that was the first such 3D portrait they took with the rig.

With this they can quickly and easily scan in dozens or hundreds of heads, collecting metrics on all manner of physiognomical features and creating an enviable database of both average and outlier heads. My head is big, if you want to know, and my hand was on the upper range too. But well within a couple standard deviations.

So much for static study — getting reads on the landscape of humanity, as it were. Anthropometry, they call it. But there are dynamic elements as well, some of which they collect in the lab, some elsewhere.

“When we’re evaluating keyboards, we have people come into the lab. We try to put them in the most neutral position possible,” explained Adams.

It should be explained that by neutral, she means specifically with regard to the neutral positions of the joints in the body, which have certain minima and maxima it is well to observe. How can you get a good read on how easy it is to type on a given keyboard if the chair and desk the tester is sitting at are uncomfortable?

Here as elsewhere the team strives to collect both objective data and subjective data; people will say they think a keyboard, or mouse, or headset is too this or too that, but not knowing the jargon they can’t get more specific. By listening to subjective evaluations and simultaneously looking at objective measurements, you can align the two and discover practical measures to take.

One such objective measure involves motion capture beads attached to the hand while an electromyographic bracelet tracks the activation of muscles in the arm. Imagine if you will a person whose typing appears normal and of uniform speed — but in reality they are putting more force on their middle fingers than the others because of the shape of the keys or rest. They might not be able to tell you they’re doing so, though it will lead to uneven hand fatigue, but this combo of tools could reveal the fact.

“We also look at a range of locations,” added Huang. “Typing on a couch is very different from typing on a desk.”

One case, such as a wireless Surface keyboard, might require more of what Huang called “lapability,” while the other perhaps needs to accommodate a different posture and can abandon lapability altogether.

A final measurement technique that is quite new to my knowledge involves a pair of high-resolution, high-speed black and white cameras that can be focused narrowly on a region of the body. They’re on the right, below, with colors and arrows representing motion vectors.

A display showing various anthropometric measurements.

These produce a very detailed depth map by closely tracking the features of the skin; one little patch might move further than the other when a person puts on a headset, suggesting it’s stretching the skin on the temple more than it is on the forehead. The team said they can see movements as small as ten microns, or micrometers (therefore you see that my headline was only light hyperbole).

You might be thinking that this is overkill. And in a way it most certainly is. But it is also true that by looking closer they can make the small changes that cause a keyboard to be comfortable for five hours rather than four, or to reduce error rates or wrist pain by noticeable amounts — features you can’t really even put on the box, but which make a difference in the long run. The returns may diminish, but we’re not so far along the asymptote approaching perfection that there’s no point to making further improvements.

The quietest place in the world

Down the hall from the Human Factors lab is the quietest place in the world. That’s not a colloquial exaggeration — the main anechoic chamber in Building 87 at Microsoft is in the record books as the quietest place on Earth, with an official ambient noise rating of negative 20.3 decibels.

You enter the room through a series of heavy doors and the quietness, though a void, feels like a physical medium that you pass into. And so it is, in fact — a near-total lack of vibrations in the air that feels as solid as the nested concrete boxes inside which the chamber rests.

I’ve been in here a couple times before, and Hundraj Gopal, the jovial and highly expert proprietor of quietude here, skips the usual tales of Guinness coming to test it and so on. Instead we talk about the value of sound to the consumer, though they may not even realize they do value it.

Naturally if you’re going to make a keyboard, you’re going to want to control how it sounds. But this is a surprisingly complex process, especially if, like the team at Microsoft, you’re really going to town on the details.

The sounds of consumer products are very deliberately designed, they explained. The sound your car door makes when it shuts gives a sense of security — being sealed in when you’re entering, and being securely shut out when you’re leaving it. It’s the same for a laptop — you don’t want to hear a clank when you close it, or a scraping noise when you open it. These are the kinds of things that set apart “premium” devices (and cars, and controllers, and furniture, etc) and they do not come about by accident.

Keyboards are no exception. And part of designing the sound is understanding that there’s more to it than loudness or even tone. Some sounds just sound louder, though they may not register as high in decibels. And some sounds are just more annoying, though they might be quiet. The study and understanding of this is what’s known as psychoacoustics.

There are known patterns to pursue, certain combinations of sounds that are near-universally liked or disliked, but you can’t rely on that kind of thing when you’re, say, building a new keyboard from the ground up. And obviously when you create a new machine like the Surface and its family they need new keyboards, not something off the shelf. So this is a process that has to be done from scratch over and over.

As part of designing the keyboard — and keep in mind, this is in tandem with the human factors mentioned above and the rapid prototyping we’ll touch on below — the device has to come into the anechoic chamber and have a variety of tests performed.

A standard head model used to simulate how humans might hear certain sounds. The team gave it a bit of a makeover.

These tests can be painstakingly objective, like a robotic arm pressing each key one by one while a high-end microphone records the sound in perfect fidelity and analysts pore over the spectrogram. But they can also be highly subjective: They bring in trained listeners — “golden ears” — to give their expert opinions, but also have the “gen pop” everyday users try the keyboards while experiencing calibrated ambient noise recorded in coffee shops and offices. One click sound may be lost in the broad-spectrum hubbub in a crowded cafe but annoying when it’s across the desk from you.

This feedback goes both directions, to human factors and prototyping, and they iterate and bring it back for more. This progresses sometimes through multiple phases of hardware, such as the keyswitch assembly alone; the keys built into their metal enclosure; the keys in the final near-shipping product before they finalize the keytop material, and so on.

Indeed, it seems like the process really could go on forever if someone didn’t stop them from refining the design further.

“It’s amazing that we ever ship a product,” quipped Adams. They can probably thank the Advanced Prototype Center for that.

Rapid turnaround is fair play

If you’re going to be obsessive about the details of the devices you’re designing, it doesn’t make a lot of sense to have to send off a CAD file to some factory somewhere, wait a few days for it to come back, then inspect for quality, send a revised file, and so on. So Microsoft (and of course other hardware makers of any size) now use rapid prototyping to turn designs around in hours rather than days or weeks.

This wasn’t always possible even with the best equipment. 3D printing has come a long way over the last decade, and continues to advance, but not long ago there was a huge difference between a printed prototype and the hardware that a user would actually hold.

Multi-axis CNC mills have been around for longer, but they’re slower and more difficult to operate. And subtractive manufacturing (i.e. taking a block and whittling it down to a mouse) is inefficient and has certain limitations as far as the structures it can create.

Of course you could carve it yourself out of wood or soap, but that’s a bit old-fashioned.

So when Building 87 was redesigned from the ground up some years back, it was loaded with the latest and greatest of both additive and subtractive rapid manufacturing methods, and the state of the art has been continually rolling through ever since. Even as I passed through they were installing some new machines (desk-sized things that had slots for both extrusion materials and ordinary printer ink cartridges, a fact that for some reason I found hilarious).

The additive machines are in constant use as designers and engineers propose new device shapes and styles that sound great in theory but must be tested in person. Having a bunch of these things, each able to produce multiple items per print, lets you for instance test out a thumb scoop on a mouse with 16 slightly different widths. Maybe you take those over to Human Factors and see which can be eliminated for over-stressing a joint, then compare comfort on the surviving 6 and move on to a new iteration. That could all take place over a day or two.

Ever wonder what an Xbox controller feels like to a child? Just print a giant one in the lab.

Softer materials have become increasingly important as designers have found that they can be integrated into products from the start. For instance, a wrist rest for a new keyboard might have foam padding built in.

But how much foam is too much, or too little? As with the 3D printers, flat materials like foam and cloth can be customized and systematically tested as well. Using a machine called a skiver, foam can be split into thicknesses only half a millimeter apart. It doesn’t sound like much — and it isn’t — but when you’re creating an object that will be handled for hours at a time by the sensitive hands of humans, the difference can be subtle but substantial.

For more heavy-duty prototyping of things that need to be made out of metal — hinges, laptop frames, and so on — there is bank after bank of 5-axis CNC machines, lathes, and more exotic tools, like a system that performs extremely precise cuts using a charged wire.

The engineers operating these things work collaboratively with the designers and researchers, and it was important to the people I talked to that this wasn’t a “here, print this” situation. A true collaboration has input from both sides, and that is what seems to be happening here. Someone inspecting a 3D model for printability before popping it into the 5-axis might say to the designer, you know, these pieces could fit together more closely if we did so-and-so, and it would actually add strength to the assembly. (Can you tell I’m not an engineer?) Making stuff, and making stuff better, is a passion among the crew and that’s a fundamentally creative drive.

Making fresh hells for keyboards

If any keyboard has dominated the headlines for the last year or so, it’s been Apple’s ill-fated butterfly switch keyboard on the latest MacBook Pros. While being in my opinion quite unpleasant to type on, they appeared to fail at an astonishing rate judging by the proportion of users I saw personally reporting problems, and are quite expensive to replace. How, I wondered, did a company with Apple’s design resources create such a dog?

Here’s a piece of hardware you won’t break any time soon.

I mentioned the subject to the group towards the end of the tour but, predictably and understandably, it wasn’t really something they wanted to talk about. But a short time later I spoke with one of the people in charge of reliability at Microsoft. They too demurred on the topic of Apple’s failures, opting instead to describe at length the measures Microsoft takes to ensure that their own keyboards don’t suffer a similar fate.

The philosophy is essentially to simulate everything about the expected 3-5 year life of the keyboard. There are the “torture chambers” where devices are beaten on by robots (I’ve seen these personally, years ago; they’re brutal), but there’s more to it than that. Keyboards are everyday objects, and they face everyday threats; so that’s what the team tests, with things falling into three general categories:

Environmental: This includes cycling the temperature from very low to very high, exposing the keyboard to dust and UV. This differs for each product, since some will obviously be used outside more than others. Does it break? Does it discolor? Where does the dust go?

Mechanical: Every keyboard undergoes key tests to make sure that keys can withstand however many million presses without failing. But that’s not the only abuse keyboards take. They get dropped and things get dropped on them, of course, or they’re left upside-down, or have their keys pressed and held at weird angles. All these things are tested, and when a keyboard fails in a way no existing test covers, a new test is added.

Chemical: I found this very interesting. The team now has more than 30 chemicals that it exposes its hardware to, including: lotion, Coke, coffee, chips, mustard, ketchup, and Clorox. The team is constantly adding to the list as new chemicals come into frequent use or new markets open up. Hospitals, for instance, need to test a variety of harsh disinfectants that an ordinary home wouldn’t have. (Note: Burt’s Bees is apparently bad news for keyboards.)

Testing is ongoing, with new batches being evaluated continuously as time allows.

To be honest it’s hard to imagine that Apple’s disappointing keyboard actually underwent this kind of testing, or if it did, that it was modified to survive it. The number and severity of problems I’ve heard of with them suggest the “feats of engineering heroics” of which Adams spoke, but directed singlemindedly in the direction of compactness. Perhaps more torture chambers are required at Apple HQ.

7 factors and the unfactorable

All the above are tools for executing a design, not for creating one to begin with. That’s a whole other kettle of fish, and one not so easily described.

Adams told me: “When computers were on every desk the same way, it was okay to only have one or two kinds of keyboard. But now that there are so many kinds of computing, it’s okay to have a choice. What kind of work do you do? Where do you do it? I mean, what do we all type on now? Phones. So it’s entirely context dependent.”

Is this the right curve? Or should it be six millimeters higher? Let’s try both.

Yet even in the great variety of all possible keyboards there are metrics that must be considered if that keyboard is to succeed in its role. The team boiled it down to seven critical points:

  • Key travel: How far a key goes until it bottoms out. Neither shallow nor deep is necessarily better; each serves different purposes.
  • Key spacing: Distance between the center of one key and the next. How far can you differ from “full-size” before it becomes uncomfortable?
  • Key pitch: On many keyboards the keys do not all “face” the same direction, but are subtly pointed towards the home row, because that’s the direction your fingers hit them from. How much is too much? How little is too little?
  • Key dish: The shape of the keytop limits your fingers’ motion, captures them when they travel or return, and provides a comfortable home — if it’s done right.
  • Key texture: Too slick and fingers will slide off. Too rough and it’ll be uncomfortable. Can it be fabric? Textured plastic? Metal?
  • Key sound: As described above the sound indicates a number of things and has to be carefully engineered.
  • Force to fire: How much actual force does it take to drive a given key to its actuation point? Keep in mind this can and perhaps should differ from key to key.

In addition to these core concepts there are many secondary ones that pop up for consideration: wobble, or the amount a key moves laterally (yes, this is deliberate); snap ratio, involving the feedback from actuation; drop angle; off-axis actuation; key gap for chiclet boards… and of course the inevitable switch debate.

Keyboard switches, the actual mechanism under the key, have become a major sub-industry as many companies started making their own at the expiration of a few important patents. Hence there’s been a proliferation of new key switches with a variety of aspects, especially on the mechanical side. Microsoft does make mechanical keyboards, and scissor-switch keyboards, and membrane as well, and perhaps even some more exotic ones (though the original touch-sensitive Surface cover keyboard was a bit of a flop).

“When we look at switches, whether it’s for a mouse, QWERTY, or other keys, we think about what they’re for,” said Adams. “We’re not going to say we’re scissor switch all the time or something — we have all kinds. It’s about durability, reliability, cost, supply, and so on. And the sound and tactile experience is so important.”

As for the shape itself, there is generally the divided Natural style, the flat full style, and the flat chiclet style. But with design trends, new materials, new devices, and changes to people and desk styles (you better believe a standing desk needs a different keyboard than a sitting one), it’s a new challenge every time.

They collected a menagerie of keyboards and prototypes in various stages of experimentation. Some were obviously never meant for real use — one had the keys pitched so far that it was like a little cave for the home row. Another was an experiment in how much a design could be shrunk until it was no longer usable. A handful showed different curves a la Natural — which is the right one? Although you can theorize, the only way to be sure is to lay hands on it. So tell rapid prototyping to make variants 1-10, then send them over to Human Factors and test the stress and posture resulting from each one.

“Sure, we know the gable slope should be between 10-15 degrees and blah blah blah,” said Adams, who is actually on the patent for the original Natural Keyboard, and so is about as familiar as you can get with the design. “But what else? What is it we’re trying to do, and how are we achieving that through engineering? It’s super fun bringing all we know about the human body and bringing that into the industrial design.”

Although the comparison is rather grandiose, I was reminded of an orchestra — but not in full swing. Rather, in the minutes before a symphony begins, and all the players are tuning their instruments. It’s a cacophony in a way, but they are all tuning towards a certain key, and the din gradually makes its way to a pleasant sort of hum. So it is that a group of specialists all tending their sciences and creeping towards greater precision seem to cohere a product out of the ether that is human-centric in all its parts.

Google turns your Android phone into a security key

Your Android phone could soon replace your hardware security key to provide two-factor authentication access to your accounts. As the company announced at its Cloud Next conference today, it has developed a Bluetooth-based protocol that will be able to talk to its Chrome browser and provide a standards-based second factor for access to its services, similar to modern security keys.

It’s no secret that two-factor authentication remains one of the best ways to secure your online accounts. Typically, that second factor comes to you in the form of a push notification, text message or through an authentication app like the Google Authenticator. There’s always the risk of somebody intercepting those numbers or phishing your account and then quickly using your second factor to log in, though. Because a physical security key also ensures that you are on the right site before it exchanges the key, it’s almost impossible to phish this second factor. The key simply isn’t going to produce a token on the wrong site.
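The reason a security key "isn't going to produce a token on the wrong site" is that the signed response is bound to the origin the browser reports, which is how FIDO-style keys work. Here is a toy sketch of that idea; HMAC stands in for the key's real asymmetric signature, and all the names and values are invented for illustration:

```python
# Toy model of origin binding: the second factor signs the server's
# challenge TOGETHER with the origin the browser is talking to, so a
# phishing site relaying the challenge gets a useless signature.
# HMAC is a stand-in for the real FIDO/WebAuthn signature scheme.
import hmac
import hashlib

DEVICE_SECRET = b"key-material-held-by-the-phone"  # never leaves the device

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    """What the security key (or phone) produces during login."""
    return hmac.new(DEVICE_SECRET, challenge + origin.encode(),
                    hashlib.sha256).digest()

def server_verify(challenge: bytes, expected_origin: str,
                  assertion: bytes) -> bool:
    """The server only accepts assertions bound to its own origin."""
    expected = hmac.new(DEVICE_SECRET, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

challenge = b"random-server-nonce"

# Legitimate login: the browser reports the real origin.
ok = sign_assertion(challenge, "https://accounts.google.com")

# Phishing: the attacker relays the challenge, but the browser signs
# for the attacker's look-alike origin, so verification fails.
phished = sign_assertion(challenge, "https://accounts.goog1e.com")

print(server_verify(challenge, "https://accounts.google.com", ok))       # True
print(server_verify(challenge, "https://accounts.google.com", phished))  # False
```

A one-time code from SMS or an authenticator app has no such binding, which is why it can be relayed by a phishing page while an origin-bound assertion cannot.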

Because Google is using the same standard here, just with different hardware, that phishing protection remains intact when you use your phone, too.

Bluetooth security keys aren’t a new thing, of course, and Google’s own Titan keys include a Bluetooth version (though they remain somewhat controversial). The user experience for those keys is a bit messy, though, since you have to connect the key and the device first. Google, however, says that it has done away with all of this thanks to a new protocol that uses Bluetooth but doesn’t necessitate the usual Bluetooth connection setup process. Sadly, though, the company didn’t quite go into details as to how this would work.

Google says this new feature will work with all Android 7+ devices that have Bluetooth and location services enabled. Pixel 3 phones, which include Google’s Titan M tamper-resistant security chip, get some extra protections, but the company is mostly positioning this as a bonus and not a necessity.

As far as the setup goes, the whole process isn’t all that different from setting up a security key (and you’ll still want to have a second or third key handy in case you ever lose or destroy your phone). You’ll be able to use this new feature for both work and private Google accounts.

For now, this also only works in combination with Chrome. The hope here, though, is to establish a new standard that will then be integrated into other browsers, as well. It’s only been a week or two since Google enabled support for logging into its own services with security keys on Edge and Firefox. That was a step forward. Now that Google offers a new option that’s even more convenient, though, it’ll likely be a while before these competing browsers offer support too, once again giving Google a bit of an edge.


Android – TechCrunch

Review: The $199 Echo Link turns the fidelity up to 11

The Echo Link takes streaming music and makes it sound better. Just wirelessly connect it to an Echo device and plug it into a set of nice speakers. It’s the missing link.

The Link bridges the gap between streaming music and a nice audio system. Instead of settling for the analog connection of an Echo Dot, the Echo Link serves audio over a digital connection and it makes just enough of a difference to justify the $200 price.

I plugged the Echo Link into the audio system in my office and was pleased with the results. This is the Echo device I’ve been waiting for.

In my case the Echo Link took Spotify’s 320 kbps stream and opened it up. The Link creates a wider soundstage and makes the music a bit more full and expansive. The bass hits a touch harder and the highs now have a new-found crispness. Lyrics are clearer and easier to pick apart. The differences are subtle. Everything is just slightly improved over the sound quality found when using an Echo Dot’s 3.5mm output.

Don’t have a set of nice speakers? That’s okay, Amazon also just released the Echo Link Amp, which features a built-in amplifier capable of powering a set of small speakers (read the review here).

Here’s the thing: I’m surprised Amazon is making the Echo Link. The device caters to what must be a small demographic of Echo owners looking to improve the quality of Pandora or Spotify when using an audio system. And yet, without support for local or streaming high resolution audio, it’s not good enough for audiophiles. This is for wannabe audiophiles. Hey, that’s me.

Review

There are Echos scattered throughout my house. The devices provide a fantastic way to access music and NPR. The tiny Echo Link is perfect for the system in my office where I have a pair of Definitive Technology bookshelf speakers powered by an Onkyo receiver and amp. I have a turntable and SACD player connected to the receiver but those are a hassle when I’m at my desk. The majority of the time I listen to Spotify through the Amazon Echo Input.

I added the Onkyo amplifier to the system last year and it made a huge difference to the quality. The music suddenly had more power. The two-channel amp pushes harder than the receiver alone, resulting in audio that is more expansive and clear. And at any volume, too. I didn’t know what I was missing. That’s the trick with audio. Most of the time the audio sounds great until it suddenly sounds better. The Echo Link provided me with the same feeling of discovery.

To be clear, the $200 Echo Link does not provide a night and day difference in my audio quality. It’s a slight upgrade over the audio outputted by a $20 Echo Input — and don’t forget, an Echo device (like the $20 Echo Input) is required to make the Echo Link work.

The Echo Link provides the extra juice lacking from the Echo Input or Dot. Those less expensive options output audio to an audio system, but only through an analog connection. The Echo Link offers a digital connection through Toslink or digital coax. It also has analog outputs powered by a DAC with better dynamic range and lower total harmonic distortion than the one found in the Input or Dot. It’s an easy way to improve the quality of music from streaming services.

The Echo Link, and Echo Link Amp, also feature a headphone amp. It’s an interesting detail. With this jack, someone could have the Echo Link on their desk and use it to power a set of headphones without any loss of quality.

I set up a simple A/B test to spot the differences between a Link and a Dot. First, I connected the Echo Link with a Toslink connection to my receiver and an Echo Input. I also connected an Echo Dot through its 3.5mm analog connection to the receiver. I created a group in the Alexa app of the devices. This allowed each of the devices to play the same source simultaneously. Then, as needed, I was able to switch between the Dot and Link with just a touch of a button, providing an easy and quick way to test the differences.

I’ll leave it up to you to justify the cost. To me, as someone who has invested money into a quality audio system, the extra cost of the Echo Link is worth it. But to others an Echo Dot could be enough.

It’s important to note that the Echo Link works a bit differently than other Echo devices connected to an audio system. When, say, a Dot is connected to an audio system, the internal speakers are turned off and all of the audio is sent to the system. The Echo Link doesn’t have to override the companion Echo. When an Echo Link is connected to an Echo device, the Echo still responds through its internal speakers; only music is sent to the Echo Link. For example, when the Echo is asked about the weather, the forecast is played back through the speakers in the Echo and not the audio system connected to the Echo Link. In most cases this allows the owner to turn off the high-power speakers and still have access to voice commands on the Echo.

The Echo Link takes streaming music and instantly improves the quality. In my case the improvements were slight but noticeable. It works with all the streaming services supported by Echo devices, but it’s important to note it does not work with Tidal’s high-res Master Audio tracks. The best the Echo Link can do is 320 kbps from Spotify or Tidal. This is a limiting factor and it’s not surprising. If the Echo Link supported Tidal’s Master Tracks, I would likely sign up for that service, and that is not in the best interest of Amazon which hopes I sign up for Amazon Music Unlimited.

I spoke to Amazon about the Echo Link’s lack of support for Tidal Master Tracks and they indicated they’re interested in hearing how customers will use the device before committing to adding support.

The Link is interesting. Google doesn’t have anything similar in its Google Home line. The Sonos Amp is similar, but with a built-in amplifier, it’s a closer competitor to the Echo Link Amp. Several high-end audio companies sell components that can stream audio over digital connections, yet none are as easy to use or as inexpensive as the Echo Link. The Echo Link is the easiest way to improve the sound of streaming music services.

Gadgets – TechCrunch

Gmail turns 15, gets smart compose improvements and email scheduling

Exactly fifteen years ago, Google decided to confuse everybody by launching its long-awaited web-based email client on April 1. This definitely wasn’t a joke, though, and Gmail went on to become one of Google’s most successful products. Today, to celebrate its fifteenth birthday (and maybe make you forget about today’s final demise of Inbox and tomorrow’s shutdown of Google+), the Gmail team announced a couple of new and useful Gmail features, including improvements to Smart Compose and the ability to schedule emails to be sent in the future.

Smart Compose, which tries to autocomplete your emails as you type them, will now be able to adapt to the way you write the greetings in your emails. If you prefer ‘Hey’ over ‘Hi,’ then Smart Compose will learn that. If you often fret over which subject to use for your emails, then there’s some relief here for you, too, because Smart Compose can now suggest a subject line based on the content of your email.

With this update, Smart Compose is now also available on all Android devices. Google says that it was previously only available on Pixel 3 devices, though I’ve been using it on my Pixel 2 for a while already, too. Support for iOS is coming soon.

In addition to this, Smart Compose is also coming to four new languages: Spanish, French, Italian and Portuguese.

That’s all very useful, but the feature that will likely get the most attention today is email scheduling. The idea here is as simple as the execution. The ‘send’ button now includes a drop-down menu that lets you schedule an email to be sent at a later time. Until now, you needed third-party services to do this, but now it’s directly integrated into Gmail.

Google is positioning the new feature as a digital wellness tool. “We understand that work can often carry over to non-business hours, but it’s important to be considerate of everyone’s downtime,” Jacob Bank, Director of Product Management, G Suite, writes in today’s announcement. “We want to make it easier to respect everyone’s digital well-being, so we’re adding a new feature to Gmail that allows you to choose when an email should be sent.”
