The Xbox Elite Wireless Controller Series 2 is a truly great game controller

Microsoft’s original Xbox Elite controller was a major step up for gamers, with customizable buttons, swappable physical controls and adjustable sensitivity for serious personalization. The new Xbox Elite Controller Series 2 has just landed, offering the same core feature set with improvements that add even more customization options, along with key hardware upgrades that take what was already one of the best gaming controllers available and make it that much better.

USB-C

This might seem like a weird place to start, but the fact that the new Xbox Elite 2 comes with USB-C for charging and wired connections is actually a big deal, especially given that just about every other gadget in our lives has moved on to adopting this standard. Micro USB is looking decidedly long in the tooth, and if you’re like me, one of the only reasons you still have those cables around at all is to charge your game controllers.

In the box, you get a braided USB-A to USB-C charging cable, which is plenty long enough to reach from your console to your couch at nine feet. Of course, you can also use your phone, tablet, MacBook or any other USB-C charger and cable combo to power up the Elite 2, which is why it’s such a nice upgrade.

This is big for one other key reason: Apple recently added Xbox controller compatibility to its iPad lineup, which also charges via USB-C. That makes this the perfect controller for anyone looking to turn their tablet into a portable gaming powerhouse, since it reduces the amount of kit you need to pack when you want to grab the controller and dig into some iPad gaming.

Adjustable everything

Probably the main reason to own the Elite 2 is that it offers amazing customization options. New to this generation, you can even adjust the resistance of the thumbsticks, which is immensely useful if you’re a frequent player of first-person shooter (FPS) games, for instance. This lets you tune the sensitivity of the sticks to help ensure you’re able to find the right balance of sensitivity vs. resistance for accurate aiming, and it should help pros and enthusiasts make the most of their own individual play style.

The shoulder triggers also now have even shorter hair trigger locks, which means you can fire quicker with shorter squeezes in-game. And in the case, you’ll find alternate thumbsticks that you can swap in for the pre-installed ones, as well as a D-pad you can use to replace the multi-directional pad.

On top of the hardware customization, you can also tweak everything about the controller in software on Windows 10 and Xbox One, using Microsoft’s Accessories app. You can even assign a button to act as a ‘Shift’ key to provide even more custom options, so that you can set up key combos to run even more inputs. Once you find a configuration you like, you can save it as a profile to the controller and switch quickly between them using a physical button on the controller’s front face.

Even if you’re not a hardcore multiplayer competitive gamer, these customization options can come in handy. I often use profiles that assign thumbstick clicks to the rear paddle buttons, for instance, which makes playing a lot of single-player games much more comfortable, especially during long sessions.

Dock and case included

The Xbox Elite 2 includes a travel case, just like the first generation, but this iteration is improved, too. It has a removable charging dock, which is a quality accessory in its own right. The dock offers pass-through charging even while the controller is inside the case, too, thanks to a USB-C cut-through that you can also seal with a rubberized flap when it’s not in use.

In addition to housing the charger and controller, the case can hold the additional sticks and D-pad, as well as the paddles when those aren’t in use. It’s got a mesh pocket for holding charging cables and other small accessories, and the exterior is a molded hard plastic wrapped in fabric that feels super durable, and yet doesn’t take up much more room than the controller itself when packed in a bag.

The case is actually a huge help in justifying that $179.99 price tag, since all of this would be a significant premium as an after-market add-on accessory for a standard controller.

Bottom line

Microsoft took its time with a successor to the original Xbox Elite Wireless Controller, and while at first glance you might think that not much has changed, there are actually a lot of significant improvements here. The controller also looks and feels better, with more satisfying button, pad and stick response, and a better grip thanks to the new semi-textured finish on the front of the controller.

USB-C and more customization options might be good enough reason even for existing Elite Controller owners to upgrade, but anyone on the fence about getting an Elite to begin with should definitely find this a very worthwhile upgrade over a standard Xbox One controller.

Gadgets – TechCrunch

Pixel 4 review: Google ups its camera game

Google’s first-party hardware has always been a drop in the bucket of global smartphone sales. Pixel devices have managed to crack the top five in the U.S. and Western Europe, but otherwise represent less than 1% of the overall market. It’s true, of course, that the company got a late start, largely watching on the sidelines as companies like Samsung and Huawei shipped millions of Android devices.

Earlier this year, Google admitted that it was feeling the squeeze of slowing smartphone sales along with the rest of the industry. During Alphabet’s Q1 earnings call, CEO Sundar Pichai noted that poor hardware numbers were a reflection of “pressure in the premium smartphone industry.”

Introduced at I/O, the Pixel 3a was an attempt to augment disappointing sales numbers with the introduction of a budget-tier device. With a starting price of $399, the device seemingly went over as intended. The 3a, coupled with more carrier partners, helped effectively double year-over-year growth for the line. Given all of this, it seems like a pretty safe bet that the six-month Pixel/Pixel “a” cycle will continue, going forward.

Of course, the addition of a mid-range device puts more onus on the company to differentiate the flagship. With a starting price of $799, the Pixel 4 certainly isn’t expensive by modern flagship standards. But Google needs to present enough distinguishing features to justify a $400 price gulf between devices — especially as the company has disclosed that software upgrades introduced on flagship devices will soon make their way onto their cheaper counterparts.

Indeed, the much-rumored and oft-leaked devices bring some key changes to the line. The company has finally given in and added a dual-camera setup to both premium models, along with an upgraded 90Hz display, face unlock, radar-based gestures and a whole bunch of additional software features.

The truth is that the Pixel has always occupied a strange place in the smartphone world. As the successor to Google’s Nexus partnerships, the product can be regarded as a showcase for Android’s most compelling features. But gone are the days of leading the pack with the latest version of the operating system. The fact that OnePlus devices already have Android 10 means Google’s going head to head against another reasonably priced manufacturer of quality handsets.


The Pixel line steps up a bit on the design side to distinguish the product from the “a” line. Google’s phones have never been as flashy as Samsung’s or Apple’s, and that’s still the case here, but a new dual-sided glass design (Gorilla Glass 5 on both), coupled with a metal band, does step up the premium feel a bit. The product is also a bit heavier and thicker than the 3, lending some heft to the device.

There are three colors now: black, white and a poppy “Oh So Orange,” which is available in limited quantities here in the U.S. The color power button continues to be a nice touch, lending a little character to the staid black and white devices. While the screen gets a nice update to 90Hz OLED, Google still has no interest in the world of notches or hole punches. Rather, it’s keeping pretty sizable bezels on the top and bottom.

The Pixel 4 gets a bit of a screen size boost from 5.5 to 5.7 inches, with an increase of a single pixel per inch, while the Pixel 4 XL stays put at 6.4 inches (with a PPI increase from 522 to 537). The dual front-facing camera has been ditched this time out, instead opting for a single eight-megapixel camera, similar to what you’ll find on the 3a.

Storage hasn’t changed, with both 64 and 128GB options for both models; RAM has been bumped up to a default 6GB from 4GB last time out. The processor, too, is the latest and greatest from Qualcomm, bumping from a Snapdragon 845 to an 855. Interestingly, however, the batteries have actually been downgraded.


The 4 and 4 XL sport 2,800mAh and 3,700mAh batteries, respectively. That should be offset a bit by new battery-saving features introduced in Android 10, but even still, that’s not the direction you want to see these things going.

The camera is, in a word, great. Truth be told, I’ve been using it to shoot photos for the site since I got the phone last week. This Google Nest Mini review, Amazon Echo review and Virgin Galactic space suit news were all shot on the Pixel 4. The phone isn’t yet a “leave your DSLR at home” proposition, of course, but damn if it can’t take a fantastic photo in less than ideal and mixed light with minimal futzing around.

There’s no doubt that this represents a small but important shift in philosophy for Google. After multiple generations of suggesting that software solutions could do more than enough heavy lifting on image processing, the company’s finally bit the bullet and embraced a second camera. Sometimes forward progress means abandoning past stances. Remember when the company dug its heels in on keeping the headphone jack, only to drop it the following year?


The addition of a second camera isn’t subtle, either. In fact, it’s hard to miss. Google’s adopted a familiar square configuration on the rear of the device. That’s just how phones look now, I suppose. Honestly, it’s fine once you conquer a bit of trypophobia, with a pair of lenses aligned horizontally and a sensor up top and flash on bottom — as one of last week’s presenters half joked, “we hope you’ll use it as a flash light.”


That, of course, is a reference to the Pixel’s stellar low-light capabilities. It’s been a welcome feature, in an age where most smartphone users continue to overuse their flashes, completely throwing off the photo in the process. Perhaps the continued improvements will finally break that impulse in people — though I’m not really getting my hopes up on that front. Old habits, etc.

The 4 and 4 XL have the same camera setup, adopting the 12.2-megapixel (wide-angle) lens from their predecessors and adding a 16-megapixel (telephoto) lens into the mix. I noted some excitement about the setup in my write-up. That’s not because the two-camera setup presents anything remarkable — certainly not in this era of three-, four- and five-camera flagships. It’s more about the groundwork that Google has laid in the generations leading up to this device.


Essentially it comes down to this: Look at what the company has been able to accomplish using software and machine learning with a single-camera setup. Now add a second, telephoto camera into the mix. See, Super Res Zoom is pretty impressive, all told. But if you really want a tighter shot without degrading the image in the process, optical zoom is still very much the way to go.

There’s a strong case to be made that the Pixel 4’s camera is the best in class. The pictures speak for themselves. The aforementioned TechCrunch shots were done with little or no manual adjustments or post-processing. Google offers on-screen adjustments, like the new dual-exposure control, which lets you manually adjust brightness and shadow brightness on the fly. Honestly, though, I find the best way to test these cameras is to use them the way most buyers will: by pointing and shooting.

The fact is that a majority of people who buy these handsets won’t be doing much fiddling with the settings. As such, it’s very much on handset makers to ensure that users get the best photograph by default, regardless of conditions. Once again, software is doing much of the heavy lifting. Super Res Zoom works well in tandem with the new lens, while Live HDR+ does a better job approximating how the image will ultimately look once fully processed. Portrait mode shots look great, and the device is capable of capturing them at variable depths, meaning you don’t have to stand a specific distance from the subject to take advantage of the well-done artificial bokeh.

Our video producer, Veanne, who is admittedly a far better photographer than I can ever hope to be, tested out the camera for the weekend. 

Although Veanne was mostly impressed by the Pixel 4’s camera and photo editing capabilities, she had three major gripes.

“Digital zoom is garbage.”


“In low lighting situations, you lose ambiance. Saturday evening’s intimate, warmly lit dinner looked like a cafeteria meal.”


“Bright images in low lighting gives you the impression that the moving objects would be in focus as well. That is not the case.”

Other additions round out the experience, including “Frequent Faces,” which learns the faces of subjects you frequently photograph. Once again, the company is quick to point out that the feature is both off by default and all of the processing happens on the device. Turning it off also deletes all of the saved information. Social features have been improved, as well, with quick access to third-party platforms like Snapchat and Instagram.

Google keeps pushing out improvements to Lens, as well. This time out, language translation, document scanning and text copy and pasting can be performed with a quick tap. Currently the language translation is still a bit limited, with only support for English, Spanish, German, Hindi and Japanese. More will be “rolling out soon,” per the company.


Gestures is a strange one. I’m far from the first to note that Google is far from the first to attempt the feature. The LG G8 ThinQ is probably the most recent prominent example of a company attempting to use gestures as a way to differentiate themselves. To date, I’ve not seen a good implementation of the technology — certainly not one I could ever see myself actually using day to day.

The truth is, no matter how interesting or innovative a feature is, people aren’t going to adopt it if it doesn’t work as advertised. LG’s implementation was a pretty big disappointment.

Simply put, the Pixel’s gestures are not that. They’re better in that, well, they work, pretty much as advertised. This is because the underlying technology is different. Rather than relying on cameras like other systems, the handset uses Project Soli, a long-promised system that utilizes a miniature radar chip to detect far more precise movement.

Soli does, indeed, work, but the precision is going to vary a good deal from user to user. The thing is, simply detecting movement isn’t enough; Soli also needs to distinguish intention. That means the system is designed to weed out the sort of accidental gestures we’re likely making around our phones all the time, and it appears to be calibrated for bigger, intentional movements.


That can be a little annoying for things like advancing tracks. I don’t think there are all that many instances where waving one’s hands across a device Obi-Wan Kenobi-style is really saving all that much time or effort versus touching a screen. If, however, Google was able to customize the experience to the individual over time using machine learning, it could be a legitimately handy feature.

That brings us to the next important point: functionality. So you’ve got this neat new piece of tiny radar that you’re sticking inside your phone. You say it’s low energy and more private than a camera. Awesome! So, how do you suggest I, you know, use it?

There are three key ways, at the moment:

  • Music playback
  • Alarm silencing
  • Waving at Pokémon

The first two are reasonably useful. The primary use case I can think of is when, say, your phone is sitting in front of you at your desk. Like mine is, with me, right now. Swiping my hand left to right a few inches above the device advances the track; right to left goes a track back. The movements need to be deliberate, from one end of the device to the other.

And then there’s the phenomenon of “Pokémon Wave Hello.” It’s not really correct to call the title a game, exactly. It’s little more than a way of showcasing Motion Sense — albeit an extremely delightful way.

You might have caught a glimpse of it at the keynote the other day. It came and went pretty quickly. Suddenly Pikachu was waving at the audience, appearing out of nowhere like so many wild Snorlaxes. Just as quickly, he was gone.

More than anything, it’s a showcase title for the technology. A series of five Pokémon, beginning with Pikachu, appear demanding you interact with them through a series of waves. It’s simple, it’s silly and you’ll finish the whole thing in about three minutes. That’s not really the point, though. Pokémon Wave Hello exists to:

  1. Get you used to gestures.
  2. Demonstrate functionality beyond simple features. Gaming, AR — down the road, these things could ultimately find fun and innovative ways to integrate Soli.

For now, however, use is extremely limited. There are some fun little bits, including dynamic wallpaper that reacts to movement. The screen also glows subtly when detecting you — a nice little touch (there’s a similar effect for Assistant, as well).

Perhaps most practical, however, is the fact that the phone can detect when you’re reaching for it and begin the unlocking process. That makes the already fast new Face Unlock feature even faster. Google ditched the fingerprint reader this time around, opting for neither a physical sensor nor an in-screen reader. Probably for the best on the latter front, given the pretty glaring security woes Samsung experienced last week when a British woman accidentally spoofed the reader with a $3 screen protector. Yeesh.

There are some nice security precautions on here. Chief among them is the fact that the unlock is done entirely on-device. All of the info is saved and processed on the phone’s Titan M chip, meaning it doesn’t get sent up to the cloud. That both makes it a speedier process and means Google won’t be sharing your face data with its other services — a fact Google felt necessary to point out, for obvious reasons.

For a select few of us, at least, Recorder feels like a legitimate game changer. And its ease of use and efficacy should be leaving startups like Otter.ai quaking at its potential, especially if/when Google opts to bring it to other Android handsets and iOS.

I was initially unimpressed by the app upon trying it out at last week’s launch event. It struggles to isolate audio in noisy environments — likely as much a hardware constraint as a software one. One on one, it’s far better, though attempting to, say, record audio from a computer could still use some work.


Open the app and hit record and you’ll see a waveform pop up. The line is blue when detecting speech and gray when hearing other sounds. Tap the Transcript button and you’ll see the speech populate the page in real time. From there you can save it with a title and tag the location.

The app will automatically tag keywords and make everything else searchable for easy access. In its first version, it already completely blows Apple’s Voice Memos out of the water. There’s no comparison, really. It’s in a different league. Ditto for other apps I’ve used over the years, like Voice Record.

As for the transcription itself, the results were still a little hit or miss. It’s not perfect — no AI I’ve encountered is. But it’s pretty good. I’d certainly recommend going back over the text before doing anything with it. Like Otter and other voice apps, you can play back the audio as it highlights words, karaoke-style.

The text can be saved to Google Drive, but can’t be edited in app yet. Audio can be exported, but not as a combined file. The punctuation leaves something to be desired and Recorder is not yet able to distinguish individual voices. These are all things a number of standalone services offer, along with a web-based platform. That means none of them are out of business yet, but if I were running any of them, I’d be pretty nervous right about now.

As someone who does interviews for a living, however, I’m pretty excited by the potential here. I can definitely see Recorder becoming one of my most-used work apps, especially after some of the aforementioned kinks get ironed out in the next version. As for those who don’t do this for a living, usefulness is probably more limited, though there are plenty of other potential uses, like recording school lectures.


The Pixel continues to distinguish itself through software updates and camera features. There are nice additions throughout that set it apart from the six-month-old 3a, as well, including a more premium design and new 90Hz display. At $799, the price is definitely a vast improvement over competitors like Samsung and Apple, while retaining flagship specs.

The Pixel 4 doesn’t exactly answer the question of what Google wants the Pixel to be, going forward. The Pixel 3a was confirmation that users were looking for a far lower barrier to entry. The Pixel 4, on the other hand, is priced above OnePlus’s excellent devices, nor is the product truly premium from a design perspective.

It’s unclear what the future will look like as Google works to address the shifting smartphone landscape. In the meantime, however, the future looks bright for camera imaging, and Google remains a driving force on that front.


Android – TechCrunch

The Analogue Pocket might be the perfect portable video game system

Very few modern tech companies have executed on their mission as consistently, and at as high a level of quality, as Analogue. The company’s obsessively engineered modern consoles for old-school cartridge-based video games are museum-quality hardware design, housing specially tuned processors that offer pitch- and pixel-perfect play of NES, Sega Genesis, SNES and other retro console games on modern HD TVs – and its new $199 Analogue Pocket aims to provide the best way to play classic portable console titles in similar high fidelity.

The Analogue Pocket is a portable gaming console that can play the entire library of Game Boy, Game Boy Color and Game Boy Advance games out of the box – natively, without emulation, so that the gaming experience is exactly as you remember it (or as it was intended, if this is your first experience with these classic titles). That’s not all, though: Using cartridge adapters, the Analogue Pocket can support Game Gear, Neo Geo Pocket Color, Atari Lynx and other games, too.


It uses two FPGAs (Field Programmable Gate Arrays), processors that have been programmed specifically to play these games back as they were originally intended, mimicking the operation of the original silicon found in the consoles these games were designed for with the faithfulness of true restoration hardware. The result is a great gaming experience that will feel like the original – but played on the Analogue Pocket’s much more impressive hardware, which offers a 3.5-inch, 1600×1440 LCD display with a very high resolution of 615 ppi. For those keeping track, that’s ten times the resolution of the original Game Boy display. And it’s color-tuned for amazing color rendering and brightness – it could actually be the best display on a dedicated gaming device, period, let alone for a retro console.
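If you want to check that resolution claim, the arithmetic is quick. (This assumes the original Game Boy’s 160×144 screen, which the article only implies.)

```python
# Compare the Pocket's 1600x1440 panel with the original Game Boy's 160x144 screen.
pocket_px = 1600 * 1440
gameboy_px = 160 * 144

print(pocket_px // gameboy_px)   # 100x the total pixel count
print(1600 // 160, 1440 // 144)  # 10x along each axis, hence "ten times the resolution"
```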

The Analogue Pocket also works with an accessory called the Analogue Dock (sold separately, pricing TBD), which adds HDMI out and Bluetooth/wired controller support, to turn the Pocket into a home console for your big screen TV, too. The dock offers two standard USB ports for wired controllers, and its Bluetooth support works with any of 8Bitdo’s excellent gamepads. It’s basically a Switch but for all your favorite Game Boy series games, and with what looks like much better quality hardware.


That would be plenty to offer in a portable console, but the Analogue Pocket is designed to do still more. It has a built-in synthesizer and sequencer for making digital music, and the second FPGA it’s packing is designed to be used specifically for development. It allows the development community to bring their own cores to the platform, which means it could potentially support a whole host of classic and ported games in the future.

Analogue Pocket is set to release some time in 2020, with a more specific date to be announced later on. It’s a natural next step for the company that delivered excellent gaming experiences via the Nt Mini, the Super Nt and the Mega Sg, but it’s still a nice, exciting surprise to find out that they’re tackling the rich history of mobile gaming next.


Why did last night’s ‘Game of Thrones’ look so bad? Here comes the science!

Last night’s episode of “Game of Thrones” was a wild ride and inarguably one of an epic show’s more epic moments — if you could see it through the dark and the blotchy video. It turns out even one of the most expensive and meticulously produced shows in history can fall prey to the scourge of low quality streaming and bad TV settings.

The good news is this episode is going to look amazing on Blu-ray or potentially in future, better streams and downloads. The bad news is that millions of people already had to see it in a way its creators surely lament. You deserve to know why this was the case. I’ll be simplifying a bit here because this topic is immensely complex, but here’s what you should know.

(By the way, I can’t entirely avoid spoilers, but I’ll try to stay away from anything significant in words or images.)

It was clear from the opening shots in last night’s episode, “The Long Night,” that this was going to be a dark one. The army of the dead faces off against the allied living forces in the darkness, made darker by a bespoke storm brought in by, shall we say, a Mr. N.K., to further demoralize the good guys.

If you squint you can just make out the largest army ever assembled

Thematically and cinematographically, setting this chaotic, sprawling battle at night is a powerful creative choice and a valid one, and I don’t question the showrunners, director, and so on for it. But technically speaking, setting this battle at night, and in fog, is just about the absolute worst case scenario for the medium this show is native to: streaming home video. Here’s why.

Compression factor

Video has to be compressed in order to be sent efficiently over the internet, and although we’ve made enormous strides in video compression and the bandwidth available to most homes, there are still fundamental limits.

The master video that HBO put together from the actual footage, FX, and color work that goes into making a piece of modern media would be huge: hundreds of gigabytes if not terabytes. That’s because the master has to include all the information on every pixel in every frame, no exceptions.

Imagine if you tried to “stream” a terabyte-sized video file. You’d have to be able to download around 200 megabytes per second for the full 80 minutes of this episode. Few people in the world have that kind of connection — it would basically never stop buffering. Even 20 megabytes per second is asking too much by a long shot. 2 megabytes per second is doable — that works out to 16 megabits, under the 25-megabit speed (that’s bits… divide by 8 to get bytes) we use to define broadband download speeds.
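If you want to check that back-of-the-envelope math yourself, here it is in a few lines of Python. (The 1 TB master size is just the rough ballpark from above, not a real file size.)

```python
# How fast would you have to download a ~1 TB master of an 80-minute episode?
master_bytes = 1e12      # assumed ~1 TB master file
runtime_s = 80 * 60      # 80-minute episode, in seconds

mb_per_s = master_bytes / runtime_s / 1e6
print(round(mb_per_s))   # ~208 MB/s, far beyond any home connection

# A realistic 2 MB/s stream, converted to the megabits used for broadband speeds:
print(2 * 8)             # 16 Mbit/s, under the 25 Mbit/s broadband benchmark
```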

So how do you turn a large file into a small one? Compression — we’ve been doing it for a long time, and video, though different from other types of data in some ways, is still just a bunch of zeroes and ones. In fact it’s especially susceptible to strong compression because of how one video frame is usually very similar to the last and the next one. There are all kinds of shortcuts you can take that reduce the file size immensely without noticeably impacting the quality of the video. These compression and decompression techniques fit into a system called a “codec.”
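As a toy illustration of that frame-to-frame similarity, here’s a Python sketch. It’s nothing like a real codec, just zlib showing that a mostly-unchanged frame “delta” squeezes down far smaller than the frame itself:

```python
import zlib

# Two fake "frames" that differ by a single byte, standing in for
# consecutive video frames that are almost identical.
frame1 = bytes(range(256)) * 64          # a 16 KB fake frame
frame2 = bytearray(frame1)
frame2[100] = 0                          # one "pixel" changed between frames

# Encode the difference from the previous frame: almost all zeroes.
delta = bytes(a ^ b for a, b in zip(frame1, frame2))

full = len(zlib.compress(bytes(frame2)))  # compressing the frame from scratch
diff = len(zlib.compress(delta))          # compressing just the change
print(diff < full)                        # True: the delta compresses far smaller
```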

But there are exceptions to that, and one of them has to do with how compression handles color and brightness. Basically, when the image is very dark, it can’t display color very well.

The color of winter

Think about it like this: There are only so many ways to describe colors in a few words. If you have one word you can say red, or maybe ochre or vermilion depending on your interlocutor’s vocabulary. But if you have two words you can say dark red, darker red, reddish black, and so on. The codec has a limited vocabulary as well, though its “words” are the numbers of bits it can use to describe a pixel.

This lets it succinctly describe a huge array of colors with very little data by saying: this pixel has this bit value of color, this much brightness, and so on. (I didn’t originally want to get into this, but this is what people are talking about when they say bit depth, or even “highest quality pixels.”)

But this also means that there are only so many gradations of color and brightness it can show. Going from a very dark grey to a slightly lighter grey, it might be able to pick 5 intermediate shades. That’s perfectly fine if it’s just on the hem of a dress in the corner of the image. But what if the whole image is limited to that small selection of shades?

Then you get what we saw last night. See how Jon (I think) is made up almost entirely of a handful of different colors (brightnesses of a similar color, really), with big, obvious borders between them?

This issue is called “banding,” and it’s hard not to notice once you see how it works. Images on video can be incredibly detailed, but places where there are subtle changes in color — often a clear sky or some other large but mild gradient — will exhibit large stripes as the codec goes from “darkest dark blue” to “darker dark blue” to “dark blue,” with no “darker darker dark blue” in between.
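You can watch banding happen in miniature with a few lines of Python, quantizing a smooth 0–255 brightness ramp down to an arbitrarily chosen five levels, the way a bandwidth-starved codec effectively must:

```python
# Quantize a smooth brightness gradient down to a handful of levels.
levels = 5
step = 256 // levels                             # 51 brightness units per band

gradient = list(range(256))                      # a smooth 0-255 ramp
quantized = [(v // step) * step for v in gradient]

# Only a few flat "bands" survive out of 256 original shades.
print(sorted(set(quantized)))                    # [0, 51, 102, 153, 204, 255]
```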

Check out this image.

Above is a smooth gradient encoded with high color depth. Below that is the same gradient encoded with lossy JPEG encoding — different from what HBO used, obviously, but you get the idea.

Banding has plagued streaming video forever, and it’s hard to avoid even in major productions — it’s just a side effect of representing color digitally. It’s especially distracting because obviously our eyes don’t have that limitation. A high-definition screen may actually show more detail than your eyes can discern from couch distance, but color issues? Our visual systems flag them like crazy. You can minimize it, but it’s always going to be there, until the point when we have as many shades of grey as we have pixels on the screen.

So back to last night’s episode. Practically the entire show took place at night, which removes about 3/4 of the codec’s brightness-color combos right there. It also wasn’t a particularly colorful episode, a directorial or photographic choice that highlighted things like flames and blood, but further limited the ability to digitally represent what was on screen.

It wouldn’t be too bad if the background were black and people were lit well so they popped out, though. The last straw was the introduction of the cloud, fog or blizzard, whatever you want to call it. This kept the brightness of the background just high enough that the codec had to represent it with one of its handful of dark greys, and the subtle movements of fog and smoke came out as blotchy messes (often called “compression artifacts”) as the compression desperately tried to pick which shade was best for a group of pixels.

Just brightening it doesn’t fix things, either — because the detail is already crushed into a narrow range of values, you just get a bandy image that never gets completely black, making it look washed out, as you see here:

(Anyway, the darkness is a stylistic choice. You may not agree with it, but that’s how it’s supposed to look and messing with it beyond making the darkest details visible could be counterproductive.)
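Why brightening can't rescue the footage follows from the same toy arithmetic (the shade values below are made up for illustration): once the encoder has collapsed a night scene into a few dark shades, scaling them up just spreads those same few shades apart.

```python
# Hypothetical: the only four dark shades the encoder kept for a night scene.
banded_dark = [0.0, 0.0667, 0.1333, 0.2]

# "Brightening" scales and lifts the values, but cannot invent new gradations:
# the bands remain, and true black is lost in the process.
brightened = [min(1.0, v * 3 + 0.05) for v in banded_dark]

print(sorted(brightened))  # still only four distinct shades, none of them black
```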

Now, it should be said that compression doesn’t have to be this bad. For one thing, the more data it is allowed to use, the more gradations it can describe, and the less severe the banding. It’s also possible (though I’m not sure where it’s actually done) to repurpose the rest of the codec’s “vocabulary” to describe a scene where its other color options are limited. That way the full bandwidth can be used to describe a nearly monochromatic scene even though strictly speaking it should be only using a fraction of it.
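The "repurposed vocabulary" idea can be sketched with the same toy quantizer (again a hypothetical illustration, not a feature of any named codec): if the encoder knows every value in a scene falls below, say, 0.25, it can stretch that narrow range to full scale before quantizing and scale it back on decode, preserving several times as many gradations.

```python
def quantize(v, levels=16):
    """Snap a 0.0-1.0 value to the nearest of `levels` discrete shades."""
    step = 1.0 / (levels - 1)
    return round(v / step) * step

# A dark scene: every brightness value sits below 0.25.
dark_scene = [i / 1000 for i in range(250)]

# Naive: quantize the dark values directly -- few distinct shades survive.
naive = {round(quantize(v), 4) for v in dark_scene}

# Adaptive: stretch 0-0.25 up to 0-1, quantize, then scale back on "decode".
adaptive = {round(quantize(v / 0.25) * 0.25, 4) for v in dark_scene}

print(len(naive), "vs", len(adaptive))  # the adaptive path keeps roughly 3x the shades
```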

But neither of these is likely an option for HBO: Increasing the bandwidth of the stream is costly, since this is being sent out to tens of millions of people — a bitrate increase big enough to change the quality would also massively swell their data costs. Distributing to that many people also introduces the risk of dreaded buffering or playback errors, which are obviously a big no-no. It’s even possible that HBO lowered the bitrate because of network limitations — “Game of Thrones” really is on the frontier of digital distribution.

And using an exotic codec might not be possible because only commonly used commercial ones are really capable of being applied at scale. Kind of like how we try to use standard parts for cars and computers.

This episode almost certainly looked fantastic in the mastering room and FX studios, where they not only had carefully calibrated monitors with which to view it but also were working with brighter footage (it would be darkened to taste by the colorist) and less or no compression. They might not even have seen the “final” version that fans “enjoyed.”

We’ll see the better copy eventually, but in the meantime the choice of darkness, fog, and furious action meant the episode was going to be a muddy, glitchy mess on home TVs.

And while we’re on the topic…

You mean it’s not my TV?

Well… to be honest, it might be that too. What I can tell you is that simply having a “better” TV by specs, such as 4K or a higher refresh rate or whatever, would make almost no difference in this case. Even built-in de-noising and de-banding algorithms would be hard pressed to make sense of “The Long Night.” And one of the best new display technologies, OLED, might even make it look worse! Its “true blacks” are much darker than an LCD’s backlit blacks, so the jump to the darkest grey could be way more jarring.

That said, it’s certainly possible that your TV is also set up poorly. Those of us sensitive to this kind of thing spend forever fiddling with settings and getting everything just right for exactly this kind of situation.

Usually “calibration” is actually a pretty simple process of making sure your TV isn’t on the absolute worst settings, which unfortunately many are out of the box. Here’s a very basic three-point guide to “calibrating” your TV:

  1. Go through the “picture” or “video” menu and turn off anything with a special name, like “TrueMotion,” “Dynamic motion,” “Cinema mode,” or anything like that. Most of these make things look worse, especially anything that “smooths” motion. Turn those off first and never ever turn them on again. Don’t mess with brightness, gamma, color space, anything you have to turn up or down from 50 or whatever.
  2. Figure out lighting by putting on a good, well-shot movie in the situation you usually watch stuff — at night maybe, with the hall light on or whatever. While the movie is playing, click through any color presets your TV has. These are often things like “natural,” “game,” “cinema,” “calibrated,” and so on and take effect right away. Some may make the image look too green, or too dark, or whatever. Play around with it and whichever makes it look best, use that one. You can always switch later – I myself switch between a lighter and darker scheme depending on time of day and content.
  3. Don’t worry about HDR, dynamic lighting, and all that stuff for now. There’s a lot of hype about these technologies and they are still in their infancy. Few will work out of the box and the gains may or may not be worth it. The truth is a well shot movie from the ’60s or ’70s can look just as good today as a “high dynamic range” show shot on the latest 8K digital cinema rig. Just focus on making sure the image isn’t being actively interfered with by your TV and you’ll be fine.

Unfortunately none of these things will make “The Long Night” look any better until HBO releases a new version of it. Those ugly bands and artifacts are baked right in. But if you have to blame anyone, blame the streaming infrastructure that wasn’t prepared for a show taking risks in its presentation, risks I would characterize as bold and well executed, unlike the writing in the show lately. Oops, sorry, couldn’t help myself.

If you really want to experience this show the way it was intended, the fanciest TV in the world wouldn’t have helped last night, though when the Blu-ray comes out you’ll be in for a treat. But here’s hoping the next big battle takes place in broad daylight.

Gadgets – TechCrunch

Game streaming’s multi-industry melee is about to begin

Almost exactly 10 years ago, I was at GDC participating in a demo of a service I didn’t think could exist: OnLive. The company had promised high-definition, low-latency streaming of games at a time when real broadband was uncommon, mobile gaming was still defined by Bejeweled (though Angry Birds was about to change that), and Netflix was still mainly in the DVD-shipping business.

Although the demo went well, the failure of OnLive and its immediate successors to gain any kind of traction or launch beyond a few select markets indicated that while it may be in the future of gaming, streaming wasn’t in its present.

Well, now it’s the future. Bandwidth is plentiful, speeds are rising, games are shifting from things you buy to services you subscribe to, and millions prefer to pay a flat fee per month rather than worry about buying individual movies, shows, tracks, or even cheeses.

Consequently, as of this week — specifically as of Google’s announcement of Stadia on Tuesday — we see practically every major tech and gaming company attempting to do the same thing. Like the beginning of a chess game, the board is set or nearly so, and each company brings a different set of competencies and potential moves to the approaching fight. Each faces different challenges as well, though they share a few as a set.

Google and Amazon bring cloud-native infrastructure and deep familiarity with operating services online, but is that enough to compete with the gaming know-how of Microsoft, with its own cloud clout, or Sony, which made strategic streaming acquisitions and has a service up and running already? What of third parties like Nvidia and Valve, publishers and storefronts that may leverage consumer trust and existing games libraries to jump-start a rival? It’s a wide-open field, all right.

Before we examine them, however, it is perhaps worthwhile to entertain a brief introduction to the gaming space as it stands today and the trends that have brought it to this point.
