News-reading app Flipboard expands local coverage, including coronavirus updates, to 12 more US metros

Earlier this year, personalized news aggregation app Flipboard expanded into local news. The feature brought local news, sports, real estate, weather, transportation coverage and more to 23 cities across North America. Today, Flipboard is bringing local news to 12 more U.S. metros and is adding critical local coronavirus coverage to all 35 supported locales.

The 12 new metros are: Baltimore, Charlotte, Cleveland, Detroit, Indianapolis, Nashville, Pittsburgh, Orlando, Raleigh, Salt Lake City, St. Louis and Tampa Bay.

They join the 23 cities that were already supported: Atlanta, Austin, Boston, Chicago, Dallas, Denver, Houston, Las Vegas, Los Angeles, Miami, Minneapolis-St. Paul, New Orleans, New York City, Philadelphia, Phoenix, Portland, Sacramento, San Diego, San Francisco Bay Area, Seattle, Toronto, Vancouver and Washington, D.C.

To offer local news in its app, Flipboard works with area partners big and small, like The Plain Dealer’s Cleveland.com, the Detroit Free Press and the St. Louis Post-Dispatch. It has now added the local news service Patch and ProPublica to that list, including ProPublica’s Local Reporting Network partners and its collaborative journalism project Electionland.

Patch alone is putting out more than 200 local coronavirus stories per day. Meanwhile, the ProPublica Local Reporting Network funds and jointly publishes year-long investigative projects with 23 local news organizations across the U.S. The Electionland initiative reports on problems that disenfranchise eligible voters, like misinformation, changing voting laws and rules, voter harassment, equipment failures and long lines at the polls.

To determine whether a user should be shown local news, the app relies on the user’s IP address, not a precise location, and may recommend stories relevant to local audiences on that basis. It also offers the Local sections inside the Explore tab in the Flipboard app. Once a section is added, users can browse their local news alongside other content they’re interested in, across a variety of topics.
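Flipboard hasn’t published how this works under the hood, but the general pattern is easy to sketch: resolve the IP to a coarse, city-level coordinate, then match it against the list of supported metros. A minimal Python sketch, in which the geo-IP stub, the coordinates and the 80 km cutoff are all illustrative stand-ins:

```python
from math import radians, sin, cos, asin, sqrt

# A few of the 35 supported locales, with rough center coordinates (illustrative).
SUPPORTED_METROS = {
    "St. Louis": (38.63, -90.20),
    "Tampa Bay": (27.95, -82.46),
    "Seattle": (47.61, -122.33),
}

# Stub for a city-level geo-IP service; real lookups are coarse, never GPS-precise.
DEMO_GEOIP = {"203.0.113.7": (38.65, -90.24)}  # 203.0.113.0/24 is a documentation range

def geoip_lookup(ip):
    return DEMO_GEOIP.get(ip)

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def metro_for_ip(ip, max_km=80):
    """Return the nearest supported metro, or None if the IP resolves too far away."""
    coarse = geoip_lookup(ip)
    if coarse is None:
        return None
    name, dist = min(
        ((m, haversine_km(coarse, loc)) for m, loc in SUPPORTED_METROS.items()),
        key=lambda pair: pair[1],
    )
    return name if dist <= max_km else None

print(metro_for_ip("203.0.113.7"))  # -> St. Louis
```

The cutoff matters because IP geolocation is only roughly accurate; a request that resolves far from any supported metro should simply get no local section.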

At present, there are two main areas of interest to news readers — the COVID-19 outbreak and the 2020 Election, both of which are now offered in the local sections. In addition to understanding the current state of the pandemic on a global and national level, Flipboard readers in the supported areas will be able to track how the COVID-19 outbreak is impacting where they live. This could include coverage of things like local ordinances, school closings, shelter-in-place laws, number of cases and deaths, testing resources and more.

“Understanding the decisions state and local governments make and their impact on the community is not only important, but gives people a greater connection to their local leaders and the media,” said Marci McCue, VP of Content and Communications at Flipboard. “For instance, as a local resident you may want coverage from national newspapers about the coronavirus outbreak, but even more important is a local source that tells you where you can get tested and the measures local leaders are taking that impact your daily life,” she noted.

The addition of coronavirus special coverage at a local level, aggregated from across publishers, means readers will be able to track stories without having to hop around different sites or apps from area newspapers or broadcasters.

For Flipboard’s business, adding local news allows advertisers to target against user interests, which may now include a city’s metro region.

Flipboard’s mobile app today reaches 145 million users per month. Local news is available in the supported metros on both iOS and Android.

India’s MX Player expands to US, UK and other markets in international push

MX Player, the on-demand video streaming service owned by Indian conglomerate Times Internet, is expanding to more than half a dozen new international markets, including the U.S. and the UK, to supply more entertainment content to the millions of people trapped in their homes.

The Singapore-headquartered service, which raised $111 million in a round led by Tencent last year, said it has expanded to Canada, Australia, New Zealand, Bangladesh and Nepal in addition to the U.S. and the UK.

Like in India, MX Player will offer its catalog at no charge to users in the international markets and monetize through ads, Karan Bedi, chief executive of the service, told TechCrunch in an interview.

The streaming service, which has amassed over 175 million monthly active users in India, is offering locally relevant titles in each market, he said. This is notably different from Disney’s Hotstar expansion into select international markets, where it has largely aimed to cater to the Indian diaspora.

MX Player is not currently offering any originally produced titles in any international market — instead offering movies and shows it has licensed from global and local studios — but the streamer plans to change that in the coming months, said Bedi.

Though the expansion comes as the world grapples with containing and fighting the coronavirus outbreak, Bedi said MX Player had already been testing the service in several markets for a few months.

“We believe in meeting this rapidly rising demand from discerning entertainment lovers with stories that strike a chord. To that end, we have collaborated with some of the best talent and content partners globally who will help bring us a step closer to becoming the go-to destination for entertainment across the world,” said Nakul Kapur, Business Head for International markets at MX Player, in a statement.

Times Internet acquired MX Player, an app popular for efficiently playing a plethora of locally stored media files on entry-level Android smartphones, in 2018 for about $140 million. In the years since, Times Internet has added a video streaming service to the app, followed by live TV channels in India.

MX Player has also bundled free music streaming (through Gaana, another property owned by Times Internet) and has introduced in-app casual games for users in the country.

Bedi said the company is working on bringing these additional services to international markets, and also looking to enter additional regions including the Middle East and South Asia.

Survey shows growth in podcasts and voice assistants, little change in streaming

A new annual survey, taken before the current COVID-19 crisis restricted movement in much of the U.S., suggests good news for Amazon, no threat to Facebook’s dominance and continued growth in podcasting.

Edison Research and Triton Digital released their annual Infinite Dial survey last week, compiling data on consumers’ use of smart speakers, podcasts, music streaming and social media from 1,500 people (aged 12 and older) to compare year-over-year changes. Here are a few interesting findings:

Voice assistants and smart speakers

Sixty-two percent said they use a voice-based virtual assistant, most commonly via a phone or a computer. There has been a lot written about interactive voice as the next major medium for human-computer interaction after mobile phones, so it’s noteworthy to see that use of the technology is still associated with personal computing devices rather than hands-free smart speakers placed in the surrounding environment.

Smart speaker ownership increased to 27% of respondents, up from 24% in 2019, and owners now report an average of 2.2 speakers each. In fact, the cohort that owns three or more speakers grew from one-quarter to one-third of owners in just a year, with Amazon Alexa continuing to dominate market share.

Forensic Architecture redeploys surveillance-state tech to combat state-sponsored violence

The specter of constant surveillance hangs over all of us in ways we don’t even fully understand, but it is also possible to turn the tools of the watchers against them. Forensic Architecture is exhibiting several long-term projects at the Museum of Art and Design in Miami that use the omnipresence of technology as a way to expose crimes and violence by oppressive states.

Over seven years Eyal Weizman and his team have performed dozens of investigations into instances of state-sponsored violence, from drone strikes to police brutality. Often these events are minimized at all levels by the state actors involved, denied or no-commented until the media cycle moves on. But sometimes technology provides ways to prove a crime was committed and occasionally even cause the perpetrator to admit it — hoisted by their own electronic petard.

Sometimes the evidence is actual state-deployed kit, like body cameras, or public records; but the group also uses private information co-opted by state authorities to track individuals, like digital metadata from messages and location services.

For instance, when Chicago police shot and killed Harith Augustus in 2018, the department released some footage of the incident, saying that it “speaks for itself.” But Forensic Architecture’s close inspection of the body cam footage, cross-referenced with other materials, made it obvious that the police violated numerous rules (including in the operation of the body cams) in their interaction with him, escalating the situation and ultimately killing a man who by all indications — except the official account — was attempting to comply. The investigation also helped bring to light additional footage that had been left out of a FOIA release, whether mistakenly or deliberately.

In another situation, a trio of Turkish migrants seeking asylum in Greece were shown, by analysis of their WhatsApp messages, images and location and time stamps, to have entered Greece and been detained by Greek authorities before being “pushed back” by unidentified masked escorts, having been afforded no legal recourse to asylum processes or the like. This is one of several recent cases in which private actors appear to be working in concert with the state to deprive people of their rights.

Situated testimony for survivors

I spoke with Weizman before the opening of this exhibition in Miami, where some of the latest investigations are being shown off. (Shortly after our interview he would be denied entry to the U.S. to attend the opening, with a border agent explaining that this denial was algorithmically determined; we’ll come back to this.)

The original motive for creating Forensic Architecture, he explained, was to elicit testimony from those who had experienced state violence.

“We started using this technique when in 2013 we met a drone survivor, a German woman who had survived a drone strike in Pakistan that killed several relatives of hers,” Weizman explained. “She has wanted to deliver testimony in a trial regarding the drone strike, but like many survivors her memory was affected by the trauma she has experienced. The memory of the event was scattered, it had lacunae and repetitions, as you often have with trauma. And her condition is like many who have to speak out in human rights work: The closer you get to the core of the testimony, the description of the event itself, the more it escapes you.”

The approach they took to help this woman, and later many others, jog their memories is called “situated testimony.” Essentially it amounts to exposing the person to media from the experience, allowing them to “situate” themselves in that moment. This is not without its own risks.

“Of course you must have the appropriate trauma professionals present,” Weizman said. “We only bring people who are willing to participate and perform the experience of being again at the scene as it happened. Sometimes details that would not occur to someone to be important come out.”

A digital reconstruction of a drone strike’s explosion was recreated physically for another exhibition.

But it’s surprising how effective it can be, he explained. One case exposed hitherto undisclosed American involvement.

“We were researching a Cameroon special forces detention center, where torture and death in custody occurred, for Amnesty International,” he explained. “We asked detainees to describe to us simply what was outside the window. How many trees, or what else they could see.” Such testimony could help place their exact location and orientation in the building and lead to more evidence, such as cameras across the street facing that room.

“And sitting in a room based on a satellite image of the area, one told us: ‘yes, there were two trees, and one was over by the fence where the American soldiers were jogging.’ We said, ‘wait, what, can you repeat that?’ They had been interviewed many times and never mentioned American soldiers,” Weizman recalled. “When we heard there were American personnel, we found Facebook posts from service personnel who were there, and were able to force the transfer of prisoners there to another prison.”

Weizman noted that the organization only goes where help is requested, and does not pursue what might be called private injustices, as opposed to public ones.

“We require an invitation, to be invited into this by communities that experience state violence. We’re not a forensic agency, we’re a counter-forensic agency. We only investigate crimes by state authorities.”

Using virtual reality: “Unparalleled. It’s almost tactile.”

In the latest of these investigations, being exhibited for the first time at MOAD, the team used virtual reality in its situated testimony work for the first time. While VR has proven to be somewhat less compelling than most would like on the entertainment front, it turns out to work quite well in this context.

“We worked with an Israeli whistleblower soldier regarding testimony of violence he committed against Palestinians,” Weizman said. “It has been denied by the Israeli prime minister and others, but we have been able to find Palestinian witnesses to that case, and put them in VR so we could cross reference them. We had victim and perpetrator testifying to the same crime in the same space, and their testimonies can be overlaid on each other.”

Dean Issacharoff — the soldier accused by Israel of giving false testimony — describes the moment he illegally beat a Palestinian civilian. (Caption and image courtesy of Forensic Architecture)

One thing about VR is that the sense of space is very real; if the environment is built accurately, things like sight-lines and positional audio can be extremely true to life. If someone says they saw the event occur here, but the state says it was here, and a camera this far away saw it at this angle… these incomplete accounts can be added together to form something more factual, and assembled into a virtual environment.
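Forensic Architecture’s reconstructions do this in full 3D with photogrammetry and game-engine tooling, but the core geometry can be shown in two dimensions: each account contributes an origin and a bearing, and the event must lie where the sightlines cross. A toy sketch, with every coordinate invented for illustration:

```python
import numpy as np

def intersect_sightlines(p1, bearing1, p2, bearing2):
    """Intersect two 2D sightlines, each an origin plus a compass bearing (degrees)."""
    d1 = np.array([np.sin(np.radians(bearing1)), np.cos(np.radians(bearing1))])
    d2 = np.array([np.sin(np.radians(bearing2)), np.cos(np.radians(bearing2))])
    # Solve p1 + t*d1 == p2 + s*d2 for t and s.
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t * d1

# A witness at a window (0, 0) looking due east; a camera at (50, -30) facing north:
print(intersect_sightlines((0, 0), 90.0, (50, -30), 0.0))  # -> [50.  0.]
```

With three or more accounts the lines won’t meet exactly, and the size of the leftover disagreement becomes a rough measure of how consistent the testimonies are.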

“That project is the first use of VR interviews we have done — it’s still in a very experimental stage. But it didn’t involve fatalities, so the level of trauma was a bit more controlled,” Weizman explained. “We have learned that the level and precision we can arrive at in reconstructing an incident is unparalleled. It’s almost tactile; you can walk through the space, you can see every object: guns, cars, civilians. And you can populate it until the witness is satisfied that this is what they experienced. I think this is a first, definitely in forensic terms, as far as uses of VR.”

A photogrammetry-based reconstruction of the area of Hebron where the incident took place.

In video of the situated testimony, you can see witnesses describing locations more exactly than they likely or even possibly could have without the virtual reconstruction. “I stood with the men at exactly that point,” says one, gesturing toward an object he recognized, then pointing upwards: “There were soldiers on the roof of this building, where the writing is.”

Of course it is not the digital recreation itself that forces the hand of those involved, but the incontrovertible facts it exposes. No one would ever have known that the U.S. had a presence at that detainment facility, and the country had no reason to say it did. The testimony wouldn’t even have been enough, except that it put the investigators onto a line of inquiry that produced data. And in the case of the Israeli whistleblower, the situated testimony undercuts official claims that the organization he represented had lied about the incident.

Avoiding “product placement” and tech incursion

Sophie Landres, MOAD’s curator of Public Programs and Education, was eager to add that the museum is not hosting this exhibit as a way to highlight how wonderful technology is. It’s important to put the technology and its uses in context rather than try to dazzle people with its capabilities. You may find yourself playing into someone else’s agenda that way.

“For museum audiences, this might be one of their first encounters with VR deployed in this way. The companies that manufacture these technologies know that people will have their first experiences with this tech in a cultural or entertainment context, and they’re looking for us to put a friendly face on these technologies that have been created to enable war and surveillance capitalism,” she told me. “But we’re not interested in having our museum be a showcase for product placement without having a serious conversation about it. It’s a place where artists embrace new technologies, but also where they can turn it towards existing power structures.”

Boots on backs mean this is not an advertisement for VR headsets or 3D modeling tools.

She cited a tongue-in-cheek definition of “mixed reality” referring to both digital crossover into the real world and the deliberate obfuscation of the truth at a greater scale.

“On the one hand you have mixing the digital world and the real, and on the other you have the mixed reality of the media environment, where there’s no agreement on reality and all these misinformation campaigns. What’s important about Forensic Architecture is they’re not just presenting evidence of the facts, but also the process used to arrive at these truth claims, and that’s extremely important.”

In openly presenting the means as well as the ends, Weizman and his team avoid succumbing to what he calls the “dark epistemology” of the present post-truth era.

“The arbitrary logic of the border”

As mentioned earlier, Weizman was denied entry to the U.S. for reasons unknown, but possibly related to the network of politically active people with whom he has associated for the sake of his work. Disturbingly, his wife and children were also stopped while entering the country a day before him and were separated at the airport for questioning.

In a statement issued publicly afterwards, Weizman dissected the event.

In my interview the officer informed me that my authorization to travel had been revoked because the “algorithm” had identified a security threat. He said he did not know what had triggered the algorithm but suggested that it could be something I was involved in, people I am or was in contact with, places to which I had traveled… I was asked to supply the Embassy with additional information, including fifteen years of travel history, in particular where I had gone and who had paid for it. The officer said that Homeland Security’s investigators could assess my case more promptly if I supplied the names of anyone in my network whom I believed might have triggered the algorithm. I declined to provide this information.

This much we know: we are being electronically monitored for a set of connections – the network of associations, people, places, calls, and transactions – that make up our lives. Such network analysis poses many problems, some of which are well known. Working in human rights means being in contact with vulnerable communities, activists and experts, and being entrusted with sensitive information. These networks are the lifeline of any investigative work. I am alarmed that relations among our colleagues, stakeholders, and staff are being targeted by the US government as security threats.

This incident exemplifies – albeit in a far less intense manner and at a much less drastic scale – critical aspects of the “arbitrary logic of the border” that our exhibition seeks to expose. The racialized violations of the rights of migrants at the US southern border are of course much more serious and brutal than the procedural difficulties a UK national may experience, and these migrants have very limited avenues for accountability when contesting the violence of the US border.

The works being exhibited, he said, “seek to demonstrate that we can invert the forensic gaze and turn it against the actors — police, militaries, secret services, border agencies — that usually seek to monopolize information. But in employing the counter-forensic gaze one is also exposed to higher-level monitoring by the very state agencies investigated.”

Forensic Architecture’s investigations are ongoing; you can keep up with them at the organization’s website. And if you’re in Miami, drop by MOAD to see some of the work firsthand.

How ‘The Mandalorian’ and ILM invisibly reinvented film and TV production

“The Mandalorian” was a pretty good show. On that most people seem to agree. But while a successful live-action Star Wars TV series is important in its own right, the way this particular show was made represents a far greater change, perhaps the most important since the green screen. The cutting-edge tech (literally) behind “The Mandalorian” creates a new standard and paradigm for media — and the audience will be none the wiser.

What is this magical new technology? It’s an evolution of a technique that’s been in use for nearly a century in one form or another: displaying a live image behind the actors. The advance is not in the idea but the execution: a confluence of technologies that redefines “virtual production” and will empower a new generation of creators.

As detailed in an extensive report in American Cinematographer Magazine (I’ve been chasing this story for some time, but suspected this venerable trade publication would get the drop on me), the production process of “The Mandalorian” is completely unlike any before, and it’s hard to imagine any major film production not using the technology going forward.

“So what the hell is it?” I hear you asking.

Meet “the Volume.”

Formally called Stagecraft, it’s 20 feet tall, 270 degrees around, and 75 feet across — the largest and most sophisticated virtual filmmaking environment yet made. ILM just today publicly released a behind-the-scenes video of the system in use, as well as a number of new details about it.

It’s not easy being green

In filmmaking terms, a “volume” generally refers to a space where motion capture and compositing take place. Some volumes are big and built into sets, as you might have seen in behind-the-scenes footage of Marvel or Star Wars movies. Others are smaller, plainer affairs, where actors whose motions will drive CG characters play out their roles.

But they generally have one thing in common: They’re static. Giant, bright green, blank expanses.

Does that look like fun to shoot in?

One of the most difficult things for an actor in modern filmmaking is getting into character while surrounded by green walls, foam blocks indicating obstacles to be painted in later, and people with mocap dots on their faces and suits with ping-pong balls attached. Not to mention that everything has green reflections that need to be lit or colored out.

Advances some time ago (think prequels-era Star Wars) enabled cameras to display a rough pre-visualization of what the final film would look like, instantly substituting CG backgrounds and characters onto monitors. Sure, that helps with composition and camera movement, but the world of the film isn’t there, the way it is with practical sets and on-site shoots.

Practical effects were a deliberate choice for “The Child” (AKA Baby Yoda) as well.

What’s more, because of the limitations in rendering CG content, the movements of the camera are often restricted to a dolly track or a few pre-selected shots for which the content (and lighting, as we’ll see) has been prepared.

This particular volume, called Stagecraft by ILM, the company that put it together, is not static. The background is a set of enormous LED screens such as you might have seen onstage at conferences and concerts. The Stagecraft volume is bigger than any of those — but more importantly, it’s smarter.

See, it’s not enough to just show an image behind the actors. Filmmakers have been doing that with projected backgrounds since the silent era! And that’s fine if you just want to have a fake view out of a studio window or fake a location behind a static shot. The problem arises when you want to do anything more fancy than that, like move the camera. Because when the camera moves, it immediately becomes clear that the background is a flat image.

The innovation in Stagecraft and other, smaller LED walls (the more general term for these backgrounds) is not only that the image shown is generated live in photorealistic 3D by powerful GPUs, but that the 3D scene is directly affected by the movements and settings of the camera. If the camera moves to the right, the image alters just as if it were a real scene.

This is remarkably hard to achieve. In order for it to work, the camera must send its real-time position and orientation to, essentially, a beast of a gaming PC, because this and other setups like it generally run on the Unreal engine (Epic does its own breakdown of the process here). That machine must take the movement and render it exactly in the 3D environment, with attendant changes to perspective, lighting, distortion, depth of field and so on — all fast enough that those changes can be shown on the giant wall nearly instantly. After all, if the movement of the background lagged the camera by more than a handful of frames, it would be noticeable to even the most naive viewer.
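ILM hasn’t published its pipeline, but the shape of the loop is straightforward: read the tracked camera pose, rebuild the virtual camera from it, render, and finish within the frame budget. A toy Python/NumPy version, in which the tracker feed, the display call and the 24 fps budget are all illustrative stand-ins for the real systems:

```python
import time
import numpy as np

FRAME_BUDGET_S = 1 / 24  # illustrative; the wall itself refreshes much faster

def read_tracker(t):
    """Stub for the mocap feed reporting the physical camera's pose."""
    position = np.array([np.sin(t), 1.8, 3.0])  # camera drifting side to side
    forward = np.array([0.0, 0.0, -1.0])        # looking toward the wall
    return position, forward

def view_matrix(position, forward, up=np.array([0.0, 1.0, 0.0])):
    """Right-handed look-along view matrix built from a tracked pose."""
    f = forward / np.linalg.norm(forward)
    r = np.cross(f, up); r /= np.linalg.norm(r)
    u = np.cross(r, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f
    m[:3, 3] = -m[:3, :3] @ position
    return m

def render_and_display(view):
    """Stub for the engine render (Unreal, in ILM's case) and the LED-wall output."""
    pass

for frame in range(24):                  # one second of "filming"
    start = time.monotonic()
    pos, fwd = read_tracker(start)       # where the real camera is right now
    render_and_display(view_matrix(pos, fwd))
    elapsed = time.monotonic() - start
    if elapsed > FRAME_BUDGET_S:
        print(f"frame {frame} late by {elapsed - FRAME_BUDGET_S:.4f}s: background lags")
    else:
        time.sleep(FRAME_BUDGET_S - elapsed)
```

The real systems layer lens distortion, depth of field and frustum-dependent resolution on top of this, but the hard constraint is the same: the whole round trip from tracker to wall has to beat the frame budget, every frame.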

Yet fully half of the scenes in “The Mandalorian” were shot within Stagecraft, and my guess is no one had any idea. Interior, exterior, alien worlds or spaceship cockpits, all used this giant volume for one purpose or another.

There are innumerable technological advances that have contributed to this; “The Mandalorian” could not have been made as it was five years ago. The walls weren’t ready; the rendering tech wasn’t ready; the tracking wasn’t ready — nothing was ready. But it’s ready now.

It must be mentioned that Jon Favreau has been a driving force behind this filmmaking method for years now; films like the remake of “The Lion King” were in some ways tech tryouts for “The Mandalorian.” Combined with advances made by James Cameron in virtual filmmaking, and, of course, the indefatigable Andy Serkis’s work in motion capture, this kind of production is only just now becoming realistic due to a confluence of circumstances.

Not just for SFX

Of course, Stagecraft is probably also one of the most expensive and complex production environments ever used. But what it adds in technological overhead (and there’s a lot) it more than pays back in all kinds of benefits.

For one thing, it nearly eliminates on-location shooting, which is phenomenally expensive and time-consuming. Instead of going to Tunisia to get those wide-open desert shots, you can build a sandy set and put a photorealistic desert behind the actors. You can even combine these ideas for the best of both worlds: Send a team to scout locations in Tunisia and capture them in high-definition 3D to be used as a virtual background.

This last option produces an amazing secondary benefit: Reshoots are way easier. If you filmed at a bar in Santa Monica and changes to the dialogue mean you have to shoot the scene over again, there’s no need to wrangle permits and painstakingly light the bar again. Instead, on the first visit you carefully capture the whole scene, with the exact lighting and props you had, and use that as a virtual background for the reshoots.

The fact that many effects and backgrounds can be rendered ahead of time and shot in-camera rather than composited in later saves a lot of time and money. It also streamlines the creative process, letting filmmakers and actors make decisions on the spot, since the volume reacts to their needs rather than vice versa.

Lighting is another thing that is vastly simplified, in some ways at least, by something like Stagecraft. The bright LED wall can provide a ton of illumination, and because it actually depicts the scene, that illumination is accurate to the scene’s needs. The red-lit interior of a space station, with the usual falling sparks and so on, casts red onto the actors’ faces and of course the highly reflective helmet of the Mandalorian himself. Yet the team can also tweak it, for instance sticking a bright white line high on the LED wall, out of sight of the camera, that creates a pleasing highlight on the helmet.

Naturally there are some trade-offs. At 20 feet tall, the volume is large but not so large that wide shots won’t capture the top of it, above which you’d see cameras and a different type of LED (the ceiling is also a display, though not as powerful). This necessitates some rotoscoping and post-production, or limits the angles and lenses one can shoot with — but that’s true of any soundstage or volume.

A shot like this would need a little massaging in post, obviously.

The size of the LEDs, that is, of the pixels themselves, also limits how close the camera can get to them, and of course you can’t zoom in on an object for closer inspection. If you’re not careful, you’ll end up with moiré patterns, those stripes you often see in images of screens.
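The stripes are a textbook aliasing effect: the camera’s sensor samples the wall’s regular pixel grid, and when the grid’s spatial frequency gets too close to the sensor’s sampling rate, the difference between the two shows up as a slow, visible beat. A quick NumPy illustration, with made-up numbers:

```python
import numpy as np

f_wall = 500     # LED pixel grid: 500 cycles across the frame width
n_sensor = 480   # camera sensor: 480 photosites across the same width

xs = np.arange(n_sensor) / n_sensor        # sensor sample positions
sampled = np.sin(2 * np.pi * f_wall * xs)  # what the sensor records

# The dominant frequency in the sampled image is the alias |500 - 480| = 20:
spectrum = np.abs(np.fft.rfft(sampled))
print(np.argmax(spectrum[1:]) + 1)         # -> 20 broad, visible stripes
```

The effect is worst when the imaged LED pitch lands close to the sensor’s own pixel pitch, which is why the wall effectively dictates a minimum working distance and careful lens choices.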

Stagecraft is not the first application of LED walls — they’ve been used for years at smaller scales — but it is certainly by far the most high-profile, and “The Mandalorian” is the first real demonstration of what’s possible using this technology. And believe me, it’s not a one-off.

I’ve been told that nearly every production house is building or experimenting with LED walls of various sizes and types — the benefits are that obvious. TV productions can save money but look just as good. Movies can be shot on more flexible schedules. Actors who hate working in front of green screens may find this more palatable. And you better believe commercials are going to find a way to use these as well.

In short, a few years from now it’s going to be uncommon to find a production that doesn’t use an LED wall in some form or another. This is the new standard.

This is only a general overview of the technology that ILM, Disney and their many partners and suppliers are working on. In a follow-up article I’ll be sharing more detailed technical information directly from the production team and technologists who created Stagecraft and its attendant systems.