Is China building the metaverse?

There is a heated debate on the state of the race between the United States and China to dominate in AI. But perhaps the more strategic question is whether China is building the metaverse.

Built upon infrastructural technologies like AI, the metaverse refers to the vast array of digital experiences and ecosystems, from e-commerce and entertainment to social media and work, where we spend more and more of our lives. It’s soon going to be hard to conceive of a world in which much of our social and economic lives are not defined by the rules of the metaverse. To the builder goes the opportunity to establish rules to their own benefit.

In truth, both the U.S. and China are trying to build and lay claim to the metaverse. Other actors, such as Europe, are trying as well, but they simply don't control enough of the core technologies that make the metaverse possible.

These core technologies include AI, 5G, end-user devices (from smartphones and notebooks to smartwatches and eyewear) and the sector-straddling super apps that bring everything together. Competence and dominance across these four areas is what may give China an insurmountable head start over the U.S. in the race to build the future of the virtualized human experience.

China’s AI advantage

The Chinese leadership understands that AI is revolutionizing virtually all aspects of social life, including consumption. AI is a top priority for government and business, and the Chinese government has called for China to achieve major new breakthroughs by 2025 and become the global leader in AI by 2030.

If the metaverse does become the successor to the internet, who builds it, and how, will be extremely important to the future of the economy and society as a whole.

The strategy was initially outlined in the Chinese government’s New Generation Artificial Intelligence Development Plan in 2017. It has since spurred both new policies and billions of dollars of R&D investments from ministries, provincial governments and private companies.

As a result of China's AI initiatives, the American advantage in the sector has been steadily eroding: In 2017, the U.S. had an 11x lead over China; by 2019, that lead had shrunk to 7x, and by 2020 to a narrow 6x. Even that lead is contested, and the Pentagon's former chief software officer went so far as to say that China already has an insurmountable lead in AI and machine learning.

Moreover, some question the American lead when it comes to the availability of training data. In the privacy versus public good debate, the U.S. tends to lean toward privacy, whereas China has long exercised government intervention in maintaining a civil society as a public good.

Finally, China has access to vast data sets to train AI, which presents a significant strategic advantage, especially considering the country’s population of 1.4 billion.

China builds the devices

The capacity to build and ultimately become the preeminent force in the metaverse starts with China’s long-standing and unrivaled dominance of consumer device manufacturing. From smartphones and notebooks to AR and VR headsets, Chinese manufacturers are building the largest portion and widest varieties of the devices that consumers need to access digital platforms and social experiences. The most advanced design and production competencies are likely to already reside in cities like Shenzhen.

Facebook researchers build better skin and fingertips for softer, more sensitive robots

According to Facebook AI Research, the next generation of robots should be much better at feeling — not emotions, of course, but using the sense of touch. And to advance the ball in this relatively new area of AI and robotics research, the company and its partners have built a new kind of electronic skin and fingertip that are inexpensive, durable, and provide a basic and reliable tactile sense to our mechanical friends.

The question of why exactly Facebook is looking into robot skin is obvious enough that AI head Yann LeCun took it on preemptively on a media call showing off the new projects.

Funnily enough, he recalled, it started with Zuckerberg noting that the company seemed to have no good reason to be looking into robotics. LeCun seems to have taken this as a challenge and started looking into it, but a clear answer emerged in time: if Facebook was to be in the business of providing intelligent agents — and what self-respecting tech corporation isn’t? — then those agents need to understand the world beyond the output of a camera or microphone.

The sense of touch isn’t much good at telling whether something is a picture of a cat or a dog, or who in a room is speaking, but if robots or AIs plan to interact with the real world, they need more than that.

“What we’ve become good at is understanding pixels and appearances,” said FAIR research scientist Roberto Calandra. “But understanding the world goes beyond that. We need to go towards a physical understanding of objects to ground this.”

While cameras and microphones are cheap and there are lots of tools for efficiently processing that data, the same can’t be said for touch. Sophisticated pressure sensors simply aren’t popular consumer products, and so any useful ones tend to stay in labs and industrial settings.

DIGIT, Facebook's camera-based fingertip sensor, was released in 2020 as an open source design; a tiny camera pointed at the underside of the soft pad produces a detailed image of the item being touched. You can see the fingertips themselves in the image at top; the sensor is quite sensitive, as you can see from the detailed maps it's able to create when touching various items:

Objects shown above images of the signals produced by the robotic fingertip.

Image Credits: Facebook
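To make the camera-based idea concrete, here is a minimal, hypothetical sketch of the underlying principle (not Facebook's actual DIGIT pipeline): a reference image of the untouched gel is subtracted from the live camera frame, and pixels that change are where the pad is being deformed by contact.

```python
# Hypothetical sketch of camera-based tactile sensing in the style of DIGIT.
# Assumed: a reference frame of the untouched gel and a current frame from
# the fingertip camera; the threshold value is made up for illustration.
import numpy as np

def contact_map(reference: np.ndarray, frame: np.ndarray,
                threshold: float = 10.0) -> np.ndarray:
    """Return a boolean mask of pixels where the gel appears deformed.

    reference, frame: (H, W, 3) uint8 images from the fingertip camera.
    threshold: mean per-channel intensity change treated as contact.
    """
    # Cast to a signed type before subtracting to avoid uint8 wraparound.
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff.mean(axis=2) > threshold

# Toy usage: a synthetic "press" in the center of an 8x8 pad.
ref = np.full((8, 8, 3), 100, dtype=np.uint8)
cur = ref.copy()
cur[3:5, 3:5] += 50  # simulated deformation brightens these pixels
mask = contact_map(ref, cur)
print(mask.sum())  # 4 pixels register contact
```

A real system would go further, using the lighting and texture of the gel to recover fine surface geometry rather than a simple contact mask, but the frame-differencing step above is the kind of cheap, camera-native processing that makes this approach attractive.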

The ReSkin approach has roots dating back to 2009; we wrote about the MIT project called GelSight in 2014, and again in 2020, when it spun out into a company that is now the manufacturing partner for this well-documented approach to touch. Basically, magnetic particles are suspended in a soft gel surface, and a magnetometer beneath it senses the displacement of those particles, translating those movements into accurate maps of the forces causing them.

One of the advantages of the GelSight-type system is that the hard component — the chip with the magnetometer and logic and so on — is totally separate from the soft component, which is just a flexible pad impregnated with magnetic dots. That means the surface can get dirty or scratched and is easily replaced, while the sensitive part can hide safely below.

In the case of ReSkin, that means you can hook up a bunch of the chips in any shape, lay a slab of magnetic elastomer on top, integrate the signals and get touch information from the whole surface. Well, it's not quite that simple, since you still have to calibrate it, but it's far simpler than other artificial-skin systems, few of which could operate at scales beyond a couple of square inches.
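The calibration step can be sketched in a few lines. This is a hypothetical toy model, not ReSkin's actual algorithm: it assumes a linear relationship between the magnetometer's field change and the applied force, presses with known forces to collect data, then fits the map by least squares.

```python
# Hypothetical sketch of the ReSkin sensing principle: pressing the
# elastomer shifts its magnetic particles, and a magnetometer below reads
# the resulting field change. Calibration fits a model from field deltas
# to force; a simple least-squares linear map stands in for it here.
import numpy as np

rng = np.random.default_rng(0)

# Calibration data: known applied forces (N) and the 3-axis magnetometer
# deltas they produced. In practice this comes from pressing with a probe
# of known force; here it is synthesized with an assumed linear response.
true_gain = np.array([0.2, -0.1, 0.5])   # field delta per newton (made up)
forces = rng.uniform(0.0, 5.0, size=50)  # applied calibration forces
readings = np.outer(forces, true_gain)   # (50, 3) magnetometer deltas

# Fit force = readings @ w by least squares.
w, *_ = np.linalg.lstsq(readings, forces, rcond=None)

def estimate_force(delta_b: np.ndarray) -> float:
    """Estimate normal force (N) from a 3-axis magnetometer delta."""
    return float(delta_b @ w)

print(round(estimate_force(np.array([0.4, -0.2, 1.0])), 2))  # 2.0 N press
```

Tiling many chips under one elastomer slab would mean repeating this fit per sensor and stitching the per-chip estimates into a force map; real skins also handle nonlinearity and drift, which a linear fit like this ignores.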

You can even make it into little dog shoes, because why not?

Animated image of a dog with pressure-sensing pads on its feet and the readings from them.


With a pressure-sensitive surface like this, robots and other devices can more easily sense the presence of objects and obstacles, without relying on, say, extra friction from the joint exerting force in that direction. This could make assistive robots much more gentle and responsive to touch, though there aren't many assistive robots out there to begin with. Part of the reason is that, without a good sense of touch, they can't be trusted not to crush things or people.

Facebook’s work here isn’t about new ideas, but about making an effective approach more accessible and affordable. The software framework will be released publicly and the devices can be bought for fairly cheap, so it will be easier for other researchers to get into the field.