Axon adds license plate recognition to police dash cams, but heeds ethics board’s concerns

Law enforcement tech outfitter Axon has announced that it will include automated license plate recognition in its next generation of dash cams. But its independent ethics board has simultaneously released a report warning of the dire consequences should this technology be deployed irresponsibly.

Axon makes body and dash cams for law enforcement, the platform on which that footage is stored (Evidence.com), and some of the weapons officers use (Taser, the name by which the company was originally known). Fleet 3 is its new dash cam model; by recognizing plate numbers, it will be able to, for example, run requested plates automatically without an officer having to type them in while driving.
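
For a rough sense of what that automation looks like in practice, here is a minimal sketch of checking a recognized plate against a list of requested plates. The hotlist, the normalization rule, and the function names are all hypothetical, not anything Axon has described.

```python
import re

# Hypothetical hotlist of requested plates (stolen vehicles, warrants, etc.).
# In a real deployment this would come from a dispatch or records system.
HOTLIST = {"ABC1234", "XYZ9876"}

def normalize_plate(raw: str) -> str:
    """Uppercase the OCR output and strip punctuation so 'abc-1234' matches 'ABC1234'."""
    return re.sub(r"[^A-Z0-9]", "", raw.upper())

def check_plate(ocr_text: str) -> bool:
    """Return True if the recognized plate is on the hotlist."""
    return normalize_plate(ocr_text) in HOTLIST

# Example: the ALPR module hands us a raw read from a passing car.
if check_plate("abc 1234"):
    print("Alert: requested plate spotted")
```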

The idea of including some kind of image recognition in these products has naturally occurred to Axon, and indeed there are many situations in law enforcement where such a thing would be useful; automated license plate recognition, or ALPR, is no exception. But the ethical issues involved in this and other forms of image analysis (identifying warrant targets from body cam footage, for instance) are many and serious.

In an effort to earnestly engage with these issues and also to not appear evil and arbitrary (as otherwise it might), Axon last year set up an independent advisory board that would be told of Axon’s plans and ideas and weigh in on them in official reports. Today they issued their second, on the usage of ALPR.

Although I’ll summarize a few of its main findings below, the report actually makes for very interesting reading. The team begins by admitting that there is very little information on how police actually use ALPR data, which makes it difficult to say whether it’s a net positive or negative, or whether this or that benefit or risk is currently in play.

That said, the very fact that ALPR use is largely undocumented is itself evidence that authorities have neglected to understand and limit the potential uses of this technology.


“The unregulated use of ALPRs has exposed millions of people subject to surveillance by law enforcement, and the danger to our basic civil rights is only increasing as the technology is becoming more common,” said Barry Friedman, NYU law professor and member of the ethics board, in a press release. “It is incumbent on companies like Axon to ensure that ALPRs serve the communities who are subject to ALPR usage. This includes guardrails to ensure their use does not compromise civil liberties or worsen existing racial and socioeconomic disparities in the criminal justice system.”

You can see that the ethics board does not pull its punches. It makes a number of recommendations to Axon, and it should come as no surprise that transparency is at the head of them.

  • Law enforcement agencies should not acquire or use ALPRs without going through an open, transparent, democratic process, with adequate opportunity for genuinely representative public analysis, input, and objection.
  • Agencies should not deploy ALPRs without a clear use policy. That policy should be made public and should, at a minimum, address the concerns raised in this report.
  • Vendors, including Axon, should design ALPRs to facilitate transparency about their use, including by incorporating easy ways for agencies to share aggregate and de-identified data. Each agency then should share this data with the community it serves.

And let’s improve security too, please.

Interestingly, the board also makes a suggestion on behalf of conscientious objectors to the current draconian scheme of immigration enforcement: “Vendors, including Axon, must provide the option to turn off immigration-related alerts from the National Crime Information Center so that jurisdictions that choose not to participate in federal immigration enforcement can do so.”

There’s an aspect of states’ rights and plenty of other things wrapped up in that, but it’s a serious consideration these days. A system like this shouldn’t be a cat’s paw for the feds.

Axon, for its part, isn’t making any particularly specific promises, partly because the board’s recommendations reach beyond what it is capable of promising. But it did agree that the data collected by its systems will never be sold for commercial purposes. “We believe the data is owned by public safety agencies and the communities they serve, and should not be resold,” said Axon founder and CEO Rick Smith in the same press release.

I asked for Axon’s perspective on the numerous other suggestions made in the report. A company representative said that Axon appreciates the board’s “thoughtful guidance” and agrees with “their overall approach.” More specifically, the statement continued:

In the interest of transparency, both with our law enforcement customers and the communities they serve, we have announced this initiative approximately a year ahead of initial deployments of Axon Fleet 3. This time period will give us the opportunity to define best practices and a model framework for implementation through conversations with leading public safety and civil liberties groups and the Ethics Board. Prior to releasing the product, we will issue a specific and detailed outline of how we are implementing relevant safeguards including items such as data retention and ownership, and creating an ethical framework to help prevent misuse of the technology.

It’s good that this technology is being deployed amidst a discussion of these issues, but the ethics board isn’t a governing body, and Axon (let alone its advisory ethics board) can’t dictate public policy.

This technology is coming, and if the communities most affected by it and technologies like it want to protect themselves, or if others want to ensure they are protected, the issues raised in the report should be considered carefully and brought up as a matter of policy with local governments. That’s where the recommended changes can really take root.


Police body-cam maker Axon says no to facial recognition, for now

Facial recognition is a controversial enough topic without bringing in everyday policing and the body cameras many (but not enough) officers wear these days. But Axon, which makes many of those cameras, solicited advice on the topic from an independent research board, and in accordance with its findings has opted not to use facial recognition for the time being.

The company, formerly known as Taser, established its “AI and Policing Technology Ethics Board” last year, and the group of 11 experts from a variety of fields just issued their first report, largely focused (by their own initiative) on the threat of facial recognition.

The advice they give is unequivocal: don’t use it — now or perhaps ever.

More specifically, their findings are as follows:

  • Facial recognition simply isn’t good enough right now for it to be used ethically.
  • Don’t talk about “accuracy”; talk about specific false negative and false positive rates, since those are more revealing and relevant.
  • Any facial recognition model that is used shouldn’t be overly customizable, or it will open up the possibility of abuse.
  • Any application of facial recognition should only be initiated with the consent and input of those it will affect.
  • Until there is strong evidence that these programs provide real benefits, there should be no discussion of use.
  • Facial recognition technologies do not exist, nor will they be used, in a political or ethical vacuum, so consider the real world when developing or deploying them.

The full report may be read here; there’s quite a bit of housekeeping and internal business, but the relevant part starts on page 24. Each of the above bullet points gets a couple pages of explanation and examples.

Axon, for its part, writes that it is quite in agreement: “The first board report provides us with thoughtful and actionable recommendations regarding face recognition technology that we, as a company, agree with… Consistent with the board’s recommendation, Axon will not be commercializing face matching products on our body cameras at this time.”

Not that they won’t be looking into it. The idea, I suppose, is that the technology will never be good enough to provide the desired benefits if no one is advancing the science that underpins it. The report doesn’t object except to advise the company that it adhere to the evolving best practices of the AI research community to make sure its work is free from biases and systematic flaws.

One interesting point that isn’t always brought up is the difference between face recognition and face matching. Although the former is the colloquial catch-all term for what we think of as potentially invasive, biased technology, in the terminology used here it is distinct from the latter.

Face recognition is just finding a face in the picture — this can be used by a smartphone to focus its camera or apply an effect, for instance. Face matching is taking the features of the detected face and comparing them to a database in order to match the face to one on file — that could be to unlock your phone using Face ID, but it could also be the FBI comparing everyone entering an airport to the most wanted list.
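
To make the distinction concrete, here is a minimal sketch of the matching step; the 128-dimensional embeddings and the similarity threshold are placeholder assumptions, standing in for whatever face-embedding model a vendor actually uses.

```python
import numpy as np

# Face *matching*: compare a face's feature vector against a database of known faces.
# The embeddings below are random placeholders; in practice they would come from a
# face-embedding model run on detected face crops.
rng = np.random.default_rng(0)
database = {
    "person_a": rng.normal(size=128),
    "person_b": rng.normal(size=128),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(embedding: np.ndarray, threshold: float = 0.6) -> str | None:
    """Return the best-matching identity, or None if nothing clears the threshold."""
    name, score = max(((n, cosine(embedding, e)) for n, e in database.items()),
                      key=lambda item: item[1])
    return name if score >= threshold else None

# Face *recognition* in the article's sense stops before this step: it only needs
# to know where the faces are, e.g. so they can be blurred or focused on.
query = database["person_a"] + rng.normal(scale=0.05, size=128)  # a noisy re-observation
print(match_face(query))  # -> "person_a"
```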

Axon uses face recognition and, to a lesser extent, face matching to process the many, many hours of video that body cam-equipped police departments produce. When that video is needed as evidence, faces other than those of the people directly involved may need to be blurred out, and you can’t do that unless you know where the faces are and which is which.

That particular form of the technology seems benign in its current form, and no doubt there are plenty of other applications that it would be hard to disagree with. But as facial recognition techniques grow more mainstream it will be good to have advisory boards like this one keeping the companies that use them honest.


Google’s new voice recognition system works instantly and offline (if you have a Pixel)

Voice recognition is a standard part of the smartphone package these days, and a corresponding part is the delay while you wait for Siri, Alexa, or Google to return your query, either correctly interpreted or horribly mangled. Google’s latest speech recognition works entirely offline, eliminating that delay altogether — though of course mangling is still an option.

The delay occurs because your voice, or some data derived from it anyway, has to travel from your phone to the servers of whoever operates the service, where it is analyzed and sent back a short time later. This can take anywhere from a handful of milliseconds to multiple entire seconds (what a nightmare!), or longer if your packets get lost in the ether.

Why not just do the voice recognition on the device? There’s nothing these companies would like more, but turning voice into text on the order of milliseconds takes quite a bit of computing power. It’s not just about hearing a sound and writing a word — understanding what someone is saying word by word involves a whole lot of context about language and intention.

Your phone could do it, for sure, but it wouldn’t be much faster than sending it off to the cloud, and it would eat up your battery. But steady advancements in the field have made it plausible to do so, and Google’s latest product makes it available to anyone with a Pixel.

Google’s work on the topic, documented in a paper here, built on previous advances to create a model small and efficient enough to fit on a phone (it’s 80 megabytes, if you’re curious), but capable of hearing and transcribing speech as you say it. No need to wait until you’ve finished a sentence to think whether you meant “their” or “there” — it figures it out on the fly.
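
As a rough illustration of the streaming idea in general (not Google's actual implementation), the sketch below consumes audio in small chunks and emits a growing transcript after each one; `decode_chunk` is a stub standing in for the on-device model.

```python
from typing import Iterator

def decode_chunk(audio_chunk: bytes, state: dict) -> tuple[str, dict]:
    """Stub standing in for the on-device model (roughly the ~80 MB network the
    paper describes): takes a small slice of audio plus decoder state and returns
    newly recognized text and the updated state."""
    state["count"] = state.get("count", 0) + 1
    return f"word{state['count']} ", state

def transcribe_stream(chunks: Iterator[bytes]) -> Iterator[str]:
    """Emit a growing transcript as each chunk arrives -- no round trip to a server."""
    transcript, state = "", {}
    for chunk in chunks:
        text, state = decode_chunk(chunk, state)
        transcript += text
        yield transcript  # partial result, available immediately

# Simulate a microphone delivering three 100 ms chunks of audio.
for partial in transcribe_stream(iter([b"\x00" * 1600] * 3)):
    print(partial)
```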

So what’s the catch? Well, it only works in Gboard, Google’s keyboard app, and it only works on Pixels, and it only works in American English. So in a way this is just kind of a stress test for the real thing.

“Given the trends in the industry, with the convergence of specialized hardware and algorithmic improvements, we are hopeful that the techniques presented here can soon be adopted in more languages and across broader domains of application,” writes Google, as if it is the trends that need to do the hard work of localization.

Making speech recognition more responsive and having it work offline is a nice development. But it’s sort of funny considering hardly any of Google’s other products work offline. Are you going to dictate into a shared document while you’re offline? Write an email? Ask for a conversion between liters and cups? You’re going to need a connection for that! Of course this will also be better on slow and spotty connections, but you have to admit it’s a little ironic.


OrCam’s MyMe uses facial recognition to remember everyone you meet

Meet the OrCam MyMe, a tiny device that you clip on your T-shirt to help you remember faces. The OrCam MyMe features a small smartphone-like camera and a proprietary facial-recognition algorithm so that you can associate names with faces. It can be a useful device at business conferences, or to learn more about how you spend a typical day.

This isn’t OrCam’s first device. The company has been selling the MyEye for a few years. It’s a wearable device for visually impaired people that you clip to your glasses. Thanks to its camera and speaker, you can point your finger at some text and hear an audio version of it near your ear. It can also tell you if there’s somebody familiar in front of you.

OrCam is expanding beyond this market with a mass-market product. It’s built on the same technological foundation, but with a different use case. OrCam’s secret sauce is that it can handle face recognition and optical character recognition on a tiny device with a small battery — images are not processed in the cloud.

It’s also important to note that the OrCam MyMe doesn’t record video or audio. When the device detects a face, it creates a signature and tries to match it with existing signatures. While it’s not a spy camera, it still feels a bit awkward when you realize that there’s a camera pointed at you.

When there’s someone in front of you, the device sends a notification to your phone and smart watch. You can then enter the name of this person on your phone so that the next notification shows the name of the person you’re talking with.

If somebody gives you a business card, you can also hold it in front of you. The device then automatically matches the face with the information on the business card.

After that, you can tag people in different categories. For instance, you can create a tag for family members, another one for colleagues and another one for friends.
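
Here is a minimal sketch of the match-or-enroll flow described above; the vector "signatures", the distance threshold, and the tag labels are illustrative assumptions, since OrCam hasn't published how its signatures actually work.

```python
import numpy as np

# Sketch of the match-or-enroll flow: a face "signature" is just a vector here.
contacts: dict[str, dict] = {}  # name -> {"signature": vector, "tag": label}

def enroll(name: str, signature: np.ndarray, tag: str) -> None:
    """What happens after you type a name into the phone app."""
    contacts[name] = {"signature": signature, "tag": tag}

def find_match(signature: np.ndarray, threshold: float = 1.0) -> str | None:
    """Return the closest stored contact, if the distance is small enough."""
    if not contacts:
        return None
    name, entry = min(contacts.items(),
                      key=lambda kv: np.linalg.norm(kv[1]["signature"] - signature))
    return name if np.linalg.norm(entry["signature"] - signature) < threshold else None

def on_face_seen(signature: np.ndarray) -> None:
    """The device stores only the derived signature, never the image itself."""
    name = find_match(signature)
    if name:
        print(f"Notification: talking with {name} ({contacts[name]['tag']})")
    else:
        print("Notification: unknown face, tap to add a name")

rng = np.random.default_rng(1)
sig = rng.normal(size=64)
on_face_seen(sig)                                      # unknown the first time
enroll("Alice", sig, tag="colleagues")
on_face_seen(sig + rng.normal(scale=0.01, size=64))    # recognized afterwards
```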

The app shows you insightful graphs representing your work-life balance over the past few weeks and months. If you want to quantify everything in your life, this could be an effective way of learning that you should spend more time with your family, for instance.

While the device isn’t available just yet, the company has already sold hundreds of early units on Kickstarter. Eventually, OrCam wants to create a community of enthusiasts and figure out new use cases.

I saw the device at CES last week, and it’s much smaller than you’d think based on photos. You don’t notice it unless you’re looking for it. It’s not as intrusive as Google Glass, for instance. You can optionally use a magnet if the clip doesn’t work with what you’re wearing.

OrCam expects to ship the MyMe in January 2020 for $399. It’s an impressive little device, but the company faces one challenge — I’m not sure everyone feels comfortable with always-on facial recognition just yet.
