Why Our Faces Are So Valuable

For governments, tracking our faces means finding terrorists faster and catching criminals more easily.

For social media companies, it means they can add dog ears to our faces or tag people in pictures.

For Walmart and McDonald’s, it means optimizing sales and improving service quality.

But is it really worth it, for us?

From stories of an innocent college student receiving death threats for “committing a terrorist attack,” to instances of citizens falsely accused of crimes, the effects of inaccurate facial recognition systems are devastating.

Even facial recognition software used at roller skating rinks can be harmful.

Origins

In 2018, researchers Joy Buolamwini and Timnit Gebru published their research testing the facial recognition software of major tech companies. Ultimately, their study sparked a new movement questioning the efficacy of facial recognition models and how their shortcomings threaten human rights.

Buolamwini and Gebru compiled a demographically diverse dataset of faces to test the accuracy of IBM’s, Microsoft’s, and Face++’s facial recognition software. Buolamwini would later also test Amazon’s Rekognition software in 2019.

The testing revealed significant flaws in all three programs. Overall accuracy was high, but the software was far less accurate at classifying women and people of color than white men. As cited in their research, Face++ misclassified 34% of darker-skinned women’s faces; in other words, it got roughly one in every three of them wrong.
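To make this kind of “disaggregated” evaluation concrete, here is a minimal sketch in the spirit of that audit, using invented records rather than the real benchmark: instead of reporting a single overall accuracy number, results are broken down by subgroup, which is exactly what surfaces gaps like the one above.

```python
from collections import defaultdict

# (subgroup, predicted label, true label): invented records for illustration,
# not data from the actual Gender Shades benchmark.
results = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned female", "male", "female"),    # misclassified
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "male", "female"),    # misclassified
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in results:
    total[group] += 1
    correct[group] += predicted == actual  # True counts as 1

for group, n in total.items():
    error_rate = 100 * (1 - correct[group] / n)
    print(f"{group}: {error_rate:.0f}% error rate across {n} photos")
```

An aggregate score over this data hides where the mistakes fall; only the per-group breakdown reveals that they are concentrated in one demographic.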

IBM responded immediately, claiming to have improved its systems and acknowledging the harmful effects of biased AI. Keep in mind this was three years ago. Microsoft released a similar statement on its website, claiming to have made modifications and, like IBM, calling for legislative support.

Buolamwini would later test the software again and find significant improvement, though accuracy was still not on par with the rates for white males.

Dozens of companies provide facial recognition software, but IBM, Microsoft, and Amazon are among the most well-known.

Amazon came under fire in January 2019 after the ACLU claimed Amazon’s facial recognition software was biased, citing tests in which it had falsely matched members of Congress with mugshot photos.

The research and legal pressure created a temporary buzz, but this story, like so many others, fell out of the news cycle. Amazon’s software remained on the market until a year ago.

On June 8, 2020, IBM announced it would no longer offer its facial recognition software, citing the algorithm’s racial and gender bias as well as the privacy issues surrounding the technology.

Two days after the announcement, Amazon placed a one-year ban on police use of Rekognition, without offering a reason. Following suit, Microsoft announced on June 11, 2020, that it would not sell its facial recognition technology to US police departments until federal regulation is in place.

However, on May 18, 2021, Amazon extended its moratorium on police use of Rekognition indefinitely. Amazon maintains that its software does not perform worse on individuals with darker skin tones.

Buolamwini and Gebru’s work was the catalyst for corporate reform. But other prominent organizations identified similar issues with facial recognition software as well.

The Department of Homeland Security conducted its own tests in 2018 on 11 facial recognition systems and found that skin tone was the leading driver of varying accuracy rates.

Further, in 2019, the National Institute of Standards and Technology tested facial recognition software provided by Idemia and found disparate accuracy rates across races.

The largest users of facial recognition software in the US are government agencies tasked with identifying criminals and illegal immigrants, or simply verifying the identities of ordinary people. Software that cannot accurately identify individuals is also vulnerable to falsely matching one person’s face with another’s, putting people of color at greater risk.
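To see why false matches happen at all, here is a toy sketch (not any vendor’s actual pipeline, and every number in it is invented): a typical system reduces each face to an embedding vector and declares a match whenever the similarity between two embeddings clears a threshold, so a sufficiently similar stranger can clear it too.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
probe = rng.normal(size=128)                            # the person being searched for
same_person = probe + rng.normal(scale=0.2, size=128)   # another photo of that person
lookalike = probe + rng.normal(scale=0.7, size=128)     # a different, similar-looking person

# Lowering the threshold catches more real matches, but also more innocent people.
THRESHOLD = 0.80

for name, candidate in [("same person", same_person), ("lookalike", lookalike)]:
    score = cosine_similarity(probe, candidate)
    verdict = "MATCH" if score >= THRESHOLD else "no match"
    print(f"{name}: similarity = {score:.2f} -> {verdict}")
```

With this threshold, the lookalike can land above the bar as well: a false match. And if a model’s embeddings are less reliable for darker-skinned faces, the people behind those faces bear more of that risk.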

Many cities have already prohibited government use of facial recognition software or are in the process of doing so. Big tech’s decision to discontinue its software seemed like a breath of fresh air for activists and ordinary citizens alike. It seems as though companies care about their social impact. However, there are still some things that don’t add up.

Social Media

Social media companies like Facebook don’t sell their facial recognition services to clients, but they certainly have them. And unlike Amazon, Microsoft, and IBM, Facebook holds one of the largest collections of people’s faces; the worst part is that we hand it over ourselves when we consent to the terms and conditions.

For the moment, Facebook only uses its software and database to enhance its own platform: auto-tagging people and adding filters to our faces. Facebook doesn’t sell this data, but its mere existence is a serious threat to our privacy. Imagine if all of this were physical, and someone flipped through photographs of us every time we posted a picture online.

Moreover, this data isn’t always well protected. The US government, for example, has accessed the databases of prominent social media companies to conduct its own surveillance, and cyberattacks have exposed users’ personal data on several occasions.

However, in a surprising turn of events, Facebook, now Meta, announced that it is rolling back its facial recognition software across all of its divisions, though it noted that it would continue developing software that prioritizes privacy. In a blog post, Meta cited growing concern over privacy issues and the abundance of enacted and anticipated legislation. With this preemptive move, Meta abandoned the very software credited with the platform’s popularity.

While Facebook would be free of facial recognition software, Meta would be conceived with it from day one.

What’s really going on?

As of 2019, the global facial recognition industry was worth $4.35 billion, and it is projected to triple by 2027. Clearly, there is a lot of money invested here.

Even as major corporations seemingly abandon the market, sales of facial recognition technology continue to increase, well past 2021. Smaller companies have also emerged, offering their own solutions. While these companies are not necessarily selling to law enforcement (most develop software for commercial settings or biometric authentication), their software can still infringe on privacy and perpetuate algorithmic bias.

Also alarming, tech companies are spending more on lobbying than ever before. 2018 marked a record year in total lobbying expenditure, and Facebook, Amazon, IBM, and Microsoft collectively spent $51.7 million on lobbying in 2020 alone. While correlation does not equal causation, it is ironic that the same companies calling for Congressional action are also paying to influence what Congress writes.

If lobbyists succeed, companies can write laws with favorable provisions. Perhaps with new legislation, companies can develop their software to sell to private clients instead of public ones.

Fixing it from the inside

The bias in these AI models can be attributed to a lack of diversity in the datasets companies use to train and test their algorithms.

Government agencies that use facial recognition software have to supply their own datasets, and any biases in that data translate into the software’s behavior. With incarceration rates highest among African Americans and Latinos, facial recognition systems see images of these demographics far more often, creating more opportunities for the software to produce false matches.

So once again, it boils down to the data. Companies have to work toward creating a more diverse array of training data, and institutions that use this software have to be aware of potential sources of bias.
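As a minimal sketch of that awareness step (the field names here are hypothetical), an institution can at least measure how its dataset is distributed across demographic groups before trusting the software’s matches:

```python
from collections import Counter

# Hypothetical metadata rows; a real dataset needs carefully sourced labels.
dataset = [
    {"photo_id": 1, "skin_tone": "darker",  "gender": "female"},
    {"photo_id": 2, "skin_tone": "lighter", "gender": "male"},
    {"photo_id": 3, "skin_tone": "lighter", "gender": "male"},
    {"photo_id": 4, "skin_tone": "lighter", "gender": "female"},
    {"photo_id": 5, "skin_tone": "lighter", "gender": "male"},
]

counts = Counter((row["skin_tone"], row["gender"]) for row in dataset)
total = len(dataset)
for (tone, gender), n in sorted(counts.items()):
    print(f"{tone} {gender}: {n}/{total} ({100 * n / total:.0f}%)")
```

A skewed breakdown like this one is an early warning that error rates will likely be skewed in the same direction.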

Or the software can be banned altogether, but the money says otherwise.

Big Picture

These events have shown us that tech companies are powerhouses that touch nearly every aspect of life. The mere fact that it took two years (2018–2020) for companies to stop their facial recognition software says something about their true intentions. These products were already in use before Buolamwini’s research, and the benchmark testing she used wasn’t even especially rigorous: by her own account, the people in the test data were looking directly at the camera, making their faces easier for the software to detect.

2020 brought a halt to these companies’ actions. The protests over George Floyd’s death, among other events, brought a new wave of awareness, and companies took notice. While not directly related, the BLM protests proved that actions have consequences. We possess the power to create change, and with enough pressure, large tech companies will rethink their decisions. Transparency, too, is key to preventing companies from making shady deals out of the public’s eye.

For governments and companies, faces are a metric or tool to be manipulated. To us, they’re our identity.
