Big Tech’s artificial intelligence aristocracy

When he testified before Congress, Facebook CEO Mark Zuckerberg loved to tell legislators that his team would “follow up with you” on that, or that his team was building AI tools for that. These AI tools would supposedly solve many content moderation problems, ranging from misinformation to terrorism to fake accounts. Today, you could add coronavirus misinformation to that list, but you could also ask whether these AI tools have actually solved any of these problems (or whether Zuckerberg’s team ever did follow up).

Many decisions today, such as ranking a website in search results, are made by algorithms. These algorithms are perceived as objective, mechanical and unbiased, while humans are perceived as subjective, fallible and full of bias. That model of the world mostly works — at least until AI is added into the picture. AI shatters that traditional dichotomy between objective algorithms and messy humans. If algorithms are a green light and humans are a red light, then AI would be a yellow light. AI can be more human than traditional algorithms, and that is simultaneously its greatest asset and its greatest liability.

So how does AI work? In the book “Prediction Machines,” a trio of economists explains that “cheap changes everything.” Just as the internet made search cheap — today you search with Google, not the Yellow Pages or the Dewey Decimal System — AI has made prediction cheap. For example, AI can take a radiology image and predict whether the patient has breast cancer.
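
To make “cheap prediction” concrete, here is a minimal sketch in Python using scikit-learn’s bundled breast cancer dataset; the toy tabular data is a stand-in for a real radiology pipeline, not actual medical images, and the particular model choice is incidental.

```python
# A rough sketch: once a model is trained, each new prediction is a near-free function call.
# The scikit-learn toy dataset stands in for a real radiology workflow.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # numeric features per biopsy, with benign/malignant labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The expensive part is collecting labeled examples and fitting the model once...
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# ...after which asking for a prediction costs almost nothing.
print(model.predict_proba(X_test[:1]))  # predicted probability of each class for one new case
```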

In many AI applications, the problem is framed as predicting what a human would do, and that’s exactly when human biases can creep into the system. That issue of bias does not pose a problem when you are predicting what a human radiologist would do — the machines are even starting to detect breast cancer more accurately than humans — but it does pose major problems for vaguer, more subjective tasks, such as identifying hate speech or misinformation.

Consider machine learning. As Cassie Kozyrkov explained, the simple idea of machine learning is “explain with examples, not with instructions.” To use her internet-approved example of cats, we can provide a machine with many images, each one labeled as a cat or not-cat, and from there it will learn how to tell them apart.
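
As a rough sketch of that idea, the snippet below trains a classifier purely from labeled examples; the two-number “images” are hypothetical feature vectors rather than real cat photos, chosen only to keep the illustration self-contained.

```python
# "Explain with examples, not with instructions": no rules about cats are written down;
# the model only ever sees labeled examples and generalizes from them.
from sklearn.neighbors import KNeighborsClassifier

examples = [
    [0.9, 0.8],  # features from an image a human labeled "cat"
    [0.8, 0.9],  # "cat"
    [0.1, 0.2],  # "not-cat"
    [0.2, 0.1],  # "not-cat"
]
labels = ["cat", "cat", "not-cat", "not-cat"]

model = KNeighborsClassifier(n_neighbors=3).fit(examples, labels)

# The model was never told what a cat is; it answers by comparing to the labeled examples.
print(model.predict([[0.85, 0.75]]))  # -> ['cat']
```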

When the machines are trained, who labels each image as a cat or not-cat? In many cases, it’s a human. That’s not a problem when humans label images as cat or not-cat, or when human radiologists label images as positive or negative for breast cancer, but it is a problem when humans label things as hate speech or not hate speech. When the humans labeling that data are your company’s biased moderators, or the health experts who pulled an about-face on social distancing during the protests, then the machines will learn their biases. If the humans won’t label Chinese propaganda as misinformation, then the machines won’t label it as misinformation.
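
Here is a sketch of how that happens in practice; the posts and labels below are hypothetical placeholders, but in production the labels come from human moderators, and the model learns to predict what those moderators would have said.

```python
# The labelers' judgments become the model's judgments, blind spots included.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "claim one about the virus",     # a moderator flagged this as "misinformation"
    "claim two about the virus",     # "misinformation"
    "official statement repeated",   # never flagged, so it trains as "ok"
    "ordinary news summary",         # "ok"
]
labels = ["misinformation", "misinformation", "ok", "ok"]

model = make_pipeline(CountVectorizer(), LogisticRegression()).fit(posts, labels)

# Whatever the labelers systematically declined to flag, the model will also wave through.
print(model.predict(["official statement repeated again"]))  # -> ['ok']
```

Swap the toy labels for a real moderation queue and the same dynamic holds: the model’s sense of “misinformation” is just the labelers’ sense, scaled up.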

If the aristocracy defines terms like “hate speech” and “misinformation” — and if they control the training data — then the machines will simply become tools of the aristocracy.

And speaking of the aristocracy, is there any clearer example than YouTube CEO Susan Wojcicki appearing on CNN’s “Reliable Sources” with Brian Stelter, declaring that “anything that goes against WHO recommendations would be a violation of our policy”? The same WHO that tweeted on Jan. 14 that there was no clear evidence of human-to-human transmission of the coronavirus? The same WHO that ignored Taiwan’s warning in December about human-to-human transmission (and still won’t invite Taiwan to its meetings)? The same WHO that mistakenly claimed asymptomatic transmission is very rare, prompting Dr. Anthony Fauci to say that the remark “was not correct”? That’s certainly not a meritocracy.

And what about Facebook’s use of AI tools? Facebook recently created a “supreme court” for content moderation, though the composition of that board has been criticized by many conservatives. This court can review only a small sliver of content, so there’s an obvious opportunity for an AI tool that can accurately predict the court’s decisions. But could you trust this tool? Only if you trusted this court.

Artificial intelligence, after all, is not really intelligent. It’s just really good at predicting what humans would do, regardless of whether those humans are part of the meritocracy or the aristocracy.

Mike Wacker is a former software engineer for Google and one of the Lincoln Network’s 2020 Policy Hackers fellows. Follow him on Twitter @m_wacker.
