The United States has a long history of consumer protection and product safety, led by government, nonprofit organizations, businesses and individuals. All the way back in March 1962, President John F. Kennedy presented a speech to Congress in which he proclaimed four basic consumer rights: The right to safety, the right to be informed, the right to choose and the right to be heard. For decades, these tenets and the official apparatus built around them protected consumers from harmful cars, toys, appliances, food and drugs.
However, the recent scandals over social media data privacy and information integrity show that these safeguards haven’t kept pace with the unique challenges of the digital age. Facebook is still experiencing the fallout of Cambridge Analytica scraping and misusing millions of users’ personal information, and now faces yet another data-sharing scandal involving Chinese firms. Fake followers and bogus retweets run rampant on Twitter, with new research suggesting that they may even have helped deliver Donald Trump’s victory in the 2016 presidential election.
Terms like “bots” and “scraping” — once obscure industry jargon used only by cybersecurity professionals — are suddenly at the forefront of public discourse.
You’d expect a focused government response and a comeuppance for the social media companies that have allowed the problems to fester and grow. But that hasn’t happened. In fact, lawmakers are still playing catch-up — the Facebook hearings proved just how little some of them knew about how the big tech companies operate, let alone how to govern security and privacy on the internet.
Scattershot solutions
No wonder the legislative and regulatory landscape is so fragmented. At a time when we sorely need a unified, coherent approach to better protect data privacy and integrity, the landscape is littered with pockets of isolated rules and proposals. For example:
- On May 30, the U.S. Department of Commerce and the U.S. Department of Homeland Security publicly released a report on the opportunities and challenges of “reducing threats from automated, distributed attacks,” providing a list of goals and suggested actions to reduce these threats and “improve the resilience and redundancy of the ecosystem.” While the report put a spotlight on the issue, it’s unclear what specific action will result.
- The European Union’s General Data Protection Regulation (GDPR) now requires organizations to obtain explicit consent from EU citizens before they store or process personal information. GDPR’s impact is global because it applies to any business that collects data on people inside the EU, regardless of where the company is based.
But in a missed opportunity to boost internet privacy, it appears that most organizations are following the letter of the law by applying the new rules to users with European IP addresses but are not implementing GDPR elements elsewhere. At its heart, GDPR aims to give internet users a better understanding of and more control over their data. While other countries may not agree on the implementation details, they’d be well-served to emulate the basic philosophy.
- Two bills introduced in California — one by Assemblyman Marc Levine and the other by Sen. Bob Hertzberg — would require social platforms like Facebook and Twitter to identify automated bot accounts, which have been used to manipulate public opinion and disseminate fake news. “Right now we have no law and it’s just the Wild West,” Hertzberg said after filing his bill.
He’s right, but until the federal government or many more states step up and enact similar laws, we could end up with a hodgepodge of state-by-state rules rather than the coordinated approach that’s needed for an internet that knows no geographical boundaries.
- Sen. Amy Klobuchar of Minnesota and New York Gov. Andrew Cuomo have called on states to enact laws requiring social media companies to disclose who is paying for political ads, similar to the disclosures already required for ads on TV and radio. Another good idea, but little has been reported about it since an initial burst of attention in early March.
Users are exposed
It’s clear that the regulatory environment is still evolving and has yet to catch up to the current threat. That leads to the inescapable conclusion that, despite the scandals of the last year and assurances by the social media companies that they are doing more to protect users, the protections that exist today are not adequate.
Sure, people can be more careful about the information they provide, scour privacy settings, and actually read privacy statements and user agreements. But we have never expected, say, a car owner to be solely responsible for the vehicle’s safety, nor have we trusted the manufacturer alone to guarantee it. That’s why government consumer protection agencies were established to provide healthy oversight. Why is the internet any different?
It may be a pipe dream in the current political environment, but the recent scandals demonstrate the need to apply America’s strong consumer safety ethic to the social media platforms and anywhere else on the internet where the rights John F. Kennedy talked about are threatened.
More needs to be done
People should demand action from their legislators, insisting on a concerted effort to address the issues rather than scattershot bills that make headlines for a day or two and then disappear into the ether.
Congress could find a good model in the 1998 Digital Millennium Copyright Act, a bipartisan effort on Capitol Hill to move the nation’s copyright law into the digital age. Lawmakers should come together once again to come to grips with bots and other nefarious actors.
As JFK put it in 1962, “The march of technology … has increased the difficulties of the consumer along with his opportunities; and it has outmoded many of the old laws and regulations and made new legislation necessary.”
Rami Essaid is co-founder and chairman of Distil Networks, a cybersecurity company specializing in bot mitigation.