It’s not just Facebook’s problem

Facebook’s data privacy debacle exposed 87 million users’ private data to misuse by political consultancy Cambridge Analytica. This scandal is not just Facebook’s problem or just a data privacy or app developer issue. It is an example of a more general failure to integrate ethics into decision-making.

Ethics are not an eraser or a clean-up act; they should precede and accompany action. Asking “what could happen?” rather than “what just happened?” is a matter of foresight, not forensics.

What should tech companies do to improve ethics? They should see ethics as one of their most effective pro-innovation strategies and risk-management tools.

But they must integrate ethics into decision-making in every phase: product design and development, decisions about how and when to unleash products on society, and relationships with advertisers, the media, shareholders and regulators.

Preventive ethics require thinking down the chain of possible uses and reuses of products and data generated from products. Facebook isn’t the only major company seeing ethics with 20/20 hindsight.

Uber didn’t even bother to ask about known non-tech risks, such as how to handle criminal activity in Uber vehicles. Amazon should disclose the theory behind the Amazon Echo Look algorithm that ranks users’ videotaped fashion choices.

Preventive ethics may require testing new apps and add-ons (and even research projects) with small groups. Then companies might see where unexpected problems emerge before they unleash new technology on two billion users. Now is the time to check whether Facebook Messenger Kids really is ready for exposure to six-year-olds. 

When the Cambridge Analytica scandal broke, Facebook proposed important improvements to how app developers access user data. But there are more steps that all tech companies should take, and Facebook’s new central page for privacy and security settings might be a move in this direction.

Tech companies should dramatically improve transparency about consent. They should speak to users in plain language not only through a central privacy page but every time they mention the use of our data or other risks.

Classic pillars of ethics, such as informed consent, wobble in today’s context, whether in the liability releases required to board a civilian spaceship or in a click on a Facebook questionnaire. Much of our consent is more about giving up than giving willingly, like clicking “I agree” to the many single-spaced pages of legalese required to watch a movie on iTunes.

A rewritten Facebook questionnaire might say, “The data you provide in responding will be used exclusively for academic research. The app developer will not have access to any of your other data except X.”

Or, it might read: “By completing this questionnaire you are exposing yourself to the app’s function that could scrape additional data from your Facebook profile. We have no way to know whether the app developer will respect its contractual obligation with Facebook to use the data only for the stated purpose of X.” 

Amazon Echo Look’s disclosure might read: “Should you choose to photograph or videotape your outfit, the data will be stored in the cloud and might be shared with third parties. We are choosing not to tell you where the data could end up or how our algorithm for evaluating your choices works.”

Tech companies should not infer consent. Facebook learned that lesson from its 2012 stealth mood-manipulation experiment, which filtered positive and negative content in users’ feeds to determine whether their moods shifted.

Also, tech companies, whose global influence and financial capacity outstrip those of many governments, should raise the standards of their own governance. Options include:

  • add ethics experts to their boards (and create board ethics committees with both insiders and outside experts);
  • give earlier and more thorough board-level consideration to the risks new products and services could trigger; and
  • reflect on risks to health and well-being, political systems and national security. 

Tech companies should also shift their approach to regulation from lobbying to deploying ethics that fly higher than the law.

Proactive ethics encourage a targeted regulatory response, reducing the threat to innovation, the risk of losing advertisers and the share-price uncertainty that senior bankers signaled this week. With a few simple safety improvements, Uber might not have lost its license to operate in London (pending appeal).

What should the rest of us do? 

I am not minimizing Facebook’s responsibility in the current crisis, and I don’t believe that Facebook is, either. But the broader question is how we allocate the responsibility for the ethics of technology. We will need multi-stakeholder solutions.

Regulators must stop holding companies responsible for all technology risk. The law will always lag behind technology. And national legal systems are ill-suited to protect against the borderless consequences of misuse or misfire of technology.

But innovation-supportive (and ideally cross-border) regulatory frameworks are possible — protecting space for technology to develop, accelerating effective responses to unexpected problems and treading carefully around free speech.

Regulators can also require governance improvements, as they have in past ethics crises, such as the 2008 financial system meltdown. Regulators should always ask, “What could this regulation do?” to avoid unintended consequences.

Academic institutions and researchers must reinforce research ethics standards in the complex world of third-party data. Most academics respect the boundaries of informed consent and university research ethics board approval when collecting and using data and when commercializing research.

Companies like Facebook should be able to rely on the contractual commitments of academics, but the contagious nature of the data-privacy failure shows that companies must redouble efforts to prevent, verify and respond to wrongdoing.

Facebook has committed to preventing abuses such as Cambridge University lecturer Aleksandr Kogan’s alleged sale of data to Cambridge Analytica in violation of Facebook’s terms. Others should follow its lead.

Investors from pension funds to venture capitalists should require substantially more robust corporate governance and ethics oversight — not just through shareholder lawsuits like the one against Facebook.

Venture capitalists often tell me that ethics is not high on their list of concerns. In a competitive investment environment, entrepreneurs “have other priorities.”

Users also have some responsibility. We must pay attention to improved disclosures and privacy settings, to the data we share and to how we expose children to social media. But as Mark Zuckerberg said, the companies owe users transparency and protection, as do the regulators.

In the current crisis, we shouldn’t lose track of the immense opportunity that companies like Facebook create. Nor should we consider their positive contributions an excuse for ethics failure.

But in allocating ethics responsibility, the worst outcome would be to stifle innovation through over-regulation or the misguided belief that the companies alone will solve the ethics challenges of technology. 

Susan Liautaud, Ph.D., teaches cutting-edge ethics at Stanford University, chairs the Ethics Policy Committee at the London School of Economics and Political Science and advises companies globally on pro-innovation ethics.
