“The Metaverse” is a term on the tip of everyone’s tongue, and yet still a concept that lacks a clear definition. It is an idea capturing the brightest minds (and wallets) at the biggest technology companies, including Apple, Facebook and Microsoft, driving headlines and major business deals, and yet no one really knows what it will be. While questions remain, what we do know is that it’s coming. Arguably, it already exists.
And that’s why Washington needs to take it very seriously, starting with public safety.
Just as President Biden appointed key experts to tackle Big Tech antitrust issues, he needs to do the same here. Imagine sitting by and doing nothing while the metaverse develops. Here’s what happens: Companies like Meta, Facebook’s new parent company, will only grow bigger, more monopolistic and more intrusive. It’s time to get out ahead of a tech issue for once.
Society is only beginning to grapple with the safety implications of social media platforms. Everything from widespread misinformation that threatens our democracy to live-streamed terrorist attacks is happening online today, and it could all be significantly amplified in the metaverse. In fact, pernicious behavior is already happening in the metaverse. Just last week, a U.K. mother described watching in horror as a gang sexually assaulted her avatar less than a minute after she entered Facebook’s virtual online world. In response, attorneys in the U.K. are already advocating for laws to be updated to cover assaults committed by people who hide behind avatars. This is just the beginning.
How do we ensure accurate information and safety prevail, especially in a context where the alteration of reality is the point?
Life in the metaverse will not look or feel like real life, and that’s by design. So how do we keep people safe?
The answer starts with not letting companies off the hook. For too long, social media platforms have inserted themselves ever deeper into people’s daily lives, then feigned surprise when harm emerges on their platforms.
Metaverse companies must be required to develop their own safety protocols early, and they must be held accountable for meeting those standards. Just as a brick-and-mortar business must show it complies with health codes, or a new apartment complex must share its fire escape plans with the relevant agencies before it receives a certificate of occupancy, so too should digital companies be required to provide for the safety of their users.
And there’s precedent for holding companies to this standard. In 2020, the British government introduced a plan to create an internet regulator to oversee companies online. That role is expected to fall to Ofcom, the U.K.’s media regulator, which would be responsible for monitoring internet content and issuing penalties against companies that do not do enough to combat harmful and illegal terrorist and child abuse content. As part of these reforms, Parliament is advancing a legal standard of a “duty of care” that would put responsibility squarely on companies for the safety of their users. The draft bill even contains reserved powers for Ofcom to pursue criminal action against named senior managers whose companies do not comply with Ofcom’s requests for information. Imagine how quickly change would come if Mark Zuckerberg could be held personally liable for evading requests from Congress. Imagine how much more interesting his next appearance on the Hill would be. The metaverse presents an opportunity for Congress to explore this legal standard, one that would put the onus on companies and clarify, once and for all, who is responsible for what.
Above all, especially careful consideration should be given to how children will participate in the metaverse. We are learning more every day about the harmful effects social media platforms have on the well-being of young people, particularly teenage girls. Cyberbullying, sex trafficking, stalking and other corrosive activity could easily find a comfortable home in the metaverse. Conversations about building the right guardrails need to start now.
We know how this story usually goes. A tech company comes up with a new, interesting idea. They launch, gain customers and market share. Then the government notices and tries to step in — except it’s very, very hard to put the genie back in the bottle.
We’ve seen this movie before: Let’s flip the script this time and get it right.
Bradley Tusk is a venture capitalist and political strategist who previously served as campaign manager for former New York mayor and Democratic presidential candidate Mike Bloomberg.
Erika Tannor is a policy expert and political strategist. She currently serves as a senior vice president at Tusk Strategies, where she specializes in public policy, government relations and strategic communications.