The views expressed by contributors are their own and not the view of The Hill

Who will keep kids safe in an AI world?



With virtual assistants sitting on our kitchen counters, connected toys living in our kids’ bedrooms, and facial recognition software popping up on our street corners, it can sometimes feel like we are living in an episode of Black Mirror. Artificial Intelligence (AI) may be revolutionizing our world, but we can’t take it for granted that these technologies will be positive for our kids and the next generation. To keep kids safe online, we must develop a culture of responsibility now — one in which online safety relies upon government, tech companies, schools, parents, and kids themselves.

When policymakers think about AI these days, they tend to focus on jobs and the economy. They don’t think as much about the risks, particularly for children and young people who will encounter these technologies as they come of age in this new world.

In an AI world, government should shoulder a reasonable amount of responsibility, finding a sweet spot of regulation that doesn’t squash individual liberties or innovation. Creating a position of federal Chief Online Safety Officer — to work alongside the U.S. Chief Technology Officer — would help ensure someone at the federal level is focused on this vital issue. Likewise, the Federal Trade Commission should continue updating and enforcing the COPPA Rule — the Children’s Online Privacy Protection Act — to protect the privacy of children under the age of thirteen.

On Capitol Hill, several pieces of legislation offer reasonable steps forward, such as CAMRA — the Children and Media Research Advancement Act — which would provide the National Institutes of Health $95 million to research the impact of media, including mobile devices, social media, artificial intelligence, and virtual and augmented reality.

We need well-resourced law enforcement trained to handle the bad actors who could use the advancements in AI to their advantage. And while providing police with the tech tools they need, we must also remain vigilant that we do not create an unaccountable surveillance system that allows potential abuses of the extraordinary power of machine learning, facial recognition, and national databases.

The tech industry is a key part of building this culture of responsibility. In the past two years, we have seen a backlash against tech firms for problems ranging from privacy breaches to questionable content moderation policies that allow hate speech and worse to spread. We must continue to demand real, comprehensive industry self-regulatory efforts — tools to filter, to report, to keep posts private, and to encourage positive behaviors. Tech companies’ policies and practices must keep pace with the growing scale of their operations, both domestically and around the world. There must be more than lip service to the concepts of time well spent and digital well-being. It is essential that the tech industry collaborate with government and law enforcement to keep the potential harmful effects of AI to a minimum.

Teachers and educators can also help. AI in U.S. education is estimated to grow by 47 percent by 2021. Machine learning will be used to develop skills and testing systems. The hope is that these technologies will fill gaps in learning and provide greater personalization and efficiencies, freeing teachers to offer personal coaching to students. Given that our children will inherit an AI-rich world, it is essential that schools use AI responsibly as part of the teaching repertoire.

Finally, children and young people must be brought into the discussions and decisions about what this AI-rich future will look like. Giving young people agency over their online lives is perhaps the greatest gift we can give them — helping them to develop resiliency and the strength to stand up to bullies, predators, and others who act out inappropriately online and off. If we get this right, we will encourage a generation of young people to make wise choices about the content they access and post, about who they contact and who they allow to contact them, and how they conduct themselves online.

In so many parts of the world, we are witnessing young people using social media and new tech tools to create social movements that address our biggest challenges. We need only look to advocates like Greta Thunberg — the 16-year-old Swedish climate activist — to see the power and change that technology can bring to the next generation. Let’s create a culture of responsibility to foster this promising future for our kids.

Stephen Balkam is the Founder and CEO of the Family Online Safety Institute (FOSI), an international nonprofit organization headquartered in Washington, D.C., that seeks to make the online world safer for kids and their families. Follow him on Twitter @StephenBalkam. FOSI’s 20+ members — including Google, Facebook, Verizon, Twitter, and Amazon — represent the leading Internet and communications companies in the world.


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
