The views expressed by contributors are their own and not the view of The Hill

Privacy or security is a false choice


Twenty-five years ago, the “Mission: Impossible” franchise began with a simple plot: National security operative Ethan Hunt (played by Tom Cruise) must uncover a traitor while himself under suspicion of being that traitor. As the plot unfolds, the question “What about privacy?” is never considered.

This is the Ethan Hunt Problem: Whom can one trust when the stakes are high?

All national security professionals must safeguard classified and other sensitive information, but only some have the job of determining who is eligible to join the field. If every person applying for a position of trust falls somewhere along a spectrum of risk – and we all do – then adjudicators must trade some of that person’s privacy for trust.

In October 2020, the Defense Counterintelligence and Security Agency (DCSA) took over responsibility for personnel vetting and for adjudicating people in positions of public trust. DCSA already has nearly a million security clearance holders enrolled in its Continuous Vetting (CV) program, one of many improvements taking place across the security vetting community.

In the same recent period, however, privacy emerged as a major public concern and put America’s technology platforms in the spotlight. If a decade ago the corporate villain in the American story was the banking executive, today it is the titan of social media. And because much of the information gathered during the investigative process is highly personal and affects a person’s ability to obtain and keep a job, DCSA professionals face a dual challenge: identify threats to our national security while also respecting the individual’s personal information.

This mission applies almost universally today, not just at DCSA. Hire an employee. Enroll someone in a trusted traveler program. Approve a new bank account. Almost everyone feels the trade-off: You can have privacy or security, but you cannot have both. 

But the trade-off between privacy and security is a false choice. In my years of work in the national security space, I’ve heard this lament many times: “It’s impossible to have security and privacy at the same time.”

It might have been, once. But it’s not any longer. As technology has evolved to create privacy challenges, it has also evolved to resolve them. 

For years, risk professionals in national security and financial institutions have relied on data-as-a-service providers who build data sets to screen employees, applicants and customers. These data sets are hand-curated by thousands of people hired to read arrest records and other news and tag what they find to a directory of people. This feels privacy-invasive. And these data sets capture only a small slice of the available information, so they miss massive amounts of risk.

The first problem with this approach is that the sorting starts from identities, not behaviors: data vendors create a stored dossier on each individual living in the United States. The second problem with hand-curated data is that it simply does not work in today’s world of data abundance. As the amount of publicly available information grows every moment of every day, no army of data collectors can be hired that is large enough to keep up.

But because of the evolution of machine learning technology, privacy and security can coexist and grow together. Machine learning is, essentially, pattern detection, and behaviors appear as patterns. The adjudicative criteria for continuous vetting are also behaviors. For example, the vetting process looks at factors like criminal history, drug use and financial difficulties. Think about dimensions of risk as patterns of human behavior.
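To make the idea concrete, here is a minimal sketch in Python of what pattern detection over behaviors can look like. The features, the training data and the model are invented for illustration; they are not DCSA’s actual criteria or any vendor’s system.

# Minimal sketch: treat each adjudicative criterion as a behavioral feature and
# learn which patterns of behavior warrant a closer look. All names and numbers
# are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes a person only by behavioral signals (counts of derogatory
# records), not by a stored identity dossier.
# Columns: [criminal_records, drug_related_flags, delinquent_accounts]
X_train = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [2, 1, 0],
    [1, 0, 3],
    [0, 2, 2],
    [3, 1, 4],
])
y_train = np.array([0, 0, 1, 1, 1, 1])  # 1 = referred for human adjudication

model = LogisticRegression().fit(X_train, y_train)

# Score a new applicant's behavior pattern; a human adjudicator still decides.
applicant = np.array([[0, 0, 2]])
print(model.predict_proba(applicant)[0, 1])  # estimated risk score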

Algorithms can reindex publicly available information on the internet (both the open and the deep web) to sort by behavior first and then perform entity resolution on the unstructured data that matches, with a limited number of false positives. Technology can find needles in haystacks without building a privacy-invasive file on each and every person.
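A toy sketch of that ordering, behavior first and identity second, might look like the following. The records, the risk lexicon and the name-matching rule are all hypothetical; the point is that the identity step runs only on the small set of records that already match a risk behavior, so no dossier is built on everyone else.

# Hypothetical two-step pipeline: tag unstructured records by risk-relevant
# behavior, then resolve entities only within the flagged subset.
from difflib import SequenceMatcher

records = [
    {"id": 1, "name": "J. Smith", "text": "arrested on fraud charges last month"},
    {"id": 2, "name": "Jane Doe", "text": "won a community volunteering award"},
    {"id": 3, "name": "John Smith", "text": "indicted for wire fraud in 2021"},
]

RISK_TERMS = {"fraud", "arrested", "indicted"}  # toy behavior lexicon

# Step 1: sort by behavior -- keep only records matching a risk pattern.
flagged = [r for r in records if RISK_TERMS & set(r["text"].lower().split())]

# Step 2: entity resolution, run only on the flagged subset.
def same_entity(a: str, b: str, threshold: float = 0.6) -> bool:
    """Crude fuzzy name match; real systems use far richer signals."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

pairs = [(a["id"], b["id"])
         for i, a in enumerate(flagged)
         for b in flagged[i + 1:]
         if same_entity(a["name"], b["name"])]

print(flagged)  # records 1 and 3 match a risk behavior
print(pairs)    # (1, 3): likely the same person, grouped by behavior first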

Further, machine learning and AI can dramatically reduce false positives (flagging a loyal agent as a rogue), and with them the impact on people’s privacy: screeners can detect more threats without having to dig into the personal lives of professionals who were falsely flagged.
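A back-of-the-envelope example, using invented scores and labels, shows why that matters for privacy: the same flagging threshold applied to a sharper risk score pulls in far fewer loyal people for intrusive follow-up.

# Hypothetical illustration: fewer false positives means fewer innocent people
# subjected to intrusive review. Scores and labels are invented.
labels = [0, 0, 0, 0, 0, 0, 1, 1]                          # 1 = genuinely risky
crude_scores = [0.4, 0.6, 0.7, 0.3, 0.8, 0.5, 0.9, 0.7]    # noisy screening score
better_scores = [0.1, 0.2, 0.3, 0.1, 0.4, 0.2, 0.9, 0.8]   # sharper ML score

def false_positives(scores, labels, threshold=0.5):
    """Count loyal people (label 0) whose score crosses the flagging threshold."""
    return sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)

print(false_positives(crude_scores, labels))   # 4 loyal people flagged
print(false_positives(better_scores, labels))  # 0 loyal people flagged
# Both scoring schemes still flag the two genuinely risky cases.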

Privacy with security comes from using more data and machine learning to detect patterns of behavior tied to those same adjudicative criteria, or to whatever criteria a given mission uses to screen out risk, and thereby to solve the Ethan Hunt Problem: confirming who can be trusted.

Gary M. Shiffman, Ph.D., a former chief of staff of U.S. Customs and Border Protection, is an adjunct professor at Georgetown University. He is the founder of Giant Oak and Consilient and author of “The Economics of Violence: How Behavioral Science Can Transform our View of Crime, Insurgency, and Terrorism.”
