Protecting the good-guy hackers
On Sept. 26, 2016, moderator Lester Holt of NBC News became the first person in American history to utter the word “cyber” during a general election presidential debate. Then-candidate Donald Trump responded with his now-famous “400 pound” hacker comment and called the issue “the cyber.”
We are living increasingly with “the cyber.” From our pacemakers to our thermostats to our cars, the things around us are ever more computerized and networked. When this technology is insecure, we are insecure. Security research, in which experts work to identify flaws in products in order to evaluate their security and privacy properties, is a key element of securing our digital lives. Yet security research operates under a shadow of laws and policies that can discourage, or even criminalize, valuable work.
Unfortunately, even with all of the public attention around Russian interference in the 2016 election, the chill on “white hat” security researchers (read: good-guy hackers) is still not getting much attention in the national cybersecurity conversation. When the issue is actively discussed, often among those steeped in its policy nuances, the debate runs hot. We need both to turn down the temperature and to bring the conversation into more mainstream venues.
To that end, we must start answering the hard questions in security research and develop policy responses that reduce the legal ambiguity researchers face, while recognizing that computer crime is a serious challenge demanding an aggressive, if properly calibrated, law enforcement response. The Center for Democracy & Technology has begun exploring this issue in a new paper.
As the paper notes, the first question we must ask is: what legal changes are necessary to give security researchers a freer hand in research that has occasionally drawn law enforcement attention, but that most agree should be legal and encouraged?
Foremost here is reform of the Computer Fraud and Abuse Act (CFAA). Passed in the wake of the 1983 Hollywood techno-thriller WarGames, the CFAA is a federal statute criminalizing “unauthorized” access to computers.
The problem is that no one knows what “authorization” means, and prosecutors have deployed the CFAA to punish password-sharing, violations of websites’ terms of service and, of particular concern to researchers, the automated collection of information that is publicly available on the internet. We must also address the Digital Millennium Copyright Act (DMCA), which often dissuades researchers from reverse engineering devices to identify vulnerabilities, as well as overbroad export controls.
The second “hard question” to tackle is: has the Department of Justice done enough, or should it do more, to encourage prosecutorial discretion in edge cases? The tragic suicide of Aaron Swartz, who faced a wildly overzealous multicount CFAA indictment and decades in prison for the mass downloading of academic articles, highlights the serious consequences of failing to find more rational solutions.
We must also look at the recent explosion in “bug bounty” programs, which reward independent vulnerability hunters for disclosing bugs to participating vendors. This discussion should include vendors that already have formal vulnerability disclosure programs in place, and we must ask whether there are best practices both companies and security researchers should adopt.
Finally, we need to take on arguably the hardest question in security research: are there research activities that are simply beyond the pale, even if they result in the disclosure of important computer flaws? For instance, is it ever okay to try to access the avionics equipment on a plane in flight to prove it is possible? Or, to show that a dating app can be used by stalkers, may a researcher demonstrate such stalking on an unwitting user?
While these scenarios may seem deeply unsavory, they are good examples of flaws we would certainly want to know about, and to fix, so that they cannot be exploited for harm. A clear set of norms or guidelines could allow us to find even these kinds of flaws while minimizing the risks involved.
These are the conversations we look forward to leading and engaging others in. They will not be easy discussions, but they are necessary if we want to live in a more cybersecure world. As the recent presidential election shows, the security and integrity of the American political system is inextricably linked to computers and our digital lives.
Our privacy and civil liberties live or die based on the security of our computing devices. The folks on the front lines of that fight are the security researchers who excavate flaws in our code and get them fixed. We hope all can agree that this work should be encouraged, and we welcome a new conversation about how to do so.
Nuala O’Connor is president and CEO of the Center for Democracy & Technology, a nonprofit that advocates for global digital rights.
The views expressed by contributors are their own and are not the views of The Hill.