Erin Fleury, MJLST Managing Editor
Earlier this year, the general public became acutely aware of the Heartbleed security bug, which exposed vast amounts of supposedly encrypted data from websites using OpenSSL (estimated to affect at least 66% of active websites). Software companies are still patching the vulnerability, but many servers remain exposed, and victims may continue to suffer from these data breaches long after they occurred. While Heartbleed, and the fact that it went undetected for nearly two years, is troubling by itself, it also raises concerns about the scope of the Computer Fraud and Abuse Act (CFAA), 18 U.S.C. § 1030, and its application to white-hat hackers.
The CFAA prohibits “intentionally accessing a computer without authorization or exceed[ing] authorized access” and thereby “obtain[ing] information from a protected computer.” See § 1030(a)(2). The Heartbleed bug appears to operate by doing exactly that. In very simplistic terms, OpenSSL’s heartbeat feature lets one computer ask another to echo back a small piece of data, but Heartbleed exploits a missing length check to make the responding system send back far more of its memory than was intended. Of course, the CFAA is meant to target people who use exploits like this to gain unauthorized access to computer systems, so using Heartbleed to harvest data would seem to fall squarely within the statute’s scope and purpose.
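To make that mechanism concrete, the short Python sketch below simulates the flaw. It is not OpenSSL’s actual code; the handler name and the “leftover” memory contents are invented for illustration. The essence is that the handler echoes back as many bytes as the request claims to contain, rather than as many as it actually contains.

```python
# Hypothetical simulation of a Heartbleed-style flaw (not OpenSSL's code).
# Leftover data from earlier activity that happens to sit in the server's
# reusable memory buffer; contents are invented for illustration.
LEFTOVER = b"user=alice&password=hunter2;token=9f8e7d2c"

def vulnerable_heartbeat(claimed_length: int, payload: bytes) -> bytes:
    """Simulated heartbeat handler containing the flaw."""
    # The request payload is written into a buffer that still holds
    # leftover data from previous requests.
    memory = bytearray(payload + LEFTOVER)
    # The bug: echo back as many bytes as the client *claims* it sent,
    # without checking that claim against len(payload).
    return bytes(memory[:claimed_length])

# An honest request: the claimed length matches the four bytes actually sent.
print(vulnerable_heartbeat(4, b"ping"))    # b'ping'

# A Heartbleed-style request: claim far more than was sent and receive
# whatever happened to sit next to the payload in memory.
print(vulnerable_heartbeat(40, b"ping"))   # b'ping' followed by leaked data
```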
The real problem arises, however, for people interested in independently (i.e., without authorization) testing a system to determine whether it is still susceptible to Heartbleed or other vulnerabilities. With Heartbleed, the most efficient way to test for the bug is to send an exploitive request and see whether the system sends back extra information. This too would seem to fall squarely within the ambit of the CFAA and could potentially violate federal law. Even testing a website that has already been patched so that it is no longer vulnerable could potentially be a violation under § 1030(b) (which reaches attempts to commit a violation of subsection (a)).
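Building on the hypothetical sketch above, that test amounts to over-claiming and measuring the reply: if more bytes come back than were sent, the flaw is still present.

```python
# Hypothetical detection check, reusing vulnerable_heartbeat() from the
# sketch above: a reply longer than the payload means the over-read still works.
payload = b"ping"
reply = vulnerable_heartbeat(len(payload) + 32, payload)
print("still vulnerable" if len(reply) > len(payload) else "appears patched")
```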
At first glance, it might seem logical that no one should be attempting to access systems they do not own, but there are a number of non-nefarious reasons someone might do so. Customers may simply wish to determine whether a website is secure before entering their personal information. More importantly, independent hackers can play a significant role in finding system weaknesses (and thereby helping the owner make the system more secure), as evidenced by the fact that many major companies now offer bug-bounty programs to independent hackers. Yet those who do not follow the parameters of a bounty program, or who discover flaws in systems without such a program, may be liable under the CFAA because of their lack of authorization. Furthermore, the CFAA has been widely criticized as overly broad because, among other reasons, it does not fully distinguish among the reasons one might “exceed authorization.” Relatively minor infractions (such as violating MySpace’s Terms of Service) may be enough to violate federal law, and the penalties for fairly benevolent violations (such as exploiting a security flaw but only reporting it to the media rather than using the obtained information for personal gain) can seem wildly disproportionate to the offense.
Nor are these security concerns limited to websites or the theft of data; other types of systems could pose far greater safety risks. The CFAA’s definition of a “protected computer” in § 1030(e)(1)-(2) applies to a wide range of electronics, and its reach will only expand as computers are integrated into more and more of the items we use every day. In efforts to find security weaknesses, researchers have successfully hacked and taken control of implantable medical devices and even automobiles. Merely checking a website to see whether it is still susceptible to Heartbleed is unlikely to draw the attention of the FBI, so in many ways these concerns can be dismissed on the ground that broad enforcement is unlikely; many of the examples cited above also involved researchers who had authorization. Yet the CFAA’s scope is still concerning because of the chilling effect it could have on research and overall security, whether by dissuading entities from testing systems for weaknesses without permission or, perhaps more likely, by discouraging individuals from disclosing those weaknesses when they find them.
Without question, our laws should punish those who use exploits (such as Heartbleed) to steal valuable information or otherwise harm people. But the CFAA also seems to apply with great force to unauthorized access that ultimately serves a tremendous societal good and should be somewhat excusable, if not encouraged. The majority of the CFAA was written decades ago and, while there have been recent efforts to amend it, it remains a highly controversial law. Issues surrounding cybersecurity are unlikely to disappear anytime soon, and it will be interesting to see how courts and lawmakers respond to these challenges in an evolving landscape.