Privacy

Google Fined for GDPR Non-Compliance, Consumers May Not Like the Price

Julia Lisi, MJLST Staffer

On January 14, 2019, France’s Data Protection Authority (“DPA”) fined Google 50 million euros in one of the first enforcement actions taken under the EU’s General Data Protection Regulation (“GDPR”). The GDPR, which took effect in May 2018, sent many U.S. companies scrambling to update their privacy policies. You, as a consumer, probably had to re-accept updated privacy policies from your social media accounts, phones, and many other data-based products. The fine makes Google the first U.S. tech giant to face GDPR enforcement. While 50 million euros (roughly 57 million dollars) may sound hefty, it is relatively small compared to the maximum fine allowed under the GDPR, which, for Google, would be roughly five billion dollars.
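The rough arithmetic behind that ceiling is easy to verify. The sketch below assumes Article 83(5)'s formula (the greater of 20 million euros or 4% of worldwide annual turnover) and an illustrative, assumed turnover of 120 billion euros, in the neighborhood of Alphabet's reported annual revenue at the time:

```python
# Sketch of the GDPR Article 83(5) maximum-fine calculation.
# The 120-billion-euro turnover figure is an assumption for illustration.

def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Return the Article 83(5) ceiling: the greater of EUR 20M or 4% of turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

assumed_turnover = 120_000_000_000  # euros; illustrative figure only
ceiling = gdpr_max_fine(assumed_turnover)

print(ceiling)                  # about 4.8 billion euros for a firm this size
print(50_000_000 / ceiling)     # the actual fine is roughly 1% of the ceiling
```

For a small firm the 20-million-euro floor governs instead, which is why the statute phrases the cap as "whichever is higher."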

The French fine clarifies a small portion of the uncertainty surrounding GDPR enforcement. In particular, the French DPA rejected Google’s methods for obtaining consumers’ consent to its Privacy Policy and Terms of Service. The French DPA took issue with (1) the numerous steps users faced before they could opt out of Google’s data collection, (2) the pre-checked box indicating users’ consent, and (3) users’ inability to consent to individual data processes, which instead required wholesale acceptance of both Google’s Privacy Policy and Terms of Service.

The three practices rejected by the French DPA are commonplace in the lives of many consumers. Imagine turning on your new phone for the first time and scrolling through seemingly endless provisions detailing exactly how your daily phone use is tracked and processed by both the phone manufacturer and your cell provider. Imagine if you had to then scroll through the same thing for each major app on your phone. You would have much more control over your digital footprint, but would you spend hours reading each provision of the numerous privacy policies?

Google’s fine could mark the beginning of sweeping changes to the data privacy landscape. What once took a matter of seconds—e.g., checking one box consenting to Terms of Service—could now take hours. If Google’s fine sets a precedent, consumers could face another wave of re-consenting to data use policies, as other companies fall in line with the GDPR’s standards. While data privacy advocates may applaud the fine as the dawn of a new day, it is unclear how the average consumer will react when faced with an in-depth consent process.


Access Denied: Fifth Amendment Invoked to Prevent Law Enforcement from Accessing Phone

Hunter Moss, MJLST Staffer 

Mobile phones are an inescapable part of modern life. Research shows that 95% of Americans carry some sort of cell phone, while 77% own smartphones. These devices contain all sorts of personal information, including call logs, emails, pictures, text messages, and access to social networks. It is unsurprising that the rise of mobile phone use has coincided with increased interest from law enforcement. Gaining access to a phone could provide a monumental breakthrough in a criminal investigation.

Just as law enforcement is eager to rummage through a suspect’s phone, many individuals hope to keep personal data secret from prying eyes. Smartphone developers use a process called encryption to ensure their consumers’ data is kept private. In short, encryption is a process of encoding data and making it inaccessible without an encryption key. Manufacturers have come under increasing pressure to release encryption keys to law enforcement conducting criminal investigations. Most notable was the confrontation between the F.B.I. and Apple in the wake of the San Bernardino shooting. A magistrate judge ordered Apple to decrypt the shooter’s phone. The tech giant refused, stating that granting the government such a power would undermine the security and privacy of all cellphone users.
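As a rough illustration of that idea, a symmetric cipher scrambles data so that only someone holding the key can recover it. The toy XOR cipher below is purely illustrative (real devices use vetted algorithms such as AES; XOR with a repeating key is trivially breakable):

```python
# Toy illustration of symmetric encryption: the same key both encrypts
# and decrypts, and without it the ciphertext is unreadable.
# Illustrative only -- NOT real-world cryptography.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key (the operation is its own inverse)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret-key"
plaintext = b"call logs, texts, photos"

ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext                    # scrambled without the key
assert xor_cipher(ciphertext, key) == plaintext   # same key recovers the data
```

The asymmetry of the legal fight follows from the technical one: without the key (or a flaw in the scheme), the ciphertext alone is useless to an investigator.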

The legal theory of a right to privacy has served as the foundation of defenses against government requests for cellphone data. These defenses have been couched in the Fourth Amendment, the constitutional protection guaranteeing security against unreasonable searches. In a ruling that will have profound implications for the future of law enforcement, that protection was first extended to mobile phone data when the Supreme Court decided Carpenter v. United States in early 2018. The holding in Carpenter requires the government to obtain a warrant before acquiring mobile phone location records from service providers.

A case from Florida is the most recent iteration of a novel legal theory to shield smartphone users from government encroachment. While the Carpenter decision relied on the Fourth Amendment’s right to privacy, last week’s ruling by the Florida Court of Appeals invokes the Fifth Amendment to bar law enforcement agents from compelling suspects to enter their passcodes to unlock their phones. This evolution of the Fifth Amendment was grounds for the court to quash a juvenile court’s order for the defendant to reveal his password, which would have exposed the contents of his phone.

The Fifth Amendment is the constitutional protection from self-incrimination. A suspect in a criminal case cannot be compelled to communicate inculpatory evidence. Because a phone’s passcode is something that we, as owners, “know,” being forced to divulge it would be akin to being forced to testify against oneself. While mobile phone users might feel relieved that this development of Fifth Amendment doctrine is expanding privacy protections, smartphone owners shouldn’t be too quick to celebrate. While the Fifth Amendment might protect what you “know,” it does not protect what you “are.” Several courts have recognized that the police may unlock a phone using a suspect’s fingerprint or facial recognition software. Given that fingerprinting and mug shots are already routine procedures during an arrest, courts have been reluctant to view unlocking a phone in either manner as an additional burden on suspects.

Technology has seen some incredible advancements over the last few years, particularly in the field of mobile devices. Some have even theorized that our phones are becoming extensions of our minds. The legal framework of constitutional protections supporting the right to privacy and the right against self-incrimination has trailed the pace of these developments. The new string of cases extending the Fifth Amendment to cellphone searches is an important step in the right direction. As phones have become a ubiquitous part of modern life, containing much of our most private and intimate information, it is clear that the law must continue to evolve to ensure they are safeguarded from unwanted and unlimited government intrusion.


Carpenter Might Unite a Divided Court

Ellen Levis, MJLST Staffer


In late 2010, there was a robbery at a Radio Shack in Detroit. A few days later: a stickup at a T-Mobile store. A few more months, a few more robberies, until law enforcement noticed a pattern and eventually, in April 2011, the FBI arrested four men under suspicion of violating the Hobbs Act (that is, committing robberies that affect interstate commerce).

One of the men confessed to the crimes and gave the FBI his cell phone number and the numbers of the other participants. The FBI used this information to obtain “transactional records” for each of the phone numbers under orders that magistrate judges granted pursuant to the Stored Communications Act. Based on this “cell-site evidence,” the government charged Timothy Carpenter with a slew of offenses. At trial, Carpenter moved to suppress the government’s cell-site evidence, which spanned 127 days of location tracking and placed his phone at 12,898 locations. The district court denied the motion to suppress; Carpenter was convicted and sentenced to 116 years in prison. The Sixth Circuit affirmed the district court’s decision on appeal.

In November 2017, the Supreme Court heard what might be the most important privacy case of this generation. Carpenter v. United States asks the Supreme Court to consider whether the government, without a warrant, can track a person’s movements via geolocation data beamed out by a cell phone.

Whatever they ultimately decide, the Justices seemed to present a uniquely united front in their questioning at oral arguments, with both Sonia Sotomayor and Neil Gorsuch hinting that warrantless cell-site evidence searches are incompatible with the protections promised by the Fourth Amendment.  

In United States v. Jones, 132 S. Ct. 945 (2012), Sotomayor wrote a prescient concurring analysis of the challenge facing the Court as it attempts to translate the Fourth Amendment into the digital age. Sotomayor expressed doubt that “people would accept without complaint the warrantless disclosure to the Government of a list of every Web site they had visited in the last week, or month, or year.” And further, she “would not assume that all information voluntarily disclosed to some member of the public for a limited purpose is, for that reason alone, disentitled to Fourth Amendment protection.”

In the Carpenter oral argument, Sotomayor elaborated on the claims she made in Jones, giving concrete examples of how extensively Americans use their cellphones and how invasive cell phone tracking could become. “I know that most young people have the phones in the bed with them. . . . I know people who take phones into public restrooms. They take them with them everywhere. It’s an appendage now for some people . . . Why is it okay to use the signals that phone is using from that person’s bedroom, made accessible to law enforcement without probable cause?”

Gorsuch, on the other hand, drilled down on a property-rights theory of the Fourth Amendment, questioning whether a person had a property interest in the data they created. He stated,  “it seems like [the] whole argument boils down to — if we get it from a third party we’re okay, regardless of property interest, regardless of anything else.” And he continued, “John Adams said one of the reasons for the war was the use by the government of third parties to obtain information forced them to help as their snitches and snoops. Why isn’t this argument exactly what the framers were concerned about?”


New Data Protection Regulation in European Union Could have Global Ramifications

Kevin Cunningham, MJLST Staffer

For as long as the commercial web has existed, companies have monetized personal information by mining data. On May 25, however, individuals in the 28 member countries of the European Union will have the ability to opt into the data collection practiced by so many data companies. The General Data Protection Regulation (GDPR), agreed upon by the European Parliament and Council in April 2016, will replace Data Protection Directive 95/46/EC as the primary law regulating how companies protect the personal data of individuals in the European Union. The requirements of the new GDPR aim to create more consistent protection of consumer and personal data across the European Union.

Publishers, banks, universities, data and technology companies, ad-tech companies, devices, and applications operating in the European Union will have to comply with the privacy and data protection requirements of the GDPR or be subject to heavy fines (up to four percent of annual global revenue) and penalties. The requirements include: obtaining subjects’ consent for data processing; anonymizing collected data to protect privacy; providing data breach notifications within 72 hours of becoming aware of a breach; safely handling the transfer of data across borders; and requiring certain companies to appoint a data protection officer to oversee compliance with the Regulation. Likewise, the European Commission posted on its website that a social network platform will have to honor user requests to delete photos and inform search engines and other websites that used the photos that the images should be removed. This baseline set of standards for companies handling data in the EU will better protect the processing and movement of personal data.
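To make the 72-hour requirement concrete, here is a minimal sketch, assuming only that a controller tracks the Article 33 window from the moment it becomes aware of a breach:

```python
# Sketch of the GDPR's 72-hour breach-notification window (Article 33).
# A controller must notify its supervisory authority within this window
# of becoming aware of a breach, where feasible.
from datetime import datetime, timedelta

NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(became_aware: datetime) -> datetime:
    """Latest time the controller may notify its supervisory authority."""
    return became_aware + NOTIFICATION_WINDOW

aware = datetime(2018, 5, 25, 9, 0)           # breach discovered at 9:00 a.m.
print(notification_deadline(aware))           # 2018-05-28 09:00:00
```

Simple as the arithmetic is, the clock starts at awareness, not at the breach itself, which is why internal detection and escalation procedures matter so much for compliance.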

Companies will have to be clear and concise about the collection and use of personally identifiable information such as name, home address, location data, or IP address. Consumers will have the right to access the data that companies store about them, as well as the right to correct false or inaccurate information. Moreover, the GDPR imposes stricter conditions on the collection of ‘sensitive data’ such as race, political affiliation, sexual orientation, and religion. The GDPR will still allow businesses to process personally identifiable information without consumer consent for legitimate business interests, which include direct marketing through mail, email, or online ads. Still, companies will have to account for the rights and interests of the individuals whose data they process when relying on that basis.

The change to European law could have global ramifications. Any company that markets goods or services to EU residents will be subject to the GDPR. Many of the giant tech companies that collect data, such as Google and Facebook, look to keep uniform systems and have either revamped or announced changes to privacy settings to be more user-friendly.


Car Wreck: Data Breach at Uber Underscores Legal Dangers of Cybersecurity Failures

Matthew McCord, MJLST Staffer

This past week, Uber’s annus horribilis and the ever-increasing reminders of corporate cybersecurity’s persistent relevance reached singularity. Uber, once praised as a transformative savior of the economy by technology-minded businesses and government officials for its effective service delivery model and capitalization on an exponentially expanding internet, has found itself impaled on the sword that spurred its meteoric rise. Uber recently disclosed that hackers were able to access the personal information of 57 million riders and drivers last year. It then paid the hackers $100,000 to destroy the compromised data, and failed to inform its users or sector regulators of the breach at the time. The hackers apparently compromised a trove of personally identifiable information, including names, telephone numbers, email addresses, and driver’s licenses of users and drivers, through a flaw in the company’s GitHub security.

Uber, a Delaware corporation, is required to present notice of a data breach in the “most expedient time possible and without unreasonable delay” to affected customers per Delaware statutes. Most other states have adopted similar legislation which affects companies doing business in those states, which could allow those regulators and customers to bring actions against the company. By allegedly failing to provide timely notification, Uber opened itself to the parade of announced investigations from regulators into the breach: the United Kingdom’s Information Commissioner, for instance, has threatened fines following an inquiry, and U.S. state regulators are similarly considering investigations and regulatory action.

Though regulatory action is not a certainty, the possibility of legal action and the dangers of lost reputation are all too real. Anthem, a health insurer subject to far stricter federal regulation under HIPAA and its various amendments, lost $115 million to the settlement of a class action suit over its infamous data breach. Short-term impacts on reputation rattle companies (especially those that respond less vigorously), with Target having seen its profits fall by almost 50% in the fourth quarter of 2013 after its data breach. The cost of correcting poor data security on a technical level also weighs on companies.

This latest breach underscores key problems facing businesses in the continuing era of exponential digital innovation. The first, most practical problem is the seriousness with which companies approach information security governance. An increasing number of data sources and applications, and the increasing complexity of systems and vectors, multiply the potential avenues of attack. One decade ago, most companies used at least somewhat isolated, internal systems to handle a comparatively small amount of data and operations. Now, risk assessments must reflect the sheer quantity of both internal and external devices touching networks, the innumerable ways services interact with one another (and thus expose each service and its data to possible breaches), and the increasing competence of organized actors in breaching digital defenses. Information security and information governance are no longer niches relegated to one silo of a company; they necessarily permeate nearly every business area of an enterprise. Skimping on investment in adequate infrastructure widens the regulatory and civil liability of even the most traditional companies for data breaches, as Uber very likely will find.

Paying off data hostage-takers and thieves is a particularly concerning practice, especially from a large corporation. This simply creates a perverse incentive for malignant actors to continue trying to siphon off and extort data from businesses and individuals alike. These actors have grown from operations of small, disorganized groups and individuals to organized criminal groups and rogue states allegedly seeking to circumvent sanctions to fund their regimes. Acquiescing to the demands of these actors invites the conga line of serious breaches to continue and intensify into the future.

Invoking a new federal legislative scheme is a much-discussed and little-acted-upon solution for the disparate and uncoordinated regulation of business data practices. Though 18 U.S.C. § 1030 provides criminal penalties for the bad actors, there is little federal regulation or legislation addressing liability or minimum standards for breached PII-handling companies generally. The federal government has left the bulk of this work to the states, as it leaves much of business regulation. However, internet services are recognized as critical infrastructure by the Department of Homeland Security under Presidential Policy Directive 21. Data breaches and other cyber attacks result in data and intellectual property theft costing the global economy hundreds of billions of dollars annually, and they can disrupt government and critical private sector operations, like the provision of utilities, food, and essential services, making cybersecurity a definite critical national risk requiring a coordinated response. Carefully crafted legislation authorizing federal coordination of cybersecurity best practices, paired with adequately punitive federal action for negligent information governance, would incentivize the private and public sectors to take better care of sensitive information, reducing the substantial potential for serious attacks to compromise the nation’s infrastructure and the economic well-being of its citizens and industries.


Should You Worry That ISPs Can Sell Your Browsing Data?

Joshua Wold, Article Editor

Congress recently voted, through the Congressional Review Act, to overturn the FCC’s October 2016 rules Protecting the Privacy of Customers of Broadband and Other Telecommunications Services. As a result, those rules will likely never go into effect. Had the rules been implemented, they would have required Internet Service Providers (ISPs) to get customer permission before making certain uses of customer data.

Some commentators, looking at the scope of the rules relative to the internet ecosystem as a whole, and at the fact that the rules hadn’t yet taken effect, thought that this probably wouldn’t have a huge impact on privacy. Orin Kerr suggested that the overruling of the privacy regulations was unlikely to change what ISPs would do with data, because other laws constrain them. Others, however, were less sanguine. The Verge quoted Jeff Chester of the Center for Digital Democracy as saying “For the foreseeable future, we’re going to be living in a commercial surveillance state.”

While the specific context of these privacy regulations is new (the FCC couldn’t regulate ISPs until 2015, when it defined them as telecommunications providers instead of information services), debates over privacy are not. In 2013, MJLST published Adam Thierer’s Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle. In it, the author argues that privacy threats (as well as many other threats from technological advancement) are generally exaggerated. Thierer then lays out a four-part analytic framework for weighing regulation, calling on regulators and politicians to identify clear harms, engage in cost-benefit analysis, consider more permissive regulation, and then evaluate and measure the outcomes of their choices.

Given Minnesota’s response to Congress’s action, the debate over privacy and regulation of ISPs is unlikely to end soon. Other states may consider similar restrictions, or future political changes could lead to a swing back toward regulation. Or, the current movement toward less privacy regulation could continue. In any event, Thierer’s piece, and particularly his framework, may be useful to those wishing to evaluate regulatory policy as ISP regulation progresses.

For a different perspective on ISP regulation, see Paul Gaus’s student note, upcoming in Volume 19, Issue 1. That article will focus on presenting several arguments in favor of regulating ISPs’ privacy practices, and will be a thoughtful contribution to the discussion about privacy in today’s internet.


Broadening the Ethical Concerns of Unauthorized Copyright and Rights of Publicity Usage: Do We Need More Acronyms?

Travis Waller, MJLST Managing Editor

In 2013, Prof. Michael Murray of Valparaiso University School of Law published an article with MJLST entitled “DIOS MIO—The KISS Principle of the Ethical Approach to Copyright and Right of Publicity Law.” (For those of you unfamiliar with the acronyms, as I was before reviewing this article, DIOS MIO stands for “Don’t Include Other’s Stuff or Modify It Obviously,” just as KISS stands for “Keep It Simple, Stupid.”) The article explored an ethical approach to using copyrighted material or celebrity likeness that has developed over the last decade as several court cases merged certain qualities of the two regimes.

The general principle embodied here is that current case law tends to allow transformative uses of either a celebrity’s likeness or a copyrighted work, that is, a use of the image or work that essentially provides a new or “transformative” take on the original. At the other extreme, the law generally allows individuals to use a celebrity’s likeness if the usage is not similar enough to the actual celebrity to be identifiable, or a copyrighted work if the element used is scènes à faire or a de minimis usage. Ergo, prudent advice to a would-be user of such material may, theoretically, be summed up as “seek first to create and not to copy or exploit, and create new expression by obvious modification of the old expression and content,” or DIOS MIO/KISS for the acronym-savvy.

The reason I revisit this issue is not to advocate for this framework, but rather to illustrate what unusual bedfellows the regimes of copyright and “rights of publicity” are. As a matter of policy in the United States, copyright is a federal regime dedicated to the utilitarian goal of “[p]romot[ing] the progress of science,” while rights of publicity laws are state-level protections with roots going back to the Victorian-era Warren and Brandeis publication “The Right to Privacy” (and perhaps even further back). That is to say, the “right to publicity” is not typically thought of as a strictly utilitarian regime at all, but rather as one dedicated to either the protection of an individual’s economic interest in their likeness (a labor argument) or the protection of that individual’s privacy (a privacy tort argument).

My point is this: if, in theory, copyright is meant to “promote science,” while the right to publicity is intended to protect either an individual’s right to privacy or their right to profit from their own image, is it appropriate to consider each regime under the age-old lens of “thou shalt not appropriate”? I think not.

Perhaps a more nuanced resolution to the ethical quandary would be for a would-be user of the image or work to consider the purpose of each regime and ask whether the usage would offend the policy goals enshrined therein. That is, to determine, for copyright, whether the use of a work will add to the collective library of human understanding and progress, and, for publicity, whether the use of a celebrity’s likeness will infringe upon that individual’s right to privacy or unjustly deprive the individual of the ability to profit from their own well-cultivated image.

Or maybe just ask permission.


Confusion Continues After Spokeo

Paul Gaus, MJLST Staffer

Many observers hoped the Supreme Court’s decision in Spokeo v. Robins would bring clarity to whether plaintiffs can establish Article III standing for claims based on future harm from data breaches. John Biglow explored the issue prior to the Supreme Court’s decision in his note It Stands to Reason: An Argument for Article III Standing Based on the Threat of Future Harm in Data Breach Litigation. Those optimistic that the Supreme Court would expand access for individuals seeking to litigate their privacy interests were disappointed.

Spokeo is a people search engine that generates publicly accessible online profiles of individuals (it has also been the subject of previous FTC data privacy enforcement actions). The plaintiff claimed Spokeo disseminated a false report on him, hampering his ability to find employment. Although the Ninth Circuit held the plaintiff suffered “concrete” and “particularized” harm, the Supreme Court disagreed, concluding that the Ninth Circuit’s analysis addressed only the particularization requirement. The Supreme Court remanded the matter to the Ninth Circuit, casting doubt on whether the plaintiff suffered concrete harm. Spokeo violated the Fair Credit Reporting Act, but the Supreme Court characterized the false report as a bare procedural harm, insufficient for Article III standing.

Already, the Circuits are split on how Spokeo affects consumer data protection lawsuits. In Braitberg v. Charter Communications, the Eighth Circuit held that a cable company’s failure to destroy a former customer’s personally identifiable information was a bare procedural harm akin to that in Spokeo, despite the defendant’s clear violation of the Cable Act. By contrast, in Church v. Accretive Health, the Eleventh Circuit held that a plaintiff did have standing when she failed to receive disclosures about her debt from her creditor under the Fair Debt Collection Practices Act.

Many observers consider Spokeo an adverse result for consumers seeking to litigate their privacy interests. By punting on the issue, the Supreme Court allowed the Circuits’ divergent approaches to Article III standing in class action privacy suits to continue.


Did the Warriors Commit a Flagrant Privacy Foul?

Paul Gaus, MJLST Staffer

Fans of the National Basketball Association (NBA) know the Golden State Warriors for the team’s offensive exploits on the hardwood. The Warriors boast the NBA’s top offense at nearly 120 points per game. However, earlier this year, events in a different type of court prompted the Warriors to play some defense. On August 29, 2016, a class action suit filed in the Northern District of California alleged that the Warriors, along with co-defendants Sonic Notify Inc. and Yinzcam, Inc., violated the Electronic Communications Privacy Act (18 U.S.C. §§ 2510 et seq.).

Satchell v. Sonic Notify, Inc., et al., focuses on the team’s mobile app. The Warriors partnered with the two co-defendants to create an app based on beacon technology. The problem, as put forth in the complaint, is that the beacon technology the co-defendants employed used the microphone embedded in the plaintiff’s smartphone to listen for nearby beacons. The complaint alleges this enabled the Warriors to access the plaintiff’s conversations without her consent.

The use of beacon technology is heralded in the business world as a revolutionary mechanism to connect consumers to the products they seek. Retailers, major sports organizations, and airlines regularly use beacons to connect with consumers. Traditional beacon technology, however, is based on Bluetooth: according to the InfoSec Institute, beacons broadcast signals, and mobile apps gather data on the basis of the Bluetooth signals they receive. This enables targeted advertising on smartphones.
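A hypothetical sketch of that flow: the app hears a beacon’s identifier (an iBeacon-style UUID/major/minor triplet) and looks up the ad tied to that spot. All identifiers and offers below are invented for illustration:

```python
# Hypothetical mapping from beacon identifiers to targeted ads.
# UUID prefixes, major/minor values, and offers are all invented.
OFFERS = {
    ("f7826da6", 1, 101): "20% off courtside merchandise",
    ("f7826da6", 1, 102): "Concession stand combo deal",
}

def handle_beacon(uuid_prefix, major, minor):
    """Return the ad tied to this beacon, or None if it is unrecognized."""
    return OFFERS.get((uuid_prefix, major, minor))

# A phone near beacon (1, 101) would surface the merchandise offer.
print(handle_beacon("f7826da6", 1, 101))  # 20% off courtside merchandise
```

The key point for the lawsuit is what the phone must sense to perform this lookup: a Bluetooth beacon requires only a radio scan, while an audio beacon requires an open microphone.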

However, the complaint in Satchell maintains the defendants relied on a different kind of beacon technology: audio beacon technology. In contrast to Bluetooth beacons, audio beacons rely on sounds. To function, audio beacons must continuously listen for audio signals through the smartphone user’s microphone. The Warriors app therefore permitted the co-defendants to listen to the plaintiff’s private conversations on her smartphone, allegedly violating her reasonable expectation of privacy.

While the Warriors continue to rack up wins on the court, Satchell has yet to tip off. As of December 5, 2016, the matter remains in the summary judgment phase.


The GIF That Keeps on Giving: The Problem of Dealing with Incidental Findings in Genetic Research.

Angela Fralish, MJLST Invited Blogger

The ability to sequence a whole genome presents a tremendous opportunity to improve medical care in modern society. We are now able to prepare for, and may soon circumvent, genes associated with conditions such as Alzheimer’s disease, breast cancer, and embryonic abnormalities. These advancements hold great promise and suggest many new ways of looking at relationships in human subject research.

A 2008 National Institutes of Health article, The Law of Incidental Findings in Human Subjects Research, discussed how modern technology has outpaced the capacity of human subject researchers to receive and interpret data responsibly. Disclosure of incidental findings, “data [results] gleaned from medical procedures or laboratory tests that were beyond the aims or goals of the particular laboratory test or medical procedure,” is particularly challenging with new genetic testing. Non-paternity, for example, which has been found in up to 30% of participants in some studies, forces researchers to decide how to tell participants that they are not biologically related to a parent or child. Such a finding could not only affect inheritance, custody, and adoption rights, but can also cause lifelong emotional harm. Modern researchers must be equipped to handle many new psychosocial and emotional variables. So where should a researcher look to determine the proper way to manage these “incidentalomas”?

Perspectives, expectations, and interests dictating policies governing incidental finding management are diverse and inconsistent. Some researchers advocate an absolute ban on disclosing findings of non-paternity because of the potential harm. Others argue that not revealing misattributed paternity results in a lifetime of living with an inaccurate family health history. These scenarios can be difficult for all involved parties.

Legal responsibility for disclosure was indirectly addressed in Ande v. Rock in 2001, when the court held that parents did not have property rights in research results that identified spina bifida in their child. In 2016, an incidental finding of a genetic mutation led a family to Mayo Clinic for a second opinion. The family was initially told that a gene mutation related to sudden cardiac death caused their 13-year-old son to die in his sleep, and the mutation was also identified in 20 family members. Mayo Clinic revealed the gene was misdiagnosed, but the decedent’s brother had already had a defibrillator implanted and had received two inappropriate shocks to his otherwise normal and healthy heart. Establishing guidance for the scope and limits of disclosure of incidental findings is a complex process.

Under 45 C.F.R. §§ 46.111 and 46.116, also known as the Common Rule, researchers in all human subject research must discuss any risks or benefits to participants during informed consent. However, there is debate over classifying incidental findings as a risk or benefit, because liability can attach. Certainly the parents in Ande v. Rock would have viewed the researchers’ decision not to disclose positive test results for spina bifida as a risk or benefit that should have been discussed at the onset of their four-year involvement. On the other hand, as in the Mayo Clinic example above, is a misdiagnosed cardiac gene mutation a benefit or a risk? The answers to these questions are highly subjective.

The Presidential Commission for the Study of Bioethical Issues has suggested 17 ethical guidelines, which include discussing the risks and benefits of incidental finding disclosures with research participants. The Commission’s principles are the only guidelines currently addressing incidental findings. There is a desperate need for solid legal guidance on disclosing incidental findings. It is not an easy task, but the law needs to quickly firm up a foundation for appropriate disclosure of incidental findings.