Social Media

Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that, per TikTok’s terms of service, users must be at least 13 to use the platform and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article on the bill published by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”

Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice notes that teenagers would no longer be able to get book recommendations from the algorithm on Goodreads or receive additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate, algorithm-free versions of their platforms for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have noted that some platforms actually use algorithms to present appropriate content to minors; TikTok, for example, has begun using its algorithms to remove videos that violate platform rules.

What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked social media laws in both Texas and Florida last year; both laws were challenged as violating the First Amendment.

Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee responded to a criticism from the bill’s opponents by exempting algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF 3922, under consideration in the Senate.

It will be interesting to see whether legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its impact on those under the age of 18, who would no longer be able to use the optimized, personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


TikTok Settles in Class Action Data Privacy Lawsuit – Will Pay $92 Million Settlement

Sarah Nelson, MJLST Staffer

On November 15, 2021, TikTok users received the following notification within the app: “Class Action Settlement Notice: U.S. residents who used Tik Tok before 01 OCT 2021 may be eligible for a class settlement payment – visit https://www.TikTokDataPrivacySettlement.com for details.” The notification was immediately met with skepticism, with users taking to Twitter and TikTok itself to joke about how the notification was likely a scam. However, for those familiar with TikTok’s litigation track record on data privacy, this settlement does not come as a surprise. Specifically, in 2019, TikTok – then known as Musical.ly – settled with the Federal Trade Commission over alleged violations of the Children’s Online Privacy Protection Act for $5.7 million. This new settlement is notable for the size of the payout and for what it tells us about the current state of data privacy and biometric data law in the United States.

Allegations in the Class Action

Twenty-one federal lawsuits against TikTok were consolidated into a single class action overseen by the United States District Court for the Northern District of Illinois. All of the named plaintiffs in the class action are from either Illinois or California, and many are minors. The class action comprises two classes: one covers TikTok users nationwide, and the other includes only TikTok users who are residents of Illinois.

In the suit, plaintiffs allege TikTok improperly used their personal data. This improper use includes accusations that TikTok, without consent, shared consumer data with third parties. These third parties allegedly include companies based in China, as well as well-known companies in the United States like Google and Facebook. The class action also accuses TikTok of unlawfully using facial recognition technology and of harvesting data from draft videos – videos that users made but never officially posted. Finally, plaintiffs allege TikTok actively took steps to conceal these practices.

What State and Federal Laws Were Allegedly Violated?

On the federal law level, plaintiffs allege TikTok violated the Computer Fraud and Abuse Act (CFAA) and the Video Privacy Protection Act (VPPA). As the name suggests, the CFAA was enacted to combat computer fraud and prohibits accessing “protected computers” in the absence of authorization or beyond the scope of authorization. Here, the plaintiff-users allege TikTok went beyond the scope of authorization by secretly transmitting personal data, “including User/Device Identifiers, biometric identifiers and information, and Private Videos and Private Video Images never intended for public consumption.” As for the VPPA, the complaint alleges TikTok violated the Act when it gave “personally identifiable information” to Facebook and Google. TikTok allegedly provided Facebook and Google with information about what videos a TikTok user had watched and liked, and which TikTok content creators a user had followed.

On the state level, the entire class alleged violations of the California Comprehensive Computer Data Access and Fraud Act and of the right to privacy under the California Constitution. Interestingly, the plaintiffs within the Illinois subclass were able to allege violations of the Illinois Biometric Information Privacy Act (BIPA). Under BIPA, before collecting user biometric information, companies must inform the consumer in writing that the information is being collected and why. The company must also say how long the information will be stored and get the consumer to sign off on the collection. The complaint alleges TikTok did not provide the required notice or receive the required written consent.

Additionally, plaintiffs allege intrusion upon seclusion, unjust enrichment, and violation of both a California unfair competition law and a California false advertising law.

In settling the class action, TikTok denies any wrongdoing and maintains that this settlement is only to avoid the cost of further litigation. TikTok gave the following statement to the outlet Insider: “While we disagree with the assertions, we are pleased to have reached a settlement agreement that allows us to move forward and continue building a safe and joyful experience for the TikTok community.”

Terms of the Settlement

To be eligible for a settlement payment, a TikTok user must be a United States resident and must have used the app prior to October 2021. If an individual meets these criteria, they must submit a claim before March 1, 2022. An estimated 89 million users are eligible to receive payment. However, members of the Illinois subclass are eligible to receive six shares of the settlement, as compared to the one share for which the nationwide class is eligible. This difference is due to the added protection BIPA affords the Illinois subclass.

In addition to the payout, the settlement will require TikTok to revise its practices. Under the agreed-upon settlement reforms, TikTok will no longer mine data from draft videos, collect user biometric data unless specified in the user agreement, or use GPS data to track user location unless specified in the user agreement. TikTok also said it would no longer send or store user data outside of the United States.

All of the above settlement terms are subject to final approval by the U.S. district judge overseeing the case.

Conclusion

The lawyers representing TikTok users remarked that this settlement was “among the largest privacy-related payouts in history.” And, as noted by NPR, this settlement is similar to the one Facebook agreed to in 2020 for $650 million. It is possible the size of these settlements will push technology companies to preemptively seek out and cease practices that may violate privacy law.

It is also worth noting the added protection extended to residents of Illinois because of BIPA and its private right of action, which can be utilized even where there has not been a data breach.

Users of the TikTok app often muse about how amazingly well curated their “For You Page” – the videos that appear when you open the app and scroll without doing any particular search – seems to be. For this reason, even with potential privacy concerns, the app is hard to give up. Hopefully, users can rest a bit easier now knowing TikTok has agreed to the settlement reforms.


Whitelist for Thee, but Not for Me: Facebook File Scandals and Section 230 Solutions

Warren Sexson, MJLST Staffer

When I was in 7th grade, I convinced my parents to let me get my first social media account. Back in the stone age, that phrase was synonymous with Facebook. I never thought much about how growing up in the digital age affected me, but looking back, the cultural red flags are easy to see. So it came as no surprise to me when, this fall, the Wall Street Journal broke what has been dubbed “The Facebook Files” and reported that an internal company study showed Instagram is toxic to teen girls. While tragic, this conclusion is something many Gen-Zers and late-Millennials have known for years. The “Facebook Files,” however, contain another, perhaps even more jarring, finding: Facebook exempts many celebrities and elite influencers from its rules of conduct. This revelation demands a discussion of the legal trouble the company may find itself in and of the proposed solutions to the “whitelisting” problem.

The Wall Street Journal’s reporting describes an internal Facebook process called “whitelisting,” in which the company “exempted high-profile users from some or all of its rules, according to company documents . . . .” Those exempted span a wide range of industries and political viewpoints, from soccer megastar Neymar to Elizabeth Warren and Donald Trump (prior to January 6th). The practice put the tech giant in legal jeopardy after a whistleblower, later identified as Frances Haugen, filed a complaint with the Securities and Exchange Commission (SEC) alleging that Facebook “violated U.S. securities laws by making material misrepresentations and omissions in statements to investors and prospective investors . . . .” See 17 CFR § 240.14a-9 (enforcement provision on false or misleading statements to investors). Mark Zuckerberg himself has made statements regarding Facebook’s neutral application of standards that are directly at odds with the Facebook Files. Regardless of any potential SEC investigation, the whitelist has opened up the conversation about the need for serious reform in the big tech arena to ensure that no company can keep lists of privileged users again. All of the potential solutions deal with 47 U.S.C. § 230, known colloquially as “section 230.”

Section 230 allows big tech companies to censor content while still being treated as platforms rather than publishers (which would incur liability for what is on their websites). Specifically, § 230(c)(2)(A) provides that no “interactive computer service” shall be held liable for taking action in good faith to restrict “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable [content] . . . .” It is the last phrase, “otherwise objectionable,” that tech companies have used as justification for removing “hate speech” or “misinformation” from their platforms without incurring publisher-like liability. The desire to police such speech has led Facebook to develop stringent platform rules, which in turn created the need for whitelisting. This brings us to the first proposal: eliminating the phrase “otherwise objectionable” from section 230 itself. The proposed “Stop the Censorship Act of 2020,” brought by Republican Paul Gosar of Arizona, does just that. Proponents argue it would force tech companies to be neutral or lose liability protections. Thus, no big tech company would ever create standards stringent enough to require a “whitelist” or an exempted class, because the standard would sit close to First Amendment protections. Problem solved! However, the current governing majority has serious concerns about forced neutrality, which it fears would ignore problems of misinformation and the mental health effects of social media in the aftermath of January 6th.

Elizabeth Warren, like a recent proposal in the House Judiciary Committee, takes a different approach: breaking up big tech. Warren proposes legislation that would limit big tech companies from competing with the small businesses that use their platforms and would reverse or block mergers, such as Facebook’s purchase of Instagram. Her plan does not necessarily stop companies from keeping whitelists, but it does limit the power held by Facebook and others, which could in turn make them think twice before applying their rules unevenly. Furthermore, Warren has called for regulators to use “every tool in the toolbox” with regard to Facebook.

Third, some have claimed that Google, Facebook, and Twitter have crossed the line under existing legal doctrines to become state actors. The government, the argument goes, cannot “induce” or “encourage” private persons to do what the government itself cannot. See Norwood v. Harrison, 413 U.S. 455, 465 (1973). Because some in Congress have warned big tech executives to restrict what they see as bad content, the government has essentially co-opted the hand of industry to block out constitutionally protected speech. See Railway Employes’ Department v. Hanson, 351 U.S. 225 (1956) (finding state action despite no actual mandate by the government for action). If the Supreme Court were to adopt this reasoning, Facebook might be forced to take a First Amendment-centric approach, since its current hate speech and misinformation rules would constitute state action; whitelists would no longer be needed because companies would be blocked from policing fringe content. Finally, the perfect solution! The Court can act where Congress cannot agree. I am skeptical of this approach; needless to say, such a monumental decision would completely shift the nature of social media. While Justice Thomas has hinted at his openness to this argument, it is unclear whether the other justices will follow suit.

All in all, Congress and the Court have tools at their disposal to combat the disturbing actions taken by Facebook. Outside of the potential SEC violations, section 230 is a complicated but necessary issue Congress must confront in the coming months. “The Facebook Files” have exposed the need for systemic change in social media. What I once used to play FarmVille has become a machine with rules for me, but not for thee.


Anti-Cyberbullying Efforts Should Focus on Everyday Tragedies

by Alex Vlisides, UMN Law Student, MJLST Staff

Cyberbullying. It seems every few weeks or months, another story surfaces in the media with the same tragic narrative. A teenager was bullied, both at school and over the internet. The quiet young kid was the target of some impossibly cruel torment by their peers. Tragically, the child felt they had nowhere to turn, and took their own life.

Most recently, a 12-year-old girl from Lakeland, FL, named Rebecca Ann Sedwick jumped to her death from the roof of a factory after being bullied online for months by a group of 15 girls. The tragedy has spurred the same news narrative as the many before it, and the same calls for inadequate action. Prosecute the bullies or their parents. Blame the victim’s parents for not caring enough. Blame the school for not stepping in.

News media’s institutional bias is to cover the shocking story. The problem is that when considering policy changes to help the huge number of kids who are bullied online, these tragic stories may be the exact wrong cases to consider. Cyberbullying is not an issue that tragically surfaces every few months like a hurricane or a forest fire. It goes on every day, in virtually every middle school and high school in the country. Schools need policies crafted not just to prevent the worst, but to make things better each day.

It is incredibly important to remember students like Sedwick. But to address cyberbullying, it may be just as important to remember the more common effects of bullying: the student who stops raising their hand in class or quits a sports team or fears even going on social media sites. These things should be thought of not as potential warning signs of a tragedy, but as small tragedies themselves.

The media will never run headlines on this side of bullying. This means that policy makers and those advocating for change must correct for this bias, changing the narrative and agenda of cyberbullying to include the common tragedies. The issue is complex, emotional and ever-changing. Though it may not make for breaking news, meaningful change will honor students like Rebecca Ann Sedwick, while protecting students who continue to face cyberbullying every day.


Discussing the Legal Job Market Online: Optimism, Observation, and Reform

by Elliot Ferrell, UMN Law Student, MJLST Staff

The average law student incurs $125,000 of debt and pays almost twice as much in tuition as a student did in 2001. Law students are understandably concerned with the legal market’s job prospects, and many are vocal about it. Students are not the only ones voicing their concerns, as lawyers (employed and unemployed), professors, employers, and businesspeople add their opinions and observations to the discourse as well. A common theme is to decry the rise of tuition costs and debt and the fall of enrollment and job openings.

The Minnesota Journal of Law, Science & Technology article You’re Doing It Wrong: How the Anti-Law School Scam Blogging Movement Can Shape the Legal Profession describes this dialogue with a sense of optimism. According to the article, unemployed and underemployed lawyers contribute to the legal community through the voice of an outsider, facilitated by the openness and anonymity afforded by the internet. These contributions may contain valuable ideas and observations, but they are often plagued by the gripes and vulgarities common to internet communications emanating from forums and the blogosphere.

Additionally, the online news world is littered with articles espousing reasons for the gloomy outlook in the legal job market. Many, however, carry the same sense of optimism described above. One such article suggests that, after a little math using average attrition rates, the number of law school graduates per year and the number of job openings per year will equalize by 2016. This result is due to dwindling average enrollment and an approximately equal number of graduates getting jobs each year. Despite the apparent logic of this approach, counting on all of the variables involved to stay the course likely requires an ardently optimistic law student.

Several commentators step back from the optimistic approach and suggest reforms intended to curb the cost of law school and improve graduates’ job prospects. One proposal would remove the third year of law school to cut tuition debt and hasten a student’s path into the workforce. However, such an idea is not without pitfalls, such as reduced readiness for the bar exam. Another idea is to increase practical education through clinical courses and partnerships analogous to medical residencies. Many schools already offer an array of clinic experiences, but the notion of a legal residency would seem attractive to law students, as it would offer an additional path to permanent employment.

What is the role of the student in this discussion? Perhaps it is to let the discussion run its course and hope for the job market to right itself. Perhaps it is to chime in and advocate, or simply to make observations. Either way, there are certainly valuable contributions to be made, and, with access to the internet, there is little standing in your way.


Growth of Social Media Outpaces Traditional Evidence Rules

by Sabrina Ly

Evidence from social networking websites is increasingly involved in a litany of litigation. Although the widespread use of social media can lead to increased litigation and litigation costs, it has also assisted lawyers and police officers in proving cases and solving crimes. In New Jersey, for example, two teenage brothers were arrested and charged with the murder of a twelve-year-old girl. What led to the arrest was evidence left behind in their homes, along with a Facebook post that made their mother suspicious enough to call the police. In another case, Antonio Frasion Jenkins Jr. had charges brought against him by an officer for making terroristic threats to benefit his gang. Jenkins had posted a description of his tattoo on Facebook which stated: “My tattoo iz a pig get’n his brains blew out.” “Pig” is considered a derogatory term for a police officer. The tattoo also had the officer’s misspelled name and his badge number. The officer, who is part of the gang investigation team, saw the Facebook post and immediately filed charges against Jenkins, as he interpreted the tattoo as a direct threat against him and his family. These are two of the many situations in which social networking websites have been used as evidence to bring charges against or locate an individual.

The myriad charges brought against individuals based on evidence found on their social networking websites is the basis for Ira P. Robbins’s article “Writings on the Wall: The Need for an Author-Centric Approach to the Authentication of Social-Networking Evidence,” published in Volume 13.1 of the Minnesota Journal of Law, Science & Technology. Robbins begins by discussing the varying ways in which social networking websites have been used as evidence in personal injury and criminal matters. Specifically, Twitter, Facebook, and Myspace postings have been deemed discoverable if relevant to the issue and admissible only if properly authenticated under the Federal Rules of Evidence. Courts across the country, however, have grappled with the evidentiary questions social media presents. In some states, courts have admitted the evidence based on distinctive characteristics that created a nexus between the posting on the website and the owner of the account. In other states, courts found proof of that nexus lacking. Regardless, overarching concerns about potential hackers and fictitious accounts created by third parties posing as someone else create problems of authentication.

Robbins argues that the traditional Federal Rules of Evidence do not adapt well to evidence from social networking websites. Accordingly, he proposes that courts adopt an author-centric authentication process that focuses on the author of the post, not just the owner of the account. Failing to adopt such an authentication method for evidence obtained from social networking websites may create consequences that harm the values and legitimacy of the judicial process. The ability to manipulate or fake a posting creates unreliable evidence that would not only undermine the fact-finder’s ability to determine its credibility but would also unfairly prejudice the party against whom the evidence is presented.

Technology is an area of law that is rapidly evolving and, as a result, has rendered some traditional laws antiquated. To keep pace with these changes, legislators and lawmakers must constantly reexamine traditional laws in order to promote and ensure fairness and accuracy in the judicial process. Robbins has raised an important issue regarding the authentication of evidence in the technological world, but as it stands, there is much work to be done as technological advances outpace the reform of the traditional laws that govern them.