
Digital Literacy, a Problem for Americans of All Ages and Experiences

Justice Shannon, MJLST Staffer

According to the American Library Association, “digital literacy” is “the ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills.” The term has existed since 1997, when Paul Gilster coined “digital literacy” to mean “the ability to understand and use information in multiple formats from a wide range of sources when it is presented via computers.” In this way, the definition of digital literacy has broadened from how a person absorbs digital information to how one develops, absorbs, and critiques digital information.

The Covid-19 pandemic taught Americans of all ages the value of digital literacy. Elderly populations were forced online without prior training due to the health risks presented by Covid-19, and digitally illiterate parents were unable to help their children with online classes.

Separate from Covid-19, the rise of cryptocurrency has created a need for digital literacy in spaces that are not federally regulated.

Elderly

The Covid-19 pandemic did not create the need for digital literacy training for the elderly. However, the pandemic highlighted a national need to address digital literacy among America’s oldest population. Elderly family members quarantined during the pandemic were quickly separated from their families. Teaching family members how to use Zoom and Facebook Messenger became a substitute for some, but not all, forms of connectivity. However, teaching an elderly family member how to use Facebook Messenger to speak to loved ones does not enable them to communicate with peers or teach them other digital literacy skills.

To address digital literacy issues within the elderly population, states have approved Senior Citizen Technology grants. Pennsylvania’s Department of Aging has granted funds to adult education centers to provide technology training for senior citizens. Programs like this have been developing throughout the nation. For example, Prince George’s Community College in Maryland uses state funds to teach technology skills to its older population.

It is difficult to tell if these programs are working. States like Pennsylvania and Maryland had programs before the pandemic. Still, these programs alone did not reduce the distance between America’s aging population and the rest of the nation during the pandemic. However, when looking at the scale of the program in Prince George’s County, this likely was not the goal. Beyond that, there is a larger question: Is the purpose of digital literacy for the elderly to ensure that they can connect with the world during a pandemic, or is the goal simply ensuring that the elderly have the skills to communicate with the world? With this in mind, programs that predate the pandemic, such as the programs in Pennsylvania and Maryland, likely had the right approach even if they weren’t of a large enough scale to ensure digital literacy for the entirety of our elderly population.

Parents

The pandemic highlighted a similar problem for many American families. While state, federal, and local governments stepped up to provide laptops and access to the internet, many families still struggled to get their children into online classes; this is an issue of what is known as “last mile” infrastructure. During the pandemic, the nation quickly provided families with access to the internet without ensuring they were ready to navigate it. This left families feeling ill-prepared to support their children’s educational growth from home. Providing families with access to broadband without digital literacy training disproportionately impacted families of color by limiting their children’s capacity for growth online compared to their peers. While this wasn’t an intended result, it is a result of hasty bureaucracy in response to a national emergency. Nationally, the 2022 Workforce Innovation and Opportunity Act aims to address digital literacy issues among adults by increasing funding for teaching workplace technology skills to working adults. However, this will not ensure that American parents can manage their children’s technological needs.

Crypto

Separate from the issues created by Covid-19 is cryptocurrency. One of the largest selling points of cryptocurrency is that it is largely unregulated. Users see it as “digital gold, free from hyper-inflation.” While these claims can be valid, consumers frequently are not aware of the risks of cryptocurrency. Last year, the Chair of the SEC called cryptocurrencies “the wild west of finance rife with fraud, scams, and abuse.” This year, the Department of the Treasury announced it would release instructional materials to explain how cryptocurrencies work. While this will not directly regulate cryptocurrencies, providing Americans with more tools to understand them may help reduce cryptocurrency scams.

Conclusion

Digital literacy was a problem for years before the Covid-19 pandemic. Additionally, when new technologies become popular, there are new lessons for all age groups to learn. Covid-19 appropriately shined a light on the need to address digital literacy issues within our borders. However, if we only go so far as to get Americans networked and prepared for the next national emergency, we’ll find that there are disparities between those who excel online and those who are ill-equipped to use the internet to connect with family, educate their kids, and participate in e-commerce.


Extending Trademark Protections to the Metaverse

Alex O’Connor, MJLST Staffer

After a 2020 bankruptcy and steadily decreasing revenue that it attributes to the Coronavirus pandemic, restaurant and arcade chain Chuck E. Cheese is hoping to revitalize its business model by making the transition to a pandemic-proof virtual world: the metaverse. In February, Chuck E. Cheese filed two intent-to-use trademark applications with the USPTO for the marks “CHUCK E. VERSE” and “CHUCK E. CHEESE METAVERSE.”

Under Section 1 of the Lanham Act, the two most common types of applications for registration of a mark on the Principal Register are (1) a use-based application, for which the applicant must have used the mark in commerce, and (2) an “intent to use” (ITU) application, for which the applicant must possess a bona fide intent to use the mark in trade in the near future. Chuck E. Cheese filed ITU applications for its two marks.

The metaverse is a still-developing virtual and immersive world that will be inhabited by digital representations of people, places, and things. Its appeal lies in the possibility of living a parallel, virtual life. The pandemic has provoked a wave of investment into virtual technologies, and brands are hurrying to extend protection to virtual renditions of their marks by registering specifically for the metaverse. A series of lawsuits related to alleged infringing use of registered marks via still developing technology has spooked mark holders into taking preemptive action. In the face of this uncertainty, the USPTO could provide mark holders with a measure of predictability by extending analogue protections of marks used in commerce to substantially similar virtual renditions. 

Most notably, Hermes International S.A. sued the artist Mason Rothschild for both infringement and dilution over his use of the term “METABIRKINS” for his collection of Non-Fungible Tokens (NFTs). Hermes alleges that the NFTs are confusing customers about the source of the digital artwork and diluting the distinctive quality of Hermes’ popular line of handbags. The argument continues that “META” is merely a generic prefix, so that “METABIRKINS” simply means “BIRKINS in the metaverse,” and Rothschild’s use of the mark constitutes trading on Hermes’ reputation as a brand.

Many companies and individuals are rushing to the USPTO to register trademarks for their brands to use in virtual reality. Household names such as McDonalds (“MCCAFE” for a virtual restaurant featuring actual and virtual goods), Panera Bread (“PANERAVERSE” for virtual food and beverage items), and others have recently filed applications for registration with the USPTO for virtual marks. The rush of filings signals a recognition among companies that the digital marketplace presents countless opportunities for them to expand their brand awareness, or, if they’re not careful, for trademark copycats to trade on their hard-earned good will among consumers.

Luckily for Chuck E. Cheese and other companies that seek to extend their brands into the metaverse, trademark protection in the metaverse is governed by the same set of rules governing regular analogue trademark protection. That is, the mark the company is seeking to protect must be distinctive, it must be used in commerce, and it must not be covered by a statutory bar to protection. For example, if a mark’s exclusive use by one firm would leave other firms at a significant non-reputation-related disadvantage, the mark is said to be functional, and it can’t be protected. The metaverse does not present any additional obstacles to trademark protection, so as long as Chuck E. Cheese eventually uses its two marks, it will enjoy their exclusive use among consumers in the metaverse.

However, the relationship between new virtual marks and analogue marks is a subject of some uncertainty. Most notably, should a mark find broad success and achieve fame in the metaverse, would that virtual fame confer fame in the real world? What will trademark expansion into the metaverse mean for licensing agreements? Clarification from the USPTO could help put mark holders at ease as they venture into the virtual market. 

Additionally, trademarks in the metaverse present another venue in which trademark trolls can attempt to register an already well-known mark with no actual intent to use it, although the requirement under U.S. law that mark holders either use or possess a bona fide intent to use the mark can help mitigate this problem. Finally, observers contend that the expansion of commerce into the virtual marketplace will present opportunities for copycats to exploit marks. Already, third parties are seeking to register marks for virtual renditions of existing brands. In response, trademark lawyers are encouraging their clients to register their virtual marks as quickly as possible to head off any potential copycat users. The USPTO could ensure brands’ security by providing more robust protections to virtual trademarks based on a substantially similar, already registered analogue trademark.


“I Don’t Know What To Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report to the police and hold the criminal accountable. When someone is wronged, they can seek redress in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general understanding that they can hold wrongdoers accountable and of the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. However, what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use the internet. Now, a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those involved in the legal profession to imagine how to apply the law to a technology that is not yet fully developed. However, Congress and other law-making bodies will need to consider how they can control how people use the Metaverse and ensure that it will not be abused.

The Metaverse is a term that has recently gained a lot of attention, although by no means is the concept new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment which takes a person out of the real world. On the other hand, augmented reality uses the real world and adds or changes things, often using a camera. Both virtual and augmented reality are used today, often in the form of video games. For virtual reality, think about the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality video games, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is Pokémon GO, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As Matthew Ball, a venture capitalist, has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.

The Metaverse will allow people to do activities that they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not currently exist, as there is no single virtual reality world that all can access, there are some examples that come close to what experts imagine the Metaverse to look like. The game Second Life is a simulation that allows users access to a virtual reality where they can eat, shop, work, and do any other real-world activity. Decentraland is another example, which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in the stages of development. However, there are many popular culture references to the concepts involved in the Metaverse, such as Ready Player One and Snow Crash, a novel written by Neal Stephenson. Many people are excited about the possibilities that the Metaverse will bring in the future, such as creating new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in the VR platform Horizon Worlds, which was created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response, other than an apology from Meta and statements that it would make improvements. This was a horrifying experience that showcased the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so life-like that a person being assaulted in a virtual world would feel like they actually experienced the assault in real life. This should be raising red flags. However, the problem arises when trying to regulate activities in the Metaverse. Sexually assaulting someone in a virtual reality is different than assaulting someone in the real world, even if it feels the same to the victim. Because people are aware that they are in a virtual world, they think they can do whatever they want with no consequences.

At present, there are no laws regarding conduct in the Metaverse. Certainly, this is something that will need to be addressed, as there need to be laws that prevent this kind of behavior from happening. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse due to their ability to mask their identity and remain anonymous. Therefore, it could be difficult to figure out who committed certain prohibited acts. At the moment, some virtual realities have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or rules regarding conduct in Horizon Worlds. However, the problem remains how to enforce these terms of service. Banning someone for a week or so is not enough. Actual laws need to be put in place in order to protect people from sexual assault and other violent acts. The fact that the Metaverse is outside the real world should not mean that people can do whatever they want, whenever they want.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that per TikTok’s terms of service, to use the platform, users must be over 13 and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”


Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice cites to teenagers no longer being able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate platforms with no algorithmic functions for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have also noted that some platforms actually use algorithms to present appropriate content to minors. For example, TikTok has begun utilizing its algorithms to remove videos that violate platform rules.


What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.


Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism by the bill’s opponents and exempted algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF3922, being considered in the Senate.

It will be interesting to see if legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its potential impact on those under the age of 18, who would no longer be able to use the optimized and personalized versions of social media platforms. However, in the eyes of legislators, technology companies have so far not put their best foot forward, sending lobbyists in their stead to advocate against the bill.


The Uniform Domain Name Dispute Resolution Policy (“UDRP”): Not a Trademark Court but a Narrow Administrative Procedure Against Abusive Registrations

Thao Nguyen, MJLST Staffer

Anyone can register a domain name through one of the thousands of registrars on a first-come, first-serve basis at a low cost. The ease of entry has created so-called “cybersquatters,” who register for domain names that reflect trademarks before the true trademark owners are able to do so. Cybersquatters often aim to profit from cybersquatting activities, either by selling the domain names back to the trademark holders for a higher price, by generating confusion in order to take advantage of the trademark’s goodwill, or by diluting the trademark and disrupting the business of a competitor. A single cybersquatter can cybersquat on several thousand domain names that incorporate well-known trademarks.

Paragraph 4(a) of the UDRP provides that the complainant must successfully establish all three of the following elements: (i) that the disputed domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights; (ii) that the registrant has no rights or legitimate interests in respect of the domain name; and (iii) that the registrant registered and is using the domain name in bad faith. Remedies for a successful complainant include cancellation or transfer to the complainant of the disputed domain name.

Although prized for being focused, expedient, and inexpensive, the UDRP is not without criticism, the bulk of which focuses on the issue of fairness. The frequent charge is that the UDRP is inherently biased in favor of trademark owners and against domain name holders, not all of whom are “cybersquatters.” This bias is indicated by statistics: 75% to 90% of UDRP decisions each year are decided against the domain name owner.

Nonetheless, the asymmetry of outcomes, rather than being a sign of an unfair arbitration process, may simply reflect the reality that most UDRP complaints are brought when there is a clear case of abuse, and most respondents in the proceeding are true cybersquatters who knowingly and willfully violated the UDRP. Therefore, what may appear to be the UDRP’s shortcomings are in fact signs that the UDRP is fulfilling its primary purpose. Furthermore, to appreciate the UDRP proceeding and understand the asymmetry that might normally raise red flags in an adjudication, one must understand that the UDRP is not meant to resolve trademark disputes. A representative case where this purpose is addressed is Cameron & Company, Inc. v. Patrick Dudley, FA1811001818217 (FORUM Dec. 26, 2018), where the Panel wrote, “cases involving disputes regarding trademark rights and usage, trademark infringement, unfair competition, deceptive trade practices and related U.S. law issues are beyond the scope of the Panel’s limited jurisdiction under the Policy.” In other words, the UDRP’s scope is limited to detecting and reversing the damages of cybersquatting, and the administrative dispute-resolution procedure is streamlined for this purpose.[1]

That the UDRP is not a trademark court is evident in the UDRP’s refusal to handle cases where multiple legitimate complainants assert rights to a single domain name registered by a cybersquatter. UDRP Rule 3(a) states: “Any person or entity may initiate an administrative proceeding by submitting a complaint.” The Forum’s Supplemental Rule 1(e) defines “The Party Initiating a Complaint Concerning a Domain Name Registration” as a “single person or entity claiming to have rights in the domain name, or multiple persons or entities who have a sufficient nexus who can each claim to have rights to all domain names listed in the Complaint.” UDRP cases with two or more complainants in a proceeding are possible only when the complainants are affiliated with each other so as to share a single license to a trademark,[2] for example, when the complainant is assigned rights to a trademark registered by another entity,[3] or when the complainant has a subsidiary relationship with the trademark registrant.[4]

Since the UDRP does not resolve good faith trademark disputes but intervenes only when there is clear abuse, the respondent’s bad faith is central: a domain name may be confusingly similar or even identical to a trademark, and yet a complainant cannot prevail if the respondent has rights and legitimate interests in the domain name and/or did not register and use the domain name in bad faith.[5] For this reason, the UDRP sets a high standard for the complainant to establish the respondent’s bad faith. For example, the UDRP provides a defense if the domain name registrant has made demonstrable preparations to use the domain name in a bona fide offering of goods or services. On the other hand, the Anticybersquatting Consumer Protection Act (“ACPA”) provides a defense only if there is prior good faith use of the domain name, not simply preparation to use. Another distinction between the UDRP and the ACPA is that the UDRP requires the complainant to prove bad faith in both registration and use of the disputed domain name in order to prevail, whereas the ACPA requires the complainant to prove bad faith in either registration or use.

Such a high standard for bad faith indicates that the UDRP is not equipped to resolve issues where both parties dispute their respective rights in the trademark. In fact, when abuse is non-existent or not obvious, the UDRP Panel will refuse to transfer the disputed domain name from the respondent to the complainant.[6] Instead, the parties would need to resolve these claims in regular courts under either the ACPA or the Lanham Act. Limiting itself to addressing cybersquatting allows the UDRP to be extremely efficient in dealing with cybersquatting practices, a widespread and highly damaging abuse of the Internet age. The efficiency and ease of the UDRP process are appreciated by trademark-owning businesses and individuals, who prefer that disputes be handled promptly and economically. From the time of the UDRP’s creation until now, ICANN has shown no intention of reforming the Policy despite existing criticisms,[7] and for good reason.


[Notes]

[1] Gerald M. Levine, Domain Name Arbitration: Trademarks, Domain Names, and Cybersquatting at 102 (2019).

[2] Tasty Baking, Co. & Tastykake Invs., Inc. v. Quality Hosting, FA 208854 (FORUM Dec. 28, 2003) (treating the two complainants as a single entity where both parties held rights in trademarks contained within the disputed domain names.)

[3] Golden Door Properties, LLC v. Golden Beauty / goldendoorsalon, FA 1668748 (FORUM May 7, 2016) (finding rights in the GOLDEN DOOR mark where Complainant provided evidence of assignment of the mark, naming Complainant as assignee); Remithome Corp v. Pupalla, FA 1124302 (FORUM Feb. 21, 2008) (finding the complainant held the trademark rights to the federally registered mark REMITHOME, by virtue of an assignment); Stevenson v. Crossley, FA 1028240 (FORUM Aug. 22, 2007) (“Per the annexed U.S.P.T.O. certificates of registration, assignments and license agreement executed on May 30, 1997, Complainants have shown that they have rights in the MOLD-IN GRAPHIC/MOLD-IN GRAPHICS trademarks, whether as trademark holder, or as a licensee. The Panel concludes that Complainants have established rights to the MOLD-IN GRAPHIC SYSTEMS mark pursuant to Policy ¶ 4(a)(i).”)

[4] Provide Commerce, Inc v Amador Holdings Corp / Alex Arrocha, FA 1529347 (FORUM Jan. 3, 2014) (finding that the complainant shared rights in a mark through its subsidiary relationship with the trademark holder); Toyota Motor Sales, U.S.A., Inc. v. Indian Springs Motor, FA 157289 (FORUM June 23, 2003) (“Complainant has established that it has rights in the TOYOTA and LEXUS marks through TMC’s registration with the USPTO and Complainant’s subsidiary relationship with TMC.”)

[5] Levine, supra note 1, at 99; see e.g., Dr. Alan Y. Chow, d/b/a Optobionics v. janez bobnik, FA2110001967817 (FORUM Nov. 23, 2021) (refusing to transfer the <optobionics.com> domain name despite its being identical to Complainant’s OPTOBIONICS mark and formerly owned by Complainant, since “[t]he Panel finds no evidence in the Complainant’s submissions . . . [that] the Respondent a) does not have a legitimate interest in the domain name and b) registered and used the domain name in bad faith.”).

[6] Swisher International, Inc. v. Hempire State Smoke Shop, FA2106001952939 (FORUM July 27, 2021).

[7] Id. at 359.


Counter Logic Broadband

Justice C. Shannon, MJLST Staffer

In 2015, Zaqueri “Aphromoo” Black won his first North American League of Legends Championship Series (“LCS”) title playing support for Counter Logic Gaming. Since 2013, at least forty players have made the starting lineups of the league’s eight to ten teams. Aphromoo is the only African American player to win an LCS MVP award. Aphromoo is the only African American player to win multiple LCS finals. Aphromoo is the only African American player to win a single LCS final. Aphromoo is the only African American player to make it to an LCS final. Aphromoo is the only African American player to participate in the LCS playoffs. Indeed, Aphromoo is the only African American player ever to hold a starting role on an LCS team. Why? At least in part, because of the digital divide.

More than a quarter of African Americans do not have broadband. Further, nearly 40% of African Americans in the rural South do not have broadband. One quarter of the Latinx population does not have broadband. These discrepancies mean that fewer African Americans and Latinx people can play online video games like League of Legends. Okay, but if the digital divide only affected esports, why should the nation care? Because the digital divide visible in esports also runs through the American educational system. More than 15% of American households lacked broadband at the start of the pandemic, and the gap was more pronounced in African American and Latinx households. These statistics demonstrate a national need to address the digital divide for entertainment purposes and, more importantly, for educational purposes. So, what are some legal solutions to the digital divide? Municipal internet, subsidies, and low-income broadband laws.

Municipal Internet

Municipal broadband is not a new concept, but it has recently been seen as a way to help close the digital divide. While the up-front cost to a city can be substantial, the long-term advantages can be significant. Highland, IL, and other communities across the United States provide high-speed internet for as little as $35 a month, and cities offering low-cost municipal broadband frequently have competitive prices for gigabit speeds as well. The most significant downside of this solution is that these cities are often in rural areas with small populations. In addition, when municipalities attempt to provide broadband beyond their borders, state laws preempt them to protect ISPs. ISPs lobby for such laws on the ground that they are necessary to prevent unfair competition; that fear of unfair competition, however, keeps communities from getting connected.

To avoid the preemption issue, some cities established narrow versions of municipal broadband during the pandemic, providing free connectivity in heavily populated communities. For example, during the pandemic, Chattanooga, Tennessee, offered free broadband to low-income students. If these solutions stay in place, they will set an industry precedent for providing broadband to low-income communities.

Subsidies

The Emergency Broadband Benefit provides up to $50 per month toward broadband service for eligible households, and up to $75 a month for households on tribal lands. To qualify for the program, a household must meet one of five criteria. Congress created the program to help low-income households stay connected during the pandemic, allocating $3.2 billion to the FCC to fund the discount. The program also includes a one-time device discount of up to $100, so that users not only have broadband but also the tools to use it. The advantage of this subsidy is that it directly addresses low-income recipients’ inability to afford broadband, immediately reaching the 15% of Americans who lack it.

The downside of this solution is that, to qualify, a recipient must share income information on an unfamiliar webpage, which can feel invasive. Further, the plan does not permanently address the cost of broadband; once it ends, the same groups of Americans who could not afford broadband before may again lose access to the internet. Additionally, when the average laptop in America costs $700, a $100 discount does little to ensure that users can fully benefit from their new broadband connection. If the goal is to ensure that users can attend classes, complete homework assignments, and maybe play esports on the side, even a lower-cost tablet ($350 on average) still costs well more than the discount covers, leaving the hardware barrier in place.

However, a program like this could be a reasonable start if things continue in the right direction. With a fair price for broadband of about $60 a month, a subsidy that reduces a recipient’s cost to $10 for competitive speeds and reliability could be a great tool for eliminating the digital divide, so long as it persists after the pandemic.

Low-Income Broadband Laws

Low-cost broadband laws would require internet service providers to offer plans to low-income recipients at a low price. This approach directly addresses Americans who have physical access to broadband but cannot pay for it, thus helping to bridge the digital divide. New York’s proposed Affordable Broadband Act, for example, would require every internet service provider serving more than 20,000 households to offer two low-cost plans to qualifying (low-income) customers. However, New York’s law was stymied by ISPs, who argued that it is an illegal way to close the digital divide because the Federal Communications Commission preempts states from regulating broadband rates.

The ISPs argued that the Affordable Broadband Act regulated within the field of interstate commerce and was thus likely preempted by the Communications Act of 1934. Because broadband is almost always interstate commerce, other state laws similar to New York’s Affordable Broadband Act would probably run into the same issue. Thus, a low-income broadband law would likely need to come from the federal level to avoid the same roadblocks.

The Future of Broadband and the Digital Divide

An overlapping theme among these solutions is that they were implemented during the pandemic, which raises the question: are they short-term responses to an unexpected, life-changing event, or rational long-term solutions to long-standing problems that the pandemic merely exposed? If cities, states, and the nation stay the course and implement more low-cost broadband solutions such as municipal internet, subsidies, and low-income broadband laws, it will be possible to address the digital divide. But if jurisdictions treat these solutions as short-term stopgaps, communities that cannot afford traditional broadband will again lose access. Students will again go to McDonald’s to do homework assignments, and Aphromoo may continue to be the only African American player to have held a starting role in the LCS.


Whitelist for Thee, but Not for Me: Facebook File Scandals and Section 230 Solutions

Warren Sexson, MJLST Staffer

When I was in 7th grade, I convinced my parents to let me get my first social media account. Back in the stone age, that phrase was synonymous with Facebook. I never thought much about how growing up in the digital age affected me, but looking back, the cultural red flags are easy to see. So it came as no surprise when, this fall, the Wall Street Journal broke what has been dubbed “The Facebook Files,” which revealed an internal company study showing that Instagram is toxic to teen girls. While tragic, this conclusion is something many Gen-Zers and late Millennials have known for years. However, the “Facebook Files” contain another, perhaps even more jarring, finding: Facebook exempts many celebrities and elite influencers from its rules of conduct. This revelation demands a discussion of the legal trouble the company may find itself in and of the proposed solutions to the “whitelisting” problem.

The Wall Street Journal’s reporting describes an internal Facebook process called “whitelisting,” in which the company “exempted high-profile users from some or all of its rules, according to company documents . . . .” The whitelist includes individuals from a wide range of industries and political viewpoints, from soccer megastar Neymar to Elizabeth Warren and Donald Trump (prior to January 6th). The practice put the tech giant in legal jeopardy after a whistleblower, later identified as Frances Haugen, filed a complaint with the Securities and Exchange Commission (SEC) alleging that Facebook “violated U.S. securities laws by making material misrepresentations and omissions in statements to investors and prospective investors . . . .” See 17 C.F.R. § 240.14a-9 (enforcement provision on false or misleading statements to investors). Mark Zuckerberg himself has made statements about Facebook’s neutral application of its standards that are directly at odds with the Facebook Files. Regardless of any SEC investigation, the whitelist has opened a conversation about the serious reform needed in big tech to ensure that no company can keep lists of privileged users again. All of the potential solutions involve 47 U.S.C. § 230, known colloquially as “Section 230.”

Section 230 allows big tech companies to censor content while still being treated as platforms rather than publishers (which would incur liability for what appears on their websites). Specifically, § 230(c)(2)(A) provides that no “interactive computer service” shall be held liable for acting in good faith to restrict “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable [content] . . . .” It is the last phrase, “otherwise objectionable,” that tech companies have invoked to justify removing “hate speech” or “misinformation” from their platforms without incurring publisher-like liability. The desire to police such speech led Facebook to develop stringent platform rules, which in turn created the need for whitelisting. This brings us to the first proposal: eliminating the phrase “otherwise objectionable” from Section 230 itself. The proposed “Stop the Censorship Act of 2020,” introduced by Republican Representative Paul Gosar of Arizona, does just that. Proponents argue it would force tech companies to be neutral or lose their liability protections. No big tech company would ever create standards stringent enough to require a “whitelist” or an exempted class, because the statutory standard would hew close to First Amendment protections. Problem solved! However, the current governing majority has serious concerns about forced neutrality, which would leave problems of misinformation and the mental-health effects of social media unaddressed in the aftermath of January 6th.

Elizabeth Warren, like a recent proposal in the House Judiciary Committee, takes a different approach: breaking up big tech. Warren proposes legislation that would limit big tech companies from competing with the small businesses that use their platforms and would reverse or block mergers, such as Facebook’s purchase of Instagram. Her plan doesn’t necessarily stop companies from keeping whitelists, but it does limit the power held by Facebook and others, which could, in turn, make them think twice before applying their rules unevenly. Furthermore, Warren has called for regulators to use “every tool in the toolbox” with regard to Facebook.

Third, some have claimed that Google, Facebook, and Twitter have crossed the line under existing legal doctrines to become state actors. The government, the argument goes, cannot “induce” or “encourage” private persons to do what the government itself cannot. See Norwood v. Harrison, 413 U.S. 455, 465 (1973). Because some in Congress have warned big tech executives to restrict what they see as bad content, the government has essentially co-opted the hand of industry to block constitutionally protected speech. See Railway Employes’ Department v. Hanson, 351 U.S. 225 (1956) (finding state action despite no actual mandate by the government for action). If the Supreme Court were to adopt this reasoning, Facebook might be forced to take a First Amendment-centric approach, since its current hate speech and misinformation rules would constitute state action; whitelists would no longer be needed because companies would be barred from policing fringe content. Finally, the perfect solution! The Court can act where Congress cannot agree. I am skeptical of this approach; needless to say, such a monumental decision would completely shift the nature of social media. While Justice Thomas has hinted at his openness to the argument, it is unclear whether the other justices will follow suit.

All in all, Congress and the Court have tools at their disposal to combat Facebook’s disturbing actions. Beyond the potential SEC violations, Section 230 is a complicated but necessary issue that Congress must confront in the coming months. “The Facebook Files” have exposed the need for systemic change in social media. What I once used to play Farmville has become a machine with rules for me, but not for thee.


21st Century Problem: Authentication of Prisoner Facebook Status Updates

by Eric Maloney, UMN Law Student, MJLST Staff

Facebook has become a part of everyday life for people around the world. According to Mark Zuckerberg and Co., over one billion people (yes, with a “B”) are active on Facebook every month, with an average of more than 600 million active users every day in December 2012. Even disregarding bogus and duplicate accounts, that means roughly one-seventh of the entire human population is active on Facebook every month (the world population currently sits somewhere in the neighborhood of seven billion people).

Apparently, Facebook has become so commonplace and ingrained in the daily routine of some that they feel the need to use the social networking service from the privacy of their prison cells.

A Harlem gang member named Devin Parsons decided to cooperate with the government against fellow members of his gang and is currently incarcerated pending trial. Instead of having the usual prison contraband smuggled in, he obtained a mobile phone and used it to post Facebook status updates under an assumed name. As trial judge William H. Pauley III recounted:

In some posts, Parsons reflected on his life in jail:

“everybody wanna live but don’t wanna die”;
“Life is crazy thay only miss yu ifyu dead or in jail”; and
“G.o.n.e”

In others, Parsons posted about his cooperation:

“I’m not tellin on nobody from HARLEM but I can give up some bx n****s that got bodys”; and
“be home sooner then yaH hereing 101[.]”

While not exactly “Letter from Birmingham Jail,” the posts show Parsons was surprisingly bold about disclosing his cooperation and about the risk of being caught with a banned cell phone by the prison administration. The gang against which Parsons is testifying is charged with multiple counts of narcotics trafficking and murder, among other things.

One of the defendants in the case, Melvin Colon, sought to compel the disclosure of these postings under the Brady rule, which requires the government to release evidence to the defense before trial if the evidence is favorable to the defendant. Judge Pauley held that the government was not obligated to turn these postings over to Colon; for various reasons, the government was never in actual possession of the Facebook statuses and therefore had no duty to disclose under Brady.

This case highlights the continually growing relevance that Facebook and other social media data has in legal proceedings. In fact, this is not even the first ruling about Facebook in this case; the defendant Colon had earlier moved to suppress his own Facebook postings which the prosecution sought to introduce. Judge Pauley denied this motion as well, holding that Colon’s sharing of the postings with his Facebook “friends” meant he lacked a reasonable expectation of privacy in them.

A background issue in this case was the idea of authenticity of the Facebook poster; because Parsons was posting under a fake name, both sides were unaware of his conduct until after the account had already been deactivated. While not contested here, ensuring that the Facebook information originated from the user is an increasingly important evidentiary consideration as more and more of this data is used in both civil and criminal contexts.

Professor Ira P. Robbins laid out a possible framework for authenticating social networking evidence in his Minnesota Journal of Law, Science & Technology article, “Writings on the Wall: The Need for an Authorship-Centric Approach to the Authentication of Social-Networking Evidence.” Voicing significant concerns about the current lack of a required nexus between online content and its real-life poster, he proposed detailed admissibility criteria for social network postings. He offered several factors for judges to examine in ruling on such data, including who owns the account, how secure the account is, and how and when the post in question was created.

As Facebook and other social networking information becomes increasingly important to the outcomes of legal cases, a framework like this is essential to bring our procedures in line with the nature of 21st century evidence and to ensure our system continues to meet Due Process standards. Digital evidence is largely unexplored territory for jurists and scholars alike, and it’s my hope that evidentiary standards like those proposed by Professor Robbins are seriously considered by the legal community.


Time for a New Approach to Cyber Security?

by Kenzie Johnson, UMN Law Student, MJLST Managing Editor

The recent announcements by several large news outlets, including the New York Times, Washington Post, Bloomberg News, and the Wall Street Journal, that they have been victims of cyber-attacks have yet again brought cyber security into the news. These attacks reportedly all originated in China and were aimed at monitoring news coverage of Chinese issues. In particular, the New York Times announced that Chinese hackers persistently attacked its servers over a period of four months and obtained passwords for Times reporters and other employees. The Times reported that the attack began around the time it published a story on the massive wealth accumulated by the family of Chinese Prime Minister Wen Jiabao.

It is not only Western news outlets that have been targeted by recent cyber-attacks. Within the past few weeks, the United States Department of Energy and the Federal Reserve both announced that hackers had penetrated their servers and acquired sensitive information.

This string of high-profile cyber-attacks underscores the need for an improved legal and response structure to deal with the growing threat. In the forthcoming Winter 2013 issue of the Minnesota Journal of Law, Science & Technology, Susan W. Brenner discusses these issues in an article entitled “Cyber-Threats and the Limits of Bureaucratic Control.” Brenner examines the nature, causes, and consequences of cyber-threats if left unchecked, analyzes alternatives to the United States’ current cyber-threat control regime, criticizes current proposals for improving that regime, and offers alternative approaches. As these recent attacks illustrate, such analysis is becoming ever more important to protecting sensitive government data and private entities alike.


Six Strikes and You’re Out: Can a New RIAA Policy Solve Old Online File Sharing Problems?

by Ian Birrell

Since at least 1999, when Napster originally launched, internet piracy, or downloading copyrighted materials (especially songs, videos, and games), has been a contentious activity. The Recording Industry Association of America (RIAA) has historically taken a very public and aggressive stance, identifying the individuals associated with the IP addresses from which this “file sharing” originates. After finding such a target, the RIAA would send a letter demanding a settlement of thousands of dollars and threatening litigation that would be risky and expensive for the target, despite the potentially very small monetary value of the downloaded material. The RIAA suits, which have continued for a number of years, include a number of well-publicized absurd claims.

This journal has written on the RIAA’s policies before. In 2008, we published a student note by Daniel Reynolds titled “The RIAA Litigation War on File Sharing and Alternatives More Compatible with Public Morality.” Reynolds argued then that the policies were ineffective and unconscionable, and he urged change.

Change is coming. Later this year, after several years in development, a number of major carriers plan to institute a “six-strikes” system. It is a voluntary agreement between ISPs and certain content providers (the government is not involved), made to target peer-to-peer downloading. The plan has a notice phase, an acknowledgement phase, and a mitigation phase. Under the plan, a private carrier – say, Time Warner – will first notify a user that there has been an allegation of illegal copyright activity, then require the possibly infringing user (who may or may not own the account) to acknowledge having received such notices, before the user finally suffers consequences. Those consequences can include throttled internet speeds or blocked access to popular websites.

Proponents point to a few positives in this proposal, including the user’s right to appeal to an independent arbitrator (for a $35 fee). Additionally, though copyright holders may still sue, the hope is that the system will educate the public about copyright infringement and that, once on notice that their behavior is illegal, users will at least slow their infringement. Ron Wheeler, a Senior VP at Fox, said, “This system is not designed to produce lawsuits – it’s designed to produce education.”

Unfortunately, a lack of education may not be the underlying problem. Reynolds noted that, even in 2004, awareness of the (il)legality of file sharing was widespread, and increasing awareness may not sharply decrease infringement. Critics further note that, despite the safeguards, penalties are ultimately based on accusations rather than definite findings of infringement. If the system ultimately works, though, it may be worth the headaches for both sides: consumers will not be able to infringe (as much), but the public will also not suffer suits against twelve-year-olds for sharing music.