Artificial Intelligence

Regulating the Revolution: A Legal Roadmap to Optimizing AI in Healthcare

Fazal Khan, MD-JD: Nexbridge AI

In the field of healthcare, the integration of artificial intelligence (AI) presents a profound opportunity to revolutionize care delivery, making it more accessible, cost-effective, and personalized. Burgeoning demographic shifts, such as aging populations, are exerting unprecedented pressure on our healthcare systems, exacerbating disparities in care and already-soaring costs. Concurrently, the prevalence of medical errors remains a stubborn challenge. AI stands as a beacon of hope in this landscape, capable of augmenting healthcare capacity and access, streamlining costs by automating processes, and refining the quality and customization of care.

Yet, the journey to harness AI’s full potential is fraught with challenges, most notably the risks of algorithmic bias and the diminution of human interaction. AI systems, if fed with biased data, can become vehicles of silent discrimination against underprivileged groups. It is essential to implement ongoing bias surveillance, promote the inclusion of diverse data sets, and foster community involvement to avert such injustices. Healthcare institutions bear the responsibility of ensuring that AI applications are in strict adherence to anti-discrimination statutes and medical ethical standards.

Moreover, it is crucial to safeguard the essence of human touch and empathy in healthcare. AI’s prowess in automating administrative functions cannot replace the human art inherent in the practice of medicine—be it in complex diagnostic processes, critical decision-making, or nurturing the therapeutic bond between healthcare providers and patients. Policy frameworks must judiciously navigate the fine line between fostering innovation and exercising appropriate control, ensuring that technological advancements do not overshadow fundamental human values.

The quintessential paradigm would be one where human acumen and AI’s analytical capabilities coalesce seamlessly. While humans should steward the realms requiring nuanced judgment and empathic interaction, AI should be relegated to the execution of repetitive tasks and the extrapolation of data-driven insights. Placing patients at the epicenter, this symbiotic union between human clinicians and AI can broaden access to healthcare, reduce expenditures, and enhance service quality, all the while maintaining trust through unyielding transparency. Nonetheless, the realization of such a model mandates proactive risk management and the encouragement of innovation through sagacious governance. By developing governmental and institutional policies that are both cautious and compassionate by design, AI can indeed be the catalyst for a transformative leap in healthcare, enriching the dynamics between medical professionals and the populations they serve.


Conflicts of Interest and Conflicting Interests: The SEC’s Controversial Proposed Rule

Shaadie Ali, MJLST Staffer

A controversial proposed rule from the SEC on AI and conflicts of interest is generating significant pushback from brokers and investment advisers. The proposed rule, dubbed “Reg PDA” by industry commentators in reference to its focus on “predictive data analytics,” was issued on July 26, 2023.[1] Critics claim that, as written, Reg PDA would require broker-dealers and investment managers to effectively eliminate the use of almost all technology when advising clients.[2] The SEC claims the proposed rule is intended to address the potential for AI to hurt more investors more quickly than ever before, but some critics argue that the SEC’s proposed rule would reach far beyond generative AI, covering nearly all technology. Critics also highlight the requirement that conflicts of interest be eliminated or neutralized as nearly impossible to meet and a departure from traditional principles of informed consent in financial advising.[3]

The SEC’s 2-page fact sheet on Reg PDA describes the 239-page proposal as requiring broker-dealers and investment managers to “eliminate or neutralize the effect of conflicts of interest associated with the firm’s use of covered technologies in investor interactions that place the firm’s or its associated person’s interest ahead of investors’ interests.”[4] The proposal defines covered technology as “an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes in an investor interaction.”[5] Critics have described this definition of “covered technology” as overly broad, with some going so far as to suggest that a calculator may be “covered technology.”[6] Despite commentators’ insistence, this particular contention is implausible – in its Notice of Proposed Rulemaking, the SEC stated directly that “[t]he proposed definition…would not include technologies that are designed purely to inform investors.”[7] More broadly, though, the SEC touts the proposal’s broadness as a strength, noting it “is designed to be sufficiently broad and principles-based to continue to be applicable as technology develops and to provide firms with flexibility to develop approaches to their use of technology consistent with their business model.”[8]

This move by the SEC comes amidst concerns raised by SEC chair Gary Gensler and the Biden administration about the potential for the concentration of power in artificial intelligence platforms to cause financial instability.[9] On October 30, 2023, President Biden signed an Executive Order that established new standards for AI safety and directed the issuance of guidance for agencies’ use of AI.[10] When questioned about Reg PDA at an event in early November, Gensler defended the proposed regulation by arguing that it was intended to protect online investors from receiving skewed recommendations.[11] Elsewhere, Gensler warned that it would be “nearly unavoidable” that AI would trigger a financial crisis within the next decade unless regulators intervened soon.[12]

Gensler’s explanatory comments have done little to curb criticism by industry groups, who have continued to submit comments via the SEC’s notice and comment process long after the SEC’s October 10 deadline.[13] In addition to highlighting the potential impacts of Reg PDA on brokers and investment advisers, many commenters questioned whether the SEC had the authority to issue such a rule. The American Free Enterprise Chamber of Commerce (“AmFree”) argued that the SEC exceeded its authority under both its organic statutes and the Administrative Procedure Act (APA) in issuing a blanket prohibition on conflicts of interest.[14] In their public comment, AmFree argued the proposed rule was arbitrary and capricious, pointing to the SEC’s alleged failure to adequately consider the costs associated with the proposal.[15] AmFree also invoked the major questions doctrine to question the SEC’s authority to promulgate the rule, arguing “[i]f Congress had meant to grant the SEC blanket authority to ban conflicts and conflicted communications generally, it would have spoken more clearly.”[16] In his scathing public comment, Robinhood Chief Legal and Corporate Affairs Officer Daniel M. Gallagher alluded to similar APA concerns, calling the proposal “arbitrary and capricious” on the grounds that “[t]he SEC has not demonstrated a need for placing unprecedented regulatory burdens on firms’ use of technology.”[17] Gallagher went on to condemn the proposal’s apparent “contempt for the ordinary person, who under the SEC’s apparent world view [sic] is incapable of thinking for himself or herself.”[18]

Although investor and broker industry groups have harshly criticized Reg PDA, some consumer protection groups have expressed support through public comment. The Consumer Federation of America (CFA) endorsed the proposal as “correctly recogniz[ing] that technology-driven conflicts of interest are too complex and evolve too quickly for the vast majority of investors to understand and protect themselves against, there is significant likelihood of widespread investor harm resulting from technology-driven conflicts of interest, and that disclosure would not effectively address these concerns.”[19] The CFA further argued that the final rule should go even further, citing loopholes in the existing proposal for affiliated entities that control or are controlled by a firm.[20]

More generally, commentators have observed that the SEC’s new prescriptive rule that firms eliminate or neutralize potential conflicts of interest marks a departure from traditional securities laws, under which disclosure of potential conflicts of interest has historically been sufficient.[21] Historically, conflicts of interest stemming from AI and technology have been regulated the same as any other conflict of interest – while brokers are required to disclose their conflicts, their conduct is primarily regulated through their fiduciary duty to clients. In turn, some commentators have suggested that the legal basis for the proposed regulations is well-grounded in the investment adviser’s fiduciary duty to always act in the best interest of its clients.[22] Some analysts note that “neutralizing” the effects of a conflict of interest from such technology does not necessarily require advisers to discard that technology, but rather to change the way firm-favorable information is analyzed or weighed; even so, the requirement marks a significant departure from the disclosure regime. Given the widespread and persistent opposition to the rule, both through the notice and comment process and elsewhere by commentators and analysts, it is unclear whether the SEC will make significant revisions to a final rule. While the SEC could conceivably narrow the definitions of “covered technology,” “investor interaction,” and “conflicts of interest,” it is difficult to imagine how the SEC could modify the “eliminate or neutralize” requirement in a way that would bring it into line with the existing disclosure-based regime.

For its part, the SEC under Gensler is likely to continue pursuing regulations on AI regardless of the outcome of Reg PDA. Gensler has long expressed his concerns about the impacts of AI on market stability. In a 2020 paper analyzing regulatory gaps in the use of deep learning in financial markets, Gensler warned, “[e]xisting financial sector regulatory regimes – built in an earlier era of data analytics technology – are likely to fall short in addressing the risks posed by deep learning.”[23] Regardless of how the SEC decides to finalize its approach to AI in conflict of interest issues, it is clear that brokers and advisers are likely to resist broad-based bans on AI in their work going forward.

Notes

[1] Press Release, Sec. and Exch. Comm’n., SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Jul. 26, 2023).

[2] Id.

[3] Jennifer Hughes, SEC faces fierce pushback on plan to police AI investment advice, Financial Times (Nov. 8, 2023), https://www.ft.com/content/766fdb7c-a0b4-40d1-bfbc-35111cdd3436.

[4] Sec. Exch. Comm’n., Fact Sheet: Conflicts of Interest and Predictive Data Analytics (2023).

[5] Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, 88 Fed. Reg. 53960 (Proposed Jul. 26, 2023) (to be codified at 17 C.F.R. pts. 240, 275) [hereinafter Proposed Rule].

[6] Hughes, supra note 3.

[7] Proposed Rule, supra note 5.

[8] Id.

[9] Stefania Palma and Patrick Jenkins, Gary Gensler urges regulators to tame AI risks to financial stability, Financial Times (Oct. 14, 2023), https://www.ft.com/content/8227636f-e819-443a-aeba-c8237f0ec1ac.

[10] Fact Sheet, White House, President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (Oct. 30, 2023).

[11] Hughes, supra note 3.

[12] Palma, supra note 9.

[13] See Sec. Exch. Comm’n., Comments on Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (last visited Nov. 13, 2023), https://www.sec.gov/comments/s7-12-23/s71223.htm (listing multiple comments submitted after October 10, 2023).

[14] Am. Free Enter. Chamber of Com., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270180-652582.pdf.

[15] Id. at 14-19.

[16] Id. at 9.

[17] Daniel M. Gallagher, Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-271299-654022.pdf.

[18] Id. at 43.

[19] Consumer Fed’n. of Am., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270400-652982.pdf.

[20] Id.

[21] Ken D. Kumayama et al., SEC Proposes New Conflicts of Interest Rule for Use of AI by Broker-Dealers and Investment Advisers, Skadden (Aug. 10, 2023), https://www.skadden.com/insights/publications/2023/08/sec-proposes-new-conflicts.

[22] Colin Caleb, ANALYSIS: Proposed SEC Regs Won’t Allow Advisers to Sidestep AI, Bloomberg Law (Aug. 10, 2023), https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-proposed-sec-regs-wont-allow-advisers-to-sidestep-ai.

[23] Gary Gensler and Lily Bailey, Deep Learning and Financial Stability (MIT Artificial Intel. Glob. Pol’y F., Working Paper 2020) (in which Gensler identifies several potential systemic risks to the financial system, including overreliance and uniformity in financial modeling, overreliance on concentrated centralized datasets, and the potential of regulators to create incentives for less-regulated entities to take on increasingly complex functions in the financial system).


Brushstroke Battles: Unraveling Copyright Challenges With AI Artistry

Sara Seid, MJLST Staffer

Introduction

Imagine this: after a long day of thinking and participating in society, you decide to curl up on the couch with your phone and crack open a new fanfiction to decompress. Fanfiction, a fictional work of writing based on another fictional work, has grown in popularity with the expansion and increased use of the internet. Many creators publish their works to websites like Archive of Our Own (AO3) or Tumblr. These websites are free and provide a community for creative minds to share their creative works. While the legality of fanfiction in general is debated, the real concern among creators is AI-generated works. Original characters and works are being used for profit to “create” new works through artificial intelligence. Profits can be generated from fanfiction through the use of paid AI text generators to create written works, or through advertisements on platforms. What was once a celebration of favorite works has become tarnished through the theft of fanfiction by AI programs.

First Case to Address the Issue

Thaler v. Perlmutter is a new and instructive case on the issue of copyright and AI-generated creative works – namely artwork.[1] The action was brought by Stephen Thaler against the Copyright Office for denying his application for copyright registration due to the lack of human authorship.[2] The U.S. District Court for the District of Columbia was the first court to rule on whether AI-generated art can have copyright protections.[3] The court held that AI-created artwork could not be copyrighted.[4] In considering the plaintiff’s copyright registration application for “A Recent Entrance to Paradise,” the Register concluded that this particular work would not support a claim to copyright because the work “lacked human authorship and thus no copyright existed in the first instance.”[5] The plaintiff’s primary contention was that the artwork was produced by the computer program he created, and, through its AI capabilities, the product was his.[6]

The court went on to opine that copyright is designed to adapt with the times.[7] Underlying that adaptability, however, has been a “consistent understanding that human creativity is the sine qua non at the core of copyrightability,” even as that human creativity is channeled through new tools or into new media.[8] Therefore, despite the plaintiff’s creation of the computer program, the painting was not produced by a human and was not eligible for copyright. This opinion, while relevant and clear, still leaves unanswered questions regarding the extent to which humans must be involved in AI-generated work.[9] What level of human involvement is necessary for an AI creation to qualify for copyright?[10] Is there a percentage to meet? Does the AI program require multiple humans to work on it as a prerequisite? Adaptability with the times, while essential, also means that there are new, developing questions about the right ways to address new technology and its capabilities.

Implications of the Case for Fanfiction

Artificial intelligence is a growing concern among scholars. While its accessibility and convenience create endless new possibilities for a multitude of careers, it also directly threatens creative professions and creative outlets. Without creators’ consent or authorization, AI can use algorithms that process artwork and fictional literary works created by fans to create its own “original” work. AI can be used to replace professional and amateur creative writers. Additionally, as AI’s technological capacity increases, it can mimic and reproduce art that resembles or belongs to a human artist.[11]

However, the main concern for artists is what AI will do to creative human industries in general.[12] Legal scholars are equally concerned about what AI means for copyright law.[13] The main type of AI that fanfiction writers are concerned about is generative AI.[14] Essentially, huge datasets are scraped together to train the AI, and through a technical process the AI is able to devise new content that resembles the training data but is not identical to it.[15] Creators are outraged at what they consider to be theft of their artistic creations.[16] Artwork such as illustrations for articles, books, or album covers may soon face competition from AI, undermining a thriving area of commercial art as well.[17]

Currently, fanfiction is protected under the doctrine of fair use, which allows creators to add new elements, criticism, or commentary to an already existing work, in a way that transforms it.[18] The next question likely to stem from Thaler will be whether AI creations are subject to the same protections that fan created works are.

The fear of the possible consequences of AI can be slightly assuaged through the reality that AI cannot accurately and genuinely capture human memory, thoughts, and emotional expression. These human skills will continue to make creators necessary for their connections to humanity and the ability to express that connection. How a fan resonates with a novel or T.V. show, and then produces a piece of work based on that feeling, is uniquely theirs. The decision in Thaler reaffirms this notion. AI does not offer the human creative element that is required to both receive copyright and also connect with viewers in a meaningful way.[19]

Furthermore, the difficulty with new technology like AI is that it is hard to understand immediately, which can cause frustration or a sense of threat. Change is uncomfortable. However, with knowledge and experience, AI might become a useful tool for fanfiction creators.

The element of creative projects that makes them so meaningful to people is the way that they can provide a true insight and experience that is relatable and distinctly human.[20] The alternative to banning AI or completely rendering human artists obsolete is to find a middle ground that protects both sides. The interests of technological innovation should not supersede the concerns of artists and creators.

Ultimately, as stated in Thaler, AI artwork that has no human authorship does not get copyright.[21] However, this still leaves unanswered questions that future cases will likely present before the courts. Are there protections that can be made for online creators’ artwork and fictional writings to prevent their use or presence in AI databases? The Copyright Act exists to be malleable and adaptable with time.[22] Human involvement and creative control will have to be assessed as AI becomes more prominent in personal and professional settings.

Notes

[1] Thaler v. Perlmutter, 2023 U.S. Dist. LEXIS 145823, at *1 (D.D.C. 2023).

[2] Id.

[3] Id.

[4] Id.

[5] Id.

[6] Id. at *3.

[7] Id. at *10.

[8] Id.

[9] https://www.natlawreview.com/article/judge-rules-content-generated-solely-ai-ineligible-copyright-ai-washington-report.

[10] Id.

[11] https://www.theguardian.com/artanddesign/2023/jan/23/its-the-opposite-of-art-why-illustrators-are-furious-about-ai#:~:text=AI%20doesn%27t%20do%20the,what%20AI%20art%20is%20doing.%E2%80%9D.

[12] https://www.theguardian.com/technology/2022/nov/12/when-ai-can-make-art-what-does-it-mean-for-creativity-dall-e-midjourney.

[13] https://www.reuters.com/legal/ai-generated-art-cannot-receive-copyrights-us-court-says-2023-08-21.

[14] https://www.theguardian.com/technology/2022/nov/12/when-ai-can-make-art-what-does-it-mean-for-creativity-dall-e-midjourney.

[15] Id.

[16] Id.

[17] Id.

[18] https://novelpad.co/blog/is-fanfiction-legal# (citing Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994)).

[19] https://www.reuters.com/default/humans-vs-machines-fight-copyright-ai-art-2023-04-01/.

[20] https://news.harvard.edu/gazette/story/2023/08/is-art-generated-by-artificial-intelligence-real-art/.

[21] Thaler v. Perlmutter, 2023 U.S. Dist. LEXIS 145823, at *1 (D.D.C. 2023).

[22] Id. at *10.


Fake It ‘Til You Make It: How Should Deepfakes Be Regulated?

Tucker Bender, MJLST Staffer

Introduction

While rapidly advancing artificial intelligence (AI) is certain to elevate technology and human efficiency, AI also poses several threats. Deepfakes use machine learning and AI to essentially photoshop individuals into images and videos. The advancement of AI allows unskilled individuals to quickly create incredibly lifelike fake media. Further, in an increasingly digital world, deepfakes can be used to rapidly disseminate misinformation and cause irreparable harm to someone’s reputation. Minnesota is an example of a state that has recently enacted a deepfake law. However, some view these laws as a violation of First Amendment rights and as unnecessary given the incentives private companies already have to monitor their sites for misinformation.

Minnesota’s Deepfake Law

On August 1, 2023, a deepfake law became effective in Minnesota.[1] In the absence of any federal law, Minnesota joins a handful of states that have enacted legislation to combat deepfakes.[2] Laws vary by state, with some allowing criminal charges in certain situations, while others allow a civil action. Specifically, the Minnesota law imposes civil and criminal liability for the “nonconsensual dissemination of a deep fake depicting intimate parts or sexual acts” and criminal liability for the “use of deep fake technology to influence an election.”[3]

The law imposes severe penalties for each. For creating and disseminating a sexual deepfake, damages can include general and special damages, profit gained from the deepfake, a civil penalty awarded to the plaintiff in the amount of $100,000, and attorney fees.[4] Additionally, criminal penalties can consist of up to three years imprisonment, a fine of up to $5,000, or both.[5] Criminal penalties for use of deepfake technology to influence an election vary depending on whether it is a repeat violation, but can result in up to five years imprisonment, a fine of up to $10,000, or both.[6]

These two deepfake uses appear to elevate the penalties of Minnesota’s criminal defamation statute. The defamation statute allows up to one year of imprisonment, a fine of up to $3,000, or both for whoever “communicates any false and defamatory matter to a third person without the consent of the person defamed.”[7]

It is completely logical for the use of deepfakes to carry harsher penalties than other methods of defamation. Other methods of defamation can be harmful, but typically consist of publications or statements made by a third party about a victim. Deepfakes, on the other hand, make viewers believe the victim is making the statement or committing the act themselves. The image association with a deepfake understandably creates greater harm, as recollection of the deepfake imagery can be difficult for viewers to dissociate from the victim. 

Almost everyone can agree that the Minnesota deepfake law was needed legislation, as evidenced by the bill passing the House in a 127-0 vote.[8] However, the law may be too narrow. Deepfake technology is indisputably damaging when used to create sexually explicit images of someone or to influence an election. But regardless of the false imagery depicted by the deepfake, the image association makes the harm to one’s reputation much greater than mere spoken or written words by a third party. By prohibiting only two uses of deepfake technology in the law, a door is left open for someone to create a deepfake of a victim spewing hateful rhetoric or committing heinous, non-sexual acts. While victims of these deepfakes can likely find redress through civil defamation suits for damages, the criminal liability of the deepfake creators would appear limited to Minnesota’s criminal defamation statute.[9] Further, defamation statutes are better suited to protect celebrities, but deepfakes are more likely to be damaging to people outside of the public eye.[10] There is a need for deepfake-specific legislation to address the technologically advanced harm that deepfakes can cause to the average person.

As state (and possibly federal) statutes progress to include deepfake laws, legislators should avoid drafting the laws too narrowly. While deepfakes that depict sexual acts or influence elections certainly deserve inclusion, so do other uses of deepfakes that injure a victim’s reputation. Elevated penalties should be implemented for any type of deepfake defamation, with even further elevated penalties for certain uses of deepfakes. 

Opposition to Deepfake Laws

Although many agree that deepfakes present issues worthy of legislation, others are skeptical and worried about First Amendment rights, as well as broad legislation undermining valuable uses of the technology.[11] Specifically, skeptics are concerned about legislation that targets political speech, such as the Minnesota statute, as political speech is arguably a category of free speech protected above any other.[12]

Another real concern with broad deepfake legislation is that it would place a burden on innocent creators while doing little to stop those spreading malicious deepfakes. This is due, in part, to the difficulty in tracking down malicious deepfake uploaders, who do so anonymously. Proposed federal regulation suggests a requirement that “any advanced technological false personation record which contains a moving visual element shall contain an embedded digital watermark clearly identifying such record as containing altered audio or visual elements”.[13] However, opponents view this as useless legislation. Deepfake creators and others wanting to spread misinformation clearly have the technical ability to remove a watermark if they can create advanced deepfakes in the first instance.  

Role of Private Parties

Social media sites such as X (formerly known as Twitter) and Facebook should also be motivated to keep harmful deepfakes from being disseminated throughout their platforms. Users of these sites generally will want to be free from harassment and misinformation. This has led to solutions such as X implementing “Community Notes”, which allows videos created using deepfake technology to remain on the platform, but clearly labels them as fake or altered.[14] Private solutions such as this may be the best compromise. Viewers are able to understand the media is fake, while creators are still able to share their work without believing their free speech is being impinged upon. However, the sheer amount of content posted on social media sites makes it inevitable that some harmful deepfakes are not marked accordingly, and thus cause misinformation and reputational injury.

Although altered images and misinformation are nothing new, deepfakes and today’s social media platforms present novel challenges resulting from the realism and rapid dissemination of the modified media. Whether the solution is through broad, narrow, or nonexistent state laws is left to be determined and will likely be a subject of debate for the foreseeable future. 

Notes

[1] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[2] https://www.pymnts.com/artificial-intelligence-2/2023/states-regulating-deepfakes-while-federal-government-remains-deadlocked/

[3] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[4] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[5] Id.

[6] Id.

[7] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[8] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[9] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[10] https://www.ebglaw.com/wp-content/uploads/2021/08/Reif-Fellowship-2021-Essay-2-Recommendation-for-Deepfake-Law.pdf

[11] https://rtp.fedsoc.org/paper/deepfake-laws-risk-creating-more-problems-than-they-solve/

[12]  Id.

[13] https://www.congress.gov/bill/117th-congress/house-bill/2395/text

[14] https://communitynotes.twitter.com/guide/en/about/introduction


Generate a JLST Blog Post: In the Absence of Regulation, Generative AI May Be Reined in Through the Courts

Ted Mathiowetz, MJLST Staffer

In the space of a year, artificial intelligence (AI) seems to have grabbed hold of the contemporary conversation about technology and calls for increased regulation. With ChatGPT’s release in late November 2022, as well as the release of various AI art generation tools earlier that year, the conversation surrounding tech regulation quickly centered on AI. In the wake of growing Congressional focus on AI, the White House quickly proposed a blueprint for a preliminary AI Bill of Rights as fears over unregulated advances in technology have grown.[1] The debate has raged on over the potential efficacy of this Bill of Rights and whether it could be enacted in time to rein in AI development.[2] But, while Washington weighs whether the current regulatory framework will effectively set some ground rules, the matter of AI has already begun to be litigated.[3]

Fear over the power of AI has been mounting in numerous sectors as ChatGPT has proven its ability to pass exams such as the Multistate Bar Exam,[4] the U.S. Medical Licensing Exam, and more.[5] Fears over AI’s capabilities and potential advancements are not limited to academia, either. The legal industry is already circling the wagons to prevent AI lawyers from representing would-be clients in court.[6] Edelson, a law firm based in Chicago, filed a class action complaint in California state court alleging that DoNotPay, an AI service that markets itself as “the world’s first robot lawyer,” unlawfully provides a range of legal services.[7] The complaint alleges that DoNotPay is engaging in unlawful business practices by “holding itself out to be an attorney”[8] and “engaging in the unlawful practice of law by selling legal services… when it was not licensed to practice law.”[9]

Additional litigation has been filed against the makers of AI art generators, alleging copyright violations.[10] The plaintiffs argue that a swath of AI firms violated the Digital Millennium Copyright Act in constructing their AI models, using software that copied millions of images as references for the user-requested images the AI generates, without compensating those whose images were copied.[11] Notably, both of these suits are class-action lawsuits[12] and may serve as a strong blueprint for how wary parties can rein in AI through the court system.

Faridian v. DONOTPAY, Inc. — The Licensing Case

AI is here to stay for the legal industry, for better or worse.[13] However, where some have been sounding the alarm for years that AI will replace lawyers altogether,[14] the truth is likely to be quite different, with AI becoming a tool that helps lawyers become more efficient.[15] There are nonetheless existential threats to the industry, as seen in the Faridian case, in which DoNotPay is alleged to have allowed people to write wills, contracts, and more without the help of a trained legal professional. This has led to shoddy AI-generated work, which creates concern that AI legal technology will likely lead to more troublesome legal action down the line for its users.[16]

It seems the AI lawyer revolution may not be around much longer. In addition to the Faridian case, in which DoNotPay is being sued over its robot lawyer’s mainly transactional work, the company has also run into problems trying to litigate. DoNotPay tried to get its AI attorney into court to dispute traffic tickets and was later “forced” to withdraw the technology’s help in court after “multiple state bar associations [threatened]” to sue and cautioned that the move could result in prison time for the CEO, Joshua Browder.[17]

Given that most states require applicants to the bar to 1) complete a Juris Doctor program at an accredited institution, 2) pass the bar exam, and 3) pass a moral character evaluation in order to practice law, it is rather likely that robot lawyers will not see a courtroom for some time, if ever. Instead, there may be a pro se revolution of sorts, wherein litigants aid themselves with AI legal services outside of the courtroom.[18] But, for the most part, the legal field will likely incorporate AI into its repository of technology rather than be replaced by it. Nevertheless, the Faridian case, depending on its outcome, will likely show occupations with extensive licensing requirements that are endangered by AI advancement a clear path forward for litigation.

Sarah Andersen et al., v. Stability AI Ltd. — The Copyright Case

For occupations that do not have barriers to entry the way the legal field does, there is another path forward in the courts to try to stem the tide of AI in the absence of regulation. In the Andersen case, a class of artists has brought suit against various AI art generation companies for infringing their copyrighted artwork by using it to create the reference framework for generated images.[19] The function of the generative AI is relatively straightforward. For example, if I were to log on to an AI art generator and type in “Generate Lionel Messi in the style of Vincent Van Gogh,” it would produce an image of Lionel Messi in the style of Van Gogh’s “Self-Portrait with a Bandaged Ear.” There is no copyright on Van Gogh’s artwork, but the AI accesses all kinds of copyrighted artwork in the style of Van Gogh for reference points, as well as copyrighted images of Lionel Messi, to create the generated image. The AI image services have thus created a multitude of legal issues for their parent companies, including claims of direct copyright infringement for storing copies of the works in building out the system, vicarious copyright infringement when consumers generate artwork in the style of a given artist, and DMCA violations for not properly attributing existing work, among other claims.[20]
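To make the prompt-to-image mechanics concrete, the short sketch below shows how a text-to-image model is typically invoked from code. It uses the open-source Hugging Face diffusers library and a publicly released Stable Diffusion checkpoint purely as an illustration; the model name, prompt, and output filename are assumptions chosen for demonstration, not the defendants’ actual systems, and the Andersen plaintiffs’ claims concern how such models were trained rather than this calling code.

```python
# A minimal, illustrative sketch of invoking a text-to-image model.
# Assumes the open-source "diffusers" and "torch" packages are installed;
# this is a publicly available analogue, not any defendant's system.
from diffusers import StableDiffusionPipeline

# Download a publicly released Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# The user's prompt; the model draws on patterns learned from its training images.
prompt = "Lionel Messi in the style of Vincent Van Gogh"
image = pipe(prompt).images[0]
image.save("messi_van_gogh.png")
```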

This case is being watched closely and is already hotly debated, as a ruling against AI could lead to claims against other generative AI tools, such as ChatGPT, for not properly attributing or paying for material used in building out the AI.[21] Defendants have claimed that the use of copyrighted material constitutes fair use, but these claims have not yet been fully litigated, so we will have to wait for a decision to come down on that front.[22] It is clear that, as fast as generative AI seemed to take hold of the world, litigation has ramped up calling its future into question. Other governments are also becoming increasingly wary of the technology, with Italy already banning ChatGPT and Germany heavily considering it, citing “data security concerns.”[23] It remains to be seen how the United States will deal with this new technology in terms of regulation or an outright ban, but it is clear that the current battleground is in the courts.

Notes

[1] See Blueprint for an AI Bill of Rights, The White House (Oct. 5, 2022), https://www.whitehouse.gov/ostp/ai-bill-of-rights/; Pranshu Verma, The AI ‘Gold Rush’ is Here. What will it Bring? Wash. Post (Jan. 20, 2023), https://www.washingtonpost.com/technology/2023/01/07/ai-2023-predictions/.

[2] See Luke Hughest, Is an AI Bill of Rights Enough?, TechRadar (Dec. 10, 2022), https://www.techradar.com/features/is-an-ai-bill-of-rights-enough; see also Ashley Gold, AI Rockets ahead in Vacuum of U.S. Regulation, Axios (Jan. 30, 2023), https://www.axios.com/2023/01/30/ai-chatgpt-regulation-laws.

[3] Ashley Gold supra note 2.

[4] Debra Cassens Weiss, Latest Version of ChatGPT Aces Bar Exam with Score nearing 90th Percentile, ABA J. (Mar. 16, 2023), https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile.

[5] See e.g., Lakshmi Varanasi, OpenAI just announced GPT-4, an Updated Chatbot that can pass everything from a Bar Exam to AP Biology. Here’s a list of Difficult Exams both AI Versions have passed., Bus. Insider (Mar. 21, 2023), https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1.

[6] Stephanie Stacey, ‘Robot Lawyer’ DoNotPay is being Sued by a Law Firm because it ‘does not have a Law Degree’, Bus. Insider(Mar. 12, 2023), https://www.businessinsider.com/robot-lawyer-ai-donotpay-sued-practicing-law-without-a-license-2023-3

[7] Sara Merken, Lawsuit Pits Class Action Firm against ‘Robot Lawyer’ DoNotPay, Reuters (Mar. 9, 2023), https://www.reuters.com/legal/lawsuit-pits-class-action-firm-against-robot-lawyer-donotpay-2023-03-09/.

[8] Complaint at 2, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[9] Id. at 10.

[10] Riddhi Setty, First AI Art Generator Lawsuits Threaten Future of Emerging Tech, Bloomberg L. (Jan. 20, 2023), https://news.bloomberglaw.com/ip-law/first-ai-art-generator-lawsuits-threaten-future-of-emerging-tech.

[11] Complaint at 1, 13, Sarah Andersen et al., v. Stability AI Ltd., et al., Docket No. 3:23-cv-00201 (N.D. Cal. 2023).

[12] Id. at 12; Complaint at 1, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[13] See e.g., Chris Stokel-Walker, Generative AI is Coming for the Lawyers, Wired (Feb. 21, 2023), https://www.wired.com/story/chatgpt-generative-ai-is-coming-for-the-lawyers/.

[14] Dan Mangan, Lawyers could be the Next Profession to be Replaced by Computers, CNBC (Feb.17, 2017), https://www.cnbc.com/2017/02/17/lawyers-could-be-replaced-by-artificial-intelligence.html.

[15] Stokel-Walker, supra note 13.

[16] Complaint at 7, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[17] Debra Cassens Weiss, Traffic Court Defendants lose their ‘Robot Lawyer’, ABA J. (Jan. 26, 2023), https://www.abajournal.com/news/article/traffic-court-defendants-lose-their-robot-lawyer#:~:text=Joshua%20Browder%2C%20a%202017%20ABA,motorists%20contest%20their%20traffic%20tickets..

[18] See Justin Snyder, RoboCourt: How Artificial Intelligence can help Pro Se Litigants and Create a “Fairer” Judiciary, 10 Ind. J.L. & Soc. Equality 200 (2022).

[19] See Complaint, Sarah Andersen et al., v. Stability AI Ltd., et al., Docket No. 3:23-cv-00201 (N.D. Cal. 2023).

[20] Id. at 10–12.

[21] See e.g., Dr. Lance B. Eliot, Legal Doomsday for Generative AI ChatGPT if Caught Plagiarizing or Infringing, warns AI Ethics and AI Law, Forbes (Feb. 26, 2023), https://www.forbes.com/sites/lanceeliot/2023/02/26/legal-doomsday-for-generative-ai-chatgpt-if-caught-plagiarizing-or-infringing-warns-ai-ethics-and-ai-law/?sh=790aecab122b.

[22] Ron. N. Dreben, Generative Artificial Intelligence and Copyright Current Issues, Morgan Lewis (Mar. 23, 2023), https://www.morganlewis.com/pubs/2023/03/generative-artificial-intelligence-and-copyright-current-issues.

[23] Nick Vivarelli, Italy’s Ban on ChatGPT Sparks Controversy as Local Industry Spars with Silicon Valley on other Matters, Yahoo! (Apr. 3, 2023), https://www.yahoo.com/entertainment/italy-ban-chatgpt-sparks-controversy-111415503.html; Adam Rowe, Germany might Block ChatGPT over Data Security Concerns, Tech.Co (Apr. 3, 2023), https://tech.co/news/germany-chatgpt-data-security.


Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art, and a study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences for human society. ChatGPT, a point for the robots, embodies people’s fear of the bleak future of a fully automated world.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November of 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code in response to prompts queried by users. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. This GPT model is also autoregressive, meaning that it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without its limitations, however. OpenAI says that ChatGPT’s limitations include: (1) writing plausible-sounding but incorrect or nonsensical answers, (2) being sensitive to tweaks to the input phrasing or to the same prompt being attempted multiple times, (3) being excessively verbose and overusing certain phrases, (4) being unable to ask clarifying questions when the user provides an ambiguous query, and (5) responding to harmful instructions or exhibiting biased behavior.
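To illustrate what “autoregressive” means in practice, the sketch below generates text by repeatedly predicting the next token. It uses GPT-2, an older, openly downloadable model, via the Hugging Face transformers library as a stand-in; ChatGPT’s own model is not publicly available, and the prompt and generation settings here are assumptions chosen only for demonstration.

```python
# A minimal sketch of autoregressive generation: the model repeatedly predicts
# the next token given everything written so far. GPT-2 serves as an openly
# available stand-in for ChatGPT's underlying (non-public) model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change the legal profession by"
result = generator(prompt, max_new_tokens=30, do_sample=True)

# Prints the prompt followed by the model's predicted continuation.
print(result[0]["generated_text"])
```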

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick, writing in the Harvard Business Review, claims that it is a tipping point for AI because of this difference in quality: it can be used to write weight-loss plans and children’s books, and even to offer advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I even attempted to use ChatGPT to write this blog post for me, although it wrote only 347 words—nowhere near the 1,000-word minimum that I had set for it. What is evident in these examples, however, is a level of quality that sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. Professor Mollick had a student use ChatGPT to complete a four-hour project in less than an hour, writing computer code for a startup prototype using code libraries the student had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, although it did so by the skin of its silicon teeth. Indeed, it was even able to pass Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. Of course, while ChatGPT would not graduate at the top of its class and would actually be placed on academic probation, it would still, notably, graduate with a degree based on these results.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and being this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being used only in areas where failure is expensive and intolerable—such as autonomous driving—AI is now being used in tasks where some failure is acceptable. In these tasks, AI such as ChatGPT is already performing well enough that online customer service roles have been taken over by AI, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was about to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When ChatGPT is used as a tool rather than a standalone replacement for humans, however, the realm of possibilities regarding productivity expands exponentially. Businesses and individuals can save time and resources by having AI handle menial tasks such as drafting letters and writing emails. Writers with writer’s block, for example, can suddenly gain inspiration by having a conversation with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters that prevent it from using offensive language, these filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses ChatGPT generates.

Some drawbacks to using AI and ChatGPT for these tasks are that, while ChatGPT gives human-like answers, it does not necessarily give the right answer. ChatGPT also cannot explain what it does or how it does it, making it difficult to verify how it arrives at the answers it gives. Finally, and perhaps most critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can explain data but cannot explain why it matters.

Conclusion

In a more positive light, some may herald the improvements in AI and ChatGPT as the dawn of a new human-machine hybrid Industrial Revolution, in which humans are able to be vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to approximate human intelligence. However, as advancements in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, discussions about AI-generated art today reflect that AI may achieve what was believed to be impossible sooner rather than later. In this case, AI like ChatGPT can be viewed not as the harbinger of a human-machine society, but as an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.


A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1]“Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has been a bit less buzzy is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] In teaching AI to generate text with a natural language style, computer scientists engage in “generative pre-training,” feeding the AI huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to utilize its interactions with users to fine-tune itself. This author asked ChatGPT itself how its machine learning implements user data, and it described itself as a “living” AI—one that is constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps much more unsettling is the concept that this AI is indiscriminately sucking in user data like a black hole.
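The toy sketch below illustrates the two training phases described above: next-token prediction over unlabeled text (“generative pre-training”), followed by further training of the same weights on a smaller, curated set of examples (“fine-tuning”). It is a tiny character-level model written with PyTorch purely for illustration; the corpus strings, model sizes, and step counts are made-up assumptions and bear no relation to OpenAI’s actual data or pipeline.

```python
# Toy illustration of "generative pre-training" followed by "fine-tuning".
# Character-level model; all data and hyperparameters are invented for demonstration.
import torch
import torch.nn as nn

corpus = "the quick brown fox jumps over the lazy dog "   # stand-in for scraped, unlabeled text
fine_tune_text = "please answer politely and helpfully "  # stand-in for curated examples
vocab = sorted(set(corpus + fine_tune_text))
stoi = {ch: i for i, ch in enumerate(vocab)}

def encode(text):
    return torch.tensor([stoi[ch] for ch in text])

embed_then_lstm = nn.Sequential(nn.Embedding(len(vocab), 32), nn.LSTM(32, 64, batch_first=True))
head = nn.Linear(64, len(vocab))
optimizer = torch.optim.Adam(list(embed_then_lstm.parameters()) + list(head.parameters()), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train_next_token(text, steps):
    """Train the model to predict each next character from the preceding ones."""
    ids = encode(text).unsqueeze(0)             # shape (1, seq_len)
    inputs, targets = ids[:, :-1], ids[:, 1:]   # next-token prediction pairs
    for _ in range(steps):
        hidden_states, _ = embed_then_lstm(inputs)
        logits = head(hidden_states)
        loss = loss_fn(logits.reshape(-1, len(vocab)), targets.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

train_next_token(corpus, steps=200)           # phase 1: pre-training on "unlabeled" text
train_next_token(fine_tune_text, steps=50)    # phase 2: fine-tuning the same weights on curated text
```

The key point for the privacy discussion is that both phases fold their input text into the model’s weights, which is why user interactions that feed later rounds of fine-tuning become part of the system itself.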

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might be able to glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. Sure, one might worry about Meta targeting its advertisements based on user data or Google recommending restaurants based on their GPS data. In comparison, the way that our data is being used by ChatGPT is in a league of its own. User data is being iterated upon, and most importantly, is dispositive in how ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; it is ChatGPT.

In other words, the general public may not have full awareness of what kind of privacy protections—or lack thereof—are in place in the United States. In brief, we tend to favor free expression over the protection of individual privacy. The federal statute that regulates information sent over the Internet is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. The bulk of ECPA, enacted in 1986, predates the modern internet. As a result, its amendments have been meager changes that do not keep up with technological advancement. Most of ECPA addresses matters like the interception of communications through wiretapping and government access to electronic communications via warrants. “Electronic communications” may be a concept that includes the Internet, yet the Internet is far too amorphous to be regulated by this outdated Act, and AI tools existing on the Internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data with the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of the personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered one of the most comprehensive privacy regulations in the world. Since ChatGPT is accessible by those in the EU, interesting questions are raised by the fact that the use and collection of data is the base function of this AI. Does the GDPR even allow for the use of ChatGPT, considering how user data is constantly used to evolve the technology?[7] Collecting and using EU residents’ personal data without a lawful basis violates the GDPR, but how “use” applies to ChatGPT is not clear. The use of data in ChatGPT’s fine-tuning process could arguably be a violation of the GDPR.

While a somewhat unique use case, a particularly troubling example raised by a recent Forbes article is a lawyer using ChatGPT to generate a contract and inputting confidential information into the chatbot in the process.[8] That information is stored by ChatGPT, and disclosing it in this way could potentially violate ABA rules. As ChatGPT stirs up even more public fervor, professionals are likely to try to use the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they are inputting into the tool, especially if confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company responsible for ChatGPT, governs ChatGPT’s data practices. OpenAI discloses that it collects information including contact information (name, email, etc.), profiles, technical information (IP address, browser, device), and interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, conducting research, customer service), affiliates and subsidiaries of the company, the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws like the California Consumer Privacy Act (CCPA), in that transparency is a priority and users can access and delete data upon request. However, compliance with the GDPR and CCPA must be in name only, as these regulations did not contemplate what it means for user data to form the foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI arguably should be guided by regulations that balance innovation with privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to take a long time in the U.S., if it arrives at all. If users of ChatGPT can be cognizant of what they are inputting into the tool and stay informed about what kind of obligations OpenAI has to its users’ privacy, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Infosecurity Magazine (Jan. 28, 2023), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023),  https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.


Only Humans Are Allowed: Federal Circuit Says No to “AI Inventors”

Vivian Lin, MJLST Staffer

On August 5, 2022, the U.S. Court of Appeals for the Federal Circuit affirmed the U.S. District Court for the Eastern District of Virginia’s decision that artificial intelligence (AI) cannot be an “inventor” on a patent application,[1] joining many other jurisdictions in confirming that only a natural person can be an “inventor.”[2] Currently, South Africa remains the only jurisdiction that has granted Dr. Stephen Thaler’s patent naming DABUS, an AI, as the sole inventor of two patentable inventions.[3] With the release of the Federal Circuit’s opinion refusing to recognize AI as an inventor, Dr. Thaler’s fight to credit AI for inventions reaches a plateau.

DABUS, formally known as the Device for the Autonomous Bootstrapping of Unified Sentience, is an AI-based creativity machine created by Dr. Stephen Thaler, founder of the software company Imagination Engines, Inc. Dr. Thaler claims that DABUS independently invented two patentable inventions: the Fractal Container and the Neural Flame. For the past few years, Dr. Thaler has been battling patent offices around the world in an effort to obtain patents for these two inventions. To date, every patent office except one[4] has refused to grant the patents on the grounds that the applications do not name a natural person as the inventor.

Many jurisdictions legally require that the inventor named on a patent be a natural person. The recent Federal Circuit opinion rested mainly on statutory interpretation, concluding that the statutory text clearly requires the inventor to be a natural person.[5] And although many jurisdictions have left the term “inventor” undefined, there appears to be general agreement that an inventor must be a natural person.[6]

Is DABUS the True Inventor?

There are many issues centered on AI inventorship. The first is whether AI can be the true inventor, and thus take credit for an invention, even though a human created the AI itself. Here it becomes necessary to ask whether there was human intervention during the discovery process and, if so, what type of intervention was involved. It may be that a human was the actual inventor of a product while the AI only assisted in carrying out that idea. For example, when a developer designs the AI with a particular question in mind and carefully selects the training data, the AI is merely assisting the invention and the developer is seen as the true inventor.[7] In analyzing the DABUS case, Dr. Rita Matulionyte, a senior lecturer at Macquarie Law School in Australia and an expert in intellectual property and information technology law, has argued that DABUS is not the true inventor because Dr. Thaler’s own role in the inventions was substantial, assuming he formulated the problem, developed the algorithm, created the training data, and so on.[8]

However, it is a closer question when both AI and human effort are important to the invention. For example, AI might identify the compound for a new drug, but to complete the discovery, a scientist still has to test the compound.[9] U.S. patent law requires that the “inventor must contribute to the conception of the invention.”[10] Conception is further defined as “the formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.”[11] In the drug discovery scenario, it is difficult to determine who invented the new drug. Neither the AI developers nor the scientists fit the definition of “inventor”: the developers and trainers only built and trained the algorithm without any knowledge of the potential discovery, while the scientists only confirmed the final discovery without contributing to the development of the algorithm or the identification of the compound.[12] In this scenario, the AI likely did the majority of the work and made the important discovery itself, and should thus be the inventor of the new compound.[13]

The debate over who is the true inventor matters because mislabeling the inventor can have serious consequences. Legally, improper inventorship attribution may cause a patent application to be denied, or it may lead to the later invalidation of a granted patent. Practically speaking, human inventors can take credit for their inventions, and that recognition may incentivize future inventive work. A misattribution may therefore harm human inventiveness, as true inventors could be discouraged when their contributions go unrecognized.

Should AI-Generated Inventions be Patentable?

While concluding that AI is the sole inventor of an invention may be difficult, as outlined in the previous section, what happens when AI is found to be the true, sole inventor? The discussion of whether AI-generated inventions should be patentable focuses mostly on policy arguments. Dr. Thaler and Ryan Abbott, a law professor and the lead of Thaler’s legal team, have argued that allowing patent protection for AI-generated inventions will encourage developers to invest time in building more creative machines that will eventually lead to more inventions.[14] They also argue that crediting AI for inventorship will protect the rights of human inventors.[15] For example, it cuts out the possibility of one person taking credit for another’s invention, which often happens when students participate in university research but are overlooked on patent applications.[16] And without patent eligibility, and the public disclosure of inventions that the patent system requires, it is very likely that owners of AI will keep such inventions secret and privately benefit from a de facto monopoly for however long it takes the rest of society to make the same discovery independently.[17]

Some critics argue against Thaler and Abbott’s view. For one, they believe that AI at its current stage is not autonomous enough to be an inventor, and that human effort should be properly credited.[18] Even if AI can independently invent, they argue, its inventions should not be patentable, because once they are, the same small group of people with access to these machines will own too many AI-generated patents in the same field.[19] That would prevent smaller companies from entering the field, with a negative effect on human inventiveness.[20] Finally, there has been a concern that refusing patents for AI-generated inventions will lead AI owners to keep the inventions as trade secrets, creating a potential long-term monopoly. That may not be a major concern, however, as inventions like the two created by DABUS are likely to be easily reverse engineered once they reach the market.[21]

Currently, Dr. Thaler plans to file appeals in each jurisdiction that has rejected his applications and aims to seek copyright protection as an alternative in the U.S. It is questionable whether Dr. Thaler will succeed on those appeals, but if he does, it would likely result in major changes to patent systems around the world. Even though most jurisdictions today refuse to recognize AI as an inventor, as the technology advances the need to address this issue will only become more pressing.

Notes

[1] Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022).

[2] Ryan Abbott, July 2022 AIP Update Around the World, The Artificial Inventor Project (July 10, 2022), https://artificialinventor.com/867-2/.

[3] Id.

[4] South Africa’s patent law does not require that an inventor be a natural person. Jordana Goodman, Homography of Inventorship: DABUS And Valuing Inventors, 20 Duke L. & Tech. Rev. 1, 17 (2022).

[5] Thaler, 43 F.4th at 1209, 1213.

[6] Goodman, supra note 4, at 10.

[7] Ryan Abbott, The Artificial Inventor Project, WIPO Magazine (Dec. 2019), https://www.wipo.int/wipo_magazine/en/2019/06/article_0002.html.

[8] Rita Matulionyte, AI as an Inventor: Has the Federal Court of Australia Erred in DABUS? 12 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3974219.

[9] Susan Krumplitsch et al., Can an AI System Be Named the Inventor? In Wake of EDVA Decision, Questions Remain, DLA Piper (Sept. 13, 2021), https://www.dlapiper.com/en/us/insights/publications/2021/09/can-an-ai-system-be-named-the-inventor/#11.

[10] 2109 Inventorship, USPTO, https://www.uspto.gov/web/offices/pac/mpep/s2109.html (last visited Oct. 8, 2022).

[11] Hybritech, Inc. v. Monoclonal Antibodies, Inc., 802 F.2d 1367, 1376 (Fed. Cir. 1986).

[12] Krumplitsch et al., supra note 9.

[13] Yosuke Watanabe, I, Inventor: Patent Inventorship for Artificial Intelligence Systems, 57 Idaho L. Rev. 473, 290.

[14] Abbott, supra note 2.

[15] Id.

[16] Goodman, supra note 4, at 21.

[17] Abbott, supra note 2.

[18] Matulionyte, supra note 8, at 10–14.

[19] Id. at 19.

[20] Id.

[21] Id. at 18.




“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report the crime to the police and hold the perpetrator accountable. When someone is wronged, they can seek redress in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general sense that they can hold wrongdoers accountable and of the basic steps in the process. In real life, there are explicitly written laws that everyone must abide by. But what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now, a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those in the legal profession to imagine how to apply the law to a technology that is not yet fully developed. However, Congress and other law-making bodies will need to consider how to govern how people use the Metaverse and ensure that it is not abused.

The Metaverse is a term that has recently gained a lot of attention, although the concept is by no means new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment that takes a person out of the real world. Augmented reality, on the other hand, uses the real world and adds to or changes it, often through a camera. Both virtual and augmented reality are used today, often in video games. For virtual reality, think of the headsets that let you immerse yourself in a game. I myself have tried virtual reality games such as Job Simulator; unfortunately, I burned down the kitchen of the restaurant I was working in. An example of augmented reality is Pokémon Go, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three elements, along with other possibilities. As Matthew Ball, a venture capitalist, has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.
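
For readers unfamiliar with how a "distributed ledger" records provenance, the following is a minimal, purely illustrative sketch of the core idea: each block carries the hash of the block before it, so tampering with an earlier record becomes detectable. It is a toy model written for this post, with made-up asset and owner names, not how any particular metaverse platform actually stores digital assets.

# Toy hash-chained ledger (Python): each block stores the hash of the previous
# block, so altering an earlier record changes every hash that follows it.
import hashlib
import json

def make_block(data: dict, prev_hash: str) -> dict:
    body = {"data": data, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

# Hypothetical records tracking ownership of a virtual land parcel.
genesis = make_block({"asset": "parcel-001", "owner": "alice"}, prev_hash="0" * 64)
transfer = make_block({"asset": "parcel-001", "owner": "bob"}, prev_hash=genesis["hash"])

def verify(chain: list) -> bool:
    # Recompute each block's hash and check the links; any tampering breaks the chain.
    for i, block in enumerate(chain):
        expected_prev = "0" * 64 if i == 0 else chain[i - 1]["hash"]
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != expected_prev or block["hash"] != recomputed:
            return False
    return True

print(verify([genesis, transfer]))  # True; edit genesis["data"] afterward and this becomes False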

The Metaverse will allow people to do the activities they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not yet exist, in the sense that there is no single virtual world that everyone can access, some examples come close to what experts imagine the Metaverse will look like. The game Second Life is a simulation that gives users access to a virtual world where they can eat, shop, work, and do nearly any other real-world activity. Decentraland is another example, allowing people to buy and sell virtual land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not fully worked out and is still in development. However, the concepts behind it have many popular culture antecedents, such as Ready Player One and Snow Crash, the Neal Stephenson novel that coined the term “metaverse.” Many people are excited about the possibilities the Metaverse will bring, such as new ways of learning through real-world simulations. Yet with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not yet fully understood, and how do they ensure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman reported being gang raped in Horizon Worlds, the VR platform created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response beyond an apology from Meta and statements that it would make improvements. This was a horrifying experience that showcased the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so lifelike that a person assaulted in a virtual world can feel as though they experienced the assault in real life. This should be raising red flags. The problem, however, arises when trying to regulate activities in the Metaverse. Sexually assaulting someone in virtual reality is different, as a legal matter, from assaulting someone in the real world, even if it feels the same to the victim. And because people know they are in a virtual world, they may believe they can do whatever they want without consequences.

At present, there are no laws governing conduct in the Metaverse. Certainly, this will need to be addressed, as there need to be laws that prevent this kind of behavior. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse, as users can mask their identities and remain anonymous, so it can be hard to determine who committed a prohibited act. At the moment, some virtual-reality platforms have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have terms of service or any other rules governing conduct in Horizon Worlds. Even where such terms exist, the problem remains how to enforce them; banning someone for a week or so is not enough. Actual laws need to be put in place to protect people from sexual assault and other violent acts. The fact that the Metaverse exists outside the real world should not mean that people can do whatever they want, whenever they want.


Breaking the Tech Chain to Slow the Growth of Single-Family Rentals

Sarah Bauer, MJLST Staffer

For many of us looking to buy our first homes during the pandemic, the process has ranged from downright comical to disheartening. Here in Minnesota, the Twin Cities have the worst housing shortage in the nation, a problem that has both Republican and Democratic lawmakers searching for solutions to help renters and buyers alike access affordable housing. People of color are particularly impacted by this shortage because the Twin Cities are also home to the largest racial homeownership gap in the nation.

Although these issues have complex roots, tech companies and investors aren’t helping. The number of single-family rental (SFR) units, single-family homes purchased by investors and rented out for profit, has risen since the Great Recession and exploded over the course of the pandemic. In the Twin Cities, Black neighborhoods have been particularly targeted by investors for this purpose. In 2021, 8% of the homes sold in the Twin Cities metro were purchased by investors, and investors purchased homes in BIPOC-majority zip codes at nearly double the rate of white-majority neighborhoods. Because property ownership is a vehicle for wealth-building, removing housing stock from the available pool essentially transfers the opportunity to build wealth from individual homeowners to investors, who profit from both rents and the increased value of the property at sale.

It’s not illegal for tech companies and investors to purchase and rent out single-family homes. In certain circumstances, it may actually be desirable for them to be in the market. If you are a seller who needs to sell your home before buying a new one, house-flipping tech companies can get you out of your home faster by purchasing it without a showing, an inspection, or contingencies. And investors purchasing single-family homes can provide a floor to the market during slowdowns like the Great Recession, a service that benefits homeowners as well as the investors themselves. But right now we have the opposite problem: not enough homes are available for first-time owner-occupants. Assuming we agree that growing investor ownership is undesirable, what can we do about it? To address the problem, we first need to understand how technology and investors are working in tandem to increase the number of single-family rentals.

 

The Role of House-Flipping Technology and iBuyers

The increase in SFRs is fueled by investors of all kinds: corporations, local companies, and wealthy individuals. For smaller players, recent developments in tech have made it easier to flip properties. For example, a recent CityLab article discussed FlipOS, “a platform that helps investors prioritize repairs, access low-interest loans, and speed the selling process.” Real estate is a decentralized industry, and such platforms make the process of buying single-family homes and renting them out faster. Investors see this as a benefit to the community because rental units come onto the market faster than they otherwise would. But this technology also gives such investors a competitive advantage over would-be owner-occupiers.

The explosion of iBuying during the pandemic also hasn’t helped. iBuyers, short for “instant buyers,” use AI-driven automated valuation models (AVMs) to price a home and make the seller an all-cash, no-contingency offer. This lets the seller offload the property quickly, while the iBuyer repairs, markets, and resells the home. iBuyers are not the long-term investors that own SFRs; they are the house-flippers that facilitate the transfer of property between long-term owners.
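
To make the AVM idea concrete, here is a deliberately simplified sketch of how an automated valuation might feed an instant cash offer. The features, figures, and margin assumptions are hypothetical and chosen for illustration; actual iBuyer models are proprietary and far more sophisticated.

# Toy automated valuation model (Python, scikit-learn): fit a simple linear
# model to hypothetical comparable sales, then back out an instant cash offer.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical comparable sales: [square feet, bedrooms, year built] -> sale price
comps = np.array([
    [1400, 3, 1995],
    [1750, 3, 2001],
    [2100, 4, 2010],
    [1200, 2, 1988],
    [1850, 4, 2005],
])
prices = np.array([265_000, 310_000, 385_000, 220_000, 340_000])

avm = LinearRegression().fit(comps, prices)

# Value a subject property, then subtract assumed repair costs and a risk margin
# (both figures are illustrative) to produce the all-cash offer.
subject = np.array([[1600, 3, 1999]])
estimated_value = float(avm.predict(subject)[0])
offer = estimated_value * 0.95 - 10_000  # 5% risk margin, $10k assumed repairs

print(f"Estimated value: ${estimated_value:,.0f}")
print(f"Instant cash offer: ${offer:,.0f}")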

iBuyers like Redfin, Offerpad, and Opendoor (and formerly Zillow) increasingly purchased properties this way over the course of the pandemic. This is especially true in Sunbelt states, which have a lot of newly constructed single-family homes that are easier to price accurately. As was apparent from the demise of Zillow’s iBuying program, these companies have struggled with profitability because home values can be difficult to predict. The aspects of real estate transactions that slow down traditional homebuyers (title checks, inspections, and so on) also slow down iBuyers. So iBuyers can buy houses fast by making all-cash, no-inspection offers, but they can’t really offload them any faster than another seller.

To the degree that iBuyers in the market are a problem, that problem is two-fold. First, they make it harder for first-time buyers to purchase homes by offering cash and waiving inspections, terms few first-time homebuyers can afford to match. The second problem is a bigger one: iBuyers are buying and selling a lot of starter homes to large, non-local investors rather than back to owner-occupants or local landlords.

 

Transfer from Flippers to Corporate Investors

iBuyers as a group sell a lot of homes to corporate landlords, but it varies by company. After Zillow discontinued its iBuying program, Bloomberg reported that the company planned to offload 7,000 homes to real estate investment trusts (REITs). Offerpad sells 10-20% of its properties to institutional investors. Opendoor claims that it sells “the vast majority” of its properties to owner-occupiers. RedfinNow doesn’t sell to REITs at all. Despite the variation between companies, iBuyers on the whole sold one-fifth of their flips to institutional investors in 2021, with those sales more highly concentrated in neighborhoods of color. 

REITs allow firms to pool funds, buy bundles of properties, and convert them to SFRs. In addition to shrinking the pool of homes available for would-be owner-occupiers, REITs hire or own corporate entities to manage the properties. Management companies for REITs have increasingly come under fire for poor management, aggressively raising rent, and evictions. This is as true in the Twin Cities as elsewhere. Local and state governments do not always appear to be on the same page regarding enforcement of consumer and tenant protection laws. For example, while the Minnesota AG’s office filed a lawsuit against HavenBrook Homes, the city of Columbia Heights renewed rental occupancy licenses for the company. 

 

Discouraging iBuyers and REITs

If we agree as a policy matter that single-family homes should be owner-occupied, what are some ways to slow the transfer of properties and give traditional owner-occupants a fighting chance? The most obvious place to start is to consider banning iBuyers and investment firms from acquiring single-family homes. The Los Angeles city council voted late last year to explore such a ban. Canada has voted to bar most foreigners from buying homes for two years to temper its hot real estate market, a move that will affect iBuyers and investors.

Another option is to make flipping single-family homes less attractive for iBuyers. A state lawmaker from San Diego recently proposed Assembly Bill 1771, which would impose an additional 25% tax on the gain from a sale occurring within three years of a previous sale. This is a spin on the housing affordability plank of Bernie Sanders’s 2020 presidential campaign, which would have placed a 25% house-flipping tax on sellers of non-owner-occupied property and a 2% empty-homes tax on vacant, owned homes. But if iBuyers arguably provide a valuable service to sellers, then it may not make sense to attack iBuyers across the board. Instead, it may make more sense to limit or heavily tax sales from iBuyers to investment firms, or conversely to reward iBuyers with a tax break for reselling homes to owner-occupants rather than to investment firms.
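
As a rough, back-of-the-envelope illustration of how a flip tax of this kind would bite, the sketch below applies the 25%-within-three-years rule as described above to a hypothetical resale; the dollar amounts and the simplified rule are assumptions for illustration, not the actual text of AB 1771.

# Simplified flip-tax illustration (Python): an additional 25% tax on the gain
# when a home is resold within three years of the prior sale, per the rule
# described above. Figures are hypothetical.

def flip_tax(purchase_price: float, resale_price: float, years_held: float,
             rate: float = 0.25, window_years: float = 3.0) -> float:
    """Additional tax owed on a quick resale under the simplified rule."""
    gain = max(resale_price - purchase_price, 0.0)
    return rate * gain if years_held < window_years else 0.0

# An investor buys at $300,000 and resells at $380,000 eighteen months later:
# the gain is $80,000, so the additional tax would be $20,000.
print(flip_tax(300_000, 380_000, years_held=1.5))  # 20000.0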

It is also possible to make investment in single-family homes less attractive to REITs. In addition to banning sales to foreign investors, the Liberal Party of Canada pitched an “excessive rent surplus” tax on post-renovation rent surges imposed by landlords. In addition to taxes, heavier regulation might be in order. Management companies for REITs can be regulated more heavily by local governments if the government can show a compelling interest reasonably related to accomplishing its housing goals. Whether REIT management companies are worse landlords than mom-and-pop operations is debatable, but the scale at which REITs operate should on its own make local governments think twice about whether it is a good idea to allow so much property to transfer to investors. 

Governments, neighborhood associations, and advocacy groups can also engage in homeowner education regarding the downsides of selling to an iBuyer or investor. Many sellers are hamstrung by needing to sell quickly or to the highest bidder, but others may have more options. Sellers know who they are selling their homes to, but they have no control over to whom that buyer ultimately resells. If they know that an iBuyer is likely to resell to an investor, or that an investor is going to turn their home into a rental property, they may elect not to sell their home to the iBuyer or investor. Education could go a long way for these homeowners. 

Lastly, governments themselves could do more. If they have the resources, they could create a variation on Edina’s Housing Preservation program, in which homeowners sell their houses to the City to preserve them as affordable starter homes. In a tech-oriented spin on that program, the local government could purchase the house to make sure it ends up in the hands of another owner-occupant rather than an investor. Governments could also decline to sell single-family homes seized through tax forfeiture to iBuyers or investors. And governments can encourage more home-building by loosening zoning restrictions; more homes means a less competitive housing market, which REIT defenders say would make single-family homes a less attractive investment vehicle. Given the competitive advantage of such entities, it seems unlikely that first-time homebuyers could be on equal footing with investors absent such disincentives.