Healthcare

FDA Approval of a SARS-CoV-2 Vaccine and Surrogate Endpoints

Daniel Walsh, Ph.D, MJLST Staffer

The emergence of the SARS-CoV-2 virus has thrown the world into chaos, taking the lives of more than a million people worldwide to date. Infection with SARS-CoV-2 causes the disease COVID-19, which can have severe health consequences even for those who do not succumb. An unprecedented number of vaccines are under development to address this challenge. The goal for any vaccine is sterilizing immunity, meaning that viral infection is prevented outright. However, a vaccine that provides only partially protective immunity would still be a useful tool in fighting the virus. Either outcome would reduce the ability of the virus to spread and, hopefully, the incidence of severe disease in those who catch it. An effective vaccine is our best shot at ending the pandemic quickly.

For any vaccine to become widely available in the United States, it must first gain approval from the Food and Drug Administration (FDA). Under normal circumstances, a sponsor (drug manufacturer) seeking regulatory approval would submit an Investigational New Drug (IND) application, perform clinical trials to gather data on safety and efficacy, and finally file a Biologics License Application (BLA) if the trials were successful. The FDA reviews the clinical trial data, determines whether the benefits of the therapy outweigh the risks, and, if appropriate, approves the BLA. Of course, the degree of morbidity and mortality being caused by COVID-19 places regulators in a challenging position. If certain prerequisites are met, the FDA has the authority to approve a vaccine using an Emergency Use Authorization (EUA). With respect to safety and efficacy, the statutory requirements for issuing an EUA are lower than those for normal approval. It should also be noted that an initial approval via EUA does not preclude eventual normal approval; full approval of the antiviral drug remdesivir is an example of this occurrence.

In any specific instance, the FDA must conclude that the agent justifying use of the EUA process (in this case SARS-CoV-2):

can cause a serious or life-threatening disease or condition . . . based on the totality of scientific evidence available . . . including data from adequate and well-controlled clinical trials, if available, it is reasonable to believe that . . . the product may be effective in diagnosing, treating, or preventing [SARS-CoV-2] . . . the known and potential benefits of the product, when used to diagnose, prevent, or treat [SARS-CoV-2], outweigh the known and potential risks of the product . . . .

21 USC § 360bbb-3(c). On its face, this statute does not require the FDA to adhere to the full phased clinical trial protocol in granting an EUA. Of course, the FDA is free to ask for more than the bare minimum, and it has wisely done so by issuing a set of guidance documents in June and October. In these documents the FDA indicated that, at a minimum, a sponsor would need to supply an “interim analysis of a clinical endpoint from a phase 3 efficacy study;” that the vaccine should demonstrate an efficacy of at least 50% in a placebo-controlled trial; that phase 1 and 2 safety data should be provided; and that the phase 3 data “should include a median follow-up duration of at least two months after completion of the full vaccination regimen” (among other requirements); the two-month follow-up requirement appears in the October guidance.
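To make the 50% figure concrete, vaccine efficacy in a placebo-controlled trial is typically estimated by comparing the attack rate (cases per participant) in the vaccinated arm to the attack rate in the placebo arm. The short sketch below walks through that arithmetic; the case counts and enrollment numbers are invented purely for illustration and do not come from any actual trial or from the FDA guidance.

```python
# Back-of-the-envelope vaccine efficacy calculation for a placebo-controlled trial.
# All numbers are hypothetical; they illustrate the arithmetic, not any real study.

def vaccine_efficacy(cases_vaccine, n_vaccine, cases_placebo, n_placebo):
    """Efficacy = 1 - (attack rate among vaccinated) / (attack rate among placebo)."""
    attack_rate_vaccine = cases_vaccine / n_vaccine
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vaccine / attack_rate_placebo

# Hypothetical interim data: 30 cases among 15,000 vaccinated participants,
# 90 cases among 15,000 placebo recipients.
ve = vaccine_efficacy(30, 15_000, 90, 15_000)
print(f"Point-estimate efficacy: {ve:.0%}")   # about 67% under these made-up numbers
print("Clears the 50% bar:", ve >= 0.50)
```

Note that this is only a point estimate; the guidance also speaks to the statistical confidence surrounding that estimate, which a simple calculation like this does not capture.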

It is clear from these requirements that the FDA still expects sponsors to undertake phase 1, 2, and 3 trials before it will consider issuing an EUA, but that it will not wait for the trials to reach their long-term safety and efficacy endpoints, in an effort to give the public access to a vaccine in a reasonable time frame. The Moderna vaccine trial protocol, for example, has a study period of over two years. The FDA also has a statutory obligation to “efficiently review[] clinical research and take[] appropriate action . . . in a timely manner.” 21 USC § 393(b)(1).

One method of speeding up the FDA’s assessment of efficacy is a surrogate endpoint. Surrogate endpoints allow the FDA to look at an earlier, predictive metric of efficacy in a clinical trial when it would be impractical or unethical to follow the trial to its actual clinical endpoint. For example, blood pressure is often used as a surrogate endpoint when evaluating drugs intended to prevent stroke. The FDA draws a distinction between candidate, reasonably likely, and validated surrogate endpoints; the latter two can be used to expedite approval. However, in its June guidance, the FDA noted “[t]here are currently no accepted surrogate endpoints that are reasonably likely to predict clinical benefit of a COVID-19 vaccine . . . . [and sponsors should therefore] pursue traditional approval via direct evidence of vaccine safety and efficacy . . . .” This makes it unlikely that surrogate endpoints will play any role in the initial EUAs or BLAs for any SARS-CoV-2 vaccine.

However, as the science around the virus develops, the FDA might adopt a surrogate endpoint, as it has for many other infectious diseases. Looking through this list of surrogate endpoints, a trend is clear: for vaccines, the FDA has always used antibodies as the surrogate endpoint. However, the durability of the antibody response to SARS-CoV-2 has been an object of much concern. While this concern is likely somewhat overstated (it is normal for antibody levels to fall after an infection is cleared), there is evidence that T-cells are long-lasting after infection with SARS-CoV-1 and likely play an important role in immunity to SARS-CoV-2. It is important to note that T-cells (which coordinate the immune response, and some of which can kill virally infected cells) and B-cells (which produce antibody proteins) are both fundamental and interdependent pieces of the immune system. With this in mind, when developing surrogate endpoints for SARS-CoV-2, the FDA should consider whether it is open to a more diverse set of surrogate endpoints in the future. If so, it should communicate this to sponsors so they can begin to build the infrastructure needed to collect that data and ensure vaccines can be approved quickly.

 


It’s a Small World, and Getting Smaller: The Need for Global Health Security

Madeline Vavricek, MJLST Staffer

The word “unprecedented” has been used repeatedly by every news organization and government official throughout the last several months. Though the times that we live in may be unprecedented, they are far from being statistically impossible—or even statistically unlikely. Based on the most recent implementation of the International Health Regulations released by the World Health Organization (WHO) in 2005, more than 70% of the world is deemed unprepared to prevent, detect, and respond to a public health emergency. The reality of this statistic was evidenced by the widespread crisis of COVID-19. As of September 29, 2020, the global COVID-19 death toll passed one million lives, with many regions still reporting surging numbers of new infections. Experts caution that the actual figure could be up to 10 times higher.

The impact of COVID-19 has made pandemic preparedness paramount in a way modern times have yet to experience. While individual countries look inward towards their own national response to the coronavirus, it is apparent now more than ever that global issues demand global solutions. The ongoing COVID-19 pandemic indicates a need for increased resiliency in public health systems to manage infectious diseases, a factor known as global health security.

The Centers for Disease Control and Prevention (CDC) defines global health security as “the existence of strong and resilient public health systems that can prevent, detect, and respond to infectious disease threats, wherever they occur in the world.” Through global health security initiatives, organizations such as the Global Health Security Agenda focus on assisting individual countries in planning and resource utilization to address gaps in health security, benefiting not only the health and welfare of those countries but that of the world’s population as a whole. The coronavirus has been reported in 214 countries, illustrating that one country’s health security can impact the health security of dozens of others. As globalization accelerates, infectious diseases can spread more easily than ever before, making global health security all the more essential.

Global health security affects more than just health and pandemic preparedness worldwide. Johnson & Johnson Chief Executive Officer Alex Gorsky recently stated that “[g]oing forward, we’re going to understand much better that if we don’t have global public health security, we don’t have national security, we don’t have economic security and we will not have security of society.” As demonstrated by COVID-19, failure to adequately prevent, detect, and respond to infectious diseases has economic, financial, and societal impacts. Due to the coronavirus, the Dow Jones Industrial Average and the Financial Times Stock Exchange (FTSE) 100 saw their biggest quarterly drops since 1987 in the first three months of the year; industries such as travel, oil, and retail have all taken a substantial hit in the wake of the pandemic. Unemployment rates have increased dramatically as employers have been forced to lay off employees across most industries, amounting to an estimated loss of 30 million positions in the United States alone. Furthermore, coronavirus unemployment has been shown to disproportionately affect women workers and people of color. The social and societal effects of COVID-19 continue to emerge, including, but not limited to, the interruption of education for an estimated 87% of students worldwide and an increase in domestic violence rates during shelter-in-place procedures. The ripple effect caused by the spread of infectious disease permeates nearly every aspect of a nation’s operation and its people’s lives, well beyond health and physical well-being.

With a myriad of lessons to glean from the global experience of COVID-19, one lesson countries and their leaders must focus on is the future of global health security. The shared responsibility of global health security requires global participation to strengthen health both at home and abroad so that future infectious diseases do not have the devastating health, economic, and social consequences that the coronavirus continues to cause.

 


A Cold-Blooded Cure: How COVID-19 Could Decimate Already Fragile Shark Populations

Emily Kennedy, MJLST Staffer

Movies like Jaws, Deep Blue Sea, and The Meg demonstrate that fear of sharks is commonplace. In reality, shark attacks are rare, and such incidents have even decreased during the COVID-19 pandemic with fewer people enjoying the surf and sand. Despite their bad, Hollywood-driven reputation, sharks play a vital role in the ocean ecosystem. Sharks are apex predators and regulate the ocean ecosystem by balancing the numbers and species of fish lower in the food chain. There are over 500 species of sharks in the world’s oceans, and 143 of those species are threatened, meaning that they are listed as critically endangered, endangered, or vulnerable. Sharks are particularly vulnerable because they grow slowly, mature later than other species, and have relatively few offspring. Shark populations are already threatened by ocean fishing practices, climate change, ocean pollution, and the harvesting of sharks for their fins. Sharks now face a new human-imposed threat: COVID-19.

While sharks cannot contract the COVID-19 virus, the oil in their livers, known as squalene, is used in the manufacture of vaccines, including some COVID-19 vaccines currently being developed. Shark squalene is harvested via a process known as “livering,” in which sharks are caught for their livers and thrown back into the ocean to die after the liver is removed. The shark squalene is used in adjuvants, ingredients in vaccines that prompt a stronger immune response, and has been used in U.S. flu vaccines since 2016. Approximately 3 million sharks are killed every year to supply squalene for vaccines and cosmetic products, and this number will only increase if a COVID-19 vaccine that uses shark squalene gains widespread use. One non-profit estimates that the demand for COVID-19 vaccines could result in the harvest of over half a million sharks.

Sharks, like many other marine species, are uniquely unprotected by the law. It is easier to protect stationary land animals using the laws of the countries in which their habitats are located. Ocean habitats, however, are largely ungoverned by the laws of any one country. Further, migratory marine species such as sharks may travel through the waters of multiple countries. This makes it difficult to enact and enforce laws that adequately protect sharks. In the United States, the Lacey Act, the Endangered Species Act, and the Magnuson-Stevens Fishery Conservation and Management Act govern shark importation and harvesting practices. One area of shark conservation that has gotten attention in recent years is the removal of shark fins for foods considered delicacies in some countries. The Shark Conservation Act was passed in the United States in response to the crisis caused by shark finning, adding to the laws that several states already had in place banning the practice. The harvest of shark squalene has not garnered as much attention so far, and no United States law specifically addresses livering.

Internationally, the Convention on the Conservation of Migratory Species of Wild Animals (CMS) and the International Plan of Action for the Conservation and Management of Sharks (IPOA) are voluntary, nonbinding programs. Many of the primary shark-harvesting nations have not signed onto CMS. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) is binding, but it has loopholes and only 13 shark species are listed. In addition to these international programs, some countries have voluntarily created shark sanctuaries.

Nations that refuse to join voluntary conservation efforts, that circumvent existing international regulations, and that lack sanctuaries leave fragile shark species unprotected and under threat. The squalene harvesting industry in particular lacks transparency and adequate regulation, and reports indicate that protected and endangered shark species end up as collateral damage in the harvesting process. A wide array of regional and international interventions may be necessary to provide sharks with the conservation protections they so desperately need.

Research and development of medical cures and treatments for humans often comes with animal casualties, but research and development of a COVID-19 vaccine can be conducted in a way that minimizes those casualties. There is already some financial support for non-animal research approaches, and squalene can also be derived and synthesized from non-animal sources. Shark Allies, the conservation group that created a Change.org petition that now has over 70,000 signatures, suggests that non-shark sources of squalene be used in the vaccine instead, such as yeast, bacteria, sugarcane, and olive oil. These non-animal adjuvant sources are more expensive and take longer to produce, but the future of our oceans may depend on such alternative methods that do not rely on “the overexploitation of a key component of the marine environment.”


Pacemakers, ICDs, and ICMs – Oh My! Implantable Heart Monitoring Devices

Janae Aune, MJLST Staffer

Heart attacks and heart disease kill hundreds of thousands of people in the United States every year. Heart disease affects every person differently based on their genetic and ethnic background, lifestyle, and family history. While some people are aware of their risk of heart problems, over 45 percent of sudden cardiac deaths occur outside of the hospital. With a condition as spontaneous as a heart attack, accurate information tracking and reporting is vital to effective treatment and prevention. As in any market, the market for heart monitoring devices is diverse, with new equipment arriving every year. The newest device in a long line of technology is the LINQ monitoring device. LINQ builds on and works with devices already established in the medical community.

Pacemakers were first used effectively in 1969 when lithium batteries were invented. These devices are surgically implanted under the skin of a patient’s chest and help control the heartbeat. They can be implanted for temporary or permanent use and are usually targeted at patients who experience bradycardia, an abnormally slow heart rate. Pacemakers require consistent check-ins with a doctor, usually every three to six months, and must be replaced every 5 to 15 years depending on battery life. These devices revolutionized heart monitoring but carry significant risks from the implantation surgery and potential device malfunction.

Implantable cardioverter defibrillators (ICDs) are also surgically implanted devices, but they differ from pacemakers in that they deliver a single shock when needed rather than continuous electrical pacing. ICDs are similar to the paddles doctors use to stimulate a heart in the hospital – think of someone yelling “charge” before the paddles are applied. These devices are used mostly in patients with tachycardia, a heartbeat that is too fast. Implanting an ICD requires feeding wires through blood vessels to the heart. A newly developed subcutaneous ICD (S-ICD) gives patients with structural defects in their heart’s blood vessels another option. Like a pacemaker, an ICD monitors activity constantly, but the data is read only at follow-up appointments with the doctor. ICDs last an average of seven years before the battery needs to be replaced.

The Reveal LINQ system is a newly developed heart monitoring device that records a patient’s heart activity and transmits it to the patient’s doctor continuously. The system requires surgical implantation of a small device known as an insertable cardiac monitor (ICM). The ICM works with another component, the patient monitor, a bedside unit that instantly transmits the information collected by the ICM to a doctor. A patient assistant control is also available, which allows the patient to manually mark particular heart events and transmit them in more detail. The LINQ system lets a doctor track a patient’s heart activity remotely rather than requiring the patient to come in to have the history examined. Continuous tracking and transmission allow a patient’s doctor to examine heart activity more accurately and therefore craft a more effective treatment approach.

With the development of wearable technology meant to track health information and report it to the wearer, devices such as the LINQ system provide new opportunities for technologies to work together to promote better health practices. The Apple Watch Series 4 included electrocardiogram monitoring that records heart activity and checks the reading for atrial fibrillation (AFib). This is the same heart activity that pacemakers, ICDs, and the LINQ system are meant to monitor. The future of heart attack and heart disease detection and treatment could be massively impacted by the ability to monitor heart behavior in multiple different ways. Between devices that can shock your heart, continuously monitor and transmit information about it, and alert you from your watch when your heart rate may be abnormal, a future of decreased heart problems seems like it could become a reality.

All of these newly developed methods of continuous tracking raise the question of how that information is protected. Health and heart behavior, which is internal and out of your control, is as personal as information gets. Electronic monitoring and transmission of this data opens it up to cybersecurity targeting. Cybersecurity and data privacy issues with these devices have started to be addressed more fully; however, the concerns differ depending on which implantable device a patient has. Vulnerabilities have been identified in ICD devices that would allow an unauthorized individual to access and potentially manipulate the device. Scholars have argued that efforts to decrease vulnerabilities should focus on protecting the confidentiality, integrity, and availability of information transmitted by implantable devices. The FDA has indicated that the use of a home monitoring system could decrease these potential vulnerabilities. As the benefits from heart monitors and heart data continue to grow, we need to be sure that our privacy protections grow with them.


Wearable, Shareable, Terrible? Wearable Technology and Data Protection

Alex Wolf, MJLST Staffer

You might consider the Sony Walkman, which celebrates its 40th anniversary this year, to be the first wearable technology of the modern day. After the invention of Bluetooth 1.0 in 2002, commercial competitors began to realize the vast promise of this emergent technology. Fifteen years later, over 265 million wearable tech devices are sold annually. It looks to be a safe bet that this trend will continue.

A popular subset of wearable technology is the fitness tracker. The user attaches the device to themselves, usually on their wrist, and it records their movements. Lower-end trackers record basics like steps taken, distance walked or run, and calories burned, while the more sophisticated ones can track heart rate and sleep statistics (sometimes also featuring fun extras like Alexa support and entertainment app playback). And although this data could not replace the care and advice of a healthcare professional, there have been positive health results. Some people have learned of serious health problems only once they started wearing a fitness tracker. Other studies have found a correlation between wearing a FitBit and increased physical activity.

Wearable tech is not all good news, however; legal commentators and policymakers are worried about privacy compromises that result from personal data leaving the owner’s control. The Health Insurance Portability and Accountability Act (HIPAA) was passed by Congress with the aim of providing legal protections for individuals’ health records and data when they are disclosed to third parties. But, generally speaking, wearable tech companies are not bound by HIPAA’s reach. The companies claim that no one else sees the data recorded on your device (with a few exceptions, like the user’s express written consent). But is this true?

A look at the modern American workplace can provide an answer. Employers are attempting to find new ways to manage health insurance costs as survey data shows that employees are frequently concerned with the healthcare plan that comes with their job. Some have responded by purchasing FitBits and other like devices for their employees’ use. Jawbone, a fitness device company on its way out, formed an “Up for Groups” plan specifically marketed towards employers who were seeking cheaper insurance rates for their employee coverage plans. The plan allows executives to access aggregate health data from wearable devices to help make cost-benefit determinations for which plan is the best choice.

Hearing the commentators’ and state elected representatives’ complaints, members of Congress have responded; Senators Amy Klobuchar and Lisa Murkowski introduced the “Protecting Personal Health Data Act” in June 2019. It would create a National Task Force on Health Data Protection, which would work to advise the Secretary of Health and Human Services (HHS) on creating practical minimum standards for biometric and health data. The bill is a recognition that HIPAA has serious shortcomings for digital health data privacy. As a 2018 HHS Committee Report noted, “A class of health records that can be subject to HIPAA or not subject to HIPAA is personal health records (PHRs) . . . PHRs not subject to HIPAA . . . [have] no other privacy rules.”  Dena Mendolsohn, a lawyer for Consumer Reports, remarked favorably that the bill is needed because the current framework is “out of date and incomplete.”

The Supreme Court has recognized privacy rights in cell-site location data, and a federal court recognized standing to sue for a group of plaintiffs whose personally identifiable information (PII) was hacked and uploaded onto the Dark Web. Many in the legal community are pushing for the High Court to offer clearer guidance to both tech consumers and corporations on the state of protection of health and other personal data, including private rights of action. Once there is a resolution on these procedural hurdles, we may see firmer judicial directives on an issue that compromises the protected interests of more and more people.

 


Mystery Medicine: How AI in Healthcare is (or isn’t) Different from Current Medicine

Jack Brooksbank, MJLST Staffer

Artificial Intelligence (AI) is a funny creature. When we say AI, generally we mean algorithms, such as neural networks, that are “trained” based on some initial dataset. This dataset can be essentially anything, such as a library of tagged photographs or the set of rules to a board game. The computer is given a goal, such as “identify objects in the photos” or “win a game of chess.” It then systematically iterates some process, depending on which algorithm is used, and checks the result against the known results from the initial dataset. In the end, the AI finds some pattern, essentially through brute force, and then uses that pattern to accomplish its task on new, unknown inputs (by playing a new game of chess, for example).
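To make that training loop less abstract, the toy sketch below mimics the pattern-finding step in miniature: it brute-forces a simple decision rule against a handful of labeled examples and then applies the learned rule to a new input. The data and the “tumor size” framing are entirely made up for illustration and bear no relation to any real medical AI.

```python
# A toy illustration of the train-then-apply loop described above. "Training" here
# is a brute-force search for a threshold that best matches the labeled examples;
# the learned rule is then applied to a new, unseen input. All values are invented.

# Labeled training set: (hypothetical tumor size in mm, 1 = malignant, 0 = benign).
training_data = [(2, 0), (4, 0), (5, 0), (9, 1), (11, 1), (14, 1)]

def train(data):
    """Try every candidate threshold and keep the one that best matches the labels."""
    best_threshold, best_correct = None, -1
    for threshold in range(0, 20):
        correct = sum(1 for size, label in data if (size >= threshold) == bool(label))
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

def predict(threshold, size):
    """Apply the learned pattern to a new input."""
    return 1 if size >= threshold else 0

rule = train(training_data)
print(f"Learned rule: flag anything at or above {rule} mm")   # the pattern found in the data
print("New case at 10 mm classified as:", predict(rule, 10))  # applied to an unseen input
```

A real neural network replaces the single threshold with millions of adjustable parameters, which is exactly why its learned “rule” cannot be read off the way this one can.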

AI is capable of amazing feats. IBM’s Deep Blue famously defeated chess world champion Garry Kasparov back in 1997, and the technology has only gotten better since. Tesla, Uber, Alphabet, and other giants of the technology world rely on AI to develop self-driving cars. AI is used to pick stocks, predict risk for investors, spot fraud, and even determine whether to approve a credit card application.

But, because AI doesn’t really know what it is looking at, it can also make some incredible errors. One neural network AI trained to detect sheep in photographs instead noticed that sheep tend to congregate in grassy fields. It then applied the “sheep” tag to any photo of such a field, fluffy quadrupeds or no. And when shown a photo of sheep painted orange, it handily labeled them “flowers.” Another cutting-edge AI platform has, thanks to a quirk of the original dataset it was trained on, a known propensity to spot giraffes where none exist. And the internet is full of humorous examples of AI-generated weirdness, like one neural net that invented color names such as “snowbonk,” “stargoon,” and “testing.”

One area of immense potential for AI applications is healthcare. AIs are being investigated for applications including diagnosing diseases and aiding in drug discovery. Yet the use of AI raises challenging legal questions. The FDA has been given a statutory mandate to ensure that many healthcare items, such as drugs or medical devices, are safe. But the review mechanisms the agency uses to ensure that drugs or devices are safe generally rely on knowing how the thing under review works. And patients who receive sub-standard care have legal recourse if they can show that they were not treated with the appropriate standard of care. But AI is helpful essentially because we don’t know how it works—because AI develops its own patterns beyond what humans can spot. The opaque nature of AI could make effective regulatory oversight very challenging. After all, a patient mis-diagnosed by a substandard AI may have no way of proving that the AI was flawed. How could they, when nobody knows how it actually works?

One possible regulatory scheme that could get around this issue is to have AI remain “supervised” by humans. In this model, AI could be used to sift through data and “flag” potential points of interest. A human reviewer would then see what drew the AI’s interest, and make the final decision independently. But while this would retain a higher degree of accountability in the process, it would not really be using the AI to its full potential. After all, part of the appeal of AI is that it could be used to spot things beyond what humans could see. And there would also be the danger that overworked healthcare workers would end up just rubber stamping the computer’s decision, defeating the purpose of having human review.
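A rough sketch of what that supervised workflow might look like in software appears below. The scoring function, threshold, and case data are all stand-ins invented for illustration; the point is only that the AI’s output routes cases to a clinician rather than making the final call.

```python
# A minimal sketch of a human-in-the-loop ("flagging") workflow, under the assumption
# that some opaque model produces a risk score. Everything here is hypothetical.

FLAG_THRESHOLD = 0.7  # made-up cutoff for drawing a clinician's attention

def ai_risk_score(case):
    """Stand-in for an opaque model; here just a toy heuristic on a made-up field."""
    return min(1.0, case["abnormal_readings"] / 10)

def triage(cases):
    """The AI only flags cases; it renders no final judgment."""
    return [c for c in cases if ai_risk_score(c) >= FLAG_THRESHOLD]

def human_review(case):
    """Placeholder for the independent clinical judgment described above."""
    print(f"Case {case['id']} flagged (score {ai_risk_score(case):.2f}) -- awaiting clinician decision")

cases = [{"id": 1, "abnormal_readings": 9}, {"id": 2, "abnormal_readings": 3}]
for flagged_case in triage(cases):
    human_review(flagged_case)
```

The rubber-stamping worry is visible even in this sketch: nothing in the code forces the reviewer to do more than accept whatever the triage step surfaces.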

Another way forward could be foreshadowed by a program the FDA is currently testing for software update approval. Under the pre-cert program, companies could get approval for the procedures they use to make updates. Then, as long as future updates are made using that process, the updates themselves would be subject to a greatly reduced approval burden. For AI, this could mean agencies promulgating standardized methods for creating an AI system—lists of approved algorithm types, systems for choosing the datasets an AI is trained on—and then private actors having to show only that their system has been set up well.

And of course, another option would be to simply accept some added uncertainty. After all, uncertainty abounds in the current healthcare system, despite our best efforts. For example, lithium is prescribed to treat bipolar disorder despite uncertainty in the medical community about how it works. Indeed, the mechanism for many drugs remains mysterious. We know that these drugs work, even if we don’t know how; perhaps using the same standard for AI in medicine wouldn’t really be so different after all.


Changing Families: Time for a Change in Family Law?

Hannah Mosby, MJLST Staffer

 

Reproductive technology allows individuals to start families where it may not otherwise have been possible. These technologies range from relatively advanced procedures—those using assisted reproductive technology (or “ART,” for short)—to less invasive fertility treatments. ART encompasses procedures like in vitro fertilization—in fact, the CDC defines ART as including “all fertility treatments in which both eggs and embryos are handled” (https://www.cdc.gov/art/whatis.html)—while other kinds of reproductive assistance range from artificial insemination to self-administered fertility drugs. In a study published by the CDC, the number of ART procedures completed in 2014 in the U.S. alone was almost 170,000. As scientific knowledge grows and new procedures develop, that number will undoubtedly increase.

Individuals choosing to utilize these reproductive technologies, however, can find themselves in legal limbo when it comes to determining parentage. In instances where an individual uses a donor gamete (a sperm or an egg) to conceive, that donor could be a legal parent of the offspring produced—even if that result wasn’t intended by any of the parties involved. For example, the 2002 version of the Uniform Parentage Act—variations of which have been adopted by many states—provides for the severance of the parental rights of a sperm donor in the event of consent by the “woman,” as well as consent or post-birth action by the “man” assuming paternal rights. If the statutory conditions aren’t met, the donor could retain his parental rights over any offspring produced by the procedure. To further complicate things, the use of gendered terms makes it unclear how these statutes apply to same-sex couples. A new version of the Act was proposed in 2017 to comply with the Supreme Court’s recognition of marriage equality in Obergefell v. Hodges, but it has yet to be adopted by any state. Even murkier than the laws governing donor gametes are those governing surrogacy contracts, which some states still refuse to legally recognize. Overall, these laws create an environment where even the most intentional pregnancies can have unintended consequences when it comes to establishing legal parentage.

For further illustration, let’s revisit artificial insemination. Jane and John, a Minnesotan couple, decide to undergo an artificial insemination procedure so that Jane can become pregnant. However, they aren’t married. Pursuant to Minn. Stat. 257.56, the couple’s marriage is a necessary condition for the automatic severance of the sperm donor’s parental status—therefore, since Jane and John aren’t married, the sperm donor retains his parental rights. The statute also requires that the procedure be performed “under the supervision of a licensed physician” in order for severance to occur. If there was no doctor present, then the sperm donor—and not John—would have legal parental status over the offspring produced. The example becomes more complicated if the couple is same-sex rather than heterosexual, because the statute requires the consent of the “husband” to the procedure. Further still, if Jane lived in a different state, the sperm donor might be able to establish parental rights after the fact—even if they were initially severed—by maintaining a relationship with the child. As one can imagine, this makes the use of known donors (rather than anonymous donors) particularly complicated.

Ultimately, ART and related procedures provide opportunities for individuals to create the families they want, but could not otherwise have—an enormously impactful medical development. However, utilization of these procedures can produce legal consequences that are unforeseen—and, often, unwanted—by the parents of children born using them. The state law that exists to govern these procedures is varied and lagging, and in the age of marriage equality and donor gametes, such laws are highly inadequate. In order for society to reap the biggest benefit from these life-creating technologies, the legal world will have to play a serious game of catch-up.

 


Prevalence of Robot-Assisted Surgery Illustrates the Negatives of Fee-For-Service Systems

Jacob Barnyard, MJLST Staffer

 

In 2000, the Food and Drug Administration approved the use of the da Vinci Surgical System, a robot designed to help surgeons perform minimally invasive surgeries. The system consists of multiple arms carrying a camera and surgical instruments, controlled by a nearby surgeon through a specialized console.

While few would dispute the cool factor of this technology, its actual benefits are significantly less clear. Researchers have conducted multiple studies to determine how the system affects patient outcomes, with results varying based on the type of procedure. One finding has been fairly consistent, however: unsurprisingly, costs associated with the use of robots are significantly higher.

The use of the da Vinci Surgical System has increased enormously since its initial release, even in surgeries with little or no evidence of any benefit. A rational consumer, however, would try to maximize expected utility by undergoing robotically assisted surgery only if the expected benefits for that particular surgery outweighed the expected increase in cost. Part of the technology’s growing popularity may be explained by the prevalence of fee-for-service models in the U.S. healthcare system.

In a fee-for-service model, each service provider involved in a patient’s care charges separately and charges for each service provided. As a result, these providers have an incentive to perform as many different services as possible, frequently providing unnecessary care. The consumer has little reason to care about these increased costs because they are often paid by insurance companies. Consequently, when a surgeon suggests the use of the da Vinci Surgical System, the patient has no incentive to research whether the system actually provides any benefits for the surgery they are undergoing.

A proposed alternative to the fee-for-service model is a system using bundled payments. Under this system, a provider charges one lump sum for its services and divides it between each party involved in providing the care. This eliminates the incentive to provide unnecessary care, since doing so would only increase the provider’s costs without increasing revenue. Robots would theoretically be used only in surgeries where they actually provide a net benefit. A potential drawback is a decrease in potentially helpful services in an effort to cut costs; the available evidence suggests, however, that this is not an issue in practice and that some performance indicators may actually improve.
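The incentive difference between the two models is ultimately just arithmetic, as the small sketch below illustrates. The dollar figures and the “robot surcharge” are invented for illustration only; they are not estimates of real surgical costs or reimbursement rates.

```python
# Hypothetical comparison of provider margin under fee-for-service versus a bundled
# payment when an optional robot-assisted step is added. All amounts are made up.

BASE_COST = 8_000          # provider's cost for a conventional procedure
ROBOT_EXTRA_COST = 3_000   # added cost of using the surgical robot
FFS_ROBOT_FEE = 4_500      # extra amount billable for the robot under fee-for-service
BUNDLE_PRICE = 12_000      # single lump-sum payment under a bundled model

def fee_for_service_margin(use_robot):
    revenue = 10_000 + (FFS_ROBOT_FEE if use_robot else 0)   # each service billed separately
    cost = BASE_COST + (ROBOT_EXTRA_COST if use_robot else 0)
    return revenue - cost

def bundled_margin(use_robot):
    cost = BASE_COST + (ROBOT_EXTRA_COST if use_robot else 0)
    return BUNDLE_PRICE - cost                               # revenue is fixed regardless of services

for use_robot in (False, True):
    print(f"robot={use_robot}: FFS margin={fee_for_service_margin(use_robot):>6}, "
          f"bundled margin={bundled_margin(use_robot):>6}")
```

Under these made-up numbers, adding the robot raises the provider’s margin under fee-for-service but lowers it under the bundle, which is the incentive shift described above.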

The Affordable Care Act included incentives to adopt the bundled payment system, but fee-for-service is still vastly more common in the United States. While bundled payments have been shown to lead to a modest decrease in healthcare costs, many physicians are unsurprisingly opposed to the idea. Consequently, change to a bundled payment system on a meaningful scale is unlikely to occur under the incentive structure created by current laws.


Perpetuating Inequality and Illness Through Environmental Injustice

Nick Redmond, MJLST Staffer

In Sidney D. Watson’s Lessons from Ferguson and Beyond, published in issue 1 of MJLST’s 18th volume, the author focuses on issues of inherent racial bias in access to health care for African Americans, and how the Affordable Care Act may be able to help. The author “explores the structural, institutional, and interpersonal biases that operate in the health care system and that exacerbate Black/white health disparities.” The article’s focus on health care in particular is a critical component of inequality in the U.S., but it also only briefly touches on another important piece of the disparity puzzle: environmental justice. Conversations about environmental justice have taken place in multiple contexts, and in many ways serve to emphasize the multiple facets of racial disparity in the U.S., including police violence, access to health care, access to education, and other issues which are all influenced by the accessibility and the dangers of our built environment.

Such systemic inequalities can include access to public transportation and competitive employment, but they can also be problems of proximity to coal plants or petroleum refineries or even a lack of proximity to public natural spaces for healthy recreation. Lack of access to safe, clean, and enjoyable public parks, for instance, can serve to exacerbate the prevalence of diabetes and obesity, and even take a toll on the mental health of residents trapped in concrete jungles (which the article refers to as “social determinants” of poor health). Though there is some indication that environmental factors can harm neighborhoods regardless of income, industrial zones and polluted environments tend to lie just around the corner from low-income neighborhoods and disproportionately affect those who live there, primarily communities of color.

Often the result of urban development plans, housing prices, and even exclusionary zoning, issues of environmental justice are an insidious form of inequality, often relegated to the periphery of our national political conversations, if addressed at all. Indeed, the U.S. Environmental Protection Agency’s Office of Civil Rights (established in 1993) has not once made a formal finding of discrimination, despite President Bill Clinton’s executive order making it the duty of federal agencies to consider environmental justice in their actions. When the primary federal agency tasked with ensuring access to environmental justice appears to be asleep at the wheel, what recourse do communities have? The answer, it seems, is depressingly little.

A high-profile example in our current discourse, environmental justice appears to have failed Flint, Michigan, and it seems likely that the issue won’t be resolved any time soon. Other examples, like Columbus, Mississippi, and Anniston, Alabama, are emerging at a disturbing rate. Impoverished people with little political or legal recourse struggle against the might of the booming natural gas industry and new advances in hydraulic fracturing, and as water runs out these communities will be the first to feel the squeeze of rising food prices and shrinking access to the most essential resource on the planet.

At the risk of sounding apocalyptic, there is some hope. National groups like the NRDC and the ACLU have long litigated these issues with success, and more local or regional groups like the Minnesota Center for Environmental Advocacy or the Southern Environmental Law Center have made enormous impacts for communities of color and the public at large. But as Sidney Watson states at the end of her article: “[w]e need to talk about race, health, and health care. We need to take action to reduce and eliminate racial inequities in health care.” These same sentiments apply to our built environment and the communities we have pushed to the periphery to bear the brunt of the harmful effects of our dirty technologies and waste. Few people would choose to live near a coal plant; those who are forced to do so are often trapped in an endless cycle of illness, poverty, and segregation.


No Divorce Just Yet, But Clearly This Couple Has Issues: Medicaid and the Future of Federal-State Health Policy

Jordan Rude, MJLST Staffer

With the recent demise of the American Health Care Act (AHCA), the Affordable Care Act (ACA) will remain in effect, at least for now. One of the crucial issues that divided the Republican caucus was Medicaid—specifically, whether the ACA’s expansion of Medicaid should remain in place or be rolled back (or eliminated entirely). Moderate or centrist Republicans, and particularly some Republican governors, wanted to retain the expansion, while the House Freedom Caucus and other conservatives wanted to eliminate it, either immediately or in the near future.

Sara Rosenbaum, in her article Can This Marriage Be Saved? Federalism and the Future of U.S. Health Policy Under the Affordable Care Act, examined the changing relationship between federal and state health policy under the ACA. Two areas in which this relationship was most affected were the ACA’s health insurance marketplaces and its expansion of Medicaid: in both, the ACA significantly increased the federal government’s role at the expense of state control. The Supreme Court’s ruling in National Federation of Independent Business v. Sebelius held that the federal government could not require states to expand their Medicaid coverage, pushing back against increased federal power in this area. As of today, approximately 20 states have taken advantage of this ruling and chosen not to expand their programs. Rosenbaum argued that the tension between the ACA’s promise of universal coverage and some states’ refusal to expand Medicaid would defeat the purpose of the ACA, and she proposed a federal “Medicaid fallback” to replace lost coverage in those states.

The AHCA proposed a different, and simpler, solution to this problem—phase out the Medicaid expansion over time until it is completely gone. As noted above, this did not have much of a positive reception. Now that the AHCA’s proposal has been shelved, if only momentarily, some states that had not previously expanded Medicaid (such as Kansas) are moving forward with plans to expand it now. Such plans still face stiff opposition from conservatives, but the failure of the AHCA, along with the ACA’s growing popularity, may shift the argument in favor of expansion.

The end result of this recent healthcare debate, however, was retention of the status quo: The ACA is still in effect, and a significant number of states have still not expanded Medicaid coverage. The underlying issue that Rosenbaum discussed in her article has still not been addressed. The clash between federal and state policy continues: The marriage is not over, but it is not clear whether it can be saved.