Internet

Digital Tracking: Same Concept, Different Era

Meibo Chen, MJLST Staffer

The term “paper trail” grows more anachronistic by the day. While some people still prefer old-fashioned pen and paper, the modern world has endowed us with technologies like computers and smartphones. Whether we like it or not, this digital explosion is steadily taking over the life of the average American (73% of US adults own a desktop or laptop computer, and 68% own a smartphone).

These new technologies have forced us to confront many novel legal issues arising from their integration into our daily lives. In Riley v. California (2014), the Supreme Court pointed to the immense data storage capacity of the modern cell phone and now requires a warrant to search one in the context of a criminal prosecution. In the civil context, many consumers are concerned with internet tracking. Indeed, MJLST published an article addressing this issue in 2012.

We have grown accustomed to seeing “suggestions” that eerily match our interests. In fact, internet tracking technology has become far more sophisticated than the traditional cookie: trackers can now use “fingerprinting” techniques that look at signals such as battery status or window size to identify a user’s presence or interests. This leads many to fear for their data privacy in these digital settings. But isn’t this digital tracking just the modern adaptation of the “physical” tracking we have long grown accustomed to?
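To make the mechanics concrete, below is a minimal sketch, in TypeScript, of the kind of signals a fingerprinting script might combine in the browser. The signal list, the helper names, and the simple hash are illustrative assumptions, not any particular tracker’s actual code.

```typescript
// Illustrative browser-fingerprinting sketch: combine a handful of
// observable signals into a single identifier. Hypothetical example only;
// real trackers draw on many more signals (fonts, canvas, WebGL, etc.).

async function collectSignals(): Promise<string[]> {
  const signals: string[] = [
    navigator.userAgent,                          // browser and OS details
    navigator.language,                           // preferred language
    `${screen.width}x${screen.height}`,           // screen resolution
    `${window.innerWidth}x${window.innerHeight}`, // window size
    String(new Date().getTimezoneOffset()),       // time zone offset
  ];

  // Battery status, one of the signals mentioned above, is exposed by some
  // browsers through the (now largely deprecated) Battery Status API.
  const nav = navigator as any;
  if (typeof nav.getBattery === "function") {
    const battery = await nav.getBattery();
    signals.push(`battery:${battery.level}:${battery.charging}`);
  }
  return signals;
}

// Reduce the signals to a short fingerprint with a simple, non-cryptographic
// rolling hash (illustrative only).
function fingerprint(signals: string[]): string {
  let hash = 0;
  for (const ch of signals.join("|")) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0; // 32-bit rolling hash
  }
  return (hash >>> 0).toString(16);
}

collectSignals().then((s) => console.log("fingerprint:", fingerprint(s)));
```

No single signal identifies anyone, but in combination they can narrow a visitor down considerably, which is why this technique can work even when cookies are blocked or cleared.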

When we physically go to a grocery store, don’t we subject ourselves to the prying eyes of those around us?  Why should it be any different in cyberspace?  While they can seem unsettlingly accurate at times, “suggestions” or “recommended pages” based on one’s browsing history can actually benefit both the tracked and the tracker.  The tracked gets more personalized results, while the tracker uses that information to do better business with the consumer.  Many browsers already offer an “incognito” mode intended to limit tracking, restoring some balance for the moments when consumers want their privacy.  Of course, this tracking technology can be misused, but malicious use of beneficial technology is nothing new.


Faux News vs. Freedom of Speech?

Tyler Hartney, MJLST Staffer

This election season has produced a lot of jokes on social media. Some of the jokes are funny; others lack an obvious punch line. Multiple outlets are now reporting that fake news may have influenced voters in the 2016 presidential election. Both Facebook and Google have made conscious efforts to reduce the appearance of fake news stories on their sites in an attempt to cut off the click bait, and thus the revenue streams, of these faux news outlets. With the expansion of technology and social media, these stories now circulate widely enough to spread misinformation on a massive scale. Is this like screaming “fire” in a crowded theatre? And how biased would filtering this speech become? Facebook was pilloried by the media when it was found to have suppressed conservative news outlets, but as a private business it had every right to do so. Experts now say that the Russian government made efforts to help spread this fake news to help Donald Trump win the presidency.

First, the only entity that the Constitution prohibits from limiting speech is the state. If Facebook or Google chooses to filter the news broadcast on its site, users still have no claim against the company; that is a private business decision. These faux news outlets circulate stories that appear, at times, to be intentionally and willfully misleading. Is this similar to a man shouting “fire” in a crowded theatre? In essence, the man in that commonly used hypothetical knows that his statement is false and that it has a high probability of inciting panic, while the general public has no way to assess the validity of his statement and no time to check. The second part of that statement is key: the public would have no time to check the validity of the statement. If the government were to begin passing regulations and cracking down on the circulation and creation of these hoax news stories, it would have to prove that the stories create a “clear and present danger” of bringing about substantive evils that Congress has a right to prevent. This standard comes from the Supreme Court’s decision in Schenck v. United States. The government is unlikely to be able to ban these faux news stories because, while some may consider them dangerous, the audience retains the ability to check the validity of content from these untrusted sources.

Even contemplating government action in this circumstance would require the state to walk a fine line with freedom of political expression. What is humorous, and what is dangerously misleading? For example, The Onion posted an article entitled “Biden Forges President’s Signature Executive Order 54723.” Clearly this is a joke; however, it could still incite fury among those who believe it and leave a misinformed public treating it as material information when casting a ballot. That Onion article is not notably different from a post entitled “FBI AGENT SUSPECTED IN HILLARY EMAIL LEAKS FOUND DEAD IN APPARENT MURDER-SUICIDE” published by the Denver Guardian. In terms of their potential to mislead the public, there is no readily identifiable difference between the two stories. This gray area would make it extremely difficult to methodically stop the production of fake news while ensuring the protection of comedic parody news. The only practical way to protect the public from stories like these, which are apparently being pushed on the American voting public by the Russian government in an attempt to influence election outcomes, is to educate the public on how to verify online accounts.


The Best Process for the Best Evidence

Mary Riverso, MJLST Staffer

Social networking sites are now an integral part of American society. Almost everyone and everything has a profile, typically on multiple platforms. And people like to use them. Companies like having direct contact with their customers, media outlets like having access to viewer opinions, and people like to document their personal lives.

However, as the use of social networking continues to grow, the information placed in the public sphere is playing an increasingly central role in investigations and litigation. Many police departments conduct regular surveillance of public social media posts in their communities because these sites have become conduits for crimes and other wrongful behavior. As a result, litigants increasingly seek to offer records of statements made on social media sites as evidence. So how exactly can content from social media be used as evidence? Ira Robbins explores this issue in the article Writings on the Wall: The Need for an Authorship-Centric Approach to the Authentication of Social-Networking Evidence. The main hurdle is reliability. To be admitted as evidence, the source of information must be authenticated so that a fact-finder may rely on the source, and ultimately its content, as trustworthy and accurate. Social media sites, however, are particularly susceptible to forgery, hacking, and alteration. Without a confession, it is often difficult to determine who actually authored the posted content.

Courts grapple with this issue. Some admit social media evidence only when the record establishes distinctive characteristics of the particular website under Federal Rule of Evidence 901(b)(4); others treat authentication as a relatively low bar and hold that, so long as a witness testifies to the process by which the record was obtained, it is ultimately for the jury to determine the credibility of the content. But is that fair? If evidence is supposed to assist the fact-finder in “ascertaining the truth and securing a just determination,” should it not be of utmost importance to determine the author of the content? Is not a main purpose of authentication to attribute the content to the proper author? Social media records may well be the best evidence against a defendant, but without an authorship-centric approach, the current path to their admissibility may not yet be the best process.


Are News Aggregators Getting Their Fair Share of Fair Use?

Mickey Stevens, MJLST Note & Comment Editor

Fair use is an affirmative defense to copyright infringement that permits the use of copyrighted materials without the author’s permission when doing so fulfills copyright’s goal of promoting the progress of science and useful arts. One factor courts analyze in determining whether fair use applies is the purpose and character of the use, including whether it is commercial or for nonprofit educational purposes and, in particular, whether the use is “transformative.” Recently, courts have had to determine whether automatic news aggregators can invoke the fair use defense against claims of copyright infringement. An automatic news aggregator scrapes the Internet and republishes pieces of original sources without adding commentary to the original works.

In Spring 2014, MJLST published “Associated Press v. Meltwater: Are Courts Being Fair to News Aggregators?” by Dylan J. Quinn. That article discussed the Meltwater case, in which the United States District Court for the Southern District of New York held that Meltwater, an automatic news aggregator, could not invoke the fair use defense because its use of copyrighted works was not “transformative.” Meltwater argued that it should be treated like search engines, whose actions do constitute fair use. The court rejected this argument, finding that Meltwater’s customers used the aggregator as a substitute for the original works rather than clicking through to the original articles as search engine users do.

In his article, Quinn argued that the Meltwater court’s interpretation of “transformative” was too narrow, and that such an interpretation drew an untenable distinction between search engines and automatic news aggregators that function similarly. Quinn asked, “[W]hat if a news aggregator can show that its commercial consumers only use the snippets for monitoring how frequently it is mentioned in the media and by whom? Is that not a different ‘use’?” The recent case of Fox News Network, LLC v. TVEyes, Inc. presented a dispute similar to Quinn’s hypothetical, and it may indicate support for his argument.

In TVEyes, Fox News claimed that TVEyes, a media-monitoring service that aggregated news reports into a searchable database, had infringed its copyrights in clips of Fox News programs. The TVEyes database allowed subscribers to track when, where, and how words of interest were used in the media, the type of monitoring that Quinn argued should constitute a “transformative” use. In a 2014 ruling, the court held that TVEyes’ search engine that displayed clips was transformative because it converted the original work into a research tool by enabling subscribers to research, criticize, and comment. 43 F. Supp. 3d 379 (S.D.N.Y. 2014). In a 2015 decision, the court analyzed a few specific features of the TVEyes service, including an archiving function and a date-time search function. 2015 WL 5025274 (S.D.N.Y. Aug. 25, 2015). The court held that the archiving feature constituted fair use because it allowed subscribers to detect patterns and trends and to save clips for later research and commentary. However, the court held that the date-time search function (allowing users to search for video clips by date and time of airing) was not fair use, reasoning that users who already have date and time information could easily obtain the clip from the copyright holder or its licensing agents (e.g., by buying a DVD).

While the court did note that a database of video clips differs in kind from a collection of print news articles, the TVEyes decisions suggest that courts may now be willing to allow automatic news aggregators to invoke the fair use defense when they can show that their collections of print news articles enable consumers to track patterns and trends for research, criticism, and commentary. Thus, the TVEyes decisions may prompt courts to reconsider the distinction between search engines and automatic news aggregators established in Meltwater, a distinction that puts news aggregators at a disadvantage when it comes to fair use.


Digital Millennium Copyright Act Exemptions Announced

Zach Berger, MJLST Staffer

The Digital Millennium Copyright Act (DMCA), first enacted in 1998, prevents owners of digital devices from making use of those devices in any way that the copyright holder does not explicitly permit. Codified in part in 17 U.S.C. § 1201, the DMCA makes it illegal to circumvent digital security measures that prevent unauthorized access to copyrighted works such as movies, video games, and computer programs. The law prevents users from breaking what are known as access controls, even when the purpose would fall under lawful fair use. According to Kit Walsh, a staff attorney at the Electronic Frontier Foundation (a nonprofit digital rights organization), “This ‘access control’ rule is supposed to protect against unlawful copying. But as we’ve seen in the recent Volkswagen scandal . . . it can be used instead to hide wrongdoing hidden in computer code.” Essentially, everything not explicitly permitted is forbidden.

However, these restrictions are not ironclad. Every three years, users may ask the Library of Congress (LOC) to grant exemptions to this law for lawful fair uses, but exemptions are not easy to obtain. Activists must not only propose new exemptions but also plead for exemptions already granted to be continued. The system is flawed, as users often need a way to circumvent these controls to make full use of their devices. The LOC has nevertheless recently released its new list of exemptions, and the expanded list represents a small victory for digital rights activists.

The exemptions go into effect in 2016 and cover 22 types of uses affecting movies, e-books, smartphones, tablets, video games, and even cars. Some highlights of the exemptions are as follows:

  • Movies, where circumvention is used to make use of short portions of the motion pictures:
    • For educational uses by university and grade-school instructors and students
    • For e-books offering film analysis
    • For uses in noncommercial videos
  • Smart devices
    • Users may “jailbreak” these devices to allow them to interoperate with or remove software applications; phones may also be unlocked from their carriers
    • Such devices include smartphones, televisions, and tablets or other mobile computing devices
      • In 2012, jailbreaking smartphones was allowed, but not tablets; this distinction has been removed.
  • Video games
    • Fan-operated online servers are now allowed to support video games once the publishers shut down official servers
      • However, this applies only to games that would be made nearly unplayable without the servers
    • Museums, libraries, and archives can go a step further, jailbreaking games as needed to get them functioning properly again
  • Computer programs that operate devices primarily designed for use by individual consumers, for purposes of diagnosis, repair, and modification
    • This includes voting machines, automobiles, and implanted medical devices
  • Computer programs that control automobiles, for purposes of diagnosis, repair, and modification of the vehicle

These new exemptions are a small but significant victory for consumers under the DMCA. The ability to analyze your automobile’s software is especially relevant in the wake of the aforementioned Volkswagen emissions scandal. However, the exemptions come with important caveats. For example, only video games that are rendered almost completely unplayable may have user-made servers; for games that lose only an online multiplayer feature, such servers are not allowed. A better long-term solution is clearly needed, as this burdensome process is flawed and has led to what the EFF has called “unintended consequences.” Regardless, as long as we still have this draconian law, exemptions will be welcome. To read the final rule, the Register’s recommendation, and the introduction (which provides a general overview), click here.


The Legal Persona of Electronic Entities – Are Electronic Entities Independent Entities?

Natalie Gao, MJLST Staffer

The advent of the electronic age brought digital conveniences and easier access to more information, but it also brought certain electronic problems. One such problem is whether electronic entities such as (1) online usernames, (2) software agents, (3) avatars, (4) robots, and (5) artificial intelligences are independent entities under law. A username for a website like eBay or for a forum may, for all intents and purposes, be just a pseudonym for the person behind the computer. But at what point does the electronic entity become an independent entity, and at what point does it start to have the rights and responsibilities of a legally independent entity?

In 2007, plaintiff Marc Bragg brought suit against defendants Linden Research Inc. (Linden), owner of the massively multiplayer online role-playing game (MMORPG) Second Life, and its chief executive officer. Second Life is a game with a telling title: it essentially allows its players to live a second life. It has a market for goods, extensive communications functions, and even a red-light district, and real universities have established digital campuses in the game, where they have held lectures. Players of Second Life purchase items and land in-game with real money.

Plaintiff Bragg’s digital land was frozen in-game by moderators due to “suspicious” activity, and Bragg brought suit claiming he had property rights in the digital land. Bragg v. Linden Research, Inc., like its descendant Evans v. Linden Research, Inc. (2011), was settled out of court and therefore does not offer the legal precedent it could have provided on its unique fact pattern. Second Life is also an unusual game because, pre-2007, Linden had promoted Second Life by announcing that it recognized virtual property rights and that whatever users owned in-game would belong to the users rather than to Linden. But can the users really own digital land? Would it be the users themselves who own the digital land, or would the avatars they create, the ones living this “second life,” be the true owners? And at what point can avatars, or any electronic entity, even have rights and responsibilities?

An independent entity is not the same as a legally independent entity, because the latter, beyond merely existing independently, has rights and responsibilities under law. MMORPGs may use avatars to let users play, and an avatar may be one step more independent than a username, but is that avatar an independent entity that can, for example, legally conduct commercial transactions? Or is the avatar merely conducting a “transaction” in a leisure context? In Bragg v. Linden Research, Inc., the court touched on the issue of transactions but ruled only on civil procedure and contract law. And what about the avatars in some games that can now play themselves? Is “automatic” enough to make something an “independent entity”?

The concept of an independent electronic entity is discussed at length in Bridging the Accountability Gap: Rights for New Entities in the Information Society. Authors Koops, Hildebrandt, and Jaquet-Chiffelle compare the legal personhood of artificial electronic entities with that of animals, ships, trust funds, and organizations, arguing that giving legal personhood to basically all (or just “all”) currently existing electronic entities raises problems: such entities would need representation with agency, they lack the “intent” required for certain crimes and areas of law, and they would likely need to base some of their legal appeals on human and civil rights. The entities may be “actants” (in that they are capable of acting), but they are not always autonomous. A robot would need mens rea for responsibility to be assessed, and none of the five entities listed above has consciousness (which animals do have), let alone self-consciousness. The authors argue that none of the artificial entities fits the prima facie definition of a legal person; instead, they evaluate the entities on a continuum from automatic (acting) to autonomic (acting on their own), along with each entity’s ability to contract and to bear legal responsibility. They offer three possible solutions, one “Short Term,” one “Middle Term,” and one “Long Term.” The Short Term approach, which seems the most legally feasible under today’s law, proposes creating a corporation (a legally independent entity) to create the electronic entity. This concept is reminiscent of theorist Gunther Teubner’s idea of using a hybrid entity, one that combines an electronic agent with a limited-liability company rather than an individual entity, to confer rights and responsibilities.

Even though, on the claims actually brought before the court, Bragg v. Linden Research, Inc. looks more like an open-source licensing issue than a question of independent electronic entities, Koops, Hildebrandt, and Jaquet-Chiffelle still try to answer questions that may one day be very salient. Programs can be probabilistic algorithms, but no matter how unpredictable a program may be, its unpredictability is fixed in the algorithm. An artificial intelligence (AI), a program that grows, learns, and creates unpredictability on its own, may be a thing of science fiction and The Avengers today, but it may one day be reality. And an AI does not have to be the AI of I, Robot; it does not have to have a personality. At what point will we have to treat electronic entities as legally autonomic and hold them responsible for the things they have done? Will the future genius programmer, who creates an AI to watch over the trusts in his or her care, be held accountable when that AI starts illegally funneling money out of the AmeriCorp bank account it was created to watch over and into the personal savings accounts of lamer non-MJLST law journals at the University of Minnesota? Koops, Hildebrandt, and Jaquet-Chiffelle argue yes, but it largely depends on the AI itself and the area of law.


Data Breach and Business Judgment

Quang Trang, MJLST Staffer

Data breaches are a threat to major corporations. Corporations such as Target Corp. and Wyndham Worldwide Corp. have been victims of mass data breaches. The damage caused by such breaches has led shareholders to file derivative lawsuits seeking to hold boards of directors responsible.

In Palkon v. Holmes, 2014 WL 5341880 (D.N.J. 2014), Wyndham Worldwide shareholder Dennis Palkon filed a lawsuit against the company’s board of directors. The judge granted the board’s motion to dismiss in part because of the business judgment rule, which governs review of a board’s refusal of a shareholder demand. The principle of the business judgment rule is that “courts presume that the board refused the demand on an informed basis, in good faith and in honest belief that the action taken was in the best interest of the company.” Id. The shareholder bringing the derivative suit bears the burden of rebutting the presumption by showing that the board did not act in good faith or did not base its decision on a reasonable investigation.

Cybersecurity is a developing area. People are still unsure how prevalent the problem is and how damaging it can be, and it is difficult to determine what a board needs to do in the face of such ambiguous information. At a time when there are no settled corporate cybersecurity standards, it is difficult for a shareholder to show bad faith or a lack of reasonable investigation. Until clear standards and procedures for cybersecurity are widely adopted, derivative suits over data breaches will likely be dismissed, as in Palkon.


Bitcoin Regulation: Lifeline or Kiss of Death?

Ethan Mobley, MJLST Articles Editor

Bitcoin’s ever-increasing popularity has sparked fierce debate over the extent to which the alternative currency should be regulated, if at all. Bitcoin, a “cryptocurrency,” is the leading digital currency in use today. It can be used to buy and sell goods online or in traditional brick-and-mortar stores, and it is also used for speculative currency trading. As Bitcoin is adopted by more and more users, numerous businesses geared toward facilitating Bitcoin transactions have sprouted up. One such company is Coinbase, which serves as a currency exchange allowing users to buy and sell Bitcoin (XBT) for USD and other currencies. Coinbase also acts as a “wallet” for Bitcoin, allowing users to purchase Bitcoin at the market exchange rate, store it on their phones, and then pay for items using their phone’s “wallet.”

Bitcoin proponents claim the cryptocurrency is superior to traditional fiat currency for several reasons: 1) Bitcoin’s supply is self-regulating, and hence not susceptible to changes in government policy; 2) Bitcoin eliminates transaction costs between the buyer and seller of goods, which is especially helpful for small merchants; and 3) buyers using Bitcoin are not vulnerable to identity theft if the merchant suffers a security breach. Bitcoin opponents argue the cryptocurrency is problematic because it can be used for illicit purposes (e.g., transactions on Silk Road) while shielding its users through relative transaction anonymity. Whatever the advantages and disadvantages, Bitcoin’s success ultimately depends on widespread use by buyers and sellers and on government regulation that permits free use of the currency.

Recently, California legislators introduced a bill to regulate digital currencies. California isn’t the first state to consider such legislation, but it is arguably the most important, given that California is home to more Bitcoin users than any other US state. Specifically, California AB-1326 would establish a regulatory framework for entities engaged in the “virtual currency business,” imposing licensure and fee requirements on those entities. As defined, a “virtual currency business” is one that maintains “full custody or control of virtual currency in this state on behalf of others.” Entities primarily engaged in buying and selling goods or services are specifically excluded from the bill. Thus AB-1326 would not impose any burden on retailers; only quasi-banking entities like Coinbase would be subject to the regulation. Such regulation would ideally reduce Bitcoin market risk and volatility, making the cryptocurrency a more viable alternative to traditional fiat currency. Nevertheless, Bitcoin advocacy groups disagree over whether the bill will ultimately encourage or inhibit widespread adoption of Bitcoin. After all, Bitcoin’s independence from government is one of its most beloved features. Agree or disagree with the policies advanced by AB-1326, one thing is clear: Bitcoin’s ubiquitous influence makes widespread regulation inevitable, and early legislation such as AB-1326 will serve as a model for other states to follow.


The Shift Toward Data Privacy: Workplace, Evidence, and Death

Ryan Pesch, MJLST Staff Member

I’m sure I am not alone in remembering the constant urgings to be careful what I post online. I was told not to send anything in an email that I wouldn’t want made public, and I suppose that made some sense when the internet was commonly viewed as a sort of public forum. It was the place teens went to relieve their angst, to post pictures, and to exchange messages. But the demographic of people who use the internet is constantly growing. My mom and sister share their gardening interests on Pinterest (despite the fact that my mom needs help downloading her new podcasts), and as yesterday’s teens become today’s adults, what people are comfortable putting online continues to expand. The advent of online finances, for example, illustrates that the online world is about much more than frivolity. The truth of the matter is that the internet shapes the way we think about ourselves. And as Lisa Durham Taylor observed in her article for MJLST in the spring of 2014, the courts are taking notice.

The article concerns the role of internet privacy in the employment context, noting that where once a company could monitor its employees’ computer activity with impunity (after all, it was being done on company time and with company resources), courts have recently recognized that the internet stands for more than dalliance. Taylor notes that the connectedness of employees brings both advantages and disadvantages for the corporation. It both helps and hinders productivity, offering a more efficient way of accomplishing a task while also supplying ready material for procrastination. When the line blurs and people start using company time for personal matters, the line-drawing can get tricky. Companies have an important interest in preserving the confidentiality of their work, but courts have recently been drawing the lines in favor of the employee over the employer. This is in stark contrast to early decisions, which gave companies a broad right to discharge an at-will employee and found no expectation of privacy in the workplace. Luckily, courts are beginning to recognize that the nature of a person’s online interactions makes a company’s snooping more analogous to going through an employee’s personal possessions than to monitoring an employee’s efficiency.

I would add to the picture the recently decided Supreme Court case of Riley v. California, where the Court held that police need a warrant to search a suspect’s cell phone. The Court reasoned that a warrantless search of a cell phone incident to arrest is not justified, because the nature of the technology means that police would intrude on far more than is necessary to serve the usual purposes of such a search. The Court likened cell phones to the locked possessions that police have long been barred from searching incident to arrest, and sarcastically observed that cell phones have become “such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.” The “vast quantities of personal information” a phone holds, and the fact that the phone itself is not a weapon, make searching it unjustified in the course of an ordinary search incident to arrest.

This respect for individuals’ data seems to signal a new and incredibly complicated age of law. When does a person have the right to protect their data? When can that protection be broken? As discussed in a recent post on this blog, there is an ongoing debate about what to do with the data of decedents. To me, a conservative approach makes the most sense, especially in the context of the cases discussed by Lisa Taylor and the decision in Riley v. California. However, courts have sided with those seeking access, reasoning that a will grants the property of the deceased to the heirs, a principle that has been extended to online “property.” What Rebecca Cummings points out to help swing the balance back in favor of privacy is that it is not just the property of the deceased to which access is being granted. The nature of email means that a person’s inbox holds copies of letters from others that may never have been intended for anyone else’s eyes.

I can only imagine the number of people who, had they the presence of mind to consider this eventuality, would act differently, whether in the writing of their wills or in the management of their communications. I am sure this is already something lawyers advise clients about when discussing their estate plans, but for many, death comes before they have the chance to fully consider these things. As generations who have grown up on the internet start to encounter the issue in earnest, I have no doubt that the message will spread, but I can’t help feeling it should be spreading already. So: what would your heirs find tucked away in the back of your online closet? And if the answer is something you’d rather not think about, perhaps we should support the shift toward privacy in more aspects of the digital world.


Postmortem Privacy: What Happens to Online Accounts After Death?

Steven Groschen, MJLST Staff Member

Facebook recently announced a new policy that gives users the option of appointing an executor of their account. This policy change means that an individual’s Facebook account can continue to exist after its original creator has passed away. Although Facebook status updates from “beyond the grave” are certainly a peculiar phenomenon, the change fits nicely into the larger debate over how to handle one’s digital assets after death.

Rebecca G. Cummings, in her article The Case Against Access to Decedents’ Email: Password Protection as an Exercise of the Right to Destroy, discusses some of the arguments for and against providing access to a decedent’s online account. Those favoring access may assert one of two rationales: (1) access eases administrative burdens for personal representatives of estates; and (2) digital accounts are merely property to be passed on to one’s descendants. The response from those opposing access is that the intent of the deceased should be honored above other considerations. Further, they argue that if there is no clear expression of the decedent’s intent (which is not uncommon, because many Americans die without wills), the presumption should be that the decedent intended his or her online accounts to remain private.

Email and other online accounts (e.g., Facebook, Twitter, dating profiles) present novel problems for the property rights of the deceased. Historically, a diary or the occasional love letter was among the most intimate property that could be transferred to one’s descendants. The vast catalog of information available in an email account drastically changes what can be passed on. In contrast to a diary, an email account contains far more than the highlights of an individual’s day; emails provide a detailed record of an individual’s daily tasks and communications. Interestingly, this in-depth cataloging of daily activities has led some to argue that the information should be passed on as a way of creating a historical archive. There is certainly historical value in preserving an individual’s social media or email accounts; however, that value must be balanced against the potential invasion of his or her privacy.

As of June 2013, seven states had passed laws that explicitly govern digital assets after death. The latest development in this area, however, is the Uniform Fiduciary Access to Digital Assets Act, created by the Uniform Law Commission. The act attempts to create consistency among the states in how digital assets are handled after an individual’s death, and it is presently being considered for enactment in fourteen states. The act grants fiduciaries, in certain instances, the “same right to access those [digital] assets as the account holder, but only for the limited purpose of carrying out their fiduciary duties.” Whether the act will satisfy both sides of this debate remains to be seen.