Internet

The Limits of Free Speech

Paul Overbee, MJLST Editor

A large portion of society does not put much thought into what they post on the internet. From tweets and status updates to YouTube comments and message board activities, many individuals post on impulse without regard to how their messages may be interpreted by a wider audience. Anthony Elonis is just one of many internet users who are coming to terms with the consequences of their online activity. Oddly enough, by posting on Facebook Mr. Elonis took the first steps that ultimately led him to the Supreme Court. The Court is now considering whether the posts were simply a venting of frustration, as Mr. Elonis claims, or whether they constitute a “true threat” that will send Mr. Elonis to jail.

The incident in question began a week after Tara Elonis obtained a protective order against her husband. Upon receiving the order, Mr. Elonis posted to Facebook, “Fold up your PFA [protection-from-abuse order] and put it in your pocket […] Is it thick enough to stop a bullet?” According to Mr. Elonis, he was trying to emulate the rhyming style of the popular rapper Eminem. At a later date, an FBI agent visited Mr. Elonis regarding his threatening posts about his wife. Soon after the agent left, Mr. Elonis again returned to Facebook to state, “Little agent lady stood so close, took all the strength I had not to turn the [expletive] ghost. Pull my knife, flick my wrist and slit her throat.”

Due to these posts, Mr. Elonis was sentenced to nearly four years in federal prison, and Elonis v. United States is now before the Supreme Court. Typical state statutes define these “true threats” without any regard to whether the speaker actually intended to cause such terror. For example, Minnesota’s “terroristic threats” statute includes “reckless disregard of the risk of causing such terror.” Some states allow a showing of “transitory anger” to overcome a “true threat” charge. This type of defense arises where the defendant’s actions are short-lived, show no intent to terrorize, and are clearly tied to an inciting event that caused the anger.

The Supreme Court’s decision will carry wide First Amendment implications for free speech rights and artistic expression. A decision that comes down harshly on Mr. Elonis may have the effect of chilling speech on the internet. The difference between a serious statement and a joking one often depends on the point of view of the reader. Many would rather stop posting on the internet than risk having their words misinterpreted and charges brought. On the other hand, if the Court were to look to the intent of Mr. Elonis, then “true threat” statutes may lose much of their force due to evidentiary issues. A decision in favor of Mr. Elonis may lead to a more violent internet where criminals such as stalkers have a longer leash with which to persecute their victims. Oral argument in the case was held on December 1, 2014, and a decision will be issued in the near future.


The UETA: Are Attorneys Automatically Authenticating Every Email?

Dylan Quinn, MJLST Lead Note Comment Editor

The work week is winding down and you are furiously trying to reach an agreement with opposing counsel on some issue or dispute. You email back and forth until it appears you have reached an agreement – at least for the weekend. You will tell your client about the essential terms next week to see if you should “finalize” everything with the other side.

I don’t want to ruin your weekend, but you may have already bound the client to an enforceable agreement. How, you ask, can this be possible if I did not sign anything? Well, in light of the UETA and developing case law, that automatic signature block at the bottom of all your emails might be enough.

Minnesota Statutes Section 481.08 provides that an “attorney may bind a client, at any stage of an action or proceeding, by agreement made … in writing and signed by such attorney.” In addition, Minnesota long ago joined almost every other state in adopting a variation of the Uniform Electronic Transactions Act (UETA). The purpose of the UETA is to provide a legal framework for the use of electronic signatures and records in government or business transactions, making them as legally effective as paper records and handwritten signatures. In sum, the UETA will apply to agreements reached under Section 481.08.

Minnesota Statutes Section 325L.02(h) defines “electronic signature” as “an electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record.” Furthermore, Section 325L.05(b) makes clear that the UETA in Minnesota applies only to transactions between parties that have “agreed to conduct transactions by electronic means,” which is determined from the “context and surrounding circumstances, including the parties’ conduct.” However, any attorney negotiating a settlement or other stipulation via email opens himself or herself up to the argument that the parties intended to transact business electronically. The central question is therefore whether the attorney intended the signature block to constitute a legally significant act that authenticates the email, thus binding the client to a settlement or other agreement.

It has long been held that an email chain can constitute a binding agreement. This past summer, the Minnesota Court of Appeals held that “an electronic signature in an email message does not necessarily evidence intent to electronically sign a document attached to the e-mail.” See SN4, LLC v. Anchor Bank, fsb, 848 N.W.2d 559, 567 (Minn. Ct. App. 2014). While the decision adds to a growing body of jurisprudence in this area, the question of automated signature blocks was tabled by the decision, and the parties involved were not attorneys. The Minnesota Supreme Court denied review this past September.

Other jurisdictions can offer some guidance. In New York, for example, where another law outside the UETA effectively serves the same purpose, it has long been held that automated imprints or signatures are insufficient to authenticate every document. See Parma Tile Mosaic & Marble Co. v. Estate of Fred, 663 N.E.2d 633, 635 (N.Y. 1996) (finding, for Statute of Frauds purposes, that the automatic imprint of “MRLS Construction” on every faxed document did not amount to the “sender’s apparent intention to authenticate every document subsequently faxed.”).

In Texas, there is a split among the courts on the issue of an attorney’s signature block creating an enforceable agreement. Compare Cunningham v. Zurich Am. Ins. Co., 352 S.W.3d 519, 529-30 (Tex. App. 2011) (determining a settlement agreement had not been reached because the court declined “to hold that mere sending … of an email containing a signature block satisfies the signature requirement when no evidence suggests that the information was typed purposefully rather than generated automatically.”), with Williamson v. Bank of New York Mellon, 947 F. Supp. 2d 704, 710 (N.D. Tex. 2013) (disagreeing with Cunningham because (1) the attorney must have typed in the signature block information “at some point in the past,” (2) a broad view of the electronic signature definition comports with the UETA’s purpose, and (3) “email communication is a reasonable and legitimate means of reaching a settlement in this day and age.”).

On the one hand, it seems like a strong argument to point out that every email contains the signature block. How can that possibly evidence the requisite intent to authenticate statements or agreements? Do we really want to allow attorneys to use this argument any time they get close to reaching an agreement while emailing back and forth? In response, one must ask: in what instance should we allow an attorney to seemingly agree with opposing counsel via email, but escape the agreement because they did not use “/s/” and relied only on their automated signature block?

Regardless of the outcome, a decision one way or the other will have far-reaching impacts on legal practice, and more specifically litigation, in Minnesota. As the court recognized in Williamson, “email communication is a reasonable and legitimate means of reaching a settlement in this day and age.” If the entire purpose of the UETA is to facilitate electronic transactions, and the Minnesota Supreme Court is charged with providing professional and ethical guidance for the profession within the state, it should grant review rather than table the issue.

Until then, all parties transacting business electronically, but especially attorneys, should be conscious of that little signature block they typed in on the day they first set up their email accounts.


An Authorship-Centric Approach to the Authentication of Social-Networking Evidence

Sen “Alex” Wang, MJLST Staff Member

In Volume 13, Issue 1 of the Minnesota Journal of Law, Science & Technology, Ira P. Robbins called for special attention to social-networking evidence used in civil and criminal litigation and proposed an authorship-centric approach to the authentication of such evidence. In recent years, social-networking websites like Facebook, MySpace, and Twitter have become an ingrained part of our culture. However, at least as it appears to Robbins, people are stupid with regard to their online postings, documenting their every move on social-networking sites no matter how foolish or incriminating. The lives and careers of not only ordinary citizens, but also lawyers, judges, and even members of Congress have been damaged by their own social-networking postings.

Social-networking sites are designed to facilitate interpersonal relationships and information exchanges, but they have also been used to harass, intimidate, and emotionally abuse or bully others. With no effective check on fake accounts or false profiles, the anonymity of social-networking sites permits stalkers and bullies to take their harmful conduct above and beyond traditional harassment. The infamous Lori Drew and Latisha Monique Frazier cases provide excellent examples. Moreover, hackers and identity thieves have also taken advantage of the personal information posted on social-networking sites. Thus, Robbins argued that the growth in popularity of social-networking sites and the rising number of fake accounts and incidents of hacking signal that information from social-networking sites will begin to play a central role in both civil and criminal litigation.

Often unbeknownst to the social-networking user, postings leave a permanent trail that law-enforcement agents and lawyers frequently rely upon in crime solving and trial strategy. Robbins argued that the ease with which social-networking evidence can be altered, forged, or posted by someone other than the owner of the account should raise substantial admissibility concerns. Specifically, Robbins stated that social-networking postings are comparable to postings on websites rather than e-mails. Thus, the authentication of social-networking evidence is the critical first step to ensuring that the admitted evidence is trustworthy and, ultimately, that litigants receive a fair and just trial.

Robbins, however, further argued that the current judicial approaches to authentication of such evidence have failed to require rigorous showings of authenticity despite the demonstrated unreliability of information on social-networking sites. In the first approach, the court effectively shirks its gate-keeping function, deflecting all reliability concerns associated with social-networking evidence to the finder of fact. Under the second approach, the court authenticates a social-networking posting by relying solely on testimony of the recipient. The third approach requires testimony about who, aside from the owner, can access the social-networking account in question. With the fourth approach, the court focuses on establishing the author of a specific posting but fails to provide a thorough framework.

As a solution, Robbins proposed an authorship-centric approach that instructs courts to evaluate multiple factors when considering evidence from social-networking websites. The factors fall into three categories: account security, account ownership, and the posting in question. Although no one factor in these categories is dispositive, addressing each will help to ensure that admitted evidence possesses more than a tenuous link to its purported author. For account security, the inquiry should include at least the following questions: (1) Does the social-networking site allow users to restrict access to their profiles or certain portions of their profiles? (2) Is the account that was used to post the proffered evidence password protected? (3) Does anyone other than the account owner have access to the account? (4) Has the account been hacked into in the past? (5) Is the account generally accessed from a personal or a public computer? (6) How was the account accessed at the time the posting was made? As to account ownership, a court should address, at a minimum, the following key questions: (1) Who is the person attached to the account that was used to post the proffered evidence? (2) Is the e-mail address attached to the account one that is normally used by the person? (3) Is the alleged author a frequent user of the social-networking site in question? Finally, the court should ask at least these questions regarding the posting in question: (1) How was the evidence at issue placed on the social-networking site? (2) Did the posting at issue come from a public or a private area of the social-networking website? (3) How was the evidence at issue obtained from the website?

This authorship-centric approach properly shifts a court’s attention from content and account ownership to authorship, and it underscores the importance of fairness and accuracy in the outcome of judicial proceedings that involve social-networking evidence. In addition, it fits within the current circumstantial-evidence authentication framework set out by Federal Rule of Evidence 901(b)(4) and will not require the courts to engage in a more exhaustive inquiry than is already required for other types of evidence.


The Data Dilemma for Cell Phone Carriers: To Throttle or Not to Throttle? FTC Seeks to Answer by Suing AT&T Over Speed Limitations for Wireless Customers

Benjamin Borden, MJLST Staff Member

Connecting to the Internet from a mobile device is an invaluable freedom in the modern age. That essential BuzzFeed quiz, artsy Instagram picture, or new request on Friendster is available in an instant. But suddenly, and often without warning, nothing is loading, everything is buffering, and your once treasured piece of hand-held computing brilliance is no better than a cordless phone. Is it broken? Did the satellites fall from the sky? Did I accidentally pick up my friend’s BlackBerry? All appropriate questions. The explanation behind these dreadfully slow speeds, however, is more often than not data throttling courtesy of wireless service providers. This phenomenon arises from the use of unlimited data plans on the nation’s largest cell phone carriers. Carriers such as AT&T and Verizon phased out their unlimited data plans in 2010 and 2011, respectively. This came just a few years after requiring unlimited data plans for new smartphone purchases. Wireless companies argue that tiered data plans offer more flexibility and better value for consumers, while others suggest that the refusal to offer unlimited data plans is motivated by a desire to increase revenue by selling to data-hungry consumers.

Despite no longer offering unlimited data plans to new customers, AT&T has allowed customers who previously signed up for these plans to continue that service. Verizon also allows users to continue, but refuses to offer discounts on new phones if they keep unlimited plans. Grandfathering these users into unlimited data plans, however, meant that wireless companies had millions of customers able to stream movies, download music, and post to social media without restraint and, more importantly, without a surcharge. Naturally, this was deemed to be too much freedom. So, data throttling was born. Once a user of an unlimited data plan goes over a certain download size, 3-5 GB for AT&T in a billable month, their speeds are lowered by 80-90% (to 0.15 Mbps in my experience). This speed limit makes even the simplest of smartphone functions an exercise in patience.

I experienced this data throttling firsthand and found myself consistently questioning where my so-called unlimited data had escaped to. Things I took for granted, like using Google Maps to find the closest ice cream shop, were suddenly ordeals taking minutes rather than seconds. Searching Wikipedia to settle that argument with a friend about the plot of Home Alone 4? Minutes. Requesting an Uber? Minutes. Downloading the new Taylor Swift album? Forget about it.

The Federal Trade Commission (FTC) understands this pain and wants to recoup the losses of consumers who were allegedly duped by the promise of unlimited data, only to have their speeds throttled. As a result, the FTC is suing AT&T for misleading millions of consumers about unlimited data plans. After recently consulting with the Federal Communications Commission (FCC), Verizon decided to abandon its data throttling plans. AT&T and Verizon argue that data throttling is a necessary component of network management. The companies suggest that without throttling, carrier service might become interrupted because of heavy data usage by a small group of customers.

AT&T had the opportunity to settle with the FTC, but indicated that it had done nothing wrong and would fight the case in court. AT&T contends that its wireless service contracts clearly informed consumers of the data throttling policy and that those customers still signed up for the service. Furthermore, there are other cellular service options for consumers who are dissatisfied with AT&T’s terms. These arguments are unlikely to provide much solace to wireless customers shackled to dial-up level speeds.

If there is a silver lining, it is this: with my phone acting as a paperweight, I asked those around me for restaurant recommendations rather than turning to Yelp, I got a better understanding of my neighborhood by finding my way rather than following the blue dot on my screen, and I didn’t think about looking at my phone when having dinner with someone. I was proud. Part of me even wanted to thank AT&T. The only problem? I couldn’t tweet @ATT to send my thanks.


FCC Issues Notice of Proposed Rulemaking to Ensure an Open Internet, Endangers Mid-Size E-Commerce Retailers

Emily Harrison, MJLST Staff

The United States Court of Appeals for the D.C. Circuit has twice struck down key provisions of the Federal Communications Commission’s (FCC) orders regarding how to ensure an open Internet. The Commission’s latest articulation is its May 15, 2014 notice of proposed rulemaking, In the Matter of Protecting the Open Internet. The proposed rulemaking seeks to provide “broadly available, fast and robust Internet as a platform for economic growth, innovation, competition, free expression, and broadband investment and deployment.” The notice of proposed rulemaking includes legal standards previously affirmed by the D.C. Circuit in Verizon v. FCC, 740 F.3d 623 (D.C. Cir. 2014). For example, the FCC relies on Verizon to establish that it can utilize Section 706 of the Telecommunications Act of 1996 as its source of authority in promulgating Open Internet rules. Additionally, Verizon explained how the FCC can employ a valid “commercially reasonable” standard to monitor the behavior of Internet service providers.

Critics of the FCC’s proposal for network neutrality argue that the proposed standards are insufficient to ensure an open Internet. The proposal arguably allows broadband carriers to offer “paid prioritization” services. The sale of this prioritization not only leads to “fast” and “slow” traffic lanes, but also allows broadband carriers to charge content providers for priority in “allocating the network’s shared resources,” such as the relatively scarce bandwidth between the Internet and an individual broadband subscriber.

Presuming that there is some merit to the critics’ arguments, if Internet Service Providers (ISPs) could charge certain e-commerce websites different rates to access a faster connection to customers, the prioritized websites could gain a competitive advantage in the marketplace. Disadvantaged online retailers could see a relative decrease in their respective revenue. For example, without adequate net neutrality standards, an ISP could prioritize certain websites, such as Amazon or Target, and allow them optimal broadband speeds. Smaller and mid-sized retail stores may only have the capital to access a slower connection. As a result, customers would consistently have a better retail experience on the websites of larger retailers because of the speed with which they can view products or complete transactions. Therefore, insufficient net neutrality policies could have a negative effect on the bottom line of many e-commerce retailers.

Comments can be submitted in response to the FCC’s notice of proposed rulemaking at: http://www.fcc.gov/comments


Is the US Ready for the Next Cyber Terror Attack?

Ian Blodger, MJLST Staff Member

The US’s military intervention against ISIL carries with it a high risk of cyber-terror attacks. The FBI reported that ISIL and other terrorist organizations may turn to cyber attacks against the US in response to that military engagement. While no specific targets have been confirmed, likely attacks range from website defacement to denial-of-service attacks. Luckily, recent cyber terror attacks attempting to destabilize the US power grid failed, but next time we may not be so lucky. Susan Brenner’s recent article, Cyber-threats and the Limits of Bureaucratic Control, published in Volume 14, Issue 1 of the Minnesota Journal of Law, Science & Technology, describes the structural reasons for the US’s vulnerability to cyber attacks and offers one possible solution to the problem.

Brenner argues that the traditional methods of investigation do not work well when it comes to cyber attacks. This ineffectiveness results from the obscured origin and often hidden underlying purpose of the attack, both of which are crucial in determining whether a law enforcement or military response is necessary. This impairment leads to problems in assessing which agency should control the investigation and response. A nation’s security from external attackers depends, in part, on its ability to present an effective deterrent to would-be attackers. In the case of cyber attacks, however, the US’s confusion over which agency should respond often precludes an efficient response.

Brenner argues that these problems are not transitory, but will increase in direct proportion to our reliance on complex technology. The current steps taken by the US are unlikely to solve the issue since they do not address the underlying problem, instead continuing to approach cyber terrorists as conventional attackers. Concluding that top-down command structures are unable to respond effectively to the threat of cyber attacks, Brenner suggests a return to a more primitive mode of defense. Rather than trusting the government to ensure the safety of the populace, Brenner suggests citizens should work with the government to ensure their own safety. This decentralized approach, modeled on British town defenses after the fall of the Roman Empire, may avoid the pitfalls of the bureaucratic approach to cyber security.

There are some issues with this proposed model for cyber security, however. Small British towns during the early Middle Ages may have been able to ward off attackers through an active, citizen-based defense, but the anonymity of the internet makes this approach challenging when applied to a digitized battlefield. Small British towns were able to easily identify threats because they knew who lived in the area. The internet, as Brenner concedes, makes it difficult to determine to whom any given person pays allegiance. Presumably, Brenner theorizes that individuals would simply respond to attacks on their own information, or enlist the help of others to fend off attacks. However, the anonymity of the internet would mean utter chaos in bolstering a collective defense. For example, an ISIL cyber terrorist could likely organize a collective US citizen response against a passive target by claiming they were attacked. Likewise, groups utilizing pre-emptive attacks against cyber terrorist organizations could be disrupted by other US groups that do not recognize the pre-emptive cyber strike as a defensive measure. This simply shows that the analogy between the defenses of a primitive British town and the internet is not complete.

Brenner may argue that her alternative simply calls for current individuals, corporations, and groups to build up their own defenses and protect themselves from impending cyber threats. While this approach would avoid the problems inherent in a bureaucratic approach, it ignores the fact that these groups are unable to protect themselves currently. Shifting these groups’ understanding of their responsibility of self defense may spur innovation and increase investment in cyber protection, but this will likely be insufficient to stop a determined cyber attack. Large corporations like Apple, JPMorgan, Target, and others often hemorrhage confidential information as a result of cyber attacks, even though they have large financial incentives to protect that information. This suggests that an individualized approach to cyber protection would also likely fail.

With the threat of ISIL increasing, it is time for the United States to take additional steps to reduce the threat of a cyber terror attack. At this initial stage, the inefficiencies of bureaucratic action will result in a delayed response to large-scale cyber terror attacks. While allowing private citizens to band together for their own protection may have some advantages over government inefficiency, this too likely would not solve all cyber security problems.


Infinite? In the Political Realm, the Internet May Not Be Big Enough for Everyone

Will Orlady, MJLST Staff Member

The Internet is infinite. At least, that’s what I thought. But Ashley Parker, a New York Times reporter, doesn’t agree. When it comes to political ad space, our worldwide information hub may not be the panacea politicians hoped for this election season.

Parker based her argument on two premises. First, not all Internet content providers are equal, at least when it comes to attracting Internet traffic. Second, politicians, especially those in “big” elections, wish to reach more people, motivating their campaigns to run ads on major content hubs such as YouTube.

But sites like YouTube can handle heavy network traffic. And, for the most part, political constituents do not increase site traffic for the purpose of viewing (or hearing) political ads. So what serves to limit a site’s ad space, if not the physical technology that facilitates the site’s user experience? Parker contends that the issue is not new: it’s merely a function of supply and demand.

Ad space on so-called premium video streaming sites like YouTube is broken down into two categories: ads that can be skipped (“skip-able” ads) and ads that must be played entirely before you reach the desired content (“reserved by” ads). The former are sold without exhaustion at auction, but the price of each ad impression increases with demand. The latter are innately more expensive, but can be strategically purchased for reserved time slots, much like television ad space.

Skip-able ads are available for purchase without regard to number, but they are limited by price and by desirability. Because they are sold at auction, Parker contends, their value can increase ten-fold in times of high demand (during a political campaign, for example). Skip-able ads are, however, most seriously limited by their lack of desirability. Assuming, as I believe it is fair to do here, that most Internet users actually skip the skip-able ads, advertising purchasers would be incentivized to purchase a site’s “reserved by” advertising space.

“Reserved by” ads are sold as their name indicates, by reservation. And if the price of certain Internet ad space is determined by time or geography, it is no longer fungible. Thus, because not all Internet ad space is the same in price, quality, and desirability, certain arenas of Internet advertising are finite.

Parker’s argument ends with the conclusion that political candidates will now compete for ad space on the Internet. This issue, however, is not necessarily problematic or novel. Elections have always been adversarial. And I am not convinced that limited Internet ad space adds to campaign vitriol. An argument could be made to the contrary: that limited ad space will confine candidates to spending resources on meaningful messages about election issues rather than smear tactics. Campaign tactics notwithstanding, I do not believe that the Internet’s limited ad space presents an issue distinct from campaign advertising in other media. Rather, Parker’s argument merely forces purchasers and consumers of such ad space to consider the fact that the Internet, as an advertising and political communication medium, may be more similar to existing media than some initially believed.


Apple’s Bark Is Worse Than Its Bite

Jessica Ford, MJLST Staff

Apple’s iPhone tends to garner a great deal of excitement from its aficionados for its streamlined design and much resentment from users craving customization on their devices. Apple’s newest smartphone model, the iPhone 6, is no exception. However, at Apple’s September 9, 2014 iPhone 6 unveiling, Apple announced that the new iOS 8 operating system encrypts emails, photos, and contacts when a user assigns a passcode to the phone. Apple is unable to bypass a user’s passcode under the new operating system and is accordingly unable to comply with government warrants demanding physical data extraction from iOS 8 devices.

The director of the FBI, James Comey, has already voiced concerns that this lack of access to iOS 8 devices could prevent the government from gathering information on a terror attack or child kidnappings.

Comey is not the only one to criticize Apple’s apparent attempt to bypass legal court orders and warrants. Orin Kerr, a criminal procedure and computer crime law professor at The George Washington University Law School, worries that this could essentially nullify the Supreme Court’s holding in Riley v. California this year, which requires the police to have a warrant before searching the contents of an arrested individual’s cell phone.

However, phone calls and text messages are not encrypted, and law enforcement can gain access to that data by serving a warrant upon wireless carriers. Law enforcement can also tap and monitor cellphones by going through the same process. Any data backed up to iCloud, including iMessages and photos, can be accessed under a warrant. The only data that law enforcement would not be able to access without a passcode is data that would normally be backed up to iCloud but still remains only on the device.

While security agencies argue otherwise, iOS 8 seems far from rendering Riley’s warrant requirement useless. Law enforcement still has several viable options to gain information with a warrant. Furthermore, the Supreme Court has already made it clear that it does not find that the public’s interest in solving or preventing crimes outweighs the public’s interest in the privacy of phone data, even when there is a chance that the data on a cell phone at issue will be encrypted once the passcode locks the phone:

“[I]n situations in which . . . an officer discovers an unlocked phone, it is not clear that the ability to conduct a warrantless search would make much of a difference. The need to effect the arrest, secure the scene, and tend to other pressing matters means that law enforcement officers may well not be able to turn their attention to a cell phone right away . . . . If ‘the police are truly confronted with a ‘now or never’ situation,’ . . . they may be able to rely on exigent circumstances to search the phone immediately . . . . Or, if officers happen to seize a phone in an unlocked state, they may be able to disable a phone’s automatic-lock feature in order to prevent the phone from locking and encrypting data . . . . Such a preventive measure could be analyzed under the principles set forth in our decision in McArthur, 531 U.S. 326, 121 S.Ct. 946, which approved officers’ reasonable steps to secure a scene to preserve evidence while they awaited a warrant.” Riley v. California, 134 S. Ct. 2473, 2487-88 (2014) (citations omitted).

With all the legal recourse that remains open, it appears somewhat hasty for the paragon-of-virtue FBI to be crying “big bad wolf.”


Cable TV Providers and the FCC’s Policy-Induced Competition Amidst Changing Consumer Preferences

Daniel Schueppert, MJLST Executive Editor

More and more Americans are getting rid of their cable TV and opting to consume their media of choice through other sources. Roughly 19% of American households with a TV do not subscribe to cable. This change in consumer preferences means that instead of dealing with the infamous “Cable Company Runaround,” many households are using their internet connection or tapping into local over-the-air broadcasts to get their TV fix. One of the obvious consequences of this change is that cable TV providers are losing subscribers and may become stuck carrying the costs of existing infrastructure and hardware. Meanwhile, the CEO of Comcast’s cable division announced that “it may take a few years” to fix the company’s customer experience.

In 2011, Ralitza A. Grigorova-Minchev and Thomas W. Hazlett published an article entitled Policy-Induced Competition: The Case of Cable TV Set-Top Boxes in Volume 12, Issue 1 of the Minnesota Journal of Law, Science & Technology. In their article, the authors noted that despite the FCC’s policy efforts to bring consumer cable boxes to retail stores like Best Buy, the vast majority of cable-subscribing households in America received their cable boxes from their cable TV operators. In the national cable TV market, the two elephants in the room are Comcast and Time Warner Cable. One of these two operators is often the only cable option in certain areas, and together they provide over a third of the broadband internet and pay-TV services in the nation. Interestingly, Comcast and Time Warner Cable are currently pursuing a controversial $45 billion merger, and in the process both companies are shrewdly negotiating concessions from TV networks and taking shots at Netflix in FCC filings.

The current fad of cutting cable TV reflects a pushback against the traditional policy of vertically integrating media, infrastructure, customer service, and hardware like cable boxes into one service. In contrast to the expensive cable box hardware required and often provided by traditional cable, internet media streaming onto a TV can usually be achieved by any number of relatively low-cost, multi-function consumer electronic devices like Google’s Chromecast. This arguably gives customers more control over their media experience by providing the ability to choose which hardware-specific services they bring into their home. If customers no longer want to be part of this vertical model, big companies like Comcast may find it difficult to adjust to changing consumer preferences given the considerable regulatory pressure discussed in Grigorova-Minchev and Hazlett’s article.