[Image via brevard-online]

Despite all the recent attention to election cybersecurity, the discussion has been largely focused on how different levels of government can cooperate to share information on possible threats to the voting process. A new article by Sean Gallagher in Ars Technica, however, is a nice primer for individuals on both why and how to create a “threat model” to identify potential threats and develop plans to protect against them:

In the most basic sense, threat models are a way of looking at risks in order to identify the most likely threats to your security. And the art of threat modeling today is widespread. Whether you’re a person, an organization, an application, or a network, you likely go through some kind of analytical process to evaluate risk.

Threat modeling is a key part of the practice people in security often refer to as “Opsec.” A portmanteau of military lineage meaning “operations security,” Opsec originally referred to the idea of preventing an adversary from piecing together intelligence from bits of sensitive but unclassified information, as wartime posters warned with slogans like “Loose lips might sink ships.” In the Internet age, Opsec has become a much more broadly applicable practice—it’s a way of thinking about security and privacy that transcends any specific technology, tool, or service. By using threat modeling to identify your own particular pile of risks, you can then move to counter the ones that are most likely and most dangerous.

Threat modeling doesn’t have to be rocket science. Most people already (consciously or subconsciously) have a threat model for the physical world around them—whether it’s changing the locks on the front door after a roommate moves out or checking window locks after a burglary in the neighborhood. The problem is that very few people pay any sort of regular attention to privacy and security risks online unless something bad has already happened.

First, Gallagher identifies a common approach currently used by leading security consultant Adam Shostack to assist the Seattle Privacy Coalition (sketched in code after the list):

  1. What are you doing? (The thing you’re trying to do, and what information is involved.)
  2. What can go wrong? (How what you’re doing could expose personal information in ways that are bad.)
  3. What are you going to do about it? (Identifying changes that can be made in technology and behavior to prevent things from going wrong.)
  4. Did you do a good job? (Re-assessing to see how much risk was reduced.)
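Read as a process, those four questions form a loop rather than a one-time checklist. Here is a minimal sketch of that control flow in Python (my own illustration, not Shostack’s; the activity, risks, and mitigations are placeholders):

```python
# Shostack's four questions as an iterative loop. The "analysis" here
# is deliberately trivial -- in practice each step is a discussion,
# not a function call.

def what_can_go_wrong(activity: str, info: list[str]) -> list[str]:
    # Question 2: brainstorm how each piece of information could leak.
    return [f"{item} exposed while {activity}" for item in info]

def plan_mitigations(risks: list[str]) -> dict[str, str]:
    # Question 3: pick a change in technology or behavior per risk.
    return {risk: "encrypt it / share less of it" for risk in risks}

# Question 1: what are you doing, and what information is involved?
activity, info = "e-mailing a spreadsheet of voter contacts", ["names", "addresses"]

risks = what_can_go_wrong(activity, info)
mitigations = plan_mitigations(risks)

# Question 4: did you do a good job? Re-assess, and repeat if not.
unhandled = [r for r in risks if r not in mitigations]
print("rinse and repeat" if unhandled else "good enough for now")
```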

Then, seeking something that works better for everyone building a “personal threat model,” he shares the process used by the Electronic Frontier Foundation (again sketched in code after the list):

  1. What do you want to protect? (The data, communications, and other things that could cause problems for you if misused.)
  2. Who do you want to protect it from? (The people, organizations, and criminal actors who might seek access to that stuff.)
  3. How likely is it that you will need to protect it? (Your personal level of exposure to those threats.)
  4. How bad are the consequences if you fail?
  5. How much trouble are you willing to go through in order to try to prevent those consequences? (The money, time, and convenience you’re willing to give up to protect those things.)
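The EFF questions map naturally onto a simple worksheet. A minimal sketch, assuming made-up 1–5 scales for likelihood, impact, and the effort of the countermeasure (the entries are hypothetical, and the scoring formula is mine, not EFF’s):

```python
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    asset: str        # 1. what do you want to protect?
    adversary: str    # 2. who do you want to protect it from?
    likelihood: int   # 3. how likely is it you'll need to protect it? (1-5)
    impact: int       # 4. how bad are the consequences if you fail? (1-5)
    effort: int       # 5. how much trouble is the countermeasure? (1-5)

    def priority(self) -> float:
        # Crude ranking: likely, damaging threats with cheap
        # countermeasures float to the top of the to-do list.
        return (self.likelihood * self.impact) / self.effort

# Hypothetical worksheet entries:
worksheet = [
    ThreatEntry("e-mail account", "password-reuse criminals", 4, 5, 1),
    ThreatEntry("laptop files", "ransomware", 3, 4, 2),
    ThreatEntry("phone traffic", "nation-state interception", 1, 4, 5),
]

for entry in sorted(worksheet, key=lambda e: e.priority(), reverse=True):
    print(f"{entry.priority():5.1f}  protect {entry.asset} from {entry.adversary}")
```

Sorting by a score like this is just one way of operationalizing the advice that comes later: chase the threats that are most likely and most damaging, not the unicorns.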

Gallagher then distills these into a set of questions “for the average mortal—or at least, for someone helping the average mortal”:

  • Who am I, and what am I doing here?
  • Who or what might try to mess with me, and how?
  • How much can I stand to do about it?
  • Rinse and repeat.

In answering the “who am I?” question, the goal is to identify “assets” that can be compromised—the important pieces of information you want to use in an activity but simultaneously want to protect, including personally identifiable information or client (voter) data. He then makes a vitally important observation:

Pieces of information that could be used to expose your assets are just as essential to protect as the assets themselves. Personal biographical and background data might be used for social engineering against you, your friends, or a service provider. Keys, passwords, and PIN codes should also be considered as valuable as the things that they provide access to.

The next step is to identify the threats – a process that’s more familiar than you think:

In the physical world of our daily lives, we assess threats based (hopefully) on real-world data. If there have been recent burglaries in the neighborhood, we up our vigilance. In certain higher-risk neighborhoods, we take precautions. If we see someone coming down the sidewalk we want to avoid, we cross the street. Threat intelligence in the digital world is a bit more complicated, but it’s essentially the same principle. We identify possible threats based on motive, resources, and capabilities.

In this part of modeling, it’s important to focus on the most likely threats to your assets and not get caught up in protecting against unicorns. Most of us are not enemies of the state (at least not yet) or of interest to some foreign power. We should likely be more concerned about criminals trying to steal information they can turn into financial gain or use to coerce or deceive us into giving them money directly.

But there are other sorts of threats individuals and organizations need to concern themselves with in threat modeling. At the most basic level, every person who uses the Internet and mobile applications faces a common set of security and privacy threats. Some of these threats are obvious and immediate; others are less intuitive but potentially more damaging in the long term…

Other attackers may focus on you as a means to gain access to a bigger target. If you work in finance or in the accounting department of nearly any company, you might be personally targeted by criminals ultimately aiming for your company’s systems. If you’re a systems administrator, someone might go after you to gain access to the systems you manage. If you’re a journalist, a non-governmental organization employee, a government employee, or a government contractor, someone may have an intelligence-gathering interest in your work…

He then makes this key observation about “threat actors” – including the fact that we are often our own biggest threats:

“[T]hreat actors” ha[ve] different levels of skill and available resources. They also have different motives for—and levels of commitment to—getting your stuff. Typically, attackers motivated by money will not expend more resources than the value of what they’re going after. It’s unusual for a criminal to spend a year or hundreds of thousands of dollars just for the ability to make a few hundred, and they’ll likely not target individuals specifically unless they’re a gateway to a big haul.

Of course, there’s one other major threat everyone faces: ourselves. Accidental disclosure or casual information leakage over e-mail, social media, or other channels can be just as bad as being hacked. Such mistakes can allow someone who happens across the leaked information to gain further access to your private data.
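That observation about financially motivated attackers is, at bottom, an expected-value calculation. A back-of-the-envelope sketch with entirely made-up numbers:

```python
def worth_attacking(payoff: float, success_prob: float, cost: float) -> bool:
    # A rational, money-motivated attacker proceeds only when the
    # expected gain exceeds the resources expended.
    return payoff * success_prob > cost

# One individual's bank account vs. the same individual as a gateway
# to a corporate network (all figures hypothetical):
print(worth_attacking(payoff=300, success_prob=0.5, cost=5_000))       # False
print(worth_attacking(payoff=500_000, success_prob=0.05, cost=5_000))  # True
```

That asymmetry is why the “gateway” targets Gallagher lists above (finance staff, sysadmins, journalists) face far more determined attackers than the average user does.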

How might they attack? Gallagher enumerates a Microsoft-developed list of six primary attack paths, known by the acronym STRIDE (a simple tagging example follows the list):

  • Spoofing identity: using some sort of token or credential to pretend to be an authorized user or another piece of trusted software—or, someone posing as someone else in an e-mail or on social media to gain your trust.
  • Tampering with data: maliciously altering data to cause a software failure or to cause damage to the victim. This could be to lower the user’s trust in the information, or it might be an effort to create an error in software that allows the attacker to launch their own commands on the targeted device.
  • Repudiation: the ability to do something (conduct a transaction, change information, access data) without having a record to prove it happened (such as an event log). This is less of an issue for average users and more of a problem for software developers, but it can still be an issue in some fraud attacks.
  • Information disclosure: your data gets exposed, either through a breach or accidental public exposure.
  • Denial of service: making it impossible for someone to use the application or information, whether it’s your personal website or someone trying to boot you off a game network.
  • Elevation of privilege: gaining a greater level of access to an application or to data than allowed by altering the restrictions on the user or the application (getting “root,” escaping the browser sandbox to install malware, etc.).
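As a quick illustration (mine, not Gallagher’s or Microsoft’s), the six categories fit naturally into an enumeration you can use to tag the threats in your model; the example mappings are debatable and purely illustrative:

```python
from enum import Enum

class Stride(Enum):
    # Microsoft's six STRIDE categories, as listed above.
    SPOOFING = "Spoofing identity"
    TAMPERING = "Tampering with data"
    REPUDIATION = "Repudiation"
    INFO_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION = "Elevation of privilege"

# Illustrative tagging of everyday threats by category:
threats = {
    "phishing e-mail impersonating your bank": Stride.SPOOFING,
    "ransomware encrypting your files": Stride.DENIAL_OF_SERVICE,
    "a breach that leaks your records": Stride.INFO_DISCLOSURE,
    "malware escaping the browser sandbox": Stride.ELEVATION,
}

for threat, category in threats.items():
    print(f"{category.value:22}  <-  {threat}")
```

Used this way, STRIDE doubles as a completeness check: if one of the six letters has no entries for a given asset, you may have overlooked a path.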

And, finally, what can you (stand to) do about it?

For those looking for a quick answer, the TL;DR [Internet shorthand for “too long, didn’t read” used to signal a summary] reply is:

  • Back up your stuff to the cloud or a disk drive you detach from your device;
  • Use a password manager to automate your use of separate passwords for every website;
  • Update your software whenever alerted to do so.

“Do those three things, and you’re doing better than most,” Shostack said.

Updating software and operating systems as soon as updates are available is critical to reducing your attack surface, because the moment patched bugs become widely known, they’re more likely to be exploited. Password managers will help eliminate the risk of password reuse across multiple sites. And backups, kept detached from your computer or mobile device, will minimize the amount of data lost if you get hit with ransomware or something else destructive. These three things together will vastly cut down on the attack surface available to the most common threats.

Unfortunately, these steps aren’t always enough, and they’re not always possible or practical for everyone. Not all software vendors automatically alert users to updates. Many consumer Wi-Fi router manufacturers don’t update their firmware for older models, and owners of others may miss updates since they seldom use the administrative Web console for their routers. And while some operating system upgrades are free, the new hardware and software required to support them generally isn’t.

Patching, data backup solutions, and password managers also aren’t going to prevent other threats such as phishing or efforts that exploit human weaknesses. A password manager may block fake sites attempting credential-stealing, but it won’t protect against … attacks that rely on sites you’ve already given credentials for (such as malicious Twitter or Facebook links).
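Of the three TL;DR steps, the detached backup is the one people most often put off. Here is a minimal sketch of what it can look like in practice, using only the Python standard library; the source folder and mount point are placeholders you would replace with your own:

```python
import tarfile
import time
from pathlib import Path

SOURCE = Path.home() / "Documents"     # what you want to protect
DEST = Path("/Volumes/BackupDrive")    # hypothetical mount point of a detachable drive

def backup(source: Path, dest: Path) -> Path:
    # Timestamped archives are never overwritten, which limits the
    # damage if ransomware strikes between backups.
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest / f"{source.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)
    return archive

if __name__ == "__main__":
    print(f"Wrote {backup(SOURCE, DEST)} -- now eject and physically detach the drive.")
```

The “detach” part matters as much as the copy: ransomware can reach any drive that is still mounted when it runs.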

And remember – threats are always changing, so your threat models (and your response to those threats) should, too:

Threat models are always changing, and the things you do today may not work tomorrow. And it’s often hard to tell if you’re covered for all the possible threats. This is what companies hire penetration testers for—to find the gaps in what an organization has done to secure itself. Corporations may have dedicated “red teams” to regularly look for gaps in security and identify what needs to be fixed, but most of us don’t have the resources for a personal red team.

Fortunately, teams such as Google’s Project Zero are constantly searching for bugs for us, and many vendors are quick to patch bugs when they’re found. But the job of tracking the updates for the software and devices we use is largely on us. Regular checks for updates are key to keeping on top of your personal threat model.

Also key is a regular re-assessment of how your risk exposure has changed. Regularly re-checking privacy settings, Wi-Fi router firmware, and other things that don’t necessarily alert you when updates are due is part of keeping risks under control.

None of this is a guarantee. But if you’ve done a risk model and you’ve done what you can to minimize your exposure to security and privacy risk, at least the impact of something bad happening will (hopefully) be manageable. Threat models don’t offer perfection, but they’re pretty good for avoiding a full-blown disaster.

To be sure, there are far more sophisticated approaches to threat modeling out there – and many election offices will want to develop (if they haven’t already) their own models. Still, as an introduction to the concept, this is as good as it gets. Thanks to NIST’s Josh Franklin for sharing this article and especially to Ars’ Sean Gallagher for making this important concept accessible to individuals and threat model newbies like me.

Let’s be careful out there, electiongeeks … and stay tuned!