Human Error: Living with the Weakest Link

"We have met the enemy, and he is us." -- Walt Kelly's Pogo

Computer security breaches have become so common as to seem like a force of nature or an Act of God, like hurricanes or epidemics. After each one, experts scramble to plug holes, rewrite security plans, and explain at length why that particular problem will never happen again. We want to believe that with just a few more bug fixes, our systems will be truly secure. Unfortunately, perfect security will elude us for as long as human beings are involved, because humans are imperfect. From the data entry clerk to the Director of Security to the CEO, everyone makes mistakes, and sometimes those mistakes result in security breaches, either immediately or long after the mistakes were made.

The truth is, our security systems are improving steadily, and they tend to get better after each breach. But even if we believe in the perfectibility of our security technology, human nature doesn't change. Sometimes we're fooled by social engineering attacks, sometimes we misunderstand what we're supposed to do, and sometimes we just plain mess up. Software can be more or less perfected, but not humans. In fact, the more complex the environment, the more likely we are to make mistakes.

Given that human error is as inevitable as the next California wildfire, we need to start supplementing our important efforts to educate users about security with explicit plans for handling the next security-threatening human error. Just as we can prepare for the next tsunami by building higher sea walls and zoning to discourage building in flood-prone areas, we can anticipate the ways some user will inevitably err, and plan around them.

One thing we can do is deflect errors to where they'll do the least harm. Workers in nuclear power plants have often replaced generic-looking switches with beer tap handles or other things that stand out and warn them that this is a particularly dangerous switch. This may not decrease the likelihood of a worker throwing the wrong switch, but it may decrease the likelihood of throwing the worst possible switch.

Paradoxically, an overall system is often safer if we give users fewer warnings, not more. A warning that users see too frequently can be like the boy who cried wolf; users can become so accustomed to ignoring it that they will barely notice it when it truly matters.

As our software systems grow more complex, it is high time that they developed a more complex knowledge of the user. For a brand new user, there might be many risky actions worth warning about, but by the time a user has become adept at using a system, the warnings that were initially useful become nuisances to be ignored, which can eventually cause the user to overlook the occasional truly important warning scattered among the rest. If I tell a system to delete a whole directory full of files, and I've never done anything like that before, a warning is probably in order. If I've been using the system for years and have deleted directories often in the past, the warning may do more harm -- by desensitizing me to warnings -- than good.

For this kind of user modelling to work, it is probably important that it be made completely automatic. To the extent that a human being decides who is a sophisticated user and who is not, the decision becomes a political one, with possible career-affecting consequences, and is subject to all the interpersonal drama of a performance review.
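As a minimal sketch of what such an automatic approach might look like, consider a system that keeps a per-user count of how often each risky action has been completed without incident and stops warning once the count crosses a familiarity threshold. The WarningPolicy class, the FAMILIARITY_THRESHOLD value, and the action names below are hypothetical, chosen only to make the idea concrete; they are not drawn from any particular product.

    from collections import defaultdict

    # Illustrative threshold: keep warning until the user has completed
    # the action safely this many times. The right number is an open
    # design question, not a recommendation.
    FAMILIARITY_THRESHOLD = 5

    class WarningPolicy:
        """Decide whether to warn, based on the user's own history."""

        def __init__(self):
            # counts[user][action] = times completed without incident
            self._counts = defaultdict(lambda: defaultdict(int))

        def should_warn(self, user: str, action: str) -> bool:
            """Warn novices; stay quiet for users who do this routinely."""
            return self._counts[user][action] < FAMILIARITY_THRESHOLD

        def record_success(self, user: str, action: str) -> None:
            """Call after the action completes and is not undone or reported."""
            self._counts[user][action] += 1

    # Usage: warn a first-time user about a recursive delete,
    # but not someone who has done it safely many times before.
    policy = WarningPolicy()
    if policy.should_warn("alice", "delete-directory"):
        print("Warning: this will delete the directory and everything in it.")
    policy.record_success("alice", "delete-directory")

The point of the sketch is only that the judgment is made from the user's own recorded behaviour, with no administrator deciding who counts as sophisticated.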
A scheme in which humans make that call could become an administrative nightmare, with the potential to damage team morale and performance. It seems much more likely to be effective if the system itself decides who is a sophisticated user, and matches its warnings and other constraints to the capabilities it ascribes to that user. Paradoxically, people are more likely to accept being labelled by an "impartial" computer than by a human being with whom they are already enmeshed in a complex web of relationships and dependencies.

Once upon a time, user interface design and computer security were separate issues. Today, they are inevitably intertwined. Discussions of computer security that neglect the human element are incomplete, throwbacks to a simpler age. Fortunately, for a decade now the security and usability research communities have been coming together to address these problems, although the results of their studies are not widely enough known or implemented. Anyone interested in learning more would do well to start with the proceedings of the SOUPS conferences, which can be found at http://cups.cs.cmu.edu/soups/2015/. The researchers have begun to lead the way, and a few implementors have begun to follow them, but the greatest progress will come only when executives and security specialists within each company understand the critical role of human factors in security and begin to follow the advice slowly emerging from the community of usable security researchers.