Scott McNealy, the former CEO of the former Sun Microsystems, told the Commonwealth Club in the late 1990s that the future of the Internet is in security. Indeed, much effort and capital have been invested in addressing security matters: encryption, authentication, secure transaction processing, secure processors, code scanners, code verifiers and a host of other approaches to make your system and its software and hardware components into a veritable Fort Knox. And it's all very expensive and quite time consuming (both in development and in actual processing). And yet we still hear of routine security breaches, data and identity theft, on-line fraud and other crimes. Why is that? Is security impossible? Unlikely? Too expensive? Misused? Abused? A fiction?
Well, in my mind, there are two issues, and they are the weak links in any security endeavour. The two actually have one common root. That common root, as Pogo might say, "is us". The first, which has been in the press very much of late (and always), is the reliance on passwords. When you let customers in and secure their accounts with passwords, they sign up using passwords like '12345' or 'welcome' or 'password'. That is usually combated by introducing password rules, which require that each password meet some minimum level of complexity: typically, that it contain a letter, a number and a punctuation mark, and be at least 6 characters long. This aggravates some customers so much that, unable to use their favorite password, they don't bother signing up at all. Other end users get upset but change their passwords to "a12345!" or "passw0rd!" or "welc0me!". And worst of all, they write the password down and put it on a sticky note on their computer.
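To make the irony concrete, here is a minimal sketch (in Python, with a hypothetical `meets_policy` function) of the kind of complexity rule described above. Note how the predictable workarounds users actually choose sail right through it:

```python
import re

def meets_policy(password: str) -> bool:
    """Sketch of a minimal complexity policy: at least 6 characters,
    containing at least one letter, one digit, and one punctuation mark."""
    if len(password) < 6:
        return False
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"\d", password) is not None
    has_punct = re.search(r"[!-/:-@\[-`{-~]", password) is not None
    return has_letter and has_digit and has_punct

# The favorites are rejected...
print(meets_policy("12345"))      # False: too short, no letter or punctuation
print(meets_policy("welcome"))    # False: no digit or punctuation
# ...but their trivially guessable variants pass with flying colors.
print(meets_policy("a12345!"))    # True
print(meets_policy("passw0rd!"))  # True
```

The rule does exactly what it was asked to do, and yet the resulting passwords are barely harder to guess than the originals, which is rather the point of this section.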
Of course, ordinary users are not the only ones to blame. Administrators are human, too, and equally fallible. Even though they should know better, they are just as likely to leave the root or administrator password at the default "admin", or even blank.
The second issue is directly the fault of the administrator, but it is wholly understandable. Getting a system, well, a complete network of systems, working and functional is quite an achievement. It is not something to be toyed with once things are set. When your OS supplier or application software provider delivers a security update, you will think many times over before risking system and network stability to apply it. The choice must be made. The administrator thinks: "Do I wreak havoc on the system, even theoretical havoc, to plug a security hole, however potentially damaging?" And considers: "Maybe I can rely on my firewall... maybe I can rely on the fact that our company isn't much of a target... or I think it isn't." And rationalizes: "Then I can defer applying the patch for now (and likely forever) in the name of stability."
The bulk of hackers aren't evil geniuses who stay up late at night doing forensic research and decompilation to find flaws, gaffes, leaks and holes in software and systems. No, they are much more likely to be people who read a little about the latest flaws and the most popular passwords and spend their nights just trying stuff to see what they can see. A few of them even specialize in social engineering, in which they simply guess your password or trick you into divulging it, perhaps by examining your online social media presence.
The notorious Stuxnet worm may be a complex piece of software engineering, but it would have done nothing were it not for the peril of human curiosity. The worm allegedly made its way into secure facilities on USB memory sticks. Those memory sticks were carried in human hands and inserted into the targeted computers by those same hands. How did they get into those human hands? A few USB sticks carrying the worm were probably sprinkled in the parking lot outside the facility. Studies have found that people will pick up USB memory sticks they find and insert them into their PCs about 60% of the time. The interesting thing is that the likelihood of grabbing and using those USB devices rises to over 90% if the device has a logo on it.
You can have all the firewalls and scanners and access badges and encryption and SecurIDs and retinal scans you want. In the end, one of your best and most talented employees grabbing a random USB stick and using it on his PC can be the root cause of devastation that could cost you staff-years of time to undo.
So what do you do? Fire your employees? Institute policies so onerous that no work can get done at all? As usual, the best thing to do is apply common sense. If you are not a prime target, like a government, a security company or a repository of reams of valuable personal data, don't go overboard. Keep your systems up to date; the time spent now will pay off in the future. Use a firewall. A good one. Finally, be honest with your employees. Educate them helpfully. No scare tactics, no "Loose Lips Sink Ships", just straight talk and a little humor to help guide and modify behavior over time.