
I have noticed that as people age, they become finer and finer versions of themselves. Their eccentricities become sharper and more pronounced; their opinions and ideas more pointed and immutable; their thoughts more focused. In short, I like to say that they become more perfect versions of themselves. We see it in our friends and acquaintances and in our parents and grandparents. It seems a part of natural human development.

Back in 2006, Netflix launched the Netflix Prize, offering $1,000,000 to whoever could most improve the accuracy of its predictions of how much someone will enjoy a movie based on their movie preferences. Contestants were given access to a set of Netflix’s end-users’ movie ratings and were challenged to provide recommendations of other movies to watch that bested Netflix’s own recommendation engine. BellKor’s Pragmatic Chaos was announced as the winning team in 2009, having managed to improve Netflix’s recommendations by 10%, and walked off with the prize money.

What did they do? Basically, they algorithmically identified movies that were exceptionally similar to the ones already liked by a specific user and offered those movies as recommended viewing. And they did it really well.
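
To make that concrete, here is a minimal sketch of the item-similarity idea in Python. This is my own illustration, not BellKor’s actual method (which blended hundreds of models); the toy ratings data and function names are made up. Movies whose rating vectors line up with a movie you loved float to the top.

```python
import math

# Toy ratings matrix: user -> {movie: rating}. Purely illustrative data.
ratings = {
    "ann":  {"You've Got Mail": 5, "Sleepless in Seattle": 4, "Die Hard": 1},
    "bob":  {"You've Got Mail": 4, "Sleepless in Seattle": 5, "Die Hard": 2},
    "carl": {"You've Got Mail": 1, "Sleepless in Seattle": 2, "Die Hard": 5},
}

def movie_vector(movie):
    """Ratings for one movie, keyed by the users who rated it."""
    return {user: seen[movie] for user, seen in ratings.items() if movie in seen}

def cosine_similarity(a, b):
    """Cosine similarity between two sparse rating vectors."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[u] * b[u] for u in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(liked_movie, top_n=2):
    """Movies most similar to one the user already liked."""
    liked_vec = movie_vector(liked_movie)
    others = {m for seen in ratings.values() for m in seen} - {liked_movie}
    scored = [(cosine_similarity(liked_vec, movie_vector(m)), m) for m in others]
    return [m for _, m in sorted(scored, reverse=True)[:top_n]]

print(recommend("You've Got Mail"))
```

Scale that up to millions of users and a few hundred competing models and you get a very sharp result indeed.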

In essence, what the BellKor team did was build a better echo chamber. Every viewer is analyzed, their taste detailed, and then the algorithm perfects that taste and hones it to a razor-sharp edge. You become, say, an expert in light romantic comedies featuring a strong female lead who lives in a spacious apartment in Manhattan, with plenty of dog owners, no visible children and frequent panoramic views of Central Park.

Of course, therein lies the rub. A multifaceted rub at that. As recommendation engines become more accurate and discerning of individual tastes, they remove any element of chance, randomness or error that might serve to introduce new experiences, genres or even products into your life. You become a more perfect version of you. But in that perfection you are also stunted. You are shielded from experimentation and breadth of experience. You pick a single pond and overfish it.

There are many reasons why this is bad and we see it reflected, most obviously, in our political discourse where our interactions with opposing viewpoints are limited to exchanges of taunts (as opposed to conversations) followed by a quick retreat to the comfort of our well-constructed echo chambers of choice where our already perfected views are nurtured and reinforced.

But it also has other ramifications. If we come to know what people like to such a degree, then innovation outside safe and well-known boundaries might be discouraged. If Netflix knows that 90% of its subscribers like action/adventure films with a male hero and lots of explosions, why would it bother investing in a story about a broken family being held together by a sullen beekeeper? If retail recommendations hew toward what you are most likely to buy, how can markets of unrelated products be expanded? How can individual tastes be extended and deepened?

Extending that: why would anyone risk investment in or development of something new and radically different if the recommendation engine models cannot justify it? How can the leap be made from Zero to One, as Peter Thiel described, in a society, market or investment environment in which the recommendation data is not present and does not justify it?

There are a number of possible answers. One might be that “gut instincts” need to continue to play a role in innovation and development and investment and that risk aversion has no place in making the giant leaps that technology builds upon and needs in order to thrive.

A geekier answer is that big data isn’t yet big enough and that recommendation engines aren’t yet smart enough. A good recommendation engine will not just reinforce your prejudicial tastes; it will also challenge and extend them. We simply don’t yet have the modelling right to do that effectively. The data are there, but we don’t yet know how to mine them in a way that broadens rather than narrows our horizons. This broadening, when properly implemented, will widen markets and opportunities and increase revenue.
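
To sketch what that broadening might look like in practice (this is purely my own illustration, with made-up names, not any vendor’s algorithm): reserve a slice of every recommendation list for items drawn from genres the user has never sampled, a crude exploration step bolted onto the usual exploitation.

```python
import random

def broaden(ranked_recs, unexplored_pool, explore_fraction=0.2, seed=None):
    """Replace a fraction of the tail of a recommendation list with
    items drawn from genres the user has never sampled.

    ranked_recs      -- list of items, best match first (the echo chamber)
    unexplored_pool  -- candidate items from outside the user's profile
    explore_fraction -- how much of the list to give to serendipity
    """
    rng = random.Random(seed)
    n_explore = max(1, int(len(ranked_recs) * explore_fraction))
    keep = ranked_recs[:len(ranked_recs) - n_explore]
    wildcards = rng.sample(unexplored_pool, min(n_explore, len(unexplored_pool)))
    return keep + wildcards

# Example: a rom-com devotee still gets a documentary or two per page.
page = broaden(
    ranked_recs=["Rom-Com A", "Rom-Com B", "Rom-Com C", "Rom-Com D", "Rom-Com E"],
    unexplored_pool=["Documentary X", "Western Y", "Anime Z"],
    explore_fraction=0.2,
    seed=42,
)
print(page)
```

A real system would score the wildcards too, and learn from which ones stick, but even this crude version breaks the single-pond problem.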


Back in 1992, after the Berlin Wall fell and communist states were toppled one after another, Francis Fukuyama published a book entitled The End of History and the Last Man. It received much attention at the time for its bold and seemingly definitive statement (specifically, that whole ‘end of history’ thing, with the thesis that capitalist liberal democracy is that endpoint). The result was much press, discussion, discourse and theorizing, and presumably a higher sales volume for a book that likely still graces many a bookshelf, binding still uncracked. Now it’s my turn to be bold.

Here it is:

With the advent and popularization of the smartphone, we are now at the end of custom personal consumer hardware.

That’s it. THE END OF HARDWARE. Sure, there will be form factor changes and maybe a few additional new hardware features, but all of those changes will be incorporated into the smartphone handset as the platform.

Maybe I’m exaggerating, but only a little. Really, there’s not much more room for hardware innovation in the smartphone platform; as currently deployed, it contains the building blocks of any custom personal consumer device. Efforts are clearly being directed at gadgets to replace the cell phone, whether smart watches, wearable computers, tablets or even phablets. But these are really just changes in form, not function. Much like the evolution of the PC, mobile hardware appears to have reached the point where the added value of new hardware is incremental and less valuable. The true innovation is in the manner in which software can be used to connect resources and increase the actual or perceived power of that platform.

In the PC world, faster and faster microprocessors were of marginal utility to the great majority of end-users, who merely used their PCs for reading email or doing PowerPoint. Bloated applications (of the sort that the folks at Microsoft seem so pleased to develop and distribute) didn’t even benefit from faster processors as much as they did from cheaper memory and faster internet connections. And now we may be approaching that same place for mobile applications. The value of some of these applications is limited more by the availability of on-device resources like memory, and by the speed of the internet connection through the cell provider, than by the actual hardware features of the handset. Newer applications are more and more dependent on big data and other cloud-based resources. The handset is merely a window into those data sets. A presentation layer, if you will. Other applications use the information collected locally from the device’s sensors and hardware peripherals (geographical location, speed, direction, scanned images, sounds, etc.) in concert with cloud-based big data to provide services, entertainment and utilities.
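
To illustrate the “presentation layer” point, here is a sketch of the pattern only, in Python for brevity; the endpoint, field names and sensor values are all invented. A thin client gathers local context, hands it to the cloud, and simply renders whatever comes back.

```python
import json
import urllib.request

def read_local_sensors():
    """Stand-in for the phone's own peripherals; a real app would query
    the platform location/camera/accelerometer APIs here."""
    return {"lat": 37.33, "lon": -121.89, "heading": 270, "speed_kmh": 4}

def ask_the_cloud(sensor_data, endpoint="https://api.example.com/nearby"):
    """POST the local context to a (hypothetical) big-data service and
    return its answer; the handset itself does no heavy lifting."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(sensor_data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

def render(payload):
    """The 'presentation layer': just show what the cloud sent back."""
    for item in payload.get("results", []):
        print(f"{item['name']} - {item['distance_m']} m away")

if __name__ == "__main__":
    render(ask_the_cloud(read_local_sensors()))
```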

In addition, and more significantly, we are seeing smartphone applications being developed that use the phone’s peripherals to interface directly with other local hardware (like PCs, projectors, RC toys, headsets, etc.) to extend the functionality of those products. Why buy a presentation remote when you can get an app? Why buy a remote for your TV when you can get an app? Why buy a camera when you already have one on your phone? A compass? A flashlight? A GPS? An exercise monitor?

A consumer-targeted handheld device no longer needs an independent hardware platform. You just develop an app that uses the features of the handset you need and deploy it. Perhaps additional special-purpose sensor packs might be needed to augment the capabilities of the smartphone for specialized uses, but any mass-market application can be fully realized using the handset as the existing base and a few hours of coding.

And if you doubt that handset hardware development has plateaued, consider the evolution from the Samsung Galaxy S3 to the Samsung Galaxy S4. The key differences between the two devices are the processor capabilities and the camera resolution. The bulk of the innovations are purely software-related and could have been implemented on the Samsung Galaxy S3 itself without really modifying the hardware. The differences between the iPhone 4s and the iPhone 5s were a faster processor, a better camera and a fingerprint sensor. Judging from a completely unscientific survey of end-users that I know, the fingerprint sensor remains unused by most owners: an innovation with no perceived value.

The economics of this thesis are clear. If consumers have already spent $600 or so on a smartphone, live most of their lives on it anyway and carry it with them everywhere, are you going to have better luck selling them a new gadget for $50-$250 (that they have to order, wait for, learn how to use, get comfortable with and then carry around) or an app that they can buy for $2 and download and use in seconds, when they need it?

 


There is a great imbalance in the vast internet marketplace that has yet to be addressed and is quite ripe for the picking. In fact, this imbalance is probably at the root of the astronomical stock market valuations of existing and new companies like Google, Facebook, Twitter and their ilk.

It turns out that your data is valuable. Very valuable. And it also turns out that you are basically giving it away, not quite for free but pretty close. What you are getting in return is personalization. You get advertisements targeted at you, offering products you don’t need but are likely to find quite irresistible. You get recommendations for other sites that ensure you need never venture outside the bounds of your existing likes and dislikes. You get matched up with companies that provide services that you might or might not need but will definitely think are valuable.

Ultimately, you are giving up your data so businesses can more efficiently extract more money from you.

If you are going to get exploited in this manner, it’s time to make that exploitation a two-way street. Newspapers, for instance, are rapidly arriving at the conclusion that there is actual monetary value in the information they provide. They are seeing that the provision of vetted, verified, thoughtful and well-written information is intrinsically worth more than nothing. They have decided that simply giving this valuable commodity away for free is giving up the keys to the kingdom. The Wall Street Journal, the New York Times, The Economist and others are seeing that people are willing to pay and do actually subscribe.

There is a lesson in this for you, as a person. There is value in your data: your mobile movements, your surf trail, your shopping preferences. It should not be the case that you implicitly surrender this information for better personalization or even a $5 Starbucks gift card. This constant flow of data from you, your actions, movements and keystrokes ought to result in a constant flow of money to you. When you think about it, why isn’t the ultimate personal data collection engine, Google Glass, given away for free? Because people don’t realize that personal data collection is its primary function. Clearly, the time has come for the realization of a personal paywall.

The idea is simple: if an entity wants your information, it pays you for it. Directly. It doesn’t go to Google or Facebook and buy it; it opens up an account with you and pays you directly, at a rate that you set. Then that business can decide if you are worth what you think you are or not. You can adjust your fee up or down at any time, and you can be dropped or picked up by followers. You could provide discount tokens or free passes for friends. You could charge per click, hour, day, month or year. You might charge more for your mobile movements and less for your internet browsing trail. The data you share comes with an audit trail that ensures that if the information is passed on to others without your consent you will be able to take action, maybe even delete it, wherever it is. Maybe your data lives for only a few days or months or years, like a contract or a note, and then disappears.

Of course, you will have to do the due diligence to ensure you are selling your information to a legitimate organization and not a Nigerian prince.  This, in turn, may result in the creation of a new class of service providers who vet these information buyers.

This data reselling capability would also provide additional income to individuals. It would not be a living wage to compensate for having lost a job, but it would be some compensation for participating in Facebook or LinkedIn, or a sort of kickback for buying something at Amazon and then allowing them to target you as a consumer more effectively. It would effectively reward you for contributing the information that drives the profits of these organizations and recognize the value that you add to the system.

The implementation is challenging and would require encapsulating data in packets over which you exert some control. An architectural model similar to bitcoin’s ledger, with a record of where every bit of your data is at any time, would be valuable and probably necessary. Use of the personal paywall would likely require that you run an application on your phone or use a customized browser that releases your information only to your paid-up clients. In addition, some sort of easy, frictionless mechanism through which companies or organizations could automatically decide to buy your information, and perhaps negotiate (again automatically) with your paywall for a rate that suits both of you, would make use of the personal paywall invisible and easy. Again, this technology would have to screen out fraudulent entities and not even bother negotiating with them.
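
As a thought experiment, here is a minimal sketch of what one of those self-describing, expiring, auditable data packets and its rate check might look like. Everything here (class names, fields, rates) is hypothetical, and a real system would need cryptographic signing and the ledger mentioned above rather than an in-memory list.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class DataPacket:
    """One sellable unit of personal data, wrapped with the owner's terms."""
    owner: str
    kind: str                 # e.g. "mobile-movements" or "browse-trail"
    payload: dict
    price_per_day: float      # rate the owner sets, adjustable at any time
    expires_at: float         # epoch seconds; the data 'dies' after this
    audit_trail: list = field(default_factory=list)

    def fingerprint(self):
        """Stable hash so the owner can recognize the packet wherever it travels."""
        body = json.dumps(self.payload, sort_keys=True).encode("utf-8")
        return hashlib.sha256(body).hexdigest()

    def grant(self, buyer, offered_per_day):
        """Release the data only to a paid-up client; log every access."""
        if time.time() > self.expires_at:
            raise PermissionError("packet has expired")
        if offered_per_day < self.price_per_day:
            raise PermissionError("offer below the owner's asking rate")
        self.audit_trail.append({"buyer": buyer, "paid": offered_per_day,
                                 "at": time.time(), "id": self.fingerprint()})
        return self.payload

# Usage: a retailer that meets your rate gets the data; a lowballer does not.
packet = DataPacket(owner="me", kind="browse-trail",
                    payload={"sites": ["news", "shoes", "gadgets"]},
                    price_per_day=0.25, expires_at=time.time() + 30 * 86400)
print(packet.grant("RetailerCo", offered_per_day=0.30))
```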

There is much more to this approach to consider and many more challenges to overcome.  I think, though, that this is an idea that could change the internet landscape and make it more equitable and ensure the true value of the internet is realized and shared by all its participants and users.


I admit it. I got a free eBook. I signed up with O’Reilly Media as a reviewer. The terms and conditions of this position are that when I get an eBook, I agree to write a review of it. It doesn’t matter if the review is good or bad (so I guess, technically, this is NOT log rolling). I just need to write a review. And if I post the review, I get to choose another eBook to review. And so on. So, here it is. The first in what will likely be an irregular series. My review.

The book under review is “The Basics of Web Hacking” subtitled “Tools and Techniques to Attack the Web” by Josh Pauli. The book was published in June, 2013 so it is fairly recent.  Alas, recent in calendar time is actually not quite that recent in Internet time – but more on this later.

First, a quick overview. The book provides a survey of hacking tools of the sort that might be used either for the good of mankind (to test and detect security issues in a website and application installation) or for the destruction of man and the furtherance of evil (to identify and exploit security issues in a website and application installation). The book includes a several-page disclaimer advising against the latter behavior, suggesting that the eventual outcomes of such a path may not be pleasant. I would say that the disclaimer section is written thoughtfully, with the expectation that readers will take its warnings seriously.

For the purposes of practice, the book introduces the Damn Vulnerable Web Application (DVWA).  This poorly-designed-on-purpose web application allows you to use available tools and techniques to see exactly how vulnerabilities are detected and exploits deployed. While the book describes utilizing an earlier version of the application, figuring out how to install and use the newer version that is now available is a helpful and none-too-difficult experience as well.

Using DVWA as a test bed, the book walks you through the jargon, then the techniques, and then practical exercises in the world of hacking. It covers scanning, exploitation, vulnerability assessment and attacks suited to each vulnerability, including a decent overview of the vast array of available tools that facilitate these actions. The number of widely available, very well-built applications with easy-to-use interfaces is overwhelming and, quite frankly, scary. Additionally, a plethora of web sites provide repositories of information about sites already known to be vulnerable and how they are vulnerable (in many cases these sites remain vulnerable despite having been notified).

The book covers the usage of applications such as Burp Suite, Metasploit, nmap, Nessus, Nikto and The Social Engineer Toolkit. Of course, you could simply download these applications and try them out, but the book marches through a variety of useful hands-on experiments that exhibit typical real-life usage scenarios. The book also describes how the various applications can be used in combination with each other, which can make investigation and exploitation easier.
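
Just to give a flavor of what those tools automate (this snippet is mine, not from the book), here is the kind of naive parameter probing a scanner performs, pointed at a local DVWA install. The URL, cookie values and “suspicious” heuristic are placeholders, and, per the book’s own disclaimer, this belongs only on a test system you own.

```python
import urllib.parse
import urllib.request

# Only ever point this at your own test install (e.g. a local DVWA VM).
BASE = "http://127.0.0.1/dvwa/vulnerabilities/sqli/"   # hypothetical local path
COOKIE = "security=low; PHPSESSID=put-your-session-id-here"

# A couple of classic probes; real tools iterate over thousands of these.
PROBES = ["1", "1'", "1' OR '1'='1"]

def fetch(user_id):
    """Request the vulnerable page with a candidate id parameter."""
    query = urllib.parse.urlencode({"id": user_id, "Submit": "Submit"})
    req = urllib.request.Request(f"{BASE}?{query}", headers={"Cookie": COOKIE})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode("utf-8", errors="replace")

for probe in PROBES:
    page = fetch(probe)
    # A SQL error message or a suspiciously different page hints at injectability.
    suspicious = "SQL syntax" in page or "error" in page.lower()
    print(f"{probe!r:20} -> {len(page)} bytes, suspicious={suspicious}")
```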

In the final chapter, the book describes design methods and application development rules that can either correct or minimize most vulnerabilities as well as providing a relatively complete list of “for further study” items that includes books, groups, conferences and web sites.

All in all, this book provides a valuable primer and introduction to detecting and correcting vulnerabilities in web applications.  Since the book is not that old, changes to applications are slight enough that figuring out what the changes are and how to do what the book is describing is a great learning experience rather than simply an exercise in frustration. These slight detours actually serve to increase your understanding of the application.

I say 4.5 stars out of 5 (docked half a star because these subject areas tend to go out of date too quickly, but if you read it NOW you are set to grow with the field).

See you at DEFCON!


Let me start by being perfectly clear. I don’t have Google Glass. I’ve never seen a pair live. I’ve never held or used the device. So basically, I just have strong opinions based on what I have read and seen and, of course, the way I have understood what I have read and seen. Sergey Brin recently gave a TED talk about Google Glass during which, after sharing a glitzy, well-produced video commercial for the product, he maintained that they developed Google Glass because burying your head in a smartphone is rude and anti-social. Presumably staring off into the projected images produced by Google Glass, while still avoiding eye contact and real human interaction, is somehow less rude and less anti-social. But let that alone for now.

The “what’s in it for me” of Google Glass is the illusion of intelligence (or at least the ability to instantly access facts), Internet-based real-time social sharing, real-time scrapbooking and interactive memo taking amongst other Dick Tracy-like functions.

What’s in it for Google is obvious. At its heart, Google is an advertising company, or rather an advertising distribution company. They are a platform for serving up advertisements for all manner of products and services. Their ads are more valuable if they can directly target people with ads for products or services at a time and place when the confluence of the advertisement and the reality yields a situation in which the person is almost compelled to purchase what is on offer, because it is exactly what they want when they want it. This level of targeting is enhanced when Google knows what you like (Google+, Google Photos (formerly Picasa)), how much money you have (Google Wallet), where you are (Android), what you already have (Google Shopping), what you may be thinking (GMail), who you are with (Android) and what your friends and neighbors have and think (all of the aforementioned). Google Glass, by recording location data and images and registering your likes and purchases, can work to build and enhance such a personal database. Even if you choose to anonymize yourself and force Google to de-personalize your data, their guesses may be less accurate, but they will still know about you as a demographic group (male, aged 30-34, lives in zip code 95123, etc.) and perhaps general information based on your locale and the places you visit. So, I immediately see the value of Google Glass for Google and Google’s advertising customers but see less value in its everyday use by ordinary folks, unless they seek to be perceived as cold, anti-social savants who may possibly be on the Autistic Spectrum.

I don’t want to predict that Google Glass will be a marketplace disaster but the value statement for it appears to be limited.  A lot of the capabilities touted for it are already on your smartphone or soon to be released for it.  There is talk of image scanning applications that immediately bring up information about whatever it is that you’re looking at.  Well, Google’s own Goggles is an existing platform for that and it works on a standard mobile phone.  In fact, all of the applications touted thus far for Google Glass rely on some sort of visual analysis or geolocation-based look-up that is equally applicable to anything with a camera. It seems to me that the “gotta have the latest gadget” gang will flock to Google Glass as they always do to these devices but appealing to the general public may be a more difficult task.  Who really wants to wear their phone on their face?  If the benefit of Google Glass is its wearability then maybe Apple’s much-rumored iWatch is a less intrusive and less nerdy looking alternative.  Maybe Apple still better understands what people really want when it comes to mobile connectivity.

Ultimately, Google Glass may be a blockbuster hit or just an interesting (but expensive) experiment.  We’ll find out by the end of the year.


That venerable electronic test standard IEEE Std 1149.1 (also known as JTAG; also known as Boundary-Scan; also known as Dot 1) has just been freshened up. This is no ordinary freshening. The standard, last revisited in 2001, is long overdue for some clarification and enhancement. It’s been a long time coming and now…it’s here. While the guts remain the same and in good shape, some very interesting options and improvements have been added. The improvements are intended to provide support for testing and verification of the more complex devices currently available and to acknowledge the more sophisticated test algorithms and capabilities afforded by the latest hardware. There is an attempt, as well (perhaps only as well as one can do this sort of thing), to anticipate future capabilities and requirements and to provide a framework within which such capabilities and requirements can be supported. Of course, since the bulk of the changes are optional, their value will only be realized if the end-user community embraces them.

There are only minor clarifications or relaxations to the rules that are already established. For the most part, components compliant with the previous version of the standard will remain compliant with this one. There is but one “inside baseball” sort of exception: the long denigrated and deprecated BC_6 boundary-scan cell has finally been put to rest. It is, as of the 2013 version, no longer supported or defined, so any component supplier who chose to utilize this boundary-scan cell, despite all warnings to the contrary, must now provide their own BSDL package defining the BC_6 cell if they upgrade to using the STD_1149_1_2013 standard package for their BSDL definitions.

While this is indeed a major revision, I must again emphasize that all of the new items introduced are optional. One of the largest changes is in documentation capability: the introduction of a new executable description language, the Procedural Description Language (PDL), to document test procedures unique to a component. PDL, a Tcl-like language, was adopted from the work of the IEEE P1687 working group. 1687 is a proposed IEEE standard for the access to and operation of embedded instruments (1687 is therefore also known as iJTAG or Instrument JTAG). The first iteration of that standard was based on use of the 1149.1 Test Access Port and controller to provide chip access, and a set of modified 1149.1-type test data registers to create an access network for the embedded instruments. PDL was developed to describe access to and operation of these embedded instruments.

Now, let’s look at the details.  The major changes are as follows:

In the standard body:

  • In order to allow devices to maintain their test logic in test mode, a new, optional test mode persistence controller was introduced. This means that test logic (like the boundary-scan register) can remain behaviorally in test mode even if the active instruction does not force test mode. To support this, the TAP controller was cleaved into two parts: one that controls test mode and another that retains the rest of the TAP functionality. In support of this new controller, there are three new instructions: CLAMP_HOLD and TMP_STATUS (both of which access the new TMP status test data register) and CLAMP_RELEASE.
  • Recognizing the emerging requirement for unique device identification codes, a new, optional ECIDCODE instruction was introduced along with an associated electronic chip identification test data register. This instruction-register pair is intended to supplement the existing IDCODE and USERCODE instructions and allow access to an electronic chip identification value that could be used to identify and track individual integrated circuits.
  • The problem of initializing a device for test has been addressed by providing a well-defined framework that formalizes this process. The new, optional INIT_SETUP, INIT_SETUP_CLAMP and INIT_RUN instructions, paired with their associated initialization data and initialization status test data registers, were provided to this end. The intent is that these instructions formalize the manner in which programmable input/output (I/O) can be set up prior to board or system testing, as well as providing for the execution of any tasks required to put the system logic into a safe state for test.
  • Recognizing that resetting a device can be complex and require many steps or phases, a new, optional IC_RESET instruction and its associated reset_select test data register are defined to provide formalized control of component reset functions through the TAP.
  • Many devices now have a number of separate power domains, which could result in sections of the device being powered down while others are powered up. A single, uniform boundary-scan register does not align well with that device style. So, to support a test data register routed through domains that may be powered down, an optional standard TAP-to-test-data-register interface is recommended that allows for segmentation of test data registers. The concept of register segments allows for segments that may be excluded or included and is generalized sufficiently for use beyond the power domain example.
  • There have also been a few enhancements to the boundary-scan register description to incorporate the following:
    1. Optional excludable (but not selectable) boundary-scan register segments
    2. Optional observe-only boundary-scan register cells to redundantly capture the signal value on all digital pins except the TAP pins
    3. Optional observe-only boundary-scan register cells to capture a fault condition on all pins, including non-digital pins, except the TAP pins.

The Boundary Scan Description Language annex was rewritten and includes:

  • Increased clarity and consistency based on end-user feedback accumulated over the years.
  • A technical change was made such that BSDL is no longer a “proper subset” of VHDL, but it is now merely “based on” VHDL. This means that BSDL now maintains VHDL’s flavor but has for all intents and purposes been “forked”.
  • As result of this forking, formal definitions of language elements are now included in the annex instead of reliance on inheritance from VHDL.
  • Also as a result of this forking, some changes to the BNF notation used, including definition of all the special character tokens, are in the annex.
  • Pin mapping now allows for documenting that a port is not connected to any device package pin in a specific mapped device package.
  • The boundary-scan register description introduces new attributes for defining boundary-scan register segments, and introduces a requirement for documenting the behavior of an un-driven input.
  • New capabilities are introduced for documenting the structural details of test data registers:
    1. Mnemonics may be defined that may be associated with register fields.
    2. Name fields within a register or segment may be defined.
    3. Types of cells used in a test data register (TDR) field may be defined.
    4. One may hierarchically assemble segments into larger segments or whole registers.
    5. Constraints may be defined on the values to be loaded in a register or register field.
    6. A register field or bit may be associated with specific ports.
    7. Power ports may be associated with other ports.
  • The User Defined Package has been expanded to support logic IP providers who may need to document test data register segments contained within their IP.

As I stated earlier, a newly adopted language, PDL, has been included in this version of the standard.  The details of this language are included as part of Annex C. PDL is designed to document the procedural and data requirements for some of the new instructions. PDL serves a descriptive purpose in that regard but, as such, it is also executable should a system choose to interpret it.

It was decided to adopt and develop PDL to support the new capability of initializing internal test data register fields and configuring complex I/Os prior to entering the EXTEST instruction. Since the data required for initialization could vary for each use of the component on each distinct board or system design, there needed to be an algorithmic way to describe the data set-up and its application. It was therefore decided to adopt PDL and tailor it to the BSDL register descriptions and the needs of IEEE 1149.1.

Since the concept of BSDL and PDL working together is new and best explained via examples, Annex D supplies extended examples of BSDL and PDL used together to describe the structure of, and the procedures for using, the new capabilities. Similarly, Annex E provides example pseudo-code for the execution of the PDL iApply command, the most complex of the new PDL commands.

So that is the new 1149.1 in a nutshell. A fair amount of new capabilities. Some of it complex. All of it optional.  Will you use it?


Scott McNealy, the former CEO of the former Sun Microsystems, said in a late-1990s address to the Commonwealth Club that the future of the Internet is in security. Indeed, it seems that much effort and capital have been invested in addressing security matters: encryption, authentication, secure transaction processing, secure processors, code scanners, code verifiers and a host of other approaches to make your system and its software and hardware components into a veritable Fort Knox. And it’s all very expensive and quite time consuming (both in development and in actual processing). And yet we still hear of routine security breaches, data and identity theft, on-line fraud and other crimes. Why is that? Is security impossible? Unlikely? Too expensive? Misused? Abused? A fiction?

Well, in my mind, there are two issues, and they are the weak links in any security endeavour. The two actually have one common root. That common root, as Pogo might say, “is us”. The first one, which has been in the press much of late (and always), is the reliance on passwords. When you let customers in and provide them security using passwords, they sign up using passwords like ‘12345’ or ‘welcome’ or ‘password’. That is usually combated through the introduction of password rules. The rules usually require that passwords meet some minimum level of complexity, typically something like requiring that each password have a letter, a number and a punctuation mark and be at least 6 characters long. This might cause some customers to get so aggravated that they can’t use their favorite password that they don’t bother signing up at all. Other end users get upset but change their passwords to “a12345!” or “passw0rd!” or “welc0me!”. And worst of all, they write the password down and put it on a sticky note on their computer.
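
For the curious, the typical complexity rule is only a few lines of code, which is part of why it gives such a false sense of security. A minimal sketch (my own, with a made-up list of the usual suspects) shows how easily “passw0rd!” sails straight through:

```python
import string

COMMON = {"12345", "welcome", "password", "a12345!", "passw0rd!", "welc0me!"}

def meets_policy(pw):
    """The typical rule: at least 6 chars with a letter, a digit and punctuation."""
    return (len(pw) >= 6
            and any(c.isalpha() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in string.punctuation for c in pw))

for pw in ["12345", "welcome", "passw0rd!", "Tr0ub4dor&3"]:
    verdict = "rejected" if not meets_policy(pw) else (
        "accepted (but laughably guessable)" if pw in COMMON else "accepted")
    print(f"{pw:12} -> {verdict}")
```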

Of course, ordinary users are not the only ones to blame; administrators are human, too, and equally fallible. Even though they should know better, they are just as likely to have left the root or administrator password at the default “admin” or even at nothing at all.

The second issue is directly the fault of the administrator – but it is wholly understandable. Getting a system, well a complete network of systems, working and functional is quite an achievement. It is not something to be toyed around with once things are set. When your OS supplier or application software provider delivers a security update, you will think many times over before risking system and network stability to apply it. The choice must be made. The administrator thinks: “Do I wreak havoc on the system – even theoretical havoc – to plug a security hole no matter how potentially damaging?” And considers that: “Maybe I can rely on my firewall…maybe I rely on the fact that our company isn’t much of a target…or I think it isn’t.” And rationalizes: “Then I can defer the application of the patch for now (and likely forever) in the name of stability.”

The bulk of hackers aren’t evil geniuses who stay up late at night doing forensic research and decompilation to find flaws, gaffes, leaks and holes in software and systems. No, they are much more likely to be people who read a little about the latest flaws and the most popular passwords and spend their nights just trying stuff to see what they can see. A few of them even specialize in social engineering, in which they simply guess your password or trick you into divulging it, maybe by examining your online social media presence.

The notorious Stuxnet malware worm may be a complex piece of software engineering, but it would have done nothing were it not for the peril of human curiosity. The virus allegedly made its way into secure facilities on USB memory sticks. Those memory sticks were carried in human hands and inserted into the targeted computers by those same hands. How did they get into those human hands? A few USB sticks with the virus were probably sprinkled in the parking lot outside the facility. Studies have determined that people will pick up USB memory sticks they find and insert them into their PCs about 60% of the time. The interesting thing is that the likelihood of grabbing and using those USB devices goes up to over 90% if the device has a logo on it.

You can have all the firewalls and scanners and access badges and encryption and SecureIDs and retinal scans you want. In the end, one of your best and most talented employees grabbing a random USB stick and using it on his PC can be the root cause of devastation that could cost you staff years of time to undo.

So what do you do? Fire your employees? Institute policies so onerous that no work can be done at all? As is usual, the best thing to do is apply common sense. If you are not a prime target like a government, a security company or a repository of reams of valuable personal data – don’t go overboard. Keep your systems up-to-date. The time spent now will definitely pay off in the future. Use a firewall. A good one. Finally, be honest with your employees. Educate them helpfully. None of the scare tactics, no “Loose Lips Sink Ships”, just straight talk and a little humor to help guide and modify behavior over time.
