Archive for 'Consulting'

I admit it. I got a free eBook. I signed up with O’Reilly Media as a reviewer. The terms and conditions of this position were that when I get an eBook, I agree to write a review of it. It doesn’t matter if the review is good or bad (so I guess, technically, this is NOT logrolling). I just need to write a review. And if I post the review, I get to choose another eBook to review. And so on. So, here it is. The first in what will likely be an irregular series. My review.

The book under review is “The Basics of Web Hacking”, subtitled “Tools and Techniques to Attack the Web”, by Josh Pauli. The book was published in June 2013, so it is fairly recent. Alas, recent in calendar time is not quite that recent in Internet time – but more on this later.

First, a quick overview. The book provides a survey of hacking tools of the sort that might be used either for the good of mankind (to test and detect security issues in a website and application installation) or for the destruction of man and the furtherance of evil (to identify and exploit security issues in a website and application installation). The book includes a several-page disclaimer advising against the latter behavior, suggesting that the eventual outcomes of such a path may not be pleasant. I would say that the disclaimer section is written thoughtfully, with the expectation that readers will take its warnings seriously.

For the purposes of practice, the book introduces the Damn Vulnerable Web Application (DVWA).  This poorly-designed-on-purpose web application allows you to use available tools and techniques to see exactly how vulnerabilities are detected and exploits deployed. While the book describes utilizing an earlier version of the application, figuring out how to install and use the newer version that is now available is a helpful and none-too-difficult experience as well.
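
For reference, getting the newer version running is only a handful of steps. Here is a minimal sketch, assuming an Apache/MySQL/PHP host is already in place; the repository location and file names are simply the ones I found and may well change:

# fetch the current DVWA source into the web root (repository location may move over time)
cd /var/www/html
git clone https://github.com/digininja/DVWA.git dvwa
# copy the sample configuration and fill in your database credentials
cp dvwa/config/config.inc.php.dist dvwa/config/config.inc.php
# then browse to http://localhost/dvwa/setup.php and create/reset the database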

Using DVWA as a test bed, the book walks you through jargon, then techniques, then practical exercises in the world of hacking. It covers scanning, exploitation, vulnerability assessment and attacks suited to each vulnerability, including a decent overview of the vast array of available tools that facilitate these actions. The number of widely available, very well built applications with easy-to-use interfaces is overwhelming and, quite frankly, scary. Additionally, a plethora of web sites serve as repositories of information about web sites already known to be vulnerable and how they are vulnerable (in many cases these sites remain vulnerable despite having been notified).

The book covers the use of applications such as Burp Suite, Metasploit, nmap, Nessus, Nikto and The Social-Engineer Toolkit. Of course, you could simply download these applications and try them out, but the book marches through a variety of useful hands-on experiments that exhibit typical real-life usage scenarios. The book also describes how the various applications can be used in combination with each other, which can make investigation and exploitation easier.
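
To give a flavor of the hands-on style, here is the sort of two-step reconnaissance the book walks you through – a port scan feeding a web server scan. The target address below is hypothetical (a local DVWA machine in my case) and the flag choices are just one reasonable starting point:

# probe the lab box for open ports and service versions (target IP is made up)
nmap -sV 192.168.56.101
# then point Nikto at the web server it finds to enumerate known weaknesses
nikto -h http://192.168.56.101/dvwa/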

In the final chapter, the book describes design methods and application development rules that can correct or at least minimize most vulnerabilities, and it provides a relatively complete list of “for further study” items that includes books, groups, conferences and web sites.

All in all, this book provides a valuable primer and introduction to detecting and correcting vulnerabilities in web applications.  Since the book is not that old, changes to applications are slight enough that figuring out what the changes are and how to do what the book is describing is a great learning experience rather than simply an exercise in frustration. These slight detours actually serve to increase your understanding of the application.

I say 4.5 stars out of 5 (docked half a star because these subject areas tend to get out of date too quickly – but if you read it NOW you are set to grow with the field).

See you at DEFCON!


Everyone who is anyone loves bandying about the name of Clayton Christensen, the famed Professor of Business Administration at the Harvard Business School, who is regarded as one of the world’s top experts on innovation and growth and who is most famous for coining the term “disruptive innovation“. Briefly, the classical meaning of the term is as follows. A company, usually a large one, focuses on serving the high-end, high-margin part of its business and in doing so leaves an opening at the low-end, low-margin market segment. This allows small, nimble, hungry innovators to get a foothold in the market by providing cheap but good-enough products to the low end, who are otherwise forsaken by the large company that is only willing to provide high-priced, over-featured products. These small innovators use their foothold to innovate further upmarket, providing products of increasingly better functionality at lower cost than the Big Boys at the high end. The Big Boys are happy with this because those lower-margin products are a lot of effort for little payback, and “The Market” rewards them handsomely for doing incremental innovation at the high end and maintaining high margins. In the fullness of time, the scrappy little innovators disrupt the market with cheaper, better and more innovative solutions and products that catch up to and eclipse the offerings of the Big Boys, catching them off guard, and the once-large corporations, with their fat margins, become small, meaningless boutique firms. Thus the market is disrupted, and the once regal and large companies, even though they followed all the appropriate rules dictated by “The Market”, falter and die.

Examples of this sort of evolution are many. The Japanese automobile manufacturers used this sort of approach to disrupt the large American manufacturers in the 70s and 80s; the same with minicomputers versus mainframes and then PCs versus minicomputers; to name but a few. But when you think about it, sometimes disruption comes “from above”. Consider the iPod. Remember when Apple introduced their first music player? They weren’t the first to market, as there were literally tens of MP3 players available. They certainly weren’t the cheapest, as about 80% of the portable players had a price point well below Apple’s $499 MSRP. The iPod did have more features than most other players available and was in many ways more sophisticated – but $499? The iPod was more expensive, more fully featured, had more storage than anyone could ever imagine needing and carried bigger margins than any other similar device on the market. And it was a huge hit. (I personally think that the disruptive part was iTunes, which made downloading music safe, legal and cheap at a time when the RIAA was making headlines by suing ordinary folks for thousands of dollars for illegal music downloads – but enough about me.) From the iPod, Apple went on to innovate a few iPod variants, the iPhone and the iPad, as well as incorporating some of the acquired knowledge into the Mac.

And now, I think, another similarly modeled innovation is upon us. Consider Tesla Motors. It started with the now-discontinued Roadster – a super-high-end luxury two-seater sports car that was wholly impractical and basically a plaything for the 1%. But it was a great platform to collect data and learn about batteries, charging, performance, efficiency, design, use and utility. Then came the Model S which, while still quite expensive, brought the price within reach of perhaps the 2% or even the 3%. In Northern California, for instance, Model S cars populate the roadways seemingly with the regularity of VW Beetles. Of course, part of what makes them seem so common is that their generic luxury-car styling makes them nearly indistinguishable, at first glance, from a Lexus, Jaguar, Infiniti, Maserati, Mercedes-Benz, BMW and the like. The choice of styling is perhaps yet another avenue of innovation. Unlike the Toyota Prius, whose iconic design became a “vector” sending a message to even the casual observer about the driver and perhaps the driver’s social and environmental concerns, the message of the Tesla’s generic luxury-car design to the casual observer merely seems to be “I’m rich – but if you want to learn more about me, you better take a closer look”. Yet even while attracting only this small market segment, Tesla was able to announce profitability for the first time.

With their third-generation vehicle, Tesla promises to reduce the selling price by 40% from the current Model S. This would bring the base price to about $30,000, which is in line with the average selling price of new cars in the United States. Even without the lower-priced vehicle available, Tesla is being richly rewarded by The Market thanks to a good product (some might say great), some profitability, excellent and savvy PR and lots and lots of promise of a bright future.

But it is the iPod model all over again. Tesla is serving the high end and selling top-of-the-line technology. They are developing their technology within a framework that is bound mostly by their ability to innovate, not by cost or selling price. They are also acting in a segment of the market that is not really well contested (high-end luxury electric cars). This gives them freedom from the pressures of competition and schedules – which gives them an opportunity to get things right rather than rushing out ‘something’ to appease the market. And with their success in that market, they are turning around and using what they have learned to figure out how to build the same thing (or a similar thing) more cheaply and more efficiently to bring the experience to the masses (think: iPod to Nano to Shuffle). This will also let them ease their way into competing at the lower end with the Nissan Leaf, the Chevy Volt, the Fiat 500e and the like.

Maybe the pathway to innovation really is from the high-end down to mass production?


I’ve had occasion to be interviewed for positions at a variety of technology companies. Sometimes the position actually exists, other times it might exist, and at still other times the folks are just fishing for solutions to their problems and hoping to save a little from their consulting budget. In all cases, the goal of the interview is primarily to find out what you know and how well you know it in a 30- to 45-minute conversation. It is interesting to see how some go about doing it. My experience has been that an interview really tells you nothing but does give you a sense of whether the person is nice enough to “work well with others“.

But now, finally, folks at Google have used big data to figure out something that has been patently obvious to anyone who has either interviewed for a job or interviewed someone for a job. An article published in The New York Times details a talk with Mr. Laszlo Bock, senior vice president of people operations at Google. In it, he shared that puzzle questions don’t tell you anything about anyone. I maintain that they tell you whether someone has heard that particular puzzle question before. In the published interview Mr. Bock, less charitably, suggests that they merely serve to puff up the ego of the interviewer.

I think it’s only a matter of time before big data is used again to figure out another obvious fact – that even asking simple or complex programming questions is no indicator of on-the-job success. Especially now, in the age of Google and open-source software. Let’s say you want to write some code to sort a string of arbitrary letters and determine the computational complexity: a few quick Google searches and presto – you have the solution. You need to understand the question and the nature of the problem, but the solution itself has merely become a matter of copying from your betters and equals who shared their ideas on the Internet. Of course, such questions are always made more useless when the caveat is added – “without using the built-in sort function” – which is, of course, the way you would actually solve it in real life.
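
To make the point concrete, the ‘answer’ to that sorting question is a one-liner in just about any environment. Here is a sketch in the shell (any scripting language looks much the same), with the complexity being whatever the built-in sort gives you – typically O(n log n):

# sort the letters of a string using the built-in tools
echo "interview" | grep -o . | sort | tr -d '\n'; echo
# prints: eeiinrtvw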

Another issue I see is the concern about experience with a specific programming language. I recall that the good people at Apple are particularly fond of Objective-C, to the point where they believe that unless you have had years of direct experience with it, you could never use it to program effectively. Of course, this position is insulting both to any competent programmer and to the Objective-C language. The variations between these algorithmic control-flow languages are sometimes subtle, usually stylistic, but always easily understood. This is true of any programming language. In reality, if you are competent at any one, you should easily be able to master any other. For instance, Python uses indentation but C uses curly braces to delineate code blocks. Certainly there are other differences, but give any competent developer a few days and they can figure it out, leveraging their existing knowledge.

But that still leaves the hard question. How do you determine competency? I don’t think you can figure it out in a 45-minute interview – or a 45-hour one, for that matter – if the problems and work conditions are artificial. I think the first interview should be primarily behavioral and focus on fit, and then, if that looks good, the hiring entity should pay you to come in and work for a week solving an actual problem with the team that would be yours. This makes sense in today’s world of limited, at-will employment where everyone is really just a contractor waiting to be let go. In this approach, everyone gets to see how you fit in with the team, how productive you can be, how quickly you can come up to speed on a basic issue and how you actually work a problem to a solution in the true environment. This is very different from establishing that you can minimize the number of trips a farmer takes across a river with five foxes, three hens, six bags of lentils, a sewing machine and a trapeze.

I encourage you to share some of your ideas for improving the interview process.


Scott McNealy, the former CEO of the former Sun Microsystems, said in a late-1990s address to the Commonwealth Club that the future of the Internet is in security. Indeed, it seems that much effort and capital have been invested in addressing security matters: encryption, authentication, secure transaction processing, secure processors, code scanners, code verifiers and a host of other approaches to make your system and its software and hardware components into a veritable Fort Knox. And it’s all very expensive and quite time consuming (both in development and in actual processing). And yet we still hear of routine security breaches, data and identity theft, online fraud and other crimes. Why is that? Is security impossible? Unlikely? Too expensive? Misused? Abused? A fiction?

Well, in my mind, there are two issues, and they are the weak links in any security endeavour. The two actually have one common root. That common root, as Pogo might say, “is us”. The first one, which has been in the press very much of late (and always), is the reliance on passwords. When you let the customers in and provide them security using passwords, they sign up using passwords like ‘12345’ or ‘welcome’ or ‘password’. That is usually combated through the introduction of password rules. Rules usually indicate that passwords must meet some minimum level of complexity, typically something like requiring that each password have a letter, a number and a punctuation mark and be at least 6 characters long. This might cause some customers to get so aggravated because they can’t use their favorite password that they don’t bother signing up at all. Other end users get upset but change their passwords to “a12345!” or “passw0rd!” or “welc0me!”. And worst of all, they write the password down and put it on a sticky note on their computer.
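
To illustrate how little those rules actually buy you, a typical complexity check amounts to something like the sketch below (a hypothetical helper, not any particular site’s code) – and “a12345!”, “passw0rd!” and “welc0me!” all sail right through it:

# minimal "complexity" check: 6+ characters with a letter, a digit and a punctuation mark
check_password() {
    local pw="$1"
    [ "${#pw}" -ge 6 ]                        || return 1
    printf '%s' "$pw" | grep -q '[A-Za-z]'    || return 1
    printf '%s' "$pw" | grep -q '[0-9]'       || return 1
    printf '%s' "$pw" | grep -q '[[:punct:]]' || return 1
    return 0
}
check_password 'passw0rd!' && echo "accepted"    # prints: accepted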

Of course, ordinary users are not the only ones to blame; administrators are human, too, and equally fallible. Even though they should know better, they are just as likely to have left the root or administrator password at the default “admin” or even at nothing at all.

The second issue is directly the fault of the administrator – but it is wholly understandable. Getting a system, well a complete network of systems, working and functional is quite an achievement. It is not something to be toyed around with once things are set. When your OS supplier or application software provider delivers a security update, you will think many times over before risking system and network stability to apply it. The choice must be made. The administrator thinks: “Do I wreak havoc on the system – even theoretical havoc – to plug a security hole no matter how potentially damaging?” And considers that: “Maybe I can rely on my firewall…maybe I rely on the fact that our company isn’t much of a target…or I think it isn’t.” And rationalizes: “Then I can defer the application of the patch for now (and likely forever) in the name of stability.”

The bulk of hackers aren’t evil geniuses who stay up late at night doing forensic research and decompilation to find flaws, gaffes, leaks and holes in software and systems. No, they are much more likely to be people who read a little about the latest flaws and the most popular passwords and spend their nights just trying stuff to see what they can see. A few of them even specialize in social engineering, in which they simply guess your password or trick you into divulging it – maybe by examining your online social media presence.

The notorious Stuxnet malware worm may be a complex piece of software engineering, but it would have done nothing were it not for the peril of human curiosity. The virus allegedly made its way into secure facilities on USB memory sticks. Those memory sticks were carried in human hands and inserted into the targeted computers by those same hands. How did they get into those human hands? A few USB sticks with the virus were probably sprinkled in the parking lot outside the facility. Studies have determined that people will pick up USB memory sticks they find and insert them into their PCs about 60% of the time. The interesting thing is that the likelihood of grabbing and using those USB devices goes up to over 90% if the device has a logo on it.

You can have all the firewalls and scanners and access badges and encryption and SecureIDs and retinal scans you want. In the end, one of your best and most talented employees grabbing a random USB stick and using it on his PC can be the root cause of devastation that could cost you staff years of time to undo.

So what do you do? Fire your employees? Institute policies so onerous that no work can be done at all? As is usual, the best thing to do is apply common sense. If you are not a prime target like a government, a security company or a repository of reams of valuable personal data – don’t go overboard. Keep your systems up-to-date. The time spent now will definitely pay off in the future. Use a firewall. A good one. Finally, be honest with your employees. Educate them helpfully. None of the scare tactics, no “Loose Lips Sink Ships”, just straight talk and a little humor to help guide and modify behavior over time.


In the famous Aardman Animations short film “Creature Comforts“, a variety of zoo animals discuss their lives in the zoo. A Brazilian lion speaks at length about the virtue of the great outdoors (as compared to a zoo), recalling that in Brazil “We have space“. While space might be a great thing for Brazilian lions, it turns out that space is a dangerous and difficult reality in path names for computer applications.

In a recent contract, one portion of the work involved running an existing Windows application under Cygwin. Cygwin, for the uninitiated, is an emulation of the bash shell and most standard Unix commands. It provides this functionality so you can experience Unix under Windows. The Windows application I was working on had been abandoned for several years and customer pressure finally reached a level at which maintenance and updates were required – nay, demanded. Cygwin support was required primarily for internal infrastructure reasons. The infrastructure was a testing framework – primarily comprising bash shell scripts – that ran successfully on Linux (for other applications). My job was to get the Windows application re-animated and running under the shell scripts on Cygwin.

It turns out that the Windows application had a variety of issues with spaces in path names. Actually, it had one big issue – it just didn’t work when the path names had spaces. The shell scripts had a variety of issues with spaces. Well, one big issue – they, too, just didn’t work when the path names had spaces. And it turns out that some applications and operations in Cygwin have issues with spaces, too. Well, that one big issue – they don’t like spaces.

Now by “like”, I mean that when the path name contains spaces, even using ‘\040’ (instead of the space) or quoting the name (e.g., “Documents and Settings”) does not resolve matters; it merely yields unusual and unhelpful error messages. The behavior was completely unpredictable as well. For instance, quoting might get you part way through a section of code, but then the same quoted name would fail when used to call stat. It turned out that stat didn’t like spaces in any form (quoted, escaped, whatever…).
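
For the curious, the attempts looked roughly like the lines below (the directory is just an example). Quoting and escaping were enough to get some commands through, but stat was the sticking point in my setup:

# the same space-laden path, fed to the tools a few different ways
ls "/cygdrive/c/Documents and Settings"        # quoted -- good enough for some commands
ls /cygdrive/c/Documents\ and\ Settings        # backslash-escaped -- same story
stat "/cygdrive/c/Documents and Settings"      # this is where things fell apart for me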

Parenthetically, I would note that the space problem is widespread. I was doing some Android work and was getting an odd and unhelpful error (“invalid command-line parameter”) when trying to run my application on the emulator under Eclipse. It turned out that a space in the path name to the Android SDK was the cause. Once the space was removed, all was well.

The solution to my problem turned out to be manifold. It involved a mixture of quoting, clever use of cygpath and the Windows API calls GetLongPathName and GetShortPathName.

When assigning and passing variables around in shell scripts – quoting a space-laden path or a variable containing a space-laden path – the solution was easy. Just remember to use quotes:

THIS="${THAT}"

Passing command-line options that include path names with spaces tended to be more problematic. The argc/argv parsers don’t like spaces. They don’t like them quoted and don’t like them escaped. Or maybe the parser likes them but the application doesn’t. In any event, the specific workaround I used was clever manipulation of the path using the cygpath command. The cygpath -w -s command translates a path name to the Windows version (with the drive letter and a colon at the beginning) and then shortens the name to the old-style 8+3 limited format, thereby removing the spaces. An additional trick: if you then need the Cygwin-style path – without spaces – you take the output of cygpath -w -s and run it through cygpath -u. That gives you a /cygdrive/ style file name with no spaces. There is no other direct path to generating a Cygwin Unix-style file name without spaces.
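
Putting that recipe into a few lines – the starting path is a made-up example, and the exact short names produced will depend on your drive and on what Windows has generated:

# start with a Cygwin-style path containing spaces
SRC="/cygdrive/c/Documents and Settings/Some User/My Report.txt"
# convert it to the short (8+3) Windows form -- the spaces disappear
WINSHORT=$(cygpath -w -s "$SRC")
# round-trip the short form back through cygpath -u for a space-free Cygwin path
UNIXSHORT=$(cygpath -u "$WINSHORT")
echo "$WINSHORT"     # something like C:\DOCUME~1\SOMEUS~1\MYREPO~1.TXT
echo "$UNIXSHORT"    # something like /cygdrive/c/DOCUME~1/SOMEUS~1/MYREPO~1.TXT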

These manipulations allow you to get the sort of input you need into the various Windows programs you are using. It is important to note, however, that a Windows GUI application built using the standard file browser widgets and the like always passes fully instantiated, space-laden path names. The browser widgets can’t even correctly parse 8+3 names. Some of the system routines, however, don’t like spaces. So the trick is: how do you manipulate the names once you are within the sphere of the Windows application? Well, there are a couple of things to keep in mind: the solutions I propose will not work with Cygwin Unix-style names, and they will not work with relative path names.

Basically, I used the two Windows API calls GetLongPathName and GetShortPathName to manipulate the path. I used GetShortPathName to generate the old-style 8+3 format name that removes all the spaces. This ensured that all system calls worked without a hitch. Then, in order to display messages that the end user would recognize, I made sure the long paths were restored, by calling GetLongPathName, for all externally shared information. I need to emphasize that these Windows API calls do not appear to work with relative path names – they return an empty string as a result – so you need to watch out for that.

Any combination of all these approaches (in whole or in part) may be helpful to you in resolving any space issues you encounter.


Software is complicated. Look up software complexity on Google and you get almost 10,000,000 matches. Even if 90% of them are bogus, that’s still about a million relevant hits. But does that mean that when there’s a problem in a system, the software is always to blame?

I think that in pure application-level software development, you can be pretty sure that any problems that arise in your development are of your own making. So when I am working on that sort of project, I make liberal use of a wide variety of debugging tools, keep quiet and fix the problems I find.

But when developing for any sort of custom embedded system, suddenly the lines become much blurrier. With my clients, when I am verifying embedded software targeting custom systems, and things aren’t working according to the written specification, and initial indications are that a value read from something like a sensor or a pin is wrong, I find that I will always report my observations but quickly indicate that I need to review my code before making any final conclusion. I sometimes wonder whether I do this because I actually believe I could be that careless, or because I am merely being subconsciously obsequious (“Oh no, dear customer, I am the only imperfect one here!”), or perhaps because I am being conservative in exercising my judgement, choosing to dig for more facts one way or the other. I suspect it might be the latter.

But I wonder if this level of conservatism does anyone any good. Perhaps I should be quicker to point the finger at the guilty party? Maybe that would speed up development and hasten delivery?

In fact, what I have noticed is that my additional efforts often point out additional testing and verification strategies that can be used to improve the overall quality of the system regardless of the source of the problem. I am often better able to identify critical interfaces and more importantly critical behaviors across these interfaces that can be monitored as either online or offline diagnosis and validation tasks.


In these days of tight budgets but no shortage of things to do, more and more companies are finding that having a flexible workforce is key. This means that having the ability to apply immediate resources to any project is paramount. But also as important, is the ability to de-staff a project quickly and without the messiness of layoffs.

While this harsh work environment seems challenging, it can actually be very rewarding, both professionally and monetarily, and can see both employers and employees coming out winners.

The employees have the benefit of being able to work on a wide variety of disparate projects. This can yield a level of excitement unlikely to be experienced in a full-time position, which is usually focussed on developing deep expertise in a narrow area. The employers get the ability to quickly staff up to meet schedules and requirements and the ability to scale back just as quickly.

Of course, this flexibility – by definition – means that there is no stability and limited predictability for both employees and employers. The employees don’t know when or where they will see the next job and the employers don’t know if they will get the staff they need when they need it. While some thrive in this sort of environment, others seek the security of knowing with some degree of certainty what tomorrow brings. With enough experience with a single contractor, an employer can choose to attempt to flip the contractor from a “renter” to an “owner”. Similarly, the contractor may find the work atmosphere so enticing that settling down and getting some “equity” might be ideal.

It is a strange but mutually beneficial arrangement, with each party having equal standing and, in effect, both having the right of first refusal in the relationship. And it may very well be the new normal in the workplace.
