Archive for 'Software'

Scott McNealy, the former CEO of the former Sun Microsystems, said in a late-1990s address to the Commonwealth Club that the future of the Internet is in security. Indeed, it seems that much effort and capital have been invested in addressing security matters: encryption, authentication, secure transaction processing, secure processors, code scanners, code verifiers and a host of other approaches to make your system and its software and hardware components into a veritable Fort Knox. And it’s all very expensive and quite time consuming (both in development and in actual processing). And yet we still hear of routine security breaches, data and identity theft, online fraud and other crimes. Why is that? Is security impossible? Unlikely? Too expensive? Misused? Abused? A fiction?

Well, in my mind, there are two issues, and they are the weak links in any security endeavour. The two actually have one common root. That common root, as Pogo might say, “is us”. The first one, which has been in the press very much of late, is the reliance on passwords. When you let customers in and provide them security using passwords, they sign up using passwords like ‘12345’ or ‘welcome’ or ‘password’. That is usually combated through the introduction of password rules. Rules usually require that passwords meet some minimum level of complexity – typically something like requiring that each password contain a letter, a number and a punctuation mark and be at least 6 characters long. This might cause some customers to get so aggravated because they can’t use their favorite password that they don’t bother signing up at all. Other end users get upset but change their passwords to “a12345!” or “passw0rd!” or “welc0me!”. And worst of all, they write the password down and put it on a sticky note on their computer.

Of course, ordinary users are not the only ones to blame; administrators are human, too, and equally fallible. Even though they should know better, they are just as likely to have left the root or administrator password at the default “admin” – or even at nothing at all.

The second issue is directly the fault of the administrator – but it is wholly understandable. Getting a system, well a complete network of systems, working and functional is quite an achievement. It is not something to be toyed around with once things are set. When your OS supplier or application software provider delivers a security update, you will think many times over before risking system and network stability to apply it. The choice must be made. The administrator thinks: “Do I wreak havoc on the system – even theoretical havoc – to plug a security hole no matter how potentially damaging?” And considers that: “Maybe I can rely on my firewall…maybe I rely on the fact that our company isn’t much of a target…or I think it isn’t.” And rationalizes: “Then I can defer the application of the patch for now (and likely forever) in the name of stability.”

The bulk of hackers aren’t evil geniuses who stay up late at night doing forensic research and decompilation to find flaws, gaffes, leaks and holes in software and systems. No, they are much more likely to be people who read a little about the latest flaws and the most popular passwords and spend their nights just trying stuff to see what they can see. A few of them even specialize in social engineering, in which they simply guess or trick you into divulging your password – maybe by examining your online social media presence.

The notorious Stuxnet malware worm may be a complex piece of software engineering, but it would have done nothing were it not for the peril of human curiosity. The virus allegedly made its way into secure facilities on USB memory sticks. Those memory sticks were carried in human hands and inserted into the targeted computers by those same hands. How did they get into those human hands? A few USB sticks with the virus were probably sprinkled in the parking lot outside the facility. Studies have determined that people will pick up USB memory sticks they find and insert them in their PCs about 60% of the time. The interesting thing is that the likelihood of grabbing and using those USB devices goes up to over 90% if the device has a logo on it.

You can have all the firewalls and scanners and access badges and encryption and SecureIDs and retinal scans you want. In the end, one of your best and most talented employees grabbing a random USB stick and using it on his PC can be the root cause of devastation that could cost you staff years of time to undo.

So what do you do? Fire your employees? Institute policies so onerous that no work can be done at all? As is usual, the best thing to do is apply common sense. If you are not a prime target like a government, a security company or a repository of reams of valuable personal data – don’t go overboard. Keep your systems up-to-date. The time spent now will definitely pay off in the future. Use a firewall. A good one. Finally, be honest with your employees. Educate them helpfully. None of the scare tactics, no “Loose Lips Sink Ships”, just straight talk and a little humor to help guide and modify behavior over time.


In the famous Aardman Animations short film “Creature Comforts”, a variety of zoo animals discuss their lives in the zoo. A Brazilian Lion speaks at length about the virtue of the great outdoors (cf. a zoo), recalling that in Brazil “We have space”. While space might be a great thing for Brazilian Lions, it turns out that space is a dangerous and difficult reality in path names for computer applications.

In a recent contract, one portion of the work involved running an existing Windows application under Cygwin. Cygwin, for the uninitiated, is an emulation of the bash shell and most standard Unix commands. It provides this functionality so you can experience Unix under Windows. The Windows application I was working on had been abandoned for several years and customer pressure finally reached a level at which maintenance and updates were required – nay, demanded. Cygwin support was required primarily for internal infrastructure reasons. The infrastructure was a testing framework – primarily comprising bash shell scripts – that ran successfully on Linux (for other applications). My job was to get the Windows application re-animated and running under the shell scripts on Cygwin.

It turns out that the Windows application had a variety of issues with spaces in path names. Actually, it had one big issue – it just didn’t work when the path names had spaces. The shell scripts had a variety of issues with spaces. Well, one big issue – they, too, just didn’t work when the path names had spaces. And it turns out that some applications and operations in Cygwin have issues with spaces, too. Well, that one big issue – they don’t like spaces.

Now by “like”, I mean that when the path name contains spaces then even using ‘\040’ (instead of the space) or quoting the name (e.g., “Documents and Settings”) does not resolve matters and instead merely yields unusual and unhelpful error messages. The behavior was completely unpredictable, as well. For instance, quoting might get you part way through a section of code but then the same quoted name failed when used to call stat. It would then turn out that stat didn’t like spaces in any form (quoted, escaped, whatever…).

Parenthetically, I would note that the space problem is widespread. I was doing some Android work and had an odd and unhelpful error displayed (“invalid command-line parameter”) when trying to run my application on the emulator under Eclipse. It turns out that a space in the path name to the Android SDK was the cause. Once the space was removed, all was well.

The solution to my problem turned out to be manifold. It involved a mixture of quoting, clever use of cygpath and the Windows API calls GetLongPathName and GetShortPathName.

When assigning and passing variables around in shell scripts – whether handling a space-laden path directly or a variable containing one – the solution was easy. Just remember to use quotes:

THIS="${THAT}"

Passing command line options that include path names with spaces tended to be more problematic. The argc/argv parsers don’t like spaces. They don’t like them quoted and don’t like them escaped. Or maybe the parser likes them but the application doesn’t. In any event, the specific workaround I used was clever manipulation of the path using the cygpath command. The cygpath -w -s command will translate a path name to the Windows version (with the drive letter and a colon at the beginning) and then shorten the name to the old-style 8+3 limited format, thereby removing the spaces. An additional trick: if you then need the Cygwin-style path – without spaces – you take the output of cygpath -w -s and run it through cygpath -u. That gives you a /cygdrive/ style file name with no spaces. There is no other direct path to generating a Cygwin Unix-style file name without spaces.
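The round-trip can be sketched in a shell fragment like the following. The path is a made-up example, and since cygpath exists only under Cygwin, the call is guarded so the script degrades gracefully elsewhere:

```shell
# Turn a space-laden path into a space-free /cygdrive/... name.
WINPATH="C:/Documents and Settings/user"
if command -v cygpath >/dev/null 2>&1; then
    # Step 1: -w -s gives the Windows form, shortened to 8+3, no spaces
    SHORT=$(cygpath -w -s "$WINPATH")
    # Step 2: -u converts that back to a Unix-style name, still no spaces
    RESULT=$(cygpath -u "$SHORT")
else
    RESULT="$WINPATH"   # not under Cygwin; nothing to translate
fi
echo "$RESULT"
```

Note the quotes around every variable expansion; dropping any one of them reintroduces the word-splitting problem this whole exercise is trying to avoid.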

These manipulations allow you to get the sort of input you need to the various Windows programs you are using. It is important to note, however, that a Windows GUI application built using standard file browser widgets and the like always passes fully instantiated, space-laden path names. The browser widgets can’t even correctly parse 8+3 names. Some of the system routines, however, don’t like spaces. So the trick becomes: how do you manipulate the names once within the sphere of the Windows application? There are a couple of things to keep in mind: the solutions I propose will not work with Cygwin Unix-style names, and they will not work with relative path names.

Basically, I used the two Windows API calls GetLongPathName and GetShortPathName to manipulate the path. I used GetShortPathName to generate the old-style 8+3 format name, which removes all the spaces. This ensured that all system calls worked without a hitch. Then, in order to display messaging that the end-user would recognize, I made sure the long paths were restored by calling GetLongPathName for all externally shared information. I need to emphasize that these Windows API calls do not appear to work with relative path names. They return an empty string as a result. So you need to watch out for that.

Any combination of all these approaches (in whole or in part) may be helpful to you in resolving any space issues you encounter.


Back at the end of March, I attended O’Reilly’s Web 2.0 Expo in San Francisco. As usual with the O’Reilly brand of conferences, it was a slick, show-bizzy affair. The plenary sessions were fast-paced, with generic techno soundtracks, theatrical lighting and spectacular attempts at buzz-generation. Despite their best efforts, the staging seemed to overwhelm the Droopy Dog-like presenters, who tend to be more at home coding in darkened rooms whilst gorging themselves on Red Bull and cookies. Even the audience seemed to prefer the company of their smartphones or iPads to any actual human interaction, with “live tweets” being the preferred method of communication.

In any event, the conference is usually interesting and a few nuggets are typically extracted from the superficial, mostly promotional aspects of the presentations.

What was clear was that every start-up and every business plan was keyed on data collection. Data collection about YOU. The more – the better. The goal was to learn as much about you as possible so as to be able to sell you stuff. Even better – to sell you stuff that was so in tune with your desires that you would be helpless to resist purchasing it.

The trick was – how to get you to cough up that precious data? Some sites just assumed you’d be OK with spending a few days answering questions and volunteering information – apparently just for the sheer joy of it. Others believed that being up-front and admitting that you were going to be sucked into a vortex of unrelenting and irresistible consumption would be reward enough. Still others felt that they ought to offer you some valuable service in return. Most often, this service, oddly enough, was financial planning and retirement saving-based.

The other thing that was interesting (and perhaps obvious) was that data collection is usually pretty easy (at least the basic stuff). Getting details is harder and most folks do expect something in return. And, of course, the hardest part is the data mining to extract the information that would provide the most compelling sales pitch to you.

There are all sorts of ways to build the case around your apparent desires. By finding out where you live or where you are, they can suggest things “like” other things you have already that are nearby. (You sure seem to like Lady Gaga; you know there’s a meat dress shoppe around the corner…) By finding out who your friends are and what they like, they can apply peer-pressure-based recommendations (All of your friends are downloading the new Justin Bieber recording. Why aren’t you?). And by finding out about your family and demographic information, they can suggest what you need or ought to be needing soon (Your son’s 16th birthday is coming up soon – how about a new car for him?).

Of all the sites and ideas, it seems to me that Intuit’s Mint is the most interesting. Mint is an on-line financial planning and management site. Sort of like Quicken, but online. To “hook” you, their key idea is to offer you the tease of the most valuable analysis with the minimum of initial information. It’s almost as if, given your email and zip code, they’ll draw up a basic profile of you and your lifestyle. Give them a bit more and they’ll make it better. And so you get sucked in, but you get value for your data. They do claim to keep the data separate from your identity, but they also collect demographically filtered data and likely geographically filtered data.

This really isn’t news. facebook understood this years ago when their ill-fated Beacon campaign was launched. This probably would have been better accepted had it been rolled out more sensitively. But it is ultimately where everyone is stampeding right now.

The most interesting thing is that there is already a huge amount of personal data on the web. It is protected because it’s all in different places and not associated. facebook has all of your friends and acquaintances. Amazon and eBay have a lot about what you like and what you buy. Google has what you’re interested in (and if you have an Android phone – where you go). Apple has a lot about where you go and who you talk to and also through your app selection what you like and are interested in. LinkedIn has your professional associations. And, of course, twitter has when you go to the bathroom and what kind of muffins you eat.

Each of these giants is trying to expand their reservoir of data about you. Other giants are trying to figure out how to get a piece of that action (Yahoo!, Microsoft). And yet others are trying to sell missing bits of information to these players. Credit card companies are making their vast purchasing databases available, specialty retailers are trying to cash in, and cell phone service providers are muscling in as well. They each have a little piece of your puzzle to make analysis more accurate.

The expectation is that there will be acceptance of diminishing privacy, along with some sort of belief that the holders of these vast databases will be benevolent and secure and will not require government intervention. Technologically, storage and retrieval will need to be addressed, and newer, faster algorithms for analysis will need to be developed.

Looking for a job…or a powerful patent? I say look here.


Software is complicated. Look up software complexity on Google and you get almost 10,000,000 matches. Even if 90% of them are bogus that’s still about a million relevant hits. But I wonder if that means that when there’s a problem in a system – the software is always to blame?

I think that in pure application-level software development, you can be pretty sure that any problems that arise in your development are of your own making. So when I am working on that sort of project, I make liberal use of a wide variety of debugging tools, keep quiet and fix the problems I find.

But when developing for any sort of custom embedded system, suddenly the lines become much more blurry. With my clients, when I am verifying embedded software targeting custom systems and things aren’t working according to the written specification, and when initial indications are that a value read from something like a sensor or a pin is wrong, I find that I will always report my observations but quickly indicate that I need to review my code before making any final conclusion. I sometimes wonder if I do this because I actually believe I could be that careless, or because I am merely being subconsciously obsequious (“Oh no, dear customer, I am the only imperfect one here!”), or perhaps because I am merely being conservative in exercising my judgement, choosing to dig for more facts one way or the other. I suspect it might be the latter.

But I wonder, if this level of conservatism does anyone any good? Perhaps I should be quicker to point the finger at the guilty party? Maybe that would speed up development and hasten delivery?

In fact, what I have noticed is that my additional efforts often point out additional testing and verification strategies that can be used to improve the overall quality of the system regardless of the source of the problem. I am often better able to identify critical interfaces and more importantly critical behaviors across these interfaces that can be monitored as either online or offline diagnosis and validation tasks.


I have spent a fair amount of my formative years in and around the field programmable gate array (FPGA) industry.  I participated in the evolution of FPGAs from a convenient repository for glue logic and a pricey but useful prototyping platform to a convenient repository for lots of glue logic, an affordable but still a little pricey platform to improve time-to-market and a useful system-on-a-chip platform.  There was much talk about FPGAs going mainstream, displacing all but a few ASICs and becoming the vehicle of choice for most system implementations.  It turns out that last step…the mainstreaming, the death of ASICs, the proliferating system-on-chip…is still underway.  And maybe it’s just around the corner, again.  But maybe it’s not.

FPGA companies (well, Xilinx and Altera) appear to be falling prey to the classic disruptive technology trap described by Clayton Christensen.  Listening to the calls of the deans of Wall Street and pursuing fat margins.  Whether it’s Virtex or Stratix, both Xilinx and Altera are innovating at the high end delivering very profitable and very expensive parts that their biggest customers want and pretty much ignoring the little guys who are looking for cheap, functional and mostly low power devices.

This opens the door for players like Silicon Blue, Actel or Lattice to pick a niche and exploit the heck out of it.  Be it low power, non-volatile storage or security, these folks are picking up some significant business here and there. 

This innovation trap, however, ignores a huge opportunity that really only a big player can address.  I think that the biggest competitor to FPGAs is not ASSPs or ASICs or even other cheaper FPGAs.  I think that what everyone needs to be watching out for is CPUs and GPUs.

Let’s face it, even with an integrated processor in your FPGA, you still really need to be a VHDL or Verilog HDL developer to build systems based on the FPGA.  And how many HDL designers are there worldwide?  Tens of thousands?  Perhaps.  Charitably.  This illuminates another issue with systems-on-a-chip – software and software infrastructure. I think this might even be the most important issue acting as an obstacle to the wide adoption of programmable logic technology. To design a CPU or GPU-based system, you need to know C or C++.  How many C developers are there worldwide?  Millions?  Maybe more.

With a GPU you are entering the world of tessellation automata or systolic arrays.  It is easier (but still challenging) to map a C program to a processor grid than to a sea of gates.  And you also get to leverage the existing broad set of software debug and development tools.  What would you prefer to use to develop your next system on a chip – SystemC with spotty support infrastructure or standard C with deep and broad support infrastructure?

The road to the FPGA revolution is littered with companies whose products started as FPGA-based with a processor to help, but then migrated to a full multi-core CPU solution, dumping the FPGA (except for data path and logic consolidation).  Why is that?  Because to make an FPGA solution work, you need to be an expert immersed in FPGA architectures and you need to develop your own tools to carefully divide hardware and software tasks.  And in the end, to get really great speeds and results, you need to keep tweaking your system and reassigning hardware and software tasks.   And then there’s the debugging challenge.  In the end – it’s just hard.

On the other hand, grab an off-the-shelf multi-core processor, whack together some C code, compile it and run it and you get pretty good speeds and the same results.  On top of that – debugging is better supported.

I think FPGAs are great and someday they may be able to provide a real system-on-a-chip solution but they won’t until FPGA companies stop thinking like semiconductor manufacturers and start thinking (and acting) like the software application solution providers they need to become.


A friend of mine who made the move from the world of electronic design automation (EDA) to the world wide web (WWW) once told me that he believed that compared to the problems being solved in EDA, WWW programming is a walk in the park. 

I had an opportunity to reflect on this statement when I visited Web 2.0 Expo in San Francisco the other day.  I spent a fair amount of my career working in the algorithm-heavy world of EDA, developing all manner of simulators (logic and fault), test pattern generators and netlist modifiers.  The algorithms we used and modified included things like managing various queues, genetic algorithms, path analysis, determining covering sets and the like.  The nature of the solutions meant that we also had the opportunity – or, more specifically, a need – to utilize better software development techniques, processes and tools.

As I wandered the exhibit hall, I was alternately mystified by the prevalence of buzz words and jargon (crowd sourcing, cloud computation, web analytics, collaborative software, etc.) and amazed at how old technologies were touted as new (design patterns! object-oriented programming! APIs!).  Of course, I understand that any group of people who band together tends to develop their own language so as to more effectively communicate ideas, identify themselves to one another and sometimes even exclude outsiders.  So I accept the language stuff, but what was truly interesting to me was that this seemingly insular society appears to have slapped together the web without consideration of the developments in computer science that preceded them!

I guess I should be happy they are figuring that out now and are attempting to catch up, but then I think “what about all that stuff that’s out there already?”  Does this mean there are all these existing web sites and infrastructure that are about to collapse under the force of their own weight?  Is there a disaster about to befall these sites when they need to upgrade, enhance or even fix significant bugs?  Are there major web sites built out of popsicle sticks and bubble gum?

So why is there a big push to hire people who have experience developing these (bad) web sites?  Shouldn’t these Web 2.0 companies be looking for developers that know software rather than developers that know how to slap together a heap of code into a functional but otherwise jumbled mess?


I’m Waving at You

I have recently been “chosen” to receive a fistful of invitations to Google’s newest permanent beta product, Google Wave.

This new application comes bundled with an 81-minute video that explains what it is and what it does. My first impression, upon noticing that little fact, was that anything requiring almost an hour and a half to explain is not for the faint of heart. Nor is it likely to interest the casual user. I have spent some time futzing around with Google Wave and believe that I am, indeed, ready to share my initial impressions.

First, I will save you 81 minutes of your life and give you my less than 200 word description of Google Wave. Google Wave is an on-line collaboration application that allows you to collect all information from all sources associated with the topic under discussion in one place. That includes search results, text files, media files, drawings, voicemail, maps, email, reports…everything you can implement, store or view on a computer. Additionally, Google Wave allows you to include and exclude people from the collaboration as the discussion progresses and evolves. And in the usual Google manner, a developer’s API is provided so that interested companies or individuals can contribute functionality or customize installations to suit their needs.

Additionally, (and perhaps cynically) Google Wave serves as a platform for Google to vacuum up and analyze more information about you and your peers and collaborators to be able to serve you more accurately targeted advertisements – which, after all, is what Google’s primary business is all about.

All right…so what about it? Was using Google Wave a transformative experience? Has it turned collaboration on its head? Will this be the platform to transform the global workforce into a seamless, well-oiled machine functioning at high efficiency regardless of geographical location?

My sense is that Google Wave is good but not great. The crushing weight of its complexity means that the casual user (i.e., most people) will never be able to (or, more precisely, never want to) experience the full capabilities of Google Wave. Like Microsoft Word, you will end up with 80% of the users using 20% of the functionality with this huge reservoir of provided functionality never being touched. In fact, in a completely non-scientific series of discussions with end-users, most perceive Google Wave to be no more than yet another email tool (albeit a complex one) and therefore really completely without benefit to them.

My personal experience is that it is a cool collaboration environment and I appreciate its flexibility although I have not yet attempted to develop any custom applications for it. I do like the idea of collecting all discussion-associated data in one place and being able to include appropriate people in the thread and having everything they need to come up-to-speed within easy reach. Personally, I still need to talk to people and see them face-to-face but I appreciate the repository/notebook/library/archive functionality afforded by Google Wave.

I still have a few invitations left so if you want to experience the wave yourself and be your own judge, post a comment with your email address and I’ll shoot an invite out to you.


In these days of tight budgets but no shortage of things to do, more and more companies are finding that having a flexible workforce is key. This means that having the ability to apply immediate resources to any project is paramount. But also as important, is the ability to de-staff a project quickly and without the messiness of layoffs.

While this harsh work environment seems challenging, it actually can be very rewarding both professionally and monetarily, and can see both employers and employees coming out winners.

The employees have the benefit of being able to work on a wide variety of disparate projects. This can yield a level of excitement unlikely to be experienced in a full time position that is usually focused on developing deep expertise in a narrow area. The employers get the ability to quickly staff up to meet schedules and requirements and the ability to scale back just as quickly.

Of course, this flexibility – by definition – means that there is no stability and limited predictability for both employees and employers. The employees don’t know when or where they will see the next job and the employers don’t know if they will get the staff they need when they need it. While some thrive in this sort of environment, others seek the security of knowing with some degree of certainty what tomorrow brings. With enough experience with a single contractor, an employer can choose to attempt to flip the contractor from a “renter” to an “owner”. Similarly, the contractor may find the work atmosphere so enticing that settling down and getting some “equity” might be ideal.

It is a strange but mutually beneficial arrangement with each party having equal stance and in effect both having the right of first refusal in the relationship. And it may very well be the new normal in the workplace.


It’s the Rodney Dangerfield of disciplines. Sweaty, unkempt, unnerving, uncomfortable and disrespected. Test. Yuck. You hate it. Design, baby! That’s where it’s at! Creating! Developing! Building! Who needs test? It’s designed to work!

In actuality, as much as it pains me to admit “trust, but verify” is a good rule of thumb. Of course, every design is developed with an eye to excellence. Of course, all developers are very talented and unlikely to make mistakes of any sort. But it’s still a good idea to have a look-see at what they have done. It’s even better if they leave in the code or hardware that they used to verify their own implementations. The fact of the matter is that designers add in all manner of extras to help them debug and verify their designs and then – just before releasing it – they rip out all of this valuable apparatus. Big mistake. Leave it! It’s all good! If it’s code – enable it with a compile-time define or environment variable. If it’s hardware – connect it up to your boundary-scan infrastructure and enable it using instructions through your IEEE STD 1149.1 Test Access Port. These little gizmos that give you observability and diagnosability at run time will also provide an invaluable aid in the verification and test process. Please…share the love!


The WWW is The Wheel

For no apparent reason, but more so than ever before, I have come to believe that the World Wide Web can truly be the source of all knowledge and a savior for the lazy (or at least an inspiration to those who need examples to learn or get started).

I was writing a simple application in C the other day and needed to code up a dynamic array. It seemed to me that actually typing out the 20 or so lines of code to implement the allocation and management was just too much effort. And then it occurred to me – “Why reinvent the wheel?” People write dynamic arrays in C every day and I bet that at least one person posted their implementation to the WWW for all to see and admire. A quick search revealed that to be true and in minutes I was customizing code to suit my needs.

Now…did I really save time? In the end, did my customizations result in no net increase in productivity? In many ways, for me, it didn’t matter. I am the sort of person who needs some inspiration to overcome a blank sheet of paper – something concrete – a real starting point – even a bad one. Having that implementation in place gave me that starting point and even if I ended up deleting everything and rewriting it I feel like I benefited, at least psychologically, from having somewhere to start.

It is also valuable to see and learn from the experience of others. Why should I re-invent something so basic? Why not use what’s already extant and spend my energy and talent where I can really add value?

But it is also true that although the WWW may indeed be “the wheel” it sometimes provides a wheel made from wood or stone, that has a flat tire or is damaged beyond repair. For me, though, even that is beneficial since it helps me overcome that forbidding blank sheet of paper.
