
It’s all the rage right now to be viewed as a leader in the mobile space.  There are many different sectors in which to demonstrate that leadership.  There are operating systems like iOS and Android and maybe even Windows Phone (someday).  There’s hardware from Apple, Samsung, HTC and maybe even Nokia.  And of course there are the applications: FourSquare, Square and other primarily mobile applications in social, payments, health and gaming, and then all the other applications rushing to mobile because they were told that’s where they ought to be.

Somewhere in this broad and vague classification is Facebook (or perhaps more properly “facebook”).  This massive database of human foibles and interests is either being pressed or voluntarily exploring just exactly how to enter the mobile space and presumably dominate it.  Apparently they have made several attempts to develop their own handset.  The biggest issue, it seems, is that they believed that just because they are a bunch of really smart folks they should be able to stitch a phone together and make it work.  I believe the saying is “too smart by half”.  And since they reportedly tried this several times without success – perhaps they were also “too stubborn by several halves”.

This push by facebook raises the question: “What?” or even “Why?”  There is a certain logic to it.  Facebook provides hours of amusement to tens of millions of active users, and the developers at facebook already build applications to run on a series of mobile platforms.  Those applications are limited in their ability to provide a full facebook experience and also limit facebook’s ability to extract revenue from these users.  Though when you step back, you quickly realize that facebook is really a platform.  It has messaging (text, voice and video), contact information, position and location information, and your personal profile along with your interest history and friends; it knows what motivates you (by the contents of your comments and what you “like”); and it is a platform for application development (including games and exciting virus and spam possibilities) with a well-defined and documented interface.  At the 10,000-foot level, facebook looks like an operating system and a platform ready to go.  This is not too different from the vision that propelled Netscape into Microsoft’s sights, leading to its ultimate demise.  Microsoft doesn’t have the might it once did, but Google does and so does Apple.  Neither may be “evil” but both are known to be ruthless.  For facebook to enter this hostile market with yet another platform would be bold.  And for that company to be one whose stock price and perceived confidence are faltering after a shaky IPO – it may also be dumb.  But it may be the only and necessary option for growth.

On the other hand, facebook’s recent edict imploring all employees to access facebook from Android phones rather than their iPhones could either suggest that the elders at facebook believe their future is in Android or simply that they recognize that it is a growing and highly utilized platform. Maybe they will ditch the phone handset and go all in for mobile on iOS and Android on equal footing.

Personally, I think that a new platform with a facebook-centric interface might be a really interesting product especially if the equipment cost is nothing to the end-user.  A free phone supported by facebook ads, running all your favorite games, with constant chatter and photos from your friends? Talk about an immersive communications experience. It would drive me batty. But I think it would be a huge hit with a certain demographic. And how could they do this given their previous failures? Amongst the weaker players in the handset space, Nokia has teamed up with Microsoft but RIM continues to flail. Their stock is plummeting but they have a ready-to-go team of smart employees with experience in getting once popular products to market as well as that all-important experience in dealing with the assorted wireless companies to say nothing of the treasure trove of patents they hold. They also have some interesting infrastructure in their SRP network that could be exploited by facebook to improve their service (or, after proper consideration, sold off).

You can’t help but wonder: if, instead of spending $1B on Instagram prior to its IPO, facebook had spent a little more and bought RIM, would the outcome of the IPO launch have been different?  I guess I can only speculate about that.  Now, though, it seems that facebook ought to move soon or be damned to remain a once-great player who squandered their potential.


In the late 1990s, in an address to the Commonwealth Club, Scott McNealy, the former CEO of the former Sun Microsystems, said that the future of the Internet is in security. Indeed, it seems that much effort and capital have been invested in addressing security matters. Encryption, authentication, secure transaction processing, secure processors, code scanners, code verifiers and a host of other approaches to make your system and its software and hardware components into a veritable Fort Knox. And it’s all very expensive and quite time consuming (both in development and actual processing). And yet we still hear of routine security breaches, data and identity theft, on-line fraud and other crimes. Why is that? Is security impossible? Unlikely? Too expensive? Misused? Abused? A fiction?

Well, in my mind, there are two issues, and they are the weak links in any security endeavour. The two actually have one common root. That common root, as Pogo might say, “is us”. The first one, which has been in the press very much of late (and always), is the reliance on passwords. When you let customers in and secure their access with passwords, they sign up using passwords like ‘12345’ or ‘welcome’ or ‘password’. That is usually combated through the introduction of password rules. Rules usually indicate that passwords must meet some minimum level of complexity, typically something like requiring that each password have a letter, a number and a punctuation mark and be at least 6 characters long. This might cause some customers to get so aggravated because they can’t use their favorite password that they don’t bother signing up at all. Other end users get upset but change their passwords to “a12345!” or “passw0rd!” or “welc0me!”. And worst of all, they write the password down and put it in a sticky note on their computer.
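For illustration only, here is a minimal sketch of that sort of complexity rule as a bash function; the function name and the exact thresholds are my own, not any particular site’s policy:

# Hypothetical rule check: at least 6 characters, with a letter,
# a digit and a punctuation mark (names and thresholds are illustrative)
check_password() {
  local pw="$1"
  [ "${#pw}" -ge 6 ] || return 1
  printf '%s' "$pw" | grep -q '[A-Za-z]' || return 1
  printf '%s' "$pw" | grep -q '[0-9]' || return 1
  printf '%s' "$pw" | grep -q '[[:punct:]]'
}

check_password 'passw0rd!' && echo "accepted"   # satisfies the rules, still a weak password

And that last line is exactly the point: a password can satisfy every rule and still be a predictable variation on a bad habit.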

Of course, ordinary users are not the only ones to blame; administrators are human, too, and equally fallible. Even though they should know better, they are just as likely to leave the root or administrator password at the default “admin” or even at nothing at all.

The second issue is directly the fault of the administrator – but it is wholly understandable. Getting a system, well a complete network of systems, working and functional is quite an achievement. It is not something to be toyed around with once things are set. When your OS supplier or application software provider delivers a security update, you will think many times over before risking system and network stability to apply it. The choice must be made. The administrator thinks: “Do I wreak havoc on the system – even theoretical havoc – to plug a security hole no matter how potentially damaging?” And considers that: “Maybe I can rely on my firewall…maybe I rely on the fact that our company isn’t much of a target…or I think it isn’t.” And rationalizes: “Then I can defer the application of the patch for now (and likely forever) in the name of stability.”

The bulk of hackers aren’t evil geniuses who stay up late at night doing forensic research and decompilation to find flaws, gaffes, leaks and holes in software and systems. No, they are much more likely to be people who read a little about the latest flaws and the most popular passwords and spend their nights just trying stuff to see what they can see. A few of them even specialize in social engineering, in which they simply guess or trick you into divulging your password – maybe by examining your online social media presence.

The notorious Stuxnet worm may be a complex piece of software engineering, but it would have done nothing were it not for the peril of human curiosity. The virus allegedly made its way into secure facilities on USB memory sticks. Those memory sticks were carried in human hands and inserted into the targeted computers by those same hands. How did they get into those human hands? A few USB sticks with the virus were probably sprinkled in the parking lot outside the facility. Studies have determined that people will pick up USB memory sticks they find and insert them in their PCs about 60% of the time. The interesting thing is that the likelihood of grabbing and using those USB devices goes up to over 90% if the device has a logo on it.

You can have all the firewalls and scanners and access badges and encryption and SecureIDs and retinal scans you want. In the end, one of your best and most talented employees grabbing a random USB stick and using it on his PC can be the root cause of devastation that could cost you staff years of time to undo.

So what do you do? Fire your employees? Institute policies so onerous that no work can be done at all? As is usual, the best thing to do is apply common sense. If you are not a prime target like a government, a security company or a repository of reams of valuable personal data – don’t go overboard. Keep your systems up-to-date. The time spent now will definitely pay off in the future. Use a firewall. A good one. Finally, be honest with your employees. Educate them helpfully. None of the scare tactics, no “Loose Lips Sink Ships”, just straight talk and a little humor to help guide and modify behavior over time.


In the famous Aardman Animations short film “Creature Comforts”, a variety of zoo animals discuss their lives in the zoo.  A Brazilian Lion speaks at length about the virtue of the great outdoors (as compared to a zoo), recalling that in Brazil “We have space”.  While space might be a great thing for Brazilian Lions, it turns out that space is a dangerous and difficult reality in path names for computer applications.

In a recent contract, one portion of the work involved running an existing Windows application under Cygwin. Cygwin, for the uninitiated, is an emulation of the bash shell and most standard Unix commands. It provides this functionality so you can experience Unix under Windows. The Windows application I was working on had been abandoned for several years and customer pressure finally reached a level at which maintenance and updates were required – nay, demanded. Cygwin support was required primarily for internal infrastructure reasons. The infrastructure was a testing framework – primarily comprising bash shell scripts – that ran successfully on Linux (for other applications). My job was to get the Windows application re-animated and running under the shell scripts on Cygwin.

It turns out that the Windows application had a variety of issues with spaces in path names. Actually, it had one big issue – it just didn’t work when the path names had spaces. The shell scripts had a variety of issues with spaces. Well, one big issue – they, too, just didn’t work when the path names had spaces. And it turns out that some applications and operations in Cygwin have issues with spaces, too. Well, that same one big issue – they don’t like spaces.

Now by “like”, I mean that when the path name contains spaces, even using ‘\040’ (instead of the space) or quoting the name (e.g., “Documents and Settings”) does not resolve matters and instead merely yields unusual and unhelpful error messages. The behavior was completely unpredictable as well. For instance, quoting might get you part way through a section of code, but then the same quoted name would fail when used to call stat. It would then turn out that stat didn’t like spaces in any form (quoted, escaped, whatever…).

Parenthetically, I would note that the space problem is widespread. I was doing some Android work and had an odd and unhelpful error displayed (“invalid command-line parameter”) when trying to run my application on the emulator under Eclipse. It turns out that a space in the path name to the Android SDK was the cause.  Once the space was removed, all was well.

The solution to my problem turned out to be manifold. It involved a mixture of quoting, clever use of cygpath and the Windows API calls GetLongPathName and GetShortPathName.

When assigning and passing variables around in shell scripts – quoting a space-laden path or a variable containing a space-laden path – the solution was easy. Just remember to use quotes:

THIS="${THAT}"
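To make that concrete, here is a small illustrative sketch (the directory and file names are made up) showing why the quotes matter:

# Hypothetical space-laden path, for illustration only
SRC_DIR="/cygdrive/c/Documents and Settings/build area"
cp "${SRC_DIR}/config.txt" "${SRC_DIR}/config.bak"   # quoted: each path stays a single argument
cp ${SRC_DIR}/config.txt ${SRC_DIR}/config.bak       # unquoted: word splitting breaks the paths into multiple arguments

The first cp works; the second fails with errors about files that don’t exist.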

Passing command line options that include path names with spaces tended to be more problematic. The argc/argv parsers don’t like spaces.  They don’t like them quoted and don’t like them escaped.  Or maybe the parser likes them but the application doesn’t. In any event, the specific workaround I used was clever manipulation of the path using the cygpath command. The cygpath -w -s command will translate a path name to the Windows version (with the drive letter and a colon at the beginning) and then shorten the name to the old-style 8+3 limited format, thereby removing the spaces. An additional trick: if you then need the cygwin-style path – without spaces – you take the output of cygpath -w -s and run it through cygpath -u. That gives you a /cygdrive/ style file name with no spaces. There is no other direct path to generating a cygwin Unix-style file name without spaces.
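A minimal sketch of that round trip, assuming a stock Cygwin installation and a made-up path:

# Space-laden path, for illustration only
LONG="/cygdrive/c/Documents and Settings/user/My Project"

# Windows form, shortened to 8+3 names (no spaces), e.g. C:\DOCUME~1\user\MYPROJ~1
WIN_SHORT="$(cygpath -w -s "$LONG")"

# Run the short Windows form back through cygpath -u for a space-free /cygdrive/ path
UNIX_SHORT="$(cygpath -u "$WIN_SHORT")"

some_windows_app.exe "$WIN_SHORT"   # hypothetical application; hand it the space-free name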

These manipulations allow you to get the sort of input you need to the various Windows programs you are using. It is important to note, however, that a Windows GUI application built using standard file browser widgets and the like always passes fully instantiated, space-laden path names. The browser widgets can’t even correctly parse 8+3 names. Some of the system routines, however, don’t like spaces. So the trick is: how do you manipulate the names once you are within the sphere of the Windows application? Well, there are a couple of things to keep in mind: the solutions I propose will not work with cygwin Unix-style names, and they will not work with relative path names.

Basically, I used the two Windows API calls GetLongPathName and GetShortPathName to manipulate the path. I used GetShortPathName to generate the old-style 8+3 format name that removes all the spaces. This ensured that all system calls worked without a hitch. Then, in order to display messages that the end-user would recognize, I made sure that the long paths were restored by calling GetLongPathName for all externally shared information. I need to emphasize that these Windows API calls do not appear to work with relative path names; they return an empty string as a result. So you need to watch out for that.

Any combination of all these approaches (in whole or in part) may be helpful to you in resolving any space issues you encounter.


Back at the end of March, I attended O’Reilly’s Web 2.0 Expo in San Francisco. As usual with the O’Reilly brand of conferences, it was a slick, show-bizzy affair. The plenary sessions were fast-paced, with generic techno soundtracks, theatrical lighting and spectacular attempts at buzz-generation. Despite their best efforts, the staging seemed to overwhelm the Droopy Dog-like presenters, who tend to be more at home coding in darkened rooms whilst gorging themselves on Red Bull and cookies. Even the audience seemed to prefer the company of their smartphones or iPads to any actual human interaction, with “live tweets” being the preferred method of communication.

In any event, the conference is usually interesting and a few nuggets are typically extracted from the superficial, mostly promotional aspects of the presentations.

What was clear was that every start-up and every business plan was keyed on data collection. Data collection about YOU. The more – the better. The goal was to learn as much about you as possible so as to be able to sell you stuff. Even better – to sell you stuff that was so in tune with your desires that you would be helpless to resist purchasing it.

The trick was – how to get you to cough up that precious data? Some sites just assumed you’d be OK with spending a few days answering questions and volunteering information – apparently just for the sheer joy of it. Others believed that being up-front and admitting that you were going to be sucked into a vortex of unrelenting and irresistible consumption would be reward enough. Still others felt that they ought to offer you some valuable service in return. Oddly enough, that service was most often based on financial planning and retirement savings.

The other thing that was interesting (and perhaps obvious) was that data collection is usually pretty easy (at least the basic stuff). Getting details is harder and most folks do expect something in return. And, of course, the hardest part is the data mining to extract the information that would provide the most compelling sales pitch to you.

There are all sorts of ways to build the case around your apparent desires. By finding out where you live or where you are, they can suggest things “like” other things you already have that are nearby. (You sure seem to like Lady Gaga; you know there’s a meat dress shoppe around the corner…) By finding out who your friends are and what they like, they can apply peer-pressure-based recommendations (All of your friends are downloading the new Justin Bieber recording. Why aren’t you?). And by finding out about your family and demographic information, they can suggest what you need or ought to be needing soon (Your son’s 16th birthday is coming up soon; how about a new car for him?).

Of all the sites and ideas, it seems to me that Intuit’s Mint is the most interesting. Mint is an on-line financial planning and management site – sort of like Quicken, but online. To “hook” you, their key idea is to offer the tease of the most valuable analysis for the minimum of initial information. It’s almost as if, given just your email and zip code, they’ll draw up a basic profile of you and your lifestyle. Give them a bit more and they’ll make it better. And so you get sucked in, but you get value for your data. They do claim to keep the data separate from your identity, but they also collect demographically filtered data and likely geographically filtered data.

This really isn’t news. facebook understood this years ago when their ill-fated Beacon campaign was launched. This probably would have been better accepted had it been rolled out more sensitively. But it is ultimately where everyone is stampeding right now.

The most interesting thing is that there is already a huge amount of personal data on the web. It is protected because it’s all in different places and not associated. facebook has all of your friends and acquaintances. Amazon and eBay have a lot about what you like and what you buy. Google has what you’re interested in (and if you have an Android phone – where you go). Apple has a lot about where you go and who you talk to and also through your app selection what you like and are interested in. LinkedIn has your professional associations. And, of course, twitter has when you go to the bathroom and what kind of muffins you eat.

Each of these giants is trying to expand its reservoir of data about you. Other giants are trying to figure out how to get a piece of that action (Yahoo!, Microsoft). And yet others are trying to sell missing bits of information to these players. Credit card companies are making their vast purchasing databases available, specialty retailers are trying to cash in, and cell phone service providers are muscling in as well. They each have a little piece of your puzzle to make analysis more accurate.

The expectation is that there will be acceptance of diminishing privacy, along with some sort of belief that the holders of these vast databases will be benevolent and secure and not require government intervention. Technologically, storage and retrieval will need to be addressed, and newer, faster algorithms for analysis will need to be developed.

Looking for a job…or a powerful patent? I say look here.


I think it’s high time I authored a completely opinion-based article full of observations and my own prejudices that might result in a litany of ad hominem attacks and insults.  Or at least, I hope it does.  This little bit of prose will outline my view of the world of programmable logic as I see it today.  Again, it is as I see it.  You might see it differently.  But you would be wrong.

First let’s look at the players.  The two-headed Cerberus of the programmable logic world is Altera and Xilinx.  They battle it out for the bulk of the end-user market share.  After that, there are a series of niche players (Lattice Semiconductor, Microsemi (who recently purchased Actel) and Quicklogic), lesser lights (Atmel and Cypress) and wishful upstarts (Tabula, Achronix and SiliconBlue).

Atmel and Cypress are broadline suppliers of specialty semiconductors.  They each sell a small portfolio of basic programmable logic devices (Atmel CPLDs, Atmel FPGAs and Cypress CPLDs).  As best I can tell, they do this for two reasons.  First, they entered the marketplace and have been in it for about 15 years and at this point have just enough key customers using the devices such that the cost of exiting the market would be greater than the cost of keeping these big customers happy.  The technology is not, by any stretch of the imagination, state of the art so the relative cost of supporting and manufacturing these parts is small.  Second, as a broadline supplier of a wide variety of specialty semiconductors, it’s nice for their sales team to have a PLD to toss into a customer’s solution to stitch together all that other stuff they bought from them.  All told, you’re not going to see any profound innovations from these folks in the programmable logic space.  ‘Nuff said about these players, then.

At the top of the programmable logic food chain are Altera and Xilinx.  These two titans battle head-to-head and every few years exchange the lead.  Currently, Altera has leapt or will leap ahead of Xilinx in technology, market share and market capitalization.  But when it comes to innovation and new ideas, both companies typically offer incremental improvements rather than risky quantum leaps ahead.  They are both clearly pursuing a policy that chases the high-end, fat-margin devices, focusing more and more on the big, sophisticated end-user who is most happy with greater complexity, capacity and speed.  Those margin leaders are Xilinx’s Virtex families and Altera’s Stratix series.  The sweet spot for these devices is low-volume, high-cost equipment like network equipment, storage system controllers and cell phone base stations.  Oddly though, Altera’s recent leap to the lead can be traced to their mid-price Arria and low-price Cyclone families, which offered lower power and a lower price point with the right level of functionality for a wider swath of customers.  Xilinx had no response, having not produced a similarly featured device between the release of the Spartan-3 (and its variants) and the arrival of the Spartan-6 some 4 years later.  This gap provided just the opportunity that Altera needed to gobble up a huge portion of a growing market.  And then, when Xilinx’s Spartan-6 finally arrived, its entry to production was marked by bumpiness and a certain amount of “So what?” from end-users who were about to migrate, or already had migrated, to Altera.

The battle between Altera and Xilinx is based on ever-shrinking technology nodes, ever-increasing logic capacity, faster speeds and a widening variety of IP cores (hard and soft) and, of course, competitive pricing.  There has been little effort on the part of either company to provide any sort of quantum leap of innovation since there is substantial risk involved.  The overall programmable logic market is behaving more like a commodity market.  The true differentiation is price since the feature sets are basically identical.  If you try to do some risky innovation, you will likely have to divert efforts from your base technology.  And it is that base technology that delivers those fat margins.  If that risky innovation falls flat, you miss a generation and lose those fat margins and market share.

Xilinx’s recent announcement of the unfortunately named Zynq device might be such a quantum innovative leap but it’s hard to tell from the promotional material since it is long on fluff and short on facts.  Is it really substantially different from the Virtex4FX from 2004?  Maybe it isn’t because its announcement does not seem to have instilled any sort of fear over at Altera.  Or maybe Altera is just too frightened to respond?

Lattice Semiconductor has worked hard to find little market niches to serve.  They have done this by focusing mostly on price and acquisitions.  Historically the leader in in-system programmable devices, Lattice saw this lead erode as Xilinx and Altera entered that market using an open standard (rather than a proprietary one, as Lattice did).  In response, Lattice moved to the open standard, acquired FPGA technology and tried to develop other programmable niche markets (e.g., switches, analog).   Lattice has continued to move opportunistically; shifting quickly at the margins of the market to find unserved or underserved programmable logic end-users, with a strong emphasis on price competitiveness.  They have had erratic results and limited success with this strategy and have seen their market share continue to erode.

Microsemi owns the antifuse programmable technology market.  This technology is strongly favored by end-users who want high reliability in their programmable logic.  Unlike the static RAM-based programmable technologies used by most every other manufacturer, antifuse is not susceptible to single event upsets, making it ideal for space, defense and similar applications.  The downside of this technology is that, unlike static RAM, antifuse is not reprogrammable.  You can only program it once, and if you need to fix your downloaded design, you need to get a new part, program it with the new pattern and replace the old part with the new one.  Microsemi has attempted to broaden their product offering into more traditional markets by offering more conventional FPGAs.  However, rather than basing their FPGA’s programmability on static RAM, the Microsemi product, ProASIC, uses flash technology.  A nice incremental innovation offering its own benefits (non-volatile pattern storage) and costs (flash does not scale well with shrinking technology nodes).  In addition, Microsemi is already shipping a Zynq-like device known as the SmartFusion family.  The SmartFusion device has hard analog IP included.  As best I can tell, Zynq does not include that analog functionality.  SmartFusion is relatively new, so I do not know how popular it is or what additional functionality its end-users are requesting.  I believe the acceptance of the SmartFusion device will serve as an early bellwether for the acceptance of Zynq.

Quicklogic started out as a more general purpose programmable logic supplier based on a programming technology similar to antifuse with a low power profile.  Over the years, Quicklogic has chosen to focus their offering as more of a programmable application specific standard product (ASSP).  The devices they offer include specific hard IP tailored to the mobile market along with a programmable fabric.  As a company, their laser focus on mobile applications leaves them as very much a niche player.

In recent years, a number of startups have entered the marketplace.  While one might have thought that they would target the low end and seek to provide “good enough” functionality at a low price in an effort to truly disrupt the market from the bottom, gain a solid foothold and sell products to those overserved by what Altera and Xilinx offer, that turns out not to be the case.  In fact, two of the new entrants (Tabula and Achronix) are specifically after the high-end, high-margin sector that Altera and Xilinx so jealously guard.

The company with the most buzz is Tabula.  They are headed by former Xilinx executive Dennis Segers, who is widely credited with making the decisions that resulted in Xilinx’s stellar growth in the late 1990s with the release of the original Virtex device.  People are hoping for the same magic at Tabula.  Tabula’s product offers what they refer to as a SpaceTime Architecture and 3D Programmable Logic.  Basically, what that means is that your design is sectioned and swapped in and out of the device much like a program is swapped in and out of a computer’s RAM space.  This provides a higher effective design density on a device having less “hard logic”.  An interesting idea.  It seems like it would likely use less power than the full design realized on a single chip.  The cost is the complexity of the design software and the critical nature of the system setup on the board (i.e., the memory interface and its implementation) needed to ensure the swapping works as promised.  Is it easy to use?  Is it worth the hassle?  It’s hard to tell right now.  There are some early adopters kicking the tires.  If Tabula is successful, will they be able to expand their market beyond where they are now?  It looks like their technology might scale up very easily to provide higher and higher effective densities.  But does their technology scale down to low-cost markets easily?  It doesn’t look like it.  There is a lot of overhead associated with all that image swapping, and its value for the low end is questionable.  But I’ll be the first to say: I don’t know.

Achronix, as best I can tell, has staked out the high-speed, high-density market.  That is quite similar to what Tabula is addressing.  The key distinction between the two companies (besides Achronix’s lack of Star Trek-like marketing terminology) is that Achronix is using Intel as their foundry.  This might finally put an end to those persistent annual rumors that Intel is poised to purchase Altera or Xilinx (is it the same analyst every time who leaks this?).  That Intel relationship and a less complex (than Tabula) fabric technology mean that Achronix might be best situated to offer their product for those defense applications that require a secure, on-shore foundry.  If that is the case, then Achronix is aiming at a select and very profitable sector that neither Altera nor Xilinx will let go without a big fight.  Even if successful, where does Achronix expand?  Does their technology scale down to low-cost markets easily?  I don’t think so…but I don’t know.  Does it scale up to higher densities easily?  Maybe.

SiliconBlue is taking a different approach.  They are aiming at the low power, low cost segment.  That seems like more of a disruptive play.  Should they be able to squeeze in, they might be able to innovate their way up the market and cause some trouble for Xilinx and Altera.  The rumored issue with SiliconBlue is that their devices aren’t quite low power enough or quite cheap enough to fit their intended target market.  The other rumor is that they are constantly looking for a buyer.  That doesn’t instill a high level of confidence now, does it?

So what does all this mean?  The Microsemi SmartFusion device might be that quantum innovative leap that most likely extends the programmable logic market space.  It may be the one product that has the potential to serve an unserved market and bring more end-user and applications on board.  But the power and price point might not be right.

The ability of any programmable logic solution to expand beyond the typical sweet spots is based on its ability to displace other technologies at a lower cost and with sufficient useful functionality.  PLDs are competing not just against ASSPs but also against multi-core processors and GPUs.  Multi-core processors and GPUs offer a simpler programming model (using common programming languages), relatively low power and a wealth of application development tools backed by a large pool of able, skilled developers.  PLDs still require understanding hardware description languages (like VHDL or Verilog HDL) as well as common programming languages (like C), in addition to specific conceptual knowledge of hardware and software.  On top of all that, programmable logic often delivers higher power consumption at a higher price point than competing solutions.

In the end, the real trick is not just providing a hardware solution that delivers the correct power and price point but a truly integrated tool set that leverages the expansive resource pool of C programmers rather than the much smaller resource puddle of HDL programmers. And no one, big or small, new or old, is investing in that development effort.


Software is complicated. Look up software complexity on Google and you get almost 10,000,000 matches. Even if 90% of them are bogus that’s still about a million relevant hits. But I wonder if that means that when there’s a problem in a system – the software is always to blame?

I think that in pure application-level software development, you can be pretty sure that any problems that arise in your development are of your own making. So when I am working on that sort of project, I make liberal use of a wide variety of debugging tools, keep quiet and fix the problems I find.

But when developing for any sort of custom embedded system, suddenly the lines become much more blurry. With my clients, when I am verifying embedded software targeting custom systems and things aren’t working according to the written specification and when initial indications are that a value read from something like a sensor or a pin is wrong – I find that I will always report my observations but quickly indicate that I need to review my code before making any final conclusion. I sometimes wonder if I do this because I actually believe I could be that careless or that I am merely being subconsciously obsequious (“Oh no, dear customer, I am the only imperfect one here!”) or perhaps I am merely being conservative in exercising my judgement choosing to dig for more facts one way or the other. I suspect it might be the latter.

But I wonder if this level of conservatism does anyone any good. Perhaps I should be quicker to point the finger at the guilty party? Maybe that would speed up development and hasten delivery?

In fact, what I have noticed is that my additional efforts often point out additional testing and verification strategies that can be used to improve the overall quality of the system regardless of the source of the problem. I am often better able to identify critical interfaces and more importantly critical behaviors across these interfaces that can be monitored as either online or offline diagnosis and validation tasks.


I have spent a fair amount of my formative years in and around the field programmable gate array (FPGA) industry.  I participated in the evolution of FPGAs from a convenient repository for glue logic and a pricey but useful prototyping platform to a convenient repository for lots of glue logic, an affordable but still a little pricey platform to improve time-to-market and a useful system-on-a-chip platform.  There was much talk about FPGAs going mainstream, displacing all but a few ASICs and becoming the vehicle of choice for most system implementations.  It turns out that last step…the mainstreaming, the death of ASICs, the proliferating system-on-chip…is still underway.  And maybe it’s just around the corner, again.  But maybe it’s not.

FPGA companies (well, Xilinx and Altera) appear to be falling prey to the classic disruptive technology trap described by Clayton Christensen: listening to the calls of the deans of Wall Street and pursuing fat margins.  Whether it’s Virtex or Stratix, both Xilinx and Altera are innovating at the high end, delivering very profitable and very expensive parts that their biggest customers want and pretty much ignoring the little guys who are looking for cheap, functional and mostly low power devices.

This opens the door for players like Silicon Blue, Actel or Lattice to pick a niche and exploit the heck out of it.  Be it low power, non-volatile storage or security, these folks are picking up some significant business here and there. 

This innovation trap, however, ignores a huge opportunity that really only a big player can address.  I think that the biggest competitor to FPGAs is not ASSPs or ASICs or even other, cheaper FPGAs.  I think that what everyone needs to be watching out for is CPUs and GPUs.

Let’s face it, even with an integrated processor in your FPGA, you still really need to be a VHDL or Verilog HDL developer to build systems based on the FPGA.  And how many HDL designers are there worldwide?  Tens of thousands?  Perhaps.  Charitably.  This illuminates another issue with systems-on-a-chip – software and software infrastructure. I think this might even be the most important issue acting as an obstacle to the wide adoption of programmable logic technology. To design a CPU or GPU-based system, you need to know C or C++.  How many C developers are there worldwide?  Millions?  Maybe more.

With a GPU you are entering the world of tessellation automata or systolic arrays.  It is easier (but still challenging) to map a C program to a processor grid than to a sea of gates.  And you also get to leverage the existing broad set of software debug and development tools.  What would you prefer to use to develop your next system-on-a-chip – SystemC with spotty support infrastructure or standard C with deep and broad support infrastructure?

The road to the FPGA revolution is littered with companies whose products started out FPGA-based with a processor to help, but then migrated to a full multi-core CPU solution, dumping the FPGA (except for data path and logic consolidation).  Why is that?  Because to make an FPGA solution work you need to be an expert immersed in FPGA architectures and you need to develop your own tools to carefully divide hardware and software tasks.  And in the end, to get really great speeds and results, you need to keep tweaking your system and reassigning hardware and software tasks.  And then there’s the debugging challenge.  In the end – it’s just hard.

On the other hand, grab an off-the-shelf multi-core processor, whack together some C code, compile it and run it and you get pretty good speeds and the same results.  On top of that – debugging is better supported.

I think FPGAs are great and someday they may be able to provide a real system-on-a-chip solution but they won’t until FPGA companies stop thinking like semiconductor manufacturers and start thinking (and acting) like the software application solution providers they need to become.


A friend of mine who made the move from the world of electronic design automation (EDA) to the world wide web (WWW) once told me that he believed that compared to the problems being solved in EDA, WWW programming is a walk in the park. 

I had an opportunity to reflect on this statement when I visited Web 2.0 Expo in San Francisco the other day.  I spent a fair amount of my career working in the algorithm-heavy world of EDA, developing all manner of simulators (logic and fault), test pattern generators and netlist modifiers.  The algorithms we used and modified included things like managing various queues, genetic algorithms, path analysis, determining covering sets and the like.  The nature of the solutions meant that we also had the opportunity – or more specifically, a need – to utilize better software development techniques, processes and tools.

As I wandered the exhibit hall, I was alternately mystified by the prevalence of buzz words and jargon (crowd sourcing, cloud computation, web analytics, collaborative software, etc.) and amazed at how old technologies were touted as new (design patterns! object-oriented programming! APIs!).  Of course, I understand that any group of people who band together tends to develop their own language so as to more effectively communicate ideas, identify themselves to one another and sometimes even to exclude outsiders.  So I accept the language stuff, but what was truly interesting to me was that this seemingly insular society appears to have slapped together the web without consideration of the developments in computer science that preceded them!

I guess I should be happy they are figuring that out now and are attempting to catch up, but then I think “what about all that stuff that’s out there already?”  Does this mean there are all these existing web sites and infrastructure that are about to collapse under the force of their own weight?  Is there a disaster about to befall these sites when they need to upgrade, enhance or even fix significant bugs?  Are there major web sites built out of popsicle sticks and bubble gum?

So why is there a big push to hire people who have experience developing these (bad) web sites?  Shouldn’t these Web 2.0 companies be looking for developers that know software rather than developers that know how to slap together a heap of code into a functional but otherwise jumbled mess?


I’m Waving at You

I have recently been “chosen” to receive a fistful of invitations to Google’s newest permanent beta product, Google Wave.

This new application is bundled along with an 81-minute video that explains what it is and what it does. My first impression, upon noticing that little fact, was that anything that requires almost an hour and a half to explain is not for the faint of heart. Nor is it likely to interest the casual user. I have spent some time futzing around with Google Wave and believe that I am, indeed, ready to share my initial impressions.

First, I will save you 81 minutes of your life and give you my less than 200 word description of Google Wave. Google Wave is an on-line collaboration application that allows you to collect all information from all sources associated with the topic under discussion in one place. That includes search results, text files, media files, drawings, voicemail, maps, email, reports…everything you can implement, store or view on a computer. Additionally, Google Wave allows you to include and exclude people from the collaboration as the discussion progresses and evolves. And in the usual Google manner, a developer’s API is provided so that interested companies or individuals can contribute functionality or customize installations to suit their needs.

Additionally, (and perhaps cynically) Google Wave serves as a platform for Google to vacuum up and analyze more information about you and your peers and collaborators to be able to serve you more accurately targeted advertisements – which, after all, is what Google’s primary business is all about.

All right…so what about it? Was using Google Wave a transformative experience? Has it turned collaboration on its head? Will this be the platform to transform the global workforce into a seamless, well-oiled machine functioning at high efficiency regardless of geographical location?

My sense is that Google Wave is good but not great. The crushing weight of its complexity means that the casual user (i.e., most people) will never be able to (or, more precisely, never want to) experience the full capabilities of Google Wave. Like Microsoft Word, you will end up with 80% of the users using 20% of the functionality with this huge reservoir of provided functionality never being touched. In fact, in a completely non-scientific series of discussions with end-users, most perceive Google Wave to be no more than yet another email tool (albeit a complex one) and therefore really completely without benefit to them.

My personal experience is that it is a cool collaboration environment and I appreciate its flexibility although I have not yet attempted to develop any custom applications for it. I do like the idea of collecting all discussion-associated data in one place and being able to include appropriate people in the thread and having everything they need to come up-to-speed within easy reach. Personally, I still need to talk to people and see them face-to-face but I appreciate the repository/notebook/library/archive functionality afforded by Google Wave.

I still have a few invitations left so if you want to experience the wave yourself and be your own judge, post a comment with your email address and I’ll shoot an invite out to you.


It’s the Rodney Dangerfield of disciplines. Sweaty, unkempt, unnerving, uncomfortable and disrespected. Test. Yuck. You hate it. Design, baby! That’s where it’s at! Creating! Developing! Building! Who needs test? It’s designed to work!

In actuality, as much as it pains me to admit, “trust, but verify” is a good rule of thumb. Of course, every design is developed with an eye to excellence. Of course, all developers are very talented and unlikely to make mistakes of any sort. But it’s still a good idea to have a look-see at what they have done. It’s even better if they leave in the code or hardware that they used to verify their own implementations. The fact of the matter is that designers add in all manner of extras to help them debug and verify their designs and then – just before releasing it – they rip out all of this valuable apparatus. Big mistake. Leave it! It’s all good! If it’s code – enable it with a compile-time define or environment variable. If it’s hardware – connect it up to your boundary-scan infrastructure and enable it using instructions through your IEEE STD 1149.1 Test Access Port. These little gizmos that give you observability and diagnosability at run time will also provide an invaluable aid in the verification and test process. Please…share the love!
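As a tiny illustration of the environment-variable flavor of this idea – the variable name and the self-test hook below are hypothetical, just to show the shape of it:

# Hypothetical run-time switch: leave the verification code in place,
# but only run it when the environment asks for it.
if [ "${ENABLE_SELFTEST:-0}" = "1" ]; then
    run_builtin_selftest    # hypothetical hook: the checks the designer would otherwise rip out
fi

# Turn it on only when you want it:
#   ENABLE_SELFTEST=1 ./run_build.sh

The same thinking applies to the compile-time define and the boundary-scan cases: the switch costs almost nothing, and the apparatus stays available for the test and verification folks downstream.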
