I have spent a fair amount of my formative years in and around the field programmable gate array (FPGA) industry. I participated in the evolution of FPGAs from a convenient repository for glue logic and a pricey but useful prototyping platform into a convenient repository for lots of glue logic, a more affordable (but still a little pricey) platform for improving time-to-market, and a useful system-on-a-chip platform. There was much talk about FPGAs going mainstream, displacing all but a few ASICs and becoming the vehicle of choice for most system implementations. It turns out that last step…the mainstreaming, the death of ASICs, the proliferating system-on-chip…is still underway. And maybe it’s just around the corner, again. But maybe it’s not.

FPGA companies (well, Xilinx and Altera) appear to be falling prey to the classic disruptive-technology trap described by Clayton Christensen: listening to the calls of the deans of Wall Street and pursuing fat margins. Whether it’s Virtex or Stratix, both Xilinx and Altera are innovating at the high end, delivering very profitable and very expensive parts that their biggest customers want, and pretty much ignoring the little guys who are looking for cheap, functional and mostly low-power devices.

This opens the door for players like SiliconBlue, Actel or Lattice to pick a niche and exploit the heck out of it. Be it low power, non-volatile storage or security, these folks are picking up some significant business here and there.

This innovation trap, however, ignores a huge opportunity that really only a big player can address. I think that the biggest competitor to FPGAs is not ASSPs or ASICs or even other, cheaper FPGAs. I think that what everyone needs to be watching out for is CPUs and GPUs.

Let’s face it, even with an integrated processor in your FPGA, you still really need to be a VHDL or Verilog HDL developer to build systems based on the FPGA.  And how many HDL designers are there worldwide?  Tens of thousands?  Perhaps.  Charitably.  This illuminates another issue with systems-on-a-chip – software and software infrastructure. I think this might even be the most important issue acting as an obstacle to the wide adoption of programmable logic technology. To design a CPU or GPU-based system, you need to know C or C++.  How many C developers are there worldwide?  Millions?  Maybe more.

With a GPU you are entering the world of tessellation automata or systolic arrays. It is easier (but still challenging) to map a C program to a processor grid than to a sea of gates. And you also get to leverage the existing broad set of software debug and development tools. What would you prefer to use to develop your next system on a chip – SystemC with spotty support infrastructure, or standard C with deep and broad support infrastructure?

The road to the FPGA revolution is littered with companies whose products started out FPGA-based with a processor to help, but then migrated to a full multi-core CPU solution, dumping the FPGA (except for data path and logic consolidation). Why is that? Because to make an FPGA solution work you need to be an expert immersed in FPGA architectures, and you need to develop your own tools to carefully divide hardware and software tasks. And in the end, to get really great speeds and results, you need to keep tweaking your system and reassigning hardware and software tasks. And then there’s the debugging challenge. In the end – it’s just hard.

On the other hand, grab an off-the-shelf multi-core processor, whack together some C code, compile it and run it, and you get pretty good speeds and the same results. On top of that – debugging is better supported.

I think FPGAs are great, and someday they may be able to provide a real system-on-a-chip solution, but they won’t until FPGA companies stop thinking like semiconductor manufacturers and start thinking (and acting) like the software application solution providers they need to become.

iPad Explained

In a previous post, I admitted that I was ignorant of, or perhaps merely immune to, the magic of the iPad. Since that time, through a series of discussions with people who do get it, I have come to understand the magic of the iPad and also why it holds no such power over me.

Essentially, the iPad is a media consumption device. It is for those who consume movies, videos, music, games, puzzles, newspapers, Facebook, MySpace, magazines, YouTube and all of that stuff available on the web but do not have a requirement for lots of input (typing or otherwise). You can tap out a few emails or register for a web site but, really, it’s not a platform for writing documents, developing presentations, writing code or working out problems and doing analysis. That is, unless you buy a few pricey accessories.

The pervasive (well, at least around here) iPad billboards really say it best.  They typically feature casually attired torsos reclining, with legs raised, bent at the knees to support the iPad.  These smartly but simply dressed users are lounging and passively consuming media.  They are not working.  They are not developing.  They are not even necessarily thinking.  They are simply happy (we think – even though no faces are visible) and drinking in the experience.  You are expected to (lightly) toss the iPad about after quickly reading an article, keep it on your night stand for those late night web-based fact checks, leave it on your coffee table to watch that old episode of Star Trek at your leisure or pack it in your folio to help while away the hours in waiting rooms and airports.

But this isn’t me. I am more of a developer: certainly of software, sometimes of content. I like a full-sized (or near full-sized) real keyboard for typing. If I need to check something late at night, my cell phone browser seems to do the trick. I can triage my email just fine on my cell phone, too. So, I am not an iPad person. At least not yet. But if it really is only a consumption platform, then not ever. But one never quite knows what those wizards in Cupertino might be conjuring up next, does one?

A friend of mine who made the move from the world of electronic design automation (EDA) to the World Wide Web (WWW) once told me that he believed that, compared to the problems being solved in EDA, WWW programming is a walk in the park.

I had an opportunity to reflect on this statement when I visited Web 2.0 Expo in San Francisco the other day. I spent a fair amount of my career working in the algorithm-heavy world of EDA, developing all manner of simulators (logic and fault), test pattern generators and netlist modifiers. The algorithms we used and modified included things like managing various queues, genetic algorithms, path analysis, determining covering sets and the like. The nature of the solutions meant that we also had the opportunity, or more specifically the need, to utilize better software development techniques, processes and tools.

As I wandered the exhibit hall, I was alternately mystified by the prevalence of buzzwords and jargon (crowd sourcing, cloud computation, web analytics, collaborative software, etc.) and amazed at how old technologies were touted as new (design patterns! object-oriented programming! APIs!). Of course, I understand that any group of people who band together tends to develop their own language so as to more effectively communicate ideas, identify themselves to one another and sometimes even to exclude outsiders. So I accept the language stuff, but what was truly interesting to me was that this seemingly insular society appears to have slapped together the web without consideration of the developments in computer science that preceded them!

I guess I should be happy they are figuring that out now and are attempting to catch up, but then I think, "What about all that stuff that’s out there already?" Does this mean there are all these existing web sites and infrastructure that are about to collapse under their own weight? Is there a disaster about to befall these sites when they need to upgrade, enhance or even fix significant bugs? Are there major web sites built out of popsicle sticks and bubble gum?

So why is there a big push to hire people who have experience developing these (bad) web sites? Shouldn’t these Web 2.0 companies be looking for developers who know software rather than developers who know how to slap together a heap of code into a functional but otherwise jumbled mess?

I admit it…I am clueless

My world is about to change, but I fully admit that I don’t get how. That’s right…everything changes this Saturday, April 3rd, when the Apple iPad is released. Am I the only one who looks at it and thinks of those big-button phones that one purchases for an aging parent? Yes, I know Steve Jobs found the English language lacking in sufficiently meaningful superlatives to describe it. And, yes, I know there will be hundreds of pre-programmed Apple zombies lining the streets to collect their own personal iPads, probably starting Friday evening. But I don’t understand why I would want an oversized iPhone without the phone or the camera or application software or a keyboard, or why and how this gadget will change civilization. Don’t get me wrong, I know it will. But I don’t see how the iPad will revolutionize, say, magazine sales. Why would I buy Esquire for $2.99 when I get the print version for less than $1 and throw it out after reading it for 30 minutes (full disclosure: I am a subscriber – I admit that freely)? I also know I can’t take the iPad to the beach to read a book because it will get wet, clotted with sand, and the screen will be unreadable in sunlight. But I’m sure the iPad will be a huge hit. And I’m sure my life will change. Just tell me how. Someone…please…?

I’m Waving at You

I have recently been "chosen" to receive a fistful of invitations to Google‘s newest permanent-beta product, Google Wave.

This new application comes bundled with an 81-minute video that explains what it is and what it does. My first thought upon noticing that little fact was that anything requiring almost an hour and a half to explain is not for the faint of heart. Nor is it likely to interest the casual user. I have spent some time futzing around with Google Wave and believe that I am, indeed, ready to share my initial impressions.

First, I will save you 81 minutes of your life and give you my less-than-200-word description of Google Wave. Google Wave is an on-line collaboration application that allows you to collect in one place all information, from all sources, associated with the topic under discussion. That includes search results, text files, media files, drawings, voicemail, maps, email, reports…everything you can implement, store or view on a computer. Additionally, Google Wave allows you to include and exclude people from the collaboration as the discussion progresses and evolves. And in the usual Google manner, a developer’s API is provided so that interested companies or individuals can contribute functionality or customize installations to suit their needs.

Additionally, (and perhaps cynically) Google Wave serves as a platform for Google to vacuum up and analyze more information about you and your peers and collaborators to be able to serve you more accurately targeted advertisements – which, after all, is what Google’s primary business is all about.

All right…so what about it? Was using Google Wave a transformative experience? Has it turned collaboration on its head? Will this be the platform to transform the global workforce into a seamless, well-oiled machine functioning at high efficiency regardless of geographical location?

My sense is that Google Wave is good but not great. The crushing weight of its complexity means that the casual user (i.e., most people) will never be able to (or, more precisely, never want to) experience the full capabilities of Google Wave. Like Microsoft Word, you will end up with 80% of the users using 20% of the functionality, with a huge reservoir of provided functionality never being touched. In fact, in a completely non-scientific series of discussions with end users, most perceive Google Wave to be no more than yet another email tool (albeit a complex one) and therefore of no real benefit to them.

My personal experience is that it is a cool collaboration environment and I appreciate its flexibility, although I have not yet attempted to develop any custom applications for it. I do like the idea of collecting all discussion-associated data in one place, being able to include appropriate people in the thread and having everything they need to come up to speed within easy reach. Personally, I still need to talk to people and see them face-to-face, but I appreciate the repository/notebook/library/archive functionality afforded by Google Wave.

I still have a few invitations left so if you want to experience the wave yourself and be your own judge, post a comment with your email address and I’ll shoot an invite out to you.

In these days of tight budgets but no shortage of things to do, more and more companies are finding that having a flexible workforce is key. This means that having the ability to apply immediate resources to any project is paramount. But just as important is the ability to de-staff a project quickly and without the messiness of layoffs.

While this harsh work environment seems challenging, it actually can be very rewarding, both professionally and monetarily, and can see both the employers and the employees coming out winners.

The employees have the benefit of being able to work on a wide variety of disparate projects. This can yield a level of excitement unlikely to be experienced in a full-time position, which is usually focused on developing deep expertise in a narrow area. The employers get the ability to quickly staff up to meet schedules and requirements and the ability to scale back just as quickly.

Of course, this flexibility – by definition – means that there is no stability and limited predictability for both employees and employers. The employees don’t know when or where they will see the next job and the employers don’t know if they will get the staff they need when they need it. While some thrive in this sort of environment, others seek the security of knowing with some degree of certainty what tomorrow brings. With enough experience with a single contractor, an employer can choose to attempt to flip the contractor from a “renter” to an “owner”. Similarly, the contractor may find the work atmosphere so enticing that settling down and getting some “equity” might be ideal.

It is a strange but mutually beneficial arrangement, with each party having equal standing and, in effect, both holding the right of first refusal in the relationship. And it may very well be the new normal in the workplace.

It’s the Rodney Dangerfield of disciplines. Sweaty, unkempt, unnerving, uncomfortable and disrespected. Test. Yuck. You hate it. Design, baby! That’s where it’s at! Creating! Developing! Building! Who needs test? It’s designed to work!

In actuality, as much as it pains me to admit it, "trust, but verify" is a good rule of thumb. Of course, every design is developed with an eye to excellence. Of course, all developers are very talented and unlikely to make mistakes of any sort. But it’s still a good idea to have a look-see at what they have done. It’s even better if they leave in the code or hardware that they used to verify their own implementations. The fact of the matter is that designers add in all manner of extras to help them debug and verify their designs and then – just before releasing it – they rip out all of this valuable apparatus. Big mistake. Leave it! It’s all good! If it’s code – enable it with a compile-time define or environment variable. If it’s hardware – connect it up to your boundary-scan infrastructure and enable it using instructions through your IEEE Std 1149.1 Test Access Port. These little gizmos that give you observability and diagnosability at run time will also provide an invaluable aid in the verification and test process. Please…share the love!
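On the software side, that can be as simple as the sketch below: a minimal, hypothetical example (the function, macro and environment-variable names are mine, invented for illustration) of self-checking apparatus that stays in the source, compiles away to nothing by default and is switched on with a compile-time define plus an environment variable.

    /* Hypothetical sketch: debug/verification hooks left in shipping code,
       gated by a compile-time define and a runtime environment variable. */
    #include <stdio.h>
    #include <stdlib.h>

    #ifdef ENABLE_SELF_CHECK
    /* Verify a buffer's checksum before it is handed downstream. */
    static void self_check(const unsigned char *buf, size_t len, unsigned expected)
    {
        /* Run the (possibly slow) check only if asked for at run time. */
        if (getenv("MYAPP_SELF_CHECK") == NULL)
            return;

        unsigned sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += buf[i];

        if (sum != expected)
            fprintf(stderr, "self-check FAILED: got %u, expected %u\n", sum, expected);
    }
    #else
    /* Compiles away in a normal build; arguments are still referenced
       so the compiler does not complain about unused variables. */
    #define self_check(buf, len, expected) ((void)(buf), (void)(len), (void)(expected))
    #endif

    int main(void)
    {
        unsigned char payload[] = { 1, 2, 3, 4 };
        self_check(payload, sizeof payload, 10);  /* the apparatus stays in the source */
        /* ... normal processing ... */
        return 0;
    }

Build with something like cc -DENABLE_SELF_CHECK app.c and set MYAPP_SELF_CHECK=1 in the environment to turn the checks on; leave both off and the apparatus costs you nothing.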

The WWW is The Wheel

For no apparent reason, but more so than ever before, I have come to believe that the World Wide Web can truly be the source of all knowledge and a savior for the lazy (or at least an inspiration to those who need examples to learn or get started).

I was writing a simple application in C the other day and needed to code up a dynamic array. It seemed to me that actually typing out the 20 or so lines of code to implement the allocation and management was just too much effort. And then it occurred to me – “Why reinvent the wheel?” People write dynamic arrays in C every day and I bet that at least one person posted their implementation to the WWW for all to see and admire. A quick search revealed that to be true and in minutes I was customizing code to suit my needs.
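For the record, what I ended up with looked roughly like the sketch below. This is a reconstruction from memory rather than the actual snippet I found, and the names are my own, but it captures the 20-or-so lines in question: a growable array of ints that doubles its capacity whenever it runs out of room.

    #include <stdlib.h>

    /* A minimal dynamic array of ints (illustrative reconstruction). */
    typedef struct {
        int    *data;
        size_t  size;      /* elements in use */
        size_t  capacity;  /* elements allocated */
    } dyn_array;

    void da_init(dyn_array *a)
    {
        a->data = NULL;
        a->size = 0;
        a->capacity = 0;
    }

    /* Append a value, doubling the allocation as needed.
       Returns 0 on success, -1 if the allocation fails. */
    int da_push(dyn_array *a, int value)
    {
        if (a->size == a->capacity) {
            size_t new_cap = a->capacity ? a->capacity * 2 : 8;
            int *p = realloc(a->data, new_cap * sizeof(int));
            if (p == NULL)
                return -1;
            a->data = p;
            a->capacity = new_cap;
        }
        a->data[a->size++] = value;
        return 0;
    }

    void da_free(dyn_array *a)
    {
        free(a->data);
        da_init(a);
    }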

Now…did I really save time? In the end, did my customizations result in no net increase in productivity? In many ways, for me, it didn’t matter. I am the sort of person who needs some inspiration to overcome a blank sheet of paper – something concrete – a real starting point – even a bad one. Having that implementation in place gave me that starting point, and even if I had ended up deleting everything and rewriting it, I feel I benefited, at least psychologically, from having somewhere to start.

It is also valuable to see and learn from the experience of others. Why should I re-invent something so basic? Why not use what’s already extant and spend my energy and talent where I can really add value?

But it is also true that although the WWW may indeed be "the wheel", it sometimes provides a wheel made from wood or stone, one that has a flat tire or is damaged beyond repair. For me, though, even that is beneficial, since it helps me overcome that forbidding blank sheet of paper.

I have decided to use this blog to out-gas on things I am thinking about. Aren’t you happy about that?

I have spent some time looking around the popular virtual world platform Second Life. In a virtual world, you assign yourself an avatar (basically a cartoon character representing you) and walk around this large simulated space and interact with other avatars and objects. There is a fun and coolness factor to it all. There are museums to explore, historical location recreations, science fiction universes and dance floors. Lots of dance floors. But what really intrigues me is the very presence in Second Life of large Fortune 500 companies like IBM and Cisco. What are they up to there?

I spoke to some people experiencing and supporting those companies’ Second Life presence in impromptu discussions "in world" (as they say). The conversations often left me with more questions than answers. While the Second Life experience promises a high degree of interaction, it comes at a significant cost. A user needs to become conversant in the use of the proprietary viewer (a special-purpose browser that connects you to the virtual world), the methods for creating, manipulating and animating objects, and the use of the on-line chat facility or its voice-based interaction mechanism. The primary question is: given all of these costs and barriers to adoption, what is the benefit of this experience over, say, WebEx (which Cisco actually owns) or Telepresence (which Cisco also heavily promotes) or even a standard teleconference? The common answer was either that Second Life was "cool" or "fun" – just what I experienced. But is that enough? Does that constitute "the killer app" for virtual worlds? It’s "cool" and "fun"?

There are also, however, some intangibles. People hiding behind their personal (and anonymous) avatars tend to be a little bolder. They tend to speak more openly and honestly. That can allow for more compelling and fruitful interactions and, in collaborative circumstances, result in better outcomes and solutions. Some studies have even shown that this boldness is transferable to real life. So maybe these companies are engaging in a little social cognitive therapy for the legions of techies they employ, expecting to elicit better human interaction as a result. And that makes it all worthwhile.

Hello world!

The new web-based home of Formidable Engineering Consultants is getting into shape. You will be able to track the rise and rise of this organization in real time, or at least as soon as I get around to updating this page. Thanks for your continuing support. As always, comments are welcome.
