Tag: hardware

The hoi polloi are running fast toward the banner marked “Internet of Things”.  They are running at full speed, chanting “I-o-T, I-o-T, I-o-T” all along the way. But for the most part, they are each running toward something different.  For some, it is a network of sensors; for others, it is a network of processors; for still others, it is a previously unconnected and unnetworked embedded system, now attached to a network.  Some say it is any of those things connected to the cloud, and there are those who say it is simply renaming whatever they already have and adding the marketing label “IoT” or “Internet of Things” to the box.

So what is it?  Why the excitement? And what can it do?

At its simplest, the Internet of Things is a collection of endpoints, each of which has one or more sensors, a processor, some memory and some sort of wireless connectivity.  The endpoints are connected to a server – where “server” is defined in the broadest possible sense.  It could be a phone, a tablet, a laptop or desktop, a remote server farm or some combination of those (say, a phone that then talks to a server farm).  Along the transmission path, data collected from the sensors goes through increasingly higher levels of analysis and processing.  For instance, at the endpoint itself, raw data may be displayed, averaged or corrected, then delivered to the server and stored in the cloud.  Once in the cloud, data can be analyzed historically, compared with other similarly collected data, or correlated to related data – or even unrelated data – in a search for unexpected or heretofore unseen correlations.  Fully processed data can then be delivered back to the user in some meaningful way: perhaps as a trend display, or as a prescriptive suite of actions or recommendations.  And, of course, the fully analyzed data and its correlations could also be sold or otherwise used to target advertising or product or service recommendations.
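As an illustration of that first processing stage, here is a minimal sketch of what an endpoint might do before shipping data upstream. The function names, the calibration offset and the payload shape are all hypothetical, not taken from any particular IoT stack:

```python
from statistics import mean

CAL_OFFSET = -0.3  # hypothetical per-unit calibration correction


def process_at_endpoint(raw_samples):
    """First-stage processing at the endpoint: correct each reading,
    then average the batch down to a single value."""
    corrected = [s + CAL_OFFSET for s in raw_samples]
    return mean(corrected)


def endpoint_payload(sensor_id, raw_samples):
    """The small, pre-digested record the endpoint sends to the server."""
    return {"sensor": sensor_id,
            "value": round(process_at_endpoint(raw_samples), 2)}


print(endpoint_payload("temp-01", [20.1, 20.4, 19.9]))
```

Everything downstream – historical analysis, cross-correlation, recommendations – would then operate on these server-side records rather than on the raw sensor stream.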

There is a further enhancement to this collection of endpoints and associated data analysis processes described in my basic IoT system.  The ‘things’ on this Internet of Things could also use the data they collect to improve the system itself.  This could include identifying missing data elements or sensor readings, bad timing assumptions or other ways to improve the capabilities of the overall system.  If the endpoints are reconfigurable, either through programmable logic (like Field Programmable Gate Arrays) or through software updates, then new hardware or software images could be distributed with enhancements (or, dare I say, bug fixes) throughout the system to provide it with new functionality.  This makes the IoT system both evolutionary and field upgradeable.  It extends the deployment lifetime of the device and could potentially extend the time in market at both the beginning and the end of the product life cycle. You could get to market earlier with limited functionality, introduce new features and enhancements post-deployment, and continue to add innovations when the product might ordinarily have been obsoleted.
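The field-upgrade idea, reduced to its essentials: endpoints report the image version they are running, and the server works out which of them need a new firmware (or FPGA) image. The endpoint names and version strings below are invented for illustration:

```python
def upgrade_plan(deployed, latest):
    """Return the endpoints whose installed image lags the latest release."""
    return sorted(ep for ep, version in deployed.items() if version != latest)


# Hypothetical fleet: two endpoints, one of them a release behind.
deployed = {"ep-1": "1.0", "ep-2": "1.1"}
print(upgrade_plan(deployed, "1.1"))  # → ['ep-1']
```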

Having defined an ideal IoT system, the question becomes: how does one turn it into a business? The value of these IoT applications is based on the collection of data over time and the processing and interpretation (mining) of said data.  As more data are collected over time, the value of the analysis increases (though likely asymptotically, approaching some maximal value).  The data analysis could include information like:

  • Your triathlon training plan is on track, you ought to taper the swim a bit and increase the running volume to 18 miles per week.
  • The drive shaft on your car will fail in the next 1 to 6 weeks – how about I order one for you and set up an appointment at the dealership?
  • If you keep eating the kind of food you have for the past 4 days, you will gain 15 pounds by Friday.

The above sample analysis is obviously from a variety of different products or systems but the idea is that by mining collected and historical data from you, and maybe even people ‘like’ you, certain conclusions may be drawn.
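The “asymptotically approaching some maximal value” claim can be made concrete with a toy diminishing-returns model. The ceiling V_MAX and the rate constant K are made-up numbers; the point is only the shape of the curve:

```python
import math

V_MAX = 100.0  # hypothetical ceiling on the value of the analysis
K = 0.25       # hypothetical rate at which each new sample adds value


def analysis_value(n_samples):
    """Value of the analysis after n samples: rises quickly at first,
    then flattens out as it approaches V_MAX."""
    return V_MAX * (1 - math.exp(-K * n_samples))


for n in (1, 10, 50):
    print(n, round(analysis_value(n), 1))
```

The first samples are worth a lot; the ten-thousandth adds almost nothing – which is why the pricing discussion below leans toward small, recurring fees rather than a one-time charge for the data itself.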

Since the analysis is continuous and the feedback unsynchronized to any specific event or time, the fees for these services would have to be subscription-based.  A small charge every month would deliver the analysis and prescriptive suggestions as and when needed.

This would suggest that when you buy a car, instead of an extended service contract that you pay for as a lump sum upfront, you pay, say, $5 per month, the IoT system is enabled on your car, and your car schedules service with a complete list of required parts and tasks exactly when and as needed.

Similarly in the health services sector, your IoT system collects all of your biometric data automatically, loads your activity data to Strava, alerts you to suspicious bodily and vital sign changes and perhaps even calls the doctor to set up your appointment.

The subscription fees should be low because they provide for efficiencies in the system that benefit both the subscriber and the service provider.  The car dealer orders the parts they need when they need them, reducing inventory, providing faster turnaround of cars, obviating the need for overnight storage of cars and payment for rentals.

Doctors see patients less often and then only when something is truly out of whack.

And on and on.

Certainly, tiered subscription levels may make sense for some businesses.  There may be ‘free’ variants that provide limited but still useful information to the subscriber, at the cost of sharing their data for broader community analysis. Paid subscribers who share their data for use in broader community analysis may get reduced subscription rates. There are obviously many possible subscription models to investigate.

These industry capabilities and directions facilitated by the Internet of Things are either pollyannaish or visionary.  It’s up to us to find out. But for now, what do you think?


Back in 1992, after the Berlin Wall fell and communist states were toppled one after another, Francis Fukuyama published a book entitled The End of History and the Last Man.  It received much press at the time for its bold and seemingly definitive statement (specifically, that whole ‘end of history’ thing, with the thesis that capitalist liberal democracy is that endpoint). The result was much discussion, discourse and theorizing – and presumably a higher sales volume for a book that likely still graces many a bookshelf, binding still uncracked.  Now it’s my turn to be bold.

Here it is:

With the advent and popularization of the smartphone, we are now at the end of custom personal consumer hardware.

That’s it.  THE END OF HARDWARE.  Sure, there will be form factor changes and maybe a few additional new hardware features, but all of these changes will be incorporated into the smartphone handset as the platform.

Maybe I’m exaggerating – but only a little.  Really, there’s not much more room for hardware innovation in the smartphone platform; as currently deployed, it contains the building blocks of any custom personal consumer device. Efforts are clearly being directed at gadgets to replace those cell phones.  That might be smart watches, wearable computers, tablets or even phablets. But these are really just changes in form, not function.  Much like the evolution of the PC, it appears that mobile hardware has reached the point where the added value of hardware has become incremental and less valuable.  The true innovation is in the manner in which software can be used to connect resources and increase the actual or perceived power of that platform.

In the PC world, faster and faster microprocessors were of marginal utility to the great majority of end-users who merely used their PCs for reading email or doing PowerPoint.  Bloated applications (of the sort that the folks at Microsoft seem so pleased to develop and distribute) didn’t even benefit from faster processors as much as they did from cheaper memory and faster internet connections.  And now, we may be approaching that same place for mobile applications.  The value of some of these applications is limited more by the availability of on-device resources like memory, and by the speed of the internet connection through the cell provider, than by the actual hardware features of the handset.  Newer applications are more and more dependent on big data and other cloud-based resources.  The handset is merely a window into those data sets.  A presentation layer, if you will.  Other applications use the information collected locally from the device’s sensors and hardware peripherals (geographical location, speed, direction, scanned images, sounds, etc.) in concert with cloud-based big data to provide services, entertainment and utilities.

In addition, and more significantly, we are seeing developers build smartphone applications that use the phone’s peripherals to directly interface with other local hardware (like PCs, projectors, RC toys, headsets, etc.) to extend the functionality of those products.  Why buy a presentation remote when you can get an app? Why buy a remote for your TV when you can get an app? Why buy a camera when you already have one on your phone? A compass? A flashlight? A GPS? An exercise monitor?

Any consumer-targeted handheld device need no longer be developed as an independent hardware platform.  You just develop an app to use the features of the handset that you need and deploy the app.  Perhaps additional special-purpose sensor packs might be needed to augment the capabilities of the smartphone for specialized uses, but any mass-market application can be fully realized using the handset as the existing base and a few hours of coding.

And if you doubt that handset hardware development has plateaued, then consider the evolution of the Samsung Galaxy S3 to the Samsung Galaxy S4.  The key differences between the two devices are the processor capabilities and the camera resolution.  The bulk of the innovations are purely software-related and could have been implemented as part of the Samsung Galaxy S3 itself without really modifying the hardware.  The differences between the iPhone 4s and the iPhone 5s were a faster processor, a better camera and a fingerprint sensor.  Judging from a completely unscientific survey of end-users that I know, the fingerprint sensor remains unused by most owners. An innovation that has no perceived value.

The economics of this thesis is clear.  If a consumer has already spent $600 or so on a smartphone, lives most of their life on it anyway and carries it with them everywhere, are you going to have better luck selling them a new gadget for $50-$250 (that they have to order, wait for, learn how to use, get comfortable with and then carry around) or an app that they can buy for $2 and download and use in seconds – when they need it?


Let me start by being perfectly clear.  I don’t have Google Glass.  I’ve never seen a pair live.  I’ve never held or used the device.  So basically, I just have strong opinions based on what I have read and seen – and, of course, the way I have understood what I have read and seen.  Sergey Brin recently did a TED talk about Google Glass during which, after sharing a glitzy, well-produced video commercial for the product, he maintained that they developed Google Glass because burying your head in a smartphone is rude and anti-social.  Presumably, staring off into the projected images produced by Google Glass while still avoiding eye contact and real human interaction is somehow less rude and less anti-social.  But let that alone for now.

The “what’s in it for me” of Google Glass is the illusion of intelligence (or at least the ability to instantly access facts), Internet-based real-time social sharing, real-time scrapbooking and interactive memo taking amongst other Dick Tracy-like functions.

What’s in it for Google is obvious.  At its heart, Google is an advertising company – well, more of an advertising distribution company.  They are a platform for serving up advertisements for all manner of products and services.  Their ads are more valuable if they can directly target people with ads for products or services at a time and place when the confluence of the advertisement and the reality yields a situation in which the person is almost compelled to purchase what is on offer, because it is exactly what they want when they want it.  This level of targeting is enhanced when they know what you like (Google+, Google Photos (formerly Picasa)), how much money you have (Google Wallet), where you are (Android), what you already have (Google Shopping), what you may be thinking (GMail), who you are with (Android) and what your friends and neighbors have and think (all of the aforementioned).  Google Glass, by recording location data and images and registering your likes and other purchases, can work to build and enhance such a personal database.  Even if you choose to anonymize yourself and force Google to de-personalize your data, their guesses may be less accurate, but they will still know about you as a demographic group (male, aged 30-34, lives in zip code 95123, etc.) and perhaps general information based on your locale, the places you visit and where you might be at any time.  So, I immediately see the value of Google Glass for Google and Google’s advertising customers, but see less value in its everyday use by ordinary folks unless they seek to be perceived as cold, anti-social savants who may possibly be on the autism spectrum.

I don’t want to predict that Google Glass will be a marketplace disaster, but the value statement for it appears to be limited.  A lot of the capabilities touted for it are already on your smartphone or soon to be released for it.  There is talk of image scanning applications that immediately bring up information about whatever it is you’re looking at.  Well, Google’s own Goggles is an existing platform for that, and it works on a standard mobile phone.  In fact, all of the applications touted thus far for Google Glass rely on some sort of visual analysis or geolocation-based look-up that is equally applicable to anything with a camera. It seems to me that the “gotta have the latest gadget” gang will flock to Google Glass as they always do to these devices, but appealing to the general public may be a more difficult task.  Who really wants to wear their phone on their face?  If the benefit of Google Glass is its wearability, then maybe Apple’s much-rumored iWatch is a less intrusive and less nerdy-looking alternative.  Maybe Apple still better understands what people really want when it comes to mobile connectivity.

Ultimately, Google Glass may be a blockbuster hit or just an interesting (but expensive) experiment.  We’ll find out by the end of the year.


That venerable electronic test standard IEEE Std 1149.1 (also known as JTAG; also known as Boundary-Scan; also known as Dot 1) has just been freshened up.  This is no ordinary freshening.  The standard, last revisited in 2001, was long overdue for clarification and enhancement.  It’s been a long time coming and now…it’s here.  While the guts remain the same and in good shape, some very interesting options and improvements have been added.  The improvements are intended to provide support for testing and verification of the more complex devices currently available and to acknowledge the more sophisticated test algorithms and capabilities afforded by the latest hardware.  There is an attempt, as well (perhaps, though, only as well as one can do this sort of thing), to anticipate future capabilities and requirements and to provide a framework within which such capabilities and requirements can be supported.  Of course, since the bulk of the changes are optional, their value will only be realized if the end-user community embraces them.

There are only some minor clarifications or relaxations to the rules that are already established. For the most part, components currently compliant with the previous version of the standard will remain compliant with this one. There is but one “inside baseball” sort of exception.  The long denigrated and deprecated BC_6 boundary-scan cell has finally been put to rest. It is, with the 2013 version, no longer supported or defined, so any component supplier who chose to utilize this boundary-scan cell – despite all warnings to the contrary – must now provide their own BSDL package defining this BC_6 cell if they upgrade to using the STD_1149_1_2013 standard package for their BSDL definitions.

While this is indeed a major revision, I must again emphasize that all the new items introduced are optional.  One of the largest changes is in documentation capability, incorporating the introduction of a new executable description language called Procedural Description Language (PDL) to document test procedures unique to a component.  PDL, a Tcl-like language, was adopted from the work of the IEEE P1687 working group. 1687 is a proposed IEEE standard for the access to and operation of embedded instruments (1687 is therefore also known as iJTAG or Instrument JTAG). The first iteration of that standard was based on use of the 1149.1 Test Access Port and controller to provide chip access – and a set of modified 1149.1-type test data registers to create an access network for embedded instruments. PDL was developed to describe access to and operation of these embedded instruments.

Now, let’s look at the details.  The major changes are as follows:

In the standard body:

  • In order to allow devices to maintain their test logic in test mode, a new, optional test mode persistence controller was introduced.  This means that test logic (like the boundary-scan register) can remain behaviorally in test mode even if the active instruction does not force test mode. To support this, the TAP controller was cleaved into two parts: one that controls test mode and another that has all the rest of the TAP functionality. In support of this new controller, there are three new instructions: CLAMP_HOLD and TMP_STATUS (both of which access the new TMP status test data register) and CLAMP_RELEASE.
  • Recognizing the emerging requirement for unique device identification codes, a new, optional ECIDCODE instruction was introduced along with an associated electronic chip identification test data register.  This instruction-register pair is intended to supplement the existing IDCODE and USERCODE instructions and allow access to an Electronic Chip Identification value that could be used to identify and track individual integrated circuits.
  • The problem of initializing a device for test has been addressed by providing a well-defined framework to formalize this process. The new, optional INIT_SETUP, INIT_SETUP_CLAMP, and INIT_RUN instructions, paired with their associated initialization data and initialization status test data registers, were provided to this end. The intent is that these instructions formalize the manner in which programmable input/output (I/O) can be set up prior to board or system testing, as well as providing for the execution of any tasks required to put the system logic into a safe state for test.
  • Recognizing that resetting a device can be complex and require many steps or phases, a new, optional, IC_RESET instruction and its associated reset_select test data register is defined to provide formalized control of component reset functions through the TAP.
  • Many devices now have a number of separate power domains, which could result in sections of the device being powered down while others are powered up.  A single, uniform boundary-scan register does not align well with that device style.  So, to support a single test data register routed through domains that may be powered down, an optional standard TAP-to-test-data-register interface is recommended that allows for segmentation of test data registers. The concept of register segments allows for segments that may be excluded or included and is generalized sufficiently for utilization beyond the power domain example.
  • There have also been a few enhancements to the boundary-scan register description to incorporate the following:
    1. Optional excludable (but not selectable) boundary-scan register segments
    2. Optional observe-only boundary-scan register cells to redundantly capture the signal value on all digital pins except the TAP pins
    3. Optional observe-only boundary-scan register cells to capture a fault condition on all pins, including non-digital pins, except the TAP pins.
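To make the first bullet concrete, here is a toy simulation of the test-mode persistence behavior: an instruction like BYPASS would normally drop the part out of test mode, but the persistence latch set by CLAMP_HOLD keeps the test logic behaviorally in test mode until CLAMP_RELEASE. This is a simplified illustration of the text above, not a normative model of the 1149.1-2013 controller:

```python
class TapSim:
    """Toy model of a TAP with the optional test-mode persistence (TMP) latch."""

    # Instructions that force test mode on their own (a representative subset).
    FORCES_TEST_MODE = {"EXTEST", "CLAMP", "HIGHZ"}

    def __init__(self):
        self.instruction = "BYPASS"
        self.tmp_latched = False  # the new persistence latch

    def load_instruction(self, instr):
        if instr == "CLAMP_HOLD":
            self.tmp_latched = True    # enter persistent test mode
        elif instr == "CLAMP_RELEASE":
            self.tmp_latched = False   # leave persistent test mode
        self.instruction = instr

    def in_test_mode(self):
        # Test mode is active if the instruction forces it OR the latch is set.
        return self.instruction in self.FORCES_TEST_MODE or self.tmp_latched


tap = TapSim()
tap.load_instruction("EXTEST")
assert tap.in_test_mode()
tap.load_instruction("CLAMP_HOLD")
tap.load_instruction("BYPASS")   # BYPASS alone would drop test mode...
assert tap.in_test_mode()        # ...but the TMP latch keeps it active
tap.load_instruction("CLAMP_RELEASE")
tap.load_instruction("BYPASS")
assert not tap.in_test_mode()
```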

The Boundary Scan Description Language annex was rewritten and includes:

  • Increased clarity and consistency based on end-user feedback accumulated over the years.
  • A technical change was made such that BSDL is no longer a “proper subset” of VHDL, but is now merely “based on” VHDL. This means that BSDL maintains VHDL’s flavor but has, for all intents and purposes, been “forked”.
  • As a result of this forking, formal definitions of language elements are now included in the annex instead of relying on inheritance from VHDL.
  • Also as a result of this forking, some changes to the BNF notation used, including definition of all the special character tokens, are in the annex.
  • Pin mapping now allows for documenting that a port is not connected to any device package pin in a specific mapped device package.
  • The boundary-scan register description introduces new attributes for defining boundary-scan register segments, and introduces a requirement for documenting the behavior of an un-driven input.
  • New capabilities are introduced for documenting the structural details of test data registers:
    1. Mnemonics may be defined that may be associated with register fields.
    2. Name fields within a register or segment may be defined.
    3. Types of cells used in a test data register (TDR) field may be defined.
    4. One may hierarchically assemble segments into larger segments or whole registers.
    5. Constraints may be defined on the values to be loaded in a register or register field.
    6. A register field or bit may be associated with specific ports.
    7. Power ports may be associated with other ports.
  • The User Defined Package has been expanded to support logic IP providers who may need to document test data register segments contained within their IP.

As I stated earlier, a newly adopted language, PDL, has been included in this version of the standard.  The details of this language are included as part of Annex C. PDL is designed to document the procedural and data requirements for some of the new instructions. PDL serves a descriptive purpose in that regard but, as such, it is also executable should a system choose to interpret it.

It was decided to adopt and develop PDL to support the new capability of initializing internal test data register fields and configuring complex I/Os prior to entering the EXTEST instruction.  Since the data required for initialization could vary for each use of the component on each distinct board or system design, there needed to be an algorithmic way to describe the data set-up and application.  PDL was therefore adopted and tailored to the BSDL register descriptions and the needs of IEEE 1149.1.

Since the concept of BSDL and PDL working together is new and best explained via examples, Annex D is provided to supply extended examples of BSDL and PDL used together to describe the structure of, and the procedures for using, the new capabilities.  Similarly, Annex E provides example pseudo-code for the execution of the PDL iApply command, the most complex of the new commands in PDL.

So that is the new 1149.1 in a nutshell. A fair amount of new capabilities. Some of it complex. All of it optional.  Will you use it?


Scott McNealy, the former CEO of the former Sun Microsystems, said in a late-1990s address to the Commonwealth Club that the future of the Internet is in security. Indeed, it seems that much effort and capital have been invested in addressing security matters. Encryption, authentication, secure transaction processing, secure processors, code scanners, code verifiers and a host of other approaches to make your system and its software and hardware components into a veritable Fort Knox. And it’s all very expensive and quite time consuming (both in development and actual processing). And yet we still hear of routine security breaches, data and identity theft, on-line fraud and other crimes. Why is that? Is security impossible? Unlikely? Too expensive? Misused? Abused? A fiction?

Well, in my mind, there are two issues, and they are the weak links in any security endeavour. The two actually have one common root. That common root, as Pogo might say, “is us”. The first, which has been in the press very much of late (and always), is the reliance on passwords. When you let customers in and provide them security using passwords, they sign up using passwords like ‘12345’ or ‘welcome’ or ‘password’. That is usually combated through the introduction of password rules. Rules usually indicate that passwords must meet some minimum level of complexity – typically something like requiring that each password have a letter, a number and a punctuation mark and be at least 6 characters long. This might cause some customers to get so aggravated because they can’t use their favorite password that they don’t bother signing up at all. Other end users get upset but change their passwords to “a12345!” or “passw0rd!” or “welc0me!”. And worst of all, they write the password down and put it on a sticky note on their computer.
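Those complexity rules are easy to express in code – and just as easy to satisfy with exactly the weak passwords described above. A minimal sketch of such a rule checker (the specific rules mirror the example in the paragraph):

```python
import string


def meets_rules(password, min_len=6):
    """Check the usual complexity rules: at least one letter, one digit,
    one punctuation mark, and a minimum length."""
    return (
        len(password) >= min_len
        and any(c.isalpha() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )


assert not meets_rules("12345")      # rejected, as intended
assert not meets_rules("welcome")    # rejected, as intended
assert meets_rules("a12345!")        # technically compliant, still guessable
assert meets_rules("passw0rd!")      # likewise
```

The checker happily passes “passw0rd!”, which is the point: rules like these raise the floor on complexity but do nothing about predictability.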

Of course, ordinary users are not the only ones to blame, administrators are human, too, and equally as fallible. Even though they should know better, they are equally likely to have the root or administrator password left at the default “admin” or even nothing at all.

The second issue is directly the fault of the administrator – but it is wholly understandable. Getting a system, well a complete network of systems, working and functional is quite an achievement. It is not something to be toyed around with once things are set. When your OS supplier or application software provider delivers a security update, you will think many times over before risking system and network stability to apply it. The choice must be made. The administrator thinks: “Do I wreak havoc on the system – even theoretical havoc – to plug a security hole no matter how potentially damaging?” And considers that: “Maybe I can rely on my firewall…maybe I rely on the fact that our company isn’t much of a target…or I think it isn’t.” And rationalizes: “Then I can defer the application of the patch for now (and likely forever) in the name of stability.”

The bulk of hackers aren’t evil geniuses who stay up late at night doing forensic research and decompilation to find flaws, gaffes, leaks and holes in software and systems. No, they are much more likely to be people who read a little about the latest flaws and the most popular passwords and spend their nights just trying stuff to see what they can see. A few of them even specialize in social engineering, in which they simply guess or trick you into divulging your password – maybe by examining your online social media presence.

The notorious Stuxnet malware worm may be a complex piece of software engineering, but it would have done nothing were it not for the peril of human curiosity. The virus allegedly made its way into secure facilities on USB memory sticks. Those memory sticks were carried in human hands and inserted into the targeted computers by those same hands. How did they get into those human hands? A few USB sticks with the virus were probably sprinkled in the parking lot outside the facility. Studies have determined that people will pick up USB memory sticks they find and insert them into their PCs about 60% of the time. The interesting thing is that the likelihood of grabbing and using those USB devices goes up to over 90% if the device has a logo on it.

You can have all the firewalls and scanners and access badges and encryption and SecureIDs and retinal scans you want. In the end, one of your best and most talented employees grabbing a random USB stick and using it on his PC can be the root cause of devastation that could cost you staff years of time to undo.

So what do you do? Fire your employees? Institute policies so onerous that no work can be done at all? As is usual, the best thing to do is apply common sense. If you are not a prime target like a government, a security company or a repository of reams of valuable personal data – don’t go overboard. Keep your systems up-to-date. The time spent now will definitely pay off in the future. Use a firewall. A good one. Finally, be honest with your employees. Educate them helpfully. None of the scare tactics, no “Loose Lips Sink Ships”, just straight talk and a little humor to help guide and modify behavior over time.


I think it’s high time I authored a completely opinion-based article full of observations and my own prejudices that might result in a litany of ad hominem attacks and insults.  Or at least, I hope it does.  This little bit of prose will outline my view of the world of programmable logic as I see it today.  Again, it is as I see it.  You might see it differently.  But you would be wrong.

First, let’s look at the players.  The two-headed Cerberus of the programmable logic world is Altera and Xilinx.  They battle it out for the bulk of the end-user market share.  After that, there are a series of niche players (Lattice Semiconductor, Microsemi (which recently purchased Actel) and QuickLogic), lesser lights (Atmel and Cypress) and wishful upstarts (Tabula, Achronix and SiliconBlue).

Atmel and Cypress are broadline suppliers of specialty semiconductors.  They each sell a small portfolio of basic programmable logic devices (Atmel CPLDs, Atmel FPGAs and Cypress CPLDs).  As best I can tell, they do this for two reasons.  First, they entered the marketplace and have been in it for about 15 years and at this point have just enough key customers using the devices such that the cost of exiting the market would be greater than the cost of keeping these big customers happy.  The technology is not, by any stretch of the imagination, state of the art so the relative cost of supporting and manufacturing these parts is small.  Second, as a broadline supplier of a wide variety of specialty semiconductors, it’s nice for their sales team to have a PLD to toss into a customer’s solution to stitch together all that other stuff they bought from them.  All told, you’re not going to see any profound innovations from these folks in the programmable logic space.  ‘Nuff said about these players, then.

At the top of the programmable logic food chain are Altera and Xilinx.  These two titans battle head-to-head and every few years exchange the lead.  Currently, Altera has leapt or will leap ahead of Xilinx in technology, market share and market capitalization.  But when it comes to innovation and new ideas, both companies typically offer incremental improvements rather than risky quantum leaps.  They are both clearly pursuing a policy that chases the high-end, fat-margin devices, focusing more and more on the big, sophisticated end-user who is most happy with greater complexity, capacity and speed.  Those margin leaders are Xilinx’s Virtex families and Altera’s Stratix series.  The sweet spot for these devices is low-volume, high-cost equipment like network equipment, storage system controllers and cell phone base stations.  Oddly though, Altera’s recent leap to the lead can be traced to their mid-price Arria and low-price Cyclone families, which offered lower power and a lower price point with the right level of functionality for a wider swath of customers.  Xilinx had no response, having not produced a similarly featured device from the release of the Spartan3 (and its variants) until the arrival of the Spartan6 some 4 years later.  This gap provided just the opportunity that Altera needed to gobble up a huge portion of a growing market.  And then, when Xilinx’s Spartan6 finally arrived, its entry to production was marked by bumpiness and a certain amount of “So what?” from end-users who were about to migrate, or already had migrated, to Altera.

The battle between Altera and Xilinx is based on ever-shrinking technology nodes, ever-increasing logic capacity, faster speeds and a widening variety of IP cores (hard and soft) and, of course, competitive pricing.  There has been little effort on the part of either company to provide any sort of quantum leap of innovation since there is substantial risk involved.  The overall programmable logic market is behaving more like a commodity market.  The true differentiation is price since the feature sets are basically identical.  If you try to do some risky innovation, you will likely have to divert efforts from your base technology.  And it is that base technology that delivers those fat margins.  If that risky innovation falls flat, you miss a generation and lose those fat margins and market share.

Xilinx’s recent announcement of the unfortunately named Zynq device might be such a quantum innovative leap, but it’s hard to tell from the promotional material, which is long on fluff and short on facts.  Is it really substantially different from the Virtex-4 FX of 2004?  Maybe it isn’t, because its announcement does not seem to have instilled any sort of fear over at Altera.  Or maybe Altera is just too frightened to respond?

Lattice Semiconductor has worked hard to find little market niches to serve.  They have done this by focusing mostly on price and acquisitions.  Historically the leader in in-system programmable devices, Lattice saw this lead erode as Xilinx and Altera entered that market using an open standard (rather than a proprietary one, as Lattice did).  In response, Lattice moved to the open standard, acquired FPGA technology and tried to develop other programmable niche markets (e.g., switches, analog).   Lattice has continued to move opportunistically, shifting quickly at the margins of the market to find unserved or underserved programmable logic end-users, with a strong emphasis on price competitiveness.  They have had erratic results and limited success with this strategy and have seen their market share continue to erode.

Microsemi owns the antifuse programmable technology market.  This technology is strongly favored by end-users who want high reliability in their programmable logic.  Unlike the static RAM-based programming technologies used by nearly every other manufacturer, antifuse is not susceptible to single-event upsets, making it ideal for space, defense and similar applications.  The downside is that, unlike static RAM, antifuse is not reprogrammable.  You can only program it once; if you need to fix your downloaded design, you need to get a new part, program it with the new pattern and swap it in for the old one.  Microsemi has attempted to broaden their product offering into more traditional markets by offering more conventional FPGAs.  However, rather than basing their FPGAs’ programmability on static RAM, the Microsemi product, ProASIC, uses flash technology.  A nice incremental innovation offering its own benefits (non-volatile pattern storage) and costs (flash does not scale well with shrinking technology nodes).  In addition, Microsemi is already shipping a Zynq-like device known as the SmartFusion family.  The SmartFusion device includes hard analog IP; as best I can tell, Zynq does not.  SmartFusion is relatively new, so I do not know how popular it is or what additional functionality its end-users are requesting.  I believe the acceptance of the SmartFusion device will serve as an early bellwether for the acceptance of Zynq.

QuickLogic started out as a more general-purpose programmable logic supplier based on a programming technology similar to antifuse, with a low power profile.  Over the years, QuickLogic has chosen to focus their offering as more of a programmable application-specific standard product (ASSP).  The devices they offer include specific hard IP tailored to the mobile market along with a programmable fabric.  As a company, their laser focus on mobile applications leaves them very much a niche player.

In recent years, a number of startups have entered the marketplace.  One might have thought they would target the low end, seeking to provide “good enough” functionality at a low price in an effort to truly disrupt the market from the bottom, gain a solid foothold and sell products to those overserved by what Altera and Xilinx offer.  That turns out not to be the case.  In fact, two of the new entrants (Tabula and Achronix) are specifically after the high-end, high-margin sector that Altera and Xilinx so jealously guard.

The company with the most buzz is Tabula.  They are headed by former Xilinx executive Dennis Segers, who is widely credited with making the decisions that resulted in Xilinx’s stellar growth in the late 1990s with the release of the original Virtex device.  People are hoping for the same magic at Tabula.  Tabula’s product offers what they refer to as a SpaceTime Architecture and 3D Programmable Logic.  Basically, that means your design is sectioned and swapped in and out of the device, much like a program is swapped in and out of a computer’s RAM space.  This provides a higher effective design density on a device having less “hard logic”.  An interesting idea.  It seems like it would use less power than the full design realized on a single chip.  The cost is the complexity of the design software and the critical nature of the system setup on the board (i.e., the memory interface and implementation) needed to ensure the swapping works as promised.  Is it easy to use?  Is it worth the hassle?  It’s hard to tell right now.  There are some early adopters kicking the tires.  If Tabula is successful, will they be able to expand their market beyond where they are now?  Their technology looks like it might scale up very easily to provide higher and higher effective densities.  But does it scale down to low-cost markets easily?  It doesn’t look like it.  There is a lot of overhead associated with all that image swapping, and its value at the low end is questionable.  But I’ll be the first to say: I don’t know.

Achronix, as best I can tell, has staked out the high-speed, high-density market.  That is quite similar to what Tabula is addressing.  The key distinction between the two companies (besides Achronix’s lack of Star Trek-like marketing terminology) is that Achronix is using Intel as their foundry.  This might finally put an end to those persistent annual rumors that Intel is poised to purchase Altera or Xilinx (is it the same analyst every time who leaks this?).  That Intel relationship, and a fabric technology less complex than Tabula’s, means that Achronix might be best situated to offer their product for defense applications that require a secure, on-shore foundry.  If that is the case, then Achronix is aiming at a select and very profitable sector that neither Altera nor Xilinx will let go without a big fight.  Even if successful, where does Achronix expand?  Does their technology scale down to low-cost markets easily?  I don’t think so…but I don’t know.  Does it scale up to higher densities easily?  Maybe.

SiliconBlue is taking a different approach.  They are aiming at the low power, low cost segment.  That seems like more of a disruptive play.  Should they be able to squeeze in, they might be able to innovate their way up the market and cause some trouble for Xilinx and Altera.  The rumored issue with SiliconBlue is that their devices aren’t quite low power enough or quite cheap enough to fit their intended target market.  The other rumor is that they are constantly looking for a buyer.  That doesn’t instill a high level of confidence now, does it?

So what does all this mean?  The Microsemi SmartFusion device might be the quantum innovative leap most likely to extend the programmable logic market space.  It may be the one product with the potential to serve an unserved market and bring more end-users and applications on board.  But the power and price point might not be right.

The ability of any programmable logic solution to expand beyond the typical sweet spots is based on its ability to displace other technologies at a lower cost and with sufficient useful functionality.  PLDs are competing not just against ASSPs but also against multi-core processors and GPUs.  Multi-core processors and GPUs offer a simpler programming model (using common programming languages), relatively low power and a wealth of application development tools with a large pool of able, skilled developers.  PLDs still require understanding hardware description languages (like VHDL or Verilog HDL) as well as common programming languages (like C), in addition to specific conceptual knowledge of hardware and software.  On top of all that, programmable logic often delivers higher power consumption at a higher price point than competing solutions.

In the end, the real trick is not just providing a hardware solution that delivers the correct power and price point but a truly integrated tool set that leverages the expansive resource pool of C programmers rather than the much smaller resource puddle of HDL programmers. And no one, big or small, new or old, is investing in that development effort.


I have spent a fair amount of my formative years in and around the field programmable gate array (FPGA) industry.  I participated in the evolution of FPGAs from a convenient repository for glue logic and a pricey but useful prototyping platform to a convenient repository for lots of glue logic, an affordable but still a little pricey platform to improve time-to-market and a useful system-on-a-chip platform.  There was much talk about FPGAs going mainstream, displacing all but a few ASICs and becoming the vehicle of choice for most system implementations.  It turns out that last step…the mainstreaming, the death of ASICs, the proliferating system-on-chip…is still underway.  And maybe it’s just around the corner, again.  But maybe it’s not.

FPGA companies (well, Xilinx and Altera) appear to be falling prey to the classic disruptive technology trap described by Clayton Christensen: listening to the calls of the deans of Wall Street and pursuing fat margins.  Whether it’s Virtex or Stratix, both Xilinx and Altera are innovating at the high end, delivering very profitable and very expensive parts that their biggest customers want while pretty much ignoring the little guys who are looking for cheap, functional and mostly low-power devices.

This opens the door for players like SiliconBlue, Actel or Lattice to pick a niche and exploit the heck out of it.  Be it low power, non-volatile storage or security, these folks are picking up some significant business here and there.

This innovation trap, however, ignores a huge opportunity that really only a big player can address.  I think that the biggest competitor to FPGAs is not ASSPs or ASICs or even other cheaper FPGAs.  I think that what everyone needs to be watching out for is CPUs and GPUs.

Let’s face it, even with an integrated processor in your FPGA, you still really need to be a VHDL or Verilog HDL developer to build systems based on the FPGA.  And how many HDL designers are there worldwide?  Tens of thousands?  Perhaps.  Charitably.  This illuminates another issue with systems-on-a-chip – software and software infrastructure. I think this might even be the most important issue acting as an obstacle to the wide adoption of programmable logic technology. To design a CPU or GPU-based system, you need to know C or C++.  How many C developers are there worldwide?  Millions?  Maybe more.

With a GPU you are entering the world of tessellation automata or systolic arrays.  It is easier (but still challenging) to map a C program to a processor grid than to a sea of gates.  And you also get to leverage the existing broad set of software debug and development tools.  What would you prefer to use to develop your next system on a chip – SystemC with spotty support infrastructure, or standard C with deep and broad support infrastructure?

The road to the FPGA revolution is littered with companies whose products started as FPGA-based with a processor to help, but then migrated to a full multi-core CPU solution, dumping the FPGA (except for data path and logic consolidation).  Why is that?  Because to make an FPGA solution work, you need to be an expert immersed in FPGA architectures, and you need to develop your own tools to carefully divide hardware and software tasks.  And in the end, to get really great speeds and results, you need to keep tweaking your system and reassigning hardware and software tasks.   And then there’s the debugging challenge.  In the end – it’s just hard.

On the other hand, grab an off-the-shelf multi-core processor, whack together some C code, compile it and run it and you get pretty good speeds and the same results.  On top of that – debugging is better supported.

I think FPGAs are great and someday they may be able to provide a real system-on-a-chip solution but they won’t until FPGA companies stop thinking like semiconductor manufacturers and start thinking (and acting) like the software application solution providers they need to become.


iPad Explained

In a previous post, I admitted to the fact that I was ignorant of or perhaps merely immune to the magic of the iPad.  Since that time, through a series of discussions with people who do get it I have come to understand the magic of the iPad and also why it holds no such power over me.  

Essentially, the iPad is a media consumption device.  It is for those who consume movies, videos, music, games, puzzles, newspapers, Facebook, MySpace, magazines, YouTube and all of that stuff available on the web, but who do not have a requirement for lots of input (typing or otherwise).  You can tap out a few emails or register for a web site but, really, it’s not a platform for writing documents, developing presentations, writing code or working out problems and doing analysis.  That is, unless you buy a few pricey accessories.

The pervasive (well, at least around here) iPad billboards really say it best.  They typically feature casually attired torsos reclining, with legs raised, bent at the knees to support the iPad.  These smartly but simply dressed users are lounging and passively consuming media.  They are not working.  They are not developing.  They are not even necessarily thinking.  They are simply happy (we think – even though no faces are visible) and drinking in the experience.  You are expected to (lightly) toss the iPad about after quickly reading an article, keep it on your night stand for those late night web-based fact checks, leave it on your coffee table to watch that old episode of Star Trek at your leisure or pack it in your folio to help while away the hours in waiting rooms and airports.

But this isn’t me. I am more of a developer.  Certainly of software, sometimes of content.  I like a full-sized (or near full-sized) real keyboard for typing.  If I need to check something late at night, my cell phone browser seems to do the trick just fine.  I can triage my email just fine on my cell phone, too.  So, I am not an iPad.  At least not yet.  But if it really is only a consumption platform then not ever.  But one never quite knows what those wizards in Cupertino might be conjuring up next, does one?


I admit it…I am clueless

My world is about to change but I fully admit that I don’t get how. That’s right…everything changes this Saturday, April 3rd when the Apple iPad is released. Am I the only one who looks at it and thinks of those big-button phones that one purchases for an aging parent? Yes, I know Steve Jobs found the English language lacking sufficiently meaningful superlatives to describe it. And, yes, I know there will be hundreds of pre-programmed Apple zombies lining the streets to collect their own personal iPads, probably starting Friday evening. But I don’t understand why I would want an oversized iPhone without the phone or the camera or application software or a keyboard, and why and how this gadget will change civilization. Don’t get me wrong, I know it will. But I don’t see how the iPad will revolutionize, say, magazine sales. Why would I buy Esquire for $2.99 when I get the print version for less than $1 and throw it out after reading it for 30 minutes (full disclosure: I am a subscriber – I admit that freely)? I also know I can’t take the iPad to the beach to read a book because it will get wet, clotted with sand, and the screen will be unreadable in sunlight. But I’m sure the iPad will be a huge hit. And I’m sure my life will change. Just tell me how. Someone… please…?


It’s the Rodney Dangerfield of disciplines. Sweaty, unkempt, unnerving, uncomfortable and disrespected. Test. Yuck. You hate it. Design, baby! That’s where it’s at! Creating! Developing! Building! Who needs test? It’s designed to work!

In actuality, as much as it pains me to admit, “trust, but verify” is a good rule of thumb. Of course, every design is developed with an eye to excellence. Of course, all developers are very talented and unlikely to make mistakes of any sort. But it’s still a good idea to have a look-see at what they have done. It’s even better if they leave in the code or hardware that they used to verify their own implementations. The fact of the matter is that designers add in all manner of extras to help them debug and verify their designs and then – just before releasing it – they rip out all of this valuable apparatus. Big mistake. Leave it! It’s all good! If it’s code – enable it with a compile-time define or an environment variable. If it’s hardware – connect it to your boundary-scan infrastructure and enable it with instructions through your IEEE STD 1149.1 Test Access Port. These little gizmos that give you observability and diagnosability at run time will also provide an invaluable aid in the verification and test process. Please…share the love!
