Archive for 'Systems On A Chip'

The hoi polloi are running fast towards the banner marked “Internet of Things”.   They are running at full speed chanting “I-o-T, I-o-T, I-o-T” all along the way. But for the most part, they are each running towards something different.  For some, it is a network of sensors; for others, it is a network of processors; for still others, it is a previously unconnected and unnetworked embedded system, now connected and attached to a network;  some say it is any of those things connected to the cloud; and there are those who say it is simply renaming whatever they already have and adding the descriptive marketing label “IoT” or “Internet of Things” to the box.

So what is it?  Why the excitement? And what can it do?

At its simplest, the Internet of Things is a collection of endpoints of some sort, each of which has one or more sensors, a processor, some memory and some sort of wireless connectivity.  The endpoints are then connected to a server – where “server” is defined in the broadest possible sense.  It could be a phone, a tablet, a laptop or desktop, a remote server farm or some combination of all of those (say, a phone that then talks to a server farm).  Along the transmission path, data collected from the sensors goes through increasingly higher levels of analysis and processing.  For instance, at the endpoint itself raw data may be displayed, averaged or corrected and then delivered to the server and then stored in the cloud.  Once in the cloud, data can be analyzed historically, compared with other similarly collected data, or correlated to other related data or even unrelated data in an attempt to search for unexpected or heretofore unseen correlations.  Fully processed data can then be delivered back to the user in some meaningful way – perhaps as a trend display or as a prescriptive suite of actions or recommendations.  And, of course, the fully analyzed data and its correlations could also be sold or otherwise used to target advertising or product or service recommendations.
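The staged processing described above – raw readings smoothed and corrected at the endpoint before being handed upstream – can be sketched in a few lines. This is a minimal illustration, not a real protocol; the sensor driver, calibration offset and device name are all invented for the example:

```python
import statistics
import time

def read_sensor():
    """Stand-in for a real sensor driver; returns one raw reading."""
    return 21.7  # e.g., degrees Celsius

def endpoint_sample(n=10, offset=-0.5):
    """First-stage processing at the endpoint: average n raw
    readings and apply a simple calibration correction."""
    raw = [read_sensor() for _ in range(n)]
    return statistics.mean(raw) + offset

def build_payload(device_id):
    """Package the corrected reading for delivery to the server,
    where historical and cross-device analysis would happen."""
    return {
        "device": device_id,
        "timestamp": time.time(),
        "value": round(endpoint_sample(), 2),
    }

payload = build_payload("thermo-001")
print(payload["device"], payload["value"])
```

The point is the division of labor: cheap, local reduction (averaging, correction) at the endpoint, with the expensive historical and correlation analysis deferred to the server side.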

There is a further enhancement to this collection of endpoints and associated data analysis processes described in my basic IoT system.  The ‘things’ on this Internet of Things could also use the data they collect to improve the system itself.  This could include identifying missing data elements or sensor readings, bad timing assumptions or other ways to improve the capabilities of the overall system.  If the endpoints are reconfigurable, either through programmable logic (like Field Programmable Gate Arrays) or through software updates, then new hardware or software images could be distributed with enhancements (or, dare I say, bug fixes) throughout the system to provide it with new functionality.  This makes the IoT system both evolutionary and field upgradeable.  It extends the deployment lifetime of the device and could potentially extend the time in market at both the beginning and the end of the product life cycle. You could get to market earlier with limited functionality, introduce new features and enhancements post-deployment and continue to add innovations when the product might ordinarily have been obsoleted.
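The field-upgrade idea boils down to two decisions at the endpoint: is the advertised image newer than what is running, and does the downloaded image pass an integrity check before it is staged? A hedged sketch, with invented version numbers and a caller-supplied verification hook standing in for a real signature check:

```python
CURRENT_VERSION = (1, 0, 3)

def should_update(advertised):
    """Compare the server-advertised image version against the
    running one; version tuples compare lexicographically."""
    return tuple(advertised) > CURRENT_VERSION

def apply_update(image_bytes, verify):
    """Stage the new image only if its integrity check passes;
    a failed check leaves the running image untouched."""
    if not verify(image_bytes):
        return "rejected"
    # A real endpoint would write to a secondary flash bank and
    # reboot into the new image (an A/B update scheme), so a bad
    # image never bricks the device.
    return "staged"

print(should_update((1, 1, 0)))  # a newer image is available
```

The same mechanism serves FPGA-based endpoints: the "image" is simply a new configuration bitstream rather than a software binary.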

Having defined an ideal IoT system, the question becomes: how does one turn it into a business? The value of these IoT applications is based on the collection of data over time and the processing and interpretation (mining) of said data.  As more data are collected over time, the value of the analysis increases (though likely asymptotically, approaching some maximal value).  The data analysis could include information like:

  • Your triathlon training plan is on track, you ought to taper the swim a bit and increase the running volume to 18 miles per week.
  • The drive shaft on your car will fail in the next 1 to 6 weeks – how about I order one for you and set up an appointment at the dealership?
  • If you keep eating the kind of food you have for the past 4 days, you will gain 15 pounds by Friday.

The above sample analyses are obviously drawn from a variety of different products or systems, but the idea is that by mining collected and historical data from you, and maybe even from people ‘like’ you, certain conclusions may be drawn.

Since the analysis is continuous and the feedback unsynchronized to any specific event or time, the fees for these services would have to be subscription-based.  A small charge every month would deliver the analysis and prescriptive suggestions as and when needed.

This would suggest that when you buy a car, instead of an extended service contract that you pay for as a lump sum upfront, you pay, say, $5 per month; the IoT system is enabled on your car, and your car will schedule service with a complete list of required parts and tasks exactly when and as needed.

Similarly in the health services sector, your IoT system collects all of your biometric data automatically, loads your activity data to Strava, alerts you to suspicious bodily and vital sign changes and perhaps even calls the doctor to set up your appointment.

The subscription fees should be low because they provide for efficiencies in the system that benefit both the subscriber and the service provider.  The car dealer orders the parts they need when they need them, reducing inventory, providing faster turnaround of cars, and obviating the need for overnight storage of cars and payment for rentals.

Doctors see patients less often and then only when something is truly out of whack.

And on and on.

Certainly the possibility of tiered levels of subscription may make sense for some businesses.  There may be ‘free’ variants that provide limited but still useful information to the subscriber, but at the cost of sharing their data for broader community analysis. Paid subscribers who share their data for use in broader community analysis may get reduced subscription rates. There are obviously many possible subscription models to investigate.

These described industry capabilities and direction facilitated by the Internet of Things are either pollyannaish or visionary.  It’s up to us to find out. But for now, what do you think?


That venerable electronic test standard IEEE Std 1149.1 (also known as JTAG; also known as Boundary-Scan; also known as Dot 1) has just been freshened up.  This is no ordinary freshening.  The standard, last revisited in 2001, is long overdue for some clarification and enhancement.  It’s been a long time coming and now…it’s here.  While the guts remain the same and in good shape, some very interesting options and improvements have been added.  The improvements are intended to provide support for testing and verification of the more complex devices currently available and to acknowledge the more sophisticated test algorithms and capabilities afforded by the latest hardware.  There is an attempt, as well, (perhaps though, only as well as one can do this sort of thing) to anticipate future capabilities and requirements and to provide a framework within which such capabilities and requirements can be supported.  Of course, since the bulk of the changes are optional, their value will only be realized if the end-user community embraces them.

There are only some minor clarifications or relaxations to the rules that are already established. For the most part, components currently compliant with the previous version of this standard will remain compliant with this one. There is but one “inside baseball” sort of exception.  The long denigrated and deprecated BC_6 boundary-scan cell has finally been put to rest. It is, with the 2013 version, no longer supported or defined, so any component supplier who chose to utilize this boundary-scan cell – despite all warnings to the contrary – must now provide their own BSDL package defining this BC_6 cell if they upgrade to using the STD_1149_1_2013 standard package for their BSDL definitions.

While this is indeed a major revision, I must again emphasize that all the new items introduced are optional.  One of the largest changes is in documentation capability, with the introduction of a new executable description language called Procedural Description Language (PDL) to document test procedures unique to a component.  PDL, a TCL-like language, was adopted from the work of the IEEE P1687 working group. 1687 is a proposed IEEE standard for the access to and operation of embedded instruments (1687 is therefore also known as iJTAG or Instrument JTAG). The first iteration of that standard was based on use of the 1149.1 Test Access Port and Controller to provide the chip access—and a set of modified 1149.1-type Test Data Registers to create an access network for embedded instruments. PDL was developed to describe access to and operation of these embedded instruments.

Now, let’s look at the details.  The major changes are as follows:

In the standard body:

  • In order to allow devices to maintain their test logic in test mode, a new, optional, test mode persistence controller was introduced.  This means that test logic (like the boundary-scan register) can remain behaviorally in test mode even if the active instruction does not force test mode. To support this, the TAP controller was cleaved into two parts: one that controls test mode and another that provides the rest of the TAP functionality. In support of this new controller, there are three new instructions: CLAMP_HOLD and TMP_STATUS (both of which access the new TMP status test data register) and CLAMP_RELEASE.
  • Recognizing the emerging requirement for unique device identification codes, a new, optional ECIDCODE instruction was introduced along with an associated electronic chip identification test data register.  This instruction-register pair is intended to supplement the existing IDCODE and USERCODE instructions and allow for access to an Electronic Chip Identification value that could be used to identify and track individual integrated circuits.
  • The problem of initializing a device for test has been addressed by providing a well-defined framework that formalizes this process. The new, optional INIT_SETUP, INIT_SETUP_CLAMP, and INIT_RUN instructions, paired with their associated initialization data and initialization status test data registers, were provided to this end. The intent is that these instructions formalize the manner in which programmable input/output (I/O) can be set up prior to board or system testing, as well as providing for the execution of any tasks required to put the system logic into a safe state for test.
  • Recognizing that resetting a device can be complex and require many steps or phases, a new, optional, IC_RESET instruction and its associated reset_select test data register is defined to provide formalized control of component reset functions through the TAP.
  • Many devices now have a number of separate power domains, which could result in sections of the device being powered down while others are powered up.  A single, uniform boundary-scan register does not align well with that device style.  To support power domains that may be powered down while a single test data register is routed through them, an optional standard TAP-to-test-data-register interface is recommended that allows for segmentation of test data registers. The concept of register segments allows for segments that may be excluded or included and is generalized sufficiently for utilization beyond the power domain example.
  • There have also been a few enhancements to the boundary-scan register description to incorporate the following:
    1. Optional excludable (but not selectable) boundary-scan register segments
    2. Optional observe-only boundary-scan register cells to redundantly capture the signal value on all digital pins except the TAP pins
    3. Optional observe-only boundary-scan register cells to capture a fault condition on all pins, including non-digital pins, except the TAP pins.

The Boundary Scan Description Language annex was rewritten and includes:

  • Increased clarity and consistency based on end-user feedback accumulated over the years.
  • A technical change was made such that BSDL is no longer a “proper subset” of VHDL, but it is now merely “based on” VHDL. This means that BSDL now maintains VHDL’s flavor but has for all intents and purposes been “forked”.
  • As a result of this forking, formal definitions of language elements are now included in the annex instead of relying on inheritance from VHDL.
  • Also as a result of this forking, some changes to the BNF notation used, including definition of all the special character tokens, are in the annex.
  • Pin mapping now allows for documenting that a port is not connected to any device package pin in a specific mapped device package.
  • The boundary-scan register description introduces new attributes for defining boundary-scan register segments, and introduces a requirement for documenting the behavior of an un-driven input.
  • New capabilities are introduced for documenting the structural details of test data registers:
    1. Mnemonics may be defined and associated with register fields.
    2. Named fields within a register or segment may be defined.
    3. Types of cells used in a test data register (TDR) field may be defined.
    4. One may hierarchically assemble segments into larger segments or whole registers.
    5. Constraints may be defined on the values to be loaded in a register or register field.
    6. A register field or bit may be associated with specific ports.
    7. Power ports may be associated with other ports.
  • The User Defined Package has been expanded to support logic IP providers who may need to document test data register segments contained within their IP.

As I stated earlier, a newly adopted language, PDL, has been included in this version of the standard.  The details of this language are included as part of Annex C. PDL is designed to document the procedural and data requirements for some of the new instructions. PDL serves a descriptive purpose in that regard but, as such, it is also executable should a system choose to interpret it.

It was decided to adopt and develop PDL to support the new capability of initializing internal test data register fields and configuring complex I/Os prior to entering the EXTEST instruction.  Since the data required for initialization could vary for each use of the component on each distinct board or system design, an algorithmic way to describe the data set-up and application was needed.  PDL was therefore adopted and tailored to the BSDL register descriptions and the needs of IEEE 1149.1.
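To give a flavor of the language, here is a rough, illustrative sketch of a PDL procedure for that kind of I/O setup. The module name and the register field names (init_data.IO_DRIVE, init_status.DONE) are invented for illustration and are not drawn from the standard; only the command keywords reflect actual PDL:

```tcl
# Illustrative PDL sketch -- register and module names are made up.
iProcsForModule my_component
iProc configure_io { drive } {
    iWrite init_data.IO_DRIVE $drive   ;# queue a value for a register field
    iApply                             ;# shift and update the queued writes
    iRead  init_status.DONE 0b1        ;# expect the completion flag on readback
    iApply
}
```

The Tcl-like structure is visible here: procedures take arguments, writes are queued against named register fields defined in BSDL, and iApply commits the queued scan operations.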

Since the concept of BSDL and PDL working together is new and best explained via examples, Annex D is provided to supply extended examples of BSDL and PDL used together to describe the structure and the procedures for use of new capabilities. Similarly, Annex E provides example pseudo-code for the execution of the PDL iApply command, the most complex of the new commands in PDL.

So that is the new 1149.1 in a nutshell. A fair amount of new capabilities. Some of it complex. All of it optional.  Will you use it?


I think it’s high time I authored a completely opinion-based article full of observations and my own prejudices that might result in a litany of ad hominem attacks and insults.  Or at least, I hope it does.  This little bit of prose will outline my view of the world of programmable logic as I see it today.  Again, it is as I see it.  You might see it differently.  But you would be wrong.

First let’s look at the players.  The two-headed Cerberus of the programmable logic world is Altera and Xilinx.  They battle it out for the bulk of the end-user market share.  After that, there are a series of niche players (Lattice Semiconductor, Microsemi (who recently purchased Actel) and Quicklogic), lesser lights (Atmel and Cypress) and wishful upstarts (Tabula, Achronix and SiliconBlue).

Atmel and Cypress are broadline suppliers of specialty semiconductors.  They each sell a small portfolio of basic programmable logic devices (Atmel CPLDs, Atmel FPGAs and Cypress CPLDs).  As best I can tell, they do this for two reasons.  First, they entered the marketplace and have been in it for about 15 years and at this point have just enough key customers using the devices such that the cost of exiting the market would be greater than the cost of keeping these big customers happy.  The technology is not, by any stretch of the imagination, state of the art so the relative cost of supporting and manufacturing these parts is small.  Second, as a broadline supplier of a wide variety of specialty semiconductors, it’s nice for their sales team to have a PLD to toss into a customer’s solution to stitch together all that other stuff they bought from them.  All told, you’re not going to see any profound innovations from these folks in the programmable logic space.  ‘Nuff said about these players, then.

At the top of the programmable logic food chain are Altera and Xilinx.  These two titans battle head-to-head and every few years exchange the lead.  Currently, Altera has leapt or will leap ahead of Xilinx in technology, market share and market capitalization.  But when it comes to innovation and new ideas, both companies typically offer incremental innovations rather than risky quantum leaps ahead.  They are both clearly pursuing a policy that chases the high end, fat margin devices, focusing more and more on the big, sophisticated end-user who is most happy with greater complexity, capacity and speed.  Those margin leaders are Xilinx’s Virtex families and Altera’s Stratix series. The sweet spot for these devices is low volume, high cost equipment like network equipment, storage system controllers and cell phone base stations. Oddly though, Altera’s recent leap to the lead can be traced to their mid-price Arria and low-price Cyclone families, which offered lower power and a lower price point with the right level of functionality for a wider swath of customers.  Xilinx had no response, having not produced a similarly featured device from the release of the Spartan3 (and its variants) until the arrival of the Spartan6 some 4 years later.  This gap provided just the opportunity that Altera needed to gobble up a huge portion of a growing market.  And then, when Xilinx’s Spartan6 finally arrived, its entry to production was marked by bumpiness and a certain amount of “So what?” from end-users who were about to migrate, or already had migrated, to Altera.

The battle between Altera and Xilinx is based on ever-shrinking technology nodes, ever-increasing logic capacity, faster speeds and a widening variety of IP cores (hard and soft) and, of course, competitive pricing.  There has been little effort on the part of either company to provide any sort of quantum leap of innovation since there is substantial risk involved.  The overall programmable logic market is behaving more like a commodity market.  The true differentiation is price since the feature sets are basically identical.  If you try to do some risky innovation, you will likely have to divert efforts from your base technology.  And it is that base technology that delivers those fat margins.  If that risky innovation falls flat, you miss a generation and lose those fat margins and market share.

Xilinx’s recent announcement of the unfortunately named Zynq device might be such a quantum innovative leap but it’s hard to tell from the promotional material since it is long on fluff and short on facts.  Is it really substantially different from the Virtex4FX from 2004?  Maybe it isn’t because its announcement does not seem to have instilled any sort of fear over at Altera.  Or maybe Altera is just too frightened to respond?

Lattice Semiconductor has worked hard to find little market niches to serve.  They have done this by focusing mostly on price and acquisitions.  Historically the leader in in-system programmable devices, Lattice saw this lead erode as Xilinx and Altera entered that market using an open standard (rather than a proprietary one, as Lattice did).  In response, Lattice moved to the open standard, acquired FPGA technology and tried to develop other programmable niche markets (e.g., switches, analog).   Lattice has continued to move opportunistically; shifting quickly at the margins of the market to find unserved or underserved programmable logic end-users, with a strong emphasis on price competitiveness.  They have had erratic results and limited success with this strategy and have seen their market share continue to erode.

Microsemi owns the antifuse programmable technology market.  This technology is strongly favored by end-users who want high reliability in their programmable logic.  Unlike the static RAM-based programmable technologies used by most every other manufacturer, antifuse is not susceptible to single event upsets, making it ideal for space, defense and similar applications. The downside of this technology is that unlike static RAM, antifuse is not reprogrammable.  You can only program it once, and if you need to fix your downloaded design, you need to get a new part, program it with the new pattern and replace the old part with the new part.  Microsemi has attempted to broaden their product offering into more traditional markets by offering more conventional FPGAs.  However, rather than basing their FPGA’s programmability on static RAM, the Microsemi product, ProASIC, uses flash technology.  A nice incremental innovation offering its own benefits (non-volatile pattern storage) and costs (flash does not scale well with shrinking technology nodes). In addition, Microsemi is already shipping a Zynq-like device known as the SmartFusion family.  The SmartFusion device has hard analog IP included.  As best I can tell, Zynq does not include that analog functionality.  SmartFusion is relatively new, and I do not know how popular it is or what additional functionality its end-users are requesting.  I believe the acceptance of the SmartFusion device will serve as an early bellwether for the acceptance of Zynq.

Quicklogic started out as a more general purpose programmable logic supplier based on a programming technology similar to antifuse with a low power profile.  Over the years, Quicklogic has chosen to focus their offering as more of a programmable application specific standard product (ASSP).  The devices they offer include specific hard IP tailored to the mobile market along with a programmable fabric.  As a company, their laser focus on mobile applications leaves them as very much a niche player.

In recent years, a number of startups have entered the marketplace.  While one might have thought that they would target the low end and seek to provide “good enough” functionality at a low price in an effort to truly disrupt the market from the bottom, gain a solid foothold and sell products to those overserved by what Altera and Xilinx offer; that turns out not to be the case.  In fact, two of the new entrants (Tabula and Achronix) are specifically after the high end, high margin sector that Altera and Xilinx so jealously guard.

The company with the most buzz is Tabula.  They are headed by former Xilinx executive, Dennis Segers, who is widely credited with making the decisions that resulted in Xilinx’s stellar growth in the late 1990s with the release of the original Virtex device. People are hoping for the same magic at Tabula.  Tabula’s product offers what they refer to as a SpaceTime Architecture and 3D Programmable Logic.  Basically what that means is that your design is sectioned and swapped in and out of the device much like a program is swapped in and out of a computer’s RAM space.  This provides a higher effective design density on a device having less “hard logic”.  An interesting idea.  It seems like it would likely utilize less power than the full design realized on a single chip.  The cost is the complexity of the design software and the critical nature of the system setup (i.e., the memory interface and implementation) on the board to ensure the swapping functions as promised.  Is it easy to use?  Is it worth the hassle?  It’s hard to tell right now.  There are some early adopters kicking the tires.  If Tabula is successful, will they be able to expand their market beyond where they are now? It looks like their technology might scale up very easily to provide higher and higher effective densities.  But does their technology scale down to low cost markets easily?  It doesn’t look like it.  There is a lot of overhead associated with all that image swapping, and its value for the low end is questionable.  But, I’ll be the first to say: I don’t know.

Achronix as best I can tell has staked out the high speed-high density market.  That is quite similar to what Tabula is addressing.  The key distinction between the two companies (besides Achronix’s lack of Star Trek-like marketing terminology) is that Achronix is using Intel as their foundry.  This might finally put an end to those persistent annual rumors that Intel is poised to purchase Altera or Xilinx (is it the same analyst every time who leaks this?).  That Intel relationship and a less complex (than Tabula) fabric technology means that Achronix might be best situated to offer their product for those defense applications that require a secure, on-shore foundry.  If that is the case, then Achronix is aiming at a select and very profitable sector that neither Altera nor Xilinx will let go without a big fight.  Even if successful, where does Achronix expand?  Does their technology scale down to low cost markets easily?  I don’t think so…but I don’t know.  Does it scale up to higher densities easily?  Maybe.

SiliconBlue is taking a different approach.  They are aiming at the low power, low cost segment.  That seems like more of a disruptive play.  Should they be able to squeeze in, they might be able to innovate their way up the market and cause some trouble for Xilinx and Altera.  The rumored issue with SiliconBlue is that their devices aren’t quite low power enough or quite cheap enough to fit their intended target market.  The other rumor is that they are constantly looking for a buyer.  That doesn’t instill a high level of confidence now, does it?

So what does all this mean?  The Microsemi SmartFusion device might be that quantum innovative leap that most likely extends the programmable logic market space.  It may be the one product that has the potential to serve an unserved market and bring more end-user and applications on board.  But the power and price point might not be right.

The ability of any programmable logic solution to expand beyond the typical sweet spots is based on its ability to displace other technologies at a lower cost and with sufficient useful functionality.  PLDs are competing not just against ASSPs but also against multi-core processors and GPUs.  Multi-core processors and GPUs offer a simpler programming model (using common programming languages), relatively low power and a wealth of application development tools with a large pool of able, skilled developers.  PLDs still require an understanding of hardware description languages (like VHDL or Verilog HDL) as well as common programming languages (like C), in addition to specific conceptual knowledge of hardware and software.  On top of all that, programmable logic often delivers higher power consumption at a higher price point than competing solutions.

In the end, the real trick is not just providing a hardware solution that delivers the correct power and price point but a truly integrated tool set that leverages the expansive resource pool of C programmers rather than the much smaller resource puddle of HDL programmers. And no one, big or small, new or old, is investing in that development effort.


I have spent a fair amount of my formative years in and around the field programmable gate array (FPGA) industry.  I participated in the evolution of FPGAs from a convenient repository for glue logic and a pricey but useful prototyping platform to a convenient repository for lots of glue logic, an affordable but still a little pricey platform to improve time-to-market and a useful system-on-a-chip platform.  There was much talk about FPGAs going mainstream, displacing all but a few ASICs and becoming the vehicle of choice for most system implementations.  It turns out that last step…the mainstreaming, the death of ASICs, the proliferating system-on-chip…is still underway.  And maybe it’s just around the corner, again.  But maybe it’s not.

FPGA companies (well, Xilinx and Altera) appear to be falling prey to the classic disruptive technology trap described by Clayton Christensen: listening to the calls of the deans of Wall Street and pursuing fat margins.  Whether it’s Virtex or Stratix, both Xilinx and Altera are innovating at the high end, delivering very profitable and very expensive parts that their biggest customers want and pretty much ignoring the little guys who are looking for cheap, functional and mostly low power devices.

This opens the door for players like Silicon Blue, Actel or Lattice to pick a niche and exploit the heck out of it.  Be it low power, non-volatile storage or security, these folks are picking up some significant business here and there. 

This innovation trap, however, ignores a huge opportunity that really only a big player can address.  I think that the biggest competitor to FPGAs is not ASSPs or ASICs or even other cheaper FPGAs.  I think that what everyone needs to be watching out for is CPUs and GPUs.

Let’s face it, even with an integrated processor in your FPGA, you still really need to be a VHDL or Verilog HDL developer to build systems based on the FPGA.  And how many HDL designers are there worldwide?  Tens of thousands?  Perhaps.  Charitably.  This illuminates another issue with systems-on-a-chip – software and software infrastructure. I think this might even be the most important issue acting as an obstacle to the wide adoption of programmable logic technology. To design a CPU or GPU-based system, you need to know C or C++.  How many C developers are there worldwide?  Millions?  Maybe more.

With a GPU you are entering the world of tessellation automata or systolic arrays.  It is easier (but still challenging) to map a C program to a processor grid than to a sea of gates.  And you also get to leverage the existing broad set of software debug and development tools.  What would you prefer to use to develop your next system on a chip – SystemC with spotty support infrastructure or standard C with deep and broad support infrastructure?

The road to the FPGA revolution is littered with companies whose products started as FPGA-based with a processor to help, but then migrated to a full multi-core CPU solution, dumping the FPGA (except for data path and logic consolidation).  Why is that?  Because to make an FPGA solution work you need to be an expert immersed in FPGA architectures, and you need to develop your own tools to carefully divide hardware and software tasks.  And in the end, to get really great speeds and results, you need to keep tweaking your system and reassigning hardware and software tasks.   And then there’s the debugging challenge.  In the end – it’s just hard.

On the other hand, grab an off-the-shelf multi-core processor, whack together some C code, compile it and run it and you get pretty good speeds and the same results.  On top of that – debugging is better supported.

I think FPGAs are great and someday they may be able to provide a real system-on-a-chip solution but they won’t until FPGA companies stop thinking like semiconductor manufacturers and start thinking (and acting) like the software application solution providers they need to become.
