
A spate of recent articles describes the proliferation of back doors in systems.  There are so many such back doors in so many systems, they claim, that the idea of a completely secure and invulnerable system is, at best, a fallacy.  These back doors may be the result of system software or may even be designed into the hardware.  Some back doors are designed into systems to facilitate remote update, diagnosis, debug and the like – though never with the intention of creating a security hole.  Some are inserted with subterfuge and espionage in mind by foreign-controlled entities keen on gaining access to otherwise secure systems.  Some may serve both purposes.  And some are just design or specification errors.  This suggests that once you connect a system to a network, someone, somehow, will be able to access it.  As if to provide an extreme example, a recent break-in at the United States Chamber of Commerce was traced to an internet-connected thermostat.

That’s hardware.  What about software?  Despite the abundance of anti-virus software and firewalls, a little social engineering is all you really need to get through to almost any system. I have written previously about the experiment in which USB memory sticks seeded in a parking lot were picked up by employees and, without any prompting, plugged into corporate laptops by more than half of those who found them. Email written as if sent from a superior is routinely used to get employees to open infected attachments that install themselves and open a hole in the firewall for external communication and control.

The problem is actually designed in.  The Internet was built for sharing. The sharing was originally limited to trusted sources. A network of academics. The idea that someone would try to do something awful to you – except as some sort of prank – was inconceivable.

That was then.

Now we are in a place where the Internet is omnipresent.  It is used for sharing and viewing cat videos and for financial transactions.  It is used for transmitting top secret information and for buying cheese.  It is connected to servers containing huge volumes of sensitive and personal customer data: social security numbers, bank account numbers, credit card numbers, addresses, health information and more.  And now, not a day goes by without a report of another breach.  Sometimes attributed to Anonymous, the Chinese, organized crime or kids with more time than sense, these break-ins are relentless and everyone is susceptible.

So what to do?

There is a story, perhaps apocryphal, that at the height of the Cold War, when the United States captured a Soviet fighter jet and examined it, investigators discovered that there were no solid-state electronics in it.  The entire jet was designed using vacuum tubes.  That set the investigators thinking.  Were the Soviets merely backward, or did they design with tubes to guard against EMP attacks?

Backward to the future?

Are we headed to a place where the most secure organizations go offline, reverting to paper documents, file folders and heavy cabinets stored in underground vaults?  Of course such systems are not completely secure either, as no system actually is.  On the other hand, a break-in requires physical presence, and carting away tons of documents requires physical strength and effort.  Paper is a material object that cannot be easily spirited away as a stream of electrons. Maybe that’s the solution. But what of all the information infrastructure built up for convenience, cost effectiveness, space savings and general efficiency? Do organizations spend more money going back to paper, staples, binders and hanging folders? And then purchase vast secure spaces to stow these materials?

Will there instead be a technological fix: a parallel Internet infrastructure redesigned from the ground up to incorporate authentication, encryption and verifiable sender identification? Then all secure transactions and information could move to that newer, safer Internet. Is that newer, safer Internet just a .secure domain? Won’t that just be a bigger, better and more value-laden target for evil-doers? And what about back doors – even in a secure infrastructure, an open door, or a door with a breakable window, ruins the finest advanced security architecture.  And, of course, there is always social engineering, which provides access more easily than any other technique. Or spies. Or people thinking they are “doing good”.

The real solution may not yet even be defined or known.  Is it quantum computing (which is really just a parallel environment built on a differently-developed computing infrastructure)? Or is it really nothing – in that there is no solution and we are stuck with tactical fixes?  It’s an interesting question, but for now, things are as clear as they were some 20 years ago when Scott McNealy said, “The future of the Internet is security”.


It’s all the rage right now to be viewed as a leader in the mobile space.  There are many different sectors in which to demonstrate your leadership.  There are operating systems like iOS and Android and maybe even Windows Phone (someday).  There’s hardware from Apple, Samsung, HTC and maybe even Nokia.  And of course there are the applications: FourSquare, Square and other primarily mobile applications in social, payments, health and gaming, and then all the other applications rushing to mobile because they were told that’s where they ought to be.

Somewhere in this broad and vague classification is Facebook (or perhaps more properly “facebook”).  This massive database of human foibles and interests is either being pressed to explore, or is voluntarily exploring, exactly how to enter the mobile space and presumably dominate it.  Apparently they have made several attempts to develop their own handset.  The biggest issue, it seems, is that they believed that because they are a bunch of really smart folks they should be able to stitch a phone together and make it work.  I believe the saying is “too smart by half“.  And since they reportedly tried this several times without success – perhaps they were also “too stubborn by several halves”.

This push by facebook raises the question: “What?” or even “Why?”  There is a certain logic to it.  Facebook provides hours of amusement to tens of millions of active users, and the developers at facebook already build applications to run on a series of mobile platforms.  Those applications are limited in their ability to provide a full facebook experience and also limit facebook’s ability to extract revenue from those users.  But when you step back, you quickly realize that facebook is really a platform.  It has messaging (text, voice and video), it has contact information, it has position and location information, it has your personal profile along with your interest history and friends, it knows what motivates you (by the contents of your comments and what you “like”) and it is a platform for application development (including games and exciting virus and spam possibilities) with a well-defined and documented interface.  At the 10,000-foot level, facebook looks like an operating system and a platform ready to go.  This is not too different from the vision that propelled Netscape into Microsoft’s sights, leading to Netscape’s ultimate demise. Microsoft doesn’t have the might it once did, but Google does and so does Apple.  Neither may be “evil” but both are known to be ruthless.  For facebook to enter this hostile market with yet another platform would be bold. And for that company to be one whose stock price and perceived confidence are faltering after a shaky IPO – it may also be dumb. But it may be the only and necessary option for growth.

On the other hand, facebook’s recent edict imploring all employees to access facebook from Android phones rather than their iPhones could suggest either that the elders at facebook believe their future is in Android or simply that they recognize it is a growing and highly utilized platform. Maybe they will ditch the handset idea and go all in for mobile on iOS and Android on equal footing.

Personally, I think that a new platform with a facebook-centric interface might be a really interesting product, especially if the equipment cost is nothing to the end-user.  A free phone supported by facebook ads, running all your favorite games, with constant chatter and photos from your friends? Talk about an immersive communications experience. It would drive me batty. But I think it would be a huge hit with a certain demographic. And how could they do this given their previous failures? Amongst the weaker players in the handset space, Nokia has teamed up with Microsoft, but RIM continues to flail. RIM’s stock is plummeting, yet they have a ready-to-go team of smart employees with experience in getting once-popular products to market, as well as that all-important experience in dealing with the assorted wireless companies, to say nothing of the treasure trove of patents they hold. They also have some interesting infrastructure in their SRP network that could be exploited by facebook to improve their service (or, after proper consideration, sold off).

You can’t help but wonder: if instead of spending $1B on Instagram prior to its IPO, facebook had spent a little more and bought RIM, would the outcome and the IPO launch have been different?  I can only speculate about that.  Now, though, it seems that facebook ought to move soon or be damned as a once-great player that squandered its potential.


I think it’s high time I authored a completely opinion-based article full of observations and my own prejudices that might result in a litany of ad hominem attacks and insults.  Or at least, I hope it does.  This little bit of prose will outline my view of the world of programmable logic as I see it today.  Again, it is as I see it.  You might see it differently.  But you would be wrong.

First let’s look at the players.  The two-headed Cerberus of the programmable logic world is Altera and Xilinx.  They battle it out for the bulk of the end-user market share.  After that, there are a series of niche players (Lattice Semiconductor, Microsemi (which recently purchased Actel) and Quicklogic), lesser lights (Atmel and Cypress) and wishful upstarts (Tabula, Achronix and SiliconBlue).

Atmel and Cypress are broadline suppliers of specialty semiconductors.  Each sells a small portfolio of basic programmable logic devices (Atmel CPLDs, Atmel FPGAs and Cypress CPLDs).  As best I can tell, they do this for two reasons.  First, they entered the marketplace about 15 years ago and at this point have just enough key customers using the devices that the cost of exiting the market would be greater than the cost of keeping those big customers happy.  The technology is not, by any stretch of the imagination, state of the art, so the relative cost of supporting and manufacturing these parts is small.  Second, as broadline suppliers of a wide variety of specialty semiconductors, it’s nice for their sales teams to have a PLD to toss into a customer’s solution to stitch together all the other stuff the customer bought from them.  All told, you’re not going to see any profound innovations from these folks in the programmable logic space.  ‘Nuff said about these players, then.

At the top of the programmable logic food chain are Altera and Xilinx.  These two titans battle head-to-head and every few years exchange the lead.  Currently, Altera has leapt, or will leap, ahead of Xilinx in technology, market share and market capitalization.  But when it comes to innovation and new ideas, both companies typically offer incremental improvements rather than risky quantum leaps.  They are both clearly pursuing a policy that chases the high-end, fat-margin devices, focusing more and more on the big, sophisticated end-user who is most happy with greater complexity, capacity and speed.  Those margin leaders are Xilinx’s Virtex families and Altera’s Stratix series. The sweet spot for these devices is low-volume, high-cost equipment like network equipment, storage system controllers and cell phone base stations. Oddly though, Altera’s recent leap to the lead can be traced to their mid-price Arria and low-price Cyclone families, which offered lower power and a lower price point with the right level of functionality for a wider swath of customers.  Xilinx had no response, having not produced a similarly featured device from the release of the Spartan-3 (and its variants) until the arrival of the Spartan-6 some four years later.  This gap provided just the opportunity that Altera needed to gobble up a huge portion of a growing market.  And then, when Xilinx’s Spartan-6 finally arrived, its entry to production was marked by bumpiness and a certain amount of “So what?” from end-users who were about to migrate, or had already migrated, to Altera.

The battle between Altera and Xilinx is based on ever-shrinking technology nodes, ever-increasing logic capacity, faster speeds, a widening variety of IP cores (hard and soft) and, of course, competitive pricing.  There has been little effort on the part of either company to provide any sort of quantum leap of innovation, since there is substantial risk involved.  The overall programmable logic market is behaving more like a commodity market.  The true differentiation is price, since the feature sets are basically identical.  If you attempt some risky innovation, you will likely have to divert effort from your base technology.  And it is that base technology that delivers those fat margins.  If that risky innovation falls flat, you miss a generation and lose those fat margins and market share.

Xilinx’s recent announcement of the unfortunately named Zynq device might be such a quantum innovative leap, but it’s hard to tell from the promotional material, which is long on fluff and short on facts.  Is it really substantially different from the Virtex-4 FX of 2004?  Maybe it isn’t, since its announcement does not seem to have instilled any sort of fear over at Altera.  Or maybe Altera is just too frightened to respond?

Lattice Semiconductor has worked hard to find little market niches to serve.  They have done this by focusing mostly on price and acquisitions.  Historically the leader in in-system programmable devices, Lattice saw this lead erode as Xilinx and Altera entered that market using an open standard (rather than a proprietary one, as Lattice did).  In response, Lattice moved to the open standard, acquired FPGA technology and tried to develop other programmable niche markets (e.g., switches, analog).   Lattice has continued to move opportunistically; shifting quickly at the margins of the market to find unserved or underserved programmable logic end-users, with a strong emphasis on price competitiveness.  They have had erratic results and limited success with this strategy and have seen their market share continue to erode.

Microsemi owns the antifuse programmable technology market.  This technology is strongly favored by end-users who want high reliability in their programmable logic.  Unlike the static RAM-based programmable technologies used by most every other manufacturer, antifuse is not susceptible to single event upsets, making it ideal for space, defense and similar applications. The downside of this technology is that, unlike static RAM, antifuse is not reprogrammable.  You can only program it once, and if you need to fix your design, you need to get a new part, program it with the new pattern and swap it in for the old one.  Microsemi has attempted to broaden their product offering into more traditional markets by offering more conventional FPGAs.  However, rather than basing their FPGAs’ programmability on static RAM, the Microsemi product, ProASIC, uses flash technology.  A nice incremental innovation offering its own benefits (non-volatile pattern storage) and costs (flash does not scale well with shrinking technology nodes). In addition, Microsemi is already shipping a Zynq-like device known as the SmartFusion family.  The SmartFusion device includes hard analog IP.  As best I can tell, Zynq does not include that analog functionality.  SmartFusion is relatively new, so I do not know how popular it is or what additional functionality its end-users are requesting.  I believe the acceptance of the SmartFusion device will serve as an early bellwether for the acceptance of Zynq.

Quicklogic started out as a more general purpose programmable logic supplier based on a programming technology similar to antifuse, with a low power profile.  Over the years, Quicklogic has chosen to position their offering as more of a programmable application-specific standard product (ASSP).  The devices they offer include specific hard IP tailored to the mobile market along with a programmable fabric.  As a company, their laser focus on mobile applications leaves them very much a niche player.

In recent years, a number of startups have entered the marketplace.  One might have thought that they would target the low end and seek to provide “good enough” functionality at a low price in an effort to truly disrupt the market from the bottom, gain a solid foothold and sell products to those overserved by what Altera and Xilinx offer.  That turns out not to be the case.  In fact, two of the new entrants (Tabula and Achronix) are specifically after the high-end, high-margin sector that Altera and Xilinx so jealously guard.

The company with the most buzz is Tabula.  They are headed by former Xilinx executive Dennis Segers, who is widely credited with making the decisions that resulted in Xilinx’s stellar growth in the late 1990s with the release of the original Virtex device. People are hoping for the same magic at Tabula.  Tabula’s product offers what they refer to as a SpaceTime Architecture and 3D Programmable Logic.  Basically, what that means is that your design is sectioned and swapped in and out of the device much like a program is swapped in and out of a computer’s RAM.  This provides a higher effective design density on a device with less “hard logic”.  An interesting idea.  It seems like it would use less power than the full design realized on a single chip.  The cost is the complexity of the design software and the critical nature of the board-level system setup (i.e., the memory interface and its implementation) needed to make the swapping work as promised.  Is it easy to use?  Is it worth the hassle?  It’s hard to tell right now.  There are some early adopters kicking the tires.  If Tabula is successful, will they be able to expand their market beyond where they are now? It looks like their technology might scale up very easily to provide higher and higher effective densities.  But does their technology scale down to low-cost markets easily?  It doesn’t look like it.  There is a lot of overhead associated with all that image swapping, and its value at the low end is questionable.  But I’ll be the first to say: I don’t know.

Achronix, as best I can tell, has staked out the high-speed, high-density market.  That is quite similar to what Tabula is addressing.  The key distinction between the two companies (besides Achronix’s lack of Star Trek-like marketing terminology) is that Achronix is using Intel as their foundry.  This might finally put an end to those persistent annual rumors that Intel is poised to purchase Altera or Xilinx (is it the same analyst every time who leaks this?).  That Intel relationship, and a fabric technology less complex than Tabula’s, means that Achronix might be best situated to offer their product for those defense applications that require a secure, on-shore foundry.  If that is the case, then Achronix is aiming at a select and very profitable sector that neither Altera nor Xilinx will let go without a big fight.  Even if successful, where does Achronix expand?  Does their technology scale down to low-cost markets easily?  I don’t think so…but I don’t know.  Does it scale up to higher densities easily?  Maybe.

SiliconBlue is taking a different approach.  They are aiming at the low power, low cost segment.  That seems like more of a disruptive play.  Should they be able to squeeze in, they might be able to innovate their way up the market and cause some trouble for Xilinx and Altera.  The rumored issue with SiliconBlue is that their devices aren’t quite low power enough or quite cheap enough to fit their intended target market.  The other rumor is that they are constantly looking for a buyer.  That doesn’t instill a high level of confidence now, does it?

So what does all this mean?  The Microsemi SmartFusion device might be the quantum innovative leap most likely to extend the programmable logic market space.  It may be the one product with the potential to serve an unserved market and bring more end-users and applications on board.  But the power and price point might not be right.

The ability of any programmable logic solution to expand beyond its typical sweet spots rests on its ability to displace other technologies at a lower cost and with sufficient useful functionality.  PLDs are competing not just against ASSPs but also against multi-core processors and GPUs.  Multi-core processors and GPUs offer a simpler programming model (using common programming languages), relatively low power and a wealth of application development tools with a large pool of able, skilled developers.  PLDs still require an understanding of hardware description languages (like VHDL or Verilog HDL) as well as common programming languages (like C), in addition to specific conceptual knowledge of both hardware and software.  On top of all that, programmable logic often delivers higher power consumption at a higher price point than competing solutions.
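
To make that programming-model gap concrete, here is a minimal, purely illustrative sketch (not taken from any vendor’s flow; the function name and sizes are invented for the example) of the kind of kernel a multi-core or GPU developer writes every day in plain C.  The point is that the whole design, test and debug cycle stays inside the C ecosystem, whereas targeting an FPGA the same arithmetic would typically have to be re-expressed as an RTL module plus driver code.

```c
/*
 * Illustrative sketch only: a toy multiply-accumulate kernel in ordinary C.
 * On a multi-core CPU or GPU flow this is essentially the whole programming
 * model: write it, compile it, run it, debug it with standard tools.
 * Targeting an FPGA, the equivalent function usually becomes an RTL module
 * (VHDL/Verilog) plus embedded-processor driver code - the skills gap
 * described above.  Names and sizes here are made up for the example.
 */
#include <stdio.h>

#define N 8

/* Plain C: any C programmer can read, test and profile this on a PC. */
static long mac(const int *a, const int *b, int n)
{
    long acc = 0;
    for (int i = 0; i < n; i++)
        acc += (long)a[i] * b[i];
    return acc;
}

int main(void)
{
    int a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[N] = {8, 7, 6, 5, 4, 3, 2, 1};

    printf("mac = %ld\n", mac(a, b, N));
    return 0;
}
```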

In the end, the real trick is not just providing a hardware solution that delivers the correct power and price point but a truly integrated tool set that leverages the expansive resource pool of C programmers rather than the much smaller resource puddle of HDL programmers. And no one, big or small, new or old, is investing in that development effort.


I have spent a fair amount of my formative years in and around the field programmable gate array (FPGA) industry.  I participated in the evolution of FPGAs from a convenient repository for glue logic and a pricey but useful prototyping platform into a convenient repository for lots of glue logic, an affordable (but still a little pricey) platform to improve time-to-market, and a useful system-on-a-chip platform.  There was much talk about FPGAs going mainstream, displacing all but a few ASICs and becoming the vehicle of choice for most system implementations.  It turns out that last step…the mainstreaming, the death of ASICs, the proliferating system-on-chip…is still underway.  And maybe it’s just around the corner, again.  But maybe it’s not.

FPGA companies (well, Xilinx and Altera) appear to be falling prey to the classic disruptive technology trap described by Clayton Christensen: listening to the calls of the deans of Wall Street and pursuing fat margins.  Whether it’s Virtex or Stratix, both Xilinx and Altera are innovating at the high end, delivering very profitable and very expensive parts that their biggest customers want, and pretty much ignoring the little guys who are looking for cheap, functional and mostly low power devices.

This opens the door for players like SiliconBlue, Actel or Lattice to pick a niche and exploit the heck out of it.  Be it low power, non-volatile storage or security, these folks are picking up some significant business here and there.

This innovation trap, however, ignores a huge opportunity that really only a big player can address.  I think that the biggest competitor to FPGAs is not ASSPs or ASICs or even other, cheaper FPGAs.  I think that what everyone needs to be watching out for is CPUs and GPUs.

Let’s face it, even with an integrated processor in your FPGA, you still really need to be a VHDL or Verilog HDL developer to build systems based on the FPGA.  And how many HDL designers are there worldwide?  Tens of thousands?  Perhaps.  Charitably.  This illuminates another issue with systems-on-a-chip – software and software infrastructure. I think this might even be the most important obstacle to the wide adoption of programmable logic technology. To design a CPU- or GPU-based system, you need to know C or C++.  How many C developers are there worldwide?  Millions?  Maybe more.

With a GPU you are entering the world of tessellation automata or systolic arrays.  It is easier (but still challenging) to map a C program to a processor grid than to a sea of gates.  And you also get to leverage the existing broad set of software development and debug tools.  What would you prefer to use to develop your next system on a chip – SystemC with spotty support infrastructure, or standard C with deep and broad support infrastructure?
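
As a hedged sketch of what “mapping a C program to a processor grid” can look like in practice (the thread count, array size and names below are arbitrary choices for the example, not anyone’s product), here is ordinary C plus POSIX threads spreading a data-parallel loop across cores while the compile-run-debug loop stays entirely inside standard tooling:

```c
/*
 * Illustrative sketch only: partitioning a data-parallel sum across a small
 * "grid" of processor cores with plain C and POSIX threads.  The point is
 * not performance; it is that the partitioning, the debugging and the
 * tooling all stay inside the standard C ecosystem, unlike partitioning
 * the same work across a sea of gates.  Build with: cc -pthread
 */
#include <pthread.h>
#include <stdio.h>

#define N_THREADS 4
#define N_ITEMS   1024

static int data[N_ITEMS];

struct chunk { int start, count; long sum; };

static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    c->sum = 0;
    for (int i = c->start; i < c->start + c->count; i++)
        c->sum += data[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[N_THREADS];
    struct chunk chunks[N_THREADS];
    long total = 0;

    for (int i = 0; i < N_ITEMS; i++)
        data[i] = i;

    /* One chunk per "core": the mapping step is a few lines of C. */
    for (int t = 0; t < N_THREADS; t++) {
        chunks[t].start = t * (N_ITEMS / N_THREADS);
        chunks[t].count = N_ITEMS / N_THREADS;
        pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
    }
    for (int t = 0; t < N_THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += chunks[t].sum;
    }

    printf("total = %ld\n", total);
    return 0;
}
```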

The road to the FPGA revolution is littered with companies whose products started as FPGA-based with a processor to help, but then migrated to a full multi-core CPU solution, dumping the FPGA (except for data path and logic consolidation).  Why is that?  Because to make an FPGA solution work you need to be an expert immersed in FPGA architectures and you need to develop your own tools to carefully divide hardware and software tasks.  And in the end, to get really great speeds and results, you need to keep tweaking your system and reassigning hardware and software tasks.   And then there’s the debugging challenge.  In the end – it’s just hard.

On the other hand, grab an off-the-shelf multi-core processor, whack together some C code, compile it and run it, and you get pretty good speeds and the same results.  On top of that – debugging is better supported.

I think FPGAs are great and someday they may be able to provide a real system-on-a-chip solution but they won’t until FPGA companies stop thinking like semiconductor manufacturers and start thinking (and acting) like the software application solution providers they need to become.


iPad Explained

In a previous post, I admitted that I was ignorant of, or perhaps merely immune to, the magic of the iPad.  Since that time, through a series of discussions with people who do get it, I have come to understand the magic of the iPad and also why it holds no such power over me.

Essentially, the iPad is a media consumption device.  It is for those who consume movies, videos, music, games, puzzles, newspapers, facebook, MySpace, magazines, YouTube and all of that stuff available on the web but do not have a requirement for lots of input (typing or otherwise).  You can tap out a few emails or register for a web site but, really, it’s not a platform for writing documents, developing presentations, writing code or working out problems and doing analysis.  That is, unless you buy a few pricey accessories.

The pervasive (well, at least around here) iPad billboards really say it best.  They typically feature casually attired torsos reclining, legs raised and bent at the knees to support the iPad.  These smartly but simply dressed users are lounging and passively consuming media.  They are not working.  They are not developing.  They are not even necessarily thinking.  They are simply happy (we think – even though no faces are visible) and drinking in the experience.  You are expected to (lightly) toss the iPad about after quickly reading an article, keep it on your night stand for those late-night web-based fact checks, leave it on your coffee table to watch that old episode of Star Trek at your leisure, or pack it in your folio to help while away the hours in waiting rooms and airports.

But this isn’t me. I am more of a developer.  Certainly of software, sometimes of content.  I like a full-sized (or near full-sized) real keyboard for typing.  If I need to check something late at night, my cell phone browser seems to do the trick just fine.  I can triage my email on my cell phone, too.  So, I am not an iPad person.  At least not yet.  But if it really is only a consumption platform, then not ever.  But one never quite knows what those wizards in Cupertino might be conjuring up next, does one?


I admit it…I am clueless

My world is about to change but I fully admit that I don’t get how. That’s right…everything changes this Saturday, April 3rd when the Apple iPad is released. Am I the only one who looks at it and thinks of those big button phones that one purchases for their aging parent? Yes, I know Steve Jobs found the English language lacking sufficiently meaningful superlatives to describe it. And, yes, I know there will be hundreds of pre-programmed Apple zombies lining the streets to collect their own personal iPad probably starting Friday evening. But, I don’t understand why I would want an oversized iPhone without the phone or the camera or application software or a keyboard and why and how this gadget will change civilization. Don’t get me wrong, I know it will. But I don’t see how the iPad will revolutionize, say, magazine sales. Why would I buy Esquire for $2.99 when I get the print version for less than $1 and throw it out after reading it for 30 minutes (full disclosure: I am a subscriber – I admit that freely)? I also know I can’t take the iPad to the beach to read a book because it will get wet, clotted with sand and the screen will be unreadable in sunlight. But, I’m sure the iPad will be a huge hit. And I’m sure my life will change. Just tell me how. Someone…..please….?
