I’ve had occasion to be interviewed for positions at a variety of technology companies. Sometimes the position actually exists, sometimes it might exist, and other times the folks are just fishing for solutions to their problems and hoping to save a little from their consulting budget. In all cases, the goal of the interview is primarily to find out what you know and how well you know it in a 30 to 45 minute conversation. It is interesting to see how some go about doing it. My experience has been that an interview really tells you nothing, but it does give you a sense of whether the person is nice enough to “work well with others”.

But now, finally, the folks at Google have used big data to figure out something that has been patently obvious to anyone who has ever interviewed for a job or interviewed someone for one. An article published in The New York Times details a talk with Mr. Laszlo Bock, senior vice president of people operations at Google. In it, he shared that puzzle questions don’t tell you anything about anyone. I maintain that they tell you only whether someone has heard that particular puzzle question before. In the published interview Mr. Bock, less charitably, suggests that they merely serve to puff up the ego of the interviewer.

I think it’s only a matter of time before big data is used again to figure out another obvious fact – that asking simple or complex programming questions is no indicator of on-the-job success, especially now in the age of Google and open-source software. Let’s say you want to write some code to sort a string of arbitrary letters and determine the computational complexity: a few quick Google searches and presto – you have the solution. You need to understand the question and the nature of the problem, but the solution itself has merely become a matter of copying from your betters and equals who shared their ideas on the Internet. Of course, such questions are always made more useless when the caveat is added – “without using the built-in sort function” – which is, of course, the way you actually solve it in real life.
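For illustration only, here is the kind of answer those quick searches turn up – a shell sketch that leans on the built-in sort, which is exactly the “real life” solution the caveat forbids (the input string is arbitrary, and the O(n log n) cost is just that of the underlying comparison sort):

# Sort the letters of an arbitrary string using the built-in sort:
# split into one character per line, sort, then re-join.
s="interview"
echo "$s" | grep -o . | sort | tr -d '\n'; echo   # prints "eeiinrtvw"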

Another issue I see is the concern about experience with a specific programming language. I recall that the good people at Apple are particularly fond of Objective-C, to the point where they believe that unless you have had years of direct experience with it, you could never use it to program effectively. Of course, this position is insulting to both any competent programmer and the Objective-C language. The variations between these algorithmic control-flow languages are sometimes subtle, usually stylistic, but always easily understood. This is true of any programming language. In reality, if you are competent in any one, you should easily be able to master any other. For instance, Python uses indentation while C uses curly braces to delineate code blocks. Certainly there are other differences, but give any competent developer a few days and they can figure it out by leveraging their existing knowledge.

But that still leaves the hard question: how do you determine competency? I don’t think you can figure it out in a 45 minute interview – or a 45 hour one, for that matter – if the problems and work conditions are artificial. I think the first interview should be primarily behavioral and focus on fit, and then, if that looks good, the hiring entity should pay you to come in and work for a week solving an actual problem with the team that would be yours. This makes sense in today’s world of limited, at-will employment where everyone is really just a contractor waiting to be let go. In this approach, everyone gets to see how you fit in with the team, how productive you can be, how quickly you can come up to speed on a basic issue and how you actually work a problem to a solution in the true environment. This is very different from establishing that you can minimize the number of trips a farmer takes across a river with five foxes, three hens, six bags of lentils, a sewing machine and a trapeze.

I encourage you to share some of your ideas for improving the interview process.


That venerable electronic test standard IEEE Std 1149.1 (also known as JTAG, Boundary-Scan, or Dot 1) has just been freshened up. This is no ordinary freshening. The standard, last revised in 2001, was long overdue for some clarification and enhancement. It’s been a long time coming and now…it’s here. While the guts remain the same and in good shape, some very interesting options and improvements have been added. The improvements are intended to provide support for testing and verification of the more complex devices currently available and to acknowledge the more sophisticated test algorithms and capabilities afforded by the latest hardware. There is an attempt, as well (perhaps only as well as one can do this sort of thing), to anticipate future capabilities and requirements and to provide a framework within which they can be supported. Of course, since the bulk of the changes are optional, their value will only be realized if the end-user community embraces them.

There are only minor clarifications or relaxations to the rules that are already established. For the most part, components currently compliant with the previous version of the standard will remain compliant with this one. There is but one “inside baseball” sort of exception. The long denigrated and deprecated BC_6 boundary-scan cell has finally been put to rest. With the 2013 version it is no longer supported or defined, so any component supplier who chose to utilize this boundary-scan cell – despite all warnings to the contrary – must now provide their own BSDL package defining the BC_6 cell if they upgrade to using the STD_1149_1_2013 standard package for their BSDL definitions.

While this is indeed a major revision, I must again emphasize that all the new items introduced are optional. One of the largest changes is in documentation capability: the introduction of a new executable description language, the Procedural Description Language (PDL), used to document test procedures unique to a component. PDL, a Tcl-like language, was adopted from the work of the IEEE P1687 working group. 1687 is a proposed IEEE standard for the access to and operation of embedded instruments (1687 is therefore also known as iJTAG or Instrument JTAG). The first iteration of that standard was based on use of the 1149.1 Test Access Port and controller to provide chip access, and a set of modified 1149.1-type test data registers to create an access network for embedded instruments. PDL was developed to describe access to and operation of these embedded instruments.

Now, let’s look at the details.  The major changes are as follows:

In the standard body:

  • In order to allow devices to maintain their test logic in test mode, a new, optional test mode persistence controller was introduced. This means that test logic (like the boundary-scan register) can remain behaviorally in test mode even if the active instruction does not force test mode. To support this, the TAP controller was cleaved into two parts: one that controls test mode and another that provides the rest of the TAP functionality. In support of this new controller, there are three new instructions: CLAMP_HOLD and TMP_STATUS (both of which access the new TMP status test data register) and CLAMP_RELEASE.
  • Recognizing the emerging requirement for unique device identification codes, a new, optional ECIDCODE instruction was introduced along with an associated electronic chip identification test data register. This instruction-register pair is intended to supplement the existing IDCODE and USERCODE instructions and allow access to an electronic chip identification value that can be used to identify and track individual integrated circuits.
  • The problem of initializing a device for test has been addressed by providing a well-defined framework to formalize the process. The new, optional INIT_SETUP, INIT_SETUP_CLAMP and INIT_RUN instructions, paired with their associated initialization data and initialization status test data registers, were provided to this end. The intent is that these instructions formalize the manner in which programmable input/output (I/O) can be set up prior to board or system testing, as well as providing for the execution of any tasks required to put the system logic into a safe state for test.
  • Recognizing that resetting a device can be complex and require many steps or phases, a new, optional IC_RESET instruction and its associated reset_select test data register are defined to provide formalized control of component reset functions through the TAP.
  • Many devices now have a number of separate power domains, which can result in sections of the device being powered down while others are powered up. A single, uniform boundary-scan register does not align well with that device style. So, to support a test data register routed through domains that may be powered down, an optional standard TAP-to-test-data-register interface is recommended that allows for segmentation of test data registers. The concept of register segments allows for segments that may be excluded or included and is generalized sufficiently for use beyond the power-domain example.
  • There have also been a few enhancements to the boundary-scan register description to incorporate the following:
    1. Optional excludable (but not selectable) boundary-scan register segments
    2. Optional observe-only boundary-scan register cells to redundantly capture the signal value on all digital pins except the TAP pins
    3. Optional observe-only boundary-scan register cells to capture a fault condition on all pins, including non-digital pins, except the TAP pins.

The Boundary Scan Description Language annex was rewritten and includes:

  • Increased clarity and consistency based on end-user feedback accumulated over the years.
  • A technical change was made such that BSDL is no longer a “proper subset” of VHDL, but it is now merely “based on” VHDL. This means that BSDL now maintains VHDL’s flavor but has for all intents and purposes been “forked”.
  • As a result of this forking, formal definitions of language elements are now included in the annex rather than being inherited from VHDL.
  • Also as a result of this forking, the annex includes some changes to the BNF notation used, including definitions of all the special-character tokens.
  • Pin mapping now allows for documenting that a port is not connected to any device package pin in a specific mapped device package.
  • The boundary-scan register description introduces new attributes for defining boundary-scan register segments, and introduces a requirement for documenting the behavior of an un-driven input.
  • New capabilities are introduced for documenting the structural details of test data registers:
    1. Mnemonics may be defined that may be associated with register fields.
    2. Name fields within a register or segment may be defined.
    3. Types of cells used in a test data register (TDR) field may be defined.
    4. One may hierarchically assemble segments into larger segments or whole registers.
    5. Constraints may be defined on the values to be loaded in a register or register field.
    6. A register field or bit may be associated with specific ports.
    7. Power ports may be associated with other ports.
  • The User Defined Package has been expanded to support logic IP providers who may need to document test data register segments contained within their IP.

As I stated earlier, a newly adopted language, PDL, has been included in this version of the standard.  The details of this language are included as part of Annex C. PDL is designed to document the procedural and data requirements for some of the new instructions. PDL serves a descriptive purpose in that regard but, as such, it is also executable should a system choose to interpret it.

This version of the standard introduces new instructions for initializing internal test data register fields and configuring complex I/Os prior to entering the EXTEST instruction. Since the data required for initialization can vary for each use of the component on each distinct board or system design, there needed to be an algorithmic way to describe the data set-up and its application. It was therefore decided to adopt PDL and tailor it to the BSDL register descriptions and the needs of IEEE 1149.1.

Since the concept of BSDL and PDL working together is new and best explained via examples, Annex D supplies extended examples of BSDL and PDL used together to describe the structure of, and the procedures for using, the new capabilities. Similarly, Annex E provides example pseudo-code for the execution of the PDL iApply command, the most complex of the new PDL commands.

So that is the new 1149.1 in a nutshell. A fair amount of new capability. Some of it complex. All of it optional. Will you use it?


A spate of recent articles describes the proliferation of back doors in systems. There are so many such back doors in so many systems, they claim, that the idea of a completely secure and invulnerable system is, at best, a fallacy. These back doors may be a result of the system software or even designed into the hardware. Some are designed into systems to facilitate remote update, diagnosis, debug and the like – usually never with the intention of being a security hole. Some are inserted with subterfuge and espionage in mind by foreign-controlled entities keen on gaining access to otherwise secure systems. Some may serve both purposes. And some are just design or specification errors. This suggests that once you connect a system to a network, someone, somehow, will be able to access it. As if to provide an extreme example, a recent break-in at the United States Chamber of Commerce was traced to an Internet-connected thermostat.

That’s hardware. What about software? Despite the abundance of anti-virus software and firewalls, a little social engineering is all you really need to get through to any system. I have written previously about the experiment in which more than half of the employees who found USB memory sticks seeded in a parking lot inserted them into corporate laptops without any prompting. Email written as if sent from a superior is often used to get employees to open infected attachments that install themselves and open a hole in the firewall for external communication and control.

The problem is actually designed in.  The Internet was built for sharing. The sharing was originally limited to trusted sources. A network of academics. The idea that someone would try to do something awful to you – except as some sort of prank – was inconceivable.

That was then.

Now we are in a place where the Internet is omnipresent. It is used for sharing and viewing cat videos and for financial transactions. It is used for the transmission of top secret information and for buying cheese. It is connected to servers containing huge volumes of sensitive and personal customer data: social security numbers, bank account numbers, credit card numbers, addresses, health information and so on. And now, not a day goes by without reports of another breach. Sometimes attributed to Anonymous, the Chinese, organized crime or kids with more time than sense, these break-ins are relentless and everyone is susceptible.

So what to do?

There is a story, perhaps apocryphal, that at the height of the Cold War, when the United States captured a Soviet fighter jet and examined it, investigators discovered that there were no solid-state electronics in it. The entire jet was designed using vacuum tubes. That set them thinking. Were the Soviets merely backward, or did they design with tubes to guard against EMP attacks?

Backward to the future?

Are we headed to a place where the most secure organizations will go offline, reverting to paper documents, file folders and heavy cabinets stored in underground vaults? Of course such systems are not completely secure, as no system actually is. On the other hand, a break-in requires physical presence, and carting away tons of documents requires physical strength and effort. Paper is a material object that cannot be easily spirited away as a stream of electrons. Maybe that’s the solution. But what of all the information infrastructure built up for convenience, cost effectiveness, space savings and general efficiency? Do organizations spend more money going back to paper, staples, binders and hanging folders? And then purchase vast secure spaces to stow these materials?

Or will there instead be a technological fix: a parallel Internet infrastructure redesigned from the ground up so that it incorporates authentication, encryption and verifiable sender identification? Then all secure transactions and information could move to that newer, safer Internet. Is that newer, safer Internet just a .secure domain? Won’t that just be a bigger, better and more value-laden target for evil-doers? And what about back doors – even in a secure infrastructure, an open door, or a door with a breakable window, ruins the finest security architecture. And, of course, there is always social engineering, which provides access more easily than any other technique. Or spies. Or people thinking they are “doing good”.

The real solution may not yet even be defined or known. Is it quantum computing (which is really just a parallel environment built on a differently-developed computing infrastructure)? Or is it really nothing – in that there is no solution and we are stuck with tactical fixes? It’s an interesting question, but for now the answer is as clear as it was some 20 years ago when Scott McNealy said, “The future of the Internet is security”.


It’s all the rage right now to be viewed as a leader in the mobile space. There are many different sectors in which to demonstrate your leadership. There are operating systems like iOS and Android and maybe even Windows Phone (someday). There’s hardware from Apple, Samsung, HTC and maybe even Nokia. And of course there are applications like Foursquare, Square and other primarily mobile applications in social, payments, health and gaming, and then all the other applications rushing to mobile because they were told that’s where they ought to be.

Somewhere in this broad and vague classification is Facebook (or perhaps more properly “facebook”). This massive database of human foibles and interests is either being pressed into the mobile space or voluntarily exploring just exactly how to enter it and presumably dominate it. Apparently they have made several attempts to develop their own handset. The biggest issue, it seems, is that they believed that just because they are a bunch of really smart folks they should be able to stitch a phone together and make it work. I believe the saying is “too smart by half”. And since they reportedly tried this several times without success – perhaps they were also “too stubborn by several halves”.

This push by facebook raises the question: “What?” or even “Why?” There is a certain logic to it. Facebook provides hours of amusement to tens of millions of active users, and the developers at facebook already build applications to run on a series of mobile platforms. Those applications are limited in their ability to provide a full facebook experience and also limit facebook’s ability to extract revenue from those users. When you step back, though, you quickly realize that facebook is really a platform. It has messaging (text, voice and video), it has contact information, it has position and location information, it has your personal profile along with your interest history and friends, it knows what motivates you (by your comment contents and what you “like”) and it is a platform for application development (including games and exciting virus and spam possibilities) with a well-defined and documented interface. At the 10,000-foot level, facebook looks like an operating system and a platform ready to go. This is not too different from the vision that propelled Netscape into Microsoft’s sights, leading to Netscape’s ultimate demise. Microsoft doesn’t have the might it once did, but Google does and so does Apple. Neither may be “evil” but both are known to be ruthless. For facebook to enter this hostile market with yet another platform would be bold. And for that company to be one whose stock price and perceived confidence are faltering after a shaky IPO – it may also be dumb. But it may be the only and necessary option for growth.

On the other hand, facebook’s recent edict imploring all employees to access facebook from Android phones rather than their iPhones could either suggest that the elders at facebook believe their future is in Android or simply that they recognize that it is a growing and highly utilized platform. Maybe they will ditch the phone handset and go all in for mobile on iOS and Android on equal footing.

Personally, I think that a new platform with a facebook-centric interface might be a really interesting product, especially if the equipment cost is nothing to the end-user. A free phone supported by facebook ads, running all your favorite games, with constant chatter and photos from your friends? Talk about an immersive communications experience. It would drive me batty. But I think it would be a huge hit with a certain demographic. And how could they do this given their previous failures? Amongst the weaker players in the handset space, Nokia has teamed up with Microsoft but RIM continues to flail. RIM’s stock is plummeting, but they have a ready-to-go team of smart employees with experience in getting once-popular products to market, as well as that all-important experience in dealing with the assorted wireless companies, to say nothing of the treasure trove of patents they hold. They also have some interesting infrastructure in their SRP network that could be exploited by facebook to improve their service (or, after proper consideration, sold off).

You can’t help but wonder: if, instead of spending $1B on Instagram prior to its IPO, facebook had spent a little more and bought RIM, would the outcome and the IPO launch have been different? I can only speculate about that. Now, though, it seems that facebook ought to move soon or be damned to be a once-great player who squandered their potential.


Scott McNealy, the former CEO of the former Sun Microsystems, said in a late-1990s address to the Commonwealth Club that the future of the Internet is in security. Indeed, it seems that much effort and capital have been invested in addressing security matters: encryption, authentication, secure transaction processing, secure processors, code scanners, code verifiers and a host of other approaches to make your system and its software and hardware components into a veritable Fort Knox. And it’s all very expensive and quite time consuming (both in development and in actual processing). And yet we still hear of routine security breaches, data and identity theft, on-line fraud and other crimes. Why is that? Is security impossible? Unlikely? Too expensive? Misused? Abused? A fiction?

Well, in my mind, there are two issues, and they are the weak links in any security endeavour. The two actually have one common root. That common root, as Pogo might say, “is us”. The first, which has been in the press very much of late (and always), is the reliance on passwords. When you let customers in and secure their access using passwords, they sign up using passwords like ‘12345’ or ‘welcome’ or ‘password’. That is usually combated through the introduction of password rules. The rules usually require that passwords meet some minimum level of complexity – something like requiring that each password have a letter, a number and a punctuation mark and be at least 6 characters long. This might cause some customers to get so aggravated that they don’t bother signing up at all because they can’t use their favorite password. Other end users get upset but change their passwords to “a12345!” or “passw0rd!” or “welc0me!”. And, worst of all, they write the password down and put it on a sticky note on their computer.
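Just to make the point concrete, here is a minimal shell sketch of such a rule (the exact policy is my own assumption, not any particular site’s): at least 6 characters, containing a letter, a digit and a punctuation mark. Note how the barely-tweaked favorites sail right through.

# Minimal sketch of an assumed password complexity rule:
# at least 6 characters, with a letter, a digit and a punctuation mark.
check_password() {
  local p="$1"
  [[ ${#p} -ge 6 && $p =~ [[:alpha:]] && $p =~ [[:digit:]] && $p =~ [[:punct:]] ]]
}
check_password 'passw0rd!' && echo "accepted"   # the tweaked favorite passes
check_password '12345'     || echo "rejected"   # too short, no letter, no punctuation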

Of course, ordinary users are not the only ones to blame; administrators are human, too, and equally fallible. Even though they should know better, they are just as likely to have left the root or administrator password at the default “admin” – or even blank.

The second issue is directly the fault of the administrator – but it is wholly understandable. Getting a system, well a complete network of systems, working and functional is quite an achievement. It is not something to be toyed around with once things are set. When your OS supplier or application software provider delivers a security update, you will think many times over before risking system and network stability to apply it. The choice must be made. The administrator thinks: “Do I wreak havoc on the system – even theoretical havoc – to plug a security hole no matter how potentially damaging?” And considers that: “Maybe I can rely on my firewall…maybe I rely on the fact that our company isn’t much of a target…or I think it isn’t.” And rationalizes: “Then I can defer the application of the patch for now (and likely forever) in the name of stability.”

The bulk of hackers aren’t evil geniuses who stay up late at night doing forensic research and decompilation to find flaws, gaffes, leaks and holes in software and systems. No, they are much more likely to be people who read a little about the latest flaws and the most popular passwords and spend their nights just trying stuff to see what they can see. A few of them even specialize in social engineering, in which they simply guess or trick you into divulging your password – perhaps by examining your online social media presence.

The notorious Stuxnet malware worm may be a complex piece of software engineering, but it would have done nothing were it not for the peril of human curiosity. The virus allegedly made its way into secure facilities on USB memory sticks. Those memory sticks were carried in human hands and inserted into the targeted computers by those same hands. How did they get into those human hands? A few USB sticks with the virus were probably sprinkled in the parking lot outside the facility. Studies have determined that people will pick up USB memory sticks they find and insert them into their PCs about 60% of the time. The interesting thing is that the likelihood of grabbing and using those USB devices goes up to over 90% if the device has a logo on it.

You can have all the firewalls and scanners and access badges and encryption and SecureIDs and retinal scans you want. In the end, one of your best and most talented employees grabbing a random USB stick and using it on his PC can be the root cause of devastation that could cost you staff years of time to undo.

So what do you do? Fire your employees? Institute policies so onerous that no work can be done at all? As is usual, the best thing to do is apply common sense. If you are not a prime target like a government, a security company or a repository of reams of valuable personal data – don’t go overboard. Keep your systems up-to-date. The time spent now will definitely pay off in the future. Use a firewall. A good one. Finally, be honest with your employees. Educate them helpfully. None of the scare tactics, no “Loose Lips Sink Ships”, just straight talk and a little humor to help guide and modify behavior over time.


In the famous Aardman Animations short film “Creature Comforts”, a variety of zoo animals discuss their lives in the zoo. A Brazilian lion speaks at length about the virtue of the great outdoors (as compared with a zoo), recalling that in Brazil “We have space”. While space might be a great thing for Brazilian lions, it turns out that space is a dangerous and difficult reality in path names for computer applications.

In a recent contract, one portion of the work involved running an existing Windows application under Cygwin. Cygwin, for the uninitiated, provides the bash shell and most of the standard Unix commands so that you can experience a Unix-like environment under Windows. The Windows application I was working on had been abandoned for several years, and customer pressure finally reached a level at which maintenance and updates were required – nay, demanded. Cygwin support was required primarily for internal infrastructure reasons. The infrastructure was a testing framework – primarily comprising bash shell scripts – that ran successfully on Linux (for other applications). My job was to get the Windows application re-animated and running under the shell scripts on Cygwin.

It turns out that the Windows application had a variety of issues with spaces in path names. Actually, it had one big issue – it just didn’t work when the path names had spaces. The shell scripts had a variety of issues with spaces. Well, one big issue – they, too, just didn’t work when the path names had spaces. And it turns out that some applications and operations in Cygwin have issues with spaces, too. Well, that one big issue – they don’t like spaces.

Now by “like”, I mean that when the path name contains spaces, even using ‘\040’ (instead of the space) or quoting the name (e.g., “Documents and Settings”) does not resolve matters and instead merely yields unusual and unhelpful error messages. The behavior was completely unpredictable as well. For instance, quoting might get you partway through a section of code, but then the same quoted name would fail when used to call stat. It would then turn out that stat didn’t like spaces in any form (quoted, escaped, whatever…).

Parenthetically, I would note that the space problem is widespread. I was doing some Android work and getting an odd and unhelpful error (“invalid command-line parameter”) when trying to run my application on the emulator under Eclipse. It turned out that a space in the path name to the Android SDK was the cause. Once the space was removed, all was well.

The solution to my problem turned out to be manifold. It involved a mixture of quoting, clever use of cygpath and the Windows API calls GetLongPathName and GetShortPathName.

When assigning and passing variables around in shell scripts – quoting a space-laden path or a variable containing a space-laden path – the solution was easy. Just remember to use quotes:

THIS="${THAT}"

Passing command-line options that include path names with spaces tended to be more problematic. The argc/argv parsers don’t like spaces. They don’t like them quoted and don’t like them escaped. Or maybe the parser likes them but the application doesn’t. In any event, the specific workaround I used was clever manipulation of the path using the cygpath command. The cygpath -w -s command translates a path name to the Windows version (with the drive letter and a colon at the beginning) and then shortens the name to the old-style 8+3 limited format, thereby removing the spaces. An additional trick: if you then need the Cygwin-style path – without spaces – you take the output of cygpath -w -s and run it through cygpath -u. That gives you a /cygdrive/ style file name with no spaces. There is no other direct way to generate a Cygwin Unix-style file name without spaces. A short sketch of the round trip follows.
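The path below is purely illustrative, and the 8+3 conversion only works for paths that actually exist, since Windows generates the short aliases on disk:

# Round-trip a space-laden path through cygpath to get space-free names.
LONG="/cygdrive/c/Documents and Settings/build area/project"
WIN_SHORT=$(cygpath -w -s "$LONG")     # Windows 8+3 form, e.g. C:\DOCUME~1\BUILDA~1\project
UNIX_SHORT=$(cygpath -u "$WIN_SHORT")  # back to a /cygdrive/... name, still free of spaces
echo "$UNIX_SHORT"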

These manipulations allow you to get the sort of input you need to the various Windows programs you are using. It is important to note, however, that a Windows GUI application built using the standard file browser widgets and the like always passes fully instantiated, space-laden path names. The browser widgets can’t even correctly parse 8+3 names. Some of the system routines, however, don’t like spaces. So the trick becomes: how do you manipulate the names once within the sphere of the Windows application? Well, there are a couple of things to keep in mind: the solutions I propose will not work with Cygwin Unix-style names, and they will not work with relative path names.

Basically, I used the two Windows API calls GetLongPathName and GetShortPathName to manipulate the path. I used GetShortPathName to generate the old-style 8+3 format name that removes all the spaces. This ensured that all system calls worked without a hitch. Then, in order to display messages that the end-user would recognize, I made sure that the long paths were restored by calling GetLongPathName for all externally shared information. I need to emphasize that these Windows API calls do not appear to work with relative path names – they return an empty string as a result – so you need to watch out for that.

Any combination of all these approaches (in whole or in part) may be helpful to you in resolving any space issues you encounter.


Back at the end of March, I attended O’Reilly‘s Web 2.0 Expo in San Francisco. As usual with the O’Reilly brand of conferences, it was a slick, show-bizzy affair. The plenary sessions were fast-paced, with generic techno soundtracks, theatrical lighting and spectacular attempts at buzz-generation. Despite their best efforts, the staging seemed to overwhelm the Droopy Dog-like presenters, who tend to be more at home coding in darkened rooms whilst gorging themselves on Red Bull and cookies. Even the audience seemed to prefer the company of their smartphones or iPads to any actual human interaction, with “live tweets” being the preferred method of communication.

In any event, the conference is usually interesting and a few nuggets are typically extracted from the superficial, mostly promotional aspects of the presentations.

What was clear was that every start-up and every business plan was keyed on data collection. Data collection about YOU. The more – the better. The goal was to learn as much about you as possible so as to be able to sell you stuff. Even better – to sell you stuff that was so in tune with your desires that you would be helpless to resist purchasing it.

The trick was – how to get you to cough up that precious data? Some sites just assumed you’d be OK with spending a few days answering questions and volunteering information – apparently just for the sheer joy of it. Others believed that being up-front and admitting that you were going to be sucked into a vortex of unrelenting and irresistible consumption would be reward enough. Still others felt that they ought to offer you some valuable service in return. Most often, oddly enough, this service was based on financial planning and retirement savings.

The other thing that was interesting (and perhaps obvious) was that data collection is usually pretty easy (at least the basic stuff). Getting details is harder and most folks do expect something in return. And, of course, the hardest part is the data mining to extract the information that would provide the most compelling sales pitch to you.

There are all sorts of ways to build the case around your apparent desires. By finding out where you live or where you are, they can suggest things “like” other things you already have that are nearby. (You sure seem to like Lady Gaga; you know there’s a meat dress shoppe around the corner…) By finding out who your friends are and what they like, they can apply peer-pressure-based recommendations. (All of your friends are downloading the new Justin Bieber recording. Why aren’t you?) And by finding out about your family and demographic information, they can suggest what you need or ought to be needing soon. (Your son’s 16th birthday is coming up soon; how about a new car for him?)

Of all the sites and ideas, it seems to me that Intuit‘s Mint is the most interesting. Mint is an on-line financial planning and management site – sort of like Quicken, but online. To “hook” you, their key idea is to offer you the tease of the most valuable analysis with the minimum of initial information. It’s almost as if, given just your email and zip code, they’ll draw up a basic profile of you and your lifestyle. Give them a bit more and they’ll make it better. And so you get sucked in, but you get value for your data. They do claim to keep the data separate from your identity, but they also collect demographically filtered data and likely geographically filtered data.

This really isn’t news. facebook understood this years ago when their ill-fated Beacon campaign was launched. It probably would have been better accepted had it been rolled out more sensitively. But this is ultimately where everyone is stampeding right now.

The most interesting thing is that there is already a huge amount of personal data on the web. It is protected because it’s all in different places and not associated. facebook has all of your friends and acquaintances. Amazon and eBay have a lot about what you like and what you buy. Google has what you’re interested in (and if you have an Android phone – where you go). Apple has a lot about where you go and who you talk to and also through your app selection what you like and are interested in. LinkedIn has your professional associations. And, of course, twitter has when you go to the bathroom and what kind of muffins you eat.

Each of these giants is trying to expand their reservoir of data about you. Other giants are trying to figure out how to get a piece of that action (Yahoo!, Microsoft). And yet others, are trying to sell missing bits of information to these players. Credit card companies are making their vast purchasing databases available, specialty retailers are trying to cash in, cell phone service providers are muscling in as well. They each have a little piece of your puzzle to make analysis more accurate.

The expectation is that there will be acceptance of diminishing privacy, along with some sort of belief that the holders of these vast databases will be benevolent and secure and will not require government intervention. Technologically, storage and retrieval will need to be addressed, and newer, faster algorithms for analysis will need to be developed.



I think it’s high time I authored a completely opinion-based article, full of observations and my own prejudices, that might result in a litany of ad hominem attacks and insults. Or at least, I hope it does. This little bit of prose will outline my view of the world of programmable logic as I see it today. Again, it is as I see it. You might see it differently. But you would be wrong.

First let’s look at the players. The two-headed Cerberus of the programmable logic world is Altera and Xilinx. They battle it out for the bulk of the end-user market share. After that, there are a series of niche players (Lattice Semiconductor, Microsemi (which recently purchased Actel) and Quicklogic), lesser lights (Atmel and Cypress) and wishful upstarts (Tabula, Achronix and SiliconBlue).

Atmel and Cypress are broadline suppliers of specialty semiconductors.  They each sell a small portfolio of basic programmable logic devices (Atmel CPLDs, Atmel FPGAs and Cypress CPLDs).  As best I can tell, they do this for two reasons.  First, they entered the marketplace and have been in it for about 15 years and at this point have just enough key customers using the devices such that the cost of exiting the market would be greater than the cost of keeping these big customers happy.  The technology is not, by any stretch of the imagination, state of the art so the relative cost of supporting and manufacturing these parts is small.  Second, as a broadline supplier of a wide variety of specialty semiconductors, it’s nice for their sales team to have a PLD to toss into a customer’s solution to stitch together all that other stuff they bought from them.  All told, you’re not going to see any profound innovations from these folks in the programmable logic space.  ‘Nuff said about these players, then.

At the top of the programmable logic food chain are Altera and Xilinx. These two titans battle head-to-head and every few years exchange the lead. Currently, Altera has leapt (or will leap) ahead of Xilinx in technology, market share and market capitalization. But when it comes to innovation and new ideas, both companies typically offer incremental improvements rather than risky quantum leaps. They are both clearly pursuing a policy that chases the high-end, fat-margin devices, focusing more and more on the big, sophisticated end-user who is most happy with greater complexity, capacity and speed. Those margin leaders are Xilinx’s Virtex families and Altera’s Stratix series. The sweet spot for these devices is low-volume, high-cost equipment like network equipment, storage system controllers and cell phone base stations. Oddly though, Altera’s recent leap to the lead can be traced to their mid-price Arria and low-price Cyclone families, which offered lower power and a lower price point with the right level of functionality for a wider swath of customers. Xilinx had no response, having not produced a similarly featured device from the release of the Spartan3 (and its variants) until the arrival of the Spartan6 some four years later. This gap provided just the opportunity that Altera needed to gobble up a huge portion of a growing market. And then, when Xilinx’s Spartan6 finally arrived, its entry into production was marked by bumpiness and a certain amount of “So what?” from end-users who were about to migrate, or had already migrated, to Altera.

The battle between Altera and Xilinx is based on ever-shrinking technology nodes, ever-increasing logic capacity, faster speeds and a widening variety of IP cores (hard and soft) and, of course, competitive pricing.  There has been little effort on the part of either company to provide any sort of quantum leap of innovation since there is substantial risk involved.  The overall programmable logic market is behaving more like a commodity market.  The true differentiation is price since the feature sets are basically identical.  If you try to do some risky innovation, you will likely have to divert efforts from your base technology.  And it is that base technology that delivers those fat margins.  If that risky innovation falls flat, you miss a generation and lose those fat margins and market share.

Xilinx’s recent announcement of the unfortunately named Zynq device might be such a quantum innovative leap but it’s hard to tell from the promotional material since it is long on fluff and short on facts.  Is it really substantially different from the Virtex4FX from 2004?  Maybe it isn’t because its announcement does not seem to have instilled any sort of fear over at Altera.  Or maybe Altera is just too frightened to respond?

Lattice Semiconductor has worked hard to find little market niches to serve.  They have done this by focusing mostly on price and acquisitions.  Historically the leader in in-system programmable devices, Lattice saw this lead erode as Xilinx and Altera entered that market using an open standard (rather than a proprietary one, as Lattice did).  In response, Lattice moved to the open standard, acquired FPGA technology and tried to develop other programmable niche markets (e.g., switches, analog).   Lattice has continued to move opportunistically; shifting quickly at the margins of the market to find unserved or underserved programmable logic end-users, with a strong emphasis on price competitiveness.  They have had erratic results and limited success with this strategy and have seen their market share continue to erode.

Microsemi owns the antifuse programmable technology market. This technology is strongly favored by end-users who want high reliability in their programmable logic. Unlike the static-RAM-based programmable technologies used by most every other manufacturer, antifuse is not susceptible to single-event upsets, making it ideal for space, defense and similar applications. The downside of this technology is that, unlike static RAM, antifuse is not reprogrammable. You can only program it once, and if you need to fix your downloaded design, you need to get a new part, program it with the new pattern and replace the old part with the new one. Microsemi has attempted to broaden their product offering into more traditional markets by offering more conventional FPGAs. However, rather than basing their FPGA’s programmability on static RAM, the Microsemi product, ProASIC, uses flash technology. A nice incremental innovation offering its own benefits (non-volatile pattern storage) and costs (flash does not scale well with shrinking technology nodes). In addition, Microsemi is already shipping a Zynq-like device known as the SmartFusion family. The SmartFusion device has hard analog IP included. As best I can tell, Zynq does not include that analog functionality. SmartFusion is relatively new, and I do not know how popular it is or what additional functionality its end-users are requesting. I believe the acceptance of the SmartFusion device will serve as an early bellwether for the acceptance of Zynq.

Quicklogic started out as a more general purpose programmable logic supplier based on a programming technology similar to antifuse with a low power profile.  Over the years, Quicklogic has chosen to focus their offering as more of a programmable application specific standard product (ASSP).  The devices they offer include specific hard IP tailored to the mobile market along with a programmable fabric.  As a company, their laser focus on mobile applications leaves them as very much a niche player.

In recent years, a number of startups have entered the marketplace. One might have thought that they would target the low end and seek to provide “good enough” functionality at a low price in an effort to truly disrupt the market from the bottom, gain a solid foothold and sell products to those overserved by what Altera and Xilinx offer, but that turns out not to be the case. In fact, two of the new entrants (Tabula and Achronix) are specifically after the high-end, high-margin sector that Altera and Xilinx so jealously guard.

The company with the most buzz is Tabula. They are headed by former Xilinx executive Dennis Segers, who is widely credited with making the decisions that resulted in Xilinx’s stellar growth in the late 1990s with the release of the original Virtex device. People are hoping for the same magic at Tabula. Tabula’s product offers what they refer to as a SpaceTime architecture and 3D programmable logic. Basically, what that means is that your design is sectioned and swapped in and out of the device, much like a program is swapped in and out of a computer’s RAM. This provides a higher effective design density on a device having less “hard logic”. An interesting idea. It seems like it would likely use less power than the full design realized on a single chip. The cost is the complexity of the design software and the critical nature of the system setup on the board (i.e., the memory interface and its implementation) needed to ensure the swapping works as promised. Is it easy to use? Is it worth the hassle? It’s hard to tell right now. There are some early adopters kicking the tires. If Tabula is successful, will they be able to expand their market beyond where they are now? It looks like their technology might scale up very easily to provide higher and higher effective densities. But does it scale down to low-cost markets easily? It doesn’t look like it. There is a lot of overhead associated with all that image swapping, and its value for the low end is questionable. But I’ll be the first to say: I don’t know.

Achronix, as best I can tell, has staked out the high-speed, high-density market. That is quite similar to what Tabula is addressing. The key distinction between the two companies (besides Achronix’s lack of Star Trek-like marketing terminology) is that Achronix is using Intel as their foundry. This might finally put an end to those persistent annual rumors that Intel is poised to purchase Altera or Xilinx (is it the same analyst every time who leaks this?). That Intel relationship and a less complex (than Tabula’s) fabric technology mean that Achronix might be best situated to offer their product for those defense applications that require a secure, on-shore foundry. If that is the case, then Achronix is aiming at a select and very profitable sector that neither Altera nor Xilinx will let go without a big fight. Even if successful, where does Achronix expand? Does their technology scale down to low-cost markets easily? I don’t think so…but I don’t know. Does it scale up to higher densities easily? Maybe.

SiliconBlue is taking a different approach.  They are aiming at the low power, low cost segment.  That seems like more of a disruptive play.  Should they be able to squeeze in, they might be able to innovate their way up the market and cause some trouble for Xilinx and Altera.  The rumored issue with SiliconBlue is that their devices aren’t quite low power enough or quite cheap enough to fit their intended target market.  The other rumor is that they are constantly looking for a buyer.  That doesn’t instill a high level of confidence now, does it?

So what does all this mean?  The Microsemi SmartFusion device might be that quantum innovative leap that most likely extends the programmable logic market space.  It may be the one product that has the potential to serve an unserved market and bring more end-user and applications on board.  But the power and price point might not be right.

The ability of any programmable logic solution to expand beyond the typical sweet spots is based on its ability to displace other technologies at a lower cost and with sufficient useful functionality. PLDs are competing not just against ASSPs but also against multi-core processors and GPUs. Multi-core processors and GPUs offer a simpler programming model (using common programming languages), relatively low power and a wealth of application development tools with a large pool of able, skilled developers. PLDs still require understanding hardware description languages (like VHDL or Verilog HDL) as well as common programming languages (like C), in addition to specific conceptual knowledge of hardware and software. On top of all that, programmable logic often delivers higher power consumption at a higher price point than competing solutions.

In the end, the real trick is not just providing a hardware solution that delivers the correct power and price point but a truly integrated tool set that leverages the expansive resource pool of C programmers rather than the much smaller resource puddle of HDL programmers. And no one, big or small, new or old, is investing in that development effort.


Software is complicated. Look up “software complexity” on Google and you get almost 10,000,000 matches. Even if 90% of them are bogus, that’s still about a million relevant hits. But I wonder: does that mean that when there’s a problem in a system, the software is always to blame?

I think that in pure application-level software development, you can be pretty sure that any problems that arise in your development are of your own making. So when I am working on that sort of project, I make liberal use of a wide variety of debugging tools, keep quiet and fix the problems I find.

But when developing for any sort of custom embedded system, the lines suddenly become much more blurry. With my clients, when I am verifying embedded software targeting custom systems, things aren’t working according to the written specification, and the initial indications are that a value read from something like a sensor or a pin is wrong, I find that I will always report my observations but quickly indicate that I need to review my code before making any final conclusion. I sometimes wonder if I do this because I actually believe I could be that careless, or because I am merely being subconsciously obsequious (“Oh no, dear customer, I am the only imperfect one here!”), or perhaps because I am being conservative in exercising my judgement, choosing to dig for more facts one way or the other. I suspect it might be the latter.

But I wonder if this level of conservatism does anyone any good. Perhaps I should be quicker to point the finger at the guilty party? Maybe that would speed up development and hasten delivery?

In fact, what I have noticed is that my additional efforts often point out additional testing and verification strategies that can be used to improve the overall quality of the system, regardless of the source of the problem. I am often better able to identify critical interfaces and, more importantly, critical behaviors across these interfaces that can be monitored as either online or offline diagnosis and validation tasks.


Internet Immortality

My social network appears to be wide, diverse and technologically savvy enough that I have a large number of friends and acquaintances with large Internet footprints. That includes people with a presence on a variety of social networking sites like facebook and LinkedIn, Twitter and Flickr feeds, multiple email accounts and even blogs.

Having a broad sample of such connections means that life cycle events are not unusual in this group either. That includes death. I have now – several times – had the oddly jarring experience of receiving a reminder about the birthday of a friend who has passed away, or a suggestion to reconnect with a long-dead relative, and similar communications from across the chasm – as it were.

There is both joy and sorrow associated with these episodes. The sorrow is obvious but the joy is in spending a few moments reviewing their blog thoughts or their facebook photos and, in essence, celebrating their life in quiet, solitary reflection. And it provides these people with their own little slice of immortality. It bolsters the line from the movie The Social Network saying that “The Internet isn’t written in pencil; it’s written in ink”.

This got me thinking. In an odd way, this phenomenon struck me as an opportunity – an opportunity for a new Internet application.

I see this opportunity as having at least two possibilities. The first would be a service (or application) that seeks out the Internet footprint of the deceased and expunges and closes all their accounts. This might have to include a password-cracking program and some clever manner of deducing or inferring login names – for the cases where little is known about the person’s online activities. It may be that, even after the accounts are closed, the person lives on in the never-purged databases hidden behind the websites, but they will be gone from public view.

The alternate would serve those who wish to be celebrated and truly immortalized. This would collect the entire presence of a person on the WWW and provide a comprehensive home page to celebrate their life, through their own words and images. This home page would include links to all surviving accounts, photos, posts and comments thereby providing a window into a life lived (albeit online).

In an odd way, this creates an avatar that is a more accurate representation of yourself than anything you could possibly create on Second Life or any similar virtual world. One could certainly imagine, though, taking all that data input and using it to create a sort of stilted avatar driven by the content entered over the course of your life.  It might only have actions based on what was collected about you but a more sophisticated variation would derive behaviors or likely responses based on projections of your “collected works”.

Immortality?  Not exactly.  But an amazing simulation.
