Archive for 'Innovation'

There is a great imbalance in the vast internet marketplace that has yet to be addressed and is quite ripe for the picking. In fact, this imbalance is probably at the root of the astronomical stock market valuations of existing and new companies like Google, Facebook, Twitter and their ilk.

It turns out that your data is valuable.  Very valuable.  And it also turns out that you are basically giving it away – not quite for free, but pretty close.  What you get in return is personalization.  You get advertisements targeted at you, offering products you don’t need but are likely to find quite irresistible.  You get recommendations for other sites that ensure you need never venture outside the bounds of your existing likes and dislikes.  You get matched up with companies providing services that you might or might not need but will definitely think are valuable.

Ultimately, you are giving up your data so businesses can more efficiently extract more money from you.

If you are going to be exploited in this manner, it’s time to make that exploitation a two-way street.  Newspapers, for instance, are rapidly arriving at the conclusion that there is actual monetary value in the information they provide.  They are seeing that the provision of vetted, verified, thoughtful and well-written information is intrinsically worth more than nothing.  They have decided that simply giving this valuable commodity away for free is giving up the keys to the kingdom.  The Wall Street Journal, the New York Times, The Economist and others are seeing that people are willing to pay and do actually subscribe.

There is a lesson in this for you – as a person.  There is value in your data: your mobile movements, your surf trail, your shopping preferences.  It should not be the case that you implicitly surrender this information for better personalization or even a $5 Starbucks gift card.  This constant flow of data from you – your actions, movements and keystrokes – ought to result in a constant flow of money to you.  When you think about it, why isn’t the ultimate personal data collection engine, Google Glass, given away for free?  Because people don’t realize that personal data collection is its primary function.  Clearly, the time has come for the realization of a personal paywall.

The idea is simple: if an entity wants your information, they pay you for it.  Directly.  They don’t go to Google or Facebook and buy it – they open up an account with you and pay you directly, at a rate that you set.  Then that business can decide whether you are worth what you think you are.  You can adjust your fee up or down at any time, and you can be dropped or picked up by followers.  You could provide discount tokens or free passes for friends.  You could charge per click, hour, day, month or year.  You might charge more for your mobile movements and less for your internet browsing trail.  The data you share comes with an audit trail that ensures that if the information is passed on to others without your consent, you will be able to take action – maybe even delete it – wherever it is.  Maybe your data lives for only a few days or months or years – like a contract or a note – and then disappears.

Of course, you will have to do the due diligence to ensure you are selling your information to a legitimate organization and not a Nigerian prince.  This, in turn, may result in the creation of a new class of service providers who vet these information buyers.

This data reselling capability would also provide additional income to individuals.  It would not be a living wage to compensate for having lost a job, but it would be some compensation for participating in Facebook or LinkedIn, or a sort of kickback for buying something at Amazon and then allowing them to target you as a consumer more effectively.  It would effectively reward you for contributing the information that drives the profits of these organizations and recognize the value that you add to the system.

The implementation is challenging and would require encapsulating data in packets over which you exert some control.  An architectural model similar to Bitcoin’s, with a central table indicating where every bit of your data is at any time, would be valuable and necessary.  Use of the personal paywall would likely require that you install an application on your phone or use a customized browser that releases your information only to your paid-up clients.  In addition, some sort of easy, frictionless mechanism through which companies or organizations could automatically decide to buy your information – and perhaps negotiate, again automatically, with your paywall for a rate that suits both of you – would make use of the personal paywall invisible and easy.  Again, this technology would have to screen out fraudulent entities and not even bother negotiating with them.
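To make the moving parts concrete, here is a minimal sketch in Python of the kind of controlled data packet described above.  Every name in it (DataPacket, access, rate_per_use) is hypothetical – an illustration of the idea, not any real protocol.

```python
import hashlib
import time

class DataPacket:
    """A unit of personal data wrapped with owner-set terms.

    A sketch of the control envelope described above: an owner-set
    rate, a lifetime after which the data 'disappears', and an
    audit trail recording every release.
    """

    def __init__(self, owner, payload, rate_per_use, ttl_days):
        self.owner = owner
        self.payload = payload
        self.rate_per_use = rate_per_use                # price the owner sets
        self.expires_at = time.time() + ttl_days * 86400
        self.audit_trail = []                           # who accessed, and a receipt

    def access(self, buyer, offered_rate):
        """Release the payload only to a paid-up buyer before expiry."""
        if time.time() > self.expires_at:
            return None                                 # data has expired
        if offered_rate < self.rate_per_use:
            return None                                 # negotiation failed
        # Record the access so onward sharing can be traced later.
        receipt = hashlib.sha256(
            f"{buyer}:{time.time()}".encode()).hexdigest()
        self.audit_trail.append((buyer, receipt))
        return self.payload

packet = DataPacket("alice", {"zip": "95123"}, rate_per_use=0.05, ttl_days=30)
print(packet.access("acme-ads", offered_rate=0.10) is not None)  # True: paid enough
print(packet.access("cheapskate-co", offered_rate=0.01))         # None: refused
```

The automatic negotiation and fraud screening mentioned above would sit in front of `access`; this sketch only shows the per-packet rate check, lifetime and audit trail.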

There is much more to this approach to consider and many more challenges to overcome.  I think, though, that this is an idea that could change the internet landscape and make it more equitable and ensure the true value of the internet is realized and shared by all its participants and users.


Everyone who is anyone loves bandying about the name of Clayton Christensen, the famed Professor of Business Administration at the Harvard Business School, who is regarded as one of the world’s top experts on innovation and growth and who is most famous for coining the term “disruptive innovation”.  Briefly, the classical meaning of the term is as follows.  A company, usually a large one, focuses on serving the high-end, high-margin part of its business, and in doing so it provides an opening at the low-end, low-margin market segment.  This allows small, nimble, hungry innovators to get a foothold in the market by providing cheap but good-enough products to the low end, who are otherwise forsaken by the large company that is only willing to provide high-priced, over-featured products.  These small innovators use their foothold to innovate further upmarket, providing products of increasingly better functionality at lower cost than the Big Boys at the high end.  The Big Boys are happy with this because those lower-margin products are a lot of effort for little payback, and “The Market” rewards them handsomely for doing incremental innovation at the high end and maintaining high margins.  In the fullness of time, the little scrappy innovators disrupt the market with cheaper, better and more innovative solutions and products that catch up to and eclipse the offerings of the Big Boys, catching them off guard, and the once-large corporations, with their fat margins, become small, meaningless boutique firms.  Thus the market is disrupted, and the once regal and large companies, even though they followed all the appropriate rules dictated by “The Market”, falter and die.

Examples of this sort of evolution are many.  The Japanese automobile manufacturers used this sort of approach to disrupt the large American manufacturers in the 70s and 80s; the same with minicomputers versus mainframes, and then PCs versus minicomputers, to name but a few.  But when you think about it, sometimes disruption comes “from above”.  Consider the iPod.  Remember when Apple introduced their first music player?  They weren’t first to market, as there were literally tens of MP3 players available.  They certainly weren’t the cheapest, as about 80% of the portable players had a price point well below Apple’s $499 MSRP.  The iPod did have more features than most other players available and was in many ways more sophisticated – but $499?  The iPod was more expensive, more featured, had more storage than anyone could ever imagine needing and carried bigger margins than any other similar device on the market.  And it was a huge hit.  (I personally think that the disruptive part was iTunes, which made downloading music safe, legal and cheap at a time when the RIAA was making headlines by suing ordinary folks for thousands of dollars for illegal music downloads – but enough about me.)  From the iPod, Apple went on to innovate a few iPod variants, the iPhone and the iPad, as well as incorporating some of the acquired knowledge into the Mac.

And now, I think, another similarly modeled innovation is upon us.  Consider Tesla Motors.  Tesla started with the now-discontinued Roadster – a super-high-end luxury two-seater sports vehicle that was wholly impractical and basically a plaything for the 1%.  But it was a great platform to collect data and learn about batteries, charging, performance, efficiency, design, use and utility.  Then came the Model S that, while still quite expensive, brought that price within reach of perhaps the 2% or even the 3%.  In Northern California, for instance, Tesla Model S cars populate the roadways seemingly with the regularity of VW Beetles.  Of course, part of what makes them seem so common is that their generic luxury car styling makes them nearly indistinguishable, at first glance, from a Lexus, Jaguar, Infiniti, Maserati, Mercedes-Benz, BMW and the like.  The choice of styling is perhaps yet another avenue of innovation.  Unlike the Toyota Prius, whose iconic design became a “vector” sending a message to even the casual observer about the driver and perhaps the driver’s social and environmental concerns, the message of the Tesla’s generic luxury car design to the casual observer merely seems to be “I’m rich – but if you want to learn more about me, you better take a closer look”.  Yet even attracting this small market segment, Tesla was able to announce profitability for the first time.

With their third-generation vehicle, Tesla promises to reduce their selling price by 40% from the current Model S.  This would bring the base price to about $30,000, which is near the average selling price of new cars in the United States.  Even without the lower-priced vehicle available, Tesla is being richly rewarded by The Market thanks to a good product (some might say great), some profitability, excellent and savvy PR and lots and lots of promise of a bright future.

But it is the iPod model all over again.  Tesla is serving the high end and selling top-of-the-line technology.  They are developing their technology within a framework that is bound mostly by their ability to innovate and not by cost or selling price.  They are also acting in a segment of the market that is not really well contested (high-end luxury electric cars).  This gives them freedom from the pressures of competition and schedules – which gives them an opportunity to get things right rather than rushing out ‘something’ to appease the market.  And with their success in that market, they are turning around and using what they have learned to figure out how to build the same thing (or a similar thing) cheaper and more efficiently to bring the experience to the masses (think: iPod to Nano to Shuffle).  They will thus also be able to ease their way into competing at the lower end with the Nissan Leaf, the Chevy Volt, the Fiat 500e and the like.

Maybe the pathway to innovation really is from the high-end down to mass production?


Let me start by being perfectly clear.  I don’t have Google Glass.  I’ve never seen a pair live.  I’ve never held or used the device.  So basically, I just have strong opinions based on what I have read and seen – and, of course, on the way I have understood what I have read and seen.  Sergey Brin recently did a TED talk about Google Glass during which, after sharing a glitzy, well-produced video commercial for the product, he maintained that they developed Google Glass because burying your head in a smartphone is rude and anti-social.  Presumably staring off into the projected images produced by Google Glass – while still avoiding eye contact and real human interaction – is somehow less rude and less anti-social.  But let that alone for now.

The “what’s in it for me” of Google Glass is the illusion of intelligence (or at least the ability to instantly access facts), Internet-based real-time social sharing, real-time scrapbooking and interactive memo taking amongst other Dick Tracy-like functions.

What’s in it for Google is obvious.  At its heart, Google is an advertising company – well, more of an advertising distribution company.  They are a platform for serving up advertisements for all manner of products and services.  Their ads are more valuable if they can directly target people with ads for products or services at a time and place when the confluence of the advertisement and the reality yields a situation in which the person is almost compelled to purchase what is on offer, because it is exactly what they want when they want it.  This level of targeting is enhanced when they know what you like (Google+, Google Photos (formerly Picasa)), how much money you have (Google Wallet), where you are (Android), what you already have (Google Shopping), what you may be thinking (GMail), who you are with (Android) and what your friends and neighbors have and think (all of the aforementioned).  Google Glass, by recording location data and images and registering your likes and purchases, can work to build and enhance such a personal database.  Even if you choose to anonymize yourself and force Google to de-personalize your data, their guesses may be less accurate, but they will still know about you as a demographic group (male, aged 30-34, lives in zip code 95123, etc.) and perhaps general information based on your locale, the places you visit and where you might be at any time.  So, I immediately see the value of Google Glass for Google and Google’s advertising customers but see less value in its everyday use by ordinary folks, unless they seek to be perceived as cold, anti-social savants who may possibly be on the autistic spectrum.

I don’t want to predict that Google Glass will be a marketplace disaster, but the value statement for it appears to be limited.  A lot of the capabilities touted for it are already on your smartphone or soon to be released for it.  There is talk of image-scanning applications that immediately bring up information about whatever it is that you’re looking at.  Well, Google’s own Goggles is an existing platform for that, and it works on a standard mobile phone.  In fact, all of the applications touted thus far for Google Glass rely on some sort of visual analysis or geolocation-based look-up that is equally applicable to anything with a camera.  It seems to me that the “gotta have the latest gadget” gang will flock to Google Glass as they always do to these devices, but appealing to the general public may be a more difficult task.  Who really wants to wear their phone on their face?  If the benefit of Google Glass is its wearability, then maybe Apple’s much-rumored iWatch is a less intrusive and less nerdy-looking alternative.  Maybe Apple still better understands what people really want when it comes to mobile connectivity.

Ultimately, Google Glass may be a blockbuster hit or just an interesting (but expensive) experiment.  We’ll find out by the end of the year.


I’ve had occasion to be interviewed for positions at a variety of technology companies.  Sometimes the position actually exists; other times it might exist; and still other times the folks are just fishing for solutions to their problems and hoping to save a little from their consulting budget.  In all cases, the goal of the interview is primarily to find out what you know and how well you know it in a 30 to 45 minute conversation.  It is interesting to see how some go about doing it.  My experience has been that an interview really tells you nothing but does give you a sense of whether the person is nice enough to “work well with others”.

But now, finally, folks at Google have used big data to figure out something that has been patently obvious to anyone who has either interviewed for a job or interviewed someone for one.  An article published in the New York Times details a talk with Mr. Laszlo Bock, senior vice president of people operations at Google.  In it, he shared that puzzle questions don’t tell you anything about anyone.  I maintain that they tell you whether someone has heard that particular puzzle question before.  In the published interview Mr. Bock, less charitably, suggests that they merely serve to puff up the ego of the interviewer.

I think it’s only a matter of time before big data is used again to figure out another obvious fact – that even asking simple or complex programming questions serves as no indicator of on-the-job success.  Especially now, in the age of Google and open-source software.  Let’s say you want to write some code to sort a string of arbitrary letters and determine the computational complexity: a few quick Google searches and presto – you have the solution.  You need to understand the question and the nature of the problem, but the solution itself has merely become a matter of copying from your betters and equals who shared their ideas on the Internet.  Of course, such questions are always made more useless when the caveat is added – “without using the built-in sort function” – which is, of course, the way you actually solve it in real life.
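To make the point concrete, here is the sort of answer those quick searches turn up – a small Python sketch (the function names are mine) covering both the built-in version and the “without the built-in sort” caveat:

```python
from collections import Counter
import string

def sort_letters(s: str) -> str:
    """Sort the letters of a string with the built-in sorted(),
    which runs in O(n log n) -- the comparison-sort lower bound,
    and the 'computational complexity' answer being fished for."""
    return "".join(sorted(s))

def sort_letters_by_hand(s: str) -> str:
    """The 'without the built-in sort' variant: a counting sort
    over a fixed lowercase alphabet, O(n) because the alphabet
    size is a constant."""
    counts = Counter(s)
    return "".join(ch * counts[ch] for ch in string.ascii_lowercase)

print(sort_letters("interview"))          # eeiinrtvw
print(sort_letters_by_hand("interview"))  # eeiinrtvw
```

Which rather proves the point: the whole exercise measures whether you can understand the question, not whether you can reinvent what the standard library (or the internet) already provides.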

Another issue I see is the concern about experience with a specific programming language.  I recall that the good people at Apple are particularly fond of Objective-C, to the point where they believe that unless you have had years of direct experience with it, you could never use it to program effectively.  Of course, this position is insulting to both any competent programmer and the Objective-C language.  The variations between these algorithmic control-flow languages are sometimes subtle, usually stylistic, but always easily understood.  This is true of any programming language.  In reality, if you are competent at any one, you should easily be able to master any other.  For instance, Python uses indentation but C uses curly braces to delineate code blocks.  Certainly there are other differences, but give any competent developer a few days and they can figure it out, leveraging their existing knowledge.
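As an illustration of just how superficial that indentation-versus-braces difference is, here is the same trivial loop in Python with its C counterpart sketched in the comments:

```python
# The equivalent C fragment, for comparison:
#
#     int total = 0;
#     for (int i = 0; i < 3; i++) {
#         total += i;        /* braces delineate the block */
#     }
#
# In Python, indentation does the same job:
total = 0
for i in range(3):
    total += i             # indentation delineates the block
print(total)  # 3
```

Everything else – the loop, the accumulator, the control flow – maps one-to-one, which is exactly why a competent developer can carry their skill across languages in days rather than years.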

But that still leaves the hard question.  How do you determine competency?  I don’t think you can figure it out in a 45 minute interview – or a 45 hour one, for that matter – if the problems and work conditions are artificial.  I think the first interview should be primarily behavioral and focus on fit; then, if that looks good, the hiring entity should pay you to come in and work for a week solving an actual problem with the team that would be yours.  This makes sense in today’s world of limited, at-will employment where everyone is really just a contractor waiting to be let go.  In this approach, everyone gets to see how you fit in with the team, how productive you can be, how quickly you can come up to speed on a basic issue and how you actually work a problem to a solution in the true environment.  This is very different from establishing that you can minimize the number of trips a farmer takes across a river with five foxes, three hens, six bags of lentils, a sewing machine and a trapeze.

I encourage you to share some of your ideas for improving the interview process.


That venerable electronic test standard IEEE Std 1149.1 (also known as JTAG; also known as Boundary-Scan; also known as Dot 1) has just been freshened up.  This is no ordinary freshening.  The standard, last revisited in 2001, was long overdue for some clarification and enhancement.  It’s been a long time coming and now…it’s here.  While the guts remain the same and in good shape, some very interesting options and improvements have been added.  The improvements are intended to provide support for testing and verification of the more complex devices currently available and to acknowledge the more sophisticated test algorithms and capabilities afforded by the latest hardware.  There is an attempt as well (perhaps, though, only as well as one can do this sort of thing) to anticipate future capabilities and requirements and to provide a framework within which such capabilities and requirements can be supported.  Of course, since the bulk of the changes are optional, their value will only be realized if the end-user community embraces them.

There are only some minor clarifications and relaxations to the rules that are already established.  For the most part, components currently compliant with the previous version of this standard will remain compliant with this one.  There is but one “inside baseball” sort of exception.  The long denigrated and deprecated BC_6 boundary-scan cell has finally been put to rest.  It is, with the 2013 version, no longer supported or defined, so any component supplier who chose to utilize this boundary-scan cell – despite all warnings to the contrary – must now provide their own BSDL package defining this BC_6 cell if they upgrade to using the STD_1149_1_2013 standard package for their BSDL definitions.

While this is indeed a major revision, I must again emphasize that all the new items introduced are optional.  One of the largest changes is in documentation capability: the introduction of a new executable description language called Procedural Description Language (PDL) to document test procedures unique to a component.  PDL, a TCL-like language, was adopted from the work of the IEEE P1687 working group.  1687 is a proposed IEEE standard for the access to and operation of embedded instruments (1687 is therefore also known as iJTAG or Instrument JTAG).  The first iteration of that standard was based on use of the 1149.1 Test Access Port and controller to provide the chip access – and a set of modified 1149.1-type test data registers to create an access network for embedded instruments.  PDL was developed to describe access to and operation of these embedded instruments.

Now, let’s look at the details.  The major changes are as follows:

In the standard body:

  • In order to allow devices to maintain their test logic in test mode, a new, optional test mode persistence controller was introduced.  This means that test logic (like the boundary-scan register) can remain behaviorally in test mode even if the active instruction does not force test mode.  To support this, the TAP controller was cleaved into two parts: one that controls test mode and another with the rest of the TAP functionality.  In support of this new controller, there are three new instructions: CLAMP_HOLD and TMP_STATUS (both of which access the new TMP status test data register) and CLAMP_RELEASE.
  • Recognizing the emerging requirement for unique device identification codes, a new, optional ECIDCODE instruction was introduced along with an associated electronic chip identification test data register.  This instruction-register pair is intended to supplement the existing IDCODE and USERCODE instructions and allow access to an electronic chip identification value that could be used to identify and track individual integrated circuits.
  • The problem of initializing a device for test has been addressed by providing a well-defined framework to formalize this process.  The new, optional INIT_SETUP, INIT_SETUP_CLAMP, and INIT_RUN instructions, paired with their associated initialization data and initialization status test data registers, were provided to this end.  The intent is that these instructions formalize the manner in which programmable input/output (I/O) can be set up prior to board or system testing, as well as providing for the execution of any tasks required to put the system logic into a safe state for test.
  • Recognizing that resetting a device can be complex and require many steps or phases, a new, optional IC_RESET instruction and its associated reset_select test data register are defined to provide formalized control of component reset functions through the TAP.
  • Many devices now have a number of separate power domains, which could result in sections of the device being powered down while others are powered up.  A single, uniform boundary-scan register does not align well with that device style.  So, to support power domains that may be powered down while having a single test data register routed through them, an optional standard TAP-to-test-data-register interface is recommended that allows for segmentation of test data registers.  The concept of register segments allows for segments that may be excluded or included and is generalized sufficiently for utilization beyond the power domain example.
  • There have also been a few enhancements to the boundary-scan register description to incorporate the following:
    1. Optional excludable (but not selectable) boundary-scan register segments
    2. Optional observe-only boundary-scan register cells to redundantly capture the signal value on all digital pins except the TAP pins
    3. Optional observe-only boundary-scan register cells to capture a fault condition on all pins, including non-digital pins, except the TAP pins.

The Boundary Scan Description Language annex was rewritten and includes:

  • Increased clarity and consistency based on end-user feedback accumulated over the years.
  • A technical change was made such that BSDL is no longer a “proper subset” of VHDL, but it is now merely “based on” VHDL. This means that BSDL now maintains VHDL’s flavor but has for all intents and purposes been “forked”.
  • As result of this forking, formal definitions of language elements are now included in the annex instead of reliance on inheritance from VHDL.
  • Also as a result of this forking, some changes to the BNF notation used, including definition of all the special character tokens, are in the annex.
  • Pin mapping now allows for documenting that a port is not connected to any device package pin in a specific mapped device package.
  • The boundary-scan register description introduces new attributes for defining boundary-scan register segments, and introduces a requirement for documenting the behavior of an un-driven input.
  • New capabilities are introduced for documenting the structural details of test data registers:
    1. Mnemonics may be defined that may be associated with register fields.
    2. Name fields within a register or segment may be defined.
    3. Types of cells used in a test data register (TDR) field may be defined.
    4. One may hierarchically assemble segments into larger segments or whole registers.
    5. Constraints may be defined on the values to be loaded in a register or register field.
    6. A register field or bit may be associated with specific ports.
    7. Power ports may be associated with other ports.
  • The User Defined Package has been expanded to support logic IP providers who may need to document test data register segments contained within their IP.

As I stated earlier, a newly adopted language, PDL, has been included in this version of the standard.  The details of this language are included as part of Annex C. PDL is designed to document the procedural and data requirements for some of the new instructions. PDL serves a descriptive purpose in that regard but, as such, it is also executable should a system choose to interpret it.

It was decided to adopt and develop PDL to support the new capability of initializing internal test data register fields and configuring complex I/Os prior to entering the EXTEST instruction.  Since the data required for initialization could vary for each use of the component on each distinct board or system design, there needed to be an algorithmic way to describe the data set-up and application.  It was decided to adopt PDL and tailor it to the BSDL register descriptions and the needs of IEEE 1149.1.

Since the concept of BSDL and PDL working together is new and best explained via examples, Annex D is provided to supply extended examples of BSDL and PDL used together to describe the structure of, and the procedures for using, the new capabilities.  Similarly, Annex E provides example pseudo-code for the execution of the PDL iApply command, the most complex of the new commands in PDL.

So that is the new 1149.1 in a nutshell. A fair amount of new capabilities. Some of it complex. All of it optional.  Will you use it?
