Tag: reinvent

I have noticed that as people age, they become finer and finer versions of themselves. Their eccentricities become sharper and more pronounced; their opinions and ideas more pointed and immutable; their thoughts more focused. In short, I like to say that they become more perfect versions of themselves. We see it in our friends and acquaintances and in our parents and grandparents. It seems a natural part of human development.

Back in 2006, Netflix launched the Netflix Prize, offering $1,000,000 to whoever could best improve the accuracy of its predictions of how much someone will enjoy a movie based on their movie preferences. Contestants were given access to a set of Netflix’s end-users’ movie ratings and were challenged to provide recommendations of other movies to watch that bested Netflix’s own recommendation engine. BellKor’s Pragmatic Chaos was announced as the winning team in 2009, having managed to improve on Netflix’s recommendations by 10%, and walked off with the prize money.

What did they do? Basically, they algorithmically identified movies that were exceptionally similar to the ones a specific user already liked and offered those movies as recommended viewing. And they did it really well.
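To make the idea concrete, here is a minimal sketch of item-to-item similarity over a toy ratings matrix. This is only an illustration of the general technique, not BellKor’s actual method (their winning entry blended over a hundred models); the movie names and ratings are invented for the example.

```python
from math import sqrt

# Toy ratings matrix: user -> {movie: rating}. Illustrative data only.
ratings = {
    "alice": {"Movie A": 5, "Movie B": 4, "Movie C": 1},
    "bob":   {"Movie A": 4, "Movie B": 5, "Movie C": 2},
    "carol": {"Movie A": 1, "Movie B": 2, "Movie C": 5},
}

def cosine(movie_x, movie_y):
    """Cosine similarity between two movies' rating vectors,
    computed over the users who rated both."""
    users = [u for u in ratings if movie_x in ratings[u] and movie_y in ratings[u]]
    dot = sum(ratings[u][movie_x] * ratings[u][movie_y] for u in users)
    nx = sqrt(sum(ratings[u][movie_x] ** 2 for u in users))
    ny = sqrt(sum(ratings[u][movie_y] ** 2 for u in users))
    return dot / (nx * ny) if nx and ny else 0.0

def recommend(liked_movie, top_n=1):
    """Rank every other movie by its similarity to one the user liked."""
    movies = {m for r in ratings.values() for m in r}
    scored = [(cosine(liked_movie, m), m) for m in movies if m != liked_movie]
    return [m for _, m in sorted(scored, reverse=True)[:top_n]]
```

A user who liked “Movie A” gets “Movie B” back, because other users rated the two movies similarly; that is the whole echo-chamber mechanism in miniature.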

In essence, what the BellKor team built was a better echo chamber. Every viewer is analyzed, their taste detailed, and then the algorithm perfects that taste and hones it to a razor-sharp edge. You become, say, an expert in light romantic comedies featuring a strong female lead who lives in a spacious Manhattan apartment, a supporting cast heavy on dog owners, no visible children, and frequent panoramic views of Central Park.

Of course, therein lies the rub. A multifaceted rub at that. As recommendation engines become more accurate and more discerning of individual tastes, they remove any element of chance, randomness or error that might introduce new experiences, genres or even products into your life. You become a more perfect version of you. But in that perfection you are also stunted. You are shielded from experimentation and breadth of experience. You pick a single pond and overfish it.

There are many reasons why this is bad. We see it reflected, most obviously, in our political discourse, where our interactions with opposing viewpoints are limited to exchanges of taunts (as opposed to conversations) followed by a quick retreat to the comfort of our well-constructed echo chambers of choice, where our already perfected views are nurtured and reinforced.

But it also has other ramifications. If we come to know what people like to such a degree, then innovation outside safe and well-known boundaries may be discouraged. If Netflix knows that 90% of its subscribers like action/adventure films with a male hero and lots of explosions, why would it bother investing in a story about a broken family being held together by a sullen beekeeper? If retail recommendations hew toward what you are most likely to buy, how can markets for unrelated products be expanded? How can individual tastes be extended and deepened?

Extending that: why would anyone risk investment in or development of something new and radically different if the recommendation engine models cannot justify it? How can the leap be made from Zero to One, as Peter Thiel described it, in a society, market or investment environment in which the recommendation data is not present and does not justify it?

There are a number of possible answers. One might be that “gut instincts” need to continue to play a role in innovation and development and investment and that risk aversion has no place in making the giant leaps that technology builds upon and needs in order to thrive.

A geekier answer is that big data isn’t yet big enough and that recommendation engines aren’t yet smart enough. A good recommendation engine will not just reinforce your existing tastes; it will also challenge and extend them, and we don’t yet have the modelling right to do that effectively. The data are there, but we don’t yet know how to mine them in ways that broaden rather than narrow our horizons. This broadening, when properly implemented, will widen markets and opportunities and increase revenue.
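One crude way such broadening could be bolted onto an existing engine is deliberate exploration: reserve a fraction of each recommendation list for items the model would never have surfaced. The sketch below is a hypothetical illustration of that idea (the function name and the uniform-random choice of “outside” items are assumptions for the example; a real system would pick its exploratory items far more cleverly).

```python
import random

def rerank_with_exploration(ranked_items, catalog, epsilon=0.2, rng=None):
    """Replace roughly a fraction `epsilon` of the engine's top picks with
    items drawn from the rest of the catalog, so the list broadens rather
    than narrows taste.

    ranked_items: list ordered best-first by the engine's similarity score.
    catalog: all recommendable items.
    """
    rng = rng or random.Random()
    # Items the engine did not already recommend form the exploration pool.
    pool = [item for item in catalog if item not in ranked_items]
    result = list(ranked_items)
    for i in range(len(result)):
        if pool and rng.random() < epsilon:
            pick = rng.choice(pool)
            pool.remove(pick)
            result[i] = pick  # swap a "safe" pick for a serendipitous one
    return result
```

With `epsilon=0` the list is the pure echo chamber; raising it trades a little short-term accuracy for exposure to new genres, which is exactly the trade-off the current models do not make.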


On June 2, 2014, we released our first Android application to the Google Play store. “Llama Detector” is a lifestyle app that gives end-users the ability to detect the presence of llamas in social situations. It affords the end-user greater comfort in their daily interactions by allowing them to quietly and quickly detect hidden llamas wherever they may be. It does this using your device’s on-board camera hardware and peripherals. This amazing and technologically advanced application is guaranteed to provide end-users with seconds or even minutes of amusement. This posting serves as the end-user documentation and FAQ listing.

Usage

Using the Llama Detector is simple and straightforward. The application prioritizes use of the rear-facing camera on your device. If the device has no rear-facing camera, the front-facing camera is used. If the device has no camera at all, you will need to ask the supposed llama directly whether it is a llama, or spend a few minutes carefully examining the suspected area for llamas.

Upon launching the application, point the camera at the item or region that you suspect to be llama-infested. Depress the button labelled ‘Scan’ when you have successfully framed the area that needs to be analyzed. The image will be captured, the red scan line will traverse the screen, and the detection process will begin.

If, for any reason, you decide against analysis after beginning the scan, you may cancel the operation by depressing the ‘Cancel’ button. Otherwise, scanning will continue for approximately 10 seconds. After scanning, the Llama Detector will indicate whether any llamas have been detected. Sometimes other items are detected, and Llama Detector is able to indicate what it has identified.

If you would like to alter the detection sensitivity of the application, you may do so through the application preferences. Choose the preferences either through the soft button or the menu bar. Then display the Llama Sensitivity Filter. Enter an integer value between 1 and 1000, where 1000 is the highest sensitivity value (and 1 the lowest). This alters the detection algorithm’s characteristics: a higher value makes the results more accurate, with fewer false positives. The default value is 800.
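For the curious, the sensitivity setting can be thought of as a simple mapping onto an internal detection confidence threshold. The sketch below is purely illustrative; the function name and the linear mapping are assumptions for this example, not the application’s actual implementation.

```python
def sensitivity_to_threshold(sensitivity):
    """Hypothetical mapping from the user-facing sensitivity value (1-1000)
    to a detection confidence threshold in (0, 1]. A higher threshold means
    a candidate llama must score higher to count as detected, yielding the
    fewer false positives described in the documentation."""
    if not 1 <= sensitivity <= 1000:
        raise ValueError("sensitivity must be an integer between 1 and 1000")
    return sensitivity / 1000.0

# The documented default of 800 would correspond to a 0.8 threshold.
DEFAULT_THRESHOLD = sensitivity_to_threshold(800)
```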

FAQ

1. How much does this amazing application cost?

Llama Detector is an absolutely free download from the Google Play store.

2. Free? That’s crazy! How do you do that?

How do we do it? Volume.

3. What sort of personal information does Llama Detector collect?

Llama Detector collects no personal information and does not communicate with any external servers. It should be noted though that by downloading the application you have identified yourself as either a llama or a llama enthusiast.

4. I went to the zoo and used Llama Detector at the llama exhibit but it detected no llamas.  Why is that?

Llamas are very difficult to hold in captivity. They tend to sneak out of their pens and hang out at the concession stands, eating hot dogs and trying to pick up women. For this reason, most zoos use camels or, in some cases, baby giraffes dressed up as llamas in the llama pens. The Llama Detector application can be used to indicate whether a zoo is engaged in this sort of duplicity. Consequently, many zoos nationwide ban the use of Llama Detector within the confines of their property.

5. I used Llama Detector in my house and it detected a llama in my bathroom. Now I am afraid to use the bathroom.  What do I do?

Llamas are quite agile and fleet of foot.  It is important to note that detection of llamas should be run multiple times for surety.  If the presence of a llama is verified, start making lettuce noises and slowly move to an open space.  The llama will follow you to that space.  Then stop making the lettuce noises.  The llama will wonder where the lettuce went and start looking around the open space.  Then quickly and silently proceed into the now llama-free bathroom.

6. When will the iOS version be available?

Our team of expert programmers is hard at work developing a native iOS version of this application so that iPhone users can enjoy the comfort and protection afforded by this new technology. The team is currently considering whether to wait for the release of iOS 8 to ensure a richer user experience.

7. I have another question but I don’t know what it is.

Feel free to post your questions to android at formidableengineeringconsultants dot com.  If it’s a really good question, we’ll even answer it.


There is a huge focus on big data nowadays. Driven by the ever decreasing price and ever increasing capacity of data storage solutions, big data promises magical insights and new windows into the exploitation of the long tail and into addressing micro markets and their needs. Big data can be used to build, test and validate models and ideas. Big data holds promise akin to a panacea. It is being pushed as a universal solution to all ills. But if you look carefully and analyze correctly, what big data ultimately provides is what Marshall McLuhan described as an accurate prediction of the present. Big data helps us understand how we got to where we are today. It tells us what people want or need or do within a framework as it exists today. It is bounded by today’s (and the past’s) possibilities and ideas.

But big data does not identify the next seismic innovation. It does not necessarily even identify how to modify the current big thing to make it incrementally better.

In the October 2013 issue of IEEE Spectrum, an article described the work of a company named Lex Machina. The company is a classic big data play. It collects, scans and analyzes all legal proceedings associated with patent litigation and draws up statistics identifying, for instance, the companies most likely to settle, the law firms most likely to win, the judges most favorable to defendants or to plaintiffs, and duration and cost assessments of litigation in different jurisdictions. So it is a useful tool. But all it does is tell you about the state of things now. It does not measure variables like the outcomes of litigation or settlements (for instance, whether a company wins but goes out of business, or wins and goes on to build a more dominant market share, or wins and nothing happens). It does not indicate whether companies protect only specific patents that have, say, an estimated future value of $X million, or what metric companies might use in their internal decision-making process, because that is likely not visible in the data.

Marissa Mayer, the hyper-analyzed and hyper-reported-on CEO of Yahoo!, famously tests all decisions against data. Whether it is the shade of purple for the new Yahoo! logo, the purchase price of the next acquisition or the value of any specific employee, it’s all about measurables.

But how can you measure the immeasurable? If something truly revolutionary is developed, how can big data help you decide whether it’s worth it? How can even little data help you? How can people know what they like until they have it? If I told you that I would provide you with a service that lets you broadcast your thoughts to anyone who cares to subscribe to them, you’d probably say, “Sounds stupid. Why would I do that, and who would care what I think?” If I then told you that I forgot one important aspect of the idea, that every shared thought is limited to 140 characters, you would likely have said, “Well, now I KNOW it’s stupid!” Alas, I have just described Twitter. An idea that turned into a company that is, as of this writing, trading on the NYSE at just over $42 per share with a market capitalization of about $25 billion.

Will a strong reliance on big data lead us incrementally into a big corner? Will all this fishing about in massive data sets for patterns and correlations merely reveal the complete works of Shakespeare in big enough data sets? Is big data just another variant of the Infinite Monkey Theorem? Will we get to the point where, with so much data to analyze, we merely prove whatever it is we are looking for?

Already we are seeing that Google Flu Trends, in looking for instances of the flu, finds them where they aren’t, or at higher frequencies than actually occur. In that manner, big data fails even to accurately predict the present.

Only now are some of the issues with ‘big data’ being considered. For instance, even when you have a lot of data, if it is bad or incomplete you still have garbage, just a lot more of it. (That is where wearable devices, cell phones and other sophisticated but thinly veiled data accumulation appliances come into play: they help improve data quality by making it more complete.) Then the data itself is only as good as the analysis you can execute on it. The failings of Google Flu Trends are often attributed to bad search terms in the analysis, but of course there could be many other reasons.

Maybe, in the end, big data is just big hubris. It lulls us into a false sense of security, promising knowledge and wisdom if only we gather enough data, but in the end all we learn is where we are right now, and its predictive powers are, at best, based merely on what we want the future to be and, at worst, non-existent.


Back in 1992, after the Berlin Wall fell and communist states toppled one after another, Francis Fukuyama published a book entitled The End of History and the Last Man. It received much press at the time for its bold and seemingly definitive statement (specifically, that whole ‘end of history’ thing, with the thesis that capitalist liberal democracy is that endpoint). The result was much discussion, discourse and theorizing, and presumably a higher sales volume for a book that likely still graces many a bookshelf, binding still uncracked. Now it’s my turn to be bold.

Here it is:

With the advent and popularization of the smartphone, we are now at the end of custom personal consumer hardware.

That’s it. THE END OF HARDWARE. Sure, there will be form factor changes and maybe a few additional new hardware features, but all of these changes will be incorporated into the smartphone handset as the platform.

Maybe I’m exaggerating, but only a little. Really, there’s not much more room for hardware innovation in the smartphone platform; as currently deployed, it contains the building blocks of any custom personal consumer device. Efforts are clearly being directed at gadgets to replace the cell phone, whether smart watches, wearable computers, tablets or even phablets. But these are really just changes in form, not function. Much like the evolution of the PC, mobile hardware appears to have reached the point where the added value of new hardware is incremental and less valuable. The true innovation is in the manner in which software can be used to connect resources and increase the actual or perceived power of the platform.

In the PC world, faster and faster microprocessors were of marginal utility to the great majority of end-users, who merely used their PCs for reading email or doing PowerPoint. Bloated applications (of the sort that the folks at Microsoft seem so pleased to develop and distribute) didn’t even benefit from faster processors as much as they did from cheaper memory and faster internet connections. And now we may be approaching that same place for mobile applications. The value of many applications is limited more by the availability of on-device resources like memory, and by the speed of the internet connection through the cell provider, than by the actual hardware features of the handset. Newer applications are more and more dependent on big data and other cloud-based resources. The handset is merely a window into those data sets. A presentation layer, if you will. Other applications use information collected locally from the device’s sensors and hardware peripherals (geographical location, speed, direction, scanned images, sounds, etc.) in concert with cloud-based big data to provide services, entertainment and utilities.

In addition, and more significantly, we are seeing smartphone applications that use the phone’s peripherals to interface directly with other local hardware (like PCs, projectors, RC toys, headsets, etc.) to extend the functionality of those products. Why buy a presentation remote when you can get an app? Why buy a remote for your TV when you can get an app? Why buy a camera when you already have one on your phone? A compass? A flashlight? A GPS? An exercise monitor?

A maker of a consumer-targeted handheld device need no longer develop an independent hardware platform. You just develop an app that uses the features of the handset that you need and deploy the app. Perhaps additional special-purpose sensor packs might be needed to augment the capabilities of the smartphone for specialized uses, but any mass-market application can be fully realized using the handset as the existing base and a few hours of coding.

And if you doubt that handset hardware development has plateaued, consider the evolution from the Samsung Galaxy S3 to the Samsung Galaxy S4. The key differences between the two devices are the processor capabilities and the camera resolution. The bulk of the innovations are purely software-related and could have been implemented on the Samsung Galaxy S3 itself without really modifying the hardware. The differences between the iPhone 4s and the iPhone 5s were a faster processor, a better camera and a fingerprint sensor. Judging from a completely unscientific survey of end-users that I know, the fingerprint sensor remains unused by most owners. An innovation that has no perceived value.

The economics of this thesis is clear. If a consumer has already spent $600 or so on a smartphone, lives most of their life on it anyway and carries it with them everywhere, are you going to have better luck selling them a new gadget for $50-$250 (that they have to order, wait for, learn how to use, get comfortable with and then carry around) or an app that they can buy for $2 and download and use in seconds, when they need it?

 


Everyone who is anyone loves bandying about the name of Clayton Christensen, the famed Professor of Business Administration at the Harvard Business School, regarded as one of the world’s top experts on innovation and growth and most famous for coining the term “disruptive innovation“. Briefly, the classical meaning of the term is as follows. A company, usually a large one, focuses on serving the high-end, high-margin part of its business, and in doing so leaves an opening in the low-end, low-margin market segment. This allows small, nimble, hungry innovators to get a foothold in the market by providing cheap but good-enough products to the low end, who are otherwise forsaken by the large company that is only willing to provide high-priced, over-featured products. These small innovators use their foothold to innovate further upmarket, providing products of increasingly better functionality at lower cost than the Big Boys at the high end. The Big Boys are happy with this because those lower-margin products are a lot of effort for little payback, and “The Market” rewards them handsomely for doing incremental innovation at the high end and maintaining high margins. In the fullness of time, the scrappy little innovators disrupt the market with cheaper, better and more innovative solutions and products that catch up to and eclipse the offerings of the Big Boys, catching them off guard, and the once large corporations, with their fat margins, become small, meaningless boutique firms. Thus the market is disrupted, and the once regal and large companies, even though they followed all the appropriate rules dictated by “The Market”, falter and die.

Examples of this sort of evolution are many. The Japanese automobile manufacturers used this approach to disrupt the large American manufacturers in the 70s and 80s; the same with minicomputers versus mainframes, and then PCs versus minicomputers, to name but a few. But when you think about it, sometimes disruption comes “from above”. Consider the iPod. Remember when Apple introduced its first music player? Apple wasn’t first to market, as there were literally tens of MP3 players available. It certainly wasn’t the cheapest, as about 80% of the portable players had a price point well below Apple’s $499 MSRP. The iPod did have more features than most other players available and was in many ways more sophisticated. But $499? The iPod was more expensive and more fully featured, had more storage than anyone could ever imagine needing, and carried bigger margins than any other similar device on the market. And it was a huge hit. (I personally think that the truly disruptive part was iTunes, which made downloading music safe, legal and cheap at a time when the RIAA was making headlines by suing ordinary folks for thousands of dollars for illegal music downloads. But enough about me.) From the iPod, Apple went on to innovate a few iPod variants, the iPhone and the iPad, as well as incorporating some of the acquired knowledge into the Mac.

And now, I think, another similarly modeled innovation is upon us. Consider Tesla Motors. It started with the now-discontinued Roadster, a super-high-end luxury two-seat sports car that was wholly impractical and basically a plaything for the 1%. But it was a great platform to collect data and learn about batteries, charging, performance, efficiency, design, use and utility. Then came the Model S that, while still quite expensive, brought the price within reach of perhaps the 2% or even the 3%. In Northern California, for instance, Model S cars populate the roadways seemingly with the regularity of VW Beetles. Of course, part of what makes them seem so common is that their generic luxury styling makes them nearly indistinguishable, at first glance, from a Lexus, Jaguar, Infiniti, Maserati, Mercedes-Benz, BMW and the like. The choice of styling is perhaps yet another avenue of innovation. Unlike the Toyota Prius, whose iconic design became a “vector” sending a message to even the casual observer about the driver and perhaps the driver’s social and environmental concerns, the message of the Tesla’s generic luxury design merely seems to be “I’m rich. But if you want to learn more about me, you’d better take a closer look.” Yet even attracting this small market segment, Tesla was able to announce profitability for the first time.

With their third-generation vehicle, Tesla promises to reduce the selling price by 40% from the current Model S. That would bring the base price to about $30,000, which is in line with the average selling price of new cars in the United States. Even without the lower-priced vehicle available, Tesla is being richly rewarded by The Market thanks to a good product (some might say great), some profitability, excellent and savvy PR, and lots and lots of promise of a bright future.

But it is the iPod model all over again. Tesla is serving the high end and selling top-of-the-line technology. They are developing their technology within a framework bounded mostly by their ability to innovate, not by cost or selling price. They are also operating in a segment of the market that is not really well contested (high-end luxury electric cars). This gives them freedom from the pressures of competition and schedules, which in turn gives them an opportunity to get things right rather than rushing out ‘something’ to appease the market. And with their success in that market, they are turning around and using what they have learned to figure out how to build the same thing (or a similar thing) cheaper and more efficiently, to bring the experience to the masses (think: iPod to Nano to Shuffle). This will also let them ease their way into competing at the lower end with the Nissan Leaf, Chevy Volt, Fiat 500e and the like.

Maybe the pathway to innovation really is from the high-end down to mass production?


A spate of recent articles describes the proliferation of back doors in systems. There are so many back doors in so many systems, these articles claim, that the idea of a completely secure and invulnerable system is, at best, a fallacy. These back doors may be a result of the system software or even designed into the hardware. Some are designed into systems to facilitate remote update, diagnosis, debug and the like, usually never with the intention of being a security hole. Some are inserted with subterfuge and espionage in mind by foreign-controlled entities keen on gaining access to otherwise secure systems. Some may serve both purposes. And some are just design or specification errors. This suggests that once you connect a system to a network, someone, somehow, will be able to access it. As if to provide an extreme example, a recent break-in at the United States Chamber of Commerce was traced to an internet-connected thermostat.

That’s hardware. What about software? Despite the abundance of anti-virus software and firewalls, a little social engineering is all you really need to get into any system. I have written previously about the experiment in which more than half of the employees who found USB memory sticks seeded in a parking lot inserted them into corporate laptops without any prompting. Email written as if sent from a superior is often used to get employees to open infected attachments that install themselves and open a hole in the firewall for external communications and control.

The problem is actually designed in.  The Internet was built for sharing. The sharing was originally limited to trusted sources. A network of academics. The idea that someone would try to do something awful to you – except as some sort of prank – was inconceivable.

That was then.

Now we are in a place where the Internet is omnipresent. It is used for sharing and viewing cat videos and for financial transactions. It is used for the transmission of top secret information and for buying cheese. It is connected to servers containing huge volumes of sensitive and personal customer data: social security numbers, bank account numbers, credit card numbers, addresses, health information, etc. And now not a day goes by without reports of another breach. Sometimes attributed to Anonymous, the Chinese, organized crime or kids with more time than sense, these break-ins are relentless, and everyone is susceptible.

So what to do?

There is a story, perhaps apocryphal, that at the height of the Cold War, when the United States captured a Soviet fighter jet and examined it, investigators discovered that there were no solid-state electronics in it. The entire jet was designed using vacuum tubes. That set the investigators thinking. Were the Soviets merely backward, or did they design with tubes to guard against EMP attacks?

Backward to the future?

Are we headed to a place where the most secure organizations will go offline? Will they revert to paper documents, file folders and heavy cabinets stored in underground vaults? Of course such systems are not completely secure, as no system actually is. On the other hand, a break-in requires physical presence, and carting away tons of documents requires physical strength and effort. Paper is a material object that cannot be easily spirited away as a stream of electrons. Maybe that’s the solution. But what of all the information infrastructure built up for convenience, cost effectiveness, space savings and general efficiency? Do organizations spend more money going back to paper, staples, binders and hanging folders? And then purchase vast secure spaces to stow these materials?

Will there instead be a technological fix: a parallel Internet infrastructure redesigned from the ground up to incorporate authentication, encryption and verifiable sender identification? Could all secure transactions and information then move to that newer, safer Internet? Is that newer, safer Internet just a .secure domain? Won’t that just be a bigger, better and more value-laden target for evil-doers? And what about back doors? Even in a secure infrastructure, an open door, or even a door with a breakable window, ruins the finest advanced security infrastructure. And, of course, there is always social engineering, which provides access more easily than any other technique. Or spies. Or people who think they are “doing good”.

The real solution may not yet even be defined or known. Is it quantum computing (which is really just a parallel environment built on a differently developed computing infrastructure)? Or is it really nothing, in that there is no solution and we are stuck with tactical fixes? It’s an interesting question, but for now the answer is as clear as it was some 20 years ago when Scott McNealy said, “The future of the Internet is security.”


It’s all the rage right now to be viewed as a leader in the mobile space. There are many different sectors in which to demonstrate your leadership. There are operating systems like iOS and Android, and maybe even Windows Phone (someday). There’s hardware from Apple, Samsung, HTC and maybe even Nokia. And of course there are the applications like FourSquare, Square and other primarily mobile applications in social, payments, health and gaming, and then all the other applications rushing to mobile because they were told that’s where they ought to be.

Somewhere in this broad and vague classification is Facebook (or perhaps more properly “facebook”). This massive database of human foibles and interests is either being pressed into, or voluntarily exploring, just exactly how to enter the mobile space and presumably dominate it. Apparently they have made several attempts to develop their own handset. The biggest issue, it seems, is that they believed that because they are a bunch of really smart folks they should be able to stitch a phone together and make it work. I believe the saying is “too smart by half“. And since they reportedly tried this several times without success, perhaps they were also “too stubborn by several halves”.

This push by facebook raises the question: “What?” or even “Why?” There is a certain logic to it. Facebook provides hours of amusement to tens of millions of active users, and the developers at facebook already build applications to run on a series of mobile platforms. Those applications are limited in their ability to provide a full facebook experience and also limit facebook’s ability to extract revenue from those users. But when you step back, you quickly realize that facebook is really a platform. It has messaging (text, voice and video); it has contact information; it has position and location information; it has your personal profile along with your interest history and friends; it knows what motivates you (from your comments and what you “like”); and it is a platform for application development (including games and exciting virus and spam possibilities) with a well-defined and documented interface. At the 10,000-foot level, facebook looks like an operating system and a platform ready to go. This is not too different from the vision that propelled Netscape into Microsoft’s sights, leading to Netscape’s ultimate demise. Microsoft doesn’t have the might it once did, but Google does, and so does Apple. Neither may be “evil”, but both are known to be ruthless. For facebook to enter this hostile market with yet another platform would be bold. And for a company whose stock price and perceived confidence are faltering after a shaky IPO, it may also be dumb. But it may be the only and necessary option for growth.

On the other hand, facebook’s recent edict directing all employees to access facebook from Android phones rather than their iPhones could suggest either that the elders at facebook believe their future is in Android or simply that they recognize it is a growing and heavily used platform. Maybe they will ditch the handset idea and go all in for mobile on iOS and Android on equal footing.

Personally, I think that a new platform with a facebook-centric interface might be a really interesting product, especially if the equipment costs the end-user nothing.  A free phone supported by facebook ads, running all your favorite games, with constant chatter and photos from your friends? Talk about an immersive communications experience. It would drive me batty. But I think it would be a huge hit with a certain demographic. And how could they do this given their previous failures? Amongst the weaker players in the handset space, Nokia has teamed up with Microsoft, but RIM continues to flail. Its stock is plummeting, but it has a ready-to-go team of smart employees with experience getting once-popular products to market, that all-important experience in dealing with the assorted wireless companies, and a treasure trove of patents. RIM also has some interesting infrastructure in its SRP network that facebook could exploit to improve its service (or, after proper consideration, sell off).

You can’t help but wonder: if instead of spending $1B on Instagram prior to its IPO, facebook had spent a little more and bought RIM, would the outcome and the IPO launch have been different?  I can only speculate about that.  Now, though, it seems that facebook ought to move soon or be damned as a once-great player that squandered its potential.


The WWW is The Wheel

For no apparent reason, but more so than ever before, I have come to believe that the World Wide Web can truly be the source of all knowledge and a savior for the lazy (or at least an inspiration to those who need examples to learn or get started).

I was writing a simple application in C the other day and needed to code up a dynamic array. It seemed to me that actually typing out the 20 or so lines of code to implement the allocation and management was just too much effort. And then it occurred to me – “Why reinvent the wheel?” People write dynamic arrays in C every day and I bet that at least one person posted their implementation to the WWW for all to see and admire. A quick search revealed that to be true and in minutes I was customizing code to suit my needs.
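For a sense of scale, those “20 or so lines” look roughly like this. This is a minimal sketch of my own, not the code I actually found; the names (`dynarray`, `da_push`, the doubling growth policy starting at a capacity of 8) are illustrative choices, not anything from a specific posted implementation.

```c
#include <stdlib.h>

/* A minimal growable array of ints: doubles its capacity when full. */
typedef struct {
    int *data;
    size_t len;   /* number of elements in use */
    size_t cap;   /* allocated capacity */
} dynarray;

static void da_init(dynarray *a) {
    a->data = NULL;
    a->len = 0;
    a->cap = 0;
}

/* Append a value, growing the buffer if needed.
   Returns 0 on success, -1 if allocation fails (array left unchanged). */
static int da_push(dynarray *a, int value) {
    if (a->len == a->cap) {
        size_t newcap = a->cap ? a->cap * 2 : 8;
        int *p = realloc(a->data, newcap * sizeof *a->data);
        if (!p)
            return -1;
        a->data = p;
        a->cap = newcap;
    }
    a->data[a->len++] = value;
    return 0;
}

static void da_free(dynarray *a) {
    free(a->data);
    da_init(a);
}
```

Note that `realloc` with a `NULL` pointer behaves like `malloc`, which is why `da_push` needs no special first-call case; the real bookkeeping is in remembering to check the return value and not clobber `a->data` on failure.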

Now… did I really save time? In the end, did my customizations eat up whatever productivity the found code gained me? In many ways, for me, it didn’t matter. I am the sort of person who needs some inspiration to overcome a blank sheet of paper: something concrete, a real starting point, even a bad one. Having that implementation in place gave me that starting point, and even if I had ended up deleting everything and rewriting it, I feel I benefited, at least psychologically, from having somewhere to start.

It is also valuable to see and learn from the experience of others. Why should I re-invent something so basic? Why not use what’s already extant and spend my energy and talent where I can really add value?

But it is also true that although the WWW may indeed be “the wheel”, it sometimes provides a wheel made of wood or stone, one with a flat tire, or one damaged beyond repair. For me, though, even that is beneficial, since it helps me overcome that forbidding blank sheet of paper.
