Tag: world wide web

I have noticed that as people age, they become finer and finer versions of themselves. Their eccentricities become sharper and more pronounced; their opinions and ideas more pointed and immutable; their thoughts more focussed. In short, I like to say that they become more perfect versions of themselves. We see it in our friends and acquaintances and in our parents and grandparents. It seems a part of natural human development.

Back in 2006, Netflix launched the Netflix Prize, offering $1,000,000 to whoever could most improve the accuracy of its predictions of how much someone will enjoy a movie based on their movie preferences. Contestants were given access to a set of Netflix’s end-users’ movie ratings and were challenged to provide recommendations of other movies to watch that bested Netflix’s own recommendation engine. BellKor’s Pragmatic Chaos was announced as the winning team in 2009, having managed to improve on Netflix’s recommendations by 10%, and walked off with the prize money.

What did they do? Basically, they algorithmically identified movies that were exceptionally similar to the ones a specific user already liked and offered those movies as recommended viewing. And they did it really well.
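To make the general idea concrete, here is a toy item-similarity recommender in Python. It is only a sketch with invented ratings; the actual winning entry was a large ensemble of far more sophisticated models (matrix factorization among them), not this.

    # Toy item-similarity recommender: a sketch of the general idea,
    # not BellKor's actual (far more sophisticated) ensemble of models.
    import numpy as np

    # Rows are users, columns are movies; 0 means "not rated" (made-up data).
    ratings = np.array([
        [5, 4, 0, 1, 0],
        [4, 5, 1, 0, 0],
        [0, 1, 5, 4, 4],
        [1, 0, 4, 5, 3],
    ], dtype=float)

    def cosine_sim(a, b):
        """Cosine similarity between two rating vectors, ignoring unrated entries."""
        mask = (a > 0) & (b > 0)
        if not mask.any():
            return 0.0
        return float(np.dot(a[mask], b[mask]) /
                     (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-9))

    def recommend(user_idx, top_n=2):
        """Score unseen movies by their similarity to movies the user already liked."""
        user = ratings[user_idx]
        scores = {}
        for candidate in range(ratings.shape[1]):
            if user[candidate] > 0:          # already seen
                continue
            score = 0.0
            for seen in range(ratings.shape[1]):
                if user[seen] > 0:
                    sim = cosine_sim(ratings[:, candidate], ratings[:, seen])
                    score += sim * user[seen]   # weight similarity by the user's rating
            scores[candidate] = score
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    print(recommend(0))   # movies most similar to what user 0 already likes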

In essence, what the BellKor team did was build a better echo chamber. Every viewer is analyzed, their taste detailed, and then the algorithm perfects that taste and hones it to a razor-sharp edge. You become, say, an expert in light romantic comedies featuring a strong female lead who lives in a spacious apartment in Manhattan, a supporting cast heavy on dog owners, no visible children and frequent panoramic views of Central Park.

Of course, therein lies the rub. A multifaceted rub at that. As recommendation engines become more accurate and discerning of individual tastes, they remove any element of chance, randomness or error that might serve to introduce new experiences, genres or even products into your life. You become a more perfect version of you. But in that perfection you are also stunted. You are shielded from experimentation and breadth of experience. You pick a single pond and overfish it.

There are many reasons why this is bad. We see it reflected, most obviously, in our political discourse, where our interactions with opposing viewpoints are limited to exchanges of taunts (as opposed to conversations) followed by a quick retreat to the comfort of our well-constructed echo chambers of choice, where our already perfected views are nurtured and reinforced.

But it also has other ramifications. If we come to know what people like to such a degree, then innovation outside safe and well-known boundaries might be discouraged. If Netflix knows that 90% of its subscribers like action/adventure films with a male hero and lots of explosions, why would it bother investing in a story about a broken family being held together by a sullen beekeeper? If retail recommendations hew toward what you are most likely to buy, how can markets for unrelated products be expanded? How can individual tastes be extended and deepened?

Extending that – why would anyone risk investment in or development of something new and radically different if the recommendation engine models cannot justify it? How can the leap be made from Zero to One – as Peter Thiel described – in a society, market or investment environment in which the recommendation data is not present and does not justify it?

There are a number of possible answers. One might be that “gut instincts” need to continue to play a role in innovation, development and investment, and that risk aversion has no place in making the giant leaps that technology builds upon and needs in order to thrive.

A geekier answer is that big data isn’t yet big enough and that recommendation engines aren’t yet smart enough. A good recommendation engine will not just reinforce your prejudicial tastes; it will also challenge and extend them. We simply don’t yet have the modelling right to do that effectively. The data are there, but we don’t yet know how to mine them in a way that broadens rather than narrows our horizons. This broadening – when properly implemented – will widen markets and opportunities and increase revenue.
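One simple-minded way to build that broadening in is to leave a little room for deliberate “error”. The sketch below, with hypothetical item names and an arbitrary exploration rate, occasionally swaps an off-profile title into the recommendation list; a real system would model novelty and diversity explicitly rather than choosing at random.

    import random

    def diversify(ranked_candidates, pool, explore_rate=0.2):
        """Mostly exploit the engine's top picks, but occasionally swap in
        something from outside the user's established taste profile.
        A crude epsilon-greedy sketch, not a production re-ranker."""
        recommendations = []
        for item in ranked_candidates:
            if pool and random.random() < explore_rate:
                recommendations.append(random.choice(pool))   # a deliberate 'error'
            else:
                recommendations.append(item)
        return recommendations

    # Hypothetical usage: top picks from the engine, plus titles from unexplored genres.
    print(diversify(["romcom_1", "romcom_2", "romcom_3"],
                    pool=["documentary_7", "noir_2", "opera_5"]))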


The hoi polloi are running fast towards the banner marked “Internet of Things”. They are running at full speed chanting “I-o-T, I-o-T, I-o-T” all along the way. But for the most part, they are each running towards something different. For some, it is a network of sensors; for others, it is a network of processors; for still others, it is a previously unconnected and unnetworked embedded system that is now attached to a network; some say it is any of those things connected to the cloud; and there are those who say it is simply renaming whatever they already have and including the descriptive marketing label “IoT” or “Internet of Things” on the box.

So what is it?  Why the excitement? And what can it do?

At its simplest, the Internet of Things is a collection of endpoints, each of which has one or more sensors, a processor, some memory and some sort of wireless connectivity. The endpoints are then connected to a server – where “server” is defined in the broadest possible sense. It could be a phone, a tablet, a laptop or desktop, a remote server farm or some combination of all of those (say, a phone that then talks to a server farm). Along the transmission path, data collected from the sensors goes through progressively higher levels of analysis and processing. For instance, at the endpoint itself raw data may be displayed, averaged or corrected, then delivered to the server and stored in the cloud. Once in the cloud, data can be analyzed historically, compared with other similarly collected data, and correlated with related data or even unrelated data in an attempt to search for unexpected or heretofore unseen correlations. Fully processed data can then be delivered back to the user in some meaningful way – perhaps as a trend display or as a prescriptive suite of actions or recommendations. And, of course, the fully analyzed data and its correlations could also be sold or otherwise used to target advertising or product or service recommendations.
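As a minimal sketch of the endpoint side of that pipeline – where the sensor-reading function, the server URL and the payload fields are all assumptions invented for illustration – an endpoint might average a window of raw readings locally and push the result upstream like this:

    # Minimal endpoint sketch: read_sensor(), the URL and the payload fields
    # are hypothetical placeholders, not any particular product's API.
    import json
    import random
    import time
    import urllib.request

    def read_sensor():
        # Stand-in for a real sensor read (e.g. temperature in degrees C).
        return 20.0 + random.random()

    def collect_window(samples=10, interval_s=0.5):
        """Light local processing at the endpoint: average a window of raw readings."""
        readings = [read_sensor() or time.sleep(interval_s) or read_sensor()
                    for _ in range(samples)]
        return sum(readings) / len(readings)

    def push_to_server(value, url="http://example.com/iot/ingest"):
        """Deliver the locally processed value; the cloud side would store it,
        compare it with history and with other devices, and send analysis back."""
        payload = json.dumps({"device_id": "endpoint-42",
                              "metric": "temperature_c",
                              "value": value,
                              "ts": time.time()}).encode("utf-8")
        req = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
        return urllib.request.urlopen(req, timeout=5)

    if __name__ == "__main__":
        push_to_server(collect_window())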

There is a further enhancement to this collection of endpoints and associated data analysis processes described in my basic IoT system. The ‘things’ on this Internet of Things could also use the data they collect to improve the system itself. This could include identifying missing data elements or sensor readings, bad timing assumptions or other ways to improve the capabilities of the overall system. If the endpoints are reconfigurable, either through programmable logic (like Field Programmable Gate Arrays) or through software updates, then new hardware or software images could be distributed with enhancements (or, dare I say, bug fixes) throughout the system to provide it with new functionality. This makes the IoT system both evolutionary and field-upgradeable. It extends the deployment lifetime of the device and could potentially extend the time in market at both the beginning and the end of the product life cycle. You could get to market earlier with limited functionality, introduce new features and enhancements post-deployment and continue to add innovations when the product might ordinarily have been obsoleted.
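One way that field-upgrade loop might look on the endpoint – with the manifest URL, its fields and the flashing step all hypothetical – is a periodic check of a signed or hashed image manifest:

    # Sketch of a field-upgrade check; the manifest URL, its fields and
    # apply_firmware() are assumptions, not a real device API.
    import hashlib
    import json
    import urllib.request

    INSTALLED_VERSION = "1.2.0"

    def fetch_manifest(url="http://example.com/iot/firmware/manifest.json"):
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)

    def apply_firmware(image_bytes):
        # Placeholder: on real hardware this would write the image to the
        # inactive partition and reboot into it.
        pass

    def maybe_update():
        manifest = fetch_manifest()
        if manifest["version"] == INSTALLED_VERSION:
            return False                      # already up to date
        image = urllib.request.urlopen(manifest["image_url"], timeout=30).read()
        # Verify the image before flashing it; an unauthenticated update path
        # would otherwise be a back door of its own.
        if hashlib.sha256(image).hexdigest() != manifest["sha256"]:
            raise ValueError("firmware image failed integrity check")
        apply_firmware(image)
        return True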

Having defined an ideal IoT system, the question becomes how one turns it into a business. The value of these IoT applications is based on the collection of data over time and the processing and interpretation (mining) of said data. As more data are collected over time, the value of the analysis increases (though likely asymptotically, approaching some maximal value). The data analysis could include information like:

  • Your triathlon training plan is on track; you ought to taper the swim a bit and increase the running volume to 18 miles per week.
  • The drive shaft on your car will fail in the next 1 to 6 weeks – how about I order one for you and set up an appointment at the dealership?
  • If you keep eating the kind of food you have eaten for the past 4 days, you will gain 15 pounds by Friday.

The above sample analyses obviously come from a variety of different products or systems, but the idea is that by mining collected and historical data from you, and maybe even from people ‘like’ you, certain conclusions may be drawn.
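As a toy version of the kind of analysis behind the third bullet above, the sketch below fits a simple least-squares line to a few days of entirely invented weight readings and projects it forward; a real service would use richer models and far more data.

    # Toy projection: fit a line to a few days of (invented) readings and extrapolate.
    def project(history, days_ahead):
        """history: list of (day_number, weight_lb); simple least-squares line."""
        n = len(history)
        mean_x = sum(d for d, _ in history) / n
        mean_y = sum(w for _, w in history) / n
        slope = (sum((d - mean_x) * (w - mean_y) for d, w in history) /
                 sum((d - mean_x) ** 2 for d, _ in history))
        intercept = mean_y - slope * mean_x
        last_day = history[-1][0]
        return slope * (last_day + days_ahead) + intercept

    weights = [(1, 180.0), (2, 181.5), (3, 183.2), (4, 184.8)]  # made-up data
    print(round(project(weights, days_ahead=3), 1))  # projected weight 3 days out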

Since the analysis is continuous and the feedback unsynchronized to any specific event or time, the fees for these services would have to be subscription-based.  A small charge every month would deliver the analysis and prescriptive suggestions as and when needed.

This would suggest that when you buy a car, instead of an extended service contract that you pay for as a lump sum up front, you pay, say, $5 per month; the IoT system is enabled on your car, and your car will schedule service with a complete list of required parts and tasks exactly when and as needed.

Similarly, in the health services sector, your IoT system collects all of your biometric data automatically, uploads your activity data to Strava, alerts you to suspicious bodily and vital sign changes and perhaps even calls the doctor to set up your appointment.

The subscription fees should be low because they provide for efficiencies in the system that benefit both the subscriber and the service provider. The car dealer orders the parts they need when they need them, reducing inventory, providing faster turnaround of cars, and obviating the need for overnight storage of cars and payment for rentals.

Doctors see patients less often and then only when something is truly out of whack.

And on and on.

Certainly, tiered subscription levels may make sense for some businesses. There may be ‘free’ variants that provide limited but still useful information to the subscriber, at the cost of sharing their data for broader community analysis. Paid subscribers who share their data for use in broader community analysis may get reduced subscription rates. There are obviously many possible subscription models to investigate.

The industry capabilities and directions described here, facilitated by the Internet of Things, are either pollyannaish or visionary. It’s up to us to find out. But for now, what do you think?


On June 2, 2014, we released our first Android application to the Google Play store. “Llama Detector” is a lifestyle app that gives end-users the ability to detect the presence of llamas in social situations. It affords the end-user greater comfort in their daily interactions by allowing them to quietly and quickly detect hidden llamas wherever they may be. It does this using your platform’s on-device camera hardware and peripherals. This amazing and technologically advanced application is guaranteed to provide end-users with seconds or even minutes of amusement. This posting serves as the end-user documentation and FAQ listing.

Usage

Using the Llama Detector is simple and straightforward. The application prioritizes use of the rear-facing camera on your device. If the device has no rear-facing camera, then the front-facing camera is used. If the device has no camera, then you will need to ask the supposed llama directly if it is a llama, or spend a few minutes carefully examining the suspected area for llamas.

Upon launching the application, point the camera at the item or region that you suspect to be llama-infested.  Depress the button labelled ‘Scan’ when you have successfully framed the area that needs to be analyzed. The image will be captured and the red scan line will traverse the screen and the detection process will begin.

If you decide against analysis after beginning the scan, for any reason, you may cancel the operation by depressing the ‘Cancel’ button. Otherwise, scanning will continue for approximately 10 seconds.  After scanning, the Llama Detector will indicate if any llamas have been detected. Sometimes other items are detected and Llama Detector is able to indicate what it has identified.

If you would like to alter the detection sensitivity of the application, you may do so through the application preferences.  Choose the preferences either through the soft button or menu bar.  Then, display the Llama Sensitivity Filter.  Enter an integer value between 1 and 1000 where 1000 is the highest sensitivity value (and 1 is lowest).  This will alter the detection algorithm characteristics. A higher value will make the results more accurate with fewer false positives.  The default value is 800.

FAQ

1. How much does this amazing application cost?

Llama Detector is an absolutely free download from the Google Play store.

2. Free? That’s crazy! How do you do that?

How do we do it? Volume.

3. What sort of personal information does Llama Detector collect?

Llama Detector collects no personal information and does not communicate with any external servers. It should be noted though that by downloading the application you have identified yourself as either a llama or a llama enthusiast.

4. I went to the zoo and used Llama Detector at the llama exhibit but it detected no llamas.  Why is that?

Llamas are very difficult to hold in captivity. They tend to sneak out of their pens and hang out at the concession stands eating hot dogs and trying to pick up women. For this reason, most zoos use camels or, in some cases, baby giraffes dressed up as llamas in the llama pens. The Llama Detector application can be used to indicate if a zoo is engaged in this sort of duplicity. For this reason, many zoos nationwide ban the use of Llama Detector within the confines of their property.

5. I used Llama Detector in my house and it detected a llama in my bathroom. Now I am afraid to use the bathroom.  What do I do?

Llamas are quite agile and fleet of foot.  It is important to note that detection of llamas should be run multiple times for surety.  If the presence of a llama is verified, start making lettuce noises and slowly move to an open space.  The llama will follow you to that space.  Then stop making the lettuce noises.  The llama will wonder where the lettuce went and start looking around the open space.  Then quickly and silently proceed into the now llama-free bathroom.

6. When will the iOS version be available?

Our team of expert programmers is hard at work developing a native iOS version of this application so that iPhone users can enjoy the comfort and protection afforded by this new technology. The team is currently considering whether to wait for the release of iOS 8 to ensure a richer user experience.

7. I have another question but I don’t know what it is.

Feel free to post your questions to android at formidableengineeringconsultants dot com.  If it’s a really good question, we’ll even answer it.


There is a great imbalance in the vast internet marketplace that has yet to be addressed and is quite ripe for the picking. In fact, this imbalance is probably at the root of the astronomical stock market valuations of existing and new companies like Google, facebook, Twitter and their ilk.

It turns out that your data is valuable. Very valuable. And it also turns out that you are basically giving it away. You are giving it away – not quite for free, but pretty close. What you are getting in return is personalization. You get advertisements targeted at you, offering products you don’t need but are likely to find quite irresistible. You get recommendations for other sites that ensure you need never venture outside the bounds of your existing likes and dislikes. You get matched up with companies that provide services you might or might not need but will definitely think are valuable.

Ultimately, you are giving up your data so businesses can more efficiently extract more money from you.

If you are going to get exploited in this manner, it’s time to make that exploitation a two-way street. Newspapers, for instance, are rapidly arriving at the conclusion that there is actual monetary value in the information they provide. They are seeing that the provision of vetted, verified, thoughtful and well-written information is intrinsically worth more than nothing. They have decided that simply giving this valuable commodity away for free is giving up the keys to the kingdom. The Wall Street Journal, the New York Times, The Economist and others are seeing that people are willing to pay and do actually subscribe.

There is a lesson in this for you – as a person. There is value in your data: your mobile movements, your surf trail, your shopping preferences. It should not be the case that you implicitly surrender this information for better personalization or even a $5 Starbucks gift card. This constant flow of data from you, your actions, movements and keystrokes ought to result in a constant flow of money to you. When you think about it, why isn’t the ultimate personal data collection engine, Google Glass, given away for free? Because people don’t realize that personal data collection is its primary function. Clearly, the time has come for the realization of a personal paywall.

The idea is simple: if an entity wants your information, they pay you for it. Directly. They don’t go to Google or facebook and buy it – they open up an account with you and pay you directly, at a rate that you set. Then that business can decide if you are worth what you think you are or not. You can adjust your fee up or down at any time, and you can be dropped or picked up by followers. You could provide discount tokens or free passes for friends. You could charge per click, hour, day, month or year. You might charge more for your mobile movements and less for your internet browsing trail. The data you share comes with an audit trail that ensures that if the information is passed on to others without your consent you will be able to take action – maybe even delete it – wherever it is. Maybe your data lives for only a few days or months or years – like a contract or a note – and then disappears.
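To make the mechanics a little more concrete, here is a Python sketch of what a self-set rate card and a time-limited, auditable data grant might look like. Every field name, rate and structure here is invented purely for illustration.

    # Sketch of a self-set rate card and a time-limited, auditable data grant.
    # All names, rates and fields are invented for illustration.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import List

    RATE_CARD = {
        "browsing_trail": 0.002,    # dollars per click
        "mobile_movements": 0.50,   # dollars per day
        "purchase_history": 5.00,   # dollars per month
    }

    @dataclass
    class DataGrant:
        buyer: str
        data_type: str
        rate: float
        expires: datetime
        audit_log: List[str] = field(default_factory=list)

        def record_access(self, note: str):
            """Every access leaves an audit trail the owner can act on later."""
            self.audit_log.append(f"{datetime.utcnow().isoformat()} {self.buyer}: {note}")

        def is_valid(self) -> bool:
            """Grants expire like a contract or a note, and then the data 'disappears'."""
            return datetime.utcnow() < self.expires

    grant = DataGrant(buyer="acme-ads", data_type="mobile_movements",
                      rate=RATE_CARD["mobile_movements"],
                      expires=datetime.utcnow() + timedelta(days=90))
    grant.record_access("read one day of location history")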

Of course, you will have to do the due diligence to ensure you are selling your information to a legitimate organization and not a Nigerian prince.  This, in turn, may result in the creation of a new class of service providers who vet these information buyers.

This data reselling capability would also provide additional income to individuals. It would not be a living wage to compensate for having lost a job, but it would be some compensation for participating in facebook or LinkedIn, or a sort of kickback for buying something at Amazon and then allowing them to target you as a consumer more effectively. It would effectively reward you for contributing the information that drives the profits of these organizations and recognize the value that you add to the system.

The implementation is challenging and would require encapsulating data in packets over which you exert some control. An architectural model similar to bitcoin, with a ledger indicating where every bit of your data is at any time, would be valuable and necessary. Use of the personal paywall would likely require that you install an application on your phone or use a customized browser that releases your information only to your paid-up clients. In addition, some sort of easy, frictionless mechanism through which companies or organizations could automatically decide to buy your information, and perhaps negotiate (again automatically) with your paywall for a rate that suits both of you, would make use of the personal paywall invisible and easy. Again, this technology would have to screen out fraudulent entities and not even bother negotiating with them.
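As a sketch of that automatic negotiation step – with a protocol, thresholds and numbers that are entirely hypothetical – the paywall might simply compare a buyer’s bid against the owner’s asking rate and a floor price:

    # Hypothetical frictionless negotiation: accept, counter once, or walk away.
    def negotiate(asking_rate, buyer_bid, floor_ratio=0.8):
        """Accept the bid if it meets the ask, counter at the midpoint if it
        clears the owner's floor (a fraction of the ask), otherwise reject."""
        floor = asking_rate * floor_ratio
        if buyer_bid >= asking_rate:
            return ("accepted", asking_rate)
        if buyer_bid >= floor:
            counter = (buyer_bid + asking_rate) / 2
            return ("countered", round(counter, 4))
        return ("rejected", None)

    print(negotiate(asking_rate=0.50, buyer_bid=0.45))   # ('countered', 0.475)
    print(negotiate(asking_rate=0.50, buyer_bid=0.10))   # ('rejected', None)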

There is much more to this approach to consider and many more challenges to overcome.  I think, though, that this is an idea that could change the internet landscape and make it more equitable and ensure the true value of the internet is realized and shared by all its participants and users.


I admit it. I got a free eBook. I signed up with O’Reilly Media as a reviewer. The terms and conditions of this position are that when I get an eBook, I agree to write a review of it. It doesn’t matter if the review is good or bad (so I guess, technically, this is NOT log rolling). I just need to write a review. And if I post the review, I get to choose another eBook to review. And so on. So, here it is. The first in what will likely be an irregular series. My review.

The book under review is “The Basics of Web Hacking”, subtitled “Tools and Techniques to Attack the Web”, by Josh Pauli. The book was published in June 2013, so it is fairly recent. Alas, recent in calendar time is actually not quite that recent in Internet time – but more on this later.

First, a quick overview. The book provides a survey of hacking tools of the sort that might be used either for the good of mankind (to test and detect security issues in a website and application installation) or for the destruction of man and the furtherance of evil (to identify and exploit security issues in a website and application installation). The book includes a several-page disclaimer advising against the latter behavior, suggesting that the eventual outcomes of such a path may not be pleasant. I would say that the disclaimer section is written thoughtfully, with the expectation that readers will take its warnings seriously.

For the purposes of practice, the book introduces the Damn Vulnerable Web Application (DVWA).  This poorly-designed-on-purpose web application allows you to use available tools and techniques to see exactly how vulnerabilities are detected and exploits deployed. While the book describes utilizing an earlier version of the application, figuring out how to install and use the newer version that is now available is a helpful and none-too-difficult experience as well.

Using DVWA as a test bed, the book walks you through jargon, then techniques, then practical exercises in the world of hacking. It covers scanning, exploitation, vulnerability assessment and attacks suited to each vulnerability, including a decent overview of the vast array of available tools that facilitate these actions. The number of widely available, very well-built applications with easy-to-use interfaces is overwhelming and, quite frankly, scary. Additionally, a plethora of web sites provide repositories of information about sites already known to be vulnerable and how they are vulnerable (in many cases these sites remain vulnerable despite having been notified).

The book covers usage of applications such as Burp Suite, Metasploit, nmap, nessus, nikto and The Social Engineer Toolkit. Of course, you could simply download these applications and try them out but the book marches through a variety of useful hands-on experiments that exhibit typical real-life usage scenarios. The book also describes how the various applications can be used in combination with each other which can make investigation and exploitation easier.
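As a tiny illustration of the kind of check these tools automate, the Python sketch below sends a classic SQL-injection probe to a parameter and compares the response against a baseline. The target URL and parameter name are placeholders, not anything from the book; point it only at something you own, such as a local DVWA install.

    # Illustrative only: a crude SQL-injection probe against a placeholder URL.
    # Run it only against systems you own (e.g. a local DVWA instance).
    import requests

    TARGET = "http://localhost/vulnerable_app/view.php"   # placeholder URL
    PARAM = "id"

    baseline = requests.get(TARGET, params={PARAM: "1"}, timeout=5)
    probe = requests.get(TARGET, params={PARAM: "1' OR '1'='1"}, timeout=5)

    if probe.status_code == 200 and len(probe.text) != len(baseline.text):
        print("Response changed under injection -- parameter worth a closer look.")
    else:
        print("No obvious difference; this does not prove the parameter is safe.")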

In the final chapter, the book describes design methods and application development rules that can either correct or minimize most vulnerabilities as well as providing a relatively complete list of “for further study” items that includes books, groups, conferences and web sites.

All in all, this book provides a valuable primer and introduction to detecting and correcting vulnerabilities in web applications.  Since the book is not that old, changes to applications are slight enough that figuring out what the changes are and how to do what the book is describing is a great learning experience rather than simply an exercise in frustration. These slight detours actually serve to increase your understanding of the application.

I say 4.5 stars out of 5 (docked half a star because these subject areas tend to get out-of-date too quickly, but if you read it NOW you are set to grow with the field).

See you at DEFCON!


Let me start by being perfectly clear. I don’t have Google Glass. I’ve never seen a pair live. I’ve never held or used the device. So basically, I just have strong opinions based on what I have read and seen – and, of course, on the way I have understood what I have read and seen. Sergey Brin recently did a TED talk about Google Glass during which, after sharing a glitzy, well-produced video commercial for the product, he maintained that they developed Google Glass because burying your head in a smartphone is rude and anti-social. Presumably staring off into the projected images produced by Google Glass, while still avoiding eye contact and real human interaction, is somehow less rude and less anti-social. But let that alone for now.

The “what’s in it for me” of Google Glass is the illusion of intelligence (or at least the ability to instantly access facts), Internet-based real-time social sharing, real-time scrapbooking and interactive memo taking amongst other Dick Tracy-like functions.

What’s in it for Google is obvious. At its heart, Google is an advertising company – well, more of an advertising distribution company. They are a platform for serving up advertisements for all manner of products and services. Their ads are more valuable if they can directly target people with ads for products or services at a time and place when the confluence of the advertisement and the reality yields a situation in which the person is almost compelled to purchase what is on offer, because it is exactly what they want when they want it. This level of targeting is enhanced when they know what you like (Google+, Google Photos (formerly Picasa)), how much money you have (Google Wallet), where you are (Android), what you already have (Google Shopping), what you may be thinking (GMail), who you are with (Android) and what your friends and neighbors have and think (all of the aforementioned). Google Glass, by recording location data and images and registering your likes and purchases, can work to build and enhance such a personal database. Even if you choose to anonymize yourself and force Google to de-personalize your data, their guesses may be less accurate, but they will still know about you as a member of a demographic group (male, aged 30-34, lives in zip code 95123, etc.) and perhaps infer general information based on your locale, the places you visit and where you might be at any time. So, I immediately see the value of Google Glass for Google and Google’s advertising customers but see less value in its everyday use by ordinary folks, unless they seek to be perceived as cold, anti-social savants who may possibly be on the Autistic Spectrum.

I don’t want to predict that Google Glass will be a marketplace disaster but the value statement for it appears to be limited.  A lot of the capabilities touted for it are already on your smartphone or soon to be released for it.  There is talk of image scanning applications that immediately bring up information about whatever it is that you’re looking at.  Well, Google’s own Goggles is an existing platform for that and it works on a standard mobile phone.  In fact, all of the applications touted thus far for Google Glass rely on some sort of visual analysis or geolocation-based look-up that is equally applicable to anything with a camera. It seems to me that the “gotta have the latest gadget” gang will flock to Google Glass as they always do to these devices but appealing to the general public may be a more difficult task.  Who really wants to wear their phone on their face?  If the benefit of Google Glass is its wearability then maybe Apple’s much-rumored iWatch is a less intrusive and less nerdy looking alternative.  Maybe Apple still better understands what people really want when it comes to mobile connectivity.

Ultimately, Google Glass may be a blockbuster hit or just an interesting (but expensive) experiment.  We’ll find out by the end of the year.


A spate of recent articles describes the proliferation of back doors in systems. There are so many such back doors in so many systems, they claim, that the idea of a completely secure and invulnerable system is, at best, a fallacy. These back doors may be a result of the system software or even designed into the hardware. Some back doors are designed into systems to facilitate remote update, diagnosis, debug and the like – usually with no intention of being a security hole. Some are inserted with subterfuge and espionage in mind, by foreign-controlled entities keen on gaining access to otherwise secure systems. Some may serve both purposes. And some are just design or specification errors. This suggests that once you connect a system to a network, someone, somehow, will be able to access it. As if to provide an extreme example, a recent break-in at the United States Chamber of Commerce was traced to an internet-connected thermostat.

That’s hardware. What about software? Despite the abundance of anti-virus software and firewalls, a little social engineering is all you really need to get through to any system. I have written previously about the experiment in which USB memory sticks seeded in a parking lot were inserted into corporate laptops, without any prompting, by more than half of the employees who found them. Email written as if sent from a superior is often used to get employees to open attached infected applications that install themselves and open a hole in a firewall for external communications and control.

The problem is actually designed in.  The Internet was built for sharing. The sharing was originally limited to trusted sources. A network of academics. The idea that someone would try to do something awful to you – except as some sort of prank – was inconceivable.

That was then.

Now we are in a place where the Internet is omnipresent. It is used for sharing and viewing cat videos and for financial transactions. It is used for the transmission of top secret information and for buying cheese. It is connected to servers containing huge volumes of sensitive and personal customer data: social security numbers, bank account numbers, credit card numbers, addresses, health information, etc. And now, not a day goes by without reports of another breach. Sometimes attributed to Anonymous, the Chinese, organized crime or kids with more time than sense, these break-ins are relentless and everyone is susceptible.

So what to do?

There is a story, perhaps apocryphal, that at the height of the Cold War, when the United States captured a Soviet fighter jet and was examining it, the investigators discovered that there were no solid-state electronics in it. The entire jet was designed using vacuum tubes. That set the investigators thinking. Were the Soviets merely backward, or did they design with tubes to guard against EMP attacks?

Backward to the future?

Are we headed to a place where the most secure organizations will go offline? Will they revert to paper documents, file folders and heavy cabinets stored in underground vaults? Of course such systems are not completely secure, as no system actually is. On the other hand, a break-in requires physical presence, and carting away tons of documents requires physical strength and effort. Paper is a material object that cannot be easily spirited away as a stream of electrons. Maybe that’s the solution. But what of all the information infrastructure built up for convenience, cost effectiveness, space savings and general efficiency? Do organizations spend more money going back to paper, staples, binders and hanging folders? And then purchase vast secure spaces to stow these materials?

Will there instead be a technological fix: a parallel Internet infrastructure redesigned from the ground up so that it incorporates authentication, encryption and verifiable sender identification? Could all secure transactions and information then move to that newer, safer Internet? Is that newer, safer Internet just a .secure domain? Won’t that just be a bigger, better and more value-laden target for evil-doers? And what about back doors – even in a secure infrastructure, an open door, or even a door with a breakable window, ruins the finest advanced security architecture. And, of course, there is always social engineering, which provides access more easily than any other technique. Or spies. Or people thinking they are “doing good”.
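For what it’s worth, message-level “verifiable sender identification” is already achievable with standard public-key signatures. The Python sketch below uses Ed25519 from the widely used cryptography package; it is a generic illustration, not a design for any .secure network.

    # Generic illustration of verifiable sender identification with Ed25519
    # signatures (via the 'cryptography' package), not a .secure-network design.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    sender_key = Ed25519PrivateKey.generate()
    public_key = sender_key.public_key()        # published/distributed ahead of time

    message = b"transfer $100 to account 12345"
    signature = sender_key.sign(message)

    # The receiver verifies both the sender's identity and the message's integrity.
    try:
        public_key.verify(signature, message)
        print("signature valid: sender verified")
    except InvalidSignature:
        print("forged or tampered message")

    # A modified message (or a different sender) fails verification.
    try:
        public_key.verify(signature, b"transfer $100 to account 99999")
    except InvalidSignature:
        print("tampered message detected")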

The real solution may not yet even be defined or known. Is it quantum computing (which is really just a parallel environment built on a differently developed computing infrastructure)? Or is it really nothing – in that there is no solution and we are stuck with tactical fixes? It’s an interesting question, but for now the answer is as clear as it was some 20 years ago when Scott McNealy said, “The future of the Internet is security”.


Back at the end of March, I attended O’Reilly’s Web 2.0 Expo in San Francisco. As usual with the O’Reilly brand of conferences, it was a slick, show-bizzy affair. The plenary sessions were fast-paced, with generic techno soundtracks, theatrical lighting and spectacular attempts at buzz-generation. Despite their best efforts, the staging seemed to overwhelm the Droopy Dog-like presenters, who tend to be more at home coding in darkened rooms whilst gorging themselves on Red Bull and cookies. Even the audience seemed to prefer the company of their smartphones or iPads to any actual human interaction, with “live tweets” being the preferred method of communication.

In any event, the conference is usually interesting and a few nuggets are typically extracted from the superficial, mostly promotional aspects of the presentations.

What was clear was that every start-up and every business plan was keyed on data collection. Data collection about YOU. The more – the better. The goal was to learn as much about you as possible so as to be able to sell you stuff. Even better – to sell you stuff that was so in tune with your desires that you would be helpless to resist purchasing it.

The trick was – how to get you to cough up that precious data? Some sites just assumed you’d be OK with spending a few days answering questions and volunteering information – apparently just for the sheer joy of it. Others believed that being up-front and admitting that you were going to be sucked into a vortex of unrelenting and irresistible consumption would be reward enough. Still others felt that they ought to offer you some valuable service in return. Most often, oddly enough, this service was based on financial planning and retirement savings.

The other thing that was interesting (and perhaps obvious) was that data collection is usually pretty easy (at least the basic stuff). Getting details is harder and most folks do expect something in return. And, of course, the hardest part is the data mining to extract the information that would provide the most compelling sales pitch to you.

There are all sorts of ways to build the case around your apparent desires. By finding out where you live or where you are, they can suggest things “like” other things you already have that are nearby. (You sure seem to like Lady Gaga; you know there’s a meat dress shoppe around the corner…) By finding out who your friends are and what they like, they can apply peer-pressure-based recommendations (All of your friends are downloading the new Justin Bieber recording. Why aren’t you?). And by finding out about your family and demographic information, they can suggest what you need or ought to be needing soon (Your son’s 16th birthday is coming up soon; how about a new car for him?).

Of all the sites and ideas, it seems to me that Intuit’s Mint is the most interesting. Mint is an online financial planning and management site – sort of like Quicken, but online. To “hook” you, their key idea is to offer the tease of the most valuable analysis for the minimum of initial information. It’s almost as if, given your email and zip code, they’ll draw up a basic profile of you and your lifestyle. Give them a bit more and they’ll make it better. And so you get sucked in, but you get value for your data. They do claim to keep the data dissociated from your identity, but they also collect demographically filtered and likely geographically filtered data.

This really isn’t news. facebook understood this years ago when their ill-fated Beacon campaign was launched. This probably would have been better accepted had it been rolled out more sensitively. But it is ultimately where everyone is stampeding right now.

The most interesting thing is that there is already a huge amount of personal data on the web. It is protected because it’s all in different places and not associated. facebook has all of your friends and acquaintances. Amazon and eBay have a lot about what you like and what you buy. Google has what you’re interested in (and if you have an Android phone – where you go). Apple has a lot about where you go and who you talk to and also through your app selection what you like and are interested in. LinkedIn has your professional associations. And, of course, twitter has when you go to the bathroom and what kind of muffins you eat.

Each of these giants is trying to expand their reservoir of data about you. Other giants are trying to figure out how to get a piece of that action (Yahoo!, Microsoft). And yet others, are trying to sell missing bits of information to these players. Credit card companies are making their vast purchasing databases available, specialty retailers are trying to cash in, cell phone service providers are muscling in as well. They each have a little piece of your puzzle to make analysis more accurate.

The expectation is that there will be acceptance of diminishing privacy, along with some sort of belief that the holders of these vast databases will be benevolent and secure and will not require government intervention. Technologically, storage and retrieval will need to be addressed, and newer, faster algorithms for analysis will need to be developed.

Looking for a job…or a powerful patent? I say look here.


Internet Immortality

My social network appears to be wide, diverse and technologically savvy enough that I have a large number of friends and acquaintances with large Internet footprints. That includes people with a presence on a variety of social networking sites like facebook and LinkedIn, Twitter and Flickr feeds, multiple email accounts and even blogs.

Having a broad sample of such connections means that life cycle events are not unusual in this group either. That includes death. I have now – several times – had the oddly jarring experience of receiving a reminder about the birthday of a friend who has passed away, or a suggestion to reconnect with a long-dead relative, and similar communications from across the chasm – as it were.

There is both joy and sorrow associated with these episodes. The sorrow is obvious but the joy is in spending a few moments reviewing their blog thoughts or their facebook photos and, in essence, celebrating their life in quiet, solitary reflection. And it provides these people with their own little slice of immortality. It bolsters the line from the movie The Social Network saying that “The Internet isn’t written in pencil; it’s written in ink”.

This got me thinking. In an odd way, this phenomenon struck me as an opportunity – an opportunity for a new Internet application.

I see this opportunity as having at least two possibilities. The first would be a service (or application) that seeks out the Internet footprint of the deceased and expunges and closes all of their accounts. This might have to include a password cracking program and some clever way to deduce or infer login names – for the cases where little is known about the person’s online activities. It may be that after the accounts are closed, the person lives on in the databases hidden behind the websites, which are never purged, but they will be gone from public view.

The alternative would serve those who wish to be celebrated and truly immortalized. This service would collect a person’s entire presence on the WWW and provide a comprehensive home page to celebrate their life, through their own words and images. This home page would include links to all surviving accounts, photos, posts and comments, thereby providing a window into a life lived (albeit online).

In an odd way, this creates an avatar that is a more accurate representation of yourself than anything you could possibly create on Second Life or any similar virtual world. One could certainly imagine, though, taking all that data input and using it to create a sort of stilted avatar driven by the content entered over the course of your life.  It might only have actions based on what was collected about you but a more sophisticated variation would derive behaviors or likely responses based on projections of your “collected works”.

Immortality?  Not exactly.  But an amazing simulation.


A friend of mine who made the move from the world of electronic design automation (EDA) to the world wide web (WWW) once told me that he believed that compared to the problems being solved in EDA, WWW programming is a walk in the park. 

I had an opportunity to reflect on this statement when I visited the Web 2.0 Expo in San Francisco the other day. I spent a fair amount of my career working in the algorithm-heavy world of EDA, developing all manner of simulators (logic and fault), test pattern generators and netlist modifiers. The algorithms we used and modified included things like managing various queues, genetic algorithms, path analysis, determining covering sets and the like. The nature of the solutions meant that we also had the opportunity – or, more specifically, the need – to utilize better software development techniques, processes and tools.

As I wandered the exhibit hall, I was alternately mystified by the prevalence of buzzwords and jargon (crowd sourcing, cloud computation, web analytics, collaborative software, etc.) and amazed at how old technologies were touted as new (design patterns! object-oriented programming! APIs!). Of course, I understand that any group of people who band together tends to develop their own language so as to more effectively communicate ideas, identify themselves to one another and sometimes even exclude outsiders. So I accept the language stuff, but what was truly interesting to me was that this seemingly insular society appears to have slapped together the web without consideration of the developments in computer science that preceded it!

I guess I should be happy they are figuring that out now and are attempting to catch up, but then I think, “What about all that stuff that’s out there already?” Does this mean there are all these existing web sites and infrastructure that are about to collapse under their own weight? Is there a disaster about to befall these sites when they need to upgrade, enhance or even fix significant bugs? Are there major web sites built out of popsicle sticks and bubble gum?

So why is there a big push to hire people who have experience developing these (bad) web sites?  Shouldn’t these Web 2.0 companies be looking for developers that know software rather than developers that know how to slap together a heap of code into a functional but otherwise jumbled mess?
