There is a huge focus on big data nowadays. Driven by ever-decreasing prices and ever-increasing capacity of data storage solutions, big data promises magical insights: new windows into exploiting the long tail and addressing micro markets and their needs. Big data can be used to build, test and validate models and ideas. Big data holds promise akin to a panacea; it is being pushed as a universal solution to all ills. But if you look carefully and analyze correctly, what big data ultimately provides is what Marshall McLuhan described as an accurate prediction of the present. Big data helps us understand how we got to where we are today. It tells us what people want or need or do within a framework as it exists today. It is bounded by today’s (and the past’s) possibilities and ideas.
But big data does not identify the next seismic innovation. It does not necessarily even identify how to modify the current big thing to make it incrementally better.
In the October 2013 issue of IEEE Spectrum, an article described the work of a company named Lex Machina. The company is a classic big data play. They collect, scan and analyze all legal proceedings associated with patent litigation and draw up statistics identifying, for instance, the companies that are more likely to settle, the law firms that are more likely to win, the judges who are more favorable to defendants or to plaintiffs, and the duration and cost of litigation in different areas. So it is a useful tool. But all it does is tell you about the state of things now. It does not measure variables like the outcomes of litigation or settlements (for instance, whether a company wins but goes out of business, wins and goes on to build a more dominant market share, or wins and nothing happens). It does not indicate whether companies protect only specific patents that have, say, an estimated future value of $X million, or what metric companies might use in their internal decision-making process, because that is likely not visible in the data.
Marissa Mayer, the hyper-analyzed and hyper-reported-on CEO of Yahoo!, famously tests all decisions against data. Whether it is the shade of purple for the new Yahoo! logo, the purchase price of the next acquisition or the value of any specific employee, it’s all about measurables.
But how can you measure the immeasurable? If something truly revolutionary is developed, how can big data help you decide whether it’s worth it? How can even little data help you? How can people know what they like until they have it? If I told you that I would provide you with a service that lets you broadcast your thoughts to anyone who cares to subscribe to them, you’d probably say, “Sounds stupid. Why would I do that, and who would care what I think?” If I then told you that I forgot one important aspect of the idea, that every shared thought is limited to 140 characters, you would likely say, “Well, now I KNOW it’s stupid!” Alas, I just described Twitter: an idea that turned into a company that is, as of this writing, trading on the NYSE at just over $42 per share with a market capitalization of about $25 billion.
Will a strong reliance on big data lead us incrementally into a big corner? Will all this fishing about in massive data sets for patterns and correlations merely reveal the complete works of Shakespeare in big enough data sets? Is big data just another variant of the Infinite Monkey Theorem? Will we get to the point that, with so much data to analyze, we merely prove whatever it is we are looking for?
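That worry has a concrete statistical form: the multiple-comparisons problem. Search enough variables and some of them will correlate with anything. Here is a minimal sketch (illustrative Python with made-up sizes, not drawn from Lex Machina, Google or anyone else’s actual data) in which the outcome and every candidate “predictor” are pure noise, yet the strongest correlation found grows steadily as the search widens:

```python
import numpy as np

# Illustrative only: every series here is pure noise, so any correlation
# we "discover" is purely an artifact of searching widely enough.
rng = np.random.default_rng(42)

n_observations = 100  # length of each series (an arbitrary choice)
outcome = rng.normal(size=n_observations)

# Standardize the outcome once so a dot product yields Pearson correlation.
y = (outcome - outcome.mean()) / outcome.std()

for n_predictors in (10, 1_000, 100_000):
    predictors = rng.normal(size=(n_predictors, n_observations))
    x = predictors - predictors.mean(axis=1, keepdims=True)
    x /= x.std(axis=1, keepdims=True)
    # Correlation of each random predictor with the random outcome.
    correlations = x @ y / n_observations
    best = np.abs(correlations).max()
    print(f"{n_predictors:>7,} predictors -> best |correlation| = {best:.2f}")
```

On a typical run, the best spurious correlation climbs steadily with the number of candidates screened, even though there is nothing to find. Which is exactly “proving whatever it is we are looking for.”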
Already we are seeing that Google Flu Trends looks for instances of the flu and finds them where they aren’t, or at higher frequencies than they actually occur. In that manner, big data fails even to accurately predict the present.
It is only now that some of the issues with ‘big data’ are being considered. For instance, even when you have a lot of data, if it is bad or incomplete you still have garbage, just a lot more of it. (That is where wearable devices, cell phones and other sophisticated but thinly veiled data-accumulation appliances come into play: they help improve data quality by making it more complete.) Then the data itself is only as good as the analysis you can execute on it. The failings of Google Flu Trends are often attributed to poorly chosen search terms in the analysis, but of course there could be many other reasons.
Maybe, in the end, big data is just big hubris. It lulls us into a false sense of security, promising knowledge and wisdom if only we gather enough data, but in the end all we learn is where we are right now, and its predictive powers are, at best, based merely on what we want the future to be and, at worst, non-existent.