It is a term technocrats use to describe the accelerating digitization of information. During the 1990s, we experienced an incredible shift to digital technology. Businesses deployed technology throughout their production systems and digitized many of their information sources. Cultural content, including books, movies, and music, also began to migrate to digital formats. Most households have now made the transition to digital through email, digital photography, music, television, and more.
A relatively new concept, the Internet of Things, has also impacted the digitization of information and provided the catalyst for big data.
The latest estimates show that more things are now connected to the Internet than people: roughly 5 billion devices versus 2 billion people.
Kevin Ashton coined the phrase Internet of Things in 1999. Writing ten years later, he explained what he meant by it:
The fact that I was probably the first person to say “Internet of Things” doesn’t give me any right to control how others use the phrase. But what I meant, and still mean, is this: Today computers (and, therefore, the Internet) are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the Internet were first captured and created by human beings: by typing, pressing a record button, taking a digital picture or scanning a bar code. Conventional diagrams of the Internet include servers and routers and so on, but they leave out the most numerous and important routers of all: people. The problem is, people have limited time, attention and accuracy, all of which means they are not very good at capturing data about things in the real world.
And that’s a big deal. We’re physical, and so is our environment. Our economy, society and survival aren’t based on ideas or information; they’re based on things. You can’t eat bits, burn them to stay warm or put them in your gas tank. Ideas and information are important, but things matter much more. Yet today’s information technology is so dependent on data originated by people that our computers know more about ideas than things.
If we had computers that knew everything there was to know about things, using data they gathered without any help from us, we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best.
We need to empower computers with their own means of gathering information, so they can see, hear and smell the world for themselves, in all its random glory. RFID and sensor technology enable computers to observe, identify and understand the world, without the limitations of human-entered data.
Ten years on, we’ve made a lot of progress, but we in the RFID community need to understand what’s so important about what our technology does, and keep advocating for it. It’s not just a “bar code on steroids” or a way to speed up toll roads, and we must never allow our vision to shrink to that scale. The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so.
How much data have we created? What has been the rate of growth?
Bounie and Gille reported the following in the International Journal of Communication:
First, by grouping all the media, we found that the worldwide flows and stocks of original contents and copies amounted to 275.3 exabytes, which is roughly equivalent to 1 gigabyte for every man and woman on earth. The largest quantity of information moved through magnetic media, which accounted for 72.8% (200.5 exabytes) of the total flows and stocks of original contents and copies stored; optic media were in second place (21.8% with 60.1 exabytes). Despite being the oldest of the media, paper and plastic contained a very small quantity of information compared to recent media (0.3% and 5.1%).
An exabyte is a quintillion bytes: a billion billion bytes, or a billion gigabytes. And 275 exabytes, well that, my friends, is a lot of data. It is also far more than the 50 petabytes Kevin Ashton highlighted for the entire Internet back in 1999.
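To put those two figures side by side, here is a quick back-of-the-envelope sketch. It uses decimal SI units throughout; Ashton's parenthetical uses the binary 1,024-based convention, but that does not change the order of magnitude.

```python
# Back-of-the-envelope scale comparison, using decimal SI units.
PETABYTE = 10**15  # bytes
EXABYTE = 10**18   # bytes

ashton_1999 = 50 * PETABYTE      # Ashton's estimate for the Internet
bounie_gille = 275.3 * EXABYTE   # Bounie and Gille's worldwide stocks and flows

ratio = bounie_gille / ashton_1999
print(f"{ratio:,.0f}x more data")  # about 5,506x more data
```

Roughly three and a half orders of magnitude, in about a decade.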
In the same report, Bounie and Gille also estimated that the world produced 14.7 exabytes of new information in 2008, nearly triple the volume of information in 2003. And I suspect that the rate of growth is not slowing down.
From Domo, here is an infographic on the amount of data being created every minute.
Big data. And big changes ahead.