IoT, Big Data and why you should care about data copies

Ash Ashutosh, CEO of Actifio, says that IoT deployments could be hindered by data copies.

The Internet of Things (IoT) is fast becoming the next major technological revolution. According to Gartner, 6.4 billion connected things will be in use by the end of 2016 and the IoT will support total services spending of $235 billion (£163 billion). With this huge amount of revenue comes the data to match. The impact of the IoT and the data it generates is being felt across the entire IT spectrum, with companies having to upgrade technology and processes to manage this deluge of data efficiently and securely.

For many organisations, Big Data is seen as the Holy Grail: it will enable them to understand what their customers want and target them to drive sales and growth. The trend has the potential to revolutionise the IT industry by offering businesses new insight into data they previously ignored. To say it’s critical for organisations to harness the potential of Big Data is a huge understatement.

In an age where Big Data is the mantra and terabytes quickly become petabytes, the surge in data quantities is causing the complexity and cost of data management to grow at an alarming rate. At the current rate, by the end of this year the world will be producing more digital information than it can store – incredible. Just consider the scale of that mismatch between data and storage: a single zettabyte would fill the storage on 34 billion smartphones.
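As a sanity check on that comparison, a quick back-of-the-envelope calculation (assuming decimal units, i.e. 1 ZB = 10^21 bytes; the 34-billion-handset figure is the article’s, and the per-phone capacity is simply derived from it):

```python
# Back-of-the-envelope check of the zettabyte-vs-smartphones comparison.
# Assumption: decimal units (1 ZB = 10**21 bytes, 1 GB = 10**9 bytes).
ZETTABYTE = 10**21          # bytes in one zettabyte
SMARTPHONES = 34 * 10**9    # handsets, per the article's figure

per_phone_bytes = ZETTABYTE / SMARTPHONES
per_phone_gb = per_phone_bytes / 10**9

print(f"{per_phone_gb:.1f} GB per smartphone")  # ~29.4 GB – roughly a 32 GB handset of the era
```

In other words, the comparison holds if each phone is assumed to offer roughly 30 GB of storage, which was typical of handsets at the time.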

The real challenge with Big Data

The problem of overwhelming data quantity exists because of the proliferation of multiple physical data copies. IDC estimates that 60% of what is stored in data centres is actually copy data – multiple copies of the same thing, or outdated versions of it. The vast majority of stored data is extra copies of production data, created every day by disparate data protection and management tools for backup, disaster recovery, development, testing and analytics.

IDC estimates that up to 120 copies of a given set of production data can be circulating within a single company, and that the cost of managing this flood of data copies has reached $44 billion worldwide.

Also read: Why the UK is playing catch-up with Big Data

Tackling data bloating

While many IT experts are focused on how to deal with the mountains of data that are produced by this intentional and unintentional copying, far fewer are addressing the root cause of data bloating. In the same way that prevention is better than cure, reducing this weed-like data proliferation should be a priority for all businesses.

Copy data virtualisation – freeing organisations’ data from their legacy physical infrastructure, just as virtualisation did for servers a decade ago – is increasingly seen as the way forward. In practice, copy data virtualisation can reduce storage costs by as much as 80%. At the same time, it makes virtual copies of ‘production quality’ data available immediately to everyone in the business, everywhere they need it. That includes regulators, product designers, test and development teams, back-up administrators, finance departments, data-analytics teams, and marketing and sales departments. In fact, any department or individual who might need to work with company data can access and use a full, virtualised data set. This is what true agility means for developers and innovators.

Moreover, network strain is dramatically reduced, and IT staff – traditionally dedicated to managing the data – can be refocused on more meaningful tasks that help grow the business. Data management licence costs also fall, since back-up agents, de-duplication software and WAN (wide area network) optimisation tools are no longer required.

The ‘golden master copy’

By eliminating copy data and working from a ‘golden master copy’, storage capacity requirements shrink as well – and with them, all the attendant management and infrastructure overheads. The net result is a more streamlined organisation driving innovation. Taken together, these remedies for data bloating can deliver cost savings worth many millions, which is one of the main reasons the issue has fast become a key topic at boardroom level.

You’ve heard of both server virtualisation and network virtualisation – two concepts that once seemed outlandish. Fast-forward to now, however, and their benefits have made both commonplace within IT departments. Now it’s the turn of copy data virtualisation. As the IoT continues to grow at its current pace, so will the need for businesses to put a data management strategy in place to capitalise on the opportunity presented by Big Data.

Also read: IT managers excited by IoT, see Big Data challenges