How should we define the tools and mechanics we use to process the algorithmic calculations that exist within the IoT? More fundamentally, is it time to stand back and question how we approach the programming languages and math that drive the IoT itself?
How complex is the IoT?
First let’s consider quite how complex the IoT is today and how much complexity it could produce – and need to shoulder – in the future.
When Kevin Ashton, then at Procter & Gamble, coined the term back in 1999 (and revisited it in a 2009 RFID Journal article), he was attempting to describe an internet not built of information created by humans, but one in which machines started to create that information automatically.
“Today, computers — and, therefore, the Internet — are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the Internet were first captured and created by human beings — by typing, pressing a record button, taking a digital picture or scanning a bar code,” wrote Ashton, in RFID Journal.
Ashton wanted to describe a new way for the internet to grow, where computers are empowered with their own means of gathering information to be able to track and analyze the world around them – in all its “random glory”, as he put it.
When the IoT comes of age (and perhaps we might reasonably argue that it has yet to do so), then we will need to considerably up our game in terms of mathematical complexity.
IoT is more than SCADA
Although clarity has come to our modern (for now) understanding of the IoT, we must also appreciate that the IoT, in some ways, existed in various forms before now.
Take, for example, SCADA (supervisory control and data acquisition) systems in manufacturing industries: these have been around for many years, even before the millennium. But SCADA systems were silos, and their data often languished in areas rarely accessed by the engineers (both software and mechanical) who might have benefitted from them.
The IoT is a fundamentally more connected, more analytical, more predictive, more responsive and more autonomic set of interwoven machine systems (and their corresponding devices) dependent upon co-related data streams and the wider breadth of the web itself. What is really important though (and let’s think back to what Ashton said at the start), is that the IoT is more random.
We must appreciate the enormity (and the randomness) of the mathematical complexity that confronts us. Failure to do so could arguably result in some parts of the IoT coming to a juddering halt.
So how do we cope with randomness? Andrei Macsin, CTO at Data Science Central (an online resource for big data practitioners) in Romania, advises that one of the most commonly used methods for modelling uncertainty is Bayesian networks.
“A Bayesian network is a probabilistic graphical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph. Bayesian networks can be used extensively in Internet of Things projects to ascertain data transmitted by the sensors,” writes Macsin, on IoT Central.
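Macsin's point can be made concrete with a toy example. The sketch below builds the smallest possible Bayesian network, a single edge from a hidden "fault" state to an observed "anomalous reading", and uses Bayes' theorem to update our belief that a sensor is broken. All probabilities are illustrative assumptions, not real sensor statistics.

```python
# Minimal Bayesian inference over a two-node network (Fault -> Anomaly).
# The numbers are made up for illustration: a prior fault rate of 1%,
# with faulty sensors producing anomalous readings 90% of the time and
# healthy sensors only 5% of the time.

def posterior_fault(p_fault, p_anom_given_fault, p_anom_given_ok):
    """P(fault | anomalous reading), via Bayes' theorem."""
    # Total probability of seeing an anomaly at all.
    p_anom = (p_anom_given_fault * p_fault
              + p_anom_given_ok * (1.0 - p_fault))
    # Posterior: how much of that anomaly probability comes from faults.
    return p_anom_given_fault * p_fault / p_anom

p = posterior_fault(0.01, 0.90, 0.05)
print(f"P(fault | anomaly) = {p:.3f}")  # roughly 0.154
```

Even with a 90% detection rate, the posterior is only about 15%, because faults are rare; this is exactly the kind of prior-weighted reasoning that helps an IoT backend avoid drowning in false alarms. Real deployments would chain many such nodes into a larger directed acyclic graph.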
Our quantum future
All mainstream programming languages support random number generation as part of their core libraries and mathematical logic. It is in the implementation (the choice of generator, its statistical quality and its suitability for security work) that they differ, and this aspect could become one of the next determining factors that drives IoT language usage.
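Python's standard library makes this implementation split explicit, so it serves as a convenient illustration (the module names below are Python's; other languages draw the same line differently):

```python
import random
import secrets

# Deterministic PRNG: the same seed always yields the same sequence.
# CPython's `random` uses the Mersenne Twister, which is fast and
# statistically strong but predictable, so it is fine for simulation
# and testing, never for security.
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(3)] == [b.random() for _ in range(3)]

# Cryptographic randomness: drawn from the OS entropy pool and not
# reproducible -- the right choice for keys, tokens and nonces.
token = secrets.token_hex(16)
print(token)  # 32 hex characters, different on every run
```

For constrained IoT devices, the interesting question is which of these two kinds of generator the runtime gives you by default, and at what cost in entropy and power.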
As we move out of conventional binary computing constructs and into the world of quantum superposition and qubits, we will ultimately start to make use of quantum randomness. South Korean telco SK Telecom has already started to work on quantum random number generation as a means of securing IoT devices. The complexity is coming, as is additional randomness, so let's make sure we do the math now.
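To see where such a quantum source would slot in, consider key generation on a device. The sketch below is purely illustrative and is not SK Telecom's design: `device_key` is a hypothetical helper, and the operating system's CSPRNG stands in for a hardware QRNG feed, since the interface (a function that returns n unpredictable bytes) looks the same either way.

```python
import secrets

def device_key(entropy_source=secrets.token_bytes, nbytes=32):
    """Derive per-device secret key material from an entropy source.

    `entropy_source` is a stand-in: in a QRNG-backed deployment it
    would read from a quantum random number generator chip; here we
    fall back to the OS cryptographic RNG. Either way the caller just
    needs `nbytes` of unpredictable data.
    """
    return entropy_source(nbytes)

key = device_key()
print(len(key))  # 32 bytes of key material
```

The design point is that better entropy hardware can be swapped in behind the same narrow interface, so firmware written today need not change when quantum sources arrive.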