Bare-metal (single tenant physical server) cloud provider Packet sees a near future in which increasingly complex, real-time applications permeate every facet of our daily lives: from autonomy and the Internet of Things to augmented reality and personalised medicine.
Its part in this revolution, it says, is to provide “smart, API-driven bare metal” rather than the multi-tenant, virtualised environments that are increasingly commonplace among both providers and client organisations.
CEO Zac Smith took time out to talk with Internet of Business about Packet’s vision for the changing roles of cloud services and edge computing in the Internet of Things.
Internet of Business: You say that the way we are using computers and the IoT requires a new approach to back-end technology. Can you explain why this is so, and what features are required that existing technology can’t provide?
Zac Smith: “The workloads that are tied to 5G, autonomy, the IoT and other new experiences are just massive.
“History shows us that as workloads grow in size or importance, you start to see the hardware become much more specialised. Add the impact of latency and regulation on these kinds of experiences, and you start to see a very different infrastructure requirement than what exists today.
“The current wave of cloud computing compelled us to put a lot of the same things in a few places, abstracting them away and making IT easier to consume. But what’s next will require us to put a few things in a lot of places, and that is a totally different challenge.
“We believe this shift requires a different approach to infrastructure – a more fundamental, agnostic one that can embrace variety and rapid changes in hardware. That’s why Packet’s technology focuses on automating at the lowest layer. We can automate nearly anything, and make it consumable to a developer or to her software at global scale.”
Isn’t that just edge computing? If not, can you explain in what ways Packet’s offering is different?
“While the terminology around ‘edge computing’ is in flux, we certainly expect a very diverse approach to infrastructure, including what it is and how many locations it is in. Over the next few years, we’ll see workloads that can take advantage of compute power in hundreds or thousands of locations.
“But this is valuable for more than just computing at the very edge; it’s also relevant for less dramatically distributed scenarios. For example, let’s say you are a personalised medicine company, and a new GPU comes to market that, in concert with your proprietary software, can analyse DNA in 15 seconds instead of a few hours. You just need to put this special GPU into every major city in the world. Massive differentiation!
“So how do you get that combination of smart software and innovative hardware to market? Well, you can either wait for a public cloud provider like AWS to adopt that hardware and make it available to you when they choose – assuming they don’t offer a personalised medicine service of their own – or you can deploy it yourself across a hundred locations, setting up a global network while you’re at it, and maintaining everything from the firmware to the cables to the legal entity in Hong Kong.
“Packet’s technology and global distribution approach was built for just this scenario. We make it not just possible, but enjoyable, for a developer-driven customer to deploy custom hardware anywhere in the world, from the Equinix to the Edge.”
You’ve spoken elsewhere about designing computers around software. What does that look like in the real world?
“A great example is probably in your hand right now: your iPhone. Apple started its journey about 10 years ago with generic components purchased from suppliers. Now they design their own processors, and just about everything else in their devices, to optimise the software experience they are building above it.
“You see the same thing with AWS building its own Smart NIC with custom silicon that does exactly what it needs, and nothing more. We think this mashup of hardware and software will only accelerate as the experiences become more immersive, and as the requirements increase on the hardware to be both performant and efficient.”
Your vision of hardware being in hundreds or thousands of locations still demands that those machines share data, communicate, use AI features, and so on. So some centralisation will be needed. How does that work best with the distributed or edge model?
“This isn’t an ‘all or nothing’ equation. Certain workloads will definitely still live in centralised server farms. We think it will come down to advanced algorithms around cost.
“For instance, how much of your workload are you willing to process on scarce resources at the edge? Which elements can be handled in off hours, or shipped off via low-orbit satellites to a location with cheaper power? Our expectation is that container orchestration software, like Kubernetes, will get ever smarter and help decide which workload gets handled where, according to business rules, regulations, economics, and more.”
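The placement logic Smith describes can be illustrated with a minimal sketch: an orchestrator weighing latency requirements, data-residency rules, and cost when choosing where a workload runs. Everything here is hypothetical and illustrative — the site names, thresholds, and `place` function are invented for this example and are not any real Kubernetes or Packet API.

```python
# Hypothetical sketch of cost- and rule-based workload placement.
# All names, sites, and numbers are illustrative, not a real API.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float      # round-trip latency to end users
    cost_per_hour: float   # compute cost at this location
    region: str            # jurisdiction, for data-residency rules

def place(max_latency_ms, allowed_regions, sites):
    """Pick the cheapest site that satisfies latency and residency rules."""
    eligible = [s for s in sites
                if s.latency_ms <= max_latency_ms
                and s.region in allowed_regions]
    if not eligible:
        return None  # no placement satisfies the business rules
    return min(eligible, key=lambda s: s.cost_per_hour)

sites = [
    Site("edge-nyc", latency_ms=5, cost_per_hour=0.90, region="US"),
    Site("core-virginia", latency_ms=40, cost_per_hour=0.20, region="US"),
    Site("core-frankfurt", latency_ms=95, cost_per_hour=0.15, region="EU"),
]

# A latency-sensitive workload capped at 10 ms: only the edge site qualifies.
print(place(10, {"US"}, sites).name)         # edge-nyc
# A batch job tolerating 100 ms goes to the cheapest allowed location.
print(place(100, {"US", "EU"}, sites).name)  # core-frankfurt
```

The point of the sketch is the trade-off itself: the latency-sensitive job pays a premium for scarce edge capacity, while the tolerant batch job drifts to the cheapest compliant location — exactly the kind of decision Smith expects orchestrators to automate.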
As the IoT, AI, and big data analytics rise, and organisations realise that a lot of the number crunching needs to be done either on premise, in the distributed core, or in a dynamic edge environment, do you think the pendulum will continue swinging away from centralised, public cloud computing in the future? Or is there still a place for traditional cloud?
“Absolutely, there’s still a place. Workloads that aren’t latency-sensitive will continue to be served from the cheapest reasonable locations, and – as I said – we think software like Kubernetes will continue to evolve to optimise cost and performance against business rules and regulatory requirements.”
When will the shift to the edge happen en masse?
“We are just at the beginning of a new adoption cycle, so it’s a bit of the Wild West out there! But as 5G comes online, and early use cases like IoT and gaming start to accelerate, we think edge computing will move from hype to reality in 2019.”
Internet of Business says
With vendors such as Dell and Microsoft now investing billions of dollars in IoT environments, with a stated focus on distributed core and edge computing, the move to the edge is already clear.
Meanwhile, other applications, such as enterprise-grade AI and analytics, require a shift back from the cloud to on-premise data centres – hence companies such as NVIDIA and its partners producing ‘AI supercomputers in a box’ to slot into organisations’ own racks. And time-critical functions, such as an autonomous car’s need to avoid a collision, need processing and intelligence available both onboard and at the edge.
In short, organisations need to recognise the cloud’s limitations for many enterprise-scale or connected applications; those who bet on hybrid cloud made the right call.
Read more on the shift from cloud to edge computing.