Building out the edge during the pandemic without micro data centers

People and businesses move – and the edge needs to move with them. If we think of a micro data center, we are starting with the idea of replicating the hyperscale data center, but miniaturizing it for edge applications.

That approach is a bit backwards, because it starts with the hyperscale data center requirement of cost optimization through massive scale. But cost optimization through scale does not make sense at the edge, so why would you start with that idea as your blueprint? The device edge of the network has unique requirements, which include not only computing, but also scalable and flexible connectivity.

In fact, if history is a guide, the closer equipment gets to the network edge and the user, the more important integration becomes. Think of the smartphone revolution. Mobile communications started with the “brick” phones, followed not only by a constant size reduction, but also by a continuous integration of other functions – music players, cameras, browsers, and so much more. It is all about doing more with less. But what is needed at the network device edge that data centers do not provide? Connectivity.

That micro data center must rely on a lot of other equipment to connect to your devices, be they smartphones over Wireless WAN or Wi-Fi, or sensors and controls over Bluetooth, Zigbee, or LoRaWAN. At Veea, our founder Allen Salmasi saw a great opportunity to combine the edge compute and the edge connectivity requirements into something new. We call that new equipment category the Smart Edge Node (SEN). But integrating processing with connectivity is just the start. A SEN-based edge platform also has to be flexible, scalable, easily deployed and re-deployed, and cohesively manageable, even as SENs are added or moved around. Wireless mesh technology and centralized cloud-based management are key enablers here.
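
To make that integration idea concrete, here is a minimal Python sketch of how one might describe a SEN as a data structure – a single box that combines compute, several device-facing radios, mesh links to peer nodes, and central cloud management. The class and field names are illustrative assumptions, not Veea’s actual software.

```python
# Illustrative sketch only: the class and field names are assumptions,
# not Veea's actual software. It models one node that combines compute,
# device-facing radios, mesh links to peers, and cloud-based management.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SmartEdgeNode:
    node_id: str
    cpu_cores: int                        # server-grade processing on the node itself
    radios: List[str] = field(
        default_factory=lambda: ["wifi", "bluetooth", "zigbee", "lorawan"]
    )
    mesh_peers: List[str] = field(default_factory=list)  # other SENs reachable over the mesh
    cloud_managed: bool = True            # configuration pushed from a central cloud controller

    def join_mesh(self, peer_id: str) -> None:
        """Record a new mesh neighbor; a real node would also negotiate routes."""
        if peer_id not in self.mesh_peers:
            self.mesh_peers.append(peer_id)


# Growing a deployment is just another node plus a mesh link in each direction.
node_a = SmartEdgeNode("sen-01", cpu_cores=8)
node_b = SmartEdgeNode("sen-02", cpu_cores=8)
node_a.join_mesh(node_b.node_id)
node_b.join_mesh(node_a.node_id)
print(node_a.mesh_peers)  # ['sen-02']
```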

How does the pandemic impact this?

The pandemic has accelerated the shift to remote work, and with more people working remotely, fewer of them are anywhere near the enterprise sites where micro data centers and their data closets would be located. These remote workers need edge processing where they are now, not where they used to be.

SENs like our VeeaHubs make even greater sense in this world, because they are about the size of the home wireless router that we are already accustomed to, but they include server-grade processing, enterprise-class security, and connectivity for a broad array of devices. Therefore, we don’t have to worry about where to put new equipment closets with the associated power, cooling, and cabling. And SENs can easily connect to each other and to the WAN, either wirelessly for maximum flexibility, or wired where that is preferred.

That mesh connectivity allows them to operate as a large, virtual, scalable processing resource which is as close as possible to the source of the data that they are processing.
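
As a rough illustration of that “virtual processing resource” idea, the Python sketch below spreads incoming tasks across whichever mesh node currently has the least work queued. The node names and the load metric are assumptions for the example, not Veea’s actual scheduler.

```python
# A minimal sketch, with assumed names and a trivial load metric; it is not
# Veea's scheduler. It treats a mesh of nodes as one processing pool by
# dispatching each task to the least-loaded node.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MeshNode:
    node_id: str
    queued_tasks: List[str] = field(default_factory=list)

    @property
    def load(self) -> int:
        return len(self.queued_tasks)


def dispatch(task: str, mesh: List[MeshNode]) -> MeshNode:
    """Place a task on whichever node currently has the shortest queue."""
    target = min(mesh, key=lambda n: n.load)
    target.queued_tasks.append(task)
    return target


mesh = [MeshNode("sen-01"), MeshNode("sen-02"), MeshNode("sen-03")]
for frame in ["frame-001", "frame-002", "frame-003", "frame-004"]:
    node = dispatch(frame, mesh)
    print(f"{frame} -> {node.node_id}")
```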

Building out the edge without using micro data centers

There are architectures where the end devices themselves – the smartphones, laptops, and the like – are used together for edge processing tasks. The idea is that there is a lot of processing capacity on smartphones and laptops that sits unused at any given moment. However, the owners of those devices are unlikely to support this approach, given that it would tend to shorten their battery life or increase their bandwidth use.

There are ways of dealing with the bandwidth issue through “zero-rate billing” approaches for the data that is not directly tied to the user’s activity, but that brings a lot of complexity. A more realistic scenario focuses on adding processing in the access points. However, we see a hybrid solution that combines processing at the network edge with processing in the core clouds as the “best of all worlds” approach: “rapid response” tasks are handled at the network edge, as are cases where a small degree of edge processing can dramatically reduce the bandwidth being sent over the WAN to the cloud data center.
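
Here is a toy Python example of that hybrid split, using placeholder names and thresholds rather than any real detection model: the edge node answers the “rapid response” question locally and forwards only a compact summary to the cloud when there is something worth reporting, so most of the raw data never crosses the WAN.

```python
# Toy example with placeholder names and thresholds, not a real detector.
# The edge node makes the quick local decision and sends the cloud a small
# summary instead of the full frame, which is where the bandwidth savings come from.
from typing import Optional

MOTION_THRESHOLD = 0.3  # assumed tuning parameter


def motion_score(frame: bytes) -> float:
    """Placeholder for a lightweight on-node check; returns a value in [0, 1]."""
    return min(1.0, len(frame) / 1_000_000)


def process_at_edge(frame: bytes) -> Optional[dict]:
    """Return a small summary for the cloud only when something interesting happened."""
    score = motion_score(frame)
    if score < MOTION_THRESHOLD:
        return None                          # nothing to report, nothing sent over the WAN
    return {"motion_score": round(score, 2), "frame_bytes": len(frame)}


def send_to_cloud(summary: dict) -> None:
    print(f"uploading ~{len(str(summary))} bytes instead of {summary['frame_bytes']} bytes")


frame = b"\x00" * 500_000                    # a raw frame of roughly 0.5 MB
summary = process_at_edge(frame)
if summary is not None:
    send_to_cloud(summary)
```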

Other processing tasks can then be allocated to micro data centers which may be further back in the network, and to the hyper-scale cloud data centers. It is simply a tradeoff between response time and the bandwidth required. Processing closer to the data source provides faster responses and sends less traffic to the cloud, while processing in the cloud data center can be deeper and more cost effective. Some tasks are conducive to one, some to the other.

The mobile network infrastructure is another place where a mix of SENs and micro data centers can make sense. Due to the frequencies and bandwidths involved, there are relatively more 5G base stations for a given number of subscribers in a given area. A smaller edge processing element, relative to what one might need in a 4G base station, can make sense. Basically, there is no “one size fits all” approach. SENs, micro data centers, and hyper-scale data centers all have their place. The SEN is just the newest element in the infrastructure arsenal.

Predictions

I do not consider myself an expert in the micro data center space, but I have been in the telecom and datacom space long enough to know that the demand for more processing is not going to slow down any time soon.

That will drive growth across the entire spectrum of processing solutions – SENs at the device edge, micro data centers at the access edge, larger data centers further in the network at the carrier edge, and of course the hyper-scale data centers run by Google, Apple, and the like. There will be growth and room for all, because not all processing tasks are equal, and they will tend to migrate to the places where they are most suitably done.

Elements pushing growth

Video is a big driver. If a picture is worth a thousand words, and a video is 30 pictures per second, then the amount of information that can be extracted from video – for all sorts of purposes – is endless. Zoom conferences today are likely to become the virtual or augmented reality sessions of tomorrow, and all of that video must be processed: not just handled, but interpreted, using AI to extract knowledge from the images. And in the video world, there continues to be a separation between tasks that require immediate processing at the device edge and other tasks that can be done further back in the network.

You can think about different levels of processing like this: maybe you detect if a person is in the image at the device edge, and capture sufficient frames to send that to a larger data center further back in the network. At that larger data center, you determine if there are any specific characteristics of the person that warrant a closer inspection. And in the hyper-scale data center, you not only analyze that video more deeply, but you make cross-comparisons and correlations with other video coming from various other locations.
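
A hypothetical Python sketch of those three levels is below. The detection functions are stubs and the escalation rule is invented; the point is that each tier forwards only a small fraction of what it received to the tier behind it.

```python
# Hypothetical three-tier pipeline; the detection logic is stubbed out and the
# escalation rule is invented. Each tier forwards only a reduced result upstream.
from typing import Dict, List, Optional


def device_edge(frame: Dict) -> Optional[Dict]:
    """Tier 1, on the SEN: just decide whether a person is present."""
    if frame.get("person_detected"):
        return {"site": frame["site"], "frames": frame["frames"]}
    return None


def regional_dc(clip: Dict) -> Optional[Dict]:
    """Tier 2, a larger data center: check whether the clip warrants a closer look."""
    if len(clip["frames"]) >= 3:              # assumed "worth escalating" rule
        return {"site": clip["site"], "flagged": True}
    return None


def hyperscale_dc(reports: List[Dict]) -> List[str]:
    """Tier 3, the hyper-scale data center: correlate flagged reports across sites."""
    return sorted({r["site"] for r in reports if r.get("flagged")})


frames = [
    {"site": "store-12", "person_detected": True, "frames": ["f1", "f2", "f3"]},
    {"site": "store-40", "person_detected": False, "frames": []},
]
clips = [c for f in frames if (c := device_edge(f)) is not None]
reports = [r for c in clips if (r := regional_dc(c)) is not None]
print(hyperscale_dc(reports))                 # -> ['store-12']
```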

    • Natural language processing: Today, we can ask our digital assistants like Alexa, Google, and Siri to do tasks, but having a human-speed two-way conversation is still a stretch. That will require much more processing closer to the users.

    • Autonomous vehicles – cars, trucks, drones, vehicles of all types. These applications take video to the nth degree. At the individual vehicle level, they leverage vision systems to make life-critical decisions in real time. But they also pass data to the network that is used by other vehicles or control systems to make intelligent decisions. So the processing must be available at a variety of locations in the network, based on where the data is coming from, and how broad an area is being managed.

 

By Kurt W. Michel, Senior Vice President, Veea
