At the recent Zero-Touch and Carrier Automation Congress in Madrid, I was reminded of a quote from George Westerman of MIT:
“When digital transformation is done right, it’s like a caterpillar turning into a butterfly, but when done wrong, all you have is a really fast caterpillar.”
One of the most debated topics at the show was how to deal with complexity when enabling automation in operators’ networks. The processes currently used to manage these networks were designed to make human intervention and management relatively efficient. In the new world of automation, however, even minimal human intervention is problematic. Despite the industry’s best efforts, translating complex human decision-making into a computer-readable, algorithm-friendly process remains a real challenge.
Redesigning around simplicity
Indeed, multiple speakers at the show agreed that trying to automate a complex set of processes was the wrong approach, and that a fresh take was needed. Starting again and defining new, simpler processes focused on enabling zero-touch provisioning and automation offers a more logical and achievable route forward.
Deciding which processes will deliver the most benefit should be a relatively easy and time-efficient task. Equally important, however, is identifying which will encounter the least internal resistance. Building trust in new processes and tools is a key precondition of success, so the best way to begin is with the low-hanging fruit that will build confidence in the approach.
Putting complexity in its place
Would taking this approach mean curtains for complexity? Not quite. Complex processes still have their place, but care is needed to ensure they are used effectively and efficiently. The network edge is one example: there, decisions are made using the data at hand, close to where their results have an impact.
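As a rough illustration of keeping decisions close to the data, the Python sketch below lets an edge agent act locally on routine conditions and escalate to a central orchestrator only when local action is not enough; the thresholds, link names and actions are entirely hypothetical.

```python
from dataclasses import dataclass

# Hypothetical budgets -- illustrative only, not from any real operator policy.
LATENCY_BUDGET_MS = 20.0
LOSS_BUDGET = 0.01

@dataclass
class LinkSample:
    link_id: str
    latency_ms: float
    packet_loss: float

def decide_locally(sample: LinkSample) -> str:
    """Act on the data at hand at the edge; escalate only when local action is not enough."""
    if sample.packet_loss > LOSS_BUDGET:
        # Persistent loss suggests a fault the edge cannot fix on its own.
        return f"escalate {sample.link_id} to central orchestrator"
    if sample.latency_ms > LATENCY_BUDGET_MS:
        # Latency alone can be handled locally, e.g. by re-weighting traffic.
        return f"reroute {sample.link_id} via local policy"
    return "no action"

if __name__ == "__main__":
    for s in (LinkSample("edge-link-1", 35.0, 0.002),
              LinkSample("edge-link-2", 12.0, 0.05)):
        print(s.link_id, "->", decide_locally(s))
```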
Standardised outputs and data types are essential
Enabling a carrier’s network to adapt to changing future requirements requires a level of analysis that can only be achieved using common data types and APIs. Today, networks are home to competing automation domains and multiple APIs, which frustrate automation and make upgrades both complex and costly. Closely related is the need for clean data: ‘dirty data’ is the silent killer of automation, and most operators believe that up to half of their available data is either of poor quality or just plain wrong.
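To make the ‘dirty data’ point concrete, here is a minimal Python sketch (field names, thresholds and record values are all invented for illustration) of the kind of hygiene check an automation pipeline depends on: records that are incomplete, garbled or stale are filtered out before any algorithm acts on them.

```python
from datetime import datetime, timedelta, timezone

# A hypothetical "common data type" for an inventory record; field names are illustrative.
REQUIRED_FIELDS = {"device_id", "site", "last_seen"}
MAX_AGE_DAYS = 30  # treat anything not refreshed within this window as suspect

def is_clean(record: dict) -> bool:
    """Reject records that are missing fields or carry stale or garbled timestamps."""
    if not REQUIRED_FIELDS.issubset(record.keys()):
        return False
    try:
        last_seen = datetime.fromisoformat(record["last_seen"])
    except (TypeError, ValueError):
        return False
    return (datetime.now(timezone.utc) - last_seen).days <= MAX_AGE_DAYS

now = datetime.now(timezone.utc)
records = [
    {"device_id": "olt-042", "site": "madrid-co-3",
     "last_seen": (now - timedelta(days=2)).isoformat()},   # fresh and complete -> clean
    {"device_id": "olt-043", "site": "madrid-co-3",
     "last_seen": "not-a-date"},                            # garbled timestamp -> dirty
    {"device_id": "olt-044",
     "last_seen": (now - timedelta(days=1)).isoformat()},   # missing 'site' -> dirty
]
clean = [r for r in records if is_clean(r)]
print(f"{len(clean)}/{len(records)} records usable for automation")
```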
Simplifying the network is critical
To enable this vision of simplicity and automation, establishing a standardised computing plane across the network is essential. With this in place, VNFs (virtualised network functions) can be hosted at any location on the operator’s network: in the cloud (the main data centres), at the network edge on distributed CORDs (central offices re-architected as data centres), and all the way out to street cabinets and customer premises.
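As a sketch only, with hypothetical tier names and latency figures, the snippet below shows the kind of placement decision a standardised computing plane makes possible: the same VNF can land in the core cloud, a CORD site, a street cabinet or the customer premises, depending purely on its latency needs and the capacity available.

```python
from dataclasses import dataclass

# Hypothetical tiers of a standardised computing plane; latency figures are illustrative.
@dataclass
class ComputeTier:
    name: str
    typical_latency_ms: float
    capacity_available: bool

TIERS = [
    ComputeTier("customer-premises", 1.0, False),
    ComputeTier("street-cabinet", 2.0, True),
    ComputeTier("cord-edge", 5.0, True),
    ComputeTier("core-cloud", 25.0, True),
]

def place_vnf(vnf_name: str, max_latency_ms: float) -> str:
    """Prefer the most central (cheapest) tier that still meets the VNF's latency bound."""
    # Iterate from the core cloud outwards towards the edge, stopping at the first fit.
    for tier in reversed(TIERS):
        if tier.capacity_available and tier.typical_latency_ms <= max_latency_ms:
            return f"{vnf_name} -> {tier.name}"
    return f"{vnf_name} -> no feasible placement"

print(place_vnf("vFirewall", max_latency_ms=30.0))   # latency-tolerant: lands in the core cloud
print(place_vnf("vRAN-DU", max_latency_ms=3.0))      # latency-sensitive: pushed to the street cabinet
```

The point of the sketch is that once every tier exposes the same compute abstraction, placement becomes a policy decision rather than a bespoke integration per location.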
Completely redesigning processes around network automation is a fairly dramatic proposal, and one that will take time to gain credibility within operators’ teams. That said, if it is done judiciously, with standardised technologies that support openness and interoperability, this approach may well enable the ‘true transformation’ that Westerman had in mind.