Today’s subsea cable network is massive, with close to 400 underwater telecommunications cables in operation around the world. Google, Amazon, and Microsoft are all making significant investments in subsea cable infrastructure projects.
For most organisations, however, the conversation around connectivity usually centres on the cloud: options for adoption, migration, and different hosting configurations. Not on subsea cables or their impact on performance.
Yet subsea cables are just as critical as cloud provider architectures: they carry the bulk of data between international destinations, and cloud services depend on their capacity and health to run efficiently.
Understanding capacity levels and performance challenges starts with understanding how traffic is handled as it enters subsea transmission cables.
Remaining in control
The capacity that cloud providers contract or lease on subsea cables varies, and network performance can also change over time and between regions. Depending on the destination, some cloud service providers hand traffic off to the public Internet early, keeping it close to its origin, while others endeavour to carry traffic on their own networks for as long as possible before handing it off.
Additionally, the growing, criss-crossed web of subsea transmission cables that carry data between international destinations can be difficult to keep track of. Yet domestic transmission networks also make a big difference to the end-to-end path that traffic ultimately takes.
But this is not well understood — it becomes a complicated space where businesses seemingly lack the control, visibility and knowledge of the inner workings necessary to shape outcomes. That is especially striking when you consider that ‘on-land’ cloud customers are usually accustomed to significant control over the setup and configuration of their accounts and instances.
Traffic planning for the future
Users rely entirely on their ISP (Internet service provider) to get their data traffic from point A to point B. The relationships, rules and thresholds that cause traffic to be routed one way or another are only sometimes made explicit.
For example, if an issue occurs, such as degradation or a cable break, it is often unclear which alternate pathway traffic will be routed over. Businesses that rely on their ISP for rerouting rarely specify in advance which alternate route they’d like their traffic to take. As a result, rerouting can have unforeseen customer-facing impacts, such as longer round-trip times, which make cloud services slow and disruptive for customers.
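To make the round-trip-time impact concrete, a back-of-the-envelope sketch: light in optical fibre propagates at roughly 200,000 km/s, so every extra 1,000 km a rerouted path adds costs around 10 ms of round-trip time. The cable lengths below are illustrative assumptions, not real routes:

```python
# Rough propagation-delay model for subsea paths (illustrative only).
# Light in optical fibre travels at roughly 200,000 km/s (about 2/3 of c);
# this ignores equipment, queuing, and terrestrial backhaul delays.
FIBRE_SPEED_KM_PER_S = 200_000

def rtt_ms(path_km: float) -> float:
    """Approximate round-trip propagation delay for a path of given length."""
    one_way_s = path_km / FIBRE_SPEED_KM_PER_S
    return 2 * one_way_s * 1000  # round trip, in milliseconds

# Hypothetical example: a 10,000 km primary cable vs a 16,000 km detour
# taken after a cable break.
primary = rtt_ms(10_000)
detour = rtt_ms(16_000)
print(f"primary ~{primary:.0f} ms, detour ~{detour:.0f} ms, "
      f"added latency ~{detour - primary:.0f} ms")
```

Even before congestion or packet loss, the detour alone adds tens of milliseconds to every round trip — enough to degrade chatty cloud workloads that make many sequential requests.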
To avoid this, it’s imperative that organisations are aware of such potential performance bottlenecks – regardless of whether they are operating in the cloud or in the sea.
In the case of subsea transmission, it pays to understand what happens on that infrastructure — who carries what, when and why. This will help companies identify what they’re buying access to, and whether it is fit for purpose. This is particularly crucial if traffic encounters unusual operating conditions and the rerouting options are limited.
Establishing this visibility is a vital step forward for contingency planning. Businesses should know which ISPs sit behind the cable systems they use and what routing options exist, so they can coordinate the best possible traffic outcomes. Visibility can also reveal whether a different ISP offers better traffic routes, such as spare subsea capacity and redundancy across multiple cable systems, to improve connectivity.
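One way to reason about that redundancy is to check whether supposedly diverse routes actually traverse distinct cable systems — two routes that share a cable share a single point of failure. A minimal sketch, with entirely hypothetical route and cable names:

```python
# Check whether candidate routes are truly redundant: any cable system
# shared by two routes is a single point of failure for both.
# All route and cable names below are hypothetical.
from itertools import combinations

routes = {
    "isp_a_primary": {"cable_x", "cable_y"},
    "isp_a_backup":  {"cable_x", "cable_z"},   # shares cable_x with primary
    "isp_b":         {"cable_w"},
}

def shared_cables(route1: set, route2: set) -> set:
    """Cable systems common to both routes (single points of failure)."""
    return route1 & route2

for (name1, r1), (name2, r2) in combinations(routes.items(), 2):
    overlap = shared_cables(r1, r2)
    if overlap:
        print(f"{name1} and {name2} share: {sorted(overlap)}")
    else:
        print(f"{name1} and {name2} are cable-diverse")
```

In this hypothetical inventory, the ISP A backup is not genuinely redundant with its primary — both depend on the same cable — whereas the ISP B route is fully diverse from both.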
Overall, once businesses understand the path their traffic takes in all possible situations, efficient connectivity decisions can be made – both in the cloud and the sea.