Capacity explores what the release of o1 means for operators in the telco and infrastructure space.
What is OpenAI o1?
Released in mid-September, o1 is a generative AI model from OpenAI with enhanced reasoning capabilities, meaning it is designed to spend more time thinking through its responses to user queries.
Initially released in preview versions — o1-preview and o1-mini — the o1 model can better handle more complex inputs, such as queries related to science and maths, as it is built to think through its answers more carefully.
How is OpenAI o1 different to other AI models?
Traditional large language models are designed to predict the next word in a sentence, subsequently generating an answer to a query.
OpenAI o1, meanwhile, is designed to reason through its responses, “thinking” more carefully about its answer.
Instead of simply predicting the next likely word or number in a sequence, o1 is designed to consider its approach to problem-solving more carefully, which in turn produces more accurate responses, particularly to more complex queries.
O1 can therefore handle difficult maths and science questions and coding queries with greater accuracy.
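From a developer’s point of view the difference shows up mainly in latency and token usage rather than in how the model is called. The snippet below is a minimal sketch, assuming the OpenAI Python SDK; the prompt and the comparison against GPT-4o are illustrative choices, not an official benchmark.

```python
# Minimal sketch: comparing a standard model with the o1-preview reasoning model
# via the OpenAI Python SDK. Assumes the `openai` package is installed and
# OPENAI_API_KEY is set; the prompt is illustrative only.
from openai import OpenAI

client = OpenAI()

question = (
    "A metro fibre ring has six spans each carrying 400Gbps. "
    "If one span fails, how much extra traffic must each remaining span absorb?"
)

# Standard model: generates the answer by predicting the next token directly.
fast = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
)

# Reasoning model: works through hidden "thinking" tokens before answering,
# so the call takes longer and consumes more output tokens.
reasoned = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": question}],
)

print(fast.choices[0].message.content)
print(reasoned.choices[0].message.content)
```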
How will OpenAI o1 impact telcos and infrastructure operators?
Use cases: Improved analytics, network automation
With its advanced reasoning, OpenAI’s o1 can enhance use cases and applications where telcos and infrastructure operators are already using AI systems to augment their operations.
Those reasoning capabilities could improve basic applications such as customer-facing chatbots, providing more detailed and accurate responses to customer queries because the model works through its answers more carefully.
The new OpenAI model could also enhance analytics efforts, such as traffic routing and anomaly detection across network infrastructure.
Cache Merrill, founder of software development company Zibtek, told Capacity that applying o1 to analytics could significantly cut both downtime and operating costs.
Merrill also suggested that o1 offers greater benefits for network automation, adding: “Providers can also move towards self-managed operation with AI-enabled automation. Human intervention in fault diagnostics and network repair, especially for future 5G networks, would be low.”
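As a concrete illustration of the kind of fault-diagnostics workflow Merrill describes, the sketch below passes a handful of alarm records to a reasoning model and asks for a root cause and remediation plan. The alarm format, prompt and model name are assumptions made for illustration, not a documented telco integration.

```python
# Hedged sketch: reasoning-model-assisted fault diagnostics for a NOC workflow.
# The alarm records, prompt and model name are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()

alarms = [
    "2024-09-20T03:12:04Z CELL-4411 RRC setup success rate dropped from 99.2% to 71.5%",
    "2024-09-20T03:12:09Z CELL-4411 backhaul link BH-07 packet loss at 4.8%",
    "2024-09-20T03:13:30Z CELL-4412 neighbour handover failures rising sharply",
]

prompt = (
    "You are assisting a network operations engineer. Given the alarms below, "
    "identify the most likely root cause, list the checks needed to confirm it, "
    "and propose a remediation plan:\n" + "\n".join(alarms)
)

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": prompt}],
)

# The model's diagnosis could then be routed to an engineer for approval,
# keeping human intervention low rather than removing it entirely.
print(response.choices[0].message.content)
```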
The Zibtek founder also opined that o1 could be used to improve network designs.
“Why would telcos dissociate themselves from o1 when they can customise designs? Customising existing models is feasible for particular problems, but the ability to understand higher-complexity datasets within a single model like o1 greatly reduces the need for several models or additional tuning,” Merrill said.
“It is about ensuring that their systems can cope, in the future, with the increasingly complex and heterogeneous networks that are emerging as more devices connect to them.”
Increased power demands
OpenAI has published little information on the underlying mechanisms behind the o1 model. Traditionally, the ChatGPT developer unveils a model, then a few weeks later publishes a model card which provides a more detailed insight into an AI system’s inner workings.
Without concrete information on its mechanics, one insight operators can draw from o1 is that its improved reasoning is likely to come at the cost of requiring more computing power to run.
Tom Traugott, SVP of strategy at EdgeCore Digital Infrastructure, explained to Capacity that o1’s usage pricing, which is six times that of OpenAI’s current flagship GPT-4o, and its longer processing time for queries mean operators may need to devote more resources, in both cost and time, to securing greater computing power.
“The impact may be more so at the inference level, which runs on smaller GPU systems (with greater availability) than the large GPU systems responsible for training the models facing the greatest constraints today,” Traugott said.
“Should o1 find more widespread adoption, though, it absolutely puts greater pressure on aggregate computing capacity needs and, per an interview with Mira Murati, former CTO of OpenAI, adds another vector for growth, where this current scaling paradigm meets a new paradigm of reasoning that will likely find its culmination in the next GPT-5 model. Expect the other foundation and frontier models to follow suit in demonstrating their reasoning capabilities as well — more computing is needed.”
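To put the pricing gap in rough perspective, the sketch below estimates monthly inference spend for a chatbot-style workload using the “six times GPT-4o” figure cited above. The per-token rate, token counts and query volume are illustrative assumptions, not published prices.

```python
# Back-of-the-envelope cost sketch based on the "six times GPT-4o" pricing figure.
# All rates and volumes below are illustrative assumptions, not published prices.
GPT4O_RATE_PER_1K_TOKENS = 0.01                        # hypothetical blended USD rate
O1_RATE_PER_1K_TOKENS = GPT4O_RATE_PER_1K_TOKENS * 6   # per the article's figure

TOKENS_PER_QUERY = 1_500       # assumed prompt plus visible answer
REASONING_OVERHEAD = 2.0       # assumed extra hidden reasoning tokens for o1
QUERIES_PER_MONTH = 1_000_000  # assumed customer-chatbot volume

gpt4o_cost = QUERIES_PER_MONTH * TOKENS_PER_QUERY / 1_000 * GPT4O_RATE_PER_1K_TOKENS
o1_cost = (QUERIES_PER_MONTH * TOKENS_PER_QUERY * REASONING_OVERHEAD / 1_000
           * O1_RATE_PER_1K_TOKENS)

print(f"GPT-4o: ${gpt4o_cost:,.0f}/month  o1: ${o1_cost:,.0f}/month")
# With these assumptions the reasoning model is roughly 12x more expensive per month,
# which is why wider adoption would add pressure on compute capacity and budgets.
```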
Chris Dukich, founder and CEO of SaaS company Display NOW, agreed that the processing power demands brought on by o1’s improved reasoning would push current facilities to their limits.
“All things considered, the risk-benefit ratio may be worth it if it allows telecommunications companies to manage their operations and carry out more sophisticated and productive tasks with less staff presence,” Dukich said. “In the coming years, advancements in AI hardware and algorithm optimisation for specific tasks might also partially reduce some of these energy requirements and thus make models such as o1 more plausible.”
How to access OpenAI o1?
Telcos and network operators looking to get their hands on o1 can try the model through ChatGPT Plus, Team, and Enterprise subscriptions. However, users are limited in how many messages they can send to the model.
The new reasoning model is also coming to ChatGPT Edu, a specially designed version of OpenAI’s chatbot built for schools and universities.
O1 will eventually be brought to the free version of ChatGPT, though OpenAI offered no time frame for when this will happen.