What You Missed at Mobile World Congress 2024

Private networks were “in,” while network slicing seemed to be a solution looking for a problem...
The leading service providers were at the show in force, showcasing proof-of-concepts, collaborations, and technical innovations. Their stands seemed to be the busiest, with an increased focus on leveraging partner ecosystems to create new revenue-generating services. Many exhibitors showcased uses of Open APIs to increase agility and automation without disrupting existing processes (a topic Pipeline is also covering in this issue).

The opening keynote, attended by the King of Spain and industry leaders from China Mobile, Vodafone, Orange, and Telefonica, was very bullish about the market, but also acknowledged that global telecom revenues were flat while capital expenditures (CAPEX) were growing to support a predicted 4x increase in traffic between 2023 and 2030. Sour grapes were kept to a minimum, with only one brief reference to the fact that a mere six key players account for 50 percent of traffic. Part of the reason for this was perhaps the bullish attitude toward the use and adoption of AI within the industry to reduce operating costs (OPEX) and drive new revenues, particularly in the B2B and enterprise segments. Private networks were “in,” while network slicing seemed to be a solution looking for a problem to solve, according to industry innovators such as Anritsu. If there was one thing that stood out from everything at the show, it was that AI, and Generative AI (GenAI) in particular, had arrived and is here to stay. If you were a major player and didn’t have an AI story, you didn’t have a story, period. Sandeep Raina, Vice President and Global Head of Marketing at MYCOM OSI, said, “There is a lot of talk on Generative AI. However, there will be a lot more automation in the industry in the coming years and AI will have to merge with automation.”

Figure 2 - Opening keynote
Well-attended kickoff to Mobile World Congress (MWC) 2024

The industry has fully embraced GenAI, including in the BSS layer to improve CX and throughout the technology stack, with some very innovative uses of GenAI in network operations, the OSS layer, and beyond. For example, BeyondNow demonstrated use of its Wave AI product for CX, operations, and partner ecosystem management at MWC 2024. The use of AI/ML, or predictive AI, is so widespread as to now seem unremarkable. Interestingly, the deeper you dig, the more it appears these technologies are often being used to assist and augment human effort rather than replace it through automation.

The biggest debate of the show was around the use of Large Language Models (LLMs). Do you build, buy, or borrow your LLM? Almost every PoC we saw seemed to put a new spin on an existing LLM. Anritsu’s example would change its response depending on the user with which it engaged. For example, language would be modified depending on whether a question was being asked by a CTO or a network engineer. Subex exhibited an alternative model that allowed the user to select different AI agents for different tasks across the network, and even allowed you to build a squad of agents to tackle tasks in numbers with different AI inputs.

Ericsson was keen to highlight the importance of partner ecosystems, particularly within Open RAN, having recently announced the launch of their rApps ecosystem program with 17 members, supporting 20 rApps delivered by 11 application developers. They linked their recent developments in Service Management and Orchestration (SMO) to their major Open RAN deal with AT&T in the U.S. Ericsson was also very actively discussing their approach to GenAI in telecoms, and Elena Fersman, VP and Head of Global AI Accelerator, even suggested that every service provider would eventually build its own LLM. Although this seems unlikely due to the huge costs and time required, there does appear to be a strong belief in a “multi-LLM” future that incorporates private telco LLMs, open LLMs, and industry-specific LLMs, with a recognition that all three may give very different answers to the same query.

The industry also voiced concerns about the bandwidth requirements, power utilization, and costs associated with GenAI queries. Several companies quoted figures such as $1.50 per typical query, while the environmental impact per query was estimated at one liter of water and the electricity a light bulb uses in two hours. Other concerns, raised in demonstrations from leading AI providers such as Optiva, included the use of AI by “bad actors,” ethical issues, “AI hallucinations,” and “model poisoning.”

AI hallucinations refer to GenAI systems creating undesirable but seemingly valid content as a result of being trained on erroneous, corrupted, or misleading data. For example, conspiracy theories linking 5G to COVID are still being floated. Although they are untrue, the fact that these claims remain widespread presents a danger of that misinformation being repeated through GenAI hallucinations.

An excellent example was provided by Chrisaman Sood, Product Manager at Optiva. If every customer asks your AI billing system to “Tell me a joke,” over time, unless you take the necessary precautions, your system will start to think its role is to be a comedian rather than to help with billing queries. That is an example of model poisoning.
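Sood’s scenario can be sketched in a few lines of Python. The `NaiveBillingAssistant` class and its keyword rules below are purely hypothetical, not Optiva’s implementation; the sketch simply shows how a system that retrains on its own unfiltered interaction logs can drift away from its intended role.

```python
from collections import Counter

class NaiveBillingAssistant:
    """Toy assistant whose 'model' is just a prior over intents,
    re-estimated from its own interaction logs with no safeguards."""

    def __init__(self):
        # Start with a sensible prior: this is a billing assistant.
        self.intent_counts = Counter({"billing": 100, "chitchat": 1})

    def classify(self, query: str) -> str:
        # Oversimplified routing: keyword match first, falling back
        # to whichever intent dominates the training data.
        if "bill" in query.lower() or "invoice" in query.lower():
            return "billing"
        if "joke" in query.lower():
            return "chitchat"
        return self.intent_counts.most_common(1)[0][0]

    def retrain_on_log(self, logged_queries):
        # The poisoning step: every logged query feeds straight back
        # into the intent prior without filtering or weighting.
        for q in logged_queries:
            self.intent_counts[self.classify(q)] += 1

assistant = NaiveBillingAssistant()
print(assistant.classify("What's new?"))  # "billing" -- the prior wins

# Flood the log with the same off-topic request, as in Sood's example.
assistant.retrain_on_log(["Tell me a joke"] * 500)
print(assistant.classify("What's new?"))  # "chitchat" -- the prior is poisoned
```

The “necessary precautions” Sood alludes to would break this feedback loop, for example by filtering off-topic queries out of the training log or capping how much any single intent can shift the model.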
