Intel
Nokia selects Intel’s Justin Hotard as new CEO to increase growth in IP networking and data center connections
Nokia today announced that its President and Chief Executive Officer, Pekka Lundmark, will be replaced on April 1st by 50-year-old Justin Hotard, who currently leads Intel’s Data Center & AI Group. Hotard joins Nokia with more than 25 years of experience at global technology companies, driving innovation and technology leadership and delivering revenue growth. Prior to Intel, he held several leadership roles at large technology companies, including Hewlett Packard Enterprise (more below) and NCR Corporation. He will be based at Nokia’s headquarters in Espoo, Finland.
“Leading Nokia has been a privilege. When I returned to Nokia in 2020, I called it a homecoming, and it really has felt like one. I am proud of the work our brilliant team has done in re-establishing our technology leadership and competitiveness, and positioning the company for growth in data centers, private wireless and industrial edge, and defense. This is the right time for me to move on. I have led listed companies for more than two decades and although I do not plan to stop working, I want to move on from executive roles to work in a different capacity, such as a board professional. Justin is a great choice for Nokia and I look forward to working with him on a smooth transition,” said Nokia’s President and CEO Pekka Lundmark.
“I am delighted to welcome Justin to Nokia. He has a strong track record of accelerating growth in technology companies along with vast expertise in AI and data center markets, which are critical areas for Nokia’s future growth. In his previous positions, and throughout the selection process, he has demonstrated the strategic insight, vision, leadership and value creation mindset required for a CEO of Nokia,” said Sari Baldauf, Chair of Nokia’s Board of Directors.
“I am honored by the opportunity to lead Nokia, a global leader in connectivity with a unique heritage in technology. Networks are the backbone that power society and businesses, and enable generational technology shifts like the one we are currently experiencing in AI. I am excited to get started and look forward to continuing Nokia’s transformation journey to maximize its potential for growth and value creation,” said Justin Hotard.
Rumors started to swirl in September, after a report in the Financial Times newspaper, that Nokia was seeking a replacement for Lundmark, who by then had been its CEO for about four years. Nokia said in a statement: “The Board fully supports President and CEO Pekka Lundmark and is not undergoing a process to replace him.”
→ How seriously should the FT and all other media now take the company’s public statements?
Lundmark had told Nokia’s board months earlier, in the spring of 2024, that he would consider stepping down once “the repositioning of the business was in a more advanced stage.” This author certainly does not think that “advanced stage” has been reached yet. “The current CEO has not got to grips with the growth problem. The top line has not increased since the Alcatel-Lucent takeover,” said a shareholder.
Nokia’s share price is now only 10% of its peak $260 billion valuation in 2000 (a roughly 90% decline over almost 25 years; so much for buy and hold). However, the stock has gained almost 40% in the last year on operational improvements and signs that the construction of AI data centers could be a significant growth opportunity for Nokia’s network infrastructure business group, its second-biggest unit. In his leaving video, Lundmark drew attention to the sales growth rate of 9% for the final quarter of 2024 and the operating margin of 19.1%, Nokia’s best in a decade.
…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………
“We’re at the start of a super cycle with AI,” said Hotard. “One that I see [as] very similar to the one we saw a couple of decades ago with the internet. In these major market transitions new winners are created and incumbents either reinvent themselves or fail… My focus will be to accelerate the transformation journey.”
During the Nokia conference call Q&A, Hotard was defensive in response to questions about his plans for the company. He did say that networking comes second only to compute hardware in share of AI data center investment, and that he looks forward to the completion of the $2.1 billion Infinera acquisition. “The hundreds of billions of dollars being invested in data centers today from a technology standpoint of course start with compute accelerators and GPUs [graphics processing units], but the second thing is the network and the connectivity, and further it is not just the connectivity inside the data center but the connectivity across data centers,” said Hotard on today’s call. That implies Nokia will place increased emphasis on optical networking within and between data centers.
Indeed, IP networking and data center connectivity are becoming a fast-growing part of Nokia’s network infrastructure unit that provides the connectivity inside those data centers, recently landing deals with Microsoft and UK-headquartered Nscale. The hoped-for return is an additional €1 billion ($1.03 billion) in revenues by 2028. The Infinera acquisition, announced in June 2024 and expected to be finalized in the next few weeks, is also partly a data center play, bolstering Nokia’s portfolio of optical networking assets.
On Nokia’s Q3 2024 earnings call in October, Lundmark said, “Across Nokia, we are investing to create new growth opportunities outside of our traditional communications service provider market. We see a significant opportunity to expand our presence in the data center market and are investing to broaden our product portfolio in IP Networks to better address this. There will be others as well, but that will be the number one. This is obviously in the very core of our strategy.” At that time, Lundmark said Nokia’s telco total addressable market (TAM) is €84 billion, while its data center total addressable market is currently at €20 billion. “I mean, telco TAM will never be a significant growth market,” he added to no one’s surprise.
On today’s call, Lundmark drew attention to its revival and strength when asked to compare Nokia with Ericsson. “Of course, we respect them as a competitor in radio networks. We are slightly behind them in terms of market share, but we have had great deal momentum recently – you’ll have seen some of the deal announcements – and, very importantly, the feedback we are receiving from our customers is that we are now fully competitive in terms of our portfolio.”
Justin Hotard, slated to be Nokia’s new boss on April 1, 2025
During the past year, Hotard has headed up Intel’s Data Center & AI Group. Before that, he spent nine years at HPE, leading its High Performance Computing, AI & Labs group.
What will a Hotard-led Nokia’s commitment be to a shrinking market for mobile networks? Revenues generated by the global mobile market are estimated to have fallen about $5 billion last year, to $35 billion, after a $5 billion drop in 2023, according to Omdia (an Informa-owned market research firm), as network operators cut spending. An exit, however, would rid Nokia of a business still responsible for 40% of total sales, just as smaller rivals appear to be struggling.
Nokia seems to value Hotard’s U.S. background and experience in the data center and AI market. “If you look at the market and look at the world, the U.S. is an important market for us and so that is one element we consider – experience from that technology business there,” said Sari Baldauf, Nokia’s chair, when asked on today’s call why an external candidate was preferred to an internal appointment.
……………………………………………………………………………………………………………………………………………………………………………………………………………..
Telecoms.com’s Scott Bicheno offered his opinion: “Hotard reckons Nokia’s telco customer base gives it an advantage when it comes to AI datacenters, which are increasingly built near to sources of power, often in remote locations. So, while this does feel like a promising strategic pivot for Nokia, those telco customers might be worried about mobile being deprioritized as a consequence. The appointment of someone from a company with an appalling track record in that sector is unlikely to ease that concern.”
……………………………………………………………………………………………………………………………………………………………………………………………………………..
References:
https://www.telecoms.com/ai/nokia-signals-a-move-away-from-mobile-and-europe-with-new-ceo
https://www.ft.com/content/5f086aee-91b9-421a-9f32-c33e67b1af7f
Initiatives and Analysis: Nokia focuses on data centers as its top growth market
Nokia to acquire Infinera for $2.3 billion, boosting optical network division size by 75%
vRAN market disappoints – just like OpenRAN and mobile 5G
Most wireless network operators are not convinced virtual RAN (vRAN) [1.] is worth the effort to deploy. Omdia, an analyst firm owned by Informa, put vRAN’s share of the total market for RAN baseband products at just 10% in 2023. It is growing slowly, with 20% market share forecast by 2028, but it is far from being the default RAN architectural choice.
Among the highly touted benefits of virtualization is the ability for RAN developers to exploit the much bigger economies of scale found in the mainstream IT market. “General-purpose technology will eventually have so much investment in it that it will outpace custom silicon,” said Sachin Katti, the general manager of Intel’s network and edge group, during a previous Light Reading interview.
Note 1. The key feature of vRAN is the virtualization of RAN functions, allowing operators to perform baseband operations on standard servers instead of dedicated hardware. The Asia Pacific region is currently leading in vRAN adoption due to rapid 5G deployment in countries like China, South Korea, and Japan. Samsung has established a strong presence as a supplier of vRAN equipment and software.
The whole market for RAN products generated revenues of just $40 billion in 2023. Intel alone made $54.2 billion in sales that same year. Yet Huawei, Ericsson and Nokia, the big players in RAN base station technology, have continued to miniaturize and advance their custom chips. Nokia boasts 5-nanometer chips in its latest products and last year lured Derek Urbaniak, a highly regarded semiconductor expert, from Ericsson in a sign it wants to play an even bigger role in custom chip development.
Ericsson collaborates closely with Intel on virtual RAN, and yet it has repeatedly insisted its application-specific integrated circuits (ASICs) perform better than Intel’s CPUs in 5G. One year ago, Michael Begley, Ericsson’s head of RAN compute, told Light Reading that “purpose-built hardware will continue to be the most energy-efficient and compact hardware for radio site deployments going forward.”
Intel previously suffered delays when moving to smaller process designs, and there is gloominess about its prospects, as noted in several IEEE Techblog posts. Intel suffered a $17 billion loss for the quarter ending in September, after reporting a small $300 million profit a year before. Sales fell 6% year-over-year, to $13.3 billion, over the same period.
Unfortunately, for telcos eyeing virtualization, Intel is all they really have. Its dominance of the small market for virtual RAN has not been weakened in the last couple of years, leaving operators with no viable alternatives. This was made apparent in a recent blog post by Ericsson, which listed Intel as the only commercial-grade chip solution for virtual RAN. AMD was at the “active engagement” stage, said Ericsson last November. Processors based on the blueprints of ARM, a UK-based chip designer that licenses its designs, were not even mentioned.
The same economies-of-scale case for virtual RAN is now being made about Nvidia and its graphics processing units (GPUs), which Nvidia boss Jensen Huang seems eager to pitch as a kind of general-purpose AI successor to more humdrum CPUs. If the RAN market is too small, and its developers must ride in the slipstream of a much bigger market, Nvidia and its burgeoning ecosystem may seem a safer bet than Intel. And the GPU maker already has a RAN pitch, including a lineup of Arm-based CPUs to host some of the RAN software.
Semiconductor-related economies of scale should not be the sole benefit of a virtual RAN. “With a lot of the work that’s been done around orchestration, you can deploy new software to hundreds of sites in a couple of hours in a way that was not feasible before,” said Alok Shah of Samsung Electronics. Architecturally, virtualization should allow an operator to host its RAN on the same cloud-computing infrastructure used for other telco and IT workloads. With a purpose-built RAN, an operator would be using multiple infrastructure platforms.
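The orchestration advantage Shah describes is essentially a fan-out problem: one controller pushes the same software build to many sites concurrently instead of visiting them one at a time. A minimal Python sketch of the idea (the site names and the deploy step are hypothetical placeholders, not any real orchestration API):

```python
from concurrent.futures import ThreadPoolExecutor

def deploy_to_site(site: str, build: str) -> str:
    # Placeholder for the real work: push the image, restart vDU pods, health-check.
    return f"{site}: {build} deployed"

def rollout(sites: list, build: str, parallelism: int = 32) -> list:
    # Fan the same build out to many sites at once; wall-clock time is bounded
    # by the slowest site, not by the number of sites.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(lambda s: deploy_to_site(s, build), sites))

results = rollout([f"site-{i:03d}" for i in range(300)], "vdu-22.4.1")
print(len(results))  # 300 sites handled in one orchestrated pass
```

With a sequential, site-by-site process, the same rollout would take hundreds of times longer, which is the gap orchestration closes.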
In telecom markets without much fiber or fronthaul infrastructure there is unlikely to be much centralization of RAN compute. This necessitates the deployment of servers at mast sites, where it is hard to see them being used for anything but the RAN. Even if a company wanted to host other applications at a mobile site, the processing power of Sapphire Rapids, the latest Intel generation, is fully consumed by the functions of the virtual distributed unit (vDU), according to Shah. “I would say the vDU function is kind of swallowing up the whole server,” he said.
Indeed, for all the talk of total cost of ownership (TCO) savings, some deployments of Sapphire Rapids have even had to feature two servers at a site to support a full 5G service, according to Paul Miller, the chief technology officer of Wind River, which provides the cloud-computing platform for Samsung’s virtual RAN in Verizon’s network. Miller expects that to change with Granite Rapids, the forthcoming successor technology to Sapphire Rapids. “It’s going to be a bit of a sea change for the network from a TCO perspective – that you may be able to get things that took two servers previously, like low-band and mid-band 5G, onto a single server,” he said.
Samsung’s Shah is hopeful Granite Rapids will even free up compute capacity for other types of applications. “We’ll have to see how that plays out, but the opportunity is there, I think, in the future, as we get to that next generation of compute.” In the absence of many alternative processor platforms, especially for telcos rejecting the inline virtual RAN approach, Intel will be under pressure to make sure the journey for Granite Rapids is less turbulent than it sounds.
Another challenge is mobile backhaul, which is expected to limit the growth of the vRAN industry. Backhaul connectivity is widely used in wireless networks to transfer a signal from a remote cell site to the core network (typically at the edge of the Internet). The two main methods of mobile backhaul implementation are fiber-based and wireless point-to-point backhaul.
The pace of data delivery suffers in small cell networks with poor mobile network connectivity, and data management becomes increasingly important as small cells are deployed for connectivity. Poor data management is mostly attributable to increased data traffic across small cells, which also raises data security concerns. vRAN solutions promise improved network resiliency and utilization, faster network routing, and better-optimized network architecture to meet the diverse 5G requirements of enterprise customers.
References:
https://www.lightreading.com/5g/virtual-ran-still-seems-to-be-not-worth-the-effort
https://www.ericsson.com/en/blog/north-america/2024/open-ran-progress-report
https://www.sdxcentral.com/5g/ran/definitions/vran/
LightCounting: Open RAN/vRAN market is pausing and regrouping
Dell’Oro: Private 5G ecosystem is evolving; vRAN gaining momentum; skepticism increasing
Huawei CTO Says No to Open RAN and Virtualized RAN
Heavy Reading: How network operators will deploy Open RAN and cloud native vRAN
CES 2025: Intel announces edge compute processors with AI inferencing capabilities
At CES 2025 today, Intel unveiled the new Intel® Core™ Ultra (Series 2) processors, designed to revolutionize mobile computing for businesses, creators and enthusiast gamers. Intel said “the new processors feature cutting-edge AI enhancements, increased efficiency and performance improvements.”
“Intel Core Ultra processors are setting new benchmarks for mobile AI and graphics, once again demonstrating the superior performance and efficiency of the x86 architecture as we shape the future of personal computing,” said Michelle Johnston Holthaus, interim co-CEO of Intel and CEO of Intel Products. “The strength of our AI PC product innovation, combined with the breadth and scale of our hardware and software ecosystem across all segments of the market, is empowering users with a better experience in the traditional ways we use PCs for productivity, creation and communication, while opening up completely new capabilities with over 400 AI features. And Intel is only going to continue bolstering its AI PC product portfolio in 2025 and beyond as we sample our lead Intel 18A product to customers now ahead of volume production in the second half of 2025.”
Intel also announced new edge computing processors, designed to provide scalability and superior performance across diverse use cases. Intel Core Ultra processors were said to deliver remarkable power efficiency, making them ideal for AI workloads at the edge, with performance gains that surpass competing products in critical metrics like media processing and AI analytics. Those edge processors are targeted at compute servers running in hospitals, retail stores, factory floors and other “edge” locations that sit between big data centers and end-user devices. Such locations are becoming increasingly important to telecom network operators hoping to sell AI capabilities, private wireless networks, security offerings and other services to those enterprise locations.
Intel edge products launching today at CES include:
- Intel® Core™ Ultra 200S/H/U series processors (code-named Arrow Lake).
- Intel® Core™ 200S/H series processors (code-named Bartlett Lake S and Raptor Lake H Refresh).
- Intel® Core™ 100U series processors (code-named Raptor Lake U Refresh).
- Intel® Core™ 3 processor and Intel® Processor (code-named Twin Lake).
“Intel has been powering the edge for decades,” said Michael Masci, VP of product management in Intel’s edge computing group, during a media presentation last week. According to Masci, AI is beginning to expand the edge opportunity through inferencing [1.]. “Companies want more local compute. AI inference at the edge is the next major hotbed for AI innovation and implementation,” he added.
Note 1. Inferencing in AI refers to the process whereby a trained AI model makes predictions or decisions on new data, as opposed to the training phase, in which the model learns from historical data. It’s essentially AI’s ability to apply learned knowledge to fresh inputs in real time. Edge computing plays a critical role in inferencing because it brings compute closer to users. That lowers latency (much faster AI responses) and can also reduce bandwidth costs and improve privacy and security.
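At its simplest, inference is just applying parameters that were fixed during training to fresh inputs. A toy sketch of one inference pass for a single linear neuron (the weights below are illustrative, not from any real model):

```python
def predict(weights, bias, features):
    """One inference pass: weighted sum of the inputs plus a bias term."""
    return sum(w * x for w, x in zip(weights, features)) + bias

# "Training" produced these parameters earlier; inference only reuses them.
weights = [0.4, -0.2, 0.1]
bias = 0.05

# A fresh input arriving at the edge device.
sample = [1.0, 2.0, 3.0]
score = predict(weights, bias, sample)
print(round(score, 2))  # prints 0.35
```

Training adjusts `weights` and `bias` over many examples in a data center; inference at the edge runs only this cheap forward pass, which is why it fits on much smaller hardware.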
Editor’s Note: Intel’s edge compute business – the one pursuing AI inferencing – is in its Client Computing Group (CCG) business unit. Intel’s chips for telecom operators reside in its NEX business unit.
Intel’s Masci specifically called out Nvidia’s GPU chips, claiming Intel’s new silicon lineup delivers up to 5.8x faster performance and better performance per watt. Indeed, Intel claims its “Core™ Ultra 7 processor uses about one-third fewer TOPS (Trillion Operations Per Second) than Nvidia’s Jetson AGX Orin, but beats its competitor with media performance that is up to 5.6 times faster, video analytics performance that is up to 3.4x faster and performance per watt per dollar up to 8.2x better.”
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
However, Nvidia has been supporting AI inference on its chips for quite some time. Company officials last month confirmed that 40% of Nvidia’s revenues come from AI inference, rather than AI training efforts in big data centers. Colette Kress, Nvidia Executive Vice President and Chief Financial Officer, said, “Our architecture allows an end-to-end scaling approach for them to do whatever they need to in the world of accelerated computing and AI. And we’re a very strong candidate to help them, not only with that infrastructure, but also with the software.”
“Inference is super hard. And the reason why inference is super hard is because you need the accuracy to be high on the one hand. You need the throughput to be high so that the cost could be as low as possible, but you also need the latency to be low,” explained Nvidia CEO Jensen Huang during his company’s recent quarterly conference call.
“Our hopes and dreams is that someday, the world does a ton of inference. And that’s when AI has really succeeded, right? It’s when every single company is doing inference inside their companies for the marketing department and forecasting department and supply chain group and their legal department and engineering, and coding, of course. And so we hope that every company is doing inference 24/7.”
……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
Sadly for its many fans (including this author), Intel continues to struggle in both data center processors and AI/GPU chips. The Wall Street Journal recently reported that “Intel’s perennial also-ran, AMD, actually eclipsed Intel’s revenue for chips that go into data centers. This is a stunning reversal: In 2022, Intel’s data-center revenue was three times that of AMD.”
Even worse for Intel, more and more of the chips that go into data centers are GPUs and Intel has minuscule market share of these high-end chips. GPUs are used for training and delivering AI. The WSJ notes that many of the companies spending the most on building out new data centers are switching to chips that have nothing to do with Intel’s proprietary architecture, known as x86, and are instead using a combination of a competing architecture from ARM and their own custom chip designs. For example, more than half of the CPUs Amazon has installed in its data centers over the past two years were its own custom chips based on ARM’s architecture, Dave Brown, Amazon vice president of compute and networking services, said recently.
This displacement of Intel is being repeated all across the big providers and users of cloud computing services. Microsoft and Google have also built their own custom, ARM-based CPUs for their respective clouds. In every case, companies are moving in this direction because of the kind of customization, speed and efficiency that custom silicon supports.
References:
https://www.intel.com/content/www/us/en/newsroom/news/2025-ces-client-computing-news.html#gs.j0qbu4
https://www.intel.com/content/www/us/en/newsroom/news/2025-ces-client-computing-news.html#gs.j0qdhd
https://www.wsj.com/tech/intel-microchip-competitors-challenges-562a42e3
Massive layoffs and cost cutting will decimate Intel’s already tiny 5G network business
WSJ: China’s Telecom Carriers to Phase Out Foreign Chips; Intel & AMD will lose out
The case for and against AI-RAN technology using Nvidia or AMD GPUs
Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers
FT: Nvidia invested $1bn in AI start-ups in 2024
AI winner Nvidia faces competition with new super chip delayed
AI Frenzy Backgrounder; Review of AI Products and Services from Nvidia, Microsoft, Amazon, Google and Meta; Conclusions