A chip startup takes on Nvidia and Broadcom single-handedly.
A team set on disrupting Nvidia’s competitive moat
Everyone knows Nvidia for its GPUs, but as we’ve noted in numerous reports, this industry giant has also built a formidable presence in networking. Driven by robust demand for connectivity in AI data centers, Nvidia’s networking revenue surged 162% year-over-year to $8.19 billion in the third quarter of fiscal 2026 – a figure that far exceeds the sum it paid to acquire Mellanox back in the day. NVLink has rightfully become a core competitive moat for Nvidia.
Against the backdrop of slowing gains in single-chip performance, connectivity demands for scale-up and scale-out architectures are set to remain the dominant trend for the foreseeable future. Put simply, any company capable of building a UALink switch with a high port count and high aggregate bandwidth – one that can rival Nvidia’s NVSwitch memory fabric and NVLink ports – is poised to reap substantial rewards.
Upscale AI is precisely the company founded with this vision. Its founder, Rajiv Khemani, is a serial entrepreneur whose name resonates throughout the chip industry.
According to reports, Rajiv Khemani once served as a Senior Product Manager at Sun Microsystems, where he was responsible for SPARC servers and the Solaris operating system. He also held positions at NetApp and Intel, overseeing strategy and marketing across multiple business units at both companies.
In 2003, he became Chief Operating Officer (COO) of Cavium Networks, a chip startup. Founded in 2000, the company got its start building MIPS-based processors, but later broke into the Arm server market with the launch of its ThunderX server CPU in 2014, rising to prominence in the industry as a result. In the same year, Cavium acquired XPliant, an emerging developer of application-specific integrated circuits (ASICs) for programmable switches. In June 2016, Cavium acquired QLogic’s storage business in a $1.36 billion deal. In November 2017, semiconductor giant Marvell agreed to acquire Cavium for roughly $6 billion, marking its formal foray into the data center sector.
Khemani departed Cavium in 2015 to become Co-founder and Chief Executive Officer (CEO) of Innovium, a firm that designed streamlined, high-bandwidth ASICs for hyperscale Ethernet switches under its TeraLynx product line. In August 2021, Marvell acquired Innovium in a $1.1 billion transaction to further advance its ambitions in data center silicon.
In February 2022, Rajiv Khemani and Barun Kar co-founded Auradine, a company focused on developing AI and blockchain compute and networking chips built on 4-nanometer (4nm) and 3-nanometer (3nm) process technologies. Auradine completed two funding rounds totaling $161 million by 2024, before securing an additional $153 million in a Series C round in April 2025.
In May 2024, Khemani and Kar decided to spin off a portion of Auradine’s networking business to launch a new company named Upscale AI, aiming to target the AI interconnect market more directly – a market projected to reach $100 billion by the end of the decade. From its inception, the company has garnered support from industry players including Intel, AMD and Qualcomm.
Kar, the other co-founder of both Auradine and Upscale AI, previously served as Senior Vice President of Engineering and was a founding team member at Palo Alto Networks, a leading maker of firewalls and other cybersecurity products. Before that, as far back as the dot-com era, he held the position of Senior Systems Manager at Juniper Networks, where he oversaw the company’s Ethernet router and switch product lines.
Upscale AI says its core strategy is to integrate GPUs, AI accelerators, memory, storage and networking into a single synchronous AI engine. The central pillar of that strategy is SkyHammer, a solution built from the ground up for scale-up networking. By shortening the physical distance between accelerators, memory and storage, SkyHammer enables a unified rack architecture and turns the entire technology stack into a single, synchronous system.
Upscale AI’s platform is built on open standards and open-source technologies, and the company is actively advancing their development – including ESUN, Ultra Accelerator Link (UALink), Ultra Ethernet (UEC), SONiC and the Switch Abstraction Interface (SAI). It also actively participates in the Ultra Accelerator Link Consortium, Ultra Ethernet Consortium, Open Compute Project (OCP) and SONiC Foundation.
A chip optimized specifically for scale-up networking
As noted above, AI clusters consist of multiple racks, each of which can house dozens of servers. These servers exchange data with one another via switches integrated into the host rack. The technical characteristics of rack switches differ significantly from those of other networking equipment – such as devices used to connect disparate racks.
SkyHammer, the product currently under development by Upscale AI, is a chip optimized specifically for scale-up networking – i.e., connecting hardware components within a single rack – that delivers deterministic latency. This means the time required for data to travel between rack components can be predicted with a high degree of precision.
As is well known, AI models process data through computations that must be executed in a fixed sequence, so a delay in one computation cascades into delays across every subsequent step. When network latency can be predicted in advance, unplanned transmission delays can be avoided, keeping AI workloads from stalling.
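To see why deterministic latency matters so much for these sequential workloads, consider the toy simulation below. It is our own illustration rather than anything published by Upscale AI: a pipeline of dependent steps is timed once over a fabric with fixed link latency and once over a fabric with random jitter, and because each step waits for the previous transfer, the jitter accumulates across the whole run.

```python
import random

STEPS = 1000          # dependent compute/communication steps in one iteration
COMPUTE_US = 50.0     # fixed compute time per step, in microseconds
LINK_US = 5.0         # nominal network latency per transfer, in microseconds

def run(jitter_us: float, seed: int = 0) -> float:
    """Total runtime when every step must wait for the previous transfer."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(STEPS):
        # Deterministic fabric: latency is exactly LINK_US every time.
        # Jittery fabric: occasional slow transfers stall every later step.
        latency = LINK_US + rng.uniform(0.0, jitter_us)
        total += COMPUTE_US + latency
    return total

deterministic = run(jitter_us=0.0)
jittery = run(jitter_us=20.0)   # up to 20 us of extra, unpredictable delay per hop
print(f"deterministic fabric: {deterministic / 1000:.1f} ms")
print(f"jittery fabric:       {jittery / 1000:.1f} ms "
      f"({100 * (jittery / deterministic - 1):.0f}% slower)")
```

Even though the jitter per hop is small compared with the compute time, it is paid on every step, which is exactly the kind of slowdown a predictable fabric avoids.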
In an interview with The Next Platform, Upscale AI outlined its core objectives:
First and foremost, today there is truly only one practical option for scale-up AI networking: NVSwitch. This is also one of the reasons behind Nvidia’s tremendous success amid the GenAI wave – alongside other factors, of course. Upscale AI, however, aims to offer customers more viable alternatives.
"I have long believed that heterogeneous computing and heterogeneous networking are the future of the industry,” an Upscale AI executive told The Next Platform. “Customers should have the freedom to choose and flexibly combine a variety of resources, because every organization has its own unique needs – and such combinations can be optimized to align with those specific requirements.”
Against this backdrop, Upscale AI is committed to democratizing networking for AI computing, as it firmly believes in the untapped potential of heterogeneous computing.
"We recognize that Nvidia has exceptional technology and is an outstanding innovator in the industry,” Upscale AI stressed. “But looking ahead, as the pace of AI innovation continues to accelerate, we don’t believe any single company can deliver all the technologies that AI demands – especially when it comes to shaping future industry trends. This inevitably means that different vendors will provide distinct types of computing solutions for the AI ecosystem.”
Upscale AI also argues that the PCI-Express (PCIe) switching mechanism works well in scenarios where a small number of CPUs communicate with a small number of GPUs, the GPUs have relatively low memory bandwidth, and CPUs and GPUs are closely colocated within a server node. When Upscale AI launched its operations in early 2024, the UALink Consortium and the ESUN standard proposed by Meta Platforms had yet to be established – but the concept of heterogeneous infrastructure had long been in existence. Its aim is not merely to build a single infrastructure capable of handling all tasks, but to create one that better aligns with the workflows of different tasks.
“A single GPU may not be able to handle all computing tasks in the future, and heterogeneous computing will become the mainstream,” Upscale AI explained. “Certain CPUs, GPUs or XPUs may specialize in encoding and prefill, while others may excel at decode. But what if Vendor X specializes in prefill and Vendor Y in decode? Switching has now become the core of this entire system; it connects all these functions together, and must ensure fair connectivity, as well as scalability and reliability. Reliability is paramount, because any operation you perform will directly impact all computations within the system.”
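As a rough sketch of that disaggregated picture – the pool names and routing below are hypothetical and not Upscale AI’s design – a scheduler would steer each phase of a request to the accelerator pool best suited to it, with the scale-up switch carrying the intermediate state between pools:

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt_tokens: int   # processed in the compute-bound prefill phase
    output_tokens: int   # generated one at a time in the bandwidth-bound decode phase

# Hypothetical accelerator pools from two different vendors: one tuned for
# prefill, the other tuned for decode.
POOLS = {"prefill": "vendor_x_xpu_pool", "decode": "vendor_y_xpu_pool"}

def schedule(req: Request) -> list[tuple[str, int, str]]:
    """Map each phase of a request onto the pool that handles it best."""
    return [
        ("prefill", req.prompt_tokens, POOLS["prefill"]),
        # Between the phases, the intermediate state (the KV cache) has to cross
        # the scale-up switch, which is why fair, reliable switching sits at the
        # center of this kind of heterogeneous system.
        ("decode", req.output_tokens, POOLS["decode"]),
    ]

for phase, tokens, pool in schedule(Request(prompt_tokens=2048, output_tokens=256)):
    print(f"{phase:7s} {tokens:5d} tokens -> {pool}")
```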
In its interview with The Next Platform, Upscale AI scoffed at the practice of building UALink, ESUN or SUE switches by merely dressing up PCI-Express switch ASICs or tinkering with Ethernet switch ASICs.
"I see many approaches that amount to retrofitting PCI-Express – taking PCIe substrates and trying to repurpose them for other tasks, or other vendors attempting to modify Ethernet for similar ends. But the critical truth about memory architectures is that they cannot be retrofitted this way. Such practices fail to deliver truly optimized, scale-up-only stacks for customers, because they ultimately boil down to taking a base substrate and trying to strip out unnecessary components. Anyone with long experience in the ASIC industry knows: you can remove numerous modules, but the fundamental building blocks remain unchanged. Every ASIC has its immutable core DNA."
Thus, Khemani and Kar set out to build a memory-fabric ASIC from the ground up, purpose-designed for this role and able to keep pace with updates to memory-semantic protocols.
While the company has not disclosed detailed ASIC specifications, Upscale AI says SkyHammer will generate real-time telemetry – technical data about the system that is crucial not only for troubleshooting but also for configuration. Administrators can analyze telemetry on the status of network devices to find ways to optimize their performance.
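Upscale AI has not published a telemetry schema, so the sketch below is purely illustrative: the counter names and thresholds are hypothetical, but they show the kind of per-port data an administrator might watch to catch congestion or failing links before they slow a training job.

```python
# Hypothetical per-port telemetry sample; SkyHammer's real counters and field
# names have not been disclosed.
sample = {
    "port": 17,
    "tx_bytes": 8_123_456_789,
    "rx_bytes": 8_098_765_432,
    "buffer_occupancy_pct": 72.5,
    "crc_errors": 0,
    "p99_latency_ns": 310,
}

# Illustrative alert rules an operator might layer on top of the raw counters.
ALERT_RULES = {
    "buffer_occupancy_pct": lambda v: v > 85.0,  # congestion building up
    "crc_errors":           lambda v: v > 0,     # link-health problem
    "p99_latency_ns":       lambda v: v > 500,   # latency no longer predictable
}

alerts = [name for name, rule in ALERT_RULES.items() if rule(sample[name])]
status = "OK" if not alerts else "ALERT: " + ", ".join(alerts)
print(f"port {sample['port']}: {status}")
```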
SkyHammer is also compatible with open networking standards including UALink and ESUN, both of which target scale-up connectivity between accelerators: ESUN builds directly on Ethernet, while UALink defines its own memory-semantic protocol that can also be tunneled over Ethernet. ESUN is the newer of the two efforts, launched in late 2025 with support from Nvidia, Broadcom and other major industry players.
SkyHammer will additionally support Ultra Ethernet (UEC). While ESUN focuses on connecting components within a rack, UEC specializes in linking different racks together and is designed to scale to AI clusters with up to one million endpoints.
"We are developing a high-radix switch and an ASIC specifically designed to make all this possible," Upscale AI emphasized.
Closing Thoughts
NVLink is a high-speed interconnect technology developed by Nvidia that pools the memory and compute resources of multiple GPUs and presents them as a single logical resource.
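From software, this pooling shows up as direct peer-to-peer access between GPUs. The short PyTorch sketch below is a generic illustration, not an NVLink-specific API: it checks whether two GPUs can address each other’s memory directly (which rides NVLink when the link is present) and copies a tensor between them without staging through the host.

```python
import torch

if torch.cuda.device_count() >= 2:
    # True when GPU 0 can load/store directly into GPU 1's memory,
    # for example over NVLink or PCIe peer-to-peer.
    p2p = torch.cuda.can_device_access_peer(0, 1)
    print(f"direct peer access between GPU 0 and GPU 1: {p2p}")

    x = torch.randn(1024, 1024, device="cuda:0")
    # Device-to-device copy; over NVLink this bypasses host memory entirely.
    y = x.to("cuda:1", non_blocking=True)
    torch.cuda.synchronize()
    print(f"copied {y.numel() * y.element_size() / 2**20:.0f} MiB GPU-to-GPU")
else:
    print("fewer than two GPUs visible; nothing to demonstrate")
```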
First introduced in 2016, the technology has since prompted companies including AMD and Cisco to develop alternative solutions. Their efforts to date – such as UALink and ESUN – remain works in progress, however.
AMD’s first rack-scale systems built on UALink are scheduled to launch later this year, but they will tunnel the protocol over Ethernet. A dedicated UALink switch that can compete with Nvidia’s NVSwitch is still not available. Upscale AI aims to change that with its custom SkyHammer ASIC.
Barun Kar, CEO of Upscale AI, told The Register: “We are not retrofitting traditional systems, but reimagining what true scalability means in AI networking. At its core, this architecture is inherently engineered for scale-up. It is designed exclusively for AI workloads, and serves no other purpose at all.”
Although we do not have sufficient information to compare the chip with NVSwitch 6 or Broadcom’s Tomahawk 6, Kar told The Register that it adopts a memory-semantic, load-store network architecture and will feature collective-communication acceleration capabilities similar to Nvidia’s SHARP.
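To give a sense of why in-network collective offload matters, the back-of-the-envelope comparison below uses standard all-reduce accounting rather than any SkyHammer-specific figures: with a SHARP-style reduction performed in the switch, each accelerator injects the payload roughly once instead of nearly twice.

```python
def bytes_sent_per_gpu(payload_bytes: float, n_gpus: int) -> dict[str, float]:
    """Approximate bytes each GPU injects for one all-reduce of `payload_bytes`."""
    return {
        # Classic ring all-reduce: each GPU sends about 2*(N-1)/N of the payload.
        "ring_allreduce": 2 * (n_gpus - 1) / n_gpus * payload_bytes,
        # Switch-offloaded (SHARP-style) reduction: each GPU sends the payload up
        # once and receives the fully reduced result back once.
        "in_network_reduction": payload_bytes,
    }

payload = 1 * 2**30  # 1 GiB of gradients
for scheme, sent in bytes_sent_per_gpu(payload, n_gpus=72).items():
    print(f"{scheme:21s}: {sent / 2**30:.2f} GiB sent per GPU")
```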
The platform will also support both UALink and its competing ESUN protocol simultaneously.
To enable large-scale management of the entire system, Upscale AI is working to expand support for the SONiC network operating system (NOS). SONiC is an open-source NOS originally developed by Microsoft that is widely deployed and highly favored by hyperscale customers.
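For context, SONiC keeps its configuration in a Redis-backed CONFIG_DB whose tables are commonly written out as JSON. The fragment below, built with plain Python, sketches a single entry in the PORT table; the field names follow SONiC conventions, but the values are made up and nothing here is specific to Upscale AI’s switch.

```python
import json

# Illustrative SONiC CONFIG_DB fragment: one entry in the PORT table.
# Field names (lanes, speed, mtu, admin_status) follow SONiC conventions;
# the concrete values are invented for this example.
config_db_fragment = {
    "PORT": {
        "Ethernet0": {
            "lanes": "0,1,2,3",
            "speed": "100000",      # 100 GbE
            "mtu": "9100",
            "admin_status": "up",
        }
    }
}

print(json.dumps(config_db_fragment, indent=2))
```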
Currently, Upscale AI is primarily focused on scale-up networking products, but in the long term it plans to expand its product line to more traditional scale-out switches. Kar told The Register that the company is still evaluating various options for this and may leverage third-party intellectual property from partners.
"We have established partnerships with hyperscale data center operators and GPU vendors, who have already validated the architecture. That part of the work is complete. Now, the focus of this funding is to turn the innovation into real-world deployment," Kar said.
Rajiv Khemani, Executive Chairman of Upscale AI, also stated, "Upscale AI has achieved extraordinary momentum in an exceptionally short time. The market demands open, scalable AI networking solutions, and Upscale AI, with its unique advantages, can help customers break through current networking limitations."