1. Unconventional AI
Country: USA | Funding: $475M
Unconventional AI aims to create a new, energy-efficient AI computing platform inspired by neuroscience. It develops silicon circuits that exhibit brain-like nonlinear dynamics to create a new foundation for intelligence. Rather than simulating physical systems in software, as is done today, the startup intends to make the hardware isomorphic to the network, running neural networks directly in the physics of the device. The company claims this approach will enable capabilities significantly exceeding existing models while consuming only a fraction of the energy.
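To make the contrast concrete, here is a minimal Python sketch of the software-simulation baseline: a small brain-like nonlinear dynamical system stepped numerically on a digital machine. The dynamics (a leaky tanh recurrence) and all sizes are illustrative assumptions, not Unconventional AI's actual design; the point is that every timestep costs digital arithmetic that a hardware-isomorphic circuit would get from physics for free.

```python
# Sketch of simulating nonlinear dynamics in software (the baseline the
# company wants to eliminate). The leaky-tanh recurrence is a generic
# illustrative choice, not Unconventional AI's circuit model.
import numpy as np

rng = np.random.default_rng(0)
n = 100
W = rng.normal(scale=0.1, size=(n, n))  # recurrent coupling weights
x = rng.normal(size=n)                   # state of the dynamical system
dt = 0.01                                # integration timestep

for _ in range(1000):                    # every step costs digital FLOPs...
    x = x + dt * (-x + np.tanh(W @ x))   # ...that analog hardware evolves natively

print(x[:5])
```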
2. Cerebras
Country: USA | Funding: $1.8B
Cerebras builds the Wafer-Scale Engine (WSE), the largest chip ever made for deep learning. The chip spans an entire silicon wafer, giving it an enormous number of transistors and cores. Most of its memory is on-chip SRAM, with little or no external memory such as DRAM/HBM (in contrast to SambaNova). This limits flexibility and model scaling, especially for very large models, but the wafer's internal interconnect offers extremely high bandwidth, enabling very fast data transfer between components on the single wafer. Each Cerebras system is built around one such chip and requires substantial cooling capacity. The WSE powers the Cerebras CS-X, an AI supercomputer that needs less networking and a smaller footprint than a GPU-based cluster and eliminates programming complexity by presenting a single logical device at every scale. Cerebras also offers a cloud training/inference service for LLM companies like OpenAI.
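A back-of-the-envelope sketch of the scaling limitation mentioned above: checking whether a model's raw weights fit in on-chip SRAM. The 44 GB capacity is the publicly quoted WSE-3 figure, used here as an illustrative assumption; activations, KV cache, and optimizer state are ignored.

```python
# Does a model's weights fit in on-chip SRAM? 44 GB is the quoted WSE-3
# SRAM capacity; treat it as illustrative rather than authoritative.
WSE_SRAM_BYTES = 44e9

def fits_on_chip(n_params: float, bytes_per_param: int = 2) -> bool:
    """Check whether raw fp16 weights alone fit in on-chip SRAM."""
    return n_params * bytes_per_param <= WSE_SRAM_BYTES

for n_params in (7e9, 70e9, 405e9):
    weights_gb = n_params * 2 / 1e9
    print(f"{n_params/1e9:.0f}B params -> {weights_gb:.0f} GB fp16 weights, "
          f"fits on chip: {fits_on_chip(n_params)}")
```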
3. Groq
Country: USA | Funding: $1.8B
Groq creates hardware AI accelerators for large language models that improve performance and reduce power consumption compared with classic GPU accelerators. Its Language Processing Unit (LPU) chip uses exclusively on-chip SRAM (no external DRAM/HBM modules), so scaling to larger workloads requires ganging many chips together. The architecture eliminates sources of unpredictable behavior (branch prediction, caches, etc.), and that determinism lets the compiler schedule execution precisely. Groq delivers its best performance at small and medium batch/task sizes, especially when the model fits within a given configuration. Its software-defined, single-core architecture removes traditional software complexity, while continuous, token-based execution delivers consistent performance. Groq also offers a cloud AI inference platform built for developers, available in public, private, or co-cloud instances.
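The same kind of arithmetic shows why an SRAM-only design must gang chips together. The 230 MB per-chip figure below is the commonly cited GroqChip SRAM capacity, treated here as an assumption; the sketch counts the chips needed just to hold int8 weights, ignoring activations.

```python
# Rough estimate of how many SRAM-only accelerators a model needs.
# 230 MB per chip is the commonly quoted GroqChip figure (assumption).
import math

SRAM_PER_CHIP_BYTES = 230e6

def chips_needed(n_params: float, bytes_per_param: int = 1) -> int:
    """Chips required to hold int8 weights alone, ignoring activations."""
    return math.ceil(n_params * bytes_per_param / SRAM_PER_CHIP_BYTES)

for n_params in (7e9, 70e9):
    print(f"{n_params/1e9:.0f}B params -> ~{chips_needed(n_params)} chips")
```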
4. SambaNova Systems
Country: USA | Funding: $1.1B
SambaNova Systems produces chips based on its Reconfigurable Dataflow Unit (RDU) architecture, which allows computation and memory to be distributed more flexibly and adapted to different types of models. The architecture supports a "composition of experts" scheme, in which several specialized models each handle particular parts of the data or task. The chip has a multi-level memory hierarchy and is optimized for large open models such as Llama and DeepSeek. It uses low-precision formats to accelerate computation and targets an optimal performance-per-watt ratio, which makes the RDU highly energy efficient and enables a more compact hardware infrastructure. The company offers a full stack: not just hardware but also a software platform, a cloud service, and on-premises deployment. SambaNova Systems is focused on open technologies and standards.
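A minimal sketch of the "composition of experts" idea: a router dispatches each request to a specialized model. The expert functions and keyword routing below are hypothetical placeholders for illustration, not SambaNova's actual software stack.

```python
# Toy "composition of experts": route each request to a specialist model.
# Experts and the keyword router are hypothetical stand-ins.

def code_expert(prompt: str) -> str:
    return f"[code model] {prompt}"

def legal_expert(prompt: str) -> str:
    return f"[legal model] {prompt}"

def general_expert(prompt: str) -> str:
    return f"[general model] {prompt}"

EXPERTS = {"code": code_expert, "legal": legal_expert}

def route(prompt: str) -> str:
    """Pick an expert by keyword match (stand-in for a learned router)."""
    for topic, expert in EXPERTS.items():
        if topic in prompt.lower():
            return expert(prompt)
    return general_expert(prompt)

print(route("Write code to sort a list"))
print(route("Review this legal contract clause"))
```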
5. Tenstorrent
Country: Canada | Funding: $1B
Tenstorrent develops its Tensix AI processors around the open RISC-V architecture, delivering specialized, silicon-proven solutions for both AI training and inference. Building on these processors, the company also manufactures PCIe boards for desktop computers, as well as desktop workstations and servers for corporations and research institutes. These provide superior performance per dollar for developers who need to run, test, and develop AI models, as well as port and develop libraries for high-performance computing (HPC). The company also offers its own cloud service where these servers can be rented.
6. Lightmatter
Country: USA | Funding: $822M
Lightmatter uses photonic computing to accelerate computation and communication between chips in cloud AI systems. The company's first two products, the Envise compute chip and the Passage interconnect, utilize both photons and electrons to improve operational efficiency. They combine the computational tasks that electrons excel at (such as memory) with those that light excels at (such as the massive matrix multiplications in deep learning models). Photonics enables multiple computations to be performed simultaneously because data arrives encoded on different colors of light. This increases the number of operations per unit area and reuses existing hardware, improving energy efficiency. Passage exploits the bandwidth of light to connect processors, much as fiber-optic cables use light to carry data over long distances, allowing separate chips to function as a single processor.
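A toy NumPy model of the wavelength parallelism described above: each optical "color" carries an independent matrix-vector product through the same weights at the same time, modeled here as one batched multiply. The sizes and number of wavelengths are arbitrary assumptions for illustration.

```python
# Toy model of wavelength-division parallelism: each "color" carries an
# independent computation through one shared photonic weight mesh.
import numpy as np

n_wavelengths = 8                            # colors sharing one photonic core
weights = np.random.randn(64, 64)            # one weight matrix in the mesh
inputs = np.random.randn(n_wavelengths, 64)  # one input vector per color

# One pass through the "photonic core": all colors multiplied in parallel.
outputs = inputs @ weights.T
print(outputs.shape)  # (8, 64): eight matrix-vector products per pass
```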
7. Cambricon
Country: China | Funding: $761.9M
Cambricon is often called "China's Nvidia," as it builds core processor chips and general-purpose graphics processing units for artificial intelligence (AI). The company's core business is designing AI chips for cloud servers, edge devices, terminals, and data-centre clusters, but it also produces AI chips for smartphones. Like Nvidia, Cambricon designs its chips in-house but outsources wafer manufacturing to foundries. Cambricon relies heavily on government-affiliated clients, which is why it was added to the US trade blacklist, restricting it from acquiring US core technology and from using the foundry services of TSMC. Cambricon therefore relies on mainland foundries such as Semiconductor Manufacturing International Corporation (SMIC) and Hua Hong Semiconductor.
8. Graphcore
Country: UK | Funding: $692M
Graphcore is a semiconductor company that develops accelerators for AI and machine learning. It aims to make a massively parallel Intelligence Processing Unit (IPU) that holds the complete machine learning model inside the processor.
9. Celestial AI
Country: USA | Funding: $588.9M
Celestial AI develops optical interconnect technology for compute-to-compute, compute-to-memory and on-chip data transmission.
10. Rebellions.ai
Country: South Korea | Funding: $457.7M
Rebellions.ai builds AI accelerators by bridging the gap between underlying silicon architectures and deep learning algorithms.
11. SiMa.ai
Country: USA | Funding: $355M
SiMa.ai is building an ultra-low-power software and chip solution for machine learning at the edge.
12. Hailo
Country: Israel | Funding: $343.9M
Hailo has developed a specialized deep learning processor that delivers the performance of a data center-class computer to edge devices.
13. MatX
Country: USA | Funding: $300M
MatX is an AI chip startup designing silicon purpose-built for large language models.
14. Blaize
Country: USA | Funding: $272M
Blaize is an AI computing platforms company that develops products for the automotive, smart vision, and enterprise computing markets.
15. Enfabrica
Country: USA | Funding: $240M
Enfabrica develops networking hardware to drive AI workloads.
16. Kneron
Country: USA | Funding: $212M
Kneron develops application-specific integrated circuits (ASICs) and software that provide artificial intelligence-based tools.
17. Axelera
Country: Netherlands | Funding: $203.2M
Axelera is developing AI acceleration cards and systems for use cases such as security, retail, and robotics, which it plans to sell through partners in the business-to-business edge computing and Internet of Things sectors.
18. Mythic
Country: USA | Funding: $164.7M
Mythic goes beyond conventional digital architectures, memory, and calculation elements – rethinking everything from the ground up: from transistors and physics, through circuits and systems, up to software and AI algorithms.
19. EnCharge AI
Country: USA | Funding: $162.9M
EnCharge AI develops analog in-memory computing chips designed to deliver efficient AI computing from the edge to the cloud.
20. EdgeQ
Country: USA | Funding: $126M
EdgeQ intends to fuse AI compute and 5G within a single chip. The company is pioneering converged connectivity and AI that is fully software-customizable and programmable.