Top 28 Startups Developing AI Hardware in the USA
Mar 11, 2026
1
Funding: $200M
Eridu is creating a new generation of networking equipment for AI data centers. The company has reimagined computer networks from the ground up, starting with network chips designed specifically for AI data flows between boxes in a data center. Eridu's systems are designed to replace multiple layers of optical connections with on-chip communication, and the company is also working on a switch that moves more functions onto the chip itself. This saves enormous amounts of energy and makes the network far more reliable, since optics are the least reliable part of the network.
2
Funding: $154M
Ethernovia makes Ethernet-based processors that collect data from sensors scattered around a system, such as an autonomous vehicle, and quickly move it to a central computer.
3
Funding: $2.8B
Cerebras is building the Wafer-Scale Engine (WSE), the largest chip ever built for deep learning systems. The chip is the size of an entire silicon wafer, with an enormous number of transistors and cores. Most of the chip's memory is SRAM, with little or no external memory such as DRAM or HBM (in contrast to SambaNova). This imposes some limits on flexibility of use and on scaling, especially for very large models, but the internal interconnect has very high bandwidth, enabling extremely fast data transfer between components on the wafer. Each Cerebras system is built around one of these chips and requires significant cooling capacity. The WSE powers the Cerebras CS-X, an AI supercomputer that needs less networking and a smaller footprint than a GPU-based cluster and eliminates programming complexity by presenting a single logical device at every scale. Cerebras also provides a cloud training and inference service for LLM companies such as OpenAI.
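A quick way to see the scaling trade-off mentioned above is a back-of-envelope memory check: do a model's weights fit entirely in on-chip SRAM? The SRAM budget below is an illustrative assumption, not a published spec.

```python
# Back-of-envelope check for an SRAM-only, wafer-scale design:
# can a model's weights live entirely on-chip?

def fits_on_chip(n_params: float, bytes_per_param: int, sram_gb: float) -> bool:
    """True if the model's weights fit within the on-chip SRAM budget."""
    return n_params * bytes_per_param <= sram_gb * 1e9

sram_gb = 44.0  # assumed on-chip SRAM budget in GB, for illustration only

for n_params in (7e9, 70e9):
    ok = fits_on_chip(n_params, bytes_per_param=2, sram_gb=sram_gb)  # fp16 weights
    print(f"{n_params / 1e9:.0f}B params fit on chip: {ok}")
```

Under these assumed numbers a 7B-parameter model fits while a 70B model does not, which is exactly the "limitations when the model is very large" point above.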
4
Funding: $625M
MatX is an AI chip startup that designs chips that support large language models. The company’s goal is to make its processors 10 times better at training LLMs and delivering results than Nvidia’s GPUs.
5
Funding: $305.1M
Positron is an AI infrastructure company that designs and supplies purpose-built hardware and software systems for transformer-based models.
6
Funding: $335M
Ricursive is developing an AI system that designs chips, placing microcomponents on the die to meet performance, power-consumption, and other design requirements. Its model is built around a self-improving "reward signal" that evaluates each design's quality and improves the agent after every chip it completes. Ricursive's platform also uses LLMs for design validation. Nvidia is the startup's main investor and primary target customer, along with AMD and Intel.
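The reward-signal idea described above can be sketched as a toy scoring function over a placement, which an RL agent would then learn to maximize. The layout, net list, and wirelength/power weights here are all hypothetical.

```python
# Toy placement reward: penalize total wirelength and estimated power.
# An RL placement agent would be trained to maximize this score.

from math import hypot

def wirelength(placement, nets):
    """Sum of Euclidean lengths of two-pin nets between placed components."""
    total = 0.0
    for a, b in nets:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += hypot(xa - xb, ya - yb)
    return total

def reward(placement, nets, power, w_wire=1.0, w_power=0.5):
    """Higher is better: short wires and low estimated power win."""
    return -(w_wire * wirelength(placement, nets) + w_power * power)

placement = {"alu": (0, 0), "sram": (3, 4), "io": (6, 0)}  # hypothetical blocks
nets = [("alu", "sram"), ("sram", "io")]
print(reward(placement, nets, power=2.0))  # → -11.0
```

A real system scores far more than wirelength and power, but the shape is the same: a single scalar the agent can improve against after each completed design.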
7
Funding: $1.8B
Groq builds hardware AI accelerators for large language models that improve performance and reduce power consumption compared with classic GPU accelerators. Its Language Processing Unit (LPU) chip uses exclusively on-chip SRAM (no external DRAM/HBM modules), so scaling to larger workloads means installing multiple chips. The architecture minimizes sources of unpredictable behavior (branch prediction, caches, and so on). Groq delivers its best performance at small and medium batch sizes, especially when the model fits into the configuration. The software-defined, single-core architecture removes traditional software complexity, while continuous, token-based execution delivers consistent performance. Groq licenses its technology to Nvidia. The company also provides a cloud AI inference platform built for developers, available in public, private, or co-cloud instances.
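One consequence of the SRAM-only design noted above can be shown with simple arithmetic: a model that exceeds one chip's capacity must be sharded across several chips. The per-chip SRAM figure below is an assumption for illustration only.

```python
# How many SRAM-only chips are needed just to hold a model's weights?

import math

def chips_needed(n_params: float, bytes_per_param: float,
                 sram_mb_per_chip: float) -> int:
    """Minimum chip count whose combined SRAM holds the weights."""
    weight_bytes = n_params * bytes_per_param
    return math.ceil(weight_bytes / (sram_mb_per_chip * 1e6))

# A 7B-parameter model at int8, on chips with an assumed 230 MB of SRAM each:
print(chips_needed(7e9, bytes_per_param=1, sram_mb_per_chip=230))  # → 31
```

That multi-chip requirement is the flip side of the latency benefit: keeping every weight in SRAM avoids external-memory round trips, at the cost of rack-level scale-out.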
8
Funding: $1.1B
SambaNova Systems produces chips based on its Reconfigurable Dataflow Unit (RDU) architecture, which distributes computation and memory more flexibly and adapts to different types of models. The architecture supports a "composition of experts" mechanism, in which several specialized models each handle particular parts of the data or task. The chip has multi-level memory and is optimized for large open models such as Llama and DeepSeek. It uses low-precision formats to accelerate computation and aims for an optimal ratio of performance and throughput per watt, which makes the RDU very energy efficient and enables a more compact hardware infrastructure. The company offers a full stack: not just hardware but also a software platform, a cloud service, and on-premises deployment. SambaNova Systems focuses on open technologies and standards.
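The "composition of experts" mechanism can be sketched minimally: a router dispatches each request to a specialized model. The experts and the keyword rule below are hypothetical stand-ins; a real system would use a learned router.

```python
# Minimal composition-of-experts sketch: route each prompt to one of
# several specialized models. Experts and routing rule are hypothetical.

def expert_code(prompt: str) -> str:
    return "code-expert: " + prompt

def expert_general(prompt: str) -> str:
    return "general-expert: " + prompt

EXPERTS = {"code": expert_code, "general": expert_general}

def route(prompt: str) -> str:
    """Naive keyword router; a production system would learn this mapping."""
    is_code = any(w in prompt.lower() for w in ("def ", "bug", "compile"))
    return EXPERTS["code" if is_code else "general"](prompt)

print(route("Why does this compile error happen?"))
print(route("Summarize this article."))
```

The hardware angle is that a reconfigurable dataflow chip can keep several such experts resident and switch between them cheaply, rather than reloading one monolithic model.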
9
Funding: $822M
Lightmatter uses photonic computing to accelerate computation and communication between chips in cloud AI systems. The company's first two products are Passage chips, which utilize both photons and electrons to improve operational efficiency. They combine the computational tasks that electrons excel at (such as memory) with those that light excels at (such as performing massive matrix multiplications in deep learning models). Photonics enables multiple computations to be performed simultaneously because data arrives as light of different colors. This increases the number of operations per unit area and reuses existing hardware, improving energy efficiency. Passage takes advantage of the bandwidth of light to connect processors, similar to how fiber optic cables use light to transmit data over long distances. This allows disparate chips to function as a single processor.
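The wavelength-parallelism idea above can be illustrated with a pure-Python sketch: if each "color" of light carries an independent multiply-accumulate stream, one optical pass through the same hardware yields as many dot products as there are wavelengths. Channel count and matrices here are arbitrary.

```python
# Simulate wavelength-division parallelism: rows of a matrix-vector
# product are processed in groups, as if each group shared one optical
# pass on different wavelengths.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def wdm_matvec(matrix_rows, vector, n_wavelengths):
    """Process rows in groups of n_wavelengths per 'optical pass'."""
    out = []
    for i in range(0, len(matrix_rows), n_wavelengths):
        group = matrix_rows[i:i + n_wavelengths]  # one pass, many colors
        out.extend(dot(row, vector) for row in group)
    return out

rows = [[1, 0], [0, 1], [2, 3], [4, 5]]
print(wdm_matvec(rows, [10, 1], n_wavelengths=2))  # → [10, 1, 23, 45]
```

In silicon the win is that the number of passes, not the number of rows, drives time and energy; doubling the wavelengths halves the passes through the same physical area.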
10
Funding: $588.9M
Celestial AI develops optical interconnect technology for compute-to-compute, compute-to-memory and on-chip data transmission.
11
Funding: $475M
Unconventional AI aims to create a new, energy-efficient AI computing platform inspired by neuroscience. It develops silicon circuits that exhibit brain-like nonlinear dynamics as a new foundation for intelligence. In other words, the startup aims to build hardware that is isomorphic to the network itself, running neural networks directly in physical circuits rather than simulating them in software, as is done today. The company claims this approach will enable capabilities significantly exceeding existing models while consuming only a fraction of the energy.
12
Funding: $355M
SiMa.ai is building an ultra low-power software and chip solution for machine learning at the edge.
13
Funding: $272M
Blaize is an AI computing platforms company that develops products for the automotive, smart vision, and enterprise computing markets.
15
Funding: $212M
Kneron develops an application-specific integrated circuit and software that offers artificial intelligence-based tools.
16
Funding: $164.7M
Mythic goes beyond conventional digital architectures, memory, and calculation elements – rethinking everything from the ground up: from transistors and physics, through circuits and systems, up to software and AI algorithms.
17
Funding: $162.9M
EnCharge AI delivers a battle-tested computing platform to unlock the best AI computing, from the edge to the cloud.
18
Funding: $126M
EdgeQ intends to fuse AI compute and 5G within a single chip. The company is pioneering converged connectivity and AI that is fully software-customizable and programmable.
19
Funding: $125.4M
Etched.ai is an AI chip startup that develops Sohu, a chip designed specifically for running transformer models.
20
Funding: $124M
Esperanto Technologies develops high-performance, energy-efficient computing solutions for artificial intelligence, built on the flexible, open RISC-V instruction set architecture (ISA).
21
Funding: $115M
Luminous is developing a supercomputer for AI on a single chip, which it says will replace 3,000 TPU boards.
22
Funding: $105M
Axiado Corporation is a security processor company redefining hardware root of trust with hardware-based security technologies, including per-system AI.
23
Funding: $54M
Built around a patented Polymorphic Dataflow Architecture and supported by a comprehensive SDK, Kinara's Ara Edge AI processors accelerate and optimize real-time decision making at the edge. The Ara accelerators power smart edge devices and gateways that demand responsive AI computing with high energy efficiency.
24
Funding: $40.2M
Rain Neuromorphics builds artificial intelligence processors, inspired by the brain.
25
Funding: $27.8M
A leading provider of software- and hardware-accelerated solutions for advanced artificial intelligence and machine learning applications.
26
Funding: $14.1M
Extropic is developing a fundamentally new computing device it calls a thermodynamic sampling unit (TSU). The TSU's silicon components harness thermodynamic fluctuations of electrons to model the probabilities of random events in complex systems. According to the developers, it is thousands of times more energy efficient than GPUs for some specialized ML workloads. The chip targets probabilistic computing and energy-based models, with applications in numerical weather forecasting, image generation, and robot trajectory planning. Extropic has also released THRML, software that simulates TSU behavior on a GPU. The first working Extropic chip has already been delivered to several partners, including leading AI research labs and weather-modeling startups.
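The probabilistic workload a TSU targets can be illustrated with a textbook example: Gibbs sampling from a two-spin, Ising-style energy-based model. This is a generic sketch of energy-based sampling, not Extropic's actual device model.

```python
# Gibbs sampling of a two-spin energy model E(s1, s2) = -J * s1 * s2,
# spins in {-1, +1}. A TSU-style device would produce such samples in
# hardware instead of by pseudo-random simulation.

import math
import random

J = 1.0  # coupling strength: aligned spins have lower energy

def gibbs_sample(steps: int, seed: int = 0):
    """Draw one configuration from the Boltzmann distribution of E."""
    rng = random.Random(seed)
    s = [1, 1]
    for _ in range(steps):
        i = rng.randrange(2)
        other = s[1 - i]
        p_up = 1.0 / (1.0 + math.exp(-2.0 * J * other))  # P(s_i = +1 | other)
        s[i] = 1 if rng.random() < p_up else -1
    return s

samples = [tuple(gibbs_sample(50, seed=k)) for k in range(200)]
agree = sum(a == b for a, b in samples) / len(samples)
print(f"fraction of aligned samples: {agree:.2f}")  # roughly e/(e + 1/e) ≈ 0.88
```

On a GPU these conditional flips are simulated serially with pseudo-random numbers; the claimed TSU advantage is that physical noise performs the sampling directly.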
27
Funding: $10M
Ambient Scientific develops ultra-low-power programmable AI processors designed to address the growing demand for inference and training in endpoint, edge, and battery-operated AI devices, giving both connected and unconnected devices and appliances their own personalities. Ambient's AI processor and microcontroller products and software libraries are designed to bring new products to market quickly.
28
Boulder AI helps companies use computer vision and artificial intelligence to solve problems. With experience in AI software development, AI hardware design and manufacturing, and AI vision-system design and implementation, the company provides technical know-how and services to businesses that want to "see."