1
Country: USA | Funding: $150M
Inferact's mission is to accelerate AI progress by making inference cheaper and faster.
2
Country: USA | Funding: $110.5M
Modal provides cloud infrastructure optimized for AI workloads. The company has developed proprietary technologies that accelerate inference of trained AI models, reducing computational costs and the latency between user requests and AI responses. Customers can run their own or open-source AI models using Python syntax while staying in their application code, executing tasks through the Modal SDK. Modal's container engine launches GPUs in less than one second when an inference function is called. The platform also provides code-based model training, batch job execution, running AI-generated code in dynamically defined sandboxes, and high-performance GPU notebooks.
3
Country: USA | Funding: $10M
Mirai lets companies deploy and run models of any architecture directly on user devices. It has developed what it describes as the fastest inference engine built from scratch for Apple devices.
4
Country: USA | Funding: $400M
RadixArk focuses on developing infrastructure for AI inference and training systems.
5
Country: USA | Funding: $327M
Fireworks AI provides a cloud-based platform that enables developers to build, customize and scale AI applications using open-source models. It features a library of ready-made models and enables scaling inference at minimal cost. It also includes coding-assistance tools (IDE assistants, code generation, debugging agents), agent systems for building multi-stage reasoning, planning and execution pipelines, ready-to-use enterprise assistants (for summarization, semantic search, personalized recommendations), and enterprise RAG search for knowledge bases. The company provides an SDK for prototyping, quality assessment, and scaling with confidence.
6
Country: USA | Funding: $5.8M
Luminal develops a compiler for optimizing machine learning models and provides a cloud platform for running the optimized models. Companies upload their Hugging Face models and weights to the Luminal cloud and receive a serverless endpoint (i.e., you simply send a request, for example an image, text, or audio, to a special URL and receive the result). Luminal compiles models into GPU code with zero overhead, and its optimization methods squeeze more computing power out of existing infrastructure. The compiler, which sits between the written code and the GPU hardware, effectively competes with Nvidia's proprietary CUDA stack.
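The serverless-endpoint flow described above can be sketched with nothing but the standard library. The endpoint URL, payload shape, and auth scheme below are hypothetical placeholders, not Luminal's documented API:

```python
import json
import urllib.request

# Hypothetical endpoint URL -- a real deployment would supply its own.
ENDPOINT = "https://example.invalid/v1/models/my-model/infer"


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Package an inference input as a JSON POST to a serverless endpoint.

    The {"input": ...} payload shape and Bearer auth are assumptions for
    illustration only.
    """
    body = json.dumps({"input": prompt}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def run_inference(prompt: str, api_key: str) -> dict:
    """Send the request and decode the JSON result."""
    with urllib.request.urlopen(build_request(prompt, api_key)) as resp:
        return json.load(resp)
```

The point of the pattern is that the caller never touches GPUs or containers; the compiled model sits behind a plain HTTPS URL.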
7
Country: USA | Funding: $337M
Fal is a cloud platform that gives developers a variety of generative models for image, video, 3D and audio in a single location. Specifically, Fal provides an infrastructure layer for multimodal AI for Adobe, Shopify, Canva and Quora. Developers get a library of over 600 ready-to-use models (including Flux, Nano Banana, Kling Video and Veo) via an API, along with H100, H200 and B200 virtual machines and dedicated clusters on Fal compute resources for instant inference and rapid scaling from zero to thousands of GPUs. The platform includes tools for rapid model deployment, testing, production rollout and monitoring. Enterprises are offered private model hosting and model co-development options.
8
Country: USA | Funding: $533.5M
Together provides a cloud platform for developing AI applications, accelerating training, fine-tuning and inference on performance-optimized GPU clusters. The cloud uses proprietary optimization technologies at the inference and training stages (the ATLAS speculator system and the Together Inference Engine) to improve performance and reduce overall costs, and lets inference be run with a single API call. It contains a library of over 200 open-source models for chat, images, video, code and more, allowing migration from proprietary models via OpenAI-compatible APIs. The system lets users fine-tune open-source models and train their own models from the ground up, leveraging research breakthroughs such as the Together Kernel Collection (TKC) for reliable and fast training.
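An "OpenAI-compatible" migration amounts to swapping the base URL (and API key) while keeping the request shape identical. A minimal sketch; the Together base URL should be verified against its current docs, and the model name is purely illustrative:

```python
import json

# Base URLs: the only thing that changes when migrating an OpenAI-style
# integration to an OpenAI-compatible provider. Verify against current docs.
OPENAI_BASE = "https://api.openai.com/v1"
TOGETHER_BASE = "https://api.together.xyz/v1"


def chat_completion_call(base_url: str, model: str, messages: list) -> tuple:
    """Return (url, json_body) for an OpenAI-compatible chat completion.

    The request body is provider-independent; only the URL prefix differs.
    """
    url = f"{base_url}/chat/completions"
    body = json.dumps({"model": model, "messages": messages})
    return url, body
```

With the official `openai` Python client the same switch is typically just `OpenAI(base_url=..., api_key=...)` at construction time, leaving application code untouched.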
9
Country: USA | Funding: $285M
Baseten develops a stack of inference optimization technologies and provides a cloud platform for high-performance inference. The company's customers can deploy open-source, custom and optimized AI models on infrastructure specifically designed for high-performance inference at production scale. Baseten provides pre-optimized model APIs for instant testing of new workloads, product prototyping, and evaluation of the latest AI models; performance gains from custom kernels, the latest decoding methods and advanced caching built into the Baseten inference stack; and scaling of workloads across any region and any cloud.
10
Country: USA | Funding: $100M
Clarifai provides organizations with a cloud platform for developing and monitoring AI models. It offers a unified system for quickly creating, managing and coordinating AI workflows across the entire organization, and lets teams optimize computing resources across providers and control AI expenses more effectively. The company has developed proprietary GPU acceleration technology that delivers an optimal balance of speed and price. Clarifai's compute orchestration system is OpenAI-compatible, so clients can simply redirect existing applications to Clarifai and start saving. The platform also supports other models, such as DeepSeek, Llama, and custom enterprise models. Companies can also deploy MCP servers and edge-optimized models on Clarifai.
11
Country: UK | Funding: $66M
Runware provides an API platform that enables developers to integrate generative AI for creating and transforming image, video and audio content. The startup runs its own AI inference infrastructure for open-source models and provides day-one access (as soon as a model is released, it can run on Runware) at competitive pricing. Costs are reduced by the Sonic Inference Engine, which runs on custom-designed AI hardware. Optimized model loading and unloading lets the service support over 400,000 models and deliver any of them for real-time inference. Runware also partners with third-party AI cloud service providers to automatically re-route workloads when memory capacity increases.
12
Country: USA | Funding: $4.5M
Tensormesh specializes in optimizing inference efficiency for large language models (LLMs) and agentic AI systems.
13
Country: USA
vLLM develops a high-throughput, memory-efficient inference and serving engine for LLMs.
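As a sketch of what using the engine looks like, the following mirrors vLLM's offline-inference quickstart. Actually running it requires `pip install vllm` and a GPU; `facebook/opt-125m` is the small example model from vLLM's own docs:

```python
def main() -> None:
    # Import inside the function so the sketch can be read (and type-checked)
    # without vLLM installed.
    from vllm import LLM, SamplingParams

    # Load a model once; vLLM handles batching and KV-cache paging internally.
    llm = LLM(model="facebook/opt-125m")
    sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    # generate() accepts a batch of prompts and returns one result per prompt.
    outputs = llm.generate(["The capital of France is"], sampling)
    for out in outputs:
        print(out.outputs[0].text)


if __name__ == "__main__":
    main()
```

vLLM can also expose an OpenAI-compatible HTTP server (`vllm serve <model>`), which is the usual path for production serving rather than the offline API above.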