Taalas Raises $169M for Model-Specific AI Inference Chips

Updated on Feb 23, 2026 01:51 PM

TORONTO: Toronto-based chip startup Taalas has raised $169 million in funding to develop model-specific AI processors. The company designs specialized AI inference chips that print AI model components directly into silicon. Investors include Quiet Capital, Fidelity, and semiconductor veteran Pierre Lamond.

Taalas builds model-specific processors by hardwiring portions of an AI model onto the chip itself, trading generality for speed and cost efficiency. The custom silicon is paired with large amounts of on-chip SRAM, which eliminates external memory access during inference and reduces latency.

The company claims its first chip generates 17,000 output tokens per second, 73 times the performance of NVIDIA's H200 accelerator, while using one-tenth the power.

Taalas partners with Taiwan Semiconductor Manufacturing Company (TSMC) to produce chips in approximately two months. The foundry-optimized workflow lets customers move rapidly from model weights to deployable cards. The company's HC1 chip uses TSMC's 6-nanometer process.

The funding round brings Taalas's total capital raised to over $200 million across three rounds. The company has assembled a team of 25 engineers with experience from AMD, Apple, Google, NVIDIA, and Tenstorrent. Taalas was founded in 2023 by Ljubisa Bajic, Lejla Bajic, and Drago Ignjatovic.

"It is the bespoke design for each model that gives the Taalas chip its advantage," said Ljubisa Bajic, CEO of Taalas.

The company's first product runs the open-source Llama 3.1 8B language model. Taalas plans to launch a chip capable of running a 20-billion-parameter Llama model this summer, and targets a processor capable of running frontier models by year-end.

The announcement comes weeks after NVIDIA's $20 billion deal to license IP from Groq, a transaction that reignited investor interest in specialized AI inference technology. Competitors including Cerebras and Groq also focus on custom silicon for inference optimization.

Published on February 20, 2026

Shobhit Kalra

Chief Sub Editor

Shobhit Kalra is the Chief Sub Editor at Tea4Tech, with over 12 years of experience across digital media, digital marketing, and health technology. He is responsible for editorial review, content structuring, and quality control of articles covering software, SaaS products, and developments across the technology ecosystem.
