
Taalas Raises $169M for Model-Specific AI Inference Chips

TORONTO: Toronto-based chip startup Taalas raises $169 million in funding to develop model-specific AI processors. The company builds specialized AI inference chips that print AI model components directly into silicon. Investors include Quiet Capital, Fidelity, and semiconductor veteran Pierre Lamond.

Taalas builds model-specific processors by hardwiring AI model portions onto chips. The approach trades generality for speed and cost efficiency, a direction also seen in startups building AI inference chips for production workloads.

The custom silicon pairs with large amounts of on-chip SRAM, which eliminates external memory access during inference and reduces latency.

The company claims its first chip generates 17,000 output tokens per second, 73 times the performance of NVIDIA’s H200 graphics card, while using one-tenth the power. The rapid rise in high-performance AI workloads is also contributing to a global memory chip shortage and rising DRAM prices as infrastructure demand scales.

Taalas partners with Taiwan Semiconductor Manufacturing Company to produce chips in approximately two months. The foundry-optimized workflow enables customers to move from model weights to deployable cards rapidly. The company’s HC1 chip uses TSMC’s 6-nanometer process.

The funding round brings Taalas’s total capital raised to over $200 million across three rounds. The company has assembled 25 engineers with experience from AMD, Apple, Google, NVIDIA, and Tenstorrent. Taalas was founded in 2023 by Ljubisa Bajic, Lejla Bajic, and Drago Ignjatovic.

“It is the bespoke design for each model that gives the Taalas chip its advantage,” said Ljubisa Bajic, CEO of Taalas.

The company’s first product runs the open-source Llama 3.1 8B language model. Taalas plans to launch a chip capable of running a 20-billion-parameter Llama model this summer, and targets a cutting-edge processor capable of running frontier models by year-end.

The announcement comes weeks after NVIDIA’s $20 billion deal to license IP from Groq, a transaction that reignited investor interest in specialized AI inference technology. Competitors including Cerebras and Groq also focus on custom silicon for inference optimization.

Shobhit Kalra

Shobhit Kalra is the Chief Sub Editor at Tea4Tech, with over 12 years of experience across digital media, digital marketing, and health technology. He is responsible for editorial review, content structuring, and quality control of articles covering software, SaaS products, and developments across the technology ecosystem. At Tea4Tech, Shobhit oversees content accuracy, clarity, and adherence to editorial standards, ensuring published stories meet the newsroom’s guidelines for originality, sourcing, and consistency.
