After Apple’s Dark Sky, This Weather App Predicts Rainbows Too!

New York: The team behind the popular weather app Dark Sky is back with a new product, this time as an independent startup. They have launched Acme Weather, a new weather app that aims to offer more transparent and reliable forecasts, along with some playful and unique alerts, including ones for rainbows and beautiful sunsets.

Unlike traditional weather apps that show only a single prediction, Acme Weather presents multiple possible outcomes. The app combines its own forecast with data from different weather models, satellites, ground stations, and radar systems. It also displays alternative forecast paths as grey lines on graphs, helping users understand uncertainty.
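The idea described above, blending several model runs and surfacing the spread rather than a single number, can be sketched in a few lines (the values and variable names are hypothetical illustrations, not Acme Weather's actual code):

```python
# Illustrative sketch of a multi-outcome forecast: rather than one
# number, combine several model runs and report the spread.
model_runs_temp_c = [3.1, 2.4, 4.0, 2.9, 3.6]  # hypothetical model outputs

best_guess = sum(model_runs_temp_c) / len(model_runs_temp_c)
low, high = min(model_runs_temp_c), max(model_runs_temp_c)

print(f"Best guess: {best_guess:.1f} C")              # the single line most apps show
print(f"Plausible range: {low:.1f} to {high:.1f} C")  # the "grey lines" of uncertainty
```

Showing the low-to-high band alongside the best guess is what lets users see how certain a forecast actually is.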

“Forecasts are often wrong, it’s the weather, right? It’s one of the hardest things to predict,” said Dark Sky co-founder Adam Grossman. “And our biggest pet peeve with a lot of weather apps is you just get their best guess, and you don’t know how certain they are,” he added.

Grossman said seeing alternate outcomes is especially useful during major events like winter storms, where conditions could shift between rain and snow.

Acme Weather is priced at $25 per year, with a two-week free trial. The subscription helps cover the cost of running multiple weather models and building maps in-house. “Most of our time has been spent on building our own forecast, our own data provider, in a way,” Grossman noted.

The app also includes standard alerts for rain, lightning, severe weather, and snow totals, as well as wind, temperature, humidity, cloud cover, and hurricane tracks. Another feature, labeled Community Reports, lets users share information about their current conditions to improve the app’s real-time weather reporting. These features live under a section called Acme Labs.

Acme Weather is currently available on iOS, with an Android version planned.

Canva Expands Creative Suite With Cavalry, MangoAI Acquisitions

Sydney: Design giant Canva has expanded its ambitions beyond static design by acquiring two startups focused on motion and advertising intelligence. The company announced the acquisition of UK-based animation startup Cavalry and US-based AI firm MangoAI, strengthening both its professional creative tools and marketing capabilities.

Cavalry builds 2D motion animation software used across advertising, gaming, marketing, and generative art. Canva plans to integrate Cavalry’s animation technology into Affinity, its professional design suite for photo, vector, and layout editing that it acquired in 2024. Since Canva redesigned Affinity and made it free, the software has crossed five million downloads.

“By bringing Cavalry alongside Affinity, we’re closing that gap and unlocking a complete professional suite spanning photo, vector, layout, and now motion editing,” Canva stated in a blog post.

Cavalry will continue operating as standalone software while its tools are embedded into Canva and Affinity workflows. Cavalry’s customers include Amazon, ByteDance, Google, and OpenAI, according to the company’s website.

The company also acquired MangoAI, a stealth startup focused on using reinforcement learning to improve video ad performance. MangoAI’s technology helps brands test, measure, and refine ads based on real-world results.

MangoAI was founded by former Netflix executives Nirmal Govind and Vinith Misra. Govind will join Canva as its first Chief Algorithms Officer, while Misra will work on strengthening Canva’s marketing and growth products.

The acquisitions build on Canva’s recent push into marketing intelligence, following its purchase of Magicbrief and the launch of Canva Grow last year.

Samsung Builds Multi-Agent AI Ecosystem for Galaxy Devices

SEOUL: Samsung Electronics has announced a multi-agent ecosystem for Galaxy AI, integrating Perplexity as an additional AI agent across upcoming flagship devices. Samsung research claims that nearly 80 percent of users rely on multiple AI agents.

Galaxy AI operates at the system level through framework connections across devices, an approach that reduces app switching and command repetition. Samsung positions Galaxy AI as an orchestrator that brings different forms of AI into a cohesive experience.

Users access Perplexity through the “Hey Plex” voice command or a press of the side button. The agent is embedded across the Samsung Notes, Clock, Gallery, Reminder, and Calendar apps, and third-party app integration enables multi-step workflows without manual app management.

“We’ve been committed to building an open and inclusive integrated AI ecosystem that gives users more choice, flexibility and control to get complex tasks done quickly and easily,” said Won-Joon Choi, President and COO of Mobile eXperience Business at Samsung Electronics.

Samsung announced the details ahead of Galaxy Unpacked on February 25, where it will unveil the Galaxy S26 at a San Francisco event. Supported devices and experiences are expected to roll out soon after the announcement.

The multi-agent expansion lets users choose the experiences that fit their needs and preferences. Samsung says it collaborates with trusted partners while keeping the experience easy to use and accessible, with Galaxy AI working in the background to curate experiences from supporting services.

Samsung integrates AI directly into the operating system rather than into individual apps, so the system understands user context and supports natural interactions that feel seamlessly integrated within the Galaxy environment.

Altman Labels OpenAI’s Camera-Powered Speaker the ‘Coolest Piece of Technology’

SAN FRANCISCO: OpenAI is developing a smart speaker with an integrated camera for a planned February 2027 launch, according to a February 20 report from The Information. The company has assigned over 200 employees to its hardware division.

The speaker features facial recognition, similar to Apple’s Face ID, for purchase authentication. The device observes users and their surroundings to identify objects and suggest actions; an internal presentation shows the speaker recommending an early bedtime before a morning meeting.

OpenAI plans to price the device between $200 and $300. The company is working with former Apple design chief Jony Ive through his design firm LoveFrom, and acquired Ive’s hardware startup io Products in May 2025 for $6.5 billion.

“The coolest piece of technology that the world will have ever seen.”

Sam Altman, CEO of OpenAI

The hardware team includes former Apple employees Tang Tan on hardware, Evans Hankey on industrial design, and Scott Cannon on supply chain, with Ive making final calls on design choices. Some OpenAI employees reportedly complain about LoveFrom’s secrecy and slow pace of design revisions.

OpenAI is also exploring smart glasses for potential 2028 production and a smart-lamp prototype, and may cancel products beyond the speaker if early development falters. The devices division launched nine months ago, following the io Products acquisition.

The speaker will compete with Amazon Alexa and Google Home devices. Previous AI hardware attempts, including the Humane AI Pin and AI pendants, have drawn criticism, and privacy concerns surround devices that ingest intimate user data.

India’s Sarvam AI Challenges ChatGPT with Indus Chatbot

BENGALURU: Sarvam AI has launched the Indus chat app to compete with ChatGPT, entering an AI chatbot market dominated by OpenAI, Anthropic, and Google. The app serves as the interface for Sarvam’s 105-billion-parameter large language model.

Indus currently operates in beta on iOS, Android, and the web. Users type or speak queries and receive responses in text and audio, and the app supports all Indian languages with seamless mid-conversation language switching.

The launch follows Sarvam’s unveiling of its 105B and 30B models at the India AI Impact Summit, where the company also announced enterprise initiatives and hardware plans. Partnerships include HMD, for AI integration into Nokia feature phones, and Bosch, for automotive applications.

“We’re gradually rolling out Indus on limited compute capacity, so you may hit a waitlist at first. We will expand access over time.”

Pratyush Kumar, Co-founder of Sarvam

The app allows users to upload images, PDFs, and documents for AI analysis, and voice features enable verbal interaction with the assistant. Users sign in via phone number, Google or Microsoft account, or Apple ID.

Sarvam raised $41 million from Lightspeed Venture Partners, Peak XV Partners, and Khosla Ventures. Founded in 2023, the company builds large language models tailored for India.

The service currently limits access through a waitlist system as Sarvam expands compute capacity. The company seeks user feedback during the gradual rollout phase.

Tech Mahindra Launches 8B-Parameter Hindi LLM for Education

NEW DELHI: Tech Mahindra launched an 8-billion-parameter Hindi-first large language model on February 20, unveiling it at the India AI Impact Summit 2026. The model was developed in partnership with NVIDIA under Tech Mahindra’s Project Indus initiative.

The model provides AI-driven learning support in natural Hindi for foundational education subjects such as physics. The architecture scales up from Tech Mahindra’s previous 1.2-billion-parameter model, a more than sixfold increase in parameter count.

The development team generated 500 million synthetic tokens using NVIDIA NeMo Data Designer, with the synthetic data addressing gaps in specific Indian languages. The model supports agentic AI functionality for autonomous agents interacting in natural Hindi.

The partnership leverages the NVIDIA NeMo framework for model development and NVIDIA NIM microservices for scalable deployment, infrastructure intended to deliver performance, scalability, and production readiness for educational applications.

“The model aims to democratize foundational education in subjects like physics for millions of students across India,” Tech Mahindra said in a statement.

Project Indus forms part of India’s broader push toward sovereign AI models. The initiative develops models with local languages, cultural context, and educational relevance. Tech Mahindra’s involvement reflects an industry drive to empower Indian learners with AI tools tailored to local needs.

The Hindi-first model is designed to reflect Indian linguistic and cultural context while supporting scalable artificial intelligence infrastructure. The collaboration used NVIDIA AI Enterprise solutions.

Reload Secures $2.3M, Launches Epic to Give AI Agents Shared Memory System

SAN FRANCISCO: Reload announced $2.275 million in funding on Thursday and launched Epic, the AI workforce management platform’s first AI product. Anthemis led the round, with participation from Zeal Capital Partners, Plug and Play, Cohen Circle, Blueprint, and Axiom.

Reload enables organizations to manage AI agents across teams and departments. Companies connect agents regardless of origin, assign roles and permissions, and track work performed. The platform acts as a system of record for AI employees.

The company built Epic to address agents’ memory limitations: coding agents often lose context over time because they lack long-term memory. Epic serves as an architect alongside other coding agents, continuously defining product requirements and constraints.

Epic installs as an extension in AI-assisted code editors like Cursor and Windsurf. The product creates core system artifacts including product requirements, data models, API specifications, and tech stack decisions. Epic maintains structured memory of decisions, code changes, and patterns as development progresses.

The platform ensures that multiple engineers using different agents build against the same shared source of truth. Teams currently use multiple agents simultaneously for coding, debugging, and refactoring tasks, but these agents operate with only short-term memory and can lose context over time.
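A shared, persistent project memory that multiple agents read and write can be sketched in a few lines. This is a hypothetical illustration of the general pattern, not Epic’s actual API; the class name, file format, and fields are invented for the example:

```python
import json
from pathlib import Path

# Hypothetical sketch of a shared "source of truth" that outlives any
# single agent session. Field names are illustrative, not Epic's schema.
class SharedProjectMemory:
    def __init__(self, path="project_memory.json"):
        self.path = Path(path)
        self.state = {"requirements": [], "decisions": [], "api_specs": {}}
        if self.path.exists():
            self.state = json.loads(self.path.read_text())

    def record_decision(self, agent, summary):
        # Any agent appends its decisions; every agent reads them back,
        # so structure and context survive switching tools.
        self.state["decisions"].append({"agent": agent, "summary": summary})
        self.path.write_text(json.dumps(self.state, indent=2))

memory = SharedProjectMemory()
memory.record_decision("cursor", "Use PostgreSQL for the orders service")
memory.record_decision("windsurf", "Expose orders via REST, not gRPC")

# A fresh session (or a different agent) reloads the same state.
reloaded = SharedProjectMemory()
print(len(reloaded.state["decisions"]))  # both decisions persist
```

Because the state lives in a file rather than in any one agent’s context window, switching coding agents does not discard the accumulated decisions.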

“Epic defines the system upfront and maintains shared project-level context across agents and sessions. If you switch coding agents, your structure and memory follow.”

Newton Asare, CEO of Reload

Reload competes with LangChain and CrewAI in the AI infrastructure space. Co-founders Asare and Kiran Das previously built a company together that was acquired. The fresh capital supports hiring and product development to expand infrastructure for a growing number of AI agents.

Reddit Tests AI-Powered Shopping Search With Product Carousels

SAN FRANCISCO: Reddit announced AI-powered shopping search on Thursday. The feature matches community recommendations with products from shopping and advertising partners, and a small group of U.S. users has started seeing interactive product carousels with pricing and buy links.

The search tool converts community discussions into commerce opportunities. Users searching for “best noise-canceling headphones” or “electronic gift ideas” see product carousels. The feature surfaces products directly mentioned in posts and comments. Users tap products to view details and purchase from retailers.

Reddit launched Dynamic Product Ads last year for personalized product recommendations. The shopping search builds on this e-commerce infrastructure. The test follows platforms like TikTok and Instagram integrating shopping features.

“This feature surfaces top-recommended products directly from discussions, giving redditors instant information about any product. This test is designed to make Reddit easier to navigate while keeping community perspectives at the center,” the company said in a blog post.

Reddit CEO Steve Huffman has identified AI search as the company’s next big business opportunity, noting that weekly active users of search grew about 30 percent over the past year, from 60 million to 80 million.

Weekly active users for AI-powered Reddit Answers rose from 1 million in Q1 2025 to 15 million by Q4. The growth demonstrates user adoption of AI features on the platform.

OpenAI’s ChatGPT launched Instant Checkout in September for Etsy and Shopify purchases. The feature enables transactions within conversations. Reddit’s move positions the platform in the AI-driven commerce space.

The company continues learning from user behavior with the feature. Reddit plans to refine the experience over time based on usage patterns.

Taalas Raises $169M for Model-Specific AI Inference Chips

TORONTO: Toronto-based chip startup Taalas has raised $169 million in funding to develop model-specific AI processors. The company builds specialized AI inference chips that print AI model components directly into silicon. Investors include Quiet Capital, Fidelity, and semiconductor veteran Pierre Lamond.

Taalas builds model-specific processors by hardwiring portions of an AI model onto the chip, trading generality for speed and cost efficiency. The custom silicon is paired with large amounts of on-chip SRAM, which eliminates external memory access during inference and reduces latency.

The company claims its first chip generates 17,000 output tokens per second, 73 times the performance of NVIDIA’s H200 GPU, while using one-tenth the power.
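Taken at face value, those figures imply a large efficiency gap. The quick arithmetic, using only the numbers from the announcement (vendor claims, not independently verified), looks like this:

```python
# Back-of-the-envelope check of Taalas's claimed figures: 73x the
# H200's throughput at one-tenth the power implies a 730x
# performance-per-watt advantage.
taalas_tokens_per_s = 17_000
throughput_vs_h200 = 73   # claimed throughput multiple vs. NVIDIA H200
power_factor = 10         # claimed: one-tenth the power

implied_h200_tokens_per_s = taalas_tokens_per_s / throughput_vs_h200
perf_per_watt_gain = throughput_vs_h200 * power_factor

print(round(implied_h200_tokens_per_s))  # ~233 tokens/s implied H200 baseline
print(perf_per_watt_gain)                # 730
```

The implied 233-tokens-per-second baseline is a derived figure, not one Taalas published, so it should be read only as a consistency check on the claims.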

Taalas partners with Taiwan Semiconductor Manufacturing Company to produce chips in approximately two months. The foundry-optimized workflow enables customers to move from model weights to deployable cards rapidly. The company’s HC1 chip uses TSMC’s 6-nanometer process.

The funding round brings Taalas’s total capital raised to over $200 million across three rounds. The company has assembled 25 engineers with experience at AMD, Apple, Google, NVIDIA, and Tenstorrent. Taalas was founded in 2023 by Ljubisa Bajic, Lejla Bajic, and Drago Ignjatovic.

“It is the bespoke design for each model that gives the Taalas chip its advantage.”

Ljubisa Bajic, CEO of Taalas

The company’s first product runs the open-source Llama 3.1 8B language model. Taalas plans to launch a chip capable of running a 20-billion-parameter Llama model this summer. The startup targets a cutting-edge processor capable of deploying frontier models by year end.

The announcement comes weeks after NVIDIA’s $20 billion deal to license IP from Groq. The transaction reignites investor interest in specialized AI inference technology. Competitors including Cerebras and Groq focus on custom silicon solutions for inference optimization.

JioHotstar Partners With OpenAI for ChatGPT-Powered Content Discovery

NEW DELHI: JioHotstar and OpenAI announced a partnership at the India AI Impact Summit. The collaboration integrates ChatGPT-powered voice and text discovery into the streaming platform, and Mukesh Ambani announced that it will replace keyword-based search with conversational AI.

The system accepts natural language inputs in multiple Indian languages by voice or text. Users describe what to watch in plain language including mood-based prompts. The interface interprets context beyond literal queries for tailored recommendations.

JioHotstar serves 450 million monthly active users with over 300,000 hours of programming. Content spans 19 languages across movies, originals, live sports, and entertainment, making the platform India’s largest streaming destination.

“Traditionally, entertainment is a one-way experience where you passively consume content. AI completely changes that dynamic. Through our partnership with JioHotstar, we’re bringing personalized AI directly into entertainment and live sports,”

Fidji Simo, CEO of Applications at OpenAI.

The assistant extends to live sports, letting viewers ask conversational questions about ongoing matches and access player statistics, live scores, and key moments without leaving the stream. The rollout will begin with select experiences before expanding more broadly.

The partnership creates two-way integration. ChatGPT surfaces JioHotstar recommendations when users search for entertainment suggestions. The system provides contextual suggestions and streaming links from the catalog.

The collaboration forms part of OpenAI’s broader “OpenAI for India” initiative. Other partnerships include Tata Group for AI data centers and companies including Pine Labs, Eternal, and MakeMyTrip.