WhatsApp’s New Update Brings AI Replies, Storage Tools & More 

California: WhatsApp is introducing a slew of new features aimed at making chats easier to manage and faster to respond to across devices. The headline addition lets WhatsApp offer AI‑written replies based on conversation context, but the update also includes storage improvements, photo editing, chat transfers, stickers, and multi‑account support.

One of the most noticeable additions is an upgrade to WhatsApp’s Writing Help tool. The feature previously helped users rephrase messages, fix grammar, or change tone before sending. With the latest update, however, Writing Help can now draft full reply suggestions by reading the context of a conversation. This means WhatsApp can suggest quick responses to messages, which users can send as‑is, edit, or ignore entirely. The feature is optional and only appears when users tap the AI icon while typing.

WhatsApp says chats remain private even when using Writing Help, maintaining end‑to‑end encryption. Meta, WhatsApp’s parent company, appears to be encouraging users to rely on its built‑in AI tools instead of external apps like ChatGPT when drafting messages. At the same time, the company acknowledges that not everyone will want AI‑generated replies, especially in personal conversations.

Beyond AI replies, WhatsApp is also making it easier to free up storage space. Users can now find and delete large files directly within individual chats without deleting the entire conversation. For example, someone can remove old videos or photos while keeping message history intact. This change is designed to help users manage storage without losing meaningful conversations.  

Another major update brings AI‑powered photo editing to chats. Using Meta AI, users can touch up their images by removing distractions, changing backgrounds, or applying simple visual effects before sending them. These tools are built directly into WhatsApp, so users don’t need to switch to another app for basic photo edits.  

WhatsApp is also improving chat transfer options. Users can now move their chat history between iOS and Android devices, as well as transfer chats within the same platform. In addition, iPhone users can finally log into two WhatsApp accounts on the same device, a feature Android users have had for some time. 

Stickers are getting a small but useful update too. As users type emojis, WhatsApp will now suggest matching stickers, allowing them to swap emojis with animated stickers more easily. 

Meta says all of these features are rolling out gradually and will reach users worldwide soon. Taken together, the update shows WhatsApp evolving from a simple messaging app into a more feature‑rich communication platform that offers a lot under a single roof.

Deccan AI Raises $25M to Power AI Post-Training for Frontier Labs

San Francisco: Deccan AI has raised $25 million in a Series A round to scale its AI post-training and evaluation services for frontier labs and enterprises. The all-equity round was led by A91 Partners, with participation from Susquehanna International Group and Prosus Ventures.

Founded in October 2024, Deccan sits in a category that rarely makes headlines but powers the entire AI industry. While labs like OpenAI and Anthropic build core models in-house, post-training work (data generation, evaluation, and reinforcement learning) is increasingly outsourced as companies push to make systems reliable in real-world use.

Deccan handles that layer. Its services range from improving model coding and agent capabilities to training systems to interact with external APIs. It also builds reinforcement learning environments and runs evaluations for frontier lab customers including Google DeepMind and Snowflake.

The company is headquartered in San Francisco with a large operations team in Hyderabad. It employs around 125 people and draws on a network of over one million contributors including students, domain experts, and PhDs. Between 5,000 and 10,000 contributors are active in any given month.

The scale of that contributor network matters. Post-training is not a compute problem; it is a human judgment problem. Getting AI models to behave reliably in the real world requires expert feedback at volume. That is what Deccan’s India-based workforce provides, and it is why the model is attracting serious venture backing less than 18 months after founding.

Deccan also offers two enterprise products: Helix, an evaluation suite, and an operations automation platform. It currently serves around ten customers and runs several dozen active projects simultaneously.

The work is also expanding beyond language. As models move toward so-called world models, systems that better understand physical environments, Deccan is positioning itself to support training for robotics and vision systems as well.

The $25M will fund headcount growth, product development, and expansion of its contributor network as demand for post-training services accelerates alongside the broader AI infrastructure boom.

Is Vibe Coding About to Disrupt the $250B SaaS Industry?

There is a moment in every technology cycle when the foundations of an industry begin to feel less like bedrock and more like scaffolding. The SaaS industry, valued at over $250 billion globally, may be entering that moment now.

The force behind this shift has an unusual name: Vibe Coding.

Popularized by Andrej Karpathy, vibe coding describes a new paradigm where human intent (your vibe) can be translated directly into functional software by AI systems. What was once a slow, resource-intensive process is becoming nearly instantaneous.

And that changes everything.

For over two decades, SaaS thrived on a simple economic truth: building software was expensive, slow, and required specialized talent. Buying software, even if imperfect, was easier.

That asymmetry powered the entire SaaS ecosystem. Today, that advantage is eroding rapidly. Platforms like Replit, Google AI Studio, and emerging tools such as Lovable are compressing the cost and time of software creation to near zero.

AI has effectively removed execution as the bottleneck. What remains is imagination.

Why Rent When You Can Own It?

The logic is familiar: owning is widely considered better than renting. And the implications go far beyond productivity gains.

Traditional SaaS operates on a compromise. Businesses adopt tools that solve 70-80% of their needs and adjust their workflows around the remaining gaps. This ‘good enough’ model created a massive market.

But in a world where software can be generated on demand, that compromise starts to look unnecessary. Instead of renting generic tools, organizations can now build custom internal software tailored to their exact workflows, iterate in real time without waiting for vendor roadmaps, and own and control their tools entirely.

This shift is already reflected in data.

According to Gartner, 90% of enterprise developers will use AI coding assistants by 2028, up from just 14% in early 2024. Meanwhile, 75% of employees are expected to build or modify software themselves by 2027, signaling a move away from centralized IT control. This shift raises an obvious question: why keep paying for something generic?

A Growing Risk No One in SaaS Can Ignore

Not all SaaS companies are equally vulnerable. Enterprise giants like Salesforce, Workday, and ServiceNow benefit from deep integrations, proprietary datasets, and entrenched network effects. These are not easily replaced overnight.

But the long tail of SaaS (niche workflow automation platforms, internal dashboards, lightweight project management tools) faces a different reality.

When a founder or operator can describe a requirement to an AI agent and receive a working tool within hours, the economic argument for subscription-based alternatives weakens dramatically.

Understanding Ephemeral Software

Another emerging concept is ‘ephemeral software’: tools built for a specific task, used briefly, and then discarded or regenerated. This is a stark departure from traditional SaaS, where tools are persistent, standardized, and designed for mass adoption. In contrast, AI-generated tools are temporary, highly personalized, and continuously evolving.

It’s software not as a product, but as a process.

Where the Value Shifts Next

If application-layer software becomes abundant, where does value move? The answer appears to lie in Models (AI capabilities themselves), Data (proprietary and contextual intelligence) and Platforms (environments where software is generated and deployed).

According to one industry report, the AI software platforms market is expected to reach $153 billion by 2028, growing at over 40% CAGR. Meanwhile, traditional SaaS spending continues to rise; Zylo, for instance, reports 9.3% year-over-year growth, driven partly by AI-related price increases.

Ironically, businesses are paying more for software at the very moment alternatives are becoming cheaper.

From Tools to Outcomes

For SaaS companies, this shift doesn’t necessarily signal extinction, but it does demand reinvention. The pattern echoes AI itself: once widely seen as a threat to workers, it is now helping them become more productive.

Similarly, the next generation of successful companies will likely move up the value chain from tools to outcomes, focus on intelligence rather than interfaces, and build deep relationships and ecosystems, not just products.

Because in a world where anyone can generate software, the software itself is no longer the moat.

This is an editorial piece and reflects the views of the Tea4Tech editorial team.

Google Makes It Easy to Move from Other AI Chatbots to Gemini 

California: Google has announced a set of new features, dubbed “switching tools”, designed to make it as easy as possible for people to move from other AI chatbots, such as ChatGPT or Claude, to its own assistant, Gemini.

These tools allow users to bring their past conversations and personal information directly into Gemini, so they don’t have to start over when trying a new chatbot. The update is designed to help Gemini easily make sense of who a user is, what they care about, and what kinds of conversations they have had before.

According to Google, this includes things like personal preferences, interests, relationships, and background details, referred to as “memories”. Once imported, Gemini can make use of this information to give more relevant responses right away, instead of learning everything slowly over time.  

The memory‑import feature works in a simple way. Gemini provides a ready‑made prompt that users copy and paste into their current chatbot. That chatbot generates a summary of what it knows about the user, which the user then pastes into Gemini, where it is stored as part of the user’s memory. Google says this process helps Gemini “get up to speed” with minimal effort from the user.

In addition to memories, Google is also letting users move their entire chat history from other AI apps. People can export their past conversations from another chatbot as a ZIP file and upload it directly into Gemini. Once imported, these conversations appear inside Gemini just like regular chats. Users can search through them, continue discussions, or delete them if they choose. 
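Google hasn’t published the exact export format, but the mechanics of such a ZIP import can be sketched in a few lines of Python. The file layout and JSON schema below are purely hypothetical stand-ins, for illustration only:

```python
import io
import json
import zipfile

# Build a stand-in export archive in memory; real chatbot exports
# will use their own (different) file layout and schema.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("conversations/chat1.json", json.dumps([
        {"role": "user", "content": "plan a weekend trip"},
        {"role": "assistant", "content": "Sure, here is an itinerary..."},
    ]))

def load_chats(zip_bytes):
    """Pull every JSON conversation file out of an export archive."""
    chats = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if name.endswith(".json"):
                chats[name] = json.loads(zf.read(name))
    return chats

chats = load_chats(buf.getvalue())
print(len(chats))  # → 1
```

Once parsed this way, each conversation is ordinary structured data that an importer can index, search, and continue, which is essentially what Gemini exposes to the user after upload.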

This update comes at a time when the AI chatbot market is increasingly competitive. OpenAI’s ChatGPT remains the most widely used chatbot, with hundreds of millions of active users, while Gemini, despite being an integral part of Android phones, Google Search, and Chrome, has lagged slightly in popularity. The new tools are aimed at reducing the effort needed for people to try Gemini and possibly make it their main AI assistant.

Google has also stressed that users will remain in control of their data after this update. They can review, search, or delete the imported memories and chat histories at any time, either in bulk or individually, giving them flexibility over what Gemini remembers.

The new switching tools are rolling out gradually to consumer Gemini accounts, although availability may vary by country. With this update, Google is also redefining what people look for in an AI assistant: smart answers alone aren’t enough; personalization and continuity matter just as much. For users curious about switching chatbots, this move makes the transition about as easy as it gets.

Defense AI Startup Shield AI Raises $2B at $12.7B Valuation

San Diego: Shield AI has raised $2 billion in new funding at a $12.7 billion valuation. That marks a 140% jump from its $5.3 billion valuation just twelve months ago.

The raise has two parts. A $1.5 billion Series G led by Advent and a JPMorganChase investment group forms the core. Blackstone funds added $500 million in preferred shares. Shield AI also secured a $250 million credit facility to draw on as needed.

One contract drove the valuation surge. In February, the US Air Force selected Shield AI’s Hivemind software for its Collaborative Combat Aircraft programme. Hivemind will power autonomous drone operations alongside Anduril’s Fury fighter jet. The Air Force chose separate vendors deliberately, avoiding single-vendor lock-in across aircraft and autonomy software.

Hivemind is Shield AI’s flagship product. It allows aircraft and drones to fly autonomously without GPS, communications links, or human remote control. The platform has seen deployment across multiple military programmes and now sits at the centre of the Air Force’s next-generation drone strategy.

Alongside the fundraise, Shield AI is acquiring Aechelon Technology. Aechelon builds flight simulation software that the US military uses to train pilots. The deal adds high-fidelity simulation to Shield AI’s existing stack, which includes Hivemind Enterprise, EdgeOS, and its Forge autonomy factory. Terms remain undisclosed.

Other Series G participants include Snowpoint Ventures, InnovationX, Riot Ventures, Disruptive, and Apandion. Advent separately committed up to $1 billion for future defence tech investments.

Shield AI now ranks among the most valuable pure-play defence AI software companies globally. Only Anduril sits higher; it is currently seeking up to $8 billion at a $60 billion valuation.

Conntour Raises $7M to Build an AI Search Engine for Security Cameras

Tel Aviv: Conntour has raised $7 million in seed funding to build an AI-powered search engine for enterprise security camera systems. The round was led by General Catalyst and Y Combinator, with participation from SV Angel and Liquid 2 Ventures.

Founded less than two years ago, Conntour lets security teams query live and recorded camera feeds using natural language. Instead of scrubbing through hours of footage manually, a user can ask the system to find a specific person, object, or situation across an entire camera network in real time. The company describes it as a Google-like search engine built specifically for security video.

Unlike legacy surveillance systems that rely on preset rules and fixed parameters, Conntour uses vision-language models to give its platform flexibility. A security team can describe what they are looking for in plain language and the system surfaces relevant footage automatically. It can also monitor feeds autonomously and trigger alerts based on preset conditions.
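The retrieval pattern behind such a system (embed the query, embed each frame’s content, rank by similarity) can be sketched with a toy stand-in. Here bag-of-words vectors and hand-written captions replace a real vision-language model; everything below is illustrative, not Conntour’s actual pipeline:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would embed frames
    and queries jointly with a vision-language model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical per-frame captions, standing in for visual embeddings.
frames = {
    "cam1_t0": "empty loading dock at night",
    "cam2_t3": "person in red jacket carrying a box",
    "cam2_t7": "forklift moving pallets",
}

def search(query, top_k=1):
    """Rank frames by similarity to a plain-language query."""
    q = embed(query)
    ranked = sorted(frames, key=lambda f: cosine(q, embed(frames[f])),
                    reverse=True)
    return ranked[:top_k]

print(search("person carrying a box"))  # → ['cam2_t3']
```

The same ranking loop, run continuously against preset query conditions, is how a system like this can also trigger alerts autonomously rather than waiting for an operator to search.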

The startup already counts several large government and publicly listed enterprise customers, one of the most notable being Singapore’s Central Narcotics Bureau. That traction helped close the round quickly: the entire raise was completed within 72 hours of Conntour beginning investor meetings.

Conntour’s current client base spans use cases the team considers verifiable and defensible.

“Traditional video surveillance forces operators to define exactly what they’re looking for before they even know what they need to find,”

Matan Goldner, Co-founder and CEO, Conntour

The $7M will fund product development, team expansion, and customer growth. Conntour targets enterprise security operations across government agencies and large corporates. The company is headquartered in Tel Aviv with operations extending across global enterprise markets.

Vision-language models have rapidly expanded what is possible in the surveillance space. Conntour is betting that natural language search is the interface that finally makes large-scale camera networks practically usable for security teams.

Google Launches Lyria 3 Pro, Its Most Advanced AI Music Model

California: Google has officially launched Lyria 3 Pro, a new artificial intelligence model designed to generate longer and more detailed music tracks. The announcement came exactly one month after the company released the earlier Lyria 3 model. According to Google, the Pro version is a crucial upgrade that makes AI‑generated music more useful for real‑world creative and commercial use.

The biggest improvement with Lyria 3 Pro is track length. While Lyria 3 could only produce 30‑second clips, the new model can generate complete 3‑minute tracks. This makes the tool more useful for content creators, video editors, and musicians who wish to create complete songs rather than short loops. Google says the model also understands musical structure better, allowing users to guide how a track unfolds from start to finish.

Users can now make their prompts as detailed as they want, including where they want an intro, verse, chorus, or bridge. This added control helps the music sound more natural and follow familiar song structures. According to Google, it makes Lyria 3 Pro more suitable for real production work, including background music for videos, presentations, podcasts, and advertisements.

Lyria 3 Pro is being rolled out across several Google products. Paid users of the Gemini app will get access to the new model, while free users will continue to use earlier versions. The model is also coming to Google Vids, the company’s AI‑powered video editing tool, making it easier for users to add custom soundtracks that match the tone of their videos. In addition, Google is integrating Lyria 3 Pro into ProducerAI, a music‑creation platform Google acquired recently, as well as enterprise tools like Vertex AI, the Gemini API, and AI Studio.  

Google says Lyria 3 Pro was trained using licensed and approved data from its partners, including content from YouTube and other Google services. The company adds that the model does not copy or imitate specific artists. Even if a user mentions an artist in a prompt, the system only takes “broad inspiration” from them rather than replicating that artist’s style. For transparency, every track made with Lyria 3 and Lyria 3 Pro includes a SynthID watermark, clearly marking the music as AI‑generated.

The launch comes at a time when AI‑generated music is under close scrutiny. Music platforms such as Spotify and Deezer have recently introduced tools to identify and manage AI‑made tracks, helping prevent misuse, mislabeling, and spam. With Lyria 3 Pro, Google is positioning itself as a responsible player in the market, focusing on controlled use and high‑quality tools rather than flooding platforms with low‑quality AI‑generated music.

Lyria 3 Pro is, all in all, Google’s strongest push yet into AI‑powered music creation, offering longer tracks and better creative control for individual creators and businesses alike.

Google Debuts TurboQuant to Reduce AI Memory Usage 

California: Google has launched TurboQuant, a new AI technology designed to significantly reduce memory usage in large AI models. The Google Research Team made the announcement, and it has drawn widespread attention across the tech industry.

TurboQuant targets memory consumption, one of the biggest challenges facing modern AI systems. AI models need to remember information so they can keep up with long conversations or complete complex tasks without repeating the same calculations again and again.

This stored information uses a lot of expensive GPU memory, which can slow AI systems down and increase costs. Google says TurboQuant makes this process far more efficient, cutting memory usage by roughly six times while keeping the results just as accurate.

According to Google Research, TurboQuant achieves these gains through advanced compression techniques. The system uses a mathematical approach known as vector quantization to store AI data more efficiently. It combines two methods, called PolarQuant and QJL, which together reduce how much memory data takes up while preserving accuracy.  
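Google hasn’t released TurboQuant’s implementation, but the general idea of vector quantization (replacing many float vectors with small indices into a shared codebook) can be shown with a generic k-means sketch. This is textbook VQ, not PolarQuant or QJL:

```python
import numpy as np

def build_codebook(vectors, k, iters=5, seed=0):
    """Learn k centroids with a tiny k-means loop (generic vector
    quantization, not Google's actual TurboQuant algorithm)."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign every vector to its nearest centroid...
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # ...then move each centroid to the mean of its members.
        for j in range(k):
            members = vectors[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Replace each float vector with the index of its nearest centroid."""
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1).astype(np.uint8)  # one byte per vector

# 5,000 cached vectors of dimension 16, stored as float32.
rng = np.random.default_rng(1)
vectors = rng.standard_normal((5000, 16)).astype(np.float32)

codebook = build_codebook(vectors, k=64)
codes = quantize(vectors, codebook)

original_bytes = vectors.nbytes                    # 5000 * 16 * 4 = 320,000
compressed_bytes = codes.nbytes + codebook.nbytes  # 5,000 + 4,096 = 9,096
```

A real system cares about reconstruction error as much as the compression ratio; Google’s claim is that its combination of PolarQuant and QJL achieves roughly sixfold savings without measurable accuracy loss.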

The announcement has led to comparisons with the fictional startup Pied Piper from HBO’s Silicon Valley, which was famous in the show for creating a groundbreaking compression technology. That similarity has sparked plenty of jokes online, but experts say TurboQuant’s real‑world impact could be far more vital. By using much less memory, the technology could make AI systems cheaper to run, easier to scale, and more accessible to companies that don’t have huge computing resources. 

The market reacted quickly to the news. Shares of memory‑chip companies fell shortly after the announcement, as investors began to question whether technologies like TurboQuant might reduce future demand for high‑end memory hardware. Though industry experts say it is still too early to tell how big the impact will be, the reaction shows how important efficiency improvements are becoming in the AI industry.

Google plans to share more details about TurboQuant at the ICLR 2026 conference, where researchers will present test results and explain how the technology works. The company has made it clear that TurboQuant is still in an experimental stage and is not yet being used in live AI systems. 

Its introduction, however, sheds light on a growing shift in how AI is being built. Instead of only building bigger models on more powerful hardware, companies are now investing in smarter software to make AI more efficient. With TurboQuant, Google is leading the charge in this new direction.

AI Notes Startup Granola Hits Unicorn Status with $125M Series C

San Francisco: Granola has raised $125M in Series C funding, pushing its valuation to $1.5 billion. The round was led by Danny Rimer at Index Ventures, with Mamoon Hamid at Kleiner Perkins also joining. Existing investors Lightspeed, Spark, and NFDG participated. Total funding now stands at $192 million.

The valuation jump is striking. Granola was worth $250 million less than a year ago. A six-fold increase in under twelve months signals serious enterprise momentum and investor confidence that AI meeting notes are just the beginning.

Granola’s core product sits quietly on a user’s computer, transcribing meetings and generating notes without a visible bot joining the call. That privacy-first approach drove its early adoption among power users and has since carried it into enterprise customers including Vanta, Gusto, Asana, Cursor, Lovable, and Mistral AI.

With this round, Granola is launching Spaces, team workspaces with granular access controls and the ability to query notes across folders. It is also introducing two new APIs: a personal API for individual users to access and share notes, and an enterprise API giving admins control over team-wide context.

The API launch is a deliberate move. AI meeting transcription is becoming a commodity, with dozens of players now offering similar features. Granola is betting its edge lies not in the notes themselves but in making those notes queryable, shareable, and integrated into broader AI workflows.

The $125M will fund enterprise sales expansion and continued platform development.

Legal AI Startup Harvey Hits $11B Valuation With $200M Series C

San Francisco: Harvey has closed a $200M Series C at an $11 billion valuation, cementing its position as the highest-valued pure-play legal AI company in the world. The round was co-led by returning backers GIC and Sequoia, with participation from Andreessen Horowitz, Coatue, Conviction Partners, Elad Gil, Evantic, and Kleiner Perkins.

The raise is a significant step up. Harvey was valued at $8 billion just three months ago. That $3 billion jump in a single quarter reflects both strong commercial traction and intensifying investor conviction in legal AI as a standalone category.

Harvey’s platform automates the most time-intensive parts of legal work. Its AI assistant finds relevant legislation and precedents, extracts key information, and drafts initial versions of legal documents. A separate tool called Vault can run analyses across up to 100,000 documents simultaneously, a capability targeted at large-scale due diligence and contract review workflows.

“AI isn’t just assisting lawyers. It’s becoming the system through which legal work gets done,”

Winston Weinberg, CEO and co-founder of Harvey

The company serves law firms and enterprise legal teams, competing directly with Legora, which raised $550M at a $5.55B valuation earlier this month. Harvey’s valuation now sits at double Legora’s, a gap that reflects its earlier market entry, deeper US penetration, and broader enterprise footprint.

Capital from this round will fund expansion of Harvey’s agentic workflow products across both law firms and corporate legal departments. The company is pushing beyond AI assistance toward autonomous agents that can handle multi-step legal tasks end to end.

Legal AI has become one of the fastest-funded verticals in enterprise software. Harvey’s latest raise confirms it intends to lead that category.