OpenAI to Buy Python Toolmaker Astral to Strengthen Codex 

San Francisco: OpenAI has announced plans to acquire Astral, a startup known for building some of the most widely used open-source tools in the Python developer community. The company says the acquisition will help it strengthen Codex, its AI system designed to assist with software development.

Astral is best known for tools like uv, Ruff, and ty, which help developers manage dependencies, format code, and check for type errors. These tools have become essential to modern Python development and power millions of developer workflows.  
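For readers unfamiliar with Astral's stack, a minimal pyproject.toml sketch shows how a project might wire the tools together (the project name, dependency, and rule selections below are illustrative, not taken from the article):

```toml
[project]
name = "example-app"            # hypothetical project
version = "0.1.0"
dependencies = ["requests"]     # added and synced via uv (e.g. `uv add requests`)

[tool.ruff]
line-length = 88                # Ruff reads its lint/format settings from this table

[tool.ruff.lint]
select = ["E", "F", "I"]        # pycodestyle, pyflakes, and import-sorting rules
```

With a file like this in place, `uv sync` resolves and installs dependencies, `ruff check .` lints and `ruff format .` formats the code, and ty type-checks the project, which are the workflows the article describes Codex eventually driving on a developer's behalf.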

OpenAI says bringing Astral’s engineering talent and tools into its ecosystem will help Codex evolve from simply generating code to supporting developers throughout the development process: planning changes, updating codebases, running tools, and checking results. According to OpenAI, the integration will also allow its AI agents to work more naturally with the software tools developers already use every day.

The deal, however, still needs regulatory approval, and until then, OpenAI and Astral will operate as separate companies. Once approved, Astral’s team will join OpenAI’s Codex division to build more advanced, AI-powered developer tools. OpenAI says it will continue supporting Astral’s open-source projects even after the acquisition closes. 

WordPress.com Lets AI Agents Write, Edit, and Publish Content

San Francisco: WordPress powers over 43% of all websites on the internet. That statistic just took on a new dimension. WordPress.com, the hosted platform, has announced that AI agents can now draft, edit, and publish content on customers’ websites directly, autonomously, and at scale.

Built on the MCP support introduced last October, the new capability lets any MCP-enabled AI client (Claude, Cursor, ChatGPT, or others) connect to a WordPress.com site and take action, not just read.

The scope is wider than just writing posts. AI agents can now create landing pages, About pages, and full site structures. Agents can approve, reply to, and moderate comments. They can manage tags, categories, and metadata. They can fix alt text and captions for SEO.

Every change is logged through the site’s Activity Log. Posts written by AI are saved as drafts by default, and all actions require user approval before going live.

The agent also reads the site’s existing theme before creating anything. It learns the colors, fonts, spacing, and block patterns already in use, so new content fits rather than clashes.

To enable it, customers visit wordpress.com/mcp, toggle on the capabilities they want, and connect their preferred AI client. The whole setup takes minutes.
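To make the setup concrete, many MCP-enabled clients are pointed at a remote server through a small JSON configuration file. The sketch below follows the general shape used by common MCP clients; the server name, config keys, and endpoint URL are illustrative assumptions, not details published by WordPress.com:

```json
{
  "mcpServers": {
    "wordpress": {
      "url": "https://example-site.wordpress.com/mcp"
    }
  }
}
```

Once a client loads a config like this, the WordPress.com server advertises its enabled capabilities (drafting posts, moderating comments, and so on), and the client can invoke them subject to the approval flow described above.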

The implications are significant. WordPress.com sees 20 billion pageviews and 409 million unique visitors monthly. Allowing AI agents to create and manage content at that scale reshapes what the web looks like and who, or what, is building it.

It follows a broader pattern already underway. Meta recently acquired Moltbook, a social network where AI agents posted and interacted autonomously. The line between human-authored and machine-authored content online is disappearing faster than most expected.

Google AI Studio Launches ‘Vibe Coding’ Upgrade with Antigravity Agent

San Francisco: Google AI Studio has launched a completely rebuilt vibe coding experience, powered by Antigravity, Google’s new coding agent, and backed by a native Firebase integration. Together they let users build real, deployable web applications from plain-language prompts. No leaving the platform. No separate backend setup.

The gap between prototype and production has always been where vibe coding broke down. A working demo is one thing. An app with user authentication, a live database, real-time multiplayer, and external API connections is another. The new experience closes that gap directly.

When the Antigravity agent detects that an app needs a database or login system, it proactively suggests a Firebase integration. Once approved, it provisions Cloud Firestore for storage and Firebase Authentication for secure sign-in automatically.

The agent also installs external libraries on its own. If an app needs smooth animations, it pulls in Framer Motion. If it needs icons, it adds Shadcn. Users can also connect their own API credentials to integrate live services like Google Maps or payment processors, stored securely in a new Secrets Manager.

The update adds support for Next.js alongside the existing React and Angular frameworks. Sessions now persist across devices: close a tab and the project picks up exactly where it left off. Google says the platform has already been used internally to build hundreds of thousands of apps over the past few months.

The launch sits alongside this week’s Stitch redesign, Google’s AI-native UI design canvas. Taken together, the two products form a clear pipeline: design in Stitch, build in AI Studio, ship via Antigravity.

Perplexity Unveils Health Tool with Apple Health & Fitbit Support

San Francisco: Perplexity has launched Perplexity Health, a new feature that connects directly to users’ health data from apps and devices such as Apple Health and Fitbit, as well as electronic health records (EHR) from more than 1.7 million healthcare providers. The company says the goal is to bring all of an individual’s medical and fitness information into one place and offer AI-powered insights based on their actual health data.

With these new connectors, Perplexity Health can pull in data such as heart rate, activity levels, sleep patterns, lab reports, and medical history. All this information is shown on a personalized dashboard that tracks trends over time, and users can ask questions about their health and get personalized answers instead of generic search results.

Perplexity says the system uses trusted medical literature, including clinical guidelines and peer-reviewed research, to generate explanations and suggestions. The company has also formed a Health Advisory Board of doctors, researchers, and health‑tech experts to check the accuracy and safety of its health responses.  

Privacy has been prioritised too. Perplexity says all health data is encrypted, protected with strict access controls, and never used to train its AI models or sold to third parties. Users can delete their health information or disconnect data sources at any time.  

Perplexity Health is rolling out first to Pro and Max subscribers in the United States, with broader availability planned soon. Perplexity stresses, however, that the feature is not a replacement for professional medical advice.

Google Reinvents UI Design with AI-Powered Stitch Canvas

San Francisco: Google Labs has relaunched Stitch as a fully AI-native design canvas. Anyone can now describe a product idea, a business objective, or even a mood, and Stitch will generate high-fidelity UI designs from it. No wireframes, no design tools, no prior experience required.

Vibe coding gave non-developers the ability to build software by describing what they wanted. Google is now applying the same logic to design.

The redesigned canvas is infinite and context-aware. Users can bring in images, text, or code as starting points. A new design agent reasons across the entire project history, not just the current screen. And an Agent Manager lets designers work on multiple ideas in parallel without losing track of progress.

Three features stand out.

  1. DESIGN.md, an agent-friendly markdown file that lets users extract a design system from any URL and apply it across projects or export it to other tools.
  2. Instant interactive prototyping, in which static designs become clickable flows in seconds, with Stitch automatically generating logical next screens based on user interaction.
  3. Users can now speak directly to the canvas, asking for real-time critiques, color palette variations, or entirely new screens mid-conversation.

Stitch also connects to developer workflows via an MCP server and SDK, with export paths into Google AI Studio and Antigravity. Its Skills library has already accumulated 2,400 GitHub stars.

The broader context matters. Google AI Studio simultaneously launched a full-stack vibe coding experience this week, signalling that Google is building an end-to-end AI-native development pipeline, from idea to design to working code, entirely within its own ecosystem.

Google Brings Safer App Installation Option to Android 

California: Google is introducing a new, safer way for Android users to install apps from outside the Play Store in order to protect them from scams. The company announced the change after recently settling a major antitrust case over how Android apps are distributed. The new system, called advanced flow, aims to balance user freedom with stronger safety checks.  

For years, Google has required that all Android apps be linked to verified developers in an attempt to reduce malware, financial fraud, and data theft. But many Android users still prefer the option to install apps from other sources. The advanced flow process gives users this choice while adding several steps to make sure scammers cannot pressure them into installing a harmful app.  

To begin, users must turn on developer mode. This prevents accidental changes and restricts scammers from bypassing protections. The device then asks the user if anyone is guiding them to disable security settings, since scam callers often coach victims to do so in real time. Once confirmed, the phone needs to be restarted. This cuts off any active calls or screen sharing that a scammer might be using.  

The most crucial part of the process is the one-day waiting period. Since scammers, as per Google, rely on fear and urgency, the delay gives people time to reconsider before taking any action. After 24 hours, users confirm the change with their fingerprint, face unlock, or PIN, and can then install apps from unverified developers.

Google says sideloading will always remain an integral part of Android. The new system is solely designed to keep that freedom while making it harder for scammers to take advantage of users. 

Cloaked Raises $375M to Bring AI-Powered Privacy Protection to Enterprise

New York: Most security tools solve one problem. A password manager here. A VPN there. Identity protection somewhere else. Cloaked was built on the premise that fragmentation is itself the vulnerability.

Founded in 2020 by brothers Arjun and Abhijay Bhatnagar, Cloaked bundles identity masking, data removal, VPN, dark web monitoring, and AI-powered call screening into a single platform. The company has now raised $375 million in Series B and growth financing. General Catalyst and Liberty City Ventures led the round. Lux Capital, DuckDuckGo, LG Technology Ventures, and the NFL Players Association also participated.

The raise arrives at an inflection point. AI is making social engineering attacks faster, cheaper, and harder to detect. Cloaked has processed over 50 million scam and spam calls since launching its AI screening feature last year. It now has 350,000 paying customers and has scrubbed more than one billion records from data broker sites.

“We’ve seen AI get better than humans at compromising individuals, it comes down to how you find a solution that fits between surveillance, scam spam, and phishing, and we’re winning in this category from the consumer side,”

Arjun Bhatnagar, CEO – Cloaked

The company is now pushing into enterprise. CISOs will be able to monitor employee-level risk exposure, track aggregated data clean-ups, and receive alerts on scams that could affect the business. An AI agent that autonomously resets compromised credentials is also in testing. With just 70 employees and $29 million raised prior to this round, the jump to $375 million signals serious institutional conviction.

Google Expands Personal Intelligence Access to All U.S. Users 

Washington, DC: Google has made its Personal Intelligence feature free for all users in the United States, instead of limiting it to paid accounts. The tool helps the AI give smarter, more personal answers by drawing on information from apps like Gmail and Google Photos. For example, it can identify past purchases or travel bookings stored in your email and use those details to tailor recommendations.

The update reflects Google’s broader goal of blending AI more naturally into everyday digital activity. By pulling information already available in a user’s account, the system can surface details faster and with less effort from the user. For instance, if someone is searching for travel ideas, Google can automatically consider their previous trips or stored confirmations to suggest better options. 

For now, this expansion is limited to U.S. users, and Google emphasizes that users remain in control of the data they choose to connect. Personal Intelligence stays off by default, meaning people need to opt in if they want the feature to access information from apps like Gmail or Photos. 

Overall, the update aims to make Google’s AI more context‑aware and aligned with each person’s preferences, without requiring them to repeat their information every time. 

Perplexity’s Comet AI Browser Now Available for iPhones 

San Francisco: Perplexity has brought its Comet AI browser to the iPhone, expanding its AI-driven browsing experience to users on iOS 18 and above. The browser was previously available only on Mac, Windows, and Android.

Comet acts like a built-in helper designed to make browsing simpler. Instead of relying only on regular searches, it uses Perplexity’s AI to understand what users are looking for and gather the relevant information from across the web. This helps people get clear answers without having to jump through multiple tabs or apps.

A key feature of Comet is its side‑panel assistant, which stays available while users browse. It can summarise long articles, answer questions based on what is on the screen, and provide helpful context when researching or, say, comparing products. It also supports practical tasks such as booking hotels, sending emails, or completing online purchases, all from inside the browser. 

Perplexity says Comet adapts to each user’s habits and helps them stay organised, making it easier to follow through on tasks or revisit useful information. With this iPhone launch, Comet gives Apple users a fresh alternative to their usual browsers, offering a more interactive way to navigate the web.

Meta’s Manus Brings Its AI Agent to the Desktop With My Computer Feature

San Francisco: Manus, the AI agent startup acquired by Meta in late 2025 for approximately $2 billion, has launched a desktop application for macOS and Windows. Its core feature, called My Computer, gives the Manus agent direct access to a user’s local files, applications, and terminal, capabilities that previously required uploading data to a remote server.

The shift is significant. Until now, Manus operated entirely through a web interface. Users gave it a task, it ran in a cloud sandbox, and returned results. My Computer collapses that gap. The agent can now read, edit, and organise local files, execute command-line instructions, launch applications, and tap into a machine’s own GPU for inference tasks.

Practical use cases include sorting thousands of unsorted photos into labelled folders, renaming batches of invoices, and building applications using programming environments already installed on the machine. In one demonstration, Manus built a real-time translation and subtitle app on macOS entirely through terminal commands, no manual coding required.

The launch is a direct response to OpenClaw, the open-source desktop AI agent that went viral earlier this year. Jensen Huang called OpenClaw the “next ChatGPT,” and its creator, Peter Steinberger, has since joined OpenAI. Unlike OpenClaw, which is free and open-source, Manus runs on Meta’s proprietary model stack as a paid subscription, pitching itself as the polished commercial alternative.

To address privacy concerns, all terminal commands require explicit user approval before execution. Users can approve actions individually or set trusted recurring tasks to run automatically.