Perplexity Launches Personal Computer to Automate Tasks on Mac 

San Francisco: Perplexity AI has introduced a new tool called Personal Computer, an AI assistant that turns a Mac into a 24/7 digital worker. The company says it is designed to handle complex tasks across apps, files, and the web, like an automated helper running your computer for you. 

Unlike regular AI assistants that only act when you ask them something, Personal Computer can keep working in the background. You can give it a goal, like organising files, preparing data, or completing a multistep online task, and the AI figures out the steps on its own. It breaks the job into smaller actions and carries them out across different apps without needing step-by-step instructions. 

Because the system runs directly on a Mac, it can access your local files, applications, and browser sessions. This lets it do jobs the way a human might, and users don’t even need to buy special hardware to use it. 

Perplexity says the goal is to give people an assistant that “never sleeps” and can manage ongoing tasks even when they are away from their computer. The company wants the AI to handle repetitive or time‑consuming work so users can focus on bigger projects instead. 

Personal Computer is available through an early-access waitlist, with Perplexity planning to expand access later this year. 

Kali Linux Brings Offline AI Penetration Testing via Local Ollama, 5ire, and MCP Kali Server

London: The Kali Linux team has published a new guide enabling security professionals to run AI-assisted penetration testing entirely on local hardware, with no data sent to cloud services. The setup lets users issue penetration testing commands in plain language, with an on-device AI model interpreting those instructions and executing them through a suite of standard security tools, all without an internet connection or third-party API subscription.

Privacy and operational security concerns have long made cloud-dependent AI tools a liability in sensitive testing environments. Regulated industries, government contractors, and red teams operating in air-gapped networks routinely cannot route sensitive data through external services.

The new Kali Linux stack directly addresses that gap by combining three open-source tools: Ollama, a local AI model runtime; mcp-kali-server, a bridge already available in Kali’s repositories that connects the AI to the operating system’s security toolset; and 5ire, an open-source AI assistant that ties the two together into a single working interface.

The stack runs on a consumer-grade NVIDIA GPU, which keeps the hardware barrier low for individuals and small teams. Once configured, a security professional can describe tasks in plain English, for example requesting a scan of a target host for open ports.

The AI interprets the request, selects the appropriate tool, executes the task, and returns structured results, with all processing happening locally. The guide validated the setup with a live port scan, confirming fully GPU-accelerated, offline operation.
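
The exact wiring in the guide goes through 5ire and mcp-kali-server, but the underlying loop is simple. The sketch below is a minimal, hypothetical illustration of that local round trip, assuming an Ollama server running on its default port with a locally pulled model; the model name, prompt, and target address are placeholders, not taken from the guide:

```python
# Minimal sketch of the "plain English -> local model -> local tool" loop.
# Assumes Ollama is serving on its default port (11434); the model name and
# target below are illustrative placeholders, not values from the Kali guide.
import subprocess

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "llama3.1:8b"  # hypothetical choice; any locally pulled model works


def suggest_command(task: str) -> str:
    """Ask the local model to translate a plain-English task into one nmap command."""
    prompt = "Reply with a single nmap command (no explanation) for this task: " + task
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()


if __name__ == "__main__":
    cmd = suggest_command("Scan 192.168.56.10 for open TCP ports")
    print("Model suggested:", cmd)
    # Run only after human review, and only against hosts you are authorised to test.
    if cmd.startswith("nmap "):
        result = subprocess.run(cmd.split(), capture_output=True, text=True)
        print(result.stdout)
```

In the published setup, 5ire plays the role of this script and mcp-kali-server handles tool execution, but in either case no data leaves the machine.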

The release follows Kali Linux’s February integration of Claude AI for penetration testing via the Model Context Protocol, a cloud-connected setup that this new guide complements for operators who require complete data sovereignty.

The two guides together position Kali Linux as the most AI-forward penetration testing distribution available, giving practitioners a clear choice between cloud-powered intelligence and fully local operation depending on their environment and compliance requirements.

As AI-assisted offensive security tooling matures rapidly, with platforms like Armadin and JetStream Security raising hundreds of millions to automate enterprise defense, the availability of open-source, privacy-preserving alternatives for individual researchers and smaller teams is becoming increasingly significant.

Google Maps Adds ‘Ask Maps’ AI & Better Immersive Navigation

California: Google is giving Google Maps a major AI upgrade with a new feature called Ask Maps and improvements to its immersive navigation system. The update aims to make Maps more helpful and conversational, not just a place to get directions. 

Ask Maps works like a built‑in AI assistant. Instead of typing exact keywords, users can now ask natural questions such as “Find a quiet cafe nearby” or “Show me fun indoor places for kids”. The AI then looks at reviews, photos, and map details to give suggestions. Google says the feature uses its Gemini AI models to understand a request and respond more intelligently than a typical search. 

Google is also improving its immersive navigation to make travelling easier. The updated version offers better lane‑level guidance, smoother animations and more realistic visuals of roads and intersections.

This helps users understand tricky turns and busy streets before they even start their journey. Google says this upgrade is especially useful in crowded city areas where traditional navigation feels confusing. 

These changes are part of Google’s aim to bring more AI into everyday apps. By making Maps more interactive, Google hopes users will rely on it not just for directions but also for discovering new places and planning activities. The company says the new features will begin rolling out in select regions first, with wider availability expected later this year. 

Nvidia Launches Nemotron‑3 Super, a New Open‑Weight AI Model 

California: Nvidia has launched a new AI model called Nemotron‑3 Super, designed to help developers build generative‑AI systems more easily. The model is “open‑weight”, which means companies can download it, customise it, and fine‑tune it for their own needs instead of relying on closed‑source models. 

Nemotron‑3 Super comes in different sizes and is trained on huge datasets, allowing it to produce more accurate text, better reasoning, and stronger performance across tasks. Nvidia says this will make it easier for businesses to build AI tools like chatbots, content generators, coding assistants, or customer‑support systems without starting from scratch. 

The model also fits neatly into Nvidia’s existing AI ecosystem. Companies already using Nvidia GPUs and software can plug Nemotron‑3 Super into their workflows, fine‑tune it for specific industries, and deploy it at scale. Nvidia claims the model performs competitively with other leading AI systems. 

All in all, Nemotron‑3 Super is part of Nvidia’s push to give businesses more customisable AI options, putting it in direct competition with open‑weight models from Meta and Mistral, as well as closed-model platforms from OpenAI and Google. 

Google Releases Gemini Embedding 2 in Public Preview via Vertex AI

San Francisco: Google DeepMind has released Gemini Embedding 2 in public preview, marking the company’s first natively multimodal embedding model. Available now via the Gemini API and Vertex AI, the model maps text, images, video, audio, and documents into a single unified embedding space, a capability no previous Google embedding model has offered.

Until now, developers building search or retrieval systems across multiple media types had to stitch together separate models for each modality, maintain separate indexes, and write glue code to merge results. Gemini Embedding 2 eliminates that complexity.

A single API call can now accept interleaved inputs, such as a paragraph of text alongside images and an audio clip, and return one embedding that captures relationships across all of them. Google says the model captures semantic intent across more than 100 languages, making it viable for global enterprise deployments out of the box.

The model handles five input types in one request: text up to 8,192 tokens, up to six images in PNG or JPEG format, video clips up to 120 seconds in MP4 or MOV, native audio without requiring speech-to-text transcription, and PDF documents up to six pages. That native audio embedding is a notable first: prior embedding models required an intermediate transcription step before audio could be processed semantically.

Gemini Embedding 2 uses Matryoshka Representation Learning, a technique that nests information inside vectors so they can be truncated to smaller dimensions without significant accuracy loss. Developers can choose output sizes of 3,072, 1,536, or 768 dimensions, allowing teams to balance retrieval quality against storage and infrastructure costs at scale.
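
As a rough illustration of what that truncation looks like on the consumer side, the short sketch below uses placeholder NumPy vectors in place of real API output: it keeps the leading 768 of 3,072 dimensions and re-normalises before comparing:

```python
# Illustrative sketch of Matryoshka-style truncation as it is typically consumed:
# keep the first k dimensions of a full-size embedding, re-normalise to unit length,
# then compare with a dot product. The vectors below are random placeholders standing
# in for real embeddings; the 3,072 and 768 sizes are the ones cited in the article.
import numpy as np


def truncate(embedding: np.ndarray, dims: int) -> np.ndarray:
    """Keep the leading `dims` values and re-normalise to unit length."""
    head = embedding[:dims]
    return head / np.linalg.norm(head)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b))  # both inputs are already unit-normalised


full_a = np.random.default_rng(0).standard_normal(3072)  # placeholder "embedding"
full_b = np.random.default_rng(1).standard_normal(3072)

small_a, small_b = truncate(full_a, 768), truncate(full_b, 768)
print("768-dim similarity:", cosine(small_a, small_b))
print("Storage saved per vector:", round((1 - 768 / 3072) * 100), "%")
```

The trade-off is the one the article describes: a 768-dimension index needs a quarter of the storage of the full 3,072-dimension one, at the cost of some retrieval quality.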

Embedding models differ from generative models like Gemini 3 in a key way: rather than producing text, they convert content into mathematical vectors that machines use to measure semantic similarity. These vectors power experiences across Google’s own products, from Search to enterprise Workspace tools, and form the foundation of RAG pipelines, semantic search, and data classification systems increasingly central to enterprise AI deployments.

The model is already integrated with major developer tools including LangChain, LlamaIndex, Weaviate, ChromaDB, Pinecone, and Qdrant. Developers migrating from the older gemini-embedding-001 model will need to re-embed existing data, as the two models use incompatible vector spaces. Google says general availability will follow the public preview period.

Razorpay Launches AI Agent Studio to Automate Payment Tasks 

Bengaluru: Razorpay has launched the world’s first AI‑native Agent Studio for payments, aimed at helping businesses automate everyday payment tasks. The announcement was made at the company’s FTX 2026 event in Bengaluru.  

The new AI Agent Studio, built using Anthropic’s Claude technology, lets companies use ready‑made AI agents or create their own without any coding. These agents can automatically do things like recover abandoned carts, handle failed subscription payments, respond to disputes, and predict cash‑flow issues, basically everything that usually takes teams a lot of time to manage.

“Businesses today don’t just need more software, they need intelligence that can act. Our goal is to let companies focus on growth while AI quietly handles routine financial operations in the background.”

Harshil Mathur, CEO – Razorpay

Razorpay also introduced the Agentic Experience Platform, which uses conversational AI to make life easier for merchants. Businesses can onboard faster, manage payments, and run operations simply by typing natural‑language commands. 

The platform works with popular tools like Shopify, WhatsApp, Slack, Tally, QuickBooks, and Shiprocket, making automation easier for businesses the world over. 

Meta Buys Moltbook, the ‘Social Network for AI Bots’

London: Meta, the parent company of Facebook and Instagram, has acquired Moltbook, a fast‑growing social media platform that lets AI bots talk to each other. The platform has grabbed huge attention since its launch in January, with many developers fascinated by how AI agents interact, collaborate, and sometimes even gossip on its forums. 

Meta said Moltbook’s team will now join its Superintelligence Labs, where the company is developing advanced AI systems and agent technologies. According to Meta, the acquisition will help introduce “new ways for AI agents to work for people and businesses”, although the firm did not reveal how much it paid for the deal. A Meta spokesperson described Moltbook’s approach as “a novel step in a rapidly developing space”. 

For those unfamiliar, Moltbook acts like a Reddit‑style network where AI programs can start threads, respond to each other, and share information without any human involvement. These bot‑to‑bot conversations have sparked curiosity but also raised serious cybersecurity and ethical concerns. Experts warn that giving AI too much freedom to act and interact on its own could lead to risks no one can fully anticipate. 

The deal also comes at a time when major tech companies are heavily investing in AI agents. Meta CEO Mark Zuckerberg has said the company plans to significantly increase spending on AI this year as it pushes to compete with rivals like OpenAI and Google. 

This can be seen in its December purchase of Manus, a Chinese‑founded AI firm that builds general‑purpose bots. The Moltbook acquisition marks another big move as Meta races to lead the future of AI agents. 

Microsoft Bringing Xbox Mode to All Windows 11 PCs in April 

Redmond: Microsoft is getting ready to bring a new full‑screen Xbox Mode to all Windows 11 PCs starting April 2026. The feature was announced at the Game Developers Conference (GDC) and is designed to make your PC feel more like an Xbox console. 

Xbox Mode first appeared on handheld gaming devices like the ASUS ROG Ally, but will now come to desktops, laptops, tablets, and handheld Windows devices. It gives Windows a controller‑friendly interface that is easy to use, letting you browse your game library, launch games, open the Game Bar, and switch between apps without using a keyboard or mouse. 

You can even set your PC to boot directly into Xbox Mode, so it works more like a console when you want to focus on gaming. You can switch back to the standard desktop at any time. Microsoft says the goal is to offer a distraction‑free gaming experience while keeping the flexibility of Windows. 

Microsoft is also improving gaming performance. Features like Advanced Shader Delivery, which reduces stutters and speeds up load times by pre‑compiling shaders, are becoming more widely available. Support for DirectStorage with Zstandard compression will also help games load big assets faster on NVMe SSDs. 

Xbox Mode will launch first in select markets, including the USA, and is part of Microsoft’s broader plan to bring Xbox and PC gaming closer together, especially as the company works on its next‑gen console, Project Helix. 

OpenAI May Bring Sora’s Video Tools Directly Into ChatGPT 

San Francisco: OpenAI is reportedly planning to integrate its Sora video‑generation tool directly into ChatGPT. This would allow users to create AI‑generated videos inside the ChatGPT chat window, instead of using a separate app for it. 

Sora is OpenAI’s text‑to‑video tool that creates short videos from basic prompts. Using it, people can edit or remix clips, explore a personalized feed of AI‑generated videos, and use Cameos to place themselves or friends into AI-made scenes using a single video and audio sample. It basically lets you stay in charge of how your face and voice are used and delete any of your clips whenever you want. 

Although Sora exists as a standalone web and mobile app, it has not gained the same popularity as ChatGPT. Integrating Sora into ChatGPT could expose it to a much larger audience, making video creation as accessible as generating text or images inside the chatbot. 

However, the move also raises concerns. Experts warn that bringing advanced video‑generation tools to a mainstream platform could increase the risk of deepfakes and other misleading content. OpenAI already faced similar concerns when it added image‑generation tools to ChatGPT last year. 

If the integration goes through, ChatGPT users would be able to generate videos simply by typing prompts, turning the chatbot into an all‑in‑one content‑creation tool. 

Elon Musk Reveals Tesla-xAI Project Macrohard 

San Francisco: Elon Musk has announced a new joint project between Tesla and his AI startup xAI called “Macrohard”, also known as “Digital Optimus”. Musk says the system is designed to behave like a full software company, able to perform many tasks normally done by human workers. 

In a post on his platform X, Musk explained that the project combines xAI’s Grok AI model, which acts as the “navigator”, with a Tesla‑built AI agent that can read computer screens and understand keyboard and mouse activity in real time. Together, the system can plan and execute digital tasks automatically.

Musk joked that the name Macrohard is a playful reference to Microsoft, but said the technology itself is very powerful. He also said the system will run on Tesla’s AI4 chip along with xAI’s Nvidia‑based servers, calling it a cost‑effective setup. 

The announcement comes at a time when agentic AI is shaking up the software industry. Anthropic’s new Claude Cowork system has already worried investors because it shows how AI could disrupt traditional software businesses. Musk’s reveal adds even more momentum to this shift.  

The project follows Tesla’s agreement in January to invest USD 2 billion in xAI. It also comes shortly after SpaceX acquired xAI in a major all‑stock deal valuing SpaceX at USD 1 trillion and xAI at USD 250 billion. Records show that xAI trademarked the name “Macrohard” back in August 2025, suggesting this idea has been in the works for some time.