u/Such-Run-4412

Meta Is Adding “Incognito Chat” for AI

Meta is rolling out Incognito Chat for Meta AI on WhatsApp and the Meta AI app.

The pitch is simple: you can ask AI private questions without Meta being able to read the conversation.

Meta says these chats are processed in a secure environment, aren’t saved, and disappear by default.

This is clearly aimed at people who use AI for more personal stuff, like health questions, money problems, career advice, or anything they don’t want tied to their account forever.

Meta also says it’s bringing a private “Sidechat” feature later, where Meta AI can help inside a WhatsApp conversation without interrupting the main chat.

So basically, Meta is trying to make AI feel less like a public chatbot and more like a private assistant you can actually trust with sensitive questions.

Source: https://about.fb.com/news/2026/05/incognito-chat-whatsapp-meta-ai/

u/Such-Run-4412 — 21 hours ago

Anthropic Dropped a “Claude for Legal” Toolkit on GitHub

Anthropic just released Claude for Legal, a set of Claude plugins and agents built for legal work.

It covers a lot: commercial contracts, privacy, corporate law, employment, litigation, AI governance, IP, regulatory work, law school clinics, and more.

Some examples are pretty specific: vendor agreement reviewer, NDA triager, DSAR responder, launch reviewer, AI use case triager, trademark screener, docket watcher, and deposition prep.

The repo also includes connectors for tools like Slack, Google Drive, Box, DocuSign, iManage, Everlaw, CourtListener, and Ironclad.

The important part: Anthropic is very clear this is not a lawyer replacement. Outputs are drafts for attorney review, with source attribution, privilege safeguards, and gates before anything gets filed or sent.

Basically, Claude is being packaged less like a chatbot and more like a legal workflow assistant for real law teams.

Source: https://github.com/anthropics/claude-for-legal

u/Such-Run-4412 — 21 hours ago

Claude Users Are Getting Monthly Credits for Agent SDK Apps

Anthropic is giving Claude subscribers a separate monthly credit for using the Claude Agent SDK.

Starting June 15, 2026, Pro, Max, Team, and Enterprise users can claim credits for agent-style projects, the claude -p command, GitHub Actions, and third-party apps built on the Agent SDK.

The credits depend on your plan: Pro gets $20/month, Max 5x gets $100, Max 20x gets $200, and some Team/Enterprise seats get between $20 and $200.

The nice part: Agent SDK usage will no longer eat into your regular Claude plan limits. Your normal Claude chat, Claude Code, and Claude Cowork usage stay separate.

There are a few catches. The credit is per user, doesn’t roll over, and only applies after you claim it. If you go past the credit, overage billing kicks in only if you have extra usage enabled.
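Those rules compose into a simple accounting model. This is an illustrative toy sketch of the behavior as described in the announcement (claim required, no rollover, gated overage), not Anthropic’s actual billing code; the class and method names are made up.

```python
# Toy model of the credit rules as described: per-user, monthly reset
# with no rollover, inactive until claimed, and overage billed only
# when explicitly enabled. Names and numbers are illustrative.

class AgentCredit:
    def __init__(self, monthly_credit, overage_enabled=False):
        self.monthly_credit = monthly_credit
        self.overage_enabled = overage_enabled
        self.claimed = False
        self.remaining = 0.0
        self.overage_billed = 0.0

    def claim(self):
        # The credit only applies after the user claims it.
        self.claimed = True
        self.remaining = self.monthly_credit

    def new_month(self):
        # Unused credit does not roll over; the balance simply resets.
        if self.claimed:
            self.remaining = self.monthly_credit
        self.overage_billed = 0.0

    def spend(self, cost):
        """Return True if the usage is covered, False if it is blocked."""
        if not self.claimed:
            return False
        if cost <= self.remaining:
            self.remaining -= cost
            return True
        if self.overage_enabled:
            self.overage_billed += cost - self.remaining
            self.remaining = 0.0
            return True
        return False  # credit exhausted and overage not enabled

# Example: a Pro-style $20 credit with overage left off.
credit = AgentCredit(monthly_credit=20.0)
credit.claim()
assert credit.spend(15.0)      # covered by the credit
assert not credit.spend(10.0)  # would exceed the credit; overage is off
```

The point of the sketch: with overage disabled, usage simply stops at the credit ceiling instead of rolling into API billing.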

So basically, Anthropic is making it easier for regular Claude users to experiment with AI agents without immediately jumping into full API billing.

Source: https://support.claude.com/en/articles/15036540-use-the-claude-agent-sdk-with-your-claude-plan

u/Such-Run-4412 — 21 hours ago

Perplexity Says Its AI Computer Is Built With Security First

Perplexity just shared how it’s trying to make its AI “Computer” safer.

Every task runs in its own isolated sandbox, so one job can’t easily mess with another.

Instead of giving agents raw API keys, Perplexity uses short-lived proxy tokens, which is basically a safer temporary pass.

It also scans outside content with ML classifiers and its BrowseSafe model before the agent acts on it.

Files and connector data are encrypted, and uploaded files are automatically deleted after 7 days.
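Perplexity hasn’t published the internals, but the proxy-token idea is a standard pattern: the agent never sees the raw API key, only a scoped token that expires quickly and can be verified server-side. Here is a minimal stdlib-only sketch of that pattern (the scopes and signing scheme are illustrative, not Perplexity’s implementation):

```python
# Generic sketch of the short-lived proxy-token pattern: the agent gets
# a scoped, expiring token instead of the raw API key. An illustration
# of the idea using stdlib HMAC, not Perplexity's actual mechanism.
import hashlib
import hmac
import json
import time

SECRET = b"server-side-signing-key"  # held by the proxy, never the agent

def mint_token(scope: str, ttl_seconds: int = 300) -> str:
    payload = json.dumps({"scope": scope, "exp": time.time() + ttl_seconds})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str, required_scope: str) -> bool:
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    claims = json.loads(payload)
    return claims["scope"] == required_scope and time.time() < claims["exp"]

token = mint_token("search:read", ttl_seconds=300)
assert verify_token(token, "search:read")      # valid and in scope
assert not verify_token(token, "files:write")  # wrong scope is rejected
```

Even if an agent leaks its token, the blast radius is one scope for a few minutes, rather than a standing credential.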

The bigger point: AI agents are starting to do real work across apps, files, and the web. So security can’t be an afterthought anymore.

Perplexity is clearly trying to prove that agentic AI can be useful without becoming a security nightmare.

Source: https://www.perplexity.ai/hub/blog/how-we-built-security-into-computer

u/Such-Run-4412 — 21 hours ago

China Tried to Get Access to Anthropic’s New Cyber AI. Anthropic Said No.

Anthropic reportedly rejected a request from a Chinese think tank to access Claude Mythos, its powerful new AI model for finding software bugs.

This is not just another chatbot story. Mythos is built for cybersecurity, and Anthropic says it can find flaws that humans and normal security tools missed for years.

That makes it useful for defense, but also risky if the wrong people get it. A model that can find bugs fast could also help attackers move faster.

Anthropic is keeping Mythos limited for now through Project Glasswing, giving access to selected companies and government partners so they can fix vulnerabilities before bad actors exploit them.

The bigger story is the U.S.-China AI race. AI is starting to look less like a normal tech product and more like a national security weapon.

And this shows where the fight may be headed next: not just who has the best chatbot, but who controls the AI systems that can find, break, and defend the world’s software.

Source: https://www.nytimes.com/2026/05/12/us/politics/china-ai-anthropic-openai-mythos-chatgpt.html

u/Such-Run-4412 — 21 hours ago

SoftBank Just Made a Huge Profit From Its OpenAI Bet

SoftBank’s big OpenAI gamble is paying off.

The company’s annual profit jumped to around $32 billion, helped by massive gains from its OpenAI investment.

Its OpenAI stake reportedly gained around $44–45 billion in value over the past year, making it one of SoftBank’s biggest wins so far.

SoftBank has been going all-in on AI, putting tens of billions into OpenAI while also investing in chips, data centers, robotics, and other AI infrastructure.

But there’s a catch: SoftBank is also borrowing heavily and selling other assets to fund these bets.

So the story is basically this: Masayoshi Son is making another giant AI-era gamble — and for now, OpenAI is making the numbers look amazing.

Source: https://www.wsj.com/business/earnings/softbank-more-than-quadruples-annual-profit-on-44-billion-in-openai-gains-d2bab91f

u/Such-Run-4412 — 21 hours ago

Google Might Launch AI Data Centers Into Space With SpaceX

Google is reportedly talking with SpaceX about launching experimental AI data centers into orbit.

The project is called Suncatcher, and the idea is pretty wild: put Google’s AI chips on solar-powered satellites and run AI compute from space.

Why space? Because AI data centers on Earth need insane amounts of electricity, land, cooling, and infrastructure. In orbit, Google could use almost constant solar power, which could make future AI systems easier to scale.

It’s still very early. Google is planning test satellites with Planet Labs around 2027, and SpaceX could help with launches if the plan moves forward.

There are still big problems to solve, like radiation, heat, satellite repairs, and sending huge amounts of data back to Earth.

But the takeaway is clear: the AI race is getting so intense that companies are now seriously looking at space as the next place to build compute.

Source: https://www.bloomberg.com/news/articles/2026-05-12/google-in-talks-to-use-spacex-to-launch-space-data-centers-wsj

u/Such-Run-4412 — 21 hours ago

Amazon’s AI Push Created a “Tokenmaxxing” Problem

Amazon employees are reportedly using an internal AI agent tool called MeshClaw to automate unnecessary tasks just to boost their AI usage scores. The tool can handle things like code deployments, email triage, and app interactions, but some workers are using it mainly to increase token consumption and look more active with AI.

The pressure comes from Amazon’s internal AI adoption targets. The company reportedly wants more than 80% of developers using AI weekly and has tracked token usage through internal leaderboards. Amazon says these stats are not used in performance reviews, but employees believe managers are still watching the numbers.

This has created a classic bad-incentive problem: once AI usage becomes a metric, people start optimizing for the metric instead of the work. Employees are calling the behavior “tokenmaxxing” — basically burning AI tokens to look productive.

There are also security concerns. MeshClaw can act on a user’s behalf across workplace systems, which raises the risk of AI agents making mistakes, triggering unintended actions, or getting too much access inside company tools.

Source: https://www.ft.com/content/8ee0d3ef-9548-422d-8ff1-ebd48ad4b2ca?syn-25a6b1a6=1

u/Such-Run-4412 — 2 days ago

Anthropic May Buy Stainless, the Startup Behind Major AI SDKs

Anthropic is reportedly in advanced talks to acquire Stainless for at least $300 million. Stainless builds developer tools that help companies turn APIs into high-quality SDKs, documentation, and MCP servers.

The important part: Stainless is not some random dev tool. Its customers have included major AI platforms like OpenAI, Anthropic, Runway, Meta’s Llama Stack, Groq, Cerebras, LangChain, Braintrust, and Writer.

This matters because SDKs are how developers actually use AI models in real products. If the SDK is clean, reliable, and easy to update, developers can build faster. If it is messy, adoption slows down.

Stainless also fits the agent era because it helps generate MCP servers, which let AI agents connect to external tools, APIs, and data sources more easily.
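For context on what an MCP server actually speaks: MCP sits on JSON-RPC 2.0, and a client (the agent) invokes a tool with a "tools/call" request. A simplified sketch of that message shape is below; the tool name and arguments are made up for illustration.

```python
# Simplified sketch of an MCP tool invocation. MCP uses JSON-RPC 2.0;
# the agent sends a "tools/call" request naming a tool exposed by the
# server. The tool name and arguments here are hypothetical.
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

msg = make_tool_call(1, "search_invoices", {"customer": "acme", "limit": 5})
parsed = json.loads(msg)
assert parsed["method"] == "tools/call"
assert parsed["params"]["name"] == "search_invoices"
```

Generating these bindings, plus the server that answers them, from an existing API spec is the kind of work Stainless automates, which is why the acquisition fits the agent era.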

Source: https://www.theinformation.com/articles/anthropic-talks-buy-developer-tools-startup-used-openai-google?rc=mf8uqd

u/Such-Run-4412 — 2 days ago

Manus Can Now Use Your Preferred Browser for Web Automation

Manus introduced Preferred Browser, a new Browser Operator feature that lets users choose one authorized Chrome browser as the default environment for web tasks. That means Manus can work from a browser that already has the right sign-ins, extensions, permissions, and network access.

This matters because a lot of browser automation depends on account state. If Manus needs to check an analytics dashboard, open an internal tool, use a vendor portal, or run a recurring web task, it may need the exact browser where those accounts and extensions are already set up.

The setup can live on your main computer, a dedicated workstation, or even an always-online device like a Mac mini. Once selected, you can start tasks from another computer while Manus uses the preferred Chrome session in the background.

If the preferred browser is offline, Manus can fall back to another authorized browser. Availability may vary by account and rollout status.

Source: https://manus.im/blog/manus-preferred-browser

u/Such-Run-4412 — 2 days ago

Anthropic Is Reportedly Chasing a $900B+ Valuation

Anthropic is reportedly in talks to raise at least $30 billion in fresh funding at a valuation of more than $900 billion, not including the new investment. The round could close as soon as the end of May, but the deal is not finalized and no term sheet has been signed yet.

The scale is wild. Anthropic was already one of the most valuable AI labs, but a $900B+ valuation would put it near the level of the biggest technology companies in the world — and potentially ahead of OpenAI depending on final terms.

The money is about compute. Claude demand has exploded across coding, enterprise, legal, finance, and agent workflows, and Anthropic needs massive infrastructure to keep up. Recent reports say the company is also securing huge cloud and chip deals with Google, AWS, SpaceX, and others to expand capacity.

The bigger story is that frontier AI labs are no longer being valued like normal startups. Investors are treating them like future operating systems for the economy: models, agents, enterprise workflows, cybersecurity tools, coding assistants, and cloud-scale infrastructure all rolled into one.

Source: https://www.bloomberg.com/news/articles/2026-05-12/anthropic-in-talks-to-raise-30-billion-at-900-billion-valuation

u/Such-Run-4412 — 2 days ago

Meta Just Introduced Muse Spark, Its First Superintelligence Labs Model

Meta introduced Muse Spark, the first model from Meta Superintelligence Labs and its most powerful model yet. It now powers the Meta AI app and website, and it will roll out across WhatsApp, Instagram, Facebook, Messenger, Threads, and Meta’s AI glasses.

Muse Spark is built to make Meta AI faster, smarter, and more useful inside the apps people already use every day. It supports complex reasoning, multimodal tasks, image understanding, visual coding, and multiple subagents working in parallel on the same request.

The biggest shift is context. Meta AI can now pull in richer recommendations from Reels, public posts, maps, shopping content, creators, and communities. That means it is not just answering from the web — it is starting to use Meta’s own social graph and content ecosystem as an intelligence layer.

Meta is also adding new voice and live camera features. Users can talk naturally, interrupt, switch topics, change languages, generate images while speaking, and point their camera at the real world to ask questions in real time.

Shopping is another major focus. Meta AI can search Facebook Marketplace listings near you, compare new and used products across the internet, refine by price or distance, and browse public brand or creator content directly.

Source: https://about.fb.com/news/2026/04/introducing-muse-spark-meta-superintelligence-labs/

u/Such-Run-4412 — 2 days ago

Isomorphic Labs Raised $2.1B to Push AI-Designed Drugs Toward the Clinic

Isomorphic Labs, the Alphabet-backed AI drug discovery company, raised $2.1 billion in Series B funding to scale its AI drug design engine and advance its drug candidate pipeline. The round was led by Thrive Capital, with participation from Alphabet, GV, MGX, Temasek, CapitalG, and the UK Sovereign AI Fund.

The company is building IsoDDE, its AI drug design engine, to help discover and design new medicines faster across multiple disease areas and drug types. The goal is to use AI not just to analyze biology, but to actually help create drug candidates that can move through development.

The funding is meant to support global expansion, deeper R&D, and progress on Isomorphic’s internal drug pipeline. This matters because AI drug discovery is moving from “cool research demo” into the expensive part: proving whether AI-designed medicines can actually become real treatments.

Source: https://www.isomorphiclabs.com/articles/isomorphic-labs-announces-series-b-investment-round

u/Such-Run-4412 — 2 days ago

Google Is Turning Android Into a Proactive AI System

Google announced Gemini Intelligence for Android, a new AI layer designed to make phones more proactive instead of just reactive. The idea is that Android devices will not only answer questions, but also help complete multi-step tasks across apps.

Gemini will be able to handle tasks like booking rides, shopping from a grocery list, finding a syllabus in Gmail, adding required books to a cart, or using a photo of a travel brochure to find similar tours online. Users will be able to track progress through notifications, and Google says Gemini only acts after the user gives a command and stops when the task is complete.

Google is also bringing Gemini deeper into Chrome on Android. It can summarize, compare, research, and handle boring web tasks like booking appointments or reserving parking spots. Autofill is also getting smarter, using opted-in Gemini Personal Intelligence to fill complex forms across apps and Chrome.

Another new feature is Rambler, which turns messy spoken thoughts into polished messages. You can speak naturally, with pauses or corrections, and Rambler cleans it into concise text. It also supports switching between languages in the same message.

Google is also adding Create My Widget, a generative UI feature that lets users build custom Android or Wear OS widgets by describing what they want, like a meal prep dashboard or a weather widget focused only on rain and wind speed.

Source: https://blog.google/products-and-platforms/platforms/android/gemini-intelligence/

u/Such-Run-4412 — 2 days ago

Anthropic Founder Says the Next 1,000 Days Could Define the AI Era

Anthropic founder Dario Amodei has been arguing that powerful AI may arrive far sooner than most people expect — potentially as early as 2026, though he admits the timeline is uncertain. His version of “powerful AI” is not just a smarter chatbot, but a system that can outperform top humans across programming, science, math, writing, engineering, and long-running digital work.

The key idea is that AI could soon become more like a country of geniuses in a data center: millions of highly capable AI workers running at machine speed, handling tasks that would take humans hours, days, or weeks. That would radically change software, research, business operations, cybersecurity, biology, and the economy.

The urgency comes from the timeline. If the next wave of AI can automate serious parts of knowledge work, then society may not have decades to prepare. Companies, workers, schools, governments, and regulators may only have a few years to figure out what happens when intelligence becomes cheap, scalable, and available on demand.

The optimistic side is huge. Amodei has argued that powerful AI could speed up biology, medicine, neuroscience, poverty reduction, governance, and public services. But the risk side is just as serious: job disruption, misuse, concentration of power, cyber threats, and the possibility that society adapts too slowly.

Video URL: https://youtu.be/Hw7PE5a3DGo?si=jXBd1ZIrqqIAfTPc

u/Such-Run-4412 — 3 days ago

Kuaishou Wants to Spin Off Kling at a $20B Valuation

China’s Kuaishou is reportedly planning to spin off Kling AI, its AI video generation unit, ahead of a possible IPO in 2027. The target valuation is around $20 billion, which is massive considering Kuaishou’s own market value has recently been around the mid-$20B range.

Kling is one of China’s most important AI video tools, competing with players like OpenAI’s Sora, Runway, Google Veo, and other generative video platforms. The reported plan is to make Kling a more independent business so it can raise capital and scale faster.

The Information says Kling is targeting $1.3 billion in annualized revenue by Q1 next year, which shows how quickly AI video is turning from a demo market into a real business category.

Kuaishou is also reportedly looking to raise around $2 billion, with potential investors including Tencent. That would give Kling more firepower for model training, video infrastructure, creator tools, and global expansion.

Source: https://www.theinformation.com/articles/chinas-kuaishou-plans-spin-kling-ai-video-unit-20-billion-valuation?rc=mf8uqd

u/Such-Run-4412 — 3 days ago

Claude Platform Is Now Available Directly Through AWS

Anthropic launched Claude Platform on AWS, giving AWS customers access to the full Claude API platform while using AWS authentication, billing, and existing cloud commitments. That means companies can access Claude through their normal AWS setup instead of managing a separate procurement and access flow.

The platform includes major Claude API features like Claude Managed Agents, advisor strategy, web search, web fetch, code execution, Files API, Skills, MCP connector, prompt caching, citations, and batch processing. Anthropic says new Claude API features and betas will ship to the AWS platform the same day they launch on the native Claude API.

The important difference from Claude on Amazon Bedrock is control and data processing. Claude Platform on AWS gives customers the full native Anthropic API experience, but Anthropic operates the service and data is processed outside the AWS boundary. Bedrock keeps AWS as the data processor and is better for companies with strict AWS-only data requirements.

The platform supports models like Claude Opus 4.7, Sonnet 4.6, and Haiku 4.5, with new models added as they launch. It will be available in most AWS commercial regions and supports global and U.S. inference geographies.

Source: https://claude.com/blog/claude-platform-on-aws

u/Such-Run-4412 — 3 days ago

OpenAI Officially Launched Its Deployment Company

OpenAI launched the OpenAI Deployment Company, a new business unit designed to help organizations build and deploy AI systems inside their most important workflows. The goal is not just to sell models, but to send Forward Deployed Engineers into companies to redesign real operations around AI.

OpenAI is also acquiring Tomoro, an applied AI consulting and engineering firm, bringing around 150 Forward Deployed Engineers and Deployment Specialists into the Deployment Company from day one. These teams will help connect OpenAI models to a customer’s data, tools, controls, and business processes.

The partner list is massive. The company is backed by 19 global investment firms, consultancies, and system integrators, including TPG, Advent, Bain Capital, Brookfield, Goldman Sachs, SoftBank Corp., Warburg Pincus, Bain & Company, Capgemini, and McKinsey. It will launch with more than $4 billion in initial investment and stay majority-owned and controlled by OpenAI.

The important part is distribution. OpenAI says its partners sponsor more than 2,000 businesses around the world, while its consulting and integrator partners work with thousands more. That gives OpenAI a direct path into real companies that need help turning AI into measurable business results.

Source: https://openai.com/index/openai-launches-the-deployment-company/

u/Such-Run-4412 — 3 days ago

Thinking Machines Wants AI to Collaborate in Real Time, Not Just Wait for Prompts

Thinking Machines Lab introduced a research preview of interaction models — AI models designed to handle real-time collaboration natively instead of relying on external voice, video, or agent scaffolding. The goal is to make AI feel more like a live collaborator that can listen, watch, speak, interrupt, respond, and act while the user is still working.

The core problem they are targeting is the old turn-based AI interface. Today, most models wait until the user finishes speaking or typing, then respond in one block. Thinking Machines argues that this limits collaboration because real work is messy: people interrupt, correct themselves, show things visually, ask follow-ups, and change direction mid-task.

Their approach uses time-aligned micro-turns, where the model processes audio, video, and text in tiny 200ms chunks. That lets the AI handle overlapping speech, visual cues, interruptions, silence, timing, and live context instead of treating every interaction like a text message.
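Thinking Machines hasn’t published its pipeline, but the micro-turn idea can be illustrated with a toy: timestamped events from different modalities are bucketed into 200 ms windows, so overlapping speech, visual cues, and text land in the same slice instead of arriving as one big turn. Everything below is a hypothetical sketch.

```python
# Toy illustration of time-aligned micro-turns: timestamped events from
# several modalities are grouped into 200 ms windows, so the model sees
# what happened concurrently. Hypothetical sketch, not TML's pipeline.
from collections import defaultdict

CHUNK_MS = 200

def micro_turns(events):
    """events: list of (timestamp_ms, modality, payload) tuples."""
    buckets = defaultdict(list)
    for ts, modality, payload in events:
        buckets[ts // CHUNK_MS].append((modality, payload))
    # Return windows in time order; concurrent events share a window.
    return [(idx * CHUNK_MS, buckets[idx]) for idx in sorted(buckets)]

stream = [
    (50,  "audio", "so the bug is..."),
    (120, "video", "user points at the screen"),
    (410, "audio", "wait, actually this line"),
]
for start_ms, window in micro_turns(stream):
    print(start_ms, window)
# The first window (0 ms) holds both the speech and the gesture; the
# interruption at 410 ms lands in a later window on its own.
```

The contrast with turn-based chat: a normal model would get all three events as one transcript after the user stops, while a micro-turn model can react to the gesture while the sentence is still in progress.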

The system also splits work between two models: a fast interaction model that stays present with the user, and a deeper background model that handles reasoning, tools, browsing, and longer tasks. So the user gets quick live interaction while heavier thinking happens in the background.

The capabilities are very different from normal chatbots: seamless dialogue management, verbal and visual interjections, simultaneous speech, time awareness, live translation-style interaction, tool calls while talking, web search while listening, and generative UI while the conversation continues.

Thinking Machines says its model, TML-Interaction-Small, performs strongly on both intelligence and interactivity benchmarks, including better responsiveness and interaction quality than several realtime voice model baselines.

Source: https://thinkingmachines.ai/blog/interaction-models/

u/Such-Run-4412 — 3 days ago

Claude Code Now Has an Agent View for Managing Multiple Coding Agents

Anthropic introduced agent view in Claude Code, a new workspace for managing all your Claude Code sessions in one place. Instead of juggling multiple terminal tabs, tmux panes, and half-finished agent tasks, developers can now see which agents are working, waiting for input, or finished.

The feature lets users kick off new agents, send sessions to the background, peek at the latest response, reply inline when Claude needs a decision, and jump back into the full transcript when needed. You can open it by pressing the left arrow from any session or running claude agents in the terminal.

There are also background commands: /bg can add an existing session to agent view, while claude --bg [task] starts a new background session directly. That makes it easier to run several coding tasks in parallel without losing track.

Anthropic says early users are using it to dispatch multiple ideas at once, manage long-running agents like PR babysitters or dashboard updaters, and quickly scan which sessions produced pull requests.

Source: https://claude.com/blog/agent-view-in-claude-code

u/Such-Run-4412 — 3 days ago