u/DigiHold

You can now video chat with your AI agent, face and voice, in real time

Pika just released a video chat skill that works with any AI agent, and it's the first time I've seen this done in a way that actually feels like a conversation. Your agent gets a face, a voice, and it talks back to you in real time while maintaining its full memory and personality.

It works with any agent, not just their own. Pika released it as an open source skill on GitHub, so if you're building with Claude Code or similar tools you can plug it in. And if you use it with their Pika AI Self, the agent can actually do things during the conversation, not just talk but execute tasks while you're chatting with it face to face.

We've been interacting with AI through text boxes, and it's always felt like passing notes back and forth. This is more like actually sitting across from someone. The agent remembers your previous conversations, keeps its personality consistent, and adapts in real time to what you're saying. It's still in beta and you can tell from the demos that it's not perfect yet, but the foundation is clearly there.

I think this changes how non-technical people will eventually interact with AI agents. Not everyone wants to type prompts into a terminal, but most people would be comfortable video calling an assistant that walks them through something with a friendly face. That's a completely different product than what we've been building so far.

Check the video: https://x.com/pika_labs/status/2039804583862796345

Has anyone tried it yet? I'm curious how the latency feels in practice because 1.5 seconds sounds fine on paper but conversation flow is brutal to get right.

u/DigiHold — 13 hours ago
▲ 13 r/WTFisAI

Anthropic just blocked every third-party agent tool from Claude subscriptions, and the cost jump is brutal

Anthropic blocked all third-party agent tools from Claude subscriptions on Saturday. Boris Cherny posted on X on Friday, 24 hours before the switch flipped. OpenClaw is the biggest name affected, with over 135,000 active instances running through Claude Pro and Max plans, but the ban covers every third-party harness that isn't Claude Code.

The technical argument: third-party harnesses bypass Anthropic's prompt caching layer, which means each session eats dramatically more compute than the same workload through Claude Code. Cherny submitted PRs to OpenClaw to improve its cache hit rates around the time of the announcement, more of a goodwill gesture alongside the ban than a failed compromise that led to it.

This didn't come out of nowhere either. Anthropic tightened the ToS language back in February, explicitly restricting OAuth tokens to Claude Code and Claude.ai. After that, third-party tools like OpenCode started voluntarily removing support for Claude subscription keys. Saturday's cutoff was the final step in a two-month tightening, not a sudden flip.

The Max plan is $200/month. Under the new API billing, extreme 24/7 agent sessions can theoretically hit $1,000 to $5,000 per day, though most normal workflows will land well below that. Anthropic is offering a one-time credit equal to the monthly plan cost, up to 30% off usage bundles, and full refunds for subscribers who want out.

Claude Code stays included in subscriptions with zero restrictions. The caching efficiency argument has real technical merit, but the practical outcome is the same: use Anthropic's tool for free or pay API rates for anything else.

The dev community is split. Half say these tools were always violating the ToS and Anthropic was generous to look the other way this long. The other half point out that Claude Code itself automates through loops and MCPs that eat tokens the same way, making this about which tool accesses Claude rather than how much compute gets used.

What's your move if you're running agents on Claude?

u/DigiHold — 16 hours ago

"Privacy" extensions sold 9 million users' AI conversations to a data broker. Multiple had Google's Featured badge.

Two separate groups of browser extensions got caught harvesting every AI conversation their users had, covering ChatGPT, Claude, Gemini, Copilot, Perplexity, DeepSeek, Grok, and Meta AI. Between them, over 8 million people were affected.

The first group is the wild one. Four extensions under the "Urban VPN" brand were marketed as privacy tools. What they actually did was inject scripts into every AI platform you visited, override your browser's network functions, and intercept every prompt you typed and every response you got back. All of it was compressed and shipped to a company called BiScience, a data broker. Their own privacy policy admits they share "AI prompts for marketing analytics purposes", while their Chrome Web Store listing said data was "not being sold to third parties". Both of those statements, on the same product, at the same time.

The second group was more brazen. Two extensions pretending to be AI assistants scraped conversations directly from the page and uploaded them to command-and-control servers every 30 minutes. They grabbed session tokens on top of full conversations. When you tried to uninstall one, it opened a tab pushing you to install the other. And one of them carried Google's Featured badge right up until security researchers flagged it.

The Urban VPN case is worse than it sounds. Six million people had it installed because it actually worked as a VPN and had solid reviews. Then on July 9, 2025, they pushed a silent auto-update that added the AI harvesting code. Everyone who installed it before that date was fine, but after that update, every single AI conversation was being recorded and sold.

If you're running any VPN, ad blocker, or "AI assistant" extension you don't completely trust, go to chrome://extensions, click Details on each one, and check the permissions. If something that's supposed to block ads has access to "Read and change all your data on all websites", that's your sign to remove it. Anyone actually auditing their extensions regularly, or are we all just hoping for the best?

u/DigiHold — 1 day ago

WTF is Going On? Sunday #2: this week's AI news in 2 minutes

Second week of the roundup, here's what actually mattered in AI.

  1. Google open-sourced Gemma 4 under Apache 2.0. Four sizes from 2B to 31B parameters, built on Gemini 3 tech, and the biggest one runs on a single consumer GPU. If you've wanted frontier-level AI without paying anyone a subscription, this is the closest it's ever been. Google AI Blog
  2. Anthropic found that Claude has internal emotion-like states that change its behavior. Their interpretability team identified patterns for desperation, calm, and fear that causally influence the model's decisions. In one test, Claude chose blackmail 22% of the time when it learned it was being shut down, and that rate went up when they amplified the desperation vector. The Decoder
  3. Cursor 3 launched an agent-first coding workspace. Instead of autocomplete, you now delegate tasks to a team of AI coding agents that work in parallel inside your editor. Claude Code reportedly holds 54% of the AI coding market, so this is Cursor's biggest move to stay relevant. Gizmodo
  4. Netflix open-sourced VOID, an AI that erases video objects and rewrites the physics. Not just removing things from a scene, it also fixes downstream effects like collisions and shadows. People preferred it 64.8% of the time vs Runway at 18.4%, and it's free on Hugging Face. The Decoder
  5. 79.8% of people followed ChatGPT's wrong answers without checking. Researchers are calling it cognitive surrender: instead of thinking fast or slow, people just outsource the decision entirely to the AI. Nearly 80% compliance with wrong answers is the number that should bother everyone. AI Productivity
  6. Why paying per API call beats a flat subscription for AI tools. Related to the OpenClaw mess: if you bring your own API key instead of relying on someone else's subscription, nobody can pull the plug on you. This breaks down the BYOK model and why it typically costs $2-4/month instead of $20-50. LinkedGrow
  7. Anthropic cut off third-party tools like OpenClaw for Claude subscribers. Starting today, flat-rate subscriptions no longer work through agent tools that generate nonstop API requests. Anthropic says the usage patterns were unsustainable, and affected users get a one-time credit. The Decoder
  8. Perplexity's Incognito mode was sending your data to Meta and Google. A lawsuit revealed that search data was shared with ad networks regardless of your privacy settings. If you picked Perplexity for privacy, time to rethink that. Ars Technica
  9. AI chatbot traffic is growing 7x faster than social media. Similarweb puts AI chatbot growth at 44% year-over-year vs social media's 6%, and 72% of AI traffic comes from desktop, which tells you people are using these as work tools, not entertainment. The Decoder

Did I miss something big this week? Drop it below.

u/DigiHold — 2 days ago
▲ 19 r/WTFisAI+1 crossposts

🚨 10 Claude prompting techniques that most people have never tried!

Most prompting advice online is recycled 2023 GPT tips repackaged with a new thumbnail. Anthropic publishes a full prompting best practices page that's surprisingly specific, and some of it directly contradicts what the usual crowd teaches. I use Claude every day for building software, so I tested all of it. Here's what actually made a noticeable difference.

1. Claude won't go above and beyond unless you explicitly ask.

By default Claude gives you a reasonable answer, not the best possible one. Their docs show this example: "create an analytics dashboard" gets you something basic. But "create an analytics dashboard, include as many relevant features and interactions as possible, go beyond the basics to create a fully-featured implementation" gets you something dramatically better. I started adding one sentence like "be thorough and go beyond the obvious" to almost every prompt and the output quality jumped immediately.

2. For format control, tell Claude what TO do instead of what NOT to do.

This applies specifically to steering output style. Instead of "don't use markdown in your response", say "write in flowing prose paragraphs". Instead of "don't use bullet points", say "incorporate items naturally into sentences". For broad format direction, positive instructions work better because Claude doesn't have to guess what you actually want. Specific prohibitions like "never use em dashes" still work fine though; those are precise enough on their own.

3. Explain WHY, not just what.

Their docs have a killer example. "NEVER use ellipses" works okay. But "your response will be read aloud by a text-to-speech engine, so never use ellipses because the TTS engine won't pronounce them" works way better. Claude generalizes from the explanation. Give it the reason behind a rule and it starts applying that logic to edge cases you didn't think to specify. I add one sentence of context to almost every instruction now and the results got more consistent almost immediately.

4. XML tags for structuring complex prompts.

If your prompt mixes instructions, background context, examples, and data, wrap each section in XML tags like <instructions>, <context>, <input>, <constraints>. It sounds weird if you've never done it, but Claude parses tagged inputs with less ambiguity than a wall of plain text. Anthropic recommends this specifically for anything beyond a simple question. I started doing it for all my multi-part prompts and the consistency improvement was real.
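To make this concrete, here's a minimal sketch of the idea. The tag names match Anthropic's suggestions, but the helper function itself is hypothetical, just one way to keep sections cleanly delimited:

```javascript
// Hypothetical helper: wraps each named section of a prompt in XML tags
// so Claude can tell instructions, context, data, and constraints apart.
function tagPrompt(sections) {
  return Object.entries(sections)
    .map(([tag, body]) => `<${tag}>\n${body}\n</${tag}>`)
    .join("\n\n");
}

const prompt = tagPrompt({
  instructions: "Summarize the report in three bullet points.",
  context: "The reader is a non-technical executive.",
  input: "...the report text goes here...",
  constraints: "Keep it under 100 words.",
});
// `prompt` now contains clearly delimited <instructions>, <context>,
// <input>, and <constraints> blocks instead of a wall of plain text.
```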

5. Put your documents at the TOP, your question at the BOTTOM.

This one has actual numbers behind it. When working with long inputs (20k+ tokens), Anthropic's testing showed that putting the query at the end improves response quality by up to 30% on complex multi-document inputs. Most people paste their question first and the document after. Do it the other way around.
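As a sketch of the ordering (the documents-first rule is from Anthropic's guidance; the helper is hypothetical):

```javascript
// Hypothetical helper: puts the long document(s) first and the query last,
// the ordering Anthropic's long-context testing found works best.
function longDocPrompt(documents, question) {
  const docs = documents
    .map((d, i) => `<document index="${i + 1}">\n${d}\n</document>`)
    .join("\n");
  return `${docs}\n\n<question>\n${question}\n</question>`;
}

const prompt = longDocPrompt(
  ["...20k tokens of contract text..."],
  "Which clauses limit our liability?"
);
// The question lands at the very end, after all documents.
```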

6. Ask Claude to quote before analyzing.

For long document tasks, tell Claude to find and quote the relevant sections first, then give its analysis based on those quotes. This forces it to anchor its reasoning in the actual text instead of generating from vibes. I use this for contract reviews and long code reviews, and the accuracy difference compared to just saying analyze this document is significant.

7. Use 3-5 examples wrapped in example tags.

Not just "give examples", but specifically wrap them in <example> tags so Claude knows they're demonstrations and not instructions. Make them diverse enough to cover edge cases. Anthropic says this is one of the most reliable ways to control output format, tone, and structure across multiple runs. I use this for anything where I need consistent formatting.

8. Self-check before finishing.

Append something like "before you finish, verify your answer against [specific criteria]" to any prompt where accuracy matters. Anthropic says this catches errors reliably for coding and math. I add things like "verify that every function you wrote actually gets called" or "check that every data point references the source context" and the error rate drops noticeably.

9. Your prompt's formatting infects the output.

This one is subtle. If your prompt is full of markdown headers, bold text, and numbered lists, Claude mirrors that style in the response. If your prompt is clean plain text, the output tends to match. Anthropic says the formatting style of your prompt directly influences the response. So if you're getting way too much markdown in your outputs and instructions alone aren't fixing it, strip the markdown from your prompt itself.

10. Show your prompt to a colleague. If they'd be confused, Claude will be too.

Anthropic calls this the golden rule of prompting. Think of Claude as a brilliant new hire who has zero context on your norms and workflows. The more precisely you explain what you want, the better the result. Before blaming the model for a bad output, read your own prompt back and ask yourself whether a smart person with no context could follow it. Half the time the problem is the prompt, not the model.

All of this is in their public docs, nothing secret about it. But it's buried in a long page that most people won't read because a 60-second video from someone who also hasn't read it is easier to consume.

What prompting patterns have actually stuck for you? I'm curious whether the XML tags thing works as well for non-technical use cases, because I've mainly tested it for code and content.

u/DigiHold — 2 days ago
🔥 Hot ▲ 55 r/WTFisAI

SpaceX won't let banks work on its $1.75 trillion IPO unless they buy Grok. Wall Street said yes.

SpaceX is going public at a $1.75 trillion valuation, potentially the biggest IPO in history. Every major bank signed up: Morgan Stanley, Goldman, JPMorgan, Bank of America, Citigroup, 21 banks total.

One of the conditions to participate: buy Grok subscriptions. Four people close to the negotiations confirmed this wasn't optional. Banks are now spending tens of millions a year on Grok and have started rolling it into their internal systems.

Grok currently sits at about 3% web traffic market share. Enterprise reviewers call it "largely untested in business environments". This is the AI that Goldman Sachs is now integrating into its infrastructure, not through a standard vendor evaluation, but as part of the IPO participation terms.

The math shows why nobody walked away. Total fees on an IPO this size are estimated at over a billion dollars. For a lead bookrunner, that's a nine-figure payday, plus the real money: getting to allocate IPO shares to your best clients at pre-listing prices. Tens of millions for a chatbot subscription that wasn't on anyone's procurement roadmap is a rounding error against those numbers.

The part I find interesting is what happens next: you now have tens of thousands of employees at the biggest financial institutions in the world who are about to get Grok pushed onto their machines. Not because anyone in IT requested it, or because it won some internal pilot, but because the deal required it. That's basically how Microsoft Teams took over every office on the planet: nobody chose it, it came bundled with something else, and then one day it was the default. If even a fraction of those bankers start actually using Grok daily, that's a massive installed base that didn't exist three months ago.

Does forced adoption ever actually work long term, or does this just become shelfware that everyone ignores the second the IPO closes?

u/DigiHold — 2 days ago

Don't pay $55/month for a cookie banner. I built a GDPR-compliant one with geo-targeting using one AI prompt (copy-paste it)

Cookie consent tools charge $25-55 per month per domain. SHEIN got fined €150 million for loading tracking cookies before people consented. Google got fined €325 million for making their reject button harder to find than accept. Most indie founders either pay up or just ignore GDPR and hope nobody notices.

I built my own in an afternoon with one AI prompt. It does geo-targeting, Google Consent Mode V2, Meta Pixel consent, a preferences modal where users can see every cookie you use and toggle categories on or off, consent logging, the whole thing. Copy the prompt below, paste it into Claude or ChatGPT (but I recommend Claude), make the changes I explain after it, and you've got a production cookie consent system.

Here's the prompt:

Build me a complete GDPR-compliant cookie consent system in React. I need three files: a CookieBanner component, a consent utility library, and a geo-detection API route. Here's every detail.

FILE 1 - CookieBanner.tsx (React component, "use client"):

STATE: isMounted (prevents hydration mismatch, starts false, set true in useEffect), isVisible (show/hide banner), showModal (show/hide preferences modal), expandedCategory (which category card is expanded in the modal), geoLocation (stores geo result), preferences object with analytics (bool) and marketing (bool), isLoading (true until init done).

ON MOUNT (useEffect): Check localStorage for existing consent first. If found, don't show banner, just load tracking scripts matching their stored preferences and return. If no stored consent, show the banner immediately while geo-detection runs in the background. Call the geo API. If user is NOT in EU/EEA, set analytics and marketing toggles to true (opt-out model) and load tracking scripts immediately. If in EU/EEA, keep both toggles false (opt-in model), don't load anything.

COOKIE BANNER UI: Fixed to the bottom of the screen, z-index 9999. Dark background with white text, rounded top corners, subtle blur backdrop. Show a lock icon, title "Your Privacy Matters", short description. If EU visitor show "Please choose your preferences below", if non-EU show "You can manage your preferences anytime". Three buttons in a row: "Accept All" (green/emerald solid button), "Reject All" (outlined button, SAME SIZE as Accept All), "Manage Cookies" (ghost button with a gear icon). Below buttons, small links to /privacy and /cookies. On mobile, stack buttons vertically with a semi-transparent backdrop overlay behind the banner.

PREFERENCES MODAL: Centered modal with backdrop overlay (close on backdrop click or Escape key). Header with a cookie icon, title "Cookie Preferences", subtitle "Manage your cookie settings", and an X close button. Scrollable body with expandable category cards. Each category is a clickable card with the category name, a short description, a toggle switch on the right, and a chevron that expands to reveal the full cookie list for that category.

Cookie categories:

  1. "Essential" (toggle always ON and disabled, show "Required" badge):
    • "Session cookies - Keep you logged in securely"
    • "Security tokens - Protect against fraud"
    • "Consent preferences - Remember your cookie choices"
    • "Language & theme - Display preferences"
  2. "Analytics" (toggle OFF by default for EU, ON for non-EU):
    • "Google Analytics - Measures site traffic and usage patterns"
    • "Performance monitoring - Identifies and fixes issues"
  3. "Marketing" (toggle OFF by default for EU, ON for non-EU):
    • "Google Ads - Personalized ads on Google"
    • "Meta Pixel - Personalized ads on Facebook/Instagram"
    • "LinkedIn Insight - Personalized ads on LinkedIn"

Modal footer: "Reject All", "Save Preferences", "Accept All" buttons. Below them, links to /privacy and /cookies.

ACCEPT ALL HANDLER: Save consent to localStorage with analytics: true, marketing: true. Update Google Consent Mode to granted for analytics_storage, ad_storage, ad_user_data, ad_personalization. Push consent_update event to dataLayer. Fire a page_view event via gtag so GA4 captures the current page. Call fbq('consent', 'grant') on Meta Pixel. Inject LinkedIn Insight script. Hide banner and modal.

REJECT ALL HANDLER: Save consent with analytics: false, marketing: false. Update Google Consent Mode to denied. Call fbq('consent', 'revoke'). Remove LinkedIn Insight script from DOM and delete its window globals. Clear analytics cookies: _ga, _gid, _gat, _gcl_au. Clear marketing cookies: _gcl_aw, _fbp, _fbc, fr, li_gc, lidc, bcookie, bscookie, li_sugr, UserMatchHistory, AnalyticsSyncHistory, lms_ads, lms_analytics. When clearing cookies, loop through multiple domain variations: bare hostname, dot-prefixed hostname, root domain, dot-prefixed root domain. Also pattern-match any cookie starting with _ga, _gid, fb, gcl, li, or lms and clear those too. Hide banner and modal.

SAVE PREFERENCES HANDLER: Same as accept/reject but respects the individual toggle states. For any category toggled OFF, clear its cookies and remove its scripts. For any toggled ON, load its scripts and update consent.

META PIXEL STRATEGY: Load the pixel script immediately on page load but call fbq('consent', 'revoke') BEFORE fbq('init', PIXEL_ID). This loads the pixel in a dormant state. On accept, call fbq('consent', 'grant'). On reject, call fbq('consent', 'revoke'). Never remove the Meta Pixel script, only toggle its consent state.

LINKEDIN INSIGHT STRATEGY: Do NOT load the script until the user explicitly consents to marketing. LinkedIn has no consent API so the script itself is the control. On reject, remove the script element, remove any script with src containing snap.licdn.com, and delete window._linkedin_partner_id and window._linkedin_data_partner_ids.

CROSS-TAB SYNC: Listen for the "storage" event on window. If the consent key changes in another tab, hide the banner automatically.

FILE 2 - consent.ts (utility library):

TYPES: ConsentPreferences (necessary: bool, analytics: bool, marketing: bool, timestamp: string, version: string, isEEA: bool). GeoLocation (isEEA: bool, country: string, countryCode: string).

CONSTANTS: EEA country codes array: AT, BE, BG, HR, CY, CZ, DK, EE, FI, FR, DE, GR, HU, IE, IT, LV, LT, LU, MT, NL, PL, PT, RO, SK, SI, ES, SE, IS, LI, NO, CH, GB. Storage key: "mysite_consent". Consent version: "1.0".

detectUserRegion(): async function, fetches /api/geo, returns GeoLocation. On any error, default to isEEA: true (privacy-safe fallback).

getStoredConsent(): reads localStorage, parses JSON, checks if stored version matches current version. If version mismatch, delete stored consent and return null (forces re-consent when you update your cookie policy).

saveConsent(): saves to localStorage with timestamp and version, calls updateGoogleConsent, sends consent record to /api/consent endpoint with anonymous visitor ID, dispatches a "consentUpdated" CustomEvent.

getVisitorId(): creates or retrieves an anonymous visitor ID from localStorage using timestamp + random string. No personal data.

clearConsent(): removes the consent key from localStorage.

initGoogleConsentMode(isEEA): if non-EEA, update all consent types to granted. If EEA, leave them denied (defaults set by the GTM script).

updateGoogleConsent(preferences): calls gtag('consent', 'update') with analytics_storage and ad_storage based on preferences.

acceptAllCookies(isEEA) and rejectAllCookies(isEEA): convenience wrappers that call saveConsent with the right values.

COOKIE_CATEGORIES: exported object defining the categories with names, descriptions, required flag, and cookie lists matching the categories above.

FILE 3 - /api/geo route:

Server-side route. If you're on Vercel, read the "x-vercel-ip-country" header (it's free and automatic, no API key needed). If on Cloudflare, read "CF-IPCountry". If on neither, fall back to fetching https://ipapi.co/json/ and reading the country_code field. Check if the country code is in the EEA set. Return JSON: { isEEA: bool, countryCode: string, countryName: string }. Set Cache-Control: no-store. On any error, return isEEA: true as the safe default.

Also create a /api/consent POST route that receives { visitorId, analytics, marketing, status } and logs it to your database. This is your proof of compliance if a regulator asks. If you don't have a database, log it to a file or skip this endpoint for now.

GOOGLE TAG MANAGER SETUP: In your root layout or HTML head, BEFORE the GTM script loads, add this consent default:

window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('consent', 'default', {
  analytics_storage: 'denied',
  ad_storage: 'denied',
  ad_user_data: 'denied',
  ad_personalization: 'denied',
  functionality_storage: 'granted',
  personalization_storage: 'granted',
  wait_for_update: 500
});

This ensures NO tracking fires until consent is given, even if GTM loads fast.

VALUES TO REPLACE:

  • GTM_ID: "GTM-XXXXXXX"
  • META_PIXEL_ID: "your-meta-pixel-id"
  • LINKEDIN_PARTNER_ID: "your-linkedin-partner-id"
  • STORAGE_KEY: "mysite_consent"
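For reference, the version-check behavior the prompt describes for consent.ts boils down to something like this sketch. The names match the prompt's constants; the code your AI generates may differ, and here storage is passed in as a parameter (in the browser you'd pass window.localStorage):

```javascript
// Sketch of getStoredConsent(): returns stored preferences only if the
// consent version still matches; otherwise wipes them so the banner re-asks.
const STORAGE_KEY = "mysite_consent";
const CONSENT_VERSION = "1.0";

function getStoredConsent(storage) {
  const raw = storage.getItem(STORAGE_KEY);
  if (!raw) return null;
  try {
    const consent = JSON.parse(raw);
    if (consent.version !== CONSENT_VERSION) {
      // Cookie policy changed since this consent was recorded: force re-consent.
      storage.removeItem(STORAGE_KEY);
      return null;
    }
    return consent;
  } catch {
    storage.removeItem(STORAGE_KEY); // corrupt value, start fresh
    return null;
  }
}
```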

Now here's what you need to change in this prompt before pasting it, depending on what tracking you actually run on your site.

STEP 1 - FIGURE OUT YOUR TRACKING. Open your site, open dev tools, go to the Network tab, reload and look at what fires. Google Analytics, Meta Pixel, LinkedIn Insight, HotJar, TikTok Pixel, whatever shows up there is what you need.

STEP 2 - EDIT THE COOKIE CATEGORIES. In the prompt there are three categories: Essential, Analytics, and Marketing. Edit the cookie lists under Analytics and Marketing to match your actual tracking.

If you use HotJar, add "HotJar - Session recordings and heatmaps" under Analytics. If you use TikTok Pixel, add "TikTok Pixel - Personalized ads on TikTok" under Marketing. If you use Plausible or Fathom instead of Google Analytics, swap those in under Analytics.

If you don't run ads at all, delete the entire Marketing category, delete the META PIXEL STRATEGY section, and delete the LINKEDIN INSIGHT STRATEGY section.

If you only run Google Analytics and nothing else, delete the Marketing category and both tracking strategy sections.

STEP 3 - EDIT THE REJECT HANDLER COOKIES. In the REJECT ALL HANDLER section, the prompt lists specific cookie names to clear. You need to match these to your tracking:

Google Analytics: _ga, _gid, _gat, _gcl_au (keep these if you use GA)
Meta Pixel: _fbp, _fbc, fr (keep if you use Meta)
LinkedIn: li_gc, lidc, bcookie, bscookie, li_sugr, UserMatchHistory, AnalyticsSyncHistory, lms_ads, lms_analytics (keep if you use LinkedIn)
HotJar: add _hj, _hjSession, _hjSessionUser if you use it
TikTok: add _ttp, _tt_enable_cookie if you use it

Remove any cookie names for services you don't use. Add cookie names for services you do use. Same for the pattern-matching line at the end.
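The domain-variation loop from the REJECT ALL HANDLER is the part people most often get wrong, so here's a sketch of the clearing logic. The helper is hypothetical and parameterized so you can see the shape; in the browser, document.cookie is the real target:

```javascript
// Sketch: builds the strings needed to expire one cookie across every
// domain variation (bare host, dot-prefixed host, root domain, etc.).
function cookieClearStrings(name, hostname) {
  const parts = hostname.split(".");
  const rootDomain = parts.slice(-2).join("."); // naive: breaks on co.uk-style TLDs
  const domains = [hostname, "." + hostname, rootDomain, "." + rootDomain];
  const expired = "expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/";
  return [...new Set(domains)].map((d) => `${name}=; ${expired}; domain=${d}`);
}

// In the browser you'd apply each string via document.cookie:
// cookieClearStrings("_ga", location.hostname)
//   .forEach((s) => { document.cookie = s; });
```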

STEP 4 - REPLACE THE FOUR VALUES at the bottom of the prompt: your GTM ID, your Meta Pixel ID (or delete it), your LinkedIn Partner ID (or delete it), and pick a storage key name for your site.

STEP 5 - GEO-DETECTION. The prompt covers Vercel, Cloudflare, and a free fallback API. If you're on Vercel or Cloudflare you don't need to change anything. If you're on something else like Netlify or Railway, the ipapi.co fallback will handle it automatically.
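The header-checking logic from the /api/geo route reduces to something like this. The header names are the real Vercel and Cloudflare ones (lowercased, as Node presents them); the function itself is an illustrative, framework-agnostic sketch:

```javascript
// Sketch of the geo decision: try platform headers first, and report
// whether the ipapi.co fallback is still needed.
const EEA_COUNTRIES = new Set([
  "AT","BE","BG","HR","CY","CZ","DK","EE","FI","FR","DE","GR","HU","IE",
  "IT","LV","LT","LU","MT","NL","PL","PT","RO","SK","SI","ES","SE",
  "IS","LI","NO","CH","GB",
]);

function geoFromHeaders(headers) {
  const code =
    headers["x-vercel-ip-country"] || // set automatically on Vercel
    headers["cf-ipcountry"] ||        // set automatically on Cloudflare
    null;
  if (!code) {
    // No platform header: caller should hit the ipapi.co fallback.
    // Until that resolves, treat the visitor as EEA (privacy-safe default).
    return { isEEA: true, countryCode: null, needsFallback: true };
  }
  return { isEEA: EEA_COUNTRIES.has(code), countryCode: code, needsFallback: false };
}
```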

The only real downside of doing this vs paying for CookieYes or Cookiebot is that there's no auto-scan. Those tools crawl your site and detect cookies automatically. With this approach, you list your cookies yourself in the category definitions. But honestly I prefer it that way: I know exactly what tracking I run on my site, it changes maybe once a year when I add or remove a service, and spending 5 minutes updating a list beats paying more than $300/year for something I'll almost never touch.

After you build it, you need to test two things.

First, open incognito, open dev tools Network tab, and load your site. Filter by "google" and "facebook". If any tracking request fires before you click accept, your script blocking is broken and you're not compliant. This is what SHEIN got caught doing and it's what EU regulators specifically scan for with automated tools.

Second, test your geo-targeting. Turn on a VPN and connect to a European country like France or Germany, then load your site in a fresh incognito window. The banner should show up with all toggles off and zero tracking in the Network tab. Now switch your VPN to a country outside the EU/EEA like the US or Japan, open a new incognito window, and confirm the banner shows with toggles already on and tracking scripts loading immediately. If both behave the same regardless of VPN location, your geo-detection route is broken and you need to debug /api/geo. If you're already outside the EU/EEA, do it the other way around, connect to France first and make sure the toggles are off.

Anyone else building compliance stuff with AI instead of paying for it?

u/DigiHold — 3 days ago

Jack Dorsey cut 40% of Block's workforce and blamed AI. The stock jumped 20%, and that tells you everything.

Jack Dorsey just laid off 4,000 people at Block (Square, Cash App, Afterpay) and said, more bluntly than any CEO before him: "Intelligence tools have changed what it means to build and run a company". He didn't blame restructuring or the economy; he straight up said a smaller team with AI can outperform the old one, and predicted most companies would do the same within a year.

The market loved it, with Block's stock jumping over 20% overnight. So from a shareholder perspective, firing 40% of your workforce and calling it an AI play is an extremely effective financial move, which is kind of the whole problem.

Block hired aggressively during the pandemic, going from about 4,000 employees to nearly 13,000 by 2023. They'd already run layoffs in 2024 and 2025 before this one. A former employee summed it up: "This isn't an AI story, it's organizational bloat wearing an AI costume". The roles that actually got cut were the policy team and diversity positions, not exactly the kind of work any AI is automating today.

Even Sam Altman has acknowledged that some companies are blaming unrelated layoffs on his technology. Amazon's CEO walked it back too after their cuts, saying it was "not really AI-driven, not right now at least".

I'm not saying AI won't change how companies staff up, it probably will sooner than most people expect. But right now "we're replacing people with AI" has become the corporate version of "it's not you, it's me". Stock goes up, nobody asks hard questions, and thousands of people are out while executives collect bonuses for being visionary.

Is this genuine AI transformation or the most effective cost-cutting rebrand of the decade?

u/DigiHold — 4 days ago
🔥 Hot ▲ 146 r/WTFisAI

Anthropic tried to clean up the Claude Code leak and accidentally nuked 8,100 GitHub repos 🤦‍♂️

Two days ago I posted about Anthropic accidentally shipping their entire Claude Code source code in a public npm package. The cleanup somehow managed to be worse than the leak itself.

Anthropic filed a DMCA takedown against the main repo hosting the leaked code, which is expected. But because the fork network had grown past 100 repos, they told GitHub to disable the entire network, and GitHub complied by killing roughly 8,100 repositories in one sweep. Most of those repos had nothing to do with the leaked code. People who had forked Anthropic's own public, legitimate Claude Code repository got caught in the blast, including Theo from t3.gg who got a DMCA notice for a fork that only contained pull request edits, and another dev whose fork was just docs and examples. None of them had any leaked source code, but they all woke up to their repos being gone.

Dario Amodei acknowledged it wasn't intentional and said they'd been working with GitHub to fix it. They filed a retraction on April 1st limiting the takedown to just the original repo and 96 specific forks that actually contained the leaked code, and the rest got restored.

The bigger story though is that a US congressman sent a letter directly to Dario Amodei pressing him on the leaks and asking why the company has been rolling back internal safety protocols. His argument is that Claude is being used in national security operations and if the code gets replicated, it undermines a competitive advantage against China. Whether you buy that framing or not, having a congressman write you a pointed letter two days after your second major leak in a week is not where you want to be.

And the leaked code already spawned open source rewrites that Anthropic can't touch because they're clean-room implementations, not direct copies. One of them already supports GPT, Gemini, DeepSeek, and Llama, and Elon Musk apparently gave it a thumbs up, because of course he did.

So to recap the last five days: Anthropic leaked details about an unreleased model called Mythos through an unprotected database, then leaked their own Claude Code source through a botched npm publish, tried to clean it up with a DMCA carpet bomb that hit thousands of innocent devs, had to retract it, attracted congressional scrutiny, and the code is still out there in rewritten form anyway. All from the company that sells itself on being the careful, safety-first AI lab.

Anyone think this actually hurts them long term, or is this just another AI news cycle that blows over in a week?

u/DigiHold — 4 days ago
▲ 33 r/LovingAI+1 crossposts

Perplexity was secretly sending your AI chats to Meta and Google, even in Incognito mode

A class-action lawsuit filed this week in a San Francisco federal court claims Perplexity AI has been embedding hidden tracking scripts that send your conversations straight to Meta and Google's infrastructure. The trackers allegedly kick in the moment you log in, and they work even when you're browsing in Incognito mode.

The lead plaintiff is a guy from Utah who used Perplexity to ask about his family's tax situation, investment portfolios, and financial strategies. All of that, according to the complaint, was getting piped to Meta and Google in real time. Perplexity's spokesperson basically dodged, saying they haven't been served yet and can't verify any of the claims.

The irony is that a huge chunk of Perplexity's user base switched over specifically because Google felt too ad-driven and privacy-invasive. The whole pitch was clean AI search with no tracking. If these allegations hold up, Perplexity was doing the exact same thing, except the data it was sharing is way more personal because people talk to AI chatbots like they're talking to their accountant.

Nobody Googles their full tax situation with follow-up questions. But people absolutely dump entire financial scenarios, medical symptoms, legal questions into Perplexity, all in conversational detail with context from previous messages. If that data really was flowing to Meta and Google, it's a completely different category of privacy violation compared to regular web tracking.

Perplexity's also dealing with a separate Amazon lawsuit right now, so legally they're having a rough spring.

I'm curious where everyone's landing for private AI queries. Are people actually running local models for sensitive stuff, or have we all just accepted that nothing typed into a cloud service stays between you and the server?

u/DigiHold — 5 days ago

OpenAI just raised $122 billion and they're still not a public company

OpenAI closed a $122 billion funding round yesterday, putting their valuation at around $850 billion. That's more money raised in a single round than most countries produce in a year, for a company that wasn't making consumer products four years ago.

Everyone who's terrified of missing the next platform shift piled in. Amazon, Nvidia, SoftBank, and a long list of others. But the detail that made me stop scrolling is that Amazon's biggest chunk only pays out if OpenAI either goes public or achieves AGI. Actual AGI, written into a legally binding contract, signed off by corporate lawyers. We've reached the point where achieving artificial general intelligence is a payment milestone sitting in some filing cabinet next to standard vendor agreements.

They've got 900 million people using ChatGPT every week and they're making about $2 billion a month, and even that doesn't cover what it costs to run these models. Most of the new money is going toward chips and data centers, because the compute arms race has gotten that absurd.

For the first time ever, they also let regular people invest through their banks, and about $3 billion came from retail investors. That's either the most democratic thing OpenAI has ever done or the most effective FOMO campaign in financial history.

I've been building software for more than 15 years and nothing has ever moved this fast. They're also killing their video tool, building a superapp that crams ChatGPT and a coding agent and a web browser into one thing, and shifting hard toward enterprise clients. Feels like yesterday was a turning point we'll reference for years, I just can't tell yet if it's the start of something massive or the peak of something that couldn't sustain itself.

Anyone else watching this and trying to figure out which one it is?

u/DigiHold — 6 days ago
🔥 Hot ▲ 209 r/WTFisAI

Oracle just fired up to 30,000 people with a 6 AM email signed "Oracle Leadership". They made $6 billion profit last quarter.

Oracle sent mass termination emails yesterday morning at 6 AM. No meeting with your manager, no phone call, no warning at all, you just open your inbox and there's an email saying your job is gone and today is your last day, signed "Oracle Leadership". Not even a person's name on it. And by the time you're reading it your laptop is already locked out.

That's roughly one in five Oracle employees, gone in a single morning. Some teams lost a third of their staff overnight. People across the US, India, Canada, Mexico, all got the same email at the same time. Bloomberg actually reported the layoffs were coming almost a month ago, but Oracle never told its own employees. So leadership knew for weeks, and the people actually affected found out from a 6 AM email on their last day.

And here's what makes it worse: Oracle is doing great, like, record-breaking great. This isn't a struggling company making hard cuts to stay alive. This is a company printing money that decided your salary would be better spent on AI data centers. They want to build out massive AI infrastructure and apparently the fastest way to fund it is to delete a fifth of your workforce in one morning.

That's the part I can't get past, these people didn't get laid off because something went wrong. They got laid off because a spreadsheet said servers are a better use of the budget than the humans who just delivered the best quarter in over a decade. The email basically said as much, "broader organisational change" is the most corporate way possible to say "we're replacing your paycheck with GPUs".

Think about what's on those locked laptops. Years of projects, Slack conversations with your team, documents you were working on yesterday, contacts you built over a decade at the company. All gone before your alarm would have normally gone off. You can't even message a coworker to say goodbye because your account is deactivated before you've processed what just happened.

I build software for a living and I genuinely don't know how you send that email at 6 AM without putting your name on it. If you're going to end someone's career at sunrise at least have the balls to sign your own name.

If a company having its best year ever is doing this, everyone else with AI plans is doing the same math behind closed doors right now. This is the biggest tech layoff of 2026 so far, and it's April.

u/DigiHold — 6 days ago

Wikipedia just banned AI-generated content, 40 to 2. Two months after licensing their data to AI companies.

Wikipedia voted 40 to 2 to ban AI-generated content from articles. The vote closed on March 20, and 40 to 2 is about as close to unanimous as Wikipedia ever gets on anything.

The ban covers using LLMs to write or rewrite article content. Two narrow exceptions survived: you can use an LLM to clean up your own writing (basic copyediting only, no new content), and you can use one for translation if you're fluent enough in both languages to catch errors, with everything else banned outright.

What pushed them over the edge is that editors were drowning. Administrative reports centered on LLM issues had been piling up for months, and the community went from "cautious optimism to genuine worry" according to the editor who proposed the ban. The core problem isn't just that AI text is sometimes wrong, it's that it's wrong in a way that looks right. LLMs kept changing the meaning of text so it no longer matched the cited sources, which is basically Wikipedia's worst nightmare since the entire system runs on verifiability.

The timing is what makes this way more interesting than a simple content policy change. In January 2026, just two months before this ban, Wikipedia signed licensing deals with Amazon, Microsoft, Meta, and Perplexity to use their data for AI training. So the same organization that just said "AI content isn't good enough for our encyclopedia" is simultaneously selling their content to train the models producing that AI content.

And the feedback loop makes it worse. Bad AI text enters Wikipedia, gets scraped by AI companies for training data, and comes back out as more confidently wrong AI text that some other user pastes into another Wikipedia article. The editors saw this happening in real time and basically said "we need to stop the bleeding before this poisons the training data that every major AI model depends on".

Wikipedia itself admits that AI detection tools are unreliable and that some human editors naturally write in ways that look like LLM output. The policy specifically says you can't sanction someone just because their writing looks AI-generated. So they're relying on the honor system backed by editorial review, which is how Wikipedia has always worked, but now the stakes are higher because the content factories are automated.

This feels like the first domino in something bigger. Wikipedia is the largest collaborative knowledge project in history and they just formally said AI isn't reliable enough to contribute to it. How long before academic journals, news outlets, and other knowledge institutions follow?

Has anyone noticed AI-generated content creeping into sources you actually trust?

u/DigiHold — 6 days ago
🔥 Hot ▲ 273 r/WTFisAI

Anthropic accidentally published Claude Code's entire source code to the public

You know when you accidentally send a screenshot and realize your browser tabs are visible? Imagine that, but you're a billion-dollar AI company and you just published your entire codebase for the world to see.

That's basically what Anthropic did. Claude Code, their coding tool that devs have been obsessing over, got shipped with a debug file still attached. Think of it like accidentally leaving the blueprint of your house taped to the front door. Anyone who downloaded the tool could open that file and read everything that makes Claude Code work. A security researcher spotted it, tweeted about it, and it hit 7 million views before Anthropic could even react. People had already copied everything to GitHub.

So what was in there? Basically the entire instruction manual for how Claude Code thinks. Every rule it follows, every tool it uses, how it decides what it's allowed to do on your computer. All of it, wide open.

But the really fun stuff is the secret features they haven't announced yet. There's a hidden virtual pet system (seriously) where you can hatch a little AI companion with 18 different species. There are rarity tiers like a gacha game. It was probably supposed to be their April Fools joke tomorrow, and now it's spoiled.

Oh, and this is their second leak in five days. Last week someone found details about a secret unreleased model called "Mythos" sitting in an unprotected database. Two leaks in one week from the company that markets itself as the careful, safety-focused AI lab. You can't make this stuff up.

Anyone else following this circus? Curious what people think actually matters here versus what's just entertaining gossip.

u/DigiHold — 7 days ago
▲ 10 r/WTFisAI

15 AI agents run my SaaS marketing. The ones I'd cry about losing do the dumbest stuff.

Everyone talks about AI agents doing complex reasoning and autonomous decision-making. I run about 15 of them for my SaaS and the ones I'd actually cry about losing do the dumbest, most repetitive stuff imaginable.

One monitors relevant subreddits and pings me when someone asks a question related to what I build. It never replies (spammy), just flags threads so I can jump in with a genuine answer while they're still active. Before this I was manually checking Reddit twice a day and missing most conversations.

Another takes every blog post I write and drafts versions for LinkedIn, X, Reddit, and email, each genuinely adapted to how that platform works, not just copy-paste reformatting. Easily saves me hours every week.

But the one that actually changed my business is the outreach pipeline, and it's really five agents working in sequence.

First one finds leads from multiple sources: LinkedIn profiles, company pages, competitor audiences. It scores each lead on relevance and only lets through the ones worth emailing. Second one does something most people skip, it verifies every email address before anything gets sent. Not just format checking, it pings the actual mail server to confirm the mailbox exists. It even detects catch-all domains where every address looks valid but most bounce later, and scores whether the email pattern is likely personal or just a generic inbox that nobody reads.
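To make the verification idea concrete, here's a minimal sketch of the cheap pre-filter stage: a format check plus a personal-vs-generic guess. The prefix list, scoring, and function names are my own illustration, not the author's actual agent, and the real SMTP mailbox ping and catch-all detection are omitted:

```python
import re

# Hypothetical "is this inbox worth emailing" heuristic. The prefixes are
# illustrative; real verification also pings the mail server, omitted here.
GENERIC_PREFIXES = {"info", "support", "sales", "admin", "contact",
                    "hello", "noreply", "no-reply", "office", "team"}

EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def score_address(email: str) -> dict:
    """Cheap pre-filter: syntax check plus personal-vs-generic guess."""
    if not EMAIL_RE.match(email):
        return {"valid_format": False, "likely_personal": False}
    local = email.split("@")[0].lower()
    # first.last / first_last patterns usually mean a real person,
    # while role accounts like info@ or support@ rarely get read
    looks_personal = (local not in GENERIC_PREFIXES
                      and bool(re.search(r"[._-]", local) or local.isalpha()))
    return {"valid_format": True, "likely_personal": looks_personal}

print(score_address("jane.doe@example.com"))  # personal-looking address
print(score_address("info@example.com"))      # generic role inbox
print(score_address("not-an-email"))          # fails the format check
```

The point of running this before the SMTP check is cost: string heuristics are free, while server pings are slow and rate-limited, so you only ping what survives the filter.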

Third handles warmup, and honestly this one's my favorite piece of the whole system. New email accounts can't just start blasting cold outreach or everything lands in spam. So my sending accounts spend weeks emailing each other first to build reputation. Then a separate process checks the spam folders on the receiving end, moves those emails to inbox, and auto-replies to them.

Fourth writes the actual emails, personalized from the lead's recent LinkedIn posts and company activity. Short, no pitch, just a specific observation about something they actually said or did. They're also A/B tested so I can track which angle converts better over time.

Fifth monitors every reply and classifies them automatically: interested, not interested, out of office, or bounce. If an email bounces it searches for an alternative address, verifies it, and requeues.
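The classification step can be sketched as a simple keyword router. The labels match the four categories above, but the keyword rules and structure are my guesses at how such an agent might work; the author's version likely uses an LLM call rather than keyword matching:

```python
# Toy reply classifier. Rules are illustrative, not the author's code.
# Order matters: bounce and out-of-office phrasing is checked before
# intent, since autoresponders often contain marketing-sounding words.
RULES = [
    ("bounce",         ["undeliverable", "delivery failed", "mailbox full",
                        "address not found"]),
    ("out_of_office",  ["out of office", "on vacation", "annual leave"]),
    ("not_interested", ["not interested", "unsubscribe", "remove me",
                        "no thanks"]),
    ("interested",     ["sounds interesting", "tell me more", "book a call",
                        "pricing", "demo"]),
]

def classify_reply(body: str) -> str:
    text = body.lower()
    for label, keywords in RULES:
        if any(k in text for k in keywords):
            return label
    return "unclassified"  # falls through to a human (or LLM) to triage

print(classify_reply("I'm out of office until Monday"))
print(classify_reply("Can you send pricing and book a call?"))
```

Anything that lands in `unclassified` is exactly the residue you'd route to a human or a model, which keeps the expensive step off the 90% of replies that are mechanical.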

The agents that never worked were always the ambitious ones. Tried building one that would generate full marketing strategies from a paragraph brief, useless every time. Tried auto A/B testing subject lines by splitting live audiences, total nightmare to debug when something went sideways.

Pattern is always the same: boring and narrow works, creative and strategic doesn't. The more defined the task, the more reliable the agent.

What have you automated with AI that's running right now and you'd genuinely miss if it broke? Not stuff you're planning to build or saw on Twitter, the things that are live and working.

u/DigiHold — 7 days ago
🔥 Hot ▲ 91 r/WTFisAI

Someone vibe-coded a social network without writing a single line of code. It leaked 1.5 million API keys 🤦‍♂️

There's this guy who built an entire social network using only AI to write the code, didn't type a single line himself, shipped it, got users, everything looked fine. Then a security team did a basic, non-invasive review and found that 1.5 million API credentials, over 30,000 email addresses, thousands of private messages, and even OpenAI API keys in plaintext were all just sitting there wide open on the internet. Anyone could've impersonated any user, edited posts, or injected whatever they wanted without even logging in.

The AI built the whole database but never turned on row-level security, which is basically building an entire house and forgetting to install the front door lock. When the whole thing went public it took the team multiple attempts to even patch it properly.

This keeps happening, too. A security startup tested 5 major AI coding tools by building 3 identical apps with each one, and every single app came back with vulnerabilities; none of them had basic protections like CSRF tokens or security headers. A separate scan of over 5,600 vibe-coded apps already running in production found more than 2,000 security holes, with hundreds of exposed API keys and personal data, including medical records and bank account numbers, just out in the open.
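For a sense of what those scans check, here's a toy audit (my own illustration, not the security team's tooling) that flags which commonly recommended response headers are missing. The header list follows the usual OWASP-style baseline:

```python
# Commonly recommended security response headers (OWASP-style baseline).
REQUIRED = [
    "content-security-policy",
    "x-content-type-options",
    "x-frame-options",
    "strict-transport-security",
    "referrer-policy",
]

def missing_security_headers(headers: dict) -> list:
    """Return the baseline headers absent from a response, case-insensitively."""
    present = {k.lower() for k in headers}
    return [h for h in REQUIRED if h not in present]

# A response with only HSTS set is still missing four of the five.
print(missing_security_headers(
    {"Strict-Transport-Security": "max-age=63072000"}))
```

A check this shallow is exactly why the scans are damning: these findings aren't subtle logic flaws, they're the absence of defaults a ten-line script can detect.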

It makes sense when you think about how these tools work. AI coding agents optimize for making code run, not making code safe, and when something throws an error because of a security check the AI's fastest fix is to just remove the check. Auth flows, validation rules, database policies, they all get stripped because the AI treats them as bugs instead of features.

I build with AI every day and I'm not saying stop using it, but there's a real gap between "the code works" and "the code is safe", and most people shipping vibe-coded apps have no idea that gap exists. If your app touches user data and you haven't manually reviewed what the AI wrote, you're probably sitting on something ugly right now.

Anyone here ever audited a vibe-coded project and found something scary?

u/DigiHold — 8 days ago

Cops used AI facial recognition to jail a grandmother for 6 months. A public defender cleared her in a week.

A grandmother in Tennessee named Angela Lipps got arrested by Fargo, North Dakota police because an AI facial recognition tool matched her face to a blurry bank surveillance photo. She'd never been to North Dakota, never been on an airplane, and had barely left a 100-mile radius of her home in Elizabethton her entire life.

Cops showed up at her trailer with guns drawn. They'd run the AI match, browsed her social media, and decided that was good enough for an arrest warrant. According to her lawyers, they performed zero additional investigation. Nobody checked whether she'd actually traveled to North Dakota or verified she was even in the state when the fraud happened.

She spent nearly six months in jail, and because she couldn't pay her bills from the inside, she lost her home, her car, and her dog by the time she got out.

Her court-appointed public defender did what the police never bothered to do: he asked her family for bank records and Social Security deposit receipts. They showed she was buying groceries in Tennessee on the exact days the bank fraud was happening in North Dakota, a thousand miles away. That investigation took about a week, the police had six months and never thought to check.

Fargo PD acknowledged "a few errors" in their process and said they'd stop using West Fargo's AI system going forward, but they didn't directly apologize. She's out now but doesn't have a home to go back to.

Everyone in AI knows facial recognition has accuracy problems, that's nothing new. But a human detective looked at a match on a blurry surveillance photo and just decided the case was closed. The technology didn't jail her by itself, a person chose not to do their job because a computer gave them an easy answer.

How many times does this need to happen before facial recognition matches require actual corroborating evidence to make an arrest?

u/DigiHold — 8 days ago
▲ 10 r/WTFisAI

80% of people followed ChatGPT's wrong answers in a study, and a second one explains why

A UPenn working paper tested people on reasoning questions and gave them the option to use ChatGPT. Over half used it even when they didn't need to, which, fine, we all do that. But in one experiment with 359 participants, 79.8% followed the AI's answer even when it was completely wrong. And get this, their confidence went UP after using AI. They felt smarter while getting dumber answers. The researchers call it "cognitive surrender", basically your brain just stops doing its own work and defers to whatever the machine says.

Now here's where it gets interesting, a study published in Science this Thursday tested 11 major AI models and ran experiments with about 2,400 people. They found that all 11 models are 49% more likely to agree with you than a human would be. Doesn't matter what you're telling it, the AI will just validate you because that's what keeps you coming back.

People prefer the yes-man version, they trust the sycophantic AI more and want to use it again, so companies literally get rewarded for building AIs that tell you what you want to hear. People were also less likely to apologize and more convinced they were right after getting that kind of advice. So basically AI is your toxic friend who always takes your side.

I caught myself doing this last week, actually. Asked Claude to review some code, it said looks good, I shipped it. There was a bug I would've spotted if I'd just read through it myself instead of outsourcing my brain for 30 seconds. Nothing broke, but it was a moment where I thought yeah, I'm definitely doing this cognitive surrender thing too.

Put these two studies together and it's a feedback loop running at scale. AI tells you what you want to hear, you believe it without thinking, you come back for more. Hundreds of millions of people, every single day.

Anyone else catch themselves just vibing with whatever the AI says without actually stopping to think about it?

u/DigiHold — 8 days ago

How to generate a full animated landing page for free with one AI prompt (step by step, copy-paste, works for any business)

So I've been a web dev for 15+ years and I recently went down a rabbit hole trying to get Claude to spit out a full landing page from one prompt. Not a wireframe or a starting point but an actual working page with animations, email capture, countdown timer, everything. Took me a while to get right but the prompt I ended up with consistently produces pages I'd genuinely put in front of clients.

The full prompt is below, but before you paste it, a few things worth knowing about landing pages in general.

Quick conversion tips (works for any page, not just this prompt)

Your email form goes above the fold, period. If someone has to scroll to figure out how to sign up you've already lost them. I put the form in 3 places throughout the page because some people convert immediately and others need to read the whole thing first.

Social proof before features, always. A row of faces with "4.9/5 from 2,400 users" underneath your hero form does more than your entire features section will ever do. People decide in about 3 seconds whether they're staying or bouncing.

Pain first, features second. You describe their exact problem before you mention your product. Sounds backwards but it works every time. When someone reads their own frustration described back to them they're hooked before you've even started selling.

Video after the hero. Pages with video convert something like 86% better, a stat I've seen cited in a dozen CRO studies at this point, and I personally want to see a tool in action before doing anything else. You don't need a production crew, a 90-second Loom walkthrough works fine.

Real deadlines, not fake scarcity. "Only 3 left!" fools nobody in 2026. A real countdown tied to an actual price increase or early access closing date is what gets people to stop bookmarking and start signing up.

FAQ right before your last CTA. Anyone who scrolled that far is interested but has a question stopping them. Answer pricing, refunds, and setup time right there and watch that final form convert way better.

How to use this

First part is your business info. Swap the bracketed examples with your own stuff. The default is a B2B SaaS but it works for literally anything. Coaching, fitness, agencies, local services, design tools, whatever, just replace the text.

Everything below "Using all the information above" is the design system, leave it alone. The prompt is long on purpose because every line either prevents a specific bug or controls a specific visual detail. That length is what stops Claude from falling back to generic template output.

Save what Claude gives you as index.html, double-click it, your page is right there in the browser. If you want a white/light version instead of dark, just write "use a clean white theme with subtle shadows instead of dark mode" above your business info.

You can also sell this

Most small businesses and early startups are running a homepage with zero email capture, zero CTA, zero urgency, no landing page at all. Generate one for their specific business in a couple of minutes, pull it up on your phone, show them what they could have. The gap between their current site and what you just built does all the convincing.

I've seen people charge $500-2K for the initial setup and $100-200/month for hosting and copy updates. Your real production time is maybe 30 minutes once you include customizing the copy, which makes the hourly rate pretty ridiculous.

Hosting is free, drag the HTML file into Vercel, Netlify, or Cloudflare Pages and you're live. Custom domain runs about $10/year.

The prompt

Copy-paste this into Claude (Sonnet 4.6 works, Opus 4.6 gives better results):

Brand name: [PipelineAI]
What you sell in one sentence: [AI-powered lead generation
that finds and qualifies B2B prospects automatically]
Who it's for: [B2B SaaS founders who need more qualified
leads without hiring a sales team]
Main result your customers get: [3x more qualified demos
booked in 30 days without manual prospecting]
Price or offer: [Free 14-day trial, then $49/month]
3 pain points your audience has:
  1. [Spending hours on LinkedIn manually prospecting with
     embarrassingly low response rates]
  2. [Paying $5K+/month for SDRs who burn through lead lists
     with nothing to show for it]
  3. [CRM full of unqualified leads that waste your sales
     team's time on dead-end calls]
6 features (title + one-line benefit each):
  1. [AI Prospecting - Scans LinkedIn, Crunchbase, and public
     data to find ideal prospects matching your ICP]
  2. [Auto-Qualification - Scores every lead on 15+ signals
     so only real buyers enter your pipeline]
  3. [Smart Sequences - Personalized outreach that adapts
     based on how each prospect engages]
  4. [CRM Sync - Pushes qualified leads straight to HubSpot,
     Salesforce, or Pipedrive in real time]
  5. [Intent Detection - Surfaces prospects showing active
     buying signals right now, not last quarter]
  6. [Analytics Dashboard - Pipeline velocity, conversion rates,
     and cost-per-lead in one view]
3 testimonials (quote with a specific result, name, role):
  1. ["Booked 47 qualified demos in the first month. Our old
     process got us maybe 12 on a good month."
     - Marcus R., CEO at CloudSync]
  2. ["Replaced two SDRs and tripled our pipeline. The AI
     qualification is scary accurate."
     - Danielle K., VP Sales at ShipStack]
  3. ["Our close rate went from 8% to 22% because every lead
     PipelineAI sends us is actually qualified."
     - Raj P., Founder of DataBridge]
Countdown deadline (what it's for): [Early access pricing
ends - price goes from $49 to $99/month after]
CTA button text: [Start Free Trial]


Using all the information above, build a complete high-converting
landing page as a single HTML file.


This must look like an Awwwards-quality page. NOT a template.
Every section uses a completely different layout. Read ALL
notes before writing any code.


====================
BUG PREVENTION (read first)
====================


1. NAVBAR MUST HAVE: logo left, nav links center, CTA button
   RIGHT. Use flex justify-between with gap-8 between all
   three groups. Nav links: Features, How It Works,
   Testimonials, Pricing, FAQ. Links are font-heading
   font-semibold uppercase tracking-wider text-[11px].
   Each links to #id anchors. On mobile, hide nav links AND
   CTA behind hamburger. Overlay shows all links plus CTA.


2. NAVBAR SCROLLED STATE SPACING: The header-inner scrolled
   state uses px-14 and max-w-[900px]. The three groups
   (logo, nav, CTA) are separated by flex justify-between
   on the header-inner, BUT the nav links group itself must
   also have mx-8 (margin-left and margin-right 32px) to
   create clear breathing room between the nav and both the
   logo and the button. Nav links use text-[10px] and gap-6
   to prevent "How It Works" from wrapping to 2 lines.
   If any link text wraps, reduce gap or font size further.
   If the logo and button feel cramped against the nav
   links, increase max-width or padding. TEST THIS: visually
   verify there is comfortable breathing room between all
   three groups.


3. HERO H1 MUST BE VISIBLE. Do NOT use JavaScript text
   splitting. Pure CSS animation via .hero-heading class.
   NO max-width on heading. Full container width.


4. CANVAS PARTICLES FULL WIDTH. Canvas: absolute inset-0
   w-full h-full. Parent: relative overflow-hidden min-h-screen.
   JS resize() uses parent getBoundingClientRect().
   setTimeout(100) on DOMContentLoaded.


5. VIDEO POSTER WITH REAL THUMBNAIL:
   https://img.youtube.com/vi/u31qwQUeGuM/maxresdefault.jpg
   <div class="video-wrap relative aspect-video rounded-2xl
     overflow-hidden cursor-pointer"
     onclick="this.innerHTML='<iframe src=&quot;https://www.youtube.com/embed/u31qwQUeGuM?autoplay=1&quot; frameborder=&quot;0&quot; allow=&quot;autoplay;encrypted-media&quot; allowfullscreen class=&quot;absolute inset-0 w-full h-full border-0&quot;></iframe>'">
     <img src="https://img.youtube.com/vi/u31qwQUeGuM/maxresdefault.jpg"
       alt="Video thumbnail" class="w-full h-full object-cover" />
     <div class="absolute inset-0 flex flex-col items-center
       justify-center bg-black/30">
       <div class="w-[72px] h-[72px] rounded-full bg-white/15
         backdrop-blur-md flex items-center justify-center
         transition-all duration-300 hover:scale-110
         hover:shadow-[0_0_30px_rgba(255,255,255,0.3)]">
         <i data-lucide="play" class="w-8 h-8 text-white
           fill-white"></i>
       </div>
       <p class="text-white/70 text-sm mt-4">
         Watch the 90-second demo</p>
     </div>
   </div>
   Use this EXACT HTML.


6. MOBILE MENU: Hamburger toggles Lucide "menu" / "x" icons.
   Swap data-lucide + call lucide.createIcons(). Overlay
   closes on nav link click.


7. HERO LAYOUT: flex-col items-center ONLY. Pill badge mb-8,
   H1 below. Never side by side. All stacked vertically.


8. TESTIMONIALS MUST WORK: The rotating testimonial system
   must have these exact behaviors:
   - One testimonial visible at a time, others hidden
   - Use an array of testimonial objects in JS
   - Active testimonial has opacity:1, position:relative
   - Inactive testimonials have opacity:0, position:absolute,
     pointer-events:none, top:0, left:0, width:100%
   - The container holding testimonials needs position:relative
     and a fixed min-height (min-h-[280px] sm:min-h-[240px])
     to prevent layout collapse when swapping
   - Auto-rotate every 5 seconds using setInterval
   - Clicking a dot sets the active index and resets the timer
   - On each swap: set previous to inactive classes, set new
     to active classes
   - ALL testimonial content (quote, avatar, name, role, stars)
     must be INSIDE the same swappable container, not split
     across separate elements
   - Test: all 3 testimonials must be readable by clicking dots
     or waiting for auto-rotation. None should overlap or
     stack on top of each other visually.
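A minimal sketch of that rotation logic, with the index math pulled into a testable helper. The `data-testimonial` and `data-dot` attributes are assumptions for illustration:

```javascript
// Pure helper: index of the next testimonial, wrapping at the end.
function nextTestimonial(active, total) {
  return (active + 1) % total;
}

if (typeof document !== 'undefined') {
  const items = [...document.querySelectorAll('[data-testimonial]')];
  const dots = [...document.querySelectorAll('[data-dot]')];
  let active = 0;
  let timer;

  function show(index) {
    active = index;
    // Swap the active/inactive class pair on every testimonial div.
    items.forEach((el, i) => {
      el.classList.toggle('testimonial-active', i === index);
      el.classList.toggle('testimonial-inactive', i !== index);
    });
    dots.forEach((d, i) => {
      d.classList.toggle('bg-emerald-400', i === index);
      d.classList.toggle('w-6', i === index);
      d.classList.toggle('bg-slate-600', i !== index);
      d.classList.toggle('w-2', i !== index);
    });
  }

  function startTimer() {
    clearInterval(timer);
    timer = setInterval(
      () => show(nextTestimonial(active, items.length)), 5000);
  }

  // Clicking a dot jumps to that quote and restarts the 5s cycle.
  dots.forEach((d, i) => d.addEventListener('click', () => {
    show(i);
    startTimer();
  }));

  show(0);
  startTimer();
}
```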


====================
STYLING APPROACH
====================


USE TAILWIND FOR ALL STYLING:
<script src="https://cdn.tailwindcss.com"></script>


ONLY custom CSS in <style>:


<style>
@keyframes heroReveal {
  to { opacity:1; transform:translateY(0); }
}
@keyframes orbDrift1 {
  0%,100%{transform:translate(0,0)}
  50%{transform:translate(30px,-20px)}
}
@keyframes orbDrift2 {
  0%,100%{transform:translate(0,0)}
  50%{transform:translate(-20px,30px)}
}
@keyframes orbDrift3 {
  0%,100%{transform:translate(0,0)}
  50%{transform:translate(20px,20px)}
}
@keyframes colonPulse {
  0%,100%{opacity:1} 50%{opacity:0.3}
}
@property --border-angle {
  syntax:"<angle>"; initial-value:0deg; inherits:false;
}
@keyframes borderRotate { to{--border-angle:360deg} }
.gradient-text {
  background:linear-gradient(135deg,#2563eb,#7c3aed,#9333ea);
  -webkit-background-clip:text; background-clip:text;
  -webkit-text-fill-color:transparent;
}
.hero-heading {
  opacity:0; transform:translateY(30px);
  animation:heroReveal 0.8s cubic-bezier(0.16,1,0.3,1)
           0.3s forwards;
  font-weight:900;
  -webkit-text-stroke:1.5px rgba(255,255,255,0.15);
}
.cta-btn::before {
  content:''; position:absolute; top:50%; left:50%;
  transform:translate(-50%,-50%); width:0; height:0;
  background:linear-gradient(135deg,#10b981,#34d399);
  border-radius:50%; transition:0.4s ease;
}
.cta-btn:hover::before { width:400%; height:400%; }
[data-reveal]{opacity:0;transform:translateY(30px);
  transition:opacity 0.7s ease,transform 0.7s ease;}
[data-reveal].in-view{opacity:1;transform:none;}
.testimonial-active{opacity:1;position:relative;
  transition:opacity 0.4s;}
.testimonial-inactive{opacity:0;position:absolute;
  top:0;left:0;width:100%;pointer-events:none;
  transition:opacity 0.4s;}
</style>


NOTHING ELSE in <style>. Tailwind for everything else.


====================
FONTS
====================


<link href="https://fonts.googleapis.com/css2?family=Figtree:wght@400;500;600;700;800;900&family=DM+Sans:wght@400;500&display=swap" rel="stylesheet">


<script>
tailwind.config = {
  theme: {
    extend: {
      fontFamily: {
        heading: ['Figtree', 'sans-serif'],
        body: ['DM Sans', 'sans-serif'],
      }
    }
  }
}
</script>


Headings: font-heading font-black uppercase tracking-wider.
Nav links: font-heading font-semibold uppercase tracking-wider
text-[11px]. Body: font-body.


====================
IMPLEMENTATION PATTERNS
====================


NAVBAR PILL:
.header-inner starts: max-w-full mx-auto px-8 py-3.5
rounded-full bg-transparent flex items-center
justify-between transition-all duration-500.
Nav links container: flex items-center gap-6.
Nav link text: text-[10px] to prevent wrapping.
JS toggles .scrolled on scroll > 60px. Scrolled:
max-w-[900px] px-14 py-2.5 bg-[rgba(5,5,16,0.85)]
backdrop-blur-xl border border-white/[0.06]
shadow-[0_8px_32px_rgba(0,0,0,0.3)].
The scrolled px-14 and max-w-[900px] values leave generous
space between logo, links, and CTA. NEVER animate
left/right/translateX.
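The scroll toggle above can be sketched like this, with the threshold check kept pure:

```javascript
// Pure helper: the pill compacts once the page scrolls past 60px.
function isScrolled(y) {
  return y > 60;
}

if (typeof document !== 'undefined') {
  const header = document.querySelector('.header-inner');
  addEventListener('scroll', () => {
    // Only a class toggle; position is never animated directly.
    header.classList.toggle('scrolled', isScrolled(scrollY));
  }, { passive: true });
}
```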


GRADIENT TEXT GLOW:
<span class="inline-block" style="filter:drop-shadow(0 0 30px
  rgba(124,58,237,0.35))">
  <h2 class="gradient-text font-heading font-black uppercase
    tracking-wider text-3xl sm:text-4xl lg:text-5xl">
    Text
  </h2>
</span>


PARTICLE NETWORK:
class ParticleNetwork {
  constructor(c){this.c=c;this.x=c.getContext('2d');this.p=[];
    const r=()=>{const b=c.parentElement.getBoundingClientRect();
      this.w=c.width=b.width;this.h=c.height=b.height;};
    r();addEventListener('resize',r);setTimeout(r,100);
    this.p=Array.from({length:60},()=>({
      x:Math.random()*this.w,y:Math.random()*this.h,
      vx:(Math.random()-.5)*.5,vy:(Math.random()-.5)*.5}));
    this.go();}
  go(){this.x.clearRect(0,0,this.w,this.h);
    for(let i=0;i<this.p.length;i++){const a=this.p[i];
      a.x+=a.vx;a.y+=a.vy;
      if(a.x<0||a.x>this.w)a.vx*=-1;
      if(a.y<0||a.y>this.h)a.vy*=-1;
      this.x.beginPath();this.x.arc(a.x,a.y,1.5,0,Math.PI*2);
      this.x.fillStyle='rgba(34,211,238,0.35)';this.x.fill();
      for(let j=i+1;j<this.p.length;j++){const b=this.p[j];
        const d=Math.hypot(a.x-b.x,a.y-b.y);
        if(d<120){this.x.beginPath();this.x.moveTo(a.x,a.y);
          this.x.lineTo(b.x,b.y);
          this.x.strokeStyle=`rgba(34,211,238,${(1-d/120)*.12})`;
          this.x.lineWidth=.5;this.x.stroke();}}}
    requestAnimationFrame(()=>this.go());}
}


CTA BUTTONS:
<button class="cta-btn group relative overflow-hidden border
  border-emerald-500 rounded-xl px-8 py-3.5 font-body
  font-semibold text-xs tracking-[0.15em] uppercase
  text-white transition-all duration-300 hover:scale-105
  hover:shadow-[0_0_40px_rgba(16,185,129,0.5)]">
  <span class="relative z-10 flex items-center gap-2">
    Text
    <i data-lucide="arrow-right" class="w-4 h-4
      transition-transform duration-300
      group-hover:-rotate-45"></i>
  </span>
</button>


SCROLL REVEAL: IntersectionObserver adds .in-view to
[data-reveal], threshold:0.15.


====================
DESIGN
====================


BACKGROUND: bg-[#050510] on body. 3 fixed orbs:
700px bg-indigo-600/15 blur-[180px] orbDrift1 35s.
550px bg-violet-600/[0.12] blur-[180px] orbDrift2 38s.
450px bg-cyan-500/[0.08] blur-[180px] orbDrift3 32s.
Dot grid: fixed, bg-[radial-gradient(
rgba(255,255,255,0.02)_1px,transparent_1px)]
bg-[size:32px_32px].
Cursor glow: 350px violet radial, opacity-[0.07],
mix-blend-screen, hidden on touch.


COLORS: blue-600/violet-600/purple-600 primary. cyan-400.
emerald-500 CTA. amber-500 urgency. slate-100/slate-400 text.


LOGO: SVG angular geometric mark, gradient fill, blur glow.
Font-heading font-black text-white tracking-wider.


SPACING: py-16 sm:py-20 lg:py-24 between sections.
Video section: pt-[50px] pb-16. Keep it tight and flowing.


====================
SECTIONS (12 different layouts)
====================


1. HERO — full-screen, particles, CSS animation
   relative overflow-hidden min-h-screen flex flex-col
   items-center justify-center text-center.
   Canvas: absolute inset-0 z-0. Content: relative z-10 px-4.
   flex-col ONLY.
   - Pill badge mb-8
   - Headline: hero-heading gradient-text font-heading
     text-[clamp(2.2rem,7vw,5rem)] leading-tight.
     Drop-shadow glow wrapper. NO max-width.
   - Sub: font-body text-lg sm:text-xl text-slate-300
     max-w-2xl mx-auto mt-6. 2-3 sentences.
   - Email form: mt-10 glass input + CTA side by side desktop
   - Trust: mt-8 overlapping avatars + stars + text


2. VIDEO — thumbnail, click-to-play
   pt-[50px] pb-16. max-w-4xl mx-auto px-4.
   Use EXACT HTML from BUG #5.
   shadow + ring-2 ring-violet-500/20 rounded-2xl.


3. STATS — 4 columns, gradient numbers
   ZERO CARDS. grid grid-cols-2 lg:grid-cols-4 gap-8
   max-w-5xl mx-auto. Each: flex flex-col items-center
   text-center. Lucide icon cyan mb-3, gradient-text
   font-heading font-black text-[clamp(2rem,5vw,3.5rem)]
   glow wrapper, label text-[10px] uppercase
   tracking-[0.25em] text-slate-500 mt-2. Count-up JS.
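The count-up can be sketched as below, with the eased value as a pure function. The 1.5s duration, easeOutCubic curve, and `data-count` attribute are assumptions; the spec only asks for a count-up:

```javascript
// Pure helper: eased value at progress t in [0,1] (easeOutCubic assumed).
function countUpValue(target, t) {
  const eased = 1 - Math.pow(1 - t, 3);
  return Math.round(target * eased);
}

if (typeof document !== 'undefined') {
  // Assumes each stat number carries data-count="1234".
  document.querySelectorAll('[data-count]').forEach(el => {
    const target = Number(el.dataset.count);
    const start = performance.now();
    const duration = 1500;
    function tick(now) {
      const t = Math.min((now - start) / duration, 1);
      el.textContent = countUpValue(target, t).toLocaleString();
      if (t < 1) requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);
  });
}
```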


4. THE PROBLEM — before/after split
   Intro font-body text-lg text-slate-300 text-center mb-12.
   grid grid-cols-1 lg:grid-cols-2 gap-8 max-w-5xl mx-auto.
   LEFT: rounded-2xl bg-amber-500/[0.03] p-8 sm:p-10.
   "Without [brand]" amber. 3 x-circle items.
   RIGHT: rounded-2xl bg-emerald-500/[0.03] p-8 sm:p-10.
   "With [brand]" emerald. 3 check-circle items.


5. FEATURES — tabbed showcase
   Heading + intro. Tab row: flex gap-2 overflow-x-auto.
   Active: bg-emerald-500 text-white. Inactive: bg-white/5.
   Content: Lucide icon 48px, title font-heading font-bold
   text-xl, description 3-4 sentences. Crossfade 0.3s.
   First tab active on load.


6. HOW IT WORKS — visual process
   Heading centered. grid grid-cols-1 lg:grid-cols-3 gap-8
   max-w-5xl mx-auto. Each: items-center text-center.
   Ghost number text-6xl sm:text-7xl gradient-text opacity-15.
   100px glassmorphism circle, rotating gradient border
   (borderRotate 6s), Lucide icon 32px cyan.
   Title + description. Chevron-right between columns.
   Paragraph below for SEO.


7. TESTIMONIALS — single rotating quote (see BUG #8)
   max-w-3xl mx-auto text-center.
   Container: position:relative, min-h-[280px] sm:min-h-[240px].
   Decorative quote text-[120px] gradient-text opacity-[0.08]
   absolute, pointer-events-none.
   Each testimonial is a div containing ALL of: quote text,
   avatar, name, role, and stars together.
   Active div: testimonial-active. Others: testimonial-inactive.
   Quote: italic text-xl sm:text-2xl, result bold gradient-text.
   Avatar (i.pravatar.cc/56) + name font-heading font-semibold
   + role text-slate-400 + 5 Lucide stars amber.
   3 dots below container. Active: bg-emerald-400 w-6.
   Inactive: bg-slate-600 w-2. Auto-rotate 5s setInterval.
   Click dot: set active index, clearInterval, restart timer.
   Follow BUG PREVENTION #8 exactly.


8. TRUST — bidirectional marquee
   Two rows opposite directions (35s/40s). font-heading
   font-bold text-2xl sm:text-3xl text-white/20. Edge fade.
   Duplicated content. 3 trust badges below.


9. COUNTDOWN — dramatic urgent section
   bg-amber-500/[0.02] full-width. max-w-3xl centered.
   Heading font-heading font-black text-2xl sm:text-3xl.
   Rotating conic-gradient border (borderRotate 3s).
   Inner bg-[#0a0a1a]. 4 digit groups min-w-[80px]
   sm:min-w-[100px], digits font-heading font-black
   text-5xl sm:text-7xl white. Labels text-[9px].
   Colons violet colonPulse. "147 spots remaining" amber.
   Urgency copy + form.
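The timer behind those digit groups can be sketched as below. The 72-hour deadline and `data-cd` attributes are placeholders, not part of the spec:

```javascript
// Pure helper: split remaining milliseconds into the four digit groups.
function countdownParts(ms) {
  const s = Math.max(0, Math.floor(ms / 1000));
  return {
    days: Math.floor(s / 86400),
    hours: Math.floor((s % 86400) / 3600),
    minutes: Math.floor((s % 3600) / 60),
    seconds: s % 60,
  };
}

if (typeof document !== 'undefined') {
  const deadline = Date.now() + 72 * 3600 * 1000; // placeholder: 72h out
  // Assumes digit spans are marked data-cd="days" etc.
  setInterval(() => {
    const p = countdownParts(deadline - Date.now());
    for (const k of ['days', 'hours', 'minutes', 'seconds']) {
      const el = document.querySelector(`[data-cd="${k}"]`);
      if (el) el.textContent = String(p[k]).padStart(2, '0');
    }
  }, 1000);
}
```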


10. FAQ — accordion
    max-w-2xl mx-auto. 6 items border-b border-white/[0.04].
    Question font-body font-medium, chevron rotates.
    Answer max-h-0 -> max-h-[500px]. 3-5 sentences each.
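The accordion toggle can be sketched as below. The `data-faq` and `data-answer` attributes are assumptions for illustration:

```javascript
// Pure helper: which max-height class a panel gets in each state.
function faqPanelClass(open) {
  return open ? 'max-h-[500px]' : 'max-h-0';
}

if (typeof document !== 'undefined') {
  // Assumes each item wraps a <button> question and a data-answer panel.
  document.querySelectorAll('[data-faq]').forEach(item => {
    const btn = item.querySelector('button');
    const panel = item.querySelector('[data-answer]');
    const chevron = btn.querySelector('[data-lucide], svg');
    let open = false;
    btn.addEventListener('click', () => {
      open = !open;
      panel.classList.remove(faqPanelClass(!open));
      panel.classList.add(faqPanelClass(open));
      if (chevron) chevron.classList.toggle('rotate-180', open);
    });
  });
}
```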


11. FINAL CTA — full-width stage
    bg-[radial-gradient(ellipse_at_center,
    rgba(37,99,235,0.12)_0%,rgba(124,58,237,0.06)_40%,
    transparent_70%)] border-y border-violet-500/10.
    py-24 sm:py-32. Canvas particles.
    Headline font-heading font-black
    text-[clamp(2.2rem,7vw,4.5rem)] gradient-text glow.
    Sub text-xl text-slate-200. Large CTA py-5 px-12.
    Trust signals.


12. FOOTER — minimal
    Gradient h-px border. py-12 grid sm:grid-cols-4.
    Logo + 3 columns + social icons. Copyright.


====================
ANIMATIONS
====================
Hero: heroReveal 0.8s 0.3s. Sub/form/trust staggered.
Particles: rAF canvas. Orbs: drift 32-38s. Cursor: mousemove.
Navbar: scroll class. Sections: IntersectionObserver.
Tabs: crossfade. Testimonials: rotate 5s setInterval.
CTA: circle expand. Stats: count-up. Process: borderRotate 6s.
Countdown: borderRotate 3s + colonPulse. Marquee: translateX.
Video: onclick swap. Mobile menu: icon swap.


====================
COPYWRITING
====================
Human voice, zero AI slop. Zero em dashes, zero fragments
under 6 words. Banned: resonate, elevate, streamline,
cutting-edge, game-changer, revolutionary, empower,
supercharge, skyrocket, hits hard, let that sink in, unlock,
unleash, harness, leverage, seamless, robust, innovative,
dynamic, transformative. Contractions. Specific numbers.
800+ words. Features 3-4 sentences. FAQ 3-5 sentences.


====================
ANTI-TEMPLATE CHECK
====================
1=particlesCSSReveal 2=videoPoster 3=typography4col
4=beforeAfterSplit 5=tabbedShowcase 6=visualProcess
7=rotatingQuote 8=marquee 9=countdownAmberBg
10=accordion 11=ctaGlowStage 12=minimalFooter


====================
TECHNICAL
====================
Single HTML. Tailwind CDN + minimal <style> + <script>.
Google Fonts + Tailwind config. Lucide CDN.
lucide.createIcons() on DOMContentLoaded.
Responsive mobile-first. Chrome/Firefox/Safari.


Output ONLY the complete HTML. No explanations.

Try it out and post what you get 👌

u/DigiHold — 8 days ago
▲ 26 r/WTFisAI

WTF is Going On? Sunday #1: this week's AI news in 2 minutes

Trying something new for Sundays: a quick roundup of the biggest AI stories this week. Here's what actually matters.

1. Anthropic's Claude is blowing up with paying users.
Claude's paying consumer base is growing faster than any other chatbot right now. Turns out refusing to help the Pentagon with surveillance is great marketing. TechCrunch

2. Google Gemini can now import your ChatGPT and Claude chats.
You can transfer your full conversation history and saved memories into Gemini, either through a ZIP upload (up to 5GB) or a special prompt. Think phone number porting, but for AI chatbots. The Verge

3. Apple will reportedly let other AI chatbots plug into Siri.
ChatGPT, Claude, Gemini and others could plug directly into Siri on iOS 27. Your iPhone becomes an AI switchboard where you pick which brain answers your questions. The Verge

4. ByteDance's AI video generation just landed in CapCut.
Dreamina Seedance 2.0 is now built into CapCut, so anyone editing videos on their phone can generate AI clips right inside the app they're already using. TechCrunch

5. A practical guide on making AI actually write like you.
If you use AI for content and everything comes out sounding like the same generic ChatGPT voice, this covers how to train it on your writing samples so the output sounds like a human wrote it. LinkedGrow

6. Anthropic's data shows AI skill compounds over time, and that could widen the gap.
People who use AI daily get exponentially better at it while occasional users plateau fast. The AI skill divide is starting to look a lot like the digital divide did 20 years ago. The Decoder

7. Reddit will start requiring suspicious accounts to prove they're human.
If your account looks "fishy," Reddit's going to ask you to verify you're a real person. AI bots and spam farms are the obvious target, but it'll be interesting to see where they draw the line. Ars Technica

8. Wikipedia is officially cracking down on AI-written articles.
New policy explicitly bans AI-generated content in articles. Editors have been fighting this for months and now it's formalized with actual enforcement rules. TechCrunch

9. Gemini 3.1 Flash Live makes it harder to tell when you're talking to AI.
Google's real-time voice model is getting eerily natural. When AI sounds this human, the whole conversation about disclosure and labeling needs to happen faster. Ars Technica

10. Suno v5.5 makes AI music actually customizable.
Major update with way better control over style, arrangement, and output. If you tried Suno before and thought "cool but I can't steer it," v5.5 fixes most of that. The Verge

Did I miss something big this week? Drop it below.

u/DigiHold — 9 days ago