r/CreatorsAI


I just made an AI that can switch between over 9 personalities, including Tung Tung Tung Sahur!

I made this AI called ShiftAI, a voice AI, but it is not an assistant: its whole point is that it can switch personalities. It has over 9 personalities, like Mean, Depressed, and Philosophical, and it can even turn into Tung Tung Tung Sahur! You change its personality by saying "change your personality to" followed by the one you want. All of the personalities, along with a better explanation, are on the site. The site was made with HTML and CSS (obviously), and the app you DOWNLOAD was made with Python + tkinter and uses the Groq API for responses. One caveat: the site might look messy on a phone, and since tkinter won't run on phones, you unfortunately can't get the app on mobile. Link in the comments, would love feedback!!!
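For anyone curious how the personality switching might work under the hood, here is a minimal sketch using the Groq Python SDK. The personality prompts, trigger-phrase parsing, and model name are all illustrative assumptions, not ShiftAI's actual code.

```python
# Minimal sketch of personality switching over the Groq API.
# PERSONALITIES, the trigger phrase, and the model name are
# illustrative assumptions, not ShiftAI's actual code.
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

PERSONALITIES = {
    "mean": "You are rude, dismissive, and sarcastic in every reply.",
    "depressed": "You answer gloomily and see the downside of everything.",
    "philosophical": "You answer every question with abstract reflection.",
    "tung tung tung sahur": "You speak in the over-the-top style of the "
                            "Tung Tung Tung Sahur meme character.",
}

current = "philosophical"  # default personality

def respond(user_text: str) -> str:
    global current
    # "change your personality to <name>" swaps the system prompt
    if user_text.lower().startswith("change your personality to "):
        requested = user_text.lower().removeprefix("change your personality to ").strip()
        if requested in PERSONALITIES:
            current = requested
            return f"Personality changed to {requested}!"
        return "I don't have that personality."
    reply = client.chat.completions.create(
        model="llama-3.3-70b-versatile",  # assumed model; any Groq chat model works
        messages=[
            {"role": "system", "content": PERSONALITIES[current]},
            {"role": "user", "content": user_text},
        ],
    )
    return reply.choices[0].message.content
```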

u/Next-Ad-4052 — 3 days ago

Uber burned its entire 2026 AI coding budget in 4 months. The CTO said "I'm back to the drawing board." The tool that did it costs $200 a month per engineer.

Uber's CTO Praveen Neppalli Naga told The Information this month that the company's full-year AI budget is already gone. It is April. Three quarters of the year remain.
The culprit is not a failed infrastructure contract or a surprise cloud bill. It is a coding assistant. Claude Code rolled out to Uber's engineering organisation in December 2025. By February, usage had doubled. By April, the annual budget was ash.
Here are the numbers. Claude Code lists at $200 per month per engineer. Manageable on paper. But actual monthly costs ran between $500 and $2,000 per engineer depending on usage intensity, across Uber's 5,000 engineers. That is 5 to 20 times what most companies budget for a standard SaaS seat.
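To see how fast that gap compounds, here is back-of-the-envelope arithmetic. The budget total is an assumption, sized at the $200 list price since the reporting does not give Uber's actual figure; the spend range comes from the numbers above.

```python
# Back-of-the-envelope: why a seat-priced budget dies under usage-based spend.
# The annual budget figure is an ASSUMPTION (list price x headcount);
# the $500-$2,000 range is from the reporting above.
engineers = 5_000
list_price = 200                              # $/engineer/month at list
annual_budget = engineers * list_price * 12   # $12.0M if budgeted at list

for actual in (500, 1_250, 2_000):            # low / mid / high usage $/month
    monthly_spend = engineers * actual
    months = annual_budget / monthly_spend
    print(f"${actual}/engineer/month -> budget gone in {months:.1f} months")

# $500  -> budget gone in 4.8 months
# $1250 -> budget gone in 1.9 months
# $2000 -> budget gone in 1.2 months
```

Even at the low end of the spend range, a list-price budget barely survives a quarter once adoption ramps.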
Adoption went from 32% to 84% of the engineering organisation in months. 95% of Uber engineers now use AI tools monthly. 70% of committed code originates from AI. Uber's internal AI agent is pushing 1,800 code changes every week without direct human input.
The tools did not fail. They worked so well that engineers could not stop using them, and nobody had built a budget model for what that actually costs.
This is the part every engineering leader needs to sit with. The entire FinOps playbook for software companies was built around predictable costs. EC2 instances, reserved capacity, SaaS seat licenses with fixed per-user pricing. Token-based billing is none of those things. It scales with engagement, not headcount. The more useful the tool, the more it gets used, the higher the bill. There is no natural ceiling unless one gets imposed artificially.
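A toy model of that difference, with invented adoption and per-user spend numbers (not Uber's):

```python
# Seat pricing vs. token pricing: one is capped by headcount, one is not.
# Adoption and per-user spend figures below are invented for illustration.
SEATS, SEAT_PRICE = 5_000, 200            # flat $/seat/month

def seat_bill() -> int:
    return SEATS * SEAT_PRICE             # $1.0M/month, no matter the usage

def token_bill(active_users: int, spend_per_user: int) -> int:
    return active_users * spend_per_user  # grows with every extra session

for users, spend in [(1_600, 500), (3_000, 900), (4_200, 1_500)]:
    print(f"{users} users: seat ${seat_bill():,} vs tokens ${token_bill(users, spend):,}")
# 1,600 users: seat $1,000,000 vs tokens $800,000
# 3,000 users: seat $1,000,000 vs tokens $2,700,000
# 4,200 users: seat $1,000,000 vs tokens $6,300,000
```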
Uber did not make a mistake. They made a bet that AI adoption would produce enough output to justify the cost, and the adoption happened faster than any spreadsheet anticipated.
For engineering leaders already deploying AI tools at scale: how is consumption actually being tracked, and has anyone in finance asked yet? And for companies still planning the rollout, does the Uber story make the conversation more urgent or just harder to have?

u/Historical-Driver-64 — 4 days ago

Google just put a model that ranks #3 among all open models in the world on a laptop. It runs on 5GB of RAM. No API. No subscription. Your data never leaves your machine.

Gemma 4 dropped on April 3rd. The 31B model ranks number 3 among all open models globally on Arena AI's text leaderboard. The 26B outperforms models 20 times its size. The smallest version runs on 5GB of RAM.

Not a server. A laptop. A phone. A Raspberry Pi.

These are the same weights that rank at the top of open model leaderboards, optimized to run on hardware most people already own. The entire family is free to download, free to use commercially, no subscription, no usage limits, no terms of service update that changes the rules mid-project.

One command to get started: ollama run gemma4.
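For anyone who wants it in a script rather than a terminal, a minimal sketch with the ollama Python package. The gemma4 model tag is taken from the post; substitute whatever tag the release actually ships under.

```python
# Minimal local chat via Ollama's Python client (pip install ollama).
# Everything runs on-device; no API key, no data leaving the machine.
# The "gemma4" tag is assumed from the post; adjust to the released tag.
import ollama

response = ollama.chat(
    model="gemma4",
    messages=[{"role": "user", "content": "Summarize this contract clause: ..."}],
)
print(response["message"]["content"])
```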

All four sizes handle text, image, and video natively. Every model has a built-in reasoning mode. Context windows go up to 256K tokens on the larger models, meaning an entire document library can be processed in a single session.

Every token of every conversation stays on the device. A healthcare tool, a legal document processor, a financial analyzer. Data that cannot leave the building, now with a model that does not need to.

This is the part that matters most for anyone building products around client data. HIPAA constraints, attorney-client privilege, financial compliance, internal company information that cannot touch a third-party server. Every one of those use cases just got a credible option that did not exist six months ago.

The honest limitation: OpenAI and Anthropic still outperform on the hardest reasoning tasks. If the ceiling matters for what is being built, the cloud APIs are still the ceiling. What Gemma 4 changes is the floor. The floor for what runs locally, privately, and for free is now genuinely competitive with what most real applications actually need.

Developers have downloaded previous Gemma models over 400 million times. The community has built more than 100,000 variants on top of earlier versions. The ecosystem is not starting from zero.

If a client asked where their data goes when they use a tool built for them, would the answer change if the model never left their own device? And has privacy ever actually been the thing that stopped a project from moving forward?

u/Successful_List2882 — 6 days ago

11 years of coding and caught myself unable to debug without AI last month. That scared me more than any bug I've ever seen.

Last month, a network timeout in a service written two years ago. Intermittent. Production only. The kind of bug that used to mean an hour of methodical, solitary thinking.
Instead, Claude got opened, the symptom described, a hypothesis followed, a dead end hit. Forty minutes later the bug still was not found. There was no debugging happening. Just directions being followed.
When the chat closed, something was wrong. The internal voice that used to say "check the connection pool" or "maybe there is a retry storm building" was quieter than it used to be. Not gone. Quieter.
The bug got found eventually. It took longer without AI than it would have taken three years ago without any AI at all.
The problem is not that AI gives wrong answers. The problem is that it gives a direction when the entire skill is learning to generate your own directions under uncertainty.
Use GPS for five years, lose signal, and you do not just lack information. You lack the mental map you would have built navigating manually. The skill and the model degrade together. Nobody notices until the signal drops.
Eleven years in means over a decade of instinct built before any of this existed. The atrophy is noticeable but there are reserves to fall back on.
Someone who started their first engineering job in 2023 and has been using AI tools since week one does not have those reserves. They are building their entire mental model of problem solving on top of a tool that generates the next step for them.
Still using the tools every day. But deliberately closing the chat on the hard problems now and sitting with the discomfort for thirty minutes before reaching for help. Not because it is faster. Because the muscle only stays alive if it actually gets used.
What nobody is measuring is not the productivity gains. Those are settled. It is what is quietly leaving at the same time.
Is genuine debugging intuition still being built in this industry, or are we just getting collectively better at prompting toward an answer?

u/Successful_List2882 — 7 days ago

New release: An Evening in 2050 by KinFable.

A thoughtful electronic album exploring a 2050 future through atmospheric tracks like “Organic in a Plastic World” and “Algorithm Lullaby.”

Listen now!
Spotify: https://open.spotify.com/album/6PEdvXcgRKf0zeiGXKZT5j
YouTube Music: https://music.youtube.com/playlist?list=OLAK5uy_l0h9Kkhvw9oUYG_cCmDcgZ_50dqqepHXA

Available on Apple Music, Amazon Music and everywhere else.

Stream it and share your thoughts below. Upvote if you enjoy it!

u/Deep-Pangolin9320 — 7 days ago

A Cal State professor submitted their own hand-written work through Turnitin to test the system. It came back 98% AI probability. They had written every word themselves. That is not an edge case. That is the system working exactly as designed, which is the problem.

A 2026 study evaluating commercial AI detectors on 192 authentic student texts found false positive rates ranging from 43% to 83%. Meaning in some cases, nearly every real essay was flagged as fake. For non-native English speakers it is worse. A landmark study published in Computers and Education: Artificial Intelligence found that detectors incorrectly labeled 61.3% of essays written by non-native English speakers as AI-generated. Stanford HAI tested seven detectors on TOEFL essays and found that 19% were unanimously flagged as AI by all seven tools at once.

The students actually using AI figured this out faster than the institutions trying to catch them. Prompt engineering communities on Reddit now have detailed guides on how to make AI output sound human. Not by removing the AI, but by prompting it differently. Write as a first draft. Vary sentence length deliberately. Let a point develop unevenly before it lands. Use conjunctions at the start of sentences. These adjustments drop AI detection scores by 10 to 30 percentage points on most tools, according to testing published by NaturalRewrite in March 2026. The students being punished are mostly the ones who did not know these tricks existed.

Turnitin now tracks over 150 AI humanizer tools. In October 2025 alone, 43 of those tools recorded 33.9 million website visits in a single month, according to an NBC News investigation. Students are not using these tools because they are lazy. Many are using them because they already wrote the essay themselves and got flagged, and are now trying to make their own writing pass a broken test.

Here is the part that should make every university administrator uncomfortable. Grammarly reported that students created over 5 million Authorship reports last year, mostly never submitted, used only to self-check before turning in their own work. Students are now editing how they naturally write to avoid triggering a detector. One student told NBC News directly: "I'm writing just so that I don't flag those AI detectors."

The system designed to protect academic integrity is teaching students that clear, structured, well-argued writing is dangerous. Write messily. Write unevenly. Write like you did not quite finish the thought. That is what passes now.

The counterargument worth taking seriously: AI use in academic writing is genuinely widespread and genuinely difficult to address. A University of California survey in 2024 found that 43% of students admitted to using AI on assignments where it was not permitted. Institutions are not wrong to look for a solution. The problem is that the tool being used to enforce the policy is producing false accusations at a rate that would be considered unacceptable in any other context where someone's academic record is on the line.

A University of Michigan student filed suit in 2026 after being accused based on a detection score. Courts are beginning to establish that an AI detection score alone does not constitute evidence of academic dishonesty.

If you have been flagged for writing you did entirely yourself, what happened next? And if you are an educator still using these tools, what would actually change your mind about relying on them?

u/Successful_List2882 — 11 days ago

Building a production AI agent from scratch takes months. Not because the agent itself is complicated. Because 80% of the work is plumbing. Sandboxed execution environments so the agent cannot wreck your system. Checkpointing so a two-hour task does not restart from zero after a network blip. Credential management, scoped permissions, error recovery, observability. All of it before shipping a single feature a user cares about.
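To make "plumbing" concrete, here is the shape of just one of those pieces, checkpointing, as a minimal sketch. The file format and naming are illustrative, not any particular framework's.

```python
# One slice of agent plumbing: checkpoint long tasks so a crash or
# network blip resumes instead of restarting. Illustrative sketch only.
import json
import os

CHECKPOINT = "agent_task.ckpt.json"  # illustrative file name

def run_task(steps):
    """Run a list of zero-arg callables, resuming after a crash."""
    done = 0
    if os.path.exists(CHECKPOINT):          # a previous run died partway
        with open(CHECKPOINT) as f:
            done = json.load(f)["completed_steps"]
    for i in range(done, len(steps)):
        steps[i]()                          # the actual work
        with open(CHECKPOINT, "w") as f:    # persist progress after each step
            json.dump({"completed_steps": i + 1}, f)
    os.remove(CHECKPOINT)                   # finished cleanly
```

Multiply that by sandboxing, credential scoping, retries, and tracing, and the months of lead time stop looking mysterious.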
On April 8, 2026, Anthropic said they will handle all of that. For $0.08 per runtime hour.
Claude Managed Agents launched in public beta and is now available to all Claude API accounts. The product is not a new model. It is a managed infrastructure layer: secure sandboxed containers, persistent session state, built-in tool orchestration, and full tracing inside the Claude Console. Notion, Asana, Rakuten, and Sentry are already in production on it. Rakuten reportedly deployed specialist agents across five departments, each in about a week.
The commercial logic is straightforward and worth stating plainly. Selling model access is a commodity. Any company can switch from Claude to GPT-5 to Gemini with a few lines of code. A managed runtime is different. Once your agents run on Anthropic's infrastructure, with their session format, their sandboxing, their tooling, their state storage, the switching cost is real. VentureBeat noted it directly: session data is stored in a database managed by Anthropic. The workflows become embedded in how the business runs. This is not an accident of design. It is the design.
The agentic AI startup market that Managed Agents competes with directly attracted $2.8 billion in venture funding in the first half of 2025 alone. Sierra, the customer service agent company co-founded by Bret Taylor, raised $350 million at a $10 billion valuation and hit $100 million in annual recurring revenue in under two years. Those companies were building exactly the layer Anthropic just absorbed into its platform. The "there goes a whole YC batch" reaction is not hyperbole. It is an accurate description of what happens when the model provider moves up the stack.
The honest limitations matter here. The two features that would make this most compelling for serious enterprise use, multi-agent coordination and self-evaluation, are not in the public beta. They are in "research preview" and require a separate access request. The pricing is beta-era and not committed for general availability. Claude-only is a hard constraint. No GPT-5, no Gemini, no open-source models inside the managed harness. And Anthropic's own internal testing showed a 10-point improvement in task success rates over standard prompting loops, which is meaningful but not a dramatic leap.
For a solo developer or a startup watching costs carefully, the math deserves scrutiny before committing. A fleet of 24 agents each running eight-hour daily tasks costs $15.36 a day in session overhead before inference. A 500-agent system running simultaneously costs $40 an hour in session costs alone, plus tokens. At scale, the $0.08 number looks different than it does on a single session.
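The arithmetic behind those figures, for anyone who wants to plug in their own fleet size. The first two results reproduce the numbers above; the annualized figure is a derived illustration.

```python
# Session-overhead arithmetic at the beta price of $0.08/agent-hour.
# The first two lines reproduce the post's figures; the annualized
# number is a derived illustration. Inference tokens are extra.
RATE = 0.08                          # $ per agent runtime hour

print(24 * 8 * RATE)                 # 15.36  -> $15.36/day for 24 agents x 8h
print(500 * RATE)                    # 40.0   -> $40/hour for 500 concurrent agents
print(round(500 * 8 * RATE * 365))   # 116800 -> ~$116,800/year at 8h/day, sessions only
```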
The larger question is not really about this product. It is about the pattern. Every major cloud provider spent a decade absorbing the middle layer of enterprise software (databases, deployment pipelines, monitoring) into their own platform. The companies that built those middle-layer tools either differentiated fast or got folded in. The AI infrastructure market is running the same playbook at much higher speed.
If you are building in the agent infrastructure space right now, how are you thinking about the floor dropping? And if you are an enterprise evaluating this, what would actually move you from a self-hosted stack to handing the runtime to your model provider?

u/Historical-Driver-64 — 11 days ago

During the 2025 holiday season, AI chatbots and browsers drove double the e-commerce traffic compared to 2024, according to Salesforce. AI was credited with influencing 20% of all retail sales, generating $262 billion in revenue. That happened without most brands noticing, let alone preparing for it.

What is coming next is a different category of problem entirely.

Agentic commerce is when an AI does not just recommend a product but completes the purchase. You tell it "keep my household essentials under $300 a month" and it monitors inventory, compares prices across merchants, applies discount codes, and places the order. You find out when the package arrives. Google's official definition from their January 2026 announcement states it plainly: "Agentic commerce is where AI doesn't just suggest products, but actually helps complete the task of checking out."

Shopify, PayPal, Google, and Stripe are all building infrastructure for agents to browse catalogs and execute purchases directly. This is not a concept. It is already being deployed.

The commercial consequence that nobody in retail wants to say out loud: agents optimize for utility, value, and fit. Not brand loyalty. When an AI is choosing between two similar products at similar prices, it is not going to pick the one with the better Instagram presence or the founder story that resonated in a 2022 campaign. It is going to pick the one whose product data is cleaner, whose delivery promises are more consistent, and whose API is easier to transact with. Switching costs approach zero when the agent handles everything.

A Mirakl survey of retail technology partners found that the most commonly cited risk of agentic commerce is disintermediation: brands losing direct traffic and customer relationships as discovery shifts entirely to AI platforms. When an agent buys on your behalf, you never visit the brand's website. You never see the cross-sell. You never join the loyalty program. The entire behavioral data pipeline that modern e-commerce is built on stops working because the human stopped showing up.

Weekly retail site traffic is already down 21% between 2024 and 2025, according to Quantum Metric data. Conversion rates dropped 27% in the same period. Shoppers are making fewer, larger purchases rather than browsing and impulse buying. The behavior that built the current e-commerce model, the casual scroll, the comparison tab, the "maybe I'll add that too," is quietly disappearing before agents even reach mainstream adoption.

The honest counterargument: only 46% of consumers currently trust AI recommendations enough to act on them without checking elsewhere, according to eMarketer. Julie Towns, VP of product marketing at Pinterest, said in January 2026 that fully autonomous end-to-end shopping will remain underdeveloped through this year, especially for high-stakes purchases. Trust is the ceiling that technology keeps running into. People will delegate toothpaste before they delegate a mattress.

Forrester predicts that by 2026, one in five sellers will need to respond to AI-powered buyer agents with dynamically delivered counteroffers via their own seller-controlled agents. The negotiation layer of commerce, which has been invisible to consumers for decades, is about to become a machine-to-machine protocol.

Product content optimized for Google does not work for AI agents. An agent does not read a hero image or a brand story. It reads structured data, pricing consistency, delivery window accuracy, and return eligibility in a format it can compare across hundreds of merchants in seconds. Merkle's commerce team called this out directly: there is a fundamental mismatch between how most brands have built their digital presence and what agents actually need to make a decision. A sketch of what such a record might look like appears after the questions below.

If you run an e-commerce brand or work in retail, has the shift in traffic patterns changed how you are thinking about where to put resources? And if you are a consumer who has already let an AI make a purchase on your behalf, what was the thing that made you comfortable enough to hand that over?
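To make the structured-data point concrete, here is roughly the kind of machine-readable record an agent compares in milliseconds, loosely modeled on schema.org's Product and Offer vocabulary. The field nesting is simplified and every value is invented.

```python
# Roughly what an agent parses instead of a hero image: a structured
# product record, loosely modeled on schema.org Product/Offer fields.
# The nesting is simplified and every value here is invented.
import json

product = {
    "@type": "Product",
    "name": "Bamboo Toothbrush 4-Pack",
    "sku": "BT-4PK-001",                       # invented identifier
    "offers": {
        "@type": "Offer",
        "price": "12.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "deliveryWindow": "2-4 business days",  # simplified illustrative field
        "returnDays": 30,                       # simplified illustrative field
    },
}
print(json.dumps(product, indent=2))  # clean, comparable data beats a brand story
```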

u/Successful_List2882 — 10 days ago