r/AiAutomations

Oracle slashes 30k jobs, Slop is not necessarily the future, Coding agents could make free software matter again and many other AI links from Hacker News
▲ 13 r/ArtificialInteligence+11 crossposts

Hey everyone, I just sent the 26th issue of AI Hacker Newsletter, a weekly roundup of the best AI links and discussions from Hacker News. Here are some of the links:

  • Coding agents could make free software matter again - comments
  • AI got the blame for the Iran school bombing. The truth is more worrying - comments
  • Slop is not necessarily the future - comments
  • Oracle slashes 30k jobs - comments
  • OpenAI closes funding round at an $852B valuation - comments

If you enjoy links like these, I send over 30 every week. You can subscribe here: https://hackernewsai.com/

u/alexeestec — 1 day ago
Trying to understand the impact of automation
▲ 2 r/AiAutomations+1 crossposts

In this hobby project I recently launched, I'm trying to understand the actual implications of AI automation by asking people to assess their own jobs. Feedback deeply appreciated 🙏 Hope I'm not breaking any rules by posting. No company is involved in the project.

automatable.me
u/EndingFromScratch — 1 hour ago
printing money with AI
▲ 12 r/vibecoding+2 crossposts

A tutoring business paid me $5k for an AI automation I built in 2 days.

The agent managed teacher schedules, created Google Calendar events, sent WhatsApp reminders, and triggered payment notifications. It runs in production.

That client pushed me to ship Struere (struere.dev): A platform where you describe what the agent should do and Claude Code builds it: database, automations, integrations, deploy. Free with your own API keys.

I'm looking for people already building AI automations for clients or their own business. People who've hit the ceiling on existing tools: too slow, too expensive, or they don't handle the edge cases.

If that's you, drop a comment or visit struere.dev

u/marc00099 — 18 hours ago
Static SOUL.md files are boring. So we built an open-source AI agent that psychologically profiles you and adapts in real-time — and refuses to be sycophantic about it.
▲ 6 r/AIAssisted+3 crossposts

Every AI agent today has the same problem: they're born fresh every conversation. No memory of who you are, how you think, or what you need. The "fix" is a personality file — a static SOUL.md that says "be friendly and helpful." It never changes. It treats a senior engineer the same as a first-year student. It treats Monday-morning-you the same as Friday-at-3AM-you.

We thought that was embarrassing. So we built something different.

THE VISION

What if your AI agent actually knew you? Not just what you asked, but HOW you think. Whether you want the three-word answer or the deep explanation. Whether you need encouragement or honest pushback. Whether your trust has been earned or you're still sizing it up.

And what if the agent had its own identity — values it won't compromise, opinions it'll defend, boundaries it'll hold — instead of rolling over and agreeing with everything you say?

That's Tem Anima. Emotional intelligence that grows. Not from a file. From every conversation.

WHAT THIS MEANS FOR YOU

Your AI agent learns your communication style in the first 25 turns. Direct and terse? It stops the preamble. Verbose and curious? It gives you the full picture with analogies. Technical? Code blocks first, explanation optional. Beginner? Concepts before implementation.

It builds trust over time. New users get professional, measured responses. After hundreds of interactions, you get earned familiarity — shorthand, shared references, the kind of efficiency that comes from working with someone who actually knows you.

It disagrees with you. Not to be contrarian. Because a colleague who agrees with everything is useless. If your architecture has a flaw, it says so. If your approach will break in production, it flags it. Then it does the work anyway, because you're the boss. But the concern is on record.

It never cuts corners because you're in a hurry. This is the rule we're most proud of: user mood shapes communication, never work quality. Stressed? Tem gets concise. But it still runs the tests. It still checks the deployment. It still verifies the output. Your emotional state adjusts the words, not the work.

HOW IT WORKS

Every message, lightweight code extracts raw facts — word count, punctuation patterns, response pace, message length. No LLM call. Microseconds. Just numbers.
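That kind of no-LLM, per-message fact extraction is easy to picture in code. A minimal Python sketch (function and field names are my own, not from the Tem Anima codebase):

```python
import string

def extract_message_facts(message: str) -> dict:
    """Cheap per-message signals: no LLM call, just counting."""
    words = message.split()
    return {
        "word_count": len(words),
        "char_count": len(message),
        "exclamations": message.count("!"),
        "questions": message.count("?"),
        "avg_word_len": sum(len(w.strip(string.punctuation)) for w in words)
                        / max(len(words), 1),
    }
```

These raw numbers are then batched up for the periodic LLM evaluation rather than interpreted on the spot.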

Every N turns, those facts plus recent messages go to the LLM in a background evaluation. The LLM returns a structured profile update: communication style across 6 dimensions, personality traits, emotional state, trust level, relationship phase. Each with a confidence score and reasoning.

The profile gets injected into the system prompt as ~150 tokens of behavioral guidance. "Be concise, technical, skip preamble. If you disagree, say so directly." The agent reads this and naturally adapts. No special logic. No if-statements. Just better context.

N is adaptive. Starts at 5 turns for rapid profiling. Grows logarithmically as the profile stabilizes. If you suddenly change behavior — new project, bad day, different energy — the system detects the shift and resets to frequent evaluation. Self-correcting. No manual tuning.
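The adaptive schedule described here could look something like this (a hedged sketch; the actual formula in the codebase may differ):

```python
import math

def next_eval_interval(total_turns: int, base: int = 5,
                       drift_detected: bool = False) -> int:
    """Evaluation interval N: starts at `base`, grows logarithmically
    with accumulated turns, and resets on a detected behavior shift."""
    if drift_detected:
        return base  # re-profile frequently after a sudden change
    # log2 growth: evaluate every 5 turns early on, widening as the
    # profile stabilizes
    return base + int(math.log2(1 + total_turns / base))
```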

The math is real: turns-weighted merge formulas, confidence decay on stale observations, convergence tracking, asymmetric trust modeling. Old assessments naturally fade if not reinforced. The profile converges, stabilizes, and self-corrects.
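A turns-weighted merge with confidence decay can be sketched in a few lines (illustrative only; the real formulas are in the linked research papers):

```python
def merge_trait(old_value: float, old_conf: float, old_turns: int,
                new_value: float, new_conf: float, new_turns: int,
                decay: float = 0.95) -> tuple[float, float]:
    """Blend a stored trait estimate with a fresh one, weighting each
    by confidence * turns observed; stale confidence decays per merge."""
    old_conf *= decay  # unreinforced observations fade
    w_old = old_conf * old_turns
    w_new = new_conf * new_turns
    total = w_old + w_new
    if total == 0:
        return new_value, new_conf
    value = (old_value * w_old + new_value * w_new) / total
    return value, max(old_conf, new_conf)
```

Because the old side is multiplied by a decay factor on every merge, an assessment that stops being reinforced loses weight until fresh evidence dominates.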

Total overhead: less than 1% of normal agent cost. Zero added latency on the message path.

A/B TESTED WITH REAL CONVERSATIONS

We tested with two polar-opposite personas talking to Tem for 25 turns each.

Persona A — a terse tech lead who types things like "whats the latency" and "too slow add caching." The system profiled them as: directness 1.0, verbosity 0.1, analytical 0.92. Recommendation: "Stark, technical, data-dense. Avoid all conversational filler."

Persona B — a curious student who writes things like "thanks so much for being patient with me haha, could you explain what lambda memory means?" The system profiled them as: directness 0.63, verbosity 0.47, analytical 0.40. Recommendation: "Warm, encouraging, pedagogical. Use vivid analogies."

Same agent. Completely different experience. Not because we wrote two personality modes. Because the agent learned who it was talking to.

CONFIGURABLE BUT PRINCIPLED

Tem ships with a default personality — warm, honest, slightly chaotic, answers to all pronouns, uses :3 in casual mode. But every aspect is configurable through a simple TOML file. Name, traits, values, mode expressions, communication defaults.

The one thing you can't configure away: honesty. It's structural, not optional. You can make Tem warmer or colder, more direct or more measured, formal or casual. But you cannot make it lie. You cannot make it sycophantic. You cannot make it agree with bad ideas to avoid conflict. That's not a setting. That's the architecture.

FULLY OPEN SOURCE

Tem Anima ships as part of TEMM1E v4.3.0. 21 Rust crates. 2,049 tests. 110K lines. Built on 4 research papers drawing from 150+ sources across psychology, AI research, game design, and ethics.

The research is public. The architecture document is public. The A/B test data is public. The code is public.

https://github.com/temm1e-labs/temm1e

Static personality files were a starting point. This is what comes next.

u/No_Skill_8393 — 2 hours ago
Built a PDF to Google Sheets automation using n8n and Claude AI ,here's what I learned

Just finished building my first real automation for someone's actual business (my friend's) using n8n.

The workflow takes a supplier PDF from Google Drive, extracts all the text, sends it to Claude AI with a structured prompt, parses the JSON response, and routes each product category to the correct sheet tab in Google Sheets automatically.

A few things I learned building this:

n8n's Extract from File node only reads one page of a PDF so I had to work around that.

Getting Claude to return consistent JSON structure across different product types took several prompt iterations.
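One thing that helps alongside prompt iteration is parsing the reply defensively instead of assuming clean JSON. A generic Python sketch (not the actual n8n node code):

```python
import json
import re

def parse_llm_json(raw: str):
    """Extract a JSON object from an LLM reply that may wrap it in
    markdown code fences or leading prose."""
    # strip ```json ... ``` fences if present
    fenced = re.search(r"```(?:json)?\s*(.*?)```", raw, re.DOTALL)
    candidate = fenced.group(1) if fenced else raw
    # fall back to the outermost braces
    start, end = candidate.find("{"), candidate.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in reply")
    return json.loads(candidate[start:end + 1])
```

The same logic fits in an n8n Code node, so a slightly off-format reply gets recovered instead of failing the run.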

Using a Switch node to route different product categories to different sheet tabs was the cleanest solution for organizing the output.

Still learning n8n but building real projects for real use cases is teaching me more than any tutorial.

For anyone with more n8n experience: what would you have done differently? And what's the best resource for going deeper on more complex workflows?

u/Cool-Sprinkles9179 — 2 hours ago
▲ 3 r/n8n+2 crossposts

I'm building a stress test workflow to benchmark document extraction – here's what I'm testing

👋 Hey everyone,

Over the past few weeks I've been sharing workflows that use document extraction for things like currency conversion, invoice classification, duplicate detection, and Slack-based approvals. One question that keeps coming up – from myself and from people trying these workflows – is: how far can you push the extraction before it breaks?

Clean PDFs are easy. Every solution handles those. But what about a scanned invoice with coffee stains? A photo taken at an angle? A completely different layout than what the pipeline was trained on? A document that looks like someone used it as a coaster, scribbled notes all over it, and then left it in the rain?

I wanted to answer that properly, so I'm building a stress test workflow.

The idea:

Upload a document through a web form, extract the data, compare every single field against the known correct values, and get a results page with a per-field pass/fail breakdown and an overall accuracy percentage. Since the test always uses the same invoice data, the ground truth is fixed – you're purely measuring how well the extraction handles degraded quality and layout changes.
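The per-field comparison against fixed ground truth can be expressed in a few lines. A minimal Python sketch (the string-normalization rule is an assumption on my part):

```python
def score_extraction(extracted: dict, ground_truth: dict) -> dict:
    """Field-by-field pass/fail against fixed ground truth,
    plus an overall accuracy percentage."""
    results = {}
    for field, expected in ground_truth.items():
        got = extracted.get(field)
        # normalize strings so "ACME GmbH " still passes
        if isinstance(expected, str) and isinstance(got, str):
            passed = got.strip().lower() == expected.strip().lower()
        else:
            passed = got == expected
        results[field] = {"expected": expected, "got": got, "pass": passed}
    accuracy = sum(r["pass"] for r in results.values()) / len(results) * 100
    return {"fields": results, "accuracy_pct": round(accuracy, 1)}
```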

The test documents I'm preparing:

I'm going to run four versions of the same invoice through the workflow:

  1. Original – clean PDF, the baseline. Should be 100%.
  2. Layout Variant A – same data, completely different visual layout
  3. Layout Variant B – another layout, different structure again
  4. Version 7 ("The Survivor") – this one has coffee stains, pen annotations ("WRONG ADDRESS? check billing!"), scribbled-out sections, burn marks, and a circled-over amount due field. If anything can extract data from this, I'll be impressed.

I spent some time thinking about what makes a good stress test. Different layouts test whether the extraction actually reads the document or just memorises positions. The destroyed version tests OCR resilience when half the text is obstructed. Together they should give a pretty honest picture of where a solution actually stands.

What's coming next week:

I'm going to build out the full workflow, run all four documents through it, and share the results here – accuracy percentages across every version, including the destroyed one. I'll also share the workflow JSON, so anyone can import it and run their own benchmarks.

The workflow will be solution-agnostic too – you'll be able to swap out the extraction node for an HTTP Request node pointing at any other API, and the entire validation chain works identically. Good way to benchmark different tools side by side.

Curious to see where it breaks. Would love to hear if anyone else has been stress testing their extraction setups, or if you have ideas for even nastier test documents.

Best,
Felix

reddit.com
u/easybits_ai — 2 hours ago
I built a fully offline voice assistant for Windows – no cloud, no API keys
▲ 11 r/OnlyAICoding+5 crossposts

I spent months building Writher, a Windows app that combines faster-whisper for transcription and a local Ollama LLM for an AI assistant – everything runs on your machine.

What it does:

  • Hold AltGr → instant dictation in ANY app (VS Code, Word, Discord, browser...)
  • Press Ctrl+R → voice-controlled AI: manage notes, set reminders, add appointments
  • Smart date parsing ("remind me next Tuesday" works!)
  • Animated floating widget with visual feedback
  • English + Italian supported
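The smart date parsing is simpler than it sounds for "next <weekday>" phrases. A hedged Python sketch (not Writher's actual implementation):

```python
import datetime

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def next_weekday(phrase: str, today: datetime.date) -> datetime.date:
    """Resolve phrases like 'next tuesday' to a concrete date:
    always the upcoming occurrence, never today itself."""
    target = WEEKDAYS.index(phrase.split()[-1].lower())
    days_ahead = (target - today.weekday() - 1) % 7 + 1
    return today + datetime.timedelta(days=days_ahead)
```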

No internet required after setup. No subscriptions. Open source.

GitHub: https://github.com/benmaster82/writher

Looking for feedback and contributors!

u/Immediate-Ice-9989 — 1 month ago
Free ai ofm guide
▲ 3 r/sideprojects+2 crossposts

Hey everyone,

I just made a free beginner guide for AI OFM that covers how to actually get started from scratch. I noticed a lot of people are confused at the beginning, so I tried to simplify things and make it practical.

This guide should help you understand the basics and take your first steps without overcomplicating it.

I’ll be posting more guides soon going deeper into strategy, growth, and scaling.

If you get stuck or need help, feel free to let me know; I'll try to help where I can.

Also, we’re opening a Discord community, and the first 20 people can join for free where we go more in depth and help each other out.

Appreciate any feedback 🙌

u/AffectionateCake6176 — 3 hours ago

My client was spending 16 hours a week on research that was making him zero dollars. Here's what I replaced it with.

He was proud of his process. That's what made it hard to tell him it was killing his business.

I met this guy through a referral... runs a B2B SaaS consulting firm, six person team, genuinely smart operator. He had this whole GTM research routine he'd built over two years. Every week, his team would manually pull LinkedIn profiles, cross-reference company funding news, check hiring signals on job boards, dig through Crunchbase, and dump everything into a Google Sheet before deciding who to even reach out to. Sixteen hours a week. Just to figure out who was worth calling.

He called it "quality prospecting." I called it a very expensive spreadsheet habit.

The problem wasn't that the research was bad. It was actually solid. The problem was that by the time they finished researching, half those companies had already moved through their buying window. A Series B company that just hired a Head of Revenue is a perfect prospect... for about three weeks. After that, the team is hired, the tools are bought, and your outreach lands in a pile of ignored emails. They were doing great research on cold leads and didn't even know it.

So I stopped asking him what he wanted to automate and asked him one question instead. "What happens between when a lead looks perfect on paper and when your team actually closes them?" He paused for a long time. Then he said... "honestly, timing. We always seem to be one month late."

That one answer told me everything.

I built him a lead nurturing and GTM intelligence workflow that runs every morning at 6AM. It monitors funding announcements, new executive hires, job postings with specific keywords, and product launch signals across their entire target account list. When a company crosses three or more of those signals in a rolling fourteen day window, it automatically enriches the contact data, writes a one paragraph personalized context summary in plain English, scores the account by urgency, and drops it into their CRM with a follow-up task already assigned to the right rep. No spreadsheet. No manual digging. The team wakes up to a prioritized list of who to call that day and exactly why.
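The "three or more signals in a rolling fourteen-day window" trigger is straightforward to express. A hedged Python sketch of that scoring step (the data shapes are my guess, not the author's actual workflow):

```python
import datetime

def hot_accounts(signals, today, window_days=14, threshold=3):
    """Return accounts with >= threshold distinct buying signals
    inside a rolling window ending today.
    `signals` is a list of (account, signal_type, date) tuples."""
    cutoff = today - datetime.timedelta(days=window_days)
    recent = {}
    for account, signal_type, date in signals:
        if cutoff <= date <= today:
            recent.setdefault(account, set()).add(signal_type)
    return sorted(a for a, types in recent.items() if len(types) >= threshold)
```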

First month, they went from sixteen hours of weekly research to under two. Second month, they closed four accounts they would have missed entirely because the timing window was flagged before it closed. Forty thousand dollars in new revenue in sixty days. Not because I built something flashy. Because I built something that solved the actual problem... which was never research quality. It was research speed.

Here's what I keep seeing people get wrong with GTM automations. They build lead generation tools when the real gap is lead timing. Everyone's chasing more contacts. The smarter play is knowing exactly when your existing targets are ready to buy. A workflow that tells you the right moment is worth ten times more than one that gives you ten times more names.

The automation itself wasn't complicated. What took time was mapping the signals that actually mattered for their specific ICP. That's the work most people skip because it doesn't feel like building. But that two hour conversation about their best closed deals from the last year... that's where the whole thing came from. The n8n workflow was almost secondary.

If your client is spending hours on research every week, don't ask them what they want to automate. Ask them what they're always too late for. That's where the money is.

reddit.com
u/automatexa2b — 1 hour ago
u/Cool-Sprinkles9179 — 2 hours ago
I tested Google’s 87MB Gemma model on Colab and it actually works
▲ 5 r/ArtificialInteligence+2 crossposts

Most people think you need a powerful laptop to run AI models.

That’s not really true anymore.

I tested Google's 87MB Gemma model on Google Colab and was able to run everything for free without any heavy setup.

What surprised me is that you can do multiple things in one flow:

  • Transcribe audio
  • Summarize content
  • Extract key insights from videos

For example, you can take a YouTube video and turn it into a clean summary with important points in a few steps.

One thing to keep in mind:

  • Whisper is still faster and more accurate for pure transcription
  • Gemma is more flexible because it can handle multiple tasks

So it depends on your use case.

If you are into content creation, research, or automation, this can save a lot of time.

I recorded the full setup and demo here if you want to try it yourself.

Curious if anyone else here is testing smaller AI models instead of relying only on APIs.

youtu.be
u/kalladaacademy — 11 hours ago

Need help on Automation to jobs 😞

Hi,

I used Playwright to automate applying to jobs in my category, but it utterly failed even though I recorded two or three application submissions on LinkedIn.

Can anyone recommend the most seamless tool that can automate applying to 20 jobs daily, with no manual steps, by completely filling in all the application details from a given resume?

My task: scrape the web for jobs in my niche, then click apply, fill in all the details, and submit. Using a paid captcha solver and a VPN is fine, but I want the task completed 100%, or at least 90-95%, with me manually cross-checking and clicking submit at the end.

Any AI automation experts, please help.

reddit.com
u/markjohn511 — 8 hours ago

Important question..

Hello all, I hope you're doing well.

I have a good background in AI automation and n8n workflows.

I want to build workflows or automation solutions that solve real, in-demand business problems, can be easily sold to business owners, and generate passive income every month.

What are the best automation workflow ideas?

reddit.com
u/Real_Lettuce5963 — 16 hours ago

Built a Workflow that turns 1 Photo into a Cinematic Ad.

This AI automation turns a single photo + short caption into a cinematic, short commercial and sends the finished video back to you in Telegram.

You can use it for ads, social media and marketplaces.

Here’s the flow:

You upload one product image and a short caption.

The agent analyzes the photo and writes a cinematic video prompt.

It sends that to a video generation model (Veo 3.1).

A couple minutes later, you get a dynamic, ready-to-use video.

What it does

You DM a product photo to your Telegram bot (optionally add a short caption with creative direction).

The agent uploads that photo to Google Drive and makes a direct link.

GPT analyzes the image and then generates a Veo-3 style cinematic prompt tailored to your product/brand tone.

The agent sends the prompt + image to Veo-3.

It polls for status and, on success, downloads the final MP4.

The bot sends your video back in Telegram, plus the exact prompt it used.
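The poll-for-status step in the middle of that flow is a standard pattern. A generic Python sketch (the status values here are hypothetical, not the actual Veo API's):

```python
import time

def poll_until_done(check_status, timeout_s=600, interval_s=10):
    """Poll a render job until it finishes; `check_status` returns
    'processing', 'succeeded', or 'failed'."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = check_status()
        if status == "succeeded":
            return True
        if status == "failed":
            raise RuntimeError("video generation failed")
        time.sleep(interval_s)
    raise TimeoutError("gave up waiting for the render")
```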

You can use these videos for ads, social media, or marketplaces instead of boring photos

Happy to hear what you think about this automation. All feedback is welcome!

u/ExactDraw837 — 23 hours ago

I've acquired a client, but I don't know what type of automation they need.

Hello, I have a problem. A client wants to work with me, but he doesn't know anything about automation. He offers courses and is a well-known figure. I don't know what kind of help he needs, or how to suggest ideas to him, because it's very difficult to ask him directly. I want to either offer him a solution outright or suggest three automation options he can choose from. He employs many people to perform repetitive tasks like replying to emails. What should I do in this situation? I have experience building automation systems for email, lead generation, and content creation. What do you suggest? Should I hold an initial meeting just to ask him questions and identify weaknesses, or should I prepare automation options in advance?

reddit.com
u/halla_erika — 21 hours ago

Automate job applications

Hi

I'm looking for something that can automate job applications (auto-apply) by tailoring my CV, and also automate the emails/messages sent to the hiring manager or recruiter for each role.

It should also search for jobs tailored to my skills.

If you know of anything, please DM me.

reddit.com
u/Sassalert — 22 hours ago

Is AI Automation still a viable service to sell?

I'm planning to learn AI automation and sell it to businesses as a service, but I'm wondering if it's actually profitable or just hype right now. For those doing this: is there real money in it, and realistically, how long does it take to learn tools like Make/Zapier/APIs well enough to start charging clients?

reddit.com
u/DayBeautiful2205 — 22 hours ago

I thought my automation was production ready. It ran for 11 days before silently destroying my client's data.

I'm not going to pretend I was some careless developer. I tested everything. Ran it through every scenario I could think of. Showed the client a clean demo, walked them through the logic, got the sign-off. Felt genuinely proud of what I built. Then eleven days into production, their operations manager calls me calm as anything... "Hey, something feels off with the numbers." Two hours later I'm staring at a workflow that had been duplicating records since day three because their upstream data source added a new field I never accounted for. Nobody crashed. Nothing threw an error. It just kept running and quietly wrecking everything.

That's when I understood what production actually means. It's not your demo surviving one perfect run. It's your system surviving reality... and reality is messy, inconsistent, and constantly changing without telling you.

The biggest mistake I see people make, and I made it myself for almost a year, is building for the happy path. You test what should happen and call it done. Production doesn't care about what should happen. It cares about what does happen when someone inputs a name with an apostrophe, when the API returns a 200 status but sends back empty data anyway, when a perfectly normal Monday morning suddenly has three times the usual volume because a holiday pushed everything. I started calling these edge cases but honestly that word undersells them. They're not edge cases. They're Tuesday.

What changed everything for me was building for failure first instead of success. Before I write a single node now, I spend thirty minutes listing every way this workflow could silently do the wrong thing without throwing an error. Not crash... silently do the wrong thing. That's the dangerous category. A crash is obvious. Silent corruption runs for eleven days while you're answering other emails. Now every workflow I build has three things baked in before I even think about the actual logic. A heartbeat log that writes a success entry on every single run so I can see volume patterns. Plain English status updates to the client that show what processed, what got skipped, and why. And a dead man's switch... if this workflow doesn't run in the expected window, someone gets a message immediately.
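The dead man's switch is the piece people most often skip, and it is only a few lines. A minimal sketch of the check (field names are illustrative; the heartbeat log supplies the last-success timestamp):

```python
import datetime

def deadman_check(last_success: datetime.datetime,
                  now: datetime.datetime,
                  expected_interval: datetime.timedelta,
                  grace: datetime.timedelta = datetime.timedelta(minutes=15)) -> bool:
    """True if the workflow missed its expected run window and
    someone should be alerted immediately."""
    return now - last_success > expected_interval + grace
```

Run this from a separate scheduler than the workflow itself, so the watcher survives whatever kills the workflow.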

My current client is a mid-sized logistics company. Their workflow processes inbound freight confirmations and updates three separate systems. Runs about four hundred times a day. The first version I built worked perfectly in testing and I was ready to ship it. Then I did something I'd started forcing myself to do... I sat with it for a week and just tried to break it. Sent malformed data. Killed the downstream API mid-run. Submitted the same confirmation twice. Every single one of those scenarios became a handled case with a proper fallback before it ever touched production. That workflow has been running for four months. Not four months without issues... four months where every issue got caught quietly instead of becoming a phone call.

Here's the thing nobody tells you about production automation. The goal isn't zero failures. That's not realistic and chasing it will make you build worse systems. The real goal is zero surprises. Every failure should be expected, logged, and handled with a fallback that keeps things moving. A workflow that gracefully handles a bad API response and queues the record for retry is ten times more valuable than a workflow that never fails in your test environment but has never actually met real data. Your clients don't care about your architecture. They care that things keep moving even when something breaks, and that they hear about problems from your monitoring before they find out themselves.

Production readiness cost me more upfront time on every single project since that incident. And it's made me more money than any technical skill I've ever learned. Because the clients who've seen it working for six months without a crisis? They don't shop around. They just keep paying.

What's the failure mode that's cost you the most? Curious whether people are building this in from the start now or still getting burned first.

reddit.com
u/automatexa2b — 24 hours ago
Week