Year 3 of selling courses, my 'passive' income is about 30% passive

Saw the thread about the lottery winner and another about Reels posters claiming effortless income, and figured I'd put some actual numbers out there.

Three years selling courses across two niches; the last 12 months averaged $7,800/mo gross. Of that, roughly 30% is actually passive (drip from old launches, evergreen funnel sales while I sleep). The other 70% is active:

- Updating content when the niche shifts.
- Answering refund emails.
- Running new launches every 8 to 12 weeks, because evergreen alone tops out.
- Replying to student questions on Discord, because completion rates collapse if you ghost them.
- Posting on the same socials all the 'passive income' people post on, just less frequently.

Two things I wish someone had told me in year 1. First, the 'passive income from courses' framing is mostly marketing. Real numbers from working creators look more like a small business that happens to scale better than services. Second, the platform you use shapes the active part. Teachable charged me 5% per sale plus a monthly platform fee on top. Once I crossed $3K/mo, the math flipped and I switched to a self-hosted setup with CourseAI, which has fee-free sales on every tier. Saved about $200 to $400 a month immediately, and the active hours went down because I was no longer juggling two separate dashboards for email vs. course delivery.
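To make the crossover concrete, here's the back-of-envelope version. A minimal sketch; the plan prices are placeholder assumptions I picked so the numbers line up with my situation, not quotes of either platform's actual pricing:

```python
# Rough crossover math for platform fees. Plan prices below are
# placeholder assumptions, NOT actual Teachable or CourseAI pricing.
def percent_fee_plan(gross, base_fee=59, cut=0.05):
    """Monthly cost on a platform taking a base fee plus 5% of sales."""
    return base_fee + gross * cut

def flat_plan(base_fee=199):
    """Monthly cost of a self-hosted, fee-free-sales setup (hosting + tools)."""
    return base_fee

for gross in (1_000, 3_000, 8_000):
    print(f"${gross:>5}/mo gross: percent plan ${percent_fee_plan(gross):,.0f} "
          f"vs flat ${flat_plan():,.0f}")
```

With those placeholder numbers the flat setup wins somewhere just under $3K/mo gross, and at my ~$7,800/mo average it's roughly $260/mo back, which is where the $200 to $400 range comes from depending on the month.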

For anyone curious about the month-to-month spread, my best month was $14,200 (a launch month: new course release plus existing evergreen) and my worst was $2,400 (dead summer, no launch). The standard deviation is wider than people pretend.
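If you want to see how wide your own spread is, a few lines on your monthly ledger will do it. The figures below are illustrative placeholders shaped to match the numbers above (avg $7,800, peak $14,200, trough $2,400), not my actual ledger:

```python
import statistics

# Placeholder monthly gross figures, NOT my real ledger -- shaped to
# match the post's numbers (avg $7.8K, peak $14.2K, trough $2.4K).
monthly_gross = [6_800, 14_200, 5_900, 7_400, 2_400, 6_100,
                 9_800, 12_300, 5_200, 8_900, 7_000, 7_600]

mean = statistics.mean(monthly_gross)
stdev = statistics.stdev(monthly_gross)  # sample standard deviation
print(f"mean ${mean:,.0f}/mo, stdev ${stdev:,.0f}/mo "
      f"({stdev / mean:.0%} of the mean)")
```

On a series like that the sample stdev comes out around 40% of the mean, which is the kind of swing the 'passive income' pitch never mentions.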

What's your active vs passive split look like if you actually track it? And do you count the launch effort as active work or 'investment' the way most gurus frame it?

u/Leading-Equal204 — 2 days ago

The 'Elara' problem isn't a model problem, it's a workflow problem

Saw the Elara trap thread (https://www.reddit.com/r/WritingWithAI/comments/1t9xhi8/i_fell_into_the_elara_trap_please_help_me_get_out/) yesterday and felt seen. Spent months trying to prompt away the AI clichés before I realized I was solving the wrong problem.

The reason every AI tool produces 'Elara,' 'shimmering,' 'cascading silver,' 'her breath caught,' and the rest is that the models were trained on heavily overlapping data. Doesn't matter if you switch from GPT to Claude to Gemini to Kimi; they share enough training signal that the clichés show up across all of them. Prompt engineering only gets you so far before you're just rewriting the AI's bad drafts manually, at which point you've lost most of the time savings.

What actually worked for me on book 3 was treating the AI like a junior writer who needs a style guide every session. Not a one-paragraph instruction. An actual document. Character voice samples, banned words, banned sentence shapes, banned tropes, point-of-view rules. My style guide for the current book is about 4,000 words. I feed it into every session, then run the output through a second pass that specifically hunts for the list of banned constructions.
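The second pass doesn't have to be manual, by the way. A minimal sketch of the kind of scanner I mean; the banned list and the filename here are tiny stand-ins, the real one comes from the style guide:

```python
import re

# Tiny stand-in for the style guide's real banned list.
BANNED = [
    r"\bElara\b",
    r"\bshimmer\w*",
    r"cascading silver",
    r"\bher breath caught\b",
    r"\btestament to\b",
]

def flag_banned(draft: str) -> list[tuple[int, str, str]]:
    """Return (line_number, pattern, line_text) for every banned hit."""
    hits = []
    for lineno, line in enumerate(draft.splitlines(), start=1):
        for pattern in BANNED:
            if re.search(pattern, line, flags=re.IGNORECASE):
                hits.append((lineno, pattern, line.strip()))
    return hits

# "chapter_draft.txt" is a placeholder path for whatever the AI produced.
chapter = open("chapter_draft.txt").read()
for lineno, pattern, line in flag_banned(chapter):
    print(f"line {lineno}: matched {pattern!r}: {line}")
```

Banned sentence shapes are harder to catch than banned words, but even the word-level pass kills most of the revision drudgery.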

The deeper problem is that most AI tools don't persist the style guide between sessions. You upload again, paste again, prime again every time. After a few weeks of this I switched to StoryMint (https://storymint.io), which holds the style guide and character voice at the series level, so every chapter draft starts inside that constraint. Not a full fix. Cuts the Elara density by maybe 70% though, which combined with the second pass gets the output close enough that revision is actually fast.

Anyone else running explicit style guides into their AI sessions? Curious what's in yours and what's working.

u/Leading-Equal204 — 2 days ago

MU at $746, 120% YTD, where does the margin of safety actually sit on memory right now?

I keep going back and forth on Micron and figured I'd put my thinking out here. Up 120% YTD, roughly $700B market cap as of Friday, multi-year HBM supply already booked through 2026 with prepayment agreements from hyperscalers. That last piece is what I haven't seen before in this name. Memory has always been a brutal cyclical, and the bull thesis is that AI structurally changes the demand profile by making memory the bottleneck instead of compute.

The bear case isn't that demand is fake. It's that the entire AI capex cycle gets repriced if cloud monetization disappoints in 2027. If hyperscaler capex pulls back from $400B+ to something like $250B, MU's HBM book gets renegotiated and the multiple compresses fast. Burry's not wrong that this looks like late-stage froth in places. What I can't get comfortable with is paying 22x forward earnings on something that, in every prior cycle, has traded at 8 to 12x at peak.
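The compression math is ugly if you run it. A quick sketch, backing forward EPS out of the 22x figure and holding it constant, which is actually generous to the bull case since a capex pullback would hit earnings too:

```python
price = 746
forward_pe = 22
implied_forward_eps = price / forward_pe  # ~$34

# Where the stock lands if the multiple mean-reverts to the prior
# cycle-peak range (8-12x), holding forward EPS constant.
for peak_pe in (8, 10, 12):
    compressed = implied_forward_eps * peak_pe
    print(f"{peak_pe}x -> ${compressed:,.0f} "
          f"({compressed / price - 1:+.0%} from here)")
```

That's roughly $270 to $410 a share, a 45-64% drawdown, with earnings not missing at all. That's the asymmetry that bothers me.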

Even if you assume the cycle doesn't break, you're paying for perfect execution. DA Davidson's $1,000 target requires HBM4 ramping on schedule, HBM5 winning the next socket battle against Samsung and SK Hynix, and AI capex holding. Possible. Not a high margin of safety. I owned this in 2018 and got out too early at $50, so FWIW I'm biased toward not chasing. But I'd genuinely like to hear the bull side. What's the path to $1k that doesn't require 3+ years of perfect execution?
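For anyone who wants to sanity-check the target, here's the flip side of the same arithmetic. Another back-of-envelope sketch, reusing the ~$34 implied forward EPS from above and treating the exit multiple as the only other variable:

```python
target = 1_000
current_forward_eps = 746 / 22  # ~$34, backed out from 22x forward

# EPS needed to justify $1,000 at various exit multiples, and the
# earnings growth that demands versus today's implied forward EPS.
for exit_pe in (12, 15, 22):
    required_eps = target / exit_pe
    growth = required_eps / current_forward_eps - 1
    print(f"{exit_pe}x exit -> needs ${required_eps:,.0f} EPS "
          f"({growth:+.0%} vs implied forward EPS)")
```

If the multiple compresses to a historically normal 12x along the way, earnings have to roughly two-and-a-half-x to get there. Only the 22x-forever scenario makes $1k look like a modest ask, and 22x-forever is exactly the assumption I can't underwrite.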

u/Leading-Equal204 — 4 days ago