u/Heavy_Elderberry7769

▲ 1 · r/perplexity_ai (+1 crosspost)

Claude vs Gemini for Technical Documentation: Why I finally stopped switching between the two.

I write a lot of technical documentation—setup guides, internal runbooks, and client-facing how-to articles. For the past six months, I’ve been toggling between Claude and Gemini, trying to figure out which one actually handles formatting and tone better without requiring endless prompt adjustments.

I finally sat down and ran them through the exact same tests.

What I found:

Gemini is incredibly fast and great at pulling in real-time context if I need to reference live API docs, but it tends to make the tone a bit too conversational when I just want strict, dry steps.

Claude (specifically Claude 3 Opus and Claude 3.5 Sonnet) absolutely dominates at maintaining strict markdown formatting, adhering to a specific brand voice, and logically structuring complex, multi-step runbooks without hallucinating steps.

If you are just writing emails, either works. But if you need to output clean formatting that you can copy-paste directly into your company wiki with zero editing, Claude is currently winning by a mile.

I wrote a deeper breakdown of my testing process and the exact differences in their outputs here: https://pickgearlab.com/claude-vs-gemini-for-writing-technical-documentation-an-honest-comparison/

For those of you writing docs or code—are you strictly using Claude, or do you still find yourself using Gemini for certain tasks?

u/Heavy_Elderberry7769 — 2 days ago
▲ 2 · r/AIAssisted (+1 crosspost)

[Guide] How I Automated My Weekly Work Summaries Using Notion AI (Save 1+ hour every Friday)

If you're like me, Friday afternoons are usually spent staring at a blank screen trying to remember what on earth you actually did on Monday. I’ve been experimenting with ways to automate this "mental download," and I found a workflow using Notion AI that basically does the heavy lifting for you.

Instead of writing everything manually, the workflow uses a mix of structured databases and AI prompts to:

Scan your daily task logs and meeting notes.

Filter for key wins and project milestones.

Generate a polished, professional summary in seconds.

I found this guide that breaks down the exact setup: the database structure, how to organize your workspace so the AI has the right context, using /summarize and /action items effectively for reporting, and the specific "Professional Tone" prompts that make the summary ready for your boss or team. It's one of the more practical uses of Notion AI I've seen lately, and a genuine time-saver for anyone trying to automate the "administrative" side of their job.

Link: https://pickgearlab.com/how-to-use-notion-ai-to-plan-and-write-your-weekly-work-summary/
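The guide sticks to Notion AI's built-in features, no code required. But if you'd rather script the "scan your task logs" step yourself, here's a minimal sketch using the official notion-client Python SDK. The database ID, the property names ("Status", "Completed", "Name"), and the filter values are all placeholder assumptions about your workspace, not something from the guide:

```python
# Rough sketch: pull this week's completed tasks from a Notion database
# so you can paste them into a Notion AI summary prompt.
# Assumes a tasks database with a "Status" select property, a "Completed"
# date property, and a "Name" title property -- all placeholders.
from datetime import date, timedelta

from notion_client import Client  # pip install notion-client

notion = Client(auth="secret_xxx")  # your internal integration token
DATABASE_ID = "your-tasks-database-id"

# Monday of the current week, in ISO format for the Notion date filter.
week_start = (date.today() - timedelta(days=date.today().weekday())).isoformat()

results = notion.databases.query(
    database_id=DATABASE_ID,
    filter={
        "and": [
            {"property": "Status", "select": {"equals": "Done"}},
            {"property": "Completed", "date": {"on_or_after": week_start}},
        ]
    },
)

# Flatten page titles into a plain-text log for the AI prompt.
lines = []
for page in results["results"]:
    title_prop = page["properties"]["Name"]["title"]
    if title_prop:
        lines.append("- " + title_prop[0]["plain_text"])

prompt = (
    "Summarize the following completed tasks as a professional weekly "
    "update. Group by project, lead with key wins, keep it under 150 words:\n"
    + "\n".join(lines)
)
print(prompt)  # paste into Notion AI, or send to whatever model you prefer
```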


u/Heavy_Elderberry7769 — 2 days ago

Used NotebookLM to study for AWS Solutions Architect Associate — the specific setup that actually worked

I tried studying for AWS SAA-C03 with the usual mix: Stephane Maarek's course, Tutorials Dojo practice tests, AWS docs. The bottleneck wasn't the material; it was retention. Three days after watching a section on VPC peering, I'd already forgotten which of the four options on a question was the trap.

NotebookLM ended up solving this in a way I didn't expect. Sharing what worked in case anyone else is in the same boat.

The setup that actually moved the needle:

  1. Upload the OFFICIAL exam guide PDF + 3-4 AWS whitepapers most cited for the exam (Well-Architected Framework, Disaster Recovery, Security Best Practices). Don't dump 20 sources; a few good ones beat a flood.

  2. Add your own course notes as a single text file. NotebookLM treats this as another source it can cross-reference. When you ask "what does Stephane mean by 'cross-region replication latency'", it pulls from YOUR notes alongside the AWS docs.

  3. The actual study loop: after each lecture, paste the rough timestamps + your notes into NotebookLM and ask "test me on this section — give me 5 multiple choice questions in the AWS exam format with explanations for each wrong answer." The "explanations for each wrong answer" part is what made the questions actually useful; without it you just memorize correct answers without understanding why the others are wrong. (There's a rough code sketch of this loop right after the list.)

  4. The audio overview feature is genuinely good for VPC and networking topics. I now generate one for any topic I keep getting wrong on practice tests and listen on my commute. The two AI hosts ramble more than you'd want, but the explanations of "why this, not that" stick better than reading.

  5. The one thing I wish someone had told me: don't share the notebook publicly until you've passed. Source documents stay yours, but the audio overviews and study materials you generate are notebook-bound. I lost a month's worth of context by accidentally archiving a notebook.
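NotebookLM has no public API, so you can't automate step 3 directly. If you want to reproduce the "test me" loop outside the app, though, here's a rough sketch against the Gemini API using the google-generativeai Python package. The model name and the notes file path are assumptions; the prompt mirrors the one above:

```python
# Approximates the "test me on this section" loop with the Gemini API,
# since NotebookLM itself can't be scripted. Model name and file path
# are placeholder assumptions.
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

with open("section_notes.txt") as f:  # your rough notes for one lecture
    notes = f.read()

prompt = (
    "Here are my notes for one section of an AWS SAA-C03 course:\n\n"
    f"{notes}\n\n"
    "Test me on this section. Give me 5 multiple choice questions in the "
    "AWS exam format, with an explanation for each WRONG answer, not just "
    "the right one."
)

response = model.generate_content(prompt)
print(response.text)
```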

Wrote up the full workflow with the exact prompts and source list here if anyone wants to copy it: https://pickgearlab.com/how-to-use-notebooklm-to-study-for-the-aws-solutions-architect-associate-exam/

For people studying other certs with NotebookLM — what's your setup? Specifically curious how people are using it for cloud certs vs more theoretical ones (CISSP, PMP). I suspect the "test me" loop works differently when memorization isn't the bottleneck.

u/Heavy_Elderberry7769 — 4 days ago

The Claude prompt structure that changed how I read 50-page client reports

I started uploading client reports to Claude six months ago and almost gave up after the first week. The summaries were generic, the "key insights" were the section headings re-worded, and verifying the output took longer than just reading the PDF myself.

What changed was how I prompt it. The single biggest fix: stop saying "summarise this" and start telling Claude WHO is reading the output and WHAT decision it has to support.

A real example. Instead of:

> Summarise this report

I now use:

> I'm reviewing this 45-page vendor proposal as a procurement manager. Summarise the key commercial terms, highlight any conditions or exclusions buried in the document, and flag anything that looks non-standard or risky.

Same document. Wildly different output. The first one gives me marketing copy. The second one gives me three flagged risks I hadn't spotted on my own first read-through.

Two more that earn their place in my workflow:

For research papers: "What is the main argument? What evidence supports it? What limitations do the authors acknowledge? What does this mean practically for someone working in [your field]?"

For meeting transcripts: "List every action item, who it's assigned to, and the deadline. List every decision made. List any open questions that weren't resolved."

The pattern is always: role + decision being made + specific extraction. Generic prompts get generic output.
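If you'd rather run this pattern through the API than the web app, here's a minimal sketch using the Anthropic Python SDK. It assumes you've already extracted the PDF to plain text yourself (the workflow in the post uploads the PDF directly); the model string and file name are placeholders:

```python
# Minimal sketch of the role + decision + extraction pattern via the
# Anthropic Python SDK. Assumes the PDF is already extracted to plain
# text; model string and file name are placeholder assumptions.
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

with open("vendor_proposal.txt") as f:
    report_text = f.read()

prompt = (
    "I'm reviewing this 45-page vendor proposal as a procurement manager. "
    "Summarise the key commercial terms, highlight any conditions or "
    "exclusions buried in the document, and flag anything that looks "
    "non-standard or risky.\n\n<report>\n" + report_text + "\n</report>"
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```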

I wrote up the full workflow with five more prompt templates and the limitations worth knowing (it paraphrases quotes and struggles with image-based charts) here if anyone wants the longer version: https://pickgearlab.com/how-to-use-claude-to-extract-key-insights-from-a-dense-pdf-report-in-minutes/

What prompt structures have worked for you on dense documents? Curious if anyone has cracked the "extract exact quotes verbatim" problem — that's the one Claude still gets wrong for me.
