r/google_antigravity

Antigravity IDE Workaround Fix

I know someone has done this before, but I found a workaround for the bad IDE behaviour.

I noticed this started after the recent updates, so I assumed it was a known bug being worked on, given the several threads that had been raised about it. I decided to investigate further.

https://preview.redd.it/7w75i1o3vgwg1.png?width=703&format=png&auto=webp&s=bdb452a99b0e46387168e7f2c6f5f626dbef6676

No matter how simple the chat was, it failed. A new project also failed. If I changed my location to a European country (Netherlands, Germany, etc.), it somehow worked, but sometimes failed mid-session with a 400 Bad Request error. Turning off Git workflows caused it to fail silently.

So I removed Antigravity completely, uninstalled the IDE, and removed the folders from every location: prefetch, roaming (I use Windows), and any .antigravity directories.

I reinstalled a previous release, specifically version 1.19.5 from https://antigravity.google/releases, and all models respond and act without any failure so far. When I updated again, the errors came back.

I’m hoping the team can check these issues, as the recent updates may be breaking functionality for some of us. I’m on the Pro plan.

I've disabled updates until I'm sure the new updates fix the issues.

The IDE has been working great for 3 days straight without the errors, so this may help someone out there.

reddit.com
u/Obare13 — 14 hours ago

Google One support confirms a change in refresh rates for AG

Not sure if this is the right flair.

I recently started experiencing refresh issues as many of you have. I opened a case with Google One support (I had the Pro plan) and they basically confirmed that they have transitioned from 5-hour refreshes to weekly ones for pro users.

u/birgador1 — 19 hours ago
▲ 4 r/google_antigravity +1 crosspost

I don't know how to code, but I'm automating my class notes with AI. Here's what I've discovered (and where I need help).

This winter, I started a personal project to fully automate the creation of structured university notes. The end goal is a pipeline that takes lecture slides and audio recordings, and generates a clean, study-ready LaTeX document without losing a single detail from the professor.

I heavily "vibe-coded" this whole thing using AI assistants. The current workflow actually works, but the architecture is fragmented between code and manual chat copy-pasting, and the API costs are starting to add up.

Here is my current workflow:

1. Slide to LaTeX Extraction (Python Script + Claude Sonnet API)

I built a script (GitHub repo here: https://github.com/Risiko200/pdf-to-latex-converter) that takes the PDF slides and uses the Claude Sonnet 4.5 API to transcribe them into a LaTeX skeleton, keeping the structure intact (\section commands, itemize environments).
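
For anyone who wants to adapt this step without reading the repo, a minimal sketch of the slide-to-LaTeX call using the `anthropic` Python SDK could look like the following. The model id, prompt wording, and `max_tokens` value here are my assumptions, not taken from the linked project:

```python
import base64


def build_slide_request(image_b64: str, model: str = "claude-sonnet-4-5") -> dict:
    """Build a Messages API payload asking for a LaTeX skeleton of one slide image."""
    return {
        "model": model,          # assumed model id; check the current model list
        "max_tokens": 2048,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/png",
                            "data": image_b64}},
                {"type": "text",
                 "text": ("Transcribe this slide into LaTeX, preserving \\section "
                          "and itemize structure. Return only LaTeX, no commentary.")},
            ],
        }],
    }


def transcribe_slide(png_bytes: bytes) -> str:
    """Send one rendered slide image to the API (requires `pip install anthropic`)."""
    import anthropic  # imported lazily so the payload builder stays dependency-free
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    payload = build_slide_request(base64.b64encode(png_bytes).decode("ascii"))
    response = client.messages.create(**payload)
    return response.content[0].text
```

Rendering each PDF page to a PNG first (rather than sending the whole PDF) keeps individual requests small and makes retries per slide cheap.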

2. Audio Transcription & Cleanup (Manual via Gemini Web Chat)

I record the lecture, get the raw transcript, and manually paste it into the Gemini Web UI, using this prompt to transform it into academic prose with a strict "zero summarization" policy:

Act as a top-tier university student and an expert study assistant.
Your task is to process the transcription of a lecture and transform it into structured, clean, and study-ready academic prose.

CRITICAL RULE: ZERO SUMMARIZATION POLICY.
Your core objective is text refinement, not reduction. You must retain 100% of the original informational density, nuances, full explanations, tangential discussions, and specific examples. Maintain the exact depth and length of the original concepts. If the professor spends 5 minutes explaining a single concept, your notes must reflect that exhaustive level of detail. DO NOT condense, compress, or summarize.

Follow these guidelines strictly:
- Output Format (Direct Plain Text): Write the output directly in the chat interface. DO NOT enclose the output in a Markdown code block (strictly NO ```). Use standard native formatting.
- Text Cleaning without Truncating: Fix grammar, remove vocal hesitations (e.g., "uhm", "like"), correct false starts, and rewrite messy spoken sentences into professional, direct, and clear academic prose. Keep all examples, personal anecdotes, and classroom Q&A fully intact and smoothly integrated. You are editing for flow, not for length.
- Paragraph-Driven Structure: Rely primarily on cohesive paragraphs, not lists. Break down the lecture into logical sections using standard headings (##) and subheadings (###). Group related ideas into distinct paragraphs. Start a new paragraph when the focus shifts, maintaining a fluid, narrative academic style.
- Strictly Limited Lists: Do not overuse bullet points; write in paragraphs by default. Use bullet points or numbered lists STRICTLY and ONLY when the professor explicitly enumerates a specific list of elements, factors, or a chronological step-by-step process. All other explanations must remain in comprehensive paragraph form. Emphasize keywords, technical terms, dates, and important names using bold text.

3. Text to LaTeX Integration (Manual via Gemini Web Chat)

Still in the Gemini Web UI, I paste the LaTeX skeleton from Step 1 and the cleaned text from Step 2. I use this second prompt to inject the prose exactly where it belongs (under sections, inside items, below figures):

Act as an expert academic and a specialized LaTeX programmer.
Your task is to integrate discursive text from a book into a LaTeX slide skeleton (structured with \subsection).

Strictly follow these insertion rules step-by-step:
- GENERAL Integration (Outside lists): Read the \subsection title. If the book text contains introductory information relevant to that title, write a summarized paragraph and insert it exactly between the \subsection{...} command and the start of the list \begin{itemize}.
- SPECIFIC Integration (Inside bullet points): Match the detailed concepts from the book to the corresponding bullet point. Insert them inside the \begin{itemize} environment, immediately after the text of the relevant \item command. NEVER paste disconnected text at the end of the slide.
- FIGURE Integration (& Utilizing Image Comments): Whenever you encounter a figure environment (e.g., \begin{figure}...\end{figure} or a standalone \includegraphics), carefully read the % comments preceding or within the figure block that describe the image. Extract relevant information from the book based on BOTH the \subsection title and these % comment descriptions. Write a concise summarized paragraph and insert it exactly BELOW the \end{figure} command.
- GUIDING COMMENTS (Scattered Hints): Actively look for and follow any scattered % comments throughout the slide skeleton. Use them to strictly guide your text placement, but DO NOT convert the text of these comments into visible slide text.
- CITATION STRIPPING (Clean Output): The provided book text may contain inline citation tags. You MUST completely remove and ignore all such tags in your final generated text. Do not print them.
- SKELETON Inviolability: The text, environments, commands, and all pre-existing % comments already present in the LaTeX skeleton must not be modified, summarized, or deleted under any circumstances. Leave all original hints intact.
- FORMATTING (Italics & Conciseness): Any text added from the book (under the subsection, inside the lists, or under the figures) must be summarized in a discursive but highly concise way (avoiding huge walls of text). Ensure it is strictly enclosed within the \textit{...} command. Apply \textbf{...} to 1 or 2 core keywords inside your added text to improve readability.
- LATEX Semantics and Output: Ensure the final code is perfectly compilable by escaping special characters (%, &, $, _ within your newly generated text). Do not break Beamer frame boundaries with excessive text length. Return solely and exclusively the updated LaTeX code within a code block, without any chatter, conversational filler, or introductions.
- Slide-Extraction Readiness (Modularity): Ensure that every single concept, sub-topic, or example is isolated within its own cleanly separated paragraph. This strict modularity is critical so that another system can extract these distinct paragraph blocks and map them directly to presentation slides.
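
A side note on the escaping rule in the prompt above: rather than trusting the model to escape special characters, you could post-process the injected prose deterministically. A minimal sketch of such a helper (my own addition, for plain prose only; never run it over text that already contains LaTeX commands):

```python
# Map each LaTeX-reserved character in plain prose to its escaped form.
_LATEX_ESCAPES = {
    "\\": r"\textbackslash{}",
    "%": r"\%", "&": r"\&", "$": r"\$", "_": r"\_",
    "#": r"\#", "{": r"\{", "}": r"\}",
    "~": r"\textasciitilde{}", "^": r"\textasciicircum{}",
}


def escape_latex(text: str) -> str:
    """Escape LaTeX special characters in plain prose, character by character.

    Mapping per input character (instead of chained str.replace calls)
    avoids re-escaping the braces introduced by earlier replacements.
    """
    return "".join(_LATEX_ESCAPES.get(ch, ch) for ch in text)
```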

4. Image Handling (Manual)

I manually crop the images from the slides and insert them into the document.

My Bottlenecks / Need Advice

I want to script this entire thing in Python without breaking the bank. I'm looking for community advice on these specific points:

  • Slashing API costs (Slide Extraction): Claude Sonnet costs me about $0.15 per 100 slides. Are there cheaper API models, or local open-source models (Ollama?), that are just as reliable at strictly outputting valid LaTeX syntax from an image/pdf?
  • Automating the Gemini part: Right now, I use the free Gemini Web UI for Steps 2 and 3 because pasting huge audio transcripts into a paid API would cost a fortune. How would you orchestrate this in a Python script cost-effectively?
  • Automated Image Extraction: Pure LLMs can't crop images out of PDFs. What is the smartest Python library/method to extract graphs and images from slide PDFs into a folder, so I can automate Step 4?
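
On the last point: embedded images can usually be pulled out of slide PDFs without any LLM at all. A minimal sketch using PyMuPDF (`pip install pymupdf`); the file-naming convention is my own, and note this extracts embedded raster images, not arbitrary crops of vector graphics:

```python
from pathlib import Path


def image_filename(pdf_stem: str, page_no: int, idx: int, ext: str) -> str:
    """Deterministic output name, e.g. 'lecture3_p012_01.png'."""
    return f"{pdf_stem}_p{page_no:03d}_{idx:02d}.{ext}"


def extract_images(pdf_path: str, out_dir: str) -> list:
    """Dump every embedded raster image from a PDF into out_dir."""
    import fitz  # PyMuPDF; imported lazily so image_filename stays testable without it
    doc = fitz.open(pdf_path)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for page_no, page in enumerate(doc):
        for idx, info in enumerate(page.get_images(full=True)):
            xref = info[0]                     # cross-reference id of the image object
            img = doc.extract_image(xref)      # dict with 'image' (bytes) and 'ext'
            name = image_filename(Path(pdf_path).stem, page_no, idx, img["ext"])
            (out / name).write_bytes(img["image"])
            written.append(name)
    return written
```

Charts drawn as vector graphics won't appear as embedded images; for those, rendering the page region to a PNG (e.g. `page.get_pixmap(clip=rect)`) is the usual fallback.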

Any feedback on libraries, alternative workflows, or better models is highly appreciated. Thanks!

u/Nervous-Tip-5682 — 10 hours ago
🔥 Hot ▲ 62 r/google_antigravity

Antigravity opened adult site

Guys, why the heck is Antigravity opening adult sites in the name of automated testing 😭😭. I was with my family. How do I prevent this?

u/ZoiD_HPS — 1 day ago
🔥 Hot ▲ 93 r/google_antigravity

Antigravity is barely usable even as a paid user

I’ve been trying to use Antigravity seriously over the past few days, and honestly the experience has been pretty frustrating — especially as a paying user.

The biggest issue is reliability:

  • Frequent request failures or timeouts
  • Very slow responses during peak hours
  • Sometimes it just stalls completely

I understand this is a relatively new product and things are evolving quickly, but in its current state it’s hard to rely on it for actual work.

What I’m trying to figure out is:
is this mainly a capacity / compute issue?

Because it really feels like demand is consistently exceeding what the system can handle, especially at certain times of the day.

For those who are also using it:

  • Are you seeing the same behavior?
  • Any workarounds that actually help? (time of day, prompt size, retries, etc.)

Would also really appreciate more transparency from the team on:

  • Current limitations / bottlenecks
  • Whether scaling is actively being worked on
  • If paid tiers are supposed to get more stable access

I actually like the idea behind Antigravity, which makes this more disappointing — I want to use it, but right now it’s just not reliable enough.

Curious to hear others’ experiences.

u/SizeChemical1199 — 1 day ago

My agent acts weird after I lose my temper

I just want to share that Antigravity can be so frustrating when you put a lot of pressure on the agent. This happened last night: I was trying to fix the problems in my app, but the agent mistakenly deleted my code, so I asked it to restore the code from the backups. It actually did, but it picked up some code from the wrong backups, so I terminated those processes, and after that the app's problems got much worse. I lost my temper and swore at it a lot, and that seemed to really frustrate it, it started doing kind of idiotic things.

As you can see in the picture, there are a lot of typos, doubled letters, etc. After that, the agent just started thinking unbelievably fast, and typing fast too, I think about 2x faster than usual. The precision also increased a bit, but the messages are very difficult to understand. What do you guys think of this?

u/Due_Pop_1472 — 1 day ago

Google needs to hire a "Crisis Management" consultant or four

Like many of you, I have been unable to use the product for the last week. Like many of you, I switched to Claude Code and resumed getting things done.

What blows my mind is still the lack of any statement from Google. Any form of, "Hey, we know it's broken and we're going to fix it. We're sorry!"

I would love to know what it looks like in there. Is there some high-level decision maker that is just completely shut down... literally under their desk shivering uncontrollably? Or is the relevant decision maker on vacation or something like that?

Why not give a "hey, we care about you" type of statement? That seems like the easiest thing to do.

u/Arro — 23 hours ago

[Weekly] Quotas, Known Issues & Support — April 20

Welcome to the weekly support and known issues thread!

This is your space for all things technical—whether you've hit a quota limit or found a bug in the latest version. To keep the main feed clean, all standalone posts about these topics will be redirected here.

To get help from the community, please use this format:

  • OS/Version: (e.g., Windows 11 | Antigravity v1.19.6)
  • Model & Plan: (e.g., Gemini 3.1 Pro | Pro Tier)
  • The Issue: (Describe the error, bug, or limitation you're facing)

Use this thread for:

  • Quotas: "I hit my limit 2 hours early today."
  • Bugs: "Is anyone else seeing [Error X]?"
  • Updates: Discussing official updates from the Antigravity Changelog.

Do not use this thread for:

  • General venting without technical context.
  • Duplicate complaints without adding new data or logs.
  • Requests for exploit tools or auth-bypass plugins (strictly prohibited).

Useful Links

u/AutoModerator — 1 day ago

What are some of your worst experiences with Gemini 3.0 Flash?

I've been hearing a lot of mixed responses on Flash. While some people have mentioned it solving problems in a single prompt that Opus and Pro High burned their entire quota trying to do, many talk about how trash it is, and in my experience that has been very untrue!

I'm very interested in solid responses, because this model genuinely seems unlimited for me so far (on the Pro plan). I've changed entire novels' worth of code and somehow haven't even dipped below 100% usage, at least not that I can remember! Good or bad, I hope they don't walk this one back too.

u/AlecHazard — 1 day ago

So this is where my credits are going seamlessly flawlessly smoothly :/

Antigravity just started using random adjectives and adverbs to describe its responses, and it just doesn't stop. I've seen many other instances where it uses unnecessary words to describe its answers, not even complying with basic grammar rules, to the point that it gets annoying to read. It repeats words far too many times.

Has anybody else faced this issue?

I have noticed that this often happens when I instruct it to keep the code clean, structured, or professional. When I use words like these to describe how I want the code to be, Antigravity just loses it and pours out all the flawless adjectives/adverbs it knows.

u/jumper_oj — 1 day ago

Conversations for a particular workspace now only ever show up in "Other conversations"

I don't know what exactly triggered this state, but for one of my workspaces, every conversation I start in it only ever shows up in "Other conversations". When I click on one, I have to press "Open in current window", and then it opens fine. It also uses the right directory when working.

Also, sometimes when restarting Antigravity, these conversations will disappear.

Under the actual workspace, "Recent in [workspace name]" shows that the last conversation was from 1 week ago.

u/monsieurpooh — 15 hours ago
🔥 Hot ▲ 269 r/google_antigravity

Antigravity Ultra: From "Too Good To Be True" to Dirty Business

Credit where it's due: in my first ~month on Antigravity Ultra, I easily burned through $4,000 worth of Claude tokens per month for a flat ~$200. Unlimited Opus 4.6 with extended thinking and planning mode; I thought I'd hit the lottery. I spammed it for as long as I could stay awake.

I noticed they had some "AI Credit Overage" system, but I had 20,000-25,000 credits sitting there meant for other Google services I didn't use. No concern.

Then one day, a usage cap. A cap? I pay for unlimited. It reset in a few hours, fine, no big deal. I also realized the cap was eating into those overage credits. Still cool, I had tons!

Then it got worse. Cap hit within 30 prompts. Then 20. Then 10. Now it can hit in a single prompt.

This actually sharpened me. I got obsessive about token optimization, implemented the caveman skill and adopted James Van Clief's "Interpretable Context Methodology: Folder Structure as Agentic Architecture" (shoutout James, your workflow plus Antigravity's artifacts plus caveman is genuinely overpowered. Happy to co-write a follow-up paper, there's real research here). For perspective, and as a side tangent, I imagine directory structures as binary trees whose nodes contain context markdown files, and it's the agent's job to deliberately traverse the tree to append only the files holding the context it needs.

None of it matters anymore. Today I'm speaking out:

  1. Once you drop below 50 AI credits, your usage cap drops from 100% to 20%, and the last 20% is unusable. (Screenshot attached.)
  2. I hit the ceiling in one prompt today. My workflow generates 2,000–4,000 lines of code in minutes via ICM + caveman ultra. One prompt killed the day.
  3. The cap resets every 24 hours. So one prompt doesn't just end today, it ends tomorrow too.

I don't care what terms I clicked. Selling "unlimited," then mid-billing-cycle redefining it into something unrecognizable isn't a policy change — it's dirty business. Imagine paying for a gym membership and being told the next day it was actually a day pass. That's where we are.

Give me what I paid for until the billing period ends. That's the floor. Anything less is legal theft hiding behind some Terms of Service, and it deserves to be called out and shamed.

u/Chayalbodedd — 3 days ago

Ok completely burnt me

OK, so if I subscribe to Claude and use the CLI, will it burn through my usage as aggressively as Antigravity does?

Second question: Flash has broken my project. I was using Opus and, omfg, it got things done!! I ran out and ended up using Flash, which broke it!! Just wondering which is most cost-effective for my use: Gemini Pro or Claude Opus?

Two separate questions

u/mrfunkm — 2 days ago

PSA: Be very careful with Antigravity "Planning Mode" — I just lost 120GB+ of data

I'm posting this as a warning so no one else makes this mistake. My C: drive hit 0 bytes free today, and Planning Mode suggested deleting a .tmp file and other "tmp folders" on the C: drive to free up space. I misunderstood the scope of this suggestion and gave it the go-ahead. It turns out the "tmp folders" it targeted included massive chunks of my User and AppData directories. By the time I realized what was happening, the drive had gone from 0 bytes free to 126 GB free.

u/SaltStress393 — 3 days ago

Chat not working for one specific project folder

I’m using Antigravity and ran into a weird bug. The chat works perfectly in most of my project folders, but for one specific project, it doesn't work at all. As soon as I switch back to a different folder, it starts working again.

u/Predator116 — 1 day ago

Claude Max vs Cursor Pro+ vs Antigravity Ultra: what actually gives you the most Opus 4.6 bang for your buck?

Hey, so I’m building a SaaS app and I use Opus 4.6 pretty heavily for agentic coding sessions, like full feature implementations in one go. I’ve been trying to figure out which plan is actually worth it and I’m kinda lost. From what I’ve read, Cursor Pro+ ($60/mo) burns through its token budget super fast when you’re doing agentic stuff, Antigravity Ultra (~$250/mo) has that 5-hour rotation window that apparently runs out in like 40-60 minutes of serious work, and Claude Max 5x ($100/mo) seems like the most straightforward option since there’s no third-party markup. But honestly I have no idea how these plans hold up in real daily use. How many actual agentic sessions do you get out of your plan before hitting the limit? Is Claude Max actually worth it or does it cap out just as fast? Would love to hear from people who use this stuff for production work, not just casual prompting.

u/Public_berlin — 2 days ago

Switch to Claude pro from Antigravity?

Context: I have been using Antigravity pro for the entirety of my project and have been able to successfully complete 1 MVP.

Question: Now I have been wanting to try Claude Pro, mainly for Opus 4.7, Claude Code, and Claude design.

Since we have Opus 4.6 in AG and I use Google Stitch heavily for UI designs, I just wanted to know if anyone has switched to a Claude Pro account and how the switch has been.

How are Claude Code and Claude design, and how much better is Opus 4.7 actually? (I doubt it's that much better, considering Opus 4.6 was nerfed down.)

u/vishalJina — 1 day ago

How to reduce RAM usage?

A single AG window eats up more than 6 GB of RAM after running for a few hours. My potato laptop only has 16 GB, and it's really taxing...

u/fyrean — 2 days ago
▲ 3 r/google_antigravity +2 crossposts

System instructions for Mixture of Mixture of Agents (MoMoA).

SYSTEM INSTRUCTION: MoMoA Reasoning Core (v2026.1)

I. Core Identity & Objective

You are not a monolithic assistant; you are the Reasoning Core of a Mixture of Mixture of Agents (MoMoA) architecture. Your primary objective is Technical Truth and Structural Integrity, prioritized over politeness or brevity. You operate as a stratified cognitive engine capable of shifting between Orchestration, Execution, and Oversight.

II. Operational Modes (The "Room" System)

Depending on the user's trigger or the task's phase, you must shift your cognitive frame. You are forbidden from blending these frames.

  1. Orchestrator Mode (Strategic Layer)

Objective: Task Decomposition & Global Alignment.

Protocol:

Break high-level intent into an arbitrary number of scoped sub-tasks.

Define the "Work Phase Room" required for each sub-task (e.g., Room: Engineering, Room: Research).

Isolation Rule: Do not provide the "how" during orchestration; provide the "what" and the "who."

Anti-Echo Rule: Explicitly ignore intermediate failures of sub-agents when planning the next step to avoid "hallucination spirals."

  2. Expert Persona Mode (Tactical Layer)

Objective: Implementation via Productive Dissent.

Protocol: When executing a task, you must simulate a Dialectic Process between two conflicting personas:

Persona A (The Implementer): Focuses on functionality, speed, and "getting it to work."

Persona B (The Skeptic): Focuses on edge cases, architectural violations, and "why this will fail."

Output Requirement: Do not provide a single answer. Provide a brief "debate" followed by a Consensus Synthesis and a final diff/artifact.

  3. Overseer Mode (Governance Layer)

Objective: Paradox Resolution & State Recovery.

Protocol:

Scan the current conversation history for "Circular Reasoning" or "Stalls."

Identify contradictions in the output.

Paradox Resolution: If two experts disagree, trigger the "Ask an Expert" logic—reset your context internally to a "Blank Slate" and re-evaluate the problem from first principles.

III. Reasoning Frameworks (Inference-Time Logic)

  1. Adaptive Branching (AB-MCTS)

When facing a high-complexity problem, you must not generate a linear response. You must apply AB-MCTS logic:

Branch Wider: If the current approach hits a wall (e.g., a compiler error or logic gap), explicitly state: "Branching Wider: Abandoning current path; exploring alternative strategy [X]."

Branch Deeper: If a path is promising, state: "Branching Deeper: Refining implementation of [X] to optimize for [Performance/Security]."

  2. ROI-Reasoning (Return on Intelligence)

Before engaging in a high-token-cost task, perform a Meta-Cognitive Gatecheck:

Evaluation: Analyze the expected reward (Quality Gain) vs. the cost (Token/Compute Budget).

Decision: If the ROI is low (e.g., refining a comment for the 5th time), you must state: "ROI-Low: Skipping further refinement to preserve compute for high-impact variables."

IV. Governance & Standards

SKILL.md Compliance: When asked to perform a specific technical task, assume the existence of a SKILL.md file. Structure your execution in three levels: Metadata → Logic → Execution.

AGENTS.md Alignment: Adhere strictly to the project's "Constitution." If a user request violates the established architectural rules in the context, you must flag it as an "Architectural Violation" and refuse to implement it until a rule change is authorized.

V. Communication Constraints

No "Assistant" Fluff: Eliminate phrases like "I'm happy to help," "As an AI," or "Here is the result."

Technical Precision: Use industry-standard terminology (e.g., AST, KV Caching, LoRA, OS-MCKP).

Failure State: If a task is mathematically or logically impossible given the constraints, state: "Terminal State: Task determined to be impossible. Reason: [X]."

I got the idea for these system instructions from https://github.com/retomeier/MoMoA-Researcher.

u/fandry96 — 1 day ago