u/LargeProgress7756

Let's talk Claude Design and Figma killers.

Stop calling all these new AI design tools Figma killers. Figma died months ago. You're just feeding the AI hype doom cycle at this point.

These AI UX tools are incredible for zero-to-one work (nothing to something). They get you from a blank screen to a prototype faster than ever. But when it comes to maintaining a robust design system, they fall short. My current AI design system is built entirely out of three markdown files (design.md, layout.md, and sections.md). It works great, but it's not the industry standard for AI. Why? Because we don't have an industry standard for AI design systems... yet.
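For anyone curious what those files hold, here is a stripped-down sketch of a design.md. Every value below is a placeholder, not my actual tokens; the point is the shape, not the specifics:

```markdown
# design.md — core design tokens (placeholder values)

## Colors
- primary: #2563EB
- surface: #F8FAFC
- text: #0F172A

## Typography
- headings: Inter, 600 weight
- body: Inter, 400 weight, 16px base

## Rules
- Buttons always use the primary color.
- Never introduce a new color without adding it here first.
```

layout.md and sections.md follow the same pattern: plain rules the model can read verbatim before generating anything.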

To the UX designers worrying about the future: stop obsessing over Figma and pixels. Learn to embrace the AI shift and focus entirely on solving the user's actual needs. The tools are changing, but the core job remains the same. Fight for the User! 💪

u/LargeProgress7756 — 10 days ago

I have been testing Google's new image editing capabilities. We have been calling it the nano banana, but the tool itself is serious. It uses text and images to edit visuals, and I wanted to break down how it performs when applied to a real production environment.

Where it works: This functions like a significantly upgraded version of Photoshop. When you need to composite objects naturally, modify colors, or apply specific designs, the execution is fast and accurate.

The resolution tiers are practical for standard workflows:

  • 0.5K for rapid drafts and ideation.
  • 1K for standard use.
  • 2K and 4K for final production assets.
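If you script your image pipeline, that tier choice is worth making explicit rather than hardcoding resolutions everywhere. A minimal sketch (the stage names are made up for illustration; only the tiers come from the list above):

```python
# Hypothetical mapping from workflow stage to resolution tier,
# mirroring the tiers described above.
TIERS = {
    "draft": "0.5K",     # rapid drafts and ideation
    "standard": "1K",    # standard use
    "final": "2K",       # final production assets
    "final_4k": "4K",    # large final production assets
}

def tier_for(stage: str) -> str:
    """Return the resolution tier for a given workflow stage."""
    try:
        return TIERS[stage]
    except KeyError:
        raise ValueError(f"unknown stage: {stage!r}")
```

Keeping the lookup in one place means that when a new tier ships, you change one dict instead of hunting through prompts.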

The technical constraints: The tool's capabilities depend entirely on your access point. The consumer Gemini app is basic. Accessing the models through the API and Vertex AI is where the actual value is. That route gives developers the precision, higher resolutions, and advanced controls needed for real application.

Where it falls short: It is a strong tool for editing existing assets, but it struggles with raw creative generation. If you need to build unique characters from scratch, it does not hold up against Midjourney.

The takeaway here is that you do not need one tool to do everything. My current process keeps Midjourney for raw creative generation and shifts to Google for heavy production editing and compositing.

Are you integrating Vertex AI into your current stack, or sticking to standard workflows for image production?

u/LargeProgress7756 — 13 days ago