r/GenAI4all

🔥 Hot ▲ 374 r/GenAI4all+1 crossposts

Seedance 2.0 finally goes public

Finally, after such a long wait because of all the Hollywood lawsuits and everything, it’s here!!!

u/BholaCoder — 15 hours ago
🔥 Hot ▲ 497 r/OpenAI+2 crossposts

Revenge

This is an original horror series, made with a script by ChatGPT plus Cinema Studio 3.0 for image and image-to-video. We have blurred some scenes so everyone can watch it :)

u/memerwala_londa — 3 days ago
🔥 Hot ▲ 217 r/OpenAI+2 crossposts

Perplexity CEO says AI layoffs aren’t so bad because people hate their jobs anyways: ‘That sort of glorious future is what we should look forward to’

Perplexity CEO Aravind Srinivas recently stated that AI-driven job displacement isn't necessarily a bad thing because most people don't enjoy their jobs. Speaking on the All-In podcast, he argued that losing traditional employment to AI will free individuals to pursue entrepreneurship and start their own mini-businesses.

fortune.com
u/EchoOfOppenheimer — 3 days ago
🔥 Hot ▲ 52 r/Seedance_AI+1 crossposts

I built a custom node to remove the noise spikes in Seedance 2.0

So like everyone else, I've been deep in Seedance 2.0 lately. The quality is genuinely impressive — but after working with it extensively, I started noticing these subtle noise spikes that appear for 1-2 frames at a time. Chroma flicker, random color pops, that kind of thing.

At first I tried throwing Topaz and various upscale models at it, hoping they'd clean it up. They help with general quality, sure, but those frame-level noise spikes were still there.

I work with compositing tools (Nuke, Flame, etc.), and this reminded me of a classic technique: frame blending with motion compensation. So I decided to build it as a ComfyUI custom node that anyone can use.

------------------------------------------

What it does:

- Uses optical flow (MEMFOF) to align neighboring frames, then averages them to remove temporal noise
- Separates chroma and luma so you can target color flicker without killing detail
- Scene-aware — handles cuts automatically. I tested 15-second clips with multiple scene transitions and it worked cleanly

------------------------------------------
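Out of curiosity I sketched the core averaging idea in plain numpy. Heavy caveat: this is not the node's actual code. The real thing first warps the neighboring frames onto the current one with MEMFOF optical flow, while this toy version skips alignment entirely (so it only behaves on a static shot), uses a temporal median instead of a plain average, and all the names plus the BT.601 matrix are my own, not from the repo.

```python
import numpy as np

# BT.601 RGB -> YCbCr matrix, so luma and chroma can be treated separately.
_RGB2YCC = np.array([[ 0.299,  0.587,  0.114],
                     [-0.169, -0.331,  0.500],
                     [ 0.500, -0.419, -0.081]])

def rgb_to_ycc(rgb):
    return rgb @ _RGB2YCC.T

def ycc_to_rgb(ycc):
    return ycc @ np.linalg.inv(_RGB2YCC).T

def temporal_chroma_denoise(prev, cur, nxt):
    """Median the chroma across 3 frames; keep the current frame's luma.

    A 1-frame color pop only exists in `cur`, so the per-pixel median of
    (prev, cur, nxt) chroma falls back to the neighbors' value there,
    while untouched pixels pass through unchanged. Luma is left alone so
    spatial detail survives. The real node would motion-compensate
    prev/nxt first; this sketch assumes a static shot.
    """
    ycc = np.stack([rgb_to_ycc(f.astype(np.float64)) for f in (prev, cur, nxt)])
    out = ycc[1].copy()                              # start from the current frame
    out[..., 1:] = np.median(ycc[..., 1:], axis=0)   # median kills 1-frame spikes
    return np.clip(ycc_to_rgb(out), 0, 255).astype(np.uint8)
```

On real footage you would warp prev/nxt toward cur with the optical flow field before taking the median, which is exactly what makes the approach hold up on moving shots.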

Here's the thing — depending on the shot, these noise spikes can be really obvious or barely noticeable. But from everything I've tested, they exist in literally every generated clip. Even the Higgsfield Cinema 3.0 showcase videos on their own site still have them. For me it seems like an white-labeled version of Seedance 2.0 tho.

So if you've ever had to toss a good take just because of a random color pop or flicker — give this a try.

GitHub: https://github.com/AIMZ-GFX/ComfyUI-FlowDenoise

This is still early stage and there's plenty of room for improvement. If you try it out and have ideas or feedback, I'd genuinely appreciate it. Thanks!

▲ 38 r/artificial+5 crossposts

Child safety groups say they were unaware OpenAI funded their coalition

A new report from The San Francisco Standard reveals that the Parents and Kids Safe AI Coalition, a group pushing for AI age-verification legislation in California, was entirely funded by OpenAI. Child safety advocates and nonprofits who joined the coalition say they were completely unaware of the tech giant's financial backing until after the group's launch, with one member describing the covert arrangement as leaving "a very grimy feeling."

sfstandard.com
u/EchoOfOppenheimer — 1 day ago
▲ 21 r/ArtificialInteligence+9 crossposts

Oracle slashes 30k jobs, Slop is not necessarily the future, Coding agents could make free software matter again and many other AI links from Hacker News

Hey everyone, I just sent the 26th issue of AI Hacker Newsletter, a weekly roundup of the best AI links and discussions from Hacker News. Here are some of the links:

  • Coding agents could make free software matter again - comments
  • AI got the blame for the Iran school bombing. The truth is more worrying - comments
  • Slop is not necessarily the future - comments
  • Oracle slashes 30k jobs - comments
  • OpenAI closes funding round at an $852B valuation - comments

If you enjoy such links, I send over 30 every week. You can subscribe here: https://hackernewsai.com/

u/alexeestec — 11 hours ago

Effects on interpersonal communication

I am doing an assignment for a university class on the interpersonal communication effects of AI on users over fifty. If you have any experiences or stories you would like to share, please fill out this Google form. Thank you so much in advance.

u/Ac_frise666 — 1 hour ago

I built a custom node to remove the noise spikes in Seedance 2.0


[workflow example]

https://preview.redd.it/a4fqc5ugwrsg1.png?width=4077&format=png&auto=webp&s=95d5d1293a7b2586cfd278634dfe7559611d0441

u/Primary_Internal9365 — 14 hours ago

Spec-driven development might be the missing layer in GenAI workflows

A lot of GenAI workflows today still rely on:

prompt → output → refine → repeat

It works well for quick tasks, but when you try to build something larger, it can get inconsistent and hard to manage.

Recently I started experimenting with spec-driven development with GenAI.

Instead of prompting directly, I first define:

  • what I want to build
  • expected behavior
  • inputs / outputs
  • constraints and edge cases

Then I let the model generate based on that.
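To make that concrete, here is a tiny Python sketch of what "spec first, then generate" can look like. The `Spec` shape and its field names are made up for illustration; this is not a standard format or any particular tool's API.

```python
from dataclasses import dataclass, field

@dataclass
class Spec:
    """A minimal, hypothetical spec object: goal, behavior, I/O, constraints."""
    goal: str
    behavior: list[str] = field(default_factory=list)
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        """Render the spec as one structured prompt instead of ad-hoc asks."""
        sections = [
            ("Goal", [self.goal]),
            ("Expected behavior", self.behavior),
            ("Inputs", self.inputs),
            ("Outputs", self.outputs),
            ("Constraints and edge cases", self.constraints),
        ]
        lines = []
        for title, items in sections:
            if items:                        # skip empty sections
                lines.append(f"## {title}")
                lines.extend(f"- {item}" for item in items)
        return "\n".join(lines)

spec = Spec(
    goal="CLI that deduplicates lines in a text file",
    behavior=["preserve first-occurrence order"],
    inputs=["path to a UTF-8 text file"],
    outputs=["deduplicated lines on stdout"],
    constraints=["file may be empty", "lines may be very long"],
)
prompt = spec.to_prompt()
```

The point is that the model sees the same complete contract every time, so regenerating after a tweak starts from the spec rather than from a pile of chat history.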

This small shift made a big difference:

  • outputs are more consistent
  • less back-and-forth refinement
  • easier to debug and iterate

I’ve also been exploring tools that help track how AI applies these specs across a project, like traycer, which makes things more manageable at scale.

Feels like spec-driven workflows could be a key layer for making GenAI more reliable beyond demos.

Curious if others here are experimenting with similar approaches.

reddit.com
u/Willing-Squash6929 — 10 hours ago

I asked AI to imagine a puppy in the 27th dimension

This infographic actually illustrates a multitude of concepts AI struggles with, including but not limited to the chicken and the egg, the fifth-dimensional paradox, and how string theory actually ends at the 26th dimension. Thoughts?

u/Hepdesigns — 22 hours ago
Week