▲ 6 r/nextjs

Is the shadcn + AI workflow making our codebases a complete black box?

I’ve been using shadcn for a while now and I love it, but lately, combined with AI tools like v0 or Cursor, things are getting weird.

Instead of actually "building" UI, I find myself prompting an AI to generate a complex component, and then just dumping 200+ lines of shadcn-style code into my /components folder. It looks great and it's fast as hell, but I realized today that I barely understand the internals of half my UI anymore. If a Radix primitive or some complex Tailwind logic breaks, my first instinct is now to just ask the AI to fix its own mess instead of actually debugging it.

Are we just creating massive technical debt for our future selves? Does anyone else feel like we’re losing the "craft" of writing clean CSS/Tailwind, or is "prompt and paste" just the new standard we have to accept?

reddit.com
u/Known_Author5622 — 4 hours ago
▲ 16 r/nextjs

Looking for a PDF generator that supports full HTML/CSS — @react-pdf/renderer is too limiting

I built an ABM (Account-Based Marketing) outreach system for my portfolio site. When I reach out to a company, everything is personalized with their branding:

∙	Email → their logo, brand colors, company name in the header

∙	Landing page → mysite.com/landing?company=nike.com shows a custom proposition

∙	CV (PDF) → attached to the email, branded with their colors and logo

The email and landing page look great because I have full HTML/CSS control. But the PDF is the weak link. I’m using @react-pdf/renderer which has its own layout engine (Yoga/Flexbox subset), no CSS support, and limited styling options. The result looks noticeably worse than the rest of the experience.

What I’m looking for: A Node.js-compatible PDF generator that lets me use real HTML/CSS so my PDF can match the quality of my email and landing page. Ideally something that works in a serverless environment (Vercel).

Options I’m aware of:

∙	Puppeteer/Playwright (headless Chrome → PDF) — powerful but heavy for serverless

∙	Prince XML — expensive

∙	Gotenberg — self-hosted, needs Docker

Has anyone found a good solution for generating styled, dynamic PDFs from HTML/CSS in a Next.js/Vercel setup? What’s your go-to?

reddit.com
u/Senior_Ad_8034 — 15 hours ago
▲ 13 r/nextjs

Shadcn Slider: from standard to advanced!

Continuing my shadcn/ui curation, this list covers standard React sliders, range sliders, vertical sliders, knob sliders, scrub sliders, and more.

Explore all slider blocks here: shadcn sliders

u/PerspectiveGrand716 — 16 hours ago
▲ 2 r/nextjs

Redis Cache limits

I am using Dokploy and tried to set some limits for redis.

https://preview.redd.it/pcy9abba4otg1.png?width=1539&format=png&auto=webp&s=43e4bed716037e924bdf9946effc49558c9a13b1

Here is what I have done.

from terminal I set:

CONFIG SET maxmemory 2gb
CONFIG SET maxmemory-policy allkeys-lru

This works great until Redis is restarted; then it resets to no limits and no policy.

Someone suggested I use the Advanced settings in Dokploy; however, that doesn't seem to work either.

Here is the screenshot, and I have also tried a few other variations.

I have tried the Advanced settings with the command "redis-server" and arguments

  1. redis-server --maxmemory 4gb
  2. redis-server --maxmemory-policy allkeys-lfu

With variations of arguments like

  1. --maxmemory
  2. 4g
  3. --maxmemory-policy
  4. allkeys-lfu

and

  1. --maxmemory 4gb
  2. --maxmemory-policy allkeys-lfu

Every reload or stop/start resets it.

I joined the Dokploy Discord server, but there doesn't seem to be any help there either.

How do I set the limit and policy so they stay set even after a restart?

I also tried CONFIG REWRITE, but it says there is no config file to rewrite.
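
For what it's worth, CONFIG REWRITE fails because the container starts redis-server without a config file, so there is nothing to rewrite, and CONFIG SET only lives in memory until the process dies. One approach (a sketch, assuming your Dokploy service can mount a file into the container) is to mount a minimal redis.conf and point the start command at it:

```
# redis.conf — mounted into the container, e.g. at /usr/local/etc/redis/redis.conf
maxmemory 4gb
maxmemory-policy allkeys-lfu
```

Then set the container command to `redis-server /usr/local/etc/redis/redis.conf`, so the limits are re-read on every restart.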

reddit.com
u/afrk — 5 hours ago
▲ 1 r/nextjs

As a Vue 3 dev with no Next.js experience, I built a full-stack CDK selling platform using Codex in 2.5 days — honest experience

I'm a Vue 3 developer who had never worked with Next.js before. Over a few spare evenings at work, I used Codex to build a complete full-stack platform for managing and selling ChatGPT Plus card keys (CDK).

What impressed me most:
Codex handled complex backend logic exceptionally well — webhook verification, inventory race conditions, atomic transactions, and other edge cases were managed automatically with almost no hand-holding.

UI challenge and the fix that made a big difference:
Out of the box, the generated layouts relied heavily on cards and quickly became messy. I applied a specific UI/UX refinement skill that cleaned everything up significantly. If design matters to you, I highly recommend checking it out: https://github.com/nextlevelbuilder/ui-ux-pro-max-skill

Final outcome:
The platform is now fully live and functional. It was a valuable practical exercise in transitioning from Vue to Next.js with AI assistance. If you're a Vue developer exploring Codex, I'd be interested to hear your experiences too.

https://preview.redd.it/u0nbxglxiotg1.png?width=1920&format=png&auto=webp&s=0838c380b264675bf27e385de3e837c923a440aa

https://preview.redd.it/bhijfkqyiotg1.png?width=1920&format=png&auto=webp&s=27a5bdbf7d81b451a075f4f18030c306b7ed32c0

https://preview.redd.it/sfquwo91jotg1.png?width=1920&format=png&auto=webp&s=3757eb49aeff96ff82a8d845c0c83cfc6fc643f8

reddit.com
u/moonbeam1013 — 3 hours ago
▲ 8 r/nextjs

Adblockers completely crashing my website because of the adsense scripts that I include in root layout

I have an issue with adblockers (uBlock Origin, Brave browser, ...) completely blocking access to my Next.js website. The website renders fine for a second, then goes to a black screen with the error 'Application error: a client-side exception has occurred while loading site.com (see the browser console for more information)', and the console shows Uncaught Error: Connection closed.

I've narrowed down the issue to the adsense ad scripts I'm inserting in the NextJS root layout of my app. The adblockers seem to find the scripts being loaded via NextJS, I'm guessing in the js bundle, stream chunks or whatever, and block the entire app instead of just those adsense scripts. I have tried to include the scripts via Next's Head element in the root layout, and also outside of the Head and just use <head> in hopes that doesn't get optimized by Next. I also tried using simple <script> tags and then Next <Script> component. None of these resolved the issue in question. The site in question is using SSR.

Finally, I found one solution: I removed the adsense scripts completely from the app and via Cloudflare I created a worker that triggers on every route to rewrite the html by appending the said adsense scripts at the bottom of the body tag. This worked, but with a caveat. The worker trigger can only be a catch all route, doesn't seem to support regex. This means that when doing site.com/* the worker triggers on absolutely everything, from site.com/.next/ to rsc chunks of specific pages (for example one page load actually costs 10 worker credits since it's chunked). I refreshed a few pages and used up 1100 credits in just a few seconds, so this definitely isn't an option I can use.

Does anyone have any ideas on how to fix the issue, be it through loading the scripts via nextjs, or some tweak to the cf workers, something else? Thanks a lot :)

u/ImBoB99 — 22 hours ago
▲ 4 r/nextjs

Code That Looks Clean … and Code That Actually Stays Clean

I’ve noticed something interesting in a lot of React codebases.

Some components look clean at first glance.

Short functions, nice formatting, everything seems organized.

But once you look closer, they’re actually doing:

  • data fetching
  • validation
  • business rules
  • UI state
  • error handling

All inside the same component.

It works… but it feels like hidden complexity.

On the other hand, when I separate:

  • validation logic
  • service / API calls
  • UI layer

The code looks slightly more “verbose”, but way easier to reason about and extend.
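
As a sketch of that split (all names here are just illustrative):

```typescript
// Validation and data access live outside the component, so the UI layer
// only wires them together.

type User = { name: string; email: string };

// Validation layer: pure, trivially testable in isolation.
function validateUser(input: User): string[] {
  const errors: string[] = [];
  if (!input.name.trim()) errors.push("name is required");
  if (!/^\S+@\S+\.\S+$/.test(input.email)) errors.push("email is invalid");
  return errors;
}

// Service layer: owns the API call; the component never touches fetch directly.
async function saveUser(input: User): Promise<Response> {
  return fetch("/api/users", { method: "POST", body: JSON.stringify(input) });
}

// In the component: const errs = validateUser(form);
//                   if (errs.length === 0) await saveUser(form);
```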

So now I’m starting to think:

The real difference isn’t between messy code and clean code.

It’s between code that looks clean

and code that actually stays clean as the project grows.

Curious how you approach this.

Do you keep logic inside components for speed?

Or do you prefer extracting everything early?

reddit.com
u/OMAR_M_AHMAD — 15 hours ago
▲ 1 r/nextjs

Frontend randomly running into infinite loop when JSON-LD is on page (and cacheComponents is enabled)

Hi,

I've been fighting a super weird bug for three days. I upgraded Next.js in my project from 15 to 16.2.2 and enabled cacheComponents. Everything works really well, except for one issue: when I refresh routes that have JSON-LD in them, as explained in the docs (https://nextjs.org/docs/app/guides/json-ld), they randomly run into an infinite loop and ultimately crash the tab. It doesn't happen on every refresh though; sometimes it crashes as soon as I open the URL, sometimes I can refresh a few times before the crash happens.

I did a ton of digging to even find out that it is caused by the json-ld, but I can't find a solution except injecting it client side or removing it.

I also tried to replicate the issue in a minimal setup but couldn't really reproduce the infinite loop. Since I don't really understand the Next.js internals and I did a lot of debugging with Opus, here are some things that I found out:

Env:

- Next.js 16.2.2, React 19.2.4, Turbopack (also using HeroUI v3)

- `cacheComponents: true` (PPR)

- App Router, async server components

- The error only appears on builds, not in dev. It only happens when cacheComponents is enabled.

**The symptom:**

On page load or after a handful of refreshes, the tab becomes completely unresponsive. CPU spikes to 130%. The HTML document is delivered successfully (verified with curl), but JS chunks stay "(pending)" in DevTools and the tab never recovers.

**Root cause:**

Pausing the frozen tab in DevTools lands inside React's PPR reveal function (`$RV`), specifically this loop:

```
for (; e.firstChild; ) f.insertBefore(e.firstChild, c);
```

The variables `e` and `f` are the same DOM element. Moving children from a node into itself never clears `firstChild` → infinite loop.

We also found that Next.js PPR generates duplicate HTML element IDs in the streaming output. For example, two separate `<div hidden id="S:4">` elements — one from the prerendered fallback, one from the dynamic content. When React's `$RC` function calls `document.getElementById("S:4")`, it can return the wrong element depending on timing, which leads to the `e === f` condition. <- This was a conclusion by Opus, but we later removed a lot of my Suspense boundaries which fixed the duplicate ID thing, but it still crashed.

It's way too far inside the Next.js internals for me to understand what's going on, and I'm really out of ideas. I invested about three days to find the issue and to build a minimal reproduction so I could open a GitHub issue (which didn't work), and I just want the JSON-LD that worked before back in there.

The JSON-LD code is basically the same as the example in the docs:

```
return (
  <script
    type="application/ld+json"
    dangerouslySetInnerHTML={{
      __html: JSON.stringify(data).replace(/</g, "\\u003c"),
    }}
  />
);
```

reddit.com
u/Powerful_Froyo8423 — 6 hours ago
▲ 1 r/nextjs

Help me to build websites....

Hello, I am new to programming. I am building my first ever proper website, which will be used by a client. Can someone tell me what to do and what not to do?

It would also be helpful if someone could share a proper workflow.

Please also suggest a deployment strategy.

reddit.com
u/Icy-Preparation-2530 — 11 hours ago
▲ 2 r/Supabase+1 crossposts

Help me understand if my current setup is correct and lmk what I'm doing wrong

I am currently learning how the technologies mentioned below work together. My current setup is Turborepo, Next.js, Supabase, Supabase Auth, Drizzle, tRPC v11, and TanStack Query.

I think I'm missing something and I have a few questions.

  1. From what I understand, I would only need to create a separate client if my API lived in the apps folder (served by Hono or Express). In that case I should use supabase.auth.getSession() in apps/web, get the access token, add it to the headers, and send it to apps/api, correct? That way, apps/api can verify that the request comes from the same authenticated user.

  2. Am I using supabase.auth.getClaims() correctly? Am I calling this function too many times? I suppose it's fine to use getClaims() since I am not using a separate app for the API.

In total, I am using supabase.auth.getClaims() in 3 places: server.tsx, trpc/[trpc]/route.ts, auth/callback/route.ts, and a 4th place would be in proxy.ts once I write it. Is this setup correct?

If someone could take the time to look over the code and clarify when getClaims() should be used and why, and why it is important to send an access token in the headers when the API is a separate app, I would much appreciate it. This is all a bit confusing to me.

monorepo/
├── apps/
│   └── web/                         # Next.js app
│       ├── app/
│       │   ├── api/
│       │   │   └── trpc/
│       │   │       └── route.ts    # tRPC HTTP handler
│       │   ├── auth/
│       │   │   └── callback/
│       │   │       └── route.ts    # Fixed auth callback
│       │   └── layout.tsx          # Wraps TRPCReactProvider
│       ├── lib/
│       │   └── trpc/
│       │       ├── client.tsx      # TRPCReactProvider
│       │       ├── server.tsx      # trpc proxy for RSC
│       │       └── query-client.ts
│       └── utils/
│           └── constants.ts
├── packages/
│   ├── db/                          # Drizzle + postgres
│   │   ├── client.ts                # db
│   │   └── instrument.ts
│   ├── trpc/
│   │   ├── init.ts                  # t, createTRPCContext, middleware
│   │   └── routers/
│   │       ├── _app.ts              # appRouter
│   │       ├── users.ts
│   │       └── ...
│   ├── supabase/
│   │   └── server.ts                # createClient
│   └── utils/
│       └── sanitize-redirect.ts
└── package.json

Here are my files:
packages\trpc\src\init.ts:

import type { Database } from "@monorepo/db/client";
import { initTRPC, TRPCError } from "@trpc/server";
import superjson from "superjson";

export type AuthClaims = {
  sub: string;
  email?: string;
  role?: string;
  app_metadata?: Record<string, unknown>;
  user_metadata?: Record<string, unknown>;
  [key: string]: unknown;
};

export type TRPCContext = {
  headers: Headers;
  claims: AuthClaims | null;
  db: Database;
};

/**
 * This context creator accepts `headers` so it can be reused in both
 * the RSC server caller (where you pass `next/headers`) and the
 * API route handler (where you pass the request headers).
 */
export const createTRPCContext = async (opts: {
  headers: Headers;
  claims: AuthClaims | null;
  db: Database;
}): Promise<TRPCContext> => {
  // const user = await auth(opts.headers);
  return {
    headers: opts.headers,
    claims: opts.claims,
    db: opts.db,
  };
};

// Avoid exporting the entire t-object
// since it's not very descriptive.
// For instance, the use of a t variable
// is common in i18n libraries.
const t = initTRPC.context<Awaited<ReturnType<typeof createTRPCContext>>>().create({
  /**
   * @see https://trpc.io/docs/server/data-transformers
   */
  transformer: superjson,
});

const enforceAuth = t.middleware(async ({ ctx, next }) => {
  if (!ctx.claims) {
    throw new TRPCError({
      code: "UNAUTHORIZED",
      message: "You must be signed in to perform this action",
    });
  }

  return next({
    ctx: {
      ...ctx,
      claims: ctx.claims,
    },
  });
});


// Base router and procedure helpers
export const createTRPCRouter = t.router;
export const createCallerFactory = t.createCallerFactory;
export const publicProcedure = t.procedure;
export const protectedProcedure = t.procedure.use(enforceAuth);

apps\web\src\trpc\server.tsx:

// tRPC caller for Server Components
// To prefetch queries from server components, we create a proxy from our router.
// You can also pass in a client if your router is on a separate server.

import "server-only"; // <-- ensure this file cannot be imported from the client
import { cache } from "react";
import { headers } from "next/headers";
import { db } from "@monorepo/db/client";
import { createClient } from "@monorepo/supabase/server";
import { createTRPCContext } from "@monorepo/trpc/init";
import type { AppRouter } from "@monorepo/trpc/routers/_app";
import { appRouter } from "@monorepo/trpc/routers/_app";
import { createTRPCOptionsProxy, type TRPCQueryOptions } from "@trpc/tanstack-react-query";
import { makeQueryClient } from "./query-client";


// IMPORTANT: Create a stable getter for the query client that
//            will return the same client during the same request.
export const getQueryClient = cache(makeQueryClient);

export const trpc = createTRPCOptionsProxy<AppRouter>({
  ctx: async () => {
    const supabase = await createClient();
    const { data } = await supabase.auth.getClaims();

    return createTRPCContext({
      headers: await headers(),
      claims: data?.claims ?? null,
      db,
    });
  },
  router: appRouter,
  queryClient: getQueryClient,
});

// If your router is on a separate server, pass a client instead:
// createTRPCOptionsProxy({
//   client: createTRPCClient({ links: [httpLink({ url: '...' })] }),
//   queryClient: getQueryClient,
// });

export function prefetch<T extends ReturnType<TRPCQueryOptions<any>>>(queryOptions: T) {
  const queryClient = getQueryClient();

  if (queryOptions.queryKey[1]?.type === "infinite") {
    void queryClient.prefetchInfiniteQuery(queryOptions as any).catch(() => {
      // Avoid unhandled promise rejections from fire-and-forget prefetches.
    });
  } else {
    void queryClient.prefetchQuery(queryOptions).catch(() => {
      // Avoid unhandled promise rejections from fire-and-forget prefetches.
    });
  }
}


export function batchPrefetch<T extends ReturnType<TRPCQueryOptions<any>>>(queryOptionsArray: T[]) {
  const queryClient = getQueryClient();

  for (const queryOptions of queryOptionsArray) {
    if (queryOptions.queryKey[1]?.type === "infinite") {
      void queryClient.prefetchInfiniteQuery(queryOptions as any).catch(() => {
        // Avoid unhandled promise rejections from fire-and-forget prefetches.
      });
    } else {
      void queryClient.prefetchQuery(queryOptions).catch(() => {
        // Avoid unhandled promise rejections from fire-and-forget prefetches.
      });
    }
  }
}

apps\web\src\app\api\trpc\[trpc]\route.ts:

import { db } from "@monorepo/db/client";
import { createClient } from "@monorepo/supabase/server";
import { createTRPCContext } from "@monorepo/trpc/init";
import { appRouter } from "@monorepo/trpc/routers/_app";
import { fetchRequestHandler } from "@trpc/server/adapters/fetch";

const handler = (req: Request) =>
  fetchRequestHandler({
    endpoint: "/api/trpc",
    req,
    router: appRouter,
    createContext: async () => {
      const supabase = await createClient();
      const { data } = await supabase.auth.getClaims();

      return createTRPCContext({
        headers: req.headers,
        claims: data?.claims ?? null,
        db,
      });
    },
  });


export { handler as GET, handler as POST };

Now here is the auth callback, which I am not sure is correct. Here I am trying to check the user's role, which is just a column in the public.users table. Should I use tRPC here or just query the database directly?
apps\web\src\app\api\auth\callback\route.ts:

import { cookies } from "next/headers";
import type { NextRequest } from "next/server";
import { NextResponse } from "next/server";
import { db } from "@monorepo/db/client";
import { getUserById, getUserInvitesByEmail } from "@monorepo/db/queries";
import { createClient } from "@monorepo/supabase/server";
import { sanitizeRedirectPath } from "@monorepo/utils/sanitize-redirect";
import { addYears } from "date-fns";
import { Cookies } from "@/lib/utils/constants";

export async function GET(req: NextRequest) {
  const cookieStore = await cookies();
  const requestUrl = new URL(req.url);
  const code = requestUrl.searchParams.get("code");
  const returnTo = requestUrl.searchParams.get("return_to");
  const provider = requestUrl.searchParams.get("provider");

  if (provider) {
    cookieStore.set(Cookies.PreferredSignInProvider, provider, {
      expires: addYears(new Date(), 1),
    });
  }

  if (code) {
    const supabase = await createClient();
    await supabase.auth.exchangeCodeForSession(code);

    const { data } = await supabase.auth.getClaims();
    const user = data?.claims;

    if (user) {
      const userId = user.sub;
      const userEmail = user.email;

      const userData = await getUserById(db, userId);

      const userRole = userData?.role;

      
      // Check if user has pending invitations when role is null
      if (userRole === null) {
        const inviteData = await getUserInvitesByEmail(db, userEmail!);

        // If there's an invitation, send them to teams page
        if (inviteData) {
          return NextResponse.redirect(`${requestUrl.origin}/team`);
        }

        
        // Otherwise proceed with setup
        return NextResponse.redirect(`${requestUrl.origin}/setup`);
      }

      if (userRole === "admin" || userRole === "teacher") {
        return NextResponse.redirect(`${requestUrl.origin}/dashboard`);
      }

      if (userRole === "student" || userRole === "parent") {
        return NextResponse.redirect(`${requestUrl.origin}/profile`);
      }
    }
  }

  if (returnTo) {
    // The middleware strips the leading "/" (e.g. "settings/accounts"),
    // but sanitizeRedirectPath requires a root-relative path starting with "/".
    const normalized = returnTo.startsWith("/") ? returnTo : `/${returnTo}`;
    const safePath = sanitizeRedirectPath(normalized);
    return NextResponse.redirect(`${requestUrl.origin}${safePath}`);
  }

  return NextResponse.redirect(requestUrl.origin);
}

This is the tRPC client:

"use client";
// ^-- to make sure we can mount the Provider from a server component
import { useState } from "react";
import type { AppRouter } from "@monorepo/trpc/routers/_app";
import type { QueryClient } from "@tanstack/react-query";
import { QueryClientProvider } from "@tanstack/react-query";
import { createTRPCClient, httpBatchLink } from "@trpc/client";
import { createTRPCContext } from "@trpc/tanstack-react-query";
import superjson from "superjson";
import { makeQueryClient } from "./query-client";

export const { TRPCProvider, useTRPC } = createTRPCContext<AppRouter>();

let browserQueryClient: QueryClient;
function getQueryClient() {
  if (typeof window === "undefined") {
    // Server: always make a new query client
    return makeQueryClient();
  }
  // Browser: make a new query client if we don't already have one.
  // This is very important, so we don't re-make a new client if React
  // suspends during the initial render. This may not be needed if we
  // have a suspense boundary BELOW the creation of the query client.
  if (!browserQueryClient) browserQueryClient = makeQueryClient();

  return browserQueryClient;
}


function getUrl() {
  const base = (() => {
    if (typeof window !== "undefined") return "";
    if (process.env.VERCEL_URL) return `https://${process.env.VERCEL_URL}`;
    return "http://localhost:3000";
  })();

  return `${base}/api/trpc`;
}


export function TRPCReactProvider({ children }: { children: React.ReactNode }) {
  // NOTE: Avoid useState when initializing the query client if you don't
  //       have a suspense boundary between this and the code that may
  //       suspend, because React will throw away the client on the initial
  //       render if it suspends and there is no boundary.
  const queryClient = getQueryClient();

  const [trpcClient] = useState(() =>
    createTRPCClient<AppRouter>({
      links: [
        httpBatchLink({
          transformer: superjson, // <-- if you use a data transformer
          url: getUrl(),
        }),
      ],
    }),
  );

  return (
    <QueryClientProvider client={queryClient}>
      <TRPCProvider trpcClient={trpcClient} queryClient={queryClient}>
        {children}
      </TRPCProvider>
    </QueryClientProvider>
  );
}
}

packages\trpc\src\server\routers\users.ts:

import { getUserById } from "@monorepo/db/queries";
import { createTRPCRouter, protectedProcedure } from "../../init";

export const usersRouter = createTRPCRouter({
  me: protectedProcedure.query(async ({ ctx: { db, claims } }) => {
    const user = await getUserById(db, claims.sub);

    return user;
  }),
});
u/Western_Door6946 — 17 hours ago
▲ 1 r/nextjs+1 crossposts

Job hunt advice

Hey guys, so I'm graduating this May and need to start looking for work. I guess what I'm asking is: what do I need to do to prepare for interviews in this market with AI? I have some really solid projects, but I did use AI to build most of them. I also have a few internships, where they also had me using AI for everything. So theoretically I feel pretty solid, but technically, in terms of coding on my own, I'm a little weak. Then again, in this era of AI I feel like we don't really need to code much ourselves. But essentially, what should I do to prepare? Any advice would be appreciated, thank you!

reddit.com
u/Sufficient-Citron-55 — 11 hours ago
▲ 1 r/nextjs

Intermittent 404s on dynamically generated images in Next.js on Vercel, turns out /tmp is per-container

I'm generating PNG images server-side (using a headless rendering library) and serving them through a separate API route. The render route writes the PNGs to os.tmpdir() under a batch ID, and the image route reads them back from the same path. Worked perfectly in local dev. Deployed to Vercel, started getting 404s on maybe 30-40% of image loads.

The stack trace looked alarming. The browser console showed what seemed like an infinite re-render loop, hundreds of repeated React reconciliation calls. That sent me down the wrong path for a while, thinking there was a client-side state issue causing a retry spiral. It turned out that was just the browser console dumping the full React component tree for a failed <img> load. Not an actual loop.

The real cause is straightforward once you know it. Vercel serverless functions run on ephemeral containers. The render route writes files to /tmp/carousel/{batchId}/. The image route tries to read from the same path. But these are separate function invocations that can hit different containers. Container A writes the files. Container B gets the image request. Container B's /tmp is empty. 404.

It works sometimes because warm containers can serve both requests. It fails when the image request lands on a different container than the one that rendered. That's why it's intermittent, which is also why it took me longer than it should have to diagnose. Intermittent bugs with no error logs on the server side (just a clean 404) don't give you much to work with.

I also had a zip download route that read from the same /tmp path to bundle all images. Same bug, I just hadn't noticed it yet because I'd been testing downloads immediately after rendering (same warm container).

The fix was to stop using the filesystem entirely. The render route now returns base64 data URLs inline in the response instead of writing to disk. The client gets the image data directly. <img src> handles data URLs the same as regular URLs, so the component barely changed. For the zip download, I moved to client-side generation with JSZip. The browser has the base64 data already, so it just packs the zip locally instead of making another server round-trip.
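
The inline-data-URL trick can be sketched like this (illustrative, not the OP's exact code):

```typescript
// Turn a rendered PNG buffer into a data URL the client can drop into <img src>.
// Nothing ever touches /tmp, so it cannot disappear between invocations.
function toDataUrl(png: Buffer, mime = "image/png"): string {
  return `data:${mime};base64,${png.toString("base64")}`;
}

// e.g. the render route responds with { images: buffers.map((b) => toDataUrl(b)) }
const url = toDataUrl(Buffer.from([0x89, 0x50, 0x4e, 0x47])); // first PNG magic bytes
// url === "data:image/png;base64,iVBORw=="
```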

This eliminated the image route and the download route completely. Both were dead code after the change. Fewer API routes, no filesystem dependency, and it's actually faster because there's no second HTTP request per image.

If you're generating files server-side on Vercel and serving them through a separate route, you'll hit this eventually. The /tmp directory is not shared across invocations. In-memory or inline responses are the way to go for anything that needs to survive between the write and the read.

TL;DR: Vercel serverless containers don't share /tmp. Writing files in one API route and reading them in another will 404 intermittently when requests hit different containers. Fix: return base64 data URLs inline instead of writing to disk. Eliminated two API routes and the filesystem dependency entirely.

reddit.com
u/Glittering-Pie6039 — 19 hours ago
▲ 0 r/nextjs

Technical Deep Dive: Using Rust to achieve zero-bundle-size MDX in React Server Components

Hi everyone,

MDX parsing in Next.js has traditionally been a trade-off between build-time performance and client-side hydration costs. In large-scale projects, the Remark/Rehype pipeline can become a significant bottleneck.

We’ve been experimenting with a different approach for Omni-Core: offloading the entire parsing logic to a native Rust core. Here’s a breakdown of the architecture and how it integrates with RSC.

1. The "Single Core" Architecture

Instead of maintaining separate parsers for different environments, we centralized the logic in Rust. This ensures identical parsing behavior whether you're rendering on the web via WASM or in a Python data pipeline.

2. Achieving Zero-JS for Content

By moving the parsing step into Rust and the rendering step into React Server Components, the heavy lifting happens entirely on the server. The result is that the MDX content reaches the browser as pure static HTML.

  • The Rust parser generates a "pristine" AST.
  • This AST is JSON-serializable, making it easy to cache or pass between server and client.
  • Standard UI components are mapped to JSX tags via a registry, allowing Server Components to handle the rendering without sending the parser to the client.
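
A hypothetical sketch of that registry idea (not the actual Omni-MDX API): the server walks the JSON-serializable AST and maps node types to tags, so no parser ever ships to the client:

```typescript
// Illustrative only: a server-side walk of a JSON AST through a tag registry.
type AstNode = { type: string; value?: string; children?: AstNode[] };

const registry: Record<string, string> = { doc: "article", paragraph: "p", heading: "h2" };

function renderNode(node: AstNode): string {
  if (node.type === "text") return node.value ?? "";
  const tag = registry[node.type] ?? "div"; // unknown node types fall back to a div
  const inner = (node.children ?? []).map(renderNode).join("");
  return `<${tag}>${inner}</${tag}>`;
}

const ast: AstNode = {
  type: "paragraph",
  children: [{ type: "text", value: "Hello from Rust-parsed MDX" }],
};
// renderNode(ast) === "<p>Hello from Rust-parsed MDX</p>"
```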

3. The Unified/Rehype Bridge

One challenge was maintaining compatibility with existing plugins. We implemented a bridge that maps the Rust-generated AST directly to the HAST (HTML AST) format. This allows developers to inject standard rehype plugins (like syntax highlighters) into the native pipeline.

4. Hybrid Runtime Execution

To ensure it works everywhere, the engine auto-detects the environment:

  • Native .node: High-speed execution for Node.js 18+ and Next.js RSC.
  • WASM Fallback: Ensures compatibility with Edge runtimes (Vercel Edge, Cloudflare Workers) and direct browser previews.

5. Native Math Handling

Instead of relying on external JS plugins for equations, we integrated math lexing directly into the Rust core. It outputs standard HTML data attributes that can be statically styled or hydrated with minimal overhead.

We've open-sourced the implementation under Omni-MDX v1.0.0. If you're interested in cross-platform parsing or RSC optimization, the source and full documentation are available below.

Documentation: https://omni-core.org/mdx

GitHub: https://github.com/TOAQ-oss/omni-mdx-core

reddit.com
u/Glittering_Sign_8150 — 8 hours ago