r/reactjs

Chart library that actually feels “native” in a PWA?

Hey devs,

I’m building a PWA and most chart libraries I’ve tried feel… very “web-ish” (heavy, laggy, or just not matching native UI vibes).

I’m looking for something:

  • lightweight (fast load on mobile)
  • smooth animations / interactions
  • looks clean + modern (not dashboard-y)
  • ideally React-friendly

I’ve tried Chart.js but it still feels a bit generic.

Anyone using a chart lib that actually feels close to native apps?

Would love suggestions + real experiences 🙏

reddit.com
u/Guilty_Difference_42 — 3 hours ago
Open-sourced a React 18 media gallery app — TanStack Virtual, Zustand, shadcn/ui, and client-side ML

I've been building SubHarvest, a self-hosted Reddit media harvester, and just open-sourced it. The frontend has some patterns that might be useful to others so I wanted to share.

Frontend stack: React 18, Vite, TanStack Query, TanStack Virtual, Zustand, Tailwind CSS, Radix UI / shadcn/ui, Zod

Rendering 10k+ images without melting the browser

The main gallery uses TanStack Virtual for a masonry grid layout. Each card is measured and positioned dynamically. With virtualisation, scrolling through 10,000+ assets stays at 60fps because only visible items are in the DOM. Cursor-based pagination from the API means we never load the full dataset upfront.
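The core of that claim is just windowing arithmetic. A minimal sketch of the idea (not TanStack Virtual's actual internals — names here are illustrative, and it assumes fixed item heights, unlike the measured masonry cards in the post):

```typescript
// Given a scroll offset and viewport, compute which items need to be in
// the DOM. Everything outside [start, end] is simply not rendered, which
// is why 10k+ items can scroll smoothly.

type VisibleRange = { start: number; end: number };

function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  itemHeight: number,
  itemCount: number,
  overscan = 2, // extra rows above/below to avoid blank flashes
): VisibleRange {
  const first = Math.floor(scrollTop / itemHeight);
  const last = Math.floor((scrollTop + viewportHeight - 1) / itemHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(itemCount - 1, last + overscan),
  };
}
```

With a 300px viewport and 100px rows, only ~7 of 10,000 items (plus overscan) ever exist in the DOM at once; TanStack Virtual generalizes this to measured, variable-size lanes.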

Client-side NSFW detection

Instead of running ML on the server, we load TensorFlow.js + nsfwjs lazily in the browser via dynamic import(). The model only loads when the user enables enhanced NSFW scanning. Scores are computed client-side and PATCHed back to the API. The hook (useNsfwScanner) is a no-op when disabled — zero bundle cost.

State management split

  • Server state: TanStack Query handles all API data (assets, channels, sync jobs, collections). Query keys are structured so mutations invalidate the right caches.
  • Client state: Zustand for two small stores — auth (JWT tokens persisted to localStorage with auto-refresh on 401) and UI preferences (theme, view mode, NSFW filter toggle).
  • No Redux, no context providers for data fetching.
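The client-state half of this split is tiny by design. A dependency-free sketch of the pattern (Zustand's real API differs; the store fields below are assumed from the post's description):

```typescript
// Minimal subscribable store: holds a small slice of client-only state
// (theme, view mode, NSFW filter) outside React, no providers needed.

type Listener = () => void;

function createStore<T extends object>(initial: T) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    getState: () => state,
    setState: (partial: Partial<T>) => {
      state = { ...state, ...partial };
      listeners.forEach((l) => l()); // components re-render via subscribe
    },
    subscribe: (l: Listener) => {
      listeners.add(l);
      return () => listeners.delete(l);
    },
  };
}

// UI-preferences store shaped like the one described above (hypothetical fields):
const uiStore = createStore({ theme: "dark", viewMode: "grid", nsfwFilter: true });
```

Server data never touches this store; TanStack Query owns it, which is what keeps both stores this small.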

Shared types with the backend

The monorepo has a @subharvest/types package with Zod schemas. The API validates requests with them, and the frontend uses the inferred TypeScript types. One source of truth for the shape of every API response.
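The principle — one runtime validator per response shape, with the static type kept in lockstep — can be shown without Zod (the real package uses Zod schemas and `z.infer`; the `Asset` shape below is hypothetical):

```typescript
// One definition governs both the compile-time type and the runtime check,
// so API and frontend cannot silently disagree about the payload shape.

interface Asset { id: string; url: string; width: number; height: number }

function parseAsset(raw: unknown): Asset {
  if (typeof raw !== "object" || raw === null) throw new Error("invalid Asset payload");
  const o = raw as Record<string, unknown>;
  if (typeof o.id !== "string" || typeof o.url !== "string" ||
      typeof o.width !== "number" || typeof o.height !== "number") {
    throw new Error("invalid Asset payload");
  }
  return { id: o.id, url: o.url, width: o.width, height: o.height };
}
```

Zod collapses the interface and the validator into a single schema declaration, which is what makes the shared-package approach cheap to maintain.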

Other frontend patterns:

  • Command palette (Cmd+K) for global navigation
  • G-chord keyboard shortcuts (G then H for home, G then A for all assets)
  • Lightbox with keyboard nav, video controls, and metadata panel
  • Auto token refresh — the API client intercepts 401s, calls /auth/refresh, and retries the original request transparently
  • Filter bar state synced to URL params so gallery views are shareable/bookmarkable
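The auto-refresh bullet above describes a classic intercept-and-retry flow. A testable sketch of it (the real client wraps actual `fetch` and its own `/auth/refresh` endpoint; `doFetch` is injected here so the flow can be exercised without a network):

```typescript
// Transparent 401 handling: on an expired token, refresh once and replay
// the original request with the new token. Callers never see the 401.

type Fetch = (url: string, token: string) => Promise<{ status: number; body?: unknown }>;

async function fetchWithRefresh(
  url: string,
  token: string,
  refresh: () => Promise<string>, // e.g. POST /auth/refresh -> new access token
  doFetch: Fetch,
) {
  const res = await doFetch(url, token);
  if (res.status !== 401) return res;
  const fresh = await refresh();  // obtain a fresh token once
  return doFetch(url, fresh);     // retry the original request transparently
}
```

A production version would also deduplicate concurrent refreshes and bail out if the retry itself comes back 401, to avoid loops.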

Testing: 8 frontend tests covering the auth store (token parsing, persistence, clear) and the API client (auth headers, error handling, 401 refresh flow).

Repo: https://github.com/tamerUAE/AutoReddit

Feedback welcome — especially on the virtualisation approach or the auth flow.

u/Ok_Market_5845 — 7 hours ago

Help working with OpenAI's API for image generation

How do I properly structure and send this image-generation request? Here's my code:

    <button onClick={() => {
        const APIBody = {
            model: "gpt-image-1",
            input: "Create a full-body studio fashion photograph of the person in this image. White background, professional lighting, high detail.",
            images: [
                personalPhotoBase64,
                fashionPhotoBase64
            ]
        };

        const fetchImage = async () => {
            try {
                const response = await axios.post("https://api.openai.com/v1/responses", APIBody, {
                    headers: {
                        "Content-Type": "application/json",
                        Authorization: `Bearer ${API_KEY}`,
                    }
                });
                if (response.status >= 200 && response.status < 300) {
                    setGeneratedImage(response.data.output[0].image.url)
                    setGeneratedImages(prev => [...prev, response.data.output[0].image.url])
                    setShowImage(true)
                }
            } catch (error) {
                console.error(error)
                // Axios rejects on 4xx/5xx, so HTTP errors land here; the
                // status code lives on error.response, not on error itself.
                setStatusCode(error.response?.status)
            }
        };

        if (personalPhotoBase64 && fashionPhotoBase64) {
            setIsClicked(true)
            fetchImage();
        } else {
            setIsClicked(false)
        }
    }}>UPLOAD & GENERATE</button>
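For what it's worth, image generation with reference photos usually goes through the Images API rather than `/v1/responses` with an `images` array. A hedged sketch of building such a request (endpoint and field names follow the OpenAI Images docs as I understand them at the time of writing — verify against the current documentation before relying on this):

```typescript
// Hedged sketch: image *edits* with reference images are sent to
// /v1/images/edits as multipart form-data, not as JSON. The "image[]"
// and "prompt" field names are per the OpenAI Images API docs as of
// writing; treat them as assumptions and check the current reference.

function buildImageEditRequest(apiKey: string, prompt: string, images: Blob[]) {
  const form = new FormData();
  form.append("model", "gpt-image-1");
  form.append("prompt", prompt);
  for (const img of images) form.append("image[]", img); // reference photos as binary, not base64 JSON

  return {
    url: "https://api.openai.com/v1/images/edits",
    init: {
      method: "POST",
      // Do NOT set Content-Type manually: fetch adds the multipart boundary.
      headers: { Authorization: `Bearer ${apiKey}` },
      body: form,
    },
  };
}
```

Also note that, per the docs at time of writing, gpt-image-1 returns base64 (`data[0].b64_json`) rather than an image URL, so `response.data.output[0].image.url` would not exist either way — and the API key should live on a server, never in browser code.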
reddit.com
u/Internal1344 — 3 hours ago
mdocUI — open-source generative UI for LLMs using Markdoc {% %} tags (streaming-first, 24 React components)

I built mdocUI to solve a problem I kept hitting: LLMs can write markdown, but rendering interactive components (charts, forms, tables) mid-stream is fragile with JSON or JSX.

mdocUI uses Markdoc {% %} tags inline with markdown. The streaming parser processes tokens character-by-character — no buffering, no regex.
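To make the character-by-character claim concrete, here's an illustrative scanner in the same spirit (not mdocUI's actual parser — just a toy showing how `{% %}` spans can be emitted incrementally, even when a tag delimiter is split across stream chunks):

```typescript
// Incremental {% ... %} scanner: consumes arbitrary chunks, emits text
// and tag events as soon as they are complete. No full-stream buffering,
// no regex; only a possible partial delimiter is held back.

type ScanEvent = { kind: "text" | "tag"; value: string };

function createTagScanner(onEvent: (e: ScanEvent) => void) {
  let mode: "text" | "tag" = "text";
  let buf = "";
  return (chunk: string) => {
    for (const ch of chunk) {
      buf += ch;
      if (mode === "text" && buf.endsWith("{%")) {
        const text = buf.slice(0, -2);
        if (text) onEvent({ kind: "text", value: text });
        buf = "{%";
        mode = "tag";
      } else if (mode === "tag" && buf.endsWith("%}")) {
        onEvent({ kind: "tag", value: buf });
        buf = "";
        mode = "text";
      }
    }
    // Flush completed text, but hold a trailing "{" that might start a tag.
    if (mode === "text" && buf && !buf.endsWith("{")) {
      onEvent({ kind: "text", value: buf });
      buf = "";
    }
  };
}
```

The appeal over JSON/JSX for streaming is visible here: a tag is recognizable from local context alone, so partial output always renders as valid markdown plus at most one pending component.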

What you get:

  • 24 theme-neutral React components (charts, tables, forms, cards, tabs)
  • Single onAction callback for all interactivity
  • classNames prop for Tailwind/custom styling
  • Swap any component with your own (shadcn, Radix, etc.)
  • useRenderer hook for streaming state

Demo: https://mdocui.vercel.app GitHub: https://github.com/mdocui/mdocui

Alpha (0.6.x) — feedback welcome. What components would you want built-in?

u/Plastic_Charge4340 — 5 hours ago
I built React hooks that predict text height before render, fixes accordion hacks, masonry layout, chat bubbles

I built @pretext-studio/core to solve a specific annoyance: the browser won't tell you how tall a text block is until after it renders. This forces you into either a render-then-measure cycle (which causes layout shift) or hacks like max-height: 9999px for accordion animations (which makes easing look wrong because the animation runs over 9999px, not the actual content height).

The library wraps @chenglou/pretext, a pure-JS text layout engine that replicates the browser's line-breaking algorithm using font metrics loaded once via the Font Metrics API. From there, computing height is arithmetic — no DOM, no getBoundingClientRect, no reflow. A prepare() call runs in ~0.03ms; a layout() call in under 0.01ms. Results are cached in a module-level LRU map so repeated calls for the same font/size pair are nearly free.
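A toy version of "height is arithmetic" (nothing like the real engine's per-glyph metrics — this assumes a single average advance width just to show the shape of the computation):

```typescript
// Greedy word wrap with a constant per-character width: once you know
// how wide each word is, line count and total height need no DOM at all.

function layoutHeight(
  text: string,
  charWidth: number,   // average advance width in px (stand-in for font metrics)
  maxWidth: number,    // container width in px
  lineHeight: number,  // px per line
): { lines: number; height: number } {
  const words = text.split(/\s+/).filter(Boolean);
  const space = charWidth;
  let lines = 1;
  let lineWidth = 0;
  for (const w of words) {
    const wWidth = w.length * charWidth;
    const needed = lineWidth === 0 ? wWidth : lineWidth + space + wWidth;
    if (needed > maxWidth && lineWidth > 0) {
      lines++;            // word doesn't fit: break to a new line
      lineWidth = wWidth;
    } else {
      lineWidth = needed;
    }
  }
  return { lines, height: lines * lineHeight };
}
```

The real engine replaces `charWidth` with actual per-glyph advances and replicates the browser's break rules, which is what makes its predictions match rendered output.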

The main hooks are useTextLayout (height + line count for a block at a given width), useBubbleMetrics (finds the tightest width that preserves line count, which eliminates the dead space you get from CSS fit-content), and useStableList (pre-computes heights for a list of items before paint, useful for virtualized lists and masonry layouts). There's also a MeasuredText drop-in component with a debug overlay that draws predicted line boundaries over actual rendered text so you can see where predictions diverge.
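The "tightest width that preserves line count" idea behind useBubbleMetrics can be sketched as a binary search over widths (illustrative, not the library's code — it relies on line count being monotone non-increasing as width grows):

```typescript
// Count lines for a greedy wrap of pre-measured word widths.
function lineCount(wordWidths: number[], space: number, maxWidth: number): number {
  let lines = 1, cur = 0;
  for (const w of wordWidths) {
    const needed = cur === 0 ? w : cur + space + w;
    if (needed > maxWidth && cur > 0) { lines++; cur = w; }
    else cur = needed;
  }
  return lines;
}

// Smallest width that wraps to the same number of lines as startWidth —
// i.e. shrink the bubble until just before it would gain a line.
function tightestWidth(wordWidths: number[], space: number, startWidth: number): number {
  const target = lineCount(wordWidths, space, startWidth);
  let lo = Math.max(...wordWidths); // can't be narrower than the widest word
  let hi = startWidth;
  while (lo < hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (lineCount(wordWidths, space, mid) === target) hi = mid;
    else lo = mid + 1;
  }
  return lo;
}
```

This is exactly the dead space CSS `fit-content` leaves behind: `fit-content` stops at the natural max line width, while the search above finds the minimal width that still produces the same wrap.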

The honest limitation: it only works with fonts you can load metrics for, so arbitrary system fonts or poorly-behaved variable fonts may drift. The isReady flag on every hook is false until font metrics load, so you need to gate renders on it when using web fonts. It also doesn't handle white-space: pre-wrap yet. Feedback welcome — especially if you've hit edge cases with font loading or non-Latin scripts.

GitHub: https://github.com/ahmadparizaad/pretext-studio-core — npm: @pretext-studio/core

u/ahmadparizaad — 2 hours ago

u/Internal1344 — 3 hours ago

Built a CLI to stop manually duplicating design tokens between React and Flutter — also verifies they actually match

If you ship React + Flutter, you probably have the same design tokens defined twice. Once as CSS custom properties, once as Dart constants. And no way to know they match until a designer screenshots both apps side by side.

I got burned by this enough times that I built something. You define tokens once in a standard JSON file, run one command, and get:

- tokens.css — CSS custom properties with :root light and [data-theme="dark"] dark blocks

- app_theme.dart — full Flutter ThemeData, ColorScheme, TextTheme

- tokens.ts — typed TypeScript constants

Then it runs a parity check that numerically compares every token between the CSS and Dart outputs. Catches the #5C6BC0 vs #5B6BC0 type of drift before it ships. Can block CI on a mismatch.
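The parity check can be sketched numerically (the CLI's internals aren't shown in the post, so this is just the principle: compare channel values, not strings):

```typescript
// Parse a #RRGGBB token into RGB channels and compare two tokens
// channel-by-channel -- catches #5C6BC0 vs #5B6BC0 drift that a
// designer's eye won't.

function hexToRgb(hex: string): [number, number, number] {
  const h = hex.replace("#", "");
  return [0, 2, 4].map((i) => parseInt(h.slice(i, i + 2), 16)) as [number, number, number];
}

function tokensMatch(cssHex: string, dartHex: string): boolean {
  const a = hexToRgb(cssHex);
  const b = hexToRgb(dartHex);
  return a.every((ch, i) => ch === b[i]);
}
```

Comparing numerically rather than textually also lets the check treat `#5c6bc0`, `#5C6BC0`, and Dart's `Color(0xFF5C6BC0)` as the same value once each side is parsed.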

npx tokensync init

npx tokensync build

npx tokensync check --ci # exits 1 if CSS and Dart diverge

Mainly curious — how do people here handle token sync between web and mobile? Separate files maintained by hand? Some other tool? Is this even a problem at your company or do you just accept the drift?

reddit.com
u/Ok_Edge1810 — 4 hours ago

Most React calendar libraries look good… until you actually try to use them (update after Reddit feedback)

A few days ago, I posted about a React calendar I was building.

Got some solid feedback here — and honestly, it confirmed something I was already feeling:

👉 Most calendar libraries break the moment you try to use them in real products.

So I took that feedback seriously and rebuilt a big part of it.

The reality I ran into (and many of you called out too):

  • Drag & drop? → janky or hacky
  • Recurring events? → missing or overly complex
  • Timezones? → absolute pain
  • DatePickers? → don’t scale beyond basic use
  • “Customizable”? → until you need something slightly different

At some point it feels like:
👉 you’re not building your product
👉 you’re fighting the calendar library

Based on your feedback, I pushed a major update:

This is no longer just a calendar component.

👉 It’s turning into a scheduling engine

What changed:

  • 🖱️ Proper drag & drop + resize (one of the most requested features)
  • 🔁 Recurring events support (kept as simple as possible)
  • 📆 Unified Date / Range / DateTime pickers
  • 🔗 Works cleanly with react-hook-form / Formik
  • 🌍 Timezone handling that doesn’t explode
  • 🧩 Plugin-style architecture (so it doesn’t box you in)

The core idea

Most libs force you to pick:

simple but limited, or powerful but painful.

I’m trying to build something that doesn’t force that tradeoff.

Need honest feedback again

The last post helped a lot — so doing it again.

If you’ve built scheduling systems:

👉 What’s still missing?
👉 What would make you actually switch to a new library?
👉 What’s the most painful thing you’ve dealt with?

Links:
npm: https://www.npmjs.com/package/schedultron
demo: https://schedultron-live.vercel.app

If it’s bad, tell me why. If it’s useful, tell me what breaks.

reddit.com
u/Fun_Dragonfly8885 — 4 hours ago

I learned MERN but I want to learn something else too!!

I’m so confused right now. I don’t know what to learn next after MERN; my top 2 choices are Python and C++.

Python, because I think I can get into AI development — but I worry the AI bubble may burst, and I don’t want to waste time learning AI/ML if it doesn’t have a future.

C++, because I think it can be really useful in robotics, games, and software development, but I still think it’s difficult to learn.

What should I learn?

reddit.com
u/ExpensiveDurian2259 — 7 hours ago
A friend of mine built something to skip MERN setup, and I’m not sure how I feel about it

A friend of mine has been complaining for a while about how repetitive starting a MERN project feels, and honestly, I didn’t think much of it at first.

I mean, every time you start something new, you go through the same flow. Auth, folder structure, models, routes, controllers… by the time you’re done setting everything up, a few hours are gone and you haven’t actually built anything specific to your idea yet.

You either copy from an old project or rewrite everything again because it feels faster than digging through past code. Either way, it’s the same process every time.

So he built a small tool that basically generates a backend structure for you based on what you describe. Auth, models, routes, all wired up so you can just start building from there.

I tried it, and it does what it says, but I’m a bit conflicted. On one hand, it genuinely saves time and removes the boring part.

On the other hand, part of me feels like setting things up manually is just part of the process, especially if you want full control.

Curious how other people see this. Do you prefer starting from scratch every time, or would you actually use something that skips the setup phase?

If anyone wants to check it out, I have attached the link.

merngenie.com
u/Gloomy_Guard_ — 7 hours ago

PDF files in ReactJS

Hello everyone! I’ve just started building a ReactJS project that involves working with PDF files. The idea is simple: users who sign up on the site can upload files in PDF format, and those files are then displayed on the site’s home page with the file name as the title. Users can also download the files if they want.

Where should I start?

Thanks a lot!

reddit.com
u/Klutzy_Weight5907 — 4 hours ago