u/AdorableMaids

Importing CSV data into PostgreSQL sounds simple until the file comes from a real system.

If it’s clean, COPY or \copy is usually fine. Matching columns, normal encoding, proper headers, no surprises.

The pain starts when the CSV is slightly cursed. Wrong delimiter, empty strings that should be NULL, dates in different formats, random extra columns, quotes inside quotes, or an Excel export that looks normal until Postgres disagrees.

For small imports, I still start with \copy:

    \copy table_name FROM 'file.csv' WITH CSV HEADER
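When the file is one of the slightly cursed ones, \copy can still absorb a fair bit of it through COPY options. Rough sketch only, with the table name, delimiter, encoding, and column names as placeholders for whatever the export actually uses:

    \copy table_name FROM 'export.csv' WITH (FORMAT csv, HEADER, DELIMITER ';', ENCODING 'LATIN1', FORCE_NULL (customer_id, ordered_at))

FORCE_NULL turns quoted empty strings into NULL for those columns, which is usually what the export meant in the first place.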

But I don’t like loading messy files straight into the final table. I usually import into a staging table first and treat that data as suspicious. Then I can check what actually came in: row counts, weird NULLs, duplicates, broken dates, IDs that don’t match anything.
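Roughly what that staging step looks like for me. The table, columns, and checks below are invented for the example, and what counts as “broken” depends on the app:

    -- everything lands as text, so no row gets rejected on type
    CREATE TABLE staging_orders (
        order_id    text,
        customer_id text,
        ordered_at  text,
        amount      text
    );

    \copy staging_orders FROM 'orders.csv' WITH (FORMAT csv, HEADER)

    -- did the expected number of rows arrive?
    SELECT count(*) FROM staging_orders;

    -- duplicates on what should be the key
    SELECT order_id, count(*)
    FROM staging_orders
    GROUP BY order_id
    HAVING count(*) > 1;

    -- dates that won't cast cleanly
    SELECT ordered_at FROM staging_orders
    WHERE ordered_at IS NULL OR ordered_at !~ '^\d{4}-\d{2}-\d{2}';

    -- IDs that don't match anything
    SELECT s.customer_id
    FROM staging_orders s
    LEFT JOIN customers c ON c.id::text = s.customer_id
    WHERE c.id IS NULL;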

After that, moving clean rows into the real table feels much safer. A GUI tool can help when the file needs column mapping, a preview, or quick fixes before import. I’ve tried this with DBeaver and dbForge Studio for PostgreSQL, but I still want validation SQL after the import: the tool can load the file, but it has no idea what “correct” means for the app.
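For that last step, moving rows out of staging is basically INSERT ... SELECT with explicit casts, filtered to rows that passed the checks. Again a sketch with made-up names, and it assumes orders.order_id has a unique constraint so a rerun doesn’t double-insert:

    INSERT INTO orders (order_id, customer_id, ordered_at, amount)
    SELECT order_id::bigint,
           customer_id::bigint,
           ordered_at::date,
           amount::numeric
    FROM staging_orders
    WHERE order_id    ~ '^\d+$'
      AND customer_id ~ '^\d+$'
      AND ordered_at  ~ '^\d{4}-\d{2}-\d{2}'
      AND (amount IS NULL OR amount ~ '^-?\d+(\.\d+)?$')
    ON CONFLICT (order_id) DO NOTHING;  -- reruns skip rows that already made it in

Comparing row counts between staging and the target afterwards tells me how many rows the filters rejected.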

What do you use for importing CSV data into PostgreSQL? Plain COPY, pgAdmin, DBeaver, scripts, ETL tools, or something else?
