
A pretty common AI workflow among developers looks like this:
upsert row -> api call -> process with ai -> update row
So we thought: what if you could run LLMs right in your database, with all the primitives and auth you already have?
We built Postgres LLM, which lets you run your LLM on row-level changes and store the results across columns.
You can write prompts that reference any column for context using template variables like “{column_name}”, and generate outputs that update multiple columns on any insert or update.
It’s pretty simple to get started.
First, you have to make sure your database has “http”, “hstore”, and optionally “pg_cron” extensions available. If you’re using Supabase, you’re good to go.
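If you want to check whether those extensions are available on your instance before running anything, a quick query against the catalog works:

```sql
-- Lists whichever of the required extensions your instance can install.
SELECT name, default_version
FROM pg_available_extensions
WHERE name IN ('http', 'hstore', 'pg_cron');
```

If a row is missing, you’ll need to install that extension at the host level first.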
From the GitHub repo, copy the init.sql file and run that on your PostgreSQL instance. This will install all the extensions, schemas, and functions you need automatically.
Make sure to update the API key in init.sql with your key from the Interfaze dashboard, or, if you’re using another LLM provider, replace both the base URL (pointing at a chat-completions API) and the API key.
Key note: the model you pick matters. Not all models are a good fit to sit close to your database, as you don’t want to store LLM bloat, hallucinated values, or inaccurate fields that end up used in production. Highly deterministic models that are strong at structured output are your best bet.
That’s it. It’s a one-time setup, and now you can use the “llm.call” function.
The best way to use llm.call is with Postgres triggers, so you can run tasks on insert or update of any row.
The call is fully async, so it won’t block or delay any of your inserts/updates.
llm.call('<prompt> {column_reference}', 'update_column_1', 'update_column_2')
The first parameter is the prompt. You can add context automatically by referencing any column in the same row with template variables like “{first_name}”.
The parameters from the second onward are the columns you want updated when the LLM call completes; you can update multiple columns in a single request. Make sure not to reference a column you’re updating in your prompt, as that creates an infinite loop.
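A minimal example of the shape, assuming a hypothetical users table with bio and summary columns:

```sql
-- Hypothetical: summarize each new user's bio into the summary column.
CREATE TRIGGER summarize_bio
AFTER INSERT ON users
FOR EACH ROW
WHEN (NEW.bio IS NOT NULL)
EXECUTE FUNCTION llm.call('Summarize this bio in one sentence: {bio}', 'summary');
```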
Here’s a common use case: Translating and sentiment analysis on user content
DROP TRIGGER IF EXISTS analyze_sentiment ON user_reviews;
CREATE TRIGGER analyze_sentiment
AFTER INSERT OR UPDATE OF review_text ON user_reviews
FOR EACH ROW
WHEN (NEW.review_text IS NOT NULL)
EXECUTE FUNCTION llm.call('Analyze the sentiment of this text and respond with only "positive", "negative", or "neutral". return value in lowercase. Also translate the text to english and if the text is already in english, keep it the same. Text: {review_text}','sentiment','translation');
This trigger fires whenever review_text is inserted or updated on the user_reviews table (and is not NULL). It then runs llm.call with a prompt that performs two tasks: analyze the sentiment of the text in the “{review_text}” column, and translate it into English. The outputs are stored in the “sentiment” and “translation” columns.
Now you can create as many tasks as you like: copy the trigger above, update the table and column names, and change the prompt, say to an OCR task that extracts all text from an image URL, or an enrichment task that finds every new user's LinkedIn.
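The OCR variant might look like this, as a sketch (the image_uploads table and its columns are hypothetical):

```sql
-- Hypothetical: extract text from an uploaded image URL into extracted_text.
DROP TRIGGER IF EXISTS extract_image_text ON image_uploads;
CREATE TRIGGER extract_image_text
AFTER INSERT OR UPDATE OF image_url ON image_uploads
FOR EACH ROW
WHEN (NEW.image_url IS NOT NULL)
EXECUTE FUNCTION llm.call('Extract all text from the image at this URL and return only the extracted text. Image: {image_url}', 'extracted_text');
```

Note this follows the same pattern as the sentiment trigger: the WHEN clause keeps the LLM call from firing on rows with nothing to process.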
You can read the full breakdown, with more examples, here: https://interfaze.ai/blog/run-llms-inside-postgres
Demo video: https://www.youtube.com/watch?v=bb6La0GZeBc