r/openrouter
Hi everyone,
I’m testing Whisper through OpenRouter, using openai/whisper-1 and openai/whisper-large-v3.
With the native OpenAI/Groq Whisper APIs, it’s possible to get subtitle-friendly outputs like SRT, VTT, or verbose_json with timestamps.
Through OpenRouter’s /api/v1/audio/transcriptions endpoint, I only seem to get plain text plus usage data:
{
  "text": "...",
  "usage": { ... }
}
I tried passing options like response_format and timestamp_granularities, but I still don’t get segments, word timestamps, SRT, or VTT.
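For reference, here's roughly what I'm sending (a Python sketch using `requests`; the parameter names mirror OpenAI's native Whisper API, and the filename/model choice are just what I tried — I can't confirm OpenRouter forwards these options to the model):

```python
import os

# Parameters I'm passing, copied from OpenAI's native transcription API.
# It's unclear whether OpenRouter honors anything beyond "model" and "file".
url = "https://openrouter.ai/api/v1/audio/transcriptions"
data = {
    "model": "openai/whisper-large-v3",
    "response_format": "verbose_json",       # also tried "srt" and "vtt"
    "timestamp_granularities[]": "segment",  # and "word"
}

api_key = os.environ.get("OPENROUTER_API_KEY")
if api_key:
    import requests  # third-party; only imported when actually sending

    with open("audio.mp3", "rb") as f:
        resp = requests.post(
            url,
            headers={"Authorization": f"Bearer {api_key}"},
            data=data,
            files={"file": ("audio.mp3", f, "audio/mpeg")},
        )
    # No matter which options I set, the body only ever contains
    # {"text": "...", "usage": {...}} — no segments or word timestamps.
    print(resp.json())
```

With the same `response_format` values against OpenAI's or Groq's own endpoints I get SRT/VTT/verbose_json back as expected, so the request shape itself seems fine.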
Has anyone found a way to get timestamped subtitle output through OpenRouter, or is it currently text-only?
Thanks!
u/Feloxor — 10 days ago