
You can now video chat with your AI agent, face and voice, in real time
Pika just released a video chat skill that works with any AI agent, and it's the first time I've seen this done in a way that actually feels like a conversation. Your agent gets a face and a voice, and it talks back in real time while keeping its full memory and personality.
And it's not limited to Pika's own agents: they released it as an open-source skill on GitHub, so if you're building with Claude Code or similar tools you can plug it in. Pair it with their Pika AI Self and the agent can actually do things during the conversation, not just talk but execute tasks while you're chatting with it face to face.
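For context on what "plug it in" means: a Claude Code skill is basically a folder containing a SKILL.md file with a short YAML header telling the agent when to use it. I haven't dug into Pika's repo, so the path, name, and description below are made up to show the shape, not what they actually ship:

```markdown
<!-- Hypothetical: .claude/skills/video-chat/SKILL.md -->
---
name: video-chat
description: Start a real-time face-and-voice video call with the user
---

(Instructions the agent reads when it invokes the skill go here.)
```

Claude Code discovers skills from that directory automatically, so "installing" it should be little more than dropping the folder in place.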
We've been interacting with AI through text boxes, and it's always felt like passing notes back and forth. This is more like actually sitting across from someone. The agent remembers your previous conversations, keeps its personality consistent, and adapts in real time to what you're saying. It's still in beta and you can tell from the demos that it's not perfect yet, but the foundation is clearly there.
I think this changes how non-technical people will eventually interact with AI agents. Not everyone wants to type prompts into a terminal, but most people would be comfortable video calling an assistant that walks them through something with a friendly face. That's a completely different product from what we've been building so far.
Check the video: https://x.com/pika_labs/status/2039804583862796345
Has anyone tried it yet? I'm curious how the latency feels in practice: 1.5 seconds sounds fine on paper, but conversation flow is brutal to get right.