Your avatar,
always speaking
3D VRM avatars with real-time lip sync powered by audio-driven viseme detection. Load any model, play any audio, and watch it come alive.
Capabilities
Everything your avatar needs
Real-time Lip Sync
Audio-driven viseme detection maps the 15 Oculus visemes to VRM blend shapes. Smooth interpolation, natural transitions, near-zero latency.
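The mapping step can be sketched in a few lines. The 15 viseme names below are the standard Oculus set, and the five mouth targets match VRM's expression presets; the grouping and the smoothing constant are illustrative assumptions, not Clawatar's exact implementation.

```typescript
// Sketch: collapse 15 Oculus visemes onto VRM's 5 mouth shapes, then
// exponentially smooth toward the target weights for natural transitions.
const OCULUS_VISEMES = [
  "sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS", "nn", "RR",
  "aa", "E", "ih", "oh", "ou",
] as const;

type Viseme = (typeof OCULUS_VISEMES)[number];
type VrmMouth = "aa" | "ee" | "ih" | "oh" | "ou";

// Illustrative grouping; "sil" (silence) maps to no mouth shape.
const VISEME_TO_VRM: Partial<Record<Viseme, VrmMouth>> = {
  PP: "ou", FF: "ih", TH: "ih", DD: "ee", kk: "ee", CH: "ih",
  SS: "ih", nn: "ee", RR: "aa", aa: "aa", E: "ee", ih: "ih",
  oh: "oh", ou: "ou",
};

function smoothStep(
  current: Record<VrmMouth, number>,
  visemeWeights: Partial<Record<Viseme, number>>,
  alpha = 0.5, // 0 = frozen, 1 = snap instantly to target
): Record<VrmMouth, number> {
  const target: Record<VrmMouth, number> = { aa: 0, ee: 0, ih: 0, oh: 0, ou: 0 };
  for (const [viseme, weight] of Object.entries(visemeWeights)) {
    const mouth = VISEME_TO_VRM[viseme as Viseme];
    if (mouth) target[mouth] = Math.max(target[mouth], weight ?? 0);
  }
  const next = { ...current };
  for (const mouth of Object.keys(target) as VrmMouth[]) {
    next[mouth] = current[mouth] + alpha * (target[mouth] - current[mouth]);
  }
  return next;
}
```

Each frame, the smoothed weights would be written to the avatar's expression manager, so the mouth glides between shapes instead of snapping.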
Any VRM Model
Drag and drop any VRM avatar onto the canvas. Ships with a CC0 default model. Your custom characters work out of the box.
OpenClaw Integration
Register as an OpenClaw channel. Your AI agent's voice responses drive the avatar's mouth in real time via WebSocket.
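One way this can look on the client side is a small frame parser fed by a WebSocket. The message shape, field names, and URL below are assumptions for illustration; OpenClaw's actual channel protocol may differ.

```typescript
// Hypothetical viseme frame pushed by an agent over a WebSocket channel.
interface VisemeFrame {
  t: number;                       // playback timestamp in seconds
  weights: Record<string, number>; // viseme name -> weight in [0, 1]
}

// Defensive parse: return null for anything that isn't a well-formed frame.
function parseVisemeFrame(raw: string): VisemeFrame | null {
  try {
    const msg = JSON.parse(raw);
    if (typeof msg.t !== "number") return null;
    if (typeof msg.weights !== "object" || msg.weights === null) return null;
    return { t: msg.t, weights: msg.weights };
  } catch {
    return null;
  }
}

// In the browser the frames would arrive like this (endpoint assumed):
// const ws = new WebSocket("ws://localhost:8080/clawatar");
// ws.onmessage = (ev) => {
//   const frame = parseVisemeFrame(ev.data);
//   if (frame) applyToAvatar(frame); // hypothetical helper
// };
```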
Three steps
How it works
Load Model
Drop a .vrm file onto the canvas or use the included CC0 default avatar. Any VRM 0.x or 1.0 model works.
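Supporting both generations is mostly a branching decision at load time: VRM 0.x stores its metadata under the glTF extension named "VRM", while VRM 1.0 uses "VRMC_vrm". A minimal version check, assuming you have the parsed glTF JSON in hand:

```typescript
type VrmVersion = "0.x" | "1.0" | null;

// Detect which VRM generation a glTF document declares, if any.
function detectVrmVersion(gltfJson: { extensions?: Record<string, unknown> }): VrmVersion {
  const ext = gltfJson.extensions ?? {};
  if ("VRMC_vrm" in ext) return "1.0"; // VRM 1.0 extension name
  if ("VRM" in ext) return "0.x";      // legacy VRM 0.x extension name
  return null;                          // plain glTF, not a VRM
}
```

In practice a loader plugin such as three-vrm handles both transparently; a check like this is mainly useful for validating drag-and-dropped files before loading.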
Play Audio
Upload an audio file, toggle your microphone, or connect an OpenClaw agent. Audio flows through HeadAudio.
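Whatever the source, the pipeline ends up analyzing chunks of samples. A minimal sketch of that kind of per-chunk analysis is root-mean-square energy, usable as a crude mouth-open signal; Clawatar's HeadAudio path presumably does more (per-viseme classification), so treat this as the simplest possible stand-in.

```typescript
// RMS energy of one audio chunk: 0 for silence, up to ~1 for full-scale audio.
function rmsEnergy(samples: Float32Array): number {
  if (samples.length === 0) return 0;
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }
  return Math.sqrt(sum / samples.length);
}
```

In the browser, the chunks would come from an AnalyserNode or AudioWorklet reading the file, microphone, or agent stream.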
Avatar Speaks
Viseme detection drives mouth shapes in real time. Auto-blink and idle sway animations bring it to life.
Quick start
Up and running in 30 seconds
Built with
Give your AI a face
Clawatar is free, open source, and ready to use. Start building avatar experiences today.
Get Started