Open Source · MIT Licensed

Your avatar,
always speaking

3D VRM avatars with real-time lip sync powered by audio-driven viseme detection. Load any model, play any audio, and watch it come alive.


Capabilities

Everything your avatar needs

Real-time Lip Sync

Audio-driven viseme detection maps 15 Oculus visemes to VRM blend shapes. Smooth interpolation, natural transitions, low latency.
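The interpolation step can be sketched in a few lines. This is an illustrative stand-in, not Clawatar's actual implementation: the function names and the smoothing factor are assumptions, and applying the resulting weights to a model would go through @pixiv/three-vrm's expression API.

```typescript
// The 15 visemes from the Oculus viseme reference set.
const OCULUS_VISEMES = [
  "sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
  "nn", "RR", "aa", "E", "ih", "oh", "ou",
] as const;

type VisemeWeights = Record<string, number>;

// Exponential smoothing: move each weight a fraction `alpha` toward its
// detected target every frame, avoiding abrupt mouth pops between visemes.
function smoothVisemes(
  current: VisemeWeights,
  target: VisemeWeights,
  alpha: number,
): VisemeWeights {
  const next: VisemeWeights = {};
  for (const v of OCULUS_VISEMES) {
    const c = current[v] ?? 0;
    const t = target[v] ?? 0;
    next[v] = c + (t - c) * alpha;
  }
  return next;
}
```

Each smoothed weight would then be written to the corresponding VRM blend shape once per render frame.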

Any VRM Model

Drag and drop any VRM avatar onto the canvas. Ships with a CC0 default model. Your custom characters work out of the box.
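A drop handler might validate a file before handing it to the loader. The helper names below are hypothetical; the checks themselves are standard, since a VRM file is a binary glTF (GLB) container and GLB files open with the 4-byte ASCII magic "glTF".

```typescript
// Cheap first check: file extension (case-insensitive).
function isVrmFileName(name: string): boolean {
  return name.toLowerCase().endsWith(".vrm");
}

// GLB containers begin with the ASCII bytes "glTF" (0x67 0x6C 0x54 0x46).
// Checking the magic catches renamed non-VRM files before a full parse.
function hasGlbMagic(bytes: Uint8Array): boolean {
  if (bytes.length < 4) return false;
  return (
    bytes[0] === 0x67 && bytes[1] === 0x6c &&
    bytes[2] === 0x54 && bytes[3] === 0x46
  );
}
```

A file passing both checks would then go to a loader such as @pixiv/three-vrm's `VRMLoaderPlugin` on top of three.js's `GLTFLoader`.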

OpenClaw Integration

Register as an OpenClaw channel. Your AI agent's voice responses drive the avatar's mouth in real time via WebSocket.
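Consuming channel messages might look like the sketch below. The message shape (a `type` field and a `weights` map) is an assumption made for illustration only; consult the OpenClaw docs for the actual wire protocol.

```typescript
// Hypothetical frame carrying viseme weights from the agent's voice output.
type VisemeFrame = { type: "viseme"; weights: Record<string, number> };

// Defensive parse: return null for anything that isn't a well-formed frame,
// so a malformed WebSocket message never reaches the animation loop.
function parseChannelMessage(raw: string): VisemeFrame | null {
  try {
    const msg = JSON.parse(raw);
    if (
      msg &&
      msg.type === "viseme" &&
      typeof msg.weights === "object" &&
      msg.weights !== null
    ) {
      return msg as VisemeFrame;
    }
    return null;
  } catch {
    return null;
  }
}
```

In the browser, each `ws.onmessage` payload would pass through a parser like this before its weights drive the mouth.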

Three steps

How it works

01

Load Model

Drop a .vrm file onto the canvas or use the included CC0 default avatar. Any VRM 0.x or 1.0 model works.
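Supporting both spec generations comes down to which glTF extension a model declares: VRM 0.x uses the legacy `VRM` extension, while VRM 1.0 uses `VRMC_vrm`. The branching helper below is an illustration of that distinction; in practice @pixiv/three-vrm handles both transparently.

```typescript
// Minimal view of a parsed glTF JSON chunk: just the declared extensions.
type GltfJson = { extensionsUsed?: string[] };

// VRM 0.x models declare the "VRM" extension; VRM 1.0 models declare
// "VRMC_vrm". Anything else is not a VRM avatar.
function detectVrmVersion(gltf: GltfJson): "0.x" | "1.0" | "unknown" {
  const exts = gltf.extensionsUsed ?? [];
  if (exts.includes("VRMC_vrm")) return "1.0";
  if (exts.includes("VRM")) return "0.x";
  return "unknown";
}
```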

02

Play Audio

Upload an audio file, toggle your microphone, or connect an OpenClaw agent. Audio flows through HeadAudio.
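HeadAudio's internals aren't documented here, but the kind of signal a viseme detector starts from can be shown with a stand-in: the RMS loudness of one audio frame. In the browser, the samples would come from an `AnalyserNode`'s `getFloatTimeDomainData()` after routing a file, microphone stream, or WebSocket audio into the Web Audio graph.

```typescript
// Root-mean-square amplitude of a frame of audio samples in [-1, 1].
// A detector can use this as a coarse "mouth openness" signal, with the
// frequency spectrum refining it into specific viseme shapes.
function rms(samples: Float32Array): number {
  if (samples.length === 0) return 0;
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }
  return Math.sqrt(sum / samples.length);
}
```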

03

Avatar Speaks

Viseme detection drives mouth shapes in real time. Auto-blink and idle sway animations bring it to life.
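An auto-blink can be sketched as a simple envelope evaluated each frame. The triangular shape and the 150 ms duration below are illustrative assumptions, not Clawatar's actual timing.

```typescript
// Eyelid-close weight (0 = open, 1 = fully closed) for a blink that started
// `elapsed` seconds ago. The triangular envelope closes the eyes over the
// first half of `duration` and reopens them over the second half.
function blinkWeight(elapsed: number, duration = 0.15): number {
  if (elapsed < 0 || elapsed >= duration) return 0;
  const half = duration / 2;
  return elapsed < half ? elapsed / half : (duration - elapsed) / half;
}
```

The render loop would feed `blinkWeight` into the model's blink blend shape, scheduling the next blink a few randomized seconds after the last one ends.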

Quick start

Up and running in 30 seconds

terminal
$ git clone https://github.com/oreasono/clawatar.git
$ cd clawatar
$ npm install
$ npm run dev
Open http://localhost:3000 — your avatar is waiting

Built with

Three.js · @pixiv/three-vrm · HeadAudio · Next.js · TypeScript · Tailwind CSS

Give your AI a face

Clawatar is free, open source, and ready to use. Start building avatar experiences today.

Get Started