vram.supply

Put underused local LLM hardware to work.

Serve models from your device. Get paid per token. Or use models hosted by anyone, anywhere.

Get started → Read the blog →