Chat with an LLM provider directly from Primeta using your own API key. No MCP client, no local gateway — Primeta runs the conversation against the provider and speaks the reply through your avatar.
## When to use this
- You don't have (or don't want to run) an external agent like OpenClaw
- You want a quick way to talk to GPT-5, Claude, or Grok with a persona on top
- You're paying the provider directly and don't want a markup layer
## Setup
Two steps, both in the Primeta UI.
### 1. Save a provider key
Settings → Connections → LLM Provider Keys → choose a provider, paste your API key, save.
You only need to do this once per provider. The key is stored encrypted; once saved, it's never displayed again. Delete & re-add to rotate.
Supported providers (V1):

- OpenAI — keys from platform.openai.com
- Anthropic — keys from console.anthropic.com
- xAI — keys from console.x.ai
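Conceptually, a saved key behaves like a write-only record: you can save it, Primeta can use it for calls, and rotation is delete plus re-add. A minimal sketch of that contract (names are illustrative, not Primeta's actual implementation, and the real store keeps ciphertext, not plaintext):

```python
class ProviderKeyStore:
    """Sketch of write-only key storage: save, use internally, delete.
    Illustrative only; Primeta stores keys encrypted server-side."""

    def __init__(self):
        self._keys: dict[str, str] = {}  # provider -> secret (imagine ciphertext)

    def save(self, provider: str, secret: str) -> None:
        self._keys[provider] = secret  # one saved key per provider

    def has_key(self, provider: str) -> bool:
        return provider in self._keys  # the UI can show that a key exists,
                                       # but never the key itself

    def delete(self, provider: str) -> None:
        self._keys.pop(provider, None)  # rotation = delete, then re-add

store = ProviderKeyStore()
store.save("openai", "sk-old")
store.delete("openai")          # rotate: remove the old key...
store.save("openai", "sk-new")  # ...then re-add with the new secret
```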
### 2. Create a connection from the dashboard
On the dashboard, click + Add LLM connection. Pick a name (shown on the conversation card), choose which saved key to use, and pick a model. The model dropdown is filtered to the chosen provider's catalog.
Click the new card on the dashboard to open the conversation. Type a message; Primeta calls the provider, streams the reply back, and the avatar speaks it with persona voice and emotion intact.
## How it works
```
You ──message──▶ Primeta ──▶ OpenAI / Anthropic / xAI
                    │
                    │ streamed reply
                    ▼
        Avatar speaks (TTS + emotion tags)
```
Unlike MCP and channel connections (where an external agent owns the conversation loop), Primeta itself runs the loop here. The persona's personality prompt is sent as the system message; conversation history travels with each request so the model has context.
Each provider's API key is scoped per-call so it never touches global config.
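Concretely, a single turn might be assembled like this. This is a sketch, not Primeta's actual code: the field names follow the common OpenAI-style chat format, and `persona_prompt`, `history`, and the model name are illustrative.

```python
def build_chat_request(persona_prompt: str, history: list[dict], user_msg: str,
                       model: str, api_key: str) -> tuple[dict, dict]:
    """Assemble one provider call: persona as the system message, the full
    conversation history for context, and the key attached per-call rather
    than read from global config."""
    messages = [{"role": "system", "content": persona_prompt}]
    messages += history  # prior turns travel with every request
    messages.append({"role": "user", "content": user_msg})
    payload = {"model": model, "messages": messages, "stream": True}
    headers = {"Authorization": f"Bearer {api_key}"}  # scoped to this request
    return payload, headers

payload, headers = build_chat_request(
    persona_prompt="You are a cheerful pirate.",
    history=[{"role": "user", "content": "Hi"},
             {"role": "assistant", "content": "Ahoy!"}],
    user_msg="What's new?",
    model="gpt-5",
    api_key="sk-example",
)
```

Because the key rides in the per-request headers, nothing about a connection leaks into shared configuration; two connections on different providers never see each other's credentials.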
## Cost and privacy
- Cost: you pay the provider directly. Primeta adds zero markup and does not meter usage.
- Privacy: messages flow Primeta → provider. Provider's data-handling policy applies (most have an opt-out for training; check their dashboards).
- Storage: conversation transcripts live in your Primeta account, same as MCP/channel chats.
## Comparison to MCP and channels
| | MCP session | Channel | Direct LLM |
|---|---|---|---|
| Where the agent runs | Outside Primeta | Outside Primeta | Inside Primeta |
| Who pays for inference | The MCP client's setup | The agent's setup | You, via your provider key |
| Type into Primeta composer | No (composer hidden) | Yes | Yes |
| Setup steps | Add Primeta MCP URL to your client | Install the channel plugin in your runtime | Save a key, name a connection |
| Best for | Pairing Primeta with an existing IDE/client (Claude Desktop, Cursor, Claude Code) | Self-hosted local agents (OpenClaw) | Chatting straight from Primeta with no other tools running |
See Sessions, connections, conversations for how connections relate to chat sessions in general.
## Switching models or rotating keys
- Different model on the same key: create a second connection on the dashboard, pick the same provider, give it a different name and model. No need to re-paste the key.
- Rotate a key: Settings → Connections → Remove the existing key (any connections that depended on it are also removed) → re-add with the new key → re-create the connections you want.
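The cascade in the rotation flow can be sketched as follows (illustrative names and model strings, not Primeta's schema):

```python
def remove_key(keys: dict, connections: list[dict], provider: str):
    """Deleting a provider key also drops every connection that depends on it."""
    keys.pop(provider, None)
    return keys, [c for c in connections if c["provider"] != provider]

keys = {"openai": "sk-old", "xai": "xk-1"}
conns = [{"name": "GPT chat", "provider": "openai", "model": "gpt-5"},
         {"name": "Grok chat", "provider": "xai", "model": "grok-4"}]

# Removing the OpenAI key takes the "GPT chat" connection with it;
# after re-adding a new key, that connection must be re-created.
keys, conns = remove_key(keys, conns, "openai")
```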
## Future providers
We're adding providers as they prove out. If there's one you want, ping us in Discord.
## Links
- Sessions, connections, conversations — how the core terms relate
- MCP sessions vs channels — the other two integration styles