Gravitee · Capability
Gravitee LLM Proxy Bridge
Routes Naftiko-side LLM calls through Gravitee's LLM Proxy (the Enterprise AI Agent Management module that fronts OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, Mistral, and Hugging Face). LLM calls made by Naftiko capabilities automatically pick up Gravitee's prompt-token tracking, prompt guard-rails, semantic caching, and PII-redaction policies, with no per-capability LLM-vendor wiring.
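Because the proxy exposes OpenAI-compatible routes, any OpenAI SDK can simply be pointed at the Gravitee gateway entrypoint. A minimal sketch in Python, assuming a hypothetical gateway base URL and a plan that authenticates with an API key (both are deployment-specific, not confirmed by this page):

```python
from openai import OpenAI

# Hypothetical values: the gateway entrypoint and auth scheme depend on how the
# LLM Proxy API is deployed and which plan the Naftiko capability subscribes to.
client = OpenAI(
    base_url="https://gateway.example.com/llm-proxy/v1",
    api_key="YOUR_GRAVITEE_API_KEY",
)

# The call shape is plain OpenAI; token tracking, guard-rails, semantic caching,
# and PII redaction are applied by the proxy, not by this client code.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize today's incident report."}],
)
print(resp.choices[0].message.content)
```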
What You Can Do
Method  Operation        Path
POST    Chat completion  /v1/chat/completions
POST    Completion       /v1/completions
POST    Embedding        /v1/embeddings
GET     List models      /v1/models
GET     Get token usage  /llm/usage
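The non-chat routes follow the same pattern. A sketch with plain HTTP, assuming the same hypothetical base URL, Gravitee's X-Gravitee-Api-Key header for plan auth, and an illustrative `period` query parameter on /llm/usage (check the API's actual parameters):

```python
import requests

BASE = "https://gateway.example.com/llm-proxy"      # hypothetical gateway entrypoint
HEADERS = {"X-Gravitee-Api-Key": "YOUR_API_KEY"}    # auth header depends on the plan

# POST /v1/embeddings: OpenAI-compatible request body.
emb = requests.post(
    f"{BASE}/v1/embeddings",
    headers=HEADERS,
    json={"model": "text-embedding-3-small", "input": "hello world"},
    timeout=30,
)
emb.raise_for_status()
print(len(emb.json()["data"][0]["embedding"]))

# GET /llm/usage: token-consumption stats tracked by the proxy.
# The "period" parameter is an assumption, not documented above.
usage = requests.get(f"{BASE}/llm/usage", headers=HEADERS, params={"period": "7d"}, timeout=30)
usage.raise_for_status()
print(usage.json())
```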
MCP Tools
chat-completion: Run an OpenAI-compatible chat completion through Gravitee LLM Proxy, with token tracking, guard-rails, and semantic caching applied.
completion: Run an OpenAI-compatible text completion through Gravitee LLM Proxy.
embedding (read-only): Compute embeddings through Gravitee LLM Proxy.
list-models (read-only): List the LLM models available through Gravitee LLM Proxy.
get-token-usage (read-only): Get token-consumption stats from Gravitee LLM Proxy, per model and per period.
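For clients that consume the bridge over MCP rather than raw HTTP, a tool invocation is a standard JSON-RPC tools/call request. A sketch of such a payload for the chat-completion tool; the argument names are assumptions, so read the tool's input schema from tools/list before relying on them:

```python
import json

# Illustrative MCP "tools/call" request for the chat-completion tool.
# The argument shape (model, messages) mirrors the OpenAI-compatible endpoint
# but is not confirmed by this page; inspect the published tool schema first.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "chat-completion",
        "arguments": {
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": "Draft a release note for v2.3."}],
        },
    },
}
print(json.dumps(request, indent=2))
```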