
vrraj-llm-adapter


Provider-agnostic Python adapter for LLM text generation and embeddings. Call OpenAI and Google Gemini using the same API while receiving a consistent, normalized response format.


Install

pip install vrraj-llm-adapter

Quick Example

Requires LLM provider API keys. See README for setup.

from llm_adapter import llm_adapter

resp = llm_adapter.create(
    model="openai:gpt-4o-mini",  # for Gemini, use e.g. "gemini:gemini-3-flash-preview"
    input="Explain quantum computing in simple terms.",
    max_output_tokens=300,
)

# Normalize to stable app-facing schema
result = llm_adapter.normalize_adapter_response(resp)

print(result["text"])
print(result["usage"])
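The value of normalization is that application code reads one stable shape no matter which provider answered. As a rough illustration of the idea only (not the library's actual implementation, whose fields and logic may differ), a normalizer mapping provider-specific payloads onto a `{"text", "usage"}` schema could look like:

```python
# Hypothetical sketch of what a response normalizer does; the real
# llm_adapter.normalize_adapter_response may differ.

def normalize_response(provider: str, raw: dict) -> dict:
    """Map a provider-specific payload onto a stable {"text", "usage"} shape."""
    if provider == "openai":
        # OpenAI-style payloads nest text under choices -> message.
        text = raw["choices"][0]["message"]["content"]
        usage = {
            "input_tokens": raw["usage"]["prompt_tokens"],
            "output_tokens": raw["usage"]["completion_tokens"],
        }
    elif provider == "gemini":
        # Gemini-style payloads nest text under candidates -> content -> parts.
        text = raw["candidates"][0]["content"]["parts"][0]["text"]
        meta = raw["usageMetadata"]
        usage = {
            "input_tokens": meta["promptTokenCount"],
            "output_tokens": meta["candidatesTokenCount"],
        }
    else:
        raise ValueError(f"unknown provider: {provider}")
    return {"text": text, "usage": usage}
```

Because downstream code touches only `result["text"]` and `result["usage"]`, switching providers means changing the `model` string and nothing else.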

Detailed Documentation

Interactive Demo UI

The repository includes a FastAPI-powered interactive playground that lets developers experiment with models, registry configuration, and adapter behavior without writing code.

→ See setup instructions in the README: Development and Demo UI