# vrraj-llm-adapter

Provider-agnostic Python adapter for LLM text generation and embeddings. Call OpenAI and Google Gemini through the same API while receiving a consistent, normalized response format.
## Key Features

- **Unified API:** Switch between `openai` and `gemini` by changing a single string: the model identifier.
- **Stable Schemas:** Stop parsing different JSON structures; get consistent `LLMResult` objects every time.
- **Interactive Playground:** A built-in FastAPI dashboard to test model configurations, experiment with custom registries, and compare responses in real time.
- **Registry-Driven:** Manage model metadata, pricing, and routing through a centralized registry.
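To illustrate the single-string switch, here is a minimal sketch of how a provider-prefixed model identifier can be split into a provider key and a model name for registry lookup. `parse_model_id` is a hypothetical helper for illustration, not part of the library's API.

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider:model' identifier into (provider, model).

    Hypothetical sketch -- vrraj-llm-adapter does this internally;
    this helper is not part of its public API.
    """
    provider, sep, model = model_id.partition(":")
    if not sep or not provider or not model:
        raise ValueError(f"expected '<provider>:<model>', got {model_id!r}")
    return provider, model

print(parse_model_id("openai:gpt-4o-mini"))  # ('openai', 'gpt-4o-mini')
```

Because the provider is just a prefix, routing to a different backend is a one-string change: `"gemini:<model>"` instead of `"openai:<model>"`.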
## Install

```bash
pip install vrraj-llm-adapter
```
## Quick Example

Requires LLM provider API keys. See the README for setup.
```python
from llm_adapter import llm_adapter

resp = llm_adapter.create(
    model="openai:gpt-4o-mini",  # for Gemini, use a "gemini:<model>" identifier
    input="Explain quantum computing in simple terms.",
    max_output_tokens=300,
)

# Normalize to a stable app-facing schema
result = llm_adapter.normalize_adapter_response(resp)
print(result["text"])
print(result["usage"])
```
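Since the normalized schema exposes at least `text` and `usage` (the keys used above), downstream code can consume it defensively. The snippet below is a sketch under that assumption; the `total_tokens` field inside `usage` is illustrative, not a documented part of the schema.

```python
def summarize_result(result: dict) -> str:
    """Summarize a normalized adapter result defensively.

    Assumes only the "text" and "usage" keys from the Quick Example;
    "total_tokens" is an illustrative guess at the usage payload.
    """
    text = result.get("text") or ""
    usage = result.get("usage") or {}
    total = usage.get("total_tokens", "unknown")
    return f"{len(text)} chars, {total} total tokens"

# Stub standing in for normalize_adapter_response(resp):
stub = {"text": "Qubits hold superpositions.", "usage": {"total_tokens": 42}}
print(summarize_result(stub))  # 27 chars, 42 total tokens
```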
## Links

### Detailed Documentation

- **API Reference** - Complete API documentation and usage examples
- **Model Registry Guide** - Model configuration, reasoning policies, and the extensible custom registry
- **Development Guide** - Contributing, development setup, and the demo UI
- **Story on Medium** - Beyond the API: A Practical Registry-Driven Adapter for OpenAI and Gemini
## Interactive Demo UI

The repository includes a FastAPI-powered interactive playground for testing. This allows developers to experiment with models, registry configuration, and adapter behavior without writing code.

→ See setup instructions in the README: Development and Demo UI