# SimplerLLM
[License: MIT](https://opensource.org/licenses/MIT)
[Discord](https://discord.gg/HUrtZXyp3j)
**Your Easy Pass to Advanced AI** - A comprehensive Python library for simplified Large Language Model interactions.
## Overview
SimplerLLM is an open-source Python library designed to simplify interactions with Large Language Models (LLMs) for researchers, developers, and AI enthusiasts. It provides a unified interface for multiple LLM providers, robust tools for content processing, and advanced features like reliable failover systems and intelligent routing.
[📚 Full Documentation](https://docs.simplerllm.com/)
## Installation
```bash
pip install simplerllm
```
### Optional Dependencies
For voice/audio features (AudioPlayer file playback):
```bash
pip install "simplerllm[voice]"
# Or install pygame directly:
pip install "pygame>=2.5.0"
```
## Key Features
### 🔗 Unified LLM Interface
- **Multiple LLM Providers**: OpenAI, Anthropic, Google Gemini, Cohere, OpenRouter, DeepSeek, and Ollama
- **Consistent API**: Same interface across all providers
- **100+ Models**: Access to diverse models through OpenRouter integration
- **Async Support**: Full asynchronous capabilities
### 🛡️ Reliability & Failover
- **Reliable LLM**: Automatic failover between primary and secondary providers
- **Retry Logic**: Built-in exponential backoff for failed requests
- **Validation**: Automatic provider validation during initialization
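The failover-plus-backoff pattern described above can be sketched in a few lines of plain Python. This is an illustrative sketch of the concept, not SimplerLLM's internal implementation; the helper names (`with_retries`, `generate_with_failover`) are hypothetical:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(); on failure, retry with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # retries exhausted, surface the error
            time.sleep(base_delay * (2 ** attempt))

def generate_with_failover(primary, secondary, prompt, base_delay=1.0):
    """Try the primary LLM first; if all its retries fail, fall back to the secondary."""
    try:
        return with_retries(lambda: primary.generate_response(prompt=prompt),
                            base_delay=base_delay)
    except Exception:
        return with_retries(lambda: secondary.generate_response(prompt=prompt),
                            base_delay=base_delay)
```

With `ReliableLLM` (shown in the Quick Start below) you get this behavior without writing any of it yourself.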
### 🎯 Structured Output
- **Pydantic Integration**: Generate validated JSON responses
- **Type Safety**: Automatic validation and parsing
- **Retry Logic**: Automatic retry on validation failures
### 🔍 Vector Operations
- **Multiple Providers**: OpenAI, Voyage AI, and Cohere embeddings
- **Local & Cloud Storage**: Local vector database and Qdrant integration
- **Semantic Search**: Advanced similarity search capabilities
### 🧠 Intelligent Routing
- **LLM Router**: AI-powered content routing and selection
- **Metadata Filtering**: Route based on content metadata
- **Confidence Scoring**: Intelligent selection with confidence metrics
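The router's exact API is not shown in this README, but the idea of confidence-scored routing can be illustrated with a toy keyword-overlap scorer in plain Python (the `route` function and the route table are entirely hypothetical; SimplerLLM's router uses an LLM rather than keyword matching):

```python
def route(query, routes):
    """Pick the route whose keywords best overlap the query; return (name, confidence)."""
    words = set(query.lower().split())
    scores = {
        name: len(words & set(keywords)) / len(keywords)
        for name, keywords in routes.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

routes = {
    "billing": ["invoice", "payment", "refund", "charge"],
    "support": ["error", "bug", "crash", "broken"],
}

name, confidence = route("I need a refund for a duplicate charge", routes)
# name is "billing" with a confidence score between 0 and 1
```

An LLM-backed router replaces the overlap score with a model judgment but returns the same shape of answer: a selected destination plus a confidence metric you can threshold on.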
### 🛠️ Advanced Tools
- **Content Loading**: PDF, DOCX, web pages, and more
- **Text Chunking**: Semantic, sentence, and paragraph-based chunking
- **Search Integration**: Serper and Value Serp APIs
- **Prompt Templates**: Dynamic prompt generation and management
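To make the chunking idea concrete, here is a minimal sentence-based chunker in plain Python. It is a sketch of the technique, not SimplerLLM's chunking API (`chunk_by_sentences` and `max_chars` are names chosen for illustration):

```python
import re

def chunk_by_sentences(text, max_chars=200):
    """Greedily pack whole sentences into chunks no longer than max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would overflow the limit
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip() if current else sentence
    if current:
        chunks.append(current)
    return chunks
```

Semantic chunking refines this by splitting where the embedding similarity between neighboring sentences drops, instead of at a fixed character budget.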
## Quick Start
### Basic LLM Usage
```python
from SimplerLLM.language.llm import LLM, LLMProvider
# Create LLM instance
llm = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-4o")
# Generate response
response = llm.generate_response(prompt="Explain quantum computing in simple terms")
print(response)
```
### All Supported Providers
```python
from SimplerLLM.language.llm import LLM, LLMProvider
# OpenAI
openai_llm = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-4o")
# Anthropic Claude
anthropic_llm = LLM.create(provider=LLMProvider.ANTHROPIC, model_name="claude-3-5-sonnet-20241022")
# Google Gemini
gemini_llm = LLM.create(provider=LLMProvider.GEMINI, model_name="gemini-1.5-pro")
# Cohere
cohere_llm = LLM.create(provider=LLMProvider.COHERE, model_name="command-r-plus")
# OpenRouter (Access to 100+ models)
openrouter_llm = LLM.create(provider=LLMProvider.OPENROUTER, model_name="openai/gpt-4o")
# DeepSeek
deepseek_llm = LLM.create(provider=LLMProvider.DEEPSEEK, model_name="deepseek-chat")
# Ollama (Local models)
ollama_llm = LLM.create(provider=LLMProvider.OLLAMA, model_name="llama2")
```
### Reliable LLM with Failover
```python
from SimplerLLM.language.llm import LLM, LLMProvider
from SimplerLLM.language.llm.reliable import ReliableLLM
# Create primary and secondary LLMs
primary_llm = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-4o")
secondary_llm = LLM.create(provider=LLMProvider.ANTHROPIC, model_name="claude-3-5-sonnet-20241022")
# Create reliable LLM with automatic failover
reliable_llm = ReliableLLM(primary_llm, secondary_llm)
# If primary fails, automatically uses secondary
response = reliable_llm.generate_response(prompt="Explain machine learning")
print(response)
```

### Structured JSON Output with Pydantic
```python
from pydantic import BaseModel, Field
from SimplerLLM.language.llm import LLM, LLMProvider
from SimplerLLM.language.llm_addons import generate_pydantic_json_model
class MovieRecommendation(BaseModel):
    title: str = Field(description="Movie title")
    genre: str = Field(description="Movie genre")
    year: int = Field(description="Release year")
    rating: float = Field(description="IMDb rating")
    reason: str = Field(description="Why this movie is recommended")

llm = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-4o")
prompt = "Recommend a great science fiction movie from the 2020s"

recommendation = generate_pydantic_json_model(
    llm_instance=llm,
    prompt=prompt,
    model_class=MovieRecommendation
)
print(f"Title: {recommendation.title}")
print(f"Genre: {recommendation.genre}")
print(f"Year: {recommendation.year}")
print(f"Rating: {recommendation.rating}")
print(f"Reason: {recommendation.reason}")
```
### Reliable JSON Generation
```python
from SimplerLLM.language.llm.reliable import ReliableLLM
from SimplerLLM.language.llm_addons import generate_pydantic_json_model_reliable
# Use reliable LLM with JSON generation
reliable_llm = ReliableLLM(primary_llm, secondary_llm)
recommendation, provider, model_name = generate_pydantic_json_model_reliable(
    reliable_llm=reliable_llm,
    prompt=prompt,
    model_class=MovieRecommendation
)
print(f"Generated by: {provider.name} using {model_name}")
print(f"Title: {recommendation.title}")
```
## Embeddings
### All Embedding Providers
```python
from SimplerLLM.language.embeddings import EmbeddingsLLM, EmbeddingsProvider
# OpenAI Embeddings
openai_embeddings = EmbeddingsLLM.create(
    provider=EmbeddingsProvider.OPENAI,
    model_name="text-embedding-3-large"
)

# Voyage AI Embeddings
voyage_embeddings = EmbeddingsLLM.create(
    provider=EmbeddingsProvider.VOYAGE,
    model_name="voyage-3-large"
)

# Cohere Embeddings
cohere_embeddings = EmbeddingsLLM.create(
    provider=EmbeddingsProvider.COHERE,
    model_name="embed-english-v3.0"
)
# Generate embeddings
text = "SimplerLLM makes AI development easier"
embeddings = openai_embeddings.generate_embeddings(text)
print(f"Embedding dimensions: {len(embeddings)}")
```
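The semantic search mentioned above typically compares embedding vectors by cosine similarity. SimplerLLM's vector database handles this for you, but the underlying measure is simple enough to show in pure Python:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Two texts with similar meaning produce embeddings whose cosine similarity is close to 1; unrelated texts score near 0.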
### Advanced Embedding Features
```python
# Voyage AI with advanced options
voyage_embeddings = EmbeddingsLLM.create(
    provider=EmbeddingsProvider.VOYAGE,
    model_name="voyage-3-large"
)

# Optimize for search queries vs documents
query_embeddings = voyage_embeddings.generate_embeddings(
    user_input="What is machine learning?",
    input_type="query",
    output_dimension=1024
)

document_embeddings = voyage_embeddings.generate_embeddings(
    user_input="Machine learning is a subset of artificial intelligence...",
    input_type="document",
    output_dimension=1024
)

# Cohere with different input types
cohere_embeddings = EmbeddingsLLM.create(
    provider=EmbeddingsProvider.COHERE,
    model_name="embed-english-v3.0"
)

classification_embeddings = cohere_embeddings.generate_embeddings(
    user_input="This is a positive review",
    input_type="classification"
)
```
## Vector Databases
### Local Vector Database
```python
from SimplerLLM.vectors.vector_db import VectorDB, VectorProvider
from SimplerLLM.language.embeddings import EmbeddingsLLM, EmbeddingsProvider
# Create embeddings model
embeddings_model = EmbeddingsLLM.create(
    provider=EmbeddingsProvider.OPENAI,
    model_name="text-embedding-3-small"
)

# Create local vector database
vector_db = VectorDB.create(provider=VectorProvider.LOCAL)

# Add documents
documents = [
    "SimplerLLM is a Python library for LLM interactions",
    "Vector databases store high-dimensional embeddings",
    "Sem
[truncated…]
```