Universal LLM Playground

Connect to any LLM provider with a single interface

Easy Setup for Any Provider

Local Models

Run models locally with Ollama, LM Studio, or directly with transformers.

1. Install Requirements

# For Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama pull mistral

# For Python transformers
pip install torch transformers sentencepiece
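
To verify a local transformers install before wiring it into the playground, here is a minimal smoke test using the transformers pipeline API (the checkpoint matches the Mistral model used below; any instruct model works):

from transformers import pipeline

# Quick smoke test for a local transformers setup
# (7B checkpoints need substantial RAM/VRAM; swap in a smaller
# model if you only want to test the wiring)
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.1")
out = generator("Explain quantum computing in one sentence.", max_new_tokens=64)
print(out[0]["generated_text"])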

2. Configuration

local_model = {
    "type": "ollama",  # or "transformers"
    "base_url": "http://localhost:11434",
    "model": "mistral",
    "temperature": 0.7
}
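
The "ollama" type wraps Ollama's local REST API. For reference, a minimal sketch of the equivalent raw request with requests, using the base_url and model from the config above (endpoint and payload follow Ollama's documented /api/generate interface):

import requests

# Direct call to Ollama's local /api/generate endpoint
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",
        "prompt": "Explain quantum computing in one sentence.",
        "stream": False,  # return one JSON object instead of a token stream
        "options": {"temperature": 0.7},
    },
)
print(resp.json()["response"])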

Cloud Providers

Connect to OpenAI, Anthropic, Groq, or any API-compatible service.

1. Get API Keys

Create accounts and generate API keys from each provider's dashboard (OpenAI, Anthropic, Groq, etc.).

2. Configuration

openai_config = {
    "type": "openai",
    "api_key": "sk-your-key-here",
    "model": "gpt-4-turbo",
    "temperature": 0.7
}

anthropic_config = {
    "type": "anthropic",
    "api_key": "sk-your-key-here",
    "model": "claude-3-opus-20240229"
}
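
Each config maps onto the provider's HTTP API. As a reference point, a minimal sketch of the raw call that an "openai"-type config translates to (endpoint and fields per OpenAI's chat completions API; the key is read from the environment rather than hardcoded):

import os
import requests

# Raw OpenAI chat completions request equivalent to openai_config above
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4-turbo",
        "messages": [{"role": "user", "content": "Explain quantum computing"}],
        "temperature": 0.7,
    },
)
print(resp.json()["choices"][0]["message"]["content"])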

Hugging Face

Use the free Inference API, or Pro endpoints for private models.

1. Get Access

Create a Hugging Face account and generate an access token (it starts with hf_) under Settings → Access Tokens.

2. Configuration

hf_config = {
    "type": "huggingface",
    "api_key": "hf_your_key_here",
    "model": "mistralai/Mistral-7B-Instruct-v0.1",
    "endpoint": "https://api-inference.huggingface.co/models/",
    "pro_endpoint": False  # set True for Pro
}
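
For reference, a minimal sketch of the raw hosted Inference API request behind this config (the URL is the endpoint field above plus the model id; the payload follows Hugging Face's "inputs" convention):

import os
import requests

# Raw call to the hosted Inference API for the configured model
model = "mistralai/Mistral-7B-Instruct-v0.1"
resp = requests.post(
    f"https://api-inference.huggingface.co/models/{model}",
    headers={"Authorization": f"Bearer {os.environ['HF_API_KEY']}"},
    json={"inputs": "Explain quantum computing in one sentence."},
)
print(resp.json())  # typically a list like [{"generated_text": "..."}]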

Image Generation

Connect to Stable Diffusion, DALL-E, or other image generation models.

1. Setup Options

  • Local: Stable Diffusion with diffusers
  • Cloud: DALL-E, Midjourney API, etc.

2. Configuration

image_config = {
    "type": "dalle",  # or "stable_diffusion"
    "api_key": "sk-your-key-here",  # for DALL-E
    "model": "dall-e-3",  # or "stabilityai/stable-diffusion-xl-base-1.0"
    "size": "1024x1024"
}
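
For the local option, a minimal sketch with the diffusers library (assumes pip install diffusers accelerate and a CUDA GPU; the checkpoint matches the SDXL model named above):

import torch
from diffusers import AutoPipelineForText2Image

# Load SDXL locally and generate one image
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
image = pipe(prompt="A futuristic cityscape at sunset").images[0]
image.save("cityscape.png")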

🚀 Quick Start

Copy this starter code to begin using any provider:

# Install required packages
pip install requests python-dotenv

# In your .env file:
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
HF_API_KEY=your_key_here

# Basic usage example
from dotenv import load_dotenv
from llm_connector import LLMConnector

load_dotenv()  # pick up API keys from .env

connector = LLMConnector(provider="openai")
response = connector.generate("Explain quantum computing")
print(response)

Interactive Playground

The hosted playground lets you pick a provider (OpenAI GPT-4, Anthropic Claude 3, Groq Mixtral, Hugging Face Mistral, local Ollama, local Transformers/PyTorch, DALL·E for images, or local Stable Diffusion), slide the temperature from precise to creative, and compare responses with live token counts and timing.

API Usage Examples

Basic Text Generation

from llm_connector import LLMConnector

# Initialize with your preferred provider
llm = LLMConnector(provider="openai", api_key="your_key")

# Simple generation
response = llm.generate(
    prompt="Explain quantum computing",
    model="gpt-4-turbo",
    temperature=0.7,
    max_tokens=1000
)

print(response)

Image Generation

from llm_connector import LLMConnector

# Initialize image generator
image_gen = LLMConnector(provider="dalle", api_key="your_key")

# Generate image
image_url = image_gen.generate_image(
    prompt="A futuristic cityscape at sunset",
    model="dall-e-3",
    size="1024x1024",
    quality="hd"
)

print(f"Image generated at: {image_url}")

Chat Completion

from llm_connector import LLMConnector

# Initialize chat
chat = LLMConnector(provider="anthropic", api_key="your_key")

# Start conversation
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather today?"}
]

# Get response
response = chat.chat_complete(
    messages=messages,
    model="claude-3-opus-20240229",
    temperature=0.5
)

print(response)

Local Model Setup

from llm_connector import LLMConnector

# Initialize local Ollama model
local_llm = LLMConnector(
    provider="ollama",
    base_url="http://localhost:11434",
    model="mistral"
)

# Generate text
response = local_llm.generate(
    prompt="Explain the theory of relativity",
    temperature=0.7
)

print(response)

Key Features

Provider Agnostic

Switch between different LLM providers with a single line of code. No need to rewrite your application when changing models.
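
For example, moving the quick-start code from OpenAI to Anthropic changes only the constructor arguments:

from llm_connector import LLMConnector

# Same interface, different backend: only the constructor changes
llm = LLMConnector(provider="anthropic", model="claude-3-opus-20240229")
response = llm.generate("Explain quantum computing")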

Unified Interface

Consistent API for text generation, chat completion, and image generation across all providers.

Secure Configuration

Environment variable support for API keys and sensitive configuration. Never hardcode credentials.
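
For instance, keys can be read from the environment (loaded from .env via python-dotenv, as in the quick start) instead of being written into source:

import os
from dotenv import load_dotenv
from llm_connector import LLMConnector

load_dotenv()  # reads OPENAI_API_KEY and friends from .env
llm = LLMConnector(provider="openai", api_key=os.getenv("OPENAI_API_KEY"))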

Performance Metrics

Track response times, token usage, and costs across different providers and models.
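
A minimal sketch of measuring per-call latency by hand, useful for cross-checking the built-in numbers:

import time
from llm_connector import LLMConnector

llm = LLMConnector(provider="openai")
start = time.perf_counter()
response = llm.generate("Explain quantum computing")
print(f"latency: {time.perf_counter() - start:.2f}s")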

Fallback Handling

Automatic fallback to alternative models when the primary model is unavailable or rate-limited.
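
A minimal sketch of what such a fallback loop can look like on top of the connector (the bare except-Exception shape is illustrative; real code would catch provider-specific rate-limit and timeout errors):

from llm_connector import LLMConnector

# Try providers in order until one succeeds (illustrative error handling)
def generate_with_fallback(prompt, providers=("openai", "anthropic", "ollama")):
    last_error = None
    for provider in providers:
        try:
            return LLMConnector(provider=provider).generate(prompt)
        except Exception as exc:  # e.g. rate limit, timeout, outage
            last_error = exc
    raise RuntimeError("all providers failed") from last_error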

Extensible Design

Easy to add new providers or customize existing ones with a plugin architecture.
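
One common shape for a plugin architecture is a provider base class plus a registry. A hypothetical sketch (BaseProvider and register_provider are illustrative names, not the library's confirmed API):

from llm_connector import BaseProvider, register_provider  # hypothetical extension points

@register_provider("my_provider")
class MyProvider(BaseProvider):
    def generate(self, prompt, **kwargs):
        # call your custom backend here and return its text
        return "..."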
