LangTrain

Python SDK

Complete guide to using LangTrain's Python SDK for model training and deployment.

Key Features

🐍 Pythonic API

Simple, intuitive Python interface for all LangTrain features

🔧 Type Safety

Full type hints and IDE support for better development experience

📦 Easy Installation

Install with pip and get started immediately

🚀 Production Ready

Built for scale with async support and error handling

Installation

Install the LangTrain Python SDK using pip.

**Requirements:** Python 3.8 or higher. The SDK includes all necessary dependencies for model training and inference.
Code Example
pip install langtrain-ai

# Or install with optional dependencies
pip install langtrain-ai[gpu]  # For GPU support
pip install langtrain-ai[dev]  # For development tools
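
Because the SDK requires Python 3.8 or higher, it can be worth verifying the interpreter version before installing. This check uses only the standard library and is independent of the SDK itself:

```python
import sys

# Minimum interpreter version required by the LangTrain SDK.
MIN_VERSION = (3, 8)

def version_ok(version_info=sys.version_info):
    """Return True if the interpreter meets the minimum requirement."""
    return tuple(version_info[:2]) >= MIN_VERSION

if not version_ok():
    raise RuntimeError("langtrain-ai requires Python 3.8 or higher")
```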

Quick Start

Get started with LangTrain in just a few lines of Python code.

**Authentication:** Use your API key from the dashboard.
Code Example
import langtrain

# Initialize client
client = langtrain.Client(api_key="your-api-key")

# Start a fine-tuning job
job = client.fine_tune.create(
    model="llama-2-7b",
    dataset="your-dataset-id",
    config={
        "learning_rate": 2e-5,
        "batch_size": 4,
        "epochs": 3
    }
)

print(f"Fine-tuning job started: {job.id}")

Fine-tuning Models

Fine-tune models with custom datasets and configurations.

**Supported Models:** LLaMA, Mistral, CodeLlama, and more.

**LoRA Support:** Efficient fine-tuning with Low-Rank Adaptation.
Code Example
import time

# Upload dataset
dataset = client.datasets.upload(
    file_path="training_data.jsonl",
    name="my-dataset"
)

# Create fine-tuning job with LoRA
job = client.fine_tune.create(
    model="mistral-7b",
    dataset=dataset.id,
    config={
        "method": "lora",
        "rank": 16,
        "alpha": 32,
        "learning_rate": 1e-4,
        "max_steps": 1000
    }
)

# Monitor progress: refresh the job before each check so the
# loop always sees the latest status
job = client.fine_tune.get(job.id)
while job.status == "running":
    print(f"Progress: {job.progress}%")
    time.sleep(30)
    job = client.fine_tune.get(job.id)
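
The dataset uploaded above is a JSONL file (one JSON object per line). As an illustration only — the field names below are assumptions, so check the dataset schema LangTrain actually expects — such a file can be written with the standard library:

```python
import json

# Hypothetical prompt/completion records; the exact schema
# expected by LangTrain may differ.
examples = [
    {"prompt": "Translate to French: Hello", "completion": "Bonjour"},
    {"prompt": "Translate to French: Goodbye", "completion": "Au revoir"},
]

def write_jsonl(records, path):
    """Write one JSON object per line (the JSONL convention)."""
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

write_jsonl(examples, "training_data.jsonl")
```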

Model Inference

Use your fine-tuned models for inference.

**Streaming:** Support for real-time streaming responses.

**Batch Processing:** Efficient batch inference for large datasets.
Code Example
# Load fine-tuned model
model = client.models.get("your-model-id")

# Single inference
response = model.generate(
    prompt="What is the capital of France?",
    max_tokens=100,
    temperature=0.7
)

print(response.text)

# Streaming inference
for chunk in model.stream(prompt="Tell me a story"):
    print(chunk.text, end="", flush=True)
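
The batch processing mentioned above can be organized with a simple chunking helper, so prompts are submitted in fixed-size groups. The helper is a generic sketch, not part of the SDK, and the `model.generate` call in the usage note is illustrative:

```python
from typing import Iterable, Iterator, List

def chunked(items: Iterable[str], size: int) -> Iterator[List[str]]:
    """Yield successive lists of at most `size` items."""
    batch: List[str] = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch

# Usage sketch (assumes a `model` object as in the example above):
# for batch in chunked(prompts, 8):
#     responses = [model.generate(prompt=p, max_tokens=100) for p in batch]
```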

Error Handling

Robust error handling and retry mechanisms.

**Automatic Retries:** Built-in retry logic for transient failures.

**Custom Exceptions:** Specific exceptions for different error types.
Code Example
from langtrain.exceptions import (
    AuthenticationError,
    RateLimitError,
    ModelNotFoundError
)

try:
    job = client.fine_tune.create(...)
except AuthenticationError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after} seconds")
except ModelNotFoundError:
    print("Model not found")
except Exception as e:
    print(f"Unexpected error: {e}")
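
The SDK retries transient failures automatically; if you want an explicit retry loop of your own around a call such as `client.fine_tune.create`, an exponential-backoff wrapper might look like this. It is a generic sketch, not a LangTrain API:

```python
import time

def with_retries(fn, retries=3, base_delay=1.0, retryable=(Exception,)):
    """Call fn(), retrying up to `retries` times with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except retryable:
            if attempt == retries:
                raise  # out of attempts; surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Usage sketch with the SDK exceptions shown above:
# job = with_retries(lambda: client.fine_tune.create(...),
#                    retryable=(RateLimitError,))
```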