Documentation
Last updated: October 10, 2025

LLM Hyperparameter Optimization

Automated hyperparameter optimization for Large Language Models using advanced search algorithms and neural architecture search techniques.

LLM Hyperparameter Optimization Overview

Optimize critical hyperparameters for LLM training and fine-tuning using state-of-the-art optimization algorithms:

Critical LLM Hyperparameters:
- Learning Rate & Schedule: Peak LR, warmup steps, decay strategy (cosine, linear, polynomial)
- Batch Configuration: Global batch size, micro-batch size, gradient accumulation steps
- Optimizer Parameters: β₁, β₂, ε, weight decay, gradient clipping threshold
- Architecture Choices: Hidden dimensions, attention heads, intermediate size, number of layers
- Regularization: Dropout rates, attention dropout, activation dropout, label smoothing
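
The learning-rate schedule listed above (peak LR, warmup steps, cosine decay) can be sketched as a small function. This is an illustrative, library-free example; the `lr_at_step` helper is hypothetical and not part of the SDK:

```python
import math

def lr_at_step(step, peak_lr, warmup_steps, total_steps, min_lr=0.0):
    """Linear warmup to peak_lr, then cosine decay down to min_lr."""
    if step < warmup_steps:
        # Warmup: ramp linearly from peak_lr/warmup_steps up to peak_lr
        return peak_lr * (step + 1) / warmup_steps
    # Cosine decay over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

The warmup steps, peak LR, and decay horizon here are exactly the kind of interdependent values the auto-tuner searches over jointly.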

Advanced Optimization Targets:
- LoRA Parameters: Rank (r), alpha scaling, target modules, dropout rate
- Quantization Settings: Bit precision, calibration data, quantization schemes
- Memory Optimization: Gradient checkpointing intervals, ZeRO stage selection
- Data Pipeline: Sequence length, packing strategy, data mixing ratios
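
To see why LoRA rank is worth tuning, note that each adapted weight of shape (d_out, d_in) adds only r * (d_in + d_out) trainable parameters. A minimal sketch (the `lora_param_count` helper is illustrative, not an SDK function):

```python
def lora_param_count(module_shapes, r):
    """Extra trainable parameters from rank-r LoRA adapters.

    Each adapted (d_out, d_in) weight gains two low-rank factors:
    A with shape (r, d_in) and B with shape (d_out, r).
    """
    return sum(r * (d_in + d_out) for (d_out, d_in) in module_shapes)

# Example: adapting q_proj and v_proj of a 4096-dim model at rank 8
shapes = [(4096, 4096), (4096, 4096)]
extra = lora_param_count(shapes, r=8)  # 131,072 parameters
```

Doubling the rank doubles this count linearly, which is why rank trades off directly against memory in the search space.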

Multi-Objective Optimization:
- Performance vs Efficiency: Balance model quality against training time/cost
- Accuracy vs Safety: Optimize for task performance while minimizing harmful outputs
- Perplexity vs Downstream Tasks: Joint optimization across multiple evaluation metrics
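
Multi-objective runs return a Pareto front rather than a single winner: a configuration survives only if no other configuration beats it on every objective. A minimal, library-free sketch of that filter (all objectives minimized):

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective
    and strictly better on at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

For example, with (training cost, error) pairs, (2, 6) is dropped because (2, 4) costs the same but errs less, while (1, 5), (2, 4), and (3, 3) all survive as distinct trade-offs.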

Advanced Search Algorithms

Modern hyperparameter optimization algorithms tailored for large-scale language model training:

Bayesian Optimization with Gaussian Processes:
- Acquisition Functions: Expected Improvement (EI), Upper Confidence Bound (UCB), Probability of Improvement
- Kernel Selection: RBF, Matérn kernels with automatic relevance determination
- Multi-fidelity Optimization: BOHB (Bayesian Optimization and Hyperband) for budget-aware search
- Transfer Learning: Leverage knowledge from previous optimization runs
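
The Expected Improvement acquisition function scores a candidate from the GP posterior mean and standard deviation at that point. A self-contained sketch for the maximization case, using only the standard library (not the SDK's internal implementation):

```python
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI at a candidate whose GP posterior is N(mu, sigma^2).

    best is the best observed objective value so far; xi trades off
    exploration (larger xi) against exploitation.
    """
    if sigma <= 0:
        # No posterior uncertainty: improvement is deterministic
        return max(0.0, mu - best - xi)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * normal_cdf(z) + sigma * normal_pdf(z)
```

The optimizer evaluates this score over candidate configurations and trains the one with the highest EI next.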

Population-Based Training (PBT):
- Evolutionary Strategy: Mutate and crossover hyperparameters of top performers
- Dynamic Resource Allocation: Reallocate compute from poor to promising configurations
- Online Hyperparameter Adaptation: Continuously adjust hyperparameters during training
- Truncation Selection: Periodically eliminate bottom percentile of population
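
One PBT exploit/explore step combines truncation selection with mutation: the bottom slice of the population copies a top performer's hyperparameters, then perturbs some of them. An illustrative sketch (the `pbt_step` function and its dict layout are hypothetical, not the SDK's API):

```python
import random

def pbt_step(population, mutation_rate=0.2, truncation=0.2, rng=random):
    """One exploit/explore step of Population-Based Training.

    population: list of dicts {'score': float, 'hparams': {name: value}}.
    The bottom `truncation` fraction copies a top performer's hparams
    (exploit), then each copied value is perturbed with probability
    mutation_rate (explore).
    """
    ranked = sorted(population, key=lambda m: m['score'], reverse=True)
    cut = max(1, int(len(ranked) * truncation))
    top, bottom = ranked[:cut], ranked[-cut:]
    for member in bottom:
        source = rng.choice(top)
        member['hparams'] = dict(source['hparams'])   # exploit
        for name, value in member['hparams'].items():
            if rng.random() < mutation_rate:          # explore
                member['hparams'][name] = value * rng.choice([0.8, 1.25])
    return ranked
```

Running this step every few epochs is what lets the schedule itself (e.g. the learning rate) change over the course of training.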

Multi-Armed Bandits & Successive Halving:
- Hyperband: Principled early stopping with successive halving
- ASHA (Asynchronous Successive Halving): Efficient parallel hyperparameter search
- BOHB: Combine Bayesian optimization with Hyperband for sample efficiency
- DEHB: Differential Evolution combined with Hyperband
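
The successive-halving core shared by these methods is compact: evaluate every surviving configuration at the current budget, keep the top 1/eta, multiply the budget by eta, and repeat. A synchronous sketch (library-free; ASHA relaxes the synchronization barrier between rungs):

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3, rounds=3):
    """Keep the top 1/eta configs per rung, growing the budget by eta.

    evaluate(config, budget) -> score, higher is better.
    """
    survivors, budget = list(configs), min_budget
    for _ in range(rounds):
        scored = sorted(survivors, key=lambda c: evaluate(c, budget), reverse=True)
        survivors = scored[:max(1, len(scored) // eta)]
        budget *= eta
    return survivors
```

With 27 starting configurations and eta=3, three rungs cut the field 27 → 9 → 3 → 1, so most compute is spent on the most promising candidates.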

Neural Architecture Search (NAS):
- DARTS: Differentiable architecture search for transformer components
- Progressive Search: Incrementally grow model complexity during search
- Hardware-Aware NAS: Optimize for specific accelerator architectures (TPU, GPU)
- Efficient Attention: Search optimal attention patterns and sparse attention mechanisms
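
DARTS makes the choice among candidate operations differentiable by replacing the discrete pick with a softmax-weighted mixture. A scalar-valued sketch of that mixed operation (illustrative only; real implementations mix tensor-valued ops):

```python
import math

def mixed_op(x, ops, alphas):
    """DARTS-style mixed operation on a scalar input.

    Softmax over the architecture parameters `alphas` weights the
    outputs of the candidate operations `ops`; gradients w.r.t. alphas
    then drive the architecture search.
    """
    m = max(alphas)                              # stabilize the softmax
    exps = [math.exp(a - m) for a in alphas]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * op(x) for w, op in zip(weights, ops))
```

After search converges, the operation with the largest alpha is kept and the rest are pruned, discretizing the architecture.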

Configuration Options

Customize auto-tuning behavior:

Search Space Definition:
- Define ranges for each hyperparameter
- Specify distributions (uniform, log-uniform, categorical)
- Set constraints and dependencies
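
As a concrete illustration of distributions and dependencies, the sketch below samples a log-uniform learning rate and a conditional hyperparameter that only exists for one optimizer choice. The `sample_config` helper is hypothetical and library-free, not the SDK's sampler:

```python
import random

def sample_config(rng=random):
    """Sample one configuration, honoring a dependency:
    momentum is only sampled when the chosen optimizer is 'sgd'."""
    lr = 10 ** rng.uniform(-6, -3)              # log-uniform over [1e-6, 1e-3]
    optimizer = rng.choice(['adam', 'adamw', 'sgd'])
    config = {'learning_rate': lr, 'optimizer': optimizer}
    if optimizer == 'sgd':                      # conditional hyperparameter
        config['momentum'] = rng.uniform(0.8, 0.99)
    return config
```

Sampling the exponent uniformly and exponentiating is what makes the learning-rate draw log-uniform, so each decade (1e-6 to 1e-5, 1e-5 to 1e-4, ...) is equally likely.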

Resource Allocation:
- Maximum training budget
- Number of parallel trials
- Early stopping criteria
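
A common early-stopping criterion at the search level: stop when no trial in the last `patience` trials has improved the best score by a minimum margin. An illustrative, library-free sketch:

```python
def should_stop(history, patience=10, min_improvement=1e-3):
    """Stop the search when the best score has not improved by at least
    min_improvement within the last `patience` trial scores."""
    if len(history) <= patience:
        return False
    best_before = max(history[:-patience])
    best_recent = max(history[-patience:])
    return best_recent < best_before + min_improvement
```

Checking this after every completed trial frees the remaining budget once the search has plateaued.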

Optimization Objectives:
- Single or multi-objective optimization
- Custom metric definitions
- Trade-offs between performance and efficiency

Best Practices

Maximize auto-tuning effectiveness:

Data Preparation:
- Ensure representative validation sets
- Handle data imbalance appropriately
- Use consistent evaluation metrics

Search Space Design:
- Start with reasonable ranges
- Include important hyperparameters
- Avoid overly large search spaces

Resource Management:
- Allocate sufficient compute budget
- Use early stopping for efficiency
- Monitor progress and adjust as needed

Code Examples

Basic Auto-tuning

python
import langtrain

# Create model with auto-tuning enabled
model = langtrain.Model.create(
    name="auto-tuned-classifier",
    architecture="bert-base-uncased",
    task="classification",
    auto_tune=True  # Enable auto-tuning
)

# Load your dataset
dataset = langtrain.Dataset.from_csv("data.csv")

# Start auto-tuning
tuner = langtrain.AutoTuner(
    model=model,
    dataset=dataset,
    max_trials=50,      # Number of configurations to try
    max_epochs=10,      # Maximum epochs per trial
    objective="f1_score" # Metric to optimize
)

# Run optimization
best_config = tuner.optimize()
print(f"Best configuration: {best_config}")
print(f"Best score: {tuner.best_score}")

Custom Search Space

python
# Define custom hyperparameter search space
search_space = {
    'learning_rate': langtrain.hp.loguniform(1e-6, 1e-3),
    'batch_size': langtrain.hp.choice([8, 16, 32, 64]),
    'dropout_rate': langtrain.hp.uniform(0.1, 0.5),
    'weight_decay': langtrain.hp.loguniform(1e-6, 1e-2),
    'warmup_ratio': langtrain.hp.uniform(0.0, 0.2),
    'optimizer': langtrain.hp.choice(['adam', 'adamw', 'sgd'])
}

# Configure auto-tuner with custom search space
tuner = langtrain.AutoTuner(
    model=model,
    dataset=dataset,
    search_space=search_space,
    algorithm="bayesian",  # Optimization algorithm
    max_trials=100,
    timeout=3600  # 1 hour timeout
)

# Run with early stopping
best_config = tuner.optimize(
    early_stopping_patience=10,
    min_improvement=0.001
)

Multi-objective Optimization

python
# Optimize for multiple objectives
objectives = {
    'accuracy': 'maximize',
    'inference_time': 'minimize',
    'model_size': 'minimize'
}

tuner = langtrain.MultiObjectiveTuner(
    model=model,
    dataset=dataset,
    objectives=objectives,
    max_trials=200
)

# Get Pareto-optimal solutions
pareto_solutions = tuner.optimize()

# Select best trade-off based on your priorities
best_config = tuner.select_best(
    weights={'accuracy': 0.7, 'inference_time': 0.2, 'model_size': 0.1}
)

Population-Based Training

python
# Use population-based training for dynamic optimization
pbt_config = langtrain.PBTConfig(
    population_size=20,      # Number of parallel training runs
    perturbation_interval=5, # Epochs between perturbations
    mutation_rate=0.2,       # Probability of parameter mutation
    truncation_percentage=0.2 # Bottom 20% get replaced
)

tuner = langtrain.PopulationBasedTuner(
    model=model,
    dataset=dataset,
    config=pbt_config,
    total_epochs=50
)

# This will train multiple models simultaneously
# and evolve their hyperparameters over time
results = tuner.train_population()

# Get the best performing model
best_model = results.best_model
best_hyperparams = results.best_hyperparams
