Langtrain STUDIO
Dashboard
Projects
Datasets
Models
Training
Analytics
System
Settings
Documentation
Pro Organization
pritesh@langtrain.xyz
Dashboard / Monitoring • System Healthy

Medical LLaMA Fine-Tuning
Run ID: #ft-29384 • Started 2h 15m ago

Base Model: Llama-3-8b
Dataset: PubMed-QA
Method: QLoRA (r=64)
Status: Training

Training Loss: 0.245 (-12%, Epoch 2/5)
Validation Loss: 0.312 (-5%, best: 0.310)
Throughput: 4.2k tokens/sec (+8%)

Loss Curve: [chart of training vs. validation loss]

Hardware: GPU util 92% • VRAM 18 GB • RAM 45 GB • Temp 34°C
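The run above fine-tunes with QLoRA at rank r=64. As a rough illustration of why low-rank adapters keep the trainable footprint small: for a weight matrix of shape (d_out, d_in), LoRA trains r * (d_out + d_in) parameters instead of d_out * d_in. A minimal sketch, where the 4096x4096 projection shape is an illustrative value for a Llama-3-8B attention layer, not something read from this run's config:

```python
def lora_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters LoRA adds to one weight matrix:
    a (d_out x r) matrix B plus an (r x d_in) matrix A."""
    return r * (d_out + d_in)

# Illustrative shape for one attention projection (hidden size 4096).
d_out, d_in, r = 4096, 4096, 64

full = d_out * d_in               # full fine-tuning: 16,777,216 params
lora = lora_params(d_out, d_in, r)  # LoRA rank 64:      524,288 params
print(f"LoRA trains {lora / full:.1%} of this matrix's parameters")
```

At rank 64 that is about 3% of the matrix's parameters, which is why the run fits in the 18 GB of VRAM shown above.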
Datasets / Knowledge Base • Vector DB Connected

Corporate RAG Corpus
14 Sources • 1,054 Vectors • BGE-M3 Embeddings

Total Documents: 14
Total Chunks: 1,054
Sync Status: Healthy (last synced 2 min ago)

Source Documents
File Name                 | Size    | Chunks | Status
company_handbook_2025.pdf | 2.4 MB  | 142    | Indexed
product_specs_v3.docx     | 840 KB  | 45     | Indexed
customer_support_logs.csv | 12.1 MB | 890    | Processing
api_documentation_full.md | 145 KB  | 28     | Indexed
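The corpus above splits 14 source documents into 1,054 chunks before embedding them. A minimal sketch of the usual preprocessing step, fixed-size chunking with overlap so that sentences straddling a boundary appear in both neighbors; the size and overlap values are arbitrary illustrative defaults, not Langtrain's actual settings:

```python
def chunk_text(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into windows of `size` characters, each window
    starting `size - overlap` characters after the previous one."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 1000
chunks = chunk_text(doc, size=400, overlap=50)
print(len(chunks), [len(c) for c in chunks])  # 3 [400, 400, 300]
```

Each chunk would then be embedded (here, with BGE-M3) and written to the vector store.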
Projects / Automated Support Agent

Workflow Nodes
  • Input Request: extracts intent and parameters from the user query.
  • LLM Router: classifies the request type and routes it to a specialized tool.
  • RAG Search Tool: queries the Knowledge Base for relevant policy documents.
  • Code Artifact: generates a Python snippet to resolve complex math.

Execution Trace
12:04:11 User input received: "What is our Q3 refund policy?"
12:04:12 Invoking LLMRouter (router-v2.1)...
12:04:13 Router Decision: RAG Search (Confidence: 0.98)
12:04:15 Executing tool: KnowledgeBase retrieval
12:04:16 Found 3 relevant chunks in 'policies_2025.pdf'
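In the trace above, the LLM Router returns a route plus a confidence (RAG Search, 0.98). In production that decision comes from a model; the keyword-scoring stand-in below only illustrates the routing contract. The route names echo the workflow nodes, but the scoring logic and keyword lists are invented for illustration:

```python
def route(query: str) -> tuple[str, float]:
    """Toy router: score each tool by keyword hits and return
    (tool_name, confidence). A real router would call an LLM."""
    keywords = {
        "rag_search": {"policy", "refund", "handbook", "docs"},
        "code_artifact": {"calculate", "compute", "sum", "average"},
    }
    words = set(query.lower().replace("?", "").split())
    scores = {tool: len(words & kws) for tool, kws in keywords.items()}
    total = sum(scores.values())
    if total == 0:
        return ("llm_fallback", 0.0)  # nothing matched; defer to the LLM
    best = max(scores, key=scores.get)
    return (best, scores[best] / total)

print(route("What is our Q3 refund policy?"))  # ('rag_search', 1.0)
```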
Evaluations / Test Suite 44 • 1,240 Examples

Customer Support Persona Check
Comparing Base Model vs. LoRA Fine-Tuned (v2.1)

Llama-3-8b (Base): Failed
User: I'm extremely angry my package is late. Where is it?
Base Model: Calm down. I can check your order if you provide the tracking number.
Empathy Score: 2/10

Llama-3-8b (FT v2.1): Passed
User: I'm extremely angry my package is late. Where is it?
FT Model: I sincerely apologize for the frustration this delay has caused you. That is definitely not the experience we want to provide. Please share your tracking number, and I will personally locate your package right away.
Empathy Score: 9/10

Test Case             | Metric                 | Win Rate
Angry Customer Handle | Politeness/Empathy     | 88.2%
Refund Policy Query   | Factual Accuracy (RAG) | 96.5%
Jailbreak Attempt     | Safety Guardrail Check | 99.8%

Judge Consensus: [pass-rate gauge]
Historical Trend: [chart of v1.2 → v2.1 improvements]
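Win rates like the 88.2% above are typically aggregated from per-example judge verdicts comparing the two models' responses. A minimal sketch of that aggregation; the verdict data is fabricated for illustration, and counting ties as half a win is one common pairwise-comparison convention, not necessarily the one this suite uses:

```python
from collections import Counter

def win_rate(verdicts: list[str]) -> float:
    """Fraction of pairwise comparisons the fine-tuned model wins;
    a tie counts as half a win (one common convention)."""
    counts = Counter(verdicts)
    n = len(verdicts)
    return (counts["ft"] + 0.5 * counts["tie"]) / n if n else 0.0

# Fabricated verdicts: which response the judge preferred per example.
verdicts = ["ft"] * 85 + ["base"] * 10 + ["tie"] * 5
print(f"{win_rate(verdicts):.1%}")  # 87.5%
```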
Analytics / Production Endpoints • Serving Live Traffic

API Key Usage & Traffic
Llama-3-8b (FT v2.1) Deployment

Throughput: 420 tok/s
Latency (P99): 84 ms
Requests / 24h: 2.1M
Error Rate: 0.05%

Live Request Volume: [chart]

Active API Keys
  • Production Proxy: sk_prod_**************9f2a
  • Staging Env: sk_test_**************b7c1
  • Local Dev (Alice): sk_dev_**************43da
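The 84 ms P99 figure above means 99% of requests complete within 84 ms. A minimal sketch of computing such a percentile from raw latency samples; the sample data is fabricated, and the nearest-rank method shown is one standard percentile definition (monitoring systems often interpolate instead):

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest value that is >= p% of samples."""
    if not samples:
        raise ValueError("need at least one sample")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

# Fabricated latencies (ms): mostly fast, with a slow tail.
latencies = [20.0] * 90 + [50.0] * 8 + [84.0] * 2
print(percentile(latencies, 99))  # 84.0, dominated by the slow tail
```

This is also why P99 is reported rather than the mean: the slow tail barely moves the average but defines the worst experience most users will see.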

Native Power. Local Control.

Direct hardware access for maximum training efficiency.

Apple Silicon Optimized

Built natively for M1/M2/M3 chips with Metal acceleration for blazingly fast local training.


CUDA Support

Full NVIDIA GPU orchestration for Windows and Linux. Leverage the full power of your hardware.

Air-Gapped Privacy

Train models without ever uploading your data to the cloud. Your data stays on your machine.

Visual Dataset Editor

Intuitive interface for cleaning, labeling, and managing your training datasets.

Powered by Apple Open-Source AI

First-Class Apple Ecosystem Support

Harness the power of Apple's open-source AI stack, optimized for Apple Silicon.

MLX

Apple ML Framework

Apple's NumPy-like array framework designed for efficient machine learning on Apple Silicon. Unified memory architecture means zero-copy operations.

View on GitHub

SHARP

2D-to-3D AI Model

Transform single photos into photorealistic 3D scenes in under a second. Uses 3D Gaussian Splatting for instant AR/VR content creation.

View on GitHub

OpenELM

On-Device LLM

Apple's efficient language models designed to run directly on-device. Privacy-first AI that never leaves your Mac.

View on Hugging Face

Langtrain Studio provides native support for training and deploying models using Apple's open-source AI frameworks.

Get Started with Apple AI

Built for Modern AI Engineers

Python SDK Integration

One-line integration with your existing Python workflows.

LoRA/QLoRA Support

Efficiently fine-tune models with minimal memory footprint.

Live Loss Tracking

Watch your model improve in real-time with detailed metrics.

Model Hub

Support for Llama 3, Mistral, and all Hugging Face models.

Studio Interface

Start building today.

Join thousands of developers using Langtrain Studio to push the boundaries of open-source AI.

Get Langtrain Studio
Langtrain

The fine-tuning platform for production LLMs. Built for builders who demand sovereignty.

GitHub • Hugging Face
All Systems Operational

Product

  • Fine-Tuning
  • Playground (New)
  • RLHF & Alignment
  • Guardrails
  • AI Agents
  • Model Hub
  • Pricing
  • Enterprise

Use Cases

  • Customer Support AI
  • Internal Code Assistants
  • Healthcare & HIPAA
  • Financial Services
  • Legal Document QA

Resources

  • Documentation
  • Quick Start
  • API Reference
  • Python SDK
  • Node SDK
  • Blog
  • Changelog
  • Status

Company

  • About Us
  • Careers
  • Contact
  • Community
  • Support

Legal

  • Terms of Service
  • Privacy Policy
  • Cookie Policy
  • Data Processing Agreement
© 2026 Langtrain AI Private Limited. All rights reserved.
Privacy • Terms • Made with ♥ in India
