LangTrain

Frequently Asked Questions

Find answers to common questions about LangTrain's features, pricing, and usage.

What models does LangTrain support?

LangTrain supports popular open-source models including LLaMA 2, Mistral, CodeLlama, Falcon, and MPT. You can also bring your own model checkpoints.

How long does fine-tuning take?

Fine-tuning time depends on model size, dataset size, and method. LoRA fine-tuning typically takes 30 minutes to 2 hours, while full fine-tuning can take several hours to days.

Can I use my own data?

Yes! You can upload your own datasets in JSONL, CSV, or Parquet format. We support instruction-following, chat, and completion formats.
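As a minimal sketch, an instruction-following dataset in JSONL format is one JSON object per line. The field names here ("instruction", "input", "output") are illustrative assumptions, not LangTrain's documented schema:

```python
import json

# Illustrative instruction-format records; field names are assumptions,
# not LangTrain's documented schema.
records = [
    {"instruction": "Summarize the text.",
     "input": "LangTrain fine-tunes open-source LLMs.",
     "output": "LangTrain tunes open LLMs."},
    {"instruction": "Translate to French.",
     "input": "Hello",
     "output": "Bonjour"},
]

# Write one JSON object per line (the JSONL convention)
with open("train.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Read it back, line by line
with open("train.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]

print(len(loaded))  # 2
```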

Is my data secure?

Absolutely. LangTrain is SOC 2 compliant and uses end-to-end encryption and enterprise-grade security controls. Your data is never used to train other models.

What's the difference between LoRA and full fine-tuning?

LoRA freezes the base model and trains small low-rank adapter matrices, making it faster and far more memory-efficient, though sometimes with slightly lower quality. Full fine-tuning updates all parameters for maximum performance but requires more compute and memory.
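Rough arithmetic shows why LoRA is so much cheaper. For a single d x d weight matrix, full fine-tuning updates every weight, while LoRA trains two low-rank factors of rank r. The numbers below are illustrative (a LLaMA-2-7B-sized projection), not LangTrain-specific:

```python
# Parameter count for one d x d weight matrix: full fine-tuning vs LoRA.
d = 4096    # hidden size (illustrative, roughly LLaMA-2-7B scale)
r = 8       # LoRA rank (a common default)

full_params = d * d       # full fine-tuning updates every weight
lora_params = 2 * d * r   # LoRA trains factors A (r x d) and B (d x r)

print(full_params)                 # 16777216
print(lora_params)                 # 65536
print(full_params // lora_params)  # 256x fewer trainable parameters
```

At rank 8, LoRA trains roughly 0.4% of the parameters of that matrix, which is why it fits on much smaller GPUs and finishes in a fraction of the time.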

Can I deploy to my own cloud?

Yes! You can export container images or deploy managed endpoints to AWS, GCP, or Azure, with full CI/CD integration.

Do you offer API access?

Yes, we provide REST APIs and Python/JavaScript SDKs for programmatic access to all features.
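A hypothetical sketch of calling a REST endpoint from Python is below. The base URL, endpoint path, and payload fields are placeholders for illustration, not LangTrain's documented API; the request is built but not sent, so the sketch runs offline:

```python
import json
import urllib.request

# Placeholder credentials and payload; the endpoint path and field
# names are assumptions, not LangTrain's documented API.
API_KEY = "your-api-key"
payload = {
    "model": "llama-2-7b",
    "dataset_id": "ds_123",
    "method": "lora",
}

req = urllib.request.Request(
    "https://api.langtrain.example/v1/fine-tunes",  # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(req) would send the request; it is omitted
# here so the example stays runnable without network access.
print(req.get_method(), req.full_url)
```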

What are the pricing options?

We offer flexible pricing based on compute usage, storage, and API calls. Contact us for enterprise pricing and volume discounts.

Still have questions?

Can't find what you're looking for? Get in touch with our support team.