
Fine-tuning:
Streamline from fine-tuning to serving

Optimize generative AI performance by customizing models 

Achieve your business goals more effectively by fine-tuning pre-trained models with your enterprise's data, optimizing performance and saving both time and resources.

Fine-tuning pre-trained models with your data

Fine-Tuning

Optimize pre-trained models with your enterprise data to achieve business-specific goals.

Optimized multi-GPU training

Parameter-Efficient
Fine-Tuning

Fine-tune models efficiently by updating only the relevant parameters. This preserves performance while saving computational resources, making it a top choice for model refinement.

Faster Training

Instead of updating all model parameters, this method updates only a small subset of the pre-trained model's parameters. Fewer updates mean less time and fewer resources, reducing costs.

Maintains Accuracy

Preserves the pre-trained model's valuable knowledge, enabling seamless adaptation to new tasks. Despite using fewer resources, parameter-efficient fine-tuning maintains accuracy levels.
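To make the savings concrete, here is a minimal, illustrative sketch (not Friendli's implementation) comparing trainable-parameter counts for full fine-tuning versus a LoRA-style low-rank adapter, one common parameter-efficient technique. The layer size and rank below are example values chosen for illustration.

```python
# Illustrative sketch: trainable-parameter counts for full fine-tuning
# vs. a LoRA-style low-rank adapter on a single weight matrix.

def full_finetune_params(d_in: int, d_out: int) -> int:
    # Full fine-tuning updates every weight in the d_in x d_out matrix.
    return d_in * d_out

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    # A LoRA-style adapter freezes the original matrix and trains two
    # small matrices, A (d_in x rank) and B (rank x d_out), whose
    # product forms the weight update.
    return d_in * rank + rank * d_out

# Example: one 4096 x 4096 projection matrix with a rank-8 adapter.
full = full_finetune_params(4096, 4096)
lora = lora_params(4096, 4096, rank=8)
print(f"Full: {full:,}  LoRA (r=8): {lora:,}  ratio: {full // lora}x")
# Full: 16,777,216  LoRA (r=8): 65,536  ratio: 256x
```

At rank 8, the adapter trains 256x fewer parameters for this layer, which is why fewer updates translate directly into faster, cheaper training.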

Effortlessly deploy your fine-tuned models

Fine-tune and deploy

Friendli Suite not only lets you easily fine-tune your models but also streamlines deployment. You can run your fine-tuned models in your own GPU environment with Friendli Container, or on Friendli Dedicated Endpoints with just a few clicks. This seamless process ensures high performance and cost-efficiency for your operations.


SUPPORTED MODELS

Fine-tune open-source LLMs
on Friendli Dedicated Endpoints

Try now

MIXTRAL 8X7B INSTRUCT V0.1

MISTRAL 7B INSTRUCT V0.2

MISTRAL 7B INSTRUCT V0.1

MISTRAL 7B INSTRUCT V0.3

GEMMA 2 9B IT

LLAMA 2 7B CHAT HF

LLAMA 2 13B CHAT HF

LLAMA 2 70B CHAT HF

META LLAMA 3 8B INSTRUCT

META LLAMA 3 70B INSTRUCT

META LLAMA 3.1 8B INSTRUCT

META LLAMA 3.1 70B INSTRUCT

QWEN1.5 7B CHAT

HOW TO USE

Serve a fine-tuned model
in one click with Friendli

01

Friendli Container

Serve generative AI models with Friendli Engine in your GPU environment

Learn more

02

Friendli Dedicated Endpoints

Build and run generative AI models on autopilot

Learn more