French Instruction-following Models

About Vigogne

This repository contains code for reproducing Stanford Alpaca in French 🇫🇷 using low-rank adaptation (LoRA), provided by 🤗 Hugging Face's PEFT library. In addition to LoRA, the project uses LLM.int8(), provided by bitsandbytes, to quantize pretrained language models (PLMs) to int8. Combining these two techniques makes it possible to fine-tune PLMs on a single consumer GPU such as an RTX 4090.
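The reason LoRA fits on a single consumer GPU is that it freezes the pretrained weight matrix and trains only a low-rank additive update. A minimal NumPy sketch of the idea (the 4096×4096 dimensions and rank 8 are illustrative assumptions, not taken from this repository's configuration):

```python
import numpy as np

# LoRA: instead of updating the full d_out x d_in weight matrix W,
# learn a low-rank update delta_W = B @ A with rank r << min(d_out, d_in).
# Dimensions below are hypothetical, chosen for illustration.
d_out, d_in, r = 4096, 4096, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)).astype(np.float32)  # frozen pretrained weight
A = rng.standard_normal((r, d_in)).astype(np.float32)      # trainable
B = np.zeros((d_out, r), dtype=np.float32)                 # trainable, zero-initialized

# Effective forward pass uses W + B @ A; since B starts at zero,
# the adapted model initially behaves exactly like the pretrained one.
x = rng.standard_normal((d_in,)).astype(np.float32)
y = W @ x + B @ (A @ x)

full_params = d_out * d_in        # parameters a full fine-tune would update
lora_params = r * (d_out + d_in)  # parameters LoRA actually trains
print(full_params, lora_params)   # 16777216 vs 65536: ~256x fewer trainable params
```

Only `A` and `B` receive gradients, so optimizer state and gradient memory shrink by the same factor; the frozen `W` can additionally be stored in int8 via LLM.int8() to cut weight memory further.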

This project is based on LLaMA, Stanford Alpaca, Alpaca-LoRA, Cabrita, and Hugging Face. The authors adapted the training script to fine-tune additional models such as BLOOM and mT5, and they also share the translated Alpaca dataset and the trained LoRA weights, such as vigogne-lora-7b and vigogne-lora-bloom-7b1.

Vigogne screenshots
