Chinese LLaMA & Alpaca LLMs

About Chinese LLaMA & Alpaca LLMs

Large language models (LLMs) such as ChatGPT and GPT-4 have set off a new wave of research in natural language processing, demonstrating abilities that approach artificial general intelligence (AGI) and attracting widespread attention across the industry. However, the extremely high cost of training and deploying large language models poses obstacles to transparent and open academic research.

Main content of this project:

  • 🚀 Expanded the Chinese vocabulary of the original LLaMA model, improving the efficiency of Chinese encoding and decoding
  • 🚀 Open-sourced Chinese LLaMA models (7B, 13B) pre-trained on Chinese text data
  • 🚀 Open-sourced Chinese Alpaca models (7B, 13B) further fine-tuned on instruction data
  • 🚀 Deploy the quantized models locally and run them on a laptop (personal PC) CPU
  • 💡 The figure below shows the actual experience of the locally deployed 7B model (animation not accelerated; measured on an Apple M1 Max).
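To see why an expanded vocabulary matters, consider how a tokenizer with little Chinese coverage handles Chinese text: characters outside the vocabulary fall back to raw UTF-8 bytes, one token per byte. The toy sketch below (an illustration of the general idea, not the project's actual SentencePiece tokenizer) contrasts byte-level fallback with a hypothetical expanded vocabulary that encodes each character as a single token.

```python
# Toy illustration of tokenization efficiency (NOT the real LLaMA tokenizer).
# Under byte-level fallback, each CJK character costs 3 tokens (its UTF-8
# bytes); with dedicated Chinese vocabulary entries it can cost just 1.

def byte_fallback_token_count(text: str) -> int:
    # One token per UTF-8 byte, as in byte-level fallback.
    return len(text.encode("utf-8"))

def expanded_vocab_token_count(text: str) -> int:
    # Hypothetical expanded vocabulary: one token per character.
    return len(text)

sentence = "你好，世界"  # "Hello, world" — 5 characters
print(byte_fallback_token_count(sentence))   # 15 (3 bytes per character)
print(expanded_vocab_token_count(sentence))  # 5
```

A 3x shorter token sequence means proportionally less compute per sentence and a longer effective context for Chinese text.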
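Laptop-CPU deployment is feasible because the released models can be quantized: weights are stored as small integers plus a per-block scale, cutting memory to roughly a quarter of fp16. The sketch below shows symmetric round-to-nearest 4-bit quantization as a minimal illustration of this idea; it is an assumption-laden toy, not the project's actual quantization kernels.

```python
# Minimal sketch of symmetric round-to-nearest 4-bit quantization
# (illustrative only; real quantized inference uses optimized kernels
# and per-block layouts).

def quantize_q4(weights):
    """Quantize a block of floats to signed 4-bit ints plus one scale."""
    scale = max(abs(w) for w in weights) / 7.0  # int4 symmetric range: -7..7
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_q4(q, scale):
    """Recover approximate floats from the quantized block."""
    return [qi * scale for qi in q]

block = [0.12, -0.7, 0.33, 0.05]
q, s = quantize_q4(block)
print(q)                    # small integers in [-7, 7]
print(dequantize_q4(q, s))  # close to the original block
```

Each weight now needs 4 bits instead of 16, at the cost of a small rounding error bounded by half the scale.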

Disclaimer: The resources related to this project are for academic research use only.

Chinese LLaMA & Alpaca LLMs screenshots
