Koala

A dialogue model for research purposes by UC Berkeley

About Koala

UC Berkeley has released a dialogue model for research purposes named Koala. Based on a user study, the developers claim that Koala responds competently to a wide range of user queries. The accompanying blog post reports that Koala's outputs are on par with ChatGPT's in roughly half of the cases and generally outperform those of Stanford's Alpaca. The researchers have released a web demo for public use.

Koala was trained by fine-tuning Meta's LLaMA on dialogue data scraped from the web, with a particular focus on responses to user queries from other large language models such as ChatGPT. The makers prioritised the quality of the scraped dataset over its size.
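For readers unfamiliar with this setup, the sketch below shows what supervised fine-tuning of a LLaMA-style model on dialogue data can look like with Hugging Face Transformers. The checkpoint name, prompt format, toy data, and hyperparameters are illustrative assumptions, not Koala's actual recipe, and running it requires access to LLaMA weights and substantial GPU memory.

```python
# Minimal supervised fine-tuning sketch (assumptions: checkpoint name, prompt
# format, hyperparameters). Not the Koala authors' actual training code.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "huggyllama/llama-7b"  # assumed base checkpoint; weights are gated
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy dialogue examples as prompt/response pairs (placeholder data).
dialogues = [
    {"prompt": "USER: What is Koala?",
     "response": "ASSISTANT: A dialogue model fine-tuned from LLaMA."},
]

def tokenize(example):
    # Concatenate prompt and response into a single training sequence.
    text = example["prompt"] + "\n" + example["response"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=512)

train_data = Dataset.from_list(dialogues).map(
    tokenize, remove_columns=["prompt", "response"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="koala-sft",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=2e-5),
    train_dataset=train_data,
    # Causal language modeling: the collator copies input_ids into labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```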

To train Koala, roughly 60,000 dialogues publicly shared by users on ShareGPT were collected via public APIs. Redundant and non-English dialogues were then removed, shrinking the dataset to approximately 30,000 dialogues. ChatGPT and human responses from the HC3 English dataset, amounting to about 87,000 question-answer examples, were also used.
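As a rough illustration of that filtering step, the sketch below drops duplicate and non-English dialogues from a collection. The input schema, file name, and the langdetect dependency are assumptions for illustration; Koala's actual preprocessing pipeline may differ.

```python
# Illustrative filtering of dialogues: drop duplicates and non-English entries.
# Schema ({"turns": [{"text": ...}, ...]}) and input file are hypothetical.
import json
from langdetect import detect

def filter_dialogues(dialogues):
    seen = set()
    kept = []
    for d in dialogues:
        text = " ".join(turn["text"] for turn in d["turns"])  # assumed schema
        key = text.strip().lower()
        if key in seen:
            continue  # skip redundant (duplicate) dialogues
        try:
            if detect(text) != "en":
                continue  # skip non-English dialogues
        except Exception:
            continue  # language could not be detected, drop the dialogue
        seen.add(key)
        kept.append(d)
    return kept

if __name__ == "__main__":
    with open("sharegpt_dialogues.json") as f:  # hypothetical input file
        raw = json.load(f)
    filtered = filter_dialogues(raw)
    print(f"kept {len(filtered)} of {len(raw)} dialogues")
```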

Open-source data used to train Alpaca, components of the OIG dataset, the Anthropic HH dataset, OpenAI's WebGPT dataset, and OpenAI's summarisation dataset were also used to train the model.

Source: https://analyticsindiamag.com/uc-berkeley-releases-koala-for-research-purposes/
