DeepMind RETRO

Improving language models by retrieving from trillions of tokens

About DeepMind RETRO

RETRO enhances auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. With a 2 trillion token database, the Retrieval-Enhanced Transformer (Retro) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 25× fewer parameters. After fine-tuning, Retro performance translates to downstream knowledge-intensive tasks such as question answering.

Source: https://www.deepmind.com/publications/improving-language-models-by-retrieving-from-trillions-of-tokens
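The retrieval step can be pictured as splitting the preceding tokens into fixed-size chunks, embedding each chunk, and looking up its nearest neighbours in a precomputed key-value store of corpus chunks. The sketch below is illustrative only: it assumes a frozen chunk embedder and a brute-force dot-product search, and all names and parameters are hypothetical, not DeepMind's implementation.

```python
# Minimal sketch of RETRO-style chunked retrieval (illustrative only; the
# function and parameter names below are hypothetical, not DeepMind's API).
import numpy as np

CHUNK_LEN = 64      # RETRO retrieves per fixed-size chunk of the input
NUM_NEIGHBOURS = 2  # corpus chunks fetched for each input chunk
EMBED_DIM = 128     # toy embedding size for this sketch


def embed(chunk_tokens: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen chunk encoder (e.g. a BERT-style embedder)."""
    rng = np.random.default_rng(abs(hash(chunk_tokens.tobytes())) % 2**32)
    return rng.standard_normal(EMBED_DIM)


def retrieve_neighbours(input_tokens: np.ndarray,
                        db_keys: np.ndarray,
                        db_chunks: np.ndarray) -> list[np.ndarray]:
    """For each chunk of the input, return its nearest corpus chunks.

    db_keys:   (N, EMBED_DIM) precomputed embeddings of corpus chunks.
    db_chunks: (N, CHUNK_LEN) the corpus chunks themselves.
    """
    neighbours = []
    for start in range(0, len(input_tokens), CHUNK_LEN):
        chunk = input_tokens[start:start + CHUNK_LEN]
        query = embed(chunk)
        # Brute-force dot-product search; a trillion-token database would
        # instead use an approximate nearest-neighbour index.
        scores = db_keys @ query
        top = np.argsort(-scores)[:NUM_NEIGHBOURS]
        neighbours.append(db_chunks[top])
    return neighbours
```

In the full model, the retrieved chunks are encoded and fed to the Transformer decoder through cross-attention, so generation of each chunk can condition on the neighbours of the preceding one.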

