T0pp by BigScience

T0 outperforms GPT-3 on 9 out of 11 held-out benchmarks despite being 16x smaller

About T0pp by BigScience

T0* is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates written in varying formulations. These prompted datasets make it possible to benchmark a model's ability to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
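
As a rough illustration of how one supervised example can be rendered under multiple prompt formulations, here is a minimal Python sketch. The templates and field names below are hypothetical stand-ins for illustration only, not the actual templates used to train T0*:

```python
# Illustrative only: one NLI-style example rendered with several
# hypothetical prompt templates, each producing a separate training input.
nli_example = {
    "premise": "A man is playing a guitar on stage.",
    "hypothesis": "A man is performing music.",
}

templates = [
    'Suppose {premise} Can we infer that "{hypothesis}"? Yes or no?',
    "{premise} Question: {hypothesis} True or false?",
]

for template in templates:
    prompt = template.format(**nli_example)
    print(prompt)  # each formulation becomes its own prompted instance
```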

Intended uses

You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", and the model will hopefully generate "Positive".
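
For concreteness, below is a minimal inference sketch using the Hugging Face transformers library, assuming the checkpoint is available on the Hub as bigscience/T0pp. Note that the full model is large, so loading it requires substantial memory:

```python
# A minimal inference sketch with Hugging Face transformers.
# Assumes the checkpoint "bigscience/T0pp" is available on the Hub;
# CPU inference on the full model will be slow and memory-hungry.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")

query = ("Is this review positive or negative? "
         "Review: this is the best cast iron skillet you will ever buy")
inputs = tokenizer(query, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "Positive"
```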

