Microsoft Kosmos-1

A Multimodal Large Language Model

About Microsoft Kosmos-1

Microsoft has unveiled Kosmos-1, a multimodal large language model (MLLM) that can respond not only to language prompts but also to visual cues, enabling an array of tasks including image captioning, visual question answering, and more. Kosmos-1 can take image and audio prompts, paving the way for the next stage beyond ChatGPT's text-only prompts.

As indicated in Table 1 of the paper, the KOSMOS-1 model natively supports language, perception-language, and vision tasks. The authors train the model on web-scale multimodal corpora, including text data, image-text pairs, and arbitrarily interleaved images and text. They also transfer language-only instruction data to calibrate the model's instruction-following capability across modalities. KOSMOS-1 can naturally handle both perception-intensive tasks and natural language tasks, including visual dialogue, visual explanation, visual question answering, image captioning, simple math equations, OCR, and zero-shot image classification with descriptions.
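To make the interleaved input format concrete, below is a minimal Python sketch of how an image-text prompt might be flattened into a single sequence for an MLLM. The `<image> ... </image>` boundary tokens follow the convention described in the Kosmos-1 paper; everything else (the `ImageSegment` type, `build_interleaved_sequence`, the placeholder embedding string) is hypothetical illustration, not Kosmos-1's actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch: flattening an interleaved image-text document into a
# single token sequence, in the spirit of Kosmos-1's <image> ... </image>
# input format. Names here are illustrative, not a real Kosmos-1 interface.

@dataclass
class ImageSegment:
    path: str  # location of the image to embed


def build_interleaved_sequence(segments):
    """Turn a mixed list of strings and ImageSegments into one flat sequence."""
    tokens = ["<s>"]
    for seg in segments:
        if isinstance(seg, ImageSegment):
            tokens.append("<image>")
            # In the real model, a vision encoder's patch embeddings go here.
            tokens.append(f"[embedding of {seg.path}]")
            tokens.append("</image>")
        else:
            # Stand-in for real subword tokenization of the text span.
            tokens.extend(seg.split())
    tokens.append("</s>")
    return tokens


prompt = [
    ImageSegment("cat.jpg"),
    "Question: What animal is in the picture? Answer:",
]
print(build_interleaved_sequence(prompt))
```

In the actual model, the placeholder between the boundary tokens would be replaced by embeddings produced by a vision encoder, so the language model attends over image and text tokens in one unified sequence.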

Source: https://www.zdnet.com/article/now-microsoft-has-a-new-ai-model-kosmos-1/

