[Image: An AI-generated chatbot flying like a superhero. (credit: Stable Diffusion / OpenAI)]

On Tuesday, OpenAI announced a sizable update to its large language model API offerings (including GPT-4 and gpt-3.5-turbo): a new function calling capability, significant cost reductions, and a 16,000-token context window option for the gpt-3.5-turbo model.

In large language models (LLMs), the "context window" is like a short-term memory that stores the contents of the prompt input or, in the case of a chatbot, the entire contents of the ongoing conversation. Increasing context size has become a technological race, with Anthropic recently announcing a 75,000-token context window option for its Claude language model. In addition, OpenAI has developed a 32,000-token version of GPT-4, but it is not yet publicly available.

Along those lines, OpenAI just introduced a new 16,000-token context window version of gpt-3.5-turbo, called, unsurprisingly, "gpt-3.5-turbo-16k," which allows a prompt to be up to 16,000 tokens in length. With four times the context length of the standard 4,000-token version, gpt-3.5-turbo-16k can process around 20 pages of text in a single request. This is a considerable boost for developers who need the model to process and generate responses for larger chunks of text.

Reference: https://ift.tt/UrvYQFw
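The "around 20 pages" figure follows from common rules of thumb for English text. As a minimal sketch (the words-per-token and words-per-page ratios below are rough assumptions, not figures from OpenAI's documentation):

```python
# Rough arithmetic behind the "~20 pages" estimate for gpt-3.5-turbo-16k.
# Assumed rules of thumb: ~0.75 English words per token, ~600 words per page.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 600

def approx_pages(context_tokens: int) -> float:
    """Convert a token budget into an approximate page count."""
    return context_tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

print(round(approx_pages(16_000)))  # 16k window: ~20 pages
print(round(approx_pages(4_000)))   # standard window: ~5 pages
```

Actual counts vary with language and content; tokenizers split code and non-English text into more tokens per word than plain English prose.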