Thursday, May 11, 2023

OpenAI peeks into the “black box” of neural networks with new research


An AI-generated image of robots looking inside an artificial brain. (credit: Stable Diffusion)

On Tuesday, OpenAI published a new research paper detailing a technique that uses its GPT-4 language model to write explanations for the behavior of neurons in its older GPT-2 model, albeit imperfectly. It's a step forward for "interpretability," the field of AI research that seeks to explain why neural networks produce the outputs they do.

While large language models (LLMs) are conquering the tech world, AI researchers still know relatively little about how they function under the hood. In the first sentence of OpenAI's paper, the authors write, "Language models have become more capable and more widely deployed, but we do not understand how they work."

For outsiders, that likely sounds like a stunning admission from a company that not only depends on revenue from LLMs but also hopes to accelerate them to beyond-human levels of reasoning ability.


Reference: https://ift.tt/tDXR7YV

