GPT-3 aces tests of reasoning by analogy

[Image: A hammer being used to force a square block through a round hole.]

Large language models are a class of AI algorithm that relies on a large number of computational nodes and an equally large number of connections among them. They can be trained to perform a variety of functions (protein folding, anyone?), but they're best known for their capabilities with human language.

LLMs trained simply to predict the next word that will appear in a text can produce human-sounding conversations and essays, although with some worrying accuracy issues. These systems have demonstrated a variety of behaviors that appear to go well beyond the simple language tasks they were trained to handle.
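
To make "predict the next word" concrete, here is a minimal sketch of next-token prediction using the Hugging Face transformers library, with GPT-2 standing in for GPT-3 (which is only reachable through OpenAI's API); the prompt text is an arbitrary example, not anything from the study.

```python
# Minimal sketch of next-token prediction. GPT-2 is used here purely as a
# freely downloadable stand-in for GPT-3.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The hammer forced the square block through the round"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The logits at the final position score every vocabulary entry as a
# candidate for the next token; show the five most likely continuations.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```

Repeatedly sampling from that distribution, one token at a time, is all that text generation amounts to; everything else described in the article is behavior that emerges on top of this single training objective.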

We can apparently add analogies to the list of skills that LLMs have inadvertently picked up. A team from the University of California, Los Angeles has tested the GPT-3 LLM using questions that should be familiar to any American who has spent time on standardized tests like the SAT. On all but one variant of these questions, GPT-3 managed to outperform undergrads who presumably had mastered those same tests just a few years earlier. The researchers suggest this indicates that large language models are capable of reasoning by analogy.
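
The UCLA team's exact prompts aren't reproduced here, but the general recipe for testing an LLM on SAT-style verbal analogies looks something like the sketch below: write the question stem as text, append each candidate answer, and see which completion the model finds most probable. The analogy stem and answer choices are invented for illustration, and GPT-2 again stands in for GPT-3.

```python
# Hedged sketch: scoring multiple-choice analogy answers by how probable the
# model finds each completion. The question and choices are made up for
# illustration; GPT-2 stands in for GPT-3.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

stem = "Glove is to hand as shoe is to"
choices = ["foot", "sock", "leather", "lace"]

def completion_log_likelihood(prompt: str, completion: str) -> float:
    """Sum of the log-probabilities the model assigns to the completion tokens."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + " " + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Position i of the logits predicts token i + 1 of the input.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    # Only count the positions that predict the completion's tokens.
    start = prompt_ids.shape[1] - 1
    return float(log_probs[start:].gather(1, targets[start:, None]).sum())

scores = {c: completion_log_likelihood(stem, c) for c in choices}
print(max(scores, key=scores.get), scores)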

Source: https://arstechnica.com/?p=1957885