Via Superhuman
“Researchers train model that ‘thinks’ just like a human
When computer scientists train models, they usually optimize for things like accuracy and performance. But what if you tried to instead build a model that acts just like a human, with all of our quirks and flaws? That’s the idea behind a new model called Centaur that could one day make “mind-reading” a reality.
How was it built? An international team fine-tuned Meta’s Llama by training it on 160 psychological studies, which involved more than 10M choices. This helped the model pick up on behavioral patterns that are common across humans.
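The newsletter doesn’t show the team’s actual data pipeline, but fine-tuning a language model on experimental choice data generally means serializing each trial into text the model can learn from. Here is a minimal, hypothetical sketch of that step, with field names and the prompt template invented for illustration (they are not from the Centaur paper):

```python
# Hypothetical sketch: converting one behavioral-experiment trial into a
# prompt/completion pair for supervised fine-tuning of an LLM.
# All field names and wording here are assumptions for illustration.

def trial_to_example(trial: dict) -> dict:
    """Serialize a two-option choice trial into a text training example."""
    prompt = (
        "You are choosing between two slot machines.\n"
        f"Machine A has paid out {trial['reward_a']} points so far; "
        f"Machine B has paid out {trial['reward_b']} points.\n"
        "Which machine do you choose?"
    )
    # The participant's recorded choice becomes the target completion.
    completion = f"Machine {trial['choice']}"
    return {"prompt": prompt, "completion": completion}

example = trial_to_example({"reward_a": 12, "reward_b": 7, "choice": "A"})
```

Repeating this over millions of recorded choices yields a text dataset on which a model like Llama can be fine-tuned with standard supervised training.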
How did it perform? In 31 out of 32 tasks, it correctly predicted how a person would respond to a particular scenario, even in situations it wasn’t trained on. The model replicated human decision-making better than leading statistical methods.
Why it matters: The findings suggest “there’s a lot of structure in human behavior,” Stanford neuroscientist Russell Poldrack told Nature. In other words, humans might not be as unpredictable as we once thought, and LLMs could one day help us figure out which factors influence our decision-making the most.
What’s next? The researchers plan to feed the model up to 4x more training data in the hopes of making it even more accurate. Centaur is also open source, so anyone can perform experiments with it, getting us that much closer to understanding the mysteries of the mind.”