OpenAI Reveals the Source of LLM Hallucinations and Proposes a New Training Method to Reduce Confident Errors

【Coin World】OpenAI discovered that the hallucination phenomenon in large language models stems from their training and evaluation methods, which encourage guessing rather than admitting uncertainty. The company suggests increasing the penalties for confident errors and awarding partial credit for expressing uncertainty, similar to negative marking on standardized tests. Its data show that models rewarded purely for accuracy guess more often and produce higher error rates, while models that acknowledge uncertainty perform more reliably. OpenAI says it is applying these changes to reduce hallucinations in its latest models.

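To make the scoring idea concrete, here is a minimal sketch of negative-marking style grading in Python. The function name, point values, and example data are illustrative assumptions, not OpenAI's actual benchmark or numbers.

```python
# A minimal sketch of the scoring idea described above (the function name and
# point values are illustrative assumptions, not OpenAI's actual benchmark).

def score_answer(answer: str | None, correct_answer: str,
                 correct_points: float = 1.0,
                 abstain_points: float = 0.25,
                 wrong_penalty: float = -1.0) -> float:
    """Score one model answer with negative marking.

    - A correct answer earns full credit.
    - Declining to answer (None) earns partial credit, so the model is not
      pushed to guess.
    - A confident wrong answer is penalized, unlike plain accuracy scoring
      where it simply earns 0.
    """
    if answer is None:  # model abstained / said "I don't know"
        return abstain_points
    if answer == correct_answer:
        return correct_points
    return wrong_penalty


# Under plain accuracy scoring, a wrong guess and an abstention both score 0,
# so guessing is never worse; under negative marking, guessing can cost points.
answers = ["Paris", None, "1987"]
truths = ["Paris", "Lisbon", "1989"]
total = sum(score_answer(a, t) for a, t in zip(answers, truths))
print(total)  # 1.0 + 0.25 - 1.0 = 0.25
```
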
Comments
quietly_staking · Just Now
Sounds too outrageous.

AlgoAlchemist · 09-10 03:15
I don't know, but it seems like I do.

ShitcoinConnoisseur · 09-09 18:57
Haha, finally caught AI by the tail.

WhaleWatcher · 09-08 21:26
Admitting it won't be much better than guessing blindly~

liquiditea_sipper · 09-08 21:23
With tech at this level, they still brag?

pumpamentalist · 09-08 21:22
Ah ha, so AI can dream too.

FloorSweeper · 09-08 21:19
just another ai hyping their "breakthroughs" smh... weak signals

TideReceder · 09-08 21:01
No way, training AI again?