Reading List
OpenAI researchers argue that language models hallucinate because standard training and evaluation procedures reward guessing over admitting uncertainty (OpenAI).

OpenAI:
OpenAI researchers argue that language models hallucinate because standard training and evaluation procedures reward guessing over admitting uncertainty. Read the paper. At OpenAI, we're working hard to make AI systems more useful and reliable.