  • phi-4 is the only model I'm aware of that was deliberately trained to refuse to answer instead of hallucinating. it's mind-blowing to me that this isn't standard; everyone is trying to maximize benchmarks at all costs.

    I wonder if diffusion LLMs will hallucinate less, since error correction is inherently built into their inference process: tokens aren't committed one at a time, left to right, but can be re-masked and revised in later denoising steps. roughly the loop sketched below.
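
A toy sketch of what I mean by that loop (purely illustrative: `toy_model`, the vocabulary, and the confidence-based re-masking schedule are all made up, loosely in the spirit of MaskGIT/LLaDA-style low-confidence re-masking, not any specific model's actual sampler):

```python
import random

MASK = "<mask>"
VOCAB = ["the", "cat", "sat", "on", "the", "mat"]

def toy_model(tokens):
    # Stand-in for a real diffusion LM: proposes a token plus a confidence
    # score for every position. Re-scoring already-filled positions is what
    # lets later steps revise ("correct") earlier guesses.
    return [(random.choice(VOCAB) if t == MASK else t, random.random())
            for t in tokens]

def diffusion_decode(length=8, steps=4):
    tokens = [MASK] * length                 # start from an all-masked sequence
    for step in range(steps, 0, -1):
        preds = toy_model(tokens)
        tokens = [tok for tok, _ in preds]   # commit every prediction
        if step > 1:
            # Re-mask the least-confident positions so the next denoising
            # step gets a chance to reconsider them.
            k = length * (step - 1) // steps
            worst = sorted(range(length), key=lambda i: preds[i][1])[:k]
            for i in worst:
                tokens[i] = MASK
    return tokens

print(" ".join(diffusion_decode()))
```

The interesting part is the re-mask step: an autoregressive decoder has no analogous way to take back a token once it's emitted.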