I’m honestly not educated enough to get much out of the linked article, but just going by what you wrote, I have to wonder how AI “hallucinations” compare to human imagination (or, perhaps more importantly, how closely they can be made to).
I saw a comment elsewhere that found a way to make the hallucinations useful:
I’ve found this to be one of the most useful ways to use (at least) GPT-4 for programming. Instead of telling it how an API works, I make it guess, maybe starting with some example code to which a feature needs to be added. Sometimes it comes up with a better approach than I had thought of. Then I change the API so that its code works.
Conversely, I sometimes present it with some existing code and ask it what it does. If it gets it wrong, that’s a good sign that my API is confusing, and a hint as to how.
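For what it’s worth, here’s roughly what that workflow could look like in code. This is only a sketch: the `reports` library and its calls are made up for illustration, and the OpenAI Python client is just one way to send the prompt.

    # Sketch of the "let the model guess the API" workflow.
    # The `reports` library below is hypothetical; only the OpenAI client calls are real.
    from openai import OpenAI

    client = OpenAI()

    # Example code that uses the library as it exists today.
    example = """
    import reports

    r = reports.build("sales", start="2024-01-01", end="2024-03-31")
    r.save("q1.pdf")
    """

    # Ask the model to add a feature without describing how the API should look,
    # so whatever call it invents is effectively a design proposal.
    prompt = (
        "Here is some example code that uses our `reports` library:\n"
        + example
        + "\nExtend the example so the report is also emailed to a list of recipients."
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )

    # If the guessed call (say, r.email(to=[...])) reads better than what was
    # planned, change the real API to match it instead of correcting the model.
    print(response.choices[0].message.content)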