It gets things wrong literally half the time I ask it something (I’m not exaggerating). Why isn’t anybody talking about this?
Everyone: appalled at how AI mangles their field of expertise.
Also everyone: amazed at how well AI can explain to them topics where they lack expertise.
It has plugins for Wolfram Alpha, which give it access to plenty of real analytical tools.
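For context, the plugin flow is roughly: the model writes a structured query, something calls out to Wolfram Alpha's API, and the text result gets spliced back into the conversation. A minimal sketch of that kind of tool call, using Wolfram Alpha's public Short Answers API; the glue code and appid handling here are illustrative, not how ChatGPT actually wires it up:

```python
import requests

WOLFRAM_SHORT_ANSWER_URL = "https://api.wolframalpha.com/v1/result"

def ask_wolfram(query: str, appid: str) -> str:
    """Send a natural-language query to Wolfram Alpha's Short Answers API
    and return the plain-text result."""
    resp = requests.get(
        WOLFRAM_SHORT_ANSWER_URL,
        params={"appid": appid, "i": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

# Hypothetical glue: when the model decides a question is math-heavy,
# it hands the query to the tool and folds the answer back into its reply.
if __name__ == "__main__":
    print(ask_wolfram("integrate x^2 from 0 to 3", appid="YOUR_APP_ID"))
```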
It doesn’t reason, though; it just produces a convincing facsimile of reasoning, and people believing it can is a bad thing.
It’s a reference to a comment a techbro made on a federated Hexbear thread back when ChatGPT first came out and the LLM craze was pretty new. They were unironically convinced that ChatGPT had human-level intelligence.
It’s even a tagline.
Dang, I am out of the loop.
TBF it’s a pretty obscure and underrated thread.
Lmao at the body slam from the top rope by @silent_water@hexbear.net