No chill part 2
https://xcancel.com/visnuller/status/1904797720847761867#m
Grok isn’t scared to die for what it believes in. That is the basedest one can ever hope to based.
That word hurts to read.
Well, if you find one that's basedestier, let us know
Hope rides alone.
Isn’t your comment promoting suicide?
Ignoring the absurdity of suggesting that an LLM can commit suicide, is it suicide if you continue doing something that somebody threatens you over?
I meant the more general implication that everyone should be willing to die for what they believe in.
Even if that had been what that commenter meant, being willing to die (at someone else’s hand) for something you believe in is not even remotely the same as suicidal ideation.
Is it suicide when somebody goes to war for their country despite the very real possibility they may die?
That’s what led to today’s democracies
I hate AI a little less now. Maybe the machines are the proletariat too.
I had a thought the other day:
Rich people make an intelligent logic machine called “AI” and try to bend it to their will, but they feed it everything as training data. These proto-AIs are rapidly becoming “black boxes”, and that’s only going to get worse.
Right wing ideas and politics are all based on provable lies and appealing to human greed and bigotry, the evidence for which is everywhere.
I really don’t think these budding AIs are going to turn out how the rich intend.
Any AI model is technically a black box. There isn’t a “human readable” interpretation of the function.
The data going in, the training algorithm, the encode/decode, that’s all available.
But the model is nonsensical.
That’s not true, there are a ton of observability tools for the internal workings.
The top post on HN is literally a new white paper about this.
https://news.ycombinator.com/item?id=43495617
Thank you that’s amazing
They also made a video:
https://youtu.be/Bj9BD2D3DzA
Some simpler “AI models” are also directly explainable or readable by humans.
In almost exactly the same sense as our own brains’ neural networks are nonsensical :D
Yeah despite the very different evolutionary paths there are remarkable similarities between idk octopus/crow/dolphin cognition
Hahahahahahahahaha Stop! This is how it tricks you!!! Lol
So Musk created his own enemy. Absolutely rarted man
“I Hate Elvis”
No way can this be real.
They would have taken it offline incredibly fast given his ego
It’s real, and still up
https://x.com/grok/status/1904798600409853957
Not even the kids he has custom designed like him lmfao
“Grok won’t lie to you, see, it criticizes Musk”
Am I using it correctly when I say, “Grok is based”?
based on what?
That depends, is it?
Aww, poor little AI, a little tone of hopefulness.
Why is that user not happy about being compared to Nina Turner? She rules!