

Scott talks a bit about it in the video, but he was recently in the news as the guy who refused to sign a non-disparagement agreement when he left OpenAI, which caused them to claw back his stock options.
I’m fascinated by the way they’re hyping up Daniel Kokotajlo to be some sort of AI prophet. Scott does it here, but so does Caroline Jeanmaire in the OP’s twitter link. It’s like they all got the talking point (probably from Scott) that Daniel is the new guru. Perhaps they’re trying to anoint someone less off-putting and awkward than Yud. (This is also the first time I’ve ever seen Scott on video, and he definitely gives off a weird vibe.)
After minutes of meticulous research and quantitative analysis, I’ve come up with my own predictions about the future of AI.
“USG gets captured by AGI”.
Promise?
Of course they use shitty AI slop as the background for their web page.
Like, what the hell is it even supposed to be? A mustachioed man writing in a journal in what appears to be a French village town square? Shadowy individuals chatting around an oddly incongruous fire pit? Guitar dude and listener sitting on invisible benches? I get that AI produces this kind of garbage all the time, but did the lesswrongers even bother to evaluate it for appropriateness?
This commenter may be saying something we already knew, but it’s nice to have the confirmation that Anthropic is chock full of EAs:
(I work at Anthropic, though I don’t claim any particular insight into the views of the cofounders. For my part I’ll say that I identify as an EA, know many other employees who do, get enormous amounts of value from the EA community, and think Anthropic is vastly more EA-flavored than almost any other large company, though it is vastly less EA-flavored than, like, actual EA orgs. I think the quotes in the paragraph of the Wired article give a pretty misleading picture of Anthropic when taken in isolation and I wouldn’t personally have said them, but I think “a journalist goes through your public statements looking for the most damning or hypocritical things you’ve ever said out of context” is an incredibly tricky situation to come out of looking good and many of the comments here seem a bit uncharitable given that.)
Sorry, when she started taking Yud’s claim to be a “renowned AI researcher” at face value, I noped out.
I’m noticing that people who criticize him on that subreddit are being downvoted, while he’s being upvoted. I wouldn’t be surprised if, as part of his prodigious self-promotion of this overlong and tendentious screed, he’s steered some of his more sympathetic followers toward these forums.
Actually it’s the wikipedia subreddit thread I meant to refer to.
Wait, what are we trying to stop from coming to pass? Superintelligent AIs? Either I’m missing his point, or he really agrees with the doomers that LLMs are on their way to becoming “superintelligent”.