

Surely the most rational thing to do would be for the Rationalist sphere at large to retreat into their Ratcaves and spend a few years translating the sequences into Mandarin, right?
So I finally actually read the damn thing fully, because you fine sneerers shouldn’t suffer alone, and the thing that struck me was that for all their whinging about being blamed they’re not actually owning the fact that on the relevant issue they were talking about (“wokeness” or “DEI” or whatever they’re calling it this week) they were in perfect alignment with the right-wing argument. Like, regardless of their opinions on trade policy or whatever, they were literally saying the same exact shit about how awful the “woke leftist academic Mafia cathedral” (or whatever they’re calling it this week) is and how to oppose it. For Christ’s sake, the writer approvingly cites fucking Hanania, the guy who literally wrote the book (well, one of them) on “wokeness.” When people blame the radical “centrists” (is there a way to make even more aggressive scare quotes?) it’s not because they should have been more consistent in opposing Trump-style populism or caveated their arguments with “but also Trump wouldn’t piss on America if it was on fire”; it’s because on the issue they spent all their time and energy writing about and advocating for, they were actively promoting him. And I’m sorry, but especially for someone with noted reach beyond whatever silicon valley cult bubble he lives in, like Scooter, I just can’t believe they’re not aware of that fact.
At least the Russian trolls who side track any discussion of Ukraine by talking about Iraq and Nicaragua usually have a point about how fucked up the CIA is. These poor bastards just seem bitter that they’re not getting the respect and accolades they feel they deserve for being special smart boys because they didn’t go into real academia. And I mean let he who is without bitterness at academia cast the first stone, but scientific racism is still bullshit if it is a load-bearing part of your self-esteem.
At its heart, I think that’s the real problem. The right has built up “wokeness” into this all-consuming conspiracy theory that is responsible for everything, which was an effective way to take power by offering simple plans that hurt people that large swathes of the voting public already believed had it too good. But now that they’re in power they need to actually do something about this fictitious issue they’ve convinced themselves is at the heart of all problems, and this is what that looks like. There is no simple common-sense policy that would protect people from “being forced to say DEI shibboleths” or whatever they’re whining about, because nobody is forcing you to do that in the first place, but you can’t sweep in on a wave of “antiwokism” and do nothing about it.
I’m actually reminded of the similar bizarro push against “color revolutions” that seems to animate Putin and some of the other crazies in international politics. Like, it’s pretty obviously bullshit, if for no other reason than that if it were possible to culturally mind control a people into overthrowing their government by throwing a relatively tiny sum of money at some artists and shouting a lot, there’s no way the CIA would have gone after Kyrgyzstan and Ukraine but not Russia itself. But a lot of Russian foreign policy, including the invasion of Ukraine, seems to be at least partially a response to this imagined threat from a nonexistent conspiracy, and the blood flowing down the Dnipro is the cost the world is paying for that delusion.
I’m reminded of an old essay from Siskind that tried to break down the different approaches to disagreement as either “conflict theory” - different people want different, mutually incompatible things - or “mistake theory,” where we all want the same basic thing but disagree about how to get it. Given the general silicon valley milieu’s (and YudRats’ specifically) affinity for “mistake theory,” I think the susceptibility to authoritarianism and fascism fits remarkably well. After all, if we all want the same basic thing, the only way the autocrat could do something we don’t like is if they were wrong, so we just have to get a reasonable enough autocrat and give them absolute power, at which point they can magically solve all problems. See also the singularity God AI nonsense.
If I had my wish, it would be that this doesn’t just remind people of how authoritarians can be/are evil or incompetent, but also that the general structure isn’t actually more “efficient” because whatever delays the democratic process introduces are dwarfed by the inevitable difficulties of just trying to do anything at the scale of any modern state, much less the sheer scale of the USA.
As a prominent intellectual voice criticizing the current administration, our boy Curtis should really be grateful that they don’t hew more closely to his preferred fascist project, because if they did they would have an incentive to be just as brutal, if not more so, in stomping out “credible” independent opposition from the right as opposition from existing liberal institutions. After all, the fascist project relies on subverting existing institutions that are too tied up in their own bureaucracy and politicking to effectively oppose it, while the point Yarvin has made across his whole career is that the fascists have no such impediment or incentive for restraint. The Night of the Long Knives predated Kristallnacht by more than four years, after all.
As anyone who has tracked SovCit discourse can confirm, the English language was definitively codified for all eternity by the 1996 edition of Black’s Law Dictionary and all other projects including future editions of that venerable title are the result of communist plots to undermine the sanctity of American freedom and our precious bodily fluids.
Looks like that is indeed the post. I have a number of complaints, but the most significant one is actually in the early part of the narrative, where they just assume “companies start to integrate AI” with little detail on how this is done, what kind of value it creates over their competitors, whether it’s profitable for anyone, etc. I’m admittedly trusting David Gerard’s and Ed Zitron’s overall financial analysis here, but at present it seems like the trajectory is moving in the opposite direction, with the AI industry as a whole looking likely to flame out as they burn through their ability to raise capital without ever actually finding a net return on that investment. At which point all the rest of it is sci-fi nonsense. Like, if you want to tell me a story about how we get from here to The Culture (or I Have No Mouth and I Must Scream), those are the details that need to be filled in. How do the intermediate steps actually work? Otherwise it’s the same story we’ve been reading since the 70s.
It’s almost like the tech industry relies on a great deal of general stability, education, and other aspects of society that are broadly considered the responsibility of the state. I think there’s a stock line here about libertarians and cats?
So the primary doctrine is basically tech bros rewriting standard millenarian Christianity from mythic fantasy into science fiction. But it seems like the founder wants to be a silicon valley influencer more than he wants to be a proper cult leader, meaning that some of the people who take this shit seriously have accumulated absurd amounts of money and power, and occasionally the more deranged subgroups will spin off into a proper cult with everything that entails – including, now, being involved in multiple homicides!
That’s like a solid centiMoR, which is conveniently the share of HPMoR in which anything actually happens.
The classic “I don’t understand something therefore it must be incomprehensible” problem. Anyone who does understand it must therefore be either lying or insane. I’m not sure if we’ve moved forward or backwards by having the incomprehensible eldritch truth be progressive social ideology itself rather than the existence of black people and foreign cultures.
Behind the Bastards just wrapped their four-part series on the Zizzians, which has been a fun trip. Nothing like seeing the live reactions of someone who hasn’t been at least a little bit plugged into the whole space for years.
I haven’t finished part 4, but so far I’ve deeply appreciated Robert’s emphasis on how the Zizzian nonsense isn’t that far outside the bounds of normal Rationalist nonsense, and the Rationalist movement itself has a long history as a kind of cult incubator, even if Yud himself hasn’t fully leveraged his influence over a self-selecting high-control group.
Also the recurring reminders of the importance of touching grass and talking to people who haven’t internet-poisoned themselves with the same things you have.
There’s never a bad time to remember one of the foundational texts of academic sneerery.
Surely there have to be some cognitive scientists who are at least a little bit less racist who could furnish alternative definitions? The actual definition at issue does seem fairly innocuous from a layman’s perspective: “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.” (Aside: it doesn’t do our credibility any favors that, for all the concern about the source, I had to track all the way to Microsoft’s paper to find the quote at issue.) The core issue is obviously that they either took it completely out of context or else decided that their source explicitly arguing in favor of specious racist interpretations of shitty data wasn’t important. But it also feels like breaking down the idea itself may be valuable. Like, is there even a real consensus that those individual abilities or skills are actually correlated? Is it possible to be less vague than “among other things”? What does it mean to be “more able to learn from experience” or “more able to plan” in a way that is rooted in an innate capacity rather than in the context and availability of good information? And on some level, if that kind of intelligence is a unique and meaningful thing not emergent from context and circumstance, how are we supposed to see it emerge from statistical analysis of massive volumes of training data? (Machine learning models are nothing but context and circumstance.)
I don’t know enough about the state of non-racist neuroscience or whatever the relevant field is to know if these are even the right questions to ask, but it feels like there’s more room to question the definition itself than we’ve been taking advantage of. If nothing else the vagueness means that we haven’t really gotten any more specific than “the brain’s ability to brain good.”
These are AI bros, and should be assumed to be both racist and lazy. Of course they kept it.
At fair market value, considering the massive reduction in supply.
Ironically the trolley problem meme here is a great example of the objection: the same set up that puts him in the position to pull the lever also requires that people be tied to the track.
They definitely use actual numbers to try and push their agenda. It’s a classic case of constructing a category. Like how we’re the highest paying company in the industry of high technology, textile workers, teenagers, and dead people. Look at how much good EA-backed interventions like malaria nets are doing! Clearly this means EA-backed programs to make sure Sam Altman develops a computer god before his evil twin Alt Sam-man is also such a good use of resources that you’re basically a murderer if you don’t give.
Lemme just grab my programming screwdriver.
Remember the ~two weeks when Sabine was a vaguely respectable science communicator? That was wild.