

  • Yeah, he thinks Cyc was a switch from the brilliant meta-heuristic soup of Eurisko to the dead end of expert systems, but according to the article I linked, Cycorp was still programming extensive heuristics and meta-heuristics into the expert system entries they were making, as part of its general resolution-based inference engine. It’s just that Cyc wasn’t able to do anything useful with these heuristics, and in fact they were slowing it down extensively, so Cycorp started turning them off in 2007 and completely turned off the general inference system in 2010!

    To be fair (read: far too charitable) to Eliezer, this little factoid has cites from 2022 and 2023, when Lenat wrote more about lessons from Cyc, so it’s not like Eliezer could have known this back in 2008. To sneer at (er, be actually fair to) Eliezer: he should have figured the guy that actually wrote and used Eurisko, talked about how Cyc was an extension of it, and repeatedly refers back to lessons of Eurisko would in fact try to include a system of heuristics and meta-heuristics in Cyc! To properly sneer at Eliezer… it probably wouldn’t have helped even if Lenat had kept the public up to date on the latest lessons from Cyc through academic articles; Eliezer doesn’t actually keep up with the literature as it’s published.



  • You need to translate them into lesswrongese before you try interpreting them together.

    probability: he made up a number to go with his feelings about a topic

    subjective: the number is even more made up and feelings based than is normal for lesswrong

    noticeable: the number is really tiny, but big enough for Eliezer to fearmonger about!

    No, you don’t get to actually know what the number is; otherwise you could penalize Eliezer for predicting it wrongly or question why that number specifically. Just trust that the bayesianified language shows Eliezer thought really hard about it.



  • The series is on the sympathetic and charitable side in terms of tone and analysis, but it still gets to most of the major problems, so it’s probably a good resource to refer people to who want a “serious”, “non-sarcastic” dive into the issues with LW and EA.

    Edit: Reading this post in particular, it does a good job of not cutting the LWers any slack or granting them too much charity. And it really breaks down the factual details in a clear way, with illustrative direct quotes from LW.



  • I guess anti-communist fears and libertarian bias outweigh their fetishization of East Asians when it comes to the CCP?

    I haven’t seen any articles on the EA forums about spreading to China… China does have billionaires and philanthropists, but, judging by Jack Ma’s example, when they start talking big about altering society (in ways that just so happen to benefit the billionaires), they get to take a vacation from the public eye for a few months… so that might get in the way of EA billionaire activism?



  • Oh lol, yeah I forgot he originally used lesswrong as a pen name for HPMOR (he immediately claimed credit once it actually got popular).

    So the problem is lesswrong and Eliezer were previously obscure enough that few academic or educated sources bothered debunking them, but still prolific enough to get lots of casual readers. Sneerclub makes fun of their shit as it comes up, but effort posting is tiresome, so our effort posts are scattered among more casual mockery. There is one big essay connecting the dots, written by serious academics (Timnit Gebru and Emile Torres): https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599 . They point out the people shared between lesswrong, effective altruists, transhumanists, extropians, etc., and explain how the ideologies are related and how they originated.

    Also a related irony: Timnit Gebru is interested in, and has written serious academic papers about, algorithmic bias and AI ethics. But for whatever reason (Because she’s an actual academic? Because she wrote a paper accurately calling them out? Because of the racists among them who are actually in favor of algorithmic bias?) “AI safety” lesswrong people hate her and are absolutely not interested in working with the AI ethics field of academia. In a world where they were saner and less independent-minded cranks, lesswrong and MIRI could have tried to get into the field of AI ethics and used that to sanewash and build reputation/respectability for themselves (and maybe even tested their ideas in a field with immediately demonstrable applications instead of wildly speculating about AI systems that aren’t remotely close to existing). Instead, they only sort of obliquely imply AI safety is an extension of AI ethics whenever their ideas are discussed in mainstream news sources, but don’t really maintain the facade if actually pressed on it (I’m not sure how much of that is mainstream reporters trying to sanewash them versus deliberate deception on their part).

    For a serious but much gentler rebuttal of Effective Altruism, there is this blog: https://reflectivealtruism.com/ . Note this blog is written by an Effective Altruist trying to persuade other EAs of the problems, so it often extends too much credit to EA and lesswrong in an effort to get its points across.

    …and I realized you may not have context on the EAs… they are a movement spun off of academic thinking about how to do charity most effectively, and lesswrong was a major early contributor of thinking and members to their movement (they also currently get members from more mainstream recruiting, which occasionally causes clashes when more mainstream people look around and notice the AI doom-hype and the pseudoscientific racism). So like half of EA’s work is how to do charity effectively: mosquito nets for countries with malaria problems, nutrition supplements for malnourished children, anti-parasitic drugs to stop… and half their work is funding stuff like “AI safety” research or eugenics think tanks. Oh, and the EA’s utilitarian “earn to give” concept was a major inspiration for Sam Bankman-Fried trying to make a bunch of money through FTX, so that’s another dot connected! (And SBF got a reputation boost from his association with them, and in general there is the issue of billionaire philanthropists laundering their reputations and buying influence through philanthropy, so add that to the pile of problems with EA.)

    Edit: I realized you were actually asking for books about real rationality, not resources deconstructing rationalists… so “Thinking, Fast and Slow” is the book on cognitive biases that Eliezer cribs from. Douglas Hofstadter has a lot of interesting books on philosophical thinking in computer science terms: “Gödel, Escher, Bach” and “I Am a Strange Loop”. In some ways GEB is dated, but I think that adds context that makes it better (in that you can immediately see how the book is flawed, so you don’t think computer science can replace all other fields). The institute Timnit Gebru is a part of looks like a good source for academic writing on real AI harms: https://www.dair-institute.org/ (but I haven’t actually read most of her work yet, just the TESCREAL essay, and skimmed a few of her other writings).




  • Yeah, the genocidal imagery was downright unhinged, much worse than I expected from what little I’ve previously read of his. I almost wonder how ideologically adjacent allies like Siskind can still stand to be associated with him (but not really, Siskind can normalize any odious insanity if it serves his purposes).


  • His fears are my hope: that Trump fucking up hard enough will send the pendulum of public opinion the other way (and then the Democrats use that to push some actually leftist policies through… it’s a hope, not an actual prediction).

    He cultivated this incompetence and worshiped at the altar of the Silicon Valley CEO, so seeing him confronted with Elon’s and Trump’s clumsy incompetence is some nice schadenfreude.


  • So… on strategies for explaining to normies: a personal story often grabs people more than dry facts, so you could focus on the narrative of Eliezer trying a big idea, failing or giving up, and moving on to a bigger idea before repeating (stock bot to seed AI to AI programming language to AI safety to shutting down all AI)? You’ll need the Wayback Machine, but it is a simple narrative with a clear pattern?

    Or you could focus on the narrative arc of someone that previously bought into lesswrong? I don’t volunteer, but maybe someone else would be willing to take that kind of attention?

    I took a stab at both approaches here: https://awful.systems/comment/6885617


  • Big effort post… reading it will still be less effort than listening to the full Behind the Bastards podcast, so I hope you appreciate it…

    To summarize it from a personal angle…

    In 2011, I was a high schooler who liked Harry Potter fanfics. I found Harry Potter And The Methods of Rationality a fun story, so I went to the lesswrong website and was hooked on all the neat pop-science explanations. The AGI stuff and cryonics and transhumanist stuff seemed a bit fanciful but neat (after all, the present would seem strange and exciting to someone from a hundred years ago). Fast forward to 2015: HPMOR was finally finishing, I was finishing my undergraduate degree, and in the course of getting a college education I had actually taken some computer science and machine learning courses. Reconsidering lesswrong with my level of education then… I noticed MIRI (the institute Eliezer founded) wasn’t actually doing anything with neural nets, they were playing around with math abstractions, and they hadn’t actually published much formal writing (well, not actually any, but at the time I didn’t appreciate peer review vs. self-publishing and preprints), and even the informal lesswrong posts had basically stopped. I had gotten into a related blog, slatestarcodex (written by Scott Alexander), which filled some of the same niche, but in 2016 Scott published a defense of Trump, normalizing him, and I realized Scott had an agenda at cross purposes with the “center-left” perspective he portrayed himself as having. At around that point, I found the reddit version of sneerclub and it connected a lot of dots I had been missing. Far from the AI expert he presented himself as, Eliezer had basically done nothing but write loose speculation on AGI and pop-science explanations. And Scott Alexander was actually trying to push “human biodiversity” (i.e. racism disguised in pseudoscience) and neoreactionary/libertarian beliefs. From there, it became apparent to me that a lot of Eliezer’s claims weren’t just a bit fanciful, they were actually really, really ridiculous, and the community he had set up had a deeply embedded racist streak.

    To summarize it focusing on Eliezer…

    In the late 1990s, Eliezer was on various mailing lists, speculating with bright-eyed optimism about nanotech and AGI and genetic engineering and cryonics. He tried his hand at getting in on it, first trying to write a stock trading bot… which didn’t work; then trying to write a seed AI (AI that would bootstrap to strong AGI and change the world)… which also didn’t work; then trying to develop a new programming language for AI… which he never finished. Then he realized he had been reckless: an actually successful AI might have destroyed mankind, so really it was lucky he didn’t succeed, and he needed to figure out how to align an AI first. So from the mid 2000s on, he started getting donors (this is where Thiel comes in) to fund his research. People kind of thought he was a crank, or just didn’t seem concerned with his ideas, so he concluded they must not be rational enough, and set about, first on Overcoming Bias, then his own blog, lesswrong, writing a sequence of blog posts to fix that (and putting any actual AI research on hold). They got moderate attention, which exploded in the early 2010s when a side project of writing Harry Potter fanfiction took off. He used this fame to get more funding and spread his ideas further. Finally, around the mid 2010s, he pivoted to actually trying to do AI research again… MIRI has a sparse (compared to the number of researchers they hired and how productive good professors in academia are) collection of papers focused on an abstract concept for AI called AIXI, which basically depends on having infinite computing power and isn’t remotely implementable in the real world. Last I checked, they didn’t get any further than that. Eliezer was skeptical of neural network approaches, derisively thinking of them as voodoo science trying to blindly imitate biology with no proper understanding, so he wasn’t prepared for NNs taking off in mid-2012 and leading to GPT and LLM approaches. So when ChatGPT started looking impressive, he started panicking, leading to him going on a podcast circuit professing doom (after all, if he and his institute couldn’t figure out AI alignment, no one can, and we’re likely all doomed for reasons he has written tens of thousands of words in blog posts about without being refuted at a quality he believes is valid).

    To tie off some side points:

    • Peter Thiel was one of the original funders of Eliezer and his institution. It was probably a relatively cheap attempt to buy reputation, and it worked to some extent. Peter Thiel has cut funding since Eliezer went full doomer (Thiel probably wanted Eliezer as a Silicon Valley hype man, not an apocalypse cult leader).

    • As Scott continued to write posts defending the far right with a weird posture of being center-left, Slatestarcodex got an increasingly racist audience, culminating in a spin-off forum with full-on 14-words white supremacists. He has played a major role in the alt-right pipeline that produced some of Trump’s most loyal supporters.

    • Lesswrong also attracted some of the neoreactionaries (libertarian wackjobs that want a return to monarchy), among them Mencius Moldbug (real name Curtis Yarvin). Yarvin has written about strategies for dismantling the federal government, which DOGE is now implementing.

    • Eliezer may not have been much of a researcher himself, but he inspired a bunch of people, so a lot of OpenAI researchers buy into the hype and/or doom. Sam Altman uses Eliezer’s terminology as marketing hype.

    • As for lesswrong itself… what is original isn’t good and what’s good isn’t original. Lots of the best sequences are just a remixed form of books like Kahneman’s “Thinking, Fast and Slow”. And the worst sequences demand you favor Eliezer’s take on bayesianism over actual science, or are focused on the coming AI salvation/doom.

    • Other organizations have taken on the “AI safety” mantle. They are more productive than MIRI, in that they actually do stuff with actually implemented ‘AI’, but what they do is typically contrive (emphasis on contrive) scenarios where LLMs will “act” “deceptive” or “power-seeking” or whatever scary buzzword you can imagine, and then publish papers about it with titles and abstracts that imply the scenarios are much more natural than they really are.

    Feel free to ask any follow-up questions if you genuinely want to know more. If you actually already know about this stuff and are looking for a chance to evangelize for lesswrong or the coming LLM God, the mods can smell that out and you will be shown the door, so don’t bother (we get one or two people like that every couple of weeks).