• 0 Posts
  • 35 Comments
Joined 2 years ago
Cake day: June 25, 2023

  • Please calm down.

    for some reason this has gotten people very worked up

    Seriously I don’t know what I said that is so controversial or hard to understand.

    I don’t know why it’s controversial here.

    imagine coming into a conversation with people you don’t fucking know, taking a swing and a miss at one of them, and then telling the other parties in the conversation that they need to calm down — about racism.

    the rest of your horseshit post is just you restating your original point. we fucking got it. and since you missed ours, here it is one more time:

    race science isn’t real. we’re under no obligation to use terms invented by racists that describe nothing. if we’re feeling particularly categorical about our racists on a given day, or pointing out that one is using the guise of race science? sure, use the term if you want.

    tone policing people who want to call a racist a racist ain’t fucking it. what in the fuck do you think you added to this conversation? what does anyone gain from your sage advice that “X is Y but Y isn’t X” when the other poster didn’t say that Y is X but instead that Y doesn’t exist?

    so yeah no I’m not calm, go fuck yourself. we don’t need anyone tone policing conversations about racism in favor of the god damn racists


  • Race pseudoscience is racist

    yes, V0ldek said this

    but not all racism is racial pseudoscience

    they didn’t say this though, you did. race science is an excuse made up by racists to legitimize their own horseshit, just like how fascists invent a thousand different names to avoid being called what they are. call a spade a fucking spade.

    why are you playing bullshit linguistic games in a discussion about racism? this is the exact same crap the “you can’t call everyone a nazi you know, that just waters down the term” tone police would pull when I’d talk about people who, shockingly, turned out to be fucking nazis.

    “all nazis are fascists but not all fascists are nazis” who gives a shit, really. fascists and racists are whatever’s convenient for them at the time. a racist will and won’t believe in race science at any given time because it’s all just a convenient justification for the racist to do awful shit.



  • no problem! I don’t mean to give you homework, just threads to read that might be of interest.

    yeah, a few of us are Philosophy Tube fans, and I remember they’ve done a couple of good videos about parts of TESCREAL — their Effective Altruism and AI videos specifically come to mind.

    if you’re familiar with Behind the Bastards, they’ve done a few episodes I can recommend dissecting TESCREAL topics too:

    • their episodes about the Zizians are definitely worth a listen; they explore and critique the group as a cult offshoot of LessWrong Rationalism.
    • they did a couple of older episodes on AI cults and their origins that are very good too.

  • also fair enough. you might still enjoy a scroll through our back archive of threads if you’ve got time for it — there is a historical context to transhumanism that people like Musk exploit to further their own goals, and that’s definitely something to be aware of, especially as TESCREAL elements gain overt political power. there are positive versions of transhumanism and the article calls one of them out — the Culture is effectively a model for socialist transhumanism — but one must be familiar with the historical baggage of the philosophy or risk giving cover to people currently looking to cause harm under transhumanism’s name.


  • fair enough!

    but I don’t actually enjoy arguing and don’t have the skills for formalized “debate” anyway.

    it’s ok, nobody does. that’s why we ban it unless it’s amusing (which effectively bans debate for everyone unless they know their audience well enough to not fuck up) — shitty debatelords take up a lot of thread space and mental energy and give essentially nothing back.

    wherever “here” is

    SneerClub is a fairly old community if you count its Reddit origins; part of what we do here is sneering at technofascists and other adherents to the TESCREAL belief package, though SneerClub itself tends to focus on the LessWrong Rationalists. that’s the context we tend to apply to articles like the OP.


  • There is a certain irony to everyone involved in this argument, if it can be called that.

    don’t do this “debate fan here” crap here, thanks

    This, and similar writing I’ve seen, seems to make a fundamental mistake in treating time like only the next few decades, maybe, exist, that any objective that takes longer than that is impossible and not even worth trying, and that any problem that emerges after a longer period of time may be ignored.

    this isn’t the article you’re thinking of. this article is about Silicon Valley technofascists making promises rooted in Golden Age science fiction as a manipulation tactic. at no point does the article state that, uh, long-term objectives aren’t worth trying because they’d take a long time??? and you had to ignore a lot of the text of the article, including a brief exploration of the techno-optimists and their fascist ties (and contrasting cases where futurism specifically isn’t fascist-adjacent), to come to the wrong conclusion about what the article’s about.

    unless you think the debunked physics and unrealistic crap in Golden Age science fiction will come true if only we wish long and hard enough, in which case: aw, precious, this article is about you!





  • some experts genuinely do claim it as a possibility

    zero experts claim this. you’re falling for a grift. specifically,

    i keep using Claude as an example because of the thorough welfare evaluation that was done on it

    asking the LLM about “its mental state” is part of a very old con dating back to mechanical Turks playing chess and horses that do math. of course the LLM generated some interesting sentences when prompted about its internal state — it was trained on appropriated copies of every piece of fiction in existence, including world-class works of sci-fi (with sentient AIs and everything!), and it was tuned to generate “interesting” (see: profitable, and there’s nothing more profitable than a con with enough marks) responses. that’s why the others keep mentioning pareidolia — the only intelligence in the loop is the reader assigning meaning to the slop they’re reading, and if you step out of that role, it really does become clear that what you’re reading is absolute slop.

    i don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this though; he’s not exactly a Michael Levin mind philosopher, he just wants to score points by implying it has agency

    you don’t think there’s any harm in thinking about the possibility, but all Yud does is create harm by grifting people who buy into that possibility. Yud’s Rationalist cult is the original driving force behind the people telling you LLMs must be sentient. do you understand that?

    Like it has at least the same amount of value as like letting an insect out instead of killing it

    that insect won’t go on to consume so much energy and water and make so much pollution it creates an environmental crisis. the insect doesn’t exist as a product of the exploitation of third-world laborers or of artists and writers whose work was plagiarized. the insect isn’t a stupid fucking product of capitalism designed to maximize exploitation. I don’t acknowledge the utterly slim possibility that the insect might be or do any of the previous, because ignoring events with a near-zero probability of occurring is part of how I avoid looking like a god damn clown.

    you say you acknowledge the harms done by LLMs, but I’m not seeing it.


  • centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:

    i won’t say that claude is conscious but i won’t say that it isn’t either and it’s always better to err on the side of caution

    the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.

    claims that LLMs, in spite of all known theories of computer science and information theory, are conscious, should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.

    if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?

    schizoposting

    fuck off with this

    even if its wise imo to try not to be abusive to AI’s just incase

    describe the “incase” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?





  • lix is really cool! it’s very important to have a Nix evaluator that isn’t under fash control because none of the technology can exist without the language, and they’ve made some big improvements already to Nix’s build system, ergonomics, and internal docs — namely, a lot of the improvements the fash parts of the community fought hard to block, because technology that’s both powerful and obscure like Nix can easily be leveraged for political gain (see my previous post on this topic if you’d like more details on what the political side of this most likely looks like). I’m hoping lix proves generally resistant to assholes coming and ruining things — unfortunately, what happened to Nix keeps happening with other open source projects.

    aux is another project that’s along the same lines as lix. it used to be a nixpkgs replacement, but since then it’s become something that’s a bit harder for me to decipher but probably more promising if it works — I believe it’s a reworking of the Nix standard library and other foundational pieces to be less dependent on a centralized repo and more modular. they seem to be planning a package set (tidepool) on top of that new modular foundation too, plus they’re writing up a bunch of missing language docs. if what they’re doing pans out, aux and lix could be a good basis for a Nix replacement.

    the full NixOS system is unfortunately still irreplaceable for me, which fucking sucks — every computer I touch still runs it (my desktops, my laptops, the Lemmy instance where this thread lives, my fucking air conditioner thermostats…). replacing the NixOS options set and all its services and mechanisms is definitely a big job, and nobody’s managed it yet — I’ve even briefly considered GuixSD, but it’s actively becoming more hostile to running on real hardware (in the stupidest GNU way imaginable) including the hardware I run NixOS on, and the packages I rely on the most are weirdly primitive in guix (including emacs of all things).
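
    to give a sense of what “the options set” actually means here, below is a minimal sketch of a NixOS module. the nginx options are the real ones; the thermostat service and its script path are made-up stand-ins for illustration, not my actual config.

    ```nix
    # minimal sketch of why the NixOS module system is hard to replace:
    # one declarative options tree wires up packages, users, systemd
    # units, and config files across every machine that evaluates it.
    { config, pkgs, ... }:

    {
      # a whole preconfigured service from a couple of option settings
      services.nginx = {
        enable = true;
        virtualHosts."example.org".root = "/var/www/example";
      };

      # the same mechanism scales down to weird hardware; this unit and
      # its script path are hypothetical, standing in for the kind of
      # thing I mean by thermostats running NixOS
      systemd.services.thermostat = {
        description = "air conditioner thermostat control loop";
        wantedBy = [ "multi-user.target" ];
        serviceConfig.ExecStart = "${pkgs.python3}/bin/python3 /srv/thermostat.py";
      };
    }
    ```

    replacing NixOS means replacing not just the language but thousands of modules like these, plus the machinery that merges and type-checks them into a bootable system. that’s the big job.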




  • lisp machines but networked

    urbit’s even stupider than this, cause lisp machines were infamously network-reliant (MIT, Symbolics, and LMI machines wouldn’t even boot properly without a particular set of delicately-configured early network services, though they had the core of their OS on local storage), so yarvin’s brain took that and went “what if all I/O was treated like a network connection”, a decision that causes endless problems of its own

    speaking of, one day soon I should release my code that sets up a proper network environment for an MIT CADR machine (which mostly relies on a PDP-10 emulator running one of the AI lab archive images) and a complete Symbolics Virtual Lisp Machine environment (which needs a fuckton of brittle old Unix services, including a particular version of an old pre-NTP time daemon (somehow crucial to booting the lisp machine) and NFSv1 (with its port mapper dependency and required utterly insecure permissions)), so there’s at least a nice way to experience some of this history that people keep stealing from firsthand