It’s not always easy to distinguish between existentialism and a bad mood.

  • 2 Posts
  • 18 Comments
Joined 2 years ago
Cake day: July 2nd, 2023


  • Here’s the full text:

    Fake radical honesty: when a dishonest person self-discloses taboo or undesirable things about themselves, but then omits the worst thing or things. They make themselves look honest and they’re not. This nasty trick ruined my life once. It occurs to me that this ploy may have been used to cover up the miricult scandal (https://archive.is/miricult.com) after a discussion with someone about what happened. A friend said something like that they’d looked into this and the people involved confessed, but only one minor was molested. For some reason this resulted in increased trust. It should not have. Have you seen fake radical honesty anywhere?

    For someone not steeped in the lore, why is this important?




  • The first prompt programming libraries start to develop, along with the first bureaucracies.

    I went three layers deep into his references and his references’ references to find out what the hell prompt programming is supposed to be, and ended up in a gwern footnote:

    It's the ideologized version of You're Prompting It Wrong, which I suspected but doubted, because why would they pretend that LLMs being finicky and undependable unless you luck into very particular ways of asking for very specific things is a sign that they're doing well?

    gwern wrote:

    I like “prompt programming” as a description of writing GPT-3 prompts because ‘prompt’ (like ‘dynamic programming’) has almost purely positive connotations; it indicates that iteration is fast as the meta-learning avoids the need for training so you get feedback in seconds; it reminds us that GPT-3 is a “weird machine” which we have to have “mechanical sympathy” to understand effective use of (eg. how BPEs distort its understanding of text and how it is always trying to roleplay as random Internet people); implies that prompts are programs which need to be developed, tested, version-controlled, and which can be buggy & slow like any other programs, capable of great improvement (and of being hacked); that it’s an art you have to learn how to do and can do well or poorly; and cautions us against thoughtless essentializing of GPT-3 (any output is the joint outcome of the prompt, sampling processes, models, and human interpretation of said outputs).






  • It’s pick-me Objectivism, only more overtly culty the closer you are to it irl. Imagine Scientology if it were organized around AI doomerism and naive utilitarianism while posing as a get-smart-quick scheme.

    Its main function (besides getting the early adopters laid) is to provide court philosophers for the technofeudalist billionaire class, while grooming talented young techies into a wide variety of extremist thought both old and new, mostly by fostering contempt for established epistemological authority in the same way QAnon followers insist people do their own research, i.e. as a euphemism for only paying attention to ingroup-approved influencers.

    It seems to have both a sexual harassment problem and a suicide problem, with a lot of irresponsible scientific racism and drug abuse in the mix.







  • in order to dissuade hypothetical agents from blackmailing you

    There’s also a whole thing with Yud accepting the many-worlds interpretation as obvious truth, which leads (some) rationalists to believe that getting killed in one timeline helps your surviving parallel selves by bolstering your case for being unblackmailable by said hypothetical agents, who are also from the future, which is why you can’t negotiate with them directly.