• merc@sh.itjust.works · 6 days ago

    All this really does is show areas where the writing requirements are already bullshit and should be fixed.

    Like, consumer financial complaints. People feel they have to use LLMs because when they write in using plain language, they feel they’re ignored, and they’re probably right. It suggests that these financial companies are under-regulated and overly powerful. If they weren’t, they wouldn’t be able to ignore complaints that aren’t written in lawyerly language.

    Press releases: we already know they’re bullshit. No surprise that now they’re using LLMs to generate them. These shouldn’t exist at all. If you have something to say, don’t say it in a stilted press-release way. Don’t invent quotes from the CEO. If something is genuinely good and exciting news, make a blog post about it by someone who actually understands it and can communicate their excitement.

    Job postings. Another bullshit piece of writing. An honest job posting would probably be something like: “Our sysadmin needs help because he’s overworked. He says some of the key skills he’d need in a helper are X, Y and Z, but even if you don’t have those skills, you might be useful in other ways. It’s a stressful job, and it doesn’t pay that well, but it’s steady work. Please don’t apply if you’re fresh out of school and don’t have any hands-on experience.” Instead, job postings have evolved into some weird cargo-culted style of writing involving stupid phrases like “the ideal candidate will…” and lies about something being a “fast-paced environment” rather than simply “disorganized and stressful”. You already basically need a “secret decoder ring” to understand a job posting, so yeah, why not just feed a realistic job posting to an LLM and make it come up with some bullshit.

      • merc@sh.itjust.works · 1 day ago

        And there are lawyers who have been raked over the coals by judges for submitting AI-generated documents in which the LLM “hallucinated” cases that didn’t exist and cited them as precedent.

    • ilovepiracy@lemmy.dbzer0.com · 6 days ago

      Exactly. LLMs assisting people in writing soul-sucking corporate drivel is a good thing; I hope this changes the public perception of the whole umbrella of ‘formal office writing’ (including internal emails, job applications, etc.). So much time-wasting bullshit that produces nothing.

  • taiyang@lemmy.world · 7 days ago

    I’m the type to be in favor of new tech, but after seeing this stuff available for a few years, it really is a downgrade. Midterms hit my classes this week and I’ll be grading them next week. I’m already seeing people try to pass off GPT output as their own work, but the quality of the answers has really dropped in the past year.

    Just this last week, I was grading a quiz on persuasion where, for fun, I have students pick an advertisement to analyze. You know, to personalize the experience. This was right after the Super Bowl, so we’re swimming in examples. It can even be audio, like a podcast ad, or a fucking bus bench, or literally anything else.

    60% of them used the Nike “Just Do It” campaign, not even a specific commercial. I knew something was amiss, so I asked GPT what example it would probably use if asked. Sure enough: Nike, “Just Do It”.

    Why even cheat on that? The universe has a billion ad examples. You could even feed GPT one and have it analyze it for you. It’d be wrong, because you have to reference the book, but at least it wouldn’t be as blatant.

    I didn’t unilaterally give them 0s, but they usually got it wrong anyway, so I didn’t really have to. I did warn them that using GPT this way on the midterm will likely get them in trouble, though, since it’s against the rules. I don’t even care that much, because again, the quality is usually worse anyway. But I have to grade this stuff, and I don’t want to suffer like a sci-fi magazine getting thousands of LLM submissions trying to win prizes.

  • pezhore@infosec.pub · 7 days ago

    I was just commenting on how shit the Internet has become as a direct result of LLMs. Case in point: I wanted to look up how to set up a router table so I could do some woodworking. The first result started out halfway decent, but the second section switched abruptly to something about routers having Wi-Fi and Ethernet ports, confusing network routers with the power tool. Any human editor would catch that mistake, but there it was.

    I can only see this getting worse.

    • null_dot@lemmy.dbzer0.com · 7 days ago

      It’s not just the internet.

      Professionals (using the term loosely) are using LLMs to draft emails and reports, and then other professionals (?) are using LLMs to summarise those emails and reports.

      I genuinely believe that the general effectiveness of written communication has regressed.

      • MisterFrog@lemmy.world · 2 days ago (edited)

        I honestly wonder what these sorts of jobs are. I feel like I barely ever have a reason to use AI in my job.

        But this may be because I’m not summarising much, if ever.

        AI can’t think, and how long are the emails people are writing that it’s ever worth the effort of asking the AI to write them for you?

        By the time you’ve asked it to include everything you wanted, you could have just written the damn email.

      • pezhore@infosec.pub · 7 days ago

        I’ve tried using an LLM for coding, specifically Copilot for VS Code. Only about 4 out of 10 times will it generate accurate code, which means I spend more time troubleshooting, correcting, and validating what it produces than actually writing code.

        • piccolo@sh.itjust.works · 7 days ago

          I like using GPT to generate PowerShell scripts; surprisingly, it’s pretty good at that. It’s a small task, so it’s unlikely to go off into the deep end.
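
          The kind of thing I mean is a small, self-contained script like the sketch below. This is just my own illustration of the task size, not an actual GPT output; the specific cmdlets and the “ten largest files” example are mine:

          ```powershell
          # Illustrative example only (not GPT output): report the ten largest
          # files under the current directory, with sizes rounded to MB.
          Get-ChildItem -Path . -Recurse -File -ErrorAction SilentlyContinue |
              Sort-Object -Property Length -Descending |
              Select-Object -First 10 -Property FullName, @{ Name = 'SizeMB'; Expression = { [math]::Round($_.Length / 1MB, 2) } }
          ```

          Something that small is easy to eyeball for correctness before you run it, which is a big part of why it stays on the rails.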