I have noticed that Lemmy, at least from what I can tell, does not have a lot of fake accounts from bots or AI slop so far. I am wondering how the heck we keep this community free of that kind of stuff as continuous waves of redditors land here and the platform grows.

EDIT: a potential solution:

I have an idea: let people flag a post or a user as a bot, and if it is confirmed to be a bot, the moderators could have a tool that essentially shadow bans it into an inbox that just gets dumped occasionally. I am thinking this because the people creating the bots might not realize their bot has been banned, and so wouldn't create replacement bots. This could effectively reduce the number of bots without the bot creators ever knowing whether their bots have been blocked. The one other thing that would be needed is a way to request being un-banned for anyone hit as a false positive. All of this would have to be built into Lemmy's moderation tools, and I don't know if any of it exists currently.
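
A minimal sketch of that flow in Python, assuming a simple in-memory store. Every name here (ShadowBanRegistry, Post, and so on) is hypothetical; nothing like this is confirmed to exist in Lemmy's moderation API today:

```python
# Sketch of the shadow-ban-into-an-inbox idea above. All names are
# invented for illustration; this is not Lemmy's real moderation API.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Post:
    author: str
    body: str

@dataclass
class ShadowBanRegistry:
    # Posts from shadow-banned users land here instead of the public feed.
    quarantine: Dict[str, List[Post]] = field(default_factory=dict)
    appeals: List[str] = field(default_factory=list)

    def shadow_ban(self, user: str) -> None:
        self.quarantine.setdefault(user, [])

    def submit(self, post: Post, feed: List[Post]) -> None:
        # The banned user still "posts" successfully from their point of
        # view, so the bot's creator gets no signal that it was caught.
        if post.author in self.quarantine:
            self.quarantine[post.author].append(post)
        else:
            feed.append(post)

    def dump_quarantine(self) -> None:
        # The occasional dump the OP describes: discard quarantined posts.
        for posts in self.quarantine.values():
            posts.clear()

    def request_unban(self, user: str) -> None:
        # False-positive escape hatch: queue the user for moderator review.
        if user in self.quarantine and user not in self.appeals:
            self.appeals.append(user)
```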

  • FenderStratocaster@lemmy.world · 3 days ago

    Keeping bots and AI-generated content off Lemmy (an open-source, federated social media platform) can be a challenge, but here are some effective strategies:

    1. Enable CAPTCHA Verification: Require users to solve CAPTCHAs during account creation and posting. This helps filter out basic bots (a toy challenge/answer sketch follows this list).

    2. User Verification: Consider account-age or karma-based posting restrictions. New users could be limited until they have engaged authentically (see the gating sketch after this list).

    3. Moderation Tools: Use Lemmy’s moderation features to block and report suspicious users. Regularly update blocklists.

    4. Rate Limiting & Throttling: Limit post and comment frequency for new or unverified users. This makes spammy behavior harder to sustain (see the token-bucket sketch after this list).

    5. AI Detection Tools: Implement tools that analyze post content for AI-generated patterns. Some models can flag or reject obvious bot posts (the scoring sketch after this list shows the general shape).

    6. Community Guidelines & Reporting: Establish clear rules against AI spam and encourage users to report suspicious content.

    7. Manual Approvals: For smaller communities, manually approving new members or first posts can be effective. (At the instance level, Lemmy's registration-applications mode does essentially this.)

    8. Federation Controls: Choose which instances to federate with. Blocking or limiting interactions with known spammy instances helps (see the allowlist/blocklist sketch after this list).

    9. Machine Learning Models: Deploy spam-detection models that can analyze behavior and content patterns over time; the scoring sketch below applies here as well.

    10. Regular Audits: Periodically review community activity for trends and emerging threats.
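
    For item 1, a toy sketch of the challenge/answer round-trip behind captcha-gated signup. Lemmy ships its own registration captcha toggle; CaptchaStore and the arithmetic challenge here are invented purely to show the shape of the flow:

    ```python
    # Toy captcha round-trip for item 1; everything here is illustrative,
    # not Lemmy's built-in captcha implementation.
    import random
    import secrets

    class CaptchaStore:
        def __init__(self) -> None:
            self.pending: dict[str, int] = {}

        def new_challenge(self) -> tuple[str, str]:
            # Issue a token plus a human-solvable question.
            a, b = random.randint(1, 9), random.randint(1, 9)
            token = secrets.token_hex(8)
            self.pending[token] = a + b
            return token, f"What is {a} + {b}?"

        def verify(self, token: str, answer: str) -> bool:
            # One-shot check: the token is consumed whether or not it passes.
            expected = self.pending.pop(token, None)
            return expected is not None and answer.strip() == str(expected)
    ```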
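
    For item 2, one way an account-age/karma gate could look. The User fields and the thresholds are assumptions for illustration, not Lemmy's actual data model:

    ```python
    # Hypothetical posting gate for new accounts (item 2).
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    MIN_ACCOUNT_AGE = timedelta(days=7)   # invented threshold
    MIN_KARMA = 10                        # invented threshold

    @dataclass
    class User:
        created_at: datetime
        karma: int

    def may_post(user: User, now: datetime) -> bool:
        """Let accounts post once they have aged a week or earned karma."""
        old_enough = now - user.created_at >= MIN_ACCOUNT_AGE
        return old_enough or user.karma >= MIN_KARMA
    ```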
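
    For item 4, a standard token-bucket limiter: each user gets a small burst allowance that refills over time. Sketched in-memory; a real instance would persist one bucket per user:

    ```python
    # Token-bucket rate limiter for item 4.
    import time

    class TokenBucket:
        def __init__(self, capacity: float, rate: float) -> None:
            self.capacity = capacity          # burst size
            self.rate = rate                  # tokens regained per second
            self.tokens = capacity
            self.updated = time.monotonic()

        def allow(self) -> bool:
            # Refill proportionally to elapsed time, then spend one token.
            now = time.monotonic()
            elapsed = now - self.updated
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            self.updated = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    # e.g. new users: a 5-post burst, then one post every two minutes
    limiter = TokenBucket(capacity=5, rate=1 / 120)
    ```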
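
    For items 5 and 9, a crude content-scoring sketch. Reliable AI-text detection is an unsolved problem, so these heuristics (duplicate posts, link density) and thresholds are invented; the point is the flag-for-moderator-review pattern rather than auto-banning:

    ```python
    # Crude behavior/content scoring for items 5 and 9; real detection
    # would need far more signal than this.
    import re
    from collections import Counter

    def spam_score(posts: list[str]) -> float:
        score = 0.0
        counts = Counter(posts)
        if counts and counts.most_common(1)[0][1] > 3:
            score += 0.5                 # same text posted over and over
        for body in posts:
            links = len(re.findall(r"https?://", body))
            words = max(len(body.split()), 1)
            if links / words > 0.2:
                score += 0.1             # link-heavy post
        return min(score, 1.0)

    def needs_review(posts: list[str], threshold: float = 0.6) -> bool:
        # Flag for moderator review rather than auto-banning outright,
        # so false positives get a human look.
        return spam_score(posts) >= threshold
    ```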
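
    For item 8, the allowlist/blocklist check in its simplest form. Lemmy's admin settings let you allow or block federation peers; this generic sketch just mirrors that logic with placeholder instance names:

    ```python
    # Federation gate for item 8; instance names are placeholders.
    from typing import Optional, Set

    BLOCKED_INSTANCES = {"spam.example", "botfarm.example"}
    ALLOWED_INSTANCES: Optional[Set[str]] = None   # None = open federation

    def accept_activity(origin_instance: str) -> bool:
        # Blocklist always wins; an allowlist, if set, closes federation
        # to everyone else.
        if origin_instance in BLOCKED_INSTANCES:
            return False
        if ALLOWED_INSTANCES is not None:
            return origin_instance in ALLOWED_INSTANCES
        return True
    ```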

    Do you run a Lemmy instance, or are you just looking to keep your community clean from AI-generated spam?