• Onno (VK6FLAB)@lemmy.radio · 6 days ago

    In my experience, the single biggest bully on the internet is the set of servers controlled by Meta, which literally perform DDoS attacks whilst crawling, hitting sites several orders of magnitude harder than all the others combined.

    Actively blocking them was the only option left.

    • alaphic@lemmy.world · 6 days ago

      Jeez, don’t these fucksticks have enough data already? People are literally handing it to them hand over fist and they’re still like “no, we need to forcibly suck the data out of you until your servers burst into flames”

  • mesa@lemmy.world · 6 days ago

    Yep, same thing. I have some small servers and was getting hammered by AI crawlers from OpenAI-controlled IPs not respecting robots.txt. Had to block all their IP addresses and create an AI black hole in order to stop them DDoSing my tiny site(s).

    • Tangent5280@lemmy.world · 3 days ago

      Hey, could you say how you did that? I’m looking to put a few servers up and I’m worried about this too

      • mesa@lemmy.world · edited · 3 days ago

        I used fail2ban + the router to block the IP addresses. Then, if the headers come from OpenAI, they also get bounced.
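
        The header bounce is just a User-Agent check before anything else runs. A rough Flask sketch of the idea (the user-agent strings are the ones OpenAI publishes for its crawlers, so treat that list as something to double-check, not gospel):

        # Bounce requests whose User-Agent looks like an OpenAI crawler.
        # Assumption: the published crawler UAs include GPTBot, ChatGPT-User, OAI-SearchBot.
        from flask import Flask, request, abort

        app = Flask(__name__)

        BLOCKED_UA_SUBSTRINGS = ("GPTBot", "ChatGPT-User", "OAI-SearchBot")

        @app.before_request
        def bounce_ai_crawlers():
            ua = request.headers.get("User-Agent", "")
            if any(bot in ua for bot in BLOCKED_UA_SUBSTRINGS):
                abort(403)  # reject before the request reaches any route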

        Below is a template I created on the fly for the AI black hole. It's decent, but I feel like it could be better.

        from flask import Flask, request, render_template_string
        import time
        from collections import defaultdict
        import random
        
        app = Flask(__name__)
        
        # Data structure to keep track of requests per IP
        ip_requests = defaultdict(list)
        IP_REQUEST_THRESHOLD = 1000  # Requests threshold for one hour
        TIME_WINDOW = 3600  # Time window of one hour in seconds
        
        # Function to track and limit requests based on IP
        def track_requests(ip):
            current_time = time.time()
            ip_requests[ip] = [t for t in ip_requests[ip] if current_time - t < TIME_WINDOW]  # Remove old requests
            ip_requests[ip].append(current_time)
            return len(ip_requests[ip])
        
        # Serve slow pages incrementally
        @app.route('/')
        def index():
            ip = request.remote_addr
            request_count = track_requests(ip)
        
            if request_count > IP_REQUEST_THRESHOLD:
                return serve_slow_page(request_count)
            else:
                return 'Welcome to the site!'
        
        def serve_slow_page(request_count):
            """Serve a progressively slower page."""
            delay = min(10, request_count / 1000)  # Slow down incrementally, max 10 seconds delay
            time.sleep(delay)  # Delay to slow down the request
        
            # Generate the next "black hole" link
            next_page_link = f'/slow/{random.randint(1000, 9999)}'
            
            html_content = f"""
            <html>
            <head><title>Slowing You Down...</title></head>
            <body>
                <h1>You are being slowed down!</h1>
                <p>This is taking longer than usual because you're making too many requests.</p>
                <p>You have made more than {IP_REQUEST_THRESHOLD} requests in the past hour.</p>
                <p>Next step: <a href="{next_page_link}">Click here for the next page...</a></p>
            </body>
            </html>
            """
            return render_template_string(html_content)
        
        @app.route('/slow/<int:page_id>')
        def slow_page(page_id):
            ip = request.remote_addr
            request_count = track_requests(ip)
        
            if request_count > IP_REQUEST_THRESHOLD:
                return serve_slow_page(request_count)
            else:
                return 'Welcome back to normal!'
        
        if __name__ == '__main__':
            app.run(debug=True)
        
  • Cosmic Cleric@lemmy.world · 6 days ago

    From the article …

    GNOME sysadmin Bart Piotrowski shared on Mastodon that only about 3.2 percent of requests (2,690 out of 84,056) passed their challenge system, suggesting the vast majority of traffic was automated.

  • henfredemars@infosec.pub · 6 days ago

    Even mainly text-based sites like LWN are feeling the strain and finding it hard to support all these parasitic bots.

      • PlutoniumAcid@lemmy.world · 6 days ago

        Sure, but the challenge is how to block them without putting undue load on humans.

        In the olden days, you’d just host a webserver and be done with it. Today you need elaborate setups to trick bots. It’s a losing proposition.
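
        One shape those challenge systems take is proof-of-work: make the client burn a little CPU before it gets content, which is nothing for a single human page load but adds up fast at crawler volume. A toy version of the idea (purely illustrative, not what GNOME actually runs):

        # Toy proof-of-work challenge: find a nonce whose hash has a given prefix.
        # Difficulty here is a simple hex-prefix check.
        import hashlib
        import itertools

        DIFFICULTY_HEX_ZEROS = 5  # ~1M hashes on average; tune to taste

        def solve(challenge: str) -> int:
            """Client side (normally done in JS in the browser)."""
            target = "0" * DIFFICULTY_HEX_ZEROS
            for nonce in itertools.count():
                digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
                if digest.startswith(target):
                    return nonce

        def verify(challenge: str, nonce: int) -> bool:
            """Server side: a single hash to check the client's work."""
            target = "0" * DIFFICULTY_HEX_ZEROS
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
            return digest.startswith(target)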

  • tomyhaw@lemmy.world · 6 days ago

    I put a rate limit on my nginx Docker container. No clue if it worked, but my customers are able to use the website now. I get a ton of automated probing and SQL injection requests. Pretty horrible, considering I built my app for very minimal traffic and use session data in places rather than pulling from the DB, so the DDoS basically corrupts sessions.
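
    For anyone curious, a per-IP rate limit in nginx looks roughly like this (a sketch only; the zone name, rate and upstream are placeholders, not my real values):

    # Both directives live inside the http {} block of nginx.conf.
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

    server {
        listen 80;

        location / {
            # allow short bursts, reject the rest (503 by default)
            limit_req zone=per_ip burst=20 nodelay;
            proxy_pass http://app:5000;  # placeholder upstream container
        }
    }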

    • tempest@lemmy.ca · 5 days ago

      The Internet has always been like that, even before the AI stuff picked up steam. If you expose anything to the public Internet, it takes about 5 seconds for things to start port scanning it and, if they can, trying WordPress/Drupal exploits.

  • Cyber Yuki@lemmy.world · 5 days ago

    It’s the old spam problem again. Spammers pass the cost of reaching their customers on to their victims, while AI bots pass the cost of their crawling on to the sites they crawl (without authorization).

    I see no easy solution for this.

  • Goun@lemmy.ml · 6 days ago

    What if we start throttling them so we make them waste time? Like, we could throttle consecutive requests, so anyone hitting the server aggressively would get slowed down.
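
    A rough sketch of that as a WSGI middleware, so it could sit in front of any Python app regardless of framework (the window and delay numbers are arbitrary):

    import time
    from collections import defaultdict

    class ThrottleMiddleware:
        """Adds a growing delay for IPs that keep hammering the server."""

        def __init__(self, app, window=60, free_requests=30, max_delay=10.0):
            self.app = app
            self.window = window                 # seconds of history to keep per IP
            self.free_requests = free_requests   # requests per window with no delay
            self.max_delay = max_delay           # cap on the artificial delay
            self.hits = defaultdict(list)        # ip -> recent request timestamps

        def __call__(self, environ, start_response):
            ip = environ.get("REMOTE_ADDR", "unknown")
            now = time.time()
            recent = [t for t in self.hits[ip] if now - t < self.window]
            recent.append(now)
            self.hits[ip] = recent

            excess = len(recent) - self.free_requests
            if excess > 0:
                # every extra request in the window buys a bit more waiting
                time.sleep(min(self.max_delay, 0.1 * excess))

            return self.app(environ, start_response)

    Wrapping a Flask app would just be app.wsgi_app = ThrottleMiddleware(app.wsgi_app).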