Cool project: "Nepenthes" is a tarpit to catch (AI) web crawlers.
"It works by generating an endless sequences of pages, each of which with dozens of links, that simply go back into a the tarpit. Pages are randomly generated, but in a deterministic way, causing them to appear to be flat files that never change. Intentional delay is added to prevent crawlers from bogging down your server, in addition to wasting their time. Lastly, optional Markov-babble can be added to the pages, to give the crawlers something to scrape up and train their LLMs on, hopefully accelerating model collapse."
@tante I have mixed feelings.
Crawlers should respect robots.txt….
At the same time: there is clearly an emotionally driven bias at play around LLMs.
I feel weird about the idea of active sabotage. Considering it only targets bad actors… and considering that robots.txt is, in my opinion, often too restrictive… the gray areas overlap a bit.
Why should we want to actively sabotage AI development? Wouldn’t that lead to possibly catastrophic results? Who benefits from dumber AI?
The difference between a human reading a website and writing an article ‘inspired by’ what they’ve read, and an LLM consuming and outputting content the same way, is that we recognize an LLM is a tool that can do the same thing faster.
Reading is training. Reading isn’t copying. Output is the issue, not input. It’s worrisome to see so many not grasp this.
Looking/copying isn’t stealing. It just isn’t. No one lost their website.