A sysadmin's public blog reveals an aggressive new defense against LLM training scrapers: outright blocking HTTP requests with generic User-Agent headers. This drastic measure highlights the unsustainable resource consumption caused by indiscriminate web crawling and forces a reckoning with scraping ethics. The policy demands explicit identification of all non-browser agents, rejecting common culprits like 'Go-http-client/1.1'.
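The post does not specify how the block is enforced, but the idea is straightforward: inspect the User-Agent header and refuse anything that matches a known default client string. A minimal sketch of such a filter, written here as a Go net/http middleware with an illustrative (hypothetical) blocklist:

```go
package main

import (
	"net/http"
	"strings"
)

// genericAgents holds default client identifiers that carry no operator
// contact info. The exact list is an assumption for illustration only.
var genericAgents = []string{
	"Go-http-client",
	"python-requests",
	"python-urllib",
	"Java/",
}

// requireIdentifiedAgent rejects requests whose User-Agent matches a
// generic default, asking non-browser clients to identify themselves.
func requireIdentifiedAgent(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ua := r.Header.Get("User-Agent")
		for _, g := range genericAgents {
			if strings.Contains(ua, g) {
				http.Error(w, "generic User-Agent rejected; identify your client", http.StatusForbidden)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})
	http.ListenAndServe(":8080", requireIdentifiedAgent(mux))
}
```

In practice the same substring match could live in a reverse proxy or web server rule instead of application code; the middleware form above is just the shortest self-contained way to show the policy.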