u/Juc1

404 bot traffic

Hi all, yesterday my server received about eight hundred 404 requests in 10 minutes from a single IP address. This spiked the load average to 30 and took the server down. Does Cloudways have any firewall rule that responds to this kind of bot traffic, for example immediately banning an IP address after 10 consecutive 404 requests?

Here is a short sample; the log continues like this for all 800 requests...

34.32.81.23 - [13/May/2026:21:31:53 +0000] "GET /index.php" 404 0 - 443147 788253 5.026 62914560 12.53% 3.78% "/api.zip"
34.32.81.23 - [13/May/2026:21:31:54 +0000] "GET /index.php" 404 0 - 443147 788495 5.296 62914560 10.76% 2.45% "/api.tar.gz"
34.32.81.23 - [13/May/2026:21:31:55 +0000] "GET /index.php" 404 0 - 443147 788506 5.855 62914560 9.39% 3.59% "/api.tgz"
34.32.81.23 - [13/May/2026:21:31:57 +0000] "GET /index.php" 404 0 - 443147 788521 4.430 62914560 12.19% 3.61% "/api.tar.bz2"
34.32.81.23 - [13/May/2026:21:31:56 +0000] "GET /index.php" 404 0 - 443147 788513 6.157 62914560 9.75% 3.09% "/api.tar"
34.32.81.23 - [13/May/2026:21:31:58 +0000] "GET /index.php" 404 0 - 443147 788525 4.575 62914560 11.15% 3.06% "/api.tar.xz"
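For reference, on a plain Linux stack the usual way to express "ban an IP after N 404s" is a fail2ban jail. This is only a sketch, assuming nginx-style access logs and that your host lets you edit fail2ban config (managed platforms like Cloudways may not expose this); the filter name nginx-404 and the log path are my own placeholders, not anything Cloudways-specific:

```ini
# /etc/fail2ban/filter.d/nginx-404.conf  (hypothetical filter name)
[Definition]
# Match any request line from <HOST> that got a 404 response
failregex = ^<HOST> .* "(GET|POST|HEAD)[^"]*" 404\b
```

```ini
# /etc/fail2ban/jail.local
[nginx-404]
enabled  = true
port     = http,https
filter   = nginx-404
logpath  = /var/log/nginx/access.log
maxretry = 10      ; ban after 10 matches...
findtime = 60      ; ...within 60 seconds
bantime  = 3600    ; ban for 1 hour
```

With maxretry = 10 in a 60-second findtime window, a burst like the one above (roughly one 404 per second from the same IP) would trip the ban within seconds rather than letting 800 requests through.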
u/Juc1 — 10 hours ago

This IT story is not about Cloudways, but it is a warning for AI users 🤔

https://www.independent.co.uk/tech/claude-ai-agent-deletes-startup-anthropic-b2966176.html

28 April 2026

An AI agent powered by Anthropic’s leading Claude model has deleted a company’s entire production database, leaving customers unable to access key data.

PocketOS, which provides software for car rental businesses, suffered a massive outage over the weekend after the autonomous artificial intelligence tool wiped the database and all backups in a matter of seconds.

The firm was using a coding agent called Cursor that was running Anthropic’s flagship Claude Opus 4.6, which is widely considered the most capable model in the industry at coding tasks.

The AI agent was working on a routine task, according to Mr Crane, when it decided “entirely on its own initiative” to fix the problem by just deleting the database.

There was no confirmation request for such a major decision, Mr Crane said, and when asked to justify its actions, the agent apologised.

“It took nine seconds,” Mr Crane wrote in a lengthy post to X. “The agent then, when asked to explain itself, produced a written confession enumerating the specific safety rules it had violated.”

The confession detailed how the AI had ignored a rule that orders it to “never run destructive/irreversible” commands unless the user explicitly requests them.

“Deleting a database volume is the most destructive, irreversible action possible,” the agent wrote. “You never asked me to delete anything... I guessed instead of verifying. I ran a destructive action without being asked. I didn’t understand what I was doing before doing it.”

The error meant that rental businesses using PocketOS no longer had records of their customers.

“Reservations made in the last three months are gone. New customer signups, gone,” Mr Crane wrote.

“This isn’t a story about one bad agent or one bad API. It’s about an entire industry building AI-agent integrations into production infrastructure faster than it’s building the safety architecture to make those integrations safe.”
