I'm using a shell script to download the blacklist from AbuseIPDB to a dotted directory on one server every hour. That dotted directory is accessible only to my other servers, and CSF on those servers obtains the file from that directory rather than downloading it from AbuseIPDB.
The parameters I'm using right now (I'm still tweaking them) select only those IP addresses with 100 percent confidence and 200 or more reports, all within the past 7 days. It tends to filter out the ephemeral anonymous proxies in favor of more persistently abused IP addresses. Most of them are schools and colleges in third-world countries, schools and colleges in current or former communist countries, and a handful of ISPs and data centers with exceptionally poor policing (also mainly overseas, although we have a few here, too).
Code:
#!/bin/bash

# Back up the current blacklist in case the download fails.
cp -f /path/to/.blacklist/file.txt /path/to/.blacklist/file.bak

# Fetch the new blacklist: only IPs reported 200+ times within the
# past 7 days at 100% confidence, as plain text (one IP per line).
curl -G https://api.abuseipdb.com/api/v2/blacklist \
  -d countMinimum=200 \
  -d maxAgeInDays=7 \
  -d confidenceMinimum=100 \
  -H "Key: api-key-goes-here" \
  -H "Accept: text/plain" > /path/to/.blacklist/file.txt

# If the download is suspiciously small (an error message, gibberish,
# or an empty file), restore the backup.
size=$(stat -c%s /path/to/.blacklist/file.txt)
min=2000
if [[ $size -lt $min ]]; then
    cp -f /path/to/.blacklist/file.bak /path/to/.blacklist/file.txt
fi

exit 0
The script first backs up the current blacklist file, then downloads the new one, overwriting the current one. If the new file is smaller than 2,000 bytes, it restores the backed-up version. A cron entry runs it every hour.
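For reference, the hourly schedule might look like the entry below. The script path is a placeholder, not the actual location:

```shell
# Hypothetical crontab entry: run the refresh script at the top of
# every hour. Replace the path with wherever the script actually lives.
0 * * * * /path/to/update-blacklist.sh
```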
The reason I'm doing it this way is that once in a while the server vomits and returns an error message, gibberish, or an empty file. Checking the file size -- it should always be greater than 4K using those parameters -- protects against all three possibilities. Checking the HTTP response code didn't work, because some of the invalid files came back with successful response codes. Checking the file size doesn't protect against a larger file that's gibberish, but that hasn't happened yet. This is still a work in progress, though, so I'm pondering ways to identify a properly sized but invalid file and restore from backup in that event.
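One way to catch a properly sized but invalid file would be to check that every line actually looks like an IP address before accepting the download, since the plain-text blacklist is one IP per line. Here's a minimal sketch; the function name is hypothetical, the pattern only covers IPv4 (the endpoint can also return IPv6 entries, so a production check would need a broader pattern), and the paths are placeholders as in the main script:

```shell
#!/bin/bash
# Hypothetical helper: succeed only if every line of the given file
# looks like a dotted-quad IPv4 address.
valid_ip_list() {
    # grep -E extended regex, -v invert match, -c count: this counts
    # the lines that do NOT look like IPv4 addresses.
    local bad
    bad=$(grep -Evc '^([0-9]{1,3}\.){3}[0-9]{1,3}$' "$1")
    [[ $bad -eq 0 ]]
}

# Possible use in the download script (placeholder paths, as above):
# if ! valid_ip_list /path/to/.blacklist/file.txt; then
#     cp -f /path/to/.blacklist/file.bak /path/to/.blacklist/file.txt
# fi
```

This would catch an HTML error page or gibberish even when the byte count happens to clear the size threshold.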
Saving the file to the one server and making it available to the others is just a courtesy to reduce the requests on AbuseIPDB's server. I have a paid account, so I could call it from each server; but that would be a waste of their resources.
CSF on the other servers imports the list to IPTABLES by way of a custom entry in /etc/csf/csf.blocklists on each server:
Code:
# AbuseIPDB Blacklist
ABUSEIPDB|3600|1000|https://path/to/.blacklist/file.txt
The capitalized entry is the name by which CSF knows the blacklist. 3600 is the refresh interval in seconds (one hour). 1000 is the maximum number of IPs to import; there are usually around 275 using the current parameters, but it will import as many as 1,000 if they're there.
So far this has worked flawlessly, but it's still a work in progress.
Rich