Disallow robots that we don't like from indexing our site #4
Closes #4
Note: After all merge requests are merged, we will need to run `cd /var/www/hax0rbana.org && sudo -u www-data git pull` on the server.
It'd be pretty reasonable to add a cron job to do this automatically every day. If you agree, please make a ticket for this in the configs repo (which is what deploys the VM that checks out this repo).
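For reference, the daily pull could be sketched as a cron entry like the one below. This is only a sketch under assumptions: the file name `/etc/cron.d/hax0rbana-deploy`, the 04:00 schedule, and the `--ff-only` flag are illustrative choices, not anything decided in this MR; only the repo path and the `sudo -u www-data git pull` command come from the note above.

```
# /etc/cron.d/hax0rbana-deploy (hypothetical file name)
# Pull the site repo once a day as www-data, matching the manual command above.
# m  h  dom mon dow  user  command
0    4  *   *   *    root  cd /var/www/hax0rbana.org && sudo -u www-data git pull --ff-only
```

Using `--ff-only` would make the job fail loudly instead of creating a merge commit if the checkout has diverged, which seems safer for an unattended deploy.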
Edited by Adam