From previous research reading different articles, I understand the crawl-delay directive in robots.txt as follows: it allows a particular search engine to crawl x number of web pages per second and then stop.
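(For context, I am referring to a directive like the one below; the user agent name and the value 10 are just example values on my part.)

    User-agent: Bingbot
    Crawl-delay: 10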
What is not clear to me is from when until when it stops (24 hours?).
It is also not clear to me what it does after reading the following statement here:

"Be careful when using the crawl-delay directive. By setting a crawl delay of 10 seconds you are only allowing these search engines to access 8,640 pages per day."
Regarding crawl-delay: x, has anyone here managed to understand how the math works there?
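My own guess at the math, assuming the crawler makes at most one page request per delay interval (that is only my assumption), sketched in Python:

    # Assumption: at most one page request per crawl-delay interval.
    seconds_per_day = 24 * 60 * 60   # 86,400 seconds in a day
    crawl_delay = 10                 # seconds, as in "crawl-delay: 10"
    pages_per_day = seconds_per_day / crawl_delay
    print(pages_per_day)             # prints 8640.0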