I'd like to ask: when writing a crawler, how do you decide when the crawler should stop?
The initial state is a seed URL, and then there is a loop like:
while (!urlList.isEmpty()) {
    // take a URL, fetch the page, extract new links, add them to urlList
}
That's my idea, but URLs are consumed faster than new ones are queued, so urlList becomes empty and the crawler stops prematurely. I'd like to ask the experts who have written crawler frameworks: under what conditions should a crawler stop running?
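One common pattern (a sketch of my own, not taken from any particular framework) is to track the number of URLs that have been queued but not yet fully processed, and only stop when that counter reaches zero. An empty queue alone is not enough, because a worker that is still parsing a page may add more URLs. A minimal Java sketch under that assumption, with fetchAndExtractLinks standing in as a hypothetical placeholder for the real download/parse step:

import java.util.List;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CrawlerStopSketch {

    private final BlockingQueue<String> frontier = new LinkedBlockingQueue<>();
    private final Set<String> seen = ConcurrentHashMap.newKeySet();
    // Counts URLs that have been queued but not yet fully processed.
    private final AtomicInteger pending = new AtomicInteger(0);

    public void seed(String url) throws InterruptedException {
        enqueue(url);
    }

    private void enqueue(String url) throws InterruptedException {
        if (seen.add(url)) {           // only queue URLs we have not seen before
            pending.incrementAndGet(); // count it as outstanding work before it is queued
            frontier.put(url);
        }
    }

    /** Run this in each worker thread. */
    public void workerLoop() throws InterruptedException {
        while (pending.get() > 0) {                        // stop: no outstanding work at all
            String url = frontier.poll(1, TimeUnit.SECONDS);
            if (url == null) {
                continue;  // queue is momentarily empty, but another worker may still add URLs
            }
            try {
                for (String next : fetchAndExtractLinks(url)) {
                    enqueue(next);
                }
            } finally {
                pending.decrementAndGet();                 // this URL is now fully processed
            }
        }
    }

    // Hypothetical placeholder for the real download + link-extraction logic.
    private List<String> fetchAndExtractLinks(String url) {
        return List.of();
    }
}

Because pending is incremented before a URL enters the queue and decremented only after its links have been extracted and re-queued, pending == 0 can only happen when the queue is empty and no worker is mid-page, so a temporarily empty queue no longer shuts the crawler down early. In practice you usually combine this with extra stop conditions such as a maximum crawl depth, a page-count limit, or restricting URLs to certain domains.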