Pyspider: importing bulk URLs from a file

I have a batch of irregular URLs stored in a file.
I want to crawl the page corresponding to each URL and extract specific content from it.
There is no need for recursive fetching from any of the URLs.

How can I implement this with pyspider?
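A minimal sketch of one way to do it, assuming the URLs sit one per line in a plain-text file (the path `urls.txt` and the selector in `detail_page` are placeholders, not anything from the question):

```python
from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {}

    def on_start(self):
        # urls.txt is a hypothetical path: one URL per line
        with open('urls.txt') as f:
            for line in f:
                url = line.strip()
                if url:
                    # fetch each URL exactly once; detail_page never calls
                    # self.crawl again, so nothing is fetched recursively
                    self.crawl(url, callback=self.detail_page)

    def detail_page(self, response):
        # the pages have different layouts, so this selector is only
        # an example; adapt the extraction per site
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }
```

Because pyspider de-duplicates tasks by URL and `detail_page` schedules no further requests, each URL is fetched once and the crawl stops at that page.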

Jul. 12, 2021

The URLs can be saved in a database and read back from it when the crawl starts.
But how do you load these URLs? The page elements are also different on each site.
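If the URLs are kept in a database as suggested, `on_start` can read them from there instead of a file. A sketch assuming a MySQL table `seed_urls` with a `url` column and `pymysql` as the client (table name, credentials, and the per-site selectors are all assumptions for illustration):

```python
import pymysql

from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    def on_start(self):
        # connection details and the seed_urls table are assumptions
        conn = pymysql.connect(host='127.0.0.1', user='root',
                               password='secret', database='crawler')
        try:
            with conn.cursor() as cursor:
                cursor.execute("SELECT url FROM seed_urls")
                for (url,) in cursor.fetchall():
                    self.crawl(url, callback=self.detail_page)
        finally:
            conn.close()

    def detail_page(self, response):
        # because the page elements differ, dispatch on the URL and
        # use a selector suited to each site (examples only)
        if 'example.com' in response.url:
            content = response.doc('.article-body').text()
        else:
            content = response.doc('title').text()
        return {"url": response.url, "content": content}
    }
```

Routing on the URL inside `detail_page` (or registering a separate callback per site in `on_start`) is one way to handle pages whose elements differ.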
