Passing parameters across Scrapy components

After starting the framework to crawl the target page start_url, I need to extract a feature value from the start_url string to serve as the MongoDB collection name, and then store the item through the pipeline.
Outline of the flow:

spider → pipeline

Related code in the pipeline:

import pymongo

class MongoPipeline(object):

    collection_name = "Gsl6RoxfN"  # hard-coded for now; this is the value that should come from the spider

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get("MONGO_URI"),
            mongo_db=crawler.settings.get("MONGO_DATABASE", "items")
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.db[self.collection_name].insert_one(dict(item))
        return item
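
For context, from_crawler pulls MONGO_URI and MONGO_DATABASE out of the project settings, so the pipeline gets wired up in settings.py. A minimal sketch (the URI value and the module path are assumptions; adjust them to your project):

# settings.py (sketch)
MONGO_URI = 'mongodb://localhost:27017'  # read by from_crawler
MONGO_DATABASE = 'items'                 # optional; defaults to "items"
ITEM_PIPELINES = {
    'myproject.pipelines.MongoPipeline': 300,  # hypothetical project path
}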
        

The question now is: how do I pass the variable collection_name from the spider to the pipeline?
Thank you for reading.
Thanks in advance.

Apr. 16, 2021

I think there are two ways:
one is to define the collection_name you need as a global variable in the spider module and then import it into the pipelines module;
the other is to add a collection_name field to the item and use item.pop('collection_name') in the pipeline to take it back off (see the sketch below).
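A minimal sketch of the second approach, assuming a hypothetical item class CommentItem with a made-up text field (neither name is from the original post):

import scrapy

class CommentItem(scrapy.Item):
    collection_name = scrapy.Field()  # carries the target collection name
    text = scrapy.Field()             # hypothetical data field

# In the spider:
#     yield CommentItem(collection_name='Gsl6RoxfN', text='...')
#
# In the pipeline's process_item:
#     name = item.pop('collection_name')    # removes the field from the item
#     self.db[name].insert_one(dict(item))  # only the real data fields are stored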



Quoting @silk's method solves the problem and implements the flow of "reading start_url from MongoDB, processing start_url to generate the feature value, and then passing the feature value to the pipeline as the name of the collection". The specific solution is as follows.

In the spider:

import re

import pymongo
from scrapy import Request

def start_requests(self):
    client = pymongo.MongoClient('localhost', 27017)
    db_name = 'Sina'
    db = client[db_name]
    collection_set01 = db['UrlsQueue']
    datas = list(collection_set01.find({}, {'_id': 0, 'url': 1, 'status': 1}))
    for data in datas:
        if data.get('status') == 'pending':
            url = data.get('url')
            # extract the 9-character feature value between '/' and '?'
            pattern = r'(?<=/)([0-9a-zA-Z]{9})(?=\?)'
            if re.search(pattern, url):
                collection_name = re.search(pattern, url).group(0)
            start_url = 'https://weibo.cn/comment/' + collection_name + '?ckAll=1'
            # mark the entry as taken; update_one() replaces the deprecated update()
            collection_set01.update_one({'url': url}, {'$set': {'status': 'processing'}})
            break
    client.close()
    # pass the collection name along to the callback via meta
    # (assumes at least one pending entry exists and cookie is defined elsewhere)
    yield Request(url=start_url, callback=self.parse, cookies=cookie, meta={'collection_name': collection_name})

Fetch start_url from the database, extract and process the feature value, and send the request with the collection name attached as a meta parameter.
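For illustration, here is how that regex picks the feature value out of a queued URL (the sample URL is constructed from the pattern used in the code above, not taken from real data):

>>> import re
>>> url = 'https://weibo.cn/comment/Gsl6RoxfN?ckAll=1'
>>> re.search(r'(?<=/)([0-9a-zA-Z]{9})(?=\?)', url).group(0)
'Gsl6RoxfN'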

def parse(self, response):
    # read the collection name back out of the request meta
    collection_name = response.meta['collection_name']
    ......
    for i in range(0, len(node)):
        item['collection_name'] = collection_name
        yield item
            

While parsing the data from the response, parse() also reads the collection_name back out of the returned meta.

In the pipeline:

def close_spider(self, spider):
    # mark the queue entry as done; update_one() replaces the deprecated update()
    self.db['UrlsQueue'].update_one({'status': 'processing'}, {'$set': {'status': 'finished'}})
    self.client.close()

def process_item(self, item, spider):
    # pop the collection name off the item so it is not stored with the data
    self.collection_name = item.pop('collection_name')
    self.db[self.collection_name].insert_one(dict(item))
    return item

Popping the collection_name parameter removes it from the item itself, so the name is not written into the stored document.
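A quick sanity check of that behavior with a plain dict (Scrapy items behave the same way here; the text field is made up):

>>> item = {'collection_name': 'Gsl6RoxfN', 'text': 'some data'}
>>> item.pop('collection_name')
'Gsl6RoxfN'
>>> item
{'text': 'some data'}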

Thank you very much, @Yu Bai, for your help.
