How to remove a URL from the DupeFilter when a scrapy-redis fetch fails

Problem: when crawling a page, the response may come back empty due to network issues, yet the request still gets recorded in the Redis DupeFilter, so the page can never be crawled again.
Question: while the spider is running, how can the failed URL be manually removed from the xx:dupefilter set in Redis?
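
For reference, scrapy-redis stores request fingerprints in a plain Redis set; with default settings the key is '<spidername>:dupefilter'. A minimal sketch of inspecting that set from a standalone script (the spider name and connection parameters below are assumptions, not from the original post):

    import redis

    # connection parameters are placeholders; match your own Redis setup
    server = redis.StrictRedis(host='localhost', port=6379, db=0)

    # fingerprints are hash strings stored in a set;
    # 'myspider' stands in for the actual spider name
    for fp in server.smembers('myspider:dupefilter'):
        print(fp)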

Apr.25,2022

Finally got it.

Import request_fingerprint from scrapy.utils.request, then, in the spider, check whether the response meets the crawling requirements; if it does not, delete the request's fingerprint from the Redis dupefilter:

    import json

    from scrapy.utils.request import request_fingerprint

    def parse(self, response):
        ajaxT = json.loads(response.text)
        if ajaxT['status'] == 'success':
            # response is valid; process the data as usual
            ...
        else:
            # compute the request's fingerprint and remove it from the
            # Redis dupefilter set so the URL can be scheduled again
            fp = request_fingerprint(response.request, include_headers=None)
            self.server.srem(self.name + ':dupefilter', fp)
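
One caveat: deleting the fingerprint only makes the URL eligible again; it does not re-queue anything by itself. A minimal sketch of retrying right after removing the fingerprint (the re-yield step is an addition, not part of the original answer):

    else:
        fp = request_fingerprint(response.request, include_headers=None)
        self.server.srem(self.name + ':dupefilter', fp)
        # re-yield the same request: its fingerprint is gone, so the
        # dupefilter accepts it and records it again when re-scheduled
        yield response.request

Alternatively, yield response.request.replace(dont_filter=True) to bypass the dupefilter entirely for the retry; in that case the fingerprint is not re-recorded.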