How to prevent spiders from detecting black hat SEO cloaking

I want to use the URL jx.mh87.cn/vip/213.htm to serve a page meant for spiders. If the visitor is not a spider, it should jump to another page. How do I keep the spiders from discovering that I am doing black hat SEO?

Jul.07,2022

This is a pseudo-requirement.

You are trying to find an answer to a question that has no clear direction at all.

It is possible to simply fool the spider, since spiders can be identified by their headers or by other means.

But you want to make it impossible for the spider to detect this in any way, and that is a pseudo-requirement. Unless you run human-machine verification on every page and only return the alternative content when a real person visits, it is just a normal page.
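For illustration, here is a minimal sketch of the User-Agent-based cloaking described above, assuming a small Flask app; the spider tokens, the route path, and the redirect target are placeholders taken from or invented around the question, not a recommendation.

```python
# Minimal sketch of User-Agent-based cloaking (assumes Flask is installed).
# The spider tokens and the redirect target below are placeholder assumptions.
from flask import Flask, request, redirect

app = Flask(__name__)

# Common search-engine spider tokens; a real list would need maintenance.
SPIDER_TOKENS = ("baiduspider", "googlebot", "bingbot", "sogou")

def looks_like_spider(user_agent: str) -> bool:
    """Return True if the User-Agent contains a known spider token."""
    ua = user_agent.lower()
    return any(token in ua for token in SPIDER_TOKENS)

@app.route("/vip/213.htm")
def cloaked_page():
    ua = request.headers.get("User-Agent", "")
    if looks_like_spider(ua):
        # Serve the page intended for spiders.
        return "<html><body>Content shown to spiders</body></html>"
    # Otherwise jump real visitors to another page (placeholder URL).
    return redirect("https://example.com/other-page", code=302)

if __name__ == "__main__":
    app.run()
```

Note that the User-Agent is trivially forged, so a check like this only works on cooperative spiders; it does nothing against a crawler that pretends to be a normal browser.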

There are many options for human-machine verification, but doing so may also interfere with normal users' access.
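As a rough illustration of the kind of per-page human check meant here, this is a minimal sketch assuming a Flask app and a trivial arithmetic challenge; a real deployment would use a proper CAPTCHA service, and as noted above, every extra check adds friction for normal visitors.

```python
# Minimal sketch of a per-page human check (assumes Flask; the challenge is
# a trivial placeholder, not a real CAPTCHA).
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder secret for the session cookie

@app.route("/protected", methods=["GET", "POST"])
def protected():
    if session.get("human"):
        return "Content returned only after the visitor passed the check"
    if request.method == "POST" and request.form.get("answer") == "7":
        session["human"] = True
        return "Content returned only after the visitor passed the check"
    # Anything that cannot answer (most spiders) only ever sees the challenge.
    return (
        '<form method="post">What is 3 + 4? '
        '<input name="answer"><button>Submit</button></form>'
    )

if __name__ == "__main__":
    app.run()
```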


Does the OP want to block some common crawlers, such as Baidu's or Google's spiders? For search-engine crawlers, you can tell whether a request comes from a crawler by checking the User-Agent or the source IP (see the sketch below). But if the goal is to keep individual, private crawlers away from the page rather than those public ones, that is hard.
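To illustrate the source-IP side of that check for the public search-engine spiders, here is a minimal sketch using reverse plus forward DNS lookups; the accepted hostname suffixes are the ones Google and Baidu document for their crawlers, while the function name and the sample IP are assumptions made for the example.

```python
# Minimal sketch: verify that a request claiming to be Googlebot/Baiduspider
# really comes from Google/Baidu, via reverse DNS plus a forward confirmation.
import socket

# Hostname suffixes the search engines publish for their crawlers.
TRUSTED_SUFFIXES = (".googlebot.com", ".google.com", ".baidu.com", ".baidu.jp")

def is_verified_spider(ip: str) -> bool:
    """Reverse-resolve the IP, check the suffix, then forward-resolve to confirm."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)   # reverse DNS
    except socket.herror:
        return False
    if not hostname.endswith(TRUSTED_SUFFIXES):
        return False
    try:
        # Forward DNS must map the hostname back to the same IP.
        return ip in socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False

if __name__ == "__main__":
    # Example only; 66.249.66.1 is a commonly cited Googlebot address.
    print(is_verified_spider("66.249.66.1"))
```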
