MongoDB: fetching query results from 90,000,000 documents

Assume a single document type:

{
    "_id" : ObjectId("5a19403b421aa92332bc2b32"),
    "id" : "95957f4a9eab11e787f1509a4c4be0cd",
    "incre" : 1,
    "city" : ""
}

The data volume is 90 million documents. How can I quickly extract the ids of all documents whose city is Beijing?
Here incre is an auto-incrementing id.
My current method uses 100 threads, each fetching 20 documents at a time by incre range, polling through all 90 million documents starting from incre = 1:

find({"$and":[{"city":""},{"incre":{"$gte":50,"$lt":70}}]})

Suppose a query takes 1 second, with 2,000 threads and 100 documents fetched per second: the 90 million documents would take about 750 hours.
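
For reference, here is a minimal sketch of that range-partitioned approach, assuming a pymongo client plus a hypothetical database "mydb" and collection "docs" (swap in your own names and connection string):

# Sketch of the range-partitioned scan described above (hypothetical names:
# "mydb", "docs", local connection). Each worker queries one slice of incre.
from concurrent.futures import ThreadPoolExecutor
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
coll = client["mydb"]["docs"]                      # assumed db/collection names

TOTAL = 90_000_000   # total document count, from the question
BATCH = 20           # incre values covered per query, as in the question

def fetch_range(start):
    """Return the ids of Beijing documents with incre in [start, start + BATCH)."""
    cursor = coll.find(
        {"city": "Beijing", "incre": {"$gte": start, "$lt": start + BATCH}},
        {"id": 1, "_id": 0},   # project only the id field
    )
    return [doc["id"] for doc in cursor]

beijing_ids = []
with ThreadPoolExecutor(max_workers=100) as pool:  # 100 threads, as in the question
    for chunk in pool.map(fetch_range, range(1, TOTAL + 1, BATCH)):
        beijing_ids.extend(chunk)

Note that each slice query still needs an index on incre to avoid a full collection scan per slice, which is a big part of why this kind of polling ends up so slow.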
Is there a faster way? Asking the experts for advice.
Thank you.

-Where are all the experts?-

Feb. 27, 2021

Empirically speaking, I think a direct find({"city": "Beijing"}) may well be faster, so you might as well benchmark the two yourself.
Using multithreading here greatly increases complexity, but the actual benefit is limited, or even negative, if you don't have a solid command of multithreading.
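
To illustrate that suggestion, here is a minimal sketch of the single-query route, again with the hypothetical "mydb"/"docs" names and an assumed local connection. With a compound index on city and id, the projection below makes this a covered query, so MongoDB can answer it from the index alone:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
coll = client["mydb"]["docs"]                      # assumed db/collection names

# One-time index build; including "id" lets the query below be covered by the index.
coll.create_index([("city", 1), ("id", 1)])

# Stream the ids of every Beijing document in a single query.
cursor = coll.find({"city": "Beijing"}, {"id": 1, "_id": 0}).batch_size(10_000)
beijing_ids = [doc["id"] for doc in cursor]

The index build on 90 million documents takes a while, but after that a single indexed scan is usually far simpler and faster than coordinating thousands of small range queries.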
