Redis is fine, but for message queues the more professional choice is RabbitMQ.
With RabbitMQ there is a queue in which messages (tickets) are stored. When a request comes in, a message (ticket) from the queue is handed to that request (the consumer).
For example, a ticket numbered '12306' is stored in the database, and a matching '12306' message is placed in the queue.
A arrives first, so the queue hands this '12306' message to A, and A takes it and operates on the database.
Then B arrives. A may not have finished yet, because all of this happens very quickly, but there are no messages left in the queue, so the queue tells B there are no tickets left and B just goes back.
The queue does not necessarily have to be strictly first-in, first-out.
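A minimal sketch of that flow, assuming Node.js with the amqplib package and a local RabbitMQ instance; the queue name "tickets" and the connection URL are just placeholders:

const amqp = require("amqplib");

async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("tickets");

  // producer side: one message per ticket left in stock
  ch.sendToQueue("tickets", Buffer.from("ticket-1"));

  // consumer side: A asks for a ticket and receives the message
  const forA = await ch.get("tickets", { noAck: false });
  if (forA) {
    // ... operate on the database here ...
    ch.ack(forA);
  }

  // B asks next; the queue is already empty, so get() returns false
  const forB = await ch.get("tickets", { noAck: false });
  if (!forB) console.log("no tickets left, go back");

  await conn.close();
}

main().catch(console.error);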
It can also be implemented with Koa middleware.
const Koa = require("koa");
const app = new Koa();

app.use(async (ctx, next) => {
  await next(); // execute the first request
  // then execute the second request
});
Or lock it at the database level.
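A minimal sketch of the database-lock idea, assuming MySQL through the mysql2/promise package; the "tickets" table, its columns, and the connection settings are made up for illustration. A single atomic UPDATE acts as the lock: two concurrent requests cannot both take the last ticket.

const mysql = require("mysql2/promise");

async function buyTicket(pool, ticketId) {
  // the row lock taken by this UPDATE serializes concurrent buyers
  const [result] = await pool.execute(
    "UPDATE tickets SET remaining = remaining - 1 WHERE id = ? AND remaining > 0",
    [ticketId]
  );
  return result.affectedRows === 1; // true = got a ticket, false = sold out
}

async function main() {
  const pool = mysql.createPool({ host: "localhost", user: "root", database: "shop" });
  console.log(await buyTicket(pool, 1));
  await pool.end();
}

main().catch(console.error);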
When listening for the related requests, forward the requests from every API to the same module. That module only needs to maintain a list that stores all the requests, execute them sequentially, and then call back. Of course, you can wrap a layer of Promise around the outside.
To put it simply, this data structure lets you store values in it along with a callback function. When it runs, the stored tasks are executed synchronously, one after another, and the result is returned through the callback. Wrap it with a Promise.
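A minimal sketch of that structure in plain Node.js; the names createSerialQueue, push, and enqueue are mine, not from any library. Tasks are pushed into a list, executed one at a time, and each caller gets its own result back through a Promise.

// a tiny serial queue: stores tasks and runs them one after another
function createSerialQueue() {
  const tasks = [];
  let running = false;

  async function run() {
    if (running) return;
    running = true;
    while (tasks.length > 0) {
      const { task, resolve, reject } = tasks.shift();
      try {
        resolve(await task()); // wait for this task before starting the next
      } catch (err) {
        reject(err);
      }
    }
    running = false;
  }

  // push returns a Promise so each caller can await its own result
  return function push(task) {
    return new Promise((resolve, reject) => {
      tasks.push({ task, resolve, reject });
      run();
    });
  };
}

// usage: every API handler pushes its DB work into the same queue
const enqueue = createSerialQueue();
enqueue(() => Promise.resolve("request 1 done")).then(console.log);
enqueue(() => Promise.resolve("request 2 done")).then(console.log);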
My brain is a bit foggy today. After writing this I noticed some parts don't quite add up, for instance, why wrap it in callbacks and a Promise at all if everything runs synchronously? Still, the scheme described above should be workable; I did something similar when I needed to control concurrency.
This really needs to be done in an MQ, or simply in the DB. The other schemes only work within a single process, not across multiple processes.