Linux Synchronous vs. Asynchronous and the Event Loop

Excuse me: how should I understand synchronous vs. asynchronous and the event loop on Linux, for example in servers such as nginx (and extensions such as swoole)?

Mar.19,2021

Let me share my understanding, for reference. I assume the reader understands some basic concepts of network programming and computer systems.

In a nutshell, event-driven programming is a way to implement concurrent processing.

Let's take the processing of HTTP requests as an example. For simplicity, we only consider network IO, ignoring file IO, databases and other parts of the process, and we do not consider multi-core systems.
Consider the following minimalist model for processing HTTP requests:

main_loop:
  accept()   # wait for a new client connection
  recv()     # read the request data
  parse()    # parse the request content
  send()     # return the reply
  close()    # close the connection

Accept a connection, read the data (the request), parse the request content, and return the data (the reply).
This model serves only one client at a time: while client A is being served, client B must wait.
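
As a concrete illustration, here is a minimal runnable sketch of this model in Python (the port, buffer size and fixed reply are arbitrary choices, and the parsing step is only pretended):

import socket

# Minimal iterative server: handles exactly one client at a time.
def main():
    listen_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listen_sock.bind(("0.0.0.0", 8080))
    listen_sock.listen(128)

    while True:                              # main_loop
        conn, addr = listen_sock.accept()    # accept(): blocks until a client connects
        data = conn.recv(4096)               # recv(): blocks until data arrives
        # parse() would go here; we just send back a fixed reply
        conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")   # send()
        conn.close()                         # close()
        # while this client was being served, every other client had to wait

if __name__ == "__main__":
    main()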

This approach is simple, straightforward and easy to understand, but it does not meet the needs of real-world scenarios: it does not support concurrency.
In reality, client requests are concurrent: while one client's request is still being processed, another client's request may arrive, or even several requests may arrive at the same time.
Moreover, APIs involving network operations such as recv and send may need to wait, because we cannot know when network data will be sent or arrive, leaving the CPU idle. In this model, even when the CPU is idle it cannot process requests from other clients, which wastes CPU.

We can solve the above problems by using the following multithreading model:

main_loop:
  accept()                     # wait for a new client connection
  start_thread(thread_loop)    # hand the client to a new thread

thread_loop:
    recv()
    parse()
    send()
    close()
    exit_thread()

That is, each client is handled in its own thread.
When a client thread blocks waiting for a network operation, the operating system schedules other threads that have work to do.
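
A minimal sketch of this thread-per-connection model in Python (again with an arbitrary port and a pretend parse step):

import socket
import threading

def thread_loop(conn):
    # Each client is served entirely inside its own thread.
    data = conn.recv(4096)    # may block, but only this thread waits
    # parse() would go here; we just send back a fixed reply
    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()              # the thread exits when this function returns

def main():
    listen_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listen_sock.bind(("0.0.0.0", 8080))
    listen_sock.listen(128)
    while True:                               # main_loop
        conn, addr = listen_sock.accept()
        threading.Thread(target=thread_loop, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()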
Does this solve our problem perfectly?
Unfortunately, it does not.
Creating threads is expensive for the operating system, so the number of threads it can support is limited, usually on the order of tens of thousands. With too many threads, a lot of CPU time is wasted on thread creation, destruction, scheduling and other management work.

So, to make full use of the CPU and support more concurrent clients, Linux offers another way to handle concurrency:
the kernel provides mechanisms and interfaces (such as select, poll and epoll) for monitoring a large number of network connections (file descriptors) for readable, writable and other events.
The application registers the descriptors and the events it cares about with the kernel, and the kernel notifies the application when an event occurs.
Handling concurrency based on this mechanism is what we call event-driven.

The basic model of the event-driven mechanism is:

create_listen_socket()
register_event_for_listen_socket()
main_loop:
    wait_for_event()                 # block until the kernel reports events
    check_events:
        if listen_socket has event (new client coming):
            accept()
            register_event_for_client_socket()
        if client_socket has event (new data coming):
            recv()
            parse()
            send()

But there is a problem here: we may have read only part of a client's request, while the rest is still somewhere in the network, so we have to keep waiting for it.
That means we must save what has been read so far and the request-processing state (that is, the context), and go on handling events from other clients.
The next time this client has an event, we look up its context and continue processing.
In effect, the application itself has to do some of the context-saving and switching work that belongs to task scheduling.
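
Here is a minimal Python sketch of such an event loop using the standard selectors module (which wraps epoll on Linux). The per-connection buffer plays the role of the context described above; the toy protocol assumes a request ends with a blank line, and the port and reply are arbitrary:

import selectors
import socket

sel = selectors.DefaultSelector()   # epoll on Linux
contexts = {}                       # per-connection context: bytes read so far

def on_accept(listen_sock):
    conn, addr = listen_sock.accept()
    conn.setblocking(False)
    contexts[conn] = b""
    sel.register(conn, selectors.EVENT_READ, on_readable)   # register_event_for_client_socket()

def on_readable(conn):
    data = conn.recv(4096)
    if not data:                            # client closed the connection
        sel.unregister(conn)
        del contexts[conn]
        conn.close()
        return
    contexts[conn] += data                  # save the partial request (the context)
    if b"\r\n\r\n" not in contexts[conn]:   # request not complete yet: wait for more events
        return
    # parse() would go here; we assume the small fixed reply fits in the send buffer
    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
    sel.unregister(conn)
    del contexts[conn]
    conn.close()

def main():
    listen_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listen_sock.bind(("0.0.0.0", 8080))
    listen_sock.listen(128)
    listen_sock.setblocking(False)
    sel.register(listen_sock, selectors.EVENT_READ, on_accept)   # register_event_for_listen_socket()
    while True:                             # the event loop
        for key, _ in sel.select():         # wait_for_event()
            callback = key.data
            callback(key.fileobj)

if __name__ == "__main__":
    main()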

When multithreading is used to handle concurrency, the operating system does this work for us, and we do not need to care about task switching.
Because a thread deals with only one client, it can simply call recv repeatedly to read the request data and then parse it, without worrying that a blocking recv will stall the processing of other clients.
That is why writing concurrent code with multiple threads is simple and straightforward.

As mentioned above, the event-driven mechanism is an efficient programming model for handling concurrency on Linux.
The process of repeatedly waiting for events and processing the received events one by one is the event loop.

So where do the concepts of synchronous and asynchronous come in?

Synchronous means that we start a task and wait for it to finish.
Asynchronous means that we start a task and, without waiting for it to finish, continue doing other work; we get notified when the task produces a result, or we simply do not care about the result.

In the multithreaded model, creating a new thread for each accepted client is an asynchronous step.
In the event-driven model, leaving a client registered for events while it has no data to read, instead of blocking on it, is also asynchronous.

If we also consider file IO, handing the IO request to another thread or a group of threads (a thread pool) and being notified when the work is done is asynchronous as well.
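
As a sketch of that pattern in Python, using the standard ThreadPoolExecutor (the file path is arbitrary, and the callback stands in for notifying the main thread or event loop):

from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=4)

def read_file_blocking(path):
    # The blocking file IO runs in a worker thread, not in the main thread.
    with open(path, "rb") as f:
        return f.read()

def on_done(future):
    # Called when the worker finishes; a real server would hand the result
    # back to the event loop (e.g. via a queue or a wakeup descriptor).
    print("read", len(future.result()), "bytes")

future = pool.submit(read_file_blocking, "/etc/hostname")   # start the task...
future.add_done_callback(on_done)                           # ...and get notified later
print("main thread keeps doing other work without waiting")
pool.shutdown(wait=True)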
