I'm inserting data into a MySQL table, and incoming rows can duplicate the primary key. Each row carries a timestamp; when a duplicate arrives, the row with the latest timestamp is the one that should be kept. Later on there will be millions of rows. The primary key is a composite unique key built from fields such as a UUID and an order number.
When a primary-key collision occurs, how should I compare the timestamps to decide which row to keep? The approaches I can think of are:
1. First save the primary key and timestamp in NoSQL (or similar), then on a duplicate key look up the stored timestamp to decide whether to update or ignore.
2. Put the filter/comparison in the INSERT SQL itself, so the row is updated when the incoming timestamp is newer and ignored when it is older.
(These two ideas amount to the same check, but I suspect the performance of doing it this way will be very poor.)
3. Execute the INSERT directly, catch the duplicate-key error if there is one, then compare the timestamps and either update or ignore.
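Idea 2 can be a single conditional upsert, so it is one round trip per row with no separate lookup. A minimal runnable sketch, using SQLite as a stand-in for MySQL and a hypothetical `orders` table (the `uuid`/`order_no`/`ts`/`payload` columns are assumptions, not from the original schema):

```python
import sqlite3

# SQLite stands in for MySQL so the sketch is runnable. The MySQL
# equivalent of the upsert below would be something like:
#   INSERT INTO orders (uuid, order_no, ts, payload) VALUES (...)
#   ON DUPLICATE KEY UPDATE
#     payload = IF(VALUES(ts) > ts, VALUES(payload), payload),
#     ts      = IF(VALUES(ts) > ts, VALUES(ts), ts);
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        uuid     TEXT NOT NULL,
        order_no TEXT NOT NULL,
        ts       INTEGER NOT NULL,
        payload  TEXT,
        PRIMARY KEY (uuid, order_no)   -- composite key, as in the question
    )
""")

UPSERT = """
    INSERT INTO orders (uuid, order_no, ts, payload)
    VALUES (?, ?, ?, ?)
    ON CONFLICT (uuid, order_no) DO UPDATE
    SET ts = excluded.ts, payload = excluded.payload
    WHERE excluded.ts > orders.ts    -- only overwrite with newer data
"""

conn.execute(UPSERT, ("u1", "A001", 100, "first"))
conn.execute(UPSERT, ("u1", "A001", 90,  "stale"))   # older: ignored
conn.execute(UPSERT, ("u1", "A001", 120, "latest"))  # newer: applied

row = conn.execute(
    "SELECT ts, payload FROM orders WHERE uuid = ? AND order_no = ?",
    ("u1", "A001"),
).fetchone()
print(row)  # (120, 'latest')
```

The duplicate check rides on the primary-key index the table already has, so no extra NoSQL store is needed.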
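Idea 3 can be sketched the same way: attempt a plain INSERT, and only on a duplicate-key error fall back to a conditional UPDATE. Again this uses SQLite and the same hypothetical `orders` table so it runs as-is; with a MySQL driver such as PyMySQL the caught exception would be its `IntegrityError` (duplicate-key errno 1062) instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        uuid     TEXT NOT NULL,
        order_no TEXT NOT NULL,
        ts       INTEGER NOT NULL,
        payload  TEXT,
        PRIMARY KEY (uuid, order_no)
    )
""")

def insert_latest(conn, uuid, order_no, ts, payload):
    """Plain INSERT first; on a duplicate key, keep whichever row is newer."""
    try:
        conn.execute(
            "INSERT INTO orders (uuid, order_no, ts, payload) VALUES (?, ?, ?, ?)",
            (uuid, order_no, ts, payload),
        )
    except sqlite3.IntegrityError:
        # Duplicate key: the AND ts < ? predicate makes the update a no-op
        # when the incoming timestamp is not newer than the stored one.
        conn.execute(
            "UPDATE orders SET ts = ?, payload = ? "
            "WHERE uuid = ? AND order_no = ? AND ts < ?",
            (ts, payload, uuid, order_no, ts),
        )

insert_latest(conn, "u1", "A001", 100, "first")
insert_latest(conn, "u1", "A001", 90,  "stale")   # older: no-op
insert_latest(conn, "u1", "A001", 120, "latest")  # newer: applied

result = conn.execute("SELECT ts, payload FROM orders").fetchone()
print(result)  # (120, 'latest')
```

Note this costs two round trips whenever a duplicate occurs, which matters once duplicates are common at millions of rows.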