problem description
I want to update a document in MongoDB by storing back an array after deleting some of its elements, but when at least two threads perform this task at the same time I get a lost update (the second kind of lost-update problem). For example, the original array is [0, 1]: thread A deletes 0 and stores the array in the database, thread B deletes 1 and stores the array in the database. The result I expect is an empty array, but the actual result is [1].
the background of the problem and what I have tried
I am a sophomore and my knowledge of databases and concurrency is very limited; I only (maybe) understand the cause of this behavior from googling. Based on what I have learned, I have come up with four possible solutions (not sure they actually work):
1. Add a read lock to the document so that no other thread can read it until the read-modify-write operation is complete.
2. Use optimistic locking and retry when I find the data I read is no longer up to date.
3. Use Redis instead; Redis does not seem to have this problem.
4. Keep the fetched FileMsg object in memory and make the code that modifies it thread-safe.
I don't know how to implement the first or second solution, and I am not sure which one is best. I tried optimistic locking by adding the @Version annotation and a version field to my POJO, but I don't know how to retry (I don't understand the code I found online).
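For reference, here is a minimal sketch of what "optimistic lock + retry" (solution 2) means, simulated in plain Java rather than Spring Data MongoDB so it can run standalone. The "database" is just a list plus a version counter; a writer re-reads and retries whenever the version changed between its read and its write. All names here (OptimisticRetryDemo, removeWithRetry, writeIfUnchanged) are illustrative, not from the real project.

```java
import java.util.ArrayList;
import java.util.List;

public class OptimisticRetryDemo {
    // one "document": the array plus a @Version-style counter
    static final Object lock = new Object();
    static List<Integer> unfinishedChunk = new ArrayList<>(List.of(0, 1));
    static long version = 0;

    // read the current state: {snapshot copy, version seen}
    static Object[] read() {
        synchronized (lock) {
            return new Object[]{new ArrayList<>(unfinishedChunk), version};
        }
    }

    // write back only if nobody else wrote in between; true on success
    static boolean writeIfUnchanged(List<Integer> newValue, long expectedVersion) {
        synchronized (lock) {
            if (version != expectedVersion) return false;  // someone got there first
            unfinishedChunk = newValue;
            version++;                                     // bump version on every write
            return true;
        }
    }

    // the retry loop: read -> modify -> conditional write, until it sticks
    static void removeWithRetry(int chunk) {
        while (true) {
            Object[] state = read();
            @SuppressWarnings("unchecked")
            List<Integer> copy = (List<Integer>) state[0];
            copy.remove(Integer.valueOf(chunk));           // remove by value, not by index
            if (writeIfUnchanged(copy, (long) state[1])) return;
            // version moved: loop around and retry with fresh data
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> removeWithRetry(0));
        Thread b = new Thread(() -> removeWithRetry(1));
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(unfinishedChunk);               // []
    }
}
```

With Spring Data's @Version, the conditional write is done for you: save() throws OptimisticLockingFailureException when the version is stale, and the retry loop catches that exception, re-fetches, and tries again.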
related code
// repository method: overwrites the whole unfinishedChunk array
@Autowired
protected MongoTemplate mongoTemplate;

public void updateUnfinishedById(String id, LinkedList<Integer> list) {
    Query query = new Query(Criteria.where("_id").is(id));
    Update update = new Update().set("unfinishedChunk", list);
    mongoTemplate.findAndModify(query, update, FileMsg.class);
}
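The overwrite above is exactly what makes concurrent updates lose each other's writes: each thread writes back a full array it read earlier. One direction worth knowing about is letting the database remove the single element atomically (in Spring Data MongoDB terms that would be `new Update().pull("unfinishedChunk", chunk)` instead of `set`), so no thread ever writes a stale array. The runnable sketch below uses `ConcurrentHashMap.compute` as a stand-in for the database's atomic in-place update; the names (AtomicPullDemo, pull) are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class AtomicPullDemo {
    static final ConcurrentHashMap<String, List<Integer>> db = new ConcurrentHashMap<>();

    // remove one value atomically; compute runs with the map entry locked,
    // playing the role of MongoDB's server-side $pull
    static void pull(String id, int chunk) {
        db.compute(id, (k, list) -> {
            List<Integer> copy = new ArrayList<>(list);
            copy.remove(Integer.valueOf(chunk));   // remove by value, not by index
            return copy;
        });
    }

    public static void main(String[] args) throws InterruptedException {
        db.put("task1", List.of(0, 1));
        Thread a = new Thread(() -> pull("task1", 0));
        Thread b = new Thread(() -> pull("task1", 1));
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(db.get("task1"));       // []
    }
}
```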
// service implementation
import com.xuebling.newpoetryspread.common.utils.UploadUtils;
import com.xuebling.newpoetryspread.dao.FileMsgRepository;
import com.xuebling.newpoetryspread.pojo.FileChunk;
import com.xuebling.newpoetryspread.pojo.FileMsg;
import com.xuebling.newpoetryspread.pojo.enums.ResponseMsg;
import com.xuebling.newpoetryspread.pojo.result.Response;
import com.xuebling.newpoetryspread.pojo.result.ResponseData;
import com.xuebling.newpoetryspread.service.UploadFileService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;
import java.io.File;
import java.io.IOException;
import java.util.LinkedList;
import java.util.Optional;
@Service
public class UploadFileServiceImpl implements UploadFileService {
    @Autowired
    private FileMsgRepository fileMsgRepository;
    protected Logger logger = LoggerFactory.getLogger(this.getClass());
    @Value("${fileStorePath}")
    private String fileStorePath;

    // write one uploaded chunk into the target file; when the last chunk
    // arrives, validate the assembled file against the source md5
    @Transactional(isolation = Isolation.SERIALIZABLE)
    @Override
    public Object finishChunkUpload(FileChunk fileChunk) {
        // load the upload-task record this chunk belongs to
        Optional<FileMsg> fileMsg = fileMsgRepository.findById(fileChunk.getTaskId());
        // logger.error("version: " + fileMsg.get().getVersion());
        // the task record may not exist any more
        if (!fileMsg.isPresent()) {
            logger.error("upload task not found, aborting");
            return new Response(ResponseMsg.TARGETNULL);
        }
        // initialize the chunk's derived fields from the task's chunk size
        logger.info("chunkSize: " + fileMsg.get().getChunkSize());
        fileChunk.init(fileMsg.get().getChunkSize());
        try {
            logger.info("source md5: " + fileMsg.get().getSourceFileMD5());
            // only handle chunks that are still marked unfinished
            if (fileMsg.get().getUnfinishedChunk().contains(fileChunk.getChunk())) {
                logger.info("fileStorePath: " + fileStorePath);
                String targetFileName = fileMsg.get().getBeginTime() + fileMsg.get().getFileName(); // unique name per task
                File targetFile = new File(fileStorePath + targetFileName);
                logger.info("target file path: " + targetFile.getAbsolutePath());
                logger.info("target file name: " + targetFile.getName());
                // create the target file when the first chunk arrives
                if (!targetFile.exists()) {
                    logger.info("target file does not exist, creating it");
                    targetFile.createNewFile();
                }
                logger.info("writing chunk to target file");
                // write the chunk at its offset via random access
                UploadUtils.writeByRandomAccess(fileChunk, targetFile);
                logger.info("chunk written");
                // remove this chunk from the unfinished list and store the list back
                LinkedList<Integer> linkedList = fileMsg.get().getUnfinishedChunk();
                logger.info("chunk: " + fileChunk.getChunk());
                linkedList.remove(fileChunk.getChunk());
                logger.info("unfinishedChunk: " + fileMsg.get().getUnfinishedChunk());
                fileMsgRepository.updateUnfinishedById(fileMsg.get().getTaskId(), linkedList); // fixme: the lost update happens here
                // if no chunk is left, validate the assembled file's md5
                if (linkedList.size() == 0 || fileMsg.get().getChunkNum() == 1) {
                    logger.info("all chunks uploaded, validating file");
                    String completeFileMD5 = UploadUtils.validateFile(targetFile);
                    logger.info("assembled file md5: " + completeFileMD5);
                    if (completeFileMD5.equals(fileMsg.get().getSourceFileMD5())) {
                        // return the file name (uri) to the client
                        logger.info("md5 check passed");
                        return new ResponseData(ResponseMsg.ALLDONE, targetFileName);
                    }
                    // md5 mismatch: the whole upload failed
                    else return new ResponseData(ResponseMsg.ALLFAIL, fileMsg);
                }
            }
            else {
                return new Response(ResponseMsg.TARGETNULL);
            }
        } catch (IOException e) {
            logger.error("IO error while writing chunk");
            e.printStackTrace();
        }
        return new Response(ResponseMsg.ONEDONE);
    }
}
what result do you expect? What do you actually see?
I want to keep using MongoDB, so I have to solve this second-kind lost-update problem. Again: the original array is [0, 1], thread A deletes 0 and stores the array in the database, thread B deletes 1 and stores the array in the database. The result I want is [], an empty array, but the actual result is [1].
How can I solve this problem? If you can, please also recommend some books or materials on this kind of issue. I would appreciate it!