I have a CSV file with more than 1 million rows. Each line has to be read and processed, and the average processing time per line is 250 ms. I am considering two scenarios:
- Scenario 1: read one line, process it, then read the next line (streaming).
- Scenario 2: read the whole file into memory at once, then iterate over each line. (A Java sketch of both scenarios follows.)
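
For concreteness, here is a minimal sketch of the two scenarios, assuming a plain UTF-8 text CSV; the `process` method is a hypothetical stand-in for the ~250 ms of per-line work:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CsvReadStrategies {

    // Scenario 1: stream the file, processing one line at a time.
    // Memory use stays roughly constant regardless of file size.
    static void streamLines(Path csv) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(csv)) {
            String line;
            while ((line = reader.readLine()) != null) {
                process(line); // ~250 ms per line
            }
        }
    }

    // Scenario 2: load the whole file into memory, then iterate.
    // Needs heap space proportional to the file size.
    static void loadThenIterate(Path csv) throws IOException {
        List<String> lines = Files.readAllLines(csv); // entire file on the heap
        for (String line : lines) {
            process(line);
        }
    }

    // Hypothetical placeholder for the actual per-line work.
    static void process(String line) {
        // ... ~250 ms of work ...
    }
}
```
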
Later, I found that a single Java process could not keep up (at 250 ms per line, 1 million lines take roughly 10^6 × 0.25 s ≈ 69 hours single-threaded), so I wanted to start a few more. But when I started 3, Linux would automatically kill 1 or 2 of them (the kernel OOM killer, I assume).
Which scheme is more reasonable in this scenario, and what is the underlying principle?