When I store a large file in the backend, I split it into n blocks, which are stored in MongoDB. On the front end I use JS to download the blocks concurrently: the current idea is to send n Ajax requests from the front end, wait for all n responses with Promise.all, concatenate the returned data, and then trigger the download by setting it on an <a> tag with the download attribute. At present there are two problems:
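For reference, this is roughly what my current approach looks like (names and the /api/block URL scheme are just illustrative, not my actual code):

```javascript
// Merge the returned blocks, in order, into one Uint8Array.
function concatBlocks(buffers) {
  const total = buffers.reduce((sum, b) => sum + b.byteLength, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const b of buffers) {
    out.set(new Uint8Array(b), offset);
    offset += b.byteLength;
  }
  return out;
}

async function downloadFile(fileId, n) {
  // One Ajax (fetch) request per block.
  const requests = Array.from({ length: n }, (_, i) =>
    fetch(`/api/block/${fileId}/${i}`).then((res) => res.arrayBuffer())
  );
  // Wait for all n blocks -- this is where the whole file ends up in memory.
  const buffers = await Promise.all(requests);
  const blob = new Blob([concatBlocks(buffers)]);
  // Trigger the save dialog via an <a download> element.
  const a = document.createElement("a");
  a.href = URL.createObjectURL(blob);
  a.download = fileId;
  a.click();
  URL.revokeObjectURL(a.href);
}
```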
1. Because I wait for all the blocks to come back before merging and downloading, the entire file sits in memory first, which is obviously unreasonable for a large file of, say, 1 GB or 2 GB.
2. As for resumable downloads (resuming from a breakpoint), the idea is: when the user clicks to download, the front end first reads the partially downloaded file and determines whether the i-th block (0 <= i < n) has already been downloaded, then requests only the missing blocks. But the problem is that JS cannot write to a local file.
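To make the bookkeeping concrete, this is the kind of helper I have in mind: given the total block count and the set of indices already completed, compute which blocks still need to be requested. The open problem is exactly where the completed set (or the partial data) can live, since the page cannot write the local file directly.

```javascript
// Return the indices of blocks that still need to be downloaded.
// `completed` is whatever record of finished blocks we can persist.
function missingBlocks(n, completed) {
  const done = new Set(completed);
  const missing = [];
  for (let i = 0; i < n; i++) {
    if (!done.has(i)) missing.push(i);
  }
  return missing;
}

// e.g. with 5 blocks and blocks 0, 2, 4 finished:
// missingBlocks(5, [0, 2, 4]) → [1, 3]
```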
Do you have any good solutions? A friend suggested looking into whether the browser's "proxy" could be used, i.e. every request from the browser would go through this proxy, and the various file operations would be implemented inside it. But he didn't specify what this proxy actually is or how to implement it, so it has been hard to search for material. Any advice would be much appreciated.