This problem has already been discussed here, but no consensus was reached.
I have some ideas on how an insert operation could be implemented for some popular file systems. If the FS uses an extent-based structure (e.g. ext4, NTFS, probably Btrfs), new extents could be spliced into the middle of a file, so that each part of the file can be modified independently of the others. I suspect this would require tracking the length of each part separately, but in some situations the payoff could be drastic. From my own experience: I regularly run into slow processing of large files, so such functionality may well be in demand. And that is not even mentioning databases, which have always wanted this capability.
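For what it's worth, Linux already exposes a limited form of this on extent-based file systems: fallocate(2) with the FALLOC_FL_INSERT_RANGE flag shifts the existing extents forward and splices a new range into the middle of a file, with the restriction that both offset and length must be multiples of the file-system block size. A minimal sketch via ctypes (Linux-only; the file path is illustrative, and tmpfs and many other file systems simply return EOPNOTSUPP):

```python
import ctypes
import os

FALLOC_FL_INSERT_RANGE = 0x20  # from <linux/falloc.h>

libc = ctypes.CDLL(None, use_errno=True)
# int fallocate(int fd, int mode, off_t offset, off_t len)
libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                           ctypes.c_longlong, ctypes.c_longlong]

def insert_range(fd, offset, length):
    """Shift file contents at `offset` forward by `length` bytes
    without rewriting them (offset/length must be block-aligned)."""
    if libc.fallocate(fd, FALLOC_FL_INSERT_RANGE, offset, length) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

path = "/tmp/insert_demo.bin"            # illustrative path
blk = os.statvfs("/tmp").f_bsize         # FS block size: required alignment
with open(path, "wb") as f:
    f.write(b"A" * blk + b"B" * blk)     # two blocks of data

fd = os.open(path, os.O_RDWR)
try:
    # Open a block-sized gap between the A's and the B's; the B block
    # is not copied, only its extent mapping is shifted.
    insert_range(fd, blk, blk)
    result = ("inserted", os.fstat(fd).st_size)  # grew by exactly one block
except OSError:
    # ext4 and XFS support this; other file systems report EOPNOTSUPP.
    result = ("unsupported", None)
finally:
    os.close(fd)
    os.unlink(path)

print(result)
```

The block-alignment restriction is exactly the kind of "irregular length" bookkeeping I mentioned above: the kernel avoids it by refusing non-aligned inserts, which is also why memory mappings survive the operation.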
A good use case would be distributing read/write operations across multiple disk tiers. That is quite relevant for modern multi-disk systems (often SSD-based) running on multicore, multithreaded SMP (or even NUMA) machines.
I have already taken a look at MPI-IO (introduced in MPI-2). It offers something similar, especially with respect to parallel processing, but it does not provide the dynamic file-resizing capability I am proposing.
I would like your opinions on this subject: what inconveniences or deficiencies might arise when trying to implement this feature? One such drawback may be irregular file-block lengths, which would break the memory-mapping mechanism, for example. I just want to note that at some point such functionality will be implemented, because:
- Data volumes grow rapidly, and so do files.
- Parallel processing is already a reality, and there are few other good ways left to keep improving performance.