>For one, you get no performance benefit over non-cow unless you update in place. It’s what every ‘fast and easy’ filesystem has to do - fat (including exfat), ext3, ext4, etc.
That is just a matter of priorities then. And just because you might opt not to update in place in some situations doesn't mean that you can never do it.
I'm not sure what you mean by "delayed allocation and data loss"; I don't find it relevant to this discussion, since that is about application data corruption, not filesystem corruption. COW also suffers from it, unless you have NILFS-style automatic continuous snapshots. Now, with COW you probably have a much better chance of recovering the data with forensic tools (also discussed in this thread regarding ZFS), but that comes with big downsides and is hardly a relevant argument for COW in the vast majority of use cases anyway.
ZFS's minimum block size corresponds to the disk sector size, so for most practical purposes it behaves the same as your typical non-COW filesystem there. Writing 1 byte requires you to read 4 KiB, update it in memory, recalculate the checksum, and then write it back.
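The read-modify-write cycle for a sub-block update can be sketched like this (a toy illustration, not ZFS's actual code; the 4 KiB block size and CRC-32 are stand-ins for whatever the filesystem actually uses):

```python
import zlib

BLOCK_SIZE = 4096  # assumed minimum block / sector size

def write_one_byte(device, offset, value):
    """Update a single byte on a block device: read the whole
    containing block, patch it in memory, recompute the checksum,
    and write the whole block back in place."""
    block_start = (offset // BLOCK_SIZE) * BLOCK_SIZE
    device.seek(block_start)
    block = bytearray(device.read(BLOCK_SIZE))  # read 4 KiB
    block[offset - block_start] = value         # update in memory
    checksum = zlib.crc32(bytes(block))         # recalculate checksum
    device.seek(block_start)
    device.write(bytes(block))                  # write it back
    return checksum
```

A COW filesystem does the same read-modify-checksum work; the only difference is that the final write lands at a new location instead of overwriting the old block.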
How you remove old records shouldn't depend on COW, should it?
My only statement was that checksumming isn't in any way dependent on COW.
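To make the point concrete, here is a minimal sketch of per-block checksumming over in-place updates, with no COW anywhere (a hypothetical toy store; real filesystems keep the checksums in separate metadata blocks):

```python
import zlib

class ChecksummedStore:
    """Toy block store: overwrites blocks in place (no COW) while
    keeping a per-block checksum that is verified on every read."""

    def __init__(self, nblocks, block_size=4096):
        self.block_size = block_size
        self.blocks = [bytes(block_size) for _ in range(nblocks)]
        self.checksums = [zlib.crc32(b) for b in self.blocks]

    def write(self, i, data):
        assert len(data) == self.block_size
        self.blocks[i] = data                 # update in place
        self.checksums[i] = zlib.crc32(data)  # keep checksum current

    def read(self, i):
        data = self.blocks[i]
        if zlib.crc32(data) != self.checksums[i]:
            raise IOError(f"checksum mismatch in block {i}")
        return data
```

Detection of bit rot works exactly the same here as in a COW design; COW only changes where the new version of a block is written.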
The discussion about compression is beside the point, as compression is a common feature of non-COW filesystems anyway.
I haven't seen a proper argument for the corruption claims. And that you get corrupted data if you interrupt a write is not a huge deal. Mind you: a corrupted write, not a corrupted filesystem. The data was toast anyway. A typical COW filesystem would at best save you one "block" of data, which is hardly worth celebrating. Your application won't care whether you wrote 557 out of 1000 blocks or 556 out of 1000; your document is trashed either way. You need to restore from backup (or from a previous snapshot, which of course is a typical killer feature of COW).
There are also several ways to solve the corruption issue. ReFS, for instance, has both data checksums and metadata checksums but only does copy-on-write for the metadata. (Edit: I was wrong about this; it uses COW for data too if data checksumming is enabled.)
dm-integrity can be used at a layer below the filesystem and solves it with a journal: https://www.kernel.org/doc/html/latest/admin-guide/device-ma...
Yes, COW is popular, and for good reasons; so is checksumming. It isn't surprising that modern filesystems employ both, especially since the costs of both have become less and less relevant over time.