Conversation

Replying to and
It's not slow to do a properly scoped fdatasync before the rename. It's faster than doing a full fsync on the file after the rename, which doesn't even guarantee that the rename itself was synced, because that requires an fsync on the directory rather than the file anyway.
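For concreteness, here's a minimal C sketch of that ordering, using hypothetical file names: the new file's contents are flushed with fdatasync() before the rename, and the rename itself is then persisted with an fsync() on the parent directory.

    /* Minimal sketch (hypothetical names): durably replace a file by
     * writing a temp file, flushing it before the rename, then syncing
     * the directory so the rename itself is on stable storage. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int replace_file(const char *dir, const char *tmp,
                            const char *final_name,
                            const char *data, size_t len)
    {
        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, data, len) != (ssize_t)len) { /* treat a short write as failure */
            close(fd);
            return -1;
        }
        /* Flush the file data before the rename so the final name can never
         * point at blocks that were still unwritten at the time of a crash. */
        if (fdatasync(fd) != 0) {
            close(fd);
            return -1;
        }
        close(fd);

        if (rename(tmp, final_name) != 0)
            return -1;

        /* The rename is a directory operation; making it durable requires an
         * fsync on the directory, not on the file. */
        int dfd = open(dir, O_RDONLY | O_DIRECTORY);
        if (dfd < 0)
            return -1;
        int rc = fsync(dfd);
        close(dfd);
        return rc;
    }

    int main(void)
    {
        const char *contents = "example contents\n";
        return replace_file(".", "config.tmp", "config.json",
                            contents, strlen(contents)) == 0 ? 0 : 1;
    }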
By doing the rename before the sync, it's possible for the rename to be committed to storage before the data. On ext3 with data=ordered that couldn't happen, because ext3 didn't have delayed allocation of blocks, but that behavior was never guaranteed or portable. I doubt most developers even knew about it.
Replying to and
I disagree with this assessment. It was guaranteed by my, and I think many others', understanding of what data=ordered meant. Delayed allocation is a hack that licenses the fs driver to violate what data=ordered means.
Replying to and
What data=ordered means is only relevant if the code was specifically written for ext3 rather than being portable across filesystems. For the vast majority of code, which isn't written to target a specific filesystem implementation, it shouldn't matter.
Replying to and
From my perspective, you're framing it wrong by treating the code as being "written for" anything. The code is just written to POSIX (without sync options) with no concept of power failure existing. The filesystem is providing user-facing (not app-facing) features on top...
...to give the user (not the application) a level of data integrity they've selected (and made tradeoffs for) based on whether they deem power failure or kernel crash something that could plausibly happen and whether they deem the data valuable.
Replying to and
You can choose to use ext4 without delayed allocation. From an application or library perspective it's not relevant, since portable code that implements transactions correctly still works fine. Relying on the ext3 ordering is broken on far more than just ext4 with delayed allocation.
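As a sketch of that first point: ext4's nodelalloc mount option disables delayed allocation. The device and mount point below are hypothetical, and the same option string can simply go in the fstab options field instead; this uses the mount(2) syscall directly and needs root.

    /* Sketch: mount ext4 with delayed allocation disabled via "nodelalloc".
     * Device and mount point are hypothetical; run as root. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        if (mount("/dev/sda2", "/mnt/data", "ext4", 0, "nodelalloc") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }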
Replying to and
You're treating this as a [non-]contract between the application and the filesystem, in which case of course it would be "non-portable assumptions". I treat them as having no relationship, and this being a matter of contract between the user and the filesystem.
Replying to and
Okay, but if you want transactions in application or library code, fsync/fdatasync are required because both filesystems and hardware delay writes and perform them in batches with the order determined based on efficiency. It's not something specific to ext4 at all.
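To make that concrete, here's a minimal sketch with hypothetical file names: the explicit fdatasync() in the middle is the only ordering barrier the application gets; without it, the kernel and the device are free to persist the two writes in whichever order is most efficient.

    /* Sketch: use fdatasync() as the ordering barrier between a log record
     * and the data change it describes (hypothetical file names). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static void write_all(int fd, const char *buf, size_t len)
    {
        while (len > 0) {
            ssize_t n = write(fd, buf, len);
            if (n < 0) {
                perror("write");
                exit(1);
            }
            buf += n;
            len -= (size_t)n;
        }
    }

    int main(void)
    {
        int log = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        int dat = open("data.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (log < 0 || dat < 0) {
            perror("open");
            return 1;
        }

        const char *record = "begin update\n";
        write_all(log, record, strlen(record));

        /* Ordering barrier: the log record must be durable before the data
         * it describes. Without this call, nothing stops the data write
         * below from reaching storage first. */
        if (fdatasync(log) != 0) {
            perror("fdatasync");
            return 1;
        }

        const char *payload = "new contents\n";
        write_all(dat, payload, strlen(payload));

        /* Commit point: the data itself is durable once this returns. */
        if (fdatasync(dat) != 0) {
            perror("fdatasync");
            return 1;
        }

        close(log);
        close(dat);
        return 0;
    }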
Replying to and
I don't want transactions. This thread is not about transactions or any programming-level contracts. It's about the extent of inconsistency that can be observed in their absence under different options, and about the nasty hacks that hurt performance for related reasons.
If data_flush is enabled, data is flushed at checkpoints, resulting in semantics comparable to data=journal without anything close to the same performance cost, since there are no copies to and from a journal. The filesystem itself acts as the journal rather than data being copied to and from one.
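Assuming the data_flush being described here is the f2fs mount option of that name, a minimal sketch of enabling it (device and mount point are hypothetical; run as root):

    /* Sketch: mount f2fs with data_flush so file data is flushed as part of
     * each checkpoint. Device and mount point are hypothetical. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        if (mount("/dev/nvme0n1p2", "/mnt/data", "f2fs", 0, "data_flush") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }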
Replying to and
I never had any problems with ext3. With ext4, the fs driver idiotically aborts the journal, forces everything read-only, and corrupts the disk whenever the eMMC is 1% slower than its advertised response time, and now this performance nonsense too...
Replying to and
I have a bad impression of both ext3 and ext4. I've preferred using XFS for many years. I've experimented with f2fs and it provides great performance on an NVMe SSD, but it's not worth using on a SATA SSD, and I don't consider it mature enough to use on my main NVMe SSD yet.