Scenario in which Log Structured File System might not be efficient


  • I was asked this question during an interview with Red Hat.

    The answer: in a log-structured file system (LFS), the data blocks of a file are not kept physically close together on disk, because every write, including an overwrite of existing data, goes to the current head of the log. Such a file system therefore performs poorly under a read-heavy workload whose reads are not absorbed by the cache. A rough sketch of the effect follows below.
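    A minimal sketch of that scattering effect (the block counts, overwrite workload, and seek metric are illustrative assumptions, not anything claimed in the answer above): a file written sequentially stays contiguous under an update-in-place layout, but after random overwrites in a log-structured layout a sequential logical read touches widely separated physical addresses.

    ```python
    # Sketch: how random overwrites fragment a file in a log-structured layout.
    # Parameters below are made-up for illustration only.
    import random

    FILE_BLOCKS = 1000          # logical blocks in the file
    OVERWRITES = 5000           # random in-place overwrites issued by the app

    def seeks_for_sequential_scan(physical):
        """Count discontinuities when reading logical blocks 0..N-1 in order."""
        return sum(1 for i in range(1, len(physical))
                   if physical[i] != physical[i - 1] + 1)

    # Update-in-place FS (ext4-like): an overwrite rewrites the same physical
    # block, so the initial contiguous layout never changes.
    inplace = list(range(FILE_BLOCKS))

    # Log-structured FS: every write, including an overwrite, lands at the
    # current head of the log; the old copy is merely invalidated.
    lfs = list(range(FILE_BLOCKS))        # initial sequential write: blocks 0..N-1
    log_head = FILE_BLOCKS
    for _ in range(OVERWRITES):
        blk = random.randrange(FILE_BLOCKS)   # application overwrites a random block
        lfs[blk] = log_head                   # new copy goes to the log head
        log_head += 1

    print("seeks, update-in-place:", seeks_for_sequential_scan(inplace))   # ~0
    print("seeks, log-structured :", seeks_for_sequential_scan(lfs))       # close to FILE_BLOCKS
    ```

    With uncached reads, each of those discontinuities is a potential seek on a spinning disk, which is why the read-heavy, cache-miss-heavy workload is the bad case for LFS.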


  • Two scenarios: a sequential scan of a file after many random writes to it, and the significant garbage-collection (segment-cleaning) overhead under heavy write load.

    An LFS scatters the contents of such a file across its log/disk, which hinders a sequential scan, whereas an update-in-place file system like ext4 keeps the file's blocks where they were originally allocated. A sketch of the cleaning cost follows below.
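    A minimal sketch of the cleaning cost under the same made-up workload as above (segment size and counts are arbitrary assumptions): after random overwrites, old segments are only partially live, so before the cleaner can reclaim them it must copy the surviving blocks forward, which is extra I/O that surfaces as GC overhead under heavy load.

    ```python
    # Sketch: segment-cleaning cost after random overwrites in a log-structured layout.
    # All sizes are illustrative assumptions.
    import random

    SEG_BLOCKS = 64              # blocks per log segment
    FILE_BLOCKS = 1000
    OVERWRITES = 5000

    # Track which physical block currently holds each logical block.
    loc = list(range(FILE_BLOCKS))
    head = FILE_BLOCKS
    for _ in range(OVERWRITES):
        blk = random.randrange(FILE_BLOCKS)
        loc[blk] = head          # overwrite appends a new copy at the log head
        head += 1

    live = set(loc)              # physical blocks still holding current data
    segments = (head + SEG_BLOCKS - 1) // SEG_BLOCKS
    free_for_nothing = 0         # fully dead segments, reclaimable without copying
    copied = 0                   # live blocks the cleaner must rewrite elsewhere
    for s in range(segments):
        seg_live = sum(1 for b in range(s * SEG_BLOCKS, min((s + 1) * SEG_BLOCKS, head))
                       if b in live)
        if seg_live == 0:
            free_for_nothing += 1
        else:
            copied += seg_live   # cleaning this segment means copying its live blocks

    print(f"log segments used: {segments}")
    print(f"segments reclaimable without copying: {free_for_nothing}")
    print(f"live blocks to copy if the rest are cleaned: {copied}")
    ```

    The copies the cleaner performs are writes that the application never asked for (write amplification), and under a heavy write load that cleaning competes with foreground I/O.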

