Revert "mm/filemap: avoid buffered read/write race to read inconsistent data"
author Baokun Li <libaokun1@huawei.com>
Wed, 24 Jan 2024 14:28:56 +0000 (22:28 +0800)
committer Christian Brauner <brauner@kernel.org>
Thu, 25 Jan 2024 16:23:51 +0000 (17:23 +0100)
This reverts commit e2c27b803bb6 ("mm/filemap: avoid buffered read/write
race to read inconsistent data"). After converting the i_size_read()/
i_size_write() helpers to use smp_load_acquire()/smp_store_release(), it
is already guaranteed that changes to page contents are visible before we
see the increased inode size, so the extra smp_rmb() in filemap_read()
can be removed.
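
For context, the companion change makes the 64-bit i_size helpers look
roughly like the following (a minimal sketch assuming BITS_PER_LONG == 64;
the authoritative definitions live in include/linux/fs.h):

	static inline loff_t i_size_read(const struct inode *inode)
	{
		/* pairs with smp_store_release() in i_size_write() */
		return smp_load_acquire(&inode->i_size);
	}

	static inline void i_size_write(struct inode *inode, loff_t i_size)
	{
		/* pairs with smp_load_acquire() in i_size_read() */
		smp_store_release(&inode->i_size, i_size);
	}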

Signed-off-by: Baokun Li <libaokun1@huawei.com>
Link: https://lore.kernel.org/r/20240124142857.4146716-3-libaokun1@huawei.com
Signed-off-by: Christian Brauner <brauner@kernel.org>
mm/filemap.c

index 750e779..a72dd2e 100644
@@ -2608,15 +2608,6 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
                        goto put_folios;
                end_offset = min_t(loff_t, isize, iocb->ki_pos + iter->count);
 
-               /*
-                * Pairs with a barrier in
-                * block_write_end()->mark_buffer_dirty() or other page
-                * dirtying routines like iomap_write_end() to ensure
-                * changes to page contents are visible before we see
-                * increased inode size.
-                */
-               smp_rmb();
-
                /*
                 * Once we start copying data, we don't want to be touching any
                 * cachelines that might be contended:
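
In other words, the ordering the removed barrier used to provide is now
carried by the release/acquire pair on i_size itself. A simplified sketch
of the two sides (illustrative only; the surrounding code is elided):

	/* writer side, e.g. generic_write_end(), simplified: */
	/* ... data copied into the folio, folio marked dirty ... */
	i_size_write(inode, pos + copied);	/* smp_store_release(&inode->i_size) */

	/* reader side, filemap_read(), simplified: */
	isize = i_size_read(inode);		/* smp_load_acquire(&inode->i_size) */
	/* ... copy folio contents to the iterator, bounded by isize ... */

If the reader observes the enlarged i_size, the release/acquire pairing
guarantees it also observes the stores to the page contents, which is
exactly what the removed smp_rmb() was ensuring.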