x86/sgx: Break up long non-preemptible delays in sgx_vepc_release()
Author:     Jack Wang <jinpu.wang@ionos.com>
AuthorDate: Wed, 6 Sep 2023 13:17:12 +0000 (15:17 +0200)
Commit:     Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 6 Sep 2023 21:55:09 +0000 (23:55 +0200)
On large enclaves we hit the softlockup warning with the following call trace:

xa_erase()
sgx_vepc_release()
__fput()
task_work_run()
do_exit()

The latency issue is similar to the one fixed in:

  8795359e35bc ("x86/sgx: Silence softlockup detection when releasing large enclaves")

The test system has 64GB of enclave memory, all of it assigned to a single VM.
Releasing the 'vepc' takes a long time and causes long latencies, which trigger
the softlockup warning.

Add cond_resched() to give other tasks a chance to run and reduce
latencies, which also avoids triggering the softlockup detector.

[ mingo: Rewrote the changelog. ]

Fixes: 540745ddbc70 ("x86/sgx: Introduce virtual EPC for use by KVM guests")
Reported-by: Yu Zhang <yu.zhang@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Yu Zhang <yu.zhang@ionos.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Acked-by: Haitao Huang <haitao.huang@linux.intel.com>
Cc: stable@vger.kernel.org
diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
index c3e37ea..7aaa365 100644
--- a/arch/x86/kernel/cpu/sgx/virt.c
+++ b/arch/x86/kernel/cpu/sgx/virt.c
@@ -204,6 +204,7 @@ static int sgx_vepc_release(struct inode *inode, struct file *file)
                        continue;
 
                xa_erase(&vepc->page_array, index);
+               cond_resched();
        }
 
        /*
@@ -222,6 +223,7 @@ static int sgx_vepc_release(struct inode *inode, struct file *file)
                        list_add_tail(&epc_page->list, &secs_pages);
 
                xa_erase(&vepc->page_array, index);
+               cond_resched();
        }
 
        /*
@@ -243,6 +245,7 @@ static int sgx_vepc_release(struct inode *inode, struct file *file)
 
                if (sgx_vepc_free_page(epc_page))
                        list_add_tail(&epc_page->list, &secs_pages);
+               cond_resched();
        }
 
        if (!list_empty(&secs_pages))