author    Sean Christopherson <seanjc@google.com>  2024-08-02 13:21:21 -0700
committer Sean Christopherson <seanjc@google.com>  2024-10-30 14:27:51 -0700
commit 7e513617da71b1c0b6497cda1ddfc86a7c4d1765 (patch)
tree   e52f78c58760062416e86145b86e2852afca1c69 /lib/locking-selftest-mutex.h
parent 5cb1659f412041e4780f2e8ee49b2e03728a2ba6 (diff)
KVM: Rework core loop of kvm_vcpu_on_spin() to use a single for-loop
Rework kvm_vcpu_on_spin() to use a single for-loop instead of making "two" passes over all vCPUs. Given N=kvm->last_boosted_vcpu, the logic is to iterate from vCPU[N+1]..vCPU[N-1], i.e. using two loops is just a kludgy way of handling the wrap from the last vCPU to vCPU0 when a boostable vCPU isn't found in vCPU[N+1]..vCPU[MAX].

Open code the xa_load() instead of using kvm_get_vcpu() to avoid reading online_vcpus in every loop, as well as the accompanying smp_rmb(), i.e. make it a custom kvm_for_each_vcpu(), for all intents and purposes.

Opportunistically clean up the comment explaining the logic.

Link: https://lore.kernel.org/r/20240802202121.341348-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Diffstat (limited to 'lib/locking-selftest-mutex.h')
0 files changed, 0 insertions, 0 deletions