cpu-kvm,arch-arm: correct arm kvm virtual time

According to the kernel code*1, the virtual time is derived from the
physical timer and cntvoff. When the simulator switches from KVM to gem5,
the physical timer keeps ticking. After gem5 has simulated its models and
control goes back to KVM, the virtual time has therefore drifted forward
as well. We should update cntvoff ourselves to get the correct virtual
time.

Moreover, according to update_vtimer_cntvoff*2, setting cntvoff affects
all vcpus. Instead of keeping a per-vcpu virtual time in BaseArmKvmCPU, we
maintain a global virtual time, restore it before the first vcpu enters
KVM, and save it after the last vcpu returns from KVM.

1. https://code.woboq.org/linux/linux/virt/kvm/arm/arch_timer.c.html#826
2. https://code.woboq.org/linux/linux/virt/kvm/arm/arch_timer.c.html#update_vtimer_cntvoff

Change-Id: Ie054104642f2a6d5a0740f39b947f5f2c29c36f3
Reviewed-on: https://gem5-review.googlesource.com/c/public/gem5/+/42161
Reviewed-by: Earl Ou <shunhsingou@google.com>
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
Maintainer: Andreas Sandberg <andreas.sandberg@arm.com>
Tested-by: kokoro <noreply+kokoro@google.com>
Author: Yu-hsin Wang
Date: 2021-03-03 18:23:23 +08:00
parent bf91a33e2d
commit 60481e5111
2 changed files with 39 additions and 0 deletions


@@ -38,8 +38,10 @@
#include "arch/arm/kvm/base_cpu.hh"
#include <linux/kvm.h>
#include <mutex>
#include "arch/arm/interrupts.hh"
#include "base/uncontended_mutex.hh"
#include "debug/KvmInt.hh"
#include "dev/arm/generic_timer.hh"
#include "params/BaseArmKvmCPU.hh"
@@ -58,6 +60,21 @@ using namespace ArmISA;
#define INTERRUPT_VCPU_FIQ(vcpu) \
INTERRUPT_ID(KVM_ARM_IRQ_TYPE_CPU, vcpu, KVM_ARM_IRQ_CPU_FIQ)
namespace {
/**
 * When the simulator returns from KVM to simulate other models, the
 * in-kernel timer doesn't stop. We have to save the virtual time and
 * restore it before going into KVM next time. Moreover, setting the
 * virtual time affects all vcpus according to the KVM implementation. We
 * maintain a global virtual time here, restore it before the first vcpu
 * enters KVM, and save it after the last vcpu returns from KVM.
*/
uint64_t vtime = 0;
uint64_t vtime_counter = 0;
UncontendedMutex vtime_mutex;
} // namespace
BaseArmKvmCPU::BaseArmKvmCPU(const BaseArmKvmCPUParams &params)
: BaseKvmCPU(params),
@@ -147,6 +164,26 @@ BaseArmKvmCPU::kvmRun(Tick ticks)
return kvmRunTicks;
}
void
BaseArmKvmCPU::ioctlRun()
{
// Check whether this is the first vcpu entering KVM. If so, restore
// the virtual time.
{
std::lock_guard<UncontendedMutex> l(vtime_mutex);
if (vtime_counter++ == 0)
setOneReg(KVM_REG_ARM_TIMER_CNT, vtime);
}
BaseKvmCPU::ioctlRun();
// Check whether this is the last vcpu returning from KVM. If so, save
// the virtual time.
{
std::lock_guard<UncontendedMutex> l(vtime_mutex);
if (--vtime_counter == 0)
getOneReg(KVM_REG_ARM_TIMER_CNT, &vtime);
}
}
const BaseArmKvmCPU::RegIndexVector &
BaseArmKvmCPU::getRegList() const
{


@@ -56,6 +56,8 @@ class BaseArmKvmCPU : public BaseKvmCPU
protected:
Tick kvmRun(Tick ticks) override;
/** Override for synchronizing state in kvm_run */
void ioctlRun() override;
/** Cached state of the IRQ line */
bool irqAsserted;