The existing implementation of the vfmv instruction did not type-cast
the first element of the source vector, which caused the "freg" to
interpret the result as a NaN.
With the type cast to f32, the value is correctly recognized as a float
and sign-extended to be stored in the fd register.
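The boxing behavior behind this fix can be sketched in plain Python (not gem5 code): RISC-V requires a 32-bit float held in a 64-bit FP register to have its upper 32 bits set to all ones; any other pattern is read back as the canonical NaN.

```python
import struct

def nan_box(f32_bits: int) -> int:
    """Box a 32-bit float bit pattern into a 64-bit freg value."""
    return (0xFFFFFFFF << 32) | (f32_bits & 0xFFFFFFFF)

def unbox(freg: int) -> float:
    """Read a single-precision value back from a 64-bit freg."""
    if (freg >> 32) != 0xFFFFFFFF:
        # Improperly boxed value: treated as the canonical NaN.
        return float("nan")
    return struct.unpack("<f", struct.pack("<I", freg & 0xFFFFFFFF))[0]

# 1.5 stored without boxing reads back as NaN; boxed it round-trips.
bits = struct.unpack("<I", struct.pack("<f", 1.5))[0]
assert unbox(nan_box(bits)) == 1.5
```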
Git issue: https://github.com/gem5/gem5/issues/827
Change-Id: Ibe9873910827594c0ec11cb51ac0438428c3b54e
---------
Co-authored-by: Debjyoti B <bhatta53@imec.be>
Co-authored-by: Tommaso Marinelli <tommarin@ucm.es>
Remove the following files:
* src/dev/virtio/rng 2.cc
* src/dev/virtio/rng 2.hh
which were copies of rng.hh and rng.cc, probably added to the
repository by accident. They were not compiled by SCons.
Change-Id: I9d1da19cc243c513ab7af887b1b6260d8e361b57
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
This PR fixes #948, in which running KVM CPUs through the updated gem5
interface in SE mode causes an immediate crash.
To fix this, I added a check to set_se_binary_workload that checks
whether any of the cores are KVM and, if so, sets a couple of knobs on
the board and process that are required to make KVM work. The
deprecated se.py script, which sets these knobs, is able to run KVM in
SE mode just fine, so doing the same here fixed the bug.
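A minimal sketch of the added check; set_se_binary_workload is gem5's real stdlib function, but this simplified predicate and the core-type strings are illustrative, not the actual patch:

```python
def kvm_knobs_needed(core_types):
    """Return True if any core in the processor is a KVM core, in which
    case the board and process need extra KVM-specific settings
    (hypothetical helper used for illustration only)."""
    return any("Kvm" in t or "KVM" in t for t in core_types)

assert kvm_knobs_needed(["X86KvmCPU", "TimingSimpleCPU"])
assert not kvm_knobs_needed(["AtomicSimpleCPU"])
```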
This commit adds support for vector unit-stride segment store operations
for RISC-V (vssegXeXX). This implementation is based on two types of
microops:
- VsSegIntrlv microops that properly interleave source registers into
structs.
- VsSeg microops that store data in memory as contiguous structs of
several fields.
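The data movement performed by the two microop types can be sketched in plain Python (this is not gem5 microop code): the interleave step gathers element i of every source field register into one contiguous struct, and the store step then writes the structs back to back.

```python
def interleave(field_regs):
    """VsSegIntrlv sketch: field_regs[f][i] is element i of field f.
    Return the memory image: struct0{f0,f1,...}, struct1{f0,f1,...}"""
    nf, vl = len(field_regs), len(field_regs[0])
    return [field_regs[f][i] for i in range(vl) for f in range(nf)]

# Two fields, three elements -> three two-field structs in memory.
assert interleave([[1, 2, 3], [10, 20, 30]]) == [1, 10, 2, 20, 3, 30]
```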
Change-Id: Id80dd4e781743a60eb76c18b6a28061f8e9f723d
Gem5 issue: https://github.com/gem5/gem5/issues/382
Implement several features new in ROCm 6.0 and features required for
future devices. Includes the following:
- Support for multiple command processors
- Improve handling of unknown register addresses
- Use AddrRange for MMIO address regions
- Handle GART writes through SDMA copy
- Implement PCIe indirect reads and writes
- Improve PM4 write to check dword count
- Implement common MI300X instruction
This update modifies the test configuration to specify the versions of
resources used, rather than automatically using the latest versions.
Previously, if a resource was updated for a change, it could potentially
cause tests to fail if those tests were incompatible with the new
version of the resource.
Now, with this change, tests are tied to specific versions of resources,
ensuring that any updates to resources will require corresponding
updates to the tests to maintain compatibility.
Change-Id: I9633b1749f6c6c82af6aa6697b7e7656020f62fa
Currently gem5 assumes that there is only one command processor (CP),
which contains the PM4 packet processor. Some GPU devices have multiple
CPs, and the driver tests each of them individually during POST to
determine whether they are used. These additional CPs therefore need to
be supported.
This commit allows for multiple PM4 packet processors which represent
multiple CPs. Each of these processors will have its own independent
MMIO address range. To more easily support ranges, the MMIO addresses
now use AddrRange to index a PM4 packet processor instead of the
hard-coded constexpr MMIO start and size pairs.
By default only one PM4 packet processor is created, meaning the
functionality of the simulation is unchanged for devices currently
supported in gem5.
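The range-based lookup can be sketched as follows. AddrRange is modeled minimally here in Python; gem5's real AddrRange is a C++ class, and the addresses are illustrative.

```python
class AddrRange:
    """Minimal stand-in for gem5's AddrRange: [start, start + size)."""
    def __init__(self, start, size):
        self.start, self.end = start, start + size
    def contains(self, addr):
        return self.start <= addr < self.end

def pm4_for_addr(pm4_procs, addr):
    """pm4_procs: list of (AddrRange, processor) pairs. Return the PM4
    packet processor whose MMIO range covers addr, or None."""
    for rng, proc in pm4_procs:
        if rng.contains(addr):
            return proc
    return None

procs = [(AddrRange(0x1000, 0x100), "cp0"), (AddrRange(0x2000, 0x100), "cp1")]
assert pm4_for_addr(procs, 0x2010) == "cp1"
```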
Change-Id: I977f4fd3a169ef4a78671a4fb58c8ea0e19bf52c
PCIe can read/write to any 32-bit address using the PCI index/index2
registers as an address and then reading/writing the corresponding
data/data2 register.
This commit adds this functionality and removes one magic value being
written to support GPU POST. This feature is disabled for Vega10 which
relies on an MMIO trace for too many values to implement in the MMIO
interface.
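The index/data indirection can be sketched like this: software writes a 32-bit address to the index (or index2) register, then reads or writes the paired data (or data2) register to access that address. The dict backing store stands in for the 32-bit address space and is illustrative.

```python
class PciIndirect:
    """Sketch of PCI index/data indirect register access."""
    def __init__(self):
        self.space = {}   # addr -> 32-bit value (stand-in for memory)
        self.index = 0
    def write_index(self, addr):
        self.index = addr          # select the target address
    def write_data(self, value):
        self.space[self.index] = value
    def read_data(self):
        return self.space.get(self.index, 0)

dev = PciIndirect()
dev.write_index(0xDEAD0000)
dev.write_data(0x1234)
assert dev.read_data() == 0x1234
```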
Change-Id: Iacfdd1294a7652fc3e60304b57df536d318c847b
The SRBM write packets were previously not required. This commit
implements SRBM writes to set a register using the new setRegVal
interface. SRBM writes seem to be used for SRIOV-enabled devices.
Change-Id: I202653d339e882e8de59d69a995f65332b2dfb8c
The top level AMDGPUDevice currently reads/writes all unknown registers
to/from a map containing the previously written value. This is intended
as a way to handle registers that are not part of the model but the
driver requires for functionality. Since this is at the top level, it
can mask changes to register values which do not go through the same
interface. For example, reading an MMIO, changing via PM4 queue, and
reading again returns the stale cached value.
This commit removes the usage of the regs map in AMDGPUDevice,
implements some important MMIOs that were previously handled by it, and
moves the unknown register handling to the NBIO aperture only. To reduce
the number of additional MMIOs to implement, the display manager in
vega10 is now disabled.
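The staleness problem the change removes can be sketched as follows; the class structure is illustrative, not the AMDGPUDevice code:

```python
class TopLevelRegs:
    """Sketch of the old behavior: a top-level map of unknown registers
    returns the last MMIO-written value, masking updates made through
    other paths such as a PM4 queue write."""
    def __init__(self):
        self.mmio_cache = {}   # unknown-register map (old behavior)
        self.hw = {}           # actual register state
    def mmio_write(self, addr, val):
        self.mmio_cache[addr] = val
        self.hw[addr] = val
    def pm4_write(self, addr, val):
        self.hw[addr] = val    # bypasses the map: cache goes stale
    def mmio_read(self, addr):
        return self.mmio_cache.get(addr, 0)   # may return a stale value

dev = TopLevelRegs()
dev.mmio_write(0x10, 1)
dev.pm4_write(0x10, 2)
assert dev.mmio_read(0x10) == 1   # stale: real register holds 2
```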
Change-Id: Iff0a599dd82d663c7e710b79c6ef6d0ad1fc44a2
The SDMA engine can potentially be used to write to the GART address
range. Since gem5 has a shadow copy of the GART table to avoid sending
functional reads to device memory, the GART table must be updated when
copying to the GART range.
This changeset adds a check in the VM for GART range and implements the
SDMA copy packet writing to the GART range. A fatal is added to write
and ptePde, which are the only other two ways to write to memory, as
using these packets to update the GART table has not been observed.
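The added check can be sketched as below: when an SDMA copy lands in the GART range, the shadow GART table is updated alongside device memory, so translations never need a functional read of device memory. The 8-byte entry stride here is an assumption for illustration.

```python
def sdma_copy(dst, values, memory, gart_base, gart_size, shadow_gart):
    """Copy values to dst, mirroring any writes that fall inside the
    GART range into the shadow GART table (8-byte entries assumed)."""
    for i, val in enumerate(values):
        addr = dst + i * 8
        memory[addr] = val
        if gart_base <= addr < gart_base + gart_size:
            shadow_gart[(addr - gart_base) // 8] = val

memory, shadow = {}, {}
sdma_copy(0x100, [7, 8], memory, 0x100, 0x1000, shadow)
assert shadow == {0: 7, 1: 8}
```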
Change-Id: I1e62dfd9179cc9e987659e68414209fd77bba2bd
The write data packet can write multiple dwords but currently always
assumes there is one dword, which can cause some write data to be
missed. This case is not common, but the number of dwords is implicitly
defined in the PM4 header.
This changeset passes the PM4 header to write data so that the correct
number of dwords can be determined. For now we assume no page crossing
when writing multiple dwords as the driver should be checking for that.
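Deriving the length from the header instead of assuming one dword can be sketched as below. The field layout (type in bits 31:30, count in bits 29:16) follows the common PM4 type-3 format; the exact count bias used here is an assumption for illustration.

```python
def pm4_body_dwords(header: int) -> int:
    """Number of payload dwords implied by a PM4 type-3 header.
    Assumption: the 14-bit count field encodes (dwords - 1)."""
    count = (header >> 16) & 0x3FFF
    return count + 1

# Type-3 header (bits 31:30 == 0b11) with count == 3 -> 4 dwords.
assert pm4_body_dwords(0xC0000000 | (3 << 16)) == 4
```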
Change-Id: I0e8c3cbc28873779f468c2a11fdcf177210a22b7
The ROCm 6.0 driver adds a node_id field to interrupts which must match
before passing on the interrupt to be cleared by the cookie from gem5's
interrupt handler implementation. Add this field and enable for gfx942.
The usage of the field can be seen in event_interrupt_isr_v9_4_3 at
https://github.com/ROCm/ROCK-Kernel-Driver/blob/roc-6.0.x/drivers/
gpu/drm/amd/amdkfd/kfd_int_process_v9.c#L449
Change-Id: Iae8b8f0386a5ad2852b4a3c69f2c161d965c4922
The main decoder for GPU instructions looks at the first 9 bits of a
dword to determine either the instruction or a subDecode table with more
information for specific instruction types. For flat instructions the
first 9 bits currently consist of 6 fixed encoding bits, a reserved bit,
and the first two bits of the opcode. Hence to support all opcodes there
are four indirections to the flat subDecode table. In MI300 the reserved
bit is part of a field to determine memory scope and therefore may be
non-zero.
This commit adds four additional calls to the subDecode table for the
cases where the scope bit is 1. See page 468 (PDF page 478) below:
https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/
instruction-set-architectures/
amd-instinct-mi300-cdna3-instruction-set-architecture.pdf
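The table duplication can be sketched as follows: a 9-bit index is formed from 6 fixed encoding bits, the scope/reserved bit, and the top 2 opcode bits, so the 4 existing entries with scope == 0 plus the 4 new entries with scope == 1 all route to the flat subDecode table. The fixed bit pattern below is hypothetical, chosen only to illustrate the arithmetic.

```python
FLAT_BITS = 0b110111  # hypothetical 6-bit fixed flat encoding

def flat_decode_indices():
    """All 9-bit main-table indices that must point at the flat
    subDecode table: scope bit 0 (pre-MI300) and scope bit 1 (MI300)."""
    return [(FLAT_BITS << 3) | (scope << 2) | op_hi
            for scope in (0, 1) for op_hi in range(4)]

idx = flat_decode_indices()
assert len(idx) == 8                          # 4 old + 4 new entries
assert all((i >> 3) == FLAT_BITS for i in idx)
```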
Change-Id: Ic3c786f0ca00a758cbe87f42c5e3470576f73a32
gpu-compute: Add support for skipping GPU kernels
This commit adds two new command-line options:
--skip-until-gpu-kernel N
Skips (non-blit) GPU kernels until the target kernel is reached.
Execution continues normally from there. Blit kernels are not skipped
because they are responsible for copying the kernel code and metadata
for the non-blit kernels. Note that skipping kernels can impact
correctness; this feature is only useful if the kernel of interest has
no data-dependent behavior, or its data-dependent behavior is not based
on data generated by the skipped kernels.
--exit-after-gpu-kernel N
Ends the simulation after completing (non-blit) GPU kernel N.
This commit also renames two existing command-line options:
--debug-at-gpu-kernel -> --debug-at-gpu-task
--exit-at-gpu-kernel -> --exit-at-gpu-task
These were renamed because they count GPU tasks, which include both
kernels launched by the application as well as blit kernels.
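The skip decision implied by --skip-until-gpu-kernel can be sketched as below; the function name and signature are illustrative, not gem5's:

```python
def should_skip(kernel_idx, is_blit, skip_until):
    """Blit kernels are never skipped (they copy kernel code and
    metadata); non-blit kernels are skipped until the target index."""
    if is_blit:
        return False
    return kernel_idx < skip_until

assert should_skip(2, False, 5)        # non-blit kernel 2 is skipped
assert not should_skip(5, False, 5)    # target reached: run normally
assert not should_skip(1, True, 5)     # blit kernels always run
```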
Change-Id: If250b3fd2db05c1222e369e9e3f779c4422074bc
MI200 adds support for four FP32 packed-math instructions. These are
VOP3P instructions, which have a negative input modifier field. The
description made it unclear whether these modifiers apply to F32 packed
math; however, the assembly of some Tensile kernels uses them, so
support is added here. Tested with a PyTorch nn.Dropout kernel which
uses negative modifiers.
Change-Id: I568a18c084f93dd2a88439d8f451cf28a51dfe79
The datatype is U32 but should be F32. This is causing an implicit cast
leading to incorrect results. This fixes nn.Dropout in PyTorch.
Change-Id: I546aa917fde1fd6bc832d9d0fa9ffe66505e87dd
This change fixes two issues:
1) The --cacheline_size option was setting the system cache line size
but not the Ruby cache line size, and the mismatch was causing assertion
failures.
2) The submitDispatchPkt() function accesses the kernel object in
chunks, with the chunk size equal to the cache line size. For cache line
sizes >64B (e.g. 128B), the kernel object is not guaranteed to be
aligned to a cache line and it was possible for a chunk to be partially
contained in two separate device memories, causing the memory access to
fail.
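The second issue reduces to a boundary-crossing check: a cache-line-sized chunk read from an unaligned kernel object can straddle a line boundary and, with lines larger than 64B, potentially two separate device memories. A minimal sketch:

```python
def crosses_boundary(addr, size, line_size):
    """True if the access [addr, addr + size) spans a line boundary,
    i.e. its first and last bytes fall in different lines."""
    return addr // line_size != (addr + size - 1) // line_size

# A 128B chunk from an object aligned to 64B but not 128B straddles
# a 128B line; the same chunk from an aligned base does not.
assert crosses_boundary(96, 128, 128)
assert not crosses_boundary(0, 128, 128)
```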
Change-Id: I8e45146901943e9c2750d32162c0f35c851e09e1
Co-authored-by: Michael Boyer <Michael.Boyer@amd.com>
From [1]: the PrivateL1PrivateL2Cache hierarchy has been amended with
an MMUCache, which is basically a small cache in front of the page
table walker.
Not every ISA makes use of it. Arm, for example, already implements
caching of page table walks via the partial_levels parameter in the
ArmTLB.
With this patch we define a new module which explicitly makes use of
the WalkCache. Configurations that do not require another cache in the
first level of the memsys (for the PTW) can use the
PrivateL1PrivateL2CacheHierarchy.
[1]: https://gem5-review.googlesource.com/c/public/gem5/+/49364
In the RISC-V unprivileged spec [1], misaligned load/store support
depends on the EEI.
In the RISC-V privileged spec v1.12 [2], the PMAs specify whether
misaligned accesses are supported for each data width and memory
region.
Per the `mcause` spec [3], we can directly raise a misaligned exception
if no memory region supports misaligned access. If part of the memory
supports misaligned access, we need to translate the `vaddr` to a
`paddr` first and then check the `paddr`. A page fault or access fault
is raised before a misaligned fault.
The benefit of moving the check_alignment option from an ISA option to
a PMA option is that we can mark only part of the memory as supporting
misaligned loads/stores.
The MMU will check alignment against the virtual address if no
misaligned memory region is specified. If some misaligned memory
regions are supported, it translates the address first and checks
alignment last.
[1]
https://github.com/riscv/riscv-isa-manual/blob/main/src/rv32.adoc#base-instruction-formats
[2]
https://github.com/riscv/riscv-isa-manual/blob/main/src/machine.adoc#physical-memory-attributes
[3]
https://github.com/riscv/riscv-isa-manual/blob/main/src/machine.adoc#machine-cause-register-mcause
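The check order described above can be sketched in plain Python (types and names are illustrative, not the gem5 MMU code):

```python
def check_misaligned(vaddr, width, translate, misaligned_regions):
    """Return the fault string, or None if the access is allowed.
    translate() may itself raise a page/access fault, which therefore
    takes priority over the misaligned fault."""
    if vaddr % width == 0:
        return None
    if not misaligned_regions:
        return "misalign-fault"      # checked on the vaddr directly
    paddr = translate(vaddr)         # page/access fault raised first
    for lo, hi in misaligned_regions:
        if lo <= paddr < hi:
            return None              # PMA allows misalignment here
    return "misalign-fault"

ident = lambda v: v + 0x1000
assert check_misaligned(0x11, 4, ident, []) == "misalign-fault"
assert check_misaligned(0x11, 4, ident, [(0x1000, 0x2000)]) is None
```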
As X86 and RISCV rely on a table walker cache, we change their stdlib
configs to use the newly defined PrivateL1PrivateL2WalkCacheHierarchy.
Change-Id: I63c3f70a9daa3b2c7a8306e51af8065bf1bea92b
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
From [1]: the PrivateL1PrivateL2Cache hierarchy has been amended with
an MMUCache, which is basically a small cache in front of the page
table walker. Not every ISA makes use of it. Arm, for example, already
implements caching of page table walks via the partial_levels parameter
in the ArmTLB.
With this patch we define a new module which explicitly makes use of
the WalkCache. Configurations that do not require another cache in the
first level of the memsys (for the PTW) can use the
PrivateL1PrivateL2CacheHierarchy.
[1]: https://gem5-review.googlesource.com/c/public/gem5/+/49364
Change-Id: I17f7e68940ee947ca5b30e6ab3a01dafeed0f338
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
There were two errors causing the Weekly tests to fail. Each has a
patch in this PR:
1. Fixed an incorrect version of `artifact-downloader` (v4 instead of
v3).
2. Fixed an incorrect use of `working-directory`, which caused the use
of `build/VEGA_X86/gem5.opt` to fail: the binary is not accessible from
the "hip" `working-directory`. The default `${github.workspace}` is
sufficient, so the step now runs from there.
Change-Id: I99875270c77dde92d3ec2ae0a07760905eaf903e
At the moment the SMMU is not handling translation errors gracefully in
the way described by the SMMUv3 spec: whenever a translation fault
arises, simulation aborts as a whole. With this PR we add minimal
support for translation fault handling, which means:
1) Not aborting simulation, but rather:
2) Writing an event entry to the SMMU_EVENTQ (event queue)
3) Signaling the PE that an error arose and there is an event entry to
be consumed. The signaling is achieved with the addition of the eventq
SPI. Using an MSI is also possible, though it is currently disabled by
SMMU_IDR0.MSI being set to zero.
The PR is addressing issues reported by
https://github.com/orgs/gem5/discussions/898
The SMMU_IRQ_CTRL register had been made optionally writeable by a
prior patch [1], even though interrupts were not supported in the
SMMUv3 model.
As we are partially enabling IRQ support, we remove this option and
make SMMU_IRQ_CTRL always writeable.
[1]: https://gem5-review.googlesource.com/c/public/gem5/+/38555
Change-Id: Ie1f9458d583a5d8bcbe450c3e88bda6b3c53cf10
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
See https://github.com/orgs/gem5/discussions/898
The SMMUv3 Event Queue is basically unused at the moment. Whenever a
transaction fails we actually abort simulation. The sendEvent method
could be used to actually report the failure to the driver but it is
lacking interrupt support to notify the PE there is an event to handle.
The SMMUv3 spec allows both wired and MSI interrupts to be used.
We add the eventq_irq SPI param to the SMMU object and we draft an
initial sendInterrupt utility that makes use of it whenever it is
needed.
Change-Id: I6d103919ca8bf53794ae4bc922cbdc7156adf37a
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
Rely on the architected solution instead of aborting simulation.
This means handling writes to the Event queue to signal managing
software there was a fault in the SMMU
Change-Id: I7b69ca77021732c6059bd6b837ae722da71350ff
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
The struct fields of the SMMUEvent did not match the SMMUv3 spec.
This was "not an issue" as events have been implicitly disabled until
now (every translation error was aborting simulation).
With generateEvent we automatically construct a SMMU event from
a translation result.
Change-Id: Iba6a08d551c0a99bb58c4118992f1d2b683f62cf
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
A faulting translation should return additional information
(other than the fault type). This will be used by future
patches to properly populate the SMMU event record of the
event queue
As we currently support only two faults:
1) F_TRANSLATION
2) F_PERMISSION
we add to TranslResult only the relevant fault information: type,
class, stage, and ipa.
Change-Id: I0a81d5fc202e1b6135cecdcd6dfd2239c2f1ba7e
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
Reading the Context Descriptor (CD) might require a stage-2
translation. At the moment doReadCD does not check the return value of
translateStage2.
This means that any stage-2 fault will be silently discarded and an
invalid address will be used/returned.
By returning a translation result we make sure any error happening in
the second stage of translation is properly flagged.
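The propagation can be sketched as below; the dict-based stand-in for the TranslResult type is illustrative, not the SMMUv3 model's code:

```python
def read_cd(cd_addr, translate_stage2):
    """Sketch of a doReadCD-style helper that returns the translation
    result, so a stage-2 fault is surfaced to the caller instead of
    silently producing an invalid address."""
    res = translate_stage2(cd_addr)
    if res.get("fault") is not None:
        return res                     # flag the stage-2 fault upstream
    return {"fault": None, "addr": res["addr"]}

ok = lambda a: {"fault": None, "addr": a | 0x80000000}
bad = lambda a: {"fault": "F_TRANSLATION"}
assert read_cd(0x40, bad)["fault"] == "F_TRANSLATION"
```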
Change-Id: I2ecd43f7e23080bf8222bc3addfabbd027ee8feb
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
We don't check the fault type directly. This will improve readability
once the TranslResult class is augmented with extra fields.
Change-Id: I5acafaabf098d6ee79e1f0c384499cc043a75a9d
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
This commit adds support for vector unit-stride segment load operations
for RISC-V (vlseg<NF>e<X>). This implementation is based on two types
of microops:
microops:
- VlSeg microops that load data as it is organized in memory in structs
of several fields.
- VectorDeIntrlv microops that properly deinterleave structs into
destination registers.
Gem5 issue: https://github.com/gem5/gem5/issues/382
With the introduction of multi-ISA gem5, we no longer store TARGET_ISA
as a string in the root section of the checkpoint [1]. There is
therefore currently no way to assess the ISA of a CPU/ThreadContext.
This is a problem when it comes to checkpoint updates, which are ISA
specific.
By explicitly serializing the ISA as a string under the cpu.isa section
we avoid this problem and we let cpt_upgraders be aware of the ISA in
use.
[1]: https://gem5-review.googlesource.com/c/public/gem5/+/48884
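The serialization can be sketched with a dict standing in for gem5's ini-style checkpoint sections. The cpu.isa section name follows the commit message; the key name is illustrative.

```python
def serialize_isa(cpt, cpu_path, isa_name):
    """Store the ISA string under the <cpu>.isa checkpoint section."""
    cpt.setdefault(f"{cpu_path}.isa", {})["isa"] = isa_name

def checkpoint_isa(cpt, cpu_path):
    """What a cpt_upgrader can now do: recover the ISA in use for a
    given CPU, or None for older checkpoints without the field."""
    return cpt.get(f"{cpu_path}.isa", {}).get("isa")

cpt = {}
serialize_isa(cpt, "system.cpu", "riscv")
assert checkpoint_isa(cpt, "system.cpu") == "riscv"
```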