As discussed in https://github.com/orgs/gem5/discussions/954:
In the refactor made by commit f65df9b959, conditional indirect
branches are no longer updated in the indirect predictor.
This kind of branch does not exist in x86 or Arm, but it is present
in PowerPC.
This patch enables the indirect predictor to track these branches.
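A minimal sketch of the intended behavior (the predicate name is
hypothetical; isIndirectCtrl/isUncondCtrl are the usual StaticInst type
queries):

```cpp
// Sketch: a branch belongs in the indirect predictor whenever it is
// indirect, whether or not it is also conditional.
bool trackInIndirectPredictor(const StaticInstPtr &inst)
{
    // Gating on inst->isUncondCtrl() as well would wrongly exclude
    // conditional indirect branches such as PowerPC's conditional
    // bcctr forms.
    return inst->isIndirectCtrl();
}
```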
Fix #1055. Prioritize committing from exiting threads before considering
other threads according to the specified SMT commit policy. All
instructions in the ROB for exiting threads should already have been
squashed. This ensures that the ROB instruction queues of all exiting
threads will be empty at the end of the current cycle, avoiding the
assertion failure encountered in #1055.
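A hedged sketch of the selection order (isThreadExiting exists in the
O3 CPU; policyDrivenThread is a placeholder for the configured policy):

```cpp
// Sketch: drain exiting threads first so their (already squashed) ROB
// entries are removed this cycle; otherwise defer to the SMT policy.
ThreadID getCommittingThread(const std::list<ThreadID> &activeThreads)
{
    for (ThreadID tid : activeThreads) {
        if (cpu->isThreadExiting(tid))
            return tid;
    }
    return policyDrivenThread(); // e.g., round-robin or oldest-ready
}
```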
Change-Id: Ib0178a1aa6e94bce2b6c49dd87750e82776639dc
Fix #1042. Clear the current fetch macro-op if the instruction
initiating the squash is the last micro-op in its macro-op.
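Roughly, in the fetch stage (a sketch, not the exact diff; `macroop`
is fetch's per-thread in-progress macro-op):

```cpp
// Sketch: when the squashing instruction is the final micro-op, drop
// the stale macro-op so fetch does not keep decoding micro-ops from it.
if (squashInst && squashInst->isLastMicroop())
    macroop[tid] = nullptr;
```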
Change-Id: I77f60334771277e47f19573d4067b3a7bc5488b2
Add support for event overflows when the host thread migrates across
different types of cores on a hybrid host architecture. This patch
achieves this by simply halving the sample period for each performance
counter. Since there are two types of cores, this guarantees that an
overflow event will trigger before N events occur, where N is the
requested period (e.g., number of instructions to simulate). This
may result in many early triggers (up to log2(N)) before the requested
period is reached. However, gem5's existing bookkeeping logic already
handles this case properly: if fewer events than requested occurred,
it will set a new period (N - observed) and resume execution. This loop
will exit once N events have actually occurred.
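An illustrative sketch of the halving and the rearm loop it relies on
(armCounter, runGuestUntilOverflow and readTotalEvents are hypothetical
helpers standing in for the real bookkeeping):

```cpp
// Sketch: with two core types, arming with half the remaining period
// guarantees an overflow fires before N total events occur, at the
// cost of up to log2(N) early triggers.
uint64_t observed = 0;
while (observed < requestedPeriod) {
    armCounter(std::max<uint64_t>(1, (requestedPeriod - observed) / 2));
    runGuestUntilOverflow();
    observed = readTotalEvents();
}
```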
Change-Id: Iff85237da1ae1aa25bc2045fbf9091726291fe36
Fix #1064 by adding support for hardware performance counters on hybrid
architectures like Intel Alder Lake.
Hybrid architectures have multiple types of cores, each of which
requires the instantiation of a separate performance counter. The KVM
CPU's PerfKvmCounter class was not aware of this and only instantiated a
single performance counter, implicitly bound to the P-core only. This
meant that if gem5 ever ran on an E-core, the various hardware
performance counters would not get updated properly, in some cases
always reading zero (e.g., for the number of instructions executed).
This patch adds support for hybrid host architectures as follows. First,
we convert PerfKvmCounter into an abstract class, which has two
concrete implementations: SimplePerfKvmCounter and HybridPerfKvmCounter.
The former is used for non-hybrid architectures or for non-hardware
performance counters and is functionally equivalent to the prior
implementation of PerfKvmCounter. The latter is used for instantiating
hardware performance counters (i.e., of type PERF_TYPE_HARDWARE) on
hybrid host architectures. It does so by internally instantiating two
SimplePerfKvmCounters, one for a P-core and one for an E-core. Upon
read, it sums the results of reading the two internal counters.
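A condensed sketch of the hybrid read path (member names are
assumptions; the real class also handles attaching, starting and
stopping both counters):

```cpp
class HybridPerfKvmCounter : public PerfKvmCounter
{
  public:
    uint64_t read() const override
    {
        // Events land in whichever counter matches the core type the
        // host thread is currently running on; report the sum.
        return pCoreCounter.read() + eCoreCounter.read();
    }

  private:
    SimplePerfKvmCounter pCoreCounter; // bound to the P-core PMU
    SimplePerfKvmCounter eCoreCounter; // bound to the E-core PMU
};
```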
Change-Id: If64fcb0e2fcc1b3a6a37d77455c2b21e1fc81150
Fixes #1033
In the BaseCPU object, _uncached_interrupt_response_ports is a class
variable, not an instance variable. #1004 changed the explicit
self._uncached_interrupt_response_ports assignment to use extend. This
caused the list of ports to be extended *for all cores*, which caused
problems when using a system with more than one core.
This reverts the `extend` part of the change, but keeps the rest.
Change-Id: I6dc7d6da6763048d82960229d34933a3a2ac36e0
Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
Fix issue #1004. When enabling SMT with the O3 cpu, only the first
interrupts object was getting initialized properly. This patch
initializes all interrupts objects, one per SMT thread.
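The shape of the fix, roughly (a sketch; the real initialization code
differs in detail):

```cpp
// Sketch: wire up every SMT thread's interrupts object, not only
// thread 0.
for (ThreadID tid = 0; tid < numThreads; tid++)
    interrupts[tid]->setCPU(this);
```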
Change-Id: I300782b645bd8ea3ef2497278fb73125ab4bf495
This commit updates the CPU models by removing the VectorXXX types and
updating FUs according to the newer SimdXXX ones. This is part of the
homogenization of the RISC-V Vector instruction types, which moved
from VectorXXX to SimdXXX.
Change-Id: I84baccd099b73a11cf26dd714487a9f272671d3d
This commit adds support for vector unit-stride segment store operations
for RISC-V (vssegXeXX). This implementation is based on two types of
microops:
- VsSegIntrlv microops that properly interleave source registers into
structs.
- VsSeg microops that store data in memory as contiguous structs of
several fields.
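A hypothetical helper illustrating the interleaving performed by the
VsSegIntrlv microops, for nf fields of eltSize bytes each:

```cpp
#include <cstdint>
#include <cstring>

// Element i of field f moves to position i*nf + f, so memory receives
// contiguous {field0, ..., field(nf-1)} structs.
void interleave(const uint8_t *src[], uint8_t *buf,
                unsigned nf, unsigned numElems, unsigned eltSize)
{
    for (unsigned i = 0; i < numElems; i++)
        for (unsigned f = 0; f < nf; f++)
            std::memcpy(buf + (i * nf + f) * eltSize,
                        src[f] + i * eltSize, eltSize);
}
```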
Change-Id: Id80dd4e781743a60eb76c18b6a28061f8e9f723d
Gem5 issue: https://github.com/gem5/gem5/issues/382
This commit adds support for vector unit-stride segment load operations
for RISC-V (vlseg<NF>e<X>). This implementation is based on two types of
microops:
- VlSeg microops that load data as it is organized in memory in structs
of several fields.
- VectorDeIntrlv microops that properly deinterleave structs into
destination registers.
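The load side mirrors this; a hypothetical helper for the
deinterleaving performed by the VectorDeIntrlv microops:

```cpp
#include <cstdint>
#include <cstring>

// Element i of field f is recovered from position i*nf + f of the
// loaded struct data and scattered into per-field destinations.
void deinterleave(const uint8_t *buf, uint8_t *dst[],
                  unsigned nf, unsigned numElems, unsigned eltSize)
{
    for (unsigned i = 0; i < numElems; i++)
        for (unsigned f = 0; f < nf; f++)
            std::memcpy(dst[f] + i * eltSize,
                        buf + (i * nf + f) * eltSize, eltSize);
}
```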
Gem5 issue: https://github.com/gem5/gem5/issues/382
This PR fixes https://github.com/gem5/gem5/issues/668. The first commit
fixes it for all ISAs other than Arm by setting the number of
architectural matrix registers to 0 for those ISAs that do not use them.
The 2nd commit then partly fixes it for Arm as well: by removing
RenameMap::numFreeEntries we no longer stall renaming unless a matrix
instruction is encountered. This means most binaries will run with SMT
as long as they don't use FEAT_SME instructions. Please note: this is
not simply an SMT fix; it generally addresses a shortcoming in the way
we were renaming instructions.
If an Arm binary wants to use SMT with FEAT_SME, the 4th commit makes
sure the lack of physical registers is reported explicitly at the
beginning of simulation, rather than silently blocking renaming.
Having the number of physical registers exactly match the number of
architectural ones does not guarantee proper execution, as it means the
freeList would have 0 registers available for renaming. In this case the
worst would happen: renaming would silently stall execution
indefinitely. With this change we report the issue to the user and fail
execution.
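A sketch of the check (the free-list query is an assumption about the
surrounding code, not the exact patch):

```cpp
// Sketch: fail loudly at construction time instead of letting rename
// stall silently later.
for (const auto &cls : {IntRegClass, FloatRegClass, VecRegClass,
                        MatRegClass}) {
    fatal_if(freeList.numFreeRegs(cls) == 0,
             "No free physical registers for a register class; the "
             "number of physical registers must exceed the "
             "architectural count.");
}
```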
Change-Id: I1eb968802f1a1a5115012f44b541542a682f887d
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
The method extracts the minimum number of free registers/entries
across all register classes [1]. This means that if we have saturated
all register storage for a particular class, renaming will stop as a
whole.
I believe it makes sense to keep renaming and only block it when an
instruction requiring the saturated register type is encountered. This
happens in the Rename::renameInsts method [2].
[1]: https://github.com/gem5/gem5/blob/stable/src/cpu/o3/rename_map.hh#L269
[2]: https://github.com/gem5/gem5/blob/stable/src/cpu/o3/rename.cc#L662
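A sketch of the per-instruction gating in Rename::renameInsts
(simplified; the exact helper and signatures are assumptions):

```cpp
// Sketch: stall only when this instruction needs a register from an
// exhausted class, instead of stalling whenever any class is empty.
bool canRename(const DynInstPtr &inst, UnifiedRenameMap &renameMap)
{
    for (int i = 0; i < inst->numDestRegs(); i++) {
        const RegId &reg = inst->destRegIdx(i);
        if (renameMap.numFreeEntries(reg.classValue()) == 0)
            return false;
    }
    return true;
}
```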
Change-Id: I932826a77a5c0b2e05d8fdcab0e6ca13cf0e3d23
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
This change improves the flexibility of the strided generator when
creating traces.
It allows the user to manually set the offset and stride size instead of
calculating them based on a "gen_id".
This way, different patterns can be created with the same SimObject.
In addition, this change adds stdlib components for the strided
generator.
Replace instances of "GCN3" with Vega. Remove gfx801 and gfx803. Rename
FIJI to Vega and Carrizo to Raven.
Using misc since there is not enough room to fit all the tags.
Change-Id: Ibafc939d49a69be9068107a906e878408c7a5891
Currently, if the Capstone header file is found on the host system,
scons will try to build the ArmCapstoneDisassembler regardless of the
gem5 target ISA. This causes problems when the host has Capstone but the
gem5 target ISA is not Arm: compiling gem5 in this case will produce
errors, e.g., ArmISA and ArmSystem are not found.
This change prevents building the ArmCapstoneDisassembler when the gem5
target ISA is not Arm.
Ref:
[1] The Arm Capstone PR https://github.com/gem5/gem5/pull/494
Change-Id: I1e714d34aec8fe2a2af8cd351536951053a4d8a5
This commit converts `gem5::loader::Symbol` to a full class with
private members, enforcing encapsulation. Until now, client code has
been able to (and does) access members directly.
This change will enable class invariants to be enforced via accessor
methods.
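The rough shape after the change (a trimmed sketch; the real class
carries more fields, such as the symbol binding and type):

```cpp
namespace gem5::loader
{

class Symbol
{
  public:
    Symbol(const std::string &name, Addr address)
        : _name(name), _address(address) {}

    // Accessors are now the only way in, so invariants can be checked.
    const std::string &name() const { return _name; }
    Addr address() const { return _address; }

  private:
    std::string _name;
    Addr _address;
};

} // namespace gem5::loader
```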
Change-Id: Ia0b5b080d4f656637a211808e13dce1ddca74541
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
With the use of large RVV vectors (i.e., 8K or 16K bits) and a limited
number of cacheLoadPorts, some loads take multiple cycles to execute.
This exposed corner cases where store-to-load forwarding happens in the
middle of the execution of a load that already has outstanding packets.
First, after store-to-load forwarding, the request is marked as
discarded and the load is immediately written back, which triggers a
writebackDone that tries to delete the request, firing an assert because
it still has outstanding packets. This patch avoids deleting the
request, leaving it self-owned; it will be deleted when the last packet
arrives in packetReplied.
Second, this patch avoids checking snoops on discarded requests by
checking whether the request still exists.
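Hedged sketches of the two guards (method and member names here are
illustrative, not the exact LSQ code):

```cpp
void writebackDone(LSQRequest *req)
{
    // First fix: a discarded request with packets still in flight must
    // not be freed here; it becomes self-owned and deletes itself when
    // its last packet arrives in packetReplied().
    if (req->isDiscarded() && req->hasOutstandingPackets()) {
        req->makeSelfOwned();
        return;
    }
    delete req;
}

bool shouldCheckSnoop(const LSQRequest *req)
{
    // Second fix: skip snoop checks when the request no longer exists
    // because it was discarded after store-to-load forwarding.
    return req != nullptr;
}
```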
Change-Id: Icea0add0327929d3a6af7e6dd0af9945cb0d0970
Co-authored-by: Adrià Armejach <adria.armejach@bsc.es>
In a high-performance CPU there is no other way than a BTB hit
to know about a branch instruction and its type. For low-end CPUs,
pre-decoding might sit in front of the BPU to provide this information.
Currently, the BPU models only low-end behavior and updates the
RAS and the indirect branch predictor even without a BTB hit.
This patch adds three things to model the correct behavior for high-end
CPUs.
1. A check before the RAS and indirect predictor whether there was
a BTB hit or not. Only on a BTB hit will the BPU update the RAS and the
indirect predictor.
2. Since this check requires a BTB hit for indirect branches, they must
also be installed in the BTB. For returns this was already done.
3. Finally, the BTB update previously happened at squash (decode
or commit). Since this can happen out of order, branches from
the false path could get installed without ever being retired.
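A sketch of the gating in (1) and (2) (the surrounding interfaces are
assumptions, not the exact BPredUnit code):

```cpp
// Sketch: without pre-decoding, the branch type is unknown on a BTB
// miss, so speculative RAS and indirect-predictor updates are skipped.
void speculativeUpdate(bool btbHit, const StaticInstPtr &inst, Addr pc,
                       ReturnAddrStack &ras, IndirectPredictor &iPred)
{
    if (!btbHit)
        return;

    if (inst->isCall())
        ras.push(pc);
    else if (inst->isReturn())
        ras.pop();

    if (inst->isIndirectCtrl())
        iPred.lookup(pc); // reachable because (2) installs indirect
                          // branches into the BTB
}
```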
The config option HAVE_CAPSTONE was added in the previous PR [1], and
the Kconfig options should be synced with it.
[1] https://github.com/gem5/gem5/pull/494
Change-Id: Id83718bc825f53d87d37d6ac930b96371209bdb3
This CL updates the Kconfig:
1. Replace USE_NULL_ISA with BUILD_ISA.
2. Make the USE_XXX_ISA options depend on BUILD_ISA.
3. If BUILD_ISA is set, at least one of the USE_XXX_ISA options must be
set.
4. Refactor the USE_KVM option.
Change-Id: I2a600dea9fb671263b0191c46c5790ebbe91a7b8
These are not yet consumed by anything, but convert all the settings
from SCons variables to Kconfig variables.
If you have existing SConsopts files which need to be converted, you
should take a look at KCONFIG.md to learn how Kconfig is used in gem5.
You should decide whether any variables need to be available to C++ or
to Kconfig itself, and whether those are options which should be
detected automatically or left up to the user. Options which should be
detected automatically should still be in SConsopts files, while
user-facing options should be added to new or existing Kconfig files.
Generally, make sure you're storing C++/Kconfig-visible options in
env['CONF'][...]. Also remove references to sticky_vars, since
persistent options should now be handled with Kconfig, and export_vars,
since everything in env['CONF'] is now exported automatically.
Switch SCons/gem5 to use Kconfig for configuration, except EXTRAS which
is still a sticky SCons variable. This is necessary because EXTRAS also
controls what config options exist. If it came from Kconfig itself, then
there would be a circular dependency. This dependency could
theoretically be handled by reparsing the Kconfig when EXTRAS
directories were added or removed, but that would be complicated, and
isn't supported by kconfiglib. It wouldn't be worth the significant
effort it would take to add it, just to use Kconfig more purely.
Change-Id: I29ab1940b2d7b0e6635a490452d05befe5b4a2c9
In a high-performance CPU there is no other way than a BTB hit
to know about a branch instruction and its type. For low-end CPUs,
pre-decoding might sit in front of the BPU to provide this information.
Currently, the BPU models only low-end behavior and updates the
RAS and the indirect branch prediction even without a BTB hit.
This patch adds two things to model the correct behavior for high-end
CPUs.
1. A check before the RAS and indirect predictor whether there was
a BTB hit or not. Only on a BTB hit will the BPU update the RAS and the
indirect predictor.
2. Since this check requires a BTB hit for indirect branches, they must
also be installed in the BTB. For returns this was already done.
Change-Id: Ibef9aa890f180efe547c82f41fc71f457c988a89
Signed-off-by: David Schall <david.schall@ed.ac.uk>
This reverts gem5#133, the temporary work-around for gem5#131, allowing
both SLC and GLC atomic requests to be made in the GPU tester.
The underlying issues behind gem5#131 have been resolved by gem5#367 and
gem5#397.
Major refactoring of the branch predictor unit.
- Clearer control flow of the main branch predictor.
- Remove the `uncondBranch` and `btbUpdate` functions in favour of a
common `historyUpdate` function. There is now only one lookup
function for conditional branches and the new `historyUpdate` for
speculative history updates.
- Added a new *target provider* class.
- More expressive statistics depending on the different branch types.
- Clean up of the branch history management.
Change-Id: I21fa555b5663e4abad7c836fc1d41a9c8b205263
Signed-off-by: David Schall <david.schall@ed.ac.uk>
Capstone is an open-source disassembler [1] already used by
other projects (like QEMU).
gem5 is already capable of disassembling instructions. Every StaticInst
is supposed to define a generateDisassembly method which returns the
instruction mnemonic (opcode + operand list) as a string.
This "distributed" implementation of a disassembler relies
on the developer to properly populate the metadata fields
of the base instruction class.
The growing complexity of the ISA code and the massive reuse
of base classes beyond their intended use have led to
disassembly logic which contains several bugs.
By allowing a tracer to rely on a third-party disassembler, we fill the
instruction trace with a more trustworthy instruction stream.
This will make any trace-parsing tool work better, and it will
also allow us to spot and fix our own bugs by comparing instruction
traces between the native and the custom disassembler.
[1]: http://www.capstone-engine.org/
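For reference, the core Capstone API such a tracer builds on (a
minimal standalone example, not the gem5 integration):

```cpp
#include <capstone/capstone.h>
#include <cinttypes>
#include <cstdio>

int main()
{
    const uint8_t code[] = {0x20, 0x00, 0x80, 0xd2}; // mov x0, #1
    csh handle;
    if (cs_open(CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN, &handle) !=
        CS_ERR_OK)
        return 1;
    cs_insn *insn;
    size_t n = cs_disasm(handle, code, sizeof(code), 0x1000, 0, &insn);
    for (size_t i = 0; i < n; i++)
        printf("0x%" PRIx64 ": %s %s\n", insn[i].address,
               insn[i].mnemonic, insn[i].op_str);
    cs_free(insn, n);
    cs_close(&handle);
    return 0;
}
```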
Change-Id: I3c4db5072c03d2731265d0398d3863c101dcb180
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
The return address stack (RAS) is restructured to be a separate
SimObject. This enables disabling the RAS and better separation of the
functionality. Furthermore, it allows easier statistics and debugging.
Change-Id: I8aacf7d4c8e308165d0e7e15bc5a5d0df77f8192
Signed-off-by: David Schall <david.schall@ed.ac.uk>
This only applies to pseudo instructions with their own encoding (m5
ops). In other words, memory-mapped m5 operations are not supported.
This makes sense, as they should rather be treated as device accesses.
It is, however, something to take into consideration when relying on
the flag.
While it's plausible to define cache_line_size as a 32-bit unsigned
int, the use of cache_line_size has grown well beyond its original
scope. cache_line_size has been used to produce an address mask which
masks out the offset bits from an address; see, for example, [1], [2],
[3], and [4]. However, since cache_line_size is an "unsigned int", the
value is not guaranteed to be 64 bits wide. Consequently, the
bit-twiddling hacks in [1], [2], [3], and [4] produce a 32-bit mask,
i.e., 0x00000000FFFFFFC0.
This behavior caused at least one problem, in the RISC-V LLSC
implementation [5], where the load reservation (LR) relies on the mask
to produce the cache block address. Two distinct 64-bit addresses can be
mapped to the same cache block using the above mask.
This patch explicitly defines cache_line_size as a 64-bit unsigned int
so the cache block mask can be produced correctly for 64-bit addresses.
[1] 3bdcfd6f7a/src/cpu/simple/atomic.hh (L147)
[2] 3bdcfd6f7a/src/cpu/simple/timing.hh (L224)
[3] 3bdcfd6f7a/src/cpu/o3/lsq_unit.cc (L241)
[4] 3bdcfd6f7a/src/cpu/minor/lsq.cc (L1425)
[5] 3bdcfd6f7a/src/arch/riscv/isa.cc (L787)
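A self-contained demonstration of the truncated mask (plain
arithmetic, not gem5 code):

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    unsigned int lineSize = 64;                // 32-bit cache_line_size
    uint64_t bad  = ~(lineSize - 1);           // 0x00000000ffffffc0
    uint64_t good = ~(uint64_t(lineSize) - 1); // 0xffffffffffffffc0

    // Two distinct addresses that differ only above bit 31:
    uint64_t a = 0x070000000ULL, b = 0x170000000ULL;
    printf("bad mask aliases:  %d\n", (a & bad) == (b & bad));   // 1
    printf("good mask aliases: %d\n", (a & good) == (b & good)); // 0
    return 0;
}
```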
Being able to recognise pseudo ops from the static instruction
pointer is actually quite useful in several circumstances.
Change-Id: Ib39badf9aabba15ab3ebe7a8e9717583412731e4
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
Simplify the indirect predictor interface. Several of the existing
functions were merged into four clear ones, similar to the main
direction predictor interface: `lookup`, `update`, `squash` and
`commit`. This makes the interface much clearer, allows better
functional isolation, and makes it simpler to develop new predictor
models.
A new parameter is added to allow additional buffer space for
speculative path history.
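The resulting interface, roughly (exact signatures are assumptions):

```cpp
class IndirectPredictor : public SimObject
{
  public:
    // Predict the target of the indirect branch at `pc`.
    virtual bool lookup(Addr pc, PCStateBase &target, ThreadID tid) = 0;
    // Record the resolved target once the branch executes.
    virtual void update(Addr pc, const PCStateBase &target,
                        ThreadID tid) = 0;
    // Roll back speculative path history on a misprediction.
    virtual void squash(InstSeqNum seqNum, ThreadID tid) = 0;
    // Retire history entries once the branch commits.
    virtual void commit(InstSeqNum seqNum, ThreadID tid) = 0;
};
```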
Change-Id: I6d6b43965b2986ef959953a64c428e50bc68d38e
Signed-off-by: David Schall <david.schall@ed.ac.uk>
- A new abstract BTB class is created to enable different BTB
implementations. The new BTB class gets its own parameters
and stats.
- An enum is added to differentiate branch instruction types.
This enum is used to enhance statistics and BPU management.
- The existing BTB is moved into `simple_btb` as the default.
- An additional function is added to store the static instruction in
the BTB. This function is used by the decoupled front-end.
- Update configs to match the new BTB parameters.
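A sketch of the abstract interface (names follow the commit text;
exact signatures are assumptions):

```cpp
class BranchTargetBuffer : public ClockedObject
{
  public:
    virtual bool valid(ThreadID tid, Addr instPC) = 0;
    virtual const PCStateBase *lookup(ThreadID tid, Addr instPC,
                                      BranchType type) = 0;
    // Returns the stored static instruction; used by the decoupled
    // front-end.
    virtual const StaticInstPtr getInst(ThreadID tid, Addr instPC) = 0;
    virtual void update(ThreadID tid, Addr instPC,
                        const PCStateBase &target,
                        BranchType type, StaticInstPtr inst) = 0;
};
```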
Change-Id: I99b29a19a1b57e59ea2b188ed7d62a8b79426529
Signed-off-by: David Schall <david.schall@ed.ac.uk>
This introduces the changes necessary to build gem5 with clang-15 and
clang-16, and adds them to the compiler tests.
This also updates the dockerfiles for Ubuntu 22.04 to include the steps
necessary to set up clang-15 and clang-16.
Add the instruction size to static instructions. The x86 and Arm
decoders now add the instruction size to the macro instruction. However,
microops are still handled by the fetch stage, which is not nice.
Furthermore, we add a set method to the PC state. It allows setting a PC
state to a certain address.
Both methods are required for the decoupled front-end.
Change-Id: I311fe3f637e867c42dee7781f5373ea2e69e2072
Adds the instruction size to all static instructions. The x86, Arm
and RISC-V decoders add the instruction size to every decoded
macro instruction. As microops should reflect the size of
their parent macroop, the set method is overridden to pass the
size to all microops.
Furthermore, we add a set method to the PC state. It allows
setting a PC state to a certain address.
Both methods are required for the decoupled front-end.
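A trimmed sketch of the propagation (member and class names are
assumptions):

```cpp
// On StaticInst: a virtual setter that macroops can override.
virtual void size(size_t sz) { _size = sz; }

// A macroop's override forwards the macro instruction's size to all
// of its microops, so each microop reports its parent's size.
void
Macroop::size(size_t sz)
{
    for (int i = 0; i < numMicroops; i++)
        microops[i]->size(sz);
    _size = sz;
}
```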
Change-Id: I311fe3f637e867c42dee7781f5373ea2e69e2072
Signed-off-by: David Schall <david.schall@ed.ac.uk>