gpu-compute: Invalidate Scalar cache when SQC invalidates (#1093)

The scalar cache is not being invalidated, leaving stale data in the
scalar cache between GPU kernels. This commit sends invalidates to the
scalar cache when the SQC is invalidated, which is a sufficient baseline
for simulation.
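The stale-data problem described above can be illustrated with a toy model (illustrative only; none of these names are gem5 APIs — a real scalar cache invalidation goes through the Ruby protocol state machine):

```python
# Toy model of why a cache that never receives invalidates returns
# stale data across kernels. Purely illustrative; not gem5 code.

class ToyScalarCache:
    """Minimal cache: holds lines until explicitly invalidated."""

    def __init__(self, memory):
        self.memory = memory
        self.lines = {}

    def read(self, addr):
        # Hit if the line is cached, otherwise fill from memory.
        if addr not in self.lines:
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def invalidate_all(self):
        # The behavior this commit adds: drop every cached line when
        # the SQC is invalidated, forcing a re-fetch from memory.
        self.lines.clear()

memory = {0x100: "kernel1-data"}
cache = ToyScalarCache(memory)
cache.read(0x100)             # first kernel warms the cache

memory[0x100] = "kernel2-data"  # data changes between kernels

stale = cache.read(0x100)     # no invalidate: still "kernel1-data"
cache.invalidate_all()
fresh = cache.read(0x100)     # after invalidate: "kernel2-data"
```

Without the invalidate, the second kernel silently observes the first kernel's data; with it, every post-invalidate read refetches from memory.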

Since the number of invalidates can exceed the capacity of the mandatory
queue, and the VIPER protocol has no flash-invalidate mechanism, the
command-line option for the mandatory queue size is removed, leaving the
queue unbounded. This matches the existing behavior of the SQC.
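The capacity argument can be sketched with a small standalone model (illustrative only; `enqueue_invalidates` and the invalidate count of 200 are hypothetical, and a real gem5 MessageBuffer stalls rather than drops on overflow):

```python
# Toy illustration (not gem5 code) of why a fixed-size mandatory queue
# is unsafe here: the number of invalidates depends on cache contents,
# so any fixed capacity can be exceeded, while an unbounded queue
# always accepts every message.
from collections import deque

def enqueue_invalidates(capacity, n_invalidates):
    """Return how many invalidates fit in a queue with `capacity`
    slots; capacity=None models an unbounded MessageBuffer()."""
    queue = deque()
    accepted = 0
    for _ in range(n_invalidates):
        if capacity is not None and len(queue) >= capacity:
            break  # bounded queue is full; the rest cannot be enqueued
        queue.append("Inv")
        accepted += 1
    return accepted

# With the old default of 128 slots, 200 invalidates do not all fit:
print(enqueue_invalidates(128, 200))   # 128
# With no bound (as after this commit), all are accepted:
print(enqueue_invalidates(None, 200))  # 200
```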

Change-Id: I1723f224711b04caa4c88beccfa8fb73ccf56572
Author: Matthew Poremba
Date: 2024-05-06 07:35:38 -07:00
Committed-by: GitHub
Parent: 36c1ea9c61
Commit: 0d3d456894
3 changed files with 50 additions and 16 deletions

@@ -497,13 +497,6 @@ def define_options(parser):
     parser.add_argument(
         "--noL1", action="store_true", default=False, help="bypassL1"
     )
-    parser.add_argument(
-        "--scalar-buffer-size",
-        type=int,
-        default=128,
-        help="Size of the mandatory queue in the GPU scalar "
-        "cache controller",
-    )
     parser.add_argument(
         "--glc-atomic-latency", type=int, default=1, help="GLC Atomic Latency"
     )
@@ -841,9 +834,7 @@ def construct_scalars(options, system, ruby_system, network):
     scalar_cntrl.responseToSQC = MessageBuffer(ordered=True)
     scalar_cntrl.responseToSQC.in_port = network.out_port
-    scalar_cntrl.mandatoryQueue = MessageBuffer(
-        buffer_size=options.scalar_buffer_size
-    )
+    scalar_cntrl.mandatoryQueue = MessageBuffer()
     return (scalar_sequencers, scalar_cntrl_nodes)