commit 72ee6d1aadd6998cddb33d1fd689c4a848f03b6f
In the GPU VIPER TCC, programs with mixes of atomics and data accesses to the same address, in the same kernel, can experience deadlock when large applications (e.g., Pannotia's graph analytics algorithms) run on very small GPUs (e.g., the default 4 CU GPU configuration). In this situation, deadlocks occur due to resource stalls interacting with the current implementation's handling of races between atomic accesses. The specific sequence of events causing this deadlock is:

1. The TCC is waiting on an atomic to return from the directory.

2. In the meantime it receives another atomic to the same address. When this happens, the TCC increments the number of atomics to this address pending in the TBE (numAtomics = 2) and writes the atomic through to the directory.

3. When the first atomic returns from the directory, it decrements the numAtomics counter. Because numAtomics was at 2 (due to step 2), the TCC does not deallocate the TBE entry and instead triggers Event:AtomicNotDone.

4. Another request (a LD) to the same address arrives. The LD does z_stall since the second atomic is pending, so the LD retries every cycle until the deadlock counter times out (or until the second atomic comes back).

5. The second atomic returns to the TCC. However, because so many LDs are pending in the cache, all doing z_stalls and retrying every cycle, there are many resource stalls. The returning atomic is therefore forced to retry its operation multiple times, and each retry decrements the atomicDoneCnt flag (added in 7246f70bfb to catch a race between atomics arriving at and leaving the TCC). As a result, atomicDoneCnt becomes negative.

6. Since the atomicDoneCnt flag determines when Event:AtomicDone happens, and the resource stalls drove it negative, the atomic never completes. This means the pending LD can never access the line, because it is stuck waiting for the atomic to complete.

7. Eventually the deadlock threshold is reached.

To fix this issue, this commit changes the VIPER TCC protocol from using z_stall to the stall_and_wait buffer method that the directory level of SLICC already uses (sketched below, after the trailers). This change prevents resource stalls from dominating the TCC level by placing pending requests for a given address in a per-address stall buffer. These requests are then woken up when the pending request returns.

As part of this change, two small changes are also made to the directory-level protocol (MOESI_AMD_BASE-dir):

1. The names of the wakeup actions are updated to match the TCC wakeup actions, to avoid confusion.

2. transition(B, UnblockWriteThrough, U) is changed to check all stall buffers, as some requests were being placed later in the stall buffer than was being checked. This mirrors the changes in 187c44fe44 to other directory transitions, which resolved races between GPU and DMA requests, but applies them to transitions prior workloads did not stress.

Change-Id: I60ac9830a87c125e9ac49515a7fc7731a65723c2
Reviewed-on: https://gem5-review.googlesource.com/c/public/gem5/+/51367
Reviewed-by: Jason Lowe-Power <power.jg@gmail.com>
Reviewed-by: Matthew Poremba <matthew.poremba@amd.com>
Maintainer: Jason Lowe-Power <power.jg@gmail.com>
Tested-by: kokoro <noreply+kokoro@google.com>
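For reference, the following is a minimal sketch of the before/after pattern in SLICC. The state, event, action, and port names (A, RdBlk, AtomicDone, sw_stallAndWait, wada_wakeUpAllDependentsAddr, coreRequestNetwork_in) are illustrative assumptions, not necessarily the exact identifiers used in GPU_VIPER-TCC.sm; z_stall, stall_and_wait, and wakeUpAllBuffers are the SLICC built-ins involved.

    // Before: a request racing with a pending atomic spins via z_stall,
    // retrying every cycle and consuming TCC resources.
    transition(A, RdBlk) {
        z_stall;
    }

    // After: park the request in a per-address stall buffer instead.
    // (Action and port names here are illustrative.)
    action(sw_stallAndWait, "sw", desc="stall request and wait on address") {
        stall_and_wait(coreRequestNetwork_in, address);
    }

    action(wada_wakeUpAllDependentsAddr, "wada",
           desc="wake up all requests stalled on this address") {
        wakeUpAllBuffers(address);
    }

    transition(A, RdBlk) {
        sw_stallAndWait;
    }

    // When the pending atomic completes, wake the stalled requests so
    // they replay through the request path instead of spinning.
    transition(A, AtomicDone, V) {
        wada_wakeUpAllDependentsAddr;
        // ...plus the protocol's existing AtomicDone actions...
    }

The key design point is that a stalled request is removed from the incoming port until wakeUpAllBuffers(address) is called, so it consumes no protocol resources while it waits, unlike z_stall, which leaves the message at the head of the port and retries it every cycle.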
This is the gem5 simulator.

The main website can be found at http://www.gem5.org

A good starting point is http://www.gem5.org/about, and for more information about building the simulator and getting started please see http://www.gem5.org/documentation and http://www.gem5.org/documentation/learning_gem5/introduction.

To build gem5, you will need the following software: g++ or clang, Python (gem5 links in the Python interpreter), SCons, zlib, m4, and lastly protobuf if you want trace capture and playback support. Please see http://www.gem5.org/documentation/general_docs/building for more details concerning the minimum versions of these tools.

Once you have all dependencies resolved, type 'scons build/<CONFIG>/gem5.opt' where CONFIG is one of the options in build_opts like ARM, NULL, MIPS, POWER, SPARC, X86, Garnet_standalone, etc. This will build an optimized version of the gem5 binary (gem5.opt) with the specified configuration. See http://www.gem5.org/documentation/general_docs/building for more details and options, and the concrete example at the end of this file.

The main source tree includes these subdirectories:
- build_opts: pre-made default configurations for gem5
- build_tools: tools used internally by gem5's build process.
- configs: example simulation configuration scripts
- ext: less-common external packages needed to build gem5
- include: include files for use in other programs
- site_scons: modular components of the build system
- src: source code of the gem5 simulator
- system: source for some optional system software for simulated systems
- tests: regression tests
- util: useful utility programs and files

To run full-system simulations, you may need compiled system firmware, kernel binaries and one or more disk images, depending on gem5's configuration and what type of workload you're trying to run. Many of those resources can be downloaded from http://resources.gem5.org, and/or from the git repository here: https://gem5.googlesource.com/public/gem5-resources/

If you have questions, please send mail to gem5-users@gem5.org

Enjoy using gem5 and please share your modifications and extensions.
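For example, to build the optimized binary for the X86 configuration (one of the stock build_opts listed above), type:

    scons build/X86/gem5.opt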