Merge "misc: Merge branch v21.0.0.0 into develop" into develop

This commit is contained in:
Bobby R. Bruce
2021-03-26 18:21:34 +00:00
60 changed files with 4390 additions and 774 deletions


@@ -303,6 +303,12 @@ util-m5:
maintainers:
- Gabe Black <gabe.black@gmail.com>
util-gem5art:
status: maintained
maintainers:
- Bobby Bruce <bbruce@ucdavis.edu>
- Jason Lowe-Power <jason@lowepower.com>
website:
desc: >-
The gem5-website repo which contains the gem5.org site


@@ -1,3 +1,66 @@
# Version 21.0.0.0
Version 21.0 marks *one full year* of gem5 releases, and on this anniversary, I think we have some of the biggest new features yet!
This has been a very productive release with [100 issues](https://gem5.atlassian.net/) closed, over 813 commits, and 49 unique contributors.
## 21.0 New features
### AMBA CHI protocol implemented in SLICC: Contributed by *Tiago Mück*
This new protocol provides a single cache controller that can be reused at multiple levels of the cache hierarchy and configured to model multiple instances of MESI and MOESI cache coherency protocols.
This implementation is based on Arm's [AMBA 5 CHI specification](https://static.docs.arm.com/ihi0050/d/IHI0050D_amba_5_chi_architecture_spec.pdf) and provides a scalable framework for the design space exploration of large SoC designs.
See [the gem5 documentation](http://www.gem5.org/documentation/general_docs/ruby/CHI/) for more details.
There is also a [gem5 blog post](http://www.gem5.org/2020/05/29/flexible-cache.html) on this new protocol.
### Full support for AMD's GCN3 GPU model
In previous releases, this model was only partially supported.
As of gem5 21.0, this model has been fully integrated and is tested nightly.
This model currently only works in syscall emulation mode and requires using the gcn docker container to get the correct version of the ROCm stack.
More information can be found in [this blog post](http://www.gem5.org/2020/05/27/modern-gpu-applications.html).
Alongside this full support, we are also providing many applications.
See [gem5-resources](http://resources.gem5.org/) for more information.
### RISC-V Full system Linux boot support: Contributed by *Peter Yuen*
The RISC-V model in gem5 can now boot unmodified Linux!
Additionally, we have implemented DTB generation and support the Berkeley Boot Loader as the stage 1 boot loader.
We have also released a set of resources for you to get started: <https://gem5.googlesource.com/public/gem5-resources/+/refs/heads/develop/src/riscv-fs/>
### New/Changed APIs
There are multiple places where the developers have reduced boilerplate.
* **[API CHANGE]**: No more `create()` functions! Previously, every `SimObject` required a `<SimObjectParams>::create()` function to be manually defined. Forgetting to do this resulted in confusing errors. Now, this function is created for you automatically. You can still override it if you need to handle any special cases.
* **[API CHANGE]**: `params()`: Rather than defining a typedef and the `params()` function for every `SimObject`, you can now use the `PARAMS` macro.
See <http://doxygen.gem5.org/release/current/classSimObject.html#details> for more details on these two API changes.
* **[API CHANGE]**: All stats are now using *new style* groups instead of the older manual stat interface.
* The previous API (creating stats that are not part of a `Group`) is still supported, but it is now deprecated.
* If a stat is not created with the new `Group` API, it may not be automatically dumped using new stat APIs (e.g., the Python API).
* Next release, there will be a warning for all old-style stats.
### Platforms no longer supported
* **[USER-FACING CHANGE]**: Python 2.7 is *no longer supported*. You must use Python 3.6+.
* The minimum supported Clang version is now 3.9.
* The minimum C++ standard is now C++14.
### Other improvements and new features
* Extra options to build m5ops
* m5term improvements
* There is a new python-based library for handling statistics. This library *works*, but hasn't been thoroughly tested yet. Stay tuned for more on this next release.
* Many improvements and additions to unit tests
* Cleaning up the `StaticInst` type
* Workload API changes
* Many updates and changes to the m5 guest utility
* [Support for running arm64 Linux kernel v5.8](https://gem5.atlassian.net/browse/GEM5-787)
* [Arm SCMI implemented](https://gem5.atlassian.net/browse/GEM5-768)
# Version 20.1.0.5
**[HOTFIX]** This hotfix release fixes three known bugs:


@@ -322,12 +322,6 @@ if main['GCC'] or main['CLANG']:
if GetOption('gold_linker'):
main.Append(LINKFLAGS='-fuse-ld=gold')
# Treat warnings as errors but white list some warnings that we
# want to allow (e.g., deprecation warnings).
main.Append(CCFLAGS=['-Werror',
'-Wno-error=deprecated-declarations',
'-Wno-error=deprecated',
])
else:
error('\n'.join((
"Don't know what compiler options to use for your compiler.",


@@ -1,3 +1,3 @@
TARGET_ISA = 'arm'
CPU_MODELS = 'AtomicSimpleCPU,TimingSimpleCPU,O3CPU,MinorCPU'
PROTOCOL = 'MOESI_CMP_directory'
PROTOCOL = 'CHI'


@@ -33,38 +33,45 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# 2x4 mesh definition
from ruby import CHI_config
# CustomMesh parameters for a 2x4 mesh. Routers will have the following layout:
#
# 0 --- 1 --- 2 --- 3
# | | | |
# 4 --- 5 --- 6 --- 7
#
mesh:
num_rows : 2
num_cols : 4
router_latency : 1
link_latency : 1
# Default parameters are in configs/ruby/CHI_config.py
#
class NoC_Params(CHI_config.NoC_Params):
num_rows = 2
num_cols = 4
# Bindings for each CHI node type.
# Specialization of nodes to define bindings for each CHI node type
# needed by CustomMesh.
# The default types are defined in CHI_Node and their derivatives in
# configs/ruby/CHI_config.py
CHI_RNF:
# Uncomment to map num_nodes_per_router RNFs in each provided router,
# assuming num. created CHI_RNFs == len(router_list)*num_nodes_per_router
# num_nodes_per_router: 1
router_list: [1, 2, 5, 6]
class CHI_RNF(CHI_config.CHI_RNF):
class NoC_Params(CHI_config.CHI_RNF.NoC_Params):
router_list = [1, 2, 5, 6]
CHI_HNF:
# num_nodes_per_router: 1
router_list: [1, 2, 5, 6]
class CHI_HNF(CHI_config.CHI_HNF):
class NoC_Params(CHI_config.CHI_HNF.NoC_Params):
router_list = [1, 2, 5, 6]
CHI_SNF_MainMem:
# num_nodes_per_router: 1
router_list: [0, 4]
class CHI_SNF_MainMem(CHI_config.CHI_SNF_MainMem):
class NoC_Params(CHI_config.CHI_SNF_MainMem.NoC_Params):
router_list = [0, 4]
# Applies to CHI_SNF_BootMem and possibly other non-main memories
CHI_SNF_IO:
router_list: [3]
class CHI_SNF_BootMem(CHI_config.CHI_SNF_BootMem):
class NoC_Params(CHI_config.CHI_SNF_BootMem.NoC_Params):
router_list = [3]
# Applies to CHI_RNI_DMA and CHI_RNI_IO
CHI_RNI_IO:
router_list: [7]
class CHI_RNI_DMA(CHI_config.CHI_RNI_DMA):
class NoC_Params(CHI_config.CHI_RNI_DMA.NoC_Params):
router_list = [7]
class CHI_RNI_IO(CHI_config.CHI_RNI_IO):
class NoC_Params(CHI_config.CHI_RNI_IO.NoC_Params):
router_list = [7]


@@ -40,6 +40,7 @@
import optparse
import sys
from os import path
import m5
from m5.defines import buildEnv
@@ -62,6 +63,43 @@ from common import ObjectList
from common.Caches import *
from common import Options
def generateMemNode(state, mem_range):
node = FdtNode("memory@%x" % int(mem_range.start))
node.append(FdtPropertyStrings("device_type", ["memory"]))
node.append(FdtPropertyWords("reg",
state.addrCells(mem_range.start) +
state.sizeCells(mem_range.size()) ))
return node
def generateDtb(system):
"""
Autogenerate a DTB for the given system and write device.dts and
device.dtb to the m5 output directory.
"""
state = FdtState(addr_cells=2, size_cells=2, cpu_cells=1)
root = FdtNode('/')
root.append(state.addrCellsProperty())
root.append(state.sizeCellsProperty())
root.appendCompatible(["riscv-virtio"])
for mem_range in system.mem_ranges:
root.append(generateMemNode(state, mem_range))
sections = [*system.cpu, system.platform]
for section in sections:
for node in section.generateDeviceTree(state):
if node.get_name() == root.get_name():
root.merge(node)
else:
root.append(node)
fdt = Fdt()
fdt.add_rootnode(root)
fdt.writeDtsFile(path.join(m5.options.outdir, 'device.dts'))
fdt.writeDtbFile(path.join(m5.options.outdir, 'device.dtb'))
# ----------------------------- Add Options ---------------------------- #
parser = optparse.OptionParser()
Options.addCommonOptions(parser)
@@ -92,12 +130,12 @@ mdesc = SysConfig(disks=options.disk_image, rootdev=options.root_device,
system.mem_mode = mem_mode
system.mem_ranges = [AddrRange(start=0x80000000, size=mdesc.mem())]
system.workload = RiscvBareMetal()
system.workload = RiscvLinux()
system.iobus = IOXBar()
system.membus = MemBus()
system.system_port = system.membus.slave
system.system_port = system.membus.cpu_side_ports
system.intrctrl = IntrControl()
@@ -147,7 +185,7 @@ system.cpu_clk_domain = SrcClockDomain(clock = options.cpu_clock,
voltage_domain =
system.cpu_voltage_domain)
system.workload.bootloader = options.kernel
system.workload.object_file = options.kernel
# NOTE: Not yet tested
if options.script is not None:
@@ -161,12 +199,12 @@ system.cpu = [CPUClass(clk_domain=system.cpu_clk_domain, cpu_id=i)
if options.caches or options.l2cache:
# By default the IOCache runs at the system clock
system.iocache = IOCache(addr_ranges = system.mem_ranges)
system.iocache.cpu_side = system.iobus.master
system.iocache.mem_side = system.membus.slave
system.iocache.cpu_side = system.iobus.mem_side_ports
system.iocache.mem_side = system.membus.cpu_side_ports
elif not options.external_memory_system:
system.iobridge = Bridge(delay='50ns', ranges = system.mem_ranges)
system.iobridge.slave = system.iobus.master
system.iobridge.master = system.membus.slave
system.iobridge.cpu_side_ports = system.iobus.mem_side_ports
system.iobridge.mem_side_ports = system.membus.cpu_side_ports
# Sanity check
if options.simpoint_profile:
@@ -197,13 +235,27 @@ uncacheable_range = [
*system.platform._on_chip_ranges(),
*system.platform._off_chip_ranges()
]
pma_checker = PMAChecker(uncacheable=uncacheable_range)
# PMA checker can be defined at system-level (system.pma_checker)
# or MMU-level (system.cpu[0].mmu.pma_checker). It will be resolved
# by RiscvTLB's Parent.any proxy
for cpu in system.cpu:
cpu.mmu.pma_checker = pma_checker
cpu.mmu.pma_checker = PMAChecker(uncacheable=uncacheable_range)
# --------------------------- DTB Generation --------------------------- #
generateDtb(system)
system.workload.dtb_filename = path.join(m5.options.outdir, 'device.dtb')
# Default DTB address if bbl is built with the --with-dts option
system.workload.dtb_addr = 0x87e00000
# Linux boot command flags
kernel_cmd = [
"console=ttyS0",
"root=/dev/vda",
"ro"
]
system.workload.command_line = " ".join(kernel_cmd)
# ---------------------------- Default Setup --------------------------- #


@@ -33,617 +33,25 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import math
import yaml
import m5
from m5.objects import *
from m5.defines import buildEnv
from .Ruby import create_topology, setup_memory_controllers
from .Ruby import create_topology
def define_options(parser):
parser.add_option("--noc-config", action="store", type="string",
parser.add_option("--chi-config", action="store", type="string",
default=None,
help="YAML NoC config. parameters and bindings. "
"required for CustomMesh topology")
class Versions:
'''
Helper class to obtain unique ids for a given controller class.
These are passed as the 'version' parameter when creating the controller.
'''
_seqs = 0
@classmethod
def getSeqId(cls):
val = cls._seqs
cls._seqs += 1
return val
_version = {}
@classmethod
def getVersion(cls, tp):
if tp not in cls._version:
cls._version[tp] = 0
val = cls._version[tp]
cls._version[tp] = val + 1
return val
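The `Versions` helper above hands out globally increasing sequencer ids but counts controller versions per class. A standalone sketch of that id-allocation pattern (the string keys below stand in for controller classes and are purely illustrative):

```python
class Versions:
    """Hand out unique ids: global for sequencers, per-type for controllers."""
    _seqs = 0
    _version = {}

    @classmethod
    def getSeqId(cls):
        val = cls._seqs
        cls._seqs += 1
        return val

    @classmethod
    def getVersion(cls, tp):
        val = cls._version.get(tp, 0)
        cls._version[tp] = val + 1
        return val

# Sequencer ids are global; controller versions are counted per type.
assert [Versions.getSeqId() for _ in range(3)] == [0, 1, 2]
assert Versions.getVersion('Cache_Controller') == 0
assert Versions.getVersion('Memory_Controller') == 0
assert Versions.getVersion('Cache_Controller') == 1
```

This is why two different controller types can both start at version 0 without colliding: Ruby only requires the `version` parameter to be unique within a controller class.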
class CHI_Node(SubSystem):
'''
Base class with common functions for setting up Cache or Memory
controllers that are part of a CHI RNF, RNFI, HNF, or SNF nodes.
Notice getNetworkSideControllers and getAllControllers must be implemented
in the derived classes.
'''
def __init__(self, ruby_system):
super(CHI_Node, self).__init__()
self._ruby_system = ruby_system
self._network = ruby_system.network
def getNetworkSideControllers(self):
'''
Returns all ruby controllers that need to be connected to the
network
'''
raise NotImplementedError()
def getAllControllers(self):
'''
Returns all ruby controllers associated with this node
'''
raise NotImplementedError()
def setDownstream(self, cntrls):
'''
Sets cntrls as the downstream list of all controllers in this node
'''
for c in self.getNetworkSideControllers():
c.downstream_destinations = cntrls
def connectController(self, cntrl):
'''
Creates and configures the messages buffers for the CHI input/output
ports that connect to the network
'''
cntrl.reqOut = MessageBuffer()
cntrl.rspOut = MessageBuffer()
cntrl.snpOut = MessageBuffer()
cntrl.datOut = MessageBuffer()
cntrl.reqIn = MessageBuffer()
cntrl.rspIn = MessageBuffer()
cntrl.snpIn = MessageBuffer()
cntrl.datIn = MessageBuffer()
# All CHI ports are always connected to the network.
# Controllers that are not part of the getNetworkSideControllers list
# still communicate using internal routers, thus we need to wire-up the
# ports
cntrl.reqOut.out_port = self._network.in_port
cntrl.rspOut.out_port = self._network.in_port
cntrl.snpOut.out_port = self._network.in_port
cntrl.datOut.out_port = self._network.in_port
cntrl.reqIn.in_port = self._network.out_port
cntrl.rspIn.in_port = self._network.out_port
cntrl.snpIn.in_port = self._network.out_port
cntrl.datIn.in_port = self._network.out_port
class TriggerMessageBuffer(MessageBuffer):
'''
MessageBuffer for triggering internal controller events.
These buffers should not be affected by the Ruby tester randomization
and allow popping messages enqueued in the same cycle.
'''
randomization = 'disabled'
allow_zero_latency = True
class OrderedTriggerMessageBuffer(TriggerMessageBuffer):
ordered = True
class CHI_Cache_Controller(Cache_Controller):
'''
Default parameters for a Cache controller
The Cache_Controller can also be used as a DMA requester or as
a pure directory if all cache allocation policies are disabled.
'''
def __init__(self, ruby_system):
super(CHI_Cache_Controller, self).__init__(
version = Versions.getVersion(Cache_Controller),
ruby_system = ruby_system,
mandatoryQueue = MessageBuffer(),
prefetchQueue = MessageBuffer(),
triggerQueue = TriggerMessageBuffer(),
retryTriggerQueue = OrderedTriggerMessageBuffer(),
replTriggerQueue = OrderedTriggerMessageBuffer(),
reqRdy = TriggerMessageBuffer(),
snpRdy = TriggerMessageBuffer())
# Set somewhat large number since we rely a lot on internal
# triggers. To limit the controller performance, tweak other
# params such as: input port buffer size, cache banks, and output
# port latency
self.transitions_per_cycle = 128
# This should be set to true in the data cache controller to enable
# timeouts on unique lines when a store conditional fails
self.sc_lock_enabled = False
class CHI_L1Controller(CHI_Cache_Controller):
'''
Default parameters for a L1 Cache controller
'''
def __init__(self, ruby_system, sequencer, cache, prefetcher):
super(CHI_L1Controller, self).__init__(ruby_system)
self.sequencer = sequencer
self.cache = cache
self.use_prefetcher = False
self.send_evictions = True
self.is_HN = False
self.enable_DMT = False
self.enable_DCT = False
# Strict inclusive MOESI
self.allow_SD = True
self.alloc_on_seq_acc = True
self.alloc_on_seq_line_write = False
self.alloc_on_readshared = True
self.alloc_on_readunique = True
self.alloc_on_readonce = True
self.alloc_on_writeback = True
self.dealloc_on_unique = False
self.dealloc_on_shared = False
self.dealloc_backinv_unique = True
self.dealloc_backinv_shared = True
# Some reasonable default TBE params
self.number_of_TBEs = 16
self.number_of_repl_TBEs = 16
self.number_of_snoop_TBEs = 4
self.unify_repl_TBEs = False
class CHI_L2Controller(CHI_Cache_Controller):
'''
Default parameters for a L2 Cache controller
'''
def __init__(self, ruby_system, cache, prefetcher):
super(CHI_L2Controller, self).__init__(ruby_system)
self.sequencer = NULL
self.cache = cache
self.use_prefetcher = False
self.allow_SD = True
self.is_HN = False
self.enable_DMT = False
self.enable_DCT = False
self.send_evictions = False
# Strict inclusive MOESI
self.alloc_on_seq_acc = False
self.alloc_on_seq_line_write = False
self.alloc_on_readshared = True
self.alloc_on_readunique = True
self.alloc_on_readonce = True
self.alloc_on_writeback = True
self.dealloc_on_unique = False
self.dealloc_on_shared = False
self.dealloc_backinv_unique = True
self.dealloc_backinv_shared = True
# Some reasonable default TBE params
self.number_of_TBEs = 32
self.number_of_repl_TBEs = 32
self.number_of_snoop_TBEs = 16
self.unify_repl_TBEs = False
class CHI_HNFController(CHI_Cache_Controller):
'''
Default parameters for a coherent home node (HNF) cache controller
'''
def __init__(self, ruby_system, cache, prefetcher, addr_ranges):
super(CHI_HNFController, self).__init__(ruby_system)
self.sequencer = NULL
self.cache = cache
self.use_prefetcher = False
self.addr_ranges = addr_ranges
self.allow_SD = True
self.is_HN = True
self.enable_DMT = True
self.enable_DCT = True
self.send_evictions = False
# MOESI / Mostly inclusive for shared / Exclusive for unique
self.alloc_on_seq_acc = False
self.alloc_on_seq_line_write = False
self.alloc_on_readshared = True
self.alloc_on_readunique = False
self.alloc_on_readonce = True
self.alloc_on_writeback = True
self.dealloc_on_unique = True
self.dealloc_on_shared = False
self.dealloc_backinv_unique = False
self.dealloc_backinv_shared = False
# Some reasonable default TBE params
self.number_of_TBEs = 32
self.number_of_repl_TBEs = 32
self.number_of_snoop_TBEs = 1 # should not receive any snoop
self.unify_repl_TBEs = False
class CHI_DMAController(CHI_Cache_Controller):
'''
Default parameters for a DMA controller
'''
def __init__(self, ruby_system, sequencer):
super(CHI_DMAController, self).__init__(ruby_system)
self.sequencer = sequencer
class DummyCache(RubyCache):
dataAccessLatency = 0
tagAccessLatency = 1
size = "128"
assoc = 1
self.use_prefetcher = False
self.cache = DummyCache()
self.sequencer.dcache = NULL
# All allocations are false
# Deallocations are true (don't really matter)
self.allow_SD = False
self.is_HN = False
self.enable_DMT = False
self.enable_DCT = False
self.alloc_on_seq_acc = False
self.alloc_on_seq_line_write = False
self.alloc_on_readshared = False
self.alloc_on_readunique = False
self.alloc_on_readonce = False
self.alloc_on_writeback = False
self.dealloc_on_unique = False
self.dealloc_on_shared = False
self.dealloc_backinv_unique = False
self.dealloc_backinv_shared = False
self.send_evictions = False
self.number_of_TBEs = 16
self.number_of_repl_TBEs = 1
self.number_of_snoop_TBEs = 1 # should not receive any snoop
self.unify_repl_TBEs = False
class CPUSequencerWrapper:
'''
Other generic configuration scripts assume a matching number of sequencers
and cpus. This wraps the instruction and data sequencer so they are
compatible with the other scripts. This assumes all scripts are using
connectCpuPorts/connectIOPorts to bind ports
'''
def __init__(self, iseq, dseq):
# use this style due to __setattr__ override below
self.__dict__['inst_seq'] = iseq
self.__dict__['data_seq'] = dseq
self.__dict__['support_data_reqs'] = True
self.__dict__['support_inst_reqs'] = True
# Compatibility with certain scripts that wire up ports
# without connectCpuPorts
self.__dict__['slave'] = dseq.in_ports
self.__dict__['in_ports'] = dseq.in_ports
def connectCpuPorts(self, cpu):
assert(isinstance(cpu, BaseCPU))
cpu.icache_port = self.inst_seq.in_ports
for p in cpu._cached_ports:
if str(p) != 'icache_port':
exec('cpu.%s = self.data_seq.in_ports' % p)
cpu.connectUncachedPorts(self.data_seq)
def connectIOPorts(self, piobus):
self.data_seq.connectIOPorts(piobus)
def __setattr__(self, name, value):
setattr(self.inst_seq, name, value)
setattr(self.data_seq, name, value)
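The `__setattr__` override above fans every attribute assignment out to both wrapped sequencers, so scripts that configure "the" sequencer transparently configure both. A minimal standalone sketch of the same pattern (the `Seq` class and attribute name are illustrative, not gem5 API):

```python
class Seq:
    """Stand-in for a RubySequencer."""
    pass

class SequencerPair:
    """Fan attribute writes out to both wrapped objects,
    like CPUSequencerWrapper above."""
    def __init__(self, iseq, dseq):
        # Write to __dict__ directly to bypass our own __setattr__.
        self.__dict__['inst_seq'] = iseq
        self.__dict__['data_seq'] = dseq

    def __setattr__(self, name, value):
        # A single assignment on the wrapper lands on both sequencers.
        setattr(self.inst_seq, name, value)
        setattr(self.data_seq, name, value)

pair = SequencerPair(Seq(), Seq())
pair.max_outstanding_requests = 16   # one write, applied to both
assert pair.inst_seq.max_outstanding_requests == 16
assert pair.data_seq.max_outstanding_requests == 16
```

Storing `inst_seq`/`data_seq` via `self.__dict__` in `__init__` is what the "use this style due to `__setattr__` override below" comment in the original refers to: a plain `self.inst_seq = iseq` would recurse into the forwarding logic before the wrapped objects exist.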
class CHI_RNF(CHI_Node):
'''
Defines a CHI request node.
Notice all controllers and sequencers are set as children of the cpus, so
this object acts more like a proxy for setting things up and has no topology
significance unless the cpus are set as its children at the top level
'''
def __init__(self, cpus, ruby_system,
l1Icache_type, l1Dcache_type,
cache_line_size,
l1Iprefetcher_type=None, l1Dprefetcher_type=None):
super(CHI_RNF, self).__init__(ruby_system)
self._block_size_bits = int(math.log(cache_line_size, 2))
# All sequencers and controllers
self._seqs = []
self._cntrls = []
# Last level controllers in this node, i.e., the ones that will send
# requests to the home nodes
self._ll_cntrls = []
self._cpus = cpus
# First creates L1 caches and sequencers
for cpu in self._cpus:
cpu.inst_sequencer = RubySequencer(version = Versions.getSeqId(),
ruby_system = ruby_system)
cpu.data_sequencer = RubySequencer(version = Versions.getSeqId(),
ruby_system = ruby_system)
self._seqs.append(CPUSequencerWrapper(cpu.inst_sequencer,
cpu.data_sequencer))
# caches
l1i_cache = l1Icache_type(start_index_bit = self._block_size_bits,
is_icache = True)
l1d_cache = l1Dcache_type(start_index_bit = self._block_size_bits,
is_icache = False)
# Placeholders for future prefetcher support
if l1Iprefetcher_type != None or l1Dprefetcher_type != None:
m5.fatal('Prefetching not supported yet')
l1i_pf = NULL
l1d_pf = NULL
# cache controllers
cpu.l1i = CHI_L1Controller(ruby_system, cpu.inst_sequencer,
l1i_cache, l1i_pf)
cpu.l1d = CHI_L1Controller(ruby_system, cpu.data_sequencer,
l1d_cache, l1d_pf)
cpu.inst_sequencer.dcache = NULL
cpu.data_sequencer.dcache = cpu.l1d.cache
cpu.l1d.sc_lock_enabled = True
cpu._ll_cntrls = [cpu.l1i, cpu.l1d]
for c in cpu._ll_cntrls:
self._cntrls.append(c)
self.connectController(c)
self._ll_cntrls.append(c)
def getSequencers(self):
return self._seqs
def getAllControllers(self):
return self._cntrls
def getNetworkSideControllers(self):
return self._cntrls
def setDownstream(self, cntrls):
for c in self._ll_cntrls:
c.downstream_destinations = cntrls
def getCpus(self):
return self._cpus
# Adds a private L2 for each cpu
def addPrivL2Cache(self, cache_type, pf_type=None):
self._ll_cntrls = []
for cpu in self._cpus:
l2_cache = cache_type(start_index_bit = self._block_size_bits,
is_icache = False)
if pf_type != None:
m5.fatal('Prefetching not supported yet')
l2_pf = NULL
cpu.l2 = CHI_L2Controller(self._ruby_system, l2_cache, l2_pf)
self._cntrls.append(cpu.l2)
self.connectController(cpu.l2)
self._ll_cntrls.append(cpu.l2)
for c in cpu._ll_cntrls:
c.downstream_destinations = [cpu.l2]
cpu._ll_cntrls = [cpu.l2]
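`addPrivL2Cache` above splices a private L2 between each CPU's L1 controllers and the node's downstream targets: the L1s now send misses to the L2, and the L2 becomes the node's last-level controller. The re-chaining can be sketched with plain lists (the names are illustrative, not gem5 objects):

```python
# Before adding the L2, the L1s are the node's last-level controllers.
cpu_ll = ['l1i', 'l1d']
node_ll = list(cpu_ll)

# Splice in a private L2: L1s now point at the L2, and the L2
# becomes the controller that talks to the home nodes (HNFs).
downstream = {}
l2 = 'l2'
for c in cpu_ll:
    downstream[c] = [l2]
node_ll = [l2]

assert downstream == {'l1i': ['l2'], 'l1d': ['l2']}
assert node_ll == ['l2']
```

When `setDownstream` is later called on the node, only `node_ll` (the L2) is pointed at the HNFs, which mirrors how `_ll_cntrls` is rebuilt in the method above.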
class CHI_HNF(CHI_Node):
'''
Encapsulates an HNF cache/directory controller.
The class method CHI_HNF.createAddrRanges must be called before
creating any CHI_HNF object, to set up the interleaved address
ranges used by the HNFs
'''
_addr_ranges = []
@classmethod
def createAddrRanges(cls, sys_mem_ranges, cache_line_size, num_hnfs):
# Create the HNFs interleaved addr ranges
block_size_bits = int(math.log(cache_line_size, 2))
cls._addr_ranges = []
llc_bits = int(math.log(num_hnfs, 2))
numa_bit = block_size_bits + llc_bits - 1
for i in range(num_hnfs):
ranges = []
for r in sys_mem_ranges:
addr_range = AddrRange(r.start, size = r.size(),
intlvHighBit = numa_bit,
intlvBits = llc_bits,
intlvMatch = i)
ranges.append(addr_range)
cls._addr_ranges.append((ranges, numa_bit, i))
@classmethod
def getAddrRanges(cls, hnf_idx):
assert(len(cls._addr_ranges) != 0)
return cls._addr_ranges[hnf_idx]
# The CHI controller can be a child of this object, or of another
# object if 'parent' is specified
def __init__(self, hnf_idx, ruby_system, llcache_type, parent):
super(CHI_HNF, self).__init__(ruby_system)
addr_ranges,intlvHighBit,intlvMatch = CHI_HNF.getAddrRanges(hnf_idx)
# All ranges should have the same interleaving
assert(len(addr_ranges) >= 1)
assert(intlvMatch == hnf_idx)
ll_cache = llcache_type(start_index_bit = intlvHighBit + 1)
self._cntrl = CHI_HNFController(ruby_system, ll_cache, NULL,
addr_ranges)
if parent == None:
self.cntrl = self._cntrl
else:
parent.cntrl = self._cntrl
self.connectController(self._cntrl)
def getAllControllers(self):
return [self._cntrl]
def getNetworkSideControllers(self):
return [self._cntrl]
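`CHI_HNF.createAddrRanges` above interleaves memory across the HNFs at cache-line granularity: the bits just above the block offset select which HNF owns a line. The bit arithmetic can be checked standalone (the values below assume a 64-byte line and 4 HNFs; a power-of-two HNF count is required by this scheme):

```python
import math

cache_line_size = 64
num_hnfs = 4          # must be a power of two for this interleaving

block_size_bits = int(math.log(cache_line_size, 2))   # 6
llc_bits = int(math.log(num_hnfs, 2))                 # 2
numa_bit = block_size_bits + llc_bits - 1             # 7

# Address bits [block_size_bits, numa_bit] (bits 6..7 here) select the
# HNF, so consecutive cache lines map to different HNFs round-robin.
def hnf_for(addr):
    return (addr >> block_size_bits) & (num_hnfs - 1)

assert numa_bit == 7
assert [hnf_for(line * cache_line_size) for line in range(5)] == [0, 1, 2, 3, 0]
```

This matches the `intlvHighBit = numa_bit`, `intlvBits = llc_bits`, `intlvMatch = i` arguments passed to `AddrRange` in the method, and explains why the HNF cache is created with `start_index_bit = intlvHighBit + 1`: the interleave bits are constant within one HNF and must not be used for set indexing.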
class CHI_SNF_Base(CHI_Node):
'''
Creates CHI node controllers for the memory controllers
'''
# The CHI controller can be a child of this object, or of another
# object if 'parent' is specified
def __init__(self, ruby_system, parent):
super(CHI_SNF_Base, self).__init__(ruby_system)
self._cntrl = Memory_Controller(
version = Versions.getVersion(Memory_Controller),
ruby_system = ruby_system,
triggerQueue = TriggerMessageBuffer(),
responseFromMemory = MessageBuffer(),
requestToMemory = MessageBuffer(ordered = True),
reqRdy = TriggerMessageBuffer())
self.connectController(self._cntrl)
if parent:
parent.cntrl = self._cntrl
else:
self.cntrl = self._cntrl
def getAllControllers(self):
return [self._cntrl]
def getNetworkSideControllers(self):
return [self._cntrl]
def getMemRange(self, mem_ctrl):
# TODO need some kind of transparent API for
# MemCtrl+DRAM vs SimpleMemory
if hasattr(mem_ctrl, 'range'):
return mem_ctrl.range
else:
return mem_ctrl.dram.range
class CHI_SNF_BootMem(CHI_SNF_Base):
'''
Create the SNF for the boot memory
'''
def __init__(self, ruby_system, parent, bootmem):
super(CHI_SNF_BootMem, self).__init__(ruby_system, parent)
self._cntrl.memory_out_port = bootmem.port
self._cntrl.addr_ranges = self.getMemRange(bootmem)
class CHI_SNF_MainMem(CHI_SNF_Base):
'''
Create the SNF for a list of main memory controllers
'''
def __init__(self, ruby_system, parent, mem_ctrl = None):
super(CHI_SNF_MainMem, self).__init__(ruby_system, parent)
if mem_ctrl:
self._cntrl.memory_out_port = mem_ctrl.port
self._cntrl.addr_ranges = self.getMemRange(mem_ctrl)
# else bind ports and range later
class CHI_RNI_Base(CHI_Node):
'''
Request node without cache / DMA
'''
# The CHI controller can be a child of this object, or of another
# object if 'parent' is specified
def __init__(self, ruby_system, parent):
super(CHI_RNI_Base, self).__init__(ruby_system)
self._sequencer = RubySequencer(version = Versions.getSeqId(),
ruby_system = ruby_system,
clk_domain = ruby_system.clk_domain)
self._cntrl = CHI_DMAController(ruby_system, self._sequencer)
if parent:
parent.cntrl = self._cntrl
else:
self.cntrl = self._cntrl
self.connectController(self._cntrl)
def getAllControllers(self):
return [self._cntrl]
def getNetworkSideControllers(self):
return [self._cntrl]
class CHI_RNI_DMA(CHI_RNI_Base):
'''
DMA controller wired up to a given dma port
'''
def __init__(self, ruby_system, dma_port, parent):
super(CHI_RNI_DMA, self).__init__(ruby_system, parent)
assert(dma_port != None)
self._sequencer.in_ports = dma_port
class CHI_RNI_IO(CHI_RNI_Base):
'''
DMA controller wired up to the ruby_system IO port
'''
def __init__(self, ruby_system, parent):
super(CHI_RNI_IO, self).__init__(ruby_system, parent)
ruby_system._io_port = self._sequencer
def noc_params_from_config(config, noc_params):
# mesh options
noc_params.num_rows = config['mesh']['num_rows']
noc_params.num_cols = config['mesh']['num_cols']
if 'router_latency' in config['mesh']:
noc_params.router_latency = config['mesh']['router_latency']
if 'link_latency' in config['mesh']:
noc_params.router_link_latency = config['mesh']['link_latency']
noc_params.node_link_latency = config['mesh']['link_latency']
if 'router_link_latency' in config['mesh']:
noc_params.router_link_latency = config['mesh']['router_link_latency']
if 'node_link_latency' in config['mesh']:
noc_params.node_link_latency = config['mesh']['node_link_latency']
if 'cross_links' in config['mesh']:
noc_params.cross_link_latency = \
config['mesh']['cross_link_latency']
noc_params.cross_links = []
for x, y in config['mesh']['cross_links']:
noc_params.cross_links.append((x, y))
noc_params.cross_links.append((y, x))
else:
noc_params.cross_links = []
noc_params.cross_link_latency = 0
# CHI_RNF options
noc_params.CHI_RNF = config['CHI_RNF']
# CHI_RNI_IO
noc_params.CHI_RNI_IO = config['CHI_RNI_IO']
# CHI_HNF options
noc_params.CHI_HNF = config['CHI_HNF']
if 'pairing' in config['CHI_HNF']:
noc_params.pairing = config['CHI_HNF']['pairing']
# CHI_SNF_MainMem
noc_params.CHI_SNF_MainMem = config['CHI_SNF_MainMem']
# CHI_SNF_IO (applies to CHI_SNF_Bootmem)
noc_params.CHI_SNF_IO = config['CHI_SNF_IO']
help="NoC config. parameters and bindings. "
"Required for CustomMesh topology")
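`noc_params_from_config` above expands each configured cross link into both directions, so the YAML only needs to list each link once. A minimal sketch of that expansion (the sample link list is illustrative):

```python
# Each (x, y) entry in the config yields a link in both directions,
# matching the cross_links loop in noc_params_from_config above.
config_cross_links = [(1, 6), (2, 5)]

cross_links = []
for x, y in config_cross_links:
    cross_links.append((x, y))
    cross_links.append((y, x))

assert cross_links == [(1, 6), (6, 1), (2, 5), (5, 2)]
```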
def read_config_file(file):
''' Read file as a module and return it '''
import types
import importlib.machinery
loader = importlib.machinery.SourceFileLoader('chi_configs', file)
chi_configs = types.ModuleType(loader.name)
loader.exec_module(chi_configs)
return chi_configs
def create_system(options, full_system, system, dma_ports, bootmem,
ruby_system, cpus):
@@ -657,25 +65,25 @@ def create_system(options, full_system, system, dma_ports, bootmem,
if options.num_l3caches < 1:
m5.fatal('--num-l3caches must be at least 1')
# Default parameters for the network
class NoC_Params(object):
def __init__(self):
self.topology = options.topology
self.network = options.network
self.router_link_latency = 1
self.node_link_latency = 1
self.router_latency = 1
self.router_buffer_size = 4
self.cntrl_msg_size = 8
self.data_width = 32
params = NoC_Params()
# read additional configurations from yaml file if provided
if options.noc_config:
with open(options.noc_config, 'r') as file:
noc_params_from_config(yaml.load(file), params)
elif params.topology == 'CustomMesh':
# read specialized classes from config file if provided
if options.chi_config:
chi_defs = read_config_file(options.chi_config)
elif options.topology == 'CustomMesh':
m5.fatal('--noc-config must be provided if topology is CustomMesh')
else:
# Use the defaults from CHI_config
from . import CHI_config as chi_defs
# NoC params
params = chi_defs.NoC_Params
# Node types
CHI_RNF = chi_defs.CHI_RNF
CHI_HNF = chi_defs.CHI_HNF
CHI_SNF_MainMem = chi_defs.CHI_SNF_MainMem
CHI_SNF_BootMem = chi_defs.CHI_SNF_BootMem
CHI_RNI_DMA = chi_defs.CHI_RNI_DMA
CHI_RNI_IO = chi_defs.CHI_RNI_IO
# Declare caches and controller types used by the protocol
# Notice tag and data accesses are not concurrent, so a cache hit
@@ -824,17 +232,17 @@ def create_system(options, full_system, system, dma_ports, bootmem,
ruby_system.network.data_msg_size = params.data_width
ruby_system.network.buffer_size = params.router_buffer_size
if params.topology == 'CustomMesh':
topology = create_topology(network_nodes, params)
elif params.topology in ['Crossbar', 'Pt2Pt']:
topology = create_topology(network_cntrls, params)
else:
m5.fatal("%s not supported!" % params.topology)
# Incorporate the params into options so it's propagated to
# makeTopology by the parent script
# makeTopology and create_topology by the parent scripts
for k in dir(params):
if not k.startswith('__'):
setattr(options, k, getattr(params, k))
if options.topology == 'CustomMesh':
topology = create_topology(network_nodes, options)
elif options.topology in ['Crossbar', 'Pt2Pt']:
topology = create_topology(network_cntrls, options)
else:
m5.fatal("%s not supported!" % options.topology)
return (cpu_sequencers, mem_cntrls, topology)

configs/ruby/CHI_config.py (new file)

@@ -0,0 +1,646 @@
# Copyright (c) 2021 ARM Limited
# All rights reserved.
#
# The license below extends only to copyright in the software and shall
# not be construed as granting a license to any other intellectual
# property including but not limited to intellectual property relating
# to a hardware implementation of the functionality of the software
# licensed hereunder. You may use the software subject to the license
# terms below provided that you ensure that this notice is replicated
# unmodified and in its entirety in all distributions of the software,
# modified or unmodified, in source code or in binary form.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''
Definitions for CHI nodes and controller types. These are used by
create_system in configs/ruby/CHI.py or may be used in custom configuration
scripts. When used with create_system, the user may provide an additional
configuration file as the --chi-config parameter to specialize the classes
defined here.
When using the CustomMesh topology, --chi-config must be provided with
specialization of the NoC_Params classes defining the NoC dimensions and
node to router binding. See configs/example/noc_config/2x4.py for an example.
'''
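As a concrete illustration of the specialization mechanism described in the docstring above, a `--chi-config` file can subclass these node classes and override their `NoC_Params`. This is only a sketch modeled after the `configs/example/noc_config/2x4.py` example the docstring cites; the import path and the concrete values are assumptions:

```python
# my_noc_config.py -- hypothetical file passed via --chi-config (sketch).
# Assumes gem5's config path setup makes configs/ruby/CHI_config.py
# importable as ruby.CHI_config, as done in configs/example/noc_config.
import ruby.CHI_config as CHI

class CHI_RNF(CHI.CHI_RNF):
    class NoC_Params(CHI.CHI_RNF.NoC_Params):
        # Bind one request node to each of these routers (example values)
        num_nodes_per_router = 1
        router_list = [0, 1, 2, 3]

class NoC_Params(CHI.NoC_Params):
    # Widen data links; this value also sets data_channel_size on the
    # CHI controllers (see configs/ruby/CHI.py). Example value only.
    data_width = 64
```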
import math
import m5
from m5.objects import *
class Versions:
'''
Helper class to obtain unique ids for a given controller class.
These are passed as the 'version' parameter when creating the controller.
'''
_seqs = 0
@classmethod
def getSeqId(cls):
val = cls._seqs
cls._seqs += 1
return val
_version = {}
@classmethod
def getVersion(cls, tp):
if tp not in cls._version:
cls._version[tp] = 0
val = cls._version[tp]
cls._version[tp] = val + 1
return val
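The `Versions` helper above is plain Python bookkeeping: one global counter for sequencer ids plus a per-type counter for controller versions. A standalone sketch of the same pattern (the class name here is illustrative, not the gem5 API):

```python
class VersionsSketch:
    """Hand out unique ids: globally for sequencers, per controller type."""
    _seqs = 0
    _version = {}

    @classmethod
    def getSeqId(cls):
        # Global, monotonically increasing sequencer id
        val = cls._seqs
        cls._seqs += 1
        return val

    @classmethod
    def getVersion(cls, tp):
        # First request for a given type starts its counter at 0
        cls._version.setdefault(tp, 0)
        val = cls._version[tp]
        cls._version[tp] = val + 1
        return val
```

So two controllers of the same type receive versions 0 and 1, while an unrelated type starts again from 0.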
class NoC_Params:
'''
Default parameters for the interconnect. The value of data_width is
also used to set the data_channel_size for all CHI controllers.
(see configs/ruby/CHI.py)
'''
router_link_latency = 1
node_link_latency = 1
router_latency = 1
router_buffer_size = 4
cntrl_msg_size = 8
data_width = 32
cross_links = []
cross_link_latency = 0
class CHI_Node(SubSystem):
'''
Base class with common functions for setting up Cache or Memory
controllers that are part of a CHI RNF, RNFI, HNF, or SNF nodes.
Notice getNetworkSideControllers and getAllControllers must be implemented
in the derived classes.
'''
class NoC_Params:
'''
NoC config. parameters and bindings required for CustomMesh topology.
Maps 'num_nodes_per_router' CHI nodes to each router provided in
'router_list'. This assumes len(router_list)*num_nodes_per_router
equals the number of nodes
If 'num_nodes_per_router' is left undefined, we circulate around
'router_list' until all nodes are mapped.
See 'distributeNodes' in configs/topologies/CustomMesh.py
'''
num_nodes_per_router = None
router_list = None
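The binding rule documented above (a fixed number of nodes per router when `num_nodes_per_router` is set, otherwise circulating over `router_list`) can be sketched in plain Python. This mirrors the described behavior of `distributeNodes` in `configs/topologies/CustomMesh.py`, not its exact code:

```python
def map_nodes_to_routers(nodes, router_list, num_nodes_per_router=None):
    """Return a {node: router} dict following the NoC_Params binding rule."""
    mapping = {}
    if num_nodes_per_router:
        # Evenly distribute: requires len(router_list) * num_nodes_per_router
        # to equal the number of nodes
        assert len(router_list) * num_nodes_per_router == len(nodes)
        for i, node in enumerate(nodes):
            mapping[node] = router_list[i // num_nodes_per_router]
    else:
        # Circulate around router_list until all nodes are mapped
        for i, node in enumerate(nodes):
            mapping[node] = router_list[i % len(router_list)]
    return mapping
```

For example, four nodes over routers `[5, 7]` with `num_nodes_per_router=2` pair the first two nodes with router 5 and the last two with router 7.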
def __init__(self, ruby_system):
super(CHI_Node, self).__init__()
self._ruby_system = ruby_system
self._network = ruby_system.network
def getNetworkSideControllers(self):
'''
Returns all ruby controllers that need to be connected to the
network
'''
raise NotImplementedError()
def getAllControllers(self):
'''
Returns all ruby controllers associated with this node
'''
raise NotImplementedError()
def setDownstream(self, cntrls):
'''
Sets cntrls as the downstream list of all controllers in this node
'''
for c in self.getNetworkSideControllers():
c.downstream_destinations = cntrls
def connectController(self, cntrl):
'''
Creates and configures the messages buffers for the CHI input/output
ports that connect to the network
'''
cntrl.reqOut = MessageBuffer()
cntrl.rspOut = MessageBuffer()
cntrl.snpOut = MessageBuffer()
cntrl.datOut = MessageBuffer()
cntrl.reqIn = MessageBuffer()
cntrl.rspIn = MessageBuffer()
cntrl.snpIn = MessageBuffer()
cntrl.datIn = MessageBuffer()
# All CHI ports are always connected to the network.
# Controllers that are not part of the getNetworkSideControllers list
# still communicate using internal routers, thus we need to wire up the
# ports
cntrl.reqOut.out_port = self._network.in_port
cntrl.rspOut.out_port = self._network.in_port
cntrl.snpOut.out_port = self._network.in_port
cntrl.datOut.out_port = self._network.in_port
cntrl.reqIn.in_port = self._network.out_port
cntrl.rspIn.in_port = self._network.out_port
cntrl.snpIn.in_port = self._network.out_port
cntrl.datIn.in_port = self._network.out_port
class TriggerMessageBuffer(MessageBuffer):
'''
MessageBuffer for triggering internal controller events.
These buffers should not be affected by the Ruby tester randomization
and allow popping messages enqueued in the same cycle.
'''
randomization = 'disabled'
allow_zero_latency = True
class OrderedTriggerMessageBuffer(TriggerMessageBuffer):
ordered = True
class CHI_Cache_Controller(Cache_Controller):
'''
Default parameters for a Cache controller
The Cache_Controller can also be used as a DMA requester or as
a pure directory if all cache allocation policies are disabled.
'''
def __init__(self, ruby_system):
super(CHI_Cache_Controller, self).__init__(
version = Versions.getVersion(Cache_Controller),
ruby_system = ruby_system,
mandatoryQueue = MessageBuffer(),
prefetchQueue = MessageBuffer(),
triggerQueue = TriggerMessageBuffer(),
retryTriggerQueue = OrderedTriggerMessageBuffer(),
replTriggerQueue = OrderedTriggerMessageBuffer(),
reqRdy = TriggerMessageBuffer(),
snpRdy = TriggerMessageBuffer())
# Set a somewhat large number since we rely heavily on internal
# triggers. To limit the controller performance, tweak other
# params such as: input port buffer size, cache banks, and output
# port latency
self.transitions_per_cycle = 128
# This should be set to true in the data cache controller to enable
# timeouts on unique lines when a store conditional fails
self.sc_lock_enabled = False
class CHI_L1Controller(CHI_Cache_Controller):
'''
Default parameters for a L1 Cache controller
'''
def __init__(self, ruby_system, sequencer, cache, prefetcher):
super(CHI_L1Controller, self).__init__(ruby_system)
self.sequencer = sequencer
self.cache = cache
self.use_prefetcher = False
self.send_evictions = True
self.is_HN = False
self.enable_DMT = False
self.enable_DCT = False
# Strict inclusive MOESI
self.allow_SD = True
self.alloc_on_seq_acc = True
self.alloc_on_seq_line_write = False
self.alloc_on_readshared = True
self.alloc_on_readunique = True
self.alloc_on_readonce = True
self.alloc_on_writeback = True
self.dealloc_on_unique = False
self.dealloc_on_shared = False
self.dealloc_backinv_unique = True
self.dealloc_backinv_shared = True
# Some reasonable default TBE params
self.number_of_TBEs = 16
self.number_of_repl_TBEs = 16
self.number_of_snoop_TBEs = 4
self.unify_repl_TBEs = False
class CHI_L2Controller(CHI_Cache_Controller):
'''
Default parameters for a L2 Cache controller
'''
def __init__(self, ruby_system, cache, prefetcher):
super(CHI_L2Controller, self).__init__(ruby_system)
self.sequencer = NULL
self.cache = cache
self.use_prefetcher = False
self.allow_SD = True
self.is_HN = False
self.enable_DMT = False
self.enable_DCT = False
self.send_evictions = False
# Strict inclusive MOESI
self.alloc_on_seq_acc = False
self.alloc_on_seq_line_write = False
self.alloc_on_readshared = True
self.alloc_on_readunique = True
self.alloc_on_readonce = True
self.alloc_on_writeback = True
self.dealloc_on_unique = False
self.dealloc_on_shared = False
self.dealloc_backinv_unique = True
self.dealloc_backinv_shared = True
# Some reasonable default TBE params
self.number_of_TBEs = 32
self.number_of_repl_TBEs = 32
self.number_of_snoop_TBEs = 16
self.unify_repl_TBEs = False
class CHI_HNFController(CHI_Cache_Controller):
'''
Default parameters for a coherent home node (HNF) cache controller
'''
def __init__(self, ruby_system, cache, prefetcher, addr_ranges):
super(CHI_HNFController, self).__init__(ruby_system)
self.sequencer = NULL
self.cache = cache
self.use_prefetcher = False
self.addr_ranges = addr_ranges
self.allow_SD = True
self.is_HN = True
self.enable_DMT = True
self.enable_DCT = True
self.send_evictions = False
# MOESI / Mostly inclusive for shared / Exclusive for unique
self.alloc_on_seq_acc = False
self.alloc_on_seq_line_write = False
self.alloc_on_readshared = True
self.alloc_on_readunique = False
self.alloc_on_readonce = True
self.alloc_on_writeback = True
self.dealloc_on_unique = True
self.dealloc_on_shared = False
self.dealloc_backinv_unique = False
self.dealloc_backinv_shared = False
# Some reasonable default TBE params
self.number_of_TBEs = 32
self.number_of_repl_TBEs = 32
self.number_of_snoop_TBEs = 1 # should not receive any snoop
self.unify_repl_TBEs = False
class CHI_DMAController(CHI_Cache_Controller):
'''
Default parameters for a DMA controller
'''
def __init__(self, ruby_system, sequencer):
super(CHI_DMAController, self).__init__(ruby_system)
self.sequencer = sequencer
class DummyCache(RubyCache):
dataAccessLatency = 0
tagAccessLatency = 1
size = "128"
assoc = 1
self.use_prefetcher = False
self.cache = DummyCache()
self.sequencer.dcache = NULL
# All allocations are false
# Deallocations are true (don't really matter)
self.allow_SD = False
self.is_HN = False
self.enable_DMT = False
self.enable_DCT = False
self.alloc_on_seq_acc = False
self.alloc_on_seq_line_write = False
self.alloc_on_readshared = False
self.alloc_on_readunique = False
self.alloc_on_readonce = False
self.alloc_on_writeback = False
self.dealloc_on_unique = False
self.dealloc_on_shared = False
self.dealloc_backinv_unique = False
self.dealloc_backinv_shared = False
self.send_evictions = False
self.number_of_TBEs = 16
self.number_of_repl_TBEs = 1
self.number_of_snoop_TBEs = 1 # should not receive any snoop
self.unify_repl_TBEs = False
class CPUSequencerWrapper:
'''
Other generic configuration scripts assume a matching number of sequencers
and cpus. This wraps the instruction and data sequencer so they are
compatible with the other scripts. This assumes all scripts are using
connectCpuPorts/connectIOPorts to bind ports
'''
def __init__(self, iseq, dseq):
# use this style due to __setattr__ override below
self.__dict__['inst_seq'] = iseq
self.__dict__['data_seq'] = dseq
self.__dict__['support_data_reqs'] = True
self.__dict__['support_inst_reqs'] = True
# Compatibility with certain scripts that wire up ports
# without connectCpuPorts
self.__dict__['slave'] = dseq.in_ports
self.__dict__['in_ports'] = dseq.in_ports
def connectCpuPorts(self, cpu):
assert(isinstance(cpu, BaseCPU))
cpu.icache_port = self.inst_seq.in_ports
for p in cpu._cached_ports:
if str(p) != 'icache_port':
exec('cpu.%s = self.data_seq.in_ports' % p)
cpu.connectUncachedPorts(self.data_seq)
def connectIOPorts(self, piobus):
self.data_seq.connectIOPorts(piobus)
def __setattr__(self, name, value):
setattr(self.inst_seq, name, value)
setattr(self.data_seq, name, value)
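`CPUSequencerWrapper` relies on a classic Python idiom: because `__setattr__` is overridden to forward writes to both sequencers, the wrapper's own fields must be installed through `self.__dict__` to avoid recursing into the override. A minimal standalone sketch of that idiom (illustrative names only):

```python
class PairProxy:
    """Forward attribute writes to two wrapped objects."""
    def __init__(self, a, b):
        # Bypass the __setattr__ override for the proxy's own fields
        self.__dict__['a'] = a
        self.__dict__['b'] = b

    def __setattr__(self, name, value):
        # Every normal attribute write lands on both wrapped objects
        setattr(self.a, name, value)
        setattr(self.b, name, value)

class Obj:
    pass

inst, data = Obj(), Obj()
proxy = PairProxy(inst, data)
proxy.clk_domain = 'fast'  # set on both inst and data, not on proxy
```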
class CHI_RNF(CHI_Node):
'''
Defines a CHI request node.
Notice all controllers and sequencers are set as children of the cpus, so
this object acts more like a proxy for setting things up and has no topology
significance unless the cpus are set as its children at the top level.
'''
def __init__(self, cpus, ruby_system,
l1Icache_type, l1Dcache_type,
cache_line_size,
l1Iprefetcher_type=None, l1Dprefetcher_type=None):
super(CHI_RNF, self).__init__(ruby_system)
self._block_size_bits = int(math.log(cache_line_size, 2))
# All sequencers and controllers
self._seqs = []
self._cntrls = []
# Last level controllers in this node, i.e., the ones that will send
# requests to the home nodes
self._ll_cntrls = []
self._cpus = cpus
# First creates L1 caches and sequencers
for cpu in self._cpus:
cpu.inst_sequencer = RubySequencer(version = Versions.getSeqId(),
ruby_system = ruby_system)
cpu.data_sequencer = RubySequencer(version = Versions.getSeqId(),
ruby_system = ruby_system)
self._seqs.append(CPUSequencerWrapper(cpu.inst_sequencer,
cpu.data_sequencer))
# caches
l1i_cache = l1Icache_type(start_index_bit = self._block_size_bits,
is_icache = True)
l1d_cache = l1Dcache_type(start_index_bit = self._block_size_bits,
is_icache = False)
# Placeholders for future prefetcher support
if l1Iprefetcher_type != None or l1Dprefetcher_type != None:
m5.fatal('Prefetching not supported yet')
l1i_pf = NULL
l1d_pf = NULL
# cache controllers
cpu.l1i = CHI_L1Controller(ruby_system, cpu.inst_sequencer,
l1i_cache, l1i_pf)
cpu.l1d = CHI_L1Controller(ruby_system, cpu.data_sequencer,
l1d_cache, l1d_pf)
cpu.inst_sequencer.dcache = NULL
cpu.data_sequencer.dcache = cpu.l1d.cache
cpu.l1d.sc_lock_enabled = True
cpu._ll_cntrls = [cpu.l1i, cpu.l1d]
for c in cpu._ll_cntrls:
self._cntrls.append(c)
self.connectController(c)
self._ll_cntrls.append(c)
def getSequencers(self):
return self._seqs
def getAllControllers(self):
return self._cntrls
def getNetworkSideControllers(self):
return self._cntrls
def setDownstream(self, cntrls):
for c in self._ll_cntrls:
c.downstream_destinations = cntrls
def getCpus(self):
return self._cpus
# Adds a private L2 for each cpu
def addPrivL2Cache(self, cache_type, pf_type=None):
self._ll_cntrls = []
for cpu in self._cpus:
l2_cache = cache_type(start_index_bit = self._block_size_bits,
is_icache = False)
if pf_type != None:
m5.fatal('Prefetching not supported yet')
l2_pf = NULL
cpu.l2 = CHI_L2Controller(self._ruby_system, l2_cache, l2_pf)
self._cntrls.append(cpu.l2)
self.connectController(cpu.l2)
self._ll_cntrls.append(cpu.l2)
for c in cpu._ll_cntrls:
c.downstream_destinations = [cpu.l2]
cpu._ll_cntrls = [cpu.l2]
class CHI_HNF(CHI_Node):
'''
Encapsulates an HNF cache/directory controller.
The class method CHI_HNF.createAddrRanges must be called before creating
any CHI_HNF object, to set up the interleaved address ranges used by the
HNFs.
'''
class NoC_Params(CHI_Node.NoC_Params):
'''HNFs may also define the 'pairing' parameter to allow pairing'''
pairing = None
_addr_ranges = []
@classmethod
def createAddrRanges(cls, sys_mem_ranges, cache_line_size, num_hnfs):
# Create the HNFs interleaved addr ranges
block_size_bits = int(math.log(cache_line_size, 2))
cls._addr_ranges = []
llc_bits = int(math.log(num_hnfs, 2))
numa_bit = block_size_bits + llc_bits - 1
for i in range(num_hnfs):
ranges = []
for r in sys_mem_ranges:
addr_range = AddrRange(r.start, size = r.size(),
intlvHighBit = numa_bit,
intlvBits = llc_bits,
intlvMatch = i)
ranges.append(addr_range)
cls._addr_ranges.append((ranges, numa_bit, i))
@classmethod
def getAddrRanges(cls, hnf_idx):
assert(len(cls._addr_ranges) != 0)
return cls._addr_ranges[hnf_idx]
# The CHI controller is a child of this object, or of 'parent' if
# specified
def __init__(self, hnf_idx, ruby_system, llcache_type, parent):
super(CHI_HNF, self).__init__(ruby_system)
addr_ranges,intlvHighBit,intlvMatch = self.getAddrRanges(hnf_idx)
# All ranges should have the same interleaving
assert(len(addr_ranges) >= 1)
assert(intlvMatch == hnf_idx)
ll_cache = llcache_type(start_index_bit = intlvHighBit + 1)
self._cntrl = CHI_HNFController(ruby_system, ll_cache, NULL,
addr_ranges)
if parent == None:
self.cntrl = self._cntrl
else:
parent.cntrl = self._cntrl
self.connectController(self._cntrl)
def getAllControllers(self):
return [self._cntrl]
def getNetworkSideControllers(self):
return [self._cntrl]
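`createAddrRanges` above interleaves memory across the HNFs on the bits just above the cache-line offset (`numa_bit = block_size_bits + llc_bits - 1`). Assuming gem5's `AddrRange` interleaving selects the field of `intlvBits` bits whose top bit is `intlvHighBit`, the owning HNF of an address can be sketched as:

```python
import math

def hnf_owner(addr, cache_line_size, num_hnfs):
    """Index of the HNF whose interleaved range matches addr (sketch)."""
    block_size_bits = int(math.log(cache_line_size, 2))
    llc_bits = int(math.log(num_hnfs, 2))
    # numa_bit = block_size_bits + llc_bits - 1 is the top interleave bit,
    # so the selector field is addr[block_size_bits .. numa_bit]
    return (addr >> block_size_bits) & (num_hnfs - 1)

# Consecutive cache lines round-robin across the HNFs
[hnf_owner(a, 64, 4) for a in (0, 64, 128, 192, 256)]  # [0, 1, 2, 3, 0]
```

With 64 B lines and 4 HNFs, successive lines rotate through all four home nodes, which spreads LLC traffic evenly.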
class CHI_SNF_Base(CHI_Node):
'''
Creates CHI node controllers for the memory controllers
'''
# The CHI controller is a child of this object, or of 'parent' if
# specified
def __init__(self, ruby_system, parent):
super(CHI_SNF_Base, self).__init__(ruby_system)
self._cntrl = Memory_Controller(
version = Versions.getVersion(Memory_Controller),
ruby_system = ruby_system,
triggerQueue = TriggerMessageBuffer(),
responseFromMemory = MessageBuffer(),
requestToMemory = MessageBuffer(ordered = True),
reqRdy = TriggerMessageBuffer())
self.connectController(self._cntrl)
if parent:
parent.cntrl = self._cntrl
else:
self.cntrl = self._cntrl
def getAllControllers(self):
return [self._cntrl]
def getNetworkSideControllers(self):
return [self._cntrl]
def getMemRange(self, mem_ctrl):
# TODO need some kind of transparent API for
# MemCtrl+DRAM vs SimpleMemory
if hasattr(mem_ctrl, 'range'):
return mem_ctrl.range
else:
return mem_ctrl.dram.range
class CHI_SNF_BootMem(CHI_SNF_Base):
'''
Create the SNF for the boot memory
'''
def __init__(self, ruby_system, parent, bootmem):
super(CHI_SNF_BootMem, self).__init__(ruby_system, parent)
self._cntrl.memory_out_port = bootmem.port
self._cntrl.addr_ranges = self.getMemRange(bootmem)
class CHI_SNF_MainMem(CHI_SNF_Base):
'''
Create the SNF for a main memory controller
'''
def __init__(self, ruby_system, parent, mem_ctrl = None):
super(CHI_SNF_MainMem, self).__init__(ruby_system, parent)
if mem_ctrl:
self._cntrl.memory_out_port = mem_ctrl.port
self._cntrl.addr_ranges = self.getMemRange(mem_ctrl)
# else bind ports and range later
class CHI_RNI_Base(CHI_Node):
'''
Request node without cache / DMA
'''
# The CHI controller is a child of this object, or of 'parent' if
# specified
def __init__(self, ruby_system, parent):
super(CHI_RNI_Base, self).__init__(ruby_system)
self._sequencer = RubySequencer(version = Versions.getSeqId(),
ruby_system = ruby_system,
clk_domain = ruby_system.clk_domain)
self._cntrl = CHI_DMAController(ruby_system, self._sequencer)
if parent:
parent.cntrl = self._cntrl
else:
self.cntrl = self._cntrl
self.connectController(self._cntrl)
def getAllControllers(self):
return [self._cntrl]
def getNetworkSideControllers(self):
return [self._cntrl]
class CHI_RNI_DMA(CHI_RNI_Base):
'''
DMA controller wired up to a given DMA port
'''
def __init__(self, ruby_system, dma_port, parent):
super(CHI_RNI_DMA, self).__init__(ruby_system, parent)
assert(dma_port != None)
self._sequencer.in_ports = dma_port
class CHI_RNI_IO(CHI_RNI_Base):
'''
DMA controller wired up to the ruby_system IO port
'''
def __init__(self, ruby_system, parent):
super(CHI_RNI_IO, self).__init__(ruby_system, parent)
ruby_system._io_port = self._sequencer


@@ -78,10 +78,10 @@ def create_system(options, full_system, system, dma_ports, bootmem,
dma_cntrl_nodes = []
assert (options.num_cpus % options.num_clusters == 0)
num_cpus_per_cluster = options.num_cpus / options.num_clusters
num_cpus_per_cluster = options.num_cpus // options.num_clusters
assert (options.num_l2caches % options.num_clusters == 0)
num_l2caches_per_cluster = options.num_l2caches / options.num_clusters
num_l2caches_per_cluster = options.num_l2caches // options.num_clusters
l2_bits = int(math.log(num_l2caches_per_cluster, 2))
block_size_bits = int(math.log(options.cacheline_size, 2))


@@ -42,7 +42,7 @@ from m5.objects import *
from m5.defines import buildEnv
if buildEnv['PROTOCOL'] == 'CHI':
import ruby.CHI as CHI
import ruby.CHI_config as CHI
from topologies.BaseTopology import SimpleTopology
@@ -163,8 +163,12 @@ class CustomMesh(SimpleTopology):
return node_router
def distributeNodes(self, num_nodes_per_router, router_idx_list,
node_list):
def distributeNodes(self, node_placement_config, node_list):
if len(node_list) == 0:
return
num_nodes_per_router = node_placement_config.num_nodes_per_router
router_idx_list = node_placement_config.router_list
if num_nodes_per_router:
# evenly distribute nodes to all listed routers
@@ -233,25 +237,45 @@ class CustomMesh(SimpleTopology):
self._node_link_latency = options.link_latency
# classify nodes into different types
rnf_list = []
hnf_list = []
mem_ctrls = []
io_mem_ctrls = []
io_rni_ctrls = []
rnf_nodes = []
hnf_nodes = []
mem_nodes = []
io_mem_nodes = []
rni_dma_nodes = []
rni_io_nodes = []
# Notice below that the NoC_Params type must be the same for all nodes
# with the same base type.
rnf_params = None
hnf_params = None
mem_params = None
io_mem_params = None
rni_dma_params = None
rni_io_params = None
def check_same(val, curr):
assert(curr == None or curr == val)
return val
for n in self.nodes:
if isinstance(n, CHI.CHI_RNF):
rnf_list.append(n)
rnf_nodes.append(n)
rnf_params = check_same(type(n).NoC_Params, rnf_params)
elif isinstance(n, CHI.CHI_HNF):
hnf_list.append(n)
hnf_nodes.append(n)
hnf_params = check_same(type(n).NoC_Params, hnf_params)
elif isinstance(n, CHI.CHI_SNF_MainMem):
mem_ctrls.append(n)
mem_nodes.append(n)
mem_params = check_same(type(n).NoC_Params, mem_params)
elif isinstance(n, CHI.CHI_SNF_BootMem):
io_mem_ctrls.append(n)
io_mem_nodes.append(n)
io_mem_params = check_same(type(n).NoC_Params, io_mem_params)
elif isinstance(n, CHI.CHI_RNI_DMA):
io_rni_ctrls.append(n)
rni_dma_nodes.append(n)
rni_dma_params = check_same(type(n).NoC_Params, rni_dma_params)
elif isinstance(n, CHI.CHI_RNI_IO):
io_rni_ctrls.append(n)
rni_io_nodes.append(n)
rni_io_params = check_same(type(n).NoC_Params, rni_io_params)
else:
fatal('topologies.CustomMesh: {} not supported'
.format(n.__class__.__name__))
@@ -269,39 +293,20 @@ class CustomMesh(SimpleTopology):
options.cross_links, options.cross_link_latency)
# Place CHI_RNF on the mesh
num_nodes_per_router = options.CHI_RNF['num_nodes_per_router'] \
if 'num_nodes_per_router' in options.CHI_RNF else None
self.distributeNodes(num_nodes_per_router,
options.CHI_RNF['router_list'],
rnf_list)
self.distributeNodes(rnf_params, rnf_nodes)
# Place CHI_HNF on the mesh
num_nodes_per_router = options.CHI_HNF['num_nodes_per_router'] \
if 'num_nodes_per_router' in options.CHI_HNF else None
self.distributeNodes(num_nodes_per_router,
options.CHI_HNF['router_list'],
hnf_list)
self.distributeNodes(hnf_params, hnf_nodes)
# Place CHI_SNF_MainMem on the mesh
num_nodes_per_router = options.CHI_SNF_MainMem['num_nodes_per_router']\
if 'num_nodes_per_router' in options.CHI_SNF_MainMem else None
self.distributeNodes(num_nodes_per_router,
options.CHI_SNF_MainMem['router_list'],
mem_ctrls)
self.distributeNodes(mem_params, mem_nodes)
# Place all IO mem nodes on the mesh
num_nodes_per_router = options.CHI_SNF_IO['num_nodes_per_router'] \
if 'num_nodes_per_router' in options.CHI_SNF_IO else None
self.distributeNodes(num_nodes_per_router,
options.CHI_SNF_IO['router_list'],
io_mem_ctrls)
self.distributeNodes(io_mem_params, io_mem_nodes)
# Place all IO request nodes on the mesh
num_nodes_per_router = options.CHI_RNI_IO['num_nodes_per_router'] \
if 'num_nodes_per_router' in options.CHI_RNI_IO else None
self.distributeNodes(num_nodes_per_router,
options.CHI_RNI_IO['router_list'],
io_rni_ctrls)
self.distributeNodes(rni_dma_params, rni_dma_nodes)
self.distributeNodes(rni_io_params, rni_io_nodes)
# Set up
network.int_links = self._int_links


@@ -31,7 +31,7 @@ PROJECT_NAME = gem5
# This could be handy for archiving the generated documentation or
# if some version control system is used.
PROJECT_NUMBER = DEVELOP-FOR-V20.2
PROJECT_NUMBER = v21.0.0.0
# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute)
# base path where the generated documentation will be put.


@@ -30,22 +30,20 @@
from m5.params import *
from m5.objects.System import System
from m5.objects.Workload import Workload
from m5.objects.Workload import Workload, KernelWorkload
class RiscvFsWorkload(Workload):
type = 'RiscvFsWorkload'
cxx_class = 'RiscvISA::FsWorkload'
cxx_header = 'arch/riscv/fs_workload.hh'
abstract = True
bare_metal = Param.Bool(False, "Using Bare Metal Application?")
reset_vect = Param.Addr(0x0, 'Reset vector')
class RiscvBareMetal(RiscvFsWorkload):
class RiscvBareMetal(Workload):
type = 'RiscvBareMetal'
cxx_class = 'RiscvISA::BareMetal'
cxx_header = 'arch/riscv/bare_metal/fs_workload.hh'
bootloader = Param.String("File that contains the bootloader code")
bare_metal = Param.Bool(True, "Using Bare Metal Application?")
reset_vect = Param.Addr(0x0, 'Reset vector')
bare_metal = True
class RiscvLinux(KernelWorkload):
type = 'RiscvLinux'
cxx_class = 'RiscvISA::FsLinux'
cxx_header = 'arch/riscv/linux/fs_workload.hh'
dtb_filename = Param.String("",
"File that contains the Device Tree Blob. Don't use DTB if empty.")
dtb_addr = Param.Addr(0x87e00000, "DTB address")


@@ -58,6 +58,7 @@ if env['TARGET_ISA'] == 'riscv':
Source('linux/se_workload.cc')
Source('linux/linux.cc')
Source('linux/fs_workload.cc')
Source('bare_metal/fs_workload.cc')


@@ -32,12 +32,14 @@
#include "arch/riscv/faults.hh"
#include "base/loader/object_file.hh"
#include "sim/system.hh"
#include "sim/workload.hh"
namespace RiscvISA
{
BareMetal::BareMetal(const Params &p) : RiscvISA::FsWorkload(p),
bootloader(Loader::createObjectFile(p.bootloader))
BareMetal::BareMetal(const Params &p) : Workload(p),
_isBareMetal(p.bare_metal), _resetVect(p.reset_vect),
bootloader(Loader::createObjectFile(p.bootloader))
{
fatal_if(!bootloader, "Could not load bootloader file %s.", p.bootloader);
_resetVect = bootloader->entryPoint();
@@ -52,7 +54,7 @@ BareMetal::~BareMetal()
void
BareMetal::initState()
{
RiscvISA::FsWorkload::initState();
Workload::initState();
for (auto *tc: system->threads) {
RiscvISA::Reset().invoke(tc);


@@ -29,20 +29,24 @@
#ifndef __ARCH_RISCV_BARE_METAL_SYSTEM_HH__
#define __ARCH_RISCV_BARE_METAL_SYSTEM_HH__
#include "arch/riscv/fs_workload.hh"
#include "params/RiscvBareMetal.hh"
#include "sim/workload.hh"
namespace RiscvISA
{
class BareMetal : public RiscvISA::FsWorkload
class BareMetal : public Workload
{
protected:
// checker for bare metal application
bool _isBareMetal;
// entry point for simulation
Addr _resetVect;
Loader::ObjectFile *bootloader;
Loader::SymbolTable bootloaderSymtab;
public:
typedef RiscvBareMetalParams Params;
PARAMS(RiscvBareMetal);
BareMetal(const Params &p);
~BareMetal();
@@ -54,11 +58,20 @@ class BareMetal : public RiscvISA::FsWorkload
{
return bootloaderSymtab;
}
bool
insertSymbol(const Loader::Symbol &symbol) override
{
return bootloaderSymtab.insert(symbol);
}
// return reset vector
Addr resetVect() const { return _resetVect; }
// return bare metal checker
bool isBareMetal() const { return _isBareMetal; }
Addr getEntry() const override { return _resetVect; }
};
} // namespace RiscvISA


@@ -31,7 +31,6 @@
#include "arch/riscv/faults.hh"
#include "arch/riscv/fs_workload.hh"
#include "arch/riscv/insts/static_inst.hh"
#include "arch/riscv/isa.hh"
#include "arch/riscv/registers.hh"
@@ -41,6 +40,7 @@
#include "debug/Fault.hh"
#include "sim/debug.hh"
#include "sim/full_system.hh"
#include "sim/workload.hh"
namespace RiscvISA
{
@@ -156,8 +156,8 @@ void Reset::invoke(ThreadContext *tc, const StaticInstPtr &inst)
tc->setMiscReg(MISCREG_MCAUSE, 0);
// Advance the PC to the implementation-defined reset vector
auto workload = dynamic_cast<FsWorkload *>(tc->getSystemPtr()->workload);
PCState pc = workload->resetVect();
auto workload = dynamic_cast<Workload *>(tc->getSystemPtr()->workload);
PCState pc = workload->getEntry();
tc->pcState(pc);
}


@@ -92,7 +92,11 @@ class CSROp : public RiscvStaticInst
CSROp(const char *mnem, MachInst _machInst, OpClass __opClass)
: RiscvStaticInst(mnem, _machInst, __opClass),
csr(FUNCT12), uimm(CSRIMM)
{}
{
if (csr == CSR_SATP) {
flags[IsSquashAfter] = true;
}
}
std::string generateDisassembly(
Addr pc, const Loader::SymbolTable *symtab) const override;


@@ -0,0 +1,75 @@
/*
* Copyright (c) 2021 Huawei International
* All rights reserved
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met: redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer;
* redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution;
* neither the name of the copyright holders nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "arch/riscv/linux/fs_workload.hh"
#include "arch/riscv/faults.hh"
#include "base/loader/dtb_file.hh"
#include "base/loader/object_file.hh"
#include "base/loader/symtab.hh"
#include "sim/kernel_workload.hh"
#include "sim/system.hh"
namespace RiscvISA
{
void
FsLinux::initState()
{
KernelWorkload::initState();
if (params().dtb_filename != "") {
inform("Loading DTB file: %s at address %#x\n", params().dtb_filename,
params().dtb_addr);
auto *dtb_file = new ::Loader::DtbFile(params().dtb_filename);
if (!dtb_file->addBootCmdLine(
commandLine.c_str(), commandLine.size())) {
warn("couldn't append bootargs to DTB file: %s\n",
params().dtb_filename);
}
dtb_file->buildImage().offset(params().dtb_addr)
.write(system->physProxy);
delete dtb_file;
for (auto *tc: system->threads) {
tc->setIntReg(11, params().dtb_addr);
}
} else {
warn("No DTB file specified\n");
}
for (auto *tc: system->threads) {
RiscvISA::Reset().invoke(tc);
tc->activate();
}
}
} // namespace RiscvISA


@@ -1,7 +1,6 @@
/*
* Copyright (c) 2002-2005 The Regents of The University of Michigan
* Copyright (c) 2007 MIPS Technologies, Inc.
* All rights reserved.
* Copyright (c) 2021 Huawei International
* All rights reserved
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
@@ -27,38 +26,24 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __ARCH_RISCV_FS_WORKLOAD_HH__
#define __ARCH_RISCV_FS_WORKLOAD_HH__
#ifndef __ARCH_RISCV_LINUX_SYSTEM_HH__
#define __ARCH_RISCV_LINUX_SYSTEM_HH__
#include "params/RiscvFsWorkload.hh"
#include "sim/sim_object.hh"
#include "sim/workload.hh"
#include "params/RiscvLinux.hh"
#include "sim/kernel_workload.hh"
namespace RiscvISA
{
class FsWorkload : public Workload
class FsLinux : public KernelWorkload
{
protected:
// checker for bare metal application
bool _isBareMetal;
// entry point for simulation
Addr _resetVect;
public:
FsWorkload(const RiscvFsWorkloadParams &p) : Workload(p),
_isBareMetal(p.bare_metal), _resetVect(p.reset_vect)
{}
PARAMS(RiscvLinux);
FsLinux(const Params &p) : KernelWorkload(p) {}
// return reset vector
Addr resetVect() const { return _resetVect; }
// return bare metal checker
bool isBareMetal() const { return _isBareMetal; }
Addr getEntry() const override { return _resetVect; }
void initState() override;
};
} // namespace RiscvISA
#endif // __ARCH_RISCV_FS_WORKLOAD_HH__
#endif // __ARCH_RISCV_LINUX_FS_WORKLOAD_HH__


@@ -54,7 +54,7 @@ void
PMAChecker::check(const RequestPtr &req)
{
if (isUncacheable(req->getPaddr(), req->getSize())) {
req->setFlags(Request::UNCACHEABLE);
req->setFlags(Request::UNCACHEABLE | Request::STRICT_ORDER);
}
}


@@ -35,7 +35,6 @@
#include <vector>
#include "arch/riscv/faults.hh"
#include "arch/riscv/fs_workload.hh"
#include "arch/riscv/mmu.hh"
#include "arch/riscv/pagetable.hh"
#include "arch/riscv/pagetable_walker.hh"


@@ -29,4 +29,4 @@
/**
* @ingroup api_base_utils
*/
const char *gem5Version = "[DEVELOP-FOR-V20.2]";
const char *gem5Version = "21.0.0.0";


@@ -37,6 +37,7 @@ from m5.objects.Device import BasicPioDevice
from m5.objects.IntPin import IntSinkPin
from m5.params import *
from m5.proxy import *
from m5.util.fdthelper import *
class Clint(BasicPioDevice):
"""
@@ -51,3 +52,21 @@ class Clint(BasicPioDevice):
intrctrl = Param.IntrControl(Parent.any, "interrupt controller")
int_pin = IntSinkPin('Pin to receive RTC signal')
pio_size = Param.Addr(0xC000, "PIO Size")
def generateDeviceTree(self, state):
node = self.generateBasicPioDeviceNode(state, "clint", self.pio_addr,
self.pio_size)
cpus = self.system.unproxy(self).cpu
int_extended = list()
for cpu in cpus:
phandle = state.phandle(cpu)
int_extended.append(phandle)
int_extended.append(0x3)
int_extended.append(phandle)
int_extended.append(0x7)
node.append(FdtPropertyWords("interrupts-extended", int_extended))
node.appendCompatible(["riscv,clint0"])
yield node


@@ -42,6 +42,7 @@ from m5.objects.Uart import Uart8250
from m5.objects.Terminal import Terminal
from m5.params import *
from m5.proxy import *
from m5.util.fdthelper import *
class HiFive(Platform):
"""HiFive Platform
@@ -111,6 +112,9 @@ class HiFive(Platform):
uart_int_id = Param.Int(0xa, "PLIC Uart interrupt ID")
terminal = Terminal()
# Dummy param for generating devicetree
cpu_count = Param.Int(0, "dummy")
def _on_chip_devices(self):
"""Returns a list of on-chip peripherals
"""
@@ -167,3 +171,39 @@ class HiFive(Platform):
"""
for device in self._off_chip_devices():
device.pio = bus.mem_side_ports
def generateDeviceTree(self, state):
cpus_node = FdtNode("cpus")
cpus_node.append(FdtPropertyWords("timebase-frequency", [10000000]))
yield cpus_node
node = FdtNode("soc")
local_state = FdtState(addr_cells=2, size_cells=2)
node.append(local_state.addrCellsProperty())
node.append(local_state.sizeCellsProperty())
node.append(FdtProperty("ranges"))
node.appendCompatible(["simple-bus"])
for subnode in self.recurseDeviceTree(local_state):
node.append(subnode)
yield node
def annotateCpuDeviceNode(self, cpu, state):
cpu.append(FdtPropertyStrings('mmu-type', 'riscv,sv48'))
cpu.append(FdtPropertyStrings('status', 'okay'))
cpu.append(FdtPropertyStrings('riscv,isa', 'rv64imafdcsu'))
cpu.appendCompatible(["riscv"])
int_node = FdtNode("interrupt-controller")
int_state = FdtState(interrupt_cells=1)
int_node.append(int_state.interruptCellsProperty())
int_node.append(FdtProperty("interrupt-controller"))
int_node.appendCompatible("riscv,cpu-intc")
cpus = self.system.unproxy(self).cpu
phandle = int_state.phandle(cpus[self.cpu_count])
self.cpu_count += 1
int_node.append(FdtPropertyWords("phandle", [phandle]))
cpu.append(int_node)


@@ -36,6 +36,7 @@
from m5.objects.Device import BasicPioDevice
from m5.params import *
from m5.proxy import *
from m5.util.fdthelper import *
class Plic(BasicPioDevice):
"""
@@ -50,3 +51,30 @@ class Plic(BasicPioDevice):
intrctrl = Param.IntrControl(Parent.any, "interrupt controller")
pio_size = Param.Addr(0x4000000, "PIO Size")
n_src = Param.Int("Number of interrupt sources")
def generateDeviceTree(self, state):
node = self.generateBasicPioDeviceNode(state, "plic", self.pio_addr,
self.pio_size)
int_state = FdtState(addr_cells=0, interrupt_cells=1)
node.append(int_state.addrCellsProperty())
node.append(int_state.interruptCellsProperty())
phandle = int_state.phandle(self)
node.append(FdtPropertyWords("phandle", [phandle]))
node.append(FdtPropertyWords("riscv,ndev", [self.n_src - 1]))
cpus = self.system.unproxy(self).cpu
int_extended = list()
for cpu in cpus:
phandle = int_state.phandle(cpu)
int_extended.append(phandle)
int_extended.append(0xb)
int_extended.append(phandle)
int_extended.append(0x9)
node.append(FdtPropertyWords("interrupts-extended", int_extended))
node.append(FdtProperty("interrupt-controller"))
node.appendCompatible(["riscv,plic0"])
yield node


@@ -36,6 +36,7 @@
from m5.objects.Device import BasicPioDevice
from m5.params import *
from m5.proxy import *
from m5.util.fdthelper import *
class PlicIntDevice(BasicPioDevice):
type = 'PlicIntDevice'
@@ -44,3 +45,13 @@ class PlicIntDevice(BasicPioDevice):
platform = Param.Platform(Parent.any, "Platform")
pio_size = Param.Addr("PIO Size")
interrupt_id = Param.Int("PLIC Interrupt ID")
def generatePlicDeviceNode(self, state, name):
node = self.generateBasicPioDeviceNode(state, name,
self.pio_addr, self.pio_size)
plic = self.platform.unproxy(self).plic
node.append(FdtPropertyWords("interrupts", [self.interrupt_id]))
node.append(FdtPropertyWords("interrupt-parent", state.phandle(plic)))
return node


@@ -37,6 +37,7 @@
from m5.SimObject import SimObject
from m5.params import *
from m5.proxy import *
from m5.util.fdthelper import *
from m5.objects.PlicDevice import PlicIntDevice
from m5.objects.VirtIO import VirtIODummyDevice
@@ -45,3 +46,9 @@ class MmioVirtIO(PlicIntDevice):
type = 'MmioVirtIO'
cxx_header = 'dev/riscv/vio_mmio.hh'
vio = Param.VirtIODeviceBase(VirtIODummyDevice(), "VirtIO device")
def generateDeviceTree(self, state):
node = self.generatePlicDeviceNode(state, "virtio_mmio")
node.appendCompatible(["virtio,mmio"])
yield node


@@ -64,7 +64,9 @@ Clint::raiseInterruptPin(int id)
for (int context_id = 0; context_id < nThread; context_id++) {
// Update misc reg file
system->threads[context_id]->setMiscRegNoEffect(MISCREG_TIME, mtime);
ISA* isa = dynamic_cast<ISA*>(
system->threads[context_id]->getIsaPtr());
isa->setMiscRegNoEffect(MISCREG_TIME, mtime);
// Post timer interrupt
uint64_t mtimecmp = registers.mtimecmp[context_id].get();


@@ -38,6 +38,8 @@
from m5.params import *
from m5.proxy import *
from m5.util.fdthelper import *
from m5.defines import buildEnv
from m5.objects.Device import BasicPioDevice
from m5.objects.Serial import SerialDevice
@@ -61,3 +63,18 @@ class Uart8250(Uart):
type = 'Uart8250'
cxx_header = "dev/serial/uart8250.hh"
pio_size = Param.Addr(0x8, "Size of address range")
def generateDeviceTree(self, state):
if buildEnv['TARGET_ISA'] == "riscv":
node = self.generateBasicPioDeviceNode(
state, "uart", self.pio_addr, self.pio_size)
platform = self.platform.unproxy(self)
plic = platform.plic
node.append(
FdtPropertyWords("interrupts", [platform.uart_int_id]))
node.append(
FdtPropertyWords("clock-frequency", [0x384000]))
node.append(
FdtPropertyWords("interrupt-parent", state.phandle(plic)))
node.appendCompatible(["ns8250"])
yield node


@@ -66,8 +66,8 @@
reg = <0x0 0x30000000 0x0 0x10000000>;
ranges = <0x01000000 0x0 0x00000000 0x0 0x2f000000 0x0 0x00010000>,
<0x02000000 0x0 0x40000000 0x0 0x40000000 0x0 0x40000000>;
ranges = <0x01000000 0x0 0x0 0x0 0x2f000000 0x0 0x00010000>,
<0x02000000 0x0 0x0 0x0 0x40000000 0x0 0x40000000>;
interrupt-map = <0x000000 0x0 0x0 0 &gic 0 68 1>,
<0x000800 0x0 0x0 0 &gic 0 69 1>,


@@ -76,8 +76,8 @@
reg = <0x0 0x30000000 0x0 0x10000000>;
ranges = <0x01000000 0x0 0x00000000 0x0 0x2f000000 0x0 0x00010000>,
<0x02000000 0x0 0x40000000 0x0 0x40000000 0x0 0x40000000>;
ranges = <0x01000000 0x0 0x0 0x0 0x2f000000 0x0 0x00010000>,
<0x02000000 0x0 0x0 0x0 0x40000000 0x0 0x40000000>;
/*
child unit address, #cells = #address-cells


@@ -90,11 +90,8 @@ def asm_test(test, #The full path of the test
cpu_types = ('AtomicSimpleCPU', 'TimingSimpleCPU', 'MinorCPU', 'DerivO3CPU')
# The following lists the RISCV binaries. Those commented out presently result
# in a test failure. They are outlined in the following Jira Issues:
#
# https://gem5.atlassian.net/browse/GEM5-494
# in a test failure. This is outlined in the following Jira issue:
# https://gem5.atlassian.net/browse/GEM5-496
# https://gem5.atlassian.net/browse/GEM5-497
binaries = (
'rv64samt-ps-sysclone_d',
'rv64samt-ps-sysfutex1_d',

util/gem5art/.gitignore

@@ -0,0 +1,8 @@
*.swp
*~
.venv
__pycache__
dist/
*.egg-info/
.vscode/
.mypy_cache/

util/gem5art/LICENSE

@@ -0,0 +1,25 @@
Copyright (c) 2019-2021 The Regents of the University of California.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met: redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer;
redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution;
neither the name of the copyright holders nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

util/gem5art/README.md

@@ -0,0 +1,104 @@
<img alt="gem5art logo" src="/gem5art.svg" width=150>
# gem5art: Artifact, reproducibility, and testing utilities for gem5
![CI Badge](https://github.com/darchr/gem5art/workflows/CI/badge.svg)
[![Documentation Status](https://readthedocs.org/projects/gem5art/badge/?version=latest)](https://gem5art.readthedocs.io/en/latest/?badge=latest)
See <http://www.gem5.org/documentation/gem5art> for detailed documentation.
## Installing gem5art
To install gem5art, simply use pip.
We suggest creating a virtual environment first.
Note that gem5art requires Python 3, so be sure to use a Python 3 interpreter when creating the virtual environment.
```sh
virtualenv -p python3 .venv
. .venv/bin/activate
pip install gem5art-artifact gem5art-run gem5art-tasks
```
It's not required to install all of the gem5art utilities (e.g., you can skip gem5art-tasks if you don't want to use the celery job server).
## Running the tests
The sections below describe how to run the tests locally before uploading your changes.
### mypy: Python static analysis
[mypy](http://mypy-lang.org/) is a static type checker for Python.
By annotating the code with types and using a static type checker, we can get many of the benefits of a compiled language while keeping the flexibility of Python!
Before contributing any code, please add type annotations and run the type checker.
The type checker must be run for each package separately.
```sh
cd artifact
mypy -p gem5art.artifact
```
```sh
cd run
mypy -p gem5art.run
```
```sh
cd tasks
mypy -p gem5art.tasks
```
You should see something like the following output:
```
Success: no issues found in 3 source files
```
If you see `0 source files`, then it's most likely that mypy has been run in the wrong directory.
If there are problems with imports from third-party packages without type annotations, you may need to add `# type: ignore` after the `import` statement.
### Running the unit tests
We currently have only a small number of unit tests, though we are working on adding more!
To run the unit tests, use the Python `unittest` module.
```sh
python -m unittest
```
You must run this in each package's subdirectory.
The output should be something like the following:
```
...
----------------------------------------------------------------------
Ran 3 tests in 0.141s
OK
```
If you instead see `Ran 0 tests`, then most likely you are in the wrong directory.
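If you want to drive the same discovery from a single Python script instead of changing into each package directory, the standard `unittest` discovery API can do it. `run_package_tests` below is a hypothetical helper for illustration, not part of gem5art:

```python
import unittest

def run_package_tests(start_dir):
    # Equivalent to running `python -m unittest` from inside start_dir:
    # discover test modules under the directory and run them.
    suite = unittest.defaultTestLoader.discover(start_dir)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()
```

Calling it once per package subdirectory mirrors the manual steps above.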
## Directory structure
The directory structure is a little strange so we can distribute each Python package separately.
However, they are all part of the gem5art namespace.
See the [Python namespace documentation](https://packaging.python.org/guides/packaging-namespace-packages/) for more details.
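As a minimal, self-contained illustration of the mechanism (PEP 420 implicit namespace packages; the `demo_ns` name and layout are invented for this sketch, not gem5art's actual tree), two separate directories can both contribute modules under one package name:

```python
import importlib
import os
import sys
import tempfile

# Two independent source trees, each contributing one module to `demo_ns`.
root_a = tempfile.mkdtemp()
root_b = tempfile.mkdtemp()
for root, mod in ((root_a, "alpha"), (root_b, "beta")):
    pkg = os.path.join(root, "demo_ns")
    os.makedirs(pkg)  # no __init__.py: demo_ns becomes a namespace package
    with open(os.path.join(pkg, mod + ".py"), "w") as f:
        f.write("NAME = %r\n" % mod)

# With both roots on sys.path, the two portions merge into one namespace.
sys.path[:0] = [root_a, root_b]
alpha = importlib.import_module("demo_ns.alpha")
beta = importlib.import_module("demo_ns.beta")
```

This is why each gem5art package can be distributed separately yet imported as `gem5art.artifact`, `gem5art.run`, and so on.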
## Building for distribution
1. Run `setup.py`. This must be done in each subdirectory so that the packages build correctly.
```sh
python setup.py sdist
```
2. Upload to PyPI
```sh
twine upload dist/*
```
These two steps must be completed for each package (e.g., artifact, run, and tasks).


@@ -0,0 +1,25 @@
# Release notes for the gem5art package
## v1.4.0
- Update version now that it's in gem5
## v1.3.1
- Minor fixes
- Update documentation
- Prepare for merging with main gem5 repository
## v1.3.0
### Database now configurable
- Instead of only working with MongoDB installed at localhost, you can now specify the database connection parameter.
- You can specify it by explicitly calling `artifact.getDBConnection()` or using the `GEM5ART_DB` environment variable.
- The default is still `mongodb://localhost:27017`.
- All functions that query the database now *require* a `db` parameter (e.g., `getRuns()`).
- Reorganized some of the db functions in artifact, but this shouldn't affect end users.
### Other changes
- General documentation updates


@@ -0,0 +1,269 @@
# gem5art artifact package
This package contains the `Artifact` type and an artifact database for use with [gem5art](http://www.gem5.org/documentation/gem5art/).
Please cite the [gem5art paper](https://arch.cs.ucdavis.edu/papers/2021-3-28-gem5art) when using the gem5art packages.
This documentation can be found on the [gem5 website](https://www.gem5.org/documentation/gem5art/).
## gem5art artifacts
All unique objects used during gem5 experiments are termed "artifacts" in gem5art.
Examples of artifacts include: gem5 binary, gem5 source code repo, Linux kernel source repo, linux binary, disk image, and packer binary (used to build the disk image).
The goal of this infrastructure is to keep a record of all the artifacts used in a particular experiment and to return the set of used artifacts when the same experiment needs to be performed in the future.
The description of an artifact serves as the documentation of how that artifact was created.
One of the goals of gem5art is for these artifacts to be self contained.
With just the metadata stored with the artifact a third party should be able to perfectly reproduce the artifact.
(We are still working toward this goal.
For instance, we are looking into using docker to create artifacts to separate artifact creation from the host platform it's run on.)
Each artifact is characterized by a set of attributes, described below:
- command: command used to build this artifact
- typ: the type of the artifact, e.g., binary, git repo, etc.
- name: name of the artifact
- cwd: current working directory, where the command to build the artifact is run
- path: actual path of the location of the artifact
- inputs: a list of the artifacts used to build the current artifact
- documentation: a docstring explaining the purpose of the artifact and any other useful information that can help to reproduce the artifact
Additionally, each artifact also has the following implicit information.
- hash: an MD5 hash for a binary artifact or a git hash for a git artifact
- time: time of the creation of an artifact
- id: a UUID associated with the artifact
- git: a dictionary containing the origin, current commit and the repo name for a git artifact (will be an empty dictionary for other types of artifacts)
These attributes are not specified by the user, but are generated automatically by gem5art (when the `Artifact` object is created for the first time).
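As a sketch of how the implicit hash attribute can be computed for a binary artifact (the list above says binaries use an MD5 hash; `md5_of_file` is an illustrative helper, not gem5art API):

```python
import hashlib

def md5_of_file(path):
    # Hash the file incrementally so that large artifacts (e.g., disk
    # images) never need to fit in memory at once.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```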
An example of how a user would create a gem5 binary artifact using gem5art is shown below.
In this example, the type, name, and documentation are up to the user of gem5art.
You're encouraged to use names that are easy to remember when you later query the database.
The documentation attribute should be used to completely describe the artifact that you are saving.
```python
gem5_binary = Artifact.registerArtifact(
command = 'scons build/X86/gem5.opt',
typ = 'gem5 binary',
name = 'gem5',
cwd = 'gem5/',
path = 'gem5/build/X86/gem5.opt',
inputs = [gem5_repo,],
documentation = '''
Default gem5 binary compiled for the X86 ISA.
This was built from the main gem5 repo (gem5.googlesource.com) without
any modifications. We recently updated to the current gem5 master
which has a fix for memory channel address striping.
'''
)
```
Another goal of gem5art is to enable sharing of artifacts among multiple users, which is achieved through the use of the centralized database.
Basically, whenever a user tries to create a new artifact, the database is searched to find if the same artifact exists there.
If it does, the user can download the matching artifact for use.
Otherwise, the newly created artifact is uploaded to the database for later use.
The use of a database also avoids running identical experiments (by generating an error message if a user tries to execute an exact run which already exists in the database).
### Creating artifacts
To create an `Artifact`, you must use [`registerArtifact`](artifacts.html#gem5art.artifact.artifact.Artifact.registerArtifact) as shown in the above example as well.
This is a factory method which will initially create the artifact.
When calling `registerArtifact`, the artifact will automatically be added to the database.
If it already exists, a pointer to that artifact will be returned.
The parameters to the `registerArtifact` function are meant for *documentation*, not as explicit directions to create the artifact from scratch.
In the future, this feature may be added to gem5art.
Note: while creating new artifacts, you may see warning messages that certain attributes (other than hash and id) of two artifacts do not match (artifact similarity is checked in the code). Make sure you understand the reason for any such warning.
### Using artifacts from the database
You can create an artifact with just a UUID if it is already stored in the database.
The behavior will be the same as when creating an artifact that already exists.
All of the properties of the artifact will be populated from the database.
## ArtifactDB
The particular database used in this work is [MongoDB](https://www.mongodb.com/).
We use MongoDB since it can easily store large files (e.g., disk images), is tightly integrated with Python through [pymongo](https://api.mongodb.com/python/current/), and has an interface that is flexible as the needs of gem5art changes.
Currently, it's required to run a database to use gem5art.
However, we are planning on changing this default to allow gem5art to be used standalone as well.
gem5art allows you to connect to any database, but by default assumes there is a MongoDB instance running on the localhost at `mongodb://localhost:27017`.
You can use the environment variable `GEM5ART_DB` to specify the default database to connect to when running simple scripts.
Additionally, you can specify the location of the database when calling `getDBConnection` in your scripts.
In case no database exists or a user wants their own database, you can create a new one by creating a new directory and running the mongodb docker image.
See the [MongoDB docker documentation](https://hub.docker.com/_/mongo) or the [MongoDB documentation](https://docs.mongodb.com/) for more information.
```sh
docker run -p 27017:27017 -v <absolute path to the created directory>:/data/db --name mongo-<some tag> -d mongo
```
This uses the official [MongoDB Docker image](https://hub.docker.com/_/mongo) to run the database at the default port on the localhost.
If the Docker container is killed, it can be restarted with the same command line and the database should be consistent.
### Connecting to an existing database
By default, gem5art will assume the database is running at `mongodb://localhost:27017`, which is MongoDB's default on the localhost.
The environment variable `GEM5ART_DB` can override this default.
Otherwise, to programmatically set a database URI when using gem5art, you can pass a URI to the `getDBConnection` function.
Currently, gem5art only supports MongoDB database backends, but extending this to other databases should be straightforward.
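The resolution order described above (explicit URI, then the `GEM5ART_DB` environment variable, then MongoDB's default) can be sketched as follows; `resolve_db_uri` is illustrative, not gem5art's actual implementation:

```python
import os

def resolve_db_uri(explicit_uri=None):
    # An explicit argument wins; otherwise fall back to the GEM5ART_DB
    # environment variable, then to MongoDB's default on the localhost.
    if explicit_uri:
        return explicit_uri
    return os.environ.get("GEM5ART_DB", "mongodb://localhost:27017")
```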
### Searching the Database
gem5art provides a few convenience functions for searching and accessing the database.
These functions can be found in `artifact.common_queries`.
Specifically, we provide the following functions:
- `getByName`: Returns all objects matching `name` in the database.
- `getDiskImages`: Returns a generator of disk images (type = disk image).
- `getLinuxBinaries`: Returns a generator of Linux kernel binaries (type = kernel).
- `getgem5Binaries`: Returns a generator of gem5 binaries (type = gem5 binary).
### Downloading from the Database
You can also download a file associated with an artifact using functions provided by gem5art. A good way to search and download items from the database is by using the Python interactive shell.
You can search the database with the functions provided by the `artifact` module (e.g., [`getByName`](artifacts.html#gem5art.artifact.artifact.getByName), [`getByType`](artifacts.html#gem5art.artifact.artifact.getByType), etc.).
Then, once you've found the ID of the artifact you'd like to download, you can call [`downloadFile`](artifacts.html#gem5art.artifact._artifactdb.ArtifactDB.downloadFile).
See the example below.
```sh
$ python
Python 3.6.8 (default, Oct 7 2019, 12:59:55)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from gem5art.artifact import *
>>> db = getDBConnection()
>>> for i in getDiskImages(db, limit=2): print(i)
...
ubuntu
id: d4a54de8-3a1f-4d4d-9175-53c15e647afd
type: disk image
path: disk-image/ubuntu-image/ubuntu
inputs: packer:fe8ba737-ffd4-44fa-88b7-9cd072f82979, fs-x86-test:94092971-4277-4d38-9e4a-495a7119a5e5, m5:69dad8b1-48d0-43dd-a538-f3196a894804
Ubuntu with m5 binary installed and root auto login
ubuntu
id: c54b8805-48d6-425d-ac81-9b1badba206e
type: disk image
path: disk-image/ubuntu-image/ubuntu
inputs: packer:fe8ba737-ffd4-44fa-88b7-9cd072f82979, fs-x86-test:5bfaab52-7d04-49f2-8fea-c5af8a7f34a8, m5:69dad8b1-48d0-43dd-a538-f3196a894804
Ubuntu with m5 binary installed and root auto login
>>> for i in getLinuxBinaries(db, limit=2): print(i)
...
vmlinux-5.2.3
id: 8cfd9fbe-24d0-40b5-897e-beca3df80dd2
type: kernel
path: linux-stable/vmlinux-5.2.3
inputs: fs-x86-test:94092971-4277-4d38-9e4a-495a7119a5e5, linux-stable:25feca9a-3642-458e-a179-f3705266b2fe
Kernel binary for 5.2.3 with simple config file
vmlinux-5.2.3
id: 9721d8c9-dc41-49ba-ab5c-3ed169e24166
type: kernel
path: linux-stable/vmlinux-5.2.3
inputs: npb:85e6dd97-c946-4596-9b52-0bb145810d68, linux-stable:25feca9a-3642-458e-a179-f3705266b2fe
Kernel binary for 5.2.3 with simple config file
>>> from uuid import UUID
>>> db.downloadFile(UUID('8cfd9fbe-24d0-40b5-897e-beca3df80dd2'), 'linux-stable/vmlinux-5.2.3')
```
For another example, assume there is a disk image named `npb` (containing [NAS Parallel](https://www.nas.nasa.gov/) Benchmarks) in your database and you want to download the disk image to your local directory. You can do the following to download the disk image:
```python
import gem5art.artifact
db = gem5art.artifact.getDBConnection()
disks = gem5art.artifact.getByName(db, 'npb')
for disk in disks:
if disk.type == 'disk image' and disk.documentation == 'npb disk image created on Nov 20':
db.downloadFile(disk._id, 'npb')
```
Here, we assume that there can be multiple disk images/artifacts with the name `npb` and we are only interested in downloading the npb disk image with a particular documentation string ('npb disk image created on Nov 20'). Also, note that there is no single prescribed way to download files from the database (although all of them will eventually use the `downloadFile` function).
The dual of the [downloadFile](artifacts.html#gem5art.artifact._artifactdb.ArtifactDB.downloadFile) method used above is [upload](artifacts.html#gem5art.artifact._artifactdb.ArtifactDB.upload).
#### Database schema
Alternatively, you can use the pymongo Python module or the MongoDB command-line interface to interact with the database.
See the [MongoDB documentation](https://docs.mongodb.com/) for more information on how to query the MongoDB database.
gem5art has two collections.
`artifact_database.artifacts` stores all of the metadata for the artifacts and `artifact_database.fs` is a [GridFS](https://docs.mongodb.com/manual/core/gridfs/) store for all of the files.
The files in the GridFS use the same UUIDs as the Artifacts as their primary keys.
You can list all of the details of all of the artifacts by running the following in Python.
```python
#!/usr/bin/env python3
from pymongo import MongoClient
db = MongoClient().artifact_database
for i in db.artifacts.find():
print(i)
```
gem5art also provides a few methods to search the database for artifacts of a particular type or name. For example, to find all disk images in a database you can do the following:
```python
import gem5art.artifact
db = gem5art.artifact.getDBConnection('mongodb://localhost')
for i in gem5art.artifact.getDiskImages(db):
print(i)
```
Other similar methods include `getLinuxBinaries()` and `getgem5Binaries()`.
You can use the `getByName()` method to search the database for artifacts by the name attribute. For example, to search for artifacts named gem5:
```python
import gem5art.artifact
db = gem5art.artifact.getDBConnection('mongodb://localhost')
for i in gem5art.artifact.getByName(db, "gem5"):
print(i)
```
## Artifacts API Documentation
```eval_rst
Artifact Module
---------------
.. automodule:: gem5art.artifact
:members:
Artifact
--------
.. automodule:: gem5art.artifact.artifact
:members:
:undoc-members:
Artifact Class
--------------
.. automodule:: gem5art.artifact.artifact.Artifact
:members:
:undoc-members:
Helper Functions for Common Queries
-----------------------------------
.. automodule:: gem5art.artifact.common_queries
:members:
:undoc-members:
ArtifactDB
-----------
This is mostly internal.
.. automodule:: gem5art.artifact._artifactdb
:members:
:undoc-members:
```


@@ -0,0 +1,45 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""This is the gem5 artifact package"""
from .artifact import Artifact
from .common_queries import (
getByName,
getDiskImages,
getLinuxBinaries,
getgem5Binaries,
)
from ._artifactdb import getDBConnection
__all__ = [
"Artifact",
"getByName",
"getDiskImages",
"getLinuxBinaries",
"getgem5Binaries",
"getDBConnection",
]


@@ -0,0 +1,256 @@
# Copyright (c) 2019-2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""This file defines the ArtifactDB type and some common implementations of
ArtifactDB.
The database interface defined here does not include any schema information.
The database "schema" is defined in the artifact.py file based on the types of
artifacts stored in the database.
Some common queries can be found in common_queries.py
"""
from abc import ABC, abstractmethod
import gridfs # type: ignore
import os
from pathlib import Path
from pymongo import MongoClient # type: ignore
from typing import Any, Dict, Iterable, Union, Type
from urllib.parse import urlparse
from uuid import UUID
class ArtifactDB(ABC):
"""
Abstract base class for all artifact DBs.
"""
@abstractmethod
def __init__(self, uri: str):
"""Initialize the database with a URI"""
pass
@abstractmethod
def put(self, key: UUID, artifact: Dict[str, Union[str, UUID]]) -> None:
"""Insert the artifact into the database with the key"""
pass
@abstractmethod
def upload(self, key: UUID, path: Path) -> None:
"""Upload the file at path to the database with _id of key"""
pass
@abstractmethod
def __contains__(self, key: Union[UUID, str]) -> bool:
"""Key can be a UUID or a string. Returns true if item in DB"""
pass
@abstractmethod
def get(self, key: Union[UUID, str]) -> Dict[str, str]:
"""Key can be a UUID or a string. Returns a dictionary to construct
an artifact.
"""
pass
@abstractmethod
def downloadFile(self, key: UUID, path: Path) -> None:
"""Download the file with the _id key to the path. Will overwrite the
file if it currently exists."""
pass
def searchByName(self, name: str, limit: int) -> Iterable[Dict[str, Any]]:
"""Returns an iterable of all artifacts in the database that match
some name. Note: Not all DB implementations will implement this
function"""
raise NotImplementedError()
def searchByType(self, typ: str, limit: int) -> Iterable[Dict[str, Any]]:
"""Returns an iterable of all artifacts in the database that match
some type. Note: Not all DB implementations will implement this
function"""
raise NotImplementedError()
def searchByNameType(
self, name: str, typ: str, limit: int
) -> Iterable[Dict[str, Any]]:
"""Returns an iterable of all artifacts in the database that match
some name and type. Note: Not all DB implementations will implement
this function"""
raise NotImplementedError()
def searchByLikeNameType(
self, name: str, typ: str, limit: int
) -> Iterable[Dict[str, Any]]:
"""Returns an iterable of all artifacts in the database that match
some type and a regex name. Note: Not all DB implementations will
implement this function"""
raise NotImplementedError()
class ArtifactMongoDB(ArtifactDB):
"""
This is a mongodb database connector for storing Artifacts (as defined in
artifact.py).
This database stores the data in three collections:
- artifacts: This stores the json serialized Artifact class
- files and chunks: These two collections store the large files required
for some artifacts. Within the files collection, the _id is the
UUID of the artifact.
"""
def __init__(self, uri: str) -> None:
"""Initialize the mongodb connection and grab pointers to the databases
uri is the location of the database in a mongodb-compatible form.
See http://dochub.mongodb.org/core/connections for details.
"""
# Note: Need "connect=False" so that we don't connect until the first
# time we interact with the database. Required for running the gem5
# celery server.
self.db = MongoClient(host=uri, connect=False).artifact_database
self.artifacts = self.db.artifacts
self.fs = gridfs.GridFSBucket(self.db, disable_md5=True)
def put(self, key: UUID, artifact: Dict[str, Union[str, UUID]]) -> None:
"""Insert the artifact into the database with the key"""
assert artifact["_id"] == key
self.artifacts.insert_one(artifact)
def upload(self, key: UUID, path: Path) -> None:
"""Upload the file at path to the database with _id of key"""
with open(path, "rb") as f:
self.fs.upload_from_stream_with_id(key, str(path), f)
def __contains__(self, key: Union[UUID, str]) -> bool:
"""Key can be a UUID or a string. Returns true if item in DB"""
if isinstance(key, UUID):
count = self.artifacts.count_documents({"_id": key}, limit=1)
else:
# This is a hash. Count the number of matches
count = self.artifacts.count_documents({"hash": key}, limit=1)
return bool(count > 0)
def get(self, key: Union[UUID, str]) -> Dict[str, str]:
"""Key can be a UUID or a string. Returns a dictionary to construct
an artifact.
"""
if isinstance(key, UUID):
return self.artifacts.find_one({"_id": key}, limit=1)
else:
# This is a hash.
return self.artifacts.find_one({"hash": key}, limit=1)
def downloadFile(self, key: UUID, path: Path) -> None:
"""Download the file with the _id key to the path. Will overwrite the
file if it currently exists."""
with open(path, "wb") as f:
self.fs.download_to_stream(key, f)
def searchByName(self, name: str, limit: int) -> Iterable[Dict[str, Any]]:
"""Returns an iterable of all artifacts in the database that match
some name."""
for d in self.artifacts.find({"name": name}, limit=limit):
yield d
def searchByType(self, typ: str, limit: int) -> Iterable[Dict[str, Any]]:
"""Returns an iterable of all artifacts in the database that match
some type."""
for d in self.artifacts.find({"type": typ}, limit=limit):
yield d
def searchByNameType(
self, name: str, typ: str, limit: int
) -> Iterable[Dict[str, Any]]:
"""Returns an iterable of all artifacts in the database that match
some name and type."""
for d in self.artifacts.find({"type": typ, "name": name}, limit=limit):
yield d
def searchByLikeNameType(
self, name: str, typ: str, limit: int
) -> Iterable[Dict[str, Any]]:
"""Returns an iterable of all artifacts in the database that match
some type and a regex name."""
data = self.artifacts.find(
{"type": typ, "name": {"$regex": "{}".format(name)}}, limit=limit
)
for d in data:
yield d
_db = None
_default_uri = "mongodb://localhost:27017"
_db_schemes: Dict[str, Type[ArtifactDB]] = {"mongodb": ArtifactMongoDB}
def _getDBType(uri: str) -> Type[ArtifactDB]:
"""Internal function to take a URI and return a class that can be
constructed with that URI. For instance "mongodb://localhost" will return
an ArtifactMongoDB. More types will be added in the future.
Supported types:
**ArtifactMongoDB**: mongodb://...
See http://dochub.mongodb.org/core/connections for details.
"""
result = urlparse(uri)
if result.scheme in _db_schemes:
return _db_schemes[result.scheme]
else:
raise Exception(f"Cannot find DB type for {uri}")
def getDBConnection(uri: str = "") -> ArtifactDB:
"""Returns the database connection
uri: a string representing the URI of the database. See _getDBType for
details. If no URI is given we use the default
(mongodb://localhost:27017) or the value in the GEM5ART_DB environment
variable.
If the connection has not been established, this will create a new
connection. If the connection has been established, this will replace the
connection if the uri input is non-empty.
"""
global _db
# mypy bug: https://github.com/python/mypy/issues/5423
if _db is not None and not uri: # type: ignore[unreachable]
# If we have already established a connection, use that
return _db # type: ignore[unreachable]
if not uri:
uri = os.environ.get("GEM5ART_DB", _default_uri)
typ = _getDBType(uri)
_db = typ(uri)
return _db
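The scheme-to-class dispatch used by `_getDBType`/`getDBConnection` can be sketched standalone; `InMemoryDB`, `register_scheme`, `connect`, and the `memdb://` scheme below are hypothetical stand-ins for illustration, not part of gem5art.

```python
from urllib.parse import urlparse

# Registry mapping URI schemes to DB classes, mirroring _db_schemes.
_schemes = {}

def register_scheme(scheme, cls):
    _schemes[scheme] = cls

def connect(uri):
    # Pick the implementation class from the URI scheme,
    # as _getDBType does with urlparse(uri).scheme.
    scheme = urlparse(uri).scheme
    if scheme not in _schemes:
        raise Exception(f"Cannot find DB type for {uri}")
    return _schemes[scheme](uri)

class InMemoryDB:
    """Hypothetical in-memory stand-in for a real ArtifactDB backend."""
    def __init__(self, uri):
        self.uri = uri
        self.store = {}

register_scheme("memdb", InMemoryDB)
db = connect("memdb://local")  # returns an InMemoryDB instance
```

The unit tests below use exactly this hook, registering a `MockDB` under a `mockdb` scheme before the first `getDBConnection` call.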


@@ -0,0 +1,312 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""File contains the Artifact class and helper functions
"""
import hashlib
from inspect import cleandoc
import os
from pathlib import Path
import subprocess
import time
from typing import Any, Dict, Iterator, List, Union
from uuid import UUID, uuid4
from ._artifactdb import getDBConnection
def getHash(path: Path) -> str:
"""
Returns an md5 hash for the file at `path`.
"""
BUF_SIZE = 65536
md5 = hashlib.md5()
with open(path, "rb") as f:
while True:
data = f.read(BUF_SIZE)
if not data:
break
md5.update(data)
return md5.hexdigest()
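getHash reads the file in fixed-size chunks so large artifacts never have to fit in memory at once. A minimal self-contained sketch of the same chunked-md5 pattern (the helper name and temporary file are illustrative only):

```python
import hashlib
import tempfile

def chunked_md5(path, buf_size=65536):
    # Update the digest one buffer at a time, as getHash does.
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            data = f.read(buf_size)
            if not data:
                break
            md5.update(data)
    return md5.hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    name = f.name

digest = chunked_md5(name)  # "5d41402abc4b2a76b9719d911017c592"
```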
def getGit(path: Path) -> Dict[str, str]:
"""
Returns a dictionary with the origin, current commit, and repo name for
the base repository containing `path`.
An exception is raised if the repo is dirty or doesn't exist.
"""
path = path.resolve() # Make absolute
if path.is_file():
path = path.parent
command = [
"git",
"status",
"--porcelain",
"--ignore-submodules",
"--untracked-files=no",
]
res = subprocess.run(command, stdout=subprocess.PIPE, cwd=path)
if res.returncode != 0:
raise Exception("git repo doesn't exist for {}".format(path))
if res.stdout:
raise Exception("git repo dirty for {}".format(path))
command = ["git", "remote", "get-url", "origin"]
origin = subprocess.check_output(command, cwd=path)
command = ["git", "log", "-n1", "--pretty=format:%H"]
hsh = subprocess.check_output(command, cwd=path)
command = ["git", "rev-parse", "--show-toplevel"]
name = subprocess.check_output(command, cwd=path)
return {
"origin": str(origin.strip(), "utf-8"),
"hash": str(hsh.strip(), "utf-8"),
"name": str(name.strip(), "utf-8"),
}
class Artifact:
"""
A base artifact class.
It holds the following attributes of an artifact:
1) name: name of the artifact
2) command: bash command used to generate the artifact
3) path: path of the location of the artifact
4) time: time of creation of the artifact
5) documentation: a string to describe the artifact
6) ID: unique identifier of the artifact
7) inputs: list of the input artifacts used to create this artifact stored
as a list of uuids
"""
_id: UUID
name: str
type: str
documentation: str
command: str
path: Path
hash: str
time: float
git: Dict[str, str]
cwd: Path
inputs: List["Artifact"]
@classmethod
def registerArtifact(
cls,
command: str,
name: str,
cwd: str,
typ: str,
path: Union[str, Path],
documentation: str,
inputs: List["Artifact"] = [],
) -> "Artifact":
"""Constructs a new artifact.
This assumes the artifact either is not yet in the database or is
exactly the same as when it was added to the database.
"""
_db = getDBConnection()
# Dictionary with all of the kwargs for construction.
data: Dict[str, Any] = {}
data["name"] = name
data["type"] = typ
data["documentation"] = cleandoc(documentation)
if len(data["documentation"]) < 10: # 10 characters is arbitrary
raise Exception(
cleandoc(
"""Must provide longer documentation!
This documentation is how your future self will remember what
this artifact is and how it was created."""
)
)
data["command"] = cleandoc(command)
data["time"] = time.time()
ppath = Path(path)
data["path"] = ppath
if ppath.is_file():
data["hash"] = getHash(ppath)
data["git"] = {}
elif ppath.is_dir():
data["git"] = getGit(ppath)
data["hash"] = data["git"]["hash"]
else:
raise Exception("Path {} doesn't exist".format(ppath))
pcwd = Path(cwd)
data["cwd"] = pcwd
if not pcwd.exists():
raise Exception("cwd {} doesn't exist.".format(pcwd))
if not pcwd.is_dir():
raise Exception("cwd {} is not a directory".format(pcwd))
data["inputs"] = [i._id for i in inputs]
if data["hash"] in _db:
old_artifact = Artifact(_db.get(data["hash"]))
data["_id"] = old_artifact._id
# Now that we have a complete object, construct it
self = cls(data)
self._checkSimilar(old_artifact)
else:
data["_id"] = uuid4()
# Now that we have a complete object, construct it
self = cls(data)
# Upload the file if there is one.
if self.path.is_file():
_db.upload(self._id, self.path)
# Putting the artifact to the database
_db.put(self._id, self._getSerializable())
return self
def __init__(self, other: Union[str, UUID, Dict[str, Any]]) -> None:
"""Constructs the object from the database based on a UUID or
dictionary from the database
"""
_db = getDBConnection()
if isinstance(other, str):
other = UUID(other)
if isinstance(other, UUID):
other = _db.get(other)
if not other:
raise Exception("Cannot construct artifact")
assert isinstance(other["_id"], UUID)
self._id = other["_id"]
self.name = other["name"]
self.type = other["type"]
self.documentation = other["documentation"]
self.command = other["command"]
self.path = Path(other["path"])
self.hash = other["hash"]
assert isinstance(other["git"], dict)
self.git = other["git"]
self.cwd = Path(other["cwd"])
self.inputs = [Artifact(i) for i in other["inputs"]]
def __str__(self) -> str:
inputs = ", ".join([i.name + ":" + str(i._id) for i in self.inputs])
return "\n ".join(
[
self.name,
f"id: {self._id}",
f"type: {self.type}",
f"path: {self.path}",
f"inputs: {inputs}",
self.documentation,
]
)
def __repr__(self) -> str:
return vars(self).__repr__()
def _getSerializable(self) -> Dict[str, Union[str, UUID]]:
data = vars(self).copy()
data["inputs"] = [input._id for input in self.inputs]
data["cwd"] = str(data["cwd"])
data["path"] = str(data["path"])
return data
def __eq__(self, other: object) -> bool:
"""Checks whether two artifacts are the same.
Two artifacts are the same if they have the same UUID and the same
hash. We emit a warning if other fields differ. Differing fields with
the same hash suggest that the user is doing something wrong.
"""
if not isinstance(other, Artifact):
return NotImplemented
if self.hash == other.hash and self._id == other._id:
self._checkSimilar(other)
return True
else:
return False
def _checkSimilar(self, other: "Artifact"):
"""Prints warnings if other is similar, but not the same as self.
These mismatches may or may not be a problem. It's up to the user to
make this decision.
"""
if self.name != other.name:
print(
f"WARNING: name mismatch for {self.name}! "
f"{self.name} != {other.name}"
)
if self.documentation != other.documentation:
print(
f"WARNING: documentation mismatch for {self.name}! "
f"{self.documentation} != {other.documentation}"
)
if self.command != other.command:
print(
f"WARNING: command mismatch for {self.name}! "
f"{self.command} != {other.command}"
)
if self.path != other.path:
print(
f"WARNING: path mismatch for {self.name}! "
f"{self.path} != {other.path}"
)
if self.cwd != other.cwd:
print(
f"WARNING: cwd mismatch for {self.name}! "
f"{self.cwd} != {other.cwd}"
)
if self.git != other.git:
print(
f"WARNING: git mismatch for {self.name}! "
f"{self.git} != {other.git}"
)
mismatch = set(self.inputs).symmetric_difference(other.inputs)
if mismatch:
print(f"WARNING: input mismatch for {self.name}! {mismatch}")
def __hash__(self) -> int:
return self._id.int
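`_checkSimilar` compares inputs with a set `symmetric_difference`, which only works because `__eq__` and `__hash__` are defined on the stable UUID rather than on mutable fields. A minimal sketch of that pattern (`Item` is a hypothetical stand-in for `Artifact`):

```python
import uuid

class Item:
    def __init__(self, name):
        self._id = uuid.uuid4()
        self.name = name

    def __eq__(self, other):
        return isinstance(other, Item) and self._id == other._id

    def __hash__(self):
        # Hash on the UUID, as Artifact.__hash__ does, so items
        # behave consistently in sets even if other fields change.
        return self._id.int

a, b, c = Item("a"), Item("b"), Item("c")
# Elements present in exactly one of the two input collections.
mismatch = set([a, b]).symmetric_difference([b, c])  # {a, c}
```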


@@ -0,0 +1,83 @@
# Copyright (c) 2020-2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""File contains the some helper functions with common queries for artifacts
in the ArtifactDB.
"""
from typing import Iterator
from ._artifactdb import ArtifactDB
from .artifact import Artifact
def _getByType(db: ArtifactDB, typ: str, limit: int = 0) -> Iterator[Artifact]:
"""Returns a generator of Artifacts with matching `type` from the db.
Limit specifies the maximum number of results to return.
"""
data = db.searchByType(typ, limit=limit)
for d in data:
yield Artifact(d)
def getDiskImages(db: ArtifactDB, limit: int = 0) -> Iterator[Artifact]:
"""Returns a generator of disk images (type = disk image).
Limit specifies the maximum number of results to return.
"""
return _getByType(db, "disk image", limit)
def getgem5Binaries(db: ArtifactDB, limit: int = 0) -> Iterator[Artifact]:
"""Returns a generator of gem5 binaries (type = gem5 binary).
Limit specifies the maximum number of results to return.
"""
return _getByType(db, "gem5 binary", limit)
def getLinuxBinaries(db: ArtifactDB, limit: int = 0) -> Iterator[Artifact]:
"""Returns a generator of Linux kernel binaries (type = kernel).
Limit specifies the maximum number of results to return.
"""
return _getByType(db, "kernel", limit)
def getByName(db: ArtifactDB, name: str, limit: int = 0) -> Iterator[Artifact]:
"""Returns all objects matching `name` in the database.
Limit specifies the maximum number of results to return.
"""
data = db.searchByName(name, limit=limit)
for d in data:
yield Artifact(d)
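The helpers above are thin generator wrappers over the DB cursor, where `limit=0` means "no limit" (pymongo's convention for `find`). The filtering behavior can be sketched over a plain list (record contents are illustrative):

```python
def search_by_type(records, typ, limit=0):
    # limit=0 means "no limit", matching pymongo's find(..., limit=0).
    count = 0
    for r in records:
        if r["type"] != typ:
            continue
        yield r
        count += 1
        if limit and count >= limit:
            return

records = [
    {"name": "vmlinux", "type": "kernel"},
    {"name": "gem5.opt", "type": "gem5 binary"},
    {"name": "vmlinux-5.4", "type": "kernel"},
]
kernels = list(search_by_type(records, "kernel"))          # two matches
first = list(search_by_type(records, "kernel", limit=1))   # just the first
```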


@@ -0,0 +1,3 @@
[mypy]
namespace_packages = True
warn_unreachable = True

util/gem5art/artifact/setup.py Executable file

@@ -0,0 +1,63 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""A setuptools based setup module."""
from os.path import join
from pathlib import Path
from setuptools import setup, find_namespace_packages
with open(Path(__file__).parent / "README.md", encoding="utf-8") as f:
long_description = f.read()
setup(
name="gem5art-artifact",
version="1.4.0",
description="Artifacts for gem5art",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://www.gem5.org/",
author="Davis Architecture Research Group (DArchR)",
author_email="jlowepower@ucdavis.edu",
license="BSD",
classifiers=[
"Development Status :: 4 - Beta",
"License :: OSI Approved :: BSD License",
"Topic :: System :: Hardware",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
],
keywords="simulation architecture gem5",
packages=find_namespace_packages(include=["gem5art.*"]),
install_requires=["pymongo"],
python_requires=">=3.6",
project_urls={
"Bug Reports": "https://gem5.atlassian.net/",
"Source": "https://gem5.googlesource.com/",
"Documentation": "https://www.gem5.org/documentation/gem5art",
},
)


@@ -0,0 +1,25 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -0,0 +1,243 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Tests for the Artifact object and associated functions"""
import hashlib
from pathlib import Path
import unittest
from uuid import uuid4, UUID
import sys
import io
from gem5art import artifact
from gem5art.artifact._artifactdb import ArtifactDB, getDBConnection
class MockDB(ArtifactDB):
"""
This is a mock DB used to run the unit tests.
"""
def __init__(self, uri=""):
self.db = {}
self.hashes = {}
def put(self, key, metadata):
print("putting an entry in the mock database")
self.db[key] = metadata
self.hashes[metadata["hash"]] = key
def __contains__(self, key):
if isinstance(key, UUID):
return key in self.db.keys()
else:
# This is a hash
return key in self.hashes
def get(self, key):
if isinstance(key, UUID):
return self.db[key]
else:
# This is a hash
return self.db[self.hashes[key]]
def upload(self, key, path):
pass
def downloadFile(self, key, path):
pass
# Add the MockDB as a scheme
artifact._artifactdb._db_schemes["mockdb"] = MockDB
# This needs to be a global variable so
# that this getDBConnection is the first
# call to create a DB connection
_db = getDBConnection("mockdb://")
class TestGit(unittest.TestCase):
def test_keys(self):
git = artifact.artifact.getGit(Path("."))
self.assertSetEqual(
set(git.keys()), set(["origin", "hash", "name"]), "git keys wrong"
)
def test_origin(self):
git = artifact.artifact.getGit(Path("."))
self.assertTrue(
git["origin"].endswith("gem5art"), "Origin should end with gem5art"
)
class TestArtifact(unittest.TestCase):
def setUp(self):
self.artifact = artifact.Artifact(
{
"_id": uuid4(),
"name": "test-name",
"type": "test-type",
"documentation": (
"This is a long test documentation that has "
"lots of words"
),
"command": ["ls", "-l"],
"path": "/",
"hash": hashlib.md5().hexdigest(),
"git": artifact.artifact.getGit(Path(".")),
"cwd": "/",
"inputs": [],
}
)
def test_dirs(self):
self.assertTrue(self.artifact.cwd.exists())
self.assertTrue(self.artifact.path.exists())
class TestArtifactSimilarity(unittest.TestCase):
def setUp(self):
self.artifactA = artifact.Artifact(
{
"_id": uuid4(),
"name": "artifact-A",
"type": "type-A",
"documentation": "This is a description of artifact A",
"command": ["ls", "-l"],
"path": "/",
"hash": hashlib.md5().hexdigest(),
"git": artifact.artifact.getGit(Path(".")),
"cwd": "/",
"inputs": [],
}
)
self.artifactB = artifact.Artifact(
{
"_id": uuid4(),
"name": "artifact-B",
"type": "type-B",
"documentation": "This is a description of artifact B",
"command": ["ls", "-l"],
"path": "/",
"hash": hashlib.md5().hexdigest(),
"git": artifact.artifact.getGit(Path(".")),
"cwd": "/",
"inputs": [],
}
)
self.artifactC = artifact.Artifact(
{
"_id": self.artifactA._id,
"name": "artifact-A",
"type": "type-A",
"documentation": "This is a description of artifact A",
"command": ["ls", "-l"],
"path": "/",
"hash": self.artifactA.hash,
"git": artifact.artifact.getGit(Path(".")),
"cwd": "/",
"inputs": [],
}
)
self.artifactD = artifact.Artifact(
{
"_id": uuid4(),
"name": "artifact-A",
"type": "type-A",
"documentation": "This is a description of artifact A",
"command": ["ls", "-l"],
"path": "/",
"hash": hashlib.md5().hexdigest(),
"git": artifact.artifact.getGit(Path(".")),
"cwd": "/",
"inputs": [],
}
)
def test_not_equal(self):
self.assertTrue(self.artifactA != self.artifactB)
def test_equal(self):
self.assertTrue(self.artifactA == self.artifactC)
def test_not_similar(self):
capturedOutput = io.StringIO()
sys.stdout = capturedOutput
self.artifactA._checkSimilar(self.artifactB)
sys.stdout = sys.__stdout__
self.assertTrue("WARNING:" in capturedOutput.getvalue())
def test_similar(self):
capturedOutput = io.StringIO()
sys.stdout = capturedOutput
self.artifactA._checkSimilar(self.artifactD)
sys.stdout = sys.__stdout__
self.assertFalse("WARNING:" in capturedOutput.getvalue())
class TestRegisterArtifact(unittest.TestCase):
def setUp(self):
# Create and register an artifact
self.testArtifactA = artifact.Artifact.registerArtifact(
name="artifact-A",
typ="type-A",
documentation="This is a description of artifact A",
command="ls -l",
path="./",
cwd="./",
)
# Create an artifact without pushing it to the database
self.testArtifactB = artifact.Artifact(
{
"_id": uuid4(),
"name": "artifact-B",
"type": "type-B",
"documentation": "This is a description of artifact B",
"command": ["vim test_artifact.py"],
"path": "./tests/test_artifact.py",
"hash": hashlib.md5().hexdigest(),
"git": artifact.artifact.getGit(Path(".")),
"cwd": "/",
"inputs": [],
}
)
# test to see if an artifact is in the database
def test_in_database(self):
self.assertTrue(self.testArtifactA.hash in _db)
self.assertFalse(self.testArtifactB.hash in _db)
if __name__ == "__main__":
unittest.main()

util/gem5art/gem5art.png Normal file (binary file, 28 KiB, not shown)

util/gem5art/gem5art.svg Normal file

@@ -0,0 +1,383 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
id="svg2"
sodipodi:docname="gem5art.svg"
viewBox="0 0 308.56216 333.39229"
sodipodi:version="0.32"
version="1.0"
inkscape:output_extension="org.inkscape.output.svg.inkscape"
inkscape:version="0.92.3 (2405546, 2018-03-11)"
width="308.56216"
height="333.3923"
inkscape:export-filename="/home/jlp/Code/gem5/gem5art/gem5art.png"
inkscape:export-xdpi="150"
inkscape:export-ydpi="150">
<defs
id="defs4">
<linearGradient
id="linearGradient2656">
<stop
id="stop2658"
style="stop-color:#480c00"
offset="0" />
<stop
id="stop2660"
style="stop-color:#a06400"
offset=".69328" />
<stop
id="stop2662"
style="stop-color:#ecd450"
offset=".72343" />
<stop
id="stop2664"
style="stop-color:#682c00"
offset=".79501" />
<stop
id="stop2666"
style="stop-color:#a47200"
offset=".82530" />
<stop
id="stop2668"
style="stop-color:#e4c844"
offset=".86175" />
<stop
id="stop2670"
style="stop-color:#783c00"
offset=".94375" />
<stop
id="stop2672"
style="stop-color:#e8cc48"
offset=".97130" />
<stop
id="stop2674"
style="stop-color:#480c00"
offset="1" />
</linearGradient>
<linearGradient
id="linearGradient9908">
<stop
id="stop9910"
style="stop-color:#480c00"
offset="0" />
<stop
id="stop9920"
style="stop-color:#743800"
offset=".69328" />
<stop
id="stop9912"
style="stop-color:#ecd450"
offset=".76570" />
<stop
id="stop9914"
style="stop-color:#a06400"
offset=".91558" />
<stop
id="stop9916"
style="stop-color:#e8cc48"
offset=".96003" />
<stop
id="stop9918"
style="stop-color:#480c00"
offset="1" />
</linearGradient>
<linearGradient
id="linearGradient2634"
y2="547.71997"
xlink:href="#linearGradient9908"
gradientUnits="userSpaceOnUse"
x2="-199.25"
gradientTransform="matrix(-0.314435,0,0,-3.5356,-7375.2,-2991.861)"
y1="547.71997"
x1="2231.6001"
inkscape:collect="always" />
<radialGradient
fx="0"
fy="0"
cx="0"
cy="0"
r="1"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(47.134947,0,0,47.134947,-12979.631,-6842.1551)"
spreadMethod="pad"
id="radialGradient130">
<stop
style="stop-opacity:1;stop-color:#35aad1"
offset="0"
id="stop126" />
<stop
style="stop-opacity:1;stop-color:#008eb0"
offset="1"
id="stop128" />
</radialGradient>
<radialGradient
fx="0"
fy="0"
cx="0"
cy="0"
r="1"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(47.127623,0,0,47.127623,-13053.605,-6797.9188)"
spreadMethod="pad"
id="radialGradient110">
<stop
style="stop-opacity:1;stop-color:#939598"
offset="0"
id="stop106" />
<stop
style="stop-opacity:1;stop-color:#77787b"
offset="1"
id="stop108" />
</radialGradient>
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient9908"
id="linearGradient1859"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0,0.314435,-3.5356,0,-8168,-5727.15)"
x1="2231.6001"
y1="547.71997"
x2="-199.25"
y2="547.71997" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient9908"
id="linearGradient1876"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0,0.314435,-3.5356,0,-7009.9,-5940.311)"
x1="2231.6001"
y1="547.71997"
x2="-199.25"
y2="547.71997" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient9908"
id="linearGradient1857-3"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.314435,0,0,3.5356,-9958.4,-6305.761)"
x1="2231.6001"
y1="547.71997"
x2="-199.25"
y2="547.71997" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient9908"
id="linearGradient1876-7"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0,0.314435,-3.5356,0,-7009.9,-5940.311)"
x1="2231.6001"
y1="547.71997"
x2="-199.25"
y2="547.71997" />
<linearGradient
id="linearGradient2634-0"
y2="547.71997"
xlink:href="#linearGradient9908"
gradientUnits="userSpaceOnUse"
x2="-199.25"
gradientTransform="matrix(-0.314435,0,0,-3.5356,-7375.2,-2991.861)"
y1="547.71997"
x1="2231.6001"
inkscape:collect="always" />
<linearGradient
id="linearGradient2636-9"
y2="547.71997"
xlink:href="#linearGradient9908"
gradientUnits="userSpaceOnUse"
x2="-199.25"
gradientTransform="matrix(0,-0.314435,3.5356,0,-10324,-3357.25)"
y1="547.71997"
x1="2231.6001"
inkscape:collect="always" />
<radialGradient
fx="0"
fy="0"
cx="0"
cy="0"
r="1"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(47.127623,0,0,47.127623,-13053.605,-6797.9188)"
spreadMethod="pad"
id="radialGradient110-6">
<stop
style="stop-opacity:1;stop-color:#939598"
offset="0"
id="stop106-1" />
<stop
style="stop-opacity:1;stop-color:#77787b"
offset="1"
id="stop108-8" />
</radialGradient>
<radialGradient
fx="0"
fy="0"
cx="0"
cy="0"
r="1"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(47.134947,0,0,47.134947,-12979.631,-6842.1551)"
spreadMethod="pad"
id="radialGradient130-7">
<stop
style="stop-opacity:1;stop-color:#35aad1"
offset="0"
id="stop126-9" />
<stop
style="stop-opacity:1;stop-color:#008eb0"
offset="1"
id="stop128-2" />
</radialGradient>
</defs>
<sodipodi:namedview
id="base"
bordercolor="#666666"
inkscape:pageshadow="2"
inkscape:guide-bbox="true"
pagecolor="#ffffff"
inkscape:window-height="1025"
inkscape:zoom="1.5506342"
inkscape:window-x="0"
showgrid="false"
borderopacity="1.0"
inkscape:current-layer="layer1"
inkscape:cx="101.47976"
inkscape:cy="187.93026"
showguides="true"
inkscape:window-y="27"
inkscape:window-width="1920"
showborder="false"
inkscape:pageopacity="0.0"
inkscape:document-units="px"
inkscape:window-maximized="1"
fit-margin-top="0"
fit-margin-left="0"
fit-margin-right="0"
fit-margin-bottom="0" />
<g
id="layer1"
inkscape:label="Ebene 1"
inkscape:groupmode="layer"
transform="translate(12764.604,6948.6392)">
<rect
style="fill:#f7f5e5;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:1.33417308;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:normal"
id="rect2098"
width="265.24234"
height="287.7782"
x="-12742.701"
y="-6926.0596" />
<g
id="g2648-3"
transform="matrix(0.11392363,0,0,0.12309744,-11622.953,-6209.6901)"
style="stroke-width:0.50787127">
<path
inkscape:connector-curvature="0"
d="m -10020.9,-6002.961 225,225.1 V -3519.7 l -225,225.039 z"
style="fill:url(#linearGradient1857-3);stroke-width:0.50787127"
sodipodi:nodetypes="ccccc"
id="rect2383-6" />
<path
inkscape:connector-curvature="0"
d="m -7312.7,-6002.961 -225,225.1 h -2258.2 l -225,-225.1 z"
style="fill:url(#linearGradient1876-7);stroke-width:0.50787127"
sodipodi:nodetypes="ccccc"
id="path2626-0" />
<path
inkscape:connector-curvature="0"
d="m -7312.7,-3294.661 -225,-225.039 v -2258.161 l 225,-225.1 z"
style="fill:url(#linearGradient2634-0);stroke-width:0.50787127"
sodipodi:nodetypes="ccccc"
id="path2640-6" />
<path
inkscape:connector-curvature="0"
d="m -10021.2,-3294.6 225.3,-225.1 h 2258.2 l 224.7,225.1 z"
style="fill:url(#linearGradient2636-9);stroke-width:0.50787127"
sodipodi:nodetypes="ccccc"
id="path2642-2" />
</g>
<g
transform="translate(406.2851,1.2898364)"
id="g2045-0">
<path
d="m -13088.844,-6689.1863 c -5.713,4.1067 -13.707,3.5987 -18.843,-1.54 -5.713,-5.716 -5.713,-14.984 0,-20.6867 5.708,-5.7186 14.963,-5.7186 20.684,0 l 4.259,-4.2693 c -8.061,-8.0733 -21.145,-8.0733 -29.22,0 -8.063,8.0707 -8.063,21.1533 0,29.224 6.26,6.2493 15.509,7.6373 23.12,4.2 v 7.32 h -22.375 l -6.082,6.0827 h 34.538 v -27.14 c 0,0 -1.598,2.6186 -4.24,5.2693 -0.574,0.5733 -1.201,1.0773 -1.841,1.54"
style="fill:#77787b;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:1.33333325"
id="path80-2"
inkscape:connector-curvature="0" />
<path
d="m -12944.367,-6712.9559 c 5.707,-4.0907 13.71,-3.584 18.84,1.5427 5.722,5.7186 5.722,14.98 0,20.696 -5.705,5.708 -14.97,5.708 -20.685,0 l -4.265,4.2586 c 8.074,8.0747 21.158,8.0747 29.221,0 8.068,-8.0586 8.068,-21.1386 0,-29.2146 -6.253,-6.2454 -15.507,-7.6374 -23.111,-4.1974 v -7.32 h 22.367 l 6.084,-6.0826 h -34.531 v 17.5786 9.5667 c 0,0 1.584,-2.6387 4.235,-5.2853 0.58,-0.564 1.197,-1.0707 1.845,-1.5427"
style="fill:#008eb0;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:1.33333325"
id="path84-3"
inkscape:connector-curvature="0" />
<path
d="m -13066.931,-6711.3582 c 5.677,-5.664 14.893,-5.664 20.58,0 1.238,1.2427 2.177,2.66 2.877,4.184 h -26.356 c 0.701,-1.524 1.657,-2.9413 2.899,-4.184 m 0,20.584 c -2.844,-2.8466 -4.263,-6.564 -4.263,-10.292 h 35.208 c 0,-2.0613 -0.312,-4.1213 -0.931,-6.108 -0.95,-3.1053 -2.656,-6.0373 -5.117,-8.508 -8.069,-8.0733 -21.151,-8.0733 -29.228,0 -8.064,8.084 -8.064,21.164 0,29.224 8.077,8.0734 21.159,8.0734 29.228,0 l -4.317,-4.316 c -5.687,5.6814 -14.903,5.6814 -20.58,0"
style="fill:#77787b;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:1.33333325"
id="path88-7"
inkscape:connector-curvature="0" />
<path
d="m -12977.472,-6721.7312 c -7.46,0 -13.972,3.9653 -17.614,9.888 -3.628,-5.9227 -10.146,-9.888 -17.606,-9.888 -4.252,0 -8.197,1.296 -11.485,3.4973 -2.48,1.656 -4.559,3.8467 -6.112,6.3907 v 30.7666 h 6.112 v -28.8866 c 2.665,-3.4347 6.812,-5.648 11.485,-5.648 8.04,0 14.547,6.5053 14.547,14.5453 v 19.9893 h 6.123 v -19.9893 c 0,-8.04 6.517,-14.5453 14.55,-14.5453 8.032,0 14.544,6.5053 14.544,14.5453 v 19.9893 h 6.122 v -19.9893 c 0,-11.4173 -9.256,-20.6653 -20.666,-20.6653"
style="fill:#008eb0;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:1.33333325"
id="path92-5"
inkscape:connector-curvature="0" />
<path
inkscape:connector-curvature="0"
id="path112-9"
style="fill:url(#radialGradient110-6);stroke:none;stroke-width:1.33333325"
d="m -13077.879,-6840.685 c -16.926,16.9187 -16.926,44.368 0,61.3014 v 0 c 13.115,13.1053 32.528,16.0106 48.482,8.8 v 0 15.3653 h -46.92 l -12.775,12.756 h 72.451 v -56.8907 l -10.948,10.9467 h -41.37 l 52.322,-52.3253 c -8.458,-8.428 -19.531,-12.6427 -30.602,-12.644 v 0 c -11.092,0 -22.18,4.2293 -30.64,12.6906" />
<path
inkscape:connector-curvature="0"
id="path132-2"
style="fill:url(#radialGradient130-7);stroke:none;stroke-width:1.33333325"
d="m -13016.573,-6897.6103 v 56.82 l 10.885,-10.8987 h 41.371 l -52.303,52.3014 c 16.931,16.924 44.36,16.9093 61.28,0 v 0 c 16.929,-16.9294 16.929,-44.38 0,-61.2987 v 0 c -13.115,-13.1093 -32.532,-16.0253 -48.483,-8.8133 v 0 -15.356 h 47.01 l 12.686,-12.7547 z" />
</g>
</g>
<metadata
id="metadata905">
<rdf:RDF>
<cc:Work>
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<cc:license
rdf:resource="http://creativecommons.org/publicdomain/zero/1.0/" />
<dc:publisher>
<cc:Agent
rdf:about="http://openclipart.org/">
<dc:title>Openclipart</dc:title>
</cc:Agent>
</dc:publisher>
<dc:title></dc:title>
<dc:date>2008-09-22T14:36:23</dc:date>
<dc:description />
<dc:source>https://openclipart.org/detail/19331/gold-frames-set-by-chrisdesign-19331</dc:source>
<dc:creator>
<cc:Agent>
<dc:title>Chrisdesign</dc:title>
</cc:Agent>
</dc:creator>
<dc:subject>
<rdf:Bag>
<rdf:li>gold frame frames rahmen</rdf:li>
<rdf:li>how i did it</rdf:li>
</rdf:Bag>
</dc:subject>
</cc:Work>
<cc:License
rdf:about="http://creativecommons.org/publicdomain/zero/1.0/">
<cc:permits
rdf:resource="http://creativecommons.org/ns#Reproduction" />
<cc:permits
rdf:resource="http://creativecommons.org/ns#Distribution" />
<cc:permits
rdf:resource="http://creativecommons.org/ns#DerivativeWorks" />
</cc:License>
</rdf:RDF>
</metadata>
</svg>

util/gem5art/run/README.md Normal file
@@ -0,0 +1,183 @@
# gem5art run package
This package contains Python objects to wrap gem5 runs/experiments.
Please cite the [gem5art paper](https://arch.cs.ucdavis.edu/papers/2021-3-28-gem5art) when using the gem5art packages.
This documentation can be found on the [gem5 website](https://www.gem5.org/documentation/gem5art/).
Each gem5 experiment is wrapped inside a run object.
These run objects contain all of the information required to execute a gem5 experiment and can optionally be executed via the gem5art tasks library (or manually with the `run()` function). gem5Run interacts with the Artifact class of gem5art to ensure the reproducibility of gem5 experiments, and also stores the current gem5Run object and the output results in the database for later analysis.
## SE and FS mode runs
Below are two methods (for the SE (syscall emulation) and FS (full-system) modes of gem5) from the gem5Run class which give an idea of the arguments a user must supply to create a gem5Run object:
```python
@classmethod
def createSERun(cls,
name: str,
gem5_binary: str,
run_script: str,
outdir: str,
gem5_artifact: Artifact,
gem5_git_artifact: Artifact,
run_script_git_artifact: Artifact,
*params: str,
timeout: int = 60*15) -> 'gem5Run':
.......
@classmethod
def createFSRun(cls,
name: str,
gem5_binary: str,
run_script: str,
outdir: str,
gem5_artifact: Artifact,
gem5_git_artifact: Artifact,
run_script_git_artifact: Artifact,
linux_binary: str,
disk_image: str,
linux_binary_artifact: Artifact,
disk_image_artifact: Artifact,
*params: str,
timeout: int = 60*15) -> 'gem5Run':
.......
```
It is important for the user to understand the different arguments passed to run objects:
- `name`: name of the run, can act as a tag to search the database to find the required runs (it is expected that user will use a unique name for different experiments)
- `gem5_binary`: path to the actual gem5 binary to be used
- `run_script`: path to the python run script that will be used with gem5 binary
- `outdir`: path to the directory where gem5 results should be written
- `gem5_artifact`: gem5 binary git artifact object
- `gem5_git_artifact`: gem5 source git repo artifact object
- `run_script_git_artifact`: run script artifact object
- `linux_binary` (only full-system): path to the actual linux binary to be used (used by run script as well)
- `disk_image` (only full-system): path to the actual disk image to be used (used by run script as well)
- `linux_binary_artifact` (only full-system): linux binary artifact object
- `disk_image_artifact` (only full-system): disk image artifact object
- `params`: other params to be passed to the run script
- `timeout`: longest time in seconds for which the current gem5 job is allowed to execute
The artifact parameters (`gem5_artifact`, `gem5_git_artifact`, and `run_script_git_artifact`) are used to ensure this is a reproducible run.
Apart from the above mentioned parameters, gem5Run class also keeps track of other features of a gem5 run e.g., the start time, the end time, the current status of gem5 run, the kill reason (if the run is finished), etc.
While the user can write their own run script to use with gem5 (with any command line arguments), currently, when a `gem5Run` object is created for a full-system experiment using the `createFSRun` method, it is assumed that the paths to the `linux_binary` and `disk_image` are passed to the run script on the command line (as arguments of the `createFSRun` method).
## Running an experiment
The `gem5Run` object has everything needed to run one gem5 execution.
Normally, this will be performed by using the gem5art *tasks* package.
However, it is also possible to manually execute a gem5 run.
The `run` function executes the gem5 experiment.
It takes two optional parameters: a task associated with the run for bookkeeping and an optional directory to execute the run in.
The `run` function executes the gem5 binary by using `Popen`.
This creates another process to execute gem5.
The `run` function is *blocking* and does not return until the child process has completed.
While the child process is running, every 5 seconds the parent python process will update the status in the `info.json` file.
The `info.json` file is the serialized `gem5Run` object which contains all of the run information and the current status.
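A minimal sketch of how an external tool might poll that serialized state (the directory and JSON fields below are illustrative stand-ins; the field names follow the `gem5Run` attributes described here):

```python
import json
import tempfile
from pathlib import Path

# Stand-in for a gem5 output directory; in a real run this is the
# outdir passed to createSERun/createFSRun.
outdir = Path(tempfile.mkdtemp())

# gem5art's run() periodically dumps the serialized run, roughly like:
(outdir / "info.json").write_text(json.dumps({
    "name": "boot_tests_v1",
    "status": "Running",
    "pid": 12345,
}))

# Any other process can poll the file to monitor the run's status.
info = json.loads((outdir / "info.json").read_text())
print(info["status"])  # Running
```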
`gem5Run` objects have 7 possible status states.
These are currently simple strings stored in the `status` property.
- `Created`: The run has been created. This is set in the constructor when either `createSERun` or `createFSRun` is called.
- `Begin run`: When `run()` is called, after the database is checked, we enter the `Begin run` state.
- `Failed artifact check for ...`: The status is set to this when the artifact check fails.
- `Spawning`: Next, just before `Popen` is called, the run enters the `Spawning` state.
- `Running`: Once the parent process begins spinning waiting for the child to finish, the run enters the `Running` state.
- `Finished`: When the child finishes with exit code `0`, the run enters the `Finished` state.
- `Failed`: When the child finishes with a non-zero exit code, the run enters the `Failed` state.
## Run Already in the Database
When starting a run with gem5art, it might complain that the run already exists in the database.
Basically, before launching a gem5 job, gem5art checks if this run matches an existing run in the database.
In order to uniquely identify a run, a single hash is made out of:
- the runscript
- the parameters passed to the runscript
- the artifacts of the run object, which, for an SE run, include the gem5 binary artifact, the gem5 source git artifact, and the run script (experiments repo) artifact. For an FS run, the list of artifacts also includes the Linux binary artifact and the disk image artifact in addition to the artifacts of an SE run.
If this hash already exists in the database, gem5art will not launch a new job based on this run object, as a run with the same parameters has already been executed.
If the user still wants to launch this job, they will have to remove the existing run object from the database.
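As a sketch (using only the standard library, with hypothetical artifact UUIDs, script path, and parameters), the unique id is a single MD5 over the artifact ids, the runscript, and the parameters, mirroring gem5art's `_getHash`:

```python
import hashlib
from uuid import UUID

# Hypothetical artifact ids; in gem5art these are the _id fields of the
# gem5 binary, gem5 source git, and run script artifacts.
artifact_ids = [
    UUID("11111111-1111-1111-1111-111111111111"),
    UUID("22222222-2222-2222-2222-222222222222"),
    UUID("33333333-3333-3333-3333-333333333333"),
]
run_script = "configs/run_exit.py"  # illustrative path
params = ("riscv", "hello")         # illustrative run-script params

# Concatenate the raw UUID bytes, the script path, and the parameters,
# then take one MD5 over the whole thing.
to_hash = [a.bytes for a in artifact_ids]
to_hash.append(run_script.encode())
to_hash.append(" ".join(params).encode())
run_hash = hashlib.md5(b"".join(to_hash)).hexdigest()
print(run_hash)  # 32 hex characters, used for the database lookup
```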
## Searching the Database to find Runs
### Utility script
gem5art provides the utility `gem5art-getruns` to search the database and retrieve runs.
Based on the parameters, `gem5art-getruns` will dump the results into a file in JSON format.
```
usage: gem5art-getruns [-h] [--fs-only] [--limit LIMIT] [--db-uri DB_URI]
[-s SEARCH_NAME]
filename
Dump all runs from the database into a json file
positional arguments:
filename Output file name
optional arguments:
-h, --help show this help message and exit
--fs-only Only output FS runs
--limit LIMIT Limit of the number of runs to return. Default: all
--db-uri DB_URI The database to connect to. Default
mongodb://localhost:27017
-s SEARCH_NAME, --search_name SEARCH_NAME
Query for the name field
```
### Manually searching the database
Once you start running the experiments with gem5 and want to know the status of those runs, you can look at the gem5Run artifacts in the database.
For this purpose, gem5art provides a method `getRuns`, which you can use as follows:
```python
import gem5art.run
from gem5art.artifact import getDBConnection
db = getDBConnection()
for i in gem5art.run.getRuns(db, fs_only=False, limit=100):
print(i)
```
The documentation on [getRuns](run.html#gem5art.run.getRuns) is available at the bottom of this page.
## Searching the Database to find Runs with Specific Names
As discussed above, while creating a FS or SE mode Run object, the user has to pass a name field to recognize
a particular set of runs (or experiments).
We expect that the user will take care to use a name string which fully characterizes a set of experiments and can be thought of as a `Nonce`.
For example, if we are running experiments to test Linux kernel boot on gem5, we can use a name field `boot_tests_v1` or `boot_tests_[month_year]` (where `month_year` corresponds to the month and year when the experiments were run).
Later on, the same name can be used to search for relevant gem5 runs in the database.
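For instance, a small hypothetical helper (not part of gem5art) for generating names in the `[month_year]` style could look like:

```python
import datetime

def experiment_name(prefix: str) -> str:
    """Build a run name like 'boot_tests_mar_2021' from today's date."""
    today = datetime.date.today()
    return f"{prefix}_{today.strftime('%b_%Y').lower()}"

print(experiment_name("boot_tests"))  # e.g. boot_tests_mar_2021
```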
For this purpose, gem5art provides a method `getRunsByName`, which can be used as follows:
```python
import gem5art.run
from gem5art.artifact import getDBConnection
db = getDBConnection()
for i in gem5art.run.getRunsByName(db, name='boot_tests_v1', fs_only=True, limit=100):
print(i)
```
The documentation on `getRunsByName` is available [here](run.html#gem5art.run.getRunsByName).
## Runs API Documentation
```eval_rst
Run
---
.. automodule:: gem5art.run
:members:
:undoc-members:
```

@@ -0,0 +1,88 @@
#! /usr/bin/env python3
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""This is a simple script to dump gem5 runs into a json file.
This file simply wraps the getRuns function from gem5art.run.
"""
from argparse import ArgumentParser
from json import dump
import gem5art.artifact
from gem5art.artifact import getDBConnection
from gem5art.run import getRunsByNameLike, getRuns
def parseArgs():
parser = ArgumentParser(
description="Dump all runs from the database into a json file"
)
default_db_uri = gem5art.artifact._artifactdb._default_uri
parser.add_argument("filename", help="Output file name")
parser.add_argument(
"--fs-only",
action="store_true",
default=False,
help="Only output FS runs",
)
parser.add_argument(
"--limit",
type=int,
default=0,
help="Limit of the number of runs to return. Default: all",
)
parser.add_argument(
"--db-uri",
default=default_db_uri,
help=f"The database to connect to. Default {default_db_uri}",
)
parser.add_argument(
"-s", "--search_name", help="Query for the name field", default=""
)
return parser.parse_args()
if __name__ == "__main__":
args = parseArgs()
db = getDBConnection(args.db_uri)
with open(args.filename, "w") as f:
if args.search_name:
runs = getRunsByNameLike(
db, args.search_name, args.fs_only, args.limit
)
else:
runs = getRuns(db, args.fs_only, args.limit)
to_dump = [run._convertForJson(run._getSerializable()) for run in runs]
dump(to_dump, f, indent=2)

@@ -0,0 +1,618 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
This file defines a gem5Run object which contains all information needed to
run a single gem5 test.
This class works closely with the artifact module to ensure that the gem5
experiment is reproducible and the output is saved to the database.
"""
import hashlib
import json
import os
from pathlib import Path
import signal
import subprocess
import time
from typing import Any, Dict, Iterable, List, Optional, Tuple, Union
from uuid import UUID, uuid4
import zipfile
from gem5art import artifact
from gem5art.artifact import Artifact
from gem5art.artifact._artifactdb import ArtifactDB
class gem5Run:
"""
This class holds all of the info required to run gem5.
"""
_id: UUID
hash: str
type: str
name: str
gem5_binary: Path
run_script: Path
gem5_artifact: Artifact
gem5_git_artifact: Artifact
run_script_git_artifact: Artifact
params: Tuple[str, ...]
timeout: int
gem5_name: str
script_name: str
linux_name: str
disk_name: str
string: str
outdir: Path
linux_binary: Path
disk_image: Path
linux_binary_artifact: Artifact
disk_image_artifact: Artifact
command: List[str]
running: bool
enqueue_time: float
start_time: float
end_time: float
return_code: int
kill_reason: str
status: str
pid: int
task_id: Any
results: Optional[Artifact]
artifacts: List[Artifact]
@classmethod
def _create(
cls,
name: str,
gem5_binary: Path,
run_script: Path,
outdir: Path,
gem5_artifact: Artifact,
gem5_git_artifact: Artifact,
run_script_git_artifact: Artifact,
params: Tuple[str, ...],
timeout: int,
) -> "gem5Run":
"""
Shared code between SE and FS when creating a run object.
"""
run = cls()
run.name = name
run.gem5_binary = gem5_binary
run.run_script = run_script
run.gem5_artifact = gem5_artifact
run.gem5_git_artifact = gem5_git_artifact
run.run_script_git_artifact = run_script_git_artifact
run.params = params
run.timeout = timeout
run._id = uuid4()
run.outdir = outdir.resolve() # ensure this is absolute
# Assumes **/<gem5_name>/gem5.<anything>
run.gem5_name = run.gem5_binary.parent.name
# Assumes **/<script_name>.py
run.script_name = run.run_script.stem
# Info about the actual run
run.running = False
run.enqueue_time = time.time()
run.start_time = 0.0
run.end_time = 0.0
run.return_code = 0
run.kill_reason = ""
run.status = "Created"
run.pid = 0
run.task_id = None
# Initially, there are no results
run.results = None
return run
@classmethod
def createSERun(
cls,
name: str,
gem5_binary: str,
run_script: str,
outdir: str,
gem5_artifact: Artifact,
gem5_git_artifact: Artifact,
run_script_git_artifact: Artifact,
*params: str,
timeout: int = 60 * 15,
) -> "gem5Run":
"""
name is the name of the run. The name is not necessarily unique. The
name could be used to query the results of the run.
gem5_binary and run_script are the paths to the binary to run
and the script to pass to gem5. Full paths are better.
The artifact parameters (gem5_artifact, gem5_git_artifact, and
run_script_git_artifact) are used to ensure this is a reproducible run.
Further parameters can be passed via extra arguments. These
parameters will be passed in order to the gem5 run script.
timeout is the time in seconds to run the subprocess before killing it.
Note: When instantiating this class for the first time, it will create
a file `info.json` in the outdir which contains a serialized version
of this class.
"""
run = cls._create(
name,
Path(gem5_binary),
Path(run_script),
Path(outdir),
gem5_artifact,
gem5_git_artifact,
run_script_git_artifact,
params,
timeout,
)
run.artifacts = [
gem5_artifact,
gem5_git_artifact,
run_script_git_artifact,
]
run.string = f"{run.gem5_name} {run.script_name} "
run.string += " ".join(run.params)
run.command = [
str(run.gem5_binary),
"-re",
f"--outdir={run.outdir}",
str(run.run_script),
]
run.command += list(params)
run.hash = run._getHash()
run.type = "gem5 run"
# Make the directory if it doesn't exist
os.makedirs(run.outdir, exist_ok=True)
run.dumpJson("info.json")
return run
@classmethod
def createFSRun(
cls,
name: str,
gem5_binary: str,
run_script: str,
outdir: str,
gem5_artifact: Artifact,
gem5_git_artifact: Artifact,
run_script_git_artifact: Artifact,
linux_binary: str,
disk_image: str,
linux_binary_artifact: Artifact,
disk_image_artifact: Artifact,
*params: str,
timeout: int = 60 * 15,
) -> "gem5Run":
"""
name is the name of the run. The name is not necessarily unique. The
name could be used to query the results of the run.
gem5_binary and run_script are the paths to the binary to run
and the script to pass to gem5.
The linux_binary is the kernel to run and the disk_image is the path
to the disk image to use.
Further parameters can be passed via extra arguments. These
parameters will be passed in order to the gem5 run script.
Note: When instantiating this class for the first time, it will create
a file `info.json` in the outdir which contains a serialized version
of this class.
"""
run = cls._create(
name,
Path(gem5_binary),
Path(run_script),
Path(outdir),
gem5_artifact,
gem5_git_artifact,
run_script_git_artifact,
params,
timeout,
)
run.linux_binary = Path(linux_binary)
run.disk_image = Path(disk_image)
run.linux_binary_artifact = linux_binary_artifact
run.disk_image_artifact = disk_image_artifact
# Assumes **/<linux_name>
run.linux_name = run.linux_binary.name
# Assumes **/<disk_name>
run.disk_name = run.disk_image.name
run.artifacts = [
gem5_artifact,
gem5_git_artifact,
run_script_git_artifact,
linux_binary_artifact,
disk_image_artifact,
]
run.string = f"{run.gem5_name} {run.script_name} "
run.string += f"{run.linux_name} {run.disk_name} "
run.string += " ".join(run.params)
run.command = [
str(run.gem5_binary),
"-re",
f"--outdir={run.outdir}",
str(run.run_script),
str(run.linux_binary),
str(run.disk_image),
]
run.command += list(params)
run.hash = run._getHash()
run.type = "gem5 run fs"
# Make the directory if it doesn't exist
os.makedirs(run.outdir, exist_ok=True)
run.dumpJson("info.json")
return run
@classmethod
def loadJson(cls, filename: str) -> "gem5Run":
with open(filename) as f:
d = json.load(f)
# Convert string version of UUID to UUID object
for k, v in d.items():
if k.endswith("_artifact"):
d[k] = UUID(v)
d["_id"] = UUID(d["_id"])
try:
return cls.loadFromDict(d)
except KeyError:
print("Incompatible json file: {}!".format(filename))
raise
@classmethod
def loadFromDict(cls, d: Dict[str, Union[str, UUID]]) -> "gem5Run":
"""Returns new gem5Run instance from the dictionary of values in d"""
run = cls()
run.artifacts = []
for k, v in d.items():
if isinstance(v, UUID) and k != "_id":
a = Artifact(v)
setattr(run, k, a)
run.artifacts.append(a)
else:
setattr(run, k, v)
return run
def checkArtifacts(self, cwd: str) -> bool:
"""Checks to make sure all of the artifacts are up to date
This should happen just before running gem5. This function will return
False if the artifacts don't match and True if they are all the same.
For the git repos, this checks the git hash, for binary artifacts this
checks the md5 hash.
"""
for v in self.artifacts:
if v.type == "git repo":
new = artifact.artifact.getGit(cwd / v.path)["hash"]
old = v.git["hash"]
else:
new = artifact.artifact.getHash(cwd / v.path)
old = v.hash
if new != old:
self.status = f"Failed artifact check for {cwd / v.path}"
return False
return True
def __repr__(self) -> str:
return str(self._getSerializable())
def checkKernelPanic(self) -> bool:
"""
Returns true if the gem5 instance specified in args has a kernel panic
Note: this gets around the problem that gem5 doesn't exit on panics.
"""
term_path = self.outdir / "system.pc.com_1.device"
if not term_path.exists():
return False
with open(term_path, "rb") as f:
try:
f.seek(-1000, os.SEEK_END)
except OSError:
return False
try:
# There was a case where reading `term_path` resulted in a
# UnicodeDecodeError. It is known that the terminal output
# (content of 'system.pc.com_1.device') is written from a
# buffer from gem5, and when gem5 stops, the content of the
# buffer is stopped being copied to the file. The buffer is
# not flushed as well. So, it might be a case that the content
# of the `term_path` is corrupted as a Unicode character could
# be longer than a byte.
last = f.readlines()[-1].decode()
if "Kernel panic" in last:
return True
else:
return False
except UnicodeDecodeError:
return False
def _getSerializable(self) -> Dict[str, Union[str, UUID]]:
"""Returns a dictionary that can be used to recreate this object
Note: All artifacts are converted to a UUID instead of an Artifact.
"""
# Grab all of the member variables
d = vars(self).copy()
# Remove list of artifacts
del d["artifacts"]
# Replace the artifacts with their UUIDs
for k, v in d.items():
if isinstance(v, Artifact):
d[k] = v._id
if isinstance(v, Path):
d[k] = str(v)
return d
def _getHash(self) -> str:
"""Return a single value that uniquely identifies this run
To uniquely identify this run, the gem5 binary, gem5 scripts, and
parameters should all match. Thus, let's make a single hash out of the
artifacts + the runscript + parameters
"""
to_hash = [art._id.bytes for art in self.artifacts]
to_hash.append(str(self.run_script).encode())
to_hash.append(" ".join(self.params).encode())
return hashlib.md5(b"".join(to_hash)).hexdigest()
@classmethod
def _convertForJson(cls, d: Dict[str, Any]) -> Dict[str, str]:
"""Converts UUID objects to strings for json compatibility"""
for k, v in d.items():
if isinstance(v, UUID):
d[k] = str(v)
return d
def dumpJson(self, filename: str) -> None:
"""Dump all info into a json file"""
d = self._convertForJson(self._getSerializable())
with open(self.outdir / filename, "w") as f:
json.dump(d, f)
def dumpsJson(self) -> str:
"""Like dumpJson except returns string"""
d = self._convertForJson(self._getSerializable())
return json.dumps(d)
def run(self, task: Any = None, cwd: str = ".") -> None:
"""Actually run the test.
Calls Popen with the command to fork a new process.
Then, this function polls the process every 5 seconds to check if it
has finished or not. Each time it checks, it dumps the json info so
other applications can poll those files.
task is the celery task that is running this gem5 instance.
cwd is the directory to change to before running. This allows a server
process to run in a different directory than the running process. Note
that only the spawned process runs in the new directory.
"""
# Check if the run is already in the database
db = artifact.getDBConnection()
if self.hash in db:
print(f"Error: Have already run {self.command}. Exiting!")
return
self.status = "Begin run"
self.dumpJson("info.json")
if not self.checkArtifacts(cwd):
self.dumpJson("info.json")
return
self.status = "Spawning"
self.start_time = time.time()
self.task_id = task.request.id if task else None
self.dumpJson("info.json")
# Start running the gem5 command
proc = subprocess.Popen(self.command, cwd=cwd)
# Register handler in case this process is killed while the gem5
# instance is running. Note: there's a bit of a race condition here,
# but hopefully it's not a big deal
def handler(signum, frame):
proc.kill()
self.kill_reason = "sigterm"
self.dumpJson("info.json")
# Note: We'll fall out of the while loop after this.
# This makes it so if you term *this* process, it will actually kill
# the subprocess and then this process will die.
signal.signal(signal.SIGTERM, handler)
# Do this until the subprocess is done (successfully or not)
while proc.poll() is None:
self.status = "Running"
# Still running
self.current_time = time.time()
self.pid = proc.pid
self.running = True
if self.current_time - self.start_time > self.timeout:
proc.kill()
self.kill_reason = "timeout"
if self.checkKernelPanic():
proc.kill()
self.kill_reason = "kernel panic"
self.dumpJson("info.json")
# Check again in five seconds
time.sleep(5)
print("Done running {}".format(" ".join(self.command)))
# Done executing
self.running = False
self.end_time = time.time()
self.return_code = proc.returncode
if self.return_code == 0:
self.status = "Finished"
else:
self.status = "Failed"
self.dumpJson("info.json")
self.saveResults()
# Store current gem5 run in the database
db.put(self._id, self._getSerializable())
print("Done storing the results of {}".format(" ".join(self.command)))
def saveResults(self) -> None:
"""Zip up the output directory and store the results in the
database."""
with zipfile.ZipFile(
self.outdir / "results.zip", "w", zipfile.ZIP_DEFLATED
) as zipf:
for path in self.outdir.glob("**/*"):
if path.name == "results.zip":
continue
zipf.write(path, path.relative_to(self.outdir.parent))
self.results = Artifact.registerArtifact(
command=f"zip results.zip -r {self.outdir}",
name=self.name,
typ="directory",
path=self.outdir / "results.zip",
cwd="./",
documentation="Compressed version of the results directory",
)
def __str__(self) -> str:
return self.string + " -> " + self.status
def getRuns(
db: ArtifactDB, fs_only: bool = False, limit: int = 0
) -> Iterable[gem5Run]:
"""Returns a generator of gem5Run objects.
If fs_only is True, then only full system runs will be returned.
Limit specifies the maximum number of runs to return.
"""
if not fs_only:
runs = db.searchByType("gem5 run", limit=limit)
for run in runs:
yield gem5Run.loadFromDict(run)
fsruns = db.searchByType("gem5 run fs", limit=limit)
for run in fsruns:
yield gem5Run.loadFromDict(run)
def getRunsByName(
db: ArtifactDB, name: str, fs_only: bool = False, limit: int = 0
) -> Iterable[gem5Run]:
"""Returns a generator of gem5Run objects, which have the field "name"
**exactly** the same as the name parameter. The name used in this query
is case sensitive.
If fs_only is True, then only full system runs will be returned.
Limit specifies the maximum number of runs to return.
"""
if not fs_only:
seruns = db.searchByNameType(name, "gem5 run", limit=limit)
for run in seruns:
yield gem5Run.loadFromDict(run)
fsruns = db.searchByNameType(name, "gem5 run fs", limit=limit)
for run in fsruns:
yield gem5Run.loadFromDict(run)
def getRunsByNameLike(
db: ArtifactDB, name: str, fs_only: bool = False, limit: int = 0
) -> Iterable[gem5Run]:
    """Returns a generator of gem5Run objects, which have the field "name"
    containing the name parameter as a substring. The name used in this
    query is case sensitive.
If fs_only is True, then only full system runs will be returned.
Limit specifies the maximum number of runs to return.
"""
if not fs_only:
seruns = db.searchByLikeNameType(name, "gem5 run", limit=limit)
for run in seruns:
yield gem5Run.loadFromDict(run)
fsruns = db.searchByLikeNameType(name, "gem5 run fs", limit=limit)
for run in fsruns:
yield gem5Run.loadFromDict(run)


@@ -0,0 +1,4 @@
[mypy]
namespace_packages = True
warn_unreachable = True
mypy_path = ../artifact

66
util/gem5art/run/setup.py Executable file

@@ -0,0 +1,66 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""A setuptools based setup module."""
from pathlib import Path
from setuptools import setup, find_namespace_packages
with open(Path(__file__).parent / "README.md", encoding="utf-8") as f:
long_description = f.read()
setup(
name="gem5art-run",
version="1.4.0",
description="A collection of utilities for running gem5",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://www.gem5.org/",
author="Davis Architecture Research Group (DArchR)",
author_email="jlowepower@ucdavis.edu",
license="BSD",
classifiers=[
"Development Status :: 4 - Beta",
"License :: OSI Approved :: BSD License",
"Topic :: System :: Hardware",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
],
keywords="simulation architecture gem5",
packages=find_namespace_packages(),
install_requires=["gem5art-artifact"],
python_requires=">=3.6",
project_urls={
"Bug Reports": "https://gem5.atlassian.net/",
"Source": "https://gem5.googlesource.com/",
"Documentation": "https://www.gem5.org/documentation/gem5art",
},
scripts=[
"bin/gem5art-getruns",
],
)


@@ -0,0 +1,25 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -0,0 +1,125 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Tests for gem5Run object"""
import hashlib
from pathlib import Path
import os
import unittest
from uuid import uuid4
from gem5art.artifact import artifact
from gem5art.run import gem5Run
class TestSERun(unittest.TestCase):
def setUp(self):
self.gem5art = artifact.Artifact(
{
"_id": uuid4(),
"name": "test-gem5",
"type": "test-binary",
"documentation": "This is a description of gem5 artifact",
"command": "scons build/X86/gem5.opt",
"path": "/",
"hash": hashlib.md5().hexdigest(),
"git": artifact.getGit(Path(".")),
"cwd": "/",
"inputs": [],
}
)
self.gem5gitart = artifact.Artifact(
{
"_id": uuid4(),
"name": "test-gem5-git",
"type": "test-git",
"documentation": "This is a description of gem5 git artifact",
"command": "git clone something",
"path": "/",
"hash": hashlib.md5().hexdigest(),
"git": artifact.getGit(Path(".")),
"cwd": "/",
"inputs": [],
}
)
self.runscptart = artifact.Artifact(
{
"_id": uuid4(),
"name": "test-runscript",
"type": "test-git",
                "documentation": "This is a description of runscript artifact",
"command": "git clone something",
"path": "/",
"hash": hashlib.md5().hexdigest(),
"git": artifact.getGit(Path(".")),
"cwd": "/",
"inputs": [],
}
)
self.run = gem5Run.createSERun(
"test SE run",
"gem5/build/X86/gem5.opt",
"configs-tests/run_test.py",
"results/run_test/out",
self.gem5art,
self.gem5gitart,
self.runscptart,
"extra",
"params",
)
def test_out_dir(self):
relative_outdir = "results/run_test/out"
self.assertEqual(
self.run.outdir.relative_to(Path(".").resolve()),
Path(relative_outdir),
)
self.assertTrue(
self.run.outdir.is_absolute(),
"outdir should be absolute directory",
)
def test_command(self):
self.assertEqual(
self.run.command,
[
"gem5/build/X86/gem5.opt",
"-re",
"--outdir={}".format(os.path.abspath("results/run_test/out")),
"configs-tests/run_test.py",
"extra",
"params",
],
)
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,82 @@
# gem5art tasks package
This package contains two parallel task libraries for running gem5 experiments.
The actual gem5 experiment can be executed with the help of [Python multiprocessing support](https://docs.python.org/3/library/multiprocessing.html), [Celery](http://www.celeryproject.org/), or without any job manager at all (a job can be launched directly by calling the `run()` method of a gem5Run object).
This package implicitly depends on the gem5art run package.
Please cite the [gem5art paper](https://arch.cs.ucdavis.edu/papers/2021-3-28-gem5art) when using the gem5art packages.
This documentation can be found on the [gem5 website](https://www.gem5.org/documentation/gem5art/).
## Use of Python Multiprocessing
This is a simple way to run gem5 jobs using the Python multiprocessing library.
You can use the following function in your job launch script to execute gem5art run objects:
```python
run_job_pool([a list containing all run objects you want to execute], num_parallel_jobs = [Number of parallel jobs you want to run])
```
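Under the hood this helper is a thin wrapper over `multiprocessing.Pool`. The following is a rough, stdlib-only sketch of the pattern, not the package's actual implementation; `fake_run` is a hypothetical stand-in for calling a gem5Run object's `run()` method:

```python
import multiprocessing as mp

def fake_run(job):
    # Stand-in for run.run(); a real job would launch a gem5 process.
    return job * job

def run_job_pool(job_list, num_parallel_jobs=2):
    # Map every job over a fixed-size pool of worker processes and
    # block until all of them have finished.
    with mp.Pool(num_parallel_jobs) as pool:
        results = pool.map(fake_run, job_list)
    return results
```

For example, `run_job_pool([1, 2, 3])` returns `[1, 4, 9]`, with at most two jobs running at once.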
## Use of Celery
A Celery server can run many gem5 tasks asynchronously.
Once a user creates a gem5Run object (discussed previously) while using gem5art, the object is passed to `run_gem5_instance()`, a method registered with the Celery app, which starts a Celery task to run gem5. The other argument `run_gem5_instance()` needs is the current working directory.
The Celery server can be started with the following command:
```sh
celery -E -A gem5art.tasks.celery worker --autoscale=[number of workers],0
```
This will start a server with events enabled that will accept gem5 tasks as defined in gem5art.
It will autoscale from 0 to the desired number of workers.
Celery relies on the `RabbitMQ` message broker for communication between the client and workers.
If not already installed, install `RabbitMQ` on your system (before running Celery) with:
```sh
apt-get install rabbitmq-server
```
### Monitoring Celery
Celery does not explicitly show the status of runs by default.
[flower](https://flower.readthedocs.io/en/latest/), a Python package, is a web-based tool for monitoring and administering Celery.
To install the flower package:
```sh
pip install flower
```
You can monitor the Celery cluster with:
```sh
flower -A gem5art.tasks.celery --port=5555
```
This will start a web server on port 5555.
### Removing all tasks
```sh
celery -A gem5art.tasks.celery purge
```
### Viewing state of all jobs in celery
```sh
celery -A gem5art.tasks.celery events
```
## Tasks API Documentation
```eval_rst
Task
----
.. automodule:: gem5art.tasks.tasks
:members:
:undoc-members:
.. automodule:: gem5art.tasks.celery
:members:
:undoc-members:
```


@@ -0,0 +1,27 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""This is a set of utilities for using celery to run gem5 experiments"""


@@ -0,0 +1,40 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from celery import Celery # type: ignore
# Create a celery server. If you run celery with this file, it will start a
# server that will accept tasks specified by the "run" below.
gem5app = Celery(
"gem5",
backend="rpc",
broker="amqp://localhost",
include=["gem5art.tasks.tasks"],
)
gem5app.conf.update(accept_content=["pickle", "json"])
if __name__ == "__main__":
gem5app.start()


@@ -0,0 +1,65 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from .celery import gem5app
import multiprocessing as mp
import time
@gem5app.task(bind=True, serializer="pickle")
def run_gem5_instance(self, gem5_run, cwd="."):
"""
Runs a gem5 instance with the script and any parameters to the script.
Note: this is "bound" which means self is the task that is running this.
"""
gem5_run.run(self, cwd=cwd)
def run_single_job(run):
    start_time = time.time()
    print(f"Running {' '.join(run.command)} at {start_time}")
    run.run()
    finish_time = time.time()
    print(
        f"Finished {' '.join(run.command)} at {finish_time}. "
        f"Total time = {finish_time - start_time}"
    )
def run_job_pool(job_list, num_parallel_jobs=mp.cpu_count() // 2):
"""
Runs gem5 jobs in parallel when Celery is not used.
    Creates half as many parallel jobs as the machine's core count if no
    explicit job count is provided.
    Receives a list of run objects created by the launch script.
"""
pool = mp.Pool(num_parallel_jobs)
pool.map(run_single_job, job_list)
pool.close()
pool.join()
    print("All jobs done running!")


@@ -0,0 +1,4 @@
[mypy]
namespace_packages = True
warn_unreachable = True
mypy_path = ../artifact

66
util/gem5art/tasks/setup.py Executable file

@@ -0,0 +1,66 @@
# Copyright (c) 2019, 2021 The Regents of the University of California
# All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""A setuptools based setup module."""
from pathlib import Path
from setuptools import setup, find_namespace_packages
with open(Path(__file__).parent / "README.md", encoding="utf-8") as f:
long_description = f.read()
setup(
name="gem5art-tasks",
version="1.4.0",
description="A celery app for gem5art",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://www.gem5.org/",
author="Davis Architecture Research Group (DArchR)",
author_email="jlowepower@ucdavis.edu",
license="BSD",
classifiers=[
"Development Status :: 4 - Beta",
"License :: OSI Approved :: BSD License",
"Topic :: System :: Hardware",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
],
keywords="simulation architecture gem5",
packages=find_namespace_packages(include=["gem5art.*"]),
install_requires=["celery"],
extras_require={
"flower": ["flower"],
},
python_requires=">=3.6",
project_urls={
"Bug Reports": "https://gem5.atlassian.net/",
"Source": "https://gem5.googlesource.com/",
"Documentation": "https://www.gem5.org/documentation/gem5art",
},
)