\textbf{Abstract}

The performance of today's computing systems depends in particular on the memory system employed.
With the increasing prevalence of DRAMs also in mobile and embedded systems, it is important to choose a memory configuration that fits the application well in order to achieve high performance.
However, since this is a complex task within system design due to the overwhelming number of possible configurations and their respective advantages and disadvantages, a simulation of the system is indispensable to assess whether the components used and the configuration parameters are suitable for the application.
Such a simulation can be performed with the DRAM simulation framework DRAMSys.
A simulation with DRAMSys requires realistic stimuli for the memory system that correspond to the application's behavior, which can be generated by a time-consuming simulation of the application with processor core models.

\section{Simulation Results}
\label{sec:simulation_results}

This section evaluates the accuracy of the new simulation frontend.
After a short discussion of the general expectations regarding the accuracy and the considerations to be made, the simulation results are presented.
The presentation is structured into two parts:
First, simulation statistics of numerous benchmarks are compared against the gem5 simulator \cite{Binkert2011}, which uses detailed processor models and can be considered a reference.
Second, the new simulation frontend is compared against the memory access trace generator tool of the Ramulator DRAM simulator \cite{Ghose2019}.

\subsection{Accuracy}

First, the micro-benchmark suite TheBandwidthBenchmark \cite{TheBandwidthBenchmark}, consisting of various streaming kernels, is used to compare the gem5 full-system simulation as well as the gem5 syscall-emulation simulation with the newly developed frontend.

The gem5 syscall emulation does not simulate a whole operating system; instead, it utilizes the host system's Linux kernel and therefore only simulates the application binary.
In contrast, the gem5 full-system simulation boots into a complete Linux system including all processes that may run in the background.
Therefore, syscall emulation is conceptually closer to the DynamoRIO approach than full-system simulation.

In both cases, the simulation setup consists of a two-level cache hierarchy with the following parameters:
The trace player operates at the same clock frequency as the gem5 core models.

It is important to configure the CPI value of the new trace player to a sensible value in order to approximate the delay between two consecutive memory accesses.
For the simulations, the CPI value that gem5 SE reports in its statistics is used.
It has been found that the CPI results in an approximate value of \textit{10} if only computation instructions are considered and load and store operations are ignored, since the latter are affected by the latency of the memory subsystem.
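The role of the CPI value can be illustrated with a minimal sketch: each access is injected after a delay proportional to the number of computation instructions observed since the previous access. The function name and trace format below are hypothetical, not the trace player's actual API.

```python
# Hypothetical sketch (not the DRAMSys trace-player API): estimating the
# injection cycle of each memory access from the number of computation
# instructions executed since the previous access, scaled by a fixed CPI.

def access_times_in_cycles(trace, cpi=10):
    """trace: per-access counts of computation instructions since the
    previous access. Returns the cycle at which each access is injected."""
    cycle = 0
    times = []
    for compute_instrs in trace:
        cycle += compute_instrs * cpi  # delay approximated as CPI x instructions
        times.append(cycle)
    return times

print(access_times_in_cycles([5, 0, 12]))  # [50, 50, 170]
```

With the measured CPI of 10, two accesses separated by five computation instructions are thus injected 50 cycles apart.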

The micro-benchmarks themselves are multi-threaded and make use of all available cores.
Furthermore, the compiler optimization level is set to \texttt{-Ofast} for all benchmarks.
Their access patterns are as follows:
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Kernel & Description & Access Pattern \\
\hline
\hline
INIT & Initialize an array & a = s (store, write allocate) \\
The results show that all parameters of DRAMSys correlate well with the gem5 statistics.
While the DynamoRIO results for the average bandwidth deviate by 31.0\% on average compared to gem5 SE, this deviation is only 11.1\% for gem5 FS.
The total amount of bytes read deviates by 35.5\% in comparison to gem5 FS and by only 14.6\% in comparison to gem5 SE.
The amount of bytes written, on the other hand, shows a very small deviation of 5.2\% for gem5 FS and only 0.07\% for gem5 SE.
Therefore, it can be stated that almost the same number of bytes was written back to DRAM due to cache write-backs.
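The deviations quoted throughout this section are relative differences with respect to the gem5 reference values; a minimal helper making the calculation explicit (the function name and the example numbers are ours, purely illustrative):

```python
def relative_deviation(value, reference):
    """Absolute deviation of `value` from `reference`, in percent."""
    return abs(value - reference) / reference * 100.0

# e.g. a frontend bandwidth of 8.9 GB/s against a reference of 10.0 GB/s
print(round(relative_deviation(8.9, 10.0), 1))  # 11.0
```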

Those numbers are also illustrated in Figure \ref{fig:benchmark_gem5_bandwidth_ddr4}.

\end{figure}

Table \ref{tab:benchmark_gem5_bandwidth_ddr3} and Figure \ref{fig:benchmark_gem5_bandwidth_ddr3} show the same key parameters for the DDR3 configuration.
Here, the absolute deviations in the average memory bandwidth amount to 27.5\% and 7.0\% for gem5 SE and gem5 FS, respectively.
The differences in the amount of bytes read amount to 31.6\% for gem5 FS and 14.7\% for gem5 SE.
Here, too, the bytes written only show small deviations: 5.2\% for gem5 FS and 0.02\% for gem5 SE.

These tables also provide information about the simulation time of the different simulators.

\subsection{Comparison to Ramulator}

In order to evaluate the new simulation frontend with a simulator that uses a similar approach, the benchmarks are compared with Ramulator in this section.
This approach is also based on DBI; more specifically, Ramulator uses the Intel Pin tool to create a memory access trace of a running application.
Here, the cache filtering takes place when the trace is created instead of while the trace is played back by the simulator.
This means that the simulation of the cache cannot take into account the feedback from the DRAM system, and therefore the latencies of the cache are neglected.
Ramulator also uses the count of computational instructions to approximate the delay between two memory accesses.
Since Ramulator uses a CPI value of \textit{4} by default, this is also the value that DRAMSys is configured with.
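The trace-time cache filtering described above can be sketched as follows. This is a deliberately simplified, hypothetical model (a direct-mapped cache without timing), not Ramulator's actual Pin tool: only accesses that miss in the modeled cache reach the trace, and since the filter runs before simulation, no DRAM feedback can influence it.

```python
class DirectMappedFilter:
    """Simplified trace-time cache filter: accesses that hit in the modeled
    cache are absorbed; only misses are emitted to the DRAM trace.
    Hypothetical sketch, not Ramulator's actual implementation."""

    def __init__(self, num_lines=1024, line_size=64):
        self.num_lines = num_lines
        self.line_size = line_size
        self.tags = [None] * num_lines  # one tag per cache line

    def filter(self, address):
        """Return True if the access misses and belongs in the trace."""
        line = address // self.line_size
        index = line % self.num_lines
        if self.tags[index] == line:
            return False          # hit: filtered out, never reaches the trace
        self.tags[index] = line   # miss: allocate the line, emit to trace
        return True

f = DirectMappedFilter()
trace = [a for a in [0x1000, 0x1008, 0x2000, 0x1010] if f.filter(a)]
print([hex(a) for a in trace])  # ['0x1000', '0x2000']
```

Because the filter has already run when the trace is simulated, cache hit/miss decisions cannot react to DRAM-side latencies, which is exactly the limitation noted above.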

The cache configuration remains the same as in the gem5 simulations, and the simulation is again performed with a DDR3-1600 and a DDR4-2400 configuration.
However, the address mapping is changed to a row-bank-rank-column-channel address mapping with only one rank and one channel, respectively.
The exact configuration is listed in Section \ref{sec:address_mappings}.

In contrast to the previous simulations, the benchmarks are now single-threaded.
On average, the absolute deviation is about 19.1\% for the DDR4 simulation, whereas it only amounts to about 10.0\% for the DDR3 configuration.
The differences in the average access latency amount to 41.5\% and 3.6\% for the DDR4 and DDR3 simulations, respectively.

One noticeable aspect is that with Ramulator the latencies are greater with DDR4 than with DDR3.
In the DRAMSys configuration, the opposite is the case.
A possible explanation could be that Ramulator, as already mentioned, cannot take into account the feedback from the memory system during cache filtering, and therefore deviations can occur.

\subsection{Simulation Runtime Analysis}

The last topic for comparison is the speed increase (i.e., the reduction in \textit{wall clock time}) achieved by using the new simulation frontend compared to a detailed processor simulation.

For this, DRAMSys is again compared with gem5 SE and FS.
A comparison with Ramulator would not be meaningful, because the cache filtering takes place at different times: while with Ramulator the trace generation takes longer than with DynamoRIO, the simulation itself is faster.
The database recording feature of DRAMSys is also disabled for these measurements, since the additional file system accesses for this functionality severely degrade the simulator's performance.

Figure \ref{fig:runtimes} presents the runtimes of the various benchmarks and simulators.
As expected, DRAMSys outperforms the gem5 full-system and syscall-emulation simulators in every case.
On average, DRAMSys is 47.0\% faster than gem5 SE and 73.7\% faster than gem5 FS, with a maximum speedup of 82.6\% for the benchmark \texttt{SUM}.
While gem5 SE only simulates the target application using the detailed processor model, gem5 FS has to simulate the complete operating system kernel and the applications that run concurrently in the background.
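The "X\% faster" figures above are presumably relative reductions in wall clock time, consistent with the definition given earlier; a small helper making the arithmetic explicit (the numbers below are illustrative, not measured):

```python
def percent_faster(t_new, t_ref):
    """Reduction of wall clock time relative to the reference, in percent."""
    return (t_ref - t_new) / t_ref * 100.0

# e.g. a run that takes 26 s instead of the reference's 100 s is 74 % faster
print(percent_faster(26.0, 100.0))
```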