arch-arm: Keep track of observed page sizes in the TLB

With this commit we record the page sizes of all valid
entries in a TLB.

We update the set conservatively, therefore allowing
false positives but not false negatives.

This information will be used when doing a page-size-based
lookup. At the moment we don't strictly need it: we
iterate over all TLB entries (the TLB implements a fully
associative cache), and if we find multiple matches it means
we have stored some partial translations.

The existing logic prioritizes complete translations
over partial translations and, among the latter, late-stage
translations over early-stage ones (with the idea of
minimizing the number of walks).

The "iterate once over the entire TLB and record all matches"
approach won't work well when we shift from a fully associative
TLB to a set-associative one. With the introduction of the
aforementioned set, we can do page-size-based lookups:
we can explicitly look up the TLB for a specific page size,
therefore looking into the appropriate set for a match.
Change-Id: If77853373792d6a5ec84cf1909ee5eb567f3d0e4
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com>
2024-07-16 09:36:10 +01:00
parent da3919a6f4
commit fda8eeace4
2 changed files with 16 additions and 0 deletions


@@ -264,6 +264,7 @@ TLB::insert(TlbEntry &entry)
         table[i] = table[i-1];
     table[0] = entry;
+    observedPageSizes.insert(entry.N);
     stats.inserts++;
     ppRefills->notify(1);
 }
@@ -314,12 +315,14 @@ TLB::flushAll()
     }
     stats.flushTlb++;
+    observedPageSizes.clear();
 }

 void
 TLB::flush(const TLBIOp& tlbi_op)
 {
     int x = 0;
+    bool valid_entry = false;
     TlbEntry *te;
     while (x < size) {
         te = &table[x];
@@ -328,10 +331,13 @@ TLB::flush(const TLBIOp& tlbi_op)
             te->valid = false;
             stats.flushedEntries++;
         }
+        valid_entry = valid_entry || te->valid;
         ++x;
     }
     stats.flushTlb++;
+    if (!valid_entry)
+        observedPageSizes.clear();
 }

 void


@@ -156,6 +156,16 @@ class TLB : public BaseTLB
     int rangeMRU; //On lookup, only move entries ahead when outside rangeMRU
     vmid_t vmid;

+    /** Set of observed page sizes in the TLB.
+     * We update the set conservatively, therefore allowing
+     * false positives but not false negatives.
+     * This means there could be a stored page size with
+     * no matching TLB entry (e.g. it has been invalidated),
+     * but if the page size is not in the set, we are certain
+     * there is no TLB entry with that size.
+     */
+    std::set<Addr> observedPageSizes;

   public:
     using Params = ArmTLBParams;
     using Lookup = TlbEntry::Lookup;