Age | Commit message | Author | Files | Lines |
|
Simplify the implementation of mullwo. For 64-bit CPUs, the result is
the concatenation of the upper and lower parts of the muls2_i32 operation,
which may be slightly better than deposit. For 32-bit CPUs, the lower part
of the muls2_i32 operation is moved into the target GPR.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Eliminate the unnecessary ext32s TCG operation and make the multiplication
operation explicitly 32-bit.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Optimize the special case of rlwnm where MB=0 and ME=31. This can
be implemented using a ROTL.
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Optimize the special case of rlwinm where MB=0 and ME=31. This can
be implemented as a 32-bit ROTL.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The special case of rlwimi where MB <= ME and SH = 31-ME can be implemented
with a single TCG deposit operation. This replaces the less general case
of SH = MB = 0 and ME = 31.
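For illustration (using a hypothetical deposit-style helper, not the actual
translate.c code), the condition maps onto a deposit of the low ME-MB+1 bits
of the source at bit position 31-ME:
===
/* rlwimi ra,rs,SH,MB,ME with MB <= ME and SH = 31 - ME is equivalent to
 * inserting the low (ME - MB + 1) bits of rs into ra at bit 31 - ME
 * (counting from the least significant bit).
 */
static uint32_t rlwimi_as_deposit(uint32_t ra, uint32_t rs, int mb, int me)
{
    int len = me - mb + 1;                                /* field width  */
    int pos = 31 - me;                                    /* field offset */
    uint32_t mask = ((len < 32 ? (1u << len) : 0) - 1u) << pos;

    return (ra & ~mask) | ((rs << pos) & mask);
}
===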
Signed-off-by: Tom Musta <tommusta@gmail.com>
Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The last 8 bytes of the buffer list are defined to contain the number
of dropped frames. At the moment we use them to store rx entries,
which trips up ethtool -S:
rx_no_buffer: 9223380832981355136
Fix this by skipping the last buffer list entry.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
When disabling MSI/MSIX via the "ibm,change-msi" RTAS call, no check was made
whether MSI or MSIX is actually supported, and the MSI message was reset
unconditionally. If this happened on a device which does not support MSI
(but does support MSIX, otherwise "ibm,change-msi" would not be called),
the device would have its PCIDevice::msi_cap field (MSI capability offset)
set to zero, and writing a vector would actually clear PCI status.
This clears the MSI message only if MSI or MSIX is present on the device.
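A minimal sketch of the intended guard, using QEMU's msi_present()/
msix_present() accessors (the surrounding code is illustrative only):
===
/* Sketch: only touch the MSI message when the capability actually
 * exists; otherwise msi_cap is 0 and the write would hit the PCI
 * status register instead.
 */
if (msi_present(pdev) || msix_present(pdev)) {
    /* ... clear the configured MSI message on pdev ... */
}
===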
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Mac OS X calibrates a number of frequencies at boot by reading
timebase (tb) values and comparing them to VIA CUDA timer values.
The only variable we can really steer well (thanks to KVM) is the CUDA
frequency. So let's use that one to fool Mac OS X into believing the
bus frequency is tbfreq * 4. That way Mac OS X will automatically
calculate the correct timebase frequency.
With this patch and the patch set I posted earlier I can successfully
run Mac OS X 10.2, 10.3 and 10.4 guests with -M mac99 on TCG and KVM.
Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
We already expose the real CPU's tb frequency to the guest via fw_cfg. Soon
we will need to also expose it to the MacIO, so let's move it to a variable
that we can leverage every time we need the frequency.
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Mac OS X (at least with -M mac99) searches for a valid NVRAM partition
of a special Apple type. If it can't find that partition in the first
half of NVRAM, it will look at the second half.
There are a few implications from this. The first is that we need to
split NVRAM into 2 halves - one for Open Firmware use, the other one for
Mac OS X. Without this split Mac OS X will just loop endlessly over the
second half trying to find a partition.
The other implication is that we should provide a specially crafted Mac
OS X compatible NVRAM partition on the second half that Mac OS X can
happily use as it sees fit.
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The NVRAM in our Core99 machine really supports 2-byte and 4-byte accesses
just as well as 1-byte accesses. In fact, Mac OS X uses those.
Add support for higher register size granularities.
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The macio_nvram_read and macio_nvram_write functions are never called,
so just remove them.
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
There is a special timer in the mac99 machine that we recently started
to emulate. Unfortunately we emulated it at the wrong frequency.
This patch adapts the emulated frequency to the one Mac OS X uses when
evaluating results from this timer, making the calculations it bases on it work.
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
To find out whether we support the KVM hypercall interface we need to ask KVM
on the VM level rather than the global KVM level, because Book3S HV KVM does
not support it and we play conservative when both HV and PR are loaded.
So instead, use the VM helper that falls back to global KVM enumeration. That
should cover all cases.
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
We can now call KVM_CHECK_EXTENSION on the kvm fd or on the vm fd, and
the vm version is more accurate when it comes to PPC KVM.
Add a helper that makes the vm version available and falls back to the
non-vm variant if the vm one is not available yet, to stay compatible.
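A sketch of what such a helper can look like (modelled on the generic KVM
ioctl wrappers; the exact fallback and naming in the actual patch may differ):
===
int kvm_vm_check_extension(KVMState *s, unsigned int extension)
{
    int ret = kvm_vm_ioctl(s, KVM_CHECK_EXTENSION, extension);

    if (ret < 0) {
        /* The kernel does not support KVM_CHECK_EXTENSION on the vm fd;
         * fall back to the global (kvm fd) variant. */
        ret = kvm_check_extension(s, extension);
    }
    return ret;
}
===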
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Fix the check for carry in the srad helper to properly construct
the mask -- a "1ULL" must be used (instead of "1") in order to
get the desired result.
Example:
R3 8000000000000000
R4 F3511AD4A2CD4C38
srad 3,3,4
Should *not* set XER[CA] but does without this patch.
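For illustration (not the exact helper code), the fixed carry computation is
roughly:
===
/* CA for srad with a shift amount of 0..63: set when the source is
 * negative and any 1 bits are shifted out. The mask of shifted-out
 * bits must be built with 1ULL; a plain "1" truncates for shifts >= 32.
 */
static int srad_carry(uint64_t rs, unsigned int shift)
{
    uint64_t mask = (1ULL << shift) - 1;
    return ((int64_t)rs < 0) && ((rs & mask) != 0);
}
===
With the example above (shift amount 0x38), the mask keeps the low 56 bits,
which are all zero, so CA stays clear.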
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
For 64-bit implementations, the special case of a shift by zero
should result in the sign extension of the least significant 32 bits
of the source GPR (not a direct copy of the 64-bit source GPR).
Example:
R3 A6212433228F41DC
srawi 3,3,0
R3 expected : 00000000228F41DC
R3 actual : A6212433228F41DC (without this patch)
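The intended semantics, sketched as a hypothetical C helper:
===
/* srawi rA,rS,0 on a 64-bit implementation: the result is the
 * sign-extended low 32 bits of rS, not a plain copy of the 64-bit GPR.
 */
static uint64_t srawi_sh0(uint64_t rs)
{
    return (uint64_t)(int64_t)(int32_t)rs;
}
===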
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Fix the code to properly detect overflow: the 128-bit signed
product must have all zeroes or all ones in the first 65 bits;
otherwise OV should be set.
Example:
R3 45F086A5D5887509
R4 0000000000000002
mulldo 3,3,4
Should set XER[OV].
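The overflow condition can be written with a 128-bit product; a sketch
assuming __int128 is available:
===
/* OV for mulldo: set unless bits 127..63 of the 128-bit signed product
 * are all zeroes or all ones, i.e. unless the product fits in a signed
 * 64-bit value.
 */
static int mulldo_ov(int64_t ra, int64_t rb)
{
    __int128 prod = (__int128)ra * rb;
    int64_t hi = prod >> 64;                 /* upper 64 bits */
    int64_t lo = prod;                       /* lower 64 bits */

    return hi != (lo >> 63);                 /* not a sign extension -> OV */
}
===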
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
For 64-bit implementations, the mullw result is the 64-bit product
of the sign-extended least significant 32 bits of the source
registers.
Fix the code to properly sign-extend the source operands and produce
a 64-bit product.
Example:
R3 00000000002F37A0
R4 41C33D242F816715
mullw 3,3,4
R3 expected : 0008C3146AE0F020
R3 actual : 000000006AE0F020 (without this patch)
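The expected semantics, as a hypothetical C helper:
===
/* mullw on a 64-bit implementation: the 64-bit product of the
 * sign-extended low 32 bits of the two source registers.
 */
static uint64_t do_mullw(uint64_t ra, uint64_t rb)
{
    return (uint64_t)((int64_t)(int32_t)ra * (int64_t)(int32_t)rb);
}
===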
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
On 64-bit implementations, the mullwo result is the 64-bit product of
the signed 32-bit operands. Fix the implementation to properly deposit
the upper 32 bits into the target register.
Example:
R3 0407DED115077586
R4 53778DF3CA992E09
mullwo 3,3,4
R3 expected : FB9D02730D7735B6
R3 actual : 000000000D7735B6 (without this patch)
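Sketched in C (illustrative only), including the associated OV condition:
===
/* mullwo on a 64-bit implementation: the target receives the full
 * 64-bit product of the signed 32-bit operands; OV is set when the
 * product does not fit into 32 bits.
 */
static uint64_t do_mullwo(uint64_t ra, uint64_t rb, int *ov)
{
    int64_t prod = (int64_t)(int32_t)ra * (int64_t)(int32_t)rb;

    *ov = (prod != (int32_t)prod);
    return (uint64_t)prod;
}
===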
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The rlwimi specification includes the ROTL32 operation, which is defined
to be a left rotation of two copies of the least significant 32 bits of
the source GPR.
The current implementation is incorrect on 64-bit implementations in that
it rotates a single copy of the least significant 32 bits, padding with
zeroes in the most significant bits.
Fix the code to properly implement this ROTL32 operation.
Also fix the special case of MB=31 and ME=0 to copy the entire contents
of the source GPR.
Examples:
R3 FFFFFFFFFFFFFFF0
rlwimi 3,3,29,14,1
R3 expected : 1FFFFFFE3FFFFFFE
R3 actual : 000000003FFFFFFE (without this patch)
R3 ED7EB4DD824F0853
rlwimi 3,3,10,31,0
R3 expected : 3C214E09024F0853
R3 actual : 00000000024F0853 (without this patch)
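For reference, the ROTL32 operation can be sketched as follows (the same
helper shape underlies the rlwnm and rlwinm fixes in the next entries):
===
/* ROTL32(x, n): rotate two concatenated copies of the low 32 bits of x
 * left by n (0 <= n <= 31).
 */
static uint64_t rotl32_dup(uint64_t x, unsigned int n)
{
    uint64_t dup = (uint32_t)x;

    dup |= dup << 32;                        /* rs[32:63] || rs[32:63] */
    n &= 31;
    return n ? (dup << n) | (dup >> (64 - n)) : dup;
}
===
The mask is then applied to this 64-bit rotated value rather than to a
zero-extended 32-bit rotation.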
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The rlwnm specification includes the ROTL32 operation, which is defined
to be a left rotation of two copies of the least significant 32 bits of
the source GPR.
The current implementation is incorrect on 64-bit implementations in that
it rotates a single copy of the least significant 32 bits, padding with
zeroes in the most significant bits.
Fix the code to properly implement this ROTL32 operation.
Example:
R3 = 0000000000000002
R4 = 7FFFFFFFFFFFFFFF
rlwnm 3,3,4,31,16
R3 expected : 0000000100000001
R3 actual : 0000000000000001 (without this patch)
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The rlwinm specification includes the ROTL32 operation, which is defined
to be a left rotation of two copies of the least significant 32 bits of
the source GPR.
The current implementation is incorrect on 64-bit implementations in that
it rotates a single copy of the least significant 32 bits, padding with
zeroes in the most significant bits.
Fix the code to properly implement this ROTL32 operation.
Example:
R3 = F7487D82EC6F75DF
rlwinm 3,3,5,12,4
R3 expected : 8DEEBBFD880EBBFD
R3 actual : 00000000880EBBFD (without this fix)
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
MAX_CPUS of 256 is inconsistent with QEMU supporting up to 255 CPUs. This
MAX_CPUS number percolated back to "virsh capabilities" and reported a
wrong max_cpus value.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
This patch adds hardware breakpoint and hardware watchpoint support
for ppc.
On the BOOKE architecture we cannot share debug resources between QEMU
and the guest because:
When QEMU is using debug resources, the debug exception must
always be enabled. To achieve this we set MSR_DE and also set
MSRP_DEP so the guest cannot change MSR_DE.
When emulating debug resources for the guest, we want the guest
to control MSR_DE (enable/disable the debug interrupt as needed).
These two configurations cannot be supported at the same time,
so the result is that we cannot share debug resources between
QEMU and the guest on the BOOKE architecture.
In the current design QEMU gets priority over the guest: if QEMU
is using debug resources then the guest cannot use them, and if
the guest is using a debug resource then QEMU can overwrite it.
When QEMU is not able to handle a debug exception, we inject a program
exception into the guest. Yes, a program exception, NOT a debug exception,
and the reason is:
1) QEMU and the guest do not share debug resources.
2) For software breakpoints QEMU uses an ehpriv-1 instruction.
So there is no way we can be in QEMU with exit reason KVM_EXIT_DEBUG
for a debug exception set up by the guest; the only possibility is that
the guest executed the ehpriv-1 privileged instruction, and that is why
we inject a program exception.
Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
This patch allows inserting/removing software breakpoints.
When QEMU is not able to handle a debug exception, we inject a
program exception into the guest, because for software breakpoints
QEMU uses an ehpriv-1 instruction.
So there is no way we can be in QEMU with exit reason KVM_EXIT_DEBUG
for a debug exception set up by the guest; the only possibility is that
the guest executed the ehpriv-1 privileged instruction, and that is why
we inject a program exception.
Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
[agraf: make deflect comment booke/book3s agnostic]
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
This patch synchronizes env->excp_vectors[] with env->iovr[].
This is required for using the existing interrupt injection mechanism
for kvm.
Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Get the trap instruction opcode from KVM; this opcode will
be used for setting software breakpoints in a following patch.
Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
We currently calculate the final RTAS and FDT location based on
the early estimate of the RMA size, cropped to 256M on KVM since
we only know the real RMA size at reset time which happens much
later in the boot process.
This means the FDT and RTAS end up right below 256M while they
could be much higher, using precious RMA space and limiting
what the OS bootloader can put there, which has proved to be
a problem with some OSes (such as when using very large initrds).
Fortunately, we do the actual copy of the device-tree into guest
memory much later, during reset, late enough to be able to do it
using the final RMA value; we just need to move the calculation
to the right place.
However, RTAS is still loaded too early, so we change the code to
load the tiny blob into qemu memory early on, and then copy it into
guest memory at reset time. It's small enough that the memory usage
doesn't matter.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[aik: fixed errors from checkpatch.pl, defined RTAS_MAX_ADDR]
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
[agraf: fix compilation on 32bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
A subsequent patch to ppc/spapr needs to load the RTAS blob into
qemu memory rather than target memory (so it can later be copied
into the right spot at machine reset time).
I would use load_image() but it is marked deprecated because it
doesn't take a buffer size as argument, so let's add load_image_size()
that does.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[aik: fixed errors from checkpatch.pl]
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
We want the associativity lists of memory and CPU nodes to match, but
memory nodes have an incorrect domain#3, which is zero for CPUs, so they
won't match.
This clears domain#3 in the list to match the CPUs' associativity lists.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
In multiple places there is a node0_size variable calculation
which assumes that NUMA node #0 and memory node #0 are the same
thing, which they are not. Since we are going to change this and
do not want to change it in multiple places, let's make a helper.
This adds a spapr_node0_size() helper and makes use of it.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The Linux kernel expects nodes to have a power-of-two size and
does a WARN_ON if this is not the case:
[ 0.041456] WARNING: at drivers/base/memory.c:115
which is:
===
/* Validate blk_sz is a power of 2 and not less than section size */
if ((block_sz & (block_sz - 1)) || (block_sz < MIN_MEMORY_BLOCK_SIZE)) {
WARN_ON(1);
block_sz = MIN_MEMORY_BLOCK_SIZE;
}
===
This splits memory nodes into a set of smaller blocks whose
size is a power of two. This makes sure the start
address of every node is aligned to the node size.
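One way to pick the block sizes (a sketch, not necessarily the exact loop in
spapr.c): each block is the largest power of two that fits in the remaining
size while keeping the current start address aligned to the block size.
===
/* size must be non-zero; the caller advances addr and shrinks size by
 * the returned amount until size reaches zero.
 */
static uint64_t next_block_size(uint64_t addr, uint64_t size)
{
    uint64_t sz = 1ULL << (63 - __builtin_clzll(size)); /* largest pow2 <= size */

    if (addr & (sz - 1)) {
        sz = addr & -addr;               /* limit to the alignment of addr */
    }
    return sz;
}
===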
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
[agraf: squash windows compile fix in]
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Current QEMU does not support memoryless NUMA nodes; however,
actual hardware may have them, so it makes sense to have a way
to emulate them in QEMU. This prepares SPAPR for that.
This moves 2 calls of spapr_populate_memory_node() into
the existing loop over NUMA nodes, so the first several nodes may
have no memory and this will still work.
If there is no NUMA configuration, the code assumes there is just
a single node at 0 which has all the guest memory.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
This finishes refactoring by using the spapr_populate_memory_node helper
for all nodes and removing leftovers from spapr_populate_memory().
This is not part of the previous patch because the patches look
nicer apart.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
This moves recurring bits of code related to memory@xxx node
creation to a helper.
This makes use of the new helper for node@0.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
get_boot_devices_list() mallocs memory which spapr_finalize_fdt
doesn't free.
Signed-off-by: Chenliang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
When running KVM we have to adhere to host page boundaries for memory slots.
Unfortunately the NVRAM on mac99 is a 4k RAM hole inside of an MMIO flash
area.
So if our host is configured with 64k page size, we can't use the mac99 target
with KVM. This is a real shame, as this limitation is not really an issue - we
can easily map NVRAM somewhere else, and at least Linux and Mac OS X use it
at its new location.
So in the emergency case where the choice is between failing to run at all
and moving NVRAM to a place it shouldn't be, choose the latter.
This patch enables -M mac99 with KVM on 64k page size hosts.
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Useful for identifying the guest/host uniquely from within the
guest. Add the following properties to the guest root node:
vm,uuid - UUID of the guest
host-model - Host model number
host-serial - Host machine serial number
hypervisor type - tells that it is "kvm"
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Fix a typo in the names of a couple of functions
(s/resouce/resource/).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Function pointers in the 64-bit ELFv2 PowerPC ABI are actual (internal)
entry point addresses. However, when invoking a function via a function
pointer, GPR 12 must also be set to this address so that the TOC may be
handled properly.
Add this support to the invocation of a signal handler.
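A sketch of the register setup in the signal delivery path (ELFv2 case only;
env, handler_entry and the surrounding code are illustrative):
===
/* ELFv2: a function pointer is the entry address itself, but the callee
 * expects its own address in r12 so it can establish its TOC.
 */
env->nip = handler_entry;
env->gpr[12] = handler_entry;
===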
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Eliminate the stub for the do_setcontext() function for TARGET_PPC64. The
implementation re-uses the existing TARGET_PPC32 code with the only change
being the computation of the address of the register save area.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Properly dereference 64-bit PPC ELF V1 ABI function pointers to signal handlers.
On this platform, function pointers are pointers to structures, and the first 64
bits of such a structure contain the function's entry point. The second 64 bits
contain the TOC pointer, which must be placed into GPR 2.
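The descriptor layout, sketched (the struct name is illustrative):
===
/* 64-bit ELF V1 function descriptor: a function "pointer" points here. */
struct target_func_descriptor {
    uint64_t entry;   /* actual entry point, becomes the new NIP      */
    uint64_t toc;     /* TOC pointer, must be loaded into GPR 2       */
    uint64_t env;     /* environment pointer, unused for signal entry */
};
===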
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Enable the 64-bit PowerPC signal handling code that was previously
disabled via #ifdefs. Specifically:
- Move the target_mcontext (register save area) structure and
append it to the 64-bit target_sigcontext structure. This
provides the space on the stack for saving and restoring
context.
- Define the target_rt_sigframe for 64-bit.
- Adjust the setup_frame and setup_rt_frame routines to properly
select the target_mcontext area and trampoline within the stack
frame; this is different for 32-bit and 64-bit implementations.
- Adjust the do_setcontext stub for 64-bit so that it compiles
without warnings.
The 64-bit signal handling code is still not functional after this
change, but the 32-bit code is. Subsequent changes will address
specific issues with the 64-bit code.
Signed-off-by: Tom Musta <tommusta@gmail.com>
[agraf: fix build on 32bit hosts, ppc64abi32]
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Split the encoding of the PowerPC sigreturn trampoline from the saving of
register state onto the signal handler stack. This will make it easier
in subsequent patches to deal with variations in the stack frame layouts between
32-bit and 64-bit PowerPC.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The code that sets the stack frame back pointer is incorrect for
the setup_rt_frame() code; qemu will abort (SIGSEGV) in some
environments. The setup_frame code was fixed in commit
beb526b12134a6b6744125deec5a7fe24a8f92e3 but the setup_rt_frame
code was not.
Make the setup_rt_frame code consistent with the setup_frame
code.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
A PAPR compliant guest calls this in the absence of kdump. This finally
reaches the guest and can be handled according to the policies set by
higher level tools (like taking a dump) for further analysis by tools like
crash.
The Linux kernel calls ibm,os-term when the extended property of os-term
is set. This makes sure that a return to the Linux kernel is guaranteed.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
[agraf: reduce RTAS_TOKEN_MAX]
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
On PPC we have 2 different styles of KVM: PR and HV. HV can only virtualize
sPAPR guests while PR can virtualize everything that's reasonably close to
the host hardware platform.
As long as only one kernel module (PR or HV) is loaded, the "default" kvm type
is the module that's loaded. So if your hardware only supports PR mode you can
easily spawn a Mac VM.
However, if both HV and PR are loaded we default to HV mode. And in that case
the Mac machines have to explicitly ask for PR mode to get a working VM.
Fix this up by explicitly having the Mac machines ask for PR style KVM. This
fixes bootup of Mac VMs on systems where both HV and PR kvm modules are loaded
for me.
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
staging
sanity check for qxl, minor spice display channel tweak.
# gpg: Signature made Tue 02 Sep 2014 09:53:39 BST using RSA key ID D3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg: aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg: aka "Gerd Hoffmann (private) <kraxel@gmail.com>"
* remotes/spice/tags/pull-spice-20140902-1:
spice: use console index as display id
qxl-render: add more sanity checks
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
QEMU system mode page table walks are expensive. Measured by running
qemu-system-x86_64 system mode under Intel PIN, a TLB miss plus the walk of
a 4-level page table in a guest Linux OS takes ~450 x86 instructions on
average.
QEMU system mode TLB is implemented using a directly-mapped hashtable.
This structure suffers from conflict misses. Increasing the
associativity of the TLB may not be the solution to conflict misses, as
all the ways may have to be walked serially.
A victim TLB is a TLB used to hold translations evicted from the
primary TLB upon replacement. The victim TLB lies between the main TLB
and its refill path. The victim TLB is of greater associativity (fully
associative in this patch). It takes longer to look up the victim TLB,
but it is likely better than a full page table walk. The memory
translation path is changed as follows:
Before Victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. TLB refill.
5. Do the memory access.
6. Return to code cache.
After Victim TLB:
1. Inline TLB lookup
2. Exit code cache on TLB miss.
3. Check for unaligned, IO accesses
4. Victim TLB lookup.
5. If victim TLB misses, TLB refill
6. Do the memory access.
7. Return to code cache
The advantage is that victim TLB can offer more associativity to a
directly mapped TLB and thus potentially fewer page table walks while
still keeping the time taken to flush within reasonable limits.
However, placing a victim TLB before the refill path lengthens that path,
as the victim TLB is consulted before the TLB refill. The
performance results demonstrate that the pros outweigh the cons.
Some performance results, taken on SPECINT2006 train
datasets, a kernel boot, and the qemu configure script on an
Intel(R) Xeon(R) CPU E5620 @ 2.40GHz Linux machine, are shown in the
Google Doc linked below.
https://docs.google.com/spreadsheets/d/1eiItzekZwNQOal_h-5iJmC4tMDi051m9qidi5_nwvH4/edit?usp=sharing
In summary, the victim TLB improves the performance of qemu-system-x86_64 by
11% on average on SPECINT2006, kernel boot and the qemu configure script, with
the highest improvement of 26% in 456.hmmer. The victim TLB does not result
in any performance degradation in any of the measured benchmarks. Furthermore,
the implemented victim TLB is architecture independent and is expected to
benefit other architectures in QEMU as well.
Although there are measurement fluctuations, the performance
improvement is significant and by no means within the range of
noise.
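Conceptually, the added step 4 behaves like the following simplified,
self-contained sketch (not QEMU's actual cputlb.c data structures):
===
#include <stdbool.h>
#include <stdint.h>

#define VTLB_SIZE 8   /* fully associative victim TLB, for illustration */

typedef struct { uint64_t vaddr_tag; uintptr_t addend; } TLBEntry;

static TLBEntry vtlb[VTLB_SIZE];

/* On a primary-TLB miss, scan the victim TLB; on a hit, swap the entry
 * back into the primary slot so the next access hits inline, and skip
 * the page table walk entirely.
 */
static bool victim_tlb_hit(TLBEntry *primary_slot, uint64_t tag)
{
    for (int i = 0; i < VTLB_SIZE; i++) {
        if (vtlb[i].vaddr_tag == tag) {
            TLBEntry tmp = *primary_slot;    /* evicted entry -> victim TLB */
            *primary_slot = vtlb[i];         /* victim entry  -> primary    */
            vtlb[i] = tmp;
            return true;                     /* no refill needed */
        }
    }
    return false;                            /* fall through to TLB refill */
}
===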
Signed-off-by: Xin Tong <trent.tong@gmail.com>
Message-id: 1407202523-23553-1-git-send-email-trent.tong@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|