path: root/accel
2019-03-22  trace-events: Consistently point to docs/devel/tracing.txt  (Markus Armbruster; 2 files, -2/+2)
Almost all trace-events point to docs/devel/tracing.txt in a comment right at the beginning. Touch up the ones that don't.
[Updated with Markus' new commit description wording. --Stefan]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190314180929.27722-2-armbru@redhat.com
Message-Id: <20190314180929.27722-2-armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2019-03-11  qdev: Fix latent bug with compat_props and onboard devices  (Markus Armbruster; 1 file, -0/+1)
Compatibility properties started life as a qdev property thing: we supported them only for qdev properties, and implemented them with the machinery backing command line option -global.

Recent commit fa0cb34d221 put them to use (tacitly) with memory backend objects (subtypes of TYPE_MEMORY_BACKEND). To make that possible, we first moved the work of applying them from the -global machinery into TYPE_DEVICE's .instance_post_init() method device_post_init(), in commits ea9ce8934c5 and b66bbee39f6, then made it available to TYPE_MEMORY_BACKEND's .instance_post_init() method host_memory_backend_post_init() as object_apply_compat_props(), in commit 1c3994f6d2a.

Note the code smell: we now have a function whose name starts with object_ in hw/core/qdev.c. It has to be there rather than in qom/, because it calls qdev_get_machine() to find the current accelerator's and machine's compat_props.

Turns out calling qdev_get_machine() there is problematic. If we qdev_create() from a machine's .instance_init() method, we call device_post_init() and thus qdev_get_machine() before main() can create "/machine" in QOM. qdev_get_machine() tries to get it with container_get(), which "helpfully" creates it as a "container" object, and returns that. object_apply_compat_props() tries to paper over the problem by doing nothing when the value of qdev_get_machine() isn't a TYPE_MACHINE. But the damage is done already: when main() later attempts to create the real "/machine", it fails with "attempt to add duplicate property 'machine' to object (type 'container')", and aborts.

Since no machine .instance_init() calls qdev_create() so far, the bug is latent. But since I want to do that, I get to fix the bug first.

Observe that object_apply_compat_props() doesn't actually need the MachineState, only the compat_props members of its MachineClass and AccelClass. This permits a simple fix: register MachineClass and AccelClass compat_props with the object_apply_compat_props() machinery right after these classes get selected.

This is actually similar to how things worked before commits ea9ce8934c5 and b66bbee39f6, except we now register much earlier. The old code registered them only after the machine's .instance_init() ran, which would've broken compatibility properties for any devices created there.

Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20190308131445.17502-2-armbru@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

2019-03-11  accel: Allow to build QEMU without TCG or KVM support  (Anthony PERARD; 1 file, -1/+3)
Instead of refusing to build QEMU without a default accelerator, simply report an error when the user hasn't passed -accel or -machine accel= and neither TCG nor KVM is built in. ./configure already checks that at least one accelerator is available.
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2019-03-05  hw/boards: Add a MachineState parameter to kvm_type callback  (Eric Auger; 1 file, -1/+1)
On ARM, the kvm_type will be resolved by querying the KVMState. Let's add the MachineState handle to the callback so that we can retrieve the KVMState handle: in kvm_init(), when the callback is called, the kvm_state variable is not yet set.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20190304101339.25970-5-eric.auger@redhat.com
[ppc parts]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2019-02-14  kvm: Add kvm_set_ioeventfd* traces  (Dr. David Alan Gilbert; 2 files, -0/+5)
Add a couple of traces around the kvm_set_ioeventfd* calls.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20190212134758.10514-4-dgilbert@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>

2019-02-11  cputlb: update TLB entry/index after tlb_fill  (Emilio G. Cota; 2 files, -0/+12)
We are failing to take into account that tlb_fill() can cause a TLB resize, which renders prior TLB entry pointers/indices stale. Fix it by re-doing the TLB entry lookups immediately after tlb_fill.
Fixes: 86e1eff8bc ("tcg: introduce dynamic TLB sizing", 2019-01-28)
Reported-by: Max Filippov <jcmvbkbc@gmail.com>
Tested-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20190209162745.12668-3-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

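For illustration, a minimal sketch of the pattern this fix enforces, using the tlb_index()/tlb_entry() helpers from the same series; the surrounding store-helper details and exact signatures are assumptions, not the commit's literal diff:

    /* Sketch only: tlb_fill() may trigger a TLB resize, which
     * reallocates the table, so any pointer or index computed
     * before the call is stale afterwards. */
    uintptr_t index = tlb_index(env, mmu_idx, addr);
    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);

    if (!tlb_hit(tlb_addr_write(entry), addr)) {
        tlb_fill(ENV_GET_CPU(env), addr, size, MMU_DATA_STORE,
                 mmu_idx, retaddr);
        /* Re-do both lookups: the resize invalidated them. */
        index = tlb_index(env, mmu_idx, addr);
        entry = tlb_entry(env, mmu_idx, addr);
    }
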
2019-02-07  Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20190206' into staging  (Peter Maydell; 1 file, -3/+0)
Queued accel/tcg patches
# gpg: Signature made Wed 06 Feb 2019 03:42:52 GMT
# gpg: using RSA key 64DF38E8AF7E215F
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-tcg-20190206:
  accel/tcg: Consider cluster index in tb_lookup__cpu_state()
  tcg: add early clober modifier in atomic16_cmpxchg on aarch64
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2019-02-06  accel/tcg: Consider cluster index in tb_lookup__cpu_state()  (Peter Maydell; 1 file, -3/+0)
In commit f7b78602fdc6c6e4be we added the CPU cluster number to the cflags field of the TB hash; this included adding it to the value kept in tb->cflags, since we pass that field directly into the hash calculation in some places. Unfortunately we forgot to check whether other parts of the code were doing comparisons against tb->cflags that would need to be updated.

It turns out that there is exactly one such place: the tb_lookup__cpu_state() function checks whether the TB it has found in the tb_jmp_cache has a tb->cflags matching the cf_mask that is passed in. The tb->cflags has the cluster_index in it but the cf_mask does not.

Hoist the "add cluster index to the cf_mask" code up from tb_htable_lookup() to tb_lookup__cpu_state() so it can be considered in the "did this TB match in the jmp cache" condition, as well as when we do the full hash lookup by physical PC, flags, etc. (tb_htable_lookup() is only called from tb_lookup__cpu_state(), so this change doesn't require any further knock-on changes.)

Fixes: f7b78602fdc6c6e4be ("accel/tcg: Add cluster number to TCG TB hash")
Tested-by: Cleber Rosa <crosa@redhat.com>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reported-by: Howard Spoelstra <hsp.cat7@gmail.com>
Reported-by: Cleber Rosa <crosa@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20190205151810.571-1-peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

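A hedged sketch of the hoisted logic; the CF_* names and field layout follow the surrounding series and should be treated as assumptions:

    /* Fold the CPU's cluster index into cf_mask up front, so both the
     * tb_jmp_cache comparison and the full hash lookup see it. */
    cf_mask |= (uint32_t)cpu->cluster_index << CF_CLUSTER_SHIFT;

    tb = atomic_rcu_read(&cpu->tb_jmp_cache[hash]);
    if (likely(tb &&
               tb->pc == pc &&
               tb->cs_base == cs_base &&
               tb->flags == flags &&
               (tb_cflags(tb) & (CF_HASH_MASK | CF_INVALID)) == cf_mask)) {
        return tb;                                   /* jump-cache hit */
    }
    return tb_htable_lookup(cpu, pc, cs_base, flags, cf_mask); /* slow path */
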
2019-02-05  cpu-exec: reset BQL after longjmp in cpu_exec_step_atomic  (Emilio G. Cota; 1 file, -0/+3)
Just like we do in cpu_exec().
Reported-by: Max Filippov <jcmvbkbc@gmail.com>
Tested-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2019-02-05  cpu-exec: add assert_no_pages_locked() after longjmp  (Emilio G. Cota; 1 file, -0/+1)
We forgot to add this check in faa9372c07 ("translate-all: introduce assert_no_pages_locked", 2018-06-15); we only added it after returning from a longjmp in cpu_exec_step_atomic. Fix it.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2019-01-30  tcg: Fix LGPL version number  (Thomas Huth; 9 files, -9/+9)
It's either "GNU *Library* General Public License version 2" or "GNU Lesser General Public License version *2.1*", but there was no "version 2.0" of the "Lesser" license. So assume that version 2.1 is meant here.
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1548252536-6242-5-git-send-email-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>

2019-01-29  accel/tcg: Add cluster number to TCG TB hash  (Peter Maydell; 2 files, -0/+6)
Include the cluster number in the hash we use to look up TBs. This is important because a TB that is valid for one cluster at a given physical address and set of CPU flags is not necessarily valid for another: the two clusters may have different views of physical memory, or may have different CPU features (e.g. FPU present or absent).

We put the cluster number in the high 8 bits of the TB cflags. This gives us up to 256 clusters, which should be enough for anybody. If we ever need more, or need more bits in cflags for other purposes, we could make tb_hash_func() take more data (and expand qemu_xxhash7() to qemu_xxhash8()).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20190121152218.9592-4-peter.maydell@linaro.org

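A sketch of the encoding described above; the macro names mirror the series but are assumptions here:

    /* Top 8 bits of tb->cflags hold the cluster index: up to 256
     * clusters, and the index participates in the TB hash. */
    #define CF_CLUSTER_SHIFT  24
    #define CF_CLUSTER_MASK   (0xffu << CF_CLUSTER_SHIFT)

    cflags |= (uint32_t)cpu->cluster_index << CF_CLUSTER_SHIFT;
    hash = tb_hash_func(phys_pc, pc, flags, cflags & CF_HASH_MASK,
                        trace_vcpu_dstate);
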
2019-01-29  accel/tcg/user-exec: Don't parse aarch64 insns to test for read vs write  (Peter Maydell; 1 file, -14/+52)
In cpu_signal_handler() for aarch64 hosts, currently we parse the faulting instruction to see if it is a load or a store. Since the 3.16 kernel (~2014), the kernel has provided us with the syndrome register for a fault, which includes the WnR bit. Use this instead if it is present, only falling back to instruction parsing if not.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108180014.32386-1-peter.maydell@linaro.org

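A hedged sketch of the preferred path, assuming the Linux >= 3.16 aarch64 signal-frame layout from asm/sigcontext.h; the fallback to instruction parsing is elided and the helper name is invented for illustration:

    #include <signal.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <asm/sigcontext.h>  /* _aarch64_ctx, esr_context, ESR_MAGIC */

    /* Walk the extension records appended to the aarch64 signal frame;
     * if the kernel supplied the fault's ESR, bit 6 (WnR) tells us
     * whether the faulting access was a write. */
    static bool fault_was_write(const ucontext_t *uc, bool *valid)
    {
        const struct _aarch64_ctx *hdr =
            (const struct _aarch64_ctx *)&uc->uc_mcontext.__reserved;

        *valid = false;
        while (hdr->magic) {
            if (hdr->magic == ESR_MAGIC) {
                uint64_t esr = ((const struct esr_context *)hdr)->esr;
                *valid = true;
                return (esr >> 6) & 1;               /* WnR bit */
            }
            hdr = (const struct _aarch64_ctx *)((const char *)hdr + hdr->size);
        }
        return false;  /* no ESR record: caller parses the insn instead */
    }
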
2019-01-28  cputlb: Remove static tlb sizing  (Richard Henderson; 1 file, -21/+0)
Now that all tcg backends support TCG_TARGET_IMPLEMENTS_DYN_TLB, remove the define and the old code.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-01-28  tcg: introduce dynamic TLB sizing  (Emilio G. Cota; 1 file, -5/+197)
Disabled in all TCG backends for now.
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20190116170114.26802-3-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-01-28  cputlb: do not evict empty entries to the vtlb  (Emilio G. Cota; 1 file, -1/+10)
Currently we evict an entry to the victim TLB when it doesn't match the current address. But it could be that there's no match because the current entry is empty (i.e. all -1's, for instance via tlb_flush). Do not evict the entry to the vtlb in that case.

This change will help us keep track of the TLB's use rate, which we'll use to implement a policy for dynamic TLB sizing.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20190116170114.26802-2-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

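A minimal sketch of the added guard; the helper and field names are illustrative, not necessarily the commit's:

    /* An entry cleared by tlb_flush() has all address fields set to -1;
     * copying it into the victim TLB would only displace live entries. */
    static inline bool tlb_entry_is_empty(const CPUTLBEntry *te)
    {
        return te->addr_read == -1 && te->addr_write == -1 &&
               te->addr_code == -1;
    }

    /* ... in tlb_set_page_with_attrs(), before evicting to the vtlb: */
    if (!tlb_entry_is_empty(te)) {
        copy_tlb_helper_locked(&env->tlb_v_table[mmu_idx][vidx], te);
    }
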
2019-01-28  tcg: Add opcodes for vector minmax arithmetic  (Richard Henderson; 2 files, -0/+244)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-01-28  tcg: Add gvec expanders for nand, nor, eqv  (Richard Henderson; 2 files, -0/+36)
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-01-11  qemu/queue.h: leave head structs anonymous unless necessary  (Paolo Bonzini; 1 file, -2/+2)
Most list head structs need not be given a name. In most cases the name is given just in case one is going to use QTAILQ_LAST, QTAILQ_PREV or reverse iteration, but this does not apply to lists of other kinds, and even for QTAILQ in practice this is only rarely needed. In addition, we will soon reimplement those macros completely so that they do not need a name for the head struct. So clean up everything, not giving a name except in the rare case where it is necessary.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

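For illustration, the before/after shape of such a declaration (the element type is an example, not from this commit):

    #include "qemu/queue.h"

    struct Dev { QTAILQ_ENTRY(Dev) next; };

    /* Named head struct: only needed for QTAILQ_LAST, QTAILQ_PREV
     * or reverse iteration. */
    QTAILQ_HEAD(DevHead, Dev) named_devs;

    /* Anonymous head struct: enough for the common forward-only case. */
    QTAILQ_HEAD(, Dev) devs;
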
2019-01-11  build-sys: don't include windows.h, osdep.h does it  (Marc-André Lureau; 1 file, -4/+0)
osdep.h will also define the available Windows API version for QEMU.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20181122110039.15972-2-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2019-01-11  accel: Improve selection of the default accelerator  (Thomas Huth; 1 file, -3/+15)
When compiling with "--disable-tcg", we currently still use "tcg" as default accelerator. "kvm" should be used in this case instead. Also, some downstream distros provide QEMU binaries which have "kvm" in their names (e.g. "qemu-kvm" on RHEL or "kvm" on Ubuntu) that use KVM by default - and some users might want to do something similar with upstream binaries, too. Accommodate them by using "kvm:tcg" as default when we detect such a binary name.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1538748792-19444-1-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

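A hedged sketch of the selection logic described above; the helper name and the exact preference strings are assumptions:

    #include <string.h>

    /* Binaries named like "qemu-kvm" or "kvm" prefer KVM with TCG as
     * fallback; otherwise prefer TCG when it is compiled in, and KVM
     * when it is not. */
    static const char *default_accel(const char *progname)
    {
        if (strstr(progname, "kvm")) {
            return "kvm:tcg";
        }
    #ifdef CONFIG_TCG
        return "tcg";
    #else
        return "kvm";
    #endif
    }
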
2019-01-07  hw: apply accel compat properties without touching globals  (Marc-André Lureau; 1 file, -12/+0)
Instead of registering compat properties as globals, let's keep them in their own array, to avoid mixing with user globals. Introduce the object_apply_global_props() function, to apply compatibility properties from a GPtrArray.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>

2018-12-26  tcg: Add RISC-V cpu signal handler  (Alistair Francis; 1 file, -0/+75)
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <c445175310fa836b61fd862a55628907f0093194.1545246859.git.alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-12-11  accel: register global_props like machine globals  (Marc-André Lureau; 1 file, -1/+8)
global_props is only used for Xen xen_compat_props. It's a static array of GlobalProperty, like machine globals in SET_MACHINE_COMPAT(). Let's register the globals the same way, without extra copy allocation.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20181204142023.15982-5-marcandre.lureau@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

2018-10-31  cputlb: Remove tlb_c.pending_flushes  (Richard Henderson; 1 file, -14/+2)
This is essentially redundant with tlb_c.dirty.
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-31  cputlb: Filter flushes on already clean tlbs  (Richard Henderson; 1 file, -10/+25)
Especially for guests with large numbers of tlbs, like ARM or PPC, we may well not use all of them in between flush operations. Remember which tlbs have been used since the last flush, and avoid any useless flushing.
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

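A sketch of the filtering idea, assuming the tlb_c.dirty bitmask this series introduces (one bit per mmu_idx) and QEMU's ctz32() helper; field names are taken from the series and may not match exactly:

    /* Flush only the mmu_idx TLBs touched since the last flush. */
    uint16_t to_flush = asked & env->tlb_c.dirty;  /* asked: mmu_idx mask */
    env->tlb_c.dirty &= ~to_flush;

    while (to_flush) {
        int mmu_idx = ctz32(to_flush);             /* lowest set bit */
        to_flush &= to_flush - 1;
        memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0]));
        memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0]));
    }
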
2018-10-31  cputlb: Count "partial" and "elided" tlb flushes  (Richard Henderson; 2 files, -7/+19)
Our only statistic so far was "full" tlb flushes, where all mmu_idx are flushed at the same time. Now count "partial" tlb flushes where sets of mmu_idx are flushed, but the set is not maximal. Account one per mmu_idx flushed, as that is the unit of work performed. We don't actually count elided flushes yet, but go ahead and change the interface presented to the monitor all at once.
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-31  cputlb: Merge tlb_flush_page into tlb_flush_page_by_mmuidx  (Richard Henderson; 1 file, -46/+12)
The difference between the two sets of APIs is now minuscule.
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-31  cputlb: Merge tlb_flush_nocheck into tlb_flush_by_mmuidx_async_work  (Richard Henderson; 1 file, -72/+21)
The difference between the two sets of APIs is now minuscule. This allows tlb_flush, tlb_flush_all_cpus, and tlb_flush_all_cpus_synced to be merged with their corresponding by_mmuidx functions as well. For accounting, consider mmu_idx_bitmask = ALL_MMUIDX_BITS to be a full flush.
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-31  cputlb: Move env->vtlb_index to env->tlb_d.vindex  (Richard Henderson; 1 file, -3/+2)
The rest of the tlb victim cache is per-tlb; the next-use index should be as well.
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-31  cputlb: Split large page tracking per mmu_idx  (Richard Henderson; 1 file, -77/+61)
The set of large pages in the kernel is probably not the same as the set of large pages in the application. Forcing one range to cover both will flush more often than necessary.

This allows tlb_flush_page_async_work to flush just the one mmu_idx implicated, which in turn allows us to remove tlb_check_page_and_flush_by_mmuidx_async_work.

Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-31  cputlb: Move cpu->pending_tlb_flush to env->tlb_c.pending_flush  (Richard Henderson; 1 file, -12/+23)
Protect it with the tlb_lock instead of using atomics. The move puts it in or near the same cacheline as the lock; using the lock means we don't need a second atomic operation in order to perform the update, which makes it cheap to also update pending_flush in tlb_flush_by_mmuidx_async_work.
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-31  cputlb: Remove tcg_enabled hack from tlb_flush_nocheck  (Richard Henderson; 1 file, -7/+0)
The bugs this was working around were fixed with commits
  022d6378c7fd  target/unicore32: remove tlb_flush from uc32_init_fn
  6e11beecfde0  target/alpha: remove tlb_flush from alpha_cpu_initfn
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-31  cputlb: Move tlb_lock to CPUTLBCommon  (Richard Henderson; 1 file, -24/+24)
This is the first of several moves to reduce the size of the CPU_COMMON_TLB macro and improve some locality of reference.
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-19  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell; 1 file, -3/+53)
* RTC fixes (Artem)
* icount fixes (Artem)
* rr fixes (Pavel, myself)
* hotplug cleanup (Igor)
* SCSI fixes (myself)
* 4.20-rc1 KVM header update (myself)
* coalesced PIO support (Peng Hao)
* HVF fixes (Roman B.)
* Hyper-V refactoring (Roman K.)
* Support for Hyper-V IPI (Vitaly)
# gpg: Signature made Fri 19 Oct 2018 12:47:58 BST
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (47 commits)
  replay: pass raw icount value to replay_save_clock
  target/i386: kvm: just return after migrate_add_blocker failed
  hyperv_testdev: add SynIC message and event testmodes
  hyperv: process POST_MESSAGE hypercall
  hyperv: add support for KVM_HYPERV_EVENTFD
  hyperv: process SIGNAL_EVENT hypercall
  hyperv: add synic event flag signaling
  hyperv: add synic message delivery
  hyperv: make overlay pages for SynIC
  hyperv: only add SynIC in compatible configurations
  hyperv: qom-ify SynIC
  hyperv:synic: split capability testing and setting
  i386: add hyperv-stub for CONFIG_HYPERV=n
  default-configs: collect CONFIG_HYPERV* in hyperv.mak
  hyperv: factor out arch-independent API into hw/hyperv
  hyperv: make hyperv_vp_index inline
  hyperv: split hyperv-proto.h into x86 and arch-independent parts
  hyperv: rename kvm_hv_sint_route_set_sint
  hyperv: make HvSintRoute reference-counted
  hyperv: address HvSintRoute by X86CPU pointer
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-10-19  target-i386 : add coalesced_pio API  (Peng Hao; 1 file, -3/+53)
The primary API realization.
Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1539795177-21038-3-git-send-email-peng.hao2@zte.com.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2018-10-18  cputlb: read CPUTLBEntry.addr_write atomically  (Emilio G. Cota; 2 files, -12/+19)
Updates can come from other threads, so readers that do not take tlb_lock must use atomic_read to avoid undefined behaviour (UB). This completes the conversion to tlb_lock. This conversion results on average in no performance loss, as the following experiments (run on an Intel i7-6700K CPU @ 4.00GHz) show.

1. aarch64 bootup+shutdown test:
- Before: Performance counter stats for 'taskset -c 0 ../img/aarch64/die.sh' (10 runs):
    7487.087786 task-clock (msec)      # 0.998 CPUs utilized          ( +- 0.12% )
    31,574,905,303 cycles              # 4.217 GHz                    ( +- 0.12% )
    57,097,908,812 instructions        # 1.81 insns per cycle         ( +- 0.08% )
    10,255,415,367 branches            # 1369.747 M/sec               ( +- 0.08% )
    173,278,962 branch-misses          # 1.69% of all branches        ( +- 0.18% )
    7.504481349 seconds time elapsed                                  ( +- 0.14% )
- After: Performance counter stats for 'taskset -c 0 ../img/aarch64/die.sh' (10 runs):
    7462.441328 task-clock (msec)      # 0.998 CPUs utilized          ( +- 0.07% )
    31,478,476,520 cycles              # 4.218 GHz                    ( +- 0.07% )
    57,017,330,084 instructions        # 1.81 insns per cycle         ( +- 0.05% )
    10,251,929,667 branches            # 1373.804 M/sec               ( +- 0.05% )
    173,023,787 branch-misses          # 1.69% of all branches        ( +- 0.11% )
    7.474970463 seconds time elapsed                                  ( +- 0.07% )

2. SPEC06int (test set), speedup over master:
[per-benchmark ASCII bar chart comparing tlb-lock-v2 and tlb-lock-v3 across 400.perlbench through 483.xalancbmk plus geomean]
png: https://imgur.com/a/BHzpPTW
Notes:
- tlb-lock-v2 corresponds to an implementation with a mutex.
- tlb-lock-v3 corresponds to the current implementation, i.e. a spinlock and a single lock acquisition in tlb_set_page_with_attrs.

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20181016153840.25877-1-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

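A sketch of the reader-side rule this commit completes; tlb_hit() and the atomic_read() wrapper are used as in that era's tree, and the surrounding helper context is elided:

    /* addr_write may be stored concurrently by another vCPU holding
     * tlb_lock; a plain load of a concurrently written field is UB,
     * so lock-free readers must load it atomically. */
    target_ulong tlb_addr = atomic_read(&tlbe->addr_write);

    if (tlb_hit(tlb_addr, addr)) {
        /* fast path: the entry covers this write */
    } else {
        /* slow path: victim TLB lookup or tlb_fill() */
    }
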
2018-10-18  tcg: Split CONFIG_ATOMIC128  (Richard Henderson; 3 files, -7/+21)
GCC7+ will no longer advertise support for 16-byte __atomic operations if only cmpxchg is supported, as for x86_64. Fortunately, x86_64 still has support for __sync_compare_and_swap_16 and we can make use of that. AArch64 does not have, nor ever has had such support, so open-code it.
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-18  tcg: Add tlb_index and tlb_entry helpers  (Richard Henderson; 2 files, -63/+61)
Isolate the computation of an index from an address into a helper before we change that function.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[ cota: convert tlb_vaddr_to_host; use atomic_read on addr_write ]
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20181009175129.17888-2-cota@braap.org>

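The helpers' likely shape against the fixed-size table of that era; a sketch, with signatures assumed from the series:

    static inline uintptr_t tlb_index(CPUArchState *env, uintptr_t mmu_idx,
                                      target_ulong addr)
    {
        return (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
    }

    static inline CPUTLBEntry *tlb_entry(CPUArchState *env, uintptr_t mmu_idx,
                                         target_ulong addr)
    {
        return &env->tlb_table[mmu_idx][tlb_index(env, mmu_idx, addr)];
    }
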
2018-10-18  cputlb: serialize tlb updates with env->tlb_lock  (Emilio G. Cota; 1 file, -71/+84)
Currently we rely on atomic operations for cross-CPU invalidations. There are two cases that these atomics miss: cross-CPU invalidations can race with either (1) vCPU threads flushing their TLB, which happens via memset, or (2) vCPUs calling tlb_reset_dirty on their TLB, which updates .addr_write with a regular store. This results in undefined behaviour, since we're mixing regular and atomic ops on concurrent accesses.

Fix it by using tlb_lock, a per-vCPU lock. All updaters of tlb_table and the corresponding victim cache now hold the lock. The readers that do not hold tlb_lock must use atomic reads when reading .addr_write, since this field can be updated by other threads; the conversion to atomic reads is done in the next patch.

Note that an alternative fix would be to expand the use of atomic ops. However, in the case of TLB flushes this would have a huge performance impact, since (1) TLB flushes can happen very frequently and (2) we currently use a full memory barrier to flush each TLB entry, and a TLB has many entries. Instead, acquiring the lock is barely slower than a full memory barrier since it is uncontended, and with a single lock acquisition we can flush the entire TLB.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20181009174557.16125-6-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

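A minimal sketch of the updater-side discipline, assuming the per-vCPU spinlock is named env->tlb_lock as in the commit:

    /* All writers of tlb_table and the victim cache take the lock; one
     * uncontended acquisition covers an entire flush, instead of one
     * memory barrier per entry. */
    qemu_spin_lock(&env->tlb_lock);
    memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0]));
    memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0]));
    qemu_spin_unlock(&env->tlb_lock);
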
2018-10-18  cputlb: fix assert_cpu_is_self macro  (Emilio G. Cota; 1 file, -2/+2)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20181009174557.16125-5-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-18  exec: introduce tlb_init  (Emilio G. Cota; 1 file, -0/+4)
Paves the way for the addition of a per-TLB lock.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20181009174557.16125-4-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-18  tcg: access cpu->icount_decr.u16.high with atomics  (Emilio G. Cota; 2 files, -2/+2)
Consistently access u16.high with atomics to avoid undefined behaviour in MTTCG. Note that icount_decr.u16.low is only used in icount mode, so regular accesses to it are OK.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20181010144853.13005-2-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

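A sketch of the access pattern; atomic_set()/atomic_read() are the tree's wrappers of that era, and the surrounding loop is elided:

    /* Writer: another thread asks the vCPU to leave the TCG loop. */
    atomic_set(&cpu->icount_decr.u16.high, -1);

    /* Reader: the vCPU thread polls for the request. */
    if (atomic_read(&cpu->icount_decr.u16.high) != 0) {
        /* take the exit from translated code */
    }
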
2018-10-18  tcg: Implement CPU_LOG_TB_NOCHAIN during expansion  (Richard Henderson; 1 file, -1/+1)
Rather than test NOCHAIN before linking, do not emit the goto_tb opcode at all. We already do this for goto_ptr.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-10-02  accel/tcg: Remove dead code  (Thomas Huth; 1 file, -9/+0)
The global cpu_single_env variable was removed more than 5 years ago, so apparently nobody has used this dead debug code in that timeframe. Thus let's remove it completely now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1537204134-15905-1-git-send-email-thuth@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2018-10-02  translator: fix breakpoint processing  (Pavel Dovgalyuk; 1 file, -2/+6)
QEMU cannot pass through the breakpoints when the 'si' command is used in remote gdb. This patch disables inserting the breakpoints when we are already single-stepping through the gdb remote protocol. This patch also fixes icount calculation for the blocks that include breakpoints - an instruction with a breakpoint is not executed and shouldn't be used in icount calculation.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Message-Id: <20180912081910.3228.8523.stgit@pasha-VirtualBox>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2018-09-26  qht: drop ht argument from qht iterators  (Emilio G. Cota; 1 file, -4/+2)
Accessing the HT from an iterator results almost always in a deadlock. Given that only one qht-internal function uses this argument, drop it from the interface.
Suggested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2018-08-23  KVM: cleanup unnecessary #ifdef KVM_CAP_...  (Paolo Bonzini; 1 file, -2/+0)
The capability macros are always defined, since they come from kernel headers that are copied into the QEMU tree. Remove the unnecessary #ifdefs.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2018-08-17  kvm: Use inhibit to prevent ballooning without synchronous mmu  (Alex Williamson; 1 file, -0/+4)
Remove KVM specific tests in balloon_page(), instead marking ballooning as inhibited without KVM_CAP_SYNC_MMU support.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>

2018-08-14  accel/tcg: Check whether TLB entry is RAM consistently with how we set it up  (Peter Maydell; 1 file, -21/+8)
We set up TLB entries in tlb_set_page_with_attrs(), where we have some logic for determining whether the TLB entry is considered to be RAM-backed, and thus has a valid addend field. When we look at the TLB entry in get_page_addr_code(), we use different logic for determining whether to treat the page as RAM-backed and use the addend field.

This is confusing, and in fact buggy, because the code in tlb_set_page_with_attrs() correctly decides that rom_device memory regions not in romd mode are not RAM-backed, but the code in get_page_addr_code() thinks they are RAM-backed. This typically results in a "Bad ram pointer" assertion if the guest tries to execute from such a memory region.

Fix this by making get_page_addr_code() just look at the TLB_MMIO bit in the code_address field of the TLB, which tlb_set_page_with_attrs() sets if and only if the addend field is not valid for code execution.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180713150945.12348-1-peter.maydell@linaro.org

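A hedged sketch of the fixed check in get_page_addr_code(); the surrounding lookup and exact return handling are elided:

    /* Trust the TLB_MMIO bit computed once in tlb_set_page_with_attrs():
     * it is set if and only if the addend is invalid for execution. */
    if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_MMIO)) {
        return -1;  /* not RAM-backed: cannot fetch code from host memory */
    }
    /* RAM-backed: addend converts the guest address to a host pointer. */
    return addr + env->tlb_table[mmu_idx][index].addend;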
We set up TLB entries in tlb_set_page_with_attrs(), where we have some logic for determining whether the TLB entry is considered to be RAM-backed, and thus has a valid addend field. When we look at the TLB entry in get_page_addr_code(), we use different logic for determining whether to treat the page as RAM-backed and use the addend field. This is confusing, and in fact buggy, because the code in tlb_set_page_with_attrs() correctly decides that rom_device memory regions not in romd mode are not RAM-backed, but the code in get_page_addr_code() thinks they are RAM-backed. This typically results in "Bad ram pointer" assertion if the guest tries to execute from such a memory region. Fix this by making get_page_addr_code() just look at the TLB_MMIO bit in the code_address field of the TLB, which tlb_set_page_with_attrs() sets if and only if the addend field is not valid for code execution. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180713150945.12348-1-peter.maydell@linaro.org