..
   Copyright (c) 2017 Linaro Limited
   Written by Peter Maydell

===================
Load and Store APIs
===================

QEMU internally has multiple families of functions for performing
loads and stores. This document attempts to enumerate them all
and indicate when to use them. It does not provide detailed
documentation of each API -- for that you should look at the
documentation comments in the relevant header files.
``ld*_p and st*_p``
~~~~~~~~~~~~~~~~~~~
These functions operate on a host pointer, and should be used
when you already have a pointer into host memory (corresponding
to guest ram or a local buffer). They deal with doing accesses
with the desired endianness and with correctly handling
potentially unaligned pointer values.
Function names follow the pattern:
load: ``ld{type}{sign}{size}_{endian}_p(ptr)``
store: ``st{type}{size}_{endian}_p(ptr, val)``
``type``
- (empty) : integer access
- ``f`` : float access
``sign``
- (empty) : for 32 or 64 bit sizes (including floats and doubles)
- ``u`` : unsigned
- ``s`` : signed
``size``
- ``b`` : 8 bits
- ``w`` : 16 bits
- ``l`` : 32 bits
- ``q`` : 64 bits
``endian``
- ``he`` : host endian
- ``be`` : big endian
- ``le`` : little endian
The ``_{endian}`` infix is omitted for target-endian accesses.
The target endian accessors are only available to source
files which are built per-target.
Regexes for git grep
- ``\<ldf\?[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
- ``\<stf\?[bwlq]\(_[hbl]e\)\?_p\>``
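
For example, code that already has a host pointer into a guest-visible
buffer might do this (an illustrative sketch; the buffer contents and
offsets are invented):

.. code-block:: c

    uint8_t buf[8] = { 0x78, 0x56, 0x34, 0x12 };

    /* Read a 32-bit little-endian value; safe even for unaligned pointers */
    uint32_t val = ldl_le_p(buf);      /* 0x12345678 on any host */

    /* Write it back in big-endian byte order at offset 4 */
    stl_be_p(&buf[4], val);
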
``cpu_{ld,st}_*``
~~~~~~~~~~~~~~~~~
These functions operate on a guest virtual address. Be aware
that these functions may cause a guest CPU exception to be
taken (e.g. for an alignment fault or MMU fault) which will
result in guest CPU state being updated and control longjumping
out of the function call. They should therefore only be used
in code that is implementing emulation of the target CPU.
These functions may throw an exception (longjmp() back out
to the top level TCG loop). This means they must only be used
from helper functions where the translator has saved all
necessary CPU state before generating the helper function call.
It's usually better to use the ``_ra`` variants described below
from helper functions, but these functions are the right choice
for calls made from hooks like the CPU do_interrupt hook or
when you know for certain that the translator had to save all
the CPU state that ``cpu_restore_state()`` would restore anyway.
Function names follow the pattern:
load: ``cpu_ld{sign}{size}_{mmusuffix}(env, ptr)``
store: ``cpu_st{size}_{mmusuffix}(env, ptr, val)``
``sign``
- (empty) : for 32 or 64 bit sizes
- ``u`` : unsigned
- ``s`` : signed
``size``
- ``b`` : 8 bits
- ``w`` : 16 bits
- ``l`` : 32 bits
- ``q`` : 64 bits
``mmusuffix`` is one of the generic suffixes ``data`` or ``code``, or
(for softmmu configs) a target-specific MMU mode suffix as defined
in the target's ``cpu.h``.
Regexes for git grep
- ``\<cpu_ld[us]\?[bwlq]_[a-zA-Z0-9]\+\>``
- ``\<cpu_st[bwlq]_[a-zA-Z0-9]\+\>``
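
For example, a target's interrupt-delivery code might fetch a vector
table entry with a data-access load (a sketch only; the variable names
are invented and the real code will be target-specific):

.. code-block:: c

    /* In a hook such as do_interrupt, where the translator has already
     * saved the CPU state we care about: 32-bit load from a guest
     * virtual address.
     */
    uint32_t entry = cpu_ldl_data(env, vector_base + 4 * excp_index);
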
``cpu_{ld,st}_*_ra``
~~~~~~~~~~~~~~~~~~~~
These functions work like the ``cpu_{ld,st}_*`` functions except
that they also take a ``retaddr`` argument. This extra argument
allows for correct unwinding of any exception that is taken,
and should generally be the result of GETPC() called directly
from the top level HELPER(foo) function (i.e. the return address
in the generated code).
These are generally the preferred way to do accesses by guest
virtual address from helper functions; see the documentation
of the non-``_ra`` variants for when those would be better.
Calling these functions with a ``retaddr`` argument of 0 is
equivalent to calling the non-``_ra`` version of the function.
Function names follow the pattern:
load: ``cpu_ld{sign}{size}_{mmusuffix}_ra(env, ptr, retaddr)``
store: ``cpu_st{size}_{mmusuffix}_ra(env, ptr, val, retaddr)``
Regexes for git grep
- ``\<cpu_ld[us]\?[bwlq]_[a-zA-Z0-9]\+_ra\>``
- ``\<cpu_st[bwlq]_[a-zA-Z0-9]\+_ra\>``
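
For example, a TCG helper might perform a guest-virtual-address load
like this (a minimal sketch; the helper name is hypothetical and the
``env`` type will be the target's CPU state structure):

.. code-block:: c

    uint64_t HELPER(my_load64)(CPUArchState *env, target_ulong addr)
    {
        /* GETPC() must be called directly from the top level helper so
         * that a fault taken by the load unwinds to the right place.
         */
        return cpu_ldq_data_ra(env, addr, GETPC());
    }
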
``helper_*_{ld,st}*mmu``
~~~~~~~~~~~~~~~~~~~~~~~~
These functions are intended primarily to be called by the code
generated by the TCG backend. They may also be called by target
CPU helper function code. Like the ``cpu_{ld,st}_*_ra`` functions
they perform accesses by guest virtual address; the difference is
that these functions allow you to specify an ``opindex`` parameter
which encodes (among other things) the mmu index to use for the
access. This is necessary if your helper needs to make an access
via a specific mmu index (for instance, an "always as non-privileged"
access) rather than using the default mmu index for the current state
of the guest CPU.
The ``opindex`` parameter should be created by calling ``make_memop_idx()``.
The ``retaddr`` parameter should be the result of GETPC() called directly
from the top level HELPER(foo) function (or 0 if no guest CPU state
unwinding is required).
**TODO** The names of these functions are a bit odd for historical
reasons because they were originally expected to be called only from
within generated code. We should rename them to bring them
more in line with the other memory access functions.
load: ``helper_{endian}_ld{sign}{size}_mmu(env, addr, opindex, retaddr)``
load (code): ``helper_{endian}_ld{sign}{size}_cmmu(env, addr, opindex, retaddr)``
store: ``helper_{endian}_st{size}_mmu(env, addr, val, opindex, retaddr)``
``sign``
- (empty) : for 32 or 64 bit sizes
- ``u`` : unsigned
- ``s`` : signed
``size``
- ``b`` : 8 bits
- ``w`` : 16 bits
- ``l`` : 32 bits
- ``q`` : 64 bits
``endian``
- ``le`` : little endian
- ``be`` : big endian
- ``ret`` : target endianness
Regexes for git grep
- ``\<helper_\(le\|be\|ret\)_ld[us]\?[bwlq]_c\?mmu\>``
- ``\<helper_\(le\|be\|ret\)_st[bwlq]_mmu\>``
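
For example, a helper that must perform an access with an explicitly
chosen mmu index might do something like this (a sketch; ``mmu_idx``
and ``addr`` are stand-ins, and the exact spelling of the op-index
types has varied between QEMU versions):

.. code-block:: c

    /* 32-bit little-endian load using a specific mmu index rather than
     * the CPU's current default.
     */
    uint32_t val = helper_le_ldul_mmu(env, addr,
                                      make_memop_idx(MO_LEUL, mmu_idx),
                                      GETPC());
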
``address_space_*``
~~~~~~~~~~~~~~~~~~~
These functions are the primary ones to use when emulating CPU
or device memory accesses. They take an AddressSpace, which is the
way QEMU defines the view of memory that a device or CPU has.
(They generally correspond to being the "master" end of a hardware bus
or bus fabric.)
Each CPU has an AddressSpace. Some kinds of CPU have more than
one AddressSpace (for instance ARM guest CPUs have an AddressSpace
for the Secure world and one for NonSecure if they implement TrustZone).
Devices which can do DMA-type operations should generally have an
AddressSpace. There is also a "system address space" which typically
has all the devices and memory that all CPUs can see. (Some older
device models use the "system address space" rather than properly
modelling that they have an AddressSpace of their own.)
Functions are provided for doing byte-buffer reads and writes,
and also for doing one-data-item loads and stores.
In all cases the caller provides a MemTxAttrs to specify bus
transaction attributes, and can check whether the memory transaction
succeeded using a MemTxResult return code.
``address_space_read(address_space, addr, attrs, buf, len)``
``address_space_write(address_space, addr, attrs, buf, len)``
``address_space_rw(address_space, addr, attrs, buf, len, is_write)``
``address_space_ld{sign}{size}_{endian}(address_space, addr, attrs, txresult)``
``address_space_st{size}_{endian}(address_space, addr, val, attrs, txresult)``
``sign``
- (empty) : for 32 or 64 bit sizes
- ``u`` : unsigned
(No signed load operations are provided.)
``size``
- ``b`` : 8 bits
- ``w`` : 16 bits
- ``l`` : 32 bits
- ``q`` : 64 bits
``endian``
- ``le`` : little endian
- ``be`` : big endian
The ``_{endian}`` suffix is omitted for byte accesses.
Regexes for git grep
- ``\<address_space_\(read\|write\|rw\)\>``
- ``\<address_space_ldu\?[bwql]\(_[lb]e\)\?\>``
- ``\<address_space_st[bwql]\(_[lb]e\)\?\>``
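
For example, a DMA-capable device model might read a 32-bit descriptor
word and check whether the bus transaction succeeded (an illustrative
sketch; ``s->dma_as`` and ``desc_addr`` are invented names):

.. code-block:: c

    MemTxResult res;
    uint32_t desc = address_space_ldl_le(&s->dma_as, desc_addr,
                                         MEMTXATTRS_UNSPECIFIED, &res);
    if (res != MEMTX_OK) {
        /* the transaction failed: handle it in a device-specific way */
    }
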
``{ld,st}*_phys``
~~~~~~~~~~~~~~~~~
These functions are identical to
``address_space_{ld,st}*``, except that they always pass
``MEMTXATTRS_UNSPECIFIED`` for the transaction attributes, and ignore
whether the transaction succeeded or failed.
The fact that they ignore whether the transaction succeeded means
they should not be used in new code, unless you know for certain
that your code will only be used in a context where the CPU or
device doing the access has no way to report such an error.
load: ``ld{sign}{size}_{endian}_phys``
store: ``st{size}_{endian}_phys``
``sign``
- (empty) : for 32 or 64 bit sizes
- ``u`` : unsigned
(No signed load operations are provided.)
``size``
- ``b`` : 8 bits
- ``w`` : 16 bits
- ``l`` : 32 bits
- ``q`` : 64 bits
``endian``
- ``le`` : little endian
- ``be`` : big endian
The ``_{endian}_`` infix is omitted for byte accesses.
Regexes for git grep
- ``\<ldu\?[bwlq]\(_[bl]e\)\?_phys\>``
- ``\<st[bwlq]\(_[bl]e\)\?_phys\>``
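
For example (legacy-style code; note that any transaction failure is
silently ignored, which is why new code should normally prefer the
``address_space_*`` equivalents):

.. code-block:: c

    /* 32-bit little-endian read from the system address space by
     * physical address, with no error reporting.
     */
    uint32_t val = ldl_le_phys(&address_space_memory, paddr);
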
``cpu_physical_memory_*``
~~~~~~~~~~~~~~~~~~~~~~~~~
These are convenience functions which are identical to
``address_space_*`` but operate specifically on the system address space,
always pass a ``MEMTXATTRS_UNSPECIFIED`` set of memory attributes and
ignore whether the memory transaction succeeded or failed.
For new code they are better avoided:
* there is likely to be behaviour you need to model correctly for a
failed read or write operation
* a device should usually perform operations on its own AddressSpace
rather than using the system address space
``cpu_physical_memory_read``
``cpu_physical_memory_write``
``cpu_physical_memory_rw``
Regexes for git grep
- ``\<cpu_physical_memory_\(read\|write\|rw\)\>``
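
They look like this in existing code (shown for recognition rather than
as a model for new code; ``paddr`` is an invented name):

.. code-block:: c

    uint8_t buf[4];

    /* Read 4 bytes from the system address space; errors are ignored */
    cpu_physical_memory_read(paddr, buf, sizeof(buf));
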
``cpu_physical_memory_write_rom``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This function performs a write by physical address like
``address_space_write``, except that if the write is to a ROM then
the ROM contents will be modified, even though a write by the guest
CPU to the ROM would be ignored.
Note that unlike ``cpu_physical_memory_write()`` this function takes
an AddressSpace argument, but unlike ``address_space_write()`` this
function does not take a ``MemTxAttrs`` or return a ``MemTxResult``.
**TODO**: we should probably clean up this inconsistency and
turn the function into ``address_space_write_rom`` with an API
matching ``address_space_write``.
``cpu_physical_memory_write_rom``
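
A typical use is loading a firmware or boot image into memory that the
guest sees as ROM (an illustrative sketch; ``rom_base``, ``blob`` and
``blob_len`` are invented names):

.. code-block:: c

    cpu_physical_memory_write_rom(&address_space_memory, rom_base,
                                  blob, blob_len);
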
``cpu_memory_rw_debug``
~~~~~~~~~~~~~~~~~~~~~~~
Access CPU memory by virtual address for debug purposes.
This function is intended for use by the GDB stub and similar code.
It takes a virtual address, converts it to a physical address via
an MMU lookup using the current settings of the specified CPU,
and then performs the access (using ``address_space_rw`` for
reads or ``cpu_physical_memory_write_rom`` for writes).
This means that if the access is a write to a ROM then this
function will modify the contents (whereas a normal guest CPU access
would ignore the write attempt).
``cpu_memory_rw_debug``
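
For example, the kind of call the GDB stub makes looks like this
(a sketch; ``vaddr`` is an invented name):

.. code-block:: c

    uint8_t buf[16];

    /* Read 16 bytes from the guest virtual address space of 'cpu';
     * the final argument selects read (0) or write (non-zero).
     */
    if (cpu_memory_rw_debug(cpu, vaddr, buf, sizeof(buf), 0) < 0) {
        /* the address could not be translated or accessed */
    }
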
``dma_memory_*``
~~~~~~~~~~~~~~~~
These behave like ``address_space_*``, except that they perform a DMA
barrier operation first.
**TODO**: We should provide guidance on when you need the DMA
barrier operation and when it's OK to use ``address_space_*``, and
make sure our existing code is doing things correctly.
``dma_memory_read``
``dma_memory_write``
``dma_memory_rw``
Regexes for git grep
- ``\<dma_memory_\(read\|write\|rw\)\>``
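
A typical call looks like this (a sketch; ``as``, ``dma_addr`` and
``buf`` are invented names, and the exact signature of these functions
has changed over time):

.. code-block:: c

    /* DMA read into a local buffer via the device's address space */
    dma_memory_read(as, dma_addr, buf, len);
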
``pci_dma_*`` and ``{ld,st}*_pci_dma``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These functions are specifically for PCI device models which need to
perform accesses where the PCI device is a bus master. You pass them a
``PCIDevice *`` and they will do ``dma_memory_*`` operations on the
correct address space for that device.
``pci_dma_read``
``pci_dma_write``
``pci_dma_rw``
load: ``ld{sign}{size}_{endian}_pci_dma``
store: ``st{size}_{endian}_pci_dma``
``sign``
- (empty) : for 32 or 64 bit sizes
- ``u`` : unsigned
(No signed load operations are provided.)
``size``
- ``b`` : 8 bits
- ``w`` : 16 bits
- ``l`` : 32 bits
- ``q`` : 64 bits
``endian``
- ``le`` : little endian
- ``be`` : big endian
The ``_{endian}_`` infix is omitted for byte accesses.
Regexes for git grep
- ``\<pci_dma_\(read\|write\|rw\)\>``
- ``\<ldu\?[bwlq]\(_[bl]e\)\?_pci_dma\>``
- ``\<st[bwlq]\(_[bl]e\)\?_pci_dma\>``
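
For example, a PCI device model doing bus-master DMA might do this
(an illustrative sketch; the descriptor structure and variable names
are invented):

.. code-block:: c

    /* 'd' is the PCIDevice; read a descriptor into a local struct */
    pci_dma_read(d, desc_addr, &desc, sizeof(desc));

    /* or load a single 32-bit little-endian value directly */
    uint32_t status = ldl_le_pci_dma(d, status_addr);
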