author    Søren Sandmann Pedersen <ssp@redhat.com>  2012-10-29 00:04:38 -0400
committer Søren Sandmann Pedersen <ssp@redhat.com>  2012-10-29 00:04:38 -0400
commit    71033b4e5a43958c201ac25d55d16266e70e1d7b (patch)
tree      e235e09298c6ca258b842fcb9d248769e46c83e1 /portability
parent    c8b2c39f78b0389d6d1bc605c44fcf415660c164 (diff)
16 bit; 8 bit; other things
Diffstat (limited to 'portability')
-rw-r--r--  portability  25
1 file changed, 25 insertions, 0 deletions
diff --git a/portability b/portability
index fcb4736..263c9a8 100644
--- a/portability
+++ b/portability
@@ -31,3 +31,28 @@
locate labels based on their names
fix up jumps by calling out to 'patch'
+
+
+Notes about x86:
+
+How to call a statically known function (e.g., memcpy() or malloc())
+from JIT-compiled code:
+
+- In 64 bit mode:
+
+ The JIT compiled code can end up more than 4GB away from the
+ function, which means a standard call with 32-bit offset is not good
+ enough. Instead we have to do "call RIP_REL(label)", where label
+ refers to a place where the address of the function in question is
+ stored.
+
+  It might be possible to detect at compile time whether the final
+  generated code would be less than 4GB away, which would allow a call
+  rel32 to be used.  It gets a little tricky, though, because the
+  rel32 form has a different length, and in general we don't know the
+  final address of the JIT-compiled code until later.
+
+- In 32 bit mode:
+
+ This is easy: Just do <call rel32> and have the linker fix it up
+ before generating the final code.