author | Bill Wendling <isanbard@gmail.com> | 2012-08-02 08:49:53 +0000 |
---|---|---|
committer | Bill Wendling <isanbard@gmail.com> | 2012-08-02 08:49:53 +0000 |
commit | 31ce7bf6beeafaee40a4db2d2eac695976c79fc9 (patch) | |
tree | 69038219ae9c42115f408f5fb90a27fc5668c126 /docs | |
parent | 1c3781496081b47412fc70393bcdc5b67b440b02 (diff) | |
Sphinxify the Code Generator document.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161164 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'docs')
-rw-r--r-- | docs/CodeGenerator.html | 3190 |
-rw-r--r-- | docs/CodeGenerator.rst | 2428 |
-rw-r--r-- | docs/subsystems.rst | 7 |
3 files changed, 2432 insertions, 3193 deletions
diff --git a/docs/CodeGenerator.html b/docs/CodeGenerator.html deleted file mode 100644 index 651eb96603..0000000000 --- a/docs/CodeGenerator.html +++ /dev/null @@ -1,3190 +0,0 @@ -<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" - "http://www.w3.org/TR/html4/strict.dtd"> -<html> -<head> - <meta http-equiv="content-type" content="text/html; charset=utf-8"> - <title>The LLVM Target-Independent Code Generator</title> - <link rel="stylesheet" href="_static/llvm.css" type="text/css"> - - <style type="text/css"> - .unknown { background-color: #C0C0C0; text-align: center; } - .unknown:before { content: "?" } - .no { background-color: #C11B17 } - .no:before { content: "N" } - .partial { background-color: #F88017 } - .yes { background-color: #0F0; } - .yes:before { content: "Y" } - </style> - -</head> -<body> - -<h1> - The LLVM Target-Independent Code Generator -</h1> - -<ol> - <li><a href="#introduction">Introduction</a> - <ul> - <li><a href="#required">Required components in the code generator</a></li> - <li><a href="#high-level-design">The high-level design of the code - generator</a></li> - <li><a href="#tablegen">Using TableGen for target description</a></li> - </ul> - </li> - <li><a href="#targetdesc">Target description classes</a> - <ul> - <li><a href="#targetmachine">The <tt>TargetMachine</tt> class</a></li> - <li><a href="#targetdata">The <tt>TargetData</tt> class</a></li> - <li><a href="#targetlowering">The <tt>TargetLowering</tt> class</a></li> - <li><a href="#targetregisterinfo">The <tt>TargetRegisterInfo</tt> class</a></li> - <li><a href="#targetinstrinfo">The <tt>TargetInstrInfo</tt> class</a></li> - <li><a href="#targetframeinfo">The <tt>TargetFrameInfo</tt> class</a></li> - <li><a href="#targetsubtarget">The <tt>TargetSubtarget</tt> class</a></li> - <li><a href="#targetjitinfo">The <tt>TargetJITInfo</tt> class</a></li> - </ul> - </li> - <li><a href="#codegendesc">The "Machine" Code Generator classes</a> - <ul> - <li><a href="#machineinstr">The <tt>MachineInstr</tt> class</a></li> - <li><a href="#machinebasicblock">The <tt>MachineBasicBlock</tt> - class</a></li> - <li><a href="#machinefunction">The <tt>MachineFunction</tt> class</a></li> - <li><a href="#machineinstrbundle"><tt>MachineInstr Bundles</tt></a></li> - </ul> - </li> - <li><a href="#mc">The "MC" Layer</a> - <ul> - <li><a href="#mcstreamer">The <tt>MCStreamer</tt> API</a></li> - <li><a href="#mccontext">The <tt>MCContext</tt> class</a> - <li><a href="#mcsymbol">The <tt>MCSymbol</tt> class</a></li> - <li><a href="#mcsection">The <tt>MCSection</tt> class</a></li> - <li><a href="#mcinst">The <tt>MCInst</tt> class</a></li> - </ul> - </li> - <li><a href="#codegenalgs">Target-independent code generation algorithms</a> - <ul> - <li><a href="#instselect">Instruction Selection</a> - <ul> - <li><a href="#selectiondag_intro">Introduction to SelectionDAGs</a></li> - <li><a href="#selectiondag_process">SelectionDAG Code Generation - Process</a></li> - <li><a href="#selectiondag_build">Initial SelectionDAG - Construction</a></li> - <li><a href="#selectiondag_legalize_types">SelectionDAG LegalizeTypes Phase</a></li> - <li><a href="#selectiondag_legalize">SelectionDAG Legalize Phase</a></li> - <li><a href="#selectiondag_optimize">SelectionDAG Optimization - Phase: the DAG Combiner</a></li> - <li><a href="#selectiondag_select">SelectionDAG Select Phase</a></li> - <li><a href="#selectiondag_sched">SelectionDAG Scheduling and Formation - Phase</a></li> - <li><a href="#selectiondag_future">Future directions for the - SelectionDAG</a></li> - 
</ul></li> - <li><a href="#liveintervals">Live Intervals</a> - <ul> - <li><a href="#livevariable_analysis">Live Variable Analysis</a></li> - <li><a href="#liveintervals_analysis">Live Intervals Analysis</a></li> - </ul></li> - <li><a href="#regalloc">Register Allocation</a> - <ul> - <li><a href="#regAlloc_represent">How registers are represented in - LLVM</a></li> - <li><a href="#regAlloc_howTo">Mapping virtual registers to physical - registers</a></li> - <li><a href="#regAlloc_twoAddr">Handling two address instructions</a></li> - <li><a href="#regAlloc_ssaDecon">The SSA deconstruction phase</a></li> - <li><a href="#regAlloc_fold">Instruction folding</a></li> - <li><a href="#regAlloc_builtIn">Built in register allocators</a></li> - </ul></li> - <li><a href="#codeemit">Code Emission</a></li> - <li><a href="#vliw_packetizer">VLIW Packetizer</a> - <ul> - <li><a href="#vliw_mapping">Mapping from instructions to functional - units</a></li> - <li><a href="#vliw_repr">How the packetization tables are - generated and used</a></li> - </ul> - </li> - </ul> - </li> - <li><a href="#nativeassembler">Implementing a Native Assembler</a></li> - - <li><a href="#targetimpls">Target-specific Implementation Notes</a> - <ul> - <li><a href="#targetfeatures">Target Feature Matrix</a></li> - <li><a href="#tailcallopt">Tail call optimization</a></li> - <li><a href="#sibcallopt">Sibling call optimization</a></li> - <li><a href="#x86">The X86 backend</a></li> - <li><a href="#ppc">The PowerPC backend</a> - <ul> - <li><a href="#ppc_abi">LLVM PowerPC ABI</a></li> - <li><a href="#ppc_frame">Frame Layout</a></li> - <li><a href="#ppc_prolog">Prolog/Epilog</a></li> - <li><a href="#ppc_dynamic">Dynamic Allocation</a></li> - </ul></li> - <li><a href="#ptx">The PTX backend</a></li> - </ul></li> - -</ol> - -<div class="doc_author"> - <p>Written by the LLVM Team.</p> -</div> - -<div class="doc_warning"> - <p>Warning: This is a work in progress.</p> -</div> - -<!-- *********************************************************************** --> -<h2> - <a name="introduction">Introduction</a> -</h2> -<!-- *********************************************************************** --> - -<div> - -<p>The LLVM target-independent code generator is a framework that provides a - suite of reusable components for translating the LLVM internal representation - to the machine code for a specified target—either in assembly form - (suitable for a static compiler) or in binary machine code format (usable for - a JIT compiler). The LLVM target-independent code generator consists of six - main components:</p> - -<ol> - <li><a href="#targetdesc">Abstract target description</a> interfaces which - capture important properties about various aspects of the machine, - independently of how they will be used. These interfaces are defined in - <tt>include/llvm/Target/</tt>.</li> - - <li>Classes used to represent the <a href="#codegendesc">code being - generated</a> for a target. These classes are intended to be abstract - enough to represent the machine code for <i>any</i> target machine. These - classes are defined in <tt>include/llvm/CodeGen/</tt>. At this level, - concepts like "constant pool entries" and "jump tables" are explicitly - exposed.</li> - - <li>Classes and algorithms used to represent code as the object file level, - the <a href="#mc">MC Layer</a>. These classes represent assembly level - constructs like labels, sections, and instructions. 
At this level, - concepts like "constant pool entries" and "jump tables" don't exist.</li> - - <li><a href="#codegenalgs">Target-independent algorithms</a> used to implement - various phases of native code generation (register allocation, scheduling, - stack frame representation, etc). This code lives - in <tt>lib/CodeGen/</tt>.</li> - - <li><a href="#targetimpls">Implementations of the abstract target description - interfaces</a> for particular targets. These machine descriptions make - use of the components provided by LLVM, and can optionally provide custom - target-specific passes, to build complete code generators for a specific - target. Target descriptions live in <tt>lib/Target/</tt>.</li> - - <li><a href="#jit">The target-independent JIT components</a>. The LLVM JIT is - completely target independent (it uses the <tt>TargetJITInfo</tt> - structure to interface for target-specific issues. The code for the - target-independent JIT lives in <tt>lib/ExecutionEngine/JIT</tt>.</li> -</ol> - -<p>Depending on which part of the code generator you are interested in working - on, different pieces of this will be useful to you. In any case, you should - be familiar with the <a href="#targetdesc">target description</a> - and <a href="#codegendesc">machine code representation</a> classes. If you - want to add a backend for a new target, you will need - to <a href="#targetimpls">implement the target description</a> classes for - your new target and understand the <a href="LangRef.html">LLVM code - representation</a>. If you are interested in implementing a - new <a href="#codegenalgs">code generation algorithm</a>, it should only - depend on the target-description and machine code representation classes, - ensuring that it is portable.</p> - -<!-- ======================================================================= --> -<h3> - <a name="required">Required components in the code generator</a> -</h3> - -<div> - -<p>The two pieces of the LLVM code generator are the high-level interface to the - code generator and the set of reusable components that can be used to build - target-specific backends. The two most important interfaces - (<a href="#targetmachine"><tt>TargetMachine</tt></a> - and <a href="#targetdata"><tt>TargetData</tt></a>) are the only ones that are - required to be defined for a backend to fit into the LLVM system, but the - others must be defined if the reusable code generator components are going to - be used.</p> - -<p>This design has two important implications. The first is that LLVM can - support completely non-traditional code generation targets. For example, the - C backend does not require register allocation, instruction selection, or any - of the other standard components provided by the system. As such, it only - implements these two interfaces, and does its own thing. Note that C backend - was removed from the trunk since LLVM 3.1 release. Another example of - a code generator like this is a (purely hypothetical) backend that converts - LLVM to the GCC RTL form and uses GCC to emit machine code for a target.</p> - -<p>This design also implies that it is possible to design and implement - radically different code generators in the LLVM system that do not make use - of any of the built-in components. 
Doing so is not recommended at all, but - could be required for radically different targets that do not fit into the - LLVM machine description model: FPGAs for example.</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="high-level-design">The high-level design of the code generator</a> -</h3> - -<div> - -<p>The LLVM target-independent code generator is designed to support efficient - and quality code generation for standard register-based microprocessors. - Code generation in this model is divided into the following stages:</p> - -<ol> - <li><b><a href="#instselect">Instruction Selection</a></b> — This phase - determines an efficient way to express the input LLVM code in the target - instruction set. This stage produces the initial code for the program in - the target instruction set, then makes use of virtual registers in SSA - form and physical registers that represent any required register - assignments due to target constraints or calling conventions. This step - turns the LLVM code into a DAG of target instructions.</li> - - <li><b><a href="#selectiondag_sched">Scheduling and Formation</a></b> — - This phase takes the DAG of target instructions produced by the - instruction selection phase, determines an ordering of the instructions, - then emits the instructions - as <tt><a href="#machineinstr">MachineInstr</a></tt>s with that ordering. - Note that we describe this in the <a href="#instselect">instruction - selection section</a> because it operates on - a <a href="#selectiondag_intro">SelectionDAG</a>.</li> - - <li><b><a href="#ssamco">SSA-based Machine Code Optimizations</a></b> — - This optional stage consists of a series of machine-code optimizations - that operate on the SSA-form produced by the instruction selector. - Optimizations like modulo-scheduling or peephole optimization work - here.</li> - - <li><b><a href="#regalloc">Register Allocation</a></b> — The target code - is transformed from an infinite virtual register file in SSA form to the - concrete register file used by the target. This phase introduces spill - code and eliminates all virtual register references from the program.</li> - - <li><b><a href="#proepicode">Prolog/Epilog Code Insertion</a></b> — Once - the machine code has been generated for the function and the amount of - stack space required is known (used for LLVM alloca's and spill slots), - the prolog and epilog code for the function can be inserted and "abstract - stack location references" can be eliminated. This stage is responsible - for implementing optimizations like frame-pointer elimination and stack - packing.</li> - - <li><b><a href="#latemco">Late Machine Code Optimizations</a></b> — - Optimizations that operate on "final" machine code can go here, such as - spill code scheduling and peephole optimizations.</li> - - <li><b><a href="#codeemit">Code Emission</a></b> — The final stage - actually puts out the code for the current function, either in the target - assembler format or in machine code.</li> -</ol> - -<p>The code generator is based on the assumption that the instruction selector - will use an optimal pattern matching selector to create high-quality - sequences of native instructions. Alternative code generator designs based - on pattern expansion and aggressive iterative peephole optimization are much - slower. 
This design permits efficient compilation (important for JIT - environments) and aggressive optimization (used when generating code offline) - by allowing components of varying levels of sophistication to be used for any - step of compilation.</p> - -<p>In addition to these stages, target implementations can insert arbitrary - target-specific passes into the flow. For example, the X86 target uses a - special pass to handle the 80x87 floating point stack architecture. Other - targets with unusual requirements can be supported with custom passes as - needed.</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="tablegen">Using TableGen for target description</a> -</h3> - -<div> - -<p>The target description classes require a detailed description of the target - architecture. These target descriptions often have a large amount of common - information (e.g., an <tt>add</tt> instruction is almost identical to a - <tt>sub</tt> instruction). In order to allow the maximum amount of - commonality to be factored out, the LLVM code generator uses - the <a href="TableGenFundamentals.html">TableGen</a> tool to describe big - chunks of the target machine, which allows the use of domain-specific and - target-specific abstractions to reduce the amount of repetition.</p> - -<p>As LLVM continues to be developed and refined, we plan to move more and more - of the target description to the <tt>.td</tt> form. Doing so gives us a - number of advantages. The most important is that it makes it easier to port - LLVM because it reduces the amount of C++ code that has to be written, and - the surface area of the code generator that needs to be understood before - someone can get something working. Second, it makes it easier to change - things. In particular, if tables and other things are all emitted - by <tt>tblgen</tt>, we only need a change in one place (<tt>tblgen</tt>) to - update all of the targets to a new interface.</p> - -</div> - -</div> - -<!-- *********************************************************************** --> -<h2> - <a name="targetdesc">Target description classes</a> -</h2> -<!-- *********************************************************************** --> - -<div> - -<p>The LLVM target description classes (located in the - <tt>include/llvm/Target</tt> directory) provide an abstract description of - the target machine independent of any particular client. These classes are - designed to capture the <i>abstract</i> properties of the target (such as the - instructions and registers it has), and do not incorporate any particular - pieces of code generation algorithms.</p> - -<p>All of the target description classes (except the - <tt><a href="#targetdata">TargetData</a></tt> class) are designed to be - subclassed by the concrete target implementation, and have virtual methods - implemented. To get to these implementations, the - <tt><a href="#targetmachine">TargetMachine</a></tt> class provides accessors - that should be implemented by the target.</p> - -<!-- ======================================================================= --> -<h3> - <a name="targetmachine">The <tt>TargetMachine</tt> class</a> -</h3> - -<div> - -<p>The <tt>TargetMachine</tt> class provides virtual methods that are used to - access the target-specific implementations of the various target description - classes via the <tt>get*Info</tt> methods (<tt>getInstrInfo</tt>, - <tt>getRegisterInfo</tt>, <tt>getFrameInfo</tt>, etc.). 
This class is - designed to be specialized by a concrete target implementation - (e.g., <tt>X86TargetMachine</tt>) which implements the various virtual - methods. The only required target description class is - the <a href="#targetdata"><tt>TargetData</tt></a> class, but if the code - generator components are to be used, the other interfaces should be - implemented as well.</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="targetdata">The <tt>TargetData</tt> class</a> -</h3> - -<div> - -<p>The <tt>TargetData</tt> class is the only required target description class, - and it is the only class that is not extensible (you cannot derived a new - class from it). <tt>TargetData</tt> specifies information about how the - target lays out memory for structures, the alignment requirements for various - data types, the size of pointers in the target, and whether the target is - little-endian or big-endian.</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="targetlowering">The <tt>TargetLowering</tt> class</a> -</h3> - -<div> - -<p>The <tt>TargetLowering</tt> class is used by SelectionDAG based instruction - selectors primarily to describe how LLVM code should be lowered to - SelectionDAG operations. Among other things, this class indicates:</p> - -<ul> - <li>an initial register class to use for various <tt>ValueType</tt>s,</li> - - <li>which operations are natively supported by the target machine,</li> - - <li>the return type of <tt>setcc</tt> operations,</li> - - <li>the type to use for shift amounts, and</li> - - <li>various high-level characteristics, like whether it is profitable to turn - division by a constant into a multiplication sequence</li> -</ul> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="targetregisterinfo">The <tt>TargetRegisterInfo</tt> class</a> -</h3> - -<div> - -<p>The <tt>TargetRegisterInfo</tt> class is used to describe the register file - of the target and any interactions between the registers.</p> - -<p>Registers in the code generator are represented in the code generator by - unsigned integers. Physical registers (those that actually exist in the - target description) are unique small numbers, and virtual registers are - generally large. Note that register #0 is reserved as a flag value.</p> - -<p>Each register in the processor description has an associated - <tt>TargetRegisterDesc</tt> entry, which provides a textual name for the - register (used for assembly output and debugging dumps) and a set of aliases - (used to indicate whether one register overlaps with another).</p> - -<p>In addition to the per-register description, the <tt>TargetRegisterInfo</tt> - class exposes a set of processor specific register classes (instances of the - <tt>TargetRegisterClass</tt> class). Each register class contains sets of - registers that have the same properties (for example, they are all 32-bit - integer registers). Each SSA virtual register created by the instruction - selector has an associated register class. 
When the register allocator runs, - it replaces virtual registers with a physical register in the set.</p> - -<p>The target-specific implementations of these classes is auto-generated from - a <a href="TableGenFundamentals.html">TableGen</a> description of the - register file.</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="targetinstrinfo">The <tt>TargetInstrInfo</tt> class</a> -</h3> - -<div> - -<p>The <tt>TargetInstrInfo</tt> class is used to describe the machine - instructions supported by the target. It is essentially an array of - <tt>TargetInstrDescriptor</tt> objects, each of which describes one - instruction the target supports. Descriptors define things like the mnemonic - for the opcode, the number of operands, the list of implicit register uses - and defs, whether the instruction has certain target-independent properties - (accesses memory, is commutable, etc), and holds any target-specific - flags.</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="targetframeinfo">The <tt>TargetFrameInfo</tt> class</a> -</h3> - -<div> - -<p>The <tt>TargetFrameInfo</tt> class is used to provide information about the - stack frame layout of the target. It holds the direction of stack growth, the - known stack alignment on entry to each function, and the offset to the local - area. The offset to the local area is the offset from the stack pointer on - function entry to the first location where function data (local variables, - spill locations) can be stored.</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="targetsubtarget">The <tt>TargetSubtarget</tt> class</a> -</h3> - -<div> - -<p>The <tt>TargetSubtarget</tt> class is used to provide information about the - specific chip set being targeted. A sub-target informs code generation of - which instructions are supported, instruction latencies and instruction - execution itinerary; i.e., which processing units are used, in what order, - and for how long.</p> - -</div> - - -<!-- ======================================================================= --> -<h3> - <a name="targetjitinfo">The <tt>TargetJITInfo</tt> class</a> -</h3> - -<div> - -<p>The <tt>TargetJITInfo</tt> class exposes an abstract interface used by the - Just-In-Time code generator to perform target-specific activities, such as - emitting stubs. If a <tt>TargetMachine</tt> supports JIT code generation, it - should provide one of these objects through the <tt>getJITInfo</tt> - method.</p> - -</div> - -</div> - -<!-- *********************************************************************** --> -<h2> - <a name="codegendesc">Machine code description classes</a> -</h2> -<!-- *********************************************************************** --> - -<div> - -<p>At the high-level, LLVM code is translated to a machine specific - representation formed out of - <a href="#machinefunction"><tt>MachineFunction</tt></a>, - <a href="#machinebasicblock"><tt>MachineBasicBlock</tt></a>, - and <a href="#machineinstr"><tt>MachineInstr</tt></a> instances (defined - in <tt>include/llvm/CodeGen</tt>). This representation is completely target - agnostic, representing instructions in their most abstract form: an opcode - and a series of operands. 
This representation is designed to support both an - SSA representation for machine code, as well as a register allocated, non-SSA - form.</p> - -<!-- ======================================================================= --> -<h3> - <a name="machineinstr">The <tt>MachineInstr</tt> class</a> -</h3> - -<div> - -<p>Target machine instructions are represented as instances of the - <tt>MachineInstr</tt> class. This class is an extremely abstract way of - representing machine instructions. In particular, it only keeps track of an - opcode number and a set of operands.</p> - -<p>The opcode number is a simple unsigned integer that only has meaning to a - specific backend. All of the instructions for a target should be defined in - the <tt>*InstrInfo.td</tt> file for the target. The opcode enum values are - auto-generated from this description. The <tt>MachineInstr</tt> class does - not have any information about how to interpret the instruction (i.e., what - the semantics of the instruction are); for that you must refer to the - <tt><a href="#targetinstrinfo">TargetInstrInfo</a></tt> class.</p> - -<p>The operands of a machine instruction can be of several different types: a - register reference, a constant integer, a basic block reference, etc. In - addition, a machine operand should be marked as a def or a use of the value - (though only registers are allowed to be defs).</p> - -<p>By convention, the LLVM code generator orders instruction operands so that - all register definitions come before the register uses, even on architectures - that are normally printed in other orders. For example, the SPARC add - instruction: "<tt>add %i1, %i2, %i3</tt>" adds the "%i1", and "%i2" registers - and stores the result into the "%i3" register. In the LLVM code generator, - the operands should be stored as "<tt>%i3, %i1, %i2</tt>": with the - destination first.</p> - -<p>Keeping destination (definition) operands at the beginning of the operand - list has several advantages. In particular, the debugging printer will print - the instruction like this:</p> - -<div class="doc_code"> -<pre> -%r3 = add %i1, %i2 -</pre> -</div> - -<p>Also if the first operand is a def, it is easier to <a href="#buildmi">create - instructions</a> whose only def is the first operand.</p> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="buildmi">Using the <tt>MachineInstrBuilder.h</tt> functions</a> -</h4> - -<div> - -<p>Machine instructions are created by using the <tt>BuildMI</tt> functions, - located in the <tt>include/llvm/CodeGen/MachineInstrBuilder.h</tt> file. The - <tt>BuildMI</tt> functions make it easy to build arbitrary machine - instructions. Usage of the <tt>BuildMI</tt> functions look like this:</p> - -<div class="doc_code"> -<pre> -// Create a 'DestReg = mov 42' (rendered in X86 assembly as 'mov DestReg, 42') -// instruction. The '1' specifies how many operands will be added. -MachineInstr *MI = BuildMI(X86::MOV32ri, 1, DestReg).addImm(42); - -// Create the same instr, but insert it at the end of a basic block. -MachineBasicBlock &MBB = ... -BuildMI(MBB, X86::MOV32ri, 1, DestReg).addImm(42); - -// Create the same instr, but insert it before a specified iterator point. -MachineBasicBlock::iterator MBBI = ... -BuildMI(MBB, MBBI, X86::MOV32ri, 1, DestReg).addImm(42); - -// Create a 'cmp Reg, 0' instruction, no destination reg. -MI = BuildMI(X86::CMP32ri, 2).addReg(Reg).addImm(0); -// Create an 'sahf' instruction which takes no operands and stores nothing. 
-MI = BuildMI(X86::SAHF, 0); - -// Create a self looping branch instruction. -BuildMI(MBB, X86::JNE, 1).addMBB(&MBB); -</pre> -</div> - -<p>The key thing to remember with the <tt>BuildMI</tt> functions is that you - have to specify the number of operands that the machine instruction will - take. This allows for efficient memory allocation. You also need to specify - if operands default to be uses of values, not definitions. If you need to - add a definition operand (other than the optional destination register), you - must explicitly mark it as such:</p> - -<div class="doc_code"> -<pre> -MI.addReg(Reg, RegState::Define); -</pre> -</div> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="fixedregs">Fixed (preassigned) registers</a> -</h4> - -<div> - -<p>One important issue that the code generator needs to be aware of is the - presence of fixed registers. In particular, there are often places in the - instruction stream where the register allocator <em>must</em> arrange for a - particular value to be in a particular register. This can occur due to - limitations of the instruction set (e.g., the X86 can only do a 32-bit divide - with the <tt>EAX</tt>/<tt>EDX</tt> registers), or external factors like - calling conventions. In any case, the instruction selector should emit code - that copies a virtual register into or out of a physical register when - needed.</p> - -<p>For example, consider this simple LLVM example:</p> - -<div class="doc_code"> -<pre> -define i32 @test(i32 %X, i32 %Y) { - %Z = udiv i32 %X, %Y - ret i32 %Z -} -</pre> -</div> - -<p>The X86 instruction selector produces this machine code for the <tt>div</tt> - and <tt>ret</tt> (use "<tt>llc X.bc -march=x86 -print-machineinstrs</tt>" to - get this):</p> - -<div class="doc_code"> -<pre> -;; Start of div -%EAX = mov %reg1024 ;; Copy X (in reg1024) into EAX -%reg1027 = sar %reg1024, 31 -%EDX = mov %reg1027 ;; Sign extend X into EDX -idiv %reg1025 ;; Divide by Y (in reg1025) -%reg1026 = mov %EAX ;; Read the result (Z) out of EAX - -;; Start of ret -%EAX = mov %reg1026 ;; 32-bit return value goes in EAX -ret -</pre> -</div> - -<p>By the end of code generation, the register allocator has coalesced the - registers and deleted the resultant identity moves producing the following - code:</p> - -<div class="doc_code"> -<pre> -;; X is in EAX, Y is in ECX -mov %EAX, %EDX -sar %EDX, 31 -idiv %ECX -ret -</pre> -</div> - -<p>This approach is extremely general (if it can handle the X86 architecture, it - can handle anything!) and allows all of the target specific knowledge about - the instruction stream to be isolated in the instruction selector. Note that - physical registers should have a short lifetime for good code generation, and - all physical registers are assumed dead on entry to and exit from basic - blocks (before register allocation). Thus, if you need a value to be live - across basic block boundaries, it <em>must</em> live in a virtual - register.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="callclobber">Call-clobbered registers</a> -</h4> - -<div> - -<p>Some machine instructions, like calls, clobber a large number of physical - registers. Rather than adding <code><def,dead></code> operands for - all of them, it is possible to use an <code>MO_RegisterMask</code> operand - instead. 
The register mask operand holds a bit mask of preserved registers, - and everything else is considered to be clobbered by the instruction. </p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="ssa">Machine code in SSA form</a> -</h4> - -<div> - -<p><tt>MachineInstr</tt>'s are initially selected in SSA-form, and are - maintained in SSA-form until register allocation happens. For the most part, - this is trivially simple since LLVM is already in SSA form; LLVM PHI nodes - become machine code PHI nodes, and virtual registers are only allowed to have - a single definition.</p> - -<p>After register allocation, machine code is no longer in SSA-form because - there are no virtual registers left in the code.</p> - -</div> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="machinebasicblock">The <tt>MachineBasicBlock</tt> class</a> -</h3> - -<div> - -<p>The <tt>MachineBasicBlock</tt> class contains a list of machine instructions - (<tt><a href="#machineinstr">MachineInstr</a></tt> instances). It roughly - corresponds to the LLVM code input to the instruction selector, but there can - be a one-to-many mapping (i.e. one LLVM basic block can map to multiple - machine basic blocks). The <tt>MachineBasicBlock</tt> class has a - "<tt>getBasicBlock</tt>" method, which returns the LLVM basic block that it - comes from.</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="machinefunction">The <tt>MachineFunction</tt> class</a> -</h3> - -<div> - -<p>The <tt>MachineFunction</tt> class contains a list of machine basic blocks - (<tt><a href="#machinebasicblock">MachineBasicBlock</a></tt> instances). It - corresponds one-to-one with the LLVM function input to the instruction - selector. In addition to a list of basic blocks, - the <tt>MachineFunction</tt> contains a a <tt>MachineConstantPool</tt>, - a <tt>MachineFrameInfo</tt>, a <tt>MachineFunctionInfo</tt>, and a - <tt>MachineRegisterInfo</tt>. See - <tt>include/llvm/CodeGen/MachineFunction.h</tt> for more information.</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="machineinstrbundle"><tt>MachineInstr Bundles</tt></a> -</h3> - -<div> - -<p>LLVM code generator can model sequences of instructions as MachineInstr - bundles. A MI bundle can model a VLIW group / pack which contains an - arbitrary number of parallel instructions. It can also be used to model - a sequential list of instructions (potentially with data dependencies) that - cannot be legally separated (e.g. ARM Thumb2 IT blocks).</p> - -<p>Conceptually a MI bundle is a MI with a number of other MIs nested within: -</p> - -<div class="doc_code"> -<pre> --------------- -| Bundle | --------- --------------- \ - | ---------------- - | | MI | - | ---------------- - | | - | ---------------- - | | MI | - | ---------------- - | | - | ---------------- - | | MI | - | ---------------- - | --------------- -| Bundle | -------- --------------- \ - | ---------------- - | | MI | - | ---------------- - | | - | ---------------- - | | MI | - | ---------------- - | | - | ... - | --------------- -| Bundle | -------- --------------- \ - | - ... -</pre> -</div> - -<p> MI bundle support does not change the physical representations of - MachineBasicBlock and MachineInstr. All the MIs (including top level and - nested ones) are stored as sequential list of MIs. 
The "bundled" MIs are - marked with the 'InsideBundle' flag. A top level MI with the special BUNDLE - opcode is used to represent the start of a bundle. It's legal to mix BUNDLE - MIs with indiviual MIs that are not inside bundles nor represent bundles. -</p> - -<p> MachineInstr passes should operate on a MI bundle as a single unit. Member - methods have been taught to correctly handle bundles and MIs inside bundles. - The MachineBasicBlock iterator has been modified to skip over bundled MIs to - enforce the bundle-as-a-single-unit concept. An alternative iterator - instr_iterator has been added to MachineBasicBlock to allow passes to - iterate over all of the MIs in a MachineBasicBlock, including those which - are nested inside bundles. The top level BUNDLE instruction must have the - correct set of register MachineOperand's that represent the cumulative - inputs and outputs of the bundled MIs.</p> - -<p> Packing / bundling of MachineInstr's should be done as part of the register - allocation super-pass. More specifically, the pass which determines what - MIs should be bundled together must be done after code generator exits SSA - form (i.e. after two-address pass, PHI elimination, and copy coalescing). - Bundles should only be finalized (i.e. adding BUNDLE MIs and input and - output register MachineOperands) after virtual registers have been - rewritten into physical registers. This requirement eliminates the need to - add virtual register operands to BUNDLE instructions which would effectively - double the virtual register def and use lists.</p> - -</div> - -</div> - -<!-- *********************************************************************** --> -<h2> - <a name="mc">The "MC" Layer</a> -</h2> -<!-- *********************************************************************** --> - -<div> - -<p> -The MC Layer is used to represent and process code at the raw machine code -level, devoid of "high level" information like "constant pools", "jump tables", -"global variables" or anything like that. At this level, LLVM handles things -like label names, machine instructions, and sections in the object file. The -code in this layer is used for a number of important purposes: the tail end of -the code generator uses it to write a .s or .o file, and it is also used by the -llvm-mc tool to implement standalone machine code assemblers and disassemblers. -</p> - -<p> -This section describes some of the important classes. There are also a number -of important subsystems that interact at this layer, they are described later -in this manual. -</p> - -<!-- ======================================================================= --> -<h3> - <a name="mcstreamer">The <tt>MCStreamer</tt> API</a> -</h3> - -<div> - -<p> -MCStreamer is best thought of as an assembler API. It is an abstract API which -is <em>implemented</em> in different ways (e.g. to output a .s file, output an -ELF .o file, etc) but whose API correspond directly to what you see in a .s -file. MCStreamer has one method per directive, such as EmitLabel, -EmitSymbolAttribute, SwitchSection, EmitValue (for .byte, .word), etc, which -directly correspond to assembly level directives. It also has an -EmitInstruction method, which is used to output an MCInst to the streamer. -</p> - -<p> -This API is most important for two clients: the llvm-mc stand-alone assembler is -effectively a parser that parses a line, then invokes a method on MCStreamer. 
In -the code generator, the <a href="#codeemit">Code Emission</a> phase of the code -generator lowers higher level LLVM IR and Machine* constructs down to the MC -layer, emitting directives through MCStreamer.</p> - -<p> -On the implementation side of MCStreamer, there are two major implementations: -one for writing out a .s file (MCAsmStreamer), and one for writing out a .o -file (MCObjectStreamer). MCAsmStreamer is a straight-forward implementation -that prints out a directive for each method (e.g. EmitValue -> .byte), but -MCObjectStreamer implements a full assembler. -</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="mccontext">The <tt>MCContext</tt> class</a> -</h3> - -<div> - -<p> -The MCContext class is the owner of a variety of uniqued data structures at the -MC layer, including symbols, sections, etc. As such, this is the class that you -interact with to create symbols and sections. This class can not be subclassed. -</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="mcsymbol">The <tt>MCSymbol</tt> class</a> -</h3> - -<div> - -<p> -The MCSymbol class represents a symbol (aka label) in the assembly file. There -are two interesting kinds of symbols: assembler temporary symbols, and normal -symbols. Assembler temporary symbols are used and processed by the assembler -but are discarded when the object file is produced. The distinction is usually -represented by adding a prefix to the label, for example "L" labels are -assembler temporary labels in MachO. -</p> - -<p>MCSymbols are created by MCContext and uniqued there. This means that -MCSymbols can be compared for pointer equivalence to find out if they are the -same symbol. Note that pointer inequality does not guarantee the labels will -end up at different addresses though. It's perfectly legal to output something -like this to the .s file:<p> - -<pre> - foo: - bar: - .byte 4 -</pre> - -<p>In this case, both the foo and bar symbols will have the same address.</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="mcsection">The <tt>MCSection</tt> class</a> -</h3> - -<div> - -<p> -The MCSection class represents an object-file specific section. It is subclassed -by object file specific implementations (e.g. <tt>MCSectionMachO</tt>, -<tt>MCSectionCOFF</tt>, <tt>MCSectionELF</tt>) and these are created and uniqued -by MCContext. The MCStreamer has a notion of the current section, which can be -changed with the SwitchToSection method (which corresponds to a ".section" -directive in a .s file). -</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="mcinst">The <tt>MCInst</tt> class</a> -</h3> - -<div> - -<p> -The MCInst class is a target-independent representation of an instruction. It -is a simple class (much more so than <a href="#machineinstr">MachineInstr</a>) -that holds a target-specific opcode and a vector of MCOperands. MCOperand, in -turn, is a simple discriminated union of three cases: 1) a simple immediate, -2) a target register ID, 3) a symbolic expression (e.g. "Lfoo-Lbar+42") as an -MCExpr. -</p> - -<p>MCInst is the common currency used to represent machine instructions at the -MC layer. It is the type used by the instruction encoder, the instruction -printer, and the type generated by the assembly parser and disassembler. 
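As a rough C++ sketch of what this looks like in practice (not part of the original document; the factory-method spellings such as `MCOperand::CreateReg` are the ones used around the LLVM 3.1 era and may differ in other releases, and the concrete opcode/register enum values are assumed to come from a backend's TableGen-generated headers), an MCInst equivalent to the earlier "mov DestReg, 42" example could be assembled by hand like this:

```cpp
// Illustrative sketch only: hand-building an MCInst for something like
// "movl $42, %eax". The opcode and register numbers are passed in so the
// snippet does not depend on the X86 backend's generated enum headers.
#include "llvm/MC/MCInst.h"

llvm::MCInst buildMov42(unsigned MovOpcode, unsigned DestReg) {
  llvm::MCInst Inst;
  Inst.setOpcode(MovOpcode);                            // e.g. X86::MOV32ri
  Inst.addOperand(llvm::MCOperand::CreateReg(DestReg)); // destination register
  Inst.addOperand(llvm::MCOperand::CreateImm(42));      // immediate operand
  return Inst;
}
```

Such an MCInst could then be handed to an MCStreamer's EmitInstruction method or to an instruction printer/encoder, which is exactly how the code emission phase and llvm-mc use this class.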
-</p> - -</div> - -</div> - -<!-- *********************************************************************** --> -<h2> - <a name="codegenalgs">Target-independent code generation algorithms</a> -</h2> -<!-- *********************************************************************** --> - -<div> - -<p>This section documents the phases described in the - <a href="#high-level-design">high-level design of the code generator</a>. - It explains how they work and some of the rationale behind their design.</p> - -<!-- ======================================================================= --> -<h3> - <a name="instselect">Instruction Selection</a> -</h3> - -<div> - -<p>Instruction Selection is the process of translating LLVM code presented to - the code generator into target-specific machine instructions. There are - several well-known ways to do this in the literature. LLVM uses a - SelectionDAG based instruction selector.</p> - -<p>Portions of the DAG instruction selector are generated from the target - description (<tt>*.td</tt>) files. Our goal is for the entire instruction - selector to be generated from these <tt>.td</tt> files, though currently - there are still things that require custom C++ code.</p> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="selectiondag_intro">Introduction to SelectionDAGs</a> -</h4> - -<div> - -<p>The SelectionDAG provides an abstraction for code representation in a way - that is amenable to instruction selection using automatic techniques - (e.g. dynamic-programming based optimal pattern matching selectors). It is - also well-suited to other phases of code generation; in particular, - instruction scheduling (SelectionDAG's are very close to scheduling DAGs - post-selection). Additionally, the SelectionDAG provides a host - representation where a large variety of very-low-level (but - target-independent) <a href="#selectiondag_optimize">optimizations</a> may be - performed; ones which require extensive information about the instructions - efficiently supported by the target.</p> - -<p>The SelectionDAG is a Directed-Acyclic-Graph whose nodes are instances of the - <tt>SDNode</tt> class. The primary payload of the <tt>SDNode</tt> is its - operation code (Opcode) that indicates what operation the node performs and - the operands to the operation. The various operation node types are - described at the top of the <tt>include/llvm/CodeGen/SelectionDAGNodes.h</tt> - file.</p> - -<p>Although most operations define a single value, each node in the graph may - define multiple values. For example, a combined div/rem operation will - define both the dividend and the remainder. Many other situations require - multiple values as well. Each node also has some number of operands, which - are edges to the node defining the used value. Because nodes may define - multiple values, edges are represented by instances of the <tt>SDValue</tt> - class, which is a <tt><SDNode, unsigned></tt> pair, indicating the node - and result value being used, respectively. Each value produced by - an <tt>SDNode</tt> has an associated <tt>MVT</tt> (Machine Value Type) - indicating what the type of the value is.</p> - -<p>SelectionDAGs contain two different kinds of values: those that represent - data flow and those that represent control flow dependencies. Data values - are simple edges with an integer or floating point value type. Control edges - are represented as "chain" edges which are of type <tt>MVT::Other</tt>. 
- These edges provide an ordering between nodes that have side effects (such as - loads, stores, calls, returns, etc). All nodes that have side effects should - take a token chain as input and produce a new one as output. By convention, - token chain inputs are always operand #0, and chain results are always the - last value produced by an operation.</p> - -<p>A SelectionDAG has designated "Entry" and "Root" nodes. The Entry node is - always a marker node with an Opcode of <tt>ISD::EntryToken</tt>. The Root - node is the final side-effecting node in the token chain. For example, in a - single basic block function it would be the return node.</p> - -<p>One important concept for SelectionDAGs is the notion of a "legal" vs. - "illegal" DAG. A legal DAG for a target is one that only uses supported - operations and supported types. On a 32-bit PowerPC, for example, a DAG with - a value of type i1, i8, i16, or i64 would be illegal, as would a DAG that - uses a SREM or UREM operation. The - <a href="#selectinodag_legalize_types">legalize types</a> and - <a href="#selectiondag_legalize">legalize operations</a> phases are - responsible for turning an illegal DAG into a legal DAG.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="selectiondag_process">SelectionDAG Instruction Selection Process</a> -</h4> - -<div> - -<p>SelectionDAG-based instruction selection consists of the following steps:</p> - -<ol> - <li><a href="#selectiondag_build">Build initial DAG</a> — This stage - performs a simple translation from the input LLVM code to an illegal - SelectionDAG.</li> - - <li><a href="#selectiondag_optimize">Optimize SelectionDAG</a> — This - stage performs simple optimizations on the SelectionDAG to simplify it, - and recognize meta instructions (like rotates - and <tt>div</tt>/<tt>rem</tt> pairs) for targets that support these meta - operations. This makes the resultant code more efficient and - the <a href="#selectiondag_select">select instructions from DAG</a> phase - (below) simpler.</li> - - <li><a href="#selectiondag_legalize_types">Legalize SelectionDAG Types</a> - — This stage transforms SelectionDAG nodes to eliminate any types - that are unsupported on the target.</li> - - <li><a href="#selectiondag_optimize">Optimize SelectionDAG</a> — The - SelectionDAG optimizer is run to clean up redundancies exposed by type - legalization.</li> - - <li><a href="#selectiondag_legalize">Legalize SelectionDAG Ops</a> — - This stage transforms SelectionDAG nodes to eliminate any operations - that are unsupported on the target.</li> - - <li><a href="#selectiondag_optimize">Optimize SelectionDAG</a> — The - SelectionDAG optimizer is run to eliminate inefficiencies introduced by - operation legalization.</li> - - <li><a href="#selectiondag_select">Select instructions from DAG</a> — - Finally, the target instruction selector matches the DAG operations to - target instructions. This process translates the target-independent input - DAG into another DAG of target instructions.</li> - - <li><a href="#selectiondag_sched">SelectionDAG Scheduling and Formation</a> - — The last phase assigns a linear order to the instructions in the - target-instruction DAG and emits them into the MachineFunction being - compiled. 
This step uses traditional prepass scheduling techniques.</li> -</ol> - -<p>After all of these steps are complete, the SelectionDAG is destroyed and the - rest of the code generation passes are run.</p> - -<p>One great way to visualize what is going on here is to take advantage of a - few LLC command line options. The following options pop up a window - displaying the SelectionDAG at specific times (if you only get errors printed - to the console while using this, you probably - <a href="ProgrammersManual.html#ViewGraph">need to configure your system</a> - to add support for it).</p> - -<ul> - <li><tt>-view-dag-combine1-dags</tt> displays the DAG after being built, - before the first optimization pass.</li> - - <li><tt>-view-legalize-dags</tt> displays the DAG before Legalization.</li> - - <li><tt>-view-dag-combine2-dags</tt> displays the DAG before the second - optimization pass.</li> - - <li><tt>-view-isel-dags</tt> displays the DAG before the Select phase.</li> - - <li><tt>-view-sched-dags</tt> displays the DAG before Scheduling.</li> -</ul> - -<p>The <tt>-view-sunit-dags</tt> displays the Scheduler's dependency graph. - This graph is based on the final SelectionDAG, with nodes that must be - scheduled together bundled into a single scheduling-unit node, and with - immediate operands and other nodes that aren't relevant for scheduling - omitted.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="selectiondag_build">Initial SelectionDAG Construction</a> -</h4> - -<div> - -<p>The initial SelectionDAG is naïvely peephole expanded from the LLVM - input by the <tt>SelectionDAGLowering</tt> class in the - <tt>lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp</tt> file. The intent of - this pass is to expose as much low-level, target-specific details to the - SelectionDAG as possible. This pass is mostly hard-coded (e.g. an - LLVM <tt>add</tt> turns into an <tt>SDNode add</tt> while a - <tt>getelementptr</tt> is expanded into the obvious arithmetic). This pass - requires target-specific hooks to lower calls, returns, varargs, etc. For - these features, the <tt><a href="#targetlowering">TargetLowering</a></tt> - interface is used.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="selectiondag_legalize_types">SelectionDAG LegalizeTypes Phase</a> -</h4> - -<div> - -<p>The Legalize phase is in charge of converting a DAG to only use the types - that are natively supported by the target.</p> - -<p>There are two main ways of converting values of unsupported scalar types to - values of supported types: converting small types to larger types - ("promoting"), and breaking up large integer types into smaller ones - ("expanding"). For example, a target might require that all f32 values are - promoted to f64 and that all i1/i8/i16 values are promoted to i32. The same - target might require that all i64 values be expanded into pairs of i32 - values. These changes can insert sign and zero extensions as needed to make - sure that the final code has the same behavior as the input.</p> - -<p>There are two main ways of converting values of unsupported vector types to - value of supported types: splitting vector types, multiple times if - necessary, until a legal type is found, and extending vector types by adding - elements to the end to round them out to legal types ("widening"). 
If a - vector gets split all the way down to single-element parts with no supported - vector type being found, the elements are converted to scalars - ("scalarizing").</p> - -<p>A target implementation tells the legalizer which types are supported (and - which register class to use for them) by calling the - <tt>addRegisterClass</tt> method in its TargetLowering constructor.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="selectiondag_legalize">SelectionDAG Legalize Phase</a> -</h4> - -<div> - -<p>The Legalize phase is in charge of converting a DAG to only use the - operations that are natively supported by the target.</p> - -<p>Targets often have weird constraints, such as not supporting every operation - on every supported datatype (e.g. X86 does not support byte conditional moves - and PowerPC does not support sign-extending loads from a 16-bit memory - location). Legalize takes care of this by open-coding another sequence of - operations to emulate the operation ("expansion"), by promoting one type to a - larger type that supports the operation ("promotion"), or by using a - target-specific hook to implement the legalization ("custom").</p> - -<p>A target implementation tells the legalizer which operations are not - supported (and which of the above three actions to take) by calling the - <tt>setOperationAction</tt> method in its <tt>TargetLowering</tt> - constructor.</p> - -<p>Prior to the existence of the Legalize passes, we required that every target - <a href="#selectiondag_optimize">selector</a> supported and handled every - operator and type even if they are not natively supported. The introduction - of the Legalize phases allows all of the canonicalization patterns to be - shared across targets, and makes it very easy to optimize the canonicalized - code because it is still in the form of a DAG.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="selectiondag_optimize"> - SelectionDAG Optimization Phase: the DAG Combiner - </a> -</h4> - -<div> - -<p>The SelectionDAG optimization phase is run multiple times for code - generation, immediately after the DAG is built and once after each - legalization. The first run of the pass allows the initial code to be - cleaned up (e.g. performing optimizations that depend on knowing that the - operators have restricted type inputs). Subsequent runs of the pass clean up - the messy code generated by the Legalize passes, which allows Legalize to be - very simple (it can focus on making code legal instead of focusing on - generating <em>good</em> and legal code).</p> - -<p>One important class of optimizations performed is optimizing inserted sign - and zero extension instructions. We currently use ad-hoc techniques, but - could move to more rigorous techniques in the future. 
Here are some good - papers on the subject:</p> - -<p>"<a href="http://www.eecs.harvard.edu/~nr/pubs/widen-abstract.html">Widening - integer arithmetic</a>"<br> - Kevin Redwine and Norman Ramsey<br> - International Conference on Compiler Construction (CC) 2004</p> - -<p>"<a href="http://portal.acm.org/citation.cfm?doid=512529.512552">Effective - sign extension elimination</a>"<br> - Motohiro Kawahito, Hideaki Komatsu, and Toshio Nakatani<br> - Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design - and Implementation.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="selectiondag_select">SelectionDAG Select Phase</a> -</h4> - -<div> - -<p>The Select phase is the bulk of the target-specific code for instruction - selection. This phase takes a legal SelectionDAG as input, pattern matches - the instructions supported by the target to this DAG, and produces a new DAG - of target code. For example, consider the following LLVM fragment:</p> - -<div class="doc_code"> -<pre> -%t1 = fadd float %W, %X -%t2 = fmul float %t1, %Y -%t3 = fadd float %t2, %Z -</pre> -</div> - -<p>This LLVM code corresponds to a SelectionDAG that looks basically like - this:</p> - -<div class="doc_code"> -<pre> -(fadd:f32 (fmul:f32 (fadd:f32 W, X), Y), Z) -</pre> -</div> - -<p>If a target supports floating point multiply-and-add (FMA) operations, one of - the adds can be merged with the multiply. On the PowerPC, for example, the - output of the instruction selector might look like this DAG:</p> - -<div class="doc_code"> -<pre> -(FMADDS (FADDS W, X), Y, Z) -</pre> -</div> - -<p>The <tt>FMADDS</tt> instruction is a ternary instruction that multiplies its -first two operands and adds the third (as single-precision floating-point -numbers). The <tt>FADDS</tt> instruction is a simple binary single-precision -add instruction. To perform this pattern match, the PowerPC backend includes -the following instruction definitions:</p> - -<div class="doc_code"> -<pre> -def FMADDS : AForm_1<59, 29, - (ops F4RC:$FRT, F4RC:$FRA, F4RC:$FRC, F4RC:$FRB), - "fmadds $FRT, $FRA, $FRC, $FRB", - [<b>(set F4RC:$FRT, (fadd (fmul F4RC:$FRA, F4RC:$FRC), - F4RC:$FRB))</b>]>; -def FADDS : AForm_2<59, 21, - (ops F4RC:$FRT, F4RC:$FRA, F4RC:$FRB), - "fadds $FRT, $FRA, $FRB", - [<b>(set F4RC:$FRT, (fadd F4RC:$FRA, F4RC:$FRB))</b>]>; -</pre> -</div> - -<p>The portion of the instruction definition in bold indicates the pattern used - to match the instruction. The DAG operators - (like <tt>fmul</tt>/<tt>fadd</tt>) are defined in - the <tt>include/llvm/Target/TargetSelectionDAG.td</tt> file. " - <tt>F4RC</tt>" is the register class of the input and result values.</p> - -<p>The TableGen DAG instruction selector generator reads the instruction - patterns in the <tt>.td</tt> file and automatically builds parts of the - pattern matching code for your target. It has the following strengths:</p> - -<ul> - <li>At compiler-compiler time, it analyzes your instruction patterns and tells - you if your patterns make sense or not.</li> - - <li>It can handle arbitrary constraints on operands for the pattern match. In - particular, it is straight-forward to say things like "match any immediate - that is a 13-bit sign-extended value". For examples, see the - <tt>immSExt16</tt> and related <tt>tblgen</tt> classes in the PowerPC - backend.</li> - - <li>It knows several important identities for the patterns defined. 
For - example, it knows that addition is commutative, so it allows the - <tt>FMADDS</tt> pattern above to match "<tt>(fadd X, (fmul Y, Z))</tt>" as - well as "<tt>(fadd (fmul X, Y), Z)</tt>", without the target author having - to specially handle this case.</li> - - <li>It has a full-featured type-inferencing system. In particular, you should - rarely have to explicitly tell the system what type parts of your patterns - are. In the <tt>FMADDS</tt> case above, we didn't have to tell - <tt>tblgen</tt> that all of the nodes in the pattern are of type 'f32'. - It was able to infer and propagate this knowledge from the fact that - <tt>F4RC</tt> has type 'f32'.</li> - - <li>Targets can define their own (and rely on built-in) "pattern fragments". - Pattern fragments are chunks of reusable patterns that get inlined into - your patterns during compiler-compiler time. For example, the integer - "<tt>(not x)</tt>" operation is actually defined as a pattern fragment - that expands as "<tt>(xor x, -1)</tt>", since the SelectionDAG does not - have a native '<tt>not</tt>' operation. Targets can define their own - short-hand fragments as they see fit. See the definition of - '<tt>not</tt>' and '<tt>ineg</tt>' for examples.</li> - - <li>In addition to instructions, targets can specify arbitrary patterns that - map to one or more instructions using the 'Pat' class. For example, the - PowerPC has no way to load an arbitrary integer immediate into a register - in one instruction. To tell tblgen how to do this, it defines: - <br> - <br> -<div class="doc_code"> -<pre> -// Arbitrary immediate support. Implement in terms of LIS/ORI. -def : Pat<(i32 imm:$imm), - (ORI (LIS (HI16 imm:$imm)), (LO16 imm:$imm))>; -</pre> -</div> - <br> - If none of the single-instruction patterns for loading an immediate into a - register match, this will be used. This rule says "match an arbitrary i32 - immediate, turning it into an <tt>ORI</tt> ('or a 16-bit immediate') and - an <tt>LIS</tt> ('load 16-bit immediate, where the immediate is shifted to - the left 16 bits') instruction". To make this work, the - <tt>LO16</tt>/<tt>HI16</tt> node transformations are used to manipulate - the input immediate (in this case, take the high or low 16-bits of the - immediate).</li> - - <li>While the system does automate a lot, it still allows you to write custom - C++ code to match special cases if there is something that is hard to - express.</li> -</ul> - -<p>While it has many strengths, the system currently has some limitations, - primarily because it is a work in progress and is not yet finished:</p> - -<ul> - <li>Overall, there is no way to define or match SelectionDAG nodes that define - multiple values (e.g. <tt>SMUL_LOHI</tt>, <tt>LOAD</tt>, <tt>CALL</tt>, - etc). This is the biggest reason that you currently still <em>have - to</em> write custom C++ code for your instruction selector.</li> - - <li>There is no great way to support matching complex addressing modes yet. - In the future, we will extend pattern fragments to allow them to define - multiple values (e.g. the four operands of the <a href="#x86_memory">X86 - addressing mode</a>, which are currently matched with custom C++ code). 
- In addition, we'll extend fragments so that a fragment can match multiple - different patterns.</li> - - <li>We don't automatically infer flags like isStore/isLoad yet.</li> - - <li>We don't automatically generate the set of supported registers and - operations for the <a href="#selectiondag_legalize">Legalizer</a> - yet.</li> - - <li>We don't have a way of tying in custom legalized nodes yet.</li> -</ul> - -<p>Despite these limitations, the instruction selector generator is still quite - useful for most of the binary and logical operations in typical instruction - sets. If you run into any problems or can't figure out how to do something, - please let Chris know!</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="selectiondag_sched">SelectionDAG Scheduling and Formation Phase</a> -</h4> - -<div> - -<p>The scheduling phase takes the DAG of target instructions from the selection - phase and assigns an order. The scheduler can pick an order depending on - various constraints of the machines (i.e. order for minimal register pressure - or try to cover instruction latencies). Once an order is established, the - DAG is converted to a list - of <tt><a href="#machineinstr">MachineInstr</a></tt>s and the SelectionDAG is - destroyed.</p> - -<p>Note that this phase is logically separate from the instruction selection - phase, but is tied to it closely in the code because it operates on - SelectionDAGs.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="selectiondag_future">Future directions for the SelectionDAG</a> -</h4> - -<div> - -<ol> - <li>Optional function-at-a-time selection.</li> - - <li>Auto-generate entire selector from <tt>.td</tt> file.</li> -</ol> - -</div> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="ssamco">SSA-based Machine Code Optimizations</a> -</h3> -<div><p>To Be Written</p></div> - -<!-- ======================================================================= --> -<h3> - <a name="liveintervals">Live Intervals</a> -</h3> - -<div> - -<p>Live Intervals are the ranges (intervals) where a variable is <i>live</i>. - They are used by some <a href="#regalloc">register allocator</a> passes to - determine if two or more virtual registers which require the same physical - register are live at the same point in the program (i.e., they conflict). - When this situation occurs, one virtual register must be <i>spilled</i>.</p> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="livevariable_analysis">Live Variable Analysis</a> -</h4> - -<div> - -<p>The first step in determining the live intervals of variables is to calculate - the set of registers that are immediately dead after the instruction (i.e., - the instruction calculates the value, but it is never used) and the set of - registers that are used by the instruction, but are never used after the - instruction (i.e., they are killed). Live variable information is computed - for each <i>virtual</i> register and <i>register allocatable</i> physical - register in the function. This is done in a very efficient manner because it - uses SSA to sparsely compute lifetime information for virtual registers - (which are in SSA form) and only has to track physical registers within a - block. Before register allocation, LLVM can assume that physical registers - are only live within a single basic block. 
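-</p>
-
-<p>The results of this analysis are recorded directly on the operands of each
-   <tt>MachineInstr</tt> as <i>kill</i> and <i>dead</i> markers. As a hedged
-   illustration (not code from any in-tree pass), a later pass could read those
-   markers back roughly like this:</p>
-
-<div class="doc_code">
-<pre>
-#include "llvm/CodeGen/MachineInstr.h"
-
-using namespace llvm;
-
-// Count the liveness markers attached to one instruction: defs whose value is
-// never read again ("dead") and uses that are the last use of a value ("kill").
-static void countLivenessMarkers(const MachineInstr &MI,
-                                 unsigned &NumDeadDefs,
-                                 unsigned &NumKilledUses) {
-  for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
-    const MachineOperand &MO = MI.getOperand(i);
-    if (!MO.isReg() || MO.getReg() == 0)
-      continue;                       // skip immediates, frame indices, etc.
-    if (MO.isDef() && MO.isDead())
-      ++NumDeadDefs;
-    else if (MO.isUse() && MO.isKill())
-      ++NumKilledUses;
-  }
-}
-</pre>
-</div>
-
-<p>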
This allows it to do a single, - local analysis to resolve physical register lifetimes within each basic - block. If a physical register is not register allocatable (e.g., a stack - pointer or condition codes), it is not tracked.</p> - -<p>Physical registers may be live in to or out of a function. Live in values are - typically arguments in registers. Live out values are typically return values - in registers. Live in values are marked as such, and are given a dummy - "defining" instruction during live intervals analysis. If the last basic - block of a function is a <tt>return</tt>, then it's marked as using all live - out values in the function.</p> - -<p><tt>PHI</tt> nodes need to be handled specially, because the calculation of - the live variable information from a depth first traversal of the CFG of the - function won't guarantee that a virtual register used by the <tt>PHI</tt> - node is defined before it's used. When a <tt>PHI</tt> node is encountered, - only the definition is handled, because the uses will be handled in other - basic blocks.</p> - -<p>For each <tt>PHI</tt> node of the current basic block, we simulate an - assignment at the end of the current basic block and traverse the successor - basic blocks. If a successor basic block has a <tt>PHI</tt> node and one of - the <tt>PHI</tt> node's operands is coming from the current basic block, then - the variable is marked as <i>alive</i> within the current basic block and all - of its predecessor basic blocks, until the basic block with the defining - instruction is encountered.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="liveintervals_analysis">Live Intervals Analysis</a> -</h4> - -<div> - -<p>We now have the information available to perform the live intervals analysis - and build the live intervals themselves. We start off by numbering the basic - blocks and machine instructions. We then handle the "live-in" values. These - are in physical registers, so the physical register is assumed to be killed - by the end of the basic block. Live intervals for virtual registers are - computed for some ordering of the machine instructions <tt>[1, N]</tt>. A - live interval is an interval <tt>[i, j)</tt>, where <tt>1 <= i <= j - < N</tt>, for which a variable is live.</p> - -<p><i><b>More to come...</b></i></p> - -</div> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="regalloc">Register Allocation</a> -</h3> - -<div> - -<p>The <i>Register Allocation problem</i> consists in mapping a program - <i>P<sub>v</sub></i>, that can use an unbounded number of virtual registers, - to a program <i>P<sub>p</sub></i> that contains a finite (possibly small) - number of physical registers. Each target architecture has a different number - of physical registers. If the number of physical registers is not enough to - accommodate all the virtual registers, some of them will have to be mapped - into memory. These virtuals are called <i>spilled virtuals</i>.</p> - -<!-- _______________________________________________________________________ --> - -<h4> - <a name="regAlloc_represent">How registers are represented in LLVM</a> -</h4> - -<div> - -<p>In LLVM, physical registers are denoted by integer numbers that normally - range from 1 to 1023. To see how this numbering is defined for a particular - architecture, you can read the <tt>GenRegisterNames.inc</tt> file for that - architecture. 
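-</p>
-
-<p>Code that manipulates these numbers rarely needs to know the concrete
-   values; it is usually enough to classify them. A minimal sketch, using the
-   static <tt>TargetRegisterInfo</tt> predicates for this purpose:</p>
-
-<div class="doc_code">
-<pre>
-#include "llvm/Target/TargetRegisterInfo.h"
-
-using namespace llvm;
-
-// Classify a register number as the code generator sees it.
-// Register number 0 conventionally means "no register".
-static const char *classifyReg(unsigned Reg) {
-  if (Reg == 0)
-    return "no register";
-  if (TargetRegisterInfo::isPhysicalRegister(Reg))
-    return "physical register";   // small number defined by the target
-  if (TargetRegisterInfo::isVirtualRegister(Reg))
-    return "virtual register";    // created by createVirtualRegister()
-  return "other";
-}
-</pre>
-</div>
-
-<p>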
For instance, by - inspecting <tt>lib/Target/X86/X86GenRegisterInfo.inc</tt> we see that the - 32-bit register <tt>EAX</tt> is denoted by 43, and the MMX register - <tt>MM0</tt> is mapped to 65.</p> - -<p>Some architectures contain registers that share the same physical location. A - notable example is the X86 platform. For instance, in the X86 architecture, - the registers <tt>EAX</tt>, <tt>AX</tt> and <tt>AL</tt> share the first eight - bits. These physical registers are marked as <i>aliased</i> in LLVM. Given a - particular architecture, you can check which registers are aliased by - inspecting its <tt>RegisterInfo.td</tt> file. Moreover, the class - <tt>MCRegAliasIterator</tt> enumerates all the physical registers aliased to - a register.</p> - -<p>Physical registers, in LLVM, are grouped in <i>Register Classes</i>. - Elements in the same register class are functionally equivalent, and can be - interchangeably used. Each virtual register can only be mapped to physical - registers of a particular class. For instance, in the X86 architecture, some - virtuals can only be allocated to 8 bit registers. A register class is - described by <tt>TargetRegisterClass</tt> objects. To discover if a virtual - register is compatible with a given physical, this code can be used:</p> - -<div class="doc_code"> -<pre> -bool RegMapping_Fer::compatible_class(MachineFunction &mf, - unsigned v_reg, - unsigned p_reg) { - assert(TargetRegisterInfo::isPhysicalRegister(p_reg) && - "Target register must be physical"); - const TargetRegisterClass *trc = mf.getRegInfo().getRegClass(v_reg); - return trc->contains(p_reg); -} -</pre> -</div> - -<p>Sometimes, mostly for debugging purposes, it is useful to change the number - of physical registers available in the target architecture. This must be done - statically, inside the <tt>TargetRegsterInfo.td</tt> file. Just <tt>grep</tt> - for <tt>RegisterClass</tt>, the last parameter of which is a list of - registers. Just commenting some out is one simple way to avoid them being - used. A more polite way is to explicitly exclude some registers from - the <i>allocation order</i>. See the definition of the <tt>GR8</tt> register - class in <tt>lib/Target/X86/X86RegisterInfo.td</tt> for an example of this. - </p> - -<p>Virtual registers are also denoted by integer numbers. Contrary to physical - registers, different virtual registers never share the same number. Whereas - physical registers are statically defined in a <tt>TargetRegisterInfo.td</tt> - file and cannot be created by the application developer, that is not the case - with virtual registers. In order to create new virtual registers, use the - method <tt>MachineRegisterInfo::createVirtualRegister()</tt>. This method - will return a new virtual register. Use an <tt>IndexedMap<Foo, - VirtReg2IndexFunctor></tt> to hold information per virtual register. If you - need to enumerate all virtual registers, use the function - <tt>TargetRegisterInfo::index2VirtReg()</tt> to find the virtual register - numbers:</p> - -<div class="doc_code"> -<pre> - for (unsigned i = 0, e = MRI->getNumVirtRegs(); i != e; ++i) { - unsigned VirtReg = TargetRegisterInfo::index2VirtReg(i); - stuff(VirtReg); - } -</pre> -</div> - -<p>Before register allocation, the operands of an instruction are mostly virtual - registers, although physical registers may also be used. In order to check if - a given machine operand is a register, use the boolean - function <tt>MachineOperand::isRegister()</tt>. 
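-</p>
-
-<p>Putting that predicate together with the accessors described next
-   (<tt>getReg()</tt>, <tt>isUse()</tt>, <tt>isDef()</tt>), a pass can split an
-   instruction's register operands into uses and defs. A hedged sketch; note
-   that in current trees the predicate is spelled <tt>isReg()</tt>:</p>
-
-<div class="doc_code">
-<pre>
-#include "llvm/CodeGen/MachineInstr.h"
-#include <vector>
-
-using namespace llvm;
-
-// Collect the registers MI uses and the registers it defines.  A tied
-// operand (both use and def) is added to both lists.
-static void collectUsesAndDefs(const MachineInstr &MI,
-                               std::vector<unsigned> &Uses,
-                               std::vector<unsigned> &Defs) {
-  for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
-    const MachineOperand &MO = MI.getOperand(i);
-    if (!MO.isReg() || MO.getReg() == 0)
-      continue;                    // not a register operand
-    if (MO.isUse())
-      Uses.push_back(MO.getReg());
-    if (MO.isDef())
-      Defs.push_back(MO.getReg());
-  }
-}
-</pre>
-</div>
-
-<p>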
To obtain the integer code of - a register, use <tt>MachineOperand::getReg()</tt>. An instruction may define - or use a register. For instance, <tt>ADD reg:1026 := reg:1025 reg:1024</tt> - defines the registers 1024, and uses registers 1025 and 1026. Given a - register operand, the method <tt>MachineOperand::isUse()</tt> informs if that - register is being used by the instruction. The - method <tt>MachineOperand::isDef()</tt> informs if that registers is being - defined.</p> - -<p>We will call physical registers present in the LLVM bitcode before register - allocation <i>pre-colored registers</i>. Pre-colored registers are used in - many different situations, for instance, to pass parameters of functions - calls, and to store results of particular instructions. There are two types - of pre-colored registers: the ones <i>implicitly</i> defined, and - those <i>explicitly</i> defined. Explicitly defined registers are normal - operands, and can be accessed - with <tt>MachineInstr::getOperand(int)::getReg()</tt>. In order to check - which registers are implicitly defined by an instruction, use - the <tt>TargetInstrInfo::get(opcode)::ImplicitDefs</tt>, - where <tt>opcode</tt> is the opcode of the target instruction. One important - difference between explicit and implicit physical registers is that the - latter are defined statically for each instruction, whereas the former may - vary depending on the program being compiled. For example, an instruction - that represents a function call will always implicitly define or use the same - set of physical registers. To read the registers implicitly used by an - instruction, - use <tt>TargetInstrInfo::get(opcode)::ImplicitUses</tt>. Pre-colored - registers impose constraints on any register allocation algorithm. The - register allocator must make sure that none of them are overwritten by - the values of virtual registers while still alive.</p> - -</div> - -<!-- _______________________________________________________________________ --> - -<h4> - <a name="regAlloc_howTo">Mapping virtual registers to physical registers</a> -</h4> - -<div> - -<p>There are two ways to map virtual registers to physical registers (or to - memory slots). The first way, that we will call <i>direct mapping</i>, is - based on the use of methods of the classes <tt>TargetRegisterInfo</tt>, - and <tt>MachineOperand</tt>. The second way, that we will call <i>indirect - mapping</i>, relies on the <tt>VirtRegMap</tt> class in order to insert loads - and stores sending and getting values to and from memory.</p> - -<p>The direct mapping provides more flexibility to the developer of the register - allocator; however, it is more error prone, and demands more implementation - work. Basically, the programmer will have to specify where load and store - instructions should be inserted in the target function being compiled in - order to get and store values in memory. To assign a physical register to a - virtual register present in a given operand, - use <tt>MachineOperand::setReg(p_reg)</tt>. To insert a store instruction, - use <tt>TargetInstrInfo::storeRegToStackSlot(...)</tt>, and to insert a - load instruction, use <tt>TargetInstrInfo::loadRegFromStackSlot</tt>.</p> - -<p>The indirect mapping shields the application developer from the complexities - of inserting load and store instructions. In order to map a virtual register - to a physical one, use <tt>VirtRegMap::assignVirt2Phys(vreg, preg)</tt>. 
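-</p>
-
-<p>Before continuing with the indirect style, here is a hedged sketch of the
-   direct style described above: rewrite one operand to a physical register and
-   spill its value to a stack slot. The exact
-   <tt>storeRegToStackSlot</tt>/<tt>loadRegFromStackSlot</tt> signatures vary a
-   little between LLVM versions, so treat this as an outline rather than as
-   copy-and-paste code.</p>
-
-<div class="doc_code">
-<pre>
-#include "llvm/CodeGen/MachineBasicBlock.h"
-#include "llvm/Target/TargetInstrInfo.h"
-#include "llvm/Target/TargetRegisterInfo.h"
-
-using namespace llvm;
-
-// Assign PhysReg to operand OpNo of MI and store the value to FrameIndex
-// immediately after MI.  Reloads before later uses would be inserted the
-// same way with loadRegFromStackSlot().
-static void assignAndSpill(MachineInstr *MI, unsigned OpNo,
-                           unsigned PhysReg, int FrameIndex,
-                           const TargetRegisterClass *RC,
-                           const TargetInstrInfo *TII,
-                           const TargetRegisterInfo *TRI) {
-  MI->getOperand(OpNo).setReg(PhysReg);        // direct rewrite of the operand
-
-  MachineBasicBlock *MBB = MI->getParent();
-  MachineBasicBlock::iterator InsertPt(MI);
-  ++InsertPt;                                  // insert the store after MI
-  TII->storeRegToStackSlot(*MBB, InsertPt, PhysReg, /*isKill=*/true,
-                           FrameIndex, RC, TRI);
-}
-</pre>
-</div>
-
-<p>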
In - order to map a certain virtual register to memory, - use <tt>VirtRegMap::assignVirt2StackSlot(vreg)</tt>. This method will return - the stack slot where <tt>vreg</tt>'s value will be located. If it is - necessary to map another virtual register to the same stack slot, - use <tt>VirtRegMap::assignVirt2StackSlot(vreg, stack_location)</tt>. One - important point to consider when using the indirect mapping, is that even if - a virtual register is mapped to memory, it still needs to be mapped to a - physical register. This physical register is the location where the virtual - register is supposed to be found before being stored or after being - reloaded.</p> - -<p>If the indirect strategy is used, after all the virtual registers have been - mapped to physical registers or stack slots, it is necessary to use a spiller - object to place load and store instructions in the code. Every virtual that - has been mapped to a stack slot will be stored to memory after been defined - and will be loaded before being used. The implementation of the spiller tries - to recycle load/store instructions, avoiding unnecessary instructions. For an - example of how to invoke the spiller, - see <tt>RegAllocLinearScan::runOnMachineFunction</tt> - in <tt>lib/CodeGen/RegAllocLinearScan.cpp</tt>.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="regAlloc_twoAddr">Handling two address instructions</a> -</h4> - -<div> - -<p>With very rare exceptions (e.g., function calls), the LLVM machine code - instructions are three address instructions. That is, each instruction is - expected to define at most one register, and to use at most two registers. - However, some architectures use two address instructions. In this case, the - defined register is also one of the used register. For instance, an - instruction such as <tt>ADD %EAX, %EBX</tt>, in X86 is actually equivalent - to <tt>%EAX = %EAX + %EBX</tt>.</p> - -<p>In order to produce correct code, LLVM must convert three address - instructions that represent two address instructions into true two address - instructions. LLVM provides the pass <tt>TwoAddressInstructionPass</tt> for - this specific purpose. It must be run before register allocation takes - place. After its execution, the resulting code may no longer be in SSA - form. This happens, for instance, in situations where an instruction such - as <tt>%a = ADD %b %c</tt> is converted to two instructions such as:</p> - -<div class="doc_code"> -<pre> -%a = MOVE %b -%a = ADD %a %c -</pre> -</div> - -<p>Notice that, internally, the second instruction is represented as - <tt>ADD %a[def/use] %c</tt>. I.e., the register operand <tt>%a</tt> is both - used and defined by the instruction.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="regAlloc_ssaDecon">The SSA deconstruction phase</a> -</h4> - -<div> - -<p>An important transformation that happens during register allocation is called - the <i>SSA Deconstruction Phase</i>. The SSA form simplifies many analyses - that are performed on the control flow graph of programs. However, - traditional instruction sets do not implement PHI instructions. Thus, in - order to generate executable code, compilers must replace PHI instructions - with other instructions that preserve their semantics.</p> - -<p>There are many ways in which PHI instructions can safely be removed from the - target code. 
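-</p>
-
-<p>Whichever algorithm is used (LLVM's choice, and the pass that implements it,
-   are described next), a custom register allocator asks for this lowering
-   explicitly in its <tt>getAnalysisUsage()</tt> method. A minimal sketch; the
-   pass itself is purely hypothetical:</p>
-
-<div class="doc_code">
-<pre>
-#include "llvm/CodeGen/MachineFunctionPass.h"
-#include "llvm/CodeGen/Passes.h"
-
-using namespace llvm;
-
-namespace {
-// Hypothetical allocator; only the analysis-usage declaration matters here.
-struct MyRegAlloc : public MachineFunctionPass {
-  static char ID;
-  MyRegAlloc() : MachineFunctionPass(ID) {}
-
-  virtual void getAnalysisUsage(AnalysisUsage &AU) const {
-    AU.addRequiredID(PHIEliminationID);   // run PHI elimination before us
-    MachineFunctionPass::getAnalysisUsage(AU);
-  }
-
-  virtual bool runOnMachineFunction(MachineFunction &MF) {
-    // ... the allocation itself would go here ...
-    return false;
-  }
-};
-}
-
-char MyRegAlloc::ID = 0;
-</pre>
-</div>
-
-<p>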
The most traditional PHI deconstruction algorithm replaces PHI - instructions with copy instructions. That is the strategy adopted by - LLVM. The SSA deconstruction algorithm is implemented - in <tt>lib/CodeGen/PHIElimination.cpp</tt>. In order to invoke this pass, the - identifier <tt>PHIEliminationID</tt> must be marked as required in the code - of the register allocator.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="regAlloc_fold">Instruction folding</a> -</h4> - -<div> - -<p><i>Instruction folding</i> is an optimization performed during register - allocation that removes unnecessary copy instructions. For instance, a - sequence of instructions such as:</p> - -<div class="doc_code"> -<pre> -%EBX = LOAD %mem_address -%EAX = COPY %EBX -</pre> -</div> - -<p>can be safely substituted by the single instruction:</p> - -<div class="doc_code"> -<pre> -%EAX = LOAD %mem_address -</pre> -</div> - -<p>Instructions can be folded with - the <tt>TargetRegisterInfo::foldMemoryOperand(...)</tt> method. Care must be - taken when folding instructions; a folded instruction can be quite different - from the original - instruction. See <tt>LiveIntervals::addIntervalsForSpills</tt> - in <tt>lib/CodeGen/LiveIntervalAnalysis.cpp</tt> for an example of its - use.</p> - -</div> - -<!-- _______________________________________________________________________ --> - -<h4> - <a name="regAlloc_builtIn">Built in register allocators</a> -</h4> - -<div> - -<p>The LLVM infrastructure provides the application developer with three - different register allocators:</p> - -<ul> - <li><i>Fast</i> — This register allocator is the default for debug - builds. It allocates registers on a basic block level, attempting to keep - values in registers and reusing registers as appropriate.</li> - - <li><i>Basic</i> — This is an incremental approach to register - allocation. Live ranges are assigned to registers one at a time in - an order that is driven by heuristics. Since code can be rewritten - on-the-fly during allocation, this framework allows interesting - allocators to be developed as extensions. It is not itself a - production register allocator but is a potentially useful - stand-alone mode for triaging bugs and as a performance baseline. - - <li><i>Greedy</i> — <i>The default allocator</i>. This is a - highly tuned implementation of the <i>Basic</i> allocator that - incorporates global live range splitting. This allocator works hard - to minimize the cost of spill code. - - <li><i>PBQP</i> — A Partitioned Boolean Quadratic Programming (PBQP) - based register allocator. 
This allocator works by constructing a PBQP - problem representing the register allocation problem under consideration, - solving this using a PBQP solver, and mapping the solution back to a - register assignment.</li> -</ul> - -<p>The type of register allocator used in <tt>llc</tt> can be chosen with the - command line option <tt>-regalloc=...</tt>:</p> - -<div class="doc_code"> -<pre> -$ llc -regalloc=linearscan file.bc -o ln.s; -$ llc -regalloc=fast file.bc -o fa.s; -$ llc -regalloc=pbqp file.bc -o pbqp.s; -</pre> -</div> - -</div> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="proepicode">Prolog/Epilog Code Insertion</a> -</h3> - -<div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="compact_unwind">Compact Unwind</a> -</h4> - -<div> - -<p>Throwing an exception requires <em>unwinding</em> out of a function. The - information on how to unwind a given function is traditionally expressed in - DWARF unwind (a.k.a. frame) info. But that format was originally developed - for debuggers to backtrace, and each Frame Description Entry (FDE) requires - ~20-30 bytes per function. There is also the cost of mapping from an address - in a function to the corresponding FDE at runtime. An alternative unwind - encoding is called <em>compact unwind</em> and requires just 4-bytes per - function.</p> - -<p>The compact unwind encoding is a 32-bit value, which is encoded in an - architecture-specific way. It specifies which registers to restore and from - where, and how to unwind out of the function. When the linker creates a final - linked image, it will create a <code>__TEXT,__unwind_info</code> - section. This section is a small and fast way for the runtime to access - unwind info for any given function. If we emit compact unwind info for the - function, that compact unwind info will be encoded in - the <code>__TEXT,__unwind_info</code> section. If we emit DWARF unwind info, - the <code>__TEXT,__unwind_info</code> section will contain the offset of the - FDE in the <code>__TEXT,__eh_frame</code> section in the final linked - image.</p> - -<p>For X86, there are three modes for the compact unwind encoding:</p> - -<dl> - <dt><i>Function with a Frame Pointer (<code>EBP</code> or <code>RBP</code>)</i></dt> - <dd><p><code>EBP/RBP</code>-based frame, where <code>EBP/RBP</code> is pushed - onto the stack immediately after the return address, - then <code>ESP/RSP</code> is moved to <code>EBP/RBP</code>. Thus to - unwind, <code>ESP/RSP</code> is restored with the - current <code>EBP/RBP</code> value, then <code>EBP/RBP</code> is restored - by popping the stack, and the return is done by popping the stack once - more into the PC. All non-volatile registers that need to be restored must - have been saved in a small range on the stack that - starts <code>EBP-4</code> to <code>EBP-1020</code> (<code>RBP-8</code> - to <code>RBP-1020</code>). The offset (divided by 4 in 32-bit mode and 8 - in 64-bit mode) is encoded in bits 16-23 (mask: <code>0x00FF0000</code>). 
- The registers saved are encoded in bits 0-14 - (mask: <code>0x00007FFF</code>) as five 3-bit entries from the following - table:</p> -<table border="1" cellspacing="0"> - <tr> - <th>Compact Number</th> - <th>i386 Register</th> - <th>x86-64 Regiser</th> - </tr> - <tr> - <td>1</td> - <td><code>EBX</code></td> - <td><code>RBX</code></td> - </tr> - <tr> - <td>2</td> - <td><code>ECX</code></td> - <td><code>R12</code></td> - </tr> - <tr> - <td>3</td> - <td><code>EDX</code></td> - <td><code>R13</code></td> - </tr> - <tr> - <td>4</td> - <td><code>EDI</code></td> - <td><code>R14</code></td> - </tr> - <tr> - <td>5</td> - <td><code>ESI</code></td> - <td><code>R15</code></td> - </tr> - <tr> - <td>6</td> - <td><code>EBP</code></td> - <td><code>RBP</code></td> - </tr> -</table> - -</dd> - - <dt><i>Frameless with a Small Constant Stack Size (<code>EBP</code> - or <code>RBP</code> is not used as a frame pointer)</i></dt> - <dd><p>To return, a constant (encoded in the compact unwind encoding) is added - to the <code>ESP/RSP</code>. Then the return is done by popping the stack - into the PC. All non-volatile registers that need to be restored must have - been saved on the stack immediately after the return address. The stack - size (divided by 4 in 32-bit mode and 8 in 64-bit mode) is encoded in bits - 16-23 (mask: <code>0x00FF0000</code>). There is a maximum stack size of - 1024 bytes in 32-bit mode and 2048 in 64-bit mode. The number of registers - saved is encoded in bits 9-12 (mask: <code>0x00001C00</code>). Bits 0-9 - (mask: <code>0x000003FF</code>) contain which registers were saved and - their order. (See - the <code>encodeCompactUnwindRegistersWithoutFrame()</code> function - in <code>lib/Target/X86FrameLowering.cpp</code> for the encoding - algorithm.)</p></dd> - - <dt><i>Frameless with a Large Constant Stack Size (<code>EBP</code> - or <code>RBP</code> is not used as a frame pointer)</i></dt> - <dd><p>This case is like the "Frameless with a Small Constant Stack Size" - case, but the stack size is too large to encode in the compact unwind - encoding. Instead it requires that the function contains "<code>subl - $nnnnnn, %esp</code>" in its prolog. The compact encoding contains the - offset to the <code>$nnnnnn</code> value in the function in bits 9-12 - (mask: <code>0x00001C00</code>).</p></dd> -</dl> - -</div> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="latemco">Late Machine Code Optimizations</a> -</h3> -<div><p>To Be Written</p></div> - -<!-- ======================================================================= --> -<h3> - <a name="codeemit">Code Emission</a> -</h3> - -<div> - -<p>The code emission step of code generation is responsible for lowering from -the code generator abstractions (like <a -href="#machinefunction">MachineFunction</a>, <a -href="#machineinstr">MachineInstr</a>, etc) down -to the abstractions used by the MC layer (<a href="#mcinst">MCInst</a>, -<a href="#mcstreamer">MCStreamer</a>, etc). This is -done with a combination of several different classes: the (misnamed) -target-independent AsmPrinter class, target-specific subclasses of AsmPrinter -(such as SparcAsmPrinter), and the TargetLoweringObjectFile class.</p> - -<p>Since the MC layer works at the level of abstraction of object files, it -doesn't have a notion of functions, global variables etc. Instead, it thinks -about labels, directives, and instructions. A key class used at this time is -the MCStreamer class. 
This is an abstract API that is implemented in different -ways (e.g. to output a .s file, output an ELF .o file, etc) that is effectively -an "assembler API". MCStreamer has one method per directive, such as EmitLabel, -EmitSymbolAttribute, SwitchSection, etc, which directly correspond to assembly -level directives. -</p> - -<p>If you are interested in implementing a code generator for a target, there -are three important things that you have to implement for your target:</p> - -<ol> -<li>First, you need a subclass of AsmPrinter for your target. This class -implements the general lowering process converting MachineFunction's into MC -label constructs. The AsmPrinter base class provides a number of useful methods -and routines, and also allows you to override the lowering process in some -important ways. You should get much of the lowering for free if you are -implementing an ELF, COFF, or MachO target, because the TargetLoweringObjectFile -class implements much of the common logic.</li> - -<li>Second, you need to implement an instruction printer for your target. The -instruction printer takes an <a href="#mcinst">MCInst</a> and renders it to a -raw_ostream as text. Most of this is automatically generated from the .td file -(when you specify something like "<tt>add $dst, $src1, $src2</tt>" in the -instructions), but you need to implement routines to print operands.</li> - -<li>Third, you need to implement code that lowers a <a -href="#machineinstr">MachineInstr</a> to an MCInst, usually implemented in -"<target>MCInstLower.cpp". This lowering process is often target -specific, and is responsible for turning jump table entries, constant pool -indices, global variable addresses, etc into MCLabels as appropriate. This -translation layer is also responsible for expanding pseudo ops used by the code -generator into the actual machine instructions they correspond to. The MCInsts -that are generated by this are fed into the instruction printer or the encoder. -</li> - -</ol> - -<p>Finally, at your choosing, you can also implement an subclass of -MCCodeEmitter which lowers MCInst's into machine code bytes and relocations. -This is important if you want to support direct .o file emission, or would like -to implement an assembler for your target.</p> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="vliw_packetizer">VLIW Packetizer</a> -</h3> - -<div> - -<p>In a Very Long Instruction Word (VLIW) architecture, the compiler is - responsible for mapping instructions to functional-units available on - the architecture. To that end, the compiler creates groups of instructions - called <i>packets</i> or <i>bundles</i>. The VLIW packetizer in LLVM is - a target-independent mechanism to enable the packetization of machine - instructions.</p> - -<!-- _______________________________________________________________________ --> - -<h4> - <a name="vliw_mapping">Mapping from instructions to functional units</a> -</h4> - -<div> - -<p>Instructions in a VLIW target can typically be mapped to multiple functional -units. During the process of packetizing, the compiler must be able to reason -about whether an instruction can be added to a packet. This decision can be -complex since the compiler has to examine all possible mappings of instructions -to functional units. Therefore to alleviate compilation-time complexity, the -VLIW packetizer parses the instruction classes of a target and generates tables -at compiler build time. 
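-</p>
-
-<p>A hedged sketch of how a target packetizer might drive those generated
-   tables through the <tt>DFAPacketizer</tt> API described in the next
-   subsection (how the <tt>DFAPacketizer</tt> instance is created, and how a
-   finished packet is emitted, is target specific and omitted here):</p>
-
-<div class="doc_code">
-<pre>
-#include "llvm/CodeGen/DFAPacketizer.h"
-#include <vector>
-
-using namespace llvm;
-
-// Greedily fill one packet from a list of candidate instructions, using the
-// DFA to track functional-unit usage.
-static void fillOnePacket(DFAPacketizer &ResourceTracker,
-                          const std::vector<MachineInstr*> &Candidates,
-                          std::vector<MachineInstr*> &Packet) {
-  ResourceTracker.clearResources();            // start an empty packet
-  for (unsigned i = 0, e = Candidates.size(); i != e; ++i) {
-    MachineInstr *MI = Candidates[i];
-    if (!ResourceTracker.canReserveResources(MI))
-      break;                                   // no functional unit left for MI
-    ResourceTracker.reserveResources(MI);      // commit MI to the packet
-    Packet.push_back(MI);
-  }
-}
-</pre>
-</div>
-
-<p>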
These tables can then be queried by the provided -machine-independent API to determine if an instruction can be accommodated in a -packet.</p> -</div> - -<!-- ======================================================================= --> -<h4> - <a name="vliw_repr"> - How the packetization tables are generated and used - </a> -</h4> - -<div> - -<p>The packetizer reads instruction classes from a target's itineraries and -creates a deterministic finite automaton (DFA) to represent the state of a -packet. A DFA consists of three major elements: inputs, states, and -transitions. The set of inputs for the generated DFA represents the instruction -being added to a packet. The states represent the possible consumption -of functional units by instructions in a packet. In the DFA, transitions from -one state to another occur on the addition of an instruction to an existing -packet. If there is a legal mapping of functional units to instructions, then -the DFA contains a corresponding transition. The absence of a transition -indicates that a legal mapping does not exist and that the instruction cannot -be added to the packet.</p> - -<p>To generate tables for a VLIW target, add <i>Target</i>GenDFAPacketizer.inc -as a target to the Makefile in the target directory. The exported API provides -three functions: <tt>DFAPacketizer::clearResources()</tt>, -<tt>DFAPacketizer::reserveResources(MachineInstr *MI)</tt>, and -<tt>DFAPacketizer::canReserveResources(MachineInstr *MI)</tt>. These functions -allow a target packetizer to add an instruction to an existing packet and to -check whether an instruction can be added to a packet. See -<tt>llvm/CodeGen/DFAPacketizer.h</tt> for more information.</p> - -</div> - -</div> - -</div> - -<!-- *********************************************************************** --> -<h2> - <a name="nativeassembler">Implementing a Native Assembler</a> -</h2> -<!-- *********************************************************************** --> - -<div> - -<p>Though you're probably reading this because you want to write or maintain a -compiler backend, LLVM also fully supports building a native assemblers too. -We've tried hard to automate the generation of the assembler from the .td files -(in particular the instruction syntax and encodings), which means that a large -part of the manual and repetitive data entry can be factored and shared with the -compiler.</p> - -<!-- ======================================================================= --> -<h3 id="na_instparsing">Instruction Parsing</h3> - -<div><p>To Be Written</p></div> - - -<!-- ======================================================================= --> -<h3 id="na_instaliases"> - Instruction Alias Processing -</h3> - -<div> -<p>Once the instruction is parsed, it enters the MatchInstructionImpl function. -The MatchInstructionImpl function performs alias processing and then does -actual matching.</p> - -<p>Alias processing is the phase that canonicalizes different lexical forms of -the same instructions down to one representation. There are several different -kinds of alias that are possible to implement and they are listed below in the -order that they are processed (which is in order from simplest/weakest to most -complex/powerful). 
Generally you want to use the first alias mechanism that -meets the needs of your instruction, because it will allow a more concise -description.</p> - -<!-- _______________________________________________________________________ --> -<h4>Mnemonic Aliases</h4> - -<div> - -<p>The first phase of alias processing is simple instruction mnemonic -remapping for classes of instructions which are allowed with two different -mnemonics. This phase is a simple and unconditionally remapping from one input -mnemonic to one output mnemonic. It isn't possible for this form of alias to -look at the operands at all, so the remapping must apply for all forms of a -given mnemonic. Mnemonic aliases are defined simply, for example X86 has: -</p> - -<div class="doc_code"> -<pre> -def : MnemonicAlias<"cbw", "cbtw">; -def : MnemonicAlias<"smovq", "movsq">; -def : MnemonicAlias<"fldcww", "fldcw">; -def : MnemonicAlias<"fucompi", "fucomip">; -def : MnemonicAlias<"ud2a", "ud2">; -</pre> -</div> - -<p>... and many others. With a MnemonicAlias definition, the mnemonic is -remapped simply and directly. Though MnemonicAlias's can't look at any aspect -of the instruction (such as the operands) they can depend on global modes (the -same ones supported by the matcher), through a Requires clause:</p> - -<div class="doc_code"> -<pre> -def : MnemonicAlias<"pushf", "pushfq">, Requires<[In64BitMode]>; -def : MnemonicAlias<"pushf", "pushfl">, Requires<[In32BitMode]>; -</pre> -</div> - -<p>In this example, the mnemonic gets mapped into different a new one depending -on the current instruction set.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4>Instruction Aliases</h4> - -<div> - -<p>The most general phase of alias processing occurs while matching is -happening: it provides new forms for the matcher to match along with a specific -instruction to generate. An instruction alias has two parts: the string to -match and the instruction to generate. For example: -</p> - -<div class="doc_code"> -<pre> -def : InstAlias<"movsx $src, $dst", (MOVSX16rr8W GR16:$dst, GR8 :$src)>; -def : InstAlias<"movsx $src, $dst", (MOVSX16rm8W GR16:$dst, i8mem:$src)>; -def : InstAlias<"movsx $src, $dst", (MOVSX32rr8 GR32:$dst, GR8 :$src)>; -def : InstAlias<"movsx $src, $dst", (MOVSX32rr16 GR32:$dst, GR16 :$src)>; -def : InstAlias<"movsx $src, $dst", (MOVSX64rr8 GR64:$dst, GR8 :$src)>; -def : InstAlias<"movsx $src, $dst", (MOVSX64rr16 GR64:$dst, GR16 :$src)>; -def : InstAlias<"movsx $src, $dst", (MOVSX64rr32 GR64:$dst, GR32 :$src)>; -</pre> -</div> - -<p>This shows a powerful example of the instruction aliases, matching the -same mnemonic in multiple different ways depending on what operands are present -in the assembly. The result of instruction aliases can include operands in a -different order than the destination instruction, and can use an input -multiple times, for example:</p> - -<div class="doc_code"> -<pre> -def : InstAlias<"clrb $reg", (XOR8rr GR8 :$reg, GR8 :$reg)>; -def : InstAlias<"clrw $reg", (XOR16rr GR16:$reg, GR16:$reg)>; -def : InstAlias<"clrl $reg", (XOR32rr GR32:$reg, GR32:$reg)>; -def : InstAlias<"clrq $reg", (XOR64rr GR64:$reg, GR64:$reg)>; -</pre> -</div> - -<p>This example also shows that tied operands are only listed once. In the X86 -backend, XOR8rr has two input GR8's and one output GR8 (where an input is tied -to the output). InstAliases take a flattened operand list without duplicates -for tied operands. 
The result of an instruction alias can also use immediates -and fixed physical registers which are added as simple immediate operands in the -result, for example:</p> - -<div class="doc_code"> -<pre> -// Fixed Immediate operand. -def : InstAlias<"aad", (AAD8i8 10)>; - -// Fixed register operand. -def : InstAlias<"fcomi", (COM_FIr ST1)>; - -// Simple alias. -def : InstAlias<"fcomi $reg", (COM_FIr RST:$reg)>; -</pre> -</div> - - -<p>Instruction aliases can also have a Requires clause to make them -subtarget specific.</p> - -<p>If the back-end supports it, the instruction printer can automatically emit - the alias rather than what's being aliased. It typically leads to better, - more readable code. If it's better to print out what's being aliased, then - pass a '0' as the third parameter to the InstAlias definition.</p> - -</div> - -</div> - -<!-- ======================================================================= --> -<h3 id="na_matching">Instruction Matching</h3> - -<div><p>To Be Written</p></div> - -</div> - -<!-- *********************************************************************** --> -<h2> - <a name="targetimpls">Target-specific Implementation Notes</a> -</h2> -<!-- *********************************************************************** --> - -<div> - -<p>This section of the document explains features or design decisions that are - specific to the code generator for a particular target. First we start - with a table that summarizes what features are supported by each target.</p> - -<!-- ======================================================================= --> -<h3> - <a name="targetfeatures">Target Feature Matrix</a> -</h3> - -<div> - -<p>Note that this table does not include the C backend or Cpp backends, since -they do not use the target independent code generator infrastructure. It also -doesn't list features that are not supported fully by any target yet. It -considers a feature to be supported if at least one subtarget supports it. A -feature being supported means that it is useful and works for most cases, it -does not indicate that there are zero known bugs in the implementation. 
Here -is the key:</p> - - -<table border="1" cellspacing="0"> - <tr> - <th>Unknown</th> - <th>No support</th> - <th>Partial Support</th> - <th>Complete Support</th> - </tr> - <tr> - <td class="unknown"></td> - <td class="no"></td> - <td class="partial"></td> - <td class="yes"></td> - </tr> -</table> - -<p>Here is the table:</p> - -<table width="689" border="1" cellspacing="0"> -<tr><td></td> -<td colspan="13" align="center" style="background-color:#ffc">Target</td> -</tr> - <tr> - <th>Feature</th> - <th>ARM</th> - <th>CellSPU</th> - <th>Hexagon</th> - <th>MBlaze</th> - <th>MSP430</th> - <th>Mips</th> - <th>PTX</th> - <th>PowerPC</th> - <th>Sparc</th> - <th>X86</th> - <th>XCore</th> - </tr> - -<tr> - <td><a href="#feat_reliable">is generally reliable</a></td> - <td class="yes"></td> <!-- ARM --> - <td class="no"></td> <!-- CellSPU --> - <td class="yes"></td> <!-- Hexagon --> - <td class="no"></td> <!-- MBlaze --> - <td class="unknown"></td> <!-- MSP430 --> - <td class="yes"></td> <!-- Mips --> - <td class="no"></td> <!-- PTX --> - <td class="yes"></td> <!-- PowerPC --> - <td class="yes"></td> <!-- Sparc --> - <td class="yes"></td> <!-- X86 --> - <td class="unknown"></td> <!-- XCore --> -</tr> - -<tr> - <td><a href="#feat_asmparser">assembly parser</a></td> - <td class="no"></td> <!-- ARM --> - <td class="no"></td> <!-- CellSPU --> - <td class="no"></td> <!-- Hexagon --> - <td class="yes"></td> <!-- MBlaze --> - <td class="no"></td> <!-- MSP430 --> - <td class="no"></td> <!-- Mips --> - <td class="no"></td> <!-- PTX --> - <td class="no"></td> <!-- PowerPC --> - <td class="no"></td> <!-- Sparc --> - <td class="yes"></td> <!-- X86 --> - <td class="no"></td> <!-- XCore --> -</tr> - -<tr> - <td><a href="#feat_disassembler">disassembler</a></td> - <td class="yes"></td> <!-- ARM --> - <td class="no"></td> <!-- CellSPU --> - <td class="no"></td> <!-- Hexagon --> - <td class="yes"></td> <!-- MBlaze --> - <td class="no"></td> <!-- MSP430 --> - <td class="no"></td> <!-- Mips --> - <td class="no"></td> <!-- PTX --> - <td class="no"></td> <!-- PowerPC --> - <td class="no"></td> <!-- Sparc --> - <td class="yes"></td> <!-- X86 --> - <td class="no"></td> <!-- XCore --> -</tr> - -<tr> - <td><a href="#feat_inlineasm">inline asm</a></td> - <td class="yes"></td> <!-- ARM --> - <td class="no"></td> <!-- CellSPU --> - <td class="yes"></td> <!-- Hexagon --> - <td class="yes"></td> <!-- MBlaze --> - <td class="unknown"></td> <!-- MSP430 --> - <td class="no"></td> <!-- Mips --> - <td class="unknown"></td> <!-- PTX --> - <td class="yes"></td> <!-- PowerPC --> - <td class="unknown"></td> <!-- Sparc --> - <td class="yes"></td> <!-- X86 --> - <td class="unknown"></td> <!-- XCore --> -</tr> - -<tr> - <td><a href="#feat_jit">jit</a></td> - <td class="partial"><a href="#feat_jit_arm">*</a></td> <!-- ARM --> - <td class="no"></td> <!-- CellSPU --> - <td class="no"></td> <!-- Hexagon --> - <td class="no"></td> <!-- MBlaze --> - <td class="unknown"></td> <!-- MSP430 --> - <td class="yes"></td> <!-- Mips --> - <td class="unknown"></td> <!-- PTX --> - <td class="yes"></td> <!-- PowerPC --> - <td class="unknown"></td> <!-- Sparc --> - <td class="yes"></td> <!-- X86 --> - <td class="unknown"></td> <!-- XCore --> -</tr> - -<tr> - <td><a href="#feat_objectwrite">.o file writing</a></td> - <td class="no"></td> <!-- ARM --> - <td class="no"></td> <!-- CellSPU --> - <td class="no"></td> <!-- Hexagon --> - <td class="yes"></td> <!-- MBlaze --> - <td class="no"></td> <!-- MSP430 --> - <td class="no"></td> <!-- Mips --> - <td 
class="no"></td> <!-- PTX --> - <td class="no"></td> <!-- PowerPC --> - <td class="no"></td> <!-- Sparc --> - <td class="yes"></td> <!-- X86 --> - <td class="no"></td> <!-- XCore --> -</tr> - -<tr> - <td><a href="#feat_tailcall">tail calls</a></td> - <td class="yes"></td> <!-- ARM --> - <td class="no"></td> <!-- CellSPU --> - <td class="yes"></td> <!-- Hexagon --> - <td class="no"></td> <!-- MBlaze --> - <td class="unknown"></td> <!-- MSP430 --> - <td class="no"></td> <!-- Mips --> - <td class="unknown"></td> <!-- PTX --> - <td class="yes"></td> <!-- PowerPC --> - <td class="unknown"></td> <!-- Sparc --> - <td class="yes"></td> <!-- X86 --> - <td class="unknown"></td> <!-- XCore --> -</tr> - -<tr> - <td><a href="#feat_segstacks">segmented stacks</a></td> - <td class="no"></td> <!-- ARM --> - <td class="no"></td> <!-- CellSPU --> - <td class="no"></td> <!-- Hexagon --> - <td class="no"></td> <!-- MBlaze --> - <td class="no"></td> <!-- MSP430 --> - <td class="no"></td> <!-- Mips --> - <td class="no"></td> <!-- PTX --> - <td class="no"></td> <!-- PowerPC --> - <td class="no"></td> <!-- Sparc --> - <td class="partial"><a href="#feat_segstacks_x86">*</a></td> <!-- X86 --> - <td class="no"></td> <!-- XCore --> -</tr> - - -</table> - -<!-- _______________________________________________________________________ --> -<h4 id="feat_reliable">Is Generally Reliable</h4> - -<div> -<p>This box indicates whether the target is considered to be production quality. -This indicates that the target has been used as a static compiler to -compile large amounts of code by a variety of different people and is in -continuous use.</p> -</div> - -<!-- _______________________________________________________________________ --> -<h4 id="feat_asmparser">Assembly Parser</h4> - -<div> -<p>This box indicates whether the target supports parsing target specific .s -files by implementing the MCAsmParser interface. This is required for llvm-mc -to be able to act as a native assembler and is required for inline assembly -support in the native .o file writer.</p> - -</div> - - -<!-- _______________________________________________________________________ --> -<h4 id="feat_disassembler">Disassembler</h4> - -<div> -<p>This box indicates whether the target supports the MCDisassembler API for -disassembling machine opcode bytes into MCInst's.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4 id="feat_inlineasm">Inline Asm</h4> - -<div> -<p>This box indicates whether the target supports most popular inline assembly -constraints and modifiers.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4 id="feat_jit">JIT Support</h4> - -<div> -<p>This box indicates whether the target supports the JIT compiler through -the ExecutionEngine interface.</p> - -<p id="feat_jit_arm">The ARM backend has basic support for integer code -in ARM codegen mode, but lacks NEON and full Thumb support.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4 id="feat_objectwrite">.o File Writing</h4> - -<div> - -<p>This box indicates whether the target supports writing .o files (e.g. MachO, -ELF, and/or COFF) files directly from the target. 
Note that the target also -must include an assembly parser and general inline assembly support for full -inline assembly support in the .o writer.</p> - -<p>Targets that don't support this feature can obviously still write out .o -files, they just rely on having an external assembler to translate from a .s -file to a .o file (as is the case for many C compilers).</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4 id="feat_tailcall">Tail Calls</h4> - -<div> - -<p>This box indicates whether the target supports guaranteed tail calls. These -are calls marked "<a href="LangRef.html#i_call">tail</a>" and use the fastcc -calling convention. Please see the <a href="#tailcallopt">tail call section -more more details</a>.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4 id="feat_segstacks">Segmented Stacks</h4> - -<div> - -<p>This box indicates whether the target supports segmented stacks. This -replaces the traditional large C stack with many linked segments. It -is compatible with the <a href="http://gcc.gnu.org/wiki/SplitStacks">gcc -implementation</a> used by the Go front end.</p> - -<p id="feat_segstacks_x86">Basic support exists on the X86 backend. Currently -vararg doesn't work and the object files are not marked the way the gold -linker expects, but simple Go programs can be built by dragonegg.</p> - -</div> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="tailcallopt">Tail call optimization</a> -</h3> - -<div> - -<p>Tail call optimization, callee reusing the stack of the caller, is currently - supported on x86/x86-64 and PowerPC. It is performed if:</p> - -<ul> - <li>Caller and callee have the calling convention <tt>fastcc</tt> or - <tt>cc 10</tt> (GHC call convention).</li> - - <li>The call is a tail call - in tail position (ret immediately follows call - and ret uses value of call or is void).</li> - - <li>Option <tt>-tailcallopt</tt> is enabled.</li> - - <li>Platform specific constraints are met.</li> -</ul> - -<p>x86/x86-64 constraints:</p> - -<ul> - <li>No variable argument lists are used.</li> - - <li>On x86-64 when generating GOT/PIC code only module-local calls (visibility - = hidden or protected) are supported.</li> -</ul> - -<p>PowerPC constraints:</p> - -<ul> - <li>No variable argument lists are used.</li> - - <li>No byval parameters are used.</li> - - <li>On ppc32/64 GOT/PIC only module-local calls (visibility = hidden or protected) are supported.</li> -</ul> - -<p>Example:</p> - -<p>Call as <tt>llc -tailcallopt test.ll</tt>.</p> - -<div class="doc_code"> -<pre> -declare fastcc i32 @tailcallee(i32 inreg %a1, i32 inreg %a2, i32 %a3, i32 %a4) - -define fastcc i32 @tailcaller(i32 %in1, i32 %in2) { - %l1 = add i32 %in1, %in2 - %tmp = tail call fastcc i32 @tailcallee(i32 %in1 inreg, i32 %in2 inreg, i32 %in1, i32 %l1) - ret i32 %tmp -} -</pre> -</div> - -<p>Implications of <tt>-tailcallopt</tt>:</p> - -<p>To support tail call optimization in situations where the callee has more - arguments than the caller a 'callee pops arguments' convention is used. This - currently causes each <tt>fastcc</tt> call that is not tail call optimized - (because one or more of above constraints are not met) to be followed by a - readjustment of the stack. 
So performance might be worse in such cases.</p> - -</div> -<!-- ======================================================================= --> -<h3> - <a name="sibcallopt">Sibling call optimization</a> -</h3> - -<div> - -<p>Sibling call optimization is a restricted form of tail call optimization. - Unlike tail call optimization described in the previous section, it can be - performed automatically on any tail calls when <tt>-tailcallopt</tt> option - is not specified.</p> - -<p>Sibling call optimization is currently performed on x86/x86-64 when the - following constraints are met:</p> - -<ul> - <li>Caller and callee have the same calling convention. It can be either - <tt>c</tt> or <tt>fastcc</tt>. - - <li>The call is a tail call - in tail position (ret immediately follows call - and ret uses value of call or is void).</li> - - <li>Caller and callee have matching return type or the callee result is not - used. - - <li>If any of the callee arguments are being passed in stack, they must be - available in caller's own incoming argument stack and the frame offsets - must be the same. -</ul> - -<p>Example:</p> -<div class="doc_code"> -<pre> -declare i32 @bar(i32, i32) - -define i32 @foo(i32 %a, i32 %b, i32 %c) { -entry: - %0 = tail call i32 @bar(i32 %a, i32 %b) - ret i32 %0 -} -</pre> -</div> - -</div> -<!-- ======================================================================= --> -<h3> - <a name="x86">The X86 backend</a> -</h3> - -<div> - -<p>The X86 code generator lives in the <tt>lib/Target/X86</tt> directory. This - code generator is capable of targeting a variety of x86-32 and x86-64 - processors, and includes support for ISA extensions such as MMX and SSE.</p> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="x86_tt">X86 Target Triples supported</a> -</h4> - -<div> - -<p>The following are the known target triples that are supported by the X86 - backend. This is not an exhaustive list, and it would be useful to add those - that people test.</p> - -<ul> - <li><b>i686-pc-linux-gnu</b> — Linux</li> - - <li><b>i386-unknown-freebsd5.3</b> — FreeBSD 5.3</li> - - <li><b>i686-pc-cygwin</b> — Cygwin on Win32</li> - - <li><b>i686-pc-mingw32</b> — MingW on Win32</li> - - <li><b>i386-pc-mingw32msvc</b> — MingW crosscompiler on Linux</li> - - <li><b>i686-apple-darwin*</b> — Apple Darwin on X86</li> - - <li><b>x86_64-unknown-linux-gnu</b> — Linux</li> -</ul> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="x86_cc">X86 Calling Conventions supported</a> -</h4> - - -<div> - -<p>The following target-specific calling conventions are known to backend:</p> - -<ul> -<li><b>x86_StdCall</b> — stdcall calling convention seen on Microsoft - Windows platform (CC ID = 64).</li> -<li><b>x86_FastCall</b> — fastcall calling convention seen on Microsoft - Windows platform (CC ID = 65).</li> -<li><b>x86_ThisCall</b> — Similar to X86_StdCall. Passes first argument - in ECX, others via stack. Callee is responsible for stack cleaning. This - convention is used by MSVC by default for methods in its ABI - (CC ID = 70).</li> -</ul> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="x86_memory">Representing X86 addressing modes in MachineInstrs</a> -</h4> - -<div> - -<p>The x86 has a very flexible way of accessing memory. 
It is capable of - forming memory addresses of the following expression directly in integer - instructions (which use ModR/M addressing):</p> - -<div class="doc_code"> -<pre> -SegmentReg: Base + [1,2,4,8] * IndexReg + Disp32 -</pre> -</div> - -<p>In order to represent this, LLVM tracks no less than 5 operands for each - memory operand of this form. This means that the "load" form of - '<tt>mov</tt>' has the following <tt>MachineOperand</tt>s in this order:</p> - -<div class="doc_code"> -<pre> -Index: 0 | 1 2 3 4 5 -Meaning: DestReg, | BaseReg, Scale, IndexReg, Displacement Segment -OperandTy: VirtReg, | VirtReg, UnsImm, VirtReg, SignExtImm PhysReg -</pre> -</div> - -<p>Stores, and all other instructions, treat the four memory operands in the - same way and in the same order. If the segment register is unspecified - (regno = 0), then no segment override is generated. "Lea" operations do not - have a segment register specified, so they only have 4 operands for their - memory reference.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="x86_memory">X86 address spaces supported</a> -</h4> - -<div> - -<p>x86 has a feature which provides - the ability to perform loads and stores to different address spaces - via the x86 segment registers. A segment override prefix byte on an - instruction causes the instruction's memory access to go to the specified - segment. LLVM address space 0 is the default address space, which includes - the stack, and any unqualified memory accesses in a program. Address spaces - 1-255 are currently reserved for user-defined code. The GS-segment is - represented by address space 256, while the FS-segment is represented by - address space 257. Other x86 segments have yet to be allocated address space - numbers.</p> - -<p>While these address spaces may seem similar to TLS via the - <tt>thread_local</tt> keyword, and often use the same underlying hardware, - there are some fundamental differences.</p> - -<p>The <tt>thread_local</tt> keyword applies to global variables and - specifies that they are to be allocated in thread-local memory. There are - no type qualifiers involved, and these variables can be pointed to with - normal pointers and accessed with normal loads and stores. - The <tt>thread_local</tt> keyword is target-independent at the LLVM IR - level (though LLVM doesn't yet have implementations of it for some - configurations).<p> - -<p>Special address spaces, in contrast, apply to static types. Every - load and store has a particular address space in its address operand type, - and this is what determines which address space is accessed. - LLVM ignores these special address space qualifiers on global variables, - and does not provide a way to directly allocate storage in them. - At the LLVM IR level, the behavior of these special address spaces depends - in part on the underlying OS or runtime environment, and they are specific - to x86 (and LLVM doesn't yet handle them correctly in some cases).</p> - -<p>Some operating systems and runtime environments use (or may in the future - use) the FS/GS-segment registers for various low-level purposes, so care - should be taken when considering them.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="x86_names">Instruction naming</a> -</h4> - -<div> - -<p>An instruction name consists of the base name, a default operand size, and a - a character per operand with an optional special size. 
For example:</p> - -<div class="doc_code"> -<pre> -ADD8rr -> add, 8-bit register, 8-bit register -IMUL16rmi -> imul, 16-bit register, 16-bit memory, 16-bit immediate -IMUL16rmi8 -> imul, 16-bit register, 16-bit memory, 8-bit immediate -MOVSX32rm16 -> movsx, 32-bit register, 16-bit memory -</pre> -</div> - -</div> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="ppc">The PowerPC backend</a> -</h3> - -<div> - -<p>The PowerPC code generator lives in the lib/Target/PowerPC directory. The - code generation is retargetable to several variations or <i>subtargets</i> of - the PowerPC ISA; including ppc32, ppc64 and altivec.</p> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="ppc_abi">LLVM PowerPC ABI</a> -</h4> - -<div> - -<p>LLVM follows the AIX PowerPC ABI, with two deviations. LLVM uses a PC - relative (PIC) or static addressing for accessing global values, so no TOC - (r2) is used. Second, r31 is used as a frame pointer to allow dynamic growth - of a stack frame. LLVM takes advantage of having no TOC to provide space to - save the frame pointer in the PowerPC linkage area of the caller frame. - Other details of PowerPC ABI can be found at <a href= - "http://developer.apple.com/documentation/DeveloperTools/Conceptual/LowLevelABI/Articles/32bitPowerPC.html" - >PowerPC ABI.</a> Note: This link describes the 32 bit ABI. The 64 bit ABI - is similar except space for GPRs are 8 bytes wide (not 4) and r13 is reserved - for system use.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="ppc_frame">Frame Layout</a> -</h4> - -<div> - -<p>The size of a PowerPC frame is usually fixed for the duration of a - function's invocation. Since the frame is fixed size, all references - into the frame can be accessed via fixed offsets from the stack pointer. The - exception to this is when dynamic alloca or variable sized arrays are - present, then a base pointer (r31) is used as a proxy for the stack pointer - and stack pointer is free to grow or shrink. A base pointer is also used if - llvm-gcc is not passed the -fomit-frame-pointer flag. The stack pointer is - always aligned to 16 bytes, so that space allocated for altivec vectors will - be properly aligned.</p> - -<p>An invocation frame is laid out as follows (low memory at top);</p> - -<table class="layout"> - <tr> - <td>Linkage<br><br></td> - </tr> - <tr> - <td>Parameter area<br><br></td> - </tr> - <tr> - <td>Dynamic area<br><br></td> - </tr> - <tr> - <td>Locals area<br><br></td> - </tr> - <tr> - <td>Saved registers area<br><br></td> - </tr> - <tr style="border-style: none hidden none hidden;"> - <td><br></td> - </tr> - <tr> - <td>Previous Frame<br><br></td> - </tr> -</table> - -<p>The <i>linkage</i> area is used by a callee to save special registers prior - to allocating its own frame. Only three entries are relevant to LLVM. The - first entry is the previous stack pointer (sp), aka link. This allows - probing tools like gdb or exception handlers to quickly scan the frames in - the stack. A function epilog can also use the link to pop the frame from the - stack. The third entry in the linkage area is used to save the return - address from the lr register. Finally, as mentioned above, the last entry is - used to save the previous frame pointer (r31.) 
The entries in the linkage - area are the size of a GPR, thus the linkage area is 24 bytes long in 32 bit - mode and 48 bytes in 64 bit mode.</p> - -<p>32 bit linkage area</p> - -<table class="layout"> - <tr> - <td>0</td> - <td>Saved SP (r1)</td> - </tr> - <tr> - <td>4</td> - <td>Saved CR</td> - </tr> - <tr> - <td>8</td> - <td>Saved LR</td> - </tr> - <tr> - <td>12</td> - <td>Reserved</td> - </tr> - <tr> - <td>16</td> - <td>Reserved</td> - </tr> - <tr> - <td>20</td> - <td>Saved FP (r31)</td> - </tr> -</table> - -<p>64 bit linkage area</p> - -<table class="layout"> - <tr> - <td>0</td> - <td>Saved SP (r1)</td> - </tr> - <tr> - <td>8</td> - <td>Saved CR</td> - </tr> - <tr> - <td>16</td> - <td>Saved LR</td> - </tr> - <tr> - <td>24</td> - <td>Reserved</td> - </tr> - <tr> - <td>32</td> - <td>Reserved</td> - </tr> - <tr> - <td>40</td> - <td>Saved FP (r31)</td> - </tr> -</table> - -<p>The <i>parameter area</i> is used to store arguments being passed to a callee - function. Following the PowerPC ABI, the first few arguments are actually - passed in registers, with the space in the parameter area unused. However, - if there are not enough registers or the callee is a thunk or vararg - function, these register arguments can be spilled into the parameter area. - Thus, the parameter area must be large enough to store all the parameters for - the largest call sequence made by the caller. The size must also be - minimally large enough to spill registers r3-r10. This allows callees blind - to the call signature, such as thunks and vararg functions, enough space to - cache the argument registers. Therefore, the parameter area is minimally 32 - bytes (64 bytes in 64 bit mode.) Also note that since the parameter area is - a fixed offset from the top of the frame, that a callee can access its spilt - arguments using fixed offsets from the stack pointer (or base pointer.)</p> - -<p>Combining the information about the linkage, parameter areas and alignment. A - stack frame is minimally 64 bytes in 32 bit mode and 128 bytes in 64 bit - mode.</p> - -<p>The <i>dynamic area</i> starts out as size zero. If a function uses dynamic - alloca then space is added to the stack, the linkage and parameter areas are - shifted to top of stack, and the new space is available immediately below the - linkage and parameter areas. The cost of shifting the linkage and parameter - areas is minor since only the link value needs to be copied. The link value - can be easily fetched by adding the original frame size to the base pointer. - Note that allocations in the dynamic space need to observe 16 byte - alignment.</p> - -<p>The <i>locals area</i> is where the llvm compiler reserves space for local - variables.</p> - -<p>The <i>saved registers area</i> is where the llvm compiler spills callee - saved registers on entry to the callee.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="ppc_prolog">Prolog/Epilog</a> -</h4> - -<div> - -<p>The llvm prolog and epilog are the same as described in the PowerPC ABI, with - the following exceptions. Callee saved registers are spilled after the frame - is created. This allows the llvm epilog/prolog support to be common with - other targets. The base pointer callee saved register r31 is saved in the - TOC slot of linkage area. 
This simplifies allocation of space for the base - pointer and makes it convenient to locate programatically and during - debugging.</p> - -</div> - -<!-- _______________________________________________________________________ --> -<h4> - <a name="ppc_dynamic">Dynamic Allocation</a> -</h4> - -<div> - -<p><i>TODO - More to come.</i></p> - -</div> - -</div> - -<!-- ======================================================================= --> -<h3> - <a name="ptx">The PTX backend</a> -</h3> - -<div> - -<p>The PTX code generator lives in the lib/Target/PTX directory. It is - currently a work-in-progress, but already supports most of the code - generation functionality needed to generate correct PTX kernels for - CUDA devices.</p> - -<p>The code generator can target PTX 2.0+, and shader model 1.0+. The - PTX ISA Reference Manual is used as the primary source of ISA - information, though an effort is made to make the output of the code - generator match the output of the NVidia nvcc compiler, whenever - possible.</p> - -<p>Code Generator Options:</p> -<table border="1" cellspacing="0"> - <tr> - <th>Option</th> - <th>Description</th> - </tr> - <tr> - <td><code>double</code></td> - <td align="left">If enabled, the map_f64_to_f32 directive is - disabled in the PTX output, allowing native double-precision - arithmetic</td> - </tr> - <tr> - <td><code>no-fma</code></td> - <td align="left">Disable generation of Fused-Multiply Add - instructions, which may be beneficial for some devices</td> - </tr> - <tr> - <td><code>smxy / computexy</code></td> - <td align="left">Set shader model/compute capability to x.y, - e.g. sm20 or compute13</td> - </tr> -</table> - -<p>Working:</p> -<ul> - <li>Arithmetic instruction selection (including combo FMA)</li> - <li>Bitwise instruction selection</li> - <li>Control-flow instruction selection</li> - <li>Function calls (only on SM 2.0+ and no return arguments)</li> - <li>Addresses spaces (0 = global, 1 = constant, 2 = local, 4 = - shared)</li> - <li>Thread synchronization (bar.sync)</li> - <li>Special register reads ([N]TID, [N]CTAID, PMx, CLOCK, etc.)</li> -</ul> - -<p>In Progress:</p> -<ul> - <li>Robust call instruction selection</li> - <li>Stack frame allocation</li> - <li>Device-specific instruction scheduling optimizations</li> -</ul> - - -</div> - -</div> - -<!-- *********************************************************************** --> -<hr> -<address> - <a href="http://jigsaw.w3.org/css-validator/check/referer"><img - src="http://jigsaw.w3.org/css-validator/images/vcss-blue" alt="Valid CSS"></a> - <a href="http://validator.w3.org/check/referer"><img - src="http://www.w3.org/Icons/valid-html401-blue" alt="Valid HTML 4.01"></a> - - <a href="mailto:sabre@nondot.org">Chris Lattner</a><br> - <a href="http://llvm.org/">The LLVM Compiler Infrastructure</a><br> - Last modified: $Date$ -</address> - -</body> -</html> diff --git a/docs/CodeGenerator.rst b/docs/CodeGenerator.rst new file mode 100644 index 0000000000..df04f5cb78 --- /dev/null +++ b/docs/CodeGenerator.rst @@ -0,0 +1,2428 @@ +.. _code_generator: + +.. role:: raw-html(raw) + :format: html + +.. raw:: html + + <style> + .unknown { background-color: #C0C0C0; text-align: center; } + .unknown:before { content: "?" 
} + .no { background-color: #C11B17 } + .no:before { content: "N" } + .partial { background-color: #F88017 } + .yes { background-color: #0F0; } + .yes:before { content: "Y" } + </style> + +========================================== +The LLVM Target-Independent Code Generator +========================================== + +.. contents:: + :local: + +.. warning:: + This is a work in progress. + +Introduction +============ + +The LLVM target-independent code generator is a framework that provides a suite +of reusable components for translating the LLVM internal representation to the +machine code for a specified target---either in assembly form (suitable for a +static compiler) or in binary machine code format (usable for a JIT +compiler). The LLVM target-independent code generator consists of six main +components: + +1. `Abstract target description`_ interfaces which capture important properties + about various aspects of the machine, independently of how they will be used. + These interfaces are defined in ``include/llvm/Target/``. + +2. Classes used to represent the `code being generated`_ for a target. These + classes are intended to be abstract enough to represent the machine code for + *any* target machine. These classes are defined in + ``include/llvm/CodeGen/``. At this level, concepts like "constant pool + entries" and "jump tables" are explicitly exposed. + +3. Classes and algorithms used to represent code as the object file level, the + `MC Layer`_. These classes represent assembly level constructs like labels, + sections, and instructions. At this level, concepts like "constant pool + entries" and "jump tables" don't exist. + +4. `Target-independent algorithms`_ used to implement various phases of native + code generation (register allocation, scheduling, stack frame representation, + etc). This code lives in ``lib/CodeGen/``. + +5. `Implementations of the abstract target description interfaces`_ for + particular targets. These machine descriptions make use of the components + provided by LLVM, and can optionally provide custom target-specific passes, + to build complete code generators for a specific target. Target descriptions + live in ``lib/Target/``. + +6. The target-independent JIT components. The LLVM JIT is completely target + independent (it uses the ``TargetJITInfo`` structure to interface for + target-specific issues. The code for the target-independent JIT lives in + ``lib/ExecutionEngine/JIT``. + +Depending on which part of the code generator you are interested in working on, +different pieces of this will be useful to you. In any case, you should be +familiar with the `target description`_ and `machine code representation`_ +classes. If you want to add a backend for a new target, you will need to +`implement the target description`_ classes for your new target and understand +the `LLVM code representation <LangRef.html>`_. If you are interested in +implementing a new `code generation algorithm`_, it should only depend on the +target-description and machine code representation classes, ensuring that it is +portable. + +Required components in the code generator +----------------------------------------- + +The two pieces of the LLVM code generator are the high-level interface to the +code generator and the set of reusable components that can be used to build +target-specific backends. 
The two most important interfaces (:raw-html:`<tt>` +`TargetMachine`_ :raw-html:`</tt>` and :raw-html:`<tt>` `TargetData`_ +:raw-html:`</tt>`) are the only ones that are required to be defined for a +backend to fit into the LLVM system, but the others must be defined if the +reusable code generator components are going to be used. + +This design has two important implications. The first is that LLVM can support +completely non-traditional code generation targets. For example, the C backend +does not require register allocation, instruction selection, or any of the other +standard components provided by the system. As such, it only implements these +two interfaces, and does its own thing. Note that C backend was removed from the +trunk since LLVM 3.1 release. Another example of a code generator like this is a +(purely hypothetical) backend that converts LLVM to the GCC RTL form and uses +GCC to emit machine code for a target. + +This design also implies that it is possible to design and implement radically +different code generators in the LLVM system that do not make use of any of the +built-in components. Doing so is not recommended at all, but could be required +for radically different targets that do not fit into the LLVM machine +description model: FPGAs for example. + +.. _high-level design of the code generator: + +The high-level design of the code generator +------------------------------------------- + +The LLVM target-independent code generator is designed to support efficient and +quality code generation for standard register-based microprocessors. Code +generation in this model is divided into the following stages: + +1. `Instruction Selection`_ --- This phase determines an efficient way to + express the input LLVM code in the target instruction set. This stage + produces the initial code for the program in the target instruction set, then + makes use of virtual registers in SSA form and physical registers that + represent any required register assignments due to target constraints or + calling conventions. This step turns the LLVM code into a DAG of target + instructions. + +2. `Scheduling and Formation`_ --- This phase takes the DAG of target + instructions produced by the instruction selection phase, determines an + ordering of the instructions, then emits the instructions as :raw-html:`<tt>` + `MachineInstr`_\s :raw-html:`</tt>` with that ordering. Note that we + describe this in the `instruction selection section`_ because it operates on + a `SelectionDAG`_. + +3. `SSA-based Machine Code Optimizations`_ --- This optional stage consists of a + series of machine-code optimizations that operate on the SSA-form produced by + the instruction selector. Optimizations like modulo-scheduling or peephole + optimization work here. + +4. `Register Allocation`_ --- The target code is transformed from an infinite + virtual register file in SSA form to the concrete register file used by the + target. This phase introduces spill code and eliminates all virtual register + references from the program. + +5. `Prolog/Epilog Code Insertion`_ --- Once the machine code has been generated + for the function and the amount of stack space required is known (used for + LLVM alloca's and spill slots), the prolog and epilog code for the function + can be inserted and "abstract stack location references" can be eliminated. + This stage is responsible for implementing optimizations like frame-pointer + elimination and stack packing. + +6. 
`Late Machine Code Optimizations`_ --- Optimizations that operate on "final" + machine code can go here, such as spill code scheduling and peephole + optimizations. + +7. `Code Emission`_ --- The final stage actually puts out the code for the + current function, either in the target assembler format or in machine + code. + +The code generator is based on the assumption that the instruction selector will +use an optimal pattern matching selector to create high-quality sequences of +native instructions. Alternative code generator designs based on pattern +expansion and aggressive iterative peephole optimization are much slower. This +design permits efficient compilation (important for JIT environments) and +aggressive optimization (used when generating code offline) by allowing +components of varying levels of sophistication to be used for any step of +compilation. + +In addition to these stages, target implementations can insert arbitrary +target-specific passes into the flow. For example, the X86 target uses a +special pass to handle the 80x87 floating point stack architecture. Other +targets with unusual requirements can be supported with custom passes as needed. + +Using TableGen for target description +------------------------------------- + +The target description classes require a detailed description of the target +architecture. These target descriptions often have a large amount of common +information (e.g., an ``add`` instruction is almost identical to a ``sub`` +instruction). In order to allow the maximum amount of commonality to be +factored out, the LLVM code generator uses the +`TableGen <TableGenFundamentals.html>`_ tool to describe big chunks of the +target machine, which allows the use of domain-specific and target-specific +abstractions to reduce the amount of repetition. + +As LLVM continues to be developed and refined, we plan to move more and more of +the target description to the ``.td`` form. Doing so gives us a number of +advantages. The most important is that it makes it easier to port LLVM because +it reduces the amount of C++ code that has to be written, and the surface area +of the code generator that needs to be understood before someone can get +something working. Second, it makes it easier to change things. In particular, +if tables and other things are all emitted by ``tblgen``, we only need a change +in one place (``tblgen``) to update all of the targets to a new interface. + +.. _Abstract target description: +.. _target description: + +Target description classes +========================== + +The LLVM target description classes (located in the ``include/llvm/Target`` +directory) provide an abstract description of the target machine independent of +any particular client. These classes are designed to capture the *abstract* +properties of the target (such as the instructions and registers it has), and do +not incorporate any particular pieces of code generation algorithms. + +All of the target description classes (except the :raw-html:`<tt>` `TargetData`_ +:raw-html:`</tt>` class) are designed to be subclassed by the concrete target +implementation, and have virtual methods implemented. To get to these +implementations, the :raw-html:`<tt>` `TargetMachine`_ :raw-html:`</tt>` class +provides accessors that should be implemented by the target. + +.. 
_TargetMachine: + +The ``TargetMachine`` class +--------------------------- + +The ``TargetMachine`` class provides virtual methods that are used to access the +target-specific implementations of the various target description classes via +the ``get*Info`` methods (``getInstrInfo``, ``getRegisterInfo``, +``getFrameInfo``, etc.). This class is designed to be specialized by a concrete +target implementation (e.g., ``X86TargetMachine``) which implements the various +virtual methods. The only required target description class is the +:raw-html:`<tt>` `TargetData`_ :raw-html:`</tt>` class, but if the code +generator components are to be used, the other interfaces should be implemented +as well. + +.. _TargetData: + +The ``TargetData`` class +------------------------ + +The ``TargetData`` class is the only required target description class, and it +is the only class that is not extensible (you cannot derived a new class from +it). ``TargetData`` specifies information about how the target lays out memory +for structures, the alignment requirements for various data types, the size of +pointers in the target, and whether the target is little-endian or +big-endian. + +.. _targetlowering: + +The ``TargetLowering`` class +---------------------------- + +The ``TargetLowering`` class is used by SelectionDAG based instruction selectors +primarily to describe how LLVM code should be lowered to SelectionDAG +operations. Among other things, this class indicates: + +* an initial register class to use for various ``ValueType``\s, + +* which operations are natively supported by the target machine, + +* the return type of ``setcc`` operations, + +* the type to use for shift amounts, and + +* various high-level characteristics, like whether it is profitable to turn + division by a constant into a multiplication sequence + +The ``TargetRegisterInfo`` class +-------------------------------- + +The ``TargetRegisterInfo`` class is used to describe the register file of the +target and any interactions between the registers. + +Registers in the code generator are represented in the code generator by +unsigned integers. Physical registers (those that actually exist in the target +description) are unique small numbers, and virtual registers are generally +large. Note that register ``#0`` is reserved as a flag value. + +Each register in the processor description has an associated +``TargetRegisterDesc`` entry, which provides a textual name for the register +(used for assembly output and debugging dumps) and a set of aliases (used to +indicate whether one register overlaps with another). + +In addition to the per-register description, the ``TargetRegisterInfo`` class +exposes a set of processor specific register classes (instances of the +``TargetRegisterClass`` class). Each register class contains sets of registers +that have the same properties (for example, they are all 32-bit integer +registers). Each SSA virtual register created by the instruction selector has +an associated register class. When the register allocator runs, it replaces +virtual registers with a physical register in the set. + +The target-specific implementations of these classes is auto-generated from a +`TableGen <TableGenFundamentals.html>`_ description of the register file. + +.. _TargetInstrInfo: + +The ``TargetInstrInfo`` class +----------------------------- + +The ``TargetInstrInfo`` class is used to describe the machine instructions +supported by the target. 
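+
+As a quick illustration of how a client reaches these description classes, a
+minimal sketch (assuming an already-constructed ``TargetMachine`` named
+``TM``; these are the ``get*Info`` accessors mentioned above):
+
+.. code-block:: c++
+
+  // Query the abstract target description through the TargetMachine.
+  const TargetInstrInfo *TII = TM.getInstrInfo();        // instruction descriptions
+  const TargetRegisterInfo *TRI = TM.getRegisterInfo();  // register file description
+  const TargetData *TD = TM.getTargetData();             // memory layout rules
+
+  // For example, ask TargetData how wide a pointer is on this target.
+  unsigned PtrSize = TD->getPointerSize();               // 4 on x86-32, 8 on x86-64
+
+The ``TargetInstrInfo`` object obtained this way is described in more detail
+just below.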
It is essentially an array of ``TargetInstrDescriptor`` +objects, each of which describes one instruction the target +supports. Descriptors define things like the mnemonic for the opcode, the number +of operands, the list of implicit register uses and defs, whether the +instruction has certain target-independent properties (accesses memory, is +commutable, etc), and holds any target-specific flags. + +The ``TargetFrameInfo`` class +----------------------------- + +The ``TargetFrameInfo`` class is used to provide information about the stack +frame layout of the target. It holds the direction of stack growth, the known +stack alignment on entry to each function, and the offset to the local area. +The offset to the local area is the offset from the stack pointer on function +entry to the first location where function data (local variables, spill +locations) can be stored. + +The ``TargetSubtarget`` class +----------------------------- + +The ``TargetSubtarget`` class is used to provide information about the specific +chip set being targeted. A sub-target informs code generation of which +instructions are supported, instruction latencies and instruction execution +itinerary; i.e., which processing units are used, in what order, and for how +long. + +The ``TargetJITInfo`` class +--------------------------- + +The ``TargetJITInfo`` class exposes an abstract interface used by the +Just-In-Time code generator to perform target-specific activities, such as +emitting stubs. If a ``TargetMachine`` supports JIT code generation, it should +provide one of these objects through the ``getJITInfo`` method. + +.. _code being generated: +.. _machine code representation: + +Machine code description classes +================================ + +At the high-level, LLVM code is translated to a machine specific representation +formed out of :raw-html:`<tt>` `MachineFunction`_ :raw-html:`</tt>`, +:raw-html:`<tt>` `MachineBasicBlock`_ :raw-html:`</tt>`, and :raw-html:`<tt>` +`MachineInstr`_ :raw-html:`</tt>` instances (defined in +``include/llvm/CodeGen``). This representation is completely target agnostic, +representing instructions in their most abstract form: an opcode and a series of +operands. This representation is designed to support both an SSA representation +for machine code, as well as a register allocated, non-SSA form. + +.. _MachineInstr: + +The ``MachineInstr`` class +-------------------------- + +Target machine instructions are represented as instances of the ``MachineInstr`` +class. This class is an extremely abstract way of representing machine +instructions. In particular, it only keeps track of an opcode number and a set +of operands. + +The opcode number is a simple unsigned integer that only has meaning to a +specific backend. All of the instructions for a target should be defined in the +``*InstrInfo.td`` file for the target. The opcode enum values are auto-generated +from this description. The ``MachineInstr`` class does not have any information +about how to interpret the instruction (i.e., what the semantics of the +instruction are); for that you must refer to the :raw-html:`<tt>` +`TargetInstrInfo`_ :raw-html:`</tt>` class. + +The operands of a machine instruction can be of several different types: a +register reference, a constant integer, a basic block reference, etc. In +addition, a machine operand should be marked as a def or a use of the value +(though only registers are allowed to be defs). 
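+
+A short sketch of how a pass might walk these operands (illustrative only;
+``MI`` is assumed to be a ``MachineInstr`` obtained elsewhere):
+
+.. code-block:: c++
+
+  unsigned NumRegDefs = 0, NumRegUses = 0;
+  for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
+    const MachineOperand &MO = MI.getOperand(i);
+    if (!MO.isReg())
+      continue;                // immediate, basic block reference, etc.
+    if (MO.isDef())
+      ++NumRegDefs;            // register written by this instruction
+    else
+      ++NumRegUses;            // register read by this instruction
+  }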
+ +By convention, the LLVM code generator orders instruction operands so that all +register definitions come before the register uses, even on architectures that +are normally printed in other orders. For example, the SPARC add instruction: +"``add %i1, %i2, %i3``" adds the "%i1", and "%i2" registers and stores the +result into the "%i3" register. In the LLVM code generator, the operands should +be stored as "``%i3, %i1, %i2``": with the destination first. + +Keeping destination (definition) operands at the beginning of the operand list +has several advantages. In particular, the debugging printer will print the +instruction like this: + +.. code-block:: llvm + + %r3 = add %i1, %i2 + +Also if the first operand is a def, it is easier to `create instructions`_ whose +only def is the first operand. + +.. _create instructions: + +Using the ``MachineInstrBuilder.h`` functions +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Machine instructions are created by using the ``BuildMI`` functions, located in +the ``include/llvm/CodeGen/MachineInstrBuilder.h`` file. The ``BuildMI`` +functions make it easy to build arbitrary machine instructions. Usage of the +``BuildMI`` functions look like this: + +.. code-block:: c++ + + // Create a 'DestReg = mov 42' (rendered in X86 assembly as 'mov DestReg, 42') + // instruction. The '1' specifies how many operands will be added. + MachineInstr *MI = BuildMI(X86::MOV32ri, 1, DestReg).addImm(42); + + // Create the same instr, but insert it at the end of a basic block. + MachineBasicBlock &MBB = ... + BuildMI(MBB, X86::MOV32ri, 1, DestReg).addImm(42); + + // Create the same instr, but insert it before a specified iterator point. + MachineBasicBlock::iterator MBBI = ... + BuildMI(MBB, MBBI, X86::MOV32ri, 1, DestReg).addImm(42); + + // Create a 'cmp Reg, 0' instruction, no destination reg. + MI = BuildMI(X86::CMP32ri, 2).addReg(Reg).addImm(0); + + // Create an 'sahf' instruction which takes no operands and stores nothing. + MI = BuildMI(X86::SAHF, 0); + + // Create a self looping branch instruction. + BuildMI(MBB, X86::JNE, 1).addMBB(&MBB); + +The key thing to remember with the ``BuildMI`` functions is that you have to +specify the number of operands that the machine instruction will take. This +allows for efficient memory allocation. You also need to specify if operands +default to be uses of values, not definitions. If you need to add a definition +operand (other than the optional destination register), you must explicitly mark +it as such: + +.. code-block:: c++ + + MI.addReg(Reg, RegState::Define); + +Fixed (preassigned) registers +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +One important issue that the code generator needs to be aware of is the presence +of fixed registers. In particular, there are often places in the instruction +stream where the register allocator *must* arrange for a particular value to be +in a particular register. This can occur due to limitations of the instruction +set (e.g., the X86 can only do a 32-bit divide with the ``EAX``/``EDX`` +registers), or external factors like calling conventions. In any case, the +instruction selector should emit code that copies a virtual register into or out +of a physical register when needed. + +For example, consider this simple LLVM example: + +.. code-block:: llvm + + define i32 @test(i32 %X, i32 %Y) { + %Z = udiv i32 %X, %Y + ret i32 %Z + } + +The X86 instruction selector produces this machine code for the ``div`` and +``ret`` (use "``llc X.bc -march=x86 -print-machineinstrs``" to get this): + +.. 
code-block:: llvm + + ;; Start of div + %EAX = mov %reg1024 ;; Copy X (in reg1024) into EAX + %reg1027 = sar %reg1024, 31 + %EDX = mov %reg1027 ;; Sign extend X into EDX + idiv %reg1025 ;; Divide by Y (in reg1025) + %reg1026 = mov %EAX ;; Read the result (Z) out of EAX + + ;; Start of ret + %EAX = mov %reg1026 ;; 32-bit return value goes in EAX + ret + +By the end of code generation, the register allocator has coalesced the +registers and deleted the resultant identity moves producing the following +code: + +.. code-block:: llvm + + ;; X is in EAX, Y is in ECX + mov %EAX, %EDX + sar %EDX, 31 + idiv %ECX + ret + +This approach is extremely general (if it can handle the X86 architecture, it +can handle anything!) and allows all of the target specific knowledge about the +instruction stream to be isolated in the instruction selector. Note that +physical registers should have a short lifetime for good code generation, and +all physical registers are assumed dead on entry to and exit from basic blocks +(before register allocation). Thus, if you need a value to be live across basic +block boundaries, it *must* live in a virtual register. + +Call-clobbered registers +^^^^^^^^^^^^^^^^^^^^^^^^ + +Some machine instructions, like calls, clobber a large number of physical +registers. Rather than adding ``<def,dead>`` operands for all of them, it is +possible to use an ``MO_RegisterMask`` operand instead. The register mask +operand holds a bit mask of preserved registers, and everything else is +considered to be clobbered by the instruction. + +Machine code in SSA form +^^^^^^^^^^^^^^^^^^^^^^^^ + +``MachineInstr``'s are initially selected in SSA-form, and are maintained in +SSA-form until register allocation happens. For the most part, this is +trivially simple since LLVM is already in SSA form; LLVM PHI nodes become +machine code PHI nodes, and virtual registers are only allowed to have a single +definition. + +After register allocation, machine code is no longer in SSA-form because there +are no virtual registers left in the code. + +.. _MachineBasicBlock: + +The ``MachineBasicBlock`` class +------------------------------- + +The ``MachineBasicBlock`` class contains a list of machine instructions +(:raw-html:`<tt>` `MachineInstr`_ :raw-html:`</tt>` instances). It roughly +corresponds to the LLVM code input to the instruction selector, but there can be +a one-to-many mapping (i.e. one LLVM basic block can map to multiple machine +basic blocks). The ``MachineBasicBlock`` class has a "``getBasicBlock``" method, +which returns the LLVM basic block that it comes from. + +.. _MachineFunction: + +The ``MachineFunction`` class +----------------------------- + +The ``MachineFunction`` class contains a list of machine basic blocks +(:raw-html:`<tt>` `MachineBasicBlock`_ :raw-html:`</tt>` instances). It +corresponds one-to-one with the LLVM function input to the instruction selector. +In addition to a list of basic blocks, the ``MachineFunction`` contains a a +``MachineConstantPool``, a ``MachineFrameInfo``, a ``MachineFunctionInfo``, and +a ``MachineRegisterInfo``. See ``include/llvm/CodeGen/MachineFunction.h`` for +more information. + +``MachineInstr Bundles`` +------------------------ + +LLVM code generator can model sequences of instructions as MachineInstr +bundles. A MI bundle can model a VLIW group / pack which contains an arbitrary +number of parallel instructions. It can also be used to model a sequential list +of instructions (potentially with data dependencies) that cannot be legally +separated (e.g. 
ARM Thumb2 IT blocks). + +Conceptually a MI bundle is a MI with a number of other MIs nested within: + +:: + + -------------- + | Bundle | --------- + -------------- \ + | ---------------- + | | MI | + | ---------------- + | | + | ---------------- + | | MI | + | ---------------- + | | + | ---------------- + | | MI | + | ---------------- + | + -------------- + | Bundle | -------- + -------------- \ + | ---------------- + | | MI | + | ---------------- + | | + | ---------------- + | | MI | + | ---------------- + | | + | ... + | + -------------- + | Bundle | -------- + -------------- \ + | + ... + +MI bundle support does not change the physical representations of +MachineBasicBlock and MachineInstr. All the MIs (including top level and nested +ones) are stored as sequential list of MIs. The "bundled" MIs are marked with +the 'InsideBundle' flag. A top level MI with the special BUNDLE opcode is used +to represent the start of a bundle. It's legal to mix BUNDLE MIs with indiviual +MIs that are not inside bundles nor represent bundles. + +MachineInstr passes should operate on a MI bundle as a single unit. Member +methods have been taught to correctly handle bundles and MIs inside bundles. +The MachineBasicBlock iterator has been modified to skip over bundled MIs to +enforce the bundle-as-a-single-unit concept. An alternative iterator +instr_iterator has been added to MachineBasicBlock to allow passes to iterate +over all of the MIs in a MachineBasicBlock, including those which are nested +inside bundles. The top level BUNDLE instruction must have the correct set of +register MachineOperand's that represent the cumulative inputs and outputs of +the bundled MIs. + +Packing / bundling of MachineInstr's should be done as part of the register +allocation super-pass. More specifically, the pass which determines what MIs +should be bundled together must be done after code generator exits SSA form +(i.e. after two-address pass, PHI elimination, and copy coalescing). Bundles +should only be finalized (i.e. adding BUNDLE MIs and input and output register +MachineOperands) after virtual registers have been rewritten into physical +registers. This requirement eliminates the need to add virtual register operands +to BUNDLE instructions which would effectively double the virtual register def +and use lists. + +.. _MC Layer: + +The "MC" Layer +============== + +The MC Layer is used to represent and process code at the raw machine code +level, devoid of "high level" information like "constant pools", "jump tables", +"global variables" or anything like that. At this level, LLVM handles things +like label names, machine instructions, and sections in the object file. The +code in this layer is used for a number of important purposes: the tail end of +the code generator uses it to write a .s or .o file, and it is also used by the +llvm-mc tool to implement standalone machine code assemblers and disassemblers. + +This section describes some of the important classes. There are also a number +of important subsystems that interact at this layer, they are described later in +this manual. + +.. _MCStreamer: + +The ``MCStreamer`` API +---------------------- + +MCStreamer is best thought of as an assembler API. It is an abstract API which +is *implemented* in different ways (e.g. to output a .s file, output an ELF .o +file, etc) but whose API correspond directly to what you see in a .s file. 
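+
+As a rough illustration of that correspondence, emitting the assembly
+fragment "``foo: .long 42``" through the API might look like the following
+sketch (it assumes an ``MCStreamer &Streamer`` and ``MCContext &Ctx``
+provided by the caller; exact signatures vary between LLVM versions):
+
+.. code-block:: c++
+
+  // foo:
+  MCSymbol *Foo = Ctx.GetOrCreateSymbol(StringRef("foo"));
+  Streamer.EmitLabel(Foo);
+
+  //   .long 42
+  Streamer.EmitIntValue(42, 4);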
+MCStreamer has one method per directive, such as EmitLabel, EmitSymbolAttribute, +SwitchSection, EmitValue (for .byte, .word), etc, which directly correspond to +assembly level directives. It also has an EmitInstruction method, which is used +to output an MCInst to the streamer. + +This API is most important for two clients: the llvm-mc stand-alone assembler is +effectively a parser that parses a line, then invokes a method on MCStreamer. In +the code generator, the `Code Emission`_ phase of the code generator lowers +higher level LLVM IR and Machine* constructs down to the MC layer, emitting +directives through MCStreamer. + +On the implementation side of MCStreamer, there are two major implementations: +one for writing out a .s file (MCAsmStreamer), and one for writing out a .o +file (MCObjectStreamer). MCAsmStreamer is a straight-forward implementation +that prints out a directive for each method (e.g. ``EmitValue -> .byte``), but +MCObjectStreamer implements a full assembler. + +The ``MCContext`` class +----------------------- + +The MCContext class is the owner of a variety of uniqued data structures at the +MC layer, including symbols, sections, etc. As such, this is the class that you +interact with to create symbols and sections. This class can not be subclassed. + +The ``MCSymbol`` class +---------------------- + +The MCSymbol class represents a symbol (aka label) in the assembly file. There +are two interesting kinds of symbols: assembler temporary symbols, and normal +symbols. Assembler temporary symbols are used and processed by the assembler +but are discarded when the object file is produced. The distinction is usually +represented by adding a prefix to the label, for example "L" labels are +assembler temporary labels in MachO. + +MCSymbols are created by MCContext and uniqued there. This means that MCSymbols +can be compared for pointer equivalence to find out if they are the same symbol. +Note that pointer inequality does not guarantee the labels will end up at +different addresses though. It's perfectly legal to output something like this +to the .s file: + +:: + + foo: + bar: + .byte 4 + +In this case, both the foo and bar symbols will have the same address. + +The ``MCSection`` class +----------------------- + +The ``MCSection`` class represents an object-file specific section. It is +subclassed by object file specific implementations (e.g. ``MCSectionMachO``, +``MCSectionCOFF``, ``MCSectionELF``) and these are created and uniqued by +MCContext. The MCStreamer has a notion of the current section, which can be +changed with the SwitchToSection method (which corresponds to a ".section" +directive in a .s file). + +.. _MCInst: + +The ``MCInst`` class +-------------------- + +The ``MCInst`` class is a target-independent representation of an instruction. +It is a simple class (much more so than `MachineInstr`_) that holds a +target-specific opcode and a vector of MCOperands. MCOperand, in turn, is a +simple discriminated union of three cases: 1) a simple immediate, 2) a target +register ID, 3) a symbolic expression (e.g. "``Lfoo-Lbar+42``") as an MCExpr. + +MCInst is the common currency used to represent machine instructions at the MC +layer. It is the type used by the instruction encoder, the instruction printer, +and the type generated by the assembly parser and disassembler. + +.. _Target-independent algorithms: +.. 
_code generation algorithm: + +Target-independent code generation algorithms +============================================= + +This section documents the phases described in the `high-level design of the +code generator`_. It explains how they work and some of the rationale behind +their design. + +.. _Instruction Selection: +.. _instruction selection section: + +Instruction Selection +--------------------- + +Instruction Selection is the process of translating LLVM code presented to the +code generator into target-specific machine instructions. There are several +well-known ways to do this in the literature. LLVM uses a SelectionDAG based +instruction selector. + +Portions of the DAG instruction selector are generated from the target +description (``*.td``) files. Our goal is for the entire instruction selector +to be generated from these ``.td`` files, though currently there are still +things that require custom C++ code. + +.. _SelectionDAG: + +Introduction to SelectionDAGs +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The SelectionDAG provides an abstraction for code representation in a way that +is amenable to instruction selection using automatic techniques +(e.g. dynamic-programming based optimal pattern matching selectors). It is also +well-suited to other phases of code generation; in particular, instruction +scheduling (SelectionDAG's are very close to scheduling DAGs post-selection). +Additionally, the SelectionDAG provides a host representation where a large +variety of very-low-level (but target-independent) `optimizations`_ may be +performed; ones which require extensive information about the instructions +efficiently supported by the target. + +The SelectionDAG is a Directed-Acyclic-Graph whose nodes are instances of the +``SDNode`` class. The primary payload of the ``SDNode`` is its operation code +(Opcode) that indicates what operation the node performs and the operands to the +operation. The various operation node types are described at the top of the +``include/llvm/CodeGen/SelectionDAGNodes.h`` file. + +Although most operations define a single value, each node in the graph may +define multiple values. For example, a combined div/rem operation will define +both the dividend and the remainder. Many other situations require multiple +values as well. Each node also has some number of operands, which are edges to +the node defining the used value. Because nodes may define multiple values, +edges are represented by instances of the ``SDValue`` class, which is a +``<SDNode, unsigned>`` pair, indicating the node and result value being used, +respectively. Each value produced by an ``SDNode`` has an associated ``MVT`` +(Machine Value Type) indicating what the type of the value is. + +SelectionDAGs contain two different kinds of values: those that represent data +flow and those that represent control flow dependencies. Data values are simple +edges with an integer or floating point value type. Control edges are +represented as "chain" edges which are of type ``MVT::Other``. These edges +provide an ordering between nodes that have side effects (such as loads, stores, +calls, returns, etc). All nodes that have side effects should take a token +chain as input and produce a new one as output. By convention, token chain +inputs are always operand #0, and chain results are always the last value +produced by an operation. + +A SelectionDAG has designated "Entry" and "Root" nodes. The Entry node is +always a marker node with an Opcode of ``ISD::EntryToken``. 
The Root node is +the final side-effecting node in the token chain. For example, in a single basic +block function it would be the return node. + +One important concept for SelectionDAGs is the notion of a "legal" vs. +"illegal" DAG. A legal DAG for a target is one that only uses supported +operations and supported types. On a 32-bit PowerPC, for example, a DAG with a +value of type i1, i8, i16, or i64 would be illegal, as would a DAG that uses a +SREM or UREM operation. The `legalize types`_ and `legalize operations`_ phases +are responsible for turning an illegal DAG into a legal DAG. + +SelectionDAG Instruction Selection Process +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +SelectionDAG-based instruction selection consists of the following steps: + +#. `Build initial DAG`_ --- This stage performs a simple translation from the + input LLVM code to an illegal SelectionDAG. + +#. `Optimize SelectionDAG`_ --- This stage performs simple optimizations on the + SelectionDAG to simplify it, and recognize meta instructions (like rotates + and ``div``/``rem`` pairs) for targets that support these meta operations. + This makes the resultant code more efficient and the `select instructions + from DAG`_ phase (below) simpler. + +#. `Legalize SelectionDAG Types`_ --- This stage transforms SelectionDAG nodes + to eliminate any types that are unsupported on the target. + +#. `Optimize SelectionDAG`_ --- The SelectionDAG optimizer is run to clean up + redundancies exposed by type legalization. + +#. `Legalize SelectionDAG Ops`_ --- This stage transforms SelectionDAG nodes to + eliminate any operations that are unsupported on the target. + +#. `Optimize SelectionDAG`_ --- The SelectionDAG optimizer is run to eliminate + inefficiencies introduced by operation legalization. + +#. `Select instructions from DAG`_ --- Finally, the target instruction selector + matches the DAG operations to target instructions. This process translates + the target-independent input DAG into another DAG of target instructions. + +#. `SelectionDAG Scheduling and Formation`_ --- The last phase assigns a linear + order to the instructions in the target-instruction DAG and emits them into + the MachineFunction being compiled. This step uses traditional prepass + scheduling techniques. + +After all of these steps are complete, the SelectionDAG is destroyed and the +rest of the code generation passes are run. + +One great way to visualize what is going on here is to take advantage of a few +LLC command line options. The following options pop up a window displaying the +SelectionDAG at specific times (if you only get errors printed to the console +while using this, you probably `need to configure your +system <ProgrammersManual.html#ViewGraph>`_ to add support for it). + +* ``-view-dag-combine1-dags`` displays the DAG after being built, before the + first optimization pass. + +* ``-view-legalize-dags`` displays the DAG before Legalization. + +* ``-view-dag-combine2-dags`` displays the DAG before the second optimization + pass. + +* ``-view-isel-dags`` displays the DAG before the Select phase. + +* ``-view-sched-dags`` displays the DAG before Scheduling. + +The ``-view-sunit-dags`` displays the Scheduler's dependency graph. This graph +is based on the final SelectionDAG, with nodes that must be scheduled together +bundled into a single scheduling-unit node, and with immediate operands and +other nodes that aren't relevant for scheduling omitted. + +.. 
_Build initial DAG: + +Initial SelectionDAG Construction +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The initial SelectionDAG is na\ :raw-html:`ï`\ vely peephole expanded from +the LLVM input by the ``SelectionDAGLowering`` class in the +``lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp`` file. The intent of this pass +is to expose as much low-level, target-specific details to the SelectionDAG as +possible. This pass is mostly hard-coded (e.g. an LLVM ``add`` turns into an +``SDNode add`` while a ``getelementptr`` is expanded into the obvious +arithmetic). This pass requires target-specific hooks to lower calls, returns, +varargs, etc. For these features, the :raw-html:`<tt>` `TargetLowering`_ +:raw-html:`</tt>` interface is used. + +.. _legalize types: +.. _Legalize SelectionDAG Types: +.. _Legalize SelectionDAG Ops: + +SelectionDAG LegalizeTypes Phase +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The Legalize phase is in charge of converting a DAG to only use the types that +are natively supported by the target. + +There are two main ways of converting values of unsupported scalar types to +values of supported types: converting small types to larger types ("promoting"), +and breaking up large integer types into smaller ones ("expanding"). For +example, a target might require that all f32 values are promoted to f64 and that +all i1/i8/i16 values are promoted to i32. The same target might require that +all i64 values be expanded into pairs of i32 values. These changes can insert +sign and zero extensions as needed to make sure that the final code has the same +behavior as the input. + +There are two main ways of converting values of unsupported vector types to +value of supported types: splitting vector types, multiple times if necessary, +until a legal type is found, and extending vector types by adding elements to +the end to round them out to legal types ("widening"). If a vector gets split +all the way down to single-element parts with no supported vector type being +found, the elements are converted to scalars ("scalarizing"). + +A target implementation tells the legalizer which types are supported (and which +register class to use for them) by calling the ``addRegisterClass`` method in +its TargetLowering constructor. + +.. _legalize operations: +.. _Legalizer: + +SelectionDAG Legalize Phase +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The Legalize phase is in charge of converting a DAG to only use the operations +that are natively supported by the target. + +Targets often have weird constraints, such as not supporting every operation on +every supported datatype (e.g. X86 does not support byte conditional moves and +PowerPC does not support sign-extending loads from a 16-bit memory location). +Legalize takes care of this by open-coding another sequence of operations to +emulate the operation ("expansion"), by promoting one type to a larger type that +supports the operation ("promotion"), or by using a target-specific hook to +implement the legalization ("custom"). + +A target implementation tells the legalizer which operations are not supported +(and which of the above three actions to take) by calling the +``setOperationAction`` method in its ``TargetLowering`` constructor. + +Prior to the existence of the Legalize passes, we required that every target +`selector`_ supported and handled every operator and type even if they are not +natively supported. 
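+
+Both of these hooks, ``addRegisterClass`` for types and ``setOperationAction``
+for operations, are normally invoked from the target's ``TargetLowering``
+constructor. The following is an invented sketch for a hypothetical ``Foo``
+target rather than any real backend's configuration:
+
+.. code-block:: c++
+
+  // 'Foo' and 'GPRRegClass' are placeholder names for this sketch.
+  // Register classes tell the type legalizer which types are legal.
+  addRegisterClass(MVT::i32, &Foo::GPRRegClass);
+
+  // Operation actions tell the operation legalizer what it must rewrite.
+  setOperationAction(ISD::SREM,   MVT::i32, Expand);  // no hardware remainder
+  setOperationAction(ISD::CTPOP,  MVT::i32, Expand);  // no popcount instruction
+  setOperationAction(ISD::SELECT, MVT::i32, Custom);  // lowered by a target hook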
The introduction of the Legalize phases allows all of the +canonicalization patterns to be shared across targets, and makes it very easy to +optimize the canonicalized code because it is still in the form of a DAG. + +.. _optimizations: +.. _Optimize SelectionDAG: +.. _selector: + +SelectionDAG Optimization Phase: the DAG Combiner +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The SelectionDAG optimization phase is run multiple times for code generation, +immediately after the DAG is built and once after each legalization. The first +run of the pass allows the initial code to be cleaned up (e.g. performing +optimizations that depend on knowing that the operators have restricted type +inputs). Subsequent runs of the pass clean up the messy code generated by the +Legalize passes, which allows Legalize to be very simple (it can focus on making +code legal instead of focusing on generating *good* and legal code). + +One important class of optimizations performed is optimizing inserted sign and +zero extension instructions. We currently use ad-hoc techniques, but could move +to more rigorous techniques in the future. Here are some good papers on the +subject: + +"`Widening integer arithmetic <http://www.eecs.harvard.edu/~nr/pubs/widen-abstract.html>`_" :raw-html:`<br>` +Kevin Redwine and Norman Ramsey :raw-html:`<br>` +International Conference on Compiler Construction (CC) 2004 + +"`Effective sign extension elimination <http://portal.acm.org/citation.cfm?doid=512529.512552>`_" :raw-html:`<br>` +Motohiro Kawahito, Hideaki Komatsu, and Toshio Nakatani :raw-html:`<br>` +Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design +and Implementation. + +.. _Select instructions from DAG: + +SelectionDAG Select Phase +^^^^^^^^^^^^^^^^^^^^^^^^^ + +The Select phase is the bulk of the target-specific code for instruction +selection. This phase takes a legal SelectionDAG as input, pattern matches the +instructions supported by the target to this DAG, and produces a new DAG of +target code. For example, consider the following LLVM fragment: + +.. code-block:: llvm + + %t1 = fadd float %W, %X + %t2 = fmul float %t1, %Y + %t3 = fadd float %t2, %Z + +This LLVM code corresponds to a SelectionDAG that looks basically like this: + +.. code-block:: llvm + + (fadd:f32 (fmul:f32 (fadd:f32 W, X), Y), Z) + +If a target supports floating point multiply-and-add (FMA) operations, one of +the adds can be merged with the multiply. On the PowerPC, for example, the +output of the instruction selector might look like this DAG: + +:: + + (FMADDS (FADDS W, X), Y, Z) + +The ``FMADDS`` instruction is a ternary instruction that multiplies its first +two operands and adds the third (as single-precision floating-point numbers). +The ``FADDS`` instruction is a simple binary single-precision add instruction. +To perform this pattern match, the PowerPC backend includes the following +instruction definitions: + +:: + + def FMADDS : AForm_1<59, 29, + (ops F4RC:$FRT, F4RC:$FRA, F4RC:$FRC, F4RC:$FRB), + "fmadds $FRT, $FRA, $FRC, $FRB", + [(set F4RC:$FRT, (fadd (fmul F4RC:$FRA, F4RC:$FRC), + F4RC:$FRB))]>; + def FADDS : AForm_2<59, 21, + (ops F4RC:$FRT, F4RC:$FRA, F4RC:$FRB), + "fadds $FRT, $FRA, $FRB", + [(set F4RC:$FRT, (fadd F4RC:$FRA, F4RC:$FRB))]>; + +The portion of the instruction definition in bold indicates the pattern used to +match the instruction. The DAG operators (like ``fmul``/``fadd``) are defined +in the ``include/llvm/Target/TargetSelectionDAG.td`` file. 
" ``F4RC``" is the +register class of the input and result values. + +The TableGen DAG instruction selector generator reads the instruction patterns +in the ``.td`` file and automatically builds parts of the pattern matching code +for your target. It has the following strengths: + +* At compiler-compiler time, it analyzes your instruction patterns and tells you + if your patterns make sense or not. + +* It can handle arbitrary constraints on operands for the pattern match. In + particular, it is straight-forward to say things like "match any immediate + that is a 13-bit sign-extended value". For examples, see the ``immSExt16`` + and related ``tblgen`` classes in the PowerPC backend. + +* It knows several important identities for the patterns defined. For example, + it knows that addition is commutative, so it allows the ``FMADDS`` pattern + above to match "``(fadd X, (fmul Y, Z))``" as well as "``(fadd (fmul X, Y), + Z)``", without the target author having to specially handle this case. + +* It has a full-featured type-inferencing system. In particular, you should + rarely have to explicitly tell the system what type parts of your patterns + are. In the ``FMADDS`` case above, we didn't have to tell ``tblgen`` that all + of the nodes in the pattern are of type 'f32'. It was able to infer and + propagate this knowledge from the fact that ``F4RC`` has type 'f32'. + +* Targets can define their own (and rely on built-in) "pattern fragments". + Pattern fragments are chunks of reusable patterns that get inlined into your + patterns during compiler-compiler time. For example, the integer "``(not + x)``" operation is actually defined as a pattern fragment that expands as + "``(xor x, -1)``", since the SelectionDAG does not have a native '``not``' + operation. Targets can define their own short-hand fragments as they see fit. + See the definition of '``not``' and '``ineg``' for examples. + +* In addition to instructions, targets can specify arbitrary patterns that map + to one or more instructions using the 'Pat' class. For example, the PowerPC + has no way to load an arbitrary integer immediate into a register in one + instruction. To tell tblgen how to do this, it defines: + + :: + + // Arbitrary immediate support. Implement in terms of LIS/ORI. + def : Pat<(i32 imm:$imm), + (ORI (LIS (HI16 imm:$imm)), (LO16 imm:$imm))>; + + If none of the single-instruction patterns for loading an immediate into a + register match, this will be used. This rule says "match an arbitrary i32 + immediate, turning it into an ``ORI`` ('or a 16-bit immediate') and an ``LIS`` + ('load 16-bit immediate, where the immediate is shifted to the left 16 bits') + instruction". To make this work, the ``LO16``/``HI16`` node transformations + are used to manipulate the input immediate (in this case, take the high or low + 16-bits of the immediate). + +* While the system does automate a lot, it still allows you to write custom C++ + code to match special cases if there is something that is hard to + express. + +While it has many strengths, the system currently has some limitations, +primarily because it is a work in progress and is not yet finished: + +* Overall, there is no way to define or match SelectionDAG nodes that define + multiple values (e.g. ``SMUL_LOHI``, ``LOAD``, ``CALL``, etc). This is the + biggest reason that you currently still *have to* write custom C++ code + for your instruction selector. + +* There is no great way to support matching complex addressing modes yet. 
In + the future, we will extend pattern fragments to allow them to define multiple + values (e.g. the four operands of the `X86 addressing mode`_, which are + currently matched with custom C++ code). In addition, we'll extend fragments + so that a fragment can match multiple different patterns. + +* We don't automatically infer flags like ``isStore``/``isLoad`` yet. + +* We don't automatically generate the set of supported registers and operations + for the `Legalizer`_ yet. + +* We don't have a way of tying in custom legalized nodes yet. + +Despite these limitations, the instruction selector generator is still quite +useful for most of the binary and logical operations in typical instruction +sets. If you run into any problems or can't figure out how to do something, +please let Chris know! + +.. _Scheduling and Formation: +.. _SelectionDAG Scheduling and Formation: + +SelectionDAG Scheduling and Formation Phase +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The scheduling phase takes the DAG of target instructions from the selection +phase and assigns an order. The scheduler can pick an order depending on +various constraints of the machines (i.e. order for minimal register pressure or +try to cover instruction latencies). Once an order is established, the DAG is +converted to a list of :raw-html:`<tt>` `MachineInstr`_\s :raw-html:`</tt>` and +the SelectionDAG is destroyed. + +Note that this phase is logically separate from the instruction selection phase, +but is tied to it closely in the code because it operates on SelectionDAGs. + +Future directions for the SelectionDAG +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +#. Optional function-at-a-time selection. + +#. Auto-generate entire selector from ``.td`` file. + +.. _SSA-based Machine Code Optimizations: + +SSA-based Machine Code Optimizations +------------------------------------ + +To Be Written + +Live Intervals +-------------- + +Live Intervals are the ranges (intervals) where a variable is *live*. They are +used by some `register allocator`_ passes to determine if two or more virtual +registers which require the same physical register are live at the same point in +the program (i.e., they conflict). When this situation occurs, one virtual +register must be *spilled*. + +Live Variable Analysis +^^^^^^^^^^^^^^^^^^^^^^ + +The first step in determining the live intervals of variables is to calculate +the set of registers that are immediately dead after the instruction (i.e., the +instruction calculates the value, but it is never used) and the set of registers +that are used by the instruction, but are never used after the instruction +(i.e., they are killed). Live variable information is computed for +each *virtual* register and *register allocatable* physical register +in the function. This is done in a very efficient manner because it uses SSA to +sparsely compute lifetime information for virtual registers (which are in SSA +form) and only has to track physical registers within a block. Before register +allocation, LLVM can assume that physical registers are only live within a +single basic block. This allows it to do a single, local analysis to resolve +physical register lifetimes within each basic block. If a physical register is +not register allocatable (e.g., a stack pointer or condition codes), it is not +tracked. + +Physical registers may be live in to or out of a function. Live in values are +typically arguments in registers. Live out values are typically return values in +registers. 
Live in values are marked as such, and are given a dummy "defining" +instruction during live intervals analysis. If the last basic block of a +function is a ``return``, then it's marked as using all live out values in the +function. + +``PHI`` nodes need to be handled specially, because the calculation of the live +variable information from a depth first traversal of the CFG of the function +won't guarantee that a virtual register used by the ``PHI`` node is defined +before it's used. When a ``PHI`` node is encountered, only the definition is +handled, because the uses will be handled in other basic blocks. + +For each ``PHI`` node of the current basic block, we simulate an assignment at +the end of the current basic block and traverse the successor basic blocks. If a +successor basic block has a ``PHI`` node and one of the ``PHI`` node's operands +is coming from the current basic block, then the variable is marked as *alive* +within the current basic block and all of its predecessor basic blocks, until +the basic block with the defining instruction is encountered. + +Live Intervals Analysis +^^^^^^^^^^^^^^^^^^^^^^^ + +We now have the information available to perform the live intervals analysis and +build the live intervals themselves. We start off by numbering the basic blocks +and machine instructions. We then handle the "live-in" values. These are in +physical registers, so the physical register is assumed to be killed by the end +of the basic block. Live intervals for virtual registers are computed for some +ordering of the machine instructions ``[1, N]``. A live interval is an interval +``[i, j)``, where ``1 >= i >= j > N``, for which a variable is live. + +.. note:: + More to come... + +.. _Register Allocation: +.. _register allocator: + +Register Allocation +------------------- + +The *Register Allocation problem* consists in mapping a program +:raw-html:`<b><tt>` P\ :sub:`v`\ :raw-html:`</tt></b>`, that can use an unbounded +number of virtual registers, to a program :raw-html:`<b><tt>` P\ :sub:`p`\ +:raw-html:`</tt></b>` that contains a finite (possibly small) number of physical +registers. Each target architecture has a different number of physical +registers. If the number of physical registers is not enough to accommodate all +the virtual registers, some of them will have to be mapped into memory. These +virtuals are called *spilled virtuals*. + +How registers are represented in LLVM +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +In LLVM, physical registers are denoted by integer numbers that normally range +from 1 to 1023. To see how this numbering is defined for a particular +architecture, you can read the ``GenRegisterNames.inc`` file for that +architecture. For instance, by inspecting +``lib/Target/X86/X86GenRegisterInfo.inc`` we see that the 32-bit register +``EAX`` is denoted by 43, and the MMX register ``MM0`` is mapped to 65. + +Some architectures contain registers that share the same physical location. A +notable example is the X86 platform. For instance, in the X86 architecture, the +registers ``EAX``, ``AX`` and ``AL`` share the first eight bits. These physical +registers are marked as *aliased* in LLVM. Given a particular architecture, you +can check which registers are aliased by inspecting its ``RegisterInfo.td`` +file. Moreover, the class ``MCRegAliasIterator`` enumerates all the physical +registers aliased to a register. + +Physical registers, in LLVM, are grouped in *Register Classes*. 
Elements in the
+same register class are functionally equivalent, and can be used
+interchangeably. Each virtual register can only be mapped to physical registers
+of a particular class. For instance, in the X86 architecture, some virtuals can
+only be allocated to 8 bit registers. A register class is described by
+``TargetRegisterClass`` objects. To discover if a virtual register is
+compatible with a given physical register, code like the following can be used:
+
+.. code-block:: c++
+
+  bool RegMapping_Fer::compatible_class(MachineFunction &mf,
+                                        unsigned v_reg,
+                                        unsigned p_reg) {
+    assert(TargetRegisterInfo::isPhysicalRegister(p_reg) &&
+           "Target register must be physical");
+    const TargetRegisterClass *trc = mf.getRegInfo().getRegClass(v_reg);
+    return trc->contains(p_reg);
+  }
+
+Sometimes, mostly for debugging purposes, it is useful to change the number of
+physical registers available in the target architecture. This must be done
+statically, inside the ``TargetRegisterInfo.td`` file. Just ``grep`` for
+``RegisterClass``, the last parameter of which is a list of registers.
+Commenting some of them out is one simple way to avoid them being used. A more
+polite way is to explicitly exclude some registers from the *allocation order*.
+See the definition of the ``GR8`` register class in
+``lib/Target/X86/X86RegisterInfo.td`` for an example of this.
+
+Virtual registers are also denoted by integer numbers. Contrary to physical
+registers, different virtual registers never share the same number. Whereas
+physical registers are statically defined in a ``TargetRegisterInfo.td`` file
+and cannot be created by the application developer, that is not the case with
+virtual registers. In order to create new virtual registers, use the method
+``MachineRegisterInfo::createVirtualRegister()``. This method will return a new
+virtual register. Use an ``IndexedMap<Foo, VirtReg2IndexFunctor>`` to hold
+information per virtual register. If you need to enumerate all virtual
+registers, use the function ``TargetRegisterInfo::index2VirtReg()`` to find the
+virtual register numbers:
+
+.. code-block:: c++
+
+  for (unsigned i = 0, e = MRI->getNumVirtRegs(); i != e; ++i) {
+    unsigned VirtReg = TargetRegisterInfo::index2VirtReg(i);
+    stuff(VirtReg);
+  }
+
+Before register allocation, the operands of an instruction are mostly virtual
+registers, although physical registers may also be used. In order to check if a
+given machine operand is a register, use the boolean function
+``MachineOperand::isReg()``. To obtain the integer code of a register, use
+``MachineOperand::getReg()``. An instruction may define or use a register. For
+instance, ``ADD reg:1026 := reg:1025 reg:1024`` defines register 1026, and uses
+registers 1025 and 1024. Given a register operand, the method
+``MachineOperand::isUse()`` informs if that register is being used by the
+instruction. The method ``MachineOperand::isDef()`` informs if that register is
+being defined.
+
+We will call physical registers present in the LLVM bitcode before register
+allocation *pre-colored registers*. Pre-colored registers are used in many
+different situations, for instance, to pass parameters of function calls, and
+to store results of particular instructions. There are two types of pre-colored
+registers: the ones *implicitly* defined, and those *explicitly*
+defined. Explicitly defined registers are normal operands, and can be accessed
+with ``MachineInstr::getOperand(int).getReg()``.
In order to check which +registers are implicitly defined by an instruction, use the +``TargetInstrInfo::get(opcode)::ImplicitDefs``, where ``opcode`` is the opcode +of the target instruction. One important difference between explicit and +implicit physical registers is that the latter are defined statically for each +instruction, whereas the former may vary depending on the program being +compiled. For example, an instruction that represents a function call will +always implicitly define or use the same set of physical registers. To read the +registers implicitly used by an instruction, use +``TargetInstrInfo::get(opcode)::ImplicitUses``. Pre-colored registers impose +constraints on any register allocation algorithm. The register allocator must +make sure that none of them are overwritten by the values of virtual registers +while still alive. + +Mapping virtual registers to physical registers +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +There are two ways to map virtual registers to physical registers (or to memory +slots). The first way, that we will call *direct mapping*, is based on the use +of methods of the classes ``TargetRegisterInfo``, and ``MachineOperand``. The +second way, that we will call *indirect mapping*, relies on the ``VirtRegMap`` +class in order to insert loads and stores sending and getting values to and from +memory. + +The direct mapping provides more flexibility to the developer of the register +allocator; however, it is more error prone, and demands more implementation +work. Basically, the programmer will have to specify where load and store +instructions should be inserted in the target function being compiled in order +to get and store values in memory. To assign a physical register to a virtual +register present in a given operand, use ``MachineOperand::setReg(p_reg)``. To +insert a store instruction, use ``TargetInstrInfo::storeRegToStackSlot(...)``, +and to insert a load instruction, use ``TargetInstrInfo::loadRegFromStackSlot``. + +The indirect mapping shields the application developer from the complexities of +inserting load and store instructions. In order to map a virtual register to a +physical one, use ``VirtRegMap::assignVirt2Phys(vreg, preg)``. In order to map +a certain virtual register to memory, use +``VirtRegMap::assignVirt2StackSlot(vreg)``. This method will return the stack +slot where ``vreg``'s value will be located. If it is necessary to map another +virtual register to the same stack slot, use +``VirtRegMap::assignVirt2StackSlot(vreg, stack_location)``. One important point +to consider when using the indirect mapping, is that even if a virtual register +is mapped to memory, it still needs to be mapped to a physical register. This +physical register is the location where the virtual register is supposed to be +found before being stored or after being reloaded. + +If the indirect strategy is used, after all the virtual registers have been +mapped to physical registers or stack slots, it is necessary to use a spiller +object to place load and store instructions in the code. Every virtual that has +been mapped to a stack slot will be stored to memory after been defined and will +be loaded before being used. The implementation of the spiller tries to recycle +load/store instructions, avoiding unnecessary instructions. For an example of +how to invoke the spiller, see ``RegAllocLinearScan::runOnMachineFunction`` in +``lib/CodeGen/RegAllocLinearScan.cpp``. 
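+
+As a concrete illustration of the indirect mapping described above, the
+following is a minimal sketch (not code from the LLVM tree, and assuming the
+relevant CodeGen headers for ``MachineRegisterInfo``, ``TargetRegisterInfo``
+and ``VirtRegMap`` are already included) of a loop that records an allocation
+decision for every virtual register in a ``VirtRegMap``. The ``shouldSpill()``
+and ``pickPhysReg()`` helpers are hypothetical placeholders for a real
+allocation policy; the spiller is still expected to insert the actual load and
+store instructions afterwards.
+
+.. code-block:: c++
+
+  // Hypothetical policy hooks; a real allocator would implement these.
+  bool shouldSpill(unsigned VReg);
+  unsigned pickPhysReg(unsigned VReg);
+
+  void mapVirtuals(MachineRegisterInfo &MRI, VirtRegMap &VRM) {
+    for (unsigned i = 0, e = MRI.getNumVirtRegs(); i != e; ++i) {
+      unsigned VReg = TargetRegisterInfo::index2VirtReg(i);
+      if (shouldSpill(VReg)) {
+        // The returned stack slot is where VReg's value will live in memory.
+        // VReg still needs a physical register around each use of the value.
+        int Slot = VRM.assignVirt2StackSlot(VReg);
+        (void)Slot;
+      } else {
+        // Keep VReg in a register for its whole lifetime.
+        VRM.assignVirt2Phys(VReg, pickPhysReg(VReg));
+      }
+    }
+  }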
+
+Handling two address instructions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+With very rare exceptions (e.g., function calls), the LLVM machine code
+instructions are three address instructions. That is, each instruction is
+expected to define at most one register, and to use at most two registers.
+However, some architectures use two address instructions. In this case, the
+defined register is also one of the used registers. For instance, an
+instruction such as ``ADD %EAX, %EBX`` in X86 is actually equivalent to
+``%EAX = %EAX + %EBX``.
+
+In order to produce correct code, LLVM must convert three address instructions
+that represent two address instructions into true two address instructions.
+LLVM provides the pass ``TwoAddressInstructionPass`` for this specific purpose.
+It must be run before register allocation takes place. After its execution, the
+resulting code may no longer be in SSA form. This happens, for instance, in
+situations where an instruction such as ``%a = ADD %b %c`` is converted to two
+instructions such as:
+
+::
+
+  %a = MOVE %b
+  %a = ADD %a %c
+
+Notice that, internally, the second instruction is represented as ``ADD
+%a[def/use] %c``. I.e., the register operand ``%a`` is both used and defined by
+the instruction.
+
+The SSA deconstruction phase
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+An important transformation that happens during register allocation is called
+the *SSA Deconstruction Phase*. The SSA form simplifies many analyses that are
+performed on the control flow graph of programs. However, traditional
+instruction sets do not implement PHI instructions. Thus, in order to generate
+executable code, compilers must replace PHI instructions with other instructions
+that preserve their semantics.
+
+There are many ways in which PHI instructions can safely be removed from the
+target code. The most traditional PHI deconstruction algorithm replaces PHI
+instructions with copy instructions. That is the strategy adopted by LLVM. The
+SSA deconstruction algorithm is implemented in
+``lib/CodeGen/PHIElimination.cpp``. In order to invoke this pass, the identifier
+``PHIEliminationID`` must be marked as required in the code of the register
+allocator.
+
+Instruction folding
+^^^^^^^^^^^^^^^^^^^
+
+*Instruction folding* is an optimization performed during register allocation
+that removes unnecessary copy instructions. For instance, a sequence of
+instructions such as:
+
+::
+
+  %EBX = LOAD %mem_address
+  %EAX = COPY %EBX
+
+can be safely substituted by the single instruction:
+
+::
+
+  %EAX = LOAD %mem_address
+
+Instructions can be folded with the
+``TargetInstrInfo::foldMemoryOperand(...)`` method. Care must be taken when
+folding instructions; a folded instruction can be quite different from the
+original instruction. See ``LiveIntervals::addIntervalsForSpills`` in
+``lib/CodeGen/LiveIntervalAnalysis.cpp`` for an example of its use.
+
+Built in register allocators
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The LLVM infrastructure provides the application developer with four different
+register allocators:
+
+* *Fast* --- This register allocator is the default for debug builds. It
+  allocates registers on a basic block level, attempting to keep values in
+  registers and reusing registers as appropriate.
+
+* *Basic* --- This is an incremental approach to register allocation. Live
+  ranges are assigned to registers one at a time in an order that is driven by
+  heuristics.
+  Since code can be rewritten on-the-fly during allocation, this framework
+  allows interesting allocators to be developed as extensions. It is not itself
+  a production register allocator but is a potentially useful stand-alone mode
+  for triaging bugs and as a performance baseline.
+
+* *Greedy* --- *The default allocator*. This is a highly tuned implementation of
+  the *Basic* allocator that incorporates global live range splitting. This
+  allocator works hard to minimize the cost of spill code.
+
+* *PBQP* --- A Partitioned Boolean Quadratic Programming (PBQP) based register
+  allocator. This allocator works by constructing a PBQP problem representing
+  the register allocation problem under consideration, solving this using a PBQP
+  solver, and mapping the solution back to a register assignment.
+
+The type of register allocator used in ``llc`` can be chosen with the command
+line option ``-regalloc=...``:
+
+.. code-block:: bash
+
+  $ llc -regalloc=basic file.bc -o ba.s
+  $ llc -regalloc=fast file.bc -o fa.s
+  $ llc -regalloc=pbqp file.bc -o pbqp.s
+
+.. _Prolog/Epilog Code Insertion:
+
+Prolog/Epilog Code Insertion
+----------------------------
+
+Compact Unwind
+^^^^^^^^^^^^^^
+
+Throwing an exception requires *unwinding* out of a function. The information on
+how to unwind a given function is traditionally expressed in DWARF unwind
+(a.k.a. frame) info. But that format was originally developed for debuggers to
+backtrace, and each Frame Description Entry (FDE) requires ~20-30 bytes per
+function. There is also the cost of mapping from an address in a function to the
+corresponding FDE at runtime. An alternative unwind encoding is called *compact
+unwind* and requires just 4 bytes per function.
+
+The compact unwind encoding is a 32-bit value, which is encoded in an
+architecture-specific way. It specifies which registers to restore and from
+where, and how to unwind out of the function. When the linker creates a final
+linked image, it will create a ``__TEXT,__unwind_info`` section. This section is
+a small and fast way for the runtime to access unwind info for any given
+function. If we emit compact unwind info for the function, that compact unwind
+info will be encoded in the ``__TEXT,__unwind_info`` section. If we emit DWARF
+unwind info, the ``__TEXT,__unwind_info`` section will contain the offset of the
+FDE in the ``__TEXT,__eh_frame`` section in the final linked image.
+
+For X86, there are three modes for the compact unwind encoding:
+
+*Function with a Frame Pointer (``EBP`` or ``RBP``)*
+  ``EBP/RBP``-based frame, where ``EBP/RBP`` is pushed onto the stack
+  immediately after the return address, then ``ESP/RSP`` is moved to
+  ``EBP/RBP``. Thus to unwind, ``ESP/RSP`` is restored with the current
+  ``EBP/RBP`` value, then ``EBP/RBP`` is restored by popping the stack, and the
+  return is done by popping the stack once more into the PC. All non-volatile
+  registers that need to be restored must have been saved in a small range on
+  the stack, from ``EBP-4`` to ``EBP-1020`` (``RBP-8`` to
+  ``RBP-1020``). The offset (divided by 4 in 32-bit mode and 8 in 64-bit mode)
+  is encoded in bits 16-23 (mask: ``0x00FF0000``).
The registers saved are + encoded in bits 0-14 (mask: ``0x00007FFF``) as five 3-bit entries from the + following table: + + ============== ============= =============== + Compact Number i386 Register x86-64 Register + ============== ============= =============== + 1 ``EBX`` ``RBX`` + 2 ``ECX`` ``R12`` + 3 ``EDX`` ``R13`` + 4 ``EDI`` ``R14`` + 5 ``ESI`` ``R15`` + 6 ``EBP`` ``RBP`` + ============== ============= =============== + +*Frameless with a Small Constant Stack Size (``EBP`` or ``RBP`` is not used as a frame pointer)* + To return, a constant (encoded in the compact unwind encoding) is added to the + ``ESP/RSP``. Then the return is done by popping the stack into the PC. All + non-volatile registers that need to be restored must have been saved on the + stack immediately after the return address. The stack size (divided by 4 in + 32-bit mode and 8 in 64-bit mode) is encoded in bits 16-23 (mask: + ``0x00FF0000``). There is a maximum stack size of 1024 bytes in 32-bit mode + and 2048 in 64-bit mode. The number of registers saved is encoded in bits 9-12 + (mask: ``0x00001C00``). Bits 0-9 (mask: ``0x000003FF``) contain which + registers were saved and their order. (See the + ``encodeCompactUnwindRegistersWithoutFrame()`` function in + ``lib/Target/X86FrameLowering.cpp`` for the encoding algorithm.) + +*Frameless with a Large Constant Stack Size (``EBP`` or ``RBP`` is not used as a frame pointer)* + This case is like the "Frameless with a Small Constant Stack Size" case, but + the stack size is too large to encode in the compact unwind encoding. Instead + it requires that the function contains "``subl $nnnnnn, %esp``" in its + prolog. The compact encoding contains the offset to the ``$nnnnnn`` value in + the function in bits 9-12 (mask: ``0x00001C00``). + +.. _Late Machine Code Optimizations: + +Late Machine Code Optimizations +------------------------------- + +.. note:: + + To Be Written + +.. _Code Emission: + +Code Emission +------------- + +The code emission step of code generation is responsible for lowering from the +code generator abstractions (like `MachineFunction`_, `MachineInstr`_, etc) down +to the abstractions used by the MC layer (`MCInst`_, `MCStreamer`_, etc). This +is done with a combination of several different classes: the (misnamed) +target-independent AsmPrinter class, target-specific subclasses of AsmPrinter +(such as SparcAsmPrinter), and the TargetLoweringObjectFile class. + +Since the MC layer works at the level of abstraction of object files, it doesn't +have a notion of functions, global variables etc. Instead, it thinks about +labels, directives, and instructions. A key class used at this time is the +MCStreamer class. This is an abstract API that is implemented in different ways +(e.g. to output a .s file, output an ELF .o file, etc) that is effectively an +"assembler API". MCStreamer has one method per directive, such as EmitLabel, +EmitSymbolAttribute, SwitchSection, etc, which directly correspond to assembly +level directives. + +If you are interested in implementing a code generator for a target, there are +three important things that you have to implement for your target: + +#. First, you need a subclass of AsmPrinter for your target. This class + implements the general lowering process converting MachineFunction's into MC + label constructs. The AsmPrinter base class provides a number of useful + methods and routines, and also allows you to override the lowering process in + some important ways. 
You should get much of the lowering for free if you are + implementing an ELF, COFF, or MachO target, because the + TargetLoweringObjectFile class implements much of the common logic. + +#. Second, you need to implement an instruction printer for your target. The + instruction printer takes an `MCInst`_ and renders it to a raw_ostream as + text. Most of this is automatically generated from the .td file (when you + specify something like "``add $dst, $src1, $src2``" in the instructions), but + you need to implement routines to print operands. + +#. Third, you need to implement code that lowers a `MachineInstr`_ to an MCInst, + usually implemented in "<target>MCInstLower.cpp". This lowering process is + often target specific, and is responsible for turning jump table entries, + constant pool indices, global variable addresses, etc into MCLabels as + appropriate. This translation layer is also responsible for expanding pseudo + ops used by the code generator into the actual machine instructions they + correspond to. The MCInsts that are generated by this are fed into the + instruction printer or the encoder. + +Finally, at your choosing, you can also implement an subclass of MCCodeEmitter +which lowers MCInst's into machine code bytes and relocations. This is +important if you want to support direct .o file emission, or would like to +implement an assembler for your target. + +VLIW Packetizer +--------------- + +In a Very Long Instruction Word (VLIW) architecture, the compiler is responsible +for mapping instructions to functional-units available on the architecture. To +that end, the compiler creates groups of instructions called *packets* or +*bundles*. The VLIW packetizer in LLVM is a target-independent mechanism to +enable the packetization of machine instructions. + +Mapping from instructions to functional units +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Instructions in a VLIW target can typically be mapped to multiple functional +units. During the process of packetizing, the compiler must be able to reason +about whether an instruction can be added to a packet. This decision can be +complex since the compiler has to examine all possible mappings of instructions +to functional units. Therefore to alleviate compilation-time complexity, the +VLIW packetizer parses the instruction classes of a target and generates tables +at compiler build time. These tables can then be queried by the provided +machine-independent API to determine if an instruction can be accommodated in a +packet. + +How the packetization tables are generated and used +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The packetizer reads instruction classes from a target's itineraries and creates +a deterministic finite automaton (DFA) to represent the state of a packet. A DFA +consists of three major elements: inputs, states, and transitions. The set of +inputs for the generated DFA represents the instruction being added to a +packet. The states represent the possible consumption of functional units by +instructions in a packet. In the DFA, transitions from one state to another +occur on the addition of an instruction to an existing packet. If there is a +legal mapping of functional units to instructions, then the DFA contains a +corresponding transition. The absence of a transition indicates that a legal +mapping does not exist and that the instruction cannot be added to the packet. 
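+
+As a rough sketch of how a target might drive these generated tables (using the
+three ``DFAPacketizer`` functions summarized in the next paragraph), the greedy
+loop below closes the current packet whenever the DFA reports that no legal
+functional-unit mapping is left for the next instruction. The ``emitPacket()``
+helper and the ``Packet`` container are hypothetical, and a real packetizer
+also has to respect dependences between instructions.
+
+.. code-block:: c++
+
+  #include "llvm/CodeGen/DFAPacketizer.h"
+  #include "llvm/CodeGen/MachineBasicBlock.h"
+  #include <vector>
+  using namespace llvm;
+
+  // Hypothetical helper that finalizes a packet (e.g. bundles its contents).
+  void emitPacket(std::vector<MachineInstr *> &Packet);
+
+  void packetizeBlock(MachineBasicBlock &MBB, DFAPacketizer &DFA) {
+    std::vector<MachineInstr *> Packet;
+    DFA.clearResources();                  // start with an empty packet
+    for (MachineBasicBlock::iterator I = MBB.begin(), E = MBB.end();
+         I != E; ++I) {
+      MachineInstr *MI = &*I;
+      if (!DFA.canReserveResources(MI)) {  // no legal mapping for MI
+        emitPacket(Packet);                // close the current packet
+        Packet.clear();
+        DFA.clearResources();
+      }
+      DFA.reserveResources(MI);            // consume functional units for MI
+      Packet.push_back(MI);
+    }
+    if (!Packet.empty())
+      emitPacket(Packet);
+  }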
+
+To generate tables for a VLIW target, add *Target*\ GenDFAPacketizer.inc as a
+target to the Makefile in the target directory. The exported API provides three
+functions: ``DFAPacketizer::clearResources()``,
+``DFAPacketizer::reserveResources(MachineInstr *MI)``, and
+``DFAPacketizer::canReserveResources(MachineInstr *MI)``. These functions allow
+a target packetizer to add an instruction to an existing packet and to check
+whether an instruction can be added to a packet. See
+``llvm/CodeGen/DFAPacketizer.h`` for more information.
+
+Implementing a Native Assembler
+===============================
+
+Though you're probably reading this because you want to write or maintain a
+compiler backend, LLVM also fully supports building a native assembler. We've
+tried hard to automate the generation of the assembler from the .td files (in
+particular the instruction syntax and encodings), which means that a large part
+of the manual and repetitive data entry can be factored and shared with the
+compiler.
+
+Instruction Parsing
+-------------------
+
+.. note::
+
+  To Be Written
+
+
+Instruction Alias Processing
+----------------------------
+
+Once the instruction is parsed, it enters the MatchInstructionImpl function.
+The MatchInstructionImpl function performs alias processing and then does actual
+matching.
+
+Alias processing is the phase that canonicalizes different lexical forms of the
+same instructions down to one representation. There are several different kinds
+of alias that can be implemented, and they are listed below in the order in
+which they are processed (from simplest/weakest to most complex/powerful).
+Generally you want to use the first alias mechanism that meets the needs of
+your instruction, because it will allow a more concise description.
+
+Mnemonic Aliases
+^^^^^^^^^^^^^^^^
+
+The first phase of alias processing is simple instruction mnemonic remapping for
+classes of instructions which are allowed with two different mnemonics. This
+phase is a simple and unconditional remapping from one input mnemonic to one
+output mnemonic. It isn't possible for this form of alias to look at the
+operands at all, so the remapping must apply for all forms of a given mnemonic.
+Mnemonic aliases are defined simply, for example X86 has:
+
+::
+
+  def : MnemonicAlias<"cbw", "cbtw">;
+  def : MnemonicAlias<"smovq", "movsq">;
+  def : MnemonicAlias<"fldcww", "fldcw">;
+  def : MnemonicAlias<"fucompi", "fucomip">;
+  def : MnemonicAlias<"ud2a", "ud2">;
+
+... and many others. With a MnemonicAlias definition, the mnemonic is remapped
+simply and directly. Though MnemonicAlias's can't look at any aspect of the
+instruction (such as the operands), they can depend on global modes (the same
+ones supported by the matcher), through a Requires clause:
+
+::
+
+  def : MnemonicAlias<"pushf", "pushfq">, Requires<[In64BitMode]>;
+  def : MnemonicAlias<"pushf", "pushfl">, Requires<[In32BitMode]>;
+
+In this example, the mnemonic gets mapped to a different one depending on the
+current instruction set.
+
+Instruction Aliases
+^^^^^^^^^^^^^^^^^^^
+
+The most general phase of alias processing occurs while matching is happening:
+it provides new forms for the matcher to match along with a specific instruction
+to generate. An instruction alias has two parts: the string to match and the
+instruction to generate.
For example: + +:: + + def : InstAlias<"movsx $src, $dst", (MOVSX16rr8W GR16:$dst, GR8 :$src)>; + def : InstAlias<"movsx $src, $dst", (MOVSX16rm8W GR16:$dst, i8mem:$src)>; + def : InstAlias<"movsx $src, $dst", (MOVSX32rr8 GR32:$dst, GR8 :$src)>; + def : InstAlias<"movsx $src, $dst", (MOVSX32rr16 GR32:$dst, GR16 :$src)>; + def : InstAlias<"movsx $src, $dst", (MOVSX64rr8 GR64:$dst, GR8 :$src)>; + def : InstAlias<"movsx $src, $dst", (MOVSX64rr16 GR64:$dst, GR16 :$src)>; + def : InstAlias<"movsx $src, $dst", (MOVSX64rr32 GR64:$dst, GR32 :$src)>; + +This shows a powerful example of the instruction aliases, matching the same +mnemonic in multiple different ways depending on what operands are present in +the assembly. The result of instruction aliases can include operands in a +different order than the destination instruction, and can use an input multiple +times, for example: + +:: + + def : InstAlias<"clrb $reg", (XOR8rr GR8 :$reg, GR8 :$reg)>; + def : InstAlias<"clrw $reg", (XOR16rr GR16:$reg, GR16:$reg)>; + def : InstAlias<"clrl $reg", (XOR32rr GR32:$reg, GR32:$reg)>; + def : InstAlias<"clrq $reg", (XOR64rr GR64:$reg, GR64:$reg)>; + +This example also shows that tied operands are only listed once. In the X86 +backend, XOR8rr has two input GR8's and one output GR8 (where an input is tied +to the output). InstAliases take a flattened operand list without duplicates +for tied operands. The result of an instruction alias can also use immediates +and fixed physical registers which are added as simple immediate operands in the +result, for example: + +:: + + // Fixed Immediate operand. + def : InstAlias<"aad", (AAD8i8 10)>; + + // Fixed register operand. + def : InstAlias<"fcomi", (COM_FIr ST1)>; + + // Simple alias. + def : InstAlias<"fcomi $reg", (COM_FIr RST:$reg)>; + +Instruction aliases can also have a Requires clause to make them subtarget +specific. + +If the back-end supports it, the instruction printer can automatically emit the +alias rather than what's being aliased. It typically leads to better, more +readable code. If it's better to print out what's being aliased, then pass a '0' +as the third parameter to the InstAlias definition. + +Instruction Matching +-------------------- + +.. note:: + + To Be Written + +.. _Implementations of the abstract target description interfaces: +.. _implement the target description: + +Target-specific Implementation Notes +==================================== + +This section of the document explains features or design decisions that are +specific to the code generator for a particular target. First we start with a +table that summarizes what features are supported by each target. + +Target Feature Matrix +--------------------- + +Note that this table does not include the C backend or Cpp backends, since they +do not use the target independent code generator infrastructure. It also +doesn't list features that are not supported fully by any target yet. It +considers a feature to be supported if at least one subtarget supports it. A +feature being supported means that it is useful and works for most cases, it +does not indicate that there are zero known bugs in the implementation. 
Here is +the key: + +:raw-html:`<table border="1" cellspacing="0">` +:raw-html:`<tr>` +:raw-html:`<th>Unknown</th>` +:raw-html:`<th>No support</th>` +:raw-html:`<th>Partial Support</th>` +:raw-html:`<th>Complete Support</th>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td class="unknown"></td>` +:raw-html:`<td class="no"></td>` +:raw-html:`<td class="partial"></td>` +:raw-html:`<td class="yes"></td>` +:raw-html:`</tr>` +:raw-html:`</table>` + +Here is the table: + +:raw-html:`<table width="689" border="1" cellspacing="0">` +:raw-html:`<tr><td></td>` +:raw-html:`<td colspan="13" align="center" style="background-color:#ffc">Target</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<th>Feature</th>` +:raw-html:`<th>ARM</th>` +:raw-html:`<th>CellSPU</th>` +:raw-html:`<th>Hexagon</th>` +:raw-html:`<th>MBlaze</th>` +:raw-html:`<th>MSP430</th>` +:raw-html:`<th>Mips</th>` +:raw-html:`<th>PTX</th>` +:raw-html:`<th>PowerPC</th>` +:raw-html:`<th>Sparc</th>` +:raw-html:`<th>X86</th>` +:raw-html:`<th>XCore</th>` +:raw-html:`</tr>` + +:raw-html:`<tr>` +:raw-html:`<td><a href="#feat_reliable">is generally reliable</a></td>` +:raw-html:`<td class="yes"></td> <!-- ARM -->` +:raw-html:`<td class="no"></td> <!-- CellSPU -->` +:raw-html:`<td class="yes"></td> <!-- Hexagon -->` +:raw-html:`<td class="no"></td> <!-- MBlaze -->` +:raw-html:`<td class="unknown"></td> <!-- MSP430 -->` +:raw-html:`<td class="yes"></td> <!-- Mips -->` +:raw-html:`<td class="no"></td> <!-- PTX -->` +:raw-html:`<td class="yes"></td> <!-- PowerPC -->` +:raw-html:`<td class="yes"></td> <!-- Sparc -->` +:raw-html:`<td class="yes"></td> <!-- X86 -->` +:raw-html:`<td class="unknown"></td> <!-- XCore -->` +:raw-html:`</tr>` + +:raw-html:`<tr>` +:raw-html:`<td><a href="#feat_asmparser">assembly parser</a></td>` +:raw-html:`<td class="no"></td> <!-- ARM -->` +:raw-html:`<td class="no"></td> <!-- CellSPU -->` +:raw-html:`<td class="no"></td> <!-- Hexagon -->` +:raw-html:`<td class="yes"></td> <!-- MBlaze -->` +:raw-html:`<td class="no"></td> <!-- MSP430 -->` +:raw-html:`<td class="no"></td> <!-- Mips -->` +:raw-html:`<td class="no"></td> <!-- PTX -->` +:raw-html:`<td class="no"></td> <!-- PowerPC -->` +:raw-html:`<td class="no"></td> <!-- Sparc -->` +:raw-html:`<td class="yes"></td> <!-- X86 -->` +:raw-html:`<td class="no"></td> <!-- XCore -->` +:raw-html:`</tr>` + +:raw-html:`<tr>` +:raw-html:`<td><a href="#feat_disassembler">disassembler</a></td>` +:raw-html:`<td class="yes"></td> <!-- ARM -->` +:raw-html:`<td class="no"></td> <!-- CellSPU -->` +:raw-html:`<td class="no"></td> <!-- Hexagon -->` +:raw-html:`<td class="yes"></td> <!-- MBlaze -->` +:raw-html:`<td class="no"></td> <!-- MSP430 -->` +:raw-html:`<td class="no"></td> <!-- Mips -->` +:raw-html:`<td class="no"></td> <!-- PTX -->` +:raw-html:`<td class="no"></td> <!-- PowerPC -->` +:raw-html:`<td class="no"></td> <!-- Sparc -->` +:raw-html:`<td class="yes"></td> <!-- X86 -->` +:raw-html:`<td class="no"></td> <!-- XCore -->` +:raw-html:`</tr>` + +:raw-html:`<tr>` +:raw-html:`<td><a href="#feat_inlineasm">inline asm</a></td>` +:raw-html:`<td class="yes"></td> <!-- ARM -->` +:raw-html:`<td class="no"></td> <!-- CellSPU -->` +:raw-html:`<td class="yes"></td> <!-- Hexagon -->` +:raw-html:`<td class="yes"></td> <!-- MBlaze -->` +:raw-html:`<td class="unknown"></td> <!-- MSP430 -->` +:raw-html:`<td class="no"></td> <!-- Mips -->` +:raw-html:`<td class="unknown"></td> <!-- PTX -->` +:raw-html:`<td class="yes"></td> <!-- PowerPC -->` +:raw-html:`<td class="unknown"></td> <!-- Sparc 
-->` +:raw-html:`<td class="yes"></td> <!-- X86 -->` +:raw-html:`<td class="unknown"></td> <!-- XCore -->` +:raw-html:`</tr>` + +:raw-html:`<tr>` +:raw-html:`<td><a href="#feat_jit">jit</a></td>` +:raw-html:`<td class="partial"><a href="#feat_jit_arm">*</a></td> <!-- ARM -->` +:raw-html:`<td class="no"></td> <!-- CellSPU -->` +:raw-html:`<td class="no"></td> <!-- Hexagon -->` +:raw-html:`<td class="no"></td> <!-- MBlaze -->` +:raw-html:`<td class="unknown"></td> <!-- MSP430 -->` +:raw-html:`<td class="yes"></td> <!-- Mips -->` +:raw-html:`<td class="unknown"></td> <!-- PTX -->` +:raw-html:`<td class="yes"></td> <!-- PowerPC -->` +:raw-html:`<td class="unknown"></td> <!-- Sparc -->` +:raw-html:`<td class="yes"></td> <!-- X86 -->` +:raw-html:`<td class="unknown"></td> <!-- XCore -->` +:raw-html:`</tr>` + +:raw-html:`<tr>` +:raw-html:`<td><a href="#feat_objectwrite">.o file writing</a></td>` +:raw-html:`<td class="no"></td> <!-- ARM -->` +:raw-html:`<td class="no"></td> <!-- CellSPU -->` +:raw-html:`<td class="no"></td> <!-- Hexagon -->` +:raw-html:`<td class="yes"></td> <!-- MBlaze -->` +:raw-html:`<td class="no"></td> <!-- MSP430 -->` +:raw-html:`<td class="no"></td> <!-- Mips -->` +:raw-html:`<td class="no"></td> <!-- PTX -->` +:raw-html:`<td class="no"></td> <!-- PowerPC -->` +:raw-html:`<td class="no"></td> <!-- Sparc -->` +:raw-html:`<td class="yes"></td> <!-- X86 -->` +:raw-html:`<td class="no"></td> <!-- XCore -->` +:raw-html:`</tr>` + +:raw-html:`<tr>` +:raw-html:`<td><a hr:raw-html:`ef="#feat_tailcall">tail calls</a></td>` +:raw-html:`<td class="yes"></td> <!-- ARM -->` +:raw-html:`<td class="no"></td> <!-- CellSPU -->` +:raw-html:`<td class="yes"></td> <!-- Hexagon -->` +:raw-html:`<td class="no"></td> <!-- MBlaze -->` +:raw-html:`<td class="unknown"></td> <!-- MSP430 -->` +:raw-html:`<td class="no"></td> <!-- Mips -->` +:raw-html:`<td class="unknown"></td> <!-- PTX -->` +:raw-html:`<td class="yes"></td> <!-- PowerPC -->` +:raw-html:`<td class="unknown"></td> <!-- Sparc -->` +:raw-html:`<td class="yes"></td> <!-- X86 -->` +:raw-html:`<td class="unknown"></td> <!-- XCore -->` +:raw-html:`</tr>` + +:raw-html:`<tr>` +:raw-html:`<td><a href="#feat_segstacks">segmented stacks</a></td>` +:raw-html:`<td class="no"></td> <!-- ARM -->` +:raw-html:`<td class="no"></td> <!-- CellSPU -->` +:raw-html:`<td class="no"></td> <!-- Hexagon -->` +:raw-html:`<td class="no"></td> <!-- MBlaze -->` +:raw-html:`<td class="no"></td> <!-- MSP430 -->` +:raw-html:`<td class="no"></td> <!-- Mips -->` +:raw-html:`<td class="no"></td> <!-- PTX -->` +:raw-html:`<td class="no"></td> <!-- PowerPC -->` +:raw-html:`<td class="no"></td> <!-- Sparc -->` +:raw-html:`<td class="partial"><a href="#feat_segstacks_x86">*</a></td> <!-- X86 -->` +:raw-html:`<td class="no"></td> <!-- XCore -->` +:raw-html:`</tr>` + +:raw-html:`</table>` + +.. _feat_reliable: + +Is Generally Reliable +^^^^^^^^^^^^^^^^^^^^^ + +This box indicates whether the target is considered to be production quality. +This indicates that the target has been used as a static compiler to compile +large amounts of code by a variety of different people and is in continuous use. + +.. _feat_asmparser: + +Assembly Parser +^^^^^^^^^^^^^^^ + +This box indicates whether the target supports parsing target specific .s files +by implementing the MCAsmParser interface. This is required for llvm-mc to be +able to act as a native assembler and is required for inline assembly support in +the native .o file writer. + +.. 
_feat_disassembler: + +Disassembler +^^^^^^^^^^^^ + +This box indicates whether the target supports the MCDisassembler API for +disassembling machine opcode bytes into MCInst's. + +.. _feat_inlineasm: + +Inline Asm +^^^^^^^^^^ + +This box indicates whether the target supports most popular inline assembly +constraints and modifiers. + +.. _feat_jit: + +JIT Support +^^^^^^^^^^^ + +This box indicates whether the target supports the JIT compiler through the +ExecutionEngine interface. + +.. _feat_jit_arm: + +The ARM backend has basic support for integer code in ARM codegen mode, but +lacks NEON and full Thumb support. + +.. _feat_objectwrite: + +.o File Writing +^^^^^^^^^^^^^^^ + +This box indicates whether the target supports writing .o files (e.g. MachO, +ELF, and/or COFF) files directly from the target. Note that the target also +must include an assembly parser and general inline assembly support for full +inline assembly support in the .o writer. + +Targets that don't support this feature can obviously still write out .o files, +they just rely on having an external assembler to translate from a .s file to a +.o file (as is the case for many C compilers). + +.. _feat_tailcall: + +Tail Calls +^^^^^^^^^^ + +This box indicates whether the target supports guaranteed tail calls. These are +calls marked "`tail <LangRef.html#i_call>`_" and use the fastcc calling +convention. Please see the `tail call section more more details`_. + +.. _feat_segstacks: + +Segmented Stacks +^^^^^^^^^^^^^^^^ + +This box indicates whether the target supports segmented stacks. This replaces +the traditional large C stack with many linked segments. It is compatible with +the `gcc implementation <http://gcc.gnu.org/wiki/SplitStacks>`_ used by the Go +front end. + +.. _feat_segstacks_x86: + +Basic support exists on the X86 backend. Currently vararg doesn't work and the +object files are not marked the way the gold linker expects, but simple Go +programs can be built by dragonegg. + +.. _tail call section more more details: + +Tail call optimization +---------------------- + +Tail call optimization, callee reusing the stack of the caller, is currently +supported on x86/x86-64 and PowerPC. It is performed if: + +* Caller and callee have the calling convention ``fastcc`` or ``cc 10`` (GHC + call convention). + +* The call is a tail call - in tail position (ret immediately follows call and + ret uses value of call or is void). + +* Option ``-tailcallopt`` is enabled. + +* Platform specific constraints are met. + +x86/x86-64 constraints: + +* No variable argument lists are used. + +* On x86-64 when generating GOT/PIC code only module-local calls (visibility = + hidden or protected) are supported. + +PowerPC constraints: + +* No variable argument lists are used. + +* No byval parameters are used. + +* On ppc32/64 GOT/PIC only module-local calls (visibility = hidden or protected) + are supported. + +Example: + +Call as ``llc -tailcallopt test.ll``. + +.. code-block:: llvm + + declare fastcc i32 @tailcallee(i32 inreg %a1, i32 inreg %a2, i32 %a3, i32 %a4) + + define fastcc i32 @tailcaller(i32 %in1, i32 %in2) { + %l1 = add i32 %in1, %in2 + %tmp = tail call fastcc i32 @tailcallee(i32 %in1 inreg, i32 %in2 inreg, i32 %in1, i32 %l1) + ret i32 %tmp + } + +Implications of ``-tailcallopt``: + +To support tail call optimization in situations where the callee has more +arguments than the caller a 'callee pops arguments' convention is used. 
This +currently causes each ``fastcc`` call that is not tail call optimized (because +one or more of above constraints are not met) to be followed by a readjustment +of the stack. So performance might be worse in such cases. + +Sibling call optimization +------------------------- + +Sibling call optimization is a restricted form of tail call optimization. +Unlike tail call optimization described in the previous section, it can be +performed automatically on any tail calls when ``-tailcallopt`` option is not +specified. + +Sibling call optimization is currently performed on x86/x86-64 when the +following constraints are met: + +* Caller and callee have the same calling convention. It can be either ``c`` or + ``fastcc``. + +* The call is a tail call - in tail position (ret immediately follows call and + ret uses value of call or is void). + +* Caller and callee have matching return type or the callee result is not used. + +* If any of the callee arguments are being passed in stack, they must be + available in caller's own incoming argument stack and the frame offsets must + be the same. + +Example: + +.. code-block:: llvm + + declare i32 @bar(i32, i32) + + define i32 @foo(i32 %a, i32 %b, i32 %c) { + entry: + %0 = tail call i32 @bar(i32 %a, i32 %b) + ret i32 %0 + } + +The X86 backend +--------------- + +The X86 code generator lives in the ``lib/Target/X86`` directory. This code +generator is capable of targeting a variety of x86-32 and x86-64 processors, and +includes support for ISA extensions such as MMX and SSE. + +X86 Target Triples supported +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The following are the known target triples that are supported by the X86 +backend. This is not an exhaustive list, and it would be useful to add those +that people test. + +* **i686-pc-linux-gnu** --- Linux + +* **i386-unknown-freebsd5.3** --- FreeBSD 5.3 + +* **i686-pc-cygwin** --- Cygwin on Win32 + +* **i686-pc-mingw32** --- MingW on Win32 + +* **i386-pc-mingw32msvc** --- MingW crosscompiler on Linux + +* **i686-apple-darwin*** --- Apple Darwin on X86 + +* **x86_64-unknown-linux-gnu** --- Linux + +X86 Calling Conventions supported +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The following target-specific calling conventions are known to backend: + +* **x86_StdCall** --- stdcall calling convention seen on Microsoft Windows + platform (CC ID = 64). + +* **x86_FastCall** --- fastcall calling convention seen on Microsoft Windows + platform (CC ID = 65). + +* **x86_ThisCall** --- Similar to X86_StdCall. Passes first argument in ECX, + others via stack. Callee is responsible for stack cleaning. This convention is + used by MSVC by default for methods in its ABI (CC ID = 70). + +.. _X86 addressing mode: + +Representing X86 addressing modes in MachineInstrs +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The x86 has a very flexible way of accessing memory. It is capable of forming +memory addresses of the following expression directly in integer instructions +(which use ModR/M addressing): + +:: + + SegmentReg: Base + [1,2,4,8] * IndexReg + Disp32 + +In order to represent this, LLVM tracks no less than 5 operands for each memory +operand of this form. This means that the "load" form of '``mov``' has the +following ``MachineOperand``\s in this order: + +:: + + Index: 0 | 1 2 3 4 5 + Meaning: DestReg, | BaseReg, Scale, IndexReg, Displacement Segment + OperandTy: VirtReg, | VirtReg, UnsImm, VirtReg, SignExtImm PhysReg + +Stores, and all other instructions, treat the four memory operands in the same +way and in the same order. 
If the segment register is unspecified (regno = 0), +then no segment override is generated. "Lea" operations do not have a segment +register specified, so they only have 4 operands for their memory reference. + +X86 address spaces supported +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +x86 has a feature which provides the ability to perform loads and stores to +different address spaces via the x86 segment registers. A segment override +prefix byte on an instruction causes the instruction's memory access to go to +the specified segment. LLVM address space 0 is the default address space, which +includes the stack, and any unqualified memory accesses in a program. Address +spaces 1-255 are currently reserved for user-defined code. The GS-segment is +represented by address space 256, while the FS-segment is represented by address +space 257. Other x86 segments have yet to be allocated address space +numbers. + +While these address spaces may seem similar to TLS via the ``thread_local`` +keyword, and often use the same underlying hardware, there are some fundamental +differences. + +The ``thread_local`` keyword applies to global variables and specifies that they +are to be allocated in thread-local memory. There are no type qualifiers +involved, and these variables can be pointed to with normal pointers and +accessed with normal loads and stores. The ``thread_local`` keyword is +target-independent at the LLVM IR level (though LLVM doesn't yet have +implementations of it for some configurations) + +Special address spaces, in contrast, apply to static types. Every load and store +has a particular address space in its address operand type, and this is what +determines which address space is accessed. LLVM ignores these special address +space qualifiers on global variables, and does not provide a way to directly +allocate storage in them. At the LLVM IR level, the behavior of these special +address spaces depends in part on the underlying OS or runtime environment, and +they are specific to x86 (and LLVM doesn't yet handle them correctly in some +cases). + +Some operating systems and runtime environments use (or may in the future use) +the FS/GS-segment registers for various low-level purposes, so care should be +taken when considering them. + +Instruction naming +^^^^^^^^^^^^^^^^^^ + +An instruction name consists of the base name, a default operand size, and a a +character per operand with an optional special size. For example: + +:: + + ADD8rr -> add, 8-bit register, 8-bit register + IMUL16rmi -> imul, 16-bit register, 16-bit memory, 16-bit immediate + IMUL16rmi8 -> imul, 16-bit register, 16-bit memory, 8-bit immediate + MOVSX32rm16 -> movsx, 32-bit register, 16-bit memory + +The PowerPC backend +------------------- + +The PowerPC code generator lives in the lib/Target/PowerPC directory. The code +generation is retargetable to several variations or *subtargets* of the PowerPC +ISA; including ppc32, ppc64 and altivec. + +LLVM PowerPC ABI +^^^^^^^^^^^^^^^^ + +LLVM follows the AIX PowerPC ABI, with two deviations. LLVM uses a PC relative +(PIC) or static addressing for accessing global values, so no TOC (r2) is +used. Second, r31 is used as a frame pointer to allow dynamic growth of a stack +frame. LLVM takes advantage of having no TOC to provide space to save the frame +pointer in the PowerPC linkage area of the caller frame. Other details of +PowerPC ABI can be found at `PowerPC ABI +<http://developer.apple.com/documentation/DeveloperTools/Conceptual/LowLevelABI/Articles/32bitPowerPC.html>`_\ +. 
Note: This link describes the 32 bit ABI. The 64 bit ABI is similar except +space for GPRs are 8 bytes wide (not 4) and r13 is reserved for system use. + +Frame Layout +^^^^^^^^^^^^ + +The size of a PowerPC frame is usually fixed for the duration of a function's +invocation. Since the frame is fixed size, all references into the frame can be +accessed via fixed offsets from the stack pointer. The exception to this is +when dynamic alloca or variable sized arrays are present, then a base pointer +(r31) is used as a proxy for the stack pointer and stack pointer is free to grow +or shrink. A base pointer is also used if llvm-gcc is not passed the +-fomit-frame-pointer flag. The stack pointer is always aligned to 16 bytes, so +that space allocated for altivec vectors will be properly aligned. + +An invocation frame is laid out as follows (low memory at top): + +:raw-html:`<table border="1" cellspacing="0">` +:raw-html:`<tr>` +:raw-html:`<td>Linkage<br><br></td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>Parameter area<br><br></td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>Dynamic area<br><br></td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>Locals area<br><br></td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>Saved registers area<br><br></td>` +:raw-html:`</tr>` +:raw-html:`<tr style="border-style: none hidden none hidden;">` +:raw-html:`<td><br></td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>Previous Frame<br><br></td>` +:raw-html:`</tr>` +:raw-html:`</table>` + +The *linkage* area is used by a callee to save special registers prior to +allocating its own frame. Only three entries are relevant to LLVM. The first +entry is the previous stack pointer (sp), aka link. This allows probing tools +like gdb or exception handlers to quickly scan the frames in the stack. A +function epilog can also use the link to pop the frame from the stack. The +third entry in the linkage area is used to save the return address from the lr +register. Finally, as mentioned above, the last entry is used to save the +previous frame pointer (r31.) The entries in the linkage area are the size of a +GPR, thus the linkage area is 24 bytes long in 32 bit mode and 48 bytes in 64 +bit mode. 
+ +32 bit linkage area: + +:raw-html:`<table border="1" cellspacing="0">` +:raw-html:`<tr>` +:raw-html:`<td>0</td>` +:raw-html:`<td>Saved SP (r1)</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>4</td>` +:raw-html:`<td>Saved CR</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>8</td>` +:raw-html:`<td>Saved LR</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>12</td>` +:raw-html:`<td>Reserved</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>16</td>` +:raw-html:`<td>Reserved</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>20</td>` +:raw-html:`<td>Saved FP (r31)</td>` +:raw-html:`</tr>` +:raw-html:`</table>` + +64 bit linkage area: + +:raw-html:`<table border="1" cellspacing="0">` +:raw-html:`<tr>` +:raw-html:`<td>0</td>` +:raw-html:`<td>Saved SP (r1)</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>8</td>` +:raw-html:`<td>Saved CR</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>16</td>` +:raw-html:`<td>Saved LR</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>24</td>` +:raw-html:`<td>Reserved</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>32</td>` +:raw-html:`<td>Reserved</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>40</td>` +:raw-html:`<td>Saved FP (r31)</td>` +:raw-html:`</tr>` +:raw-html:`</table>` + +The *parameter area* is used to store arguments being passed to a callee +function. Following the PowerPC ABI, the first few arguments are actually +passed in registers, with the space in the parameter area unused. However, if +there are not enough registers or the callee is a thunk or vararg function, +these register arguments can be spilled into the parameter area. Thus, the +parameter area must be large enough to store all the parameters for the largest +call sequence made by the caller. The size must also be minimally large enough +to spill registers r3-r10. This allows callees blind to the call signature, +such as thunks and vararg functions, enough space to cache the argument +registers. Therefore, the parameter area is minimally 32 bytes (64 bytes in 64 +bit mode.) Also note that since the parameter area is a fixed offset from the +top of the frame, that a callee can access its spilt arguments using fixed +offsets from the stack pointer (or base pointer.) + +Combining the information about the linkage, parameter areas and alignment. A +stack frame is minimally 64 bytes in 32 bit mode and 128 bytes in 64 bit mode. + +The *dynamic area* starts out as size zero. If a function uses dynamic alloca +then space is added to the stack, the linkage and parameter areas are shifted to +top of stack, and the new space is available immediately below the linkage and +parameter areas. The cost of shifting the linkage and parameter areas is minor +since only the link value needs to be copied. The link value can be easily +fetched by adding the original frame size to the base pointer. Note that +allocations in the dynamic space need to observe 16 byte alignment. + +The *locals area* is where the llvm compiler reserves space for local variables. + +The *saved registers area* is where the llvm compiler spills callee saved +registers on entry to the callee. + +Prolog/Epilog +^^^^^^^^^^^^^ + +The llvm prolog and epilog are the same as described in the PowerPC ABI, with +the following exceptions. Callee saved registers are spilled after the frame is +created. This allows the llvm epilog/prolog support to be common with other +targets. 
The base pointer callee saved register r31 is saved in the TOC slot of +linkage area. This simplifies allocation of space for the base pointer and +makes it convenient to locate programatically and during debugging. + +Dynamic Allocation +^^^^^^^^^^^^^^^^^^ + +.. note:: + + TODO - More to come. + +The PTX backend +--------------- + +The PTX code generator lives in the lib/Target/PTX directory. It is currently a +work-in-progress, but already supports most of the code generation functionality +needed to generate correct PTX kernels for CUDA devices. + +The code generator can target PTX 2.0+, and shader model 1.0+. The PTX ISA +Reference Manual is used as the primary source of ISA information, though an +effort is made to make the output of the code generator match the output of the +NVidia nvcc compiler, whenever possible. + +Code Generator Options: + +:raw-html:`<table border="1" cellspacing="0">` +:raw-html:`<tr>` +:raw-html:`<th>Option</th>` +:raw-html:`<th>Description</th>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>``double``</td>` +:raw-html:`<td align="left">If enabled, the map_f64_to_f32 directive is disabled in the PTX output, allowing native double-precision arithmetic</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>``no-fma``</td>` +:raw-html:`<td align="left">Disable generation of Fused-Multiply Add instructions, which may be beneficial for some devices</td>` +:raw-html:`</tr>` +:raw-html:`<tr>` +:raw-html:`<td>``smxy / computexy``</td>` +:raw-html:`<td align="left">Set shader model/compute capability to x.y, e.g. sm20 or compute13</td>` +:raw-html:`</tr>` +:raw-html:`</table>` + +Working: + +* Arithmetic instruction selection (including combo FMA) + +* Bitwise instruction selection + +* Control-flow instruction selection + +* Function calls (only on SM 2.0+ and no return arguments) + +* Addresses spaces (0 = global, 1 = constant, 2 = local, 4 = shared) + +* Thread synchronization (bar.sync) + +* Special register reads ([N]TID, [N]CTAID, PMx, CLOCK, etc.) + +In Progress: + +* Robust call instruction selection + +* Stack frame allocation + +* Device-specific instruction scheduling optimizations diff --git a/docs/subsystems.rst b/docs/subsystems.rst index c4c3b6d595..be33295a15 100644 --- a/docs/subsystems.rst +++ b/docs/subsystems.rst @@ -10,6 +10,7 @@ Subsystem Documentation BitCodeFormat BranchWeightMetadata Bugpoint + CodeGenerator ExceptionHandling LinkTimeOptimization SegmentedStacks @@ -22,9 +23,9 @@ Subsystem Documentation * `Writing an LLVM Backend <WritingAnLLVMBackend.html>`_ Information on how to write LLVM backends for machine targets. - -* `The LLVM Target-Independent Code Generator <CodeGenerator.html>`_ - + +* :ref:`code_generator` + The design and implementation of the LLVM code generator. Useful if you are working on retargetting LLVM to a new architecture, designing a new codegen pass, or enhancing existing components. |