Linpicker-on-Xen Interprocess Communication (IPC)
David A. Wheeler
2008-05-09
Eamon Walsh
2011-06-02

This document describes in detail how Linpicker communicates on top of
Xen.  The goal is to make exactly what happens clear, and to ensure that
nothing has been missed regarding security.

Linpicker involves the interaction of two types of virtual machines (VMs):
* Driver Domain. This is trusted, runs the Linpicker server, and interacts
  directly with the user.  For now it will run on dom0, though in theory it
  could run in a stub domain.
* Client(s).  Each client is responsible for setting up its graphical
  environment, and communicating with the server to express what it would
  LIKE to have displayed.  The server does NOT trust the clients, and
  clients typically do not trust each other either.  From here on, we will
  discuss one client at a time, but with the acknowledgement that there
  often are several.

The Driver Domain and Client VMs communicate using the following channels:
* XenStore.  When a Client VM starts up, the Linpicker server will write
  backend information into XenStore.  The Client VM can then write frontend
  information, including the grant reference IDs that the server needs to map
  and the event channels to use.  Both frontend and backend transition through
  a state diagram until both are in the Connected state.

  A malicious client could write incorrect information into XenStore.  The
  server's state model should be robust against unexpected state changes by
  the frontend.  Protection against incorrect grant reference IDs or event
  channels comes from hypervisor mechanisms.  The server should also detect
  when the Client VM has disconnected, either through VM shutdown or a state
  change away from Connected, and immediately free the associated resources.
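As a concrete illustration of the robustness requirement above, the
following C sketch shows one way the backend could react to frontend state
changes.  The state values mirror Xen's public xenbus.h header; the
backend_react function and the action names are hypothetical helpers
invented for this sketch, not part of Linpicker.  Any transition that is
not explicitly expected is ignored, so a malicious frontend cannot drive
the backend into an unintended state.

```c
/* XenBus connection states, mirroring xen/include/public/io/xenbus.h. */
enum xenbus_state {
    XenbusStateUnknown      = 0,
    XenbusStateInitialising = 1,
    XenbusStateInitWait     = 2,
    XenbusStateInitialised  = 3,
    XenbusStateConnected    = 4,
    XenbusStateClosing      = 5,
    XenbusStateClosed       = 6,
};

/* Action the backend should take after a frontend state change. */
enum backend_action { ACT_IGNORE, ACT_CONNECT, ACT_TEARDOWN };

enum backend_action backend_react(enum xenbus_state backend,
                                  enum xenbus_state frontend)
{
    switch (frontend) {
    case XenbusStateInitialised:
        /* Frontend has published grant refs and event channel; connect
         * only if we were actually waiting for it. */
        return backend == XenbusStateInitWait ? ACT_CONNECT : ACT_IGNORE;
    case XenbusStateClosing:
    case XenbusStateClosed:
    case XenbusStateUnknown:   /* frontend directory vanished (VM gone) */
        /* Release grants, event channels, and other resources now,
         * unless teardown has already completed. */
        return backend == XenbusStateClosed ? ACT_IGNORE : ACT_TEARDOWN;
    default:
        /* Unexpected or out-of-order state change: do nothing. */
        return ACT_IGNORE;
    }
}
```

A real backend would run this decision from a XenStore watch callback on
the frontend's state node.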

* Grant References and Event Channels.  These are the raw primitives used for
  inter-domain communications.  All memory used for Linpicker purposes should
  be allocated by the guest and granted to the Driver Domain (for an exception,
  see "Track Protocol" below).  This includes memory used for the guest
  framebuffers as well as pages used for ring buffers.  These primitives are
  assumed to be robust as they are implemented by the hypervisor.  However, the
  Driver Domain implementation of the gntdev and evtchn devices should be
  robust as well.
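One robustness concern the paragraph above implies is resource
accounting: the hypervisor validates each grant reference when a map is
attempted, but only the Driver Domain can stop a guest from asking it to
map an unbounded number of pages.  The sketch below shows a minimal
per-client accounting check; the MAX_PAGES_PER_CLIENT constant and the
account_grant_map helper are assumptions for illustration, and a real
limit would derive from the largest supported framebuffer.

```c
#include <stddef.h>
#include <errno.h>

/* Hypothetical cap on pages the server will map for one client. */
#define MAX_PAGES_PER_CLIENT 2048

/* Charge nrefs pages against a client's mapping budget before the
 * gntdev map is attempted.  Returns 0 on success, -EINVAL if the
 * request is empty or would exceed the cap.  Note *mapped never
 * exceeds the cap, so the subtraction below cannot underflow. */
int account_grant_map(size_t *mapped, size_t nrefs)
{
    if (nrefs == 0 || nrefs > MAX_PAGES_PER_CLIENT - *mapped)
        return -EINVAL;          /* would exceed the per-client budget */
    *mapped += nrefs;
    return 0;
}
```

On disconnect the server would reset the counter along with unmapping
the pages themselves.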

* Libvchan.  This is a communications library based on a standard Xen ring
  buffer shared via a grant reference.  The server-side libvchan implementation
  needs to be robust against guest behavior such as arbitrarily modifying the
  data in the ring buffer or attempting to exploit race conditions by rapidly
  modifying ring buffer data.  The cases where the ring buffer is filled or
  inaccessible need to be handled gracefully as well.
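The race-condition point above has a standard defense: copy data out of
the shared ring into private memory before validating it, so the guest
cannot rewrite a field between the check and the use.  The sketch below
illustrates the pattern on a small message header; the msg_hdr layout,
MSG_MAX_LEN, and read_header are hypothetical, not the actual Track
protocol wire format.

```c
#include <string.h>
#include <stdint.h>

/* A minimal message header as it might sit in guest-shared memory. */
struct msg_hdr {
    uint32_t type;
    uint32_t len;   /* payload length, fully controlled by the guest */
};

#define MSG_MAX_LEN 4096

/* Snapshot the header into private memory FIRST, then validate the
 * private copy.  Validating in place would be a TOCTOU race: the guest
 * could change len after the check but before the payload copy. */
int read_header(const volatile void *shared, struct msg_hdr *out)
{
    memcpy(out, (const void *)shared, sizeof(*out));  /* snapshot */
    if (out->len > MSG_MAX_LEN)
        return -1;              /* reject before touching the payload */
    return 0;
}
```

All later decisions (how many bytes to copy, where to store them) should
be made from the private copy only, never re-read from the ring.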

* Track protocol.  This is the application protocol used by linpicker_track to
  communicate view information via libvchan to Linpicker server.  Since the
  guest side is untrusted, this protocol needs to be robust against malicious
  behavior.  In particular, the number of views and buffers must be limited,
  because each view and buffer consumes resources on the server side.  The
  attributes of each view also need to be checked, for example
  making sure that the view is within the dimensions of the containing buffer.
  The amount of protocol activity may also need to be limited to reduce the
  potential for denial of service attacks on the Driver Domain.
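The bounds and quota checks described above can be sketched as follows.
The structures, the MAX_VIEWS_PER_CLIENT constant, and the view_ok
function are illustrative assumptions, not the actual Track protocol
messages; the point is the shape of the validation.

```c
#include <stdint.h>

/* Hypothetical per-client quota and message layouts. */
#define MAX_VIEWS_PER_CLIENT 64

struct buffer_info { uint32_t width, height; };          /* server-side */
struct view_req    { uint32_t x, y, width, height; };    /* from guest  */

/* Accept a view only if the client is under its view quota and the view
 * lies entirely within its backing buffer.  The bounds arithmetic is
 * done in 64 bits so a guest cannot wrap the check with large 32-bit
 * coordinates.  Returns 1 to accept, 0 to reject. */
int view_ok(const struct buffer_info *buf, const struct view_req *v,
            unsigned nviews)
{
    if (nviews >= MAX_VIEWS_PER_CLIENT)
        return 0;                               /* quota exhausted */
    if ((uint64_t)v->x + v->width  > buf->width ||
        (uint64_t)v->y + v->height > buf->height)
        return 0;                               /* outside the buffer */
    return 1;
}
```

Rate limiting of protocol messages themselves (the denial-of-service
concern above) would sit in the libvchan read loop, outside this check.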