
Modeling Services

Approach

Rather than model each service as a subclass of the XOS Service class, we associate a set of opaque tuples (attributes) with each service, and create a service-specific View that renders this service to the user. This view understands how to interpret the attributes.

More specifically, we build a view using xoslib, which means we need to extend the server side of xoslib to include this service. It is in fact this server side that "interprets" the service attributes. This raises a trust issue that needs to be addressed, but moving the interpretation into xoslib (or some service-specific library) is easier than adding models to the DB.

In the case of native services (those that do not already have a service-specific controller), we also provide a generic Observer that pulls these tuples out of the DB and distributes them to the instances (slivers) that implement the service, perhaps using a building block mechanism like Syndicate.

So in the end, XOS maintains the persistent state, but all service-specific interpretation of this state happens outside the DB (i.e., in a library built on top, or in a backend instance that parses the configuration attributes it is handed).
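
To make this concrete, here is a minimal sketch (in Python, with hypothetical attribute names) of how a service's state might be stored as opaque name/value tuples and interpreted only by the server-side xoslib code that renders its view:

```python
# Minimal sketch (hypothetical names): a service's configuration is stored
# as opaque name/value attributes. XOS itself treats them as uninterpreted
# strings; only the xoslib view extension for this service knows their meaning.

service_attributes = {
    "replication_factor": "3",        # meaningful only to this service's view
    "cache_ttl_seconds": "300",
    "origin_url": "http://example.com/content",
}

def render_service_view(attributes):
    """Server-side xoslib extension: interpret the opaque tuples for display."""
    replication = int(attributes.get("replication_factor", "1"))
    ttl = int(attributes.get("cache_ttl_seconds", "0"))
    return ("replicates content %d ways, caches for %d seconds, origin %s"
            % (replication, ttl, attributes.get("origin_url", "")))

print(render_service_view(service_attributes))
```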

Data Model

Service

  • Name -- human readable name
  • Slices[ ] -- resource container(s) in which service instances run
  • View -- info needed to implement a view on the service
  • Controller -- info needed to implement an observer plug-in
  • Parameters -- set of Attributes that represent global configuration state
  • Tenants[ ] -- state representing an individual tenant of the service

View

  • Name -- human readable name
  • Type -- iframe, javascript,...
  • Render -- URL that renders view

Tenant

  • Name -- human readable name
  • Tenant -- the service acting as the tenant
  • Provider -- the service being consumed by the tenant
  • Connectivity -- how they connect (shared network, interconnected networks)
  • Parameters -- set of Attributes that represent this tenant (e.g., credentials)

Controller

  • URL of the external controller, if there is one
  • Credentials needed to call the external controller
  • Other info needed to run the observer plug-in

Attributes

  • Set of name/value pairs

Policy

  • Whitelist -- predicate that determines whether or not a given tenant can create a new tenancy.
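
One way to visualize this data model is as a set of plain Python classes. The sketch below uses dataclasses with illustrative field types; the actual XOS models are Django model classes, so the names and types here are assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

Attributes = Dict[str, str]  # set of name/value pairs

@dataclass
class View:
    name: str            # human readable name
    type: str            # "iframe", "javascript", ...
    render: str          # URL that renders the view

@dataclass
class Controller:
    url: str                  # external controller, if there is one
    credentials: Attributes   # needed to call the external controller
    extra: Attributes = field(default_factory=dict)  # other observer info

@dataclass
class Tenant:
    name: str            # human readable name
    tenant: str          # name of the service acting as the tenant
    provider: str        # name of the service being consumed
    connectivity: str    # "public" | "shared-vn" | "bridged-vn" (illustrative)
    parameters: Attributes = field(default_factory=dict)  # e.g., credentials

@dataclass
class Service:
    name: str
    slices: List[str] = field(default_factory=list)       # resource containers
    view: Optional[View] = None
    controller: Optional[Controller] = None
    parameters: Attributes = field(default_factory=dict)  # global config state
    tenants: List[Tenant] = field(default_factory=list)
    whitelist: Callable[[str], bool] = lambda requester: False  # Policy
```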

Observer Support

In the case of legacy services -- those that already have a service controller -- the observer plug-in associated with the service reads the global parameters and per-tenant parameters associated with the service from the DB, and makes the corresponding calls on the existing service controller.
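
As a rough illustration, a legacy-service plug-in might look like the sketch below, which reuses the dataclass sketch from the Data Model section; the endpoint paths and the use of the requests library are assumptions, not part of XOS:

```python
import requests  # assumes the 'requests' package is installed

def sync_legacy_service(service):
    """Hypothetical observer plug-in: push the service's global and
    per-tenant parameters to the existing external controller."""
    auth = (service.controller.credentials.get("user"),
            service.controller.credentials.get("password"))
    # Push global configuration state (endpoint path is an assumption).
    requests.post(service.controller.url + "/config",
                  json=service.parameters, auth=auth)
    # Push each tenant's parameters (e.g., credentials) individually.
    for tenant in service.tenants:
        requests.post(service.controller.url + "/tenants/" + tenant.name,
                      json=tenant.parameters, auth=auth)
```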

In the case of native services -- those that do not have a legacy controller -- we expect to use Syndicate to distribute the global parameters and per-tenant parameters to all the instances running in OpenCloud. The corresponding tuples are likely translated into a config file that is meaningful to those instances, and then written to a well-known place in a shared volume.
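
A hedged sketch of that translation step, with an assumed mount point and file name for the shared volume:

```python
import json
import os

# Assumptions: the shared (Syndicate) volume is already mounted at
# VOLUME_MOUNT, and instances look for the config at a well-known file name.
VOLUME_MOUNT = "/syndicate/config"
CONFIG_FILE = "service.json"

def publish_config(service):
    """Translate the service's parameter tuples into a config file that is
    meaningful to the instances, written to the shared volume."""
    config = {
        "global": service.parameters,
        "tenants": {t.name: t.parameters for t in service.tenants},
    }
    path = os.path.join(VOLUME_MOUNT, CONFIG_FILE)
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    os.chmod(path, 0o444)  # read-only for the instances that consume it
```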

Securely Bootstrapping Services

In addition to distributing configuration state to instances implementing a native service, Syndicate also offers the means to securely bootstrap service instances. Syndicate would offer a way for users to ship program binaries to their slices, as well as a way for the observer to ship configuration state to the slice.

Each slice will receive a private Syndicate volume, and the Observer will be registered on it as a read/write user that creates read-only configuration files. The user can mount the same volume on their local workstation to access the service proper, as well as to put new service code and data into the VM (Figure 1).

Figure 1: The Syndicate shared volume is used both as a mechanism for delivering service configuration from the OpenCloud observer to VMs (red arrows) and as a mechanism for users to upload new service code and download new service state (green arrows).

Composing Services

XOS provides explicit support for composing services, or to be more specific, for service providers to declare that “Service A is a tenant of Service B.” We represent this relationship with the Tenant object (defined above). The object identifies the tenant and provider services, plus how they are connected in the underlying network (see below).

While multi-tenancy is a staple of cloud services, there are typically two assumptions that do not apply in the general case that XOS addresses. The first assumption is that the tenant is a user (e.g., John Smith is a tenant of Amazon’s EC2 service). Tenancy raises additional challenges when the tenant is itself an elastically scalable service: all the service instances must collectively be able to access the provider service. The second assumption is that all services are autonomous -- that is, each is an independently operated service that runs “on” some cloud (e.g., EC2). But XOS is also designed to support services that are “part of” the cloud, which more closely corresponds to the services offered by AWS (as opposed to services that run on AWS). These services build upon each other and are all operated by a single cloud provider (Amazon).

While we could leave these challenges to the individual services to address, XOS provides mechanisms that lower the operational costs of Service A being a tenant of Service B under these more general circumstances. We broadly characterize them as enabling composition in the data plane and in the control plane. The first aspect of service composition is data plane connectivity -- the ability of one service to connect to (and exchange packets with) another service. All services in XOS are connected to one or more virtual networks. These virtual networks are designed to provide isolation, which is the primary role they play in a multi-tenant cloud, but composing two services means that the constituent service instances (i.e., VMs) must be able to communicate with each other. This could happen over the public Internet, but doing so violates the principle of least privilege. There are many deployment scenarios in which “internal” services are composed with each other without offering a publicly reachable interface.

XOS provides three means to interconnect a pair of services (as indicated by the Connectivity field in the Tenant object). The first is the default today: the services communicate over the public Internet. The second is to create a single VN that is shared by two or more slices. Such a shared VN interconnects the union of all VMs belonging to the participating slices. The third leverages OpenVirteX to install the appropriate flow rules in the underlying switches so as to pass packets from one VN to another. The “gateway” between the two VNs is logical -- packets do not traverse a “router process” as they cross from one VN to another.
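
The sketch below illustrates how an observer might dispatch on the Tenant object's Connectivity field; the three option names and the two setup stubs are hypothetical:

```python
def create_shared_vn(a, b):
    """Stub: create one VN shared by both services' slices, interconnecting
    the union of all their VMs."""
    print("shared VN interconnecting %s and %s" % (a, b))

def install_openvirtex_bridge(a, b):
    """Stub: ask OpenVirteX to install flow rules that pass packets between
    the two VNs; the gateway is logical (no router process)."""
    print("logical gateway between the VNs of %s and %s" % (a, b))

def connect_services(tenant):
    """Set up data plane connectivity per the Tenant object."""
    if tenant.connectivity == "public":
        pass  # default today: communicate over the public Internet
    elif tenant.connectivity == "shared-vn":
        create_shared_vn(tenant.tenant, tenant.provider)
    elif tenant.connectivity == "bridged-vn":
        install_openvirtex_bridge(tenant.tenant, tenant.provider)
    else:
        raise ValueError("unknown connectivity: %s" % tenant.connectivity)
```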

The second aspect of service composition is control plane tenancy -- managing the tenancy of one service relative to another. XOS provides mechanisms that address two challenges. First, each instance of Service A (i.e., each VM that implements A) must have the requisite tenancy credentials to access B. For example, if Service A is a tenant of a scalable storage service (B), then each VM in A needs the credentials that allow it to read and write data stored in the VMs of B. Mechanistically, XOS records all the tenancy state corresponding to A being a tenant of B in its Data Model (the Parameters field) and has a means to distribute this state to all the service instances.

Second, one service might need to take some action when one of its tenants changes its instances. For example, if Service A takes advantage of a scalable storage service (B) that mounts volumes in each of the service instances (VMs) that implement A, then Service B needs to be alerted when Service A adds a new VM to one of its slices. Mechanistically, this dependency is explicitly recorded in the XOS Data Model so that when there is a change in the state maintained for Service A, the Controller plug-in for Service B is notified, thereby giving it an opportunity to take service-specific actions (e.g., auto-mount a volume in the newly created VM belonging to Service A).
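
A minimal sketch of this notification path, assuming the dataclass model sketched earlier and a hypothetical hook name:

```python
def notify_controller(provider, vm):
    """Stub: invoke the provider service's Controller plug-in so it can take
    service-specific action (e.g., auto-mount a volume in the new VM)."""
    print("%s notified: tenant added VM %s" % (provider.name, vm))

def on_instances_changed(service_a, all_services, new_vm):
    """When Service A adds a VM to one of its slices, notify the Controller
    plug-in of every service that A is a tenant of."""
    for provider in all_services:
        for tenancy in provider.tenants:
            if tenancy.tenant == service_a.name:
                notify_controller(provider, new_vm)
```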

Deprecated

Material below this line is deprecated. It should appear in one form or another in the above discussion, but has been left tacked on to the end in case we need to recover bits and pieces at some later time.

Tenancy

By establishing conventions about tenancy (leveraging the Tenant object as necessary), XOS can also offer a means for one service to be a tenant of another service. Let service A be a “landlord” service that grants tenancies, and let service B be a “tenant” service. Using the generic Tenant object and the per-service Syndicate volume, creating a tenancy in A for B is a matter of (1) having A’s developers allow B to request a tenancy, (2) OpenCloud approving the tenancy request on behalf of A, and (3) the XOS observer generating and propagating the tenancy state to B’s VMs by serializing it and writing it out as read-only files in B’s shared volume (Figure 1).

Step (1) is solved by A’s developers providing other OpenCloud users a Tenant view, as well as the complementary RESTful API for accessing it programmatically (via xoslib). Step (3) is solved using Syndicate.

Step (2) could be addressed using a per-service whitelisting function that lets OpenCloud evaluate whether or not the landlord service (service A) will allow the requesting service (service B) to have a tenancy. To do so, a landlord service (service A) would maintain a set of ACLs that allow/deny groups of tenants, which are evaluated by the whitelisting function and are kept up-to-date by A’s developers via either the developer view or the xoslib RESTful API. Any service-specific policies such as tenant billing are negotiated off-site, out of band. This means that service A could run a separate billing service that processed B’s payments, and then called back to A’s xoslib RESTful API to alter the ACLs to allow B to create tenancies (Figure 2).
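
A minimal sketch of such a whitelisting function, assuming a simple allow/deny ACL format (the format itself is an assumption):

```python
def make_whitelist(acls):
    """Return a predicate that decides whether a requesting service may
    create a tenancy in the landlord service that owns these ACLs."""
    def whitelist(requesting_service):
        if requesting_service in acls.get("deny", []):
            return False
        return requesting_service in acls.get("allow", [])
    return whitelist

# A's developers keep the ACLs up to date via the developer view or the
# xoslib RESTful API (e.g., after B's off-site payment clears).
service_a_acls = {"allow": ["service-B"], "deny": []}
whitelist = make_whitelist(service_a_acls)
print(whitelist("service-B"))  # True -- B may create a tenancy in A
print(whitelist("service-C"))  # False
```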

Figure 2: Once services A and B have negotiated out-of-band (step 0), service A or some agent of it alters the ACLs in OpenCloud to allow B to request a tenancy (step 1). B then requests the tenancy (2a), and upon verifying the request with A’s whitelist function, XOS generates a tenancy object representing B’s tenancy in A (2b). Subsequently, XOS serializes the tenancy object into a directory hierarchy, which it writes to B’s Syndicate volume (3).