
RYU OpenStack Havana environment HOWTO


Instructions for constructing an OpenStack Havana environment with Ryu using devstack

This document explains how to construct an OpenStack Havana environment with Ryu using devstack.

1. System structure

We assume the following system as the goal.

+--------+
| Client |
+----+---+
     |     Public Network
-----+----+----------
          |                   Private Network
          |      ---+--------------+---------
          |         |              |
      +---+---------+---+  +-------+--------+
      | Controller Node |  | Compute Node   |
      |  Ryu            |  |  Neutron       |
      |  Nova           |  |   ryu-agent    |
      |  Keystone       |  |  Nova          |
      |  Glance         |  |   nova-compute |
      |  Neutron        |  +----------------+
      |  Cinder         |
      |  Horizon        |
      +-----------------+
  • Controller Node:

    Ryu and the OpenStack components run on this node. These include nova-compute, so VMs run on this node too. In addition, traffic from VMs on the compute node to the external network goes through this node.

  • Compute Node:

    The nova-compute service and the agent for the Neutron Ryu plugin run on this node.

  • Client:

    This host is used to confirm communication between a VM and a remote host on the external network.

  • Public Network:

    This network carries traffic between VMs and the external network, and allows the client to access VMs via floating IPs.

  • Private Network:

    This network carries communication between Ryu, OVS, and the OpenStack components. The GRE tunnel that connects the OVS bridges on the two nodes also runs over this network.

We use Ubuntu 12.04 Server on both nodes.

In this document, the following addresses are used for each network and host.

  • Networks

    • Public Network: 192.168.100.0/24
    • Private Network: 192.168.0.0/24
  • Hosts

    • Controller Node:

      • Host name: controller
      • IP address: 192.168.0.10
    • Compute Node:

      • Host name: compute
      • IP address: 192.168.0.11
    • Client:

      • Host name: client
      • IP address: 192.168.100.200
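
Name resolution for these hosts is assumed; if DNS does not provide it, one simple way (a sketch, not strictly required) is to map the names in /etc/hosts on both nodes.

$ sudo vi /etc/hosts

/etc/hosts:

192.168.0.10    controller
192.168.0.11    compute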

This picture illustrates the network components inside each node.

Public Network
----+---------
    |                      Private Network
    |       --+---------------------+-----
    |         |                     |
    |         |                     |
+---|---------|----+  +-------------|----+
| (eth1)    (eth0) |  |           (eth0) |
|   |              |  |                  |
| [br-ex]  ______________                |
|   :     /      GRE  |  \               |
| [br-int]--<qbr>  |  | [br-int]--<qbr>  |
|             |    |  |             |    |
|            VM    |  |            VM    |
+------------------+  +------------------+
    Controller Node          Compute Node

[] OVS bridge
<> Linux bridge
() Network interface
.. IP Routing
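
Once stack.sh (described below) has run, the bridges in this diagram can be inspected with standard tools. A sketch; the exact output depends on your environment:

$ sudo ovs-vsctl show    # OVS bridges: br-int, br-ex and the GRE port
$ brctl show             # Linux bridges: the per-VM <qbr> bridges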

2. Getting and setting up

We assume that installation of Ubuntu 12.04 Server has been completed on both nodes.

Install git and python-dev.

$ sudo apt-get install git python-dev

Download devstack.

$ cd
$ git clone https://github.com/openstack-dev/devstack.git -b stable/havana

2.1 Controller Node

Configure the network interfaces.

$ sudo vi /etc/network/interfaces

/etc/network/interfaces:

auto lo

auto eth0
iface eth0 inet static
        address 192.168.0.10
        netmask 255.255.255.0
        gateway 192.168.0.1
        dns-nameservers 192.168.0.1

auto eth1
iface eth1 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        up ip link set $IFACE promisc on
        down ip link set $IFACE promisc off
        down ifconfig $IFACE down

Restart networking.

$ sudo /etc/init.d/networking restart
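
Note that eth1 is configured without an IP address because local.sh (shown below) will attach it to the br-ex OVS bridge; it only needs to be up and in promiscuous mode. As a quick check after the restart:

$ ip link show eth1    # the flags should include UP and PROMISC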

Configure devstack.

$ cd ~/devstack
$ vi localrc
$ vi local.sh
$ chmod +x local.sh

localrc:

SERVICE_HOST=192.168.0.10
HOST_IP=192.168.0.10
disable_service n-net
enable_service neutron q-svc q-agt q-l3 q-dhcp q-meta q-lbaas
enable_service ryu
FLOATING_RANGE=192.168.100.0/24
PUBLIC_NETWORK_GATEWAY=192.168.100.1
Q_PLUGIN=ryu

Q_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
RYU_API_HOST=$SERVICE_HOST
RYU_OFP_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

RYU_APPS=ryu.app.gre_tunnel,ryu.app.quantum_adapter,ryu.app.rest,ryu.app.rest_conf_switch,ryu.app.rest_tunnel,ryu.app.tunnel_port_updater,ryu.app.rest_quantum

local.sh:

sudo ovs-vsctl --no-wait -- --may-exist add-port br-ex eth1
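
local.sh runs at the end of stack.sh and attaches the physical interface eth1 to the br-ex bridge, connecting the node to the public network. Once stack.sh has completed, the attachment can be confirmed as follows (a sketch):

$ sudo ovs-vsctl list-ports br-ex    # eth1 should appear in the output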

2.2 Compute Node

Configure the network interface.

$ sudo vi /etc/network/interfaces

/etc/network/interfaces:

auto lo

auto eth0
iface eth0 inet static
        address 192.168.0.11
        netmask 255.255.255.0
        gateway 192.168.0.1
        dns-nameservers 192.168.0.1

Restart networking.

$ sudo /etc/init.d/networking restart

Configure devstack.

$ cd ~/devstack
$ vi localrc

localrc:

SERVICE_HOST=192.168.0.10
disable_all_services
enable_service n-cpu neutron q-agt
Q_PLUGIN=ryu

Q_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
RYU_API_HOST=$SERVICE_HOST
RYU_OFP_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

3. Running Ryu and OpenStack

3.1 Starting devstack

Start devstack on the controller node first.

Controller Node:

$ cd ~/devstack
$ ./stack.sh
[snip]

Horizon is now available at http://192.168.0.10/
Keystone is serving at http://192.168.0.10:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: admin
This is your host ip: 192.168.0.10
stack.sh completed in 2459 seconds.
$
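
devstack runs each service in a window of a screen session (named "stack" by default). If anything looks wrong, you can attach to it to inspect the logs of ryu, q-svc and the other services; detach again with Ctrl-a d:

$ screen -x stack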

Then start devstack on the compute node.

Compute Node:

$ cd ~/devstack
$ ./stack.sh

Confirm the status of services.

$ cd ~/devstack
$ . ./openrc admin admin
$ nova-manage service list 2>/dev/null
Binary           Host                                 Zone             Status     State Updated_At
nova-conductor   ryu1                                 internal         enabled    :-)   2013-11-07 10:48:31
nova-compute     ryu1                                 nova             enabled    :-)   2013-11-07 10:48:32
nova-cert        ryu1                                 internal         enabled    :-)   2013-11-07 10:48:27
nova-scheduler   ryu1                                 internal         enabled    :-)   2013-11-07 10:48:30
nova-consoleauth ryu1                                 internal         enabled    :-)   2013-11-07 10:48:32
nova-compute     ryu2                                 nova             enabled    :-)   2013-11-07 10:48:26
$
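
The Neutron agents can be confirmed in a similar way (IDs and hostnames will differ in your environment):

$ neutron agent-list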

3.2 Running a VM

Launch a VM with the following command.

nova boot --flavor <flavor-id> --image <image-id> --nic net-id=<net-id> <VM-name>

Example:

$ . ./openrc demo demo
$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 42 | m1.nano   | 64        | 0    | 0         |      | 1     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 84 | m1.micro  | 128       | 0    | 0         |      | 1     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
$ nova image-list
+--------------------------------------+---------------------------------+--------+--------+
| ID                                   | Name                            | Status | Server |
+--------------------------------------+---------------------------------+--------+--------+
| d25fd382-57e0-433c-9b8e-d6434ea34d89 | cirros-0.3.1-x86_64-uec         | ACTIVE |        |
| 97b03c42-5033-4c40-ac03-995136bedc83 | cirros-0.3.1-x86_64-uec-kernel  | ACTIVE |        |
| a92812c2-c932-48f4-8e43-5c1774053b71 | cirros-0.3.1-x86_64-uec-ramdisk | ACTIVE |        |
+--------------------------------------+---------------------------------+--------+--------+
$ neutron net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| 10af861b-e640-4db8-ad56-a4d398d2b66c | private | ab9772ef-14b8-480a-99f3-fe0696fc979b 10.0.0.0/24 |
| b583ceaa-23e9-480d-af63-a4fcaec54c19 | public  | a4f9e8e2-0777-4a7c-8317-bd35dbf124b0             |
+--------------------------------------+---------+--------------------------------------------------+
$ nova boot --flavor m1.tiny --image d25fd382-57e0-433c-9b8e-d6434ea34d89 --nic net-id=10af861b-e640-4db8-ad56-a4d398d2b66c vm1
$ nova list
+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| c0599ba9-c4d6-4fe8-ab85-847273716f2e | vm1  | ACTIVE | None       | Running     | private=10.0.0.3 |
+--------------------------------------+------+--------+------------+-------------+------------------+
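
If the VM does not reach the ACTIVE status, or to confirm that it booted correctly, its console output can be inspected:

$ nova console-log vm1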

3.3 Security Groups

Set up security groups with the following commands.

neutron security-group-rule-create --protocol icmp <group id>
neutron security-group-rule-create --protocol tcp --port-range-min <from-port> --port-range-max <to-port> <group id>
neutron security-group-rule-create --protocol udp --port-range-min <from-port> --port-range-max <to-port> <group id>

Example:

$ cd devstack
$ . ./openrc demo demo
$ neutron security-group-list
+--------------------------------------+---------+-------------+
| id                                   | name    | description |
+--------------------------------------+---------+-------------+
| f309445b-99ba-48d2-a01f-9046e98f4048 | default | default     |
+--------------------------------------+---------+-------------+
$ neutron security-group-rule-create --protocol icmp f309445b-99ba-48d2-a01f-9046e98f4048
$ neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 f309445b-99ba-48d2-a01f-9046e98f4048
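
The rules added to the group can be confirmed as follows:

$ neutron security-group-rule-list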

3.4 Associating a Floating IP

Associate a floating IP with the VM using the following commands.

neutron floatingip-create <net-id>
neutron floatingip-associate <float-id> <port-id>

Example:

$ neutron floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.100.3                        |
| floating_network_id | b583ceaa-23e9-480d-af63-a4fcaec54c19 |
| id                  | e791538d-f278-4e1e-8ec9-70ab1ad4f1a4 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | ac52df2904e6471d981d5f8a9d3ea675     |
+---------------------+--------------------------------------+
$ neutron port-list
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| 1649e38b-75b9-4d9f-9f57-6da1b41ce85d |      | fa:16:3e:82:b5:2c | {"subnet_id": "ab9772ef-14b8-480a-99f3-fe0696fc979b", "ip_address": "10.0.0.2"} |
| 411892b3-abdc-4950-924b-39916d8c5bff |      | fa:16:3e:a0:17:df | {"subnet_id": "ab9772ef-14b8-480a-99f3-fe0696fc979b", "ip_address": "10.0.0.1"} |
| a97461ef-a9e7-4b73-aced-05262d09b906 |      | fa:16:3e:e8:e0:6e | {"subnet_id": "ab9772ef-14b8-480a-99f3-fe0696fc979b", "ip_address": "10.0.0.3"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
$ neutron floatingip-associate e791538d-f278-4e1e-8ec9-70ab1ad4f1a4 a97461ef-a9e7-4b73-aced-05262d09b906
Associated floatingip e791538d-f278-4e1e-8ec9-70ab1ad4f1a4
$ neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| e791538d-f278-4e1e-8ec9-70ab1ad4f1a4 | 10.0.0.3         | 192.168.100.3       | a97461ef-a9e7-4b73-aced-05262d09b906 |
+--------------------------------------+------------------+---------------------+--------------------------------------+

3.5 Accessing the VM from the client host

From the client, run ping and log in to the VM via ssh using its floating IP.

$ ping -c 10 192.168.100.3
PING 192.168.100.3 (192.168.100.3) 56(84) bytes of data.
64 bytes from 192.168.100.3: icmp_req=1 ttl=63 time=13.2 ms
64 bytes from 192.168.100.3: icmp_req=2 ttl=63 time=0.718 ms
64 bytes from 192.168.100.3: icmp_req=3 ttl=63 time=0.843 ms
64 bytes from 192.168.100.3: icmp_req=4 ttl=63 time=0.838 ms
64 bytes from 192.168.100.3: icmp_req=5 ttl=63 time=0.892 ms
64 bytes from 192.168.100.3: icmp_req=6 ttl=63 time=0.798 ms
64 bytes from 192.168.100.3: icmp_req=7 ttl=63 time=0.803 ms
64 bytes from 192.168.100.3: icmp_req=8 ttl=63 time=0.765 ms
64 bytes from 192.168.100.3: icmp_req=9 ttl=63 time=0.858 ms
64 bytes from 192.168.100.3: icmp_req=10 ttl=63 time=0.824 ms

--- 192.168.100.3 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9008ms
rtt min/avg/max/mdev = 0.718/2.055/13.216/3.720 ms
$
$ ssh cirros@192.168.100.3
cirros@192.168.100.3's password: cubswin:)
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:E8:E0:6E  
          inet addr:10.0.0.3  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fee8:e06e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:214 errors:0 dropped:0 overruns:0 frame:0
          TX packets:221 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:23489 (22.9 KiB)  TX bytes:22010 (21.4 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ exit
Connection to 192.168.100.3 closed.
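
When you are finished, the devstack environment can be torn down on each node with unstack.sh:

$ cd ~/devstack
$ ./unstack.sh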