[openstack-dev] [Neutron][DevStack] How to increase developer usage of Neutron
Mike Spreitzer
mspreitz at us.ibm.com
Fri Aug 15 04:58:21 UTC 2014
"CARVER, PAUL" <pc2929 at att.com> wrote on 08/14/2014 09:35:17 AM:
> Mike Spreitzer [mailto:mspreitz at us.ibm.com] wrote:
>
> >I'll bet I am not the only developer who is not highly competent with
> >bridges and tunnels, Open vSwitch, Neutron configuration, and how
> >DevStack transmutes all those. My bet is that you would have more
> >developers using Neutron if there were an easy-to-find and
> >easy-to-follow recipe to use, to create a developer install of
> >OpenStack with Neutron. One that's a pretty basic and easy case.
> >Let's say a developer gets a recent image of Ubuntu 14.04 from
> >Canonical, and creates an instance in some undercloud, and that
> >instance has just one NIC, at 10.9.8.7/16. If there were a recipe for
> >such a developer to follow from that point on, it would be great.
>
> https://wiki.openstack.org/wiki/NeutronDevstack worked for me.
>
> However, I'm pretty sure it's only a single node all in one setup...
Single node is a fine place to start. I'd love a working vanilla example
of that.
I tried to work through such an example myself today, and it partially
worked. Perhaps a report of that attempt will make clearer the sort of
information I am looking for. Here goes. The short version is as
follows. In an Icehouse
undercloud with flat Neutron networking I created a host VM with two NICs,
eth0 on the flat public network and eth1 on the private network. Inside
the host VM I left eth0 with its DHCP configuration, found eth1 with no
configuration, and configured it manually to run in promiscuous mode. I
said nothing about networking in my localrc except: set HOST_IP to the
public (host's eth0) address, set FLOATING_RANGE to be an unused address
range on the host VM's subnet, set FIXED_RANGE to something unrelated in
172.24.16.0/20, set NETWORK_GATEWAY and PUBLIC_NETWORK_GATEWAY
correspondingly, set ENABLE_TENANT_TUNNELS=True, and enabled/disabled the
appropriate services. I ran stack.sh; it claimed success, and the
resulting OpenStack partly works --- but its floating IP addresses are not
reachable from outside the host, its VMs cannot communicate beyond the
host VM, and there may be some errors in the network and/or DHCP setup.
Now for the full version. I have changed the passwords and intranet
addresses to protect the guilty. I will pretend that my company's
intranet is at 42.42.0.0/16.
For my undercloud I am using an installation (done by someone else, not
using DevStack) of Icehouse OpenStack with Neutron. Horizon shows the
"admin" user three networks in the admin view:
(1) an external net owned by "service", named "ext_net", and having
Network Type "gre" and Segmentation ID 851;
(2) a non-external net owned by "service", named "flat", having Network
Type "flat" and no Segmentation ID, having one subnet at 42.42.40.0/21
with allocation range 42.42.42.192--42.42.42.215, and (among others) a
Port at 42.42.42.193 with Device Attached = "network:dhcp"; and
(3) a non-external net owned by "admin", named "private", having Network
Type "gre" and Segmentation ID 850, having one subnet at 10.10.0.0/16 with
allocation range 10.10.0.2--10.10.255.254, and (among others) a Port at
10.10.0.3 with Device Attached = "network:dhcp".
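(For anyone who prefers the CLI to Horizon for this sort of inspection,
I believe the equivalent, run as the undercloud's admin, is roughly:

  neutron net-list
  neutron net-show flat
  neutron subnet-list

though I did the inspection above in Horizon, so treat those as a
sketch.)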
I have authority over 42.42.42.216/29, and (you will see below) used that
as my FLOATING_RANGE in my DevStack run.
I started with an Ubuntu 14.04 image in my undercloud, fetched directly
from Canonical. It is the July 21 build, which I can no longer find on
http://cloud-images.ubuntu.com/ . The image is like
http://cloud-images.ubuntu.com/trusty/20140809/trusty-server-cloudimg-amd64-disk1.img
except that it was the July 21 version.
From that image I made a Compute instance (VM) in my undercloud to serve
as the host for my DevStack install. I gave my host VM two NICs: the
first at 42.42.42.206/21 and the second at 10.10.0.24/16. When it came
up, the first things I did were `apt-get update`, `apt-get dist-upgrade`,
and `apt-get install git emacs`; then a reboot. Once the host VM finished
rebooting, I logged in and started working on the networking. At first I
saw this:
ubuntu@mjs-dstk-814a:~$ ifconfig
eth0      Link encap:Ethernet  HWaddr fa:16:3e:e2:92:30
          inet addr:42.42.42.206  Bcast:42.42.47.255  Mask:255.255.248.0
          inet6 addr: fe80::f816:3eff:fee2:9230/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1581 errors:0 dropped:0 overruns:0 frame:0
          TX packets:315 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:126535 (126.5 KB)  TX bytes:33753 (33.7 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
and
root@mjs-dstk-814a:/etc/network/interfaces.d# cat eth0.cfg
# The primary network interface
auto eth0
iface eth0 inet dhcp
Based on some clues from a colleague, I then gave eth1 a manual
configuration that puts it in promiscuous mode:
root@mjs-dstk-814a:/etc/network/interfaces.d# cat > eth1.cfg
# secondary network - in promiscuous mode
auto eth1
iface eth1 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        up ip link set $IFACE promisc on
        down ip link set $IFACE promisc off
        down ifconfig $IFACE down
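(Aside: I believe the same effect can be had without a reboot via

  sudo ip link set eth1 up
  sudo ip link set eth1 promisc on

but I did not try that path myself; I rebooted.)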
After a reboot, I then saw:
ubuntu@mjs-dstk-814a:~$ ifconfig
eth0      Link encap:Ethernet  HWaddr fa:16:3e:e2:92:30
          inet addr:42.42.42.206  Bcast:42.42.47.255  Mask:255.255.248.0
          inet6 addr: fe80::f816:3eff:fee2:9230/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2371 errors:0 dropped:0 overruns:0 frame:0
          TX packets:288 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:169557 (169.5 KB)  TX bytes:31239 (31.2 KB)

eth1      Link encap:Ethernet  HWaddr fa:16:3e:e6:ed:30
          inet6 addr: fe80::f816:3eff:fee6:ed30/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:578 (578.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1184 (1.1 KB)  TX bytes:1184 (1.1 KB)
Then I did the `git clone https://github.com/openstack-dev/devstack.git`.
Then I created devstack/localrc, with the following contents:
HOST_IP=42.42.42.206
ADMIN_PASSWORD=XXX
MYSQL_PASSWORD=XXX
RABBIT_PASSWORD=XXX
SERVICE_PASSWORD=XXX
SERVICE_TOKEN=XXX
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service g-api
enable_service rabbit
enable_service lbaas
enable_service ceilometer-acompute ceilometer-acentral ceilometer-anotification ceilometer-collector ceilometer-api
enable_service ceilometer-alarm-notifier ceilometer-alarm-evaluator
LOG=True
MULTI_HOST=True
FIXED_RANGE=172.24.17.0/24
FLOATING_RANGE=42.42.42.216/29
NETWORK_GATEWAY=172.24.17.1
PUBLIC_NETWORK_GATEWAY=42.42.42.217
ENABLE_TENANT_TUNNELS=True
SCREEN_LOGDIR=/home/ubuntu/devstack/logs
LOG_COLOR=False
LOGFILE=/home/ubuntu/devstack/logs/stack.log
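(A side note on this localrc: to see what DevStack actually derives
from it, I believe the interesting results land in
/etc/neutron/plugins/ml2/ml2_conf.ini once stack.sh has run, e.g.

  grep -E 'type_drivers|tenant_network_types|tunnel' \
      /etc/neutron/plugins/ml2/ml2_conf.ini

That is where my install put the file; I am not sure the path is
universal.)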
Then I ran stack.sh; it completed with apparent success. However, I
notice some disturbing things about the OpenStack that was installed in
this host VM.
My freshly installed OpenStack has two networks:
(1) an external network owned by the "admin" tenant, named "public",
having Network Type "vxlan" and segmentation ID 1002, having one subnet at
42.42.42.216/29, no Ports attached to network:dhcp but having a "DHCP
Agent" on host mjs-dstk-814a; and
(2) a non-external network owned by the "demo" tenant, named "private",
having Network Type "vxlan" and Segmentation ID 1001, having one subnet at
172.24.17.0/24, a Port (among others) at 172.24.17.3 with Device Attached
= network:dhcp, and having a "DHCP Agent" on host mjs-dstk-814a.
I am worried about that VXLAN stuff; is that the expected result from my
localrc, is it a good choice in this case, and should I expect it to
actually work?
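(One way I know to double-check, as admin:

  neutron net-show private -F provider:network_type -F provider:segmentation_id

which matches the vxlan type Horizon reported. Whether
ENABLE_TENANT_TUNNELS=True was supposed to give me GRE instead, I
cannot say.)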
I am not sure I got the DHCP stuff right. There is already DHCP service
provided by the undercloud --- but it knows nothing of the VMs that will
be created by the overcloud. My overcloud's public network will only be
used for floating IPs (right?) and so does not need any DHCP service from
my OpenStack (right?); is the fact that it has a "DHCP Agent" a problem?
My "private" network seems to have DHCP provided two ways; should I expect
that to work without confusing anything (either inside my OpenStack or
outside)?
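(That question seems checkable on the host VM; if I understand the
architecture correctly,

  sudo ip netns list
  ps -ef | grep dnsmasq

should show a qdhcp-<net-id> namespace per DHCP-enabled network, each
with its own dnsmasq instance. I have not dug through that output yet.)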
In my overcloud I uploaded some public keys. I created a security group,
in the demo tenant, named demo-test; I gave it ingress and egress rules
that allow IPv4 traffic on all protocols and ports from/to CIDR 0.0.0.0/0.
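(I did that in Horizon; I think the CLI equivalent is roughly

  neutron security-group-create demo-test
  neutron security-group-rule-create --direction ingress --remote-ip-prefix 0.0.0.0/0 demo-test
  neutron security-group-rule-create --direction egress --remote-ip-prefix 0.0.0.0/0 demo-test

with no --protocol given so that all protocols are allowed, but take
the exact flags with a grain of salt.)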
In my overcloud, as the "demo" user, I used Horizon at
http://42.42.42.206/ to create a nested VM from the
Fedora-x86_64-20-20140618-sda image (which seems to have been installed by
DevStack by default). During the creation dialogue I was offered no
choice about the networking; it pre-selected the private network for one
NIC and did not offer me the public network. This nested VM got a NIC at
172.24.17.4/24. Using Horizon to look at the nested VM's boot log, I see
that it is not terribly slow, despite being nested; bootup finishes in
under a minute. Using Horizon's "console" tab on the Instance view, I
logged into this VM and verified that it was up and running with the
expected network configuration. But attempts to communicate from inside
the nested VM to IP addresses outside the host VM get no response. I used
Horizon at http://42.42.42.206/ to allocate a floating IP address (it
turned out to be 42.42.42.220) and assign it to the nested VM. When
logged into a shell on my host VM (mjs-dstk-814a), `ssh
fedora@42.42.42.220` succeeds in connecting. From a shell elsewhere in my
company's intranet, `ssh fedora@42.42.42.220` fails with connection
timeout; `ping 42.42.42.220` also gets no response. Attempts to
communicate from inside the nested VM to outside the host VM continue to
get no response.
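(My best guess at a next diagnostic step is to run

  sudo tcpdump -n -i eth0 'host 42.42.42.220 or arp'

on the host VM while pinging 42.42.42.220 from elsewhere on the
intranet, to see whether ARP requests for the floating IP even reach
eth0 and whether anything answers them. I suspect the undercloud's
anti-spoofing rules drop traffic for an address that its Neutron does
not know belongs to my host VM's port, but I have not verified that.)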
In my OpenStack, I created another Fedora instance. This one got address
172.24.17.5/24. I verified IP connectivity between these two nested VMs
using their 172.24.17.X addresses. I also found that when logged into the
VM at 172.24.17.5, `ssh 42.42.42.220` succeeds in connecting.
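(A hop-by-hop test I plan to try from inside a nested VM:

  ping 172.24.17.1    # NETWORK_GATEWAY, the router's tenant-side port
  ping 42.42.42.217   # PUBLIC_NETWORK_GATEWAY, which I believe DevStack puts on br-ex

If the first answers and the second does not, the break is presumably
at or beyond the router's external leg.)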
Note the problems with IP connectivity between (a) the nested VM with a
floating IP and (b) the world beyond the host VM; that seems wrong (as in,
undesirable and ought to be avoidable). The base physical subnet is
42.42.40.0/21 --- the physical router knows that. The undercloud's
external network has matching configuration. Interestingly, the host VM
(mjs-dstk-814a) still says this about eth0:
eth0      Link encap:Ethernet  HWaddr fa:16:3e:e2:92:30
          inet addr:42.42.42.206  Bcast:42.42.47.255  Mask:255.255.248.0
          inet6 addr: fe80::f816:3eff:fee2:9230/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:827683 errors:0 dropped:0 overruns:0 frame:0
          TX packets:221744 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:672452076 (672.4 MB)  TX bytes:57877492 (57.8 MB)
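(That output makes me wonder whether the public bridge ever got a
physical interface attached at all. I believe

  sudo ovs-vsctl show

would reveal whether br-ex has eth0, or anything physical, as a port;
if it does not, traffic to the floating range could only reach the
outside world through the host VM's own IP stack, which would be
consistent with what I observed.)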
Thanks,
Mike