[Openstack-operators] OpenStack-operators Digest, Vol 46, Issue 20

Jeff Silverman jeff at sweetlabs.com
Mon Aug 18 23:33:57 UTC 2014


I figured out how to give the instance a virtual NIC.  It is getting an IP
address from our DHCP server, I can ssh in, and I can surf the web.

root@compute1-prod.compute1-prod:/var/log/neutron# ifconfig VLAN_15
VLAN_15   Link encap:Ethernet  HWaddr 00:25:90:5B:AA:A0
          inet addr:10.50.15.48  Bcast:10.50.15.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8355781 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1644788 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5040893113 (4.6 GiB)  TX bytes:211825561 (202.0 MiB)

root@compute1-prod.compute1-prod:/var/log/neutron# /etc/init.d/openvswitch start

ovs-vsctl list-br   # you should see nothing
ovs-vsctl add-br br-int
ovs-vsctl list-br
ovs-vsctl add-br VLAN_15
ovs-vsctl add-br VLAN_20
ovs-vsctl list-br

Connect the Open vSwitch bridge to the instance:

root@compute1-prod.compute1-prod:/var/log/neutron# virsh attach-interface instance-00000006 bridge VLAN_15
Interface attached successfully


root@compute1-prod.compute1-prod:/var/log/neutron#

I rebooted the guest and voila!

$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 52:54:00:8E:E5:25
          inet addr:10.50.15.239  Bcast:10.50.15.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe8e:e525/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1404 errors:0 dropped:0 overruns:0 frame:0
          TX packets:256 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:82015 (80.0 KiB)  TX bytes:27940 (27.2 KiB)
          Interrupt:10 Base address:0x6000

$
The IP address came from our dhcpd and works fine.  dhcpd also
provided a default router and populated /etc/resolv.conf.

My lead asked me to add ports to the VLAN bridges, since the host's two
NICs are bonded together using LACP (a sketch of the underlying bond
config follows the commands below):

root@compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_15
root@compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_20
root@compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports br-int
root@compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl add-port VLAN_15 bond0.15
root@compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl add-port VLAN_20 bond0.20
root@compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_15
bond0.15
root@compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_20
bond0.20
root@compute1-prod.compute1-prod:/var/log/neutron#
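
For context, bond0.15 and bond0.20 are 802.1q sub-interfaces of our LACP
bond.  A minimal sketch of what sits underneath them, assuming a
Debian-style /etc/network/interfaces with the ifenslave and vlan packages;
the NIC names and bonding options here are illustrative, not our exact
config:

# /etc/network/interfaces (sketch, assumptions as noted above)
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1      # the two physical NICs (names assumed)
    bond-mode 802.3ad          # LACP
    bond-miimon 100

# 802.1q sub-interfaces that get plugged into the OVS bridges above
auto bond0.15
iface bond0.15 inet manual
    vlan-raw-device bond0

auto bond0.20
iface bond0.20 inet manual
    vlan-raw-device bond0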

root@compute1-prod.compute1-prod:/var/log/neutron# virsh dumpxml 3
<domain>
...
    <interface type='bridge'>
      <mac address='52:54:00:8e:e5:25'/>
      <source bridge='VLAN_15'/>
      <target dev='vnet1'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0'/>
    </interface>
...
</domain>

root@compute1-prod.compute1-prod:/var/log/neutron#


So I know that the hypervisor is working okay.  This is exactly what I
want: I want the compute node to connect directly to the physical
switch/router.  I don't want to send traffic through a neutron network
node, which I think would be a bottleneck, and I think we have good
enough security upstream to protect the OpenStack deployment.  But how
do I express this configuration using neutron and Open vSwitch?  Or is
the way I did it by hand how it is supposed to be done?
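
To make the question concrete: from the Icehouse install guide I gather
that the usual way to get the same effect is a provider VLAN network
with the ML2 / Open vSwitch driver, something like the sketch below.
The physnet1 label, the br-bond0 bridge name, and the VLAN range are my
own guesses mapped onto the setup above, and I have not verified any of
this end to end:

# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch, unverified)
[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
network_vlan_ranges = physnet1:15:20

[ovs]
bridge_mappings = physnet1:br-bond0

# On each compute node: one OVS bridge cabled to the bond, instead of my
# hand-made VLAN_15/VLAN_20 bridges
ovs-vsctl add-br br-bond0
ovs-vsctl add-port br-bond0 bond0

# On the controller: define the provider network and subnet, with neutron's
# DHCP disabled so our external dhcpd keeps handing out addresses
neutron net-create VLAN_15 --shared --provider:network_type vlan \
    --provider:physical_network physnet1 --provider:segmentation_id 15
neutron subnet-create VLAN_15 10.50.15.0/24 --name VLAN_15_subnet \
    --gateway 10.50.15.1 --disable-dhcp

If that is roughly the intended shape, then I assume the OVS agent would
build the equivalent of my hand-made bridges from bridge_mappings, and
traffic would still go straight from the compute node to the physical
switch rather than through a network node.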


Thank you


Jeff





On Mon, Aug 18, 2014 at 5:00 AM, <
openstack-operators-request at lists.openstack.org> wrote:

>
> Today's Topics:
>
>    1. How can I select host node to create VM? (Taeheum Na)
>    2. Re: I have finally created an instance, and it works!
>       However, there is no ethernet card (Andreas Scheuring)
>    3. Operations summit (Sean Roberts)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 18 Aug 2014 12:55:58 +0900
> From: Taeheum Na <thna at nm.gist.ac.kr>
> To: "openstack-operators at lists.openstack.org"
>         <openstack-operators at lists.openstack.org>
> Subject: [Openstack-operators] How can I select host node to create
>         VM?
> Message-ID:
>         <
> 0145D5DF1C56174FABB6EB834BC4909C03FEBD1915AC at NML2007.netmedia.kjist.ac.kr>
>
> Content-Type: text/plain; charset="us-ascii"
>
> Hello,
> When I create an instance, I want to choose the host node.
> I saw that by default, VMs are placed by the filter scheduler, which only
> considers compute resources.
> I want to apply a scheduling algorithm on OpenStack that also takes the
> substrate network into account.
> To do this, I have to monitor the utilization of compute/network
> resources.
> Right now, I'm planning to use Open vSwitch commands to learn the network situation.
>
> Do you have any comment for me? (monitor/scheduling)
>
> Regards
> Taeheum Na
> ****************************************************
> M.S. candidate of Networked Computing Systems Lab.
> School of Information and Communications
> GIST (Gwangju Inst. of Sci. and Tech.)
> E-mail: thna at nm.gist.ac.kr<mailto:thna at nm.gist.ac.kr>
> Phone: +82-10-2238-9424
> Office: +82-62-715-2273
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 18 Aug 2014 09:04:26 +0200
> From: Andreas Scheuring <scheuran at linux.vnet.ibm.com>
> To: "openstack-operators at lists.openstack.org"
>         <openstack-operators at lists.openstack.org>
> Subject: Re: [Openstack-operators] I have finally created an instance,
>         and it works! However, there is no ethernet card
> Message-ID: <1408345466.4188.6.camel at oc5515017671.ibm.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Assuming you're running a libvirt-based hypervisor (e.g. KVM), could you
> please dump the libvirt XML of your instance?
>
> You can get it this way
>
> virsh list --all
> --> shows a list of all virtual servers running on your hypervisor
>
> virsh dumpxml <vm-id>
> --> dumps the XML of your VM, addressed by the id (first column). The id
> does not correlate with the UUID! So if you're not sure which list entry
> belongs to your instance, just stop all the others via OpenStack so that
> only one is running.
>
> PS: on some systems you have to sudo when using virsh.
>
>
> There should be a subtag of <devices> called <interface type='bridge'> or
> 'network' or something like that, representing your eth interface.
>
>
> Andreas
>
>
> On Fri, 2014-08-15 at 11:43 -0700, Jeff Silverman wrote:
> > I have been surfing the internet, and one of the ideas that comes to
> > mind is modifying the /etc/neutron/agent.ini file on the compute
> > nodes.  In the agent.ini file, there is a comment near the top that is
> > almost helpful:
> >
> >
> > # L3 requires that an interface driver be set. Choose the one that
> > best
> > # matches your plugin.
> >
> >
> > The only plugin I know about is ml2.  I have no idea if that is right
> > for me or not.  And I have no idea how to choose the interface driver
> > that best matches my plugin.
> >
> >
> > Thank you
> >
> >
> >
> >
> > Jeff
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > On Fri, Aug 15, 2014 at 10:26 AM, Jeff Silverman <jeff at sweetlabs.com>
> > wrote:
> >         By "defined a network space for your instances", does that
> >         mean going through the process as described
> >         in
> http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-compute-node.html
> ?
> >
> >
> >         I got part way through that when I realized that the procedure
> >         was going to bridge packets through neutron.  That's not what
> >         I want.  I want the packets to go directly to the physical
> >         router.  For example, I have two tenants, with IP addresses
> >         10.50.15.80/24 and 10.50.18.15.90/24, and the router is at
> >         10.50.15.1.  There is a nice picture of what I am trying to do
> >         at
> http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html#nova_network_traffic_in_cloud
> .  But if the hypervisor doesn't present a virtual device to the guests,
> then nothing else is going to happen.  The network troubleshooting guide
> http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html#nova_network_traffic_in_cloud
> does not explain what to do if the virtual NIC is missing.
> >
> >
> >
> >
> >         Thank you
> >
> >
> >         Jeff
> >
> >
> >
> >
> >         On Fri, Aug 15, 2014 at 9:38 AM, Abel Lopez
> >         <alopgeek at gmail.com> wrote:
> >                 Curious if you've defined a network space for your
> >                 instances. If you're using the traditional
> >                 flat_network, this is known as the 'fixed_address'
> >                 space.
> >                 If you're using neutron, you would need to create a
> >                 network and a subnet (and router with gateway, etc).
> >                 You'd then assign the instance to a network at launch
> >                 time.
> >
> >
> >
> >                 On Aug 15, 2014, at 9:17 AM, Jeff Silverman
> >                 <jeff at sweetlabs.com> wrote:
> >
> >                 > <ip_a.png>
> >                 >
> >                 >
> >                 > For those of you that can't see pictures:
> >                 > $ sudo ip a
> >                 > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc
> >                 > noqueue
> >                 >     link/loopback 00:00:00:00:00:00 brd
> >                 > 00:00:00:00:00:00
> >                 >     inet 127.0.0.1/8 scope host lo
> >                 >     inet6 ::1/128 scope host
> >                 >         valid_lft forever preferred_lft forever
> >                 >
> >                 >
> >                 > I suspect that the issue is that the hypervisor is
> >                 > not presenting a virtual ethernet card.
> >                 >
> >                 >
> >                 > Thank you
> >                 >
> >                 >
> >                 >
> >                 >
> >                 > Jeff
> >                 >
> >                 >
> >                 >
> >                 >
> >                 > On Thu, Aug 14, 2014 at 6:57 PM, Nhan Cao
> >                 > <nhanct92 at gmail.com> wrote:
> >                 >         can you show output of command:
> >                 >         ip a
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >         2014-08-15 7:41 GMT+07:00 Jeff Silverman
> >                 >         <jeff at sweetlabs.com>:
> >                 >                 People,
> >                 >
> >                 >
> >                 >                 I have brought up an instance, and I
> >                 >                 can connect to it using my browser!
> >                 >                  I am so pleased.
> >                 >
> >                 >
> >                 >                 However, my instance doesn't have an
> >                 >                 ethernet device, only a loopback
> >                 >                 device.   My management wants me to
> >                 >                 use a provider network, which I
> >                 >                 understand to mean that my instances
> >                 >                 will have IP addresses in the same
> >                 >                 space as the controller, block
> >                 >                 storage, and compute node
> >                 >                 administrative addresses.  However,
> >                 >                 I think that discussing addressing
> >                 >                 is premature until I have a working
> >                 >                 virtual ethernet card.
> >                 >
> >                 >
> >                 >                 I am reading
> >                 >                 through
> http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-compute-node.html
> and I think that the ML2 plugin is what I need.  However, I think I do not
> want a network type of GRE, because that encapsulates the packets and I
> don't have anything to un-encapsulate them.
> >                 >
> >                 >
> >                 >                 Thank you
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >                 Jeff
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >                 --
> >                 >                 Jeff Silverman
> >                 >                 Systems Engineer
> >                 >                 (253) 459-2318 (c)
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
>  _______________________________________________
> >                 >                 OpenStack-operators mailing list
> >                 >
> OpenStack-operators at lists.openstack.org
> >                 >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 > --
> >                 > Jeff Silverman
> >                 > Systems Engineer
> >                 > (253) 459-2318 (c)
> >                 >
> >                 >
> >                 > _______________________________________________
> >                 > OpenStack-operators mailing list
> >                 > OpenStack-operators at lists.openstack.org
> >                 >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >                 >
> >
> >
> >
> >
> >
> >
> >         --
> >         Jeff Silverman
> >         Systems Engineer
> >         (253) 459-2318 (c)
> >
> >
> >
> >
> >
> >
> > --
> > Jeff Silverman
> > Systems Engineer
> > (253) 459-2318 (c)
> >
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
>
> ------------------------------
>
> Message: 3
> Date: Mon, 18 Aug 2014 00:34:22 -0700
> From: Sean Roberts <seanroberts66 at gmail.com>
> To: openstack-operators at lists.openstack.org
> Subject: [Openstack-operators] Operations summit
> Message-ID: <FA2E9C27-6CB8-45D3-A21B-C2431F54ADBC at gmail.com>
> Content-Type: text/plain; charset="us-ascii"
>
> I wanted to post a few items on the upcoming operator summit:
> - I am going to moderate the Tuesday 10:30am Deploy/Config/Upgrade
> session. Any ideas on content are welcome.
> - I would like to add Congress / Policy to the Tuesday 1:30pm session
> alongside Puppet, Chef, Salt, and Ansible. I think we are still missing
> someone to represent Salt.
> - I believe the Monday 10:30am Network session will be on the nova-network
> to Neutron migration path. Any ideas on content are welcome.
>
> I am going to think some on the Deploy/Config/Upgrade session agenda and
> post it here for early discussion.
>
> ~ sean
>
>
> ------------------------------
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> End of OpenStack-operators Digest, Vol 46, Issue 20
> ***************************************************
>



-- 
*Jeff Silverman*
Systems Engineer
(253) 459-2318 (c)

