[Openstack] Adding an extra compute node

Cristian Falcas cristi.falcas at gmail.com
Thu Sep 26 18:58:28 UTC 2013


So, is it working now, or still not?

I had some problems that manifested the same and I got a lot of help
from this page:
http://techbackground.blogspot.ro/2013/05/debugging-quantum-dhcp-and-open-vswitch.html

In my case the vm was attached directly to br-int.

The ip netns exec and br-ex advice was meant for the network node, and
I see now that your problem is access to the internal network, not the
internet. Sorry about that.

If it's still not working, check the VM's libvirt XML file and verify
the target and source interfaces: the target dev should be a tap
interface and the source bridge a qbr bridge.
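As a sketch of what to look for, here is how you could pull the target dev and source bridge out of an interface stanza; the instance path and the interface IDs below are made up, and on a real node you would point the sed at /var/lib/nova/instances/$instance_id/libvirt.xml instead of the sample:

```shell
# Sample <interface> stanza as it appears in libvirt.xml
# (the 12345678-9a IDs are hypothetical):
sample='<interface type="bridge">
  <source bridge="qbr12345678-9a"/>
  <target dev="tap12345678-9a"/>
</interface>'

# Extract the target device and source bridge names.
target=$(printf '%s\n' "$sample" | sed -n 's/.*<target dev="\([^"]*\)".*/\1/p')
source_br=$(printf '%s\n' "$sample" | sed -n 's/.*<source bridge="\([^"]*\)".*/\1/p')
echo "target=$target source=$source_br"
```

If the target is not a tap device, or the source bridge is br-int instead of a qbr bridge, the instance was wired up without the hybrid bridge and the security-group plumbing is being bypassed.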

Also, when you run "brctl show" you should see the qbr bridge with 2
interfaces: tap and qvb.
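For reference, a quick way to count those two ports; the bridge and port names below are invented sample output, since "brctl show" only means anything on the live compute node:

```shell
# Sample "brctl show" output for one instance (IDs are made up):
sample='bridge name          bridge id          STP enabled    interfaces
qbr12345678-9a       8000.123456789abc  no             qvb12345678-9a
                                                       tap12345678-9a'

# The qbr bridge should carry exactly two ports: the tap and the qvb.
ports=$(printf '%s\n' "$sample" | awk 'NR>1 {print $NF}' | grep -cE '^(tap|qvb)')
echo "tap/qvb ports on qbr: $ports"
```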

The qvb interface should be paired with a qvo interface:

ethtool -S qvb$id

Check that the peer index reported by the command above appears in "ip
link show" and belongs to the qvo interface.
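The qvb/qvo pair is a veth pair, so ethtool -S on one end reports the peer's ifindex. A sketch of pulling that index out, using made-up sample output (on a live node you would feed it the real "ethtool -S qvb$id" output and then grep "ip link show" for "^$idx:"):

```shell
# Sample "ethtool -S qvb$id" output (the ifindex value is hypothetical):
sample='NIC statistics:
     peer_ifindex: 14'

# The peer_ifindex should match the qvo interface's index in "ip link show".
idx=$(printf '%s\n' "$sample" | awk '/peer_ifindex/ {print $2}')
echo "peer ifindex: $idx"
```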

The qvo interface should be attached to the br-int bridge in Open vSwitch.
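You can confirm that with "ovs-vsctl list-ports br-int", which prints one port name per line; the sketch below just greps sample output (the qvo name is invented):

```shell
# Sample "ovs-vsctl list-ports br-int" output (port names are made up):
sample='qvo12345678-9a
patch-tun'

# The qvo end of the veth pair must be a port on br-int.
printf '%s\n' "$sample" | grep -q '^qvo12345678-9a$' && echo "qvo is on br-int"
```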

After that you should check the flow rules in Open vSwitch and verify
that the tunnel id used on the compute node is translated to the same
id on the network node.
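With GRE, the OVS agent maps each network's node-local VLAN to a tunnel id on br-tun, so "ovs-ofctl dump-flows br-tun" on each node should show matching set_tunnel/tun_id values. A parsing sketch over invented sample flows (run the real dump-flows on both nodes and compare):

```shell
# Sample "ovs-ofctl dump-flows br-tun" lines (values are made up):
sample='cookie=0x0, priority=3,tun_id=0x2 actions=mod_vlan_vid:1,output:1
cookie=0x0, priority=4,dl_vlan=1 actions=set_tunnel:0x2,NORMAL'

# Inbound: tun_id -> local VLAN; outbound: local VLAN -> tunnel.
tun_in=$(printf '%s\n' "$sample" | sed -n 's/.*tun_id=\(0x[0-9a-f]*\).*/\1/p')
tun_out=$(printf '%s\n' "$sample" | sed -n 's/.*set_tunnel:\(0x[0-9a-f]*\).*/\1/p')
echo "inbound tun_id=$tun_in outbound tunnel=$tun_out"
```

The two values must agree with each other on one node, and the same tunnel id must appear for that network on the other node; the local VLAN numbers are allowed to differ between nodes.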

I presume that there are no errors in /var/log/quantum/*.log on the
network node.


On Thu, Sep 26, 2013 at 6:04 PM, Brandon Adams
<brandon.adams at newwave-technologies.com> wrote:
> On my net/compute node I have two interfaces; one is for the management
> network, and the other is for VM internet access. This is already added to
> br-ex as per a normal set up. I don't have this on the dedicated compute
> node because it shouldn't have direct access to the internet. And I can't
> execute ip netns exec qrouter-4cff1882-1962-4084-9b48-cb1bcd048a4e ping $IP
> on the compute node because that name space doesn't exist there, only on the
> network node.
>
> I went back and restarted the dhcp service just as a precaution, and found
> it hadn't been running correctly. Not sure if that affected anything. I also
> ran ifconfig on the compute node after launching a VM and it showed the qbr,
> qvo and qvb interfaces, as well as the tap interface.
>
>
> On Thu, Sep 26, 2013 at 10:26 AM, Cristian Falcas <cristi.falcas at gmail.com>
> wrote:
>>
>> On br-ex you need to add an interface that has no IP assigned but is up:
>> ovs-vsctl add-port br-ex eth1
>> ifconfig eth1 up
>>
>> the instance is not connected to the router namespace
>>
>> Check in /var/lib/nova/instances/$instance_id/libvirt.xml for source
>> bridge and target dev
>>
>> try ip netns exec qrouter-4cff1882-1962-4084-9b48-cb1bcd048a4e ping $IP
>>
>> also, do the same from the dhcp namespace
>>
>>
>>
>>
>> On Thu, Sep 26, 2013 at 5:17 PM, Brandon Adams
>> <brandon.adams at newwave-technologies.com> wrote:
>> > Cristian,
>> > Thanks for responding so quickly. The instances should be attached to
>> > the
>> > internal network interface, which I can find on the network/compute
>> > node.
>> > It's under the namespace of the router for the project:
>> > ip netns exec qrouter-4cff1882-1962-4084-9b48-cb1bcd048a4e ifconfig
>> > shows
>> > two interfaces, one for the internal network and one for the external
>> > network.
>> >
>> > The problem I think is that this router isn't reachable from my extra
>> > compute node. When I create the router, its namespace does not appear on
>> > the
>> > extra node.
>> >
>> > And I am using dhcp, the quantum-dhcp-agent is installed on the
>> > network/compute node.
>> >
>> > Brandon
>> >
>> >
>> > On Thu, Sep 26, 2013 at 9:57 AM, Cristian Falcas
>> > <cristi.falcas at gmail.com>
>> > wrote:
>> >>
>> >> layer 3 is created only on the network node. On compute nodes you have
>> >> layer 2 only (openvswitch)
>> >>
>> >> The gre tunnels should take care of everything magically :).
>> >>
>> >> Where are the instances attached (that should be tap$id and source
>> >> qbr$id)? Do you use dhcp?
>> >>
>> >>
>> >> On Thu, Sep 26, 2013 at 4:40 PM, Brandon Adams
>> >> <brandon.adams at newwave-technologies.com> wrote:
>> >> > Hi all,
>> >> >
>> >> > I'm trying to add a second compute node to my dev cluster; I've already
>> >> > got
>> >> > one
>> >> > controller node and one network/compute node running perfectly. I've
>> >> > installed all of the necessary packages and everything seems to be
>> >> > running
>> >> > smoothly. However, when I create a private network and subnet, they
>> >> > don't
>> >> > seem to be applied to the extra node. That is, I can see the extra
>> >> > interfaces when I run ovs-vsctl show on the network/compute node, but
>> >> > not on
>> >> > the dedicated compute node. There are GRE tunnels which I assume
>> >> > connect
>> >> > the
>> >> > two nodes across the management network, and instances boot up on the
>> >> > new
>> >> > node. They just can't find the internal network and thus can't be
>> >> > reached at
>> >> > all. I'm running Grizzly on Ubuntu, using Quantum OpenVSwitch with
>> >> > GRE
>> >> > tunnels. I've already fixed an error regarding brcompatd running on
>> >> > both
>> >> > nodes, so I'm wondering what my next step is. Thanks.
>> >> >
>> >> > Brandon
>> >> >
>> >> > _______________________________________________
>> >> > Mailing list:
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >> > Post to     : openstack at lists.openstack.org
>> >> > Unsubscribe :
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >> >
>> >
>> >
>
>




More information about the Openstack mailing list