[Openstack-operators] can't access vms in quantum, Folsom

Dan Wendlandt dan at nicira.com
Fri Oct 5 18:32:23 UTC 2012


On Fri, Oct 5, 2012 at 8:34 AM, Jānis Ģeņģeris <janis.gengeris at gmail.com> wrote:
> Hi all,
>
> I have fixed my problem.

Great.  Both of these items are covered in the Quantum API doc, but it
might be a good idea to file an enhancement bug if you have
suggestions for how they can be more visible or more detailed.

>
> Two things:
> 1) There was a missing route from the VM instance network to the metadata
> service through the public network. After adding the route on the physical
> router, I got it working.

http://docs.openstack.org/trunk/openstack-network/admin/content/adv_cfg_l3_agent_metadata.html
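
The gist of that first fix: the physical router (or the metadata host itself)
needs a return route for the fixed network via the quantum router's external
address on the floating network. A rough sketch, where 85.254.50.2 stands in
for whatever IP the l3-agent's gateway port actually holds:

# on the physical router / metadata host:
ip route add 10.0.1.0/24 via 85.254.50.2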

> 2) Quantum auth settings were not configured inside nova.conf on the box
> where nova-api is running, so the metadata service couldn't connect to
> quantum and was failing.

http://docs.openstack.org/trunk/openstack-network/admin/content/nova_with_quantum_api.html
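
For reference, the Folsom-era nova.conf fragment in question looks roughly
like this (the controller address is taken from this thread; the password is
a placeholder):

network_api_class=nova.network.quantumv2.api.API
quantum_url=http://192.168.164.1:9696
quantum_auth_strategy=keystone
quantum_admin_auth_url=http://192.168.164.1:35357/v2.0
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=<service password>

With both fixes in place, a quick end-to-end check from inside a booted VM is:

# curl http://169.254.169.254/latest/meta-data/instance-id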

Thanks,

Dan



>
>
> On Wed, Oct 3, 2012 at 11:01 PM, Jānis Ģeņģeris <janis.gengeris at gmail.com>
> wrote:
>>
>> On Tue, Oct 2, 2012 at 5:55 PM, Dan Wendlandt <dan at nicira.com> wrote:
>>>
>>> Hi Janis,
>>>
>>> Thanks for the detailed report.  Responses inline.
>>>
>>> dan
>>>
>>> On Tue, Oct 2, 2012 at 7:13 AM, Jānis Ģeņģeris <janis.gengeris at gmail.com>
>>> wrote:
>>> > Hello all,
>>> >
>>> > I'm trying to set up quantum+openvswitch with the Folsom release. The
>>> > intended configuration is a fixed IP network 10.0.1.0/24 and a floating IP
>>> > network 85.254.50.0/24. I am a little stuck with connection problems to
>>> > the VMs.
>>> >
>>> > My config is the following:
>>> >
>>> > 1) Controller node, running rabbit, mysql, quantum-server, nova-api,
>>> > nova-scheduler, nova-volume, keystone, etc. It has two network interfaces,
>>> > one for the service network (192.168.164.1) and the other for outside-world
>>> > connections.
>>> >
>>> > 2) Compute node, which also works as the quantum network node, running:
>>> > kvm, nova-compute, quantum-l3-agent, quantum-dhcp-agent. It has two network
>>> > interfaces: one on the service network (192.168.164.101), and the other for
>>> > floating IPs (85.254.50.0/24), bridged into Open vSwitch. It uses libvirt
>>> > 0.9.11.
>>>
>>> That all makes sense.
>>>
>>> >
>>> > I wonder if local_ip in ovs_quantum_plugin.ini might break something,
>>> > because the docs say that it should be set only on hypervisors, but I have
>>> > merged the hypervisor with the network node.
>>> >
>>> > ovs_quantum_plugin.ini fragment:
>>> > [OVS]
>>> > enable_tunneling = True
>>> > tenant_network_type = gre
>>> > tunnel_id_ranges = 1:1000
>>> > local_ip = 192.168.164.101
>>>
>>> That should be fine.  Besides, the communication that is not working
>>> is all within one device, based on your description.
>>>
>>> >
>>> > nova.conf fragment:
>>> >
>>> > libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver
>>> > libvirt_use_virtio_for_bridges=True
>>> >
>>> > The VMs are getting created successfully; nova-compute.log and the console
>>> > log for each VM look OK.
>>> >
>>> > Here are the dumps of current network configuration:
>>> >
>>> > ovs-vsctl show - http://pastebin.com/0V6kRw1N
>>> > ip addr (on default namespace) - http://pastebin.com/VTLbit11
>>> > output from router and dhcp namespaces - http://pastebin.com/pDmjpmLE
>>> >
>>> > Pings to the gateways from within the router namespace work OK:
>>> > # ip netns exec qrouter-3442d231-2e00-4d26-823e-1feb5d02a798 ping
>>> > 10.0.1.1
>>> > # ip netns exec qrouter-3442d231-2e00-4d26-823e-1feb5d02a798 ping
>>> > 85.254.50.1
>>> >
>>> > But it is not possible to ping any of the instances in the fixed network
>>> > from the router namespace (the floating network is also not working, of
>>> > course).
>>> >
>>> > a) Can this be an iptables/NAT problem?
>>> > b) What about libvirt nwfilters? They are also active.
>>>
>>> Unlikely, given that you're using the integrated OVS vif driver, which
>>> doesn't invoke iptables hooks.
>>>
>>> > c) What else could be wrong?
>>>
>>>
>>> Two questions:
>>>
>>> 1) have you confirmed that the VMs got an IP via DHCP?  You can do
>>> this by looking at the console log, or by using VNC to access the
>>> instances.
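>>>
>>> For example, something along these lines (the instance ID is a placeholder):
>>>
>>> # nova console-log <instance-id> | grep -i ci-info
>>>
>>> A ci-info line showing an address on eth0 means DHCP worked; a line with
>>> only the MAC means it did not.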
>>
>> I think this excerpt from the VM console log means a big no:
>>
>> cloud-init start-local running: Tue, 02 Oct 2012 13:36:32 +0000. up 4.45
>> seconds
>> no instance data found in start-local
>> cloud-init-nonet waiting 120 seconds for a network device.
>> cloud-init-nonet gave up waiting for a network device.
>> ci-info: lo    : 1 127.0.0.1       255.0.0.0       .
>> ci-info: eth0  : 1 .               .               fa:16:3e:61:7b:bc
>> route_info failed
>>
>>
>>>
>>> 2) if so, can you confirm that you can ping the DHCP IP address in the
>>> subnet from the router namespace?
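>>>
>>> For example, roughly (the router ID is the one from your pastes):
>>>
>>> # quantum port-list -- --device_owner network:dhcp
>>> # ip netns exec qrouter-3442d231-2e00-4d26-823e-1feb5d02a798 ping <dhcp-port-ip>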
>>
>> See answer to 1).
>>
>>>
>>>
>>>
>>> It would also be good to run tcpdump on the Linux device that
>>> corresponds to the VM you are pinging (i.e., vnetX for VM X).
>>
>> Running tcpdump on any of the vnetX interfaces, I can see the ping requests
>> to the IPs in the fixed subnet 10.0.1.0/24.
>> For example:
>> # tcpdump -i vnet3
>> tcpdump: WARNING: vnet3: no IPv4 address assigned
>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>> listening on vnet3, link-type EN10MB (Ethernet), capture size 65535 bytes
>> 22:03:08.565579 ARP, Request who-has 10.0.1.6 tell 10.0.1.1, length 28
>> 22:03:09.564696 ARP, Request who-has 10.0.1.6 tell 10.0.1.1, length 28
>>
>> The only responsive IP is the one set on the gateway.
>>
>>> It's possible this is related to the specific vif driver, which is
>>> using the new libvirt integrated support for OVS.
>>>
>>> For example, you could try:
>>>
>>> libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
>>>
>>> but remember that when you do this, you will also want to open up the
>>> default security group:
>>>
>>> nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
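>>>
>>> and, if you also want SSH access while testing, the analogous rule is:
>>>
>>> nova secgroup-add-rule default tcp 22 22 0.0.0.0/0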
>>
>> Haven't tried this part yet, will do tomorrow.
>>
>> But it looks like I have a problem with the metadata API service. I changed
>> the metadata service address inside the l3-agent config to the public one,
>> 85.254.49.100.
>>
>> I now have this iptables rule inside the router namespace:
>> Chain quantum-l3-agent-PREROUTING (1 references)
>>  pkts bytes target  prot opt in  out  source     destination
>>     0     0 DNAT    tcp  --  *   *    0.0.0.0/0  169.254.169.254  tcp dpt:80 to:85.254.49.100:8775
>>
>> But it has 0 hits anyway.
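>>
>> (For reference, I'm reading those counters with:
>> # ip netns exec qrouter-3442d231-2e00-4d26-823e-1feb5d02a798 iptables -t nat \
>>     -L quantum-l3-agent-PREROUTING -n -v )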
>>
>> The metadata service is now accessible when going directly to 85.254.49.100
>> from the router namespace.
>> I think what I'm missing is the route from the VM network on the metadata
>> host, as described here:
>>
>> http://docs.openstack.org/trunk/openstack-network/admin/content/adv_cfg_l3_agent_metadata.html
>>
>> except I have no clue how to set it up and get it working.
>>
>> When trying:
>> # ip netns exec qrouter-3442d231-2e00-4d26-823e-1feb5d02a798 nc -v
>> 169.254.169.254 80
>> nc: connect to 169.254.169.254 port 80 (tcp) failed: Connection refused
>>
>> How will a private IP from 10.0.1.0/24 get to the metadata service through
>> the public network?
>>>
>>>
>>>
>>> Dan
>>>
>>>
>>> >
>>> > Any help and comments how to fix this are welcome.
>>> >
>>> > Regards,
>>> > --janis
>>> >
>>> > _______________________________________________
>>> > OpenStack-operators mailing list
>>> > OpenStack-operators at lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>> >
>>>
>>>
>>>
>>> --
>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>> Dan Wendlandt
>>> Nicira, Inc: www.nicira.com
>>> twitter: danwendlandt
>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>> Oh, I'm a little confused by all the network namespaces and the vswitch in
>> the middle. Can someone who has a real running setup briefly explain what is
>> required for a basic setup to start working? As I see it, I'm not the only
>> one, nor the last, having this problem.
>>
>>
>> --
>> --janis
>
>
>
>
> --
> --janis



-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~


