[Openstack] Accessing VMs in Flat DHCP mode with multiple host
Michaël Van de Borne
michael.vandeborne at cetic.be
Thu May 10 14:40:15 UTC 2012
OK, I'm going to check this and I'll keep you posted.
By the way, how could I check the network between the control node's
br100 and the compute node's br100? I guess I can do this by checking
that each bridge knows the other in the ARP table. Or did you have
another idea?
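Concretely, here is the check I have in mind, as a sketch (br100, eth1 and 192.168.200.1 are the names and addresses from my setup, so adjust them to yours; every command is guarded so the script never aborts, and tcpdump/arping need root):

```shell
# Sketch of the bridge-connectivity check (values from my setup).

# Control node: watch the bridge for ARP traffic for a few seconds.
timeout 3 tcpdump -c 5 -n -e -i br100 arp 2>/dev/null || true

# Compute node (at the same time): generate ARP traffic toward the
# control node's bridge address.
arping -c 3 -I br100 192.168.200.1 2>/dev/null || true

# Afterwards, each side's ARP table should hold the other bridge's MAC.
ip neigh show dev br100 2>/dev/null || arp -n 2>/dev/null || true

RESULT="br100-arp-check-attempted"
echo "$RESULT"
```

If the ARP requests from the compute node never show up in the tcpdump output on the control node, the flat network between the two eth1 interfaces is broken somewhere below OpenStack (switch, VLAN, trunking).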
Michaël Van de Borne
R&D Engineer, SOA team, CETIC
Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi
On 10/05/2012 15:31, Yong Sheng Gong wrote:
> Hi,
> First, make sure the network between your control node's
> br100 and your compute node's br100 is connected.
> 1. Can you show the output of the following commands on the control node?
> ps -ef | grep dnsmasq
> brctl show
> ifconfig
> 2. Can you log in to your VM via VNC to check the eth0
> configuration, and then try to run udhcpc?
>
> Thanks
> -----openstack-bounces+gongysh=cn.ibm.com at lists.launchpad.net wrote: -----
>
> To: "openstack at lists.launchpad.net" <openstack at lists.launchpad.net>
> From: Michaël Van de Borne <michael.vandeborne at cetic.be>
> Sent by: openstack-bounces+gongysh=cn.ibm.com at lists.launchpad.net
> Date: 05/10/2012 09:03PM
> Subject: [Openstack] Accessing VMs in Flat DHCP mode with multiple
> host
>
> Hello,
>
> I'm running into trouble accessing my instances.
> I have 3 nodes:
> 1. Proxmox, which virtualizes my controller node in KVM
> 1.1 the controller node (10.10.200.50) runs keystone,
> nova-api, network, scheduler, vncproxy and volumes, but NOT compute,
> as it is already a VM
> 2. Glance on a physical node
> 3. nova-compute on a physical node
>
> my nova.conf network config is:
> --dhcpbridge_flagfile=/etc/nova/nova.conf
> --dhcpbridge=/usr/bin/nova-dhcpbridge
> --routing_source_ip=10.10.200.50
> --libvirt_use_virtio_for_bridges=true
> --network_manager=nova.network.manager.FlatDHCPManager
> --public_interface=eth0
> --flat_interface=eth1
> --flat_network_bridge=br100
> --fixed_range=192.168.200.0/24
> --floating_range=10.10.200.0/24
> --network_size=256
> --flat_network_dhcp_start=192.168.200.5
> --flat_injected=False
> --force_dhcp_release
> --network_host=10.10.200.50
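As a sanity check on the flags above, the fixed range, network size, and DHCP start address have to agree with each other; a small sketch using only the values quoted above:

```shell
# Sanity check (a sketch): a /24 prefix holds 2^(32-24) addresses,
# which should match --network_size.
PREFIX_LEN=24
NETWORK_SIZE=256
ADDRESSES=$(( 1 << (32 - PREFIX_LEN) ))
echo "addresses in /$PREFIX_LEN: $ADDRESSES (flag says $NETWORK_SIZE)"

# --flat_network_dhcp_start (192.168.200.5) must fall inside
# --fixed_range (192.168.200.0/24): same first three octets here.
DHCP_START="192.168.200.5"
case "$DHCP_START" in
  192.168.200.*) echo "dhcp start is inside the fixed range" ;;
  *)             echo "dhcp start is OUTSIDE the fixed range" ;;
esac
```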
>
> I even explicitly allowed ICMP and TCP port 22 traffic like this:
> euca-authorize -P icmp -t -1:-1 default
> euca-authorize -P tcp -p 22 default
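To double-check that these rules actually reached the compute node, I look for them in iptables like this (the "nova-" chain prefix is my assumption from Essex defaults, and the commands are guarded so they are safe to paste anywhere):

```shell
# Sketch: look for the security-group rules in iptables on the compute
# node. "nova-" chain prefixes are an assumption from Essex defaults.
iptables -S 2>/dev/null | grep -i 'nova' || true

# The ICMP and port-22 ACCEPT rules should show up here:
iptables -S 2>/dev/null | grep -E 'icmp|dport 22' || true

RESULT="iptables-rules-inspected"
echo "$RESULT"
```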
>
> Before setting these rules, I was getting 'Operation not
> permitted' when pinging the VM from the compute node. After
> setting them, I get no output at all (not even 'Destination
> Host Unreachable').
>
>
> The network was created like this:
> nova-manage network create private
> --fixed_range_v4=192.168.200.0/24 --bridge=br100
> --bridge_interface=eth1 --num_networks=1 --network_size=256
>
> However, I cannot ping or SSH into my instances once they're active.
> I have already set up such an Essex environment, but the controller
> node was physical. Moreover, every example in the docs presents a
> controller node that runs nova-compute.
>
> So I'm wondering whether either:
> - having the controller in a VM
> - or not running compute on the controller
> would prevent things from working properly.
>
> What can I check? iptables? Is dnsmasq failing to give the VM an
> address?
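One dnsmasq check that can be run right away on the network host: the hosts file nova writes for the bridge, plus syslog, should show whether the VM's DHCP request ever arrived. A sketch (the path below is the usual Essex default and an assumption; commands are guarded):

```shell
# Sketch: did the VM's DHCP request reach dnsmasq on the network host?
# The hosts-file path is the usual Essex default (an assumption).
HOSTS_FILE="/var/lib/nova/networks/nova-br100.conf"

# One "MAC,hostname,IP" entry per instance on this network:
cat "$HOSTS_FILE" 2>/dev/null || true

# DHCPDISCOVER/DHCPOFFER lines in syslog mean the request at least
# reached dnsmasq:
grep -iE 'DHCPDISCOVER|DHCPOFFER' /var/log/syslog 2>/dev/null | tail -5 || true

RESULT="dhcp-lease-check-done"
echo "$RESULT"
```

If the hosts file has an entry for the instance but syslog never shows a DHCPDISCOVER, the request is being dropped between the VM and the bridge rather than inside dnsmasq.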
>
> I'm running out of ideas. Any suggestion would be highly appreciated.
>
> Thank you,
>
> michaël
>
>
>
>
> --
> Michaël Van de Borne
> R&D Engineer, SOA team, CETIC
> Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype:
> mikemowgli
> www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack at lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp
>
>
>