[Openstack] Openstack Digest, Vol 8, Issue 13

谢先斌 xianbinxie at 163.com
Thu Feb 13 09:14:35 UTC 2014


Hello Openstackers,

When I invoke the OpenStack API, I really don't know the difference between port 35357 and port 5000. Can anyone give me an answer, please? Thank you.
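
For example, I am calling endpoints like these (the hostname "controller" below is just a placeholder for my Keystone host):

    curl http://controller:5000/v2.0/
    curl http://controller:35357/v2.0/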

Xianbin Xie
xianbinxie at 163.com
At 2014-02-12 20:00:04,openstack-request at lists.openstack.org wrote:
>Send Openstack mailing list submissions to
>	openstack at lists.openstack.org
>
>To subscribe or unsubscribe via the World Wide Web, visit
>	http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>or, via email, send a message with subject or body 'help' to
>	openstack-request at lists.openstack.org
>
>You can reach the person managing the list at
>	openstack-owner at lists.openstack.org
>
>When replying, please edit your Subject line so it is more specific
>than "Re: Contents of Openstack digest..."
>
>
>Today's Topics:
>
>   1. Duplicate DHCP-agent problem? (Nick Ma)
>   2. GSOC 2014 - org reg and application for mentoring (Kovács Bálint)
>   3. Re: GSOC 2014 - org reg and application for mentoring
>      (Davanum Srinivas)
>   4. Re: Is there any article for installing Havana with neutron
>      in 2 nodes with only 2 nics? (jeffty)
>   5. Re: Neutron (Havana) configuration on Ubuntu (George Mihaiescu)
>   6. [Neutron][IPv6] Idea: Floating IPv6 - "Without any kind	of
>      NAT" (Martinx - ?????)
>   7. Re: [Neutron][IPv6] Idea: Floating IPv6 - "Without any	kind
>      of NAT" (Martinx - ?????)
>   8. Re: GSOC 2014 - org reg and application for mentoring
>      (Kovács Bálint)
>   9. Limited external access from VM created by DevStack
>      (Mike Spreitzer)
>  10. Re: Bringing up VMs in an OpenStack private cloud with access
>      to 2 external networks (dmz and corporate) (Ritesh Nanda)
>  11. Re: Neutron (Havana) configuration on Ubuntu (Lillie Ross-CDSR11)
>  12. [Swift] Question regarding global scaling (Adam Lawson)
>  13. Re: Neutron (Havana) configuration on Ubuntu (Lillie Ross-CDSR11)
>  14. Re: Neutron (Havana) configuration on Ubuntu (Lillie Ross-CDSR11)
>  15. Call for panelists on "HPC in the Cloud" (Brian Schott)
>  16. Re: dhcp request can't reach br-int of network node (Vikash Kumar)
>  17. Re: dhcp request can't reach br-int of network node (Li, Chen)
>  18. Trunk pass over OVS (Remo Mattei)
>  19. Re: dhcp request can't reach br-int of network node
>      (Robert Collins)
>  20. Problem with dnsmasq - DHCPDISCOVER no address available
>      (Rajshree Thorat)
>  21. Re: Problem with dnsmasq - DHCPDISCOVER no address available
>      (Li, Chen)
>  22. Re: dhcp request can't reach br-int of network node (Vikash Kumar)
>  23. Connection breaking between the controller and the	compute
>      node. (Akshat Kansal)
>  24. PetiteCloud 0.2.5 Released (Aryeh Friedman)
>  25. Re: PetiteCloud 0.2.5 Released (Dnsbed Ops)
>  26. Re: [Ceilometer/Telemetry] How to receive events
>      fromkeystone/identity component ? (MICHON Anthony)
>  27. Re: PetiteCloud 0.2.5 Released (Aryeh Friedman)
>  28. Re: [Swift] Question regarding global scaling (Dnsbed Ops)
>  29. openstack havana cinder chooses wrong host to create	new
>      volume (Staicu Gabriel)
>  30. Re: dhcp request can't reach br-int of network node
>      (GALAMBOS Daniel)
>
>
>----------------------------------------------------------------------
>
>Message: 1
>Date: Tue, 11 Feb 2014 22:02:01 +0800
>From: Nick Ma <skywalker.nick at gmail.com>
>To: Openstack Milis <openstack at lists.openstack.org>
>Subject: [Openstack] Duplicate DHCP-agent problem?
>Message-ID: <52FA2D59.1000501 at gmail.com>
>Content-Type: text/plain; charset=UTF-8; format=flowed
>
>Hi all,
>
>I find that there's a parameter, "dhcp_agents_per_network", in
>neutron.conf that implements duplicate DHCP agents serving a given
>network. When it is set to 2, two DHCP agents serve one network.
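>
>For reference, a minimal sketch of the neutron.conf setting I mean (I
>assume it sits in the [DEFAULT] section, as in my install):
>
>    [DEFAULT]
>    # run two DHCP agents (two dnsmasq instances) per tenant network
>    dhcp_agents_per_network = 2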
>
>Is it a stable solution? It runs two qdhcp-xxx namespaces on different
>hosts, and each qdhcp-xxx runs dnsmasq to serve the DHCP requests of that
>tenant. AFAIK, dnsmasq is stateful, since it stores lease records on disk.
>If the two dnsmasq instances are not synchronized and, for example, I
>launch multiple instances at the same time, is it possible that two
>different instances request an IP from the two dnsmasq servers and end up
>with the same IP?
>
>Please correct me if I'm wrong.
>
>-- 
>
>Nick Ma
>skywalker.nick at gmail.com
>
>
>
>
>------------------------------
>
>Message: 2
>Date: Tue, 11 Feb 2014 16:11:35 +0100
>From: Kovács Bálint <blint at balabit.hu>
>To: openstack at lists.openstack.org
>Subject: [Openstack] GSOC 2014 - org reg and application for mentoring
>Message-ID: <52FA3DA7.7070808 at balabit.hu>
>Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>Hi guys,
>
>there have been a few mails about OpenStack participating in GSOC 2014, 
>but I haven't found any pointers on whether there has already been an 
>organisation registration.
>
>We at the Zorp project have been pondering doing our org registration 
>too, but figured that joining up with you in an OpenStack-related idea 
>would probably be more valuable to both of us.
>
>Zorp is an application level proxy firewall with a python configuration 
>language and extremely flexible architecture and we are looking into 
>creating a Zorp driver for the Neutron FWaaS extension.
>
>Of course we would be happy to mentor anybody from the Zorp side and 
>(given our limited understanding of OpenStack) would be very grateful if 
>we could have somebody co-mentor the OpenStack side of the idea.
>
>What do you think?
>
>Best Regards,
>Balint
>
>
>
>
>
>
>
>
>
>
>------------------------------
>
>Message: 3
>Date: Tue, 11 Feb 2014 10:34:06 -0500
>From: Davanum Srinivas <davanum at gmail.com>
>To: Kovács Bálint <blint at balabit.hu>
>Cc: openstack at lists.openstack.org
>Subject: Re: [Openstack] GSOC 2014 - org reg and application for
>	mentoring
>Message-ID:
>	<CANw6fcHN+gABNYLfiuS__G-01UT3VUkm-u2KBNs_yHeqhjxgPw at mail.gmail.com>
>Content-Type: text/plain; charset=ISO-8859-1
>
>Balint,
>
>Yes, I will be registering the OpenStack org. We are collecting info from
>mentors/participants here - https://wiki.openstack.org/wiki/GSoC2014
>Can you please add what you mentioned here to that wiki page?
>
>thanks,
>dims
>
>On Tue, Feb 11, 2014 at 10:11 AM, Kovács Bálint <blint at balabit.hu> wrote:
>> Hi guys,
>>
>> there have been a few mails about OpenStack participating in GSOC 2014, but
>> I haven't found any pointers on whether there has already been an
>> organisation registration.
>>
>> We at the Zorp project have been pondering doing our org registration too,
>> but figured that joining up with you in an OpenStack-related idea would
>> probably be more valuable to both of us.
>>
>> Zorp is an application level proxy firewall with a python configuration
>> language and extremely flexible architecture and we are looking into
>> creating a Zorp driver for the Neutron FWaaS extension.
>>
>> Of course we would be happy to mentor anybody from the Zorp side and (given
>> our limited understanding of OpenStack) would be very grateful if we could
>> have somebody co-mentor the OpenStack side of the idea.
>>
>> What do you think?
>>
>> Best Regards,
>> Balint
>>
>>
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
>-- 
>Davanum Srinivas :: http://davanum.wordpress.com
>
>
>
>------------------------------
>
>Message: 4
>Date: Tue, 11 Feb 2014 23:59:53 +0800
>From: jeffty <wantwatering at gmail.com>
>To: Jitendra Kumar Bhaskar <jitendra.b at pramati.com>
>Cc: Openstack Users <openstack at lists.openstack.org>
>Subject: Re: [Openstack] Is there any article for installing Havana
>	with neutron in 2 nodes with only 2 nics?
>Message-ID: <52FA48F9.6020807 at gmail.com>
>Content-Type: text/plain; charset=UTF-8
>
>Thanks Jitendra.
>
>On 2/10/2014 7:54 PM, Jitendra Kumar Bhaskar wrote:
>> Hi Jeffy,
>> 
>> You can use only 2 NICs; it will work. You can install the Neutron server on
>> the controller and use the controller's internal IP for "DATA_INTERFACE_IP".
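>> 
>> For example (the address below is just a placeholder for your controller's
>> internal/data NIC IP):
>> 
>>     [ovs]
>>     local_ip = 10.0.10.11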
>> 
>> 
>> Regards,
>> Jitendra Bhaskar
>> 
>> 
>> 
>> 
>> 
>> 
>> On Mon, Feb 10, 2014 at 1:49 PM, jeffty <wantwatering at gmail.com
>> <mailto:wantwatering at gmail.com>> wrote:
>> 
>>     Thanks Robert.
>> 
>>     It should be
>>     http://docs.openstack.org/havana/install-guide/install/apt/content/install-neutron.install-plug-in.ovs.gre.html
>> 
>>     So I can install Neutron on my controller or compute node with 2 NICs,
>>     and configure it as the document describes, except that I need to replace
>>     DATA_INTERFACE_IP with the internal subnet NIC's IP, as below?
>> 
>>     [ovs]
>>     tenant_network_type = gre
>>     tunnel_id_ranges = 1:1000
>>     enable_tunneling = True
>>     integration_bridge = br-int
>>     tunnel_bridge = br-tun
>>     local_ip = DATA_INTERFACE_IP
>> 
>>     Thanks.
>> 
>> 
>>     On 2/10/2014 4:08 PM, Robert Collins wrote:
>>     > Not sure what doc / image you're specifically referring to - you
>>     > linked to the table of contents, but you can install Neutron with just
>>     > one NIC.
>>     >
>>     > -Rob
>>     >
>>     > On 10 February 2014 15:51, jeffty <wantwatering at gmail.com
>>     <mailto:wantwatering at gmail.com>> wrote:
>>     >> Hi there,
>>     >>
>>     >> I have 2 PCs, and each of them has 2 NICs: one for internet access and
>>     >> another for the internal network.
>>     >>
>>     >> I want to install Havana with one controller and one compute node.
>>     >> Neutron can be installed on either of them.
>>     >>
>>     >> The document illustrates that 3 NICs are needed for a dedicated
>>     >> network node. Are there any articles on installing the Neutron
>>     >> service with only 2 NICs? Do any changes need to be made relative to
>>     >> http://docs.openstack.org/havana/install-guide/install/apt/content/?
>>     >>
>>     >> Thanks.
>>     >>
>>     >> _______________________________________________
>>     >> Mailing list:
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>     >> Post to     : openstack at lists.openstack.org
>>     <mailto:openstack at lists.openstack.org>
>>     >> Unsubscribe :
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>     >
>>     >
>>     >
>> 
>> 
>>     _______________________________________________
>>     Mailing list:
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>     Post to     : openstack at lists.openstack.org
>>     <mailto:openstack at lists.openstack.org>
>>     Unsubscribe :
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> 
>> 
>
>
>
>
>------------------------------
>
>Message: 5
>Date: Tue, 11 Feb 2014 11:12:45 -0500
>From: "George Mihaiescu" <George.Mihaiescu at Q9.com>
>To: "Lillie Ross-CDSR11" <Ross.Lillie at motorolasolutions.com>,
>	<openstack at lists.openstack.org>
>Subject: Re: [Openstack] Neutron (Havana) configuration on Ubuntu
>Message-ID:
>	<413FEEF1743111439393FB76D0221E4825BE7303 at leopard.zoo.q9networks.com>
>Content-Type: text/plain; charset="us-ascii"
>
>Hi Ross,
>
> 
>
>You also have to add an interface on the admin-net.1 subnet to the router you created.
>
> 
>
>If everything is set up correctly, you can then run "neutron router-list" to see the ID of the router you created, and then "neutron router-port-list ROUTER_ID"; you should see that your router has two interfaces: one on the admin-net.1 subnet with IP 10.0.1.1/24 and another on the admin-net.float subnet.
>
> 
>
>There are three steps involved in setting up a private router: create the router, add an interface on the private network, and then set its external gateway. The process is documented here: http://docs.openstack.org/havana/install-guide/install/apt/content/demo_per_tenant_router_network_config.html (Step 5d).
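>
>For example, roughly (using the names from your own setup; a sketch of the
>three commands rather than an exact transcript):
>
>    neutron router-create campus-gw
>    neutron router-interface-add campus-gw admin-net.1
>    neutron router-gateway-set campus-gw campus-net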
>
> 
>
>I hope this helps.
>
> 
>
> 
>
>George
>
> 
>
> 
>
> 
>
> 
>
>________________________________
>
>From: Lillie Ross-CDSR11 [mailto:Ross.Lillie at motorolasolutions.com] 
>Sent: Monday, February 10, 2014 7:53 PM
>To: openstack at lists.openstack.org
>Subject: [Openstack] Neutron (Havana) configuration on Ubuntu
>
> 
>
>If this issue has already been discussed, please excuse.
>
>I'm somewhat confused about neutron configuration and tenancy. Correct me if I'm wrong. 
>
>First, I've created a private network named 'admin-net' under the 'admin' tenant. I've associated a subnet named 'admin-net.1', with a CIDR of 10.0.1.0/24, with admin-net.
>
>Next, I created a network named 'campus-net', with router:external set to True, associated with our campus network.  This network was created under the 'service' tenant. I also created a router named 'campus-gw' under the 'service' tenant and set its gateway to the 'campus-net' network.
>
>Finally, I created a floating address pool named 'admin-net.float' under the 'admin' tenant and added it as an interface to the 'campus-gw' router.  I also created a default security group under the 'admin' tenant to allow SSH and ICMP access.
>
>When I boot an image, as a member of the admin tenant, the instance is correctly assigned an IP address from the admin tenant's private network.  I next allocate a floating IP address (nova floating-ip-create admin-net.float) and associate it with my running instance.
>
>However, I'm unable to ping the running instance, and I see no indication of the end of the tunnel being established on the network/controller node.
>
>I'm not that well versed with network namespaces nor the openvswitch commands. 
>
>Two questions: does my overall configuration sound correct, and how best to diagnose what's going on here?  Any pointers would be helpful; additional details can be provided as needed.  Thanks loads in advance.
>
>Regards,
>/ross
>
>-
>
>(neutron) net-list
>+--------------------------------------+------------+----------------------------------------------------+
>| id                                   | name       | subnets                                            |
>+--------------------------------------+------------+----------------------------------------------------+
>| 2426f4d8-a983-4f50-ab5a-fd2a37e5cd94 | campus-net | a948538d-c2c2-4c02-9116-b89a79f0c73a 173.23.0.0/16 |
>| e6984375-f35b-4636-a293-43d0d296e0ff | admin-net  | 2ced890b-944f-4f6e-8f7a-3f5a4d07c2bb 10.0.1.0/24   |
>+--------------------------------------+------------+----------------------------------------------------+
>(neutron) subnet-list
>+--------------------------------------+--------------------+---------------+---------------------------------------------------+
>| id                                   | name               | cidr          | allocation_pools                                  |
>+--------------------------------------+--------------------+---------------+---------------------------------------------------+
>| 2ced890b-944f-4f6e-8f7a-3f5a4d07c2bb | admin-net.1        | 10.0.1.0/24   | {"start": "10.0.1.2", "end": "10.0.1.254"}        |
>| a948538d-c2c2-4c02-9116-b89a79f0c73a | admin-net.floating | 173.23.0.0/16 | {"start": "173.23.182.2", "end": "173.23.182.15"} |
>+--------------------------------------+--------------------+---------------+---------------------------------------------------+
>(neutron) router-list
>+--------------------------------------+-----------+-----------------------------------------------------------------------------+
>| id                                   | name      | external_gateway_info                                                       |
>+--------------------------------------+-----------+-----------------------------------------------------------------------------+
>| 43c596c4-65fe-4c22-a48a-0a6e200abf78 | campus-gw | {"network_id": "2426f4d8-a983-4f50-ab5a-fd2a37e5cd94", "enable_snat": true} |
>+--------------------------------------+-----------+-----------------------------------------------------------------------------+
>(neutron) router-show campus-gw
>+-----------------------+-----------------------------------------------------------------------------+
>| Field                 | Value                                                                       |
>+-----------------------+-----------------------------------------------------------------------------+
>| admin_state_up        | True                                                                        |
>| external_gateway_info | {"network_id": "2426f4d8-a983-4f50-ab5a-fd2a37e5cd94", "enable_snat": true} |
>| id                    | 43c596c4-65fe-4c22-a48a-0a6e200abf78                                        |
>| name                  | campus-gw                                                                   |
>| routes                |                                                                             |
>| status                | ACTIVE                                                                      |
>| tenant_id             | service                                                                     |
>+-----------------------+-----------------------------------------------------------------------------+
>(neutron) security-group-list
>+--------------------------------------+---------+-------------+
>| id                                   | name    | description |
>+--------------------------------------+---------+-------------+
>| 0d66a3e2-7a0f-4caf-8b63-c3c8f3106242 | default | default     |
>| c87230fa-9193-47a7-8ade-cec5f7f6b958 | default | default     |
>+--------------------------------------+---------+-------------+
>(neutron)  
>
>root at cirrus3:/var/log/neutron# nova list
>+--------------------------------------+------+--------+------------+-------------+----------------------------------+
>| ID                                   | Name | Status | Task State | Power State | Networks                         |
>+--------------------------------------+------+--------+------------+-------------+----------------------------------+
>| ccdf7541-3a74-4289-a8ce-9fe5cffe9dbd | tvm1 | ACTIVE | None       | Running     | admin-net=10.0.1.2, 173.23.182.3 |
>+--------------------------------------+------+--------+------------+-------------+----------------------------------+
>root at cirrus3:/var/log/neutron# nova show tvm1
>+--------------------------------------+----------------------------------------------------------+
>| Property                             | Value                                                    |
>+--------------------------------------+----------------------------------------------------------+
>| status                               | ACTIVE                                                   |
>| updated                              | 2014-02-11T00:03:25Z                                     |
>| OS-EXT-STS:task_state                | None                                                     |
>| OS-EXT-SRV-ATTR:host                 | cn1                                                      |
>| key_name                             | root                                                     |
>| image                                | cirros (57a9f5d6-8b07-4bdb-b8a0-900de339d804)            |
>| admin-net network                    | 10.0.1.2, 173.23.182.3                                   |
>| hostId                               | 982cd20cde9c5f8514c95b5ca8530258fa9454cdc988a8b007a6d20b |
>| OS-EXT-STS:vm_state                  | active                                                   |
>| OS-EXT-SRV-ATTR:instance_name        | instance-00000021                                        |
>| OS-SRV-USG:launched_at               | 2014-02-11T00:03:25.000000                               |
>| OS-EXT-SRV-ATTR:hypervisor_hostname  | cn1                                                      |
>| flavor                               | m1.tiny (1)                                              |
>| id                                   | ccdf7541-3a74-4289-a8ce-9fe5cffe9dbd                     |
>| security_groups                      | [{u'name': u'default'}]                                  |
>| OS-SRV-USG:terminated_at             | None                                                     |
>| user_id                              | 090a2de6e74b4573bd29318d4f494191                         |
>| name                                 | tvm1                                                     |
>| created                              | 2014-02-11T00:02:47Z                                     |
>| tenant_id                            | ec54b7cadcab4620bbb6d568be7bd4a8                         |
>| OS-DCF:diskConfig                    | MANUAL                                                   |
>| metadata                             | {}                                                       |
>| os-extended-volumes:volumes_attached | []                                                       |
>| accessIPv4                           |                                                          |
>| accessIPv6                           |                                                          |
>| progress                             | 0                                                        |
>| OS-EXT-STS:power_state               | 1                                                        |
>| OS-EXT-AZ:availability_zone          | nova                                                     |
>| config_drive                         |                                                          |
>+--------------------------------------+----------------------------------------------------------+
>root at cirrus3:/var/log/neutron# 
>
>--
>Ross Lillie
>Distinguished Member of Technical Staff
>Motorola Solutions, Inc.
>
>motorolasolutions.com
>O: +1.847.576.0012
>M: +1.847.980.2241
>E: ross.lillie at motorolasolutions.com
>
>
> 
>
> 
>
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140211/69b5a982/attachment-0001.html>
>-------------- next part --------------
>A non-text attachment was scrubbed...
>Name: image001.gif
>Type: image/gif
>Size: 2405 bytes
>Desc: image001.gif
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140211/69b5a982/attachment-0001.gif>
>
>------------------------------
>
>Message: 6
>Date: Tue, 11 Feb 2014 14:25:31 -0200
>From: Martinx - ?????  <thiagocmartinsc at gmail.com>
>To: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>Subject: [Openstack] [Neutron][IPv6] Idea: Floating IPv6 - "Without
>	any kind	of NAT"
>Message-ID:
>	<CAJSM8J1Z5Vg5wsdEUMU4bHRZV5f8wXZEKifobPbu_LKFi0+0HQ at mail.gmail.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>Hello Stackers!
>
>It is very nice to watch the OpenStack evolution in IPv6! Great job guys!!
>
>
>I have another idea:
>
>"Floating IP" for IPv6, or just "Floating IPv6"
>
>
>With IPv4, as we know, OpenStack has a feature called "Floating IP", which
>is basically a 1-to-1 NAT rule (within the tenant's Namespace q-router). In
>IPv4 networks, we need this "Floating IP" attached to an Instance to be
>able to reach it from the Internet (*I don't like it*). But what is the
>use case for a "Floating IP" when you have *no NAT** (as it is with IPv6)?!
>
>At first, with IPv6, I was planning to disable the "Floating IP"
>feature entirely, by removing it from the Dashboard and from the APIs (even
>for IPv4, if FWaaS can somehow manage the q-router IPv4 NAT rules,
>and not only the "iptables filter table")... and then I just had an idea!
>
>For IPv6, the "Floating IP" can still be used to allocate more (and more)
>IPs to a Instance BUT, instead of creating a NAT rule (like it is for
>IPv4), it will configure the DNSMasq (or something like it) to provide more
>IPv6 address per MAC / Instance. That way, we can virtually
>allocate unlimited IPs (v6) for each Instance!
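>
>Something like this in a dnsmasq config, for example (just a sketch; the
>2001:db8:1:1::/64 range below is a made-up tenant subnet):
>
>    # hand out additional IPv6 addresses from the tenant subnet via DHCPv6
>    dhcp-range=2001:db8:1:1::1000,2001:db8:1:1::2000,64,12h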
>
>It will be pretty cool to see the attached "Floating IPv6" literally
>"floating around" the tenant subnet, appearing inside the Instances
>themselves (instead of inside the tenant's Namespace), so we'll be able to
>see it (the Floating IPv6) with the "ip -6 address" command within the
>attached Instance!
>
>The only problem I see with this is that, for IPv4, the allocated
>"Floating IPs" come from the "External Network" (neutron /
>--allocation-pool) and, for IPv6, they will come from the tenant's IPv6
>subnet itself... I think... Right?!
>
>---
>Why do I want tons of IPv6 addresses within each Instance?
>
>A.: Because we can! I mean, we can go back to the days when we had 1
>website per 1 public IP (i.e. using IP-Based Virtual Hosts with Apache - I
>prefer this approach).
>
>Also, we can try to turn the "Floating IPv6" into some kind of "Floating
>IPv6 Range"; this way we can, for example, allocate millions of IPs per
>Instance, like this in DHCPv6: "range6 2001:db8:1:1::1000
>2001:db8:1:1000:1000;"...
>---
>
>NOTE: I prefer multiple IPs per Instance, instead of 1 IP per Instance,
>when using VT, unless, of course, the Instances are based on Docker; with
>it, I can easily see millions of tiny instances, each with its own IPv6
>address, without the overhead of a virtualized environment. So, with
>Docker, this "Floating IPv6 Range" doesn't seem to be useful...
>
>
>* I know that there is NAT66 out there, but who is actually using it?! I'll
>never use this thing. Personally I dislike NAT very much, mostly because it
>breaks end-to-end Internet connectivity, effectively kicking you out
>of the real Internet, and it is just a workaround created to deal with
>IPv4 exhaustion.
>
>
>BTW, please guys, let me know if this isn't the right place to post "ideas
>for OpenStack / feature requests"... I don't want to bloat this list with
>undesirable messages.
>
>
>Best Regards,
>Thiago Martins
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140211/6595aa06/attachment-0001.html>
>
>------------------------------
>
>Message: 7
>Date: Tue, 11 Feb 2014 14:59:10 -0200
>From: Martinx - ?????  <thiagocmartinsc at gmail.com>
>To: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>Subject: Re: [Openstack] [Neutron][IPv6] Idea: Floating IPv6 -
>	"Without any	kind of NAT"
>Message-ID:
>	<CAJSM8J01oLtET0vZCcWoyjGCx=OxhKf7_Psz6CMGmNNGJBfyvg at mail.gmail.com>
>Content-Type: text/plain; charset="iso-2022-jp"
>
>Sorry guys, I'll double post this to OpenStack Dev instead... My mistake...
>
>
>On 11 February 2014 14:25, Martinx - ????? <thiagocmartinsc at gmail.com>wrote:
>
>> Hello Stackers!
>>
>> It is very nice to watch the OpenStack evolution in IPv6! Great job guys!!
>>
>>
>> I have another idea:
>>
>> "Floating IP" for IPv6, or just "Floating IPv6"
>>
>>
>> With IPv4, as we know, OpenStack have a feature called "Floating IP",
>> which is basically a 1-to-1 NAT rule (within tenant's Namespace q-router).
>> In IPv4 networks, we need this "Floating IP" attached to a Instance, to
>> be able to reach it from the Internet (*I don't like it*). But, what is
>> the use case for a "Floating IP" when you have *no NAT** (as it is with
>> IPv6)?!
>>
>> At first, when with IPv6, I was planning to disable the "Floating IP"
>> feature entirely, by removing it from Dashboard and from APIs (even for
>> IPv4, if FWaaS can in somehow, be able to manage q-router IPv4 NAT rules,
>> and not only the "iptables filter table") and, I just had an idea!
>>
>> For IPv6, the "Floating IP" can still be used to allocate more (and more)
>> IPs to a Instance BUT, instead of creating a NAT rule (like it is for
>> IPv4), it will configure the DNSMasq (or something like it) to provide more
>> IPv6 address per MAC / Instance. That way, we can virtually
>> allocate unlimited IPs (v6) for each Instance!
>>
>> It will be pretty cool to see the attached "Floating IPv6", literally
>> "floating around" the tenant subnet, appearing inside the Instances
>> itself (instead of inside the tenant's Namespace), so, we'll be able to see
>> it (the Floating IPv6) with "ip -6 address" command within the attached
>> Instance!
>>
>> The only problem I see with this is that, for IPv4, the allocated "
>> Floating IPs" come from the "External Network" (neutron /
>> --allocation-pool) and, for IPv6, it will come from the tenant's IPv6 subnet
>> itself... I think... Right?!
>>
>> ---
>> Why I want tons of IPv6 within each Instance?
>>
>> A.: Because we can! I mean, we can go back to the days when we had 1
>> website per 1 public IP (i.e. using IP-Based Virtual Hosts with Apache -
>> I prefer this approach).
>>
>> Also, we can try to turn the "Floating IPv6", in some kind of "Floating
>> IPv6 Range", this way, we can for example, allocate millions of IPs per
>> Instance, like this in DHCPv6: "range6 2001:db8:1:1::1000
>> 2001:db8:1:1000:1000;"...
>> ---
>>
>> NOTE: I prefer multiple IPs per Instance, instead of 1 IP per Instance,
>> when using VT, unless, of course, the Instances are based on Docker, so,
>> with it, I can easily see millions of tiny instances, each of it with its
>> own IPv6 address, without the overhead of virtualized environment. So, with
>> Docker, this "Floating IPv6 Range" doesn't seems to be useful...
>>
>>
>> * I know that there is NAT66 out there but, who is actually using it?!
>> I'll never use this thing. Personally I dislike NAT very much, mostly
>> because it breaks the end-to-end Internet connectivity, effectively kicking
>> you out from the real Internet, and it is just a workaround created to deal
>> with IPv4 exaustion.
>>
>>
>> BTW, please guys, let me know if this isn't the right place to post "ideas
>> for OpenStack / feature requests"... I don't want to bloat this list with
>> undesirable messages.
>>
>>
>> Best Regards,
>> Thiago Martins
>>
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140211/0948511e/attachment-0001.html>
>
>------------------------------
>
>Message: 8
>Date: Tue, 11 Feb 2014 18:47:19 +0100
>From: Kovács Bálint <blint at balabit.hu>
>To: Davanum Srinivas <davanum at gmail.com>
>Cc: openstack at lists.openstack.org
>Subject: Re: [Openstack] GSOC 2014 - org reg and application for
>	mentoring
>Message-ID: <52FA6227.9070500 at balabit.hu>
>Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>Hi dims,
>
>great to hear that!
>
>I've added the idea to the Ideas section of the GSoC2014 page; I can
>always add more details if you would like.
>Should I also add myself to the Mentors section? Should I mark that
>I can help mainly on the Zorp side of things?
>
>Thanks,
>Balint
>
>On 02/11/2014 04:34 PM, Davanum Srinivas wrote:
>> Balint,
>>
>> Yes, i will be registering OpenStack org. We are collecting info from
>> mentors/participants here - https://wiki.openstack.org/wiki/GSoC2014
>> Can you please add what you mention here into that wiki page?
>>
>> thanks,
>> dims
>>
>> On Tue, Feb 11, 2014 at 10:11 AM, Kovács Bálint <blint at balabit.hu> wrote:
>>> Hi guys,
>>>
>>> there have been a few mails about OpenStack participating in GSOC 2014, but
>>> I haven't found any pointers on whether there has already been an
>>> organisation registration.
>>>
>>> We at the Zorp project have been pondering doing our org registration too,
>>> but figured that joining up with you in an OpenStack-related idea would
>>> probably be more valuable to both of us.
>>>
>>> Zorp is an application level proxy firewall with a python configuration
>>> language and extremely flexible architecture and we are looking into
>>> creating a Zorp driver for the Neutron FWaaS extension.
>>>
>>> Of course we would be happy to mentor anybody from the Zorp side and (given
>>> our limited understanding of OpenStack) would be very grateful if we could
>>> have somebody co-mentor the OpenStack side of the idea.
>>>
>>> What do you think?
>>>
>>> Best Regards,
>>> Balint
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to     : openstack at lists.openstack.org
>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>
>
>
>
>
>------------------------------
>
>Message: 9
>Date: Tue, 11 Feb 2014 16:23:06 -0500
>From: Mike Spreitzer <mspreitz at us.ibm.com>
>To: openstack at lists.openstack.org
>Subject: [Openstack] Limited external access from VM created by
>	DevStack
>Message-ID:
>	<OF55A5A310.93BF916A-ON85257C7C.00718024-85257C7C.00757968 at us.ibm.com>
>Content-Type: text/plain; charset="us-ascii"
>
>I am consistently suffering a network problem in simple DevStack 
>installations.  Am I doing something wrong, or is this a bug, or is it to 
>be expected?
>
>I install DevStack, using a pretty basic local.conf; the only thing it 
>says that is relevant to networking is setting HOST_IP to the address of 
>the machine where I am installing DevStack.  Thus, it is using nova 
>networking (the default), with the default address ranges.  DevStack 
>completes successfully.  I edit the default security group, completely 
>opening up ICMP, TCP, and UDP.  I instantiate an image.  Using Horizon I 
>log into the console of that image.  From that instance I can ping
>anywhere.  Then I associate a floating IP address with that instance.
>While that floating IP is associated, I cannot ping anywhere --- the
>instance can only ping the host's address and those of other VMs on the
>same host; it can NOT ping other hosts on the same subnet as the
>instance's host, nor anything more distant.
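>
>For reference, the security-group edits were along these lines (a rough
>sketch with the nova client, not an exact transcript):
>
>    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
>    nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
>    nova secgroup-add-rule default udp 1 65535 0.0.0.0/0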
>
>I get this both when installing DevStack onto bare metal and when 
>installing DevStack into a VM instance.  I get this when using branch 
>stable/havana and when using the master branch (over the last few weeks).
>
>Following are the details from an example in which DevStack (master 
>branch) was installed onto a bare metal machine a few days ago.  Before 
>installing DevStack, the host's networking config was as follows:
>
>ubu_wa at pvespa015:~$ ifconfig
>eth0      Link encap:Ethernet  HWaddr 00:21:5e:21:04:78 
>          inet addr:9.0.0.191  Bcast:9.0.1.255  Mask:255.255.254.0
>          inet6 addr: fe80::221:5eff:fe21:478/64 Scope:Link
>          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>          RX packets:266 errors:0 dropped:1 overruns:0 frame:0
>          TX packets:147 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:1000 
>          RX bytes:24924 (24.9 KB)  TX bytes:19720 (19.7 KB)
>
>eth1      Link encap:Ethernet  HWaddr 00:21:5e:21:04:7a 
>          UP BROADCAST MULTICAST  MTU:1500  Metric:1
>          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:1000 
>          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
>lo        Link encap:Local Loopback 
>          inet addr:127.0.0.1  Mask:255.0.0.0
>          inet6 addr: ::1/128 Scope:Host
>          UP LOOPBACK RUNNING  MTU:65536  Metric:1
>          RX packets:23 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:0 
>          RX bytes:2841 (2.8 KB)  TX bytes:2841 (2.8 KB)
>
>ubu_wa at pvespa015:~$ netstat -nr
>Kernel IP routing table
>Destination     Gateway         Genmask         Flags   MSS Window  irtt 
>Iface
>0.0.0.0         9.0.0.2         0.0.0.0         UG        0 0          0 
>eth0
>9.0.0.0         0.0.0.0         255.255.254.0   U         0 0          0 
>eth0
>169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 
>eth0
>ubu_wa at pvespa015:~$
>
>Here is the local.conf that I used:
>
>[[local|localrc]]
>HOST_IP=9.0.0.191
>#SERVICE_HOST=FIXME
>ADMIN_PASSWORD=POK-1428
>ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
>MYSQL_PASSWORD=$ADMIN_PASSWORD
>RABBIT_PASSWORD=$ADMIN_PASSWORD
>SERVICE_PASSWORD=$ADMIN_PASSWORD
>DEST=/opt/stack
>LOGFILE=stack.sh.log
>LOGDAYS=7
>LOG_COLOR=False
>SCREEN_LOGDIR=$DEST/logs/screen
>RECLONE=yes
>KEYSTONE_CATALOG_BACKEND=sql
>VOLUME_GROUP="stack-volumes"
>VOLUME_NAME_PREFIX="volume-"
>VOLUME_BACKING_FILE_SIZE=5130M
>API_RATE_LIMIT=False
>IMAGE_URLS+=",
>http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F17-x86_64-cfntools.qcow2
>"
>
>There were no failures reported from the DevStack installation.  After 
>that installation, the host's network config looked like this:
>
>ubu_wa at pvespa015:~$ ifconfig
>eth0      Link encap:Ethernet  HWaddr 00:21:5e:21:04:78 
>          inet addr:9.0.0.191  Bcast:9.0.1.255  Mask:255.255.254.0
>          inet6 addr: fe80::221:5eff:fe21:478/64 Scope:Link
>          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>          RX packets:677321 errors:0 dropped:161 overruns:0 frame:0
>          TX packets:299006 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:1000 
>          RX bytes:858856772 (858.8 MB)  TX bytes:20708280 (20.7 MB)
>
>eth1      Link encap:Ethernet  HWaddr 00:21:5e:21:04:7a 
>          UP BROADCAST MULTICAST  MTU:1500  Metric:1
>          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:1000 
>          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
>lo        Link encap:Local Loopback 
>          inet addr:127.0.0.1  Mask:255.0.0.0
>          inet6 addr: ::1/128 Scope:Host
>          UP LOOPBACK RUNNING  MTU:65536  Metric:1
>          RX packets:277765 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:277765 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:0 
>          RX bytes:537924232 (537.9 MB)  TX bytes:537924232 (537.9 MB)
>
>virbr0    Link encap:Ethernet  HWaddr 3a:a3:37:44:79:f4 
>          inet addr:192.168.122.1  Bcast:192.168.122.255 
>Mask:255.255.255.0
>          UP BROADCAST MULTICAST  MTU:1500  Metric:1
>          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:0 
>          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
>ubu_wa at pvespa015:~$ netstat -nr
>Kernel IP routing table
>Destination     Gateway         Genmask         Flags   MSS Window  irtt 
>Iface
>0.0.0.0         9.0.0.2         0.0.0.0         UG        0 0          0 
>eth0
>9.0.0.0         0.0.0.0         255.255.254.0   U         0 0          0 
>eth0
>169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 
>eth0
>192.168.122.0   0.0.0.0         255.255.255.0   U         0 0          0 
>virbr0
>ubu_wa at pvespa015:~$
>
>I then created and tested some VM instances.  After that, the host's 
>network config looked like this:
>
>ubu_wa at pvespa015:~$ ifconfig
>br100     Link encap:Ethernet  HWaddr 00:21:5e:21:04:78 
>          inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
>          inet6 addr: fe80::fce9:c9ff:feab:ac5/64 Scope:Link
>          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>          RX packets:2065 errors:0 dropped:2 overruns:0 frame:0
>          TX packets:213 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:0 
>          RX bytes:123592 (123.5 KB)  TX bytes:33219 (33.2 KB)
>
>eth0      Link encap:Ethernet  HWaddr 00:21:5e:21:04:78 
>          inet6 addr: fe80::221:5eff:fe21:478/64 Scope:Link
>          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>          RX packets:691620 errors:0 dropped:209 overruns:0 frame:0
>          TX packets:303320 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:1000 
>          RX bytes:860180754 (860.1 MB)  TX bytes:24551439 (24.5 MB)
>
>eth1      Link encap:Ethernet  HWaddr 00:21:5e:21:04:7a 
>          UP BROADCAST MULTICAST  MTU:1500  Metric:1
>          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:1000 
>          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
>lo        Link encap:Local Loopback 
>          inet addr:127.0.0.1  Mask:255.0.0.0
>          inet6 addr: ::1/128 Scope:Host
>          UP LOOPBACK RUNNING  MTU:65536  Metric:1
>          RX packets:407649 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:407649 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:0 
>          RX bytes:1043746254 (1.0 GB)  TX bytes:1043746254 (1.0 GB)
>
>virbr0    Link encap:Ethernet  HWaddr 3a:a3:37:44:79:f4 
>          inet addr:192.168.122.1  Bcast:192.168.122.255 
>Mask:255.255.255.0
>          UP BROADCAST MULTICAST  MTU:1500  Metric:1
>          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:0 
>          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
>vnet0     Link encap:Ethernet  HWaddr fe:16:3e:59:bf:df 
>          inet6 addr: fe80::fc16:3eff:fe59:bfdf/64 Scope:Link
>          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>          RX packets:75 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:1843 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:500 
>          RX bytes:7575 (7.5 KB)  TX bytes:132432 (132.4 KB)
>
>ubu_wa at pvespa015:~$ netstat -nr
>Kernel IP routing table
>Destination     Gateway         Genmask         Flags   MSS Window  irtt 
>Iface
>0.0.0.0         9.0.0.2         0.0.0.0         UG        0 0          0 
>br100
>9.0.0.0         0.0.0.0         255.255.254.0   U         0 0          0 
>br100
>10.0.0.0        0.0.0.0         255.255.255.0   U         0 0          0 
>br100
>192.168.122.0   0.0.0.0         255.255.255.0   U         0 0          0 
>virbr0
>ubu_wa at pvespa015:~$
>
>Following are some examples from an instance of F17 with private IP 
>address 10.0.0.2.  While it has no floating IP address, it can ping 
>10.0.0.2, 10.0.0.8 (a sibling VM), 9.0.0.191 (its host), 9.0.0.193 
>(another machine on the same subnet), and 8.8.8.8 (something entirely 
>outside IBM's intranet).  Here is what the network config looks like 
>inside the VM:
>
>[root at mjs-f17-test ~]# ifconfig
>eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>        inet 10.0.0.2  netmask 255.255.255.0  broadcast 10.0.0.255
>        inet6 fe80::f816:3eff:fe5f:7086  prefixlen 64  scopeid 0x20<link>
>        ether fa:16:3e:5f:70:86  txqueuelen 1000  (Ethernet)
>        RX packets 1205699  bytes 82470224 (78.6 MiB)
>        RX errors 0  dropped 3  overruns 0  frame 0
>        TX packets 4382  bytes 815192 (796.0 KiB)
>        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
>        inet 127.0.0.1  netmask 255.0.0.0
>        inet6 ::1  prefixlen 128  scopeid 0x10<host>
>        loop  txqueuelen 0  (Local Loopback)
>        RX packets 4  bytes 336 (336.0 B)
>        RX errors 0  dropped 0  overruns 0  frame 0
>        TX packets 4  bytes 336 (336.0 B)
>        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>[root at mjs-f17-test ~]# route -n
>Kernel IP routing table
>Destination     Gateway         Genmask         Flags Metric Ref    Use 
>Iface
>0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 
>eth0
>10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 
>eth0
>[root at mjs-f17-test ~]# 
>
>Using Horizon I associate a floating IP address, it comes up with an 
>address like 172.24.4.7.  Now it can ping 10.0.0.2, 10.0.0.8, and 
>9.0.0.191 --- but it can NOT ping 9.0.0.193 nor 8.8.8.8.  Inside the VM 
>the network config looks the same.
>
>I then dissociate the floating IP address, and the VM goes back to being 
>able to ping anything.
>
> 
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140211/0ed7b257/attachment-0001.html>
>
>------------------------------
>
>Message: 10
>Date: Tue, 11 Feb 2014 17:45:40 -0500
>From: Ritesh Nanda <riteshnanda09 at gmail.com>
>To: Vivek Varghese Cherian <vivekcherian at gmail.com>
>Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>Subject: Re: [Openstack] Bringing up VMs in an OpenStack private cloud
>	with access to 2 external networks (dmz and corporate)
>Message-ID:
>	<CAO5CbpDt=LyQGy5ca+D7WJs0RaBA1=Q61-NuAWu=w+TPQLpUOg at mail.gmail.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>Hello Vivek,
>
>I think you are talking about something related to provider networks in
>Quantum/Neutron. You can use VLAN-based networking in Neutron. If I
>understand correctly, you want every VM created to get a NIC with an IP in
>either the DMZ or the corporate network.
>Please share more details.
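>
>For example, something along these lines (the physical network names and
>VLAN IDs below are placeholders for whatever your plugin is configured
>with, e.g. in bridge_mappings / network_vlan_ranges):
>
>    neutron net-create dmz-net --provider:network_type vlan \
>        --provider:physical_network physnet-dmz --provider:segmentation_id 100
>    neutron net-create corp-net --provider:network_type vlan \
>        --provider:physical_network physnet-corp --provider:segmentation_id 200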
>
>Regards,
>Ritesh Nanda
>
>
>
>
>On Tue, Feb 11, 2014 at 12:02 AM, Vivek Varghese Cherian <
>vivekcherian at gmail.com> wrote:
>
>> Hi,
>>
>>
>> We are trying to set up a OpenStack based private cloud. We have 2
>> networks one a dmz network with little or no restrictions and
>> the other a corporate network with all the corporate access policies in
>> place.
>>
>> The goal of setting up this private cloud is to ensure that any VMs that
>> come up in the OpenStack cloud have IP addresses assigned in either the
>> DMZ or the corporate network, or both, depending on the project
>> requirements.
>>
>> We currently have a 4-server setup, and every server has 4 NICs. We are
>> planning to have a network, controller, compute, and storage node, with
>> future plans of adding HA to the setup.
>>
>> We have set up a network controller node with 4 NICs.  We are planning to
>> map the first NIC to the DMZ network, the second NIC to the corporate
>> network, and the third and fourth NICs to the management and data networks,
>> respectively.
>>
>> Currently we are trying to bridge map each of these 4 interfaces on the
>> network controller to the dmz, corporate, data and management networks
>> respectively.
>>
>> I would like to get pointers on how to go about with this approach or if
>> the community can suggest any better solutions than bridge mappings to
>> achieve our objective.
>>
>> Regards,
>> --
>> Vivek Varghese Cherian
>>
>> _______________________________________________
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>
>
>-- 
>
>
>With Regards,
>Ritesh Nanda
>
><http://www.ericsson.com/>
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140211/72f33de5/attachment-0001.html>
>
>------------------------------
>
>Message: 11
>Date: Tue, 11 Feb 2014 22:49:30 +0000
>From: Lillie Ross-CDSR11 <Ross.Lillie at motorolasolutions.com>
>To: sylecn <sylecn at gmail.com>
>Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>Subject: Re: [Openstack] Neutron (Havana) configuration on Ubuntu
>Message-ID:
>	<1429FDD3-563A-490E-9F57-66D0700B9330 at motorolasolutions.com>
>Content-Type: text/plain; charset="Windows-1252"
>
>Oops! forgot to hit "reply all".  Sorry for the duplicates... Also adding additional observations/questions.
>
>When I attach to the compute node, I don't see any network namespaces. Is this normal? Admittedly, I haven't read up on all the gory details of neutron (which I probably need to do this evening).
>
>Original message follows:
>
>-------------------------
>
>Hi Yuanle,
>
>OK, checking the console log, it doesn't appear that my instance is getting the DHCP-assigned address...
>
>--------
>Starting network...
>udhcpc (v1.20.1) started
>Sending discover...
>Sending discover...
>Sending discover...
>No lease, failing
>WARN: /etc/rc3.d/S40-network failed
>cirros-ds 'net' up at 181.14
>checking http://169.254.169.254/2009-04-04/instance-id
>failed 1/20: up 181.15. reques
>????
>
>and later, in summary, I see...
>
>=== network info ===
>if-info: lo,up,127.0.0.1,8,::1
>if-info: eth0,up,,8,fe80::f816:3eff:fe56:3612
>=== datasource: None None ===
>=== cirros: current=0.3.1 uptime=221.94 ===
>route: fscanf
>=== pinging gateway failed, debugging connection ===
>############ debug start ##############
>### /etc/init.d/sshd start
>Starting dropbear sshd: OK
>route: fscanf
>### ifconfig -a
>eth0      Link encap:Ethernet  HWaddr FA:16:3E:56:36:12  
>         inet6 addr: fe80::f816:3eff:fe56:3612/64 Scope:Link
>         UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>         RX packets:6 errors:0 dropped:0 overruns:0 frame:0
>         TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
>         collisions:0 txqueuelen:1000 
>         RX bytes:468 (468.0 B)  TX bytes:1112 (1.0 KiB)
>
>lo        Link encap:Local Loopback  
>         inet addr:127.0.0.1  Mask:255.0.0.0
>         inet6 addr: ::1/128 Scope:Host
>         UP LOOPBACK RUNNING  MTU:16436  Metric:1
>         RX packets:12 errors:0 dropped:0 overruns:0 frame:0
>         TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
>         collisions:0 txqueuelen:0 
>         RX bytes:1020 (1020.0 B)  TX bytes:1020 (1020.0 B)
>
>### route -n
>Kernel IP routing table
>Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>
>I don't see anything in the logs that indicates a problem to me, so I'm missing something.  Checking the DHCP logs on the network/controller node I see the address is being allocated.  The instance, however, isn't seeing the DHCP server. Multi-cast issue? Any guidance is appreciated and thanks again. 
>
>Regards,
>/ross
>
>On Feb 10, 2014, at 7:58 PM, sylecn <sylecn at gmail.com> wrote:
>
>> Hi Ross,
>> 
>> 1. Make sure you have enabled ping (ICMP) in security groups.
>>    The default security group does not allow ping.
>> 
>>    neutron security-group-rule-create --direction ingress --protocol icmp $SG_ID
>> 
>>    I suggest you explicitly create a security group and use that when you
>>    boot the instance. In this case, I see two security groups named
>>    "default"; better to add that rule for both.
>> 
>> 2. Check whether you can ping the fixed ip.
>>    Run on the neutron node:
>> 
>>    sudo ip netns exec qrouter-43c596c4-65fe-4c22-a48a-0a6e200abf78 ping -c 4 10.0.1.2
>> 
>> 3. Check console log of the vm. Did it boot correctly? Did it get IP from DHCP?
>> 
>>    nova console-log tvm1
>> 
>> Thanks,
>> Yuanle
>> 
>> 
>> 
>> On Tue, Feb 11, 2014 at 8:52 AM, Lillie Ross-CDSR11 <Ross.Lillie at motorolasolutions.com> wrote:
>> If this issue has already been discussed, please excuse.
>> 
>> I'm somewhat confused about neutron configuration and tenancy. Correct me if I'm wrong. 
>> 
>> First, I've created a private network named 'admin-net' under the 'admin' tenant. I've associated a subnet named 'admin-net.1', with a CIDR of 10.0.1.0/24, with admin-net.
>> 
>> Next, I created a network named 'campus-net', with router:external set to True, associated with our campus network.  This network was created under the 'service' tenant. I also created a router named 'campus-gw' under the 'service' tenant and set its gateway to the 'campus-net' network.
>> 
>> Finally, I created a floating address pool named 'admin-net.float' under the 'admin' tenant and added it as an interface to the 'campus-gw' router.  I also created a default security group under the 'admin' tenant to allow SSH and ICMP access.
>> 
>> When I boot an image, as a member of the admin tenant, the instance is correctly assigned an IP address from the admin tenant's private network.  I next allocate a floating IP address (nova floating-ip-create admin-net.float) and associate it with my running instance.
>> 
>> However, I'm unable to ping the running instance, and I see no indication of the end of the tunnel being established on the network/controller node.
>> 
>> I'm not that well versed with network namespaces nor the openvswitch commands. 
>> 
>> Two questions: does my overall configuration sound correct, and how best to diagnose what's going on here?  Any pointers would be helpful; additional details can be provided as needed.  Thanks loads in advance.
>> 
>> Regards,
>> /ross
>> 
>> -
>> 
>> (neutron) net-list
>> +--------------------------------------+------------+----------------------------------------------------+
>> | id                                   | name       | subnets                                            |
>> +--------------------------------------+------------+----------------------------------------------------+
>> | 2426f4d8-a983-4f50-ab5a-fd2a37e5cd94 | campus-net | a948538d-c2c2-4c02-9116-b89a79f0c73a 173.23.0.0/16 |
>> | e6984375-f35b-4636-a293-43d0d296e0ff | admin-net  | 2ced890b-944f-4f6e-8f7a-3f5a4d07c2bb 10.0.1.0/24   |
>> +--------------------------------------+------------+----------------------------------------------------+
>> (neutron) subnet-list
>> +--------------------------------------+--------------------+---------------+---------------------------------------------------+
>> | id                                   | name               | cidr          | allocation_pools                                  |
>> +--------------------------------------+--------------------+---------------+---------------------------------------------------+
>> | 2ced890b-944f-4f6e-8f7a-3f5a4d07c2bb | admin-net.1        | 10.0.1.0/24   | {"start": "10.0.1.2", "end": "10.0.1.254"}        |
>> | a948538d-c2c2-4c02-9116-b89a79f0c73a | admin-net.floating | 173.23.0.0/16 | {"start": "173.23.182.2", "end": "173.23.182.15"} |
>> +--------------------------------------+--------------------+---------------+---------------------------------------------------+
>> (neutron) router-list
>> +--------------------------------------+-----------+-----------------------------------------------------------------------------+
>> | id                                   | name      | external_gateway_info                                                       |
>> +--------------------------------------+-----------+-----------------------------------------------------------------------------+
>> | 43c596c4-65fe-4c22-a48a-0a6e200abf78 | campus-gw | {"network_id": "2426f4d8-a983-4f50-ab5a-fd2a37e5cd94", "enable_snat": true} |
>> +--------------------------------------+-----------+-----------------------------------------------------------------------------+
>> (neutron) router-show campus-gw
>> +-----------------------+-----------------------------------------------------------------------------+
>> | Field                 | Value                                                                       |
>> +-----------------------+-----------------------------------------------------------------------------+
>> | admin_state_up        | True                                                                        |
>> | external_gateway_info | {"network_id": "2426f4d8-a983-4f50-ab5a-fd2a37e5cd94", "enable_snat": true} |
>> | id                    | 43c596c4-65fe-4c22-a48a-0a6e200abf78                                        |
>> | name                  | campus-gw                                                                   |
>> | routes                |                                                                             |
>> | status                | ACTIVE                                                                      |
>> | tenant_id             | service                                                                     |
>> +-----------------------+-----------------------------------------------------------------------------+
>> (neutron) security-group-list
>> +--------------------------------------+---------+-------------+
>> | id                                   | name    | description |
>> +--------------------------------------+---------+-------------+
>> | 0d66a3e2-7a0f-4caf-8b63-c3c8f3106242 | default | default     |
>> | c87230fa-9193-47a7-8ade-cec5f7f6b958 | default | default     |
>> +--------------------------------------+---------+-------------+
>> (neutron) 
>> root at cirrus3:/var/log/neutron# nova list
>> +--------------------------------------+------+--------+------------+-------------+----------------------------------+
>> | ID                                   | Name | Status | Task State | Power State | Networks                         |
>> +--------------------------------------+------+--------+------------+-------------+----------------------------------+
>> | ccdf7541-3a74-4289-a8ce-9fe5cffe9dbd | tvm1 | ACTIVE | None       | Running     | admin-net=10.0.1.2, 173.23.182.3 |
>> +--------------------------------------+------+--------+------------+-------------+----------------------------------+
>> root at cirrus3:/var/log/neutron# nova show tvm1
>> +--------------------------------------+----------------------------------------------------------+
>> | Property                             | Value                                                    |
>> +--------------------------------------+----------------------------------------------------------+
>> | status                               | ACTIVE                                                   |
>> | updated                              | 2014-02-11T00:03:25Z                                     |
>> | OS-EXT-STS:task_state                | None                                                     |
>> | OS-EXT-SRV-ATTR:host                 | cn1                                                      |
>> | key_name                             | root                                                     |
>> | image                                | cirros (57a9f5d6-8b07-4bdb-b8a0-900de339d804)            |
>> | admin-net network                    | 10.0.1.2, 173.23.182.3                                   |
>> | hostId                               | 982cd20cde9c5f8514c95b5ca8530258fa9454cdc988a8b007a6d20b |
>> | OS-EXT-STS:vm_state                  | active                                                   |
>> | OS-EXT-SRV-ATTR:instance_name        | instance-00000021                                        |
>> | OS-SRV-USG:launched_at               | 2014-02-11T00:03:25.000000                               |
>> | OS-EXT-SRV-ATTR:hypervisor_hostname  | cn1                                                      |
>> | flavor                               | m1.tiny (1)                                              |
>> | id                                   | ccdf7541-3a74-4289-a8ce-9fe5cffe9dbd                     |
>> | security_groups                      | [{u'name': u'default'}]                                  |
>> | OS-SRV-USG:terminated_at             | None                                                     |
>> | user_id                              | 090a2de6e74b4573bd29318d4f494191                         |
>> | name                                 | tvm1                                                     |
>> | created                              | 2014-02-11T00:02:47Z                                     |
>> | tenant_id                            | ec54b7cadcab4620bbb6d568be7bd4a8                         |
>> | OS-DCF:diskConfig                    | MANUAL                                                   |
>> | metadata                             | {}                                                       |
>> | os-extended-volumes:volumes_attached | []                                                       |
>> | accessIPv4                           |                                                          |
>> | accessIPv6                           |                                                          |
>> | progress                             | 0                                                        |
>> | OS-EXT-STS:power_state               | 1                                                        |
>> | OS-EXT-AZ:availability_zone          | nova                                                     |
>> | config_drive                         |                                                          |
>> +--------------------------------------+----------------------------------------------------------+
>> root at cirrus3:/var/log/neutron# 
>> 
>> --
>> Ross Lillie
>> Distinguished Member of Technical Staff
>> Motorola Solutions, Inc.
>> 
>> motorolasolutions.com
>> O: +1.847.576.0012
>> M: +1.847.980.2241
>> E: ross.lillie at motorolasolutions.com
>> 
>> 
>> <MSI-Email-Identity-sm.png>
>> 
>> 
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> 
>> 
>
>
>
>
>
>
>------------------------------
>
>Message: 12
>Date: Tue, 11 Feb 2014 15:38:27 -0800
>From: Adam Lawson <alawson at aqorn.com>
>To: openstack <openstack at lists.openstack.org>
>Subject: [Openstack] [Swift] Question regarding global scaling
>Message-ID:
>	<CAJfWK49XT96xquvH7xK=PYW1qA6oBZLkJy2LCmgvUrs0MdCNEg at mail.gmail.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>Hello, folks.
>
>I'm working on a general-purpose Swift deployment that needs to scale
>globally. For example, nodes on the West Coast, the East Coast, in the EU and APAC. We
>have a Swift PoC cluster that spans the West Coast and the EU, and it works fine,
>replicating using zones for now.
>
>For those who are scaling to that degree: are you building multiple
>independent clusters and replicating between them somehow, or using regions
>within what is essentially one giant cluster and relying on affinity rules
>like read_affinity and write_affinity?
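>
>For what it's worth, a minimal sketch of the proxy-server.conf affinity settings for the single-big-cluster approach (the region numbers are placeholders, not from our PoC):
>
>[app:proxy-server]
>use = egg:swift#proxy
>sorting_method = affinity
>read_affinity = r1=100, r2=200
>write_affinity = r1
>write_affinity_node_count = 2 * replicas
>
>read_affinity prefers the local region for GETs, and write_affinity lands the initial copies locally and lets replication move them out to the remote regions afterwards.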
>
>
>Adam Lawson
>AQORN, Inc.
>427 North Tatnall Street
>Ste. 58461
>Wilmington, Delaware 19801-2230
>Toll-free: (888) 406-7620
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140211/a1d5f492/attachment-0001.html>
>
>------------------------------
>
>Message: 13
>Date: Tue, 11 Feb 2014 23:40:08 +0000
>From: Lillie Ross-CDSR11 <Ross.Lillie at motorolasolutions.com>
>To: sylecn <sylecn at gmail.com>
>Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>Subject: Re: [Openstack] Neutron (Havana) configuration on Ubuntu
>Message-ID:
>	<922F3E7E-A66B-4E54-8984-5FA84F3B8223 at motorolasolutions.com>
>Content-Type: text/plain; charset="Windows-1252"
>
>As a further follow-on...
>
>Forget my question about namespaces on the compute node. Dumb. Realized it the minute I hit send.
>
>Regarding my instance not receiving a DHCP response, I did the following test.
>
>In the namespace for my dhcp server on the network controller, I issued the following command:
>
># ip netns exec qdhcp-05137211-1660-44e1-ae50-107900090e05 tcpdump -i all
>
>Then, in another process, I boot up an instance of cirros, e.g.
>
># nova boot --flavor m1.tiny --key-name root --image cirros tvm
>
>Nova shows the instance booting, and finally running with the correct DHCP address; however, the process running tcpdump in the namespace shows nothing.
>
>Any ideas of where to start digging? I know this is a stupid config bug - I just can't see it.
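>
>For reference, a narrower capture inside that namespace might show more; first list the namespace's interfaces to find the tap device dnsmasq is bound to (the tap name below is a placeholder):
>
># ip netns exec qdhcp-05137211-1660-44e1-ae50-107900090e05 ip addr
># ip netns exec qdhcp-05137211-1660-44e1-ae50-107900090e05 tcpdump -n -e -i tapXXXXXXXX port 67 or port 68
>
>The second command filters to DHCP traffic only, so a silent capture really does mean no requests are reaching the namespace.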
>
>Thanks again,
>Ross
>
>On Feb 11, 2014, at 4:49 PM, Ross Lillie <ross.lillie at motorolasolutions.com> wrote:
>
>> Oops! Forgot to hit "reply all". Sorry for the duplicates... Also adding additional observations/questions.
>> 
>> When I attach to the compute node, I don't see any network namespaces. Is this normal? Admittedly, I haven't read up on all the gory details of neutron (which I probably need to do this evening).
>> 
>> Original message follows:
>> 
>> -------------------------
>> 
>> Hi Yuanle,
>> 
>> OK, checking the console log, it doesn't appear that my instance is getting the dhcp assigned address...
>> 
>> --------
>> Starting network...
>> udhcpc (v1.20.1) started
>> Sending discover...
>> Sending discover...
>> Sending discover...
>> No lease, failing
>> WARN: /etc/rc3.d/S40-network failed
>> cirros-ds 'net' up at 181.14
>> checking http://169.254.169.254/2009-04-04/instance-id
>> failed 1/20: up 181.15. reques
>> ????
>> 
>> and later, in summary I see...
>> 
>> === network info ===
>> if-info: lo,up,127.0.0.1,8,::1
>> if-info: eth0,up,,8,fe80::f816:3eff:fe56:3612
>> === datasource: None None ===
>> === cirros: current=0.3.1 uptime=221.94 ===
>> route: fscanf
>> === pinging gateway failed, debugging connection ===
>> ############ debug start ##############
>> ### /etc/init.d/sshd start
>> Starting dropbear sshd: OK
>> route: fscanf
>> ### ifconfig -a
>> eth0      Link encap:Ethernet  HWaddr FA:16:3E:56:36:12  
>>         inet6 addr: fe80::f816:3eff:fe56:3612/64 Scope:Link
>>         UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>         RX packets:6 errors:0 dropped:0 overruns:0 frame:0
>>         TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
>>         collisions:0 txqueuelen:1000 
>>         RX bytes:468 (468.0 B)  TX bytes:1112 (1.0 KiB)
>> 
>> lo        Link encap:Local Loopback  
>>         inet addr:127.0.0.1  Mask:255.0.0.0
>>         inet6 addr: ::1/128 Scope:Host
>>         UP LOOPBACK RUNNING  MTU:16436  Metric:1
>>         RX packets:12 errors:0 dropped:0 overruns:0 frame:0
>>         TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
>>         collisions:0 txqueuelen:0 
>>         RX bytes:1020 (1020.0 B)  TX bytes:1020 (1020.0 B)
>> 
>> ### route -n
>> Kernel IP routing table
>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>> 
>> I don't see anything in the logs that indicates a problem to me, so I'm missing something. Checking the DHCP logs on the network/controller node I see the address is being allocated. The instance, however, isn't seeing the DHCP server. Multicast issue? Any guidance is appreciated and thanks again.
>> 
>> Regards,
>> /ross
>> 
>> On Feb 10, 2014, at 7:58 PM, sylecn <sylecn at gmail.com> wrote:
>> 
>>> Hi Ross,
>>> 
>>> 1. Make sure you have enabled ping (ICMP) in your security groups.
>>>   The default security group does not allow ping.
>>> 
>>>   neutron security-group-rule-create --direction ingress --protocol icmp $SG_ID
>>> 
>>>   I suggest you explicitly create a security group and use it when you
>>>   boot the instance. In this case, I see two security groups named
>>>   "default"; better to add that rule to both (see the sketch after this list).
>>> 
>>> 2. Check whether you can ping the fixed ip.
>>>   Run on the neutron node:
>>> 
>>>   sudo ip netns exec qrouter-43c596c4-65fe-4c22-a48a-0a6e200abf78 ping -c 4 10.0.1.2
>>> 
>>> 3. Check console log of the vm. Did it boot correctly? Did it get IP from DHCP?
>>> 
>>>   nova console-log tvm1
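>>> 
>>>   As a sketch, the full set of rules implied above would be (the two IDs are from the security-group-list output quoted below; normally you would use only the group owned by the tenant you boot in):
>>> 
>>>   # allow ping
>>>   neutron security-group-rule-create --direction ingress --protocol icmp $SG_ID
>>>   # allow SSH
>>>   neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 --port-range-max 22 $SG_ID
>>> 
>>>   with SG_ID set to 0d66a3e2-7a0f-4caf-8b63-c3c8f3106242 and then c87230fa-9193-47a7-8ade-cec5f7f6b958.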
>>> 
>>> Thanks,
>>> Yuanle
>>> 
>>> 
>>> 
>>> On Tue, Feb 11, 2014 at 8:52 AM, Lillie Ross-CDSR11 <Ross.Lillie at motorolasolutions.com> wrote:
>>> If this issue has already been discussed, please excuse.
>>> 
>>> I'm somewhat confused about neutron configuration and tenancy. Correct me if I'm wrong. 
>>> 
>>> First, I've created a private network under the "admin" tenant named "admin-net". I've associated a subnet named admin-net.1, with a CIDR of 10.0.1.0/24, with admin-net.
>>> 
>>> Next, I created a network, with router:external set to True, associated with our campus network and named "campus-net". This network was created under the "service" tenant. I also created a router named "campus-gw" under the "service" tenant and set its gateway to be the "campus-net" network.
>>> 
>>> Finally, I created a floating address pool under the "admin" tenant named "admin-net.float" and added it as an interface to the "campus-gw" router. I also created a default security group under the "admin" tenant to allow SSH and ICMP access.
>>> 
>>> When I boot an image, as a member of the admin tenant, the instance is correctly assigned an IP address from the admin tenant's private network. I next allocate a floating IP address (nova floating-ip-create admin-net.float) and associate it with my running instance.
>>> 
>>> However, I'm unable to ping the running instance, and I see no indication of the end of the tunnel being established on the network/controller node.
>>> 
>>> I'm not that well versed in network namespaces or the openvswitch commands. 
>>> 
>>> Two questions: does my overall configuration sound correct? And how best to diagnose what's going on here? Any pointers would be helpful. Additional details can be provided as needed. Thanks loads in advance.
>>> 
>>> Regards,
>>> /ross
>>> 
>>> ?
>>> 
>>> (neutron) net-list
>>> +--------------------------------------+------------+----------------------------------------------------+
>>> | id                                   | name       | subnets                                            |
>>> +--------------------------------------+------------+----------------------------------------------------+
>>> | 2426f4d8-a983-4f50-ab5a-fd2a37e5cd94 | campus-net | a948538d-c2c2-4c02-9116-b89a79f0c73a 173.23.0.0/16 |
>>> | e6984375-f35b-4636-a293-43d0d296e0ff | admin-net  | 2ced890b-944f-4f6e-8f7a-3f5a4d07c2bb 10.0.1.0/24   |
>>> +--------------------------------------+------------+----------------------------------------------------+
>>> (neutron) subnet-list
>>> +--------------------------------------+--------------------+---------------+---------------------------------------------------+
>>> | id                                   | name               | cidr          | allocation_pools                                  |
>>> +--------------------------------------+--------------------+---------------+---------------------------------------------------+
>>> | 2ced890b-944f-4f6e-8f7a-3f5a4d07c2bb | admin-net.1        | 10.0.1.0/24   | {"start": "10.0.1.2", "end": "10.0.1.254"}        |
>>> | a948538d-c2c2-4c02-9116-b89a79f0c73a | admin-net.floating | 173.23.0.0/16 | {"start": "173.23.182.2", "end": "173.23.182.15"} |
>>> +--------------------------------------+--------------------+---------------+---------------------------------------------------+
>>> (neutron) router-list
>>> +--------------------------------------+-----------+-----------------------------------------------------------------------------+
>>> | id                                   | name      | external_gateway_info                                                       |
>>> +--------------------------------------+-----------+-----------------------------------------------------------------------------+
>>> | 43c596c4-65fe-4c22-a48a-0a6e200abf78 | campus-gw | {"network_id": "2426f4d8-a983-4f50-ab5a-fd2a37e5cd94", "enable_snat": true} |
>>> +--------------------------------------+-----------+-----------------------------------------------------------------------------+
>>> (neutron) router-show campus-gw
>>> +-----------------------+-----------------------------------------------------------------------------+
>>> | Field                 | Value                                                                       |
>>> +-----------------------+-----------------------------------------------------------------------------+
>>> | admin_state_up        | True                                                                        |
>>> | external_gateway_info | {"network_id": "2426f4d8-a983-4f50-ab5a-fd2a37e5cd94", "enable_snat": true} |
>>> | id                    | 43c596c4-65fe-4c22-a48a-0a6e200abf78                                        |
>>> | name                  | campus-gw                                                                   |
>>> | routes                |                                                                             |
>>> | status                | ACTIVE                                                                      |
>>> | tenant_id             | service                                                                     |
>>> +-----------------------+-----------------------------------------------------------------------------+
>>> (neutron) security-group-list
>>> +--------------------------------------+---------+-------------+
>>> | id                                   | name    | description |
>>> +--------------------------------------+---------+-------------+
>>> | 0d66a3e2-7a0f-4caf-8b63-c3c8f3106242 | default | default     |
>>> | c87230fa-9193-47a7-8ade-cec5f7f6b958 | default | default     |
>>> +--------------------------------------+---------+-------------+
>>> (neutron) 
>>> root at cirrus3:/var/log/neutron# nova list
>>> +--------------------------------------+------+--------+------------+-------------+----------------------------------+
>>> | ID                                   | Name | Status | Task State | Power State | Networks                         |
>>> +--------------------------------------+------+--------+------------+-------------+----------------------------------+
>>> | ccdf7541-3a74-4289-a8ce-9fe5cffe9dbd | tvm1 | ACTIVE | None       | Running     | admin-net=10.0.1.2, 173.23.182.3 |
>>> +--------------------------------------+------+--------+------------+-------------+----------------------------------+
>>> root at cirrus3:/var/log/neutron# nova show tvm1
>>> +--------------------------------------+----------------------------------------------------------+
>>> | Property                             | Value                                                    |
>>> +--------------------------------------+----------------------------------------------------------+
>>> | status                               | ACTIVE                                                   |
>>> | updated                              | 2014-02-11T00:03:25Z                                     |
>>> | OS-EXT-STS:task_state                | None                                                     |
>>> | OS-EXT-SRV-ATTR:host                 | cn1                                                      |
>>> | key_name                             | root                                                     |
>>> | image                                | cirros (57a9f5d6-8b07-4bdb-b8a0-900de339d804)            |
>>> | admin-net network                    | 10.0.1.2, 173.23.182.3                                   |
>>> | hostId                               | 982cd20cde9c5f8514c95b5ca8530258fa9454cdc988a8b007a6d20b |
>>> | OS-EXT-STS:vm_state                  | active                                                   |
>>> | OS-EXT-SRV-ATTR:instance_name        | instance-00000021                                        |
>>> | OS-SRV-USG:launched_at               | 2014-02-11T00:03:25.000000                               |
>>> | OS-EXT-SRV-ATTR:hypervisor_hostname  | cn1                                                      |
>>> | flavor                               | m1.tiny (1)                                              |
>>> | id                                   | ccdf7541-3a74-4289-a8ce-9fe5cffe9dbd                     |
>>> | security_groups                      | [{u'name': u'default'}]                                  |
>>> | OS-SRV-USG:terminated_at             | None                                                     |
>>> | user_id                              | 090a2de6e74b4573bd29318d4f494191                         |
>>> | name                                 | tvm1                                                     |
>>> | created                              | 2014-02-11T00:02:47Z                                     |
>>> | tenant_id                            | ec54b7cadcab4620bbb6d568be7bd4a8                         |
>>> | OS-DCF:diskConfig                    | MANUAL                                                   |
>>> | metadata                             | {}                                                       |
>>> | os-extended-volumes:volumes_attached | []                                                       |
>>> | accessIPv4                           |                                                          |
>>> | accessIPv6                           |                                                          |
>>> | progress                             | 0                                                        |
>>> | OS-EXT-STS:power_state               | 1                                                        |
>>> | OS-EXT-AZ:availability_zone          | nova                                                     |
>>> | config_drive                         |                                                          |
>>> +--------------------------------------+----------------------------------------------------------+
>>> root at cirrus3:/var/log/neutron# 
>>> 
>>> --
>>> Ross Lillie
>>> Distinguished Member of Technical Staff
>>> Motorola Solutions, Inc.
>>> 
>>> motorolasolutions.com
>>> O: +1.847.576.0012
>>> M: +1.847.980.2241
>>> E: ross.lillie at motorolasolutions.com
>>> 
>>> 
>>> <MSI-Email-Identity-sm.png>
>>> 
>>> 
>>> _______________________________________________
>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to     : openstack at lists.openstack.org
>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> 
>>> 
>> 
>
>
>
>
>
>
>------------------------------
>
>Message: 14
>Date: Wed, 12 Feb 2014 01:13:49 +0000
>From: Lillie Ross-CDSR11 <Ross.Lillie at motorolasolutions.com>
>To: sylecn <sylecn at gmail.com>
>Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>Subject: Re: [Openstack] Neutron (Havana) configuration on Ubuntu
>Message-ID:
>	<2B1A18FC-7724-42E1-BDFF-3A6288A6987A at motorolasolutions.com>
>Content-Type: text/plain; charset="Windows-1252"
>
>Everyone, thanks for all the help, but I found the problem, and am going to go pound my head against the wall this evening or have a martini.
>
>My cloud config uses VLAN tagging. Somehow, during the upgrade to new hardware and the Essex-to-Havana migration, the tag profile in my Brocade switch got set back to the default tag value of 8100. This prevents packets tagged by OpenStack from being switched between ports. Simple fix, and I should have suspected this when I didn't see any traffic on the virtual router.
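>
>For reference, one quick way to confirm the tags are now making it onto the wire (the NIC name is a placeholder for whatever interface carries the tenant VLANs):
>
># tcpdump -nn -e -i eth1 vlan and '(port 67 or port 68)'
>
>With -e, the 802.1Q tag shows up in each frame, so it is easy to see whether DHCP traffic leaves tagged with the expected VLAN ID.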
>
>Thanks again everyone and regards,
>/ross
>
>On Feb 11, 2014, at 5:40 PM, Lillie Ross-CDSR11 <Ross.Lillie at motorolasolutions.com> wrote:
>
>> As a further follow-on...
>> 
>> Forget my question about namespaces on the compute node. Dumb. Realized it the minute I hit send.
>> 
>> Regarding my instance not receiving a DHCP response, I did the following test.
>> 
>> In the namespace for my dhcp server on the network controller, I issued the following command:
>> 
>> # ip netns exec qdhcp-05137211-1660-44e1-ae50-107900090e05 tcpdump -i all
>> 
>> Then, in another process, I boot up an instance of cirros, e.g.
>> 
>> # nova boot --flavor m1.tiny --key-name root --image cirros tvm
>> 
>> Nova shows the instance booting, and finally running with the correct DHCP address; however, the process running tcpdump in the namespace shows nothing.
>> 
>> Any ideas of where to start digging? I know this is a stupid config bug - I just can't see it.
>> 
>> Thanks again,
>> Ross
>> 
>> [earlier quoted messages trimmed; they are quoted in full under Message 13 above]
>> 
>
>
>
>
>
>
>------------------------------
>
>Message: 15
>Date: Tue, 11 Feb 2014 21:43:52 -0500
>From: Brian Schott <brian.schott at nimbisservices.com>
>To: openstack at lists.openstack.org
>Cc: Ron Minnich <rminnich at gmail.com>
>Subject: [Openstack] Call for panelists on "HPC in the Cloud"
>Message-ID: <6365D9B1-208C-4493-8DCD-21EC64679502 at nimbisservices.com>
>Content-Type: text/plain; charset="windows-1252"
>
>My friend Ron Minnich from Google is looking for panelists on "HPC in the Cloud".  I know the OpenStack community is doing everything from bare-metal HPC systems, to virtualized machines with SRIOV interconnects, to hybrid cloud front-end management nodes with traditional HPC batch backend systems.  I think all would be interesting.  If you are interested in representing the OpenStack community at this event, please contact Ron Minnich <rminnich at gmail.com>.
>
>http://hpcs2014.cisedu.info/2-conference/symposia/symp01-intercloudhpc
>
>International Symposium on Cloud Computing and Services for 
>High Performance Computing Systems
>(InterCloud-HPC 2014)
>July 21 - July 25, 2014
>The Savoia Hotel Regency
>Bologna, Italy 
>
>Submission Deadline: March 11, 2014
>
>Thanks!
>Brian
>
>-------------------------------------------------
>Brian Schott, CTO
>Nimbis Services, Inc.
>brian.schott at nimbisservices.com
>ph: 443-274-6064  fx: 443-274-6060
>
>
>
>-------------- next part --------------
>A non-text attachment was scrubbed...
>Name: smime.p7s
>Type: application/pkcs7-signature
>Size: 3662 bytes
>Desc: not available
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140211/6125db50/attachment-0001.bin>
>
>------------------------------
>
>Message: 16
>Date: Wed, 12 Feb 2014 10:10:53 +0530
>From: Vikash Kumar <vikash.kumar at oneconvergence.com>
>To: Chris Baker <openstack2014 at qq.com>
>Cc: openstack <openstack at lists.openstack.org>
>Subject: Re: [Openstack] dhcp request can't reach br-int of network
>	node
>Message-ID:
>	<CAPHM3WYP0UPwo282mcrG2_EFEVDCcPmbvXVoBKa3-oUBp0oVLg at mail.gmail.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>You can try a few things quickly:
>
>a) Check connectivity between your compute node and network node; check the interfaces.
>b) Check the route entries.
>
>Confirm your compute node can ping the network node. You can also check the VLAN tag
>settings on the OVS bridges.
>
>
>On Fri, Feb 7, 2014 at 10:11 PM, Chris Baker <openstack2014 at qq.com> wrote:
>
>> Hi guys,
>>
>> My Havana installation has 3 nodes:
>> a control node, running the keystone APIs and neutron server;
>> a network node, running the l3, dhcp, metadata and ovs agents, in VLAN mode;
>> a compute node, running nova-compute and the ovs agent.
>>
>> The repo is http://repos.fedorapeople.org/repos/openstack/openstack-havana,
>> and the OS is up to date with the latest kernel.
>>
>>
>> Currently my VM can't get dhcp ack from network node.
>> Based on the topology picture for the packet flow:
>> http://docs.openstack.org/admin-guide-cloud/content/figures/10/a/common/
>> figures/under-the-hood-scenario-1-ovs-network.png
>>
>> From the tcpdump results on both the network and compute nodes, the dhcp
>> request successfully leaves the compute node and reaches the
>> "int-br-eth1" of the network node, but not the next bridge, "br-int", so
>> dnsmasq is not able to ack. I think this is why I can't get
>> a dhcp ipaddr.
>>
>> # uname -r
>> 2.6.32-358.123.2.openstack.el6.x86_64
>>
>> # ip netns  (should we say namespace works well?)
>> qdhcp-11f3adc1-6a2e-429b-9679-b565347e2f74
>> qdhcp-4aaa7c19-7864-4b17-aebc-d6aa354d4cd5
>> qdhcp-285e259e-e3ec-4149-81db-8e94e1713aa2
>> qdhcp-4044cdf0-717b-4628-9ce0-a9ff49533d8f
>> qrouter-76c9b884-5928-42f4-a016-afab1b72066b
>>
>> Can anyone suggest where, or which section, I should check next for this issue? Or
>> let me know if other info is needed.
>> Thanks a lot.
>>
>> Chris
>>
>> _______________________________________________
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140212/80bb56d3/attachment-0001.html>
>
>------------------------------
>
>Message: 17
>Date: Wed, 12 Feb 2014 05:04:10 +0000
>From: "Li, Chen" <chen.li at intel.com>
>To: Vikash Kumar <vikash.kumar at oneconvergence.com>, Chris Baker
>	<openstack2014 at qq.com>
>Cc: openstack <openstack at lists.openstack.org>
>Subject: Re: [Openstack] dhcp request can't reach br-int of network
>	node
>Message-ID:
>	<988E98D31B01E44893AF6E48ED9DEFD4019BCF3E at SHSMSX101.ccr.corp.intel.com>
>	
>Content-Type: text/plain; charset="us-ascii"
>
>I would check the Open vSwitch OpenFlow tables:
>
>ovs-ofctl  show br-int
>......
>1(int-br-eth4): addr:5a:cb:04:ab:33:90
>     config:     0
>     state:      0
>     current:    10GB-FD COPPER
>     speed: 10000 Mbps now, 0 Mbps max
>2(tap130bc000-fb): addr:36:86:30:f5:bc:da
>     config:     0
>     state:      0
>     current:    10GB-FD COPPER
>     speed: 10000 Mbps now, 0 Mbps max
>......
>
>ovs-ofctl dump-flows br-int
>NXST_FLOW reply (xid=0x4):
>cookie=0x0, duration=432224.342s, table=0, n_packets=24419311, n_bytes=1619005477, idle_age=204, hard_age=65534, priority=3,in_port=1,dl_vlan=2999 actions=mod_vlan_vid:1,NORMAL
>cookie=0x0, duration=432235.912s, table=0, n_packets=445090, n_bytes=117218014, idle_age=8, hard_age=65534, priority=2,in_port=1 actions=drop
>cookie=0x0, duration=432236.951s, table=0, n_packets=21963102, n_bytes=1207410551474, idle_age=65534, hard_age=65534, priority=1 actions=NORMAL
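>
>For reference, a rough way to see which of those flows the DHCP requests actually hit: dump the flows, let the VM retry DHCP, dump again and compare the n_packets counters, e.g.
>
># ovs-ofctl dump-flows br-int | grep 'in_port=1'
>
>If only the priority=2,in_port=1 actions=drop entry counts up, the frames are arriving on int-br-eth4 untagged or with an unexpected VLAN and are dropped before they can reach the dnsmasq tap.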
>
>Thanks.
>-chen
>
>[quoted reply and original message trimmed; the original question is quoted in full under Message 16 above]
>
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140212/0bd1fb3b/attachment-0001.html>
>
>------------------------------
>
>Message: 18
>Date: Tue, 11 Feb 2014 21:22:51 -0800
>From: Remo Mattei <remo at italy1.com>
>To: OpenStack Users <openstack at lists.openstack.org>
>Subject: [Openstack] Trunk pass over OVS
>Message-ID: <5BC7D943-B263-4AB3-BC21-770B6657E904 at italy1.com>
>Content-Type: text/plain; charset=us-ascii
>
>Hello everyone, 
>I wonder if anyone knows whether there is a way to pass a full VLAN trunk over OVS. Is there a maximum number of VLANs I can pass? I was thinking I could bind a range in the config file to a single NIC, for a scenario where I pass 30-40 VLANs. Does anyone have any insight on this?
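>
>For reference, with the OVS plugin a block of VLANs can be bound to a single NIC through the plugin config; a minimal sketch (physnet name, bridge and VLAN range are placeholders):
>
># /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
>[ovs]
>tenant_network_type = vlan
>network_vlan_ranges = physnet1:100:139
>bridge_mappings = physnet1:br-eth1
>
>The NIC itself is added to the bridge with "ovs-vsctl add-port br-eth1 eth1". As for a maximum, the practical ceiling is the 802.1Q ID space (4094), so 30-40 VLANs is not a problem.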
>
>Thanks 
>
>
>------------------------------
>
>Message: 19
>Date: Wed, 12 Feb 2014 18:26:44 +1300
>From: Robert Collins <robertc at robertcollins.net>
>To: Chris Baker <openstack2014 at qq.com>
>Cc: openstack <openstack at lists.openstack.org>
>Subject: Re: [Openstack] dhcp request can't reach br-int of network
>	node
>Message-ID:
>	<CAJ3HoZ02cEx3w8awEC6=rRbaD_mmYoXgbCn0vsKk0nhPN=-dww at mail.gmail.com>
>Content-Type: text/plain; charset=ISO-8859-1
>
>Check the outbound ofctl rules on your hypervisor nodes. If they
>aren't tagging traffic properly, it won't be processed by the incoming
>gre rules and you'll see the symptoms you have.
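>
>For example (a rough sketch; br-eth1 is assumed here to be the physical bridge on the compute node):
>
># ovs-ofctl dump-flows br-eth1
># ovs-vsctl show
>
>The outbound flows on the physical bridge should carry mod_vlan_vid actions rewriting the local VLAN to the provider VLAN; if traffic from phy-br-eth1 hits a drop rule instead, that matches the symptom described above.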
>
>On 8 February 2014 05:41, Chris Baker <openstack2014 at qq.com> wrote:
>> [original message trimmed; quoted in full under Message 16 above]
>
>
>
>-- 
>Robert Collins <rbtcollins at hp.com>
>Distinguished Technologist
>HP Converged Cloud
>
>
>
>------------------------------
>
>Message: 20
>Date: Wed, 12 Feb 2014 11:20:02 +0530
>From: Rajshree Thorat <rajshree.thorat at gslab.com>
>To: openstack Users <openstack at lists.openstack.org>
>Subject: [Openstack] Problem with dnsmasq - DHCPDISCOVER no address
>	available
>Message-ID: <52FB0B8A.5030906 at gslab.com>
>Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>Hi All,
>
>I have installed Grizzly with quantum and the OVS plugin. I'm using dnsmasq
>and never had any problems with it until now. Dnsmasq is working fine in
>most cases, but lately there have been a few reports that dnsmasq is
>not working entirely reliably. Sometimes dnsmasq claims that no address
>is available for a specific MAC address, while other clients are still
>receiving their addresses fine.
>
>Details:
>
>The dnsmasq version in use is 2.59. The issue was observed on Ubuntu 12.04 LTS.
>
>Logs:
>
>Feb 12 11:10:31 Controller1 ovs-vsctl: 00001|vsctl|INFO|Called as 
>/usr/bin/ovs-vsctl -- --may-exist add-port br-ex qg-f34f83e7-87 -- set 
>Interface qg-f34f83e7-87 type=internal -- set Interface qg-f34f83e7-87 
>external-ids:iface-id=f34f83e7-873f-47d4-8b4c-1d042c25e368 -- set 
>Interface qg-f34f83e7-87 external-ids:iface-status=active -- set 
>Interface qg-f34f83e7-87 external-ids:attached-mac=fa:16:3e:7a:bb:65
>Feb 12 11:10:36 Controller1 dnsmasq-dhcp[14159]: 
>DHCPDISCOVER(tapd788c180-85) fa:16:3e:3a:a3:d5 no address available
>Feb 12 11:11:11  dnsmasq-dhcp[14159]: last message repeated 2 times
>Feb 12 11:11:11 Controller1 ovs-vsctl: 00001|vsctl|INFO|Called as 
>/usr/bin/ovs-vsctl -- --may-exist add-port br-ex qg-f34f83e7-87 -- set 
>Interface qg-f34f83e7-87 type=internal -- set Interface qg-f34f83e7-87 
>external-ids:iface-id=f34f83e7-873f-47d4-8b4c-1d042c25e368 -- set 
>Interface qg-f34f83e7-87 external-ids:iface-status=active -- set 
>Interface qg-f34f83e7-87 external-ids:attached-mac=fa:16:3e:7a:bb:65
>Feb 12 11:11:12 Controller1 dnsmasq-dhcp[14159]: 
>DHCPDISCOVER(tapd788c180-85) fa:16:3e:3a:a3:d5 no address available
>Feb 12 11:11:51  dnsmasq-dhcp[14159]: last message repeated 2 times
>Feb 12 11:11:51 Controller1 ovs-vsctl: 00001|vsctl|INFO|Called as 
>/usr/bin/ovs-vsctl -- --may-exist add-port br-ex qg-f34f83e7-87 -- set 
>Interface qg-f34f83e7-87 type=internal -- set Interface qg-f34f83e7-87 
>external-ids:iface-id=f34f83e7-873f-47d4-8b4c-1d042c25e368 -- set 
>Interface qg-f34f83e7-87 external-ids:iface-status=active -- set 
>Interface qg-f34f83e7-87 external-ids:attached-mac=fa:16:3e:7a:bb:65
>
>What could be the problem? Have you seen similar behavior? If yes, how 
>did you fix this?
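>
>For reference, two quick checks (the IDs and the state path are placeholders; the path may differ on your install):
>
># grep -i fa:16:3e:3a:a3:d5 /var/lib/quantum/dhcp/<network-id>/host
># quantum port-list | grep -c <subnet-id>
>
>The first shows whether dnsmasq ever got a host entry for that MAC; the second, compared against the size of the subnet's allocation pool (quantum subnet-show <subnet-id>), shows whether the pool is simply exhausted.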
>
>Regards,
>Rajshree
>
>
>
>------------------------------
>
>Message: 21
>Date: Wed, 12 Feb 2014 06:11:09 +0000
>From: "Li, Chen" <chen.li at intel.com>
>To: Rajshree Thorat <rajshree.thorat at gslab.com>, openstack Users
>	<openstack at lists.openstack.org>
>Subject: Re: [Openstack] Problem with dnsmasq - DHCPDISCOVER no
>	address available
>Message-ID:
>	<988E98D31B01E44893AF6E48ED9DEFD4019BCFA3 at SHSMSX101.ccr.corp.intel.com>
>	
>Content-Type: text/plain; charset="us-ascii"
>
>Might be the bug:
>	https://bugs.launchpad.net/neutron/+bug/1192381
>
>
>Thanks.
>-chen
>
>[original message trimmed; see Message 20 above]
>
>
>
>------------------------------
>
>Message: 22
>Date: Wed, 12 Feb 2014 11:43:55 +0530
>From: Vikash Kumar <vikash.kumar at oneconvergence.com>
>To: Robert Collins <robertc at robertcollins.net>
>Cc: openstack <openstack at lists.openstack.org>
>Subject: Re: [Openstack] dhcp request can't reach br-int of network
>	node
>Message-ID:
>	<CAPHM3WbRRmEB0K5gKaw7vvX6VtsE9H4Qv-4jvA1d1bpTTCYCsQ at mail.gmail.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>Chen,
>
>    I think your packets are getting dropped. You can verify this by
>checking which of the rules in br-int the packets coming from the compute
>node are hitting.
>
>
>On Wed, Feb 12, 2014 at 10:56 AM, Robert Collins
><robertc at robertcollins.net> wrote:
>
>> [quoted messages trimmed; see Messages 16 and 19 above]
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140212/0bae9df6/attachment-0001.html>
>
>------------------------------
>
>Message: 23
>Date: Wed, 12 Feb 2014 12:04:11 +0530
>From: Akshat Kansal <akshatknsl at gmail.com>
>To: openstack at lists.openstack.org
>Subject: [Openstack] Connection breaking between the controller and
>	the	compute node.
>Message-ID:
>	<CAEJ_rX7rNup386yDPnvwoJX52p8fTXvpRBbtJrivdDjoKe-gtQ at mail.gmail.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>Hi All,
>
>I have an OpenStack setup (Grizzly) with 1 controller and multiple compute
>nodes. I am facing an issue where connectivity between the controller and one of
>the compute nodes breaks. The nova service-list command then shows that compute
>node as down.
>
>I see the below error in the compute logs at the same time:
>
>2014-02-11 03:06:16.216 28879 ERROR nova.servicegroup.drivers.db [-] model server went away
>2014-02-11 03:06:16.216 28879 TRACE nova.servicegroup.drivers.db Traceback (most recent call last):
>2014-02-11 03:06:16.216 28879 TRACE nova.servicegroup.drivers.db   File "/usr/lib/python2.6/site-packages/nova/servicegroup/drivers/db.py", line 88, in _report_state
>2014-02-11 03:06:16.216 28879 TRACE nova.servicegroup.drivers.db     report_count = service.service_ref['report_count'] + 1
>2014-02-11 03:06:16.216 28879 TRACE nova.servicegroup.drivers.db TypeError: 'NoneType' object is unsubscriptable
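>
>Reading the trace: the driver is left holding a service_ref of None after the
>database connection ("model server") went away, so the increment blows up. A
>tiny illustration of that failure mode (my own sketch, not nova's actual code):
>
>service_ref = None            # what the driver ends up with after the DB blip
>try:
>    report_count = service_ref['report_count'] + 1   # TypeError: 'NoneType' ...
>except TypeError:
>    # a defensive driver would re-read the service row from the database
>    # here before trying to report state again
>    pass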
>
>Any pointers to debug this will be helpful.
>
>Thanks
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140212/9cad174d/attachment-0001.html>
>
>------------------------------
>
>Message: 24
>Date: Wed, 12 Feb 2014 03:55:04 -0500
>From: Aryeh Friedman <aryeh.friedman at gmail.com>
>To: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>Subject: [Openstack] PetiteCloud 0.2.5 Released
>Message-ID:
>	<CAGBxaXn5P4PvSfmzMLOm_RWYyFnDDBHhLHbYefZwc4VTrn895w at mail.gmail.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>PetiteCloud is a 100% Free Open Source and Open Knowledge bare metal
>capable Cloud Foundation Layer for Unix-like operating systems. It has the
>following features:
>
>    * Support for bhyve (FreeBSD only) and QEMU
>    * Any x86 OS as a guest (FreeBSD and Linux via bhyve or QEMU; all
>others via QEMU only) and all supported software (including running
>OpenStack on VMs)
>    * Install, import, start, stop and reboot instances safely (guest OS
>needs to be controlled independently)
>    * Clone, backup/export, delete stopped instances 100% safely
>    * Keep track of all your instances on one screen
>    * All transactions that change instance state are password protected at
>all critical stages
>    * Advanced options:
>        * Ability to use/make bootable bare metal disks for backing stores
>        * Multiple NICs and disks
>        * User settable (vs. auto assigned) backing store locations
>-- 
>Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140212/dbdc4bcb/attachment-0001.html>
>
>------------------------------
>
>Message: 25
>Date: Wed, 12 Feb 2014 17:33:41 +0800
>From: Dnsbed Ops <ops at dnsbed.com>
>To: openstack at lists.openstack.org
>Subject: Re: [Openstack] PetiteCloud 0.2.5 Released
>Message-ID: <52FB3FF5.5050703 at dnsbed.com>
>Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>Does it support Mac OS X?
>
>On 02/12/2014 04:55 PM, Aryeh Friedman wrote:
>>      * Support for bhyve (FreeBSD only) and QEMU
>>      * Any x86 OS as a guest (FreeBSD and Linux via bhyve or QEMU; all
>> others via QEMU only)  and all supported software (including running
>> OpenStack on VM's)
>
>
>
>------------------------------
>
>Message: 26
>Date: Wed, 12 Feb 2014 10:35:34 +0100
>From: "MICHON Anthony" <anthony.michon at eurogiciel.fr>
>To: <openstack at lists.openstack.org>
>Subject: Re: [Openstack] [Ceilometer/Telemetry] How to receive events
>	from keystone/identity component?
>Message-ID:
>	<9D8CF494BE1AFE428437E41168EFE66505ACB4AC at 31l-syracuse.eurogiciel.fr>
>Content-Type: text/plain; charset="iso-8859-1"
>
>I'll answer my own question ;)
>
>I've added some configuration in keystone.conf:
>
>notification_driver = keystone.openstack.common.notifier.rpc_notifier
>
>notification_topics = notifications
>control_exchange = identity
> 
>
>And I've added entry points for a plugin, ceilometer.identity.notifications:
>
>[ceilometer.collector]
>project_created = ceilometer.identity.notifications.project:ProjectCreated
>project_deleted = ceilometer.identity.notifications.project:ProjectDeleted
>project_updated = ceilometer.identity.notifications.project:ProjectUpdated
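>
>For reference, a rough sketch of what such a plugin class can look like (this
>is approximate -- the exact Havana base-class method names and the
>Sample.from_notification signature may differ slightly in your tree):
>
>from ceilometer import plugin
>from ceilometer import sample
>
>class ProjectCreated(plugin.NotificationBase):
>    # keystone event this plugin reacts to
>    event_types = ['identity.project.created']
>
>    @staticmethod
>    def get_exchange_topics(conf):
>        # listen on the "identity" exchange, "notifications" topic
>        return [plugin.ExchangeTopics(exchange='identity',
>                                      topics=set(['notifications']))]
>
>    def process_notification(self, message):
>        # turn the raw keystone notification into a ceilometer sample
>        yield sample.Sample.from_notification(
>            name='identity.project.created',
>            type=sample.TYPE_DELTA,
>            unit='project',
>            volume=1,
>            user_id=message['payload'].get('user_id'),
>            project_id=message['payload'].get('project_id'),
>            resource_id=message['payload'].get('project_id'),
>            message=message)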
>
>In ceilometer.conf, I subscribe to the new identity exchange channel:
>
> 
>
>http_control_exchanges=identity
>identity_control_exchange=ceilometer.identity.notifications
>
> 
>
>Voila! It works, I receive notifications for project.* :)
>
>From: MICHON Anthony [mailto:anthony.michon at eurogiciel.fr]
>Sent: Monday, 10 February 2014 12:44
>To: openstack at lists.openstack.org
>Subject: [Openstack] [Ceilometer/Telemetry] How to receive events from keystone/identity component?
>
> 
>
>Hi all
>
> 
>
>I need to call a custom action when receiving the identity.project.updated event sent by keystone.
>
> 
>
>I began to write a notification plugin for ceilometer to listen for it.
>
>But I recently noticed in the following diagram, http://docs.openstack.org/developer/ceilometer/_images/1-Collectorandagents.png, that keystone does not send events on the notification bus!
>
>Besides, there is no configuration variable for an exchange channel to listen to keystone, only nova, glance, neutron and cinder:
>
># Exchanges name to listen for notifications (multi valued)
>#http_control_exchanges=nova
>#http_control_exchanges=glance
>#http_control_exchanges=neutron
>#http_control_exchanges=cinder
>
>My environments:
>
>RDO on CentOS (qpid-based bus)
>
>DevStack on Ubuntu (rabbitmq); note that process_notification is never called there, even for nova events (instance.*), but that's another problem
>
>So my questions:
>
>- Is keystone definitely outside the notification bus as far as ceilometer is concerned?
>
>- Should I rather use a pollster (and call the identity API)?
>
>- Or should I adopt another strategy to listen for keystone events?
>
> 
>
>Thanks.
>
>Anthony
>
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140212/bb34d2fe/attachment-0001.html>
>
>------------------------------
>
>Message: 27
>Date: Wed, 12 Feb 2014 04:57:00 -0500
>From: Aryeh Friedman <aryeh.friedman at gmail.com>
>To: Dnsbed Ops <ops at dnsbed.com>
>Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>Subject: Re: [Openstack] PetiteCloud 0.2.5 Released
>Message-ID:
>	<CAGBxaX=Ys6gXpMJ7Or96gCisyOMJjQGAXHPHQ-ND3LcSAnQong at mail.gmail.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>You would likely need to do a hand compile and some messing with the
>resulting scripts, but I see no reason why it can't work as long as QEMU is
>supported. If you're interested, join our mailing list and we will help you
>set it up.
>
>
>On Wed, Feb 12, 2014 at 4:33 AM, Dnsbed Ops <ops at dnsbed.com> wrote:
>
>> does it support Mac OSX?
>>
>>
>> On 02/12/2014 04:55 PM, Aryeh Friedman wrote:
>>
>>>      * Support for bhyve (FreeBSD only) and QEMU
>>>      * Any x86 OS as a guest (FreeBSD and Linux via bhyve or QEMU; all
>>> others via QEMU only)  and all supported software (including running
>>> OpenStack on VM's)
>>>
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
>> openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
>> openstack
>>
>
>
>
>-- 
>Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140212/d01ed2f2/attachment-0001.html>
>
>------------------------------
>
>Message: 28
>Date: Wed, 12 Feb 2014 18:19:27 +0800
>From: Dnsbed Ops <ops at dnsbed.com>
>To: openstack at lists.openstack.org
>Subject: Re: [Openstack] [Swift] Question regarding global scaling
>Message-ID: <52FB4AAF.8090600 at dnsbed.com>
>Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>For multi-IDC deployment, we once considered using regions, and I think
>it's a good solution, though in the end we did not use Swift, for various
>reasons.
>
>
>On 02/12/2014 07:38 AM, Adam Lawson wrote:
>> For those who are scaling to that degree, are you building multiple
>> unique clusters and replicating between them somehow or using regions
>> and replicating within essentially one giant cluster and using affinity
>> rules like read_affinity and write_affinity*?
>
>
>
>------------------------------
>
>Message: 29
>Date: Wed, 12 Feb 2014 02:24:42 -0800 (PST)
>From: Staicu Gabriel <gabriel_staicu at yahoo.com>
>To: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>Subject: [Openstack] openstack havana cinder chooses wrong host to
>	create	new volume
>Message-ID:
>	<1392200682.6362.YahooMailNeo at web122605.mail.ne1.yahoo.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>
>
>
>Hi,
>
>I have a setup with OpenStack Havana on Ubuntu Precise with multiple cinder-scheduler and cinder-volume services.
>root at opstck10:~# cinder service-list
>+------------------+----------+------+---------+-------+----------------------------+
>|      Binary      |   Host   | Zone |  Status | State |         Updated_at         |
>+------------------+----------+------+---------+-------+----------------------------+
>| cinder-scheduler | opstck08 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
>| cinder-scheduler | opstck09 | nova | enabled |   up  | 2014-02-12T10:08:29.000000 |
>| cinder-scheduler | opstck10 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
>|  cinder-volume   | opstck01 | nova | enabled |  down | 2014-02-12T09:39:09.000000 |
>|  cinder-volume   | opstck04 | nova | enabled |  down | 2014-02-12T09:39:09.000000 |
>|  cinder-volume   | opstck05 | nova | enabled |  down | 2014-02-12T09:39:09.000000 |
>|  cinder-volume   | opstck08 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
>|  cinder-volume   | opstck09 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
>|  cinder-volume   | opstck10 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
>+------------------+----------+------+---------+-------+----------------------------+
>
>
>When I try to create a new instance from a volume snapshot, it keeps
>choosing opstck01, on which cinder-volume is down, for the creation of the volume.
>Did anyone encounter the same problem?
>Thanks
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140212/ec3c2374/attachment-0001.html>
>
>------------------------------
>
>Message: 30
>Date: Wed, 12 Feb 2014 12:23:47 +0100
>From: GALAMBOS Daniel <dancsa at dancsa.hu>
>To: openstack at lists.openstack.org
>Subject: Re: [Openstack] dhcp request can't reach br-int of network
>	node
>Message-ID: <52FB59C3.1010207 at dancsa.hu>
>Content-Type: text/plain; charset="iso-8859-1"
>
>Hi,
>
>I had the same problem.
>My problem was that the openvswitch plugin's config file was missing on
>the controller node, and because of this, networks created from horizon used the
>"local" network type instead of "gre" (the same problem can occur with VLAN setups).
>
>You can check it with neutron net-show.
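>
>If you prefer to script that check, a small sketch with python-neutronclient
>(hostnames and credentials below are placeholders):
>
>from neutronclient.v2_0 import client
>
>neutron = client.Client(username='admin',
>                        password='secret',
>                        tenant_name='admin',
>                        auth_url='http://controller:5000/v2.0')
>
># Print the provider network type of every network; on a broken setup you
># will see "local" here instead of "gre" (or "vlan").
>for net in neutron.list_networks()['networks']:
>    print("%s: %s" % (net['name'], net.get('provider:network_type')))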
>
>I hope it helps
>
>
>Dancsa
>
>
>On 2014-02-07 17:41, Chris Baker wrote:
>> Hi guys,
>>
>> My havana installation has 3 nodes:
>> control node, runs keystone APIs and neutron server;
>> network node, runs l3, dhcp, metadata and ovs agents, with VLAN mode;
>> compute node, runs nova compute and ovs agents;
>>
>> The repo is http://repos.fedorapeople.org/repos/openstack/openstack-havana,
>> and the OS is fully updated with the latest kernel.
>>
>>
>> Currently my VM can't get dhcp ack from network node.
>> Based on the topology picture for the packet flow:
>> http://docs.openstack.org/admin-guide-cloud/content/figures/10/a/common/figures/under-the-hood-scenario-1-ovs-network.png
>>
>> The tcpdump results on both the network and compute nodes show that the
>> dhcp request successfully leaves the compute node and reaches
>> "int-br-eth1" on the network node, but not the next bridge, "br-int", so
>> dnsmasq is never able to ack. I think this is why I can't get a dhcp
>> address.
>>
>> # uname -r
>> 2.6.32-358.123.2.openstack.el6.x86_64
>>
>> # ip netns  (should we say namespace works well?)
>> qdhcp-11f3adc1-6a2e-429b-9679-b565347e2f74
>> qdhcp-4aaa7c19-7864-4b17-aebc-d6aa354d4cd5
>> qdhcp-285e259e-e3ec-4149-81db-8e94e1713aa2
>> qdhcp-4044cdf0-717b-4628-9ce0-a9ff49533d8f
>> qrouter-76c9b884-5928-42f4-a016-afab1b72066b
>>
>> Can anyone help with where/which section I should check next for this issue?
>> Or let me know if other info is needed.
>> Thanks a lot.
>>
>> Chris
>>
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: <http://lists.openstack.org/pipermail/openstack/attachments/20140212/495377e2/attachment-0001.html>
>
>------------------------------
>
>_______________________________________________
>Openstack mailing list
>openstack at lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>End of Openstack Digest, Vol 8, Issue 13
>****************************************

