[Openstack-operators] Small openstack

gustavo panizzo (gfa) gfa at zumbi.com.ar
Tue Jan 13 16:02:25 UTC 2015


Forgot to send this email before.

On 01/09/2015 12:50 AM, Kris G. Lindgren wrote:

>>> neutron net-create --shared should do the trick
>>
>> I guess the problem is that I was creating *external* _and_ *shared*
>> network, but if I don't want to use floating IPs from that network I
>> probably don't need the network to be external, right?

External and shared is fine if you want to allow many tenants to plug
into it.
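For example (a sketch, not a drop-in: the network name, physnet label,
and CIDR are placeholders, and the provider options assume a flat ML2
provider network):

```shell
# Create a network every tenant can plug into; the gateway here is the
# physical router at 172.23.0.1, not a neutron router.
neutron net-create shared-net --shared --router:external=True \
    --provider:network_type flat --provider:physical_network physnet1
neutron subnet-create shared-net 172.23.0.0/16 --name shared-subnet \
    --gateway 172.23.0.1
```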


>
> Correct.  We have our "external networks" (private ip) only configured as
> shared so any vm from any tenant can create a port on it.  External means
> it's meant to be plugged into a router/l3 agent.

External means that OpenStack does not manage the router; something
external (a hardware router, a cheap server, etc.) does it.

>
>
>>
>>>> 3) small external networks dedicated to a tenant
>>>
>>>
>>> neutron net-create --tenant-id XXXXX-XXXXXX
>>>
>>>
>>> i've made that mix (also added tenant networks) in my lab running
>>> icehouse (2014.1.2); it worked fine. i've upgraded it to juno but i
>>> haven't tested that yet
>>
>> Thank you, I will test it further.
>>
>>> do you run more than one l3 agent? are your floating ips configured on
>>> br-ex?
>>> i think if you fix the policy.json on nova you should get it working

Sorry, I wasn't clear enough. I have floating IPs working on a different
network on the same cloud; as Kris says below, you need to use the
internal neutron router to have floating IPs.

I have 3 kinds of networks working at the same time:

- provider (also called external) networks: the router is a physical
box; tenants can plug into these networks if you modify nova's
policy.json

- tenant networks: the router is a neutron router; tenants can plug
into these networks.

- floating IP networks: the router is a physical box; if you modify
policy.json, tenants can create a port on it or boot a VM on it
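Roughly, the three kinds translate to the following (a sketch with
made-up names and CIDRs, the placeholder tenant id from above, and a
placeholder subnet id):

```shell
# 1) provider/external network: shared, the gateway is a physical box
neutron net-create provider-net --shared \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 100
neutron subnet-create provider-net 172.23.0.0/16 --gateway 172.23.0.1

# 2) tenant network: routed by a neutron router
neutron net-create tenant-net --tenant-id XXXXX-XXXXXX
neutron subnet-create tenant-net 10.0.0.0/24 --tenant-id XXXXX-XXXXXX
neutron router-create tenant-router
neutron router-interface-add tenant-router <tenant-subnet-id>

# 3) floating ip network: flagged external so the l3 agent can NAT on it
neutron net-create float-net --router:external=True
neutron subnet-create float-net 8.8.8.0/24 --disable-dhcp
neutron router-gateway-set tenant-router float-net
```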


Actually, it is a mess: you cannot safely mix provider networks and
floating IPs on the same cloud (maybe if you play with policy.json long
enough you can). I don't run floating IPs in prod, so I never cared
much about it.
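The policy.json change mentioned above is, on a Juno-era nova, roughly
this (a sketch; `network:attach_external_network` is the rule nova
checks when booting on a router:external network, admin-only by
default):

```shell
# In /etc/nova/policy.json, relax the admin-only default so any tenant
# can attach a port to an external network:
#     "network:attach_external_network": ""
# then restart the API so the policy is re-read:
service nova-api restart
```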




>>
>> I currently have only 1 l3-agent running, but it's suboptimal. I would
>> like to have L3-HA for the floating IPs and DVR for the shared and
>> tenant networks, but as far as I know it's not currently supported. I
>> plan to deploy the production cloud after Kilo release, and crossing
>> my fingers...


Sorry, I misremembered here: I wasn't referring to multiple l3 agents
for HA, but to one agent per function.
Since Havana .3 you can run more than one external network on the same
l3 agent; before that you needed one agent for each external network.

>>
>> Floating IPs are currently configured on br-ex.
>
> You have floating ips working under this configuration?
>
> If so then that means you are using the neutron router as the gateway for
> all of your vm's and not the gateway provided by your network device.  As
> soon as you created a router and attached the shared network to it, the
> router got configured with the gateway address that you configured in your
> network.  You may want to make sure that you don't have traffic flapping
> between your real network gateway and your neutron gateway on your l3
> router.
>
> The reason why floating ip's won't work under this configuration (using a
> real network device for the gateway) is the fact that the floating ip is
> applied at the router as a nat so the traffic flows like: Client (4.2.2.1)
> -> floating ip (8.8.8.8) -> external port on router -> The router changes
> the destination IP from the floating ip to the private IP of the vm
> (172.23.2.40) and sends the traffic out to the vm via the router's
> connection on the shared network.  The VM responds to the client traffic
> which is not in its subnet directly to the default gateway (172.23.0.1).
> The gateway (if it is a neutron gateway) will then undo the nat and send
> the traffic back to the client.
>
> Where this breaks down with a physical network gateway is when the traffic
> from the VM is sent to the real network gateway - the gateway doesn't know
> anything about the nat and does not re-map the vm ip to the floating ip.
> IE the VM's ip never gets changed back to the floating ip on its way to
> the client.  The response would be seen as coming directly from
> 172.23.2.40, instead of 8.8.8.8, so you will never establish a tcp
> connection.
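Concretely, the NAT described above is roughly what the l3 agent
programs inside the router's network namespace (simplified: the real
rules live in the neutron-l3-agent-* chains, and <uuid> is a
placeholder for the router's id):

```shell
# Inbound: rewrite the floating ip to the vm's fixed ip
ip netns exec qrouter-<uuid> iptables -t nat -A PREROUTING \
    -d 8.8.8.8/32 -j DNAT --to-destination 172.23.2.40
# Outbound: rewrite the fixed ip back to the floating ip
ip netns exec qrouter-<uuid> iptables -t nat -A POSTROUTING \
    -s 172.23.2.40/32 -j SNAT --to-source 8.8.8.8
```

A physical gateway has no equivalent of the second rule, so replies
leave with source 172.23.2.40 instead of 8.8.8.8 and the connection
never establishes.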
>
> If you have this working somehow - please let me know.  When we attempted
> to run this configuration we ran into the above issues with floating ips
> when using a real gateway on a network device.
>
> Note: this is also why you need to create your shared network with a
> specific router to the metadata service.  Most likely on your real network
> gateway you haven't added a route for 169.254.169.254 to a specific ip.
>

You can tell your VMs how to reach 169.254.169.254 by pushing a route
to them. The easiest way to do it is to edit
/etc/neutron/dhcp_agent.ini and set

enable_isolated_metadata = True
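With that set, the dhcp agent's dnsmasq pushes a host route for
169.254.169.254 to the VMs via DHCP (classless static routes, option
121) and a metadata proxy is spawned on the isolated network. A sketch
of applying it:

```shell
# /etc/neutron/dhcp_agent.ini
#     enable_isolated_metadata = True
# restart the agent so the dnsmasq instances pick it up:
service neutron-dhcp-agent restart
```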



-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333


