[Openstack-operators] Outbound and inbound external access for projects
Adam Huffman
adam.huffman at gmail.com
Thu Jul 16 15:59:28 UTC 2015
Hi Kevin,
On Wed, Jul 15, 2015 at 4:42 PM, Kevin Bringard (kevinbri)
<kevinbri at cisco.com> wrote:
> You don't need "per project vlans" for inbound and outbound access. Public
> IPs only need a single VLAN between the logical routers
> (net-hosts/l3-agent hosts) and their next hop... It's the internal
> networks which require multiple VLANs if you wish to do such a thing, and
> those VLANs are only necessary on your internal switches. Alternatively
> you can use GRE or STT or some other segregation method and avoid the VLAN
> cap altogether (on internal networks).
>
It's all VLAN-based, with an allocation for provider and project
networks already configured on the switches, so it makes sense to
continue that approach.
Let's see if I can translate my original message into English.
It's an Icehouse setup with Neutron and heavy use of VLANs. Each
project network has its own VLAN. We also have a VLAN range designated
on the switches for provider networks, more than sufficient for the
number of projects we're expecting over the lifetime of this system.
At the moment there's a single provider network, which is used for
floating IP access to instances via SSH.
We have received a small allocation of public IPs (32) and some Cisco
firewall/VPN hardware that connects to the upstream internet router.
We would like to provide outbound access to all projects, but we don't
want instances within a project to be able to see instances within
another project, which rules out having a single provider network for
all projects (unless there's a way of adding restrictions within
Neutron and/or OVS that I've missed?).
For outbound access, the default idea is to create a new provider
network for each project, on its own VLAN. Then we create PAT rules on
the external firewall to pass through outbound traffic coming from
each of these VLANs.
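To make the bookkeeping concrete, here is a minimal sketch of the per-project allocation this implies. All of the specifics are illustrative assumptions, not our real values: the VLAN range, the 10.20.0.0/16 supernet carved into per-project /24s, and the PAT rule text (the real overload rules would live on the Cisco firewall, not in OpenStack).

```python
import ipaddress

# Assumptions for illustration only: a reserved provider-VLAN range on the
# switches, and an RFC1918 supernet split into one /24 per project.
PROVIDER_VLANS = range(2000, 2100)
SUPERNET = ipaddress.ip_network("10.20.0.0/16")

def plan_outbound(projects):
    """Pair each project with its own provider VLAN and /24, plus the
    source-based PAT (overload) rule the external firewall would need."""
    subnets = SUPERNET.subnets(new_prefix=24)
    plan = {}
    for project, vlan, net in zip(projects, PROVIDER_VLANS, subnets):
        plan[project] = {
            "vlan": vlan,
            "network": str(net),
            # Hypothetical rule description; real syntax is firewall-specific.
            "pat_rule": f"overload source {net} out the uplink interface",
        }
    return plan

print(plan_outbound(["alpha", "beta"]))
```

Each project then gets exactly one provider network/VLAN pair, and the firewall only needs one PAT rule per project subnet.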
For inbound access, there are two main ideas: 1:1 NAT rules mapping the
public IPs to project RFC1918 IPs, or adding another external network
that connects directly to these public IPs, using the firewall as the
external router.
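For the first option, the 1:1 mapping for our 32-address allocation is just a static table. The addresses below are stand-ins (203.0.113.0/27 is the documentation range, and 10.30.0.0/27 is an assumed internal pool), but the shape of the table is what the firewall would hold.

```python
import ipaddress

# Illustrative addressing only: stand-ins for the real /27 public
# allocation (32 addresses) and the internal floating-IP targets.
PUBLIC = ipaddress.ip_network("203.0.113.0/27")
INTERNAL = ipaddress.ip_network("10.30.0.0/27")

def static_nat_table(public, internal):
    """Build the 1:1 (static NAT) public -> internal mapping."""
    return {str(pub): str(priv)
            for pub, priv in zip(public.hosts(), internal.hosts())}

table = static_nat_table(PUBLIC, INTERNAL)
print(len(table))  # hosts() excludes network/broadcast, so 30 usable pairs
```

Note that only 30 of the 32 addresses are usable host addresses if the /27 is routed as a subnet rather than as a pool.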
I've read some of the discussions that have taken place here about
related topics, and everyone seems to be doing it differently, or
heavily patching Neutron, which isn't particularly appealing.
Does this approach make sense? I'm quite happy to accept a derisive
response, so long as a better alternative is provided...
Cheers,
Adam
> Basically, the flow looks like so:
>
> Internet -> Floating IP (hosted on your logical router host... All a
> single "public VLAN") -> NAT translation to internal tenant subnet (and
> tagged with the "internal OVS VLAN") -> VLAN translation flow (if it needs
> to go to the wire) tags the packet with the VLAN assigned to the tenant's
> subnet (or goes over the requisite GRE tunnel) -> ...
>
> It's kind of complicated, I know, but hopefully that helps some? Or
> perhaps I just misunderstood your scenario/question, which is also
> entirely possible :-D
>
>
> On 7/15/15, 9:24 AM, "Adam Huffman" <adam.huffman at gmail.com> wrote:
>
>>Hello
>>
>>We're at the stage of working out how to integrate our Icehouse system
>>with the external network, using Neutron.
>>
>>We have a limited set of public IPs available for inbound access, and
>>we'd also like to make outbound access optional, in case some projects
>>want to be completely isolated.
>>
>>One suggestion is as follows:
>>
>>- each project is allocated a single /24 VLAN
>>
>>- within this VLAN, there are 2 subnets
>>
>>- the first subnet (/25) would be for outbound access, using floating IPs
>>
>>- the second (/25) subnet would be for inbound access, drawing from
>>the limited public pool, also with floating IPs
>>
>>Does that sound sensible/feasible? The Cisco hardware that's providing
>>the route to the external network has constraints on the number of
>>VLANs it will support, so we prefer this approach to having separate
>>per-project VLANs for outbound and inbound access.
>>
>>If there's a different way of achieving this, I'd be interested to
>>hear that too.
>>
>>
>>Cheers,
>>Adam
>>
>>_______________________________________________
>>OpenStack-operators mailing list
>>OpenStack-operators at lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>