[openstack-dev] [api][neutron][nova][Openstack-operators][interop] Time for a bikeshed - help me name types of networking
Monty Taylor
mordred at inaugust.com
Sun May 14 17:02:30 UTC 2017
Hey all!
LONG EMAIL WARNING
I'm working on a proposal to formalize a cloud profile document. (We
keep and support these in os-client-config, but it's grown up ad hoc and
is hard for other languages to consume - so we're going to rev it and try
to encode information in it more sanely.) I need help coming up with
names for some things that we can, if not all agree on, at least not
have pockets of violent dissent against.
tl;dr: What do we name some enum values?
First, some long-winded background
== Background ==
The profile document is where we keep information about a cloud that an
API consumer needs to know to effectively use the cloud - and is stored
in a machine readable manner so that libraries and tools (including but
hopefully not limited to shade) can make appropriate choices.
Information in profiles is the information that's generally true for all
normal users. OpenStack is flexible, and some API consumers have
different access. That's fine - the cloud profiles are not for them.
Cloud profiles define the qualities about a cloud that end users can
safely expect to be true. Advanced use is never restricted by annotating
the general case.
First off, we need to define two terms:
"external" - an address that can be used for north-south communication
off the cloud
"internal" - an address that can be used for east-west communication
with and only with other things on the same cloud
Again, there are more complex combinations possible. For now this is
focused on the 80% case. I'm deliberately ignoring questions like vpn or
tricircle-style intra-cloud networks for now. If we can agree on an
outcome here - we can always come back and add words to describe more
things.
** Bikeshed #1 **
Are "internal" and "external" ok with folks as terms for those two ideas?
We need a term for each - if we prefer different terms, replacing their
use in the following is simple.
== Booting Servers ==
When booting a server, a typical user wants one of the following:
- Give me a server with an external address
- Give me a server with an internal address
- Give me a server with both
- Give me a server with very specific networking connections
The fourth doesn't need any help - it's the current state of the world
today and is well served. It's the "I have a network I'm aware of
and/or a pre-existing floating IP, etc., and I want to use them" case.
This is not about those people - they're fine.
Related to the first three cases, depending on how the cloud is
deployed, any of the following can be non-exclusively true:
- External addresses are provided via Fixed IPs
- External addresses are provided via Floating IPs
- Internal addresses are provided via Fixed IPs
- Internal addresses can be provided via Floating IPs
- Users can create and define their own internal networks
Additionally, external addresses can be IPv4 or IPv6.
== Proposal - complete with Unpainted Sheds ==
I want to add information to the existing cloud profile telling the API
user which of the models above are available.
The cloud profile will gain a field called "network-models" which will
contain one or more names from an enum of pre-defined models. Multiple
values can be listed, because some clouds provide more than one option.
** Bikeshed #2 **
Anybody have a problem with the key name "network-models"?
(Incidentally, the idea from this is borrowed from GCE's
"compute#accessConfig" [0] - although they only have one model in their
enum: "ONE_TO_ONE_NAT")
In a perfect future world where we have per-service capabilities
discovery I'd love for such information to be exposed directly by
neutron. Therefore, I'd LOVE it if we can at least agree that the
concepts are concepts and on what to name them, so that users who get
the info from a cloud profile today can get it from a discovery call in
the future and have some expectation that the terms carry the same
meaning.
Reminder - this is about the 80% case. Complex cases are already handled.
** Bikeshed #3 **
What do we call the general concepts represented by fixed and floating
ips? Do we use the words "fixed" and "floating"? Do we instead try
something else, such as "direct" and "nat"?
I have two proposals for the values in our enum:
#1 - using fixed / floating
ipv4-external-fixed
ipv4-external-floating
ipv4-internal-fixed
ipv4-internal-floating
ipv6-fixed
#2 - using direct / nat
ipv4-external-direct
ipv4-external-nat
ipv4-internal-direct
ipv4-internal-nat
ipv6-direct
Does anyone have strong feelings one way or the other?
My personal preference is direct/nat. "floating" has a tendency to imply
different things to different people (just watch, we're going to have at
least one rabbit hole that will be an argument about the meaning of
floating ips) ... while anyone with a background in IT knows what "nat"
is. It's also a characteristic from a server/workload perspective that
is related to a choice the user might want to make:
Does the workload need the server to know its own IP?
Does the workload prefer to be behind NAT?
Does the workload not care and just want connectivity?
On the other hand, "direct" isn't exactly a commonly used word in this
context. I asked a ton of people at the Summit last week and nobody
could come up with a better term for "IP that is configured inside of
the server's network stack". "non-natted", "attached", "routed" and
"normal" were all suggested. I'm not sure any of those are super-great -
so I'm proposing "direct" - but please if you have a better suggestion
please make it.
== Maybe Examples will make it Clearer ==
Some examples of how these would be applied in cloud profiles, based on
some public and private clouds. I'm using the direct/nat terms for the
examples - they could just as easily be search/replaced:
vexxhost:
  network-models:
    - ipv4-external-direct
    - ipv4-external-nat
    - ipv4-internal-direct
    - ipv6-direct

citycloud:
  network-models:
    - ipv4-external-nat
    - ipv4-internal-direct

rackspace:
  network-models:
    - ipv4-external-direct
    - ipv4-internal-direct
    - ipv6-direct

infra-cloud:
  network-models:
    - ipv4-external-direct
    - ipv6-direct

internap:
  network-models:
    - ipv4-external-direct
    - ipv4-internal-direct
Having those available should allow a user to express to a library
or framework such as shade or terraform:
"get me a server with an external ipv4 using direct if it's available
and nat if not"
create_server('my-server', external_network=True)
or
"get me a server with an external ipv4 using nat if it's available but
direct is ok"
create_server(
    'my-server', external_network=True,
    external_models=['ipv4-external-nat', 'ipv4-external-direct'])
or
"get me a server with an external ipv4 using direct and fail if it's not
available"
create_server(
    'my-server', external_network=True,
    external_models=['ipv4-external-direct'])
or
"get me a server with only an internal ipv4 and please fail if that
isn't possible"
create_server(
    'my-server', external_network=False, internal_network=True)
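Under the hood, a library implementing those calls could reduce the
request to a simple preference match against the cloud's profile. Here's
a rough sketch - the function and argument names are made up for
illustration, not a real shade API:

  # Pick the first external IPv4 model the cloud supports, in the
  # caller's preferred order (direct first, NAT as a fallback).
  def pick_external_model(cloud_models, preferred=None):
      preferred = preferred or ['ipv4-external-direct', 'ipv4-external-nat']
      for model in preferred:
          if model in cloud_models:
              return model
      return None

  # Using the vexxhost profile above, direct wins:
  assert pick_external_model(
      ['ipv4-external-direct', 'ipv4-external-nat',
       'ipv4-internal-direct', 'ipv6-direct']) == 'ipv4-external-direct'

  # citycloud only offers NAT for external IPv4:
  assert pick_external_model(
      ['ipv4-external-nat', 'ipv4-internal-direct']) == 'ipv4-external-nat'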
== Follow up future - Capabilities Discovery ==
This is WAY out of scope for right now, but here are some initial
thoughts about how the same values could be exposed from the API. It's
not necessary that we like this part - or, if we do, that this is the
API we'd like. The important point is that IF we decide to expose this
information via the API in the future, the same terms should be used.
Here's a straw man of how the info _could_ be exposed in the future:
1) In /capabilities (or whatever we call it)
Whenever we add capabilities discovery, the models in the cloud-profile
could be one of the fields exposed:
GET /capabilities.json
{
  "capabilities": {
    "network-models": [
      "ipv4-external-nat",
      "ipv4-internal-direct",
      "ipv6-direct"
    ]
  }
}
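If a call like that existed, consuming it from Python could be as simple
as the following - purely illustrative, since neither the endpoint nor
the helper exists today:

  # Sketch only: the /capabilities endpoint above is a straw man.
  import requests

  def get_network_models(endpoint, token):
      resp = requests.get(endpoint + '/capabilities',
                          headers={'X-Auth-Token': token})
      resp.raise_for_status()
      return set(resp.json()['capabilities']['network-models'])

  # if 'ipv4-external-direct' in get_network_models(neutron_url, token):
  #     ... boot directly onto the external network ...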
2) As information on networks themselves:
GET /networks.json
{
  "networks": [
    {
      "status": "ACTIVE",
      "name": "GATEWAY_NET_V6",
      "id": "54753d2c-0a58-4928-9b32-084c59dd20a6",
      "network-models": [
        "ipv4-internal-direct",
        "ipv6-direct"
      ]
    },
    {
      "status": "ACTIVE",
      "name": "GATEWAY_NET",
      "id": "7004a83a-13d3-4dcd-8cf5-52af1ace4cae",
      "network-models": [
        "ipv4-external-nat"
      ]
    },
    {
      "status": "ACTIVE",
      "name": "my-awesome-network",
      "id": "0be66687-9358-46cd-9093-9ce62cb4ece7",
      "network-models": [
        "ipv4-internal-direct"
      ]
    }
  ]
}
OR
GET /networks
{
  "networks": [
    {
      "status": "ACTIVE",
      "name": "public",
      "id": "6d6357ac-0f70-4afa-8bd7-c274cc4ea235",
      "network-models": [
        "ipv4-external-direct",
        "ipv4-external-nat",
        "ipv6-direct"
      ]
    }
  ]
}
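Again as a sketch only against the straw-man listing above, a client
could then pick a boot target by the connectivity model a network
advertises (the helper name is made up):

  # Return the first network advertising the requested model, or None.
  def find_network_with_model(networks, model):
      for net in networks:
          if model in net.get('network-models', []):
              return net
      return None

  # With the 'public' network example above, asking for NAT-based
  # external IPv4 selects that network:
  public = {'name': 'public',
            'network-models': ['ipv4-external-direct',
                               'ipv4-external-nat',
                               'ipv6-direct']}
  assert find_network_with_model([public], 'ipv4-external-nat') is public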
I'm not suggesting putting info on subnets, since one requests
connectivity from a network, not a subnet.
Networks with more than one "direct" model ("ipv4-external-direct",
"ipv4-internal-direct", "ipv6-direct") would communicate "you get all
of the listed models". A network with a direct model AND a nat model
communicates that the network can be used as a target for direct booting
_and_ as a source of NAT.
It's worth noting that operators _could_ annotate their networks with
this information today using tags ... but since this is in service of
interoperability, falling back on expecting operators to add specific
strings to an otherwise free-form tags field would be messier to add to
the Interop Guidelines in the future than a strict set of pre-defined
enum values in their own non-overloaded field.
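A strict enum also means a tool (or an Interop Guideline check) can
validate profile values mechanically. A minimal sketch, assuming the
direct/nat spelling from proposal #2:

  # The pre-defined enum a validator would check against.
  VALID_NETWORK_MODELS = frozenset([
      'ipv4-external-direct',
      'ipv4-external-nat',
      'ipv4-internal-direct',
      'ipv4-internal-nat',
      'ipv6-direct',
  ])

  def validate_network_models(models):
      unknown = set(models) - VALID_NETWORK_MODELS
      if unknown:
          raise ValueError('Unknown network-models: %s' % sorted(unknown))

  validate_network_models(['ipv4-external-nat', 'ipv6-direct'])  # fine
  # validate_network_models(['floating'])  # would raise ValueError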
== Conclusion ==
Ok. I know that's yet-another billion-line pile of text from me. Sorry
about that. In case you've forgotten the bikeshed topics at this point:
* Internal and External OK?
* network-models OK?
* fixed/floating vs. direct/nat vs. something else?
Comments, feedback, concerns, raves and rants are all welcome.
[0]
https://cloud.google.com/compute/docs/reference/latest/instances/addAccessConfig