[openstack-dev] Is the pendulum swinging on PaaS layers?
Zane Bitter
zbitter at redhat.com
Tue May 23 15:23:38 UTC 2017
On 22/05/17 22:58, Jay Pipes wrote:
> On 05/22/2017 12:01 PM, Zane Bitter wrote:
>> On 19/05/17 17:59, Matt Riedemann wrote:
>>> I'm not really sure what you're referring to here with 'update' and [1].
>>> Can you expand on that? I know it's a bit of a tangent.
>>
>> If the user does a stack update that changes the network from 'auto'
>> to 'none', or vice-versa.
>
> Detour here, apologies...
>
> Why would it matter whether a user changes a stack definition for some
> resource from auto-created network to none? Why would you want
> *anything* to change about instances that had already been created by
> Heat with the previous version of the stack definition?
The short answer is that's just how Heat works. A large part of Heat's
value is the ability to make changes to your application over time by
describing it declaratively. (In the past I've compared this to the
advantage that configuration management tools provided over shell scripts
- e.g. in
https://www.openstack.org/videos/atlanta-2013/introduction-to-openstack-orchestration).
> In other words, why shouldn't the change to the stack simply affect
> *new* resources that the stack might create?
Our job is to make the world look like the template the user provides.
If the user changes something, Heat takes them seriously and does not
imagine that it knows better than the user what the user wants. If the
user doesn't want to change anything then they're welcome to not change
the template.
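To make that concrete, here's an illustrative HOT sketch (the property
name for get-me-a-network is approximate - the exact syntax was still
under discussion around the time of this thread):

```yaml
# Illustrative sketch only, not definitive HOT syntax.
heat_template_version: 2016-10-14

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: fedora-25
      flavor: m1.small
      networks:
        - allocate_network: auto  # change this to 'none' in the
                                  # template, run a stack update, and
                                  # Heat reconciles the real server
```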
(We *could* do better on protection against accidental changes...
there's an update-preview command and ways of marking resources as
immutable such that updates will fail if they try to change them, but I
don't know that the workflow/UX is great. There are some technical
limitations on how much we can even determine in update-preview.)
> After all, get-me-a-network
> is intended for instance *creation* and nothing else...
So it may be intended for that, but there are any number of legitimate
reasons why a user might want to change things after the server is created:
* Server was created with network: none, but something went horribly
wrong and now you need to ssh in to debug it.
* Server was created with network: auto, but it was compromised by an
attacker and now you want to get it off the network while you conduct a
post-mortem through the console.
* Server was created with network: auto, but now you need more
sophisticated networking and you don't want to delete your server and
all its data to change it.
&c.
That's why it's dangerous, as Matt said in another part of the thread,
to just do the easy part of the job (create) and forget about how a
feature will interact with all of the other things that can happen over
time. At the very least you want a way for users to move from the 'easy'
way to the 'full control' way without starting over. (Semi-professional
cameras and digital oscilloscopes are a couple of examples of where this
is routinely done very well.)
(None of this is to suggest that get-me-a-network is a particularly bad
offender here - it isn't IMO.)
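For instance, moving to the 'full control' way might look like the
sketch below - adding an explicit Neutron network and port and pointing
the server at the port (names here are hypothetical, and the exact
in-place update behaviour depends on what the underlying APIs allow):

```yaml
# Illustrative only: replace the auto-allocated network with
# explicitly managed networking, keeping the server and its data.
resources:
  app_net:
    type: OS::Neutron::Net
  app_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: {get_resource: app_net}
      cidr: 10.0.0.0/24
  app_port:
    type: OS::Neutron::Port
    properties:
      network: {get_resource: app_net}
  my_server:
    type: OS::Nova::Server
    properties:
      image: fedora-25
      flavor: m1.small
      networks:
        - port: {get_resource: app_port}
```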
> Why not treat already-provisioned resources of a stack as immutable once
> provisioned? That is, after all, one of the primary benefits of a "cloud
> native application" -- immutability of application images once deployed
> and the clean separation of configuration from data.
I could equally ask why Nova and Neutron allow stuff to be changed after
it has been provisioned. Heat is only providing an interface to public
APIs that exist. You can bet that if we told our users that they can't
use those APIs because we know better than them, we'd have a long list
of feature requests and many fewer users.
There are some things that cannot be changed through the underlying APIs
once a resource is created, and in those cases we mark the property with
'update_allowed=False' in the resource schema. However, if it _does_
change then Heat will create a _new_ resource with the property value
you want, and delete the original. So we could have done that with the
get-me-a-network thing, but it wouldn't have been the Right Thing for
our users.
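A minimal sketch of that replace-on-update behaviour (this is not
Heat's actual implementation - the schema, names, and `update`
function here are invented for illustration):

```python
# Sketch of replace-on-update: if a changed property is not marked
# update-allowed, build a replacement resource instead of mutating
# the existing one in place.
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)  # stand-in for physical resource IDs


@dataclass
class Resource:
    props: dict
    physical_id: int = field(default_factory=lambda: next(_ids))


# Hypothetical schema: which properties may change in place
# (the analogue of update_allowed=True/False in a resource schema).
UPDATE_ALLOWED = {"name": True, "network": False}


def update(existing: Resource, new_props: dict) -> Resource:
    changed = {k for k in new_props
               if new_props.get(k) != existing.props.get(k)}
    if any(not UPDATE_ALLOWED.get(k, False) for k in changed):
        # Can't update in place: create a replacement resource.
        # (A real engine deletes the old one after the new is ready.)
        return Resource(props=dict(new_props))
    existing.props.update(new_props)
    return existing
```

Changing a non-updatable property yields a resource with a new
physical ID; changing an updatable one keeps the same resource.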
> This is one of the reasons that the (application) container world has it
> easy with regards to resource management.
Yes! Everything is much easier if you tell all the users to re-architect
their applications from scratch :) Which, I mean, if you can... great!
Meanwhile here on planet Earth, it's 2017 and 95% of payment card
transactions are still processed using COBOL at some point. (Studies
show that 79% of statistics are made up, but I actually legit read this
last week.)
That's one reason I don't buy any of the 'OpenStack is dead' commentary.
If we respond appropriately to the needs of users who run a *mixture* of
legacy, cloud-aware, and cloud-native applications then OpenStack will
be relevant for a very long time indeed.
> If you need to change the
> sizing of a deployment [1], Kubernetes doesn't need to go through all
> the hoops we do in resize/migrate/live-migrate. They just blow away one
> or more of the application container replicas [2] and start up new ones.
> [3] Of course, this doesn't work out so well with stateful applications
> (aka the good ol' Nova VM), which is why there's a whole slew of
> constraints on the automatic orchestration potential of StatefulSets in
> Kubernetes [4], constraints that (surprise!) map pretty much one-to-one
> with all the Heat resource dependency management bugs that you
> highlighted in a previous ML response (network identifier is static and
> must follow a pre-described pattern, storage for all pods in the
> StatefulSet must be a PersistentVolume, updating a StatefulSet is
> currently a manual process, etc).
This is really interesting, thanks!
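(For anyone following along who hasn't used them, a minimal StatefulSet
sketch showing two of those constraints - stable network identity via a
headless Service, and per-pod PersistentVolumes via volumeClaimTemplates
- might look like this; the apiVersion shown is the later stable form:)

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless Service gives pods stable,
  replicas: 2                # predictable names: db-0, db-1, ...
  selector:
    matchLabels: {app: db}
  template:
    metadata:
      labels: {app: db}
    spec:
      containers:
        - name: db
          image: postgres:9.6
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # storage must be a PersistentVolume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests: {storage: 1Gi}
```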
cheers,
Zane.
> Best,
> -jay
>
> [1] A deployment in the Kubernetes sense of the term, ala
> https://kubernetes.io/docs/concepts/workloads/controllers/deployment
>
> [2]
> https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/replicaset/replica_set.go#L508
>
>
> [3] In fact, changing the size/scale of a deployment *does not*
> automatically trigger any action in Kubernetes. Only changes to the
> configuration of the deployment's containers (.spec.template) will
> automatically trigger some action being taken.
>
> [4]
> https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations
>
>