[openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

Vitaly Kramskikh vkramskikh at mirantis.com
Mon Feb 9 14:00:52 UTC 2015


Hi, my opinion on this:

Yes, it is technically possible to implement this feature using only tasks.
It would require adding a new field to tasks to distinguish whether a task
was run for saved or unsaved changes. But I'm against this approach because:

1) It will require handling more than one task of a single type, which
automatically increases the complexity of the code. We would need two tasks
in the following case: one task for unsaved data and one for saved data. We
would have to show the result of the first task on the network tab and use
the status of the second task to determine whether the network check was
performed.

2) We already have two similar tasks: deploying a cluster and setting up a
release. Both the cluster and the release models have a "status" field which
represents the status of these entities, so we don't perform complex checks
with tasks. I think the same approach should be used for the network
verification status (see the sketch below).
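
A minimal sketch of what such a cluster-level field could look like, assuming
SQLAlchemy-style models as used in nailgun (the column name
network_check_status is the one discussed in this thread; the enum values and
defaults are my assumptions, not the actual nailgun model):

    # hypothetical sketch, not the actual nailgun model
    from sqlalchemy import Column, Enum, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    # assumed set of states for the check of the saved configuration
    NETWORK_CHECK_STATUSES = ('not_performed', 'ready', 'error')

    class Cluster(Base):
        __tablename__ = 'clusters'

        id = Column(Integer, primary_key=True)
        name = Column(String(50))
        # existing kind of field: status of the cluster itself, as
        # already used for deployment
        status = Column(String(30), default='new')
        # proposed field: result of the last network check that was run
        # against the *saved* configuration
        network_check_status = Column(
            Enum(*NETWORK_CHECK_STATUSES, name='network_check_status'),
            default='not_performed')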

As for task deletion, there are two reasons for it:

1) If we don't delete old tasks, it increases the traffic between the backend
and the UI. There is still no way to fetch the latest task or the two latest
tasks using our API.

2) We delete tasks manually when their results are not needed anymore or
become invalid. For example, when a user adds another node, we remove the
network check task because its result is no longer valid (a sketch of this
invalidation follows below). Another example: when a user clicks the X button
on the message with the deployment result, we remove that task so it won't be
shown anymore. If you want us not to delete these tasks, please provide
another way to cover these cases.
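
To make the node-addition case concrete, here is a rough sketch of that
invalidation, extended with the proposed column (the function name is an
assumption for illustration, not an actual nailgun handler; Task and the
cluster object stand for the corresponding nailgun models):

    # hypothetical sketch; 'session' is the DB session, 'Task' the
    # assumed task model, 'cluster' a cluster instance
    def invalidate_network_check(session, Task, cluster):
        # drop the stale verify_networks task, as we do today, ...
        session.query(Task).filter_by(
            cluster_id=cluster.id, name='verify_networks').delete()
        # ... and, with the proposed column, also reset the persisted
        # flag so the UI shows that verification has not been performed
        # for the changed environment
        cluster.network_check_status = 'not_performed'
        session.commit()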

2015-02-09 15:51 GMT+03:00 Przemyslaw Kaminski <pkaminski at mirantis.com>:

>
>
> On 02/09/2015 01:18 PM, Dmitriy Shulyak wrote:
> >
> > On Mon, Feb 9, 2015 at 1:35 PM, Przemyslaw Kaminski
> > <pkaminski at mirantis.com> wrote:
> >
> >> Well, I think there should be a finished_at field anyway, so why
> >> not add it for this purpose?
> >
> > So you're suggesting adding another column and modifying all tasks
> > for this one feature?
> >
> >
> > Things such as timestamps should be on all tasks anyway.
> >
> >>
> >> I don't actually recall the reason for deleting them, but if it
> >> happens, IMO it is OK to show right now that network verification
> >> wasn't performed.
> >
> > Is this how one builds predictable and easy-to-understand software?
> > Sometimes we'll say that verification is OK, other times that it
> > wasn't performed?
> >
> > In my opinion, the question that needs to be answered is: what is
> > the reason or event for removing the verify_networks task history?
> >
> >>
> >> 3. Just having the network verification status as 'ready' is NOT
> >> enough. From the UI you can fire off network verification for
> >> unsaved changes. A JSON request is made, the network configuration
> >> is validated by tasks, and an RPC call is made returning that all is
> >> OK, for example. But if you haven't saved your changes, then in fact
> >> you haven't verified your current configuration, just some other one.
> >> So in this case the task status 'ready' doesn't mean that the current
> >> cluster config is valid. What do you propose in this case? Fail
> >> the task on purpose?
> >
> > Issue #3 I described is still valid -- what is your solution in
> > this case?
> >
> > OK, sorry. What do you think about removing the old tasks in such a
> > case? It seems to me that this is exactly the event after which the
> > old verify_networks result is invalid anyway, and there is no point
> > in storing the history.
>
> Well, not exactly. Suppose you configure networks, save the settings, run
> a network check, and all goes fine. Now change one thing without saving
> and check the settings again: the check doesn't pass, but it should not
> affect the flag, because that is a configuration different from the saved
> one, and your original cluster is still OK. With the task-based approach
> the user would have to run the original check yet again. The advantage of
> the network_check_status column is that you don't need to store any
> history -- the task can be deleted or whatever, and only the last checked
> saved configuration matters. The user can perform other checks 'for free'
> and is not required to rerun the checks for the working configuration.
>
> With data derived from tasks you actually have to store a lot of history,
> because you need to keep the last working saved configuration -- otherwise
> the user will have to rerun the check for the original configuration. So
> from a usability point of view this is a worse solution.
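>
> A rough sketch of the rule I have in mind (purely illustrative names, not
> the actual nailgun code):
>
>     # hypothetical: persist the result only when the check was run
>     # against the saved configuration
>     def on_verification_finished(cluster, task, checked_saved_config):
>         if checked_saved_config:
>             # the result applies to the saved configuration, so it is
>             # worth recording on the cluster ('ready' or 'error')
>             cluster.network_check_status = task.status
>         # otherwise: a check of unsaved changes is "for free" and the
>         # flag for the saved configuration stays untouched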
>
> >
> >
> > As far as I understand, there's one supertask, 'verify_networks'
> > (called in nailgun/task/manager.py, line 751). It spawns other tasks
> > that do the verification. When all is OK, verify_networks calls the
> > RPC 'verify_networks_resp' method and returns a 'ready' status, and
> > at that point I can inject code to also set the DB column on the
> > cluster saying that network verification was OK for the saved
> > configuration. Adding other tasks should in no way affect this
> > behavior since they're just subtasks of this task -- or am I
> > wrong?
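> >
> > Roughly what I mean by injecting code there -- a sketch under my
> > assumptions about the receiver, not the actual implementation:
> >
> >     # hypothetical extra step in the handler that processes the
> >     # verify_networks_resp message (helper/field names are assumed)
> >     def on_verify_networks_resp(db, task, status, progress):
> >         task.status = status          # 'ready' or 'error'
> >         task.progress = progress
> >         if status == 'ready':
> >             # the finished check ran against the saved configuration,
> >             # so record that on the cluster as well
> >             task.cluster.network_check_status = 'ready'
> >         db.commit()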
> >
> >
> > It is not that smooth, but in general yes - it can be done when the
> > state of verify_networks changes. But let's say we have a
> > some_settings_verify task - would it be valid to add one more field
> > to the cluster model, like some_settings_status?
>
> Well, why not? Cluster deployment is a task, and its status is saved in
> a cluster column and not fetched from tasks. As you can see, the logic
> of network verification is not simply based on reading a ready/error
> status but is more subtle. What other settings do you have in mind? I
> guess when we have more of them, one could create a separate table to
> keep them, but for now I don't see a point in doing this.
>
> P.
>



-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.

