[openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

Przemyslaw Kaminski pkaminski at mirantis.com
Mon Feb 9 12:51:26 UTC 2015



On 02/09/2015 01:18 PM, Dmitriy Shulyak wrote:
> 
> On Mon, Feb 9, 2015 at 1:35 PM, Przemyslaw Kaminski 
> <pkaminski at mirantis.com <mailto:pkaminski at mirantis.com>> wrote:
> 
>> Well, I think there should be a finished_at field anyway; why not
>> add it for this purpose?
> 
> So you're suggesting adding another column and modifying all tasks
> for this one feature?
> 
> 
> Things such as timestamps should be on all tasks anyway.
> 
>> 
>> I don't actually recall the reason for deleting them, but if it
>> happens, IMO it is OK to show that network verification wasn't
>> performed.
> 
> Is this how one builds predictable and easy-to-understand software?
> Sometimes we'll say that verification is OK, other times that it
> wasn't performed?
> 
> In my opinion, the question that needs to be answered is: what is
> the reason or event for removing the verify_networks task history?
> 
>> 
>> 3. Just having the network verification status as 'ready' is NOT
>> enough. From the UI you can fire off network verification for
>> unsaved changes. Some JSON request is made, the network
>> configuration is validated by tasks, and an RPC call is made
>> returning that all is OK, for example. But if you haven't saved
>> your changes, then in fact you haven't verified your current
>> configuration, just some other one. So in this case a task status
>> of 'ready' doesn't mean that the current cluster config is valid.
>> What do you propose in this case? Fail the task on purpose?
> 
> Issue #3 I described is still valid -- what is your solution in
> this case?
> 
> OK, sorry. What do you think about removing the old tasks in such a
> case? It seems to me that this is exactly the event in which the old
> verify_networks result is invalid anyway, so there is no point in
> storing history.

Well, not exactly. Suppose you configure networks, save the settings,
and run a network check, and everything passes. Now change one thing
without saving and check again: the check fails, but it shouldn't
affect the flag, because that's a different configuration from the
saved one, and your original cluster is still OK. With the
task-based approach the user would then have to run the original
check yet again. The advantage of the network_check_status column is
that you don't need to store any history -- the task can be deleted
or whatever, and the last checked saved configuration still counts.
The user can perform other checks 'for free' and is not required to
rerun the checks for the working configuration.
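Roughly, the flag semantics I mean could be sketched like this (a
minimal, hypothetical Python sketch; Cluster, verify and
run_network_check are illustrative names only, not Nailgun's actual
models or API):

```python
# Sketch of the network_check_status idea: the flag on the cluster is
# updated only when the *saved* configuration is the one that was
# verified, so checks of unsaved edits don't clobber it.
# All names here are illustrative, not Nailgun's real schema or API.
from dataclasses import dataclass


@dataclass
class Cluster:
    saved_config: dict
    network_check_status: str = "unknown"  # 'ready' | 'error' | 'unknown'


def verify(config: dict) -> bool:
    """Stand-in for the real verify_networks task chain."""
    return config.get("vlan_ok", True)


def run_network_check(cluster: Cluster, config: dict) -> bool:
    passed = verify(config)
    # Only record the result if the checked config is the saved one;
    # a check of unsaved changes is 'for free' and leaves the flag alone.
    if config == cluster.saved_config:
        cluster.network_check_status = "ready" if passed else "error"
    return passed


cluster = Cluster(saved_config={"vlan_ok": True})
run_network_check(cluster, cluster.saved_config)  # saved config passes
run_network_check(cluster, {"vlan_ok": False})    # unsaved edit fails...
print(cluster.network_check_status)               # ...flag stays 'ready'
```

Note that no task history is consulted anywhere: deleting old
verify_networks tasks cannot change the flag.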

With data depending on tasks, you actually have to store a lot of
history, because you need to keep the last working saved configuration
-- otherwise the user will have to rerun the original configuration
check. So from a usability point of view this is a worse solution.
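To make the contrast concrete, here is what the task-history approach
forces on you (again just an illustrative sketch, not Nailgun's task
model): the answer to "was the saved config ever verified?" only
exists as long as the task records do.

```python
# Contrast: a task-history approach. To know whether the currently
# saved configuration ever passed verification, every verify_networks
# result must be retained and searched; pruning old tasks loses the
# answer. Illustrative sketch only.

history = []  # list of (config, status) records for verify runs


def record_check(config: dict, passed: bool) -> None:
    history.append((config, "ready" if passed else "error"))


def saved_config_verified(saved_config: dict) -> bool:
    # Walk the history from newest to oldest looking for the saved config.
    for config, status in reversed(history):
        if config == saved_config:
            return status == "ready"
    return False  # history pruned -> we can no longer tell


saved = {"vlan_ok": True}
record_check(saved, True)
record_check({"vlan_ok": False}, False)  # a later check of unsaved edits
print(saved_config_verified(saved))      # True, while history exists
history.clear()                          # prune old tasks...
print(saved_config_verified(saved))      # ...and the answer is lost: False
```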

> 
> 
> As far as I understand, there's one supertask, 'verify_networks'
> (called in nailgun/task/manager.py, line 751). It spawns other tasks
> that do the verification. When all is OK, verify_networks calls the
> RPC 'verify_networks_resp' method and returns a 'ready' status, and
> at that point I can inject code to also set the DB column on the
> cluster saying that network verification was OK for the saved
> configuration. Adding other tasks should in no way affect this
> behavior, since they're just subtasks of this task -- or am I
> wrong?
> 
> 
> It is not that smooth, but in general yes - it can be done when the
> state of verify_networks changes. But let's say we have a
> some_settings_verify task. Would it be valid to add one more field
> to the cluster model, like some_settings_status?

Well, why not? Cluster deployment is a task, and its status is saved
in a cluster column, not fetched from tasks. As you can see, the logic
of network verification is not simply a matter of reading a
ready/error task status; it is more subtle. What other settings do you
have in mind? I guess when we have more of them one could create a
separate table to keep them, but for now I don't see a point in doing
this.
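If we ever do get several such statuses, a simple key/value table
keyed by (cluster_id, status_name) would avoid adding a new column
per feature. A rough sketch of what I mean (table and function names
are invented, not Nailgun's schema):

```python
# Hypothetical per-cluster status table: one row per
# (cluster_id, status name) instead of one column per feature.
# Sketch only; not Nailgun's actual schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE cluster_statuses (
        cluster_id INTEGER NOT NULL,
        name       TEXT    NOT NULL,
        status     TEXT    NOT NULL,
        PRIMARY KEY (cluster_id, name)
    )
""")


def set_status(cluster_id: int, name: str, status: str) -> None:
    # INSERT OR REPLACE upserts on the (cluster_id, name) primary key.
    conn.execute(
        "INSERT OR REPLACE INTO cluster_statuses VALUES (?, ?, ?)",
        (cluster_id, name, status),
    )


def get_status(cluster_id: int, name: str) -> str:
    row = conn.execute(
        "SELECT status FROM cluster_statuses WHERE cluster_id=? AND name=?",
        (cluster_id, name),
    ).fetchone()
    return row[0] if row else "unknown"


set_status(1, "network_check_status", "ready")
set_status(1, "some_settings_status", "error")
print(get_status(1, "network_check_status"))  # ready
print(get_status(2, "network_check_status"))  # unknown
```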

P.

> 
> 
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


