[openstack-dev] [Fuel] Assigning VIPs on network config serialization

Igor Kalnitsky ikalnitsky at mirantis.com
Thu Oct 22 16:51:41 UTC 2015


Hi all,

Let's summarize what we have by now.

1/ We agreed on removing the *VIP allocation* code from the GET
network_configuration API handler. Already allocated VIPs should still be
returned by this call, though (see the sketch below).
2/ We agreed on checking whether there are enough IP addresses in the pool
as part of the Verify Networks API call (currently we do it only in the
before-deployment checks).
3/ We should keep in mind (and perhaps file a ticket about) the ability
to allocate VIPs on demand, e.g. on some API call, or even to be able to
assign a custom one.
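
Just to make (1) and (2) concrete, here is a rough sketch in
Nailgun-flavoured pseudo-Python. The handler and model names
(allocated_vips, free_ip_count, etc.) are illustrative only, not the
actual Nailgun classes or APIs:

    # Rough sketch -- names are illustrative, not real Nailgun code.

    class NetworkConfigurationHandler(object):
        """GET serializes only VIPs that are already allocated (point 1)."""

        def get(self, cluster):
            # No allocation happens here; we only read what is already stored.
            vips = {vip.name: vip.ip_addr for vip in cluster.allocated_vips}
            return {'networks': cluster.network_config, 'vips': vips}

    def verify_networks(cluster):
        """Fail verification early if pools cannot cover nodes + VIPs (point 2)."""
        errors = []
        for net in cluster.networks:
            required = len(net.nodes) + len(net.required_vips)
            if net.free_ip_count() < required:
                errors.append("Network '%s' needs %d IPs but only %d are free"
                              % (net.name, required, net.free_ip_count()))
        return errors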

I think we need to update the LP ticket for (1), and file a new ticket for (2).

Sounds like a plan, ha?

Thanks,
Igor

On Wed, Oct 21, 2015 at 3:54 PM, Aleksey Kasatkin
<akasatkin at mirantis.com> wrote:
>> Then we should close [1] as invalid, shouldn’t we?
>
> AFAIC, no. We should check whether there are enough IPs for nodes /
> VIPs with the current network configuration.
>
> I'd propose adding a handler for allocating VIPs if VIPs can be useful
> before deployment.
> I'm not sure what the cases are.
>
>
>
> Aleksey Kasatkin
>
>
> On Wed, Oct 21, 2015 at 11:45 AM, Roman Prykhodchenko <me at romcheg.me> wrote:
>>
>> Then we should close [1] as invalid, shouldn’t we?
>>
>> > On 20 Oct 2015, at 15:55, Igor Kalnitsky <ikalnitsky at mirantis.com>
>> > wrote:
>> >
>> > Roman,
>> >
>> >> This behavior is actually described in [1]. Should we allocate
>> >> VIPs on network check as well?
>> >
>> > No, we shouldn't. We should check whether there are enough IPs for nodes /
>> > VIPs with the current network configuration, but no more.
>> >
>> > - igor
>> >
>> > On Tue, Oct 20, 2015 at 4:54 PM, Igor Kalnitsky
>> > <ikalnitsky at mirantis.com> wrote:
>> >> Andrew,
>> >>
>> >>> but the problem is that VIPs need to already be allocated.
>> >>
>> >> Why? Fuel doesn't use them at this stage.
>> >>
>> >>
>> >>> They need to be allocated on network update, or when a node role
>> >>> requiring
>> >>> one is added to the environment.
>> >>
>> >> It looks like either you or I misunderstood something. AFAIK, a node
>> >> role itself has nothing to do with VIPs; it doesn't require any of
>> >> them.
>> >>
>> >> Currently VIPs are requested by network roles, and network roles are
>> >> the same for all nodes (except in the Network Templating case). Network roles
>> >> are assigned to networks, and if a VIP is required for a network role it
>> >> will be allocated in the assigned network.
>> >>
>> >> So basically, it requires a huge effort to redesign our allocation
>> >> system to achieve what you want, because:
>> >>
>> >> * Each time you reassign a network role, the corresponding VIP should be
>> >> re-allocated in the database as well.
>> >> * Each time you enable/disable plugins, the VIPs should be
>> >> re-allocated, because plugins may export network roles.
>> >> * Each time you add a new node to the cluster, the VIPs should be
>> >> re-allocated, because with a new node you may simply run out of free
>> >> IPs. And, btw, should we assign IPs to the added nodes here? Or maybe
>> >> postpone that to the serialization step?
>> >>
>> >> Well, Andrew, I believe we don't have enough resources to implement
>> >> your proposal. Moreover, the proposed approach requires a lot of
>> >> discussions and design meetings. And it definitely should be
>> >> implemented in the scope of a blueprint, not as a bugfix.
>> >>
>> >>
>> >>> Not knowing the address until serialization for deployment is too
>> >>> late.
>> >>
>> >> Once again - why? I agree, perhaps it would be useful, but there's no
>> >> strict requirement on this and we should resolve our issues
>> >> step-by-step. See my response above.
>> >>
>> >>
>> >>> No, Again, this is too late.
>> >>
>> >> Too late for what?
>> >>
>> >>
>> >> - Igor
>> >>
>> >> On Tue, Oct 20, 2015 at 12:38 PM, Roman Prykhodchenko <me at romcheg.me>
>> >> wrote:
>> >>> My concern here is that there is also a Network check feature that,
>> >>> according to its name, should check things like whether or not there are enough
>> >>> IP addresses in all ranges of a network. The problem is that it may be run
>> >>> at any time, even when VIPs are not yet allocated. From the user's side the
>> >>> workflow may look a little wrong:
>> >>>
>> >>> 1. Check network => get "Everything is fine"
>> >>> 2. Right after that press Apply Changes => get "Network configuration
>> >>> is bad"
>> >>>
>> >>> This behavior is actually described in [1]. Should we allocate VIPs on
>> >>> network check as well?
>> >>>
>> >>>
>> >>> 1. https://bugs.launchpad.net/fuel/+bug/1487996
>> >>>
>> >>>
>> >>> - romcheg
>> >>>
>> >>>
>> >>>> On 19 Oct 2015, at 18:28, Igor Kalnitsky <ikalnitsky at mirantis.com>
>> >>>> wrote:
>> >>>>
>> >>>> Hi Roman,
>> >>>>
>> >>>>> Do not assign addresses to VIPs if a network configuration is being
>> >>>>> serialized for API output.
>> >>>>
>> >>>> AFAIK, that's not true. Fuel UI and OSTF rely only on the *public*
>> >>>> VIP.
>> >>>> So we can keep only the *public* VIP, and not assign / show the others.
>> >>>>
>> >>>>> Check the number of IP addresses wherever it is possible to "spoil"
>> >>>>> the network configuration: when a role gets assigned to a node, when
>> >>>>> the network changes, or when network templates are applied.
>> >>>>
>> >>>> It won't work that way. What if you enable a plugin when all roles are
>> >>>> already assigned? What if you change the networks, and now there are not
>> >>>> enough IPs? Or what if you enable a plugin that requires 5 VIPs in the
>> >>>> public network by default, and there are not enough, but using network
>> >>>> templates you assign this netrole to the management network?
>> >>>>
>> >>>> From what I can tell, the proposed approach requires putting checks
>> >>>> here and there around the code. Let's not overcomplicate things
>> >>>> without a real need.
>> >>>>
>> >>>> So let me share my thoughts regarding this issue.
>> >>>>
>> >>>> * We shouldn't *allocate* VIPs when we make a GET request on the network
>> >>>> configuration handler. It should return only *already allocated* VIPs
>> >>>> and no more.
>> >>>> * VIP allocation should be performed when we run a deployment.
>> >>>> * The before-deployment checks should fail if there are not enough IPs for
>> >>>> VIPs or other resources, so users can fix them and try again.
>> >>>> * Both Fuel UI and OSTF need VIPs only when the cluster is deployed, and
>> >>>> it's safe to return the allocated VIPs at that stage.
>> >>>>
>> >>>> So what do you think guys?
>> >>>>
>> >>>> Thanks,
>> >>>> Igor
>> >>>>
>> >>>> On Fri, Oct 16, 2015 at 5:25 PM, Roman Prykhodchenko <me at romcheg.me>
>> >>>> wrote:
>> >>>>> Hi folks!
>> >>>>>
>> >>>>> I’ve been discussing several bugs [1-3] with some folks and noticed
>> >>>>> that they share the same root cause: network serialization
>> >>>>> fails if there are not enough IP addresses in all available ranges of one of
>> >>>>> the available networks to assign them to all VIPs. There are several
>> >>>>> possible solutions for this issue:
>> >>>>>
>> >>>>> a. Do not assign addresses to VIPs if a network configuration is being
>> >>>>> serialized for API output.
>> >>>>> A lot of external tools and modules, e.g. OSTF, rely on that
>> >>>>> information, so this relatively small change in Nailgun will require big
>> >>>>> cross-component changes. Therefore this change can only be done as a
>> >>>>> feature, but it seems to be the way this issue must be fixed.
>> >>>>>
>> >>>>> b. Leave some VIPs without IP addresses
>> >>>>> If the network configuration is generated for API output, it is possible
>> >>>>> to leave some VIPs without IP addresses assigned. This will only create more
>> >>>>> mess around Nailgun and bring more issues than it will resolve.
>> >>>>>
>> >>>>> c. Check the number of IP addresses wherever it is possible to "spoil" the
>> >>>>> network configuration: when a role gets assigned to a node, when the network
>> >>>>> changes, or when network templates are applied.
>> >>>>>
>> >>>>>
>> >>>>> The proposal is to follow [c] as a fast solution and file a
>> >>>>> blueprint for [a]. Opinions?
>> >>>>>
>> >>>>>
>> >>>>> 1 https://bugs.launchpad.net/fuel/+bug/1487996
>> >>>>> 2 https://bugs.launchpad.net/fuel/+bug/1500394
>> >>>>> 3 https://bugs.launchpad.net/fuel/+bug/1504572
>> >>>>>
>> >>>>>
>> >>>>> - romcheg
>> >>>>>
>> >>>>>