[openstack-dev] [Congress] Austin recap
Eric K
ekcs.openstack at gmail.com
Fri May 6 03:08:28 UTC 2016
Thanks for the summary, Tim. Very productive summit!
I've added more notes and a diagram to the HA etherpad. Is that the best
way to continue the discussion?
https://etherpad.openstack.org/p/newton-congress-availability
From: Tim Hinrichs <tim at styra.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Date: Tuesday, May 3, 2016 at 11:37 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Congress] Austin recap
> Hi all,
>
>
> Here's a quick summary of the Congress activities in Austin. Everyone should
> feel free to chime in with corrections and things I missed.
>
>
> 1. Talks
> Masahito gave a talk on applying Congress for fault recovery in the context of
> NFV.
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/7199
>
>
> Fabio gave a talk on applying Congress + Monasca to enforce application-level
> SLAs.
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/7363
>
>
> 2. Integrations
> We had discussions, both within the Congress Integrations fishbowl session
> and outside of it, on potential integrations with other OpenStack projects.
> Here's a quick overview.
>
>
> - Monasca (fabiog). The proposed integration: Monasca pushes data to Congress
> via the push driver, letting Congress know about the alarms Monasca has
> configured; a single table can carry multiple alarms. Eventually we talked
> about having Congress analyze the policy to configure the alarms that
> Monasca uses, completing the loop.
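>
> Here's a minimal sketch of the push side. Everything in it is illustrative
> rather than a settled contract: the endpoint shape, the "monasca_push"
> datasource, the "alarms" table, and the row schema are all assumptions.
>
>     import requests
>
>     CONGRESS = "http://127.0.0.1:1789/v1"           # Congress's default API port
>     HEADERS = {"X-Auth-Token": "<keystone-token>",  # placeholder token
>                "Content-Type": "application/json"}
>
>     def push_alarms(datasource, rows):
>         # Replace the datasource's "alarms" table with the given rows.
>         url = "%s/data-sources/%s/tables/alarms/rows" % (CONGRESS, datasource)
>         requests.put(url, json={"rows": rows}, headers=HEADERS).raise_for_status()
>
>     # A single table can carry many alarms at once:
>     push_alarms("monasca_push", [
>         ["alarm-1", "cpu.utilization", "ALARM", "host-a"],
>         ["alarm-2", "mem.usage",       "OK",    "host-b"],
>     ])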
>
>
> - Watcher (acabot). Watcher aims to optimize the placement of VMs by pulling
> data from Ceilometer/Monasca and Nova (including affinity/anti-affinity info),
> computing the migrations required by whichever strategy is configured, and
> migrating the VMs. The Watcher team wants to use Congress as a source of
> policies to take into account when computing those migrations.
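>
> Here's a minimal sketch of the Watcher side, using python-congressclient;
> the policy name "migration" and the table "forbidden_migration" are
> illustrative, since the actual contract is still to be designed.
>
>     from congressclient.v1 import client as congress
>     from keystoneauth1 import identity, session
>
>     sess = session.Session(auth=identity.Password(
>         auth_url="http://127.0.0.1/identity", username="admin",
>         password="secret", project_name="admin",
>         user_domain_id="default", project_domain_id="default"))
>     cong = congress.Client(session=sess, service_type="policy")
>
>     # Rows Congress derives, e.g. (vm, host) pairs a strategy must avoid:
>     rows = cong.list_policy_rows("migration", "forbidden_migration")
>     forbidden = {tuple(r["data"]) for r in rows["results"]}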
>
>
> - Nova scheduler. There's interest in policy-enabling the Nova scheduler and
> then integrating it with Congress in the context of delegation, so that
> Congress can both pull in and push out the scheduling policy.
>
>
> - Mistral. The use case for this integration is helping people build an HA
> solution for VMs: have Congress monitor VMs, identify when they have
> failed, and kick off a Mistral workflow to resurrect them.
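>
> Here's a hypothetical sketch of the rule shape, written with Congress's
> execute[] modal; the "mistral" service name, its create_execution action,
> and the nova:servers columns are assumptions, since this integration was
> only being proposed.
>
>     from congressclient.v1 import client as congress
>     from keystoneauth1 import identity, session
>
>     sess = session.Session(auth=identity.Password(
>         auth_url="http://127.0.0.1/identity", username="admin",
>         password="secret", project_name="admin",
>         user_domain_id="default", project_domain_id="default"))
>     cong = congress.Client(session=sess, service_type="policy")
>
>     # Classify failed VMs (table/column names are illustrative):
>     cong.create_policy_rule("classification", {
>         "rule": 'vm_failed(vm) :- nova:servers(id=vm, status="ERROR")'})
>
>     # React: launch a (hypothetical) Mistral workflow per failed VM.
>     cong.create_policy_rule("classification", {
>         "rule": 'execute[mistral:create_execution("recover_vm", vm)] '
>                 ':- vm_failed(vm)'})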
>
>
> - Vitrage. Vitrage does root-cause analysis. It provides a graph-based
> model of the structure of the datacenter (switches attached to hypervisors,
> servers attached to hypervisors, etc.) and a templating language for defining
> how to create new alarms from existing alarms. The action item we left with
> is that the Vitrage team will initiate a mailing-list thread where we discuss
> which Vitrage data might be valuable for Congress policies.
>
>
> 3. Working sessions
> - The new distributed architecture is nearing completion. There seems to be
> one blocker to having the basic functionality ready to test: at boot, Congress
> doesn't properly spin up datasources that have already been configured in the
> database. As an experiment to see how close we were to completion, we started
> up the Congress server with just the API and policy engine and saw the basics
> actually working! When we added the datasources, we found a bug where the API
> was assuming the datasources could be referenced by UUID, when in fact they
> can only be referenced by name on the message bus. So while there's still
> quite a bit to do, we're getting close to having all the basics working.
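>
> Here's a minimal sketch of the fix direction (the helper and names are
> hypothetical, not Congress's actual internals): have the API accept either
> identifier and resolve it to the name before addressing the message bus.
>
>     def bus_target(ref, datasources):
>         # `datasources` are rows from the datasources DB table; the
>         # message bus only knows datasource services by name.
>         for ds in datasources:
>             if ref in (ds["id"], ds["name"]):
>                 return ds["name"]
>         raise LookupError("unknown datasource: %s" % ref)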
>
>
> - We made progress on the high-availability (HA) and high-throughput (HT)
> design. This is still very much open to design and discussion, so continuing
> it on the mailing list would be great. Here are the highlights.
> o Policy engine: split into (i) active-active for queries, to handle
> high throughput, and (ii) active-passive for action execution (requiring
> leader election, etc.; see the sketch after this list). Policy CRUD modifies
> the DB; it's undecided whether the API also informs all policy engines, or
> whether they all sync from the DB.
> o Pull datasources: no obvious need for replication, since they restart
> quickly and simply re-pull the latest data anyhow.
> o Push datasources: need HA to ensure the pusher can always push, e.g.
> the pusher drops the message onto oslo.messaging. Still up for debate is
> whether we also need HA for storing the data, since there is no way to ask
> for it after a restart; one suggestion is that every datasource must allow
> us to ask for its state. HT does not call for replication, since syncing
> state between several instances would cost more than running a single
> instance.
> o API (we didn't really discuss this, so here's my take): no obvious need
> for replication for HT, since if the API is a bottleneck, the backend will be
> an even bigger one. For HA, we could go active-active, since the API is
> just a front-end to the message bus + database, though we would need to look
> into locking now that separate processes no longer share a GIL.
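>
> Here's a minimal sketch of the active-passive piece for the policy engine,
> assuming tooz (the coordination library commonly used across OpenStack) is
> an acceptable way to elect the single action-executing engine; the backend
> URL, member ID, and group name are illustrative.
>
>     import time
>     from tooz import coordination
>
>     coord = coordination.get_coordinator(
>         "zookeeper://127.0.0.1:2181", b"policy-engine-1")
>     coord.start(start_heart=True)
>
>     group = b"congress-action-executors"
>     try:
>         coord.create_group(group).get()
>     except coordination.GroupAlreadyExist:
>         pass
>     coord.join_group(group).get()
>
>     state = {"leader": False}
>     coord.watch_elected_as_leader(group, lambda e: state.update(leader=True))
>
>     while True:
>         coord.run_watchers()   # fires the election callback when elected
>         if state["leader"]:
>             pass               # only the leader pops action-execution work
>         time.sleep(1)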
>
>
> It was great seeing everyone in Austin!
> Tim
>