[openstack-dev] [Congress] Summit recap

Tim Hinrichs tim at styra.com
Thu Nov 5 18:12:54 UTC 2015

Hi all,

It was great seeing so many Congress people in Tokyo last week!  Hopefully
you've all had a chance to recover by now.  Here's an overview of what
happened.  I was planning to go over this at this week's IRC meeting, but
forgot about the U.S. time change and missed the meeting--sorry about that.

1. Hands On Lab.   There were 40-50 attendees, and all but 3-4 of them got
the VM we provided installed and worked through the lab.  One of the
failures was a machine without enough memory; one was something to do with
VDX (Eric--is that right?); one was a version of Linux for which there
wasn't a VirtualBox installer.  The only odd problem was a glitch in the
Horizon interface that wouldn't show a table we could display on the
command line.  Overall, people seemed to like Congress and what it has to
offer.

2. Working session: distributed architecture
The base class is working with oslo-messaging, but the unit tests are not
yet passing.  Peter is planning to debug them and push the code up for
review in the next few weeks.

One thing we discussed was that the distributed architecture is only a
building block for an HA design.  But it does not deliver HA.  In
particular, for HA we will want to have multiple copies of the policy
engine, and these copies should be hidden from the user; the system should
take care of mapping an API call intended for the policy engine to one of
the copies.  The distributed architecture does not hide the existence of
multiple policy engines; rather, the user is responsible for spinning up
multiple policy engines, giving them different names, and directing API
requests to whichever one of the policy engines she wants to interact with.
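To make that distinction concrete, here's a rough sketch of the kind of HA layer we'd want *on top of* the distributed architecture: something that hides the replicas and maps each API call to one of them.  All the names below are made up for illustration; nothing like this class exists in Congress today.

```python
import itertools

class PolicyEngineDispatcher:
    """Hypothetical HA front end: hides multiple policy-engine replicas
    behind one logical endpoint by mapping each API call to a copy.
    Under the distributed architecture as it stands, the *user* does this
    mapping by hand; an HA design would do it automatically."""

    def __init__(self, engine_names):
        # Simple round-robin over the replicas; a real HA layer would
        # also track replica health and retry failed calls elsewhere.
        self._engines = itertools.cycle(engine_names)

    def route(self, api_call):
        """Return (engine_name, api_call): which replica handles this call."""
        return next(self._engines), api_call
```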

3. Working session: infrastructure/testing
- We agreed to add Murano tests to our gate (as non-voting) so that we
know when a change to Congress breaks Murano.  It should be sufficient to
copy their Jenkins job into the Congress job list and mark that job
non-voting.
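Roughly, the project-config change would look something like this (a sketch of the zuul layout.yaml conventions in openstack-infra/project-config; the job name here is a placeholder--copy the real one from Murano's job list):

```yaml
# zuul/layout.yaml in openstack-infra/project-config (sketch only)
jobs:
  - name: gate-congress-dsvm-murano    # hypothetical name; use Murano's actual job
    voting: false                      # non-voting, as agreed

projects:
  - name: openstack/congress
    check:
      - gate-congress-dsvm-murano
```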

- We discussed the problem of datasource drivers, where to store them, and
how to test them.  Neutron has a similar issue with vendor-specific
plugins.  We thought it would be nice to have a separate requirements.txt
file for each driver, but then it is unclear how to test datasource
drivers in the gate, because setup.py only installs the single
requirements.txt in the root directory.  In the end, we decided the right
approach is to have one requirements.txt that includes all the
dependencies for the OpenStack drivers, so that we can test those in the
gate, and a separate requirements.txt for each non-OpenStack driver, since
we can't test those in the gate anyway.
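In other words, the tree would end up looking something like this (file names invented for illustration):

```
requirements.txt                    # deps for all OpenStack drivers;
                                    # installed by setup.py, tested in the gate
congress/datasources/
    neutron_driver.py
    vendor_x_driver.py              # hypothetical non-OpenStack driver
    vendor_x-requirements.txt       # its deps; installed by hand, not gated
```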

4. Working session: Monasca and NFV.
- Fabio introduced us to Monasca, a monitoring project about to be
accepted into the Big Tent.  It is an alternative to Ceilometer focused on
high performance.  It has alarms that can be set to notify the caller
whenever a certain kind of event occurs.  Monasca is supposed to get a
superset of the data that Congress currently has drivers for.  They
suggested that Congress could automatically generate alarms based on the
data required by policy.  As a first step, we decided to write a simple
datasource driver to integrate with Monasca, as an easy way for the
Congress team to get familiar with Monasca.
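For anyone who hasn't written a datasource driver before, the shape of that first step is roughly the following.  This is a standalone sketch, not the real Congress driver API: the class and method names are invented, and the client is anything with a list_alarms() method (in practice it would be the python-monascaclient bindings).

```python
class MonascaAlarmDriver:
    """Sketch of a polling datasource driver for Monasca alarms.  It
    flattens each alarm dict from the Monasca API into a flat tuple,
    the way Congress drivers turn API responses into Datalog tables."""

    TABLE = "alarms"

    def __init__(self, monasca_client):
        # monasca_client: anything exposing list_alarms() -> list of dicts.
        self.client = monasca_client
        self.state = {self.TABLE: set()}

    def poll(self):
        """Fetch the current alarms and rebuild the alarms table."""
        rows = set()
        for alarm in self.client.list_alarms():
            rows.add((alarm["id"], alarm["name"], alarm["state"]))
        self.state[self.TABLE] = rows
        return self.state
```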

- OPNFV Doctor project.  The Doctor project aims to detect and manage
faults in OPNFV platforms.  They hope to use Congress to help identify
faults: connect Zabbix, which generates fault events, to Congress, and
have Congress push out config changes.  Concretely, they asked for a
push-style datasource driver so that Zabbix can push data to Congress
through the API.  The blueprint for that work is here:
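The essential difference from our existing drivers is the direction of data flow: instead of Congress polling the source, the source pushes rows in through the API.  A minimal sketch of the idea (the interface below is illustrative only; the real design is whatever the blueprint settles on):

```python
class PushDriver:
    """Sketch of a push-style datasource driver.  External systems like
    Zabbix call in with rows; the driver just maintains table state."""

    def __init__(self, table_name):
        self.table = table_name
        self.rows = set()

    def replace_data(self, rows):
        """Handle a full-snapshot push: replace the table's contents."""
        self.rows = {tuple(r) for r in rows}

    def update_data(self, to_add, to_delete):
        """Handle a delta push: apply additions, then deletions."""
        self.rows |= {tuple(r) for r in to_add}
        self.rows -= {tuple(r) for r in to_delete}
```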

5. Discussion about Application-level Intent.

Outside the working sessions we talked with Ken Owens and his team about
application-level intent.  They are planning to build an
application-specific policy engine within the Congress framework.  For
each VM in an application, the user can rank the sensitivity of that VM as
low/medium/high for a handful of properties, e.g. latency and throughput.
The provisioning system (which is external to Congress) then provisions
the app according to that policy, and the policy engine within Congress
continually monitors those properties and corrects violations.  The plan
is to start this as a completely standalone policy engine running in a
Congress node, but to build it with an eye toward eventually delegating
from the domain-agnostic policy engine to the application-intent engine.
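To give a feel for the model, here's a toy version of the monitoring side for a single property.  The thresholds and names are entirely made up; they just show how a low/medium/high sensitivity ranking turns into violation detection.

```python
# Hypothetical per-sensitivity latency limits, in milliseconds.
LATENCY_LIMIT_MS = {"low": 500, "medium": 100, "high": 20}

def latency_violations(policy, measurements):
    """policy: vm -> sensitivity ("low"/"medium"/"high").
    measurements: vm -> observed latency in ms.
    Returns the VMs whose observed latency violates their sensitivity."""
    return [vm for vm, sensitivity in policy.items()
            if measurements.get(vm, 0) > LATENCY_LIMIT_MS[sensitivity]]
```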

6. Senlin project.  I heard about this project for the first time at the
summit.  It's policy-based cluster management.  Here's an email with more


It'd be great if those who attended could respond with clarifications,
comments, and anything I missed.

Let me know if anyone has questions/comments.