[openstack-dev] [Octavia] Minutes from 8/20/2014 meeting
Stephen Balukoff
sbalukoff at bluebox.net
Thu Aug 21 23:52:27 UTC 2014
Hi Trevor!
Thanks for this! I've also transcribed these onto the wiki here:
https://wiki.openstack.org/wiki/Octavia/Meeting_Minutes#2014-08-20_Weekly_meeting
Obviously, y'all should feel free to fix any errors you find!
Stephen
On Thu, Aug 21, 2014 at 2:52 PM, Trevor Vardeman <
trevor.vardeman at rackspace.com> wrote:
> Agenda items are numbered, and the topics, as discussed, are described in
> list form beneath each.
>
>
> 1) Revisit some basic features of the load-balancing-as-a-service object
> model and API.
> a) Brandon advocated for LoadBalancer as the only root object (see the
> model sketch at the end of this item)
> + The reason for having multiple root objects was to allow sharing.
> b) Will we allow sharing of pools in a listener?
> + Stephen suggests providing sharing to the customer for benefits
> - provides simplicity to the user
> - Example: L7 rules that all reference the same pool are simpler
> for the user to handle.
> - Without sharing, there may also be a number of unnecessary
> duplicate health checks.
> + German wants placement of the pool to be on the load balancer
> - This allows sharing pools between different listeners.
> - Counter-argument by Stephen: sharing pools between HTTP and HTTPS
> load balancers would be really rare; normally people would just
> use a different port, and adding another health check wouldn't be
> a big deal. However, the proposed L7 policies, where a complicated
> rule set causes duplication for an "or" set, would increase the
> health check requirement. (Refer to the email thread on the list)
> c) If we desire many-to-many relationships, there will be more root
> objects than just the load balancer.
> + Moving to many-to-many after establishing one root object would be
> difficult
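>
> To make the trade-off concrete, here is a minimal sketch in SQLAlchemy
> (class and column names are hypothetical, not an agreed schema) of a
> single-root model in which pools belong to the load balancer but can be
> shared across its listeners:
>
>     # Hypothetical sketch only -- not an agreed-upon Octavia schema.
>     from sqlalchemy import Column, ForeignKey, Integer, String
>     from sqlalchemy.ext.declarative import declarative_base
>     from sqlalchemy.orm import relationship
>
>     Base = declarative_base()
>
>     class LoadBalancer(Base):  # the single root object
>         __tablename__ = 'load_balancer'
>         id = Column(String(36), primary_key=True)
>         listeners = relationship('Listener', backref='load_balancer')
>
>     class Pool(Base):  # owned by one load balancer, shareable by listeners
>         __tablename__ = 'pool'
>         id = Column(String(36), primary_key=True)
>         load_balancer_id = Column(String(36), ForeignKey('load_balancer.id'))
>
>     class Listener(Base):
>         __tablename__ = 'listener'
>         id = Column(String(36), primary_key=True)
>         protocol_port = Column(Integer)
>         load_balancer_id = Column(String(36), ForeignKey('load_balancer.id'))
>         default_pool_id = Column(String(36), ForeignKey('pool.id'))
>         default_pool = relationship('Pool')  # listeners may share one pool
>
> Two listeners on the same load balancer can point at the same pool row,
> which gives the sharing German described without making Pool a root
> object.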
>
> 2) Get consensus on initial project direction and implementation details
> a) One HAProxy instance per load balancer, or one HAProxy instance per
> listener? (see the sketch at the end of this item)
> + Per ML discussion: keeping each listener on its own HAProxy instance
> increases performance on a single Octavia VM
> - Benchmarks are desired to support this (German has included this
> in his next sprint)
> + Suggested shelving this until the benchmarks are available.
> + Further discussion of this decision will happen on the ML
> + A concern from Vijay: with one HAProxy instance per listener, would
> that affect scalability?
> - It was suggested to move this to the mailing list
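>
> As a rough illustration of the per-listener option (all names invented;
> not a decided design), each listener would get its own rendered config
> and its own haproxy process:
>
>     # Hypothetical sketch: one haproxy process per listener.
>     import subprocess
>
>     CFG_TEMPLATE = """\
>     # global/defaults sections omitted for brevity
>     frontend listener_{lid}
>         bind *:{port}
>         default_backend pool_{pid}
>
>     backend pool_{pid}
>         balance roundrobin
>     {members}
>     """
>
>     def start_listener(listener):
>         members = "\n".join(
>             "    server {id} {ip}:{port} check".format(**m)
>             for m in listener['members'])
>         path = '/etc/haproxy/listener_{0}.cfg'.format(listener['id'])
>         with open(path, 'w') as f:
>             f.write(CFG_TEMPLATE.format(lid=listener['id'],
>                                         port=listener['port'],
>                                         pid=listener['pool_id'],
>                                         members=members))
>         # one daemon and one pid file per listener
>         subprocess.check_call(
>             ['haproxy', '-D', '-f', path, '-p',
>              '/var/run/haproxy_{0}.pid'.format(listener['id'])])
>
> German's benchmarks (next sprint, above) would compare this against a
> single haproxy process serving all listeners on the VM.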
>
> 3) When decisions (like #2) have been made, where should this be stored,
> wiki or in code?
> a) The downside of the wiki is that if OpenStack makes a documentation
> overhaul, the decision information might get lost.
> b) The downside of code is that it's harder to find and read.
> c) The decision was to keep it in the wiki.
>
> 4) Whose responsibility is it to update the wiki with these decisions?
> a) For now, Stephen has been updating the wiki
> b) In the future, the people involved in a decision will designate
> someone to update the wiki at the time
>
> 5) What else needs to change in the 0.5 design before it can be approved
> and implementation can begin?
> a) Action item for everyone: Review this design before next week's
> meeting. Keep in mind the
> document is supposed to be somewhat general.
>
> 6) Start going over action items (
> https://etherpad.openstack.org/p/Octavia_Action_Items)
> a) Action Item for everyone: Review the migration information proposed
> by Brandon.
> b) Per the link above, start at item 1 and work down the list.
> c) How can we decide who is working on what?
> + Get Launchpad set up for Octavia to allow for blueprint additions,
> and thus allow people to contribute to a specific effort
> d) We need a list of the tasks that are required, and of what needs to
> be hooked up and how (the glue between the different pieces)
> e) What kind of communication between the different components? (see the
> REST sketch after this list)
> + XMLRPC?
> + A REST interface?
> + Something different?
> f) Brandon working on data models and SQLAlchemy models.
> g) Stephen working on Octavia VM API interface, including what
> technology to use
> h) Doug working on the skeleton structure
> i) Brandon also working on the Launchpad and blueprints setup
> j) Stephen will also prioritize this list
> k) Topics that need discussion should be raised and discussed on the
> mailing list
> l) Michael Johnson working on the base image scripts
> + Would we use an image we've built, or set everything up after a VM
> is created?
> - Start with a base image that has the Octavia scripts and such
> pre-packaged, instead of having cloud-init do all the downloading
> work. This saves time/resources.
> - Ideally we would have a place in the Octavia repo with a script or
> something that, when run, would create an image (see the image-build
> sketch below).
> + The images will potentially change based on flavoring options.
> - This includes custom images driven by customer requirements
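>
> Regarding the communication question in item (e), a REST interface
> between the controller and the Octavia VM might look roughly like this
> (a sketch only; the routes and payloads are invented, not a decided
> API):
>
>     # Sketch of a possible REST API served on the Octavia VM.
>     from flask import Flask, request, jsonify
>
>     app = Flask(__name__)
>
>     @app.route('/listeners/<listener_id>', methods=['PUT'])
>     def deploy_listener(listener_id):
>         # the controller pushes a rendered haproxy config for one
>         # listener; the VM writes it out and (re)starts that process
>         config = request.get_data(as_text=True)
>         with open('/etc/haproxy/listener_%s.cfg' % listener_id, 'w') as f:
>             f.write(config)
>         return jsonify(listener=listener_id, status='PENDING'), 202
>
>     @app.route('/health')
>     def health():
>         # the controller polls the VM for liveness and status
>         return jsonify(status='OK')
>
>     if __name__ == '__main__':
>         app.run(host='0.0.0.0', port=9443)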
>
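> For the base-image work in item (l), one plausible shape for the
> in-repo script is a thin wrapper around diskimage-builder ('ubuntu' and
> 'vm' are standard diskimage-builder elements; the 'octavia-agent'
> element is assumed here and does not exist yet):
>
>     # Hypothetical wrapper around diskimage-builder.
>     import subprocess
>
>     def build_base_image(output='octavia-base'):
>         # pre-bakes the agent and haproxy packaging into the image,
>         # rather than installing everything via cloud-init at boot
>         subprocess.check_call(['disk-image-create', '-o', output,
>                                'ubuntu', 'vm', 'octavia-agent'])
>
>     if __name__ == '__main__':
>         build_base_image()
>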
> -- After meeting --
> Q: Are we going to be incubated?
> A: Yes, we are basically destined for incubation, period. Note: we will
> assuredly not be in Juno.
>
> Q: Why be part of Neutron? Why not just be our own program?
> A: We want to distance ourselves from Neutron to some extent. We will
> formalize this via a
> networking driver in Octavia. Note: we do not want to burn any
> bridges here, so we want to
> be appropriate in our spin-out process.
>
> Sorry for the delay in sending this out. Not sure if I missed anything
> here, but please feel free to add anything I might have left out. Thanks!
>
>
> -Trevor
--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807