[openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

Susanne Balle sleipnir012 at gmail.com
Thu Aug 28 21:36:54 UTC 2014


We need to be careful. I believe a user could use these filters to keep
requesting VMs from nova until they effectively reach the size of your cloud.

Also, given that nova now has ServerGroups, let's not make a quick decision
to use something that is being replaced by something better. I suggest
we investigate ServerGroups a little more before we discard them.

The operator should really decide how he/she wants anti-affinity to behave
by setting the right filters in nova.
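
For illustration only, here is a rough (untested) sketch of how a
ServerGroup with an anti-affinity policy could be requested through
python-novaclient; all of the credentials, IDs and names below are
placeholders, not anything Octavia has decided on:

    # Sketch only: the all-caps values are placeholders.
    from novaclient import client

    nova = client.Client('2', USERNAME, PASSWORD, TENANT_NAME, AUTH_URL)

    # Create a server group with an anti-affinity policy; nova's
    # ServerGroupAntiAffinityFilter then keeps its members on
    # different compute hosts.
    group = nova.server_groups.create(name='octavia-lb-1',
                                      policies=['anti-affinity'])

    # Boot the load balancer VMs into that group via a scheduler hint.
    for i in range(2):
        nova.servers.create(name='lb-vm-%d' % i,
                            image=IMAGE_ID,
                            flavor=FLAVOR_ID,
                            scheduler_hints={'group': group.id})

If the ServerGroup filters are not enabled in the scheduler, the policy
may simply go unenforced, which is why documenting the filter
requirement matters.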

Susanne


On Thu, Aug 28, 2014 at 5:12 PM, Brandon Logan <brandon.logan at rackspace.com>
wrote:

> Trevor and I just worked through some scenarios to make sure it can
> handle colocation and apolocation.  It looks like it does; however, not
> everything will be so simple, especially when we introduce horizontal
> scaling.  Trevor's going to write up an email about some of the caveats,
> but so far it looks like just using a table to track which LB has which
> VMs and on which hosts will be sufficient.
>
> Thanks,
> Brandon
>
> On Thu, 2014-08-28 at 13:49 -0700, Stephen Balukoff wrote:
> > I'm trying to think of a use case that wouldn't be satisfied using
> > those filters and am not coming up with anything. As such, I don't see
> > a problem using them to fulfill our requirements around colocation and
> > apolocation.
> >
> >
> > Stephen
> >
> >
> > On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan
> > <brandon.logan at rackspace.com> wrote:
> >         Yeah, we were looking at the SameHostFilter and
> >         DifferentHostFilter, and those will probably do what we need.
> >         Though I was hoping we could use a combination of both, I
> >         believe we can make it work with those filters (a sketch of
> >         the scheduler hints follows below).
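> >
> >         For reference, a rough (untested) sketch of the per-boot
> >         scheduler hints those two filters consume; the credentials,
> >         image/flavor IDs and the existing instance UUID are all
> >         placeholders:
> >
> >             # Sketch only: the all-caps values are placeholders.
> >             from novaclient import client
> >
> >             nova = client.Client('2', USERNAME, PASSWORD, TENANT_NAME,
> >                                  AUTH_URL)
> >
> >             # DifferentHostFilter honors the 'different_host' hint...
> >             standby = nova.servers.create(
> >                 name='lb-vm-standby', image=IMAGE_ID, flavor=FLAVOR_ID,
> >                 scheduler_hints={'different_host': [EXISTING_VM_UUID]})
> >
> >             # ...and SameHostFilter honors the 'same_host' hint.
> >             peer = nova.servers.create(
> >                 name='lb-vm-peer', image=IMAGE_ID, flavor=FLAVOR_ID,
> >                 scheduler_hints={'same_host': [EXISTING_VM_UUID]})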
> >
> >         Thanks,
> >         Brandon
> >
> >         On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> >         > Brandon
> >         >
> >         >
> >         > I am not sure how ready that nova feature is for general
> >         > use and have asked our nova lead about that. He is on
> >         > vacation but should be back by the start of next week. I
> >         > believe this is the right approach for us moving forward.
> >         >
> >         >
> >         >
> >         > We cannot make it mandatory to run the two filters, but we
> >         > can say in the documentation that if these two filters are
> >         > not set, we cannot guarantee anti-affinity or affinity.
> >         >
> >         >
> >         > The other way we can implement this is by using availability
> >         > zones and host aggregates. This is one technique we use to
> >         > make sure we deploy our in-cloud services in an HA model. It
> >         > would also assume that the operator is setting up availability
> >         > zones, which we cannot require (see the rough sketch after the
> >         > link below).
> >         >
> >         >
> >         >
> >
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
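> >         >
> >         > As a rough, untested sketch (admin credentials required; the
> >         > aggregate, availability zone and host names here are
> >         > placeholders), that layout could be built with the nova API
> >         > roughly like this:
> >         >
> >         >     # Sketch only: the all-caps values are placeholders.
> >         >     from novaclient import client
> >         >
> >         >     nova = client.Client('2', ADMIN_USER, ADMIN_PASSWORD,
> >         >                          ADMIN_TENANT, AUTH_URL)
> >         >
> >         >     # Group some compute hosts into an aggregate exposed as
> >         >     # an availability zone.
> >         >     agg = nova.aggregates.create('lb-rack-1', 'lb-az-1')
> >         >     nova.aggregates.add_host(agg.id, 'compute-01')
> >         >     nova.aggregates.add_host(agg.id, 'compute-02')
> >         >
> >         >     # Boot into that AZ so placement follows the layout the
> >         >     # operator defined.
> >         >     nova.servers.create(name='lb-vm-az1',
> >         >                         image=IMAGE_ID,
> >         >                         flavor=FLAVOR_ID,
> >         >                         availability_zone='lb-az-1')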
> >         >
> >         >
> >         >
> >         > Sahara currently uses the following filters to support host
> >         > (anti-)affinity, probably because they did the work before
> >         > ServerGroups existed. I am not advocating the use of those
> >         > filters, just showing that we can document the feature and
> >         > leave it up to the operator to set it up to get the right
> >         > behavior.
> >         >
> >         >
> >         > Regards
> >         >
> >         >
> >         > Susanne
> >         >
> >         >
> >         >
> >         > Anti-affinity
> >         > One of the problems with Hadoop running on OpenStack is that
> >         > there is no ability to control where a machine is actually
> >         > running. We cannot be sure that two new virtual machines are
> >         > started on different physical machines. As a result, any
> >         > replication within the cluster is not reliable, because all
> >         > replicas may end up on one physical machine.
> >         > The anti-affinity feature provides the ability to explicitly
> >         > tell Sahara to run specified processes on different compute
> >         > nodes. This is especially useful for the Hadoop datanode
> >         > process, to make HDFS replicas reliable.
> >         > The anti-affinity feature requires certain scheduler filters
> >         > to be enabled in Nova. Edit your /etc/nova/nova.conf in the
> >         > following way:
> >         >
> >         > [DEFAULT]
> >         > ...
> >         > scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> >         > scheduler_default_filters=DifferentHostFilter,SameHostFilter
> >         >
> >         > This feature is supported by all plugins out of the box.
> >         >
> >         >
> >         >
> >         http://docs.openstack.org/developer/sahara/userdoc/features.html
> >         >
> >         >
> >         >
> >         >
> >         >
> >         > On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan
> >         > <brandon.logan at rackspace.com> wrote:
> >         >         Nova scheduler has ServerGroupAffinityFilter and
> >         >         ServerGroupAntiAffinityFilter, which do the colocation
> >         >         and apolocation for VMs.  I think we've discussed
> >         >         taking advantage of nova's scheduling before.  I need
> >         >         to verify that this will work with what we (RAX) plan
> >         >         to do, but I'd like to get everyone else's thoughts.
> >         >         Also, if we do decide this works for everyone
> >         >         involved, should we make it mandatory that the
> >         >         nova-compute services are running these two filters?
> >         >         I'm also trying to see if we can use this to do our
> >         >         own colocation and apolocation on load balancers, but
> >         >         it looks like it will be a bit complex, if it can even
> >         >         work.  Hopefully, I can have something definitive on
> >         >         that soon.
> >         >
> >         >         Thanks,
> >         >         Brandon
> >         >
> >         >
> >
> >
> >
> >
> >
> >
> > --
> > Stephen Balukoff
> > Blue Box Group, LLC
> > (800)613-4305 x807
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

