[openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

Stephen Balukoff sbalukoff at bluebox.net
Thu Aug 28 20:49:49 UTC 2014


I'm trying to think of a use case that wouldn't be satisfied using those
filters and am not coming up with anything. As such, I don't see a problem
using them to fulfill our requirements around colocation and apolocation.
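For anyone who hasn't used those hints before, here's a rough sketch (names and UUIDs are placeholders, not real resources) of the request body a boot call would carry when the SameHostFilter / DifferentHostFilter are enabled, per the Compute API's os:scheduler_hints extension:

```python
# Sketch of a Nova boot request body using the scheduler hints that the
# DifferentHostFilter reads.  SameHostFilter reads a "same_host" hint
# with the same shape.  All UUIDs below are placeholders.

def boot_request_with_hints(name, image_ref, flavor_ref, different_host):
    """Build a boot request body asking the scheduler to place the new
    VM on a different host than the listed servers."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
        },
        # Read by DifferentHostFilter: a list of server UUIDs whose
        # hosts the new instance must avoid.
        "os:scheduler_hints": {
            "different_host": different_host,
        },
    }

body = boot_request_with_hints(
    "lb-vm-2", "IMAGE_UUID", "FLAVOR_ID",
    different_host=["UUID_OF_FIRST_LB_VM"],
)
```

The same shape works for colocation by swapping in a "same_host" hint.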

Stephen


On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan <brandon.logan at rackspace.com>
wrote:

> Yeah, we were looking at the SameHost and DifferentHost filters, and
> they will probably do what we need.  I was hoping we could use a
> combination of both, but I believe we can make it work with those
> filters.
>
> Thanks,
> Brandon
>
> On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> > Brandon
> >
> >
> > I am not sure how ready that nova feature is for general use and have
> > asked our nova lead about that. He is on vacation but should be back
> > by the start of next week. I believe this is the right approach for us
> > moving forward.
> >
> >
> >
> > We cannot make it mandatory to run the two filters, but we can say in
> > the documentation that if these two filters aren't set, we cannot
> > guarantee anti-affinity or affinity.
> >
> >
> > The other way we can implement this is by using availability zones and
> > host aggregates. This is one technique we use to make sure we deploy
> > our in-cloud services in an HA model. This would also assume that the
> > operator is setting up availability zones, which we cannot assume.
> >
> >
> >
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> >
> >
> >
> > Sahara is currently using the following filters to support host
> > affinity, probably because they did the work before ServerGroups
> > existed. I am not advocating the use of those filters, but just
> > showing you that we can document the feature, and it will be up to
> > the operator to set it up to get the right behavior.
> >
> >
> > Regards
> >
> >
> > Susanne
> >
> >
> >
> > Anti-affinity
> > One of the problems with Hadoop running on OpenStack is that there is
> > no ability to control where a machine is actually running. We cannot
> > be sure that two new virtual machines are started on different
> > physical machines. As a result, any replication within the cluster is
> > not reliable, because all replicas may end up on one physical machine.
> > The anti-affinity feature provides the ability to explicitly tell
> > Sahara to run specified processes on different compute nodes. This is
> > especially useful for the Hadoop datanode process, to make HDFS
> > replicas reliable.
> > The anti-affinity feature requires certain scheduler filters to be
> > enabled on Nova. Edit your /etc/nova/nova.conf in the following way:
> >
> > [DEFAULT]
> >
> > ...
> >
> > scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> > scheduler_default_filters=DifferentHostFilter,SameHostFilter
> > This feature is supported by all plugins out of the box.
> >
> >
> > http://docs.openstack.org/developer/sahara/userdoc/features.html
> >
> >
> >
> >
> >
> > On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan
> > <brandon.logan at rackspace.com> wrote:
> >         Nova scheduler has ServerGroupAffinityFilter and
> >         ServerGroupAntiAffinityFilter, which do the colocation and
> >         apolocation
> >         for VMs.  I think this is something we've discussed before
> >         about taking
> >         advantage of nova's scheduling.  I need to verify that this
> >         will work
> >         with what we (RAX) plan to do, but I'd like to get everyone
> >         else's
> >         thoughts.  Also, if we do decide this works for everyone
> >         involved,
> >         should we make it mandatory that the nova-compute services are
> >         running
> >         these two filters?  I'm also trying to see if we can use this
> >         to also do
> >         our own colocation and apolocation on load balancers, but it
> >         looks like
> >         it will be a bit complex if it can even work.  Hopefully, I
> >         can have
> >         something definitive on that soon.
> >
> >         Thanks,
> >         Brandon
> >         _______________________________________________
> >         OpenStack-dev mailing list
> >         OpenStack-dev at lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
>
>

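For reference, the server-group flow that those ServerGroup filters expect can be sketched as follows (a rough outline, not tested against a live cloud; all names and UUIDs are placeholders): create a group with a policy via POST /os-server-groups, then boot members with the "group" scheduler hint.

```python
# Sketch of the two request bodies involved in the server-group
# approach.  Policy is "affinity" or "anti-affinity"; the filters
# enforce it at scheduling time.  UUIDs below are placeholders.

def server_group_request(name, policy):
    """Body for POST /os-server-groups creating a group with one policy."""
    return {"server_group": {"name": name, "policies": [policy]}}

def boot_into_group(name, image_ref, flavor_ref, group_id):
    """Boot request whose placement is constrained by the group's policy,
    via the "group" scheduler hint."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
        },
        "os:scheduler_hints": {"group": group_id},
    }

group = server_group_request("lb-cluster-1", "anti-affinity")
boot = boot_into_group("lb-vm-1", "IMAGE_UUID", "FLAVOR_ID", "GROUP_UUID")
```

Every member booted with the same group hint then lands on a distinct host (for anti-affinity) or the same host (for affinity), which is what makes this attractive for the colocation/apolocation requirements above.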


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

