[Openstack] [OpenStack] Can Mitaka RamFilter support free hugepages?

Yaguang Tang heut2008 at gmail.com
Fri Sep 8 08:47:01 UTC 2017


On Thu, Sep 7, 2017 at 4:05 PM, Sahid Orentino Ferdjaoui <
sferdjao at redhat.com> wrote:

> On Wed, Sep 06, 2017 at 11:57:25PM -0400, Jay Pipes wrote:
> > Sahid, Stephen, what are your thoughts on this?
> >
> > On 09/06/2017 10:17 PM, Yaguang Tang wrote:
> > > I think the fact that RamFilter can't deal with huge pages is a bug.
> > > Due to this limit, we have to strike a balance between normal memory
> > > and huge pages in order to use RamFilter and NUMATopologyFilter
> > > together. What do you think, Jay?
>
> Huge pages support has been built on top of the NUMA topology
> implementation. You have to consider isolating all hosts which are going
> to handle NUMA instances into a specific host aggregate.
>
> We don't want RAMFilter to handle any NUMA-related feature (in Nova
> world: hugepages, pinning, realtime...), but we also don't want it
> blocking scheduling, and that should not be the case.
>
> I'm surprised to see this bug. I think libvirt is reporting the total
> amount of physical RAM available on the host, and that does not depend
> on the size of the pages.
>
> So if it is true that libvirt is reporting only the amount of small-page
> memory available, we will probably have to fix that point instead of
> the RAMFilter.
>

If we fix it this way, the issue still exists as long as users use
RamFilter and NUMATopologyFilter together. Libvirt reports the total
memory, which includes huge pages that normal instances can't use.

>
> That's because we don't want to add a "hack" to all filters to just pass
> when instances provide NUMA topology constraints, and we can't
> configure the scheduler to use specific filters per aggregate.
>

How about fixing RamFilter to simply pass if the instance's extra specs
specify huge pages?
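For illustration, here is a minimal sketch of what such a pass-through check could look like. This is hypothetical code, not actual Nova source; the function names and parameters are invented for the example, and only the `hw:mem_page_size` extra spec key comes from the thread.

```python
# Hypothetical sketch of the proposed behavior -- not actual Nova code.
# The idea: when the flavor's extra specs request an explicit page size,
# skip MB-based RAM accounting and defer to the NUMATopologyFilter,
# which accounts in pages rather than megabytes.

def wants_explicit_pages(extra_specs):
    """True if the flavor requests an explicit page size."""
    page_size = extra_specs.get('hw:mem_page_size')
    # 'small' means ordinary 4K pages; 'large', 'any', or a numeric
    # size implies hugepage/NUMA accounting happens elsewhere.
    return page_size is not None and page_size != 'small'

def ram_filter_passes(free_ram_mb, requested_ram_mb, extra_specs,
                      ram_allocation_ratio=1.0):
    """RamFilter-style check with the proposed hugepage pass-through."""
    if wants_explicit_pages(extra_specs):
        # Let NUMATopologyFilter decide using free page counts.
        return True
    return requested_ram_mb <= free_ram_mb * ram_allocation_ratio
```

With the numbers from this thread (roughly 12GB free, 16GB requested), the plain RAM check fails, while a hugepage-backed flavor would pass through to page-based accounting instead.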



>
> s.
>
> > >
> > > On Wed, Sep 6, 2017 at 9:22 PM, Jay Pipes <jaypipes at gmail.com
> > > <mailto:jaypipes at gmail.com>> wrote:
> > >
> > >     On 09/06/2017 01:21 AM, Weichih Lu wrote:
> > >
> > >         Thanks for your response.
> > >
> > >         Does this mean that if I want to create an instance with a
> > >         flavor of 16G memory (hw:mem_page_size=large), I need to
> > >         reserve more than 16GB of memory?
> > >         This instance consumes the hugepages resource.
> > >
> > >
> > >     You need to reserve fewer than 50 1GB huge pages if you want to
> > >     launch a 16GB instance on a host with 64GB of RAM. Try reserving
> > >     32 1GB huge pages.
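The arithmetic behind this advice can be sketched as follows. The 2 GiB OS-overhead figure is an assumption chosen to match the `free` output reported in the thread, not a measured value:

```python
# Back-of-the-envelope model: each reserved 1 GiB huge page is carved
# out of the pool that ordinary (4K-page) allocations can use.
def remaining_small_page_gib(total_gib, hugepages_1g, os_overhead_gib=2):
    """RAM left for ordinary allocations after hugepage reservation."""
    return total_gib - hugepages_1g - os_overhead_gib

# 50 reserved pages on a 64 GiB host leaves ~12 GiB, matching the
# "free" output reported earlier in the thread.
print(remaining_small_page_gib(64, 50))  # 12
# 32 reserved pages leaves ~30 GiB of ordinary memory instead.
print(remaining_small_page_gib(64, 32))  # 30
```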
> > >
> > >     Best,
> > >     -jay
> > >
> > >         2017-09-06 1:47 GMT+08:00 Jay Pipes <jaypipes at gmail.com
> > >         <mailto:jaypipes at gmail.com> <mailto:jaypipes at gmail.com
> > >         <mailto:jaypipes at gmail.com>>>:
> > >
> > >
> > >              Please remember to add a topic [nova] marker to your
> > >         subject line.
> > >              Answer below.
> > >
> > >              On 09/05/2017 04:45 AM, Weichih Lu wrote:
> > >
> > >                  Dear all,
> > >
> > >                  I have a compute node with 64GB RAM, and I set 50
> > >         hugepages with a 1GB hugepage size. The command "free" shows
> > >         free memory of about 12GB, and free hugepages is 50.
> > >
> > >
> > >              Correct. By assigning hugepages, you use the memory
> > >         allocated to the
> > >              hugepages.
> > >
> > >                  Then I launch an instance with 16GB memory, with the
> > >         flavor tag hw:mem_page_size=large. It shows Error: No valid
> > >         host was found. There are not enough hosts available.
> > >
> > >
> > >              Right, because you have only 12G of RAM available after
> > >              creating/allocating 50G out of your 64G.
> > >
> > >              Huge pages are entirely separate from the normal memory
> > >              that a flavor consumes. The 16GB of memory in your flavor
> > >              is RAM consumed on the host. The huge pages are individual
> > >              things that are consumed by the NUMA topology that your
> > >              instance will take. RAM != huge pages. Totally different
> > >              things.
> > >
> > >                  And I checked the nova-scheduler log. My compute
> > >         node is removed by RamFilter. I can launch an instance with
> > >         8GB memory successfully, or I can launch an instance with
> > >         16GB memory successfully by removing RamFilter.
> > >
> > >
> > >              That's because RamFilter doesn't deal with huge pages:
> > >              huge pages are a different resource than memory. The
> > >              page itself is the resource.
> > >
> > >              The NUMATopologyFilter is the scheduler filter that
> > >              evaluates the huge page resources on a compute host and
> > >              determines whether there are enough *pages* available for
> > >              the instance. Note that I say *pages* because the unit of
> > >              resource consumption for huge pages is not MB of RAM.
> > >              It's a single memory page.
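As an illustrative model (not Nova's internal code; the function name and parameters here are invented), page-based accounting looks roughly like this:

```python
# Illustrative model of page-based accounting, not Nova internals:
# the filter asks whether enough *pages* of the requested size are
# free, rather than comparing megabytes of RAM.
def fits_in_hugepages(instance_mem_mib, page_size_kib, free_pages):
    """True if the instance's memory fits in the free huge pages."""
    pages_needed = (instance_mem_mib * 1024) // page_size_kib
    return pages_needed <= free_pages

# A 16 GiB instance on 1 GiB pages needs 16 pages, so 50 free pages
# are plenty; 12 free pages would not be enough.
print(fits_in_hugepages(16 * 1024, 1024 * 1024, 50))  # True
print(fits_in_hugepages(16 * 1024, 1024 * 1024, 12))  # False
```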
> > >
> > >              Please read this excellent article by Steve Gordon for
> > >              information on what NUMA and huge pages are and how to
> > >              use them in Nova:
> > >
> > >         http://redhatstackblog.redhat.com/2015/09/15/driving-in-the-fast-lane-huge-page-support-in-openstack-compute/
> > >
> > >              Best,
> > >              -jay
> > >
> > >                  Does RamFilter only check free memory, but not free
> > >         hugepages? How can I solve this problem?
> > >
> > >                  I use the OpenStack Mitaka version.
> > >
> > >                  thanks
> > >
> > >                  WeiChih, Lu.
> > >
> > >                  Best Regards.
> > >
> > >
> > >                  _______________________________________________
> > >                  Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > >                  Post to     : openstack at lists.openstack.org
> > >                  Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > > Tang Yaguang
> > >
> > >
>



-- 
Tang Yaguang

