Create OpenStack VMs in a few seconds

open infra openinfradn at gmail.com
Thu Apr 22 15:06:30 UTC 2021


Thanks Donny.

On Thu, Apr 22, 2021 at 8:15 PM Donny Davis <donny at fortnebula.com> wrote:

> On Thu, Apr 22, 2021 at 10:24 AM open infra <openinfradn at gmail.com> wrote:
>
>>
>>
>> On Thu, Apr 22, 2021 at 7:46 PM Donny Davis <donny at fortnebula.com> wrote:
>>
>>>
>>> https://grafana.opendev.org/d/BskTteEGk/nodepool-openedge?orgId=1&from=1587349229268&to=1594290731707
>>>
>>> I know nodepool keeps a number of instances up and available for use,
>>> but on FN it was usually tapped to the max so I am not sure this logic
>>> applies.
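For context, the "keeps a number of instances up" behavior is the `min-ready` setting in Nodepool's configuration. A hedged sketch of the relevant fragment (the provider, cloud, label, flavor, and image names here are all assumptions, not values from this thread):

```yaml
# nodepool.yaml (fragment) -- provider/label names are illustrative
labels:
  - name: ubuntu-focal
    min-ready: 2          # keep two booted, unassigned nodes waiting

providers:
  - name: my-cloud
    cloud: my-cloud       # matches an entry in clouds.yaml
    cloud-images:
      - name: ubuntu-focal
    pools:
      - name: main
        max-servers: 10
        labels:
          - name: ubuntu-focal
            cloud-image: ubuntu-focal
            flavor-name: m1.small
```

With `min-ready` above zero, a request for that label is satisfied from the warm pool rather than waiting for a fresh boot, at the cost of permanently occupying that much quota.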
>>>
>>> Anyway, in my performance testing and tuning of OpenStack to get it
>>> moving as fast as possible, my determination was that local NVMe storage
>>> fit the rapid-fire use case best.  Next best was shared NVMe storage via
>>> iSCSI using Cinder caching.
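The "Cinder caching" mentioned here is Cinder's image-volume cache, which keeps a cached copy of the glance image on the backend so later volume creations are fast clones instead of full image downloads. A hedged cinder.conf sketch (the backend section name and the placeholder IDs are assumptions):

```ini
# cinder.conf (fragment) -- backend name "nvme-iscsi" is illustrative
[DEFAULT]
# internal tenant that owns the cached image-volumes
cinder_internal_tenant_project_id = <project-uuid>
cinder_internal_tenant_user_id = <user-uuid>

[nvme-iscsi]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
```

The first boot from a given image still pays the download cost; subsequent boots from the same image hit the cache.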
>>>
>>
>> Does local NVMe mean using the worker node's local storage?
>>
>>
>>> On Thu, Apr 22, 2021 at 10:06 AM Donny Davis <donny at fortnebula.com>
>>> wrote:
>>>
>>>> FWIW I had some excellent startup times in Fort Nebula due to using
>>>> local storage backed by NVMe drives. Once the cloud image was copied to
>>>> the hypervisor, VM startups were usually measured in seconds. Not sure
>>>> if that fits the requirements, but sub-30-second startups were the norm.
>>>> That time included the actual connection from nodepool to the
>>>> instance, so I imagine the local start time was even faster.
>>>>
>>>> What is the requirement for startup times?
>>>>
>>>
>> The end user is supposed to access an application of his/her choice,
>> running on a VM dedicated to that user.
>> Based on the application the end user selects, the VM should be made
>> available to the end user along with the required apps and hardware.
>>
>>
>>
>>>
>>>> On Mon, Apr 19, 2021 at 9:31 AM Jeremy Stanley <fungi at yuggoth.org>
>>>> wrote:
>>>>
>>>>> On 2021-04-19 16:31:24 +0530 (+0530), open infra wrote:
>>>>> > On Thu, Mar 25, 2021 at 8:38 PM Jeremy Stanley <fungi at yuggoth.org>
>>>>> wrote:
>>>>> [...]
>>>>> > > The next best thing is basically what Nodepool[*] does: start new
>>>>> > > virtual machines ahead of time and keep them available in the
>>>>> > > tenant. This does of course mean you're occupying additional quota
>>>>> > > for whatever base "ready" capacity you've set for your various
>>>>> > > images/flavors, and that you need to be able to predict how many of
>>>>> > > what kinds of virtual machines you're going to need in advance.
>>>>> > >
>>>>> > > [*] https://zuul-ci.org/docs/nodepool/
>>>>> >
>>>>> > Is it recommended to use nodepool in a production environment?
>>>>>
>>>>> I can't begin to guess what you mean by "in a production
>>>>> environment," but it forms the lifecycle management basis for our
>>>>> production CI/CD system (as it does for many other Zuul
>>>>> installations). In the case of the deployment I help run, it's
>>>>> continuously connected to over a dozen production clouds, both
>>>>> public and private.
>>>>>
>>>>> But anyway, I didn't say "use Nodepool." I suggested you look at
>>>>> "what Nodepool does" as a model for starting server instances in
>>>>> advance within the tenants/projects which regularly require instant
>>>>> access to new virtual machines.
>>>>> --
>>>>> Jeremy Stanley
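The "what Nodepool does" model Jeremy describes is easy to sketch independently of Nodepool itself: keep a small pool of pre-booted servers, hand one out instantly on request, and boot a replacement to restore the floor. A minimal Python sketch of that pattern, with `create_server` as a stand-in for a real cloud API call (e.g. something like openstacksdk's server-create), not a real implementation:

```python
# Sketch of the "ready pool" pattern: servers are booted ahead of time,
# so an acquire() is instant while the slow boot happens off the request path.
from collections import deque
from itertools import count

class ReadyPool:
    def __init__(self, create_server, min_ready=2):
        self.create_server = create_server
        self.min_ready = min_ready
        self.ready = deque()
        self.replenish()

    def replenish(self):
        # Boot servers until the ready set is back at min_ready.
        # In a real system this would run asynchronously.
        while len(self.ready) < self.min_ready:
            self.ready.append(self.create_server())

    def acquire(self):
        # Hand out a pre-booted server immediately, then top the pool up.
        server = self.ready.popleft()
        self.replenish()
        return server

ids = count(1)
pool = ReadyPool(lambda: f"vm-{next(ids)}", min_ready=2)
print(pool.acquire())   # vm-1: served from the warm pool
print(len(pool.ready))  # 2: pool topped back up
```

The trade-off is exactly the one described above: the pool permanently occupies quota for `min_ready` servers per image/flavor, and you have to predict the right floor in advance.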
>>>>>
>>>>
>>>>
>>>> --
>>>> ~/DonnyD
>>>> C: 805 814 6800
>>>> "No mission too difficult. No sacrifice too great. Duty First"
>>>>
>>>
>>>
>>> --
>>> ~/DonnyD
>>> C: 805 814 6800
>>> "No mission too difficult. No sacrifice too great. Duty First"
>>>
>>
> > Does local NVMe mean using the worker node's local storage?
> Yes, the compute nodes were configured to just use the local storage
> (which I believe is the default). The directory /var/lib/nova was
> mounted on a dedicated NVMe device.
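Concretely, the mount described here is just a dedicated filesystem on a local NVMe device at Nova's state path, so ephemeral instance disks land on fast local media. A sketch (the device name is an assumption):

```
# /etc/fstab (fragment) -- device name /dev/nvme0n1 is illustrative
/dev/nvme0n1  /var/lib/nova  xfs  defaults,noatime  0  0
```

With the default Nova configuration (`images_type = qcow2` local storage), instance disks under /var/lib/nova/instances then inherit the NVMe performance.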
>
> > The end user is supposed to access an application of his/her choice,
> > running on a VM dedicated to that user.
> > Based on the application the end user selects, the VM should be made
> > available to the end user along with the required apps and hardware.
> This is really a two-part answer. You need a portal or mechanism for end
> users to request a templated application stack (meaning it may take one or
> more machines to support the application).
> You also need a method to create cloud images with your applications
> pre-baked into them, with all of the application code and configuration
> (as best you can) already contained in the image. Disk Image Builder is
> some really great software that can handle the image creation part for
> you. The OpenStack infra team does this daily to support the CI. They use
> DIB, so it's battle-tested.
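As a sketch of the image-baking step with DIB (the `my-app` element is hypothetical; `ubuntu-minimal` and `vm` are standard diskimage-builder elements):

```
# Assumes diskimage-builder is installed, e.g.: pip install diskimage-builder
export ELEMENTS_PATH=./elements   # where the hypothetical my-app element lives
disk-image-create -o ubuntu-myapp ubuntu-minimal vm my-app
```

The custom element would carry the install and configuration scripts for the application, so every VM booted from `ubuntu-myapp` comes up with the app already in place instead of configuring itself at boot.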
>
> It is likely you would end up creating some portal or using a CMP (Cloud
> Management Platform) to handle end-user requests. For instance, when end
> users want access to application X, the CMP would orchestrate wiring
> together all of the bits and pieces and then respond with access
> information for the app. You can use OpenStack's Heat to do the same
> thing, depending on what you want the end-user experience to be.
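A minimal Heat sketch of the "templated application stack" idea (the flavor and network names are assumptions; the image would be one of the pre-baked DIB images):

```yaml
# app-stack.yaml -- hedged sketch of a single-VM application template
heat_template_version: 2018-08-31
description: Single-VM app stack with the application pre-baked in the image

parameters:
  app_image:
    type: string
    description: Name of the pre-baked image for the selected application

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: app_image }
      flavor: m1.small          # assumption
      networks:
        - network: private      # assumption

outputs:
  app_ip:
    description: Address to hand back to the end user
    value: { get_attr: [app_server, first_address] }
```

A portal or CMP would then just call `openstack stack create -t app-stack.yaml --parameter app_image=...` per user request and return the `app_ip` output.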
>
> At this point in OpenStack's maturity, it can do just about anything you
> want it to. You just need to be specific about what you are asking your
> infrastructure to do and configure it for the use case.
>
>

I was planning to use StarlingX to manage OpenStack's underlying
infrastructure, and to keep storage separate (from the worker/compute
nodes), but I am not sure how this affects the boot time of VMs.
I am trying to achieve low latency.



> Hope this helps and good luck.
>


Thanks again!


> Cheers
> ~DonnyD
>
>
>

