Good morning,
I have set up OpenStack on an Ubuntu 22.04 VM.
The installation went well:
I created an account and a project, and completed all the steps for
creating an Ubuntu 18 instance.
But when I launch the instance, I only get error messages.
My question:
Is there someone who can help me and provide a complete, tested
procedure?
With my sincere thanks

On Sat, May 6, 2023 at 10:21, <openstack-discuss-request@lists.openstack.org> wrote:
Send openstack-discuss mailing list submissions to
        openstack-discuss@lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
        https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss

or, via email, send a message with subject or body 'help' to
        openstack-discuss-request@lists.openstack.org

You can reach the person managing the list at
        openstack-discuss-owner@lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of openstack-discuss digest..."


Today's Topics:

   1. [Nova] Clean up dead vms entries from DB (Satish Patel)
   2. [OPENSTACK][rabbitmq] using quorum queues (Nguyen Huu Khoi)
   3. Re: openstack-discuss Digest, Vol 55, Issue 16 (BEDDA Fadhel)


----------------------------------------------------------------------

Message: 1
Date: Fri, 5 May 2023 15:14:18 -0400
From: Satish Patel <satish.txt@gmail.com>
To: OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: [Nova] Clean up dead vms entries from DB
Message-ID:
        <CAPgF-fqVSj3YKx7cYSF3gzAbL54veXM=S18K5-sX6XMCbjTWKA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Folks,

I have a small environment where the controllers and computes run on the
same nodes, and yes, that is a bad idea. What happened is that the machine
got OOM-killed and crashed, and instances got stuck in a zombie state. I
deleted all the VMs, but that didn't work. I then used the "virsh destroy"
command to remove those VMs and everything recovered, but the "openstack
hypervisor stats show" command still says I have 91 VMs hanging around in
the DB.

How do I clean up VM entries from the Nova DB that don't exist at all?

I have tried the command "nova-manage db archive_deleted_rows" but it
didn't help. How does Nova sync the DB with the current state?
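
A minimal cleanup sketch for this situation, assuming a fairly recent Nova
release (the archive/purge flags below exist in current nova-manage, but
verify them against your deployed version; `<service-id>` is a placeholder):

```shell
# Move all soft-deleted rows into the shadow tables, looping until done...
nova-manage db archive_deleted_rows --until-complete
# ...then purge the archived shadow-table data.
nova-manage db purge --all

# Stale compute service records can also keep dead hypervisor stats around:
openstack compute service list
# openstack compute service delete <service-id>   # only for hosts that are gone
```

If the DB rows still disagree with what libvirt actually runs, resetting the
instance state ("openstack server set --state error" then delete) is another
option before resorting to manual SQL.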

# openstack hypervisor stats show
This command is deprecated.
+----------------------+--------+
| Field                | Value  |
+----------------------+--------+
| count                | 3      |
| current_workload     | 11     |
| disk_available_least | 15452  |
| free_disk_gb         | 48375  |
| free_ram_mb          | 192464 |
| local_gb             | 50286  |
| local_gb_used        | 1911   |
| memory_mb            | 386570 |
| memory_mb_used       | 194106 |
| running_vms          | 91     |
| vcpus                | 144    |
| vcpus_used           | 184    |
+----------------------+--------+


Technically I have only a single VM:

# openstack server list --all
+--------------------------------------+------+--------+--------------------------------+------------+----------+
| ID                                   | Name | Status | Networks                       | Image      | Flavor   |
+--------------------------------------+------+--------+--------------------------------+------------+----------+
| b33fd79d-2b90-41dd-a070-29f92ce205e7 | foo1 | ACTIVE | 1=100.100.75.22, 192.168.1.139 | ubuntu2204 | m1.small |
+--------------------------------------+------+--------+--------------------------------+------------+----------+
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <https://lists.openstack.org/pipermail/openstack-discuss/attachments/20230505/529f5c69/attachment-0001.htm>

------------------------------

Message: 2
Date: Sat, 6 May 2023 09:29:21 +0700
From: Nguyen Huu Khoi <nguyenhuukhoinw@gmail.com>
To: OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: [OPENSTACK][rabbitmq] using quorum queues
Message-ID:
        <CABAODReP0sq9JpiHyzKJqa3EhWp=8ajDJxTXupj_7pXA+_46wQ@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hello guys,
Is there anyone who uses quorum queues with OpenStack? Could you give
some feedback on how they compare with classic queues?
Thank you.
Nguyen Huu Khoi
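
For reference, a minimal sketch of how quorum queues are typically enabled
for OpenStack services via oslo.messaging. The `rabbit_quorum_queue` option
exists in recent oslo.messaging releases, but whether your deployed version
supports it is an assumption to verify; existing classic queues are not
migrated automatically, so queues must be recreated after the switch:

```ini
[oslo_messaging_rabbit]
# Declare RPC queues as RabbitMQ quorum queues instead of classic queues.
rabbit_quorum_queue = true
```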
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <https://lists.openstack.org/pipermail/openstack-discuss/attachments/20230506/63494316/attachment-0001.htm>

------------------------------

Message: 3
Date: Sat, 6 May 2023 10:18:32 +0200
From: BEDDA Fadhel <fadhel.bedda@gmail.com>
To: openstack-discuss@lists.openstack.org
Subject: Re: openstack-discuss Digest, Vol 55, Issue 16
Message-ID:
        <CAE1GhS4387gW87jZEdxULPgmXfcMcMY9yiZffpP36dBwSLhMww@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hello,
I set up OpenStack on an Ubuntu 22.04 VM.
The installation went well:
I created an account and a project, and completed all the steps for
creating an Ubuntu 18 instance.
But when I launch the instance, I only get error messages.
My question:
Is there someone who can help me and provide a complete, tested
procedure?
With my sincere thanks

On Fri, May 5, 2023 at 20:09, <openstack-discuss-request@lists.openstack.org>
wrote:

> Today's Topics:
>
>    1. [release] Release countdown for week R-21, May 08-12 (Előd Illés)
>    2. Re: Kubernetes Conformance 1.24 + 1.25 (Guilherme Steinmüller)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 5 May 2023 14:30:58 +0000
> From: Előd Illés <elod.illes@est.tech>
> To: "openstack-discuss@lists.openstack.org"
>         <openstack-discuss@lists.openstack.org>
> Subject: [release] Release countdown for week R-21, May 08-12
> Message-ID:
>         <
> VI1P18901MB07511B14C9655567C4A5EC88FF729@VI1P18901MB0751.EURP189.PROD.OUTLOOK.COM
> >
>
> Content-Type: text/plain; charset="utf-8"
>
> Development Focus
> -----------------
>
> The Bobcat-1 milestone is next week, on May 11th, 2023! Project team
> plans for the 2023.2 Bobcat cycle should now be solidified.
>
> General Information
> -------------------
>
> Libraries need to be released at least once per milestone period. Next
> week, the release team will propose releases for any library which had
> changes but has not been otherwise released since the 2023.1 Antelope
> release.
> PTLs or release liaisons, please watch for these and give a +1 to
> acknowledge them. If there is some reason to hold off on a release, let
> us know that as well, by posting a -1. If we do not hear anything at all
> by the end of the week, we will assume things are OK to proceed.
>
> NB: If one of your libraries is still releasing 0.x versions, start
> thinking about when it will be appropriate to do a 1.0 version. The
> version number does signal the state, real or perceived, of the library,
> so we strongly encourage going to a full major version once things are
> in a good and usable state.
>
> Upcoming Deadlines & Dates
> --------------------------
>
> Bobcat-1 milestone: May 11th, 2023
> OpenInfra Summit Vancouver (including PTG): June 13-15, 2023
> Final 2023.2 Bobcat release:  October 4th, 2023
>
>
> Előd Illés
> irc: elodilles @ #openstack-release
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> https://lists.openstack.org/pipermail/openstack-discuss/attachments/20230505/df4fecbf/attachment-0001.htm
> >
>
> ------------------------------
>
> Message: 2
> Date: Fri, 5 May 2023 18:03:48 +0000
> From: Guilherme Steinmüller <gsteinmuller@vexxhost.com>
> To: Kendall Nelson <kennelson11@gmail.com>
> Cc: Jake Yip <jake.yip@ardc.edu.au>, OpenStack Discuss
>         <openstack-discuss@lists.openstack.org>, "dale@catalystcloud.nz"
>         <dale@catalystcloud.nz>
> Subject: Re: Kubernetes Conformance 1.24 + 1.25
> Message-ID:
>         <
> YT3P288MB02720867AA20B5F11C042EFCAB729@YT3P288MB0272.CANP288.PROD.OUTLOOK.COM
> >
>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hey there!
>
> I am trying to run conformance against 1.25 and 1.26 now, but it looks
> like this change is still ongoing?
> https://review.opendev.org/c/openstack/magnum/+/874092
>
> I'm still facing issues creating the cluster because the admission plugin
> "PodSecurityPolicy" is unknown.
>
> Thank you,
> Guilherme Steinmuller
> ________________________________
> From: Kendall Nelson <kennelson11@gmail.com>
> Sent: 21 February 2023 17:38
> To: Guilherme Steinmüller <gsteinmuller@vexxhost.com>
> Cc: Jake Yip <jake.yip@ardc.edu.au>; OpenStack Discuss <
> openstack-discuss@lists.openstack.org>; dale@catalystcloud.nz <
> dale@catalystcloud.nz>
> Subject: Re: Kubernetes Conformance 1.24 + 1.25
>
> Circling back to this thread-
>
> Thanks Jake for getting this rolling!
> https://review.opendev.org/c/openstack/magnum/+/874092
>
> -Kendall
>
> On Wed, Feb 15, 2023 at 6:34 AM Guilherme Steinmüller <
> gsteinmuller@vexxhost.com<mailto:gsteinmuller@vexxhost.com>> wrote:
> Hi Jake,
>
> Yeah, that could be it.
>
> On devstack magnum master, the kube-apiserver pod fails to start with
> rancher 1.25 hyperkube image with:
>
> Feb 14 20:24:06 k8s-cluster-dgpwfkugdna5-master-0 conmon[119164]: E0214
> 20:24:06.615919       1 run.go:74] "command failed" err="admission-control
> plugin \"PodSecurityPolicy\" is unknown"
>
> Regards,
> Guilherme Steinmuller
>
> On Tue, Feb 14, 2023 at 10:03 AM Jake Yip <jake.yip@ardc.edu.au<mailto:
> jake.yip@ardc.edu.au>> wrote:
> Hi Guilherme Steinmuller,
>
> Is the issue with 1.25 the removal of PodSecurityPolicy? And that there
> are pieces of PSP in Magnum code. I've been trying to remove it.
>
> Regards,
> Jake
>
>
> On 14/2/2023 11:35 pm, Guilherme Steinmüller wrote:
> > Hi everyone!
> >
> > Dale, thanks for your comments here. I no longer have my devstack which
> > I tested v1.25. However, you pointed out something I haven't noticed:
> > for v1.25 I tried using the fedora coreos that is shipped with devstack,
> > which is f36.
> >
> > I will try to reproduce it again, but now using a newer fedora coreos.
> > If it fails, I will be happy to share my results here for us to figure
> > out and get certified for 1.25!
> >
> > Keep in tune!
> >
> > Thank you,
> > Guilherme Steinmuller
> >
> > On Tue, Feb 14, 2023 at 9:26 AM Jake Yip <jake.yip@ardc.edu.au<mailto:
> jake.yip@ardc.edu.au>
> > <mailto:jake.yip@ardc.edu.au<mailto:jake.yip@ardc.edu.au>>> wrote:
> >
> >     On 14/2/2023 6:53 am, Kendall Nelson wrote:
> >      > Hello All!
> >      >
> >      > First of all, I want to say a huge thanks to Guilherme
> >     Steinmuller for
> >      > all his help ensuring that OpenStack Magnum remains Kubernetes
> >     Certified
> >      > [1]! We are certified for v1.24!
> >      >
> >     Wow great work Guilherme Steinmuller!
> >
> >     - Jake
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> https://lists.openstack.org/pipermail/openstack-discuss/attachments/20230505/9afb1d74/attachment.htm
> >
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> openstack-discuss mailing list
> openstack-discuss@lists.openstack.org
>
>
> ------------------------------
>
> End of openstack-discuss Digest, Vol 55, Issue 16
> *************************************************
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <https://lists.openstack.org/pipermail/openstack-discuss/attachments/20230506/385b520c/attachment.htm>

------------------------------

Subject: Digest Footer

_______________________________________________
openstack-discuss mailing list
openstack-discuss@lists.openstack.org


------------------------------

End of openstack-discuss Digest, Vol 55, Issue 17
*************************************************