[Openstack-operators] Venom vulnerability

Fox, Kevin M Kevin.Fox at pnnl.gov
Thu Jun 4 16:10:57 UTC 2015

Great. Thanks for sharing. I'll have to try it myself. :)

From: Cynthia Lopes [clsacramento at gmail.com]
Sent: Thursday, June 04, 2015 9:08 AM
To: Fox, Kevin M
Cc: Steve Gordon; OpenStack Operations Mailing List
Subject: Re: [Openstack-operators] Venom vulnerability

Well, it works with Ceph Giant here. I did not upgrade to CentOS 7.1 though, and I had the Ceph client installed before updating QEMU.
I did that, installed qemu from 7.1, and it didn't break my config. I was able to restart old VMs and deploy new ones.
No need to re-compile qemu-kvm anymore \o/

Yeah, I can't find anything to prove the hosts are no longer vulnerable; I think I'll just give it some time for now...


2015-06-04 17:50 GMT+02:00 Fox, Kevin M <Kevin.Fox at pnnl.gov>:
I was curious because we're running giant and built a custom qemu too. I just rebuilt a patched one to cover Venom, but want to switch to stock 7.1 qemu at some point if it supports Ceph Giant.
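One quick local check for that (a sketch, not an authoritative test): a QEMU built with RBD support lists rbd among the formats qemu-img knows, so you can grep its help output. This assumes the qemu-img that ships with the qemu-kvm package is on the PATH.

```shell
# Sketch: check whether this QEMU build was compiled with RBD support.
# Assumes qemu-img from the same build is on PATH; stderr is discarded
# in case qemu-img is missing entirely.
if qemu-img --help 2>/dev/null | grep -qw rbd; then
  echo "rbd support: yes"
else
  echo "rbd support: no (or qemu-img not found)"
fi
```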

I'm not aware of any check that actually tests the vulnerability. Just checks package versions.
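Beyond package versions, one thing you can at least confirm is that no guest is still executing the old binary: a qemu process whose /proc exe link shows "(deleted)" was started before the package was replaced and hasn't been power-cycled. A minimal sketch (the helper name and sample input are illustrative, not from any tool):

```shell
# Sketch: flag qemu processes still executing a deleted (replaced)
# binary, i.e. guests not restarted since the qemu-kvm update.
# Real input would come from something like:
#   for p in /proc/[0-9]*; do echo "${p##*/} $(readlink "$p/exe")"; done
find_stale_qemu() {
  awk '/qemu/ && /\(deleted\)/ { print $1 }'
}

# Illustrative sample: pid 1234 is still on the old qemu-kvm binary.
sample='1234 /usr/libexec/qemu-kvm (deleted)
1300 /usr/libexec/qemu-kvm
2001 /usr/sbin/sshd'

echo "$sample" | find_stale_qemu   # prints 1234: that guest needs a restart
```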


From: Cynthia Lopes [clsacramento at gmail.com]
Sent: Thursday, June 04, 2015 8:05 AM
To: Fox, Kevin M
Cc: Steve Gordon; OpenStack Operations Mailing List

Subject: Re: [Openstack-operators] Venom vulnerability


I did not update my Ceph client. The version before and after is:

# ceph -v
ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578)

Apart from checking my qemu-kvm version and having shut my instances down and brought them back up, any ideas on how to validate that my host is no longer vulnerable?


2015-06-04 16:59 GMT+02:00 Fox, Kevin M <Kevin.Fox at pnnl.gov>:
For the record, what version of ceph are you using before and after?


From: Cynthia Lopes
Sent: Thursday, June 04, 2015 1:27:53 AM
To: Steve Gordon
Cc: OpenStack Operations Mailing List
Subject: Re: [Openstack-operators] Venom vulnerability

Hi guys,

Just for feedback, in case somebody else has compute nodes on CentOS 7.0 with Icehouse and uses Ceph.

What I did that worked for me:

# Remove all QEMU- and libvirt-related RPMs. I had recompiled QEMU for RBD, and the libvirt I had was not compatible with the patched QEMU.
# This also removes openstack-nova-compute and so on, so be careful...
yum remove -y `rpm -qa | grep qemu`
yum remove -y `rpm -qa | grep libvirt`

# I updated the base and update CentOS repositories to get the most up-to-date versions. I have local repositories, so the commands should be adapted...
sed -i "s|/centos7|/centos7.1|g" CentOS-Base7.repo
sed -i "s|/centos7update|/centos7.1update|g" CentOS-Base7.repo

#I had to do an update...
yum clean all
yum -y update  # I had some dependency problems, but only with the Ceph packages

yum -y update --skip-broken #but ignoring them worked just fine

cd /etc/yum.repos.d/
# The update added all these repos to my yum.repos.d, so I deleted them (I use local repositories)
rm -f CentOS-Base.repo CentOS-Debuginfo.repo CentOS-fasttrack.repo CentOS-Sources.repo CentOS-Vault.repo

#Then I re-installed QEMU and Libvirt with CentOS7.1 repositories (base and update)
yum -y install kvm qemu-kvm python-virtinst libvirt libvirt-python virt-manager libguestfs-tools
service libvirtd start

#I use puppet to configure my host, so I just re-run it to re-install nova-compute and re-configure
puppet agent -t  # replace this with your own procedure for configuring the compute node

service openstack-nova-compute status  # check that nova-compute is running...

# I had a console.log file in the instances directory that had become owned by root, so be sure to have everything owned by nova
chown -R nova:nova /var/lib/nova/

#Of course, at this moment all my instances were shutoff, so just restart them...

source keystonerc* #credentials

vms=`nova list --all-tenants --minimal --host $host | grep -v ID | grep -v "+-" | awk '{print $2}'`  # IDs of the guest VMs on this host ($host is the compute node's hostname)

for vm in $vms ; do nova start $vm; done  #start vms...
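The grep -v filtering above works but is brittle (a VM whose name contains "ID" would be dropped along with the header). A slightly sturdier sketch that keys on the shape of the ID column instead; the sample table and helper name are illustrative:

```shell
# Sketch: pull instance UUIDs out of a `nova list`-style ASCII table by
# matching the UUID shape in the first data column, instead of
# filtering out the header and border rows with grep -v.
extract_ids() {
  awk -F'|' '$2 ~ /[0-9a-f]+-[0-9a-f]+-[0-9a-f]+-[0-9a-f]+-[0-9a-f]+/ { gsub(/ /, "", $2); print $2 }'
}

# Illustrative sample, shaped like `nova list --minimal` output.
sample='+--------------------------------------+------+
| ID                                   | Name |
+--------------------------------------+------+
| 11111111-2222-3333-4444-555555555555 | vm1  |
| 66666666-7777-8888-9999-aaaaaaaaaaaa | vm2  |
+--------------------------------------+------+'

echo "$sample" | extract_ids   # prints the two UUIDs, one per line
```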

Hope this might be useful for someone...

Cynthia Lopes do Sacramento

2015-06-03 2:35 GMT+02:00 Steve Gordon <sgordon at redhat.com>:
----- Original Message -----
> From: "Erik McCormick" <emccormick at cirrusseven.com>
> To: "Tim Bell" <Tim.Bell at cern.ch>
> On Tue, Jun 2, 2015 at 5:34 AM, Tim Bell <Tim.Bell at cern.ch> wrote:
> >  I had understood that CentOS 7.1 qemu-kvm has RBD support built-in. It
> > was not there on 7.0 but http://tracker.ceph.com/issues/10480 implies it
> > is in 7.1.
> >
> > You could check on the centos mailing lists to be sure.
> >
> > Tim
> >
> It's about time! Thanks for the pointer Tim.
> Cynthia, If for some reason it's not in the Centos ones yet, I've been
> using the RHEV SRPMs and building the packages. You don't have to mess with
> the spec or anything. Just run them through rpmbuild and push them out.
> http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHEV/SRPMS/
> -Erik
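The rebuild Erik describes can be sketched as below; the exact SRPM filename under that directory isn't given here, so it is left as a parameter.

```shell
# Sketch of rebuilding the RHEV qemu-kvm SRPM, per Erik's note that no
# spec changes are needed. $1 is the SRPM filename from the SRPMS
# directory above (not filled in here). Requires the rpm-build package
# and the SRPM's build dependencies (yum-builddep can install those).
rebuild_srpm() {
  curl -fO "http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHEV/SRPMS/$1" &&
  rpmbuild --rebuild "$1"
  # resulting binary RPMs land under ~/rpmbuild/RPMS/<arch>/
}
```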

FWIW, equivalent builds for use with oVirt, RDO, etc. are being created under the auspices of the CentOS Virt SIG:




OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org

