[openstack-dev] A few questions on using COE puppet/cobbler...

Paul Michali pcm at cisco.com
Mon May 20 16:20:39 UTC 2013


Hmm… getting worse. I tried setting both the index-url and the timeout, and then just the timeout, but it is still failing. With the latest run, I see this:

err: /Stage[main]/Graphite/Package[graphite-web]/ensure: change from absent to present failed: Execution of '/usr/bin/pip install -q --index-url=http://ucs-build-server/packages/simple/ graphite-web' returned 1:   Cannot fetch index base URL http://ucs-build-server/packages/simple/
  Could not find any downloads that satisfy the requirement graphite-web
No distributions at all found for graphite-web
Storing complete log in /root/.pip/pip.log

debug: Puppet::Type::Package::ProviderPip: Executing '/usr/bin/pip freeze'
debug: Puppet::Type::Package::ProviderPip: Executing '/usr/bin/pip install -q --index-url=http://ucs-build-server/packages/simple/ collectd '
err: /Stage[main]/Collectd/Package[collectd ]/ensure: change from absent to present failed: Execution of '/usr/bin/pip install -q --index-url=http://ucs-build-server/packages/simple/ collectd ' returned 1:   Cannot fetch index base URL http://ucs-build-server/packages/simple/
  Could not find any downloads that satisfy the requirement collectd
No distributions at all found for collectd
Storing complete log in /root/.pip/pip.log

notice: /Stage[main]/Graphite/File[/etc/apache2/sites-available/graphite]: Dependency Package[graphite-web] has failures: true
warning: /Stage[main]/Graphite/File[/etc/apache2/sites-available/graphite]: Skipping because of failed dependencies
notice: /Stage[main]/Graphite/File[/etc/apache2/sites-enabled/graphite]: Dependency Package[graphite-web] has failures: true
warning: /Stage[main]/Graphite/File[/etc/apache2/sites-enabled/graphite]: Skipping because of failed dependencies
notice: /Stage[main]/Graphite/File[/opt/graphite/webapp/graphite/local_settings.py]: Dependency Package[graphite-web] has failures: true
warning: /Stage[main]/Graphite/File[/opt/graphite/webapp/graphite/local_settings.py]: Skipping because of failed dependencies
notice: /Stage[main]/Graphite/Exec[graphite-syncdb]: Dependency Package[graphite-web] has failures: true
warning: /Stage[main]/Graphite/Exec[graphite-syncdb]: Skipping because of failed dependencies
debug: Puppet::Type::Package::ProviderPip: Executing '/usr/bin/pip freeze'
debug: Puppet::Type::Package::ProviderPip: Executing '/usr/bin/pip install -q --index-url=http://ucs-build-server/packages/simple/ carbon'
err: /Stage[main]/Graphite/Package[carbon]/ensure: change from absent to present failed: Execution of '/usr/bin/pip install -q --index-url=http://ucs-build-server/packages/simple/ carbon' returned 1:   Cannot fetch index base URL http://ucs-build-server/packages/simple/
  Could not find any downloads that satisfy the requirement carbon
No distributions at all found for carbon
Storing complete log in /root/.pip/pip.log

So, it looks like three packages are not installing. The odd thing is that it is not taking 180 seconds per try; it seems to come right back with a failure. The pip.log says:

root at ucs-build-server:/etc/puppet/manifests# more ~/.pip/pip.log 
------------------------------------------------------------
/usr/bin/pip run on Mon May 20 12:14:20 2013
Downloading/unpacking carbon
  Getting page http://ucs-build-server/packages/simple/carbon
  Could not fetch URL http://ucs-build-server/packages/simple/carbon: HTTP Error 502: Cannot Connect
  Will skip URL http://ucs-build-server/packages/simple/carbon when looking for download links for carbon
  Getting page http://ucs-build-server/packages/simple/
  Could not fetch URL http://ucs-build-server/packages/simple/: HTTP Error 502: Cannot Connect
  Will skip URL http://ucs-build-server/packages/simple/ when looking for download links for carbon
  Cannot fetch index base URL http://ucs-build-server/packages/simple/
  URLs to search for versions for carbon:
  * http://ucs-build-server/packages/simple/carbon/
  Getting page http://ucs-build-server/packages/simple/carbon/
  Could not fetch URL http://ucs-build-server/packages/simple/carbon/: HTTP Error 502: Cannot Connect
  Will skip URL http://ucs-build-server/packages/simple/carbon/ when looking for download links for carbon
  Could not find any downloads that satisfy the requirement carbon
No distributions at all found for carbon
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 126, in main
    self.run(options, args)
  File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 223, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python2.7/dist-packages/pip/req.py", line 948, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/usr/lib/python2.7/dist-packages/pip/index.py", line 152, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for carbon

It seems like pip is only trying the local index. How do I get it to fall back to the public repos? I already tried removing the .pip directory.
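
One way to narrow down where the 502 is coming from is to fetch the index URL both directly and through the proxy; if only the proxied fetch fails, the corporate proxy is intercepting traffic to the local package server, and adding that host to no_proxy (a guess at the fix, sketched below) should let pip reach it:

```shell
# Compare a direct fetch against a proxied one (assumes curl is available).
# If only the proxied request returns 502, the proxy is intercepting
# traffic to the local package server.
curl -sI http://ucs-build-server/packages/simple/ 2>/dev/null | head -1
curl -sI -x http://proxy-wsa.esl.cisco.com:80 http://ucs-build-server/packages/simple/ 2>/dev/null | head -1

# Possible fix (an assumption, not verified here): bypass the proxy for
# local hosts before re-running puppet.
export no_proxy="localhost,127.0.0.1,ucs-build-server"
```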

Regards,


PCM (Paul Michali)

Contact info for Cisco users http://twiki.cisco.com/Main/pcm


On May 20, 2013, at 11:23 AM, Paul Michali wrote:

> On May 20, 2013, at 11:00 AM, Mark T. Voelker wrote:
> 
>>> *Q: Any idea as to what I'm doing wrong on the proxy setup here?*
>> 
>> Is it happening consistently?  We've often seen Cisco's internal proxies
>> fail intermittently, so if you've only tried it once I wouldn't be
>> surprised if it worked again next time around.
> 
> PCM: Yeah, and actually, "pip install collectd" failed too.  I just tried creating a pip.conf file, with a longer timeout and using a different index:
> 
> [global]
> timeout = 180
> index-url = http://g.pypi.python.org/simple
> download-cache = ~/.pip/cache
> 
> Now it says:
> 
> err: /Stage[main]/Collectd/Package[collectd ]/ensure: change from absent to present failed: Execution of '/usr/bin/pip install -q --index-url=http://ucs-build-server/packages/simple/ collectd ' returned 1:   Could not find any downloads that satisfy the requirement collectd
> No distributions at all found for collectd
> Storing complete log in /root/.pip/pip.log
> 
> Maybe that repo is not good to use. I could retry with a different repo.
> 
> 
>> 
>> Note that collectd is being installed via pip rather than apt, so the
>> $location actually doesn't come into play here.  That also means that
>> you need to be able to reach the public internet, of course.
> 
> PCM: The other pip installs worked, so I have access in general.  I'll try with just the timeout.  Any other ideas?
> 
> 
>> 
>>> *Q: Will we run into an issue with the IP addresses for the management
>>> interfaces given we'll have two build servers, each with DHCP servers
>>> handling the PXE boots?*
>> 
>> It would be far simpler not to have multiple DHCP servers serving the
>> same network (consider segregating them on separate VLANs), but this can
>> work.  You mostly just have to make sure that each DHCP server only
>> responds to requests for the hosts it's "supposed to".  Still, it's
>> probably going to make life harder than it could be.
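
For what it's worth, ISC dhcpd (which cobbler typically manages) can be told to ignore unknown hosts, so each build server only answers PXE requests from machines it knows about. A sketch, with placeholder MACs and addresses:

```
# /etc/dhcp/dhcpd.conf fragment (MACs and next-server are placeholders)
subnet 14.0.0.0 netmask 255.255.0.0 {
    deny unknown-clients;                    # ignore DHCPDISCOVER from unlisted MACs
    host ucs-node-10 {
        hardware ethernet 00:25:b5:00:00:0a; # placeholder MAC
        fixed-address 14.0.0.10;
        next-server 192.168.220.2;           # placeholder: this build server's PXE/TFTP IP
    }
}
```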
>> 
>>> *Q: Should we create subnets and partition up the space?*
>> 
>> That would probably make your life easier.
> 
> 
> PCM: So maybe use 14.1.0.10-.19 for the HP servers when we re-IP their management IP addresses?
> 
> The thought, BTW, was to use the lowest part of the host IP to indicate the system, so that the power IP, management IP, and host name all correspond (UCS using 10-19, Eclipse 20-29, and HP 30-39). We will also allocate a block of ten VLANs, from # * 10 to # * 10 + 9, to each node for its data/private network, so that each node has exclusive VLANs on the Nexus switch.
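
That convention is just node-number arithmetic, which can be sketched as (names illustrative only):

```shell
# Derive addressing for a node from its number (UCS 10-19, Eclipse 20-29,
# HP 30-39 per the convention above). Purely illustrative.
node=12
power_ip="13.0.0.$node"
mgmt_ip="14.0.0.$node"
first_vlan=$((node * 10))       # block of ten VLANs: node*10 .. node*10+9
last_vlan=$((node * 10 + 9))
echo "node $node: power=$power_ip mgmt=$mgmt_ip vlans=$first_vlan-$last_vlan"
# prints: node 12: power=13.0.0.12 mgmt=14.0.0.12 vlans=120-129
```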
> 
> Regards,
> 
> PCM
> 
>> 
>> At Your Service,
>> 
>> Mark T. Voelker
>> Systems Development Unit
>> +1 919 392-4326
>> 
>> On 05/20/2013 10:43 AM, Paul Michali wrote:
>>> Hi!
>>> 
>>> We're trying to set up multiple build servers (in VMs) in the lab, so
>>> that we can automatically provision different types of hardware (have HP
>>> Pro Liant, Eclipse, and now UCS boxes). With the current setup we have
>>> an operational build server that provisions the HP Pro Liants with
>>> either COE or Devstack. Works well.  Here are the questions, related to
>>> the second build server VM that we are setting up (and later will do a
>>> third for the Eclipse boxes):
>>> 
>>> With this build server, I ran puppet apply and am getting this error:
>>> 
>>> err: /Stage[main]//Node[master-node]/Exec[pip-cache]/returns: change
>>> from notrun to 0 failed: /usr/bin/env
>>> http_proxy=http://proxy-wsa.esl.cisco.com:80
>>> https_proxy=http://proxy-wsa.esl.cisco.com:80 /usr/local/bin/pip2pi
>>> /var/www/packages collectd xenapi django-tagging graphite-web carbon
>>> whisper returned 1 instead of one of [0] at
>>> /etc/puppet/manifests/core.pp:421
>>> 
>>> It appears to be a proxy issue, but I'm not sure what is wrong, as this
>>> build server has the same settings as the other (working) build server.
>>> The site.pp has:
>>> 
>>> $proxy                  = "http://proxy-wsa.esl.cisco.com:80"
>>> $location               = "http://128.107.252.163/openstack/cisco"
>>> 
>>> The /etc/apt/sources.list.d/cisco-openstack-mirror_folsom has:
>>> 
>>> # cisco-openstack-mirror_folsom
>>> deb http://128.107.252.163/openstack/cisco folsom main
>>> deb-src http://128.107.252.163/openstack/cisco folsom main
>>> 
>>> This command fails with a timeout, when run manually as well.
>>> 
>>> *Q: Any idea as to what I'm doing wrong on the proxy setup here?*
>>> 
>>> 
>>> The second question relates to the idea that we want both of these build
>>> server VMs running at the same time. The build servers are on the
>>> 192.168.220.0/24 network. The UCS boxes will have power management IPs
>>> on 13.0.0.0/16, and their management IPs will be on the 14.0.0.0/16
>>> network, using host parts .30 to .39 for the ten systems.
>>> 
>>> Currently, the HP boxes' power and management ports are on the
>>> 192.168.220.0 network, but the intent is to move these to the 13.0.0.0
>>> and 14.0.0.0 networks as well using host IP parts (.10 to .19). In the
>>> future, we'll alter the Eclipse boxes too, to use .20 to .29 in these
>>> same IP ranges (only we'll need to use a managed APS for these).
>>> 
>>> We've statically set the IPs for the power management ports and will
>>> rely on the MAC addresses to assign the IPs via cobbler/puppet.
>>> 
>>> *Q: Will we run into an issue with the IP addresses for the management
>>> interfaces given we'll have two build servers, each with DHCP servers
>>> handling the PXE boots?*
>>>
>>> *Q: Should we create subnets and partition up the space?*
>>>
>>> *Q: Any other issues that you see with this plan?*
>>>
>>> Thanks in advance!
>>> 
>>> 
>>> PCM (Paul Michali)
>>> 
>>> Contact info for Cisco users http://twiki.cisco.com/Main/pcm
>>> 
>>> 
>>> 
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
