I know it's "Available", but that doesn't imply it can be attached. Cinder uses iSCSI or NFS to attach the volume to a running instance on a compute node. If you're missing the required protocol packages, the attachment will fail. You can have "Available" volumes and still lack tgtadm (or nfs-utils, if that's your protocol). <div><br></div><div>Second, is your compute node able to resolve "controller"?<span></span><br><br>On Wednesday, January 14, 2015, Geo Varghese <<a href="mailto:gvarghese@aqorn.com">gvarghese@aqorn.com</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Hi Abel,<br><br></div>Thanks for the reply.<br><br></div>I have created a volume and it is in the available state. Please check the attached screenshot.<br><br></div><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jan 15, 2015 at 11:34 AM, Abel Lopez <span dir="ltr"><<a href="javascript:_e(%7B%7D,'cvml','alopgeek@gmail.com');" target="_blank">alopgeek@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Do your compute nodes have the required iSCSI packages installed?<div><div><br><br>On Wednesday, January 14, 2015, Geo Varghese <<a href="javascript:_e(%7B%7D,'cvml','gvarghese@aqorn.com');" target="_blank">gvarghese@aqorn.com</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>Hi Jay,<br><br></div>Thanks for the reply. 
Just pasting the details below<br><br>keystone catalog<br>================================================<br>Service: compute<br>+-------------+------------------------------------------------------------+<br>|   Property  |                           Value                            |<br>+-------------+------------------------------------------------------------+<br>|   adminURL  | <a href="http://controller:8774/v2/e600ba9727924a3b97ede34aea8279c1" target="_blank">http://controller:8774/v2/e600ba9727924a3b97ede34aea8279c1</a> |<br>|      id     |              02028b1f4c0849c68eb79f5887516299              |<br>| internalURL | <a href="http://controller:8774/v2/e600ba9727924a3b97ede34aea8279c1" target="_blank">http://controller:8774/v2/e600ba9727924a3b97ede34aea8279c1</a> |<br>|  publicURL  | <a href="http://controller:8774/v2/e600ba9727924a3b97ede34aea8279c1" target="_blank">http://controller:8774/v2/e600ba9727924a3b97ede34aea8279c1</a> |<br>|    region   |                         RegionOne                          |<br>+-------------+------------------------------------------------------------+<br>Service: network<br>+-------------+----------------------------------+<br>|   Property  |              Value               |<br>+-------------+----------------------------------+<br>|   adminURL  |      <a href="http://controller:9696" target="_blank">http://controller:9696</a>      |<br>|      id     | 32f687d4f7474769852d88932288b893 |<br>| internalURL |      <a href="http://controller:9696" target="_blank">http://controller:9696</a>      |<br>|  publicURL  |      <a href="http://controller:9696" target="_blank">http://controller:9696</a>      |<br>|    region   |            RegionOne             |<br>+-------------+----------------------------------+<br>Service: volumev2<br>+-------------+------------------------------------------------------------+<br>|   Property  |                           Value                            
|<br>+-------------+------------------------------------------------------------+<br>|   adminURL  | <a href="http://controller:8776/v2/e600ba9727924a3b97ede34aea8279c1" target="_blank">http://controller:8776/v2/e600ba9727924a3b97ede34aea8279c1</a> |<br>|      id     |              5bca493cdde2439887d54fb805c4d2d4              |<br>| internalURL | <a href="http://controller:8776/v2/e600ba9727924a3b97ede34aea8279c1" target="_blank">http://controller:8776/v2/e600ba9727924a3b97ede34aea8279c1</a> |<br>|  publicURL  | <a href="http://controller:8776/v2/e600ba9727924a3b97ede34aea8279c1" target="_blank">http://controller:8776/v2/e600ba9727924a3b97ede34aea8279c1</a> |<br>|    region   |                         RegionOne                          |<br>+-------------+------------------------------------------------------------+<br>Service: image<br>+-------------+----------------------------------+<br>|   Property  |              Value               |<br>+-------------+----------------------------------+<br>|   adminURL  |      <a href="http://controller:9292" target="_blank">http://controller:9292</a>      |<br>|      id     | 2e2294b9151e4fb9b6efccf33c62181b |<br>| internalURL |      <a href="http://controller:9292" target="_blank">http://controller:9292</a>      |<br>|  publicURL  |      <a href="http://controller:9292" target="_blank">http://controller:9292</a>      |<br>|    region   |            RegionOne             |<br>+-------------+----------------------------------+<br>Service: volume<br>+-------------+------------------------------------------------------------+<br>|   Property  |                           Value                            |<br>+-------------+------------------------------------------------------------+<br>|   adminURL  | <a href="http://controller:8776/v1/e600ba9727924a3b97ede34aea8279c1" target="_blank">http://controller:8776/v1/e600ba9727924a3b97ede34aea8279c1</a> |<br>|      id     |              0e29cfaa785e4e148c57601b182a5e26              
|<br>| internalURL | <a href="http://controller:8776/v1/e600ba9727924a3b97ede34aea8279c1" target="_blank">http://controller:8776/v1/e600ba9727924a3b97ede34aea8279c1</a> |<br>|  publicURL  | <a href="http://controller:8776/v1/e600ba9727924a3b97ede34aea8279c1" target="_blank">http://controller:8776/v1/e600ba9727924a3b97ede34aea8279c1</a> |<br>|    region   |                         RegionOne                          |<br>+-------------+------------------------------------------------------------+<br>Service: ec2<br>+-------------+---------------------------------------+<br>|   Property  |                 Value                 |<br>+-------------+---------------------------------------+<br>|   adminURL  | <a href="http://controller:8773/services/Admin" target="_blank">http://controller:8773/services/Admin</a> |<br>|      id     |    8f4957d98cd04130b055b8b80b051833   |<br>| internalURL | <a href="http://controller:8773/services/Cloud" target="_blank">http://controller:8773/services/Cloud</a> |<br>|  publicURL  | <a href="http://controller:8773/services/Cloud" target="_blank">http://controller:8773/services/Cloud</a> |<br>|    region   |               RegionOne               |<br>+-------------+---------------------------------------+<br>Service: identity<br>+-------------+----------------------------------+<br>|   Property  |              Value               |<br>+-------------+----------------------------------+<br>|   adminURL  |   <a href="http://controller:35357/v2.0" target="_blank">http://controller:35357/v2.0</a>   |<br>|      id     | 146f7bbb0ad54740b95f8499f04b2ee2 |<br>| internalURL |   <a href="http://controller:5000/v2.0" target="_blank">http://controller:5000/v2.0</a>    |<br>|  publicURL  |   <a href="http://controller:5000/v2.0" target="_blank">http://controller:5000/v2.0</a>    |<br>|    region   |            RegionOne             
|<br>+-------------+----------------------------------+<br>==============================================<br><br></div>Nova.conf<br><div><br>================================================<br># This file autogenerated by Chef<br># Do not edit, changes will be overwritten<br><br><br>[DEFAULT]<br><br># LOGS/STATE<br>debug=False<br>verbose=False<br>auth_strategy=keystone<br>dhcpbridge_flagfile=/etc/nova/nova.conf<br>dhcpbridge=/usr/bin/nova-dhcpbridge<br>log_dir=/var/log/nova<br>state_path=/var/lib/nova<br>instances_path=/var/lib/nova/instances<br>instance_name_template=instance-%08x<br>network_allocate_retries=0<br>lock_path=/var/lib/nova/lock<br><br>ssl_only=false<br>cert=self.pem<br>key=<br><br># Command prefix to use for running commands as root (default: sudo)<br>rootwrap_config=/etc/nova/rootwrap.conf<br><br># Should unused base images be removed? (default: false)<br>remove_unused_base_images=true<br><br># Unused unresized base images younger than this will not be removed (default: 86400)<br>remove_unused_original_minimum_age_seconds=3600<br><br># Options defined in nova.openstack.common.rpc<br>rpc_thread_pool_size=64<br>rpc_conn_pool_size=30<br>rpc_response_timeout=60<br>rpc_backend=nova.openstack.common.rpc.impl_kombu<br>amqp_durable_queues=false<br>amqp_auto_delete=false<br><br>##### RABBITMQ #####<br>rabbit_userid=guest<br>rabbit_password=guest<br>rabbit_virtual_host=/<br>rabbit_hosts=rabbit1:5672,rabbit2:5672<br>rabbit_retry_interval=1<br>rabbit_retry_backoff=2<br>rabbit_max_retries=0<br>rabbit_durable_queues=false<br>rabbit_ha_queues=True<br><br><br><br>##### SCHEDULER #####<br>scheduler_manager=nova.scheduler.manager.SchedulerManager<br>scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler<br>scheduler_available_filters=nova.scheduler.filters.all_filters<br># which filter class names to use for filtering hosts when not specified in the 
request.<br>scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,CoreFilter,SameHostFilter,DifferentHostFilter<br>default_availability_zone=nova<br>default_schedule_zone=nova<br><br>##### NETWORK #####<br><br><br><br># N.B. due to <a href="https://bugs.launchpad.net/nova/+bug/1206330" target="_blank">https://bugs.launchpad.net/nova/+bug/1206330</a><br># we override the endpoint scheme below, ignore the port<br># and essentially force http<br>neutron_url=<a href="http://controller:9696" target="_blank">http://controller:9696</a><br>neutron_api_insecure=false<br>network_api_class=nova.network.neutronv2.api.API<br>neutron_auth_strategy=keystone<br>neutron_admin_tenant_name=service<br>neutron_admin_username=neutron<br>neutron_admin_password=openstack-network<br>neutron_admin_auth_url=<a href="http://controller:5000/v2.0" target="_blank">http://controller:5000/v2.0</a><br>neutron_url_timeout=30<br>neutron_region_name=<br>neutron_ovs_bridge=br-int<br>neutron_extension_sync_interval=600<br>neutron_ca_certificates_file=<br>linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver<br>firewall_driver = nova.virt.firewall.NoopFirewallDriver<br>security_group_api=neutron<br>service_neutron_metadata_proxy=true<br>neutron_metadata_proxy_shared_secret=secret123<br>default_floating_pool=public<br>dns_server=8.8.8.8<br><br>use_ipv6=false<br><br>##### GLANCE #####<br>image_service=nova.image.glance.GlanceImageService<br>glance_api_servers=<a href="http://controller:9292" target="_blank">http://controller:9292</a><br>glance_api_insecure=false<br><br>##### Cinder #####<br># Location of ca certificates file to use for cinder client requests<br>cinder_ca_certificates_file=<br><br># Allow to perform insecure SSL requests to cinder<br>cinder_api_insecure=false<br><br># Info to match when looking for cinder in the service catalog<br>cinder_catalog_info=volumev2:cinderv2:publicURL<br><br>##### COMPUTE 
#####<br>compute_driver=libvirt.LibvirtDriver<br>preallocate_images=none<br>use_cow_images=true<br>vif_plugging_is_fatal=false<br>vif_plugging_timeout=0<br>compute_manager=nova.compute.manager.ComputeManager<br>sql_connection=mysql://nova:nova@loadbalancer:3306/nova?charset=utf8<br>connection_type=libvirt<br><br>##### NOTIFICATIONS #####<br># Driver or drivers to handle sending notifications (multi valued)<br><br># AMQP topic used for OpenStack notifications. (list value)<br># Deprecated group/name - [rpc_notifier2]/topics<br>notification_topics=notifications<br><br># Generate periodic compute.instance.exists notifications<br>instance_usage_audit=False<br><br># Time period to generate instance usages for.  Time period<br># must be hour, day, month or year (string value)<br>instance_usage_audit_period=month<br><br><br># The IP address on which the OpenStack API will listen. (string value)<br>osapi_compute_listen=0.0.0.0<br># The port on which the OpenStack API will listen. (integer value)<br>osapi_compute_listen_port=8774<br><br># The IP address on which the metadata will listen. (string value)<br>metadata_listen=0.0.0.0<br># The port on which the metadata will listen. 
(integer value)<br>metadata_listen_port=8775<br><br>##### VNCPROXY #####<br>novncproxy_base_url=<a href="http://controller:6080/vnc_auto.html" target="_blank">http://controller:6080/vnc_auto.html</a><br>xvpvncproxy_base_url=<a href="http://controller:6081/console" target="_blank">http://controller:6081/console</a><br><br># This is only required on the server running xvpvncproxy<br>xvpvncproxy_host=0.0.0.0<br>xvpvncproxy_port=6081<br><br># This is only required on the server running novncproxy<br>novncproxy_host=0.0.0.0<br>novncproxy_port=6080<br><br>vncserver_listen=0.0.0.0<br>vncserver_proxyclient_address=0.0.0.0<br><br>vnc_keymap=en-us<br><br># store consoleauth tokens in memcached<br><br>##### MISC #####<br># force backing images to raw format<br>force_raw_images=false<br>allow_same_net_traffic=true<br>osapi_max_limit=1000<br># If you terminate SSL with a load balancer, the HTTP_HOST environ<br># variable that generates the request_uri in webob.Request will lack<br># the HTTPS scheme. 
Setting this overrides the default and allows<br># URIs returned in the various links collections to contain the proper<br># HTTPS endpoint.<br>osapi_compute_link_prefix = <a href="http://controller:8774/v2/%(tenant_id)s" target="_blank">http://controller:8774/v2/%(tenant_id)s</a><br>start_guests_on_host_boot=false<br>resume_guests_state_on_host_boot=true<br>allow_resize_to_same_host=false<br>resize_confirm_window=0<br>live_migration_retry_count=30<br><br>##### QUOTAS #####<br># (StrOpt) default driver to use for quota checks (default: nova.quota.DbQuotaDriver)<br>quota_driver=nova.quota.DbQuotaDriver<br># number of security groups per project (default: 10)<br>quota_security_groups=50<br># number of security rules per security group (default: 20)<br>quota_security_group_rules=20<br># number of instance cores allowed per project (default: 20)<br>quota_cores=20<br># number of fixed ips allowed per project (this should be at least the number of instances allowed) (default: -1)<br>quota_fixed_ips=-1<br># number of floating ips allowed per project (default: 10)<br>quota_floating_ips=10<br># number of bytes allowed per injected file (default: 10240)<br>quota_injected_file_content_bytes=10240<br># number of bytes allowed per injected file path (default: 255)<br>quota_injected_file_path_length=255<br># number of injected files allowed (default: 5)<br>quota_injected_files=5<br># number of instances allowed per project (defailt: 10)<br>quota_instances=10<br># number of key pairs per user (default: 100)<br>quota_key_pairs=100<br># number of metadata items allowed per instance (default: 128)<br>quota_metadata_items=128<br># megabytes of instance ram allowed per project (default: 51200)<br>quota_ram=51200<br><br># virtual CPU to Physical CPU allocation ratio (default: 16.0)<br>cpu_allocation_ratio=16.0<br># virtual ram to physical ram allocation ratio (default: 
1.5)<br>ram_allocation_ratio=1.5<br><br>mkisofs_cmd=genisoimage<br>injected_network_template=$pybasedir/nova/virt/interfaces.template<br>flat_injected=false<br><br># The IP address on which the EC2 API will listen. (string value)<br>ec2_listen=0.0.0.0<br># The port on which the EC2 API will listen. (integer value)<br>ec2_listen_port=8773<br><br><br>##### WORKERS ######<br><br>##### KEYSTONE #####<br>keystone_ec2_url=<a href="http://controller:5000/v2.0/ec2tokens" target="_blank">http://controller:5000/v2.0/ec2tokens</a><br><br># a list of APIs to enable by default (list value)<br>enabled_apis=ec2,osapi_compute,metadata<br><br>##### WORKERS ######<br><br>##### MONITORS ######<br># Monitor classes available to the compute which may be<br># specified more than once. (multi valued)<br>compute_available_monitors=nova.compute.monitors.all_monitors<br><br># A list of monitors that can be used for getting compute<br># metrics. (list value)<br>compute_monitors=<br><br>##### VOLUMES #####<br># iscsi target user-land tool to use<br>iscsi_helper=tgtadm<br>volume_api_class=nova.volume.cinder.API<br># Region name of this node (string value)<br>os_region_name=RegionOne<br><br># Override the default dnsmasq settings with this file (String value)<br>dnsmasq_config_file=<br><br>##### THIRD PARTY ADDITIONS #####<br><br><br>[ssl]<br><br># CA certificate file to use to verify connecting clients<br>ca_file=<br><br># Certificate file to use when starting the server securely<br>cert_file=<br><br># Private key file to use when starting the server securely<br>key_file=<br><br>[conductor]<br><br>use_local=False<br><br><br>[libvirt]<br><br>#<br># Options defined in nova.virt.libvirt.driver<br>#<br><br># Rescue ami image (string value)<br>#rescue_image_id=<None><br><br># Rescue aki image (string value)<br>#rescue_kernel_id=<None><br><br># Rescue ari image (string value)<br>#rescue_ramdisk_id=<None><br><br># Libvirt domain type (valid options are: kvm, lxc, qemu, uml,<br># xen) (string 
value)<br># Deprecated group/name - [DEFAULT]/libvirt_type<br>virt_type=kvm<br><br># Override the default libvirt URI (which is dependent on<br># virt_type) (string value)<br># Deprecated group/name - [DEFAULT]/libvirt_uri<br>#connection_uri=<br><br># Inject the admin password at boot time, without an agent.<br># (boolean value)<br># Deprecated group/name - [DEFAULT]/libvirt_inject_password<br>inject_password=false<br><br># Inject the ssh public key at boot time (boolean value)<br># Deprecated group/name - [DEFAULT]/libvirt_inject_key<br>inject_key=true<br><br># The partition to inject to : -2 => disable, -1 => inspect<br># (libguestfs only), 0 => not partitioned, >0 => partition<br># number (integer value)<br># Deprecated group/name - [DEFAULT]/libvirt_inject_partition<br>inject_partition=-2<br><br># Sync virtual and real mouse cursors in Windows VMs (boolean<br># value)<br>#use_usb_tablet=true<br><br># Migration target URI (any included "%s" is replaced with the<br># migration target hostname) (string value)<br>live_migration_uri=qemu+tcp://%s/system<br><br># Migration flags to be set for live migration (string value)<br>live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER<br><br># Migration flags to be set for block migration (string value)<br>block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_NON_SHARED_INC<br><br># Maximum bandwidth to be used during migration, in Mbps<br># (integer value)<br>live_migration_bandwidth=0<br><br># Snapshot image format (valid options are : raw, qcow2, vmdk,<br># vdi). Defaults to same as source image (string value)<br>snapshot_image_format=qcow2<br><br># The libvirt VIF driver to configure the VIFs. (string value)<br># Deprecated group/name - [DEFAULT]/libvirt_vif_driver<br>vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver<br><br># Libvirt handlers for remote volumes. 
(list value)<br># Deprecated group/name - [DEFAULT]/libvirt_volume_drivers<br>#volume_drivers=iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver,iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver,local=nova.virt.libvirt.volume.LibvirtVolumeDriver,fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver,rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,nfs=nova.virt.libvirt.volume.LibvirtNFSVolumeDriver,aoe=nova.virt.libvirt.volume.LibvirtAOEVolumeDriver,glusterfs=nova.virt.libvirt.volume.LibvirtGlusterfsVolumeDriver,fibre_channel=nova.virt.libvirt.volume.LibvirtFibreChannelVolumeDriver,scality=nova.virt.libvirt.volume.LibvirtScalityVolumeDriver<br><br># Override the default disk prefix for the devices attached to<br># a server, which is dependent on virt_type. (valid options<br># are: sd, xvd, uvd, vd) (string value)<br># Deprecated group/name - [DEFAULT]/libvirt_disk_prefix<br>#disk_prefix=<None><br><br># Number of seconds to wait for instance to shut down after<br># soft reboot request is made. We fall back to hard reboot if<br># instance does not shutdown within this window. (integer<br># value)<br># Deprecated group/name - [DEFAULT]/libvirt_wait_soft_reboot_seconds<br>#wait_soft_reboot_seconds=120<br><br># Set to "host-model" to clone the host CPU feature flags; to<br># "host-passthrough" to use the host CPU model exactly; to<br># "custom" to use a named CPU model; to "none" to not set any<br># CPU model. If virt_type="kvm|qemu", it will default to<br># "host-model", otherwise it will default to "none" (string<br># value)<br># Deprecated group/name - [DEFAULT]/libvirt_cpu_mode<br><br># Set to a named libvirt CPU model (see names listed in<br># /usr/share/libvirt/cpu_map.xml). 
Only has effect if<br># cpu_mode="custom" and virt_type="kvm|qemu" (string value)<br># Deprecated group/name - [DEFAULT]/libvirt_cpu_model<br>#cpu_model=<none><br><br># Location where libvirt driver will store snapshots before<br># uploading them to image service (string value)<br># Deprecated group/name - [DEFAULT]/libvirt_snapshots_directory<br>#snapshots_directory=$instances_path/snapshots<br><br># Location where the Xen hvmloader is kept (string value)<br>#xen_hvmloader_path=/usr/lib/xen/boot/hvmloader<br><br># Specific cachemodes to use for different disk types e.g:<br># file=directsync,block=none (list value)<br><br># A path to a device that will be used as source of entropy on<br># the host. Permitted options are: /dev/random or /dev/hwrng<br># (string value)<br><br>#<br># Options defined in nova.virt.libvirt.imagecache<br>#<br><br># Unused resized base images younger than this will not be removed (default: 3600)<br>remove_unused_resized_minimum_age_seconds=3600<br><br># Write a checksum for files in _base to disk (default: false)<br>checksum_base_images=false<br><br>#<br># Options defined in nova.virt.libvirt.vif<br>#<br><br>use_virtio_for_bridges=true<br><br>#<br># Options defined in nova.virt.libvirt.imagebackend<br>#<br><br># VM Images format. Acceptable values are: raw, qcow2, lvm, rbd, default. 
If default is specified,<br># then use_cow_images flag is used instead of this one.<br>images_type=default<br><br><br>[keystone_authtoken]<br>auth_uri = <a href="http://controller:5000/v2.0" target="_blank">http://controller:5000/v2.0</a><br>auth_host = controller<br>auth_port = 35357<br>auth_protocol = http<br>auth_version = v2.0<br>admin_tenant_name = service<br>admin_user = nova<br>admin_password = openstack-compute<br>signing_dir = /var/cache/nova/api<br>hash_algorithms = md5<br>insecure = false<br>========================================<br><br><br></div><div>Please check it.<br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jan 14, 2015 at 8:23 PM, Jay Pipes <span dir="ltr"><<a>jaypipes@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Could you pastebin the output of:<br>
<br>
 keystone catalog<br>
<br>
and also pastebin your nova.conf for the node running the Nova API service?<br>
<br>
Thanks!<br>
-jay<div><div><br>
<br>
On 01/14/2015 02:25 AM, Geo Varghese wrote:<br>
</div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div>
Hi Team,<br>
<br>
I need help attaching a Cinder volume to an instance.<br>
<br>
I have successfully created a Cinder volume and it is in the available state.<br>
Please check the attached screenshot.<br>
<br>
Later I tried to attach the volume to an instance, but the attachment failed.<br>
While checking the logs in /var/log/nova/nova-api-os-compute.log, I found the following:<br>
<br>
========<br>
<br>
2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack     res =<br>
method(self, ctx, volume_id, *args, **kwargs)<br>
2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack   File<br>
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 206, in get<br>
2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack     item =<br>
cinderclient(context).volumes.get(volume_id)<br>
2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack   File<br>
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 91, in<br>
cinderclient<br>
2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack<br>
endpoint_type=endpoint_type)<br>
2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack   File<br>
"/usr/lib/python2.7/dist-packages/cinderclient/service_catalog.py", line<br>
80, in url_for<br>
2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack     raise<br>
cinderclient.exceptions.EndpointNotFound()<br>
2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack EndpointNotFound<br>
2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack<br>
=========================================<br>
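The EndpointNotFound in the trace above is raised by cinderclient's service-catalog lookup: nova asks the catalog for an endpoint whose service type, endpoint type, and region all match what is configured in nova.conf, and raises when no entry qualifies. A simplified sketch of that lookup (not the actual cinderclient code; the toy catalog below is modeled on the listing in this thread, with a placeholder tenant):

```python
class EndpointNotFound(Exception):
    """Stand-in for cinderclient.exceptions.EndpointNotFound."""


def url_for(catalog, service_type, endpoint_type="publicURL", region=None):
    """Return the first endpoint URL matching all three criteria."""
    for service in catalog:
        if service["type"] != service_type:
            continue
        for endpoint in service["endpoints"]:
            # Region comparison is an exact, case-sensitive string match.
            if region and endpoint.get("region") != region:
                continue
            if endpoint_type in endpoint:
                return endpoint[endpoint_type]
    # Nothing matched: this is the exception seen in the nova-api trace.
    raise EndpointNotFound()


# Toy catalog with a single volumev2 entry, as in the keystone output above.
catalog = [{
    "type": "volumev2",
    "endpoints": [{"region": "RegionOne",
                   "publicURL": "http://controller:8776/v2/TENANT"}],
}]

print(url_for(catalog, "volumev2", region="RegionOne"))   # finds the endpoint
# url_for(catalog, "volumev2", region="regionOne")        # raises EndpointNotFound
```

So the error means the (service type, endpoint type, region) triple nova requested did not match any catalog row, even if endpoints exist for the service.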
<br>
<br>
I have already created endpoints for both v1 and v2.<br>
Please check the endpoints I have created for Cinder below:<br>
<br>
=========================================================================<br>
root@controller:/home/geo# keystone endpoint-list | grep 8776<br>
| 5c7bcc79daa74532ac9ca19949e0d872 | regionOne |<br>
<a href="http://controller:8776/v1/%(tenant_id)s" target="_blank">http://controller:8776/v1/%(tenant_id)s</a> |<br>
<a href="http://controller:8776/v1/%(tenant_id)s" target="_blank">http://controller:8776/v1/%(tenant_id)s</a> |<br>
<a href="http://controller:8776/v1/%(tenant_id)s" target="_blank">http://controller:8776/v1/%(tenant_id)s</a> | 8ce0898aa7c84fec9b011823d34b55cb |<br>
| 5d71e0a1237c483990b84c36346602b4 | RegionOne |<br>
<a href="http://controller:8776/v2/%(tenant_id)s" target="_blank">http://controller:8776/v2/%(tenant_id)s</a> |<br>
<a href="http://controller:8776/v2/%(tenant_id)s" target="_blank">http://controller:8776/v2/%(tenant_id)s</a> |<br>
<a href="http://controller:8776/v2/%(tenant_id)s" target="_blank">http://controller:8776/v2/%(tenant_id)s</a> | 251eca5fdb6b4550a9f521c10fa9f2ca |<br>
===============================================================================<br>
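One detail worth double-checking in the listing above: the v1 row is registered in region "regionOne" while the v2 row uses "RegionOne", and the nova.conf pasted earlier sets os_region_name=RegionOne. Region matching in the catalog lookup is a plain case-sensitive string comparison, so a lowercase-r row would never match. A minimal simulation with the two region values taken from the listing:

```shell
# os_region_name as set in the nova.conf above.
os_region_name=RegionOne

# Compare each region value from the endpoint listing against it,
# exactly as a case-sensitive string equality check would.
for region in regionOne RegionOne; do
  if [ "$region" = "$os_region_name" ]; then
    echo "$region: matches os_region_name"
  else
    echo "$region: does NOT match os_region_name"
  fi
done
```

If the service your catalog_info points at is only registered under the mismatched region, the lookup fails with EndpointNotFound even though the endpoints exist.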
<br>
Can anyone please help me? Thanks for the support, guys.<br>
<br>
--<br>
Regards,<br>
Geo Varghese<br>
<br>
<br></div></div>
_______________________________________________<br>
OpenStack-operators mailing list<br>
<a>OpenStack-operators@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
<br>
</blockquote>
<br>
</blockquote></div><br><br clear="all"><br>-- <br><div><div dir="ltr">--<div>Regards,</div><div>Geo Varghese</div></div></div>
</div>
</blockquote>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div><div dir="ltr">--<div>Regards,</div><div>Geo Varghese</div></div></div>
</div>
</blockquote></div>