[Openstack-operators] Properties missing in Nova Scheduler Filter

Matt Fischer matt at mattfischer.com
Sun Nov 13 02:31:19 UTC 2016


It's pretty hard for me to parse the above or help more without a live pdb
shell looking at this, but I wonder whether this is a Liberty vs. Mitaka
difference? We're still on Nova Liberty. The Nova team may know more, and/or
I can figure out more once we upgrade, since we may hit this same issue. One
difference is that I'm not using the metadefs stuff, but I don't know if
that is relevant or not.
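For what it's worth, here is a hedged sketch of the difference I suspect (not the nova source, and the helper name is mine): in Liberty-era filters the scheduler handed `host_passes` a plain dict of filter properties, while Mitaka passes a `RequestSpec` object, so dict-style `.get()` calls raise `AttributeError`. An accessor tolerating both shapes might look like:

```python
# Hedged sketch: get_image_props() is an illustrative helper, not a nova API.
# It handles both the Liberty-style nested-dict spec and the Mitaka-style
# RequestSpec object (which carries an ImageMeta on its 'image' attribute).

def get_image_props(spec_obj):
    """Return image properties from a dict-style or object-style spec."""
    if isinstance(spec_obj, dict):
        # Liberty-style: nested dicts, safe to chain .get()
        return spec_obj.get('request_spec', {}).get('image', {}).get('properties', {})
    # Mitaka-style: RequestSpec object; .get() does not exist on it
    image = getattr(spec_obj, 'image', None)
    return getattr(image, 'properties', {}) if image is not None else {}
```

That would at least keep a custom filter from blowing up across the upgrade, whichever shape it receives.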

On Fri, Nov 11, 2016 at 3:24 AM, Keller, Mario <Mario.Keller at cornelsen.de>
wrote:

> Hello Matt,
>
> I found your blog post about this and tried your code, but the problem is
> that I get an error:
>
> “Returning exception 'RequestSpec' object has no attribute 'get' to caller”
>
> It seems that the call “image_props = spec_obj.get('request_spec', {})”
> fails because spec_obj has no get() method.
> If I write str(spec_obj.image.__dict__) I get:
>
> {'_obj_checksum': u'793c47d1b98f9df93bbc09de4d155c1b',
> '_context': <nova.context.RequestContext object at 0x636eed0>,
> '_obj_container_format': u'bare', '_obj_name': u'_MY_WINDOWS1',
> '_obj_min_disk': 1, '_obj_disk_format': u'iso',
> '_obj_owner': u'cec13ed6b7bc42879cea9628dbad01dc', '_obj_status': u'active',
> 'VERSION': u'1.8', '_obj_properties': ImageMetaProps(hw_architecture=<?>,
> hw_auto_disk_config=<?>, hw_boot_menu=<?>, hw_cdrom_bus=<?>,
> hw_cpu_cores=<?>, hw_cpu_max_cores=<?>, hw_cpu_max_sockets=<?>,
> hw_cpu_max_threads=<?>, hw_cpu_policy=<?>, hw_cpu_realtime_mask=<?>,
> hw_cpu_sockets=<?>, hw_cpu_thread_policy=<?>, hw_cpu_threads=<?>,
> hw_device_id=<?>, hw_disk_bus='scsi', hw_disk_type='preallocated',
> hw_firmware_type=<?>, hw_floppy_bus=<?>, hw_ipxe_boot=<?>,
> hw_machine_type=<?>, hw_mem_page_size=<?>, hw_numa_cpus=<?>,
> hw_numa_mem=<?>, hw_numa_nodes=<?>, hw_qemu_guest_agent=<?>,
> hw_rng_model=<?>, hw_scsi_model='lsisas1068', hw_serial_port_count=<?>,
> hw_video_model=<?>, hw_video_ram=<?>, hw_vif_model='vmxnet3',
> hw_vif_multiqueue_enabled=<?>, hw_vm_mode=<?>, hw_watchdog_action=<?>,
> img_bdm_v2=<?>, img_bittorrent=<?>, img_block_device_mapping=<?>,
> img_cache_in_nova=<?>, img_compression_level=<?>, img_config_drive=<?>,
> img_hv_requested_version=<?>, img_hv_type='vmware', img_linked_clone=<?>,
> img_mappings=<?>, img_owner_id=<?>, img_root_device_name=<?>,
> img_signature=<?>, img_signature_certificate_uuid=<?>,
> img_signature_hash_method=<?>, img_signature_key_type=<?>,
> img_use_agent=<?>, img_version=<?>, os_admin_user=<?>, os_command_line=<?>,
> os_distro='windows9Server64Guest', os_require_quiesce=<?>,
> os_skip_agent_inject_files_at_boot=<?>, os_skip_agent_inject_ssh=<?>,
> os_type=<?>), '_obj_size': 281018368,
> '_obj_id': 'e62da4df-318f-48dc-be26-b634e82ec4a1',
> '_changed_fields': set([u'status', u'name', u'container_format',
> u'created_at', u'disk_format', u'updated_at', u'properties', u'owner',
> u'min_ram', u'checksum', u'min_disk', u'id', u'size']),
> '_obj_min_ram': 1024,
> '_obj_created_at': datetime.datetime(2016, 8, 29, 8, 27, 47, tzinfo=<iso8601.Utc>),
> '_obj_updated_at': datetime.datetime(2016, 11, 9, 13, 10, 48, tzinfo=<iso8601.Utc>)}
>
>
> Using the spec_obj object itself, I get:
>
> {'_obj_instance_uuid': '22313c7f-0338-4bed-9131-900b458347d9',
> '_obj_flavor': Flavor(created_at=2016-11-01T14:26:31Z, deleted=False,
> deleted_at=None, disabled=False, ephemeral_gb=0, extra_specs={},
> flavorid='7d5dbdd9-62f9-4824-9e5e-803c69eef223', id=23, is_public=True,
> memory_mb=1024, name='1vCPU_1GB-RAM_30GB-HDD', projects=<?>, root_gb=30,
> rxtx_factor=1.0, swap=0, updated_at=None, vcpu_weight=0, vcpus=1),
> '_obj_scheduler_hints': {},
> '_context': <nova.context.RequestContext object at 0x6ae7810>,
> '_obj_project_id': u'027d9ea220bd41e88f9c55227788a863',
> '_obj_num_instances': 1,
> '_obj_limits': SchedulerLimits(disk_gb=None, memory_mb=None,
> numa_topology=None, vcpu=None),
> '_obj_instance_group': None, '_obj_ignore_hosts': None,
> '_obj_image': ImageMeta(checksum='793c47d1b98f9df93bbc09de4d155c1b',
> container_format='bare', created_at=2016-08-29T08:27:47Z, direct_url=<?>,
> disk_format='iso', id=e62da4df-318f-48dc-be26-b634e82ec4a1, min_disk=1,
> min_ram=1024, name='_MY_WINDOWS1', owner='cec13ed6b7bc42879cea9628dbad01dc',
> properties=ImageMetaProps, protected=<?>, size=281018368, status='active',
> tags=<?>, updated_at=2016-11-09T13:10:48Z, virtual_size=<?>, visibility=<?>),
> '_obj_force_hosts': None, 'VERSION': u'1.5', '_obj_force_nodes': None,
> '_obj_pci_requests': InstancePCIRequests(instance_uuid=22313c7f-0338-4bed-9131-900b458347d9, requests=[]),
> '_obj_retry': SchedulerRetries(hosts=ComputeNodeList, num_attempts=1),
> '_changed_fields': set([u'instance_uuid', u'retry', u'num_instances',
> u'pci_requests', u'limits', u'availability_zone', u'force_nodes', u'image',
> u'instance_group', u'force_hosts', u'numa_topology', u'ignore_hosts',
> u'flavor', u'project_id', u'scheduler_hints']),
> '_obj_numa_topology': None, '_obj_availability_zone': u'CV_Inhouse_RZ2',
> 'config_options': {}}
>
> So there seems to be no request_spec present. There is an attribute "image"
> on spec_obj with an attribute "properties" of type ImageMetaProps that holds
> all the VMware-related properties, which are defined the same way as our
> properties, but not our self-defined property.
>
> Mario.
>
>
> From: tadowguy at gmail.com [mailto:tadowguy at gmail.com] On behalf of Matt
> Fischer
> Sent: Thursday, November 10, 2016 15:27
> To: Keller, Mario
> Cc: openstack-operators at lists.openstack.org
> Subject: Re: [Openstack-operators] Properties missing in Nova Scheduler
> Filter
>
> Mario,
>
> If I remember right I had a similar issue with getting image_props when I
> was doing this to pull in custom properties. Through some trial and error
> and poking around with pdb I ended up with this:
>
>         image_props = spec_obj.get('request_spec', {}).\
>             get('image', {}).get('properties', {})
>
> Perhaps that will help?  If not I'd recommend putting a pdb break at the
> top of host_passes and digging through the spec_obj.
>
>
> On Thu, Nov 10, 2016 at 12:05 AM, Keller, Mario <Mario.Keller at cornelsen.de>
> wrote:
> Hello,
>
> we are trying to build our own Nova scheduler filter to place machines on
> different compute nodes / host aggregates.
> Our setup is based on OpenStack Mitaka and we are using VMware as
> hypervisor on 3 different compute nodes.
>
> We have created a /etc/glance/metadefs/CV_AggSelect.json file to define
> the new property "os_selectagg"
>
> {
>     "namespace": "OS::Compute::cv-host-agg",
>     "display_name": "CV-CUSTOM: Select Host Aggregate",
>     "description": "Cornelsen CUSTOM: Select Host Aggregate",
>     "visibility": "public",
>     "protected": true,
>     "resource_type_associations": [
>         {
>             "name": "OS::Glance::Image"
>         },
>         {
>             "name": "OS::Nova::Aggregate"
>         }
>     ],
>         "properties": {
>             "os_selectagg": {
>                 "title": "selectagg",
>                 "description": "Cornelsen CUSTOM: Select Host Aggregate",
>                 "type": "string",
>                 "enum": [
>                     "windows",
>                     "linux",
>                     "desktop",
>                     "test1",
>                     "test2"
>                 ],
>                 "default" : "test2"
>         }
>     },
>     "objects": []
> }
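As a quick sanity check on a metadef like the one above (this is just a sketch against the JSON shown, not a Glance API; the snippet is abbreviated to the parts being checked):

```python
import json

# Sketch: parse the metadef fragment and check that the default is one of
# the allowed enum values -- a common mistake that silently breaks the UI.
metadef = json.loads("""
{
    "namespace": "OS::Compute::cv-host-agg",
    "properties": {
        "os_selectagg": {
            "type": "string",
            "enum": ["windows", "linux", "desktop", "test1", "test2"],
            "default": "test2"
        }
    }
}
""")

prop = metadef["properties"]["os_selectagg"]
assert prop["default"] in prop["enum"]
```

If I recall correctly, files under /etc/glance/metadefs are only picked up after loading them into the database (e.g. with `glance-manage db_load_metadefs`), but metadefs only drive the Horizon/API catalog, not the scheduler.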
>
>
> Getting the details from our image and the host aggregate, we see that the
> property is set correctly:
>
> openstack image show e62da4df-318f-48dc-be26-b634e82ec4a1
> +------------------+---------------------------------------------------------------------------------------------------------+
> | Field            | Value                                                                                                   |
> +------------------+---------------------------------------------------------------------------------------------------------+
> ...
> | properties       | description='', hw_vif_model='VirtualVmxnet3', hypervisor_type='vmware', os_selectagg='windows',        |
> |                  | vmware_adaptertype='lsiLogicsas', vmware_disktype='preallocated', vmware_ostype='windows9Server64Guest' |
> ...
>
> We also see the property in the aggregate:
>
> openstack aggregate show 5
> +-------------------+--------------------------------------------------+
> | Field             | Value                                            |
> +-------------------+--------------------------------------------------+
> ...
> | properties        | hypervisor_type='vmware', os_selectagg='windows' |
> ...
>
> I have created a new simple filter in
> /usr/lib/python2.7/site-packages/nova/scheduler/filters just to see what
> properties are set for the current image and the host_state.
> The filter is also enabled in /etc/nova/nova.conf and is executed, because
> I can see the log file entries that are created by the filter.
>
> The filter only implements the "def host_passes(self, host_state,
> spec_obj)" function.
>
> I'm getting the image properties with "image_props =
> spec_obj.image.properties if spec_obj.image else {}", but the property
> "os_selectagg" is missing. All other properties, like
> hw_vif_model='VirtualVmxnet3', are set.
>
> The property is set in the host_state.aggregates list, but not in
> spec_obj.image.properties. What are we missing?
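For reference, here is a runnable sketch of the intended filter logic, with the nova objects stubbed out so it stands alone (in a real filter, host_passes lives on a subclass of nova.scheduler.filters.BaseHostFilter; attribute names follow the dumps in this thread, and the class name is illustrative):

```python
class SelectAggFilter(object):
    """Sketch: pass hosts whose aggregates' 'os_selectagg' matches the image's.

    Standalone stub; a real filter subclasses
    nova.scheduler.filters.BaseHostFilter.
    """

    def host_passes(self, host_state, spec_obj):
        # Image-side value. Custom properties may simply be absent here,
        # since ImageMetaProps only models known keys -- the issue in this
        # thread.
        image = getattr(spec_obj, 'image', None)
        props = getattr(image, 'properties', None)
        wanted = getattr(props, 'os_selectagg', None)
        if wanted is None:
            return True  # image requests no aggregate; don't filter the host

        # Aggregate-side values: union over all aggregates the host is in.
        agg_values = set()
        for agg in getattr(host_state, 'aggregates', []) or []:
            val = agg.metadata.get('os_selectagg')
            if val is not None:
                agg_values.add(val)
        return wanted in agg_values
```

If the image-side value never shows up in ImageMetaProps, the comparison above can only ever take the "no value requested" branch, which would match the behavior described.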
>
> With best regards,
> Mario Keller.
>
>
>
> With kind regards
> Mario Keller
> IT-Operations Engineer
>
> --
> Cornelsen Verlag GmbH, Mecklenburgische Straße 53, 14197 Berlin
> Tel: +49 30 897 85-8364, Fax: +49 30 897 85-97-8364
> E-Mail: mario.keller at cornelsen.de | cornelsen.de
>
> AG Charlottenburg, HRB 114796 B
> Managing Directors: Dr. Anja Hagen, Joachim Herbst, Mark van Mierle
> (Chair),
> Patrick Neiss, Michael von Smolinski, Frank Thalhofer
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>