[Openstack] Nova Instance failed to spawn

Uwe Sauter uwe.sauter.de at gmail.com
Mon Apr 13 20:35:43 UTC 2015


I'm facing the same problem. My hosts are configured with a generic hostname like os484001 (which describes their
location in the racks), but I also gave them "secondary" hostnames via DNS. All services start up OK and can communicate
with each other (as far as I can tell), but I get "NovaException: Unexpected vif_type=binding_failed" when starting an
instance.

Others pointed out that this is most likely a misconfigured Neutron setup, but since then I haven't been able to
correct it and haven't made any progress.

Is it really necessary that the system-configured hostname is the same as in the OpenStack configuration?
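
For what it's worth, this is how I compare the names (nothing fancy, just the
standard CLI clients; os484001 is one of the generic names mentioned above):

# hostname -f
# nova service-list     # "Host" column of the nova-compute entries
# neutron agent-list    # "host" column of the Open vSwitch agents

As far as I understand ML2 port binding, if nova-compute registers under a
host name for which no Open vSwitch agent is registered, the binding fails
and Nova raises exactly this "Unexpected vif_type=binding_failed".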

Regards,

	Uwe

On 13.04.2015 at 22:21, Remo Mattei wrote:
> Can it find the host name? It looks like it is misconfigured.
> 
> Remo
> 
> Sent from iPhone
> 
>> On 13 Apr 2015, at 12:36, Joerg Streckfuss <openstack at dirtyhack.org> wrote:
>>
>> Hi Walter,
>>
>> As you can see, the md5sum is exactly the same:
>>
>> # glance image-show cirros-0.3.3-x86_64
>> +------------------+--------------------------------------+
>> | Property         | Value                                |
>> +------------------+--------------------------------------+
>> | checksum         | 133eae9fb1c98f45894a4e60d8736619     |
>> | container_format | bare                                 |
>> | created_at       | 2015-03-02T19:52:36                  |
>> | deleted          | False                                |
>> | disk_format      | qcow2                                |
>> | id               | 80f91497-838f-4484-8cfd-de9ee61b5aa4 |
>> | is_public        | True                                 |
>> | min_disk         | 0                                    |
>> | min_ram          | 0                                    |
>> | name             | cirros-0.3.3-x86_64                  |
>> | owner            | 0cf5356efb4344c387b3aba6d9536300     |
>> | protected        | False                                |
>> | size             | 13200896                             |
>> | status           | active                               |
>> | updated_at       | 2015-03-02T19:52:36                  |
>> +------------------+--------------------------------------+
>>
>> I tried an older cirros image (0.3.2) with the same result.
>>
>> The "nova show" command listed the fault message "No valid host was found. ", "code": 500". So the scheduler couldn't find a compute host, where it can launch the instance. But why?
>>
>>
>>
>>> On 13.04.2015 at 12:36, walterxj wrote:
>>> Hi,
>>>     I ran into this problem just like you, and I advise you to check the
>>> MD5 checksum of cirros-0.3.3-x86_64-disk.img first. The correct MD5
>>> checksum is 133eae9fb1c98f45894a4e60d8736619.
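>>>     For example, from the directory the image was downloaded to:
>>>     # md5sum cirros-0.3.3-x86_64-disk.img
>>>     133eae9fb1c98f45894a4e60d8736619  cirros-0.3.3-x86_64-disk.img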
>>>     Good luck!
>>>
>>> ------------------------------------------------------------------------
>>> walterxj
>>>
>>>    *From:* Joerg Streckfuss <openstack at dirtyhack.org>
>>>    *Date:* 2015-04-13 16:15
>>>    *To:* openstack <openstack at lists.openstack.org>
>>>    *Subject:* [Openstack] Nova Instance failed to spawn
>>>    Dear list,
>>>    I'm testing an OpenStack setup (OpenStack Juno) based on the following
>>>    architecture:
>>>    I have a 3-node cluster with one controller node, one compute node and
>>>    one network node.
>>>    All nodes are VMs running CentOS 7 on a Fedora 21 host.
>>>    I set up the cluster following this guide:
>>>    http://docs.openstack.org/juno/install-guide/install/yum/content/
>>>    I tried to boot an instance: "nova boot --flavor m1.tiny --image
>>>    cirros-0.3.3-x86_64 --nic net-id=8f82145c-b345-4614-8610-3f569346b35b
>>>    --security-group default --key-name demo-key demo-instance1"
>>>    But after a few seconds the instance goes into the ERROR state.
>>>    # nova show demo-instance1
>>>    +--------------------------------------+------------------------------------------------------------------------------------------+
>>>    | Property                             | Value |
>>>    +--------------------------------------+------------------------------------------------------------------------------------------+
>>>    | OS-DCF:diskConfig                    | MANUAL |
>>>    | OS-EXT-AZ:availability_zone          | nova |
>>>    | OS-EXT-STS:power_state               | 0 |
>>>    | OS-EXT-STS:task_state                | - |
>>>    | OS-EXT-STS:vm_state                  | error |
>>>    | OS-SRV-USG:launched_at               | - |
>>>    | OS-SRV-USG:terminated_at             | - |
>>>    | accessIPv4                           | |
>>>    | accessIPv6                           | |
>>>    | config_drive                         | |
>>>    | created                              | 2015-04-13T07:55:55Z |
>>>    | demo-net network                     | 192.168.1.16 |
>>>    | fault                                | {"message": "No valid host was found. ", "code": 500, "created": "2015-04-13T07:56:03Z"} |
>>>    | flavor                               | m1.tiny (1) |
>>>    | hostId                               | 6d039c4764c7cf52c3a384272555ba65fa07efeab46951d884cc0b7f |
>>>    | id                                   | 3c280f74-c33c-41ec-a83d-8161c9a2d62d |
>>>    | image                                | cirros-0.3.3-x86_64 (80f91497-838f-4484-8cfd-de9ee61b5aa4) |
>>>    | key_name                             | demo-key |
>>>    | metadata                             | {} |
>>>    | name                                 | demo-instance1 |
>>>    | os-extended-volumes:volumes_attached | [] |
>>>    | security_groups                      | default |
>>>    | status                               | ERROR |
>>>    | tenant_id                            | 715bbf0613004e50b541e266a8781cdb |
>>>    | updated                              | 2015-04-13T07:56:03Z |
>>>    | user_id                              | af955750ddef48f7b708f65c6d4d1c5b |
>>>    +--------------------------------------+------------------------------------------------------------------------------------------+
>>>    /var/log/nova/nova-api.log (controller node)
>>>    <snip>
>>>    2015-04-13 09:55:55.324 2992 INFO nova.osapi_compute.wsgi.server
>>>    [req-2c714c3f-8ebf-440d-9a50-55eabffdd7ed None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/images HTTP/1.1" status: 200 len:
>>>    714 time: 0.0871668
>>>    2015-04-13 09:55:55.399 2992 INFO nova.osapi_compute.wsgi.server
>>>    [req-bdf40a36-ef87-4af7-ba56-9341df591f59 None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/images HTTP/1.1" status: 200 len:
>>>    714 time: 0.0724702
>>>    2015-04-13 09:55:55.424 2991 INFO nova.osapi_compute.wsgi.server
>>>    [req-9e8f8e5a-5c54-40cf-8f44-c0fb79e71c76 None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/images/80f91497-838f-4484-8cfd-de9ee61b5aa4
>>>    HTTP/1.1" status: 200 len: 895 time: 0.0235929
>>>    2015-04-13 09:55:55.434 2991 INFO nova.api.openstack.wsgi
>>>    [req-289b1fc3-e106-4f86-932b-e26f585b4c23 None] HTTP exception thrown:
>>>    The resource could not be found.
>>>    2015-04-13 09:55:55.434 2991 INFO nova.osapi_compute.wsgi.server
>>>    [req-289b1fc3-e106-4f86-932b-e26f585b4c23 None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/flavors/m1.tiny HTTP/1.1" status:
>>>    404 len: 272 time: 0.0080121
>>>    2015-04-13 09:55:55.444 2991 INFO nova.osapi_compute.wsgi.server
>>>    [req-a3ca73e4-2ee9-4eff-a51d-32614361320e None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/flavors?is_public=None HTTP/1.1"
>>>    status: 200 len: 1383 time: 0.0082920
>>>    2015-04-13 09:55:55.456 2992 INFO nova.osapi_compute.wsgi.server
>>>    [req-114a36ce-aefa-4cf9-ae57-85e043f0e94e None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/flavors?is_public=None HTTP/1.1"
>>>    status: 200 len: 1383 time: 0.0085731
>>>    2015-04-13 09:55:55.466 2991 INFO nova.osapi_compute.wsgi.server
>>>    [req-66be59e1-2325-40f7-9e25-794987de6aee None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/flavors/1 HTTP/1.1" status: 200
>>>    len: 591 time: 0.0073400
>>>    2015-04-13 09:55:55.963 2991 INFO nova.osapi_compute.wsgi.server
>>>    [req-0c0bde64-3687-42bb-ab81-baf5415e23f9 None] 10.0.0.10 "POST
>>>    /v2/715bbf0613004e50b541e266a8781cdb/servers HTTP/1.1" status: 202 len:
>>>    730 time: 0.4953289
>>>    2015-04-13 09:55:56.005 2991 INFO nova.osapi_compute.wsgi.server
>>>    [req-deb8355c-fde8-482e-a61d-189e965ffbac None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/servers/3c280f74-c33c-41ec-a83d-8161c9a2d62d
>>>    HTTP/1.1" status: 200 len: 1476 time: 0.0392690
>>>    2015-04-13 09:55:56.015 2992 INFO nova.osapi_compute.wsgi.server
>>>    [req-45c40580-9c7a-42fa-b208-030a2036619b None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/flavors/1 HTTP/1.1" status: 200
>>>    len: 591 time: 0.0083711
>>>    2015-04-13 09:55:56.031 2991 INFO nova.osapi_compute.wsgi.server
>>>    [req-ded919e4-873f-4cee-9b10-a1ce742f2e34 None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/images/80f91497-838f-4484-8cfd-de9ee61b5aa4
>>>    HTTP/1.1" status: 200 len: 895 time: 0.0137031
>>>    2015-04-13 09:56:18.695 2991 INFO nova.osapi_compute.wsgi.server
>>>    [req-fbeac9da-5dae-4dc2-9b58-8d4c5e22e0fb None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/servers?name=demo-instance1
>>>    HTTP/1.1" status: 200 len: 536 time: 0.0326240
>>>    2015-04-13 09:56:18.784 2992 INFO nova.osapi_compute.wsgi.server
>>>    [req-9c88d9fc-d15b-4e10-8975-21ff386315c3 None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/servers/3c280f74-c33c-41ec-a83d-8161c9a2d62d
>>>    HTTP/1.1" status: 200 len: 1775 time: 0.0867519
>>>    2015-04-13 09:56:18.794 2991 INFO nova.osapi_compute.wsgi.server
>>>    [req-f5a0b96f-c889-4085-b679-a25366ee0ddc None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/flavors/1 HTTP/1.1" status: 200
>>>    len: 591 time: 0.0075860
>>>    2015-04-13 09:56:18.861 2992 INFO nova.osapi_compute.wsgi.server
>>>    [req-64c1cdf9-5440-43ef-a321-2e0fbe03d692 None] 10.0.0.10 "GET
>>>    /v2/715bbf0613004e50b541e266a8781cdb/images/80f91497-838f-4484-8cfd-de9ee61b5aa4
>>>    HTTP/1.1" status: 200 len: 895 time: 0.0651841
>>>    <snap>
>>>    /var/log/nova/nova-scheduler.log (controller node)
>>>    <snip>
>>>    2015-04-13 09:07:06.771 1035 AUDIT nova.service [-] Starting scheduler
>>>    node (version 2014.2.2-1.el7)
>>>    2015-04-13 09:07:07.330 1035 WARNING oslo.db.sqlalchemy.session
>>>    [req-ce8730f3-c69c-4490-ae7c-f78719e59842 ] SQL connection failed. -1
>>>    attempts left.
>>>    2015-04-13 09:07:17.844 1035 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-ce8730f3-c69c-4490-ae7c-f78719e59842 ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:07:17.885 1035 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-ce8730f3-c69c-4490-ae7c-f78719e59842 ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:20:49.172 1035 INFO nova.openstack.common.periodic_task
>>>    [req-f5f3429e-12ac-4277-a634-003eab00e7cc None] Skipping periodic task
>>>    _periodic_update_dns because its interval is negative
>>>    2015-04-13 09:20:49.181 1035 INFO oslo.messaging._drivers.impl_rabbit
>>>    [-] Connecting to AMQP server on controller:5672
>>>    2015-04-13 09:20:49.187 1035 INFO oslo.messaging._drivers.impl_rabbit
>>>    [-] Connected to AMQP server on controller:5672
>>>    2015-04-13 09:21:10.122 1035 INFO nova.filters
>>>    [req-f5f3429e-12ac-4277-a634-003eab00e7cc None] Filter RetryFilter
>>>    returned 0 hosts
>>>    2015-04-13 09:30:01.918 1035 INFO nova.filters
>>>    [req-6a5f9135-2efc-4d90-ae9a-11c2f2fee079 None] Filter RetryFilter
>>>    returned 0 hosts
>>>    2015-04-13 09:48:19.150 1035 INFO nova.filters
>>>    [req-0f9d3cdd-18b4-4477-b20a-f1ef1051b045 None] Filter RetryFilter
>>>    returned 0 hosts
>>>    2015-04-13 09:56:03.617 1035 INFO nova.filters
>>>    [req-0c0bde64-3687-42bb-ab81-baf5415e23f9 None] Filter RetryFilter
>>>    returned 0 hosts
>>>    <snap>
>>>    /var/log/nova/nova-compute.log (compute node)
>>>    <snip>
>>>    2015-04-13 09:55:56.892 1308 AUDIT nova.compute.manager
>>>    [req-0c0bde64-3687-42bb-ab81-baf5415e23f9 None] [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d] Starting instance...
>>>    2015-04-13 09:55:56.948 1308 AUDIT nova.compute.claims [-] [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d] Attempting claim: memory 512 MB,
>>>    disk 1 GB
>>>    2015-04-13 09:55:56.948 1308 AUDIT nova.compute.claims [-] [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d] Total memory: 3952 MB, used:
>>>    512.00 MB
>>>    2015-04-13 09:55:56.949 1308 AUDIT nova.compute.claims [-] [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d] memory limit: 5928.00 MB, free:
>>>    5416.00 MB
>>>    2015-04-13 09:55:56.949 1308 AUDIT nova.compute.claims [-] [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d] Total disk: 29 GB, used: 0.00 GB
>>>    2015-04-13 09:55:56.949 1308 AUDIT nova.compute.claims [-] [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d] disk limit not specified,
>>>    defaulting to unlimited
>>>    2015-04-13 09:55:56.954 1308 AUDIT nova.compute.claims [-] [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d] Claim successful
>>>    2015-04-13 09:55:57.079 1308 INFO nova.scheduler.client.report [-]
>>>    Compute_service record updated for ('compute.dirtyhack.org',
>>>    'compute.dirtyhack.org')
>>>    2015-04-13 09:55:57.156 1308 INFO nova.scheduler.client.report [-]
>>>    Compute_service record updated for ('compute.dirtyhack.org',
>>>    'compute.dirtyhack.org')
>>>    2015-04-13 09:55:57.368 1308 INFO nova.virt.libvirt.driver [-]
>>>    [instance: 3c280f74-c33c-41ec-a83d-8161c9a2d62d] Creating image
>>>    2015-04-13 09:55:57.719 1308 INFO nova.scheduler.client.report [-]
>>>    Compute_service record updated for ('compute.dirtyhack.org',
>>>    'compute.dirtyhack.org')
>>>    2015-04-13 09:56:04.253 1308 WARNING nova.virt.disk.vfs.guestfs [-]
>>>    Failed to close augeas aug_close: do_aug_close: you must call
>>>    'aug-init'
>>>    first to initialize Augeas
>>>    2015-04-13 09:56:04.286 1308 ERROR nova.compute.manager [-] [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d] Instance failed to spawn
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d] Traceback (most recent call last):
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]   File
>>>    "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2246,
>>>    in _build_resources
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]     yield resources
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]   File
>>>    "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2116,
>>>    in _build_and_run_instance
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]
>>>    block_device_info=block_device_info)
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]   File
>>>    "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
>>>    2620, in spawn
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]     write_to_disk=True)
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]   File
>>>    "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
>>>    4159, in _get_guest_xml
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]     context)
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]   File
>>>    "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
>>>    3937, in _get_guest_config
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]     flavor,
>>>    CONF.libvirt.virt_type)
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]   File
>>>    "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 352,
>>>    in get_config
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]     _("Unexpected vif_type=%s") %
>>>    vif_type)
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d] NovaException: Unexpected
>>>    vif_type=binding_failed
>>>    2015-04-13 09:56:04.286 1308 TRACE nova.compute.manager [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d]
>>>    2015-04-13 09:56:04.287 1308 AUDIT nova.compute.manager
>>>    [req-0c0bde64-3687-42bb-ab81-baf5415e23f9 None] [instance:
>>>    3c280f74-c33c-41ec-a83d-8161c9a2d62d] Terminating instance
>>>    2015-04-13 09:56:04.288 1308 WARNING nova.virt.libvirt.driver [-]
>>>    [instance: 3c280f74-c33c-41ec-a83d-8161c9a2d62d] During wait destroy,
>>>    instance disappeared.
>>>    2015-04-13 09:56:04.339 1308 INFO nova.virt.libvirt.driver [-]
>>>    [instance: 3c280f74-c33c-41ec-a83d-8161c9a2d62d] Deleting instance
>>>    files
>>>    /var/lib/nova/instances/3c280f74-c33c-41ec-a83d-8161c9a2d62d_del
>>>    2015-04-13 09:56:04.340 1308 INFO nova.virt.libvirt.driver [-]
>>>    [instance: 3c280f74-c33c-41ec-a83d-8161c9a2d62d] Deletion of
>>>    /var/lib/nova/instances/3c280f74-c33c-41ec-a83d-8161c9a2d62d_del
>>>    complete
>>>    <snap>
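>>>    The "NovaException: Unexpected vif_type=binding_failed" above suggests
>>>    that Neutron could not bind the port for this instance. If I read the
>>>    ML2 documentation correctly, the failed port can be inspected directly
>>>    (a sketch; the port ID is a placeholder taken from "neutron port-list"):
>>>    # neutron port-list
>>>    # neutron port-show <port-id> | grep binding
>>>    A port showing binding:vif_type=binding_failed would point at the
>>>    ML2/Open vSwitch agent setup on the compute node.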
>>>    /var/log/neutron/openvswitch-agent.log (compute node)
>>>    <snip>
>>>    2015-04-13 09:07:43.740 1309 WARNING neutron.agent.securitygroups_rpc
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 None] Driver configuration
>>>    doesn't match with enable_security_group
>>>    2015-04-13 09:07:43.740 1309 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:07:43.747 1309 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:07:43.752 1309 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:07:43.758 1309 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:07:43.763 1309 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:07:43.773 1309 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:07:43.778 1309 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:07:43.784 1309 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:07:43.789 1309 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:07:43.795 1309 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:07:43.799 1309 INFO
>>>    neutron.plugins.openvswitch.agent.ovs_neutron_agent
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 None] Agent initialized
>>>    successfully, now running...
>>>    2015-04-13 09:07:43.802 1309 INFO
>>>    neutron.plugins.openvswitch.agent.ovs_neutron_agent
>>>    [req-24c7d7d2-505f-4d95-94ab-0f8d14c1e5e8 None] Agent out of sync with
>>>    plugin!
>>>    2015-04-13 09:20:50.913 1309 INFO neutron.agent.securitygroups_rpc
>>>    [req-e5302669-561d-4d0f-9346-93f36c563f78 None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    2015-04-13 09:29:55.567 1309 INFO neutron.agent.securitygroups_rpc
>>>    [req-f30442b7-b5e4-48bd-9460-2df168168158 None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    2015-04-13 09:48:12.709 1309 INFO neutron.agent.securitygroups_rpc
>>>    [req-9af9abc7-22c2-4989-a1f5-ef65005376c5 None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    2015-04-13 09:49:24.864 1309 INFO neutron.agent.securitygroups_rpc
>>>    [req-1fa069aa-f4d2-418e-b969-2cf6a1ad4261 None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    2015-04-13 09:49:28.187 1309 INFO neutron.agent.securitygroups_rpc
>>>    [req-e89b568f-75cf-4f49-991b-5634e10568c5 None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    2015-04-13 09:49:34.623 1309 INFO neutron.agent.securitygroups_rpc
>>>    [req-e9886087-b1dd-4b50-b0a0-b717ce68082a None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    2015-04-13 09:55:57.285 1309 INFO neutron.agent.securitygroups_rpc
>>>    [req-03dd94f6-9681-4357-b16a-2691a848ab68 None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    <snap>
>>>    /var/log/neutron/openvswitch-agent.log (network node)
>>>    <snip>
>>>    2015-04-13 09:45:25.998 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [-] Connecting to AMQP server on controller:5672
>>>    2015-04-13 09:45:26.008 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [-] Connected to AMQP server on controller:5672
>>>    2015-04-13 09:45:26.010 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [-] Connecting to AMQP server on controller:5672
>>>    2015-04-13 09:45:26.017 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [-] Connected to AMQP server on controller:5672
>>>    2015-04-13 09:45:26.128 2590 WARNING neutron.agent.securitygroups_rpc
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf None] Driver configuration
>>>    doesn't match with enable_security_group
>>>    2015-04-13 09:45:26.129 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:45:26.136 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:45:26.140 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:45:26.147 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:45:26.152 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:45:26.159 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:45:26.164 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:45:26.170 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:45:26.175 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:45:26.182 2590 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:45:26.186 2590 INFO
>>>    neutron.plugins.openvswitch.agent.ovs_neutron_agent
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf None] Agent initialized
>>>    successfully, now running...
>>>    2015-04-13 09:45:26.189 2590 INFO
>>>    neutron.plugins.openvswitch.agent.ovs_neutron_agent
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf None] Agent out of sync with
>>>    plugin!
>>>    2015-04-13 09:45:28.360 2590 INFO neutron.agent.securitygroups_rpc
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf None] Skipping method
>>>    prepare_devices_filter as firewall is disabled or configured as
>>>    NoopFirewallDriver.
>>>    2015-04-13 09:45:28.497 2590 WARNING
>>>    neutron.plugins.openvswitch.agent.ovs_neutron_agent
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf None] Device
>>>    1fd3d4a1-1c3e-464c-856f-dc4a7c502b4a not defined on plugin
>>>    2015-04-13 09:45:28.721 2590 WARNING
>>>    neutron.plugins.openvswitch.agent.ovs_neutron_agent
>>>    [req-aff27075-620a-430d-805d-b07cf8086aaf None] Device
>>>    dad453a8-9a4b-48b2-842f-07063a9911c9 not defined on plugin
>>>    2015-04-13 09:48:11.962 2590 INFO neutron.agent.securitygroups_rpc
>>>    [req-9af9abc7-22c2-4989-a1f5-ef65005376c5 None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    2015-04-13 09:49:24.118 2590 INFO neutron.agent.securitygroups_rpc
>>>    [req-1fa069aa-f4d2-418e-b969-2cf6a1ad4261 None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    2015-04-13 09:49:27.440 2590 INFO neutron.agent.securitygroups_rpc
>>>    [req-e89b568f-75cf-4f49-991b-5634e10568c5 None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    2015-04-13 09:49:33.877 2590 INFO neutron.agent.securitygroups_rpc
>>>    [req-e9886087-b1dd-4b50-b0a0-b717ce68082a None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    2015-04-13 09:55:56.538 2590 INFO neutron.agent.securitygroups_rpc
>>>    [req-03dd94f6-9681-4357-b16a-2691a848ab68 None] Security group member
>>>    updated [u'd1e0c56d-d46a-4081-ba39-05ae4dcbda2c']
>>>    <snap>
>>>    /var/log/neutron/dhcp-agent.log (network node)
>>>    <snip>2015-04-13 09:45:25.550 2589 WARNING neutron.agent.linux.dhcp
>>>    [req-403bb372-32c3-4a1d-b616-9e6ee7fa7ad2 None] FAILED VERSION
>>>    REQUIREMENT FOR DNSMASQ. DHCP AGENT MAY NOT RUN CORRECTLY WHEN SERVING
>>>    IPV6 STATEFUL SUBNETS! Please ensure that its version is 2.67 or above!
>>>    2015-04-13 09:45:25.556 2589 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-a4773a7a-3b90-440a-8a12-f75549f4ca39 ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:45:25.617 2589 INFO oslo.messaging._drivers.impl_rabbit
>>>    [-] Connecting to AMQP server on controller:5672
>>>    2015-04-13 09:45:25.625 2589 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-a4773a7a-3b90-440a-8a12-f75549f4ca39 ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:45:25.625 2589 INFO oslo.messaging._drivers.impl_rabbit
>>>    [-] Connected to AMQP server on controller:5672
>>>    2015-04-13 09:45:25.627 2589 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-a4773a7a-3b90-440a-8a12-f75549f4ca39 ] Connecting to AMQP
>>>    server on
>>>    controller:5672
>>>    2015-04-13 09:45:25.634 2589 INFO oslo.messaging._drivers.impl_rabbit
>>>    [req-a4773a7a-3b90-440a-8a12-f75549f4ca39 ] Connected to AMQP server on
>>>    controller:5672
>>>    2015-04-13 09:45:25.637 2589 INFO neutron.agent.dhcp_agent [-] DHCP
>>>    agent started
>>>    2015-04-13 09:45:26.025 2589 INFO neutron.agent.dhcp_agent
>>>    [req-a4773a7a-3b90-440a-8a12-f75549f4ca39 None] Synchronizing state
>>>    2015-04-13 09:45:27.034 2589 INFO neutron.agent.dhcp_agent
>>>    [req-a4773a7a-3b90-440a-8a12-f75549f4ca39 None] Synchronizing state
>>>    complete
>>>    <snap>
>>>    controller node
>>>    ---------------
>>>    keystone.conf:
>>>    [DEFAULT]
>>>    admin_token=xxxxxxxxx
>>>    debug=false
>>>    verbose=true
>>>    [database]
>>>    connection = mysql://keystone:xxxxxx@controller/keystone
>>>    [revoke]
>>>    driver = keystone.contrib.revoke.backends.sql.Revoke
>>>    [token]
>>>    provider = keystone.token.providers.uuid.Provider
>>>    driver = keystone.token.persistence.backends.sql.Token
>>>    neutron.conf:
>>>    [database]
>>>    connection = mysql://neutron:xxxxxx@controller/neutron
>>>    [DEFAULT]
>>>    verbose = True
>>>    rpc_backend = rabbit
>>>    rabbit_host = controller
>>>    rabbit_password = xxxxx
>>>    core_plugin = ml2
>>>    service_plugins = router
>>>    allow_overlapping_ips = True
>>>    notify_nova_on_port_status_changes = True
>>>    notify_nova_on_port_data_changes = True
>>>    nova_url = http://controller:8774/v2
>>>    nova_admin_auth_url = http://controller:35357/v2.0
>>>    nova_region_name = regionOne
>>>    nova_admin_username = nova
>>>    nova_admin_tenant_id = xxxxxxx
>>>    nova_admin_password = xxxxxx
>>>    auth_strategy = keystone
>>>    [keystone_authtoken]
>>>    auth_uri = http://controller:5000/v2.0
>>>    identity_uri = http://controller:35357
>>>    admin_tenant_name = service
>>>    admin_user = neutron
>>>    admin_password = foobar
>>>    [neutron]
>>>    service_metadata_proxy = True
>>>    metadata_proxy_shared_secret = foobar
>>>    ml2_conf.ini:
>>>    [ml2]
>>>    type_drivers = flat,gre
>>>    tenant_network_types = gre
>>>    mechanism_drivers = openvswitch
>>>    [ml2_type_gre]
>>>    tunnel_id_ranges = 1:1000
>>>    [securitygroup]
>>>    enable_security_group = True
>>>    enable_ipset = True
>>>    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
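>>>    (Note: on CentOS the Neutron services expect the plugin configuration at
>>>    /etc/neutron/plugin.ini, so per the install guide the ML2 file is linked
>>>    there:
>>>    # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
>>>    Without that link, neutron-server would not pick up the [ml2] settings
>>>    above.)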
>>>    compute node:
>>>    -------------
>>>    nova.conf
>>>    [DEFAULT]
>>>    my_ip = 10.0.0.20
>>>    verbose = True
>>>    rpc_backend = rabbit
>>>    rabbit_host = controller
>>>    rabbit_password = xxxxx
>>>    vnc_enabled = True
>>>    vncserver_listen = 0.0.0.0
>>>    vncserver_proxyclient_address = 10.0.0.20
>>>    novncproxy_base_url = http://controller:6080/vnc_auto.html
>>>    network_api_class = nova.network.neutronv2.api.API
>>>    security_group_api = neutron
>>>    linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
>>>    firewall_driver = nova.virt.firewall.NoopFirewallDriver
>>>    auth_strategy = keystone
>>>    [keystone_authtoken]
>>>    auth_uri = http://controller:5000/v2.0
>>>    identity_uri = http://controller:35357
>>>    admin_tenant_name = service
>>>    admin_user = nova
>>>    admin_password = xxxxx
>>>    [glance]
>>>    host = controller
>>>    [libvirt]
>>>    virt_type = qemu
>>>    [neutron]
>>>    url = http://controller:9696
>>>    auth_strategy = keystone
>>>    admin_auth_url = http://controller:35357/v2.0
>>>    admin_tenant_name = service
>>>    admin_username = neutron
>>>    admin_password = xxxx
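>>>    (A quick reachability test for the Neutron endpoint configured above,
>>>    run from the compute node, would be:
>>>    # curl http://controller:9696
>>>    which should return the Neutron API version document.)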
>>>    neutron.conf:
>>>    [DEFAULT]
>>>    verbose = True
>>>    rpc_backend = rabbit
>>>    rabbit_host = controller
>>>    rabbit_password = xxxxxx
>>>    core_plugin = ml2
>>>    service_plugins = router
>>>    allow_overlapping_ips = True
>>>    auth_strategy = keystone
>>>    [keystone_authtoken]
>>>    auth_uri = http://controller:5000/v2.0
>>>    identity_uri = http://controller:35357
>>>    admin_tenant_name = service
>>>    admin_user = neutron
>>>    admin_password = xxxx
>>>    ml2_conf.ini:
>>>    [ml2]
>>>    type_drivers = flat,gre
>>>    tenant_network_types = gre
>>>    mechanism_drivers = openvswitch
>>>    [ml2_type_gre]
>>>    tunnel_id_ranges = 1:1000
>>>    [securitygroup]
>>>    enable_security_group = True
>>>    enable_ipset = True
>>>    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>>>    [ovs]
>>>    local_ip = 10.1.0.20
>>>    enable_tunneling = True
>>>    [agent]
>>>    tunnel_types = gre
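>>>    (If the agent picked these settings up, ovs-vsctl on the compute node
>>>    should show br-int and br-tun with a GRE port towards the network node:
>>>    # ovs-vsctl show
>>>    # ovs-vsctl list-ports br-tun
>>>    I would expect a gre-<something> port carrying local_ip/remote_ip
>>>    options.)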
>>>    network node:
>>>    -------------
>>>    neutron.conf:
>>>    [DEFAULT]
>>>    verbose = True
>>>    rpc_backend = rabbit
>>>    rabbit_host = controller
>>>    rabbit_password = xxxxx
>>>    core_plugin = ml2
>>>    service_plugins = router
>>>    allow_overlapping_ips = True
>>>    auth_strategy = keystone
>>>    [keystone_authtoken]
>>>    auth_uri = http://controller:5000/v2.0
>>>    identity_uri = http://controller:35357
>>>    admin_tenant_name = service
>>>    admin_user = neutron
>>>    admin_password = xxxxx
>>>    ml2_conf.ini:
>>>    [ml2]
>>>    type_drivers = flat,gre
>>>    tenant_network_types = gre
>>>    mechanism_drivers = openvswitch
>>>    [ml2_type_flat]
>>>    flat_networks = external
>>>    [ml2_type_gre]
>>>    tunnel_id_ranges = 1:1000
>>>    [securitygroup]
>>>    enable_security_group = True
>>>    enable_ipset = True
>>>    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>>>    [ovs]
>>>    local_ip = 10.1.0.30
>>>    enable_tunneling = True
>>>    bridge_mappings = external:br-ex
>>>    [agent]
>>>    tunnel_types = gre
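>>>    (If it helps with debugging, I can also post the output of
>>>    # nova service-list
>>>    # neutron agent-list
>>>    from the controller, which should show whether all services and agents
>>>    registered under the expected host names and are alive.)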
>>>    Regards,
>>>    Joerg
> 
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 



