[Openstack] Trove MySQL v5.[67] images?

Mark Kirkwood mark.kirkwood at catalyst.net.nz
Mon Oct 10 02:44:33 UTC 2016


On 10/10/16 00:53, Turbo Fredriksson wrote:

> On Oct 2, 2016, at 7:07 PM, Turbo Fredriksson wrote:
>
>> My original question is still valid though:
>>
>> Does anyone have a [non-*stack, working] Trove image with MySQL v5.[67] that
>> they could consider sharing with me?
> No response for almost a week.. Either no one is using Trove (successfully)
> or people don't want to share..
>
> Both choices beg questions, but I guess I have to give up on this again.
> I don't know if it's bugs in Trove, the guest image or my configuration
> that's faulty, but every single attempt is met by an ERROR.

Turbo,

Here are some notes I wrote (in 2014) [1] for installing Trove (and most
of Openstack) on a single Ubuntu 14.04 server using packages. It is
obviously only for the purpose of learning...*but* a) it did work and b)
no devstack was used :-)

Casting my mind back to that time, I note that you have to be pretty
determined, as there are many components to get working together and any
one of them not working right will trip you up! (I document quite a few
bugs I encountered.)

Also probably worth noting - I *first* got things going in devstack, and
this provided much help for pinpointing things I'd got wrong in the
package install (essentially I ran 2 Ubuntu VMs...one with devstack, one
with my packaging setup).

Best wishes

Mark

[1] I'm working on Swift ATM - when that is all in production I may look 
at Trove again
-------------- next part --------------
Install openstack on single Ubuntu 14.04 node
=============================================

We loosely follow http://fosskb.wordpress.com/2014/04/12/openstack-icehouse-on-ubuntu-12-04-lts-single-machine-setup/


1/ Support packages

$ sudo apt-get install rabbitmq-server
$ sudo apt-get install mysql-server python-mysqldb
$ sudo vi /etc/mysql/my.cnf
[mysqld]
...
# Openstack settings
bind-address = 0.0.0.0
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

$ sudo service mysql restart

$ sudo apt-get install ntp vlan bridge-utils
$ sudo  vi /etc/sysctl.conf
# Openstack settings
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
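# (ip_forward lets this host route between the instance networks; rp_filter is
# disabled so packets taking asymmetric paths via the Neutron namespaces are
# not dropped)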

$ sudo sysctl -p


2/ Keystone

$ sudo apt-get install keystone
$ mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'password';
quit;

$ sudo vi /etc/keystone/keystone.conf
[DEFAULT]
log_file=/var/log/keystone/keystone.log

[database]
connection = mysql://keystone:password@192.168.122.33/keystone


$ sudo service keystone restart
$ sudo keystone-manage db_sync

$ export OS_SERVICE_TOKEN=ADMIN
$ export OS_SERVICE_ENDPOINT=http://192.168.122.33:35357/v2.0
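# note: OS_SERVICE_TOKEN must match the admin_token setting in
# /etc/keystone/keystone.conf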

$ keystone tenant-create --name=admin --description="Admin Tenant"
$ keystone tenant-create --name=service --description="Service Tenant"
$ keystone user-create --name=admin --pass=ADMIN --email=admin@example.com
$ keystone role-create --name=admin
$ keystone user-role-add --user=admin --tenant=admin --role=admin

$ keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
$ keystone endpoint-create --service=keystone --publicurl=http://192.168.122.33:5000/v2.0 --internalurl=http://192.168.122.33:5000/v2.0 --adminurl=http://192.168.122.33:35357/v2.0

$ vi creds
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.122.33:35357/v2.0

$ unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
$ . creds                                     # test it works
$ keystone token-get
$ keystone user-list

$ keystone tenant-create --name=demo          # add another user and tenant (proj)
$ keystone user-create --name demo --tenant demo --pass password
$ keystone user-role-add --user admin --role admin --tenant demo

$ vi creds-demo
export OS_USERNAME=demo
export OS_PASSWORD=password
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://192.168.122.33:35357/v2.0


3/ Glance

$ sudo apt-get install glance
$ . creds                                    # if not done already

$ mysql -u root -p
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'password';
quit;

$ keystone user-create --name=glance --pass=password --email=glance@example.com
$ keystone user-role-add --user=glance --tenant=service --role=admin
$ keystone service-create --name=glance --type=image --description="Glance Image Service"
$ keystone endpoint-create --service=glance --publicurl=http://192.168.122.33:9292 --internalurl=http://192.168.122.33:9292 --adminurl=http://192.168.122.33:9292

$ sudo vi /etc/glance/glance-api.conf
# sqlite_db = /var/lib/glance/glance.sqlite
connection = mysql://glance:password@192.168.122.33/glance

[keystone_authtoken]
auth_host = 192.168.122.33
#auth_port = 5000
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password

[paste_deploy]
flavor = keystone


$ sudo vi /etc/glance/glance-registry.conf
# sqlite_db = /var/lib/glance/glance.sqlite
connection = mysql://glance:password@192.168.122.33/glance

[keystone_authtoken]
auth_host = 192.168.122.33
#auth_port = 5000
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password

[paste_deploy]
flavor = keystone

$ sudo service glance-api restart
$ sudo service glance-registry restart

$ sudo glance-manage db_sync

$ glance image-create --name Cirros --is-public true --container-format bare --disk-format qcow2 --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
$ glance index


4/ Nova

$ sudo apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient nova-compute nova-console

$ mysql -u root -p
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'password';
quit;

$ keystone user-create --name=nova --pass=password --email=nova@example.com
$ keystone user-role-add --user=nova --tenant=service --role=admin
$ keystone service-create --name=nova --type=compute --description="OpenStack Compute"
$ keystone endpoint-create --service=nova --publicurl=http://192.168.122.33:8774/v2/%\(tenant_id\)s --internalurl=http://192.168.122.33:8774/v2/%\(tenant_id\)s --adminurl=http://192.168.122.33:8774/v2/%\(tenant_id\)s

$ sudo vi /etc/nova/nova.conf
[DEFAULT]
vif_plugging_timeout = 300
vif_plugging_is_fatal = True
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
service_neutron_metadata_proxy = True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
dhcpbridge_flagfile = /etc/nova/nova.conf
iscsi_helper=tgtadm
# if these are enabled, add the mangle rule in rc.local
libvirt_use_virtio_for_bridges=False
#connection_type=libvirt
root_helper=sudo /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
debug=True
rpc_backend = nova.rpc.impl_kombu
glance_api_servers = 192.168.122.33:9292
rabbit_hosts = 192.168.122.33
my_ip = 192.168.122.33
vncserver_listen = 192.168.122.33
vncserver_proxyclient_address = 192.168.122.33
novncproxy_base_url=http://192.168.122.33:6080/vnc_auto.html
auth_strategy=keystone
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.168.122.33:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=password
neutron_admin_auth_url=http://192.168.122.33:35357/v2.0
neutron_region_name = regionOne
linuxnet_interface_driver = 
firewall_driver=nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
security_group_api=neutron
api_paste_config=/etc/nova/api-paste.ini
#volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata

[database]
connection = mysql://nova:password@192.168.122.33/nova

[keystone_authtoken]
signing_dir = /var/cache/nova
#auth_uri = http://192.168.122.33:5000
auth_host = 192.168.122.33
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password

$ sudo vi /etc/nova/nova-compute.conf
[DEFAULT]
compute_driver = libvirt.LibvirtDriver
force_config_drive = always

[libvirt]
inject_partition = -2
cpu_mode = none
virt_type = kvm


$ sudo nova-manage db sync

$ sudo mkdir /var/cache/nova
$ sudo chown nova /var/cache/nova
$ sudo chmod 700 /var/cache/nova
$ sudo visudo                                       # so can run helper
...
nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *

$ sudo service nova-api restart 
$ sudo service nova-cert restart
$ sudo service nova-consoleauth restart 
$ sudo service nova-scheduler restart
$ sudo service nova-conductor restart
$ sudo service nova-novncproxy restart
$ sudo service nova-compute restart
$ sudo service nova-console restart

$ sudo nova-manage service list                     # check running
$ nova list                                         # check client connect


5/ Neutron

$ sudo apt-get install neutron-server neutron-common neutron-dhcp-agent neutron-l3-agent neutron-plugin-ml2 neutron-plugin-openvswitch-agent neutron-metadata-agent openvswitch-switch

$ mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'password';
quit;

$ keystone user-create --name=neutron --pass=password --email=neutron@example.com
$ keystone service-create --name=neutron --type=network --description="OpenStack Networking"
$ keystone user-role-add --user=neutron --tenant=service --role=admin
$ keystone endpoint-create --service=neutron --publicurl http://192.168.122.33:9696 --adminurl http://192.168.122.33:9696  --internalurl http://192.168.122.33:9696

$ sudo vi  /etc/neutron/neutron.conf
[DEFAULT]
verbose = True
debug = True
state_path = /var/lib/neutron
lock_path = $state_path/lock
policy_file = /etc/neutron/policy.json
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
auth_strategy = keystone
#dhcp_agent_notification = True
allow_overlapping_ips = True
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_hosts = 192.168.122.33
rabbit_port = 5672
#rabbit_userid = rabbit
notification_driver = neutron.openstack.common.notifier.rpc_notifier
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://192.168.122.33:8774/v2
nova_admin_username = nova
# keystone tenant 'service'
nova_admin_tenant_id = c85d14ae25c241598c4b0de04b491121
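# (look the tenant id up with: keystone tenant-list)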
nova_admin_password = password
nova_admin_auth_url = http://192.168.122.33:35357/v2.0
neutron_region_name = regionOne

[quotas]
# quota_driver = neutron.db.quota_db.DbQuotaDriver

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_host = 192.168.122.33
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = password
signing_dir = /var/cache/neutron

[database]
# set in plugin
#connection = 

[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default


$ sudo vi /etc/neutron/metadata_agent.ini
[DEFAULT]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
nova_metadata_ip = 192.168.122.33
debug = True
verbose = True
auth_url = http://192.168.122.33:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = password


$ sudo vi /etc/neutron/l3_agent.ini
[DEFAULT]
l3_agent_manager = neutron.agent.l3_agent.L3NATAgentWithStateReport
external_network_bridge = br-ex
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
ovs_use_veth = False
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
use_namespaces = True
debug = True
verbose = True


$ sudo vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
dhcp_agent_manager = neutron.agent.dhcp_agent.DhcpAgentWithStateReport
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
ovs_use_veth = False
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
use_namespaces = True
debug = True
verbose = True
#dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

$ sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge

[ml2_type_flat]
# flat_networks =

[ml2_type_vlan]
# network_vlan_ranges =

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
vni_ranges = 1001:2000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[database]
connection = mysql://neutron:password@192.168.122.33/neutron?charset=utf8

[ovs]
local_ip = 192.168.122.33

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf


$ sudo vi /etc/neutron/rootwrap.conf
[DEFAULT]
filters_path=/etc/neutron/rootwrap.d


$ sudo visudo                                       # so can run helper
...
neutron ALL = (root) NOPASSWD: /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf *

$ sudo neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

$ sudo mkdir /var/cache/neutron
$ sudo chown neutron /var/cache/neutron
$ sudo chmod 700 /var/cache/neutron

$ sudo ovs-vsctl add-br br-int
$ sudo ovs-vsctl br-set-external-id br-int bridge-id br-int
$ sudo ovs-vsctl add-br br-ex
$ sudo ovs-vsctl br-set-external-id br-ex bridge-id br-ex
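# br-int is the integration bridge the OVS agent plugs instance ports into;
# br-ex backs the external network (external_network_bridge in l3_agent.ini)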

$ sudo service neutron-server restart
$ sudo service neutron-metadata-agent restart
$ sudo service neutron-plugin-openvswitch-agent restart
$ sudo service neutron-dhcp-agent restart
$ sudo service neutron-l3-agent restart

$ neutron agent-list
+--------------------------------------+--------------------+--------+-------+----------------+
| id                                   | agent_type         | host   | alive | admin_state_up |
+--------------------------------------+--------------------+--------+-------+----------------+
| 2b850806-a930-48a5-ad6f-0897f8aa83a0 | L3 agent           | stack3 | :-)   | True           |
| 6b3ef5ef-d1fd-4120-8001-a963c53d9579 | Metadata agent     | stack3 | :-)   | True           |
| a73687fb-0069-4256-b117-4a1b728f0487 | Open vSwitch agent | stack3 | :-)   | True           |
| b7996011-aac2-4428-b73d-cad40766d825 | DHCP agent         | stack3 | :-)   | True           |
+--------------------------------------+--------------------+--------+-------+----------------+


6/ Horizon

$ sudo apt-get install openstack-dashboard

[url stack3/horizon]
[user: admin, password: ADMIN]


7/ Setup network


$ neutron net-create --os-tenant-name demo private     # private 10.0.0.0/24 net
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 68a5a249-b8f5-4417-9d54-3bb5fdbd236b |
| name                      | private                              |
| provider:network_type     | local                                |
| provider:physical_network |                                      |
| provider:segmentation_id  |                                      |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 96b48cbb1b0d4a8e9ad813453af3a300     |
+---------------------------+--------------------------------------+

$ neutron subnet-create --os-tenant-name demo --ip_version 4 --gateway 10.0.0.1 --name private-subnet 68a5a249-b8f5-4417-9d54-3bb5fdbd236b 10.0.0.0/24
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr             | 10.0.0.0/24                                |
| dns_nameservers  |                                            |
| enable_dhcp      | True                                       |
| gateway_ip       | 10.0.0.1                                   |
| host_routes      |                                            |
| id               | d55614c0-fce9-4042-99f6-0b96effd550f       |
| ip_version       | 4                                          |
| name             | private-subnet                             |
| network_id       | 68a5a249-b8f5-4417-9d54-3bb5fdbd236b       |
| tenant_id        | 96b48cbb1b0d4a8e9ad813453af3a300           |
+------------------+--------------------------------------------+

$ neutron router-create --os-tenant-name demo router1
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 626d031f-72d6-48e4-a276-f608752d4f68 |
| name                  | router1                              |
| status                | ACTIVE                               |
| tenant_id             | 96b48cbb1b0d4a8e9ad813453af3a300     |
+-----------------------+--------------------------------------+

$ neutron router-interface-add 626d031f-72d6-48e4-a276-f608752d4f68 d55614c0-fce9-4042-99f6-0b96effd550f
Added interface 89ca3f7c-a0ad-49a2-b723-20a3c1d99c97 to router 626d031f-72d6-48e4-a276-f608752d4f68.

$ neutron net-create public -- --router:external=True       # public net 172.24.0.0/24
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3fecfd74-7222-4c42-9029-4344660447c3 |
| name                      | public                               |
| provider:network_type     | local                                |
| provider:physical_network |                                      |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 96b48cbb1b0d4a8e9ad813453af3a300     |
+---------------------------+--------------------------------------+

$ neutron subnet-create --ip_version 4 --gateway 172.24.4.1 --name public-subnet \
  3fecfd74-7222-4c42-9029-4344660447c3 172.24.4.0/24 -- --enable_dhcp=False
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "172.24.4.2", "end": "172.24.4.254"} |
| cidr             | 172.24.4.0/24                                  |
| dns_nameservers  |                                                |
| enable_dhcp      | False                                          |
| gateway_ip       | 172.24.4.1                                     |
| host_routes      |                                                |
| id               | e8063c60-dc0b-4313-9918-a14359a608b6           |
| ip_version       | 4                                              |
| name             | public-subnet                                  |
| network_id       | 3fecfd74-7222-4c42-9029-4344660447c3           |
| tenant_id        | 96b48cbb1b0d4a8e9ad813453af3a300               |
+------------------+------------------------------------------------+

$ neutron router-gateway-set 626d031f-72d6-48e4-a276-f608752d4f68 3fecfd74-7222-4c42-9029-4344660447c3
Set gateway for router 626d031f-72d6-48e4-a276-f608752d4f68

$ sudo ip addr flush dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex
$ sudo ip link set br-ex up
$ sudo route add -net 10.0.0.0/24 gw 172.24.4.2
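# (172.24.4.2 should be the router1 gateway port on the public subnet - the
# first address in its allocation pool)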

$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.122.1   0.0.0.0         UG    0      0        0 eth0
10.0.0.0        172.24.4.2      255.255.255.0   UG    0      0        0 br-ex
172.24.4.0      *               255.255.255.0   U     0      0        0 br-ex
192.168.122.0   *               255.255.255.0   U     0      0        0 eth0

$ nova boot  --flavor 2 --image fab89c61-7c8c-4a41-a823-29ed09b60603 cirros0  
$ nova list
+--------------------------------------+-----------+--------+------------+-------------+------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks         |
+--------------------------------------+-----------+--------+------------+-------------+------------------+
| f2a93731-b954-4580-88eb-cabc38628bba | instance0 | ACTIVE | -          | Running     | private=10.0.0.2 |
+--------------------------------------+-----------+--------+------------+-------------+------------------+

$ neutron security-group-list
+--------------------------------------+---------+-------------+
| id                                   | name    | description |
+--------------------------------------+---------+-------------+
| a7ccd9ee-a7e2-45e8-8d62-691228f7c798 | default | default     |
| c502658b-31fb-461c-ac1d-446bd1a14bb9 | default | default     |
+--------------------------------------+---------+-------------+
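
# one 'default' security group exists per tenant, hence the two ids above;
# open icmp and ssh in both: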


$ neutron security-group-rule-create --protocol icmp \
  --direction ingress a7ccd9ee-a7e2-45e8-8d62-691228f7c798
$ neutron security-group-rule-create --protocol icmp \
  --direction ingress c502658b-31fb-461c-ac1d-446bd1a14bb9

$ neutron security-group-rule-create --protocol tcp \
  --port-range-min 22 --port-range-max 22 --direction ingress a7ccd9ee-a7e2-45e8-8d62-691228f7c798
$ neutron security-group-rule-create --protocol tcp \
  --port-range-min 22 --port-range-max 22 --direction ingress c502658b-31fb-461c-ac1d-446bd1a14bb9


$ ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.993 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.650 ms


$ sudo vi /etc/network/interfaces             # these two steps are wrong...there has to be a better way!
auto br-ex
iface br-ex inet static
  address 172.24.4.1
  netmask 255.255.255.0
  network 172.24.4.0
  broadcast 172.24.4.255

$ sudo vi /etc/rc.local
route add -net 10.0.0.0/24 gw 172.24.4.2
# if bridge virtio is enabled, then add this.
#iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM --checksum-fill

exit 0


8/ Cinder

$ sudo apt-get install cinder-api cinder-scheduler lvm2
$ sudo vi /etc/cinder/cinder.conf
[database]
connection = mysql://cinder:password@192.168.122.33/cinder

$ mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'password';
quit;

$ sudo cinder-manage db sync

$ keystone user-create --name=cinder --pass=password --email=cinder@example.com
$ keystone user-role-add --user=cinder --tenant=service --role=admin

$ sudo vi /etc/cinder/cinder.conf
[DEFAULT]
...
#rootwrap_config = /etc/cinder/rootwrap.conf
root_helper=sudo /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_hosts = 192.168.122.33
rabbit_port = 5672
glance_host = 192.168.122.33

[keystone_authtoken]
auth_uri = http://192.168.122.33:5000
auth_host = 192.168.122.33
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = password

$ keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
$ keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ volume / {print $2}') \
  --publicurl=http://192.168.122.33:8776/v1/%\(tenant_id\)s \
  --internalurl=http://192.168.122.33:8776/v1/%\(tenant_id\)s \
  --adminurl=http://192.168.122.33:8776/v1/%\(tenant_id\)s

$ keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
$ keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') \
  --publicurl=http://192.168.122.33:8776/v2/%\(tenant_id\)s \
  --internalurl=http://192.168.122.33:8776/v2/%\(tenant_id\)s \
  --adminurl=http://192.168.122.33:8776/v2/%\(tenant_id\)s

$ sudo visudo
cinder ALL=(root) NOPASSWD: /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf *

$ sudo service cinder-scheduler restart
$ sudo service cinder-api restart

$ sudo pvcreate /dev/vdb
$ sudo vgcreate cinder-volumes /dev/vdb

$ sudo vi /etc/lvm/lvm.conf
devices {
...
filter = [ "a/vda1/", "a/vdb/", "r/.*/"]
...
}
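(The filter accepts only the root disk vda1 and the cinder PV vdb, so LVM
tools do not scan the contents of the volumes cinder creates inside
cinder-volumes.)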

$ sudo apt-get install cinder-volume
$ sudo service cinder-volume restart
$ sudo service tgt restart

$ cinder create 1                   # check it works!
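$ cinder list                       # the new volume should reach 'available'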


9/ Heat

See http://docs.openstack.org/trunk/install-guide/install/apt/content/heat-install.html

$ sudo apt-get install heat-api heat-api-cfn heat-engine
$ sudo vi /etc/heat/heat.conf
[database]
connection = mysql://heat:password@192.168.122.33/heat

$ sudo rm /var/lib/heat/heat.sqlite
$ mysql -u root -p
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%'  IDENTIFIED BY 'password';

$ sudo heat-manage db_sync

$ sudo vi /etc/heat/heat.conf
[DEFAULT]
verbose = True
log_dir=/var/log/heat
rabbit_host = 192.168.122.33

[keystone_authtoken]
auth_host = 192.168.122.33
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.122.33:5000/v2.0
admin_tenant_name = service
admin_user = heat
admin_password = password
[ec2authtoken]
auth_uri = http://192.168.122.33:5000/v2.0

$ keystone user-create --name=heat --pass=password \
  --email=heat@example.com
$ keystone user-role-add --user=heat --tenant=service --role=admin
$ keystone service-create --name=heat --type=orchestration \
  --description="Orchestration"
$ keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ orchestration / {print $2}') \
  --publicurl=http://192.168.122.33:8004/v1/%\(tenant_id\)s \
  --internalurl=http://192.168.122.33:8004/v1/%\(tenant_id\)s \
  --adminurl=http://192.168.122.33:8004/v1/%\(tenant_id\)s
$ keystone service-create --name=heat-cfn --type=cloudformation \
  --description="Orchestration CloudFormation"
$ keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ cloudformation / {print $2}') \
  --publicurl=http://192.168.122.33:8000/v1 \
  --internalurl=http://192.168.122.33:8000/v1 \
  --adminurl=http://192.168.122.33:8000/v1
$ keystone role-create --name heat_stack_user

$ sudo vi /etc/heat/heat.conf
[DEFAULT]
...
heat_metadata_server_url = http://192.168.122.33:8000
heat_waitcondition_server_url = http://192.168.122.33:8000/v1/waitcondition

$ sudo service heat-api restart
$ sudo service heat-api-cfn restart
$ sudo service heat-engine restart

10/ Trove image

$ sudo apt-get install cloud-utils genisoimage

Get the ubuntu cloud image and set it up for easy access:

$ cd images
$ wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
$ cat > my-user-data <<EOF
#cloud-config
password: password
chpasswd: { expire: False }
ssh_pwauth: True
EOF

$ cloud-localds my-seed.img my-user-data
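(my-seed.img is a NoCloud datasource: on first boot cloud-init applies the
password settings above, so you can log in as the default 'ubuntu' user)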

Boot the vm (see [1] under Obscure notes):

$ sudo kvm -net nic -net user -hda trusty-server-cloudimg-amd64-disk1.img -hdb my-seed.img -m 512
(guest) $ sudo vi /etc/cloud/cloud.cfg.d/90_dpkg.cfg
datasource_list: [ ConfigDrive, OpenStack ]
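# (see bug 4/ under Bugs below - Trove delivers /etc/guest_info via the Nova
# config drive, so cloud-init needs the ConfigDrive datasource)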

(guest) $ sudo vi /etc/cloud/cloud.cfg
manage_etc_hosts: true

(guest) $ sudo apt-get install mysql-server-5.5 mysql-client-5.5
(guest) $ sudo vi /root/.mysql_secret                # if you set a mysql root passwd
# The random password set for the root user at Fri May 23 00:00:00 2014 (local time): password
(guest) $ sudo chmod 600 /root/.mysql_secret
(guest) $ sudo vi /etc/mysql/my.cnf                  # if you set a mysql root passwd
[client]
...
password       = password

(guest) $ sudo apt-get install trove-guestagent
(guest) $ sudo vi /etc/init/trove-guestagent.conf
...
            --exec /usr/bin/trove-guestagent -- --config-file=/etc/guest_info --config-file=/etc/trove/trove-guestagent.conf --log-dir=/var/log/trove --logfile=guestagent.log

(guest) $ sudo vi /etc/trove/trove-guestagent.conf
[DEFAULT]
datastore_manager = mysql
rabbit_host = 172.24.4.1
verbose = True
debug = True
bind_port = 8778
bind_host = 0.0.0.0
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN
nova_proxy_admin_tenant_name = service
trove_auth_url = http://172.24.4.1:35357/v2.0
control_exchange = trove
root_grant = ALL
root_grant_option = True
log_dir = /tmp
log_file = guest.log
ignore_users = os_admin
ignore_dbs = lost+found, mysql, information_schema

# Backups
backup_namespace = trove.guestagent.strategies.backup.mysql_impl
restore_namespace = trove.guestagent.strategies.restore.mysql_impl
storage_strategy = SwiftStorage
storage_namespace = trove.guestagent.strategies.storage.swift
swift_url = http://172.24.4.1:8080/v1/AUTH_
backup_swift_container = database_backups
backup_use_gzip_compression = True
backup_use_openssl_encryption = True
backup_aes_cbc_key = "default_aes_cbc_key"
backup_use_snet = False
backup_chunk_size = 65536
backup_segment_max_size = 2147483648

[mysql]
backup_strategy = MySQLDump


(guest) $ sudo chmod 755 /var/log/trove                       # fix package install perm error
(guest) $ sudo vi /home/ubuntu/.ssh/authorized_keys           # for admin ssh if required.
(guest) $ sudo visudo
trove ALL = (ALL) NOPASSWD: ALL

(guest) $ sudo usermod -a -G root trove
(guest) $ exit

Stop the guest, and make a compressed qcow2:

$ qemu-img convert -f qcow2 -O qcow2 -c trusty-server-cloudimg-amd64-disk1.img ubuntu_mysql.qcow2
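
Optionally sanity check the result:

$ qemu-img info ubuntu_mysql.qcow2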


11/ Trove

Following from http://docs.openstack.org/trunk/install-guide/install/apt/content/trove-install.html
$ sudo apt-get install python-trove python-troveclient trove-common trove-api trove-taskmanager

$ mysql -u root -p
CREATE DATABASE trove;
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'password';
quit;

$ keystone user-create --name=trove --pass=password --tenant=service \
           --email=trove@example.com
$ keystone user-role-add --user=trove --tenant=service --role=admin

$ sudo vi /etc/trove/trove.conf
[DEFAULT]
default_datastore = mysql
verbose = True
debug = True
log_dir = /var/log/trove
#trove_auth_url = http://192.168.122.33:5000/v2.0
nova_compute_url = http://192.168.122.33:8774/v2
cinder_url = http://192.168.122.33:8776/v1
swift_url = http://192.168.122.33:8080/v1/AUTH_
sql_connection = mysql://trove:password@192.168.122.33/trove
#notifier_queue_hostname = 192.168.122.33
add_addresses = True
#network_label_regex = ^NETWORK_LABEL$
api_extensions_path = /usr/lib/python2.7/dist-packages/trove/extensions/routes

$ sudo vi /etc/trove/trove-taskmanager.conf
[DEFAULT]
log_dir = /var/log/trove
use_syslog = False
verbose = True
debug = True
taskmanager_manager = trove.taskmanager.manager.Manager
trove_auth_url = http://192.168.122.33:5000/v2.0
nova_compute_url = http://192.168.122.33:8774/v2
cinder_url = http://192.168.122.33:8776/v1
swift_url = http://192.168.122.33:8080/v1/AUTH_
sql_connection = mysql://trove:password@192.168.122.33/trove
#notifier_queue_hostname = 192.168.122.33
nova_proxy_admin_user = admin
nova_proxy_admin_tenant_name = service
nova_proxy_admin_pass = ADMIN

$ sudo vi /etc/trove/trove-conductor.conf
[DEFAULT]
use_syslog = False
verbose = True
debug = True
control_exchange = trove
trove_auth_url = http://192.168.122.33:5000/v2.0
#nova_compute_url = http://192.168.122.33:8774/v2
#cinder_url = http://192.168.122.33:8776/v1
#swift_url = http://192.168.122.33:8080/v1/AUTH_
nova_proxy_admin_user = admin
nova_proxy_admin_tenant_name = service
nova_proxy_admin_pass = ADMIN
sql_connection = mysql://trove:password@192.168.122.33/trove
#notifier_queue_hostname = 192.168.122.33

$ sudo vi /etc/trove/api-paste.ini
[composite:trove]
use = call:trove.common.wsgi:versioned_urlmap
/: versions
/v1.0: troveapi

[app:versions]
paste.app_factory = trove.versions:app_factory

[pipeline:troveapi]
pipeline = faultwrapper authtoken authorization contextwrapper ratelimit extensions troveapp
#pipeline = debug extensions troveapp

[filter:extensions]
paste.filter_factory = trove.common.extensions:factory

[filter:authtoken]
signing_dir = /var/cache/trove
admin_password = password
admin_user = trove
admin_tenant_name = service
admin_token = ADMIN
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.122.33
auth_port = 35357
auth_protocol = http

[filter:authorization]
paste.filter_factory = trove.common.auth:AuthorizationMiddleware.factory

[filter:contextwrapper]
paste.filter_factory = trove.common.wsgi:ContextMiddleware.factory

[filter:faultwrapper]
paste.filter_factory = trove.common.wsgi:FaultWrapper.factory

[filter:ratelimit]
paste.filter_factory = trove.common.limits:RateLimitingMiddleware.factory

[app:troveapp]
paste.app_factory = trove.common.api:app_factory

#Add this filter to log request and response for debugging
[filter:debug]
paste.filter_factory = trove.common.wsgi:Debug

$ sudo chmod 775 /var/log/trove              # weird install bugs
$ sudo mkdir /var/cache/trove
$ sudo chmod 700 /var/cache/trove            # weird install bugs
$ sudo chown trove /var/cache/trove

$ sudo trove-manage db_sync
$ sudo trove-manage datastore_update mysql ''

[make image now, see 10/]
$ glance --os-username admin --os-password ADMIN --os-tenant-name admin \
         --os-auth-url http://stack3:35357/v2.0 \
         image-create --name trusty_mysql --public --container-format ovf \
         --disk-format qcow2 --owner admin < ubuntu_mysql.qcow2
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | e90a26d84c9a540648264c4379c5a554     |
| container_format | ovf                                  |
| created_at       | 2014-05-02T01:03:05                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | e707520b-a3b3-4c68-9b4a-8b59ec9c2e6d |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | trusty_mysql                         |
| owner            | admin                                |
| protected        | False                                |
| size             | 364601856                            |
| status           | active                               |
| updated_at       | 2014-05-02T01:03:06                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

$ sudo trove-manage datastore_version_update mysql mysql-5.5 mysql e707520b-a3b3-4c68-9b4a-8b59ec9c2e6d mysql-server-5.5 1
$ sudo trove-manage datastore_update mysql mysql-5.5
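# (datastore_version_update args: <datastore> <version name> <manager>
#  <glance image id> <packages> <active>; the second datastore_update then
#  makes mysql-5.5 the default version for the mysql datastore)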

$ keystone service-create --name=trove --type=database \
  --description="OpenStack Database Service"
$ keystone endpoint-create \
  --region=regionOne \
  --service-id=$(keystone service-list | awk '/ trove / {print $2}') \
  --publicurl=http://192.168.122.33:8779/v1.0/%\(tenant_id\)s \
  --internalurl=http://192.168.122.33:8779/v1.0/%\(tenant_id\)s \
  --adminurl=http://192.168.122.33:8779/v1.0/%\(tenant_id\)s

$ sudo cp /etc/init/trove-taskmanager.conf /etc/init/trove-conductor.conf
$ sudo vi /etc/init/trove-conductor.conf # s/taskmanager/conductor/g
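# (or non-interactively: sudo sed -i 's/taskmanager/conductor/g' /etc/init/trove-conductor.conf)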
$ sudo service trove-api restart
$ sudo service trove-taskmanager restart
$ sudo service trove-conductor restart

$ . creds-demo

$ trove create db0 2 --size 4
$ trove list 
+--------------------------------------+------+--------+-----------+------+
|                  id                  | name | status | flavor_id | size |
+--------------------------------------+------+--------+-----------+------+
| bad64d75-a298-4f20-a401-ad8f6656d945 | db0  | ACTIVE |     2     |  4   |
+--------------------------------------+------+--------+-----------+------+

$ trove database-create bad64d75-a298-4f20-a401-ad8f6656d945 db
$ trove user-create bad64d75-a298-4f20-a401-ad8f6656d945 user pass --databases db
$ trove show bad64d75-a298-4f20-a401-ad8f6656d945
+-----------+-----------------------------------------------+
|  Property |                     Value                     |
+-----------+-----------------------------------------------+
|  created  |              2014-06-04T22:16:31              |
| datastore | {u'version': u'mysql-5.5', u'type': u'mysql'} |
|   flavor  |                       2                       |
|     id    |      bad64d75-a298-4f20-a401-ad8f6656d945     |
|     ip    |                 [u'10.0.0.12']                |
|    name   |                      db0                      |
|   status  |                     ACTIVE                    |
|  updated  |              2014-06-04T22:16:34              |
|   volume  |                       4                       |
+-----------+-----------------------------------------------+

$ mysql -h 10.0.0.12 -u user -p db    # password is 'pass'
mysql> CREATE TABLE tab(id INTEGER PRIMARY KEY);
Query OK, 0 rows affected (0.11 sec)


12/ Swift

Follow http://docs.openstack.org/trunk/install-guide/install/apt/content/installing-openstack-object-storage.html

Need swift or equivalent to get Trove backups working.

$ . creds

$ keystone user-create --name=swift --pass=password \
  --email=swift@example.com
$ keystone user-role-add --user=swift --tenant=service --role=admin
$ keystone service-create --name=swift --type=object-store \
  --description="OpenStack Object Storage"
$ keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ object-store / {print $2}') \
  --publicurl='http://192.168.122.33:8080/v1/AUTH_%(tenant_id)s' \
  --internalurl='http://192.168.122.33:8080/v1/AUTH_%(tenant_id)s' \
  --adminurl=http://192.168.122.33:8080

$ sudo mkdir -p /etc/swift
$ sudo vi /etc/swift/swift.conf
[swift-hash]
# random unique string that can never change (DO NOT LOSE)
swift_hash_path_suffix = theswifthash

$ sudo apt-get install swift swift-account swift-container swift-object 

$ sudo fdisk /dev/vdc
$ sudo mkfs.ext4 /dev/vdc1
$ sudo vi /etc/fstab
/dev/vdc1 /srv/node/vdc1 ext4 noatime 0 0
$ sudo mkdir -p /srv/node/vdc1
$ sudo mount /srv/node/vdc1
$ sudo chown -R swift:swift /srv/node

$ sudo vi  /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.122.33
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

$ sudo vi  /etc/default/rsync
RSYNC_ENABLE=true

$ sudo service rsync start
$ sudo  mkdir -p /var/swift/recon
$ sudo chown -R swift:swift /var/swift/recon

$ sudo apt-get install swift-proxy memcached python-keystoneclient python-swiftclient python-webob
$ sudo vi /etc/memcached.conf 
...
-l 192.168.122.33

$ sudo service memcached restart

$ sudo vi /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
user = swift

[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = Member,admin,swiftoperator

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*').
delay_auth_decision = true
# cache directory for signing certificate
signing_dir = /home/swift/keystone-signing
# auth_* settings refer to the Keystone server
auth_protocol = http
auth_host = 192.168.122.33
auth_port = 35357
# the service tenant and swift username and password created in Keystone
admin_tenant_name = service
admin_user = swift
admin_password = password

[filter:cache]
use = egg:swift#memcache

[filter:catch_errors]
use = egg:swift#catch_errors
[filter:healthcheck]
use = egg:swift#healthcheck

$ cd /etc/swift
$ sudo swift-ring-builder account.builder create 18 3 1
$ sudo swift-ring-builder container.builder create 18 3 1
$ sudo swift-ring-builder object.builder create 18 3 1
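# create args: <part_power> <replicas> <min_part_hours> - 2^18 partitions,
# 3 replicas, at least 1 hour between partition moves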

$ sudo swift-ring-builder account.builder add z1-192.168.122.33:6002/vdc1 100
$ sudo swift-ring-builder container.builder add z1-192.168.122.33:6001/vdc1 100
$ sudo swift-ring-builder object.builder add z1-192.168.122.33:6000/vdc1 100
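# device spec is z<zone>-<ip>:<port>/<device> <weight>; 6002/6001/6000 are
# the default account/container/object server ports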

# check
$ sudo swift-ring-builder account.builder
$ sudo swift-ring-builder container.builder
$ sudo swift-ring-builder object.builder

$ sudo swift-ring-builder account.builder rebalance
$ sudo swift-ring-builder container.builder rebalance
$ sudo swift-ring-builder object.builder rebalance

$ sudo chown -R swift:swift /etc/swift
$ sudo mkdir /home/swift
$ sudo chown -R swift:swift /home/swift

$ sudo service swift-proxy restart

$ sudo swift-init all start

$ cd
$ swift stat
       Account: AUTH_8619bbbd627542f7b9223f645ab8f42a
    Containers: 0
       Objects: 0
         Bytes: 0
  Content-Type: text/plain; charset=utf-8
   X-Timestamp: 1399416342.22608
    X-Trans-Id: tx5fe325309fc544eb8efcb-0053696616
X-Put-Timestamp: 1399416342.22608

$ swift upload adminfiles  file1.txt
file1.txt
$ swift upload adminfiles  file2.txt
file2.txt
$ rm file1.txt file2.txt 
$ swift download adminfiles
file1.txt [auth 0.174s, headers 0.207s, total 0.207s, 0.000 MB/s]
file2.txt [auth 0.175s, headers 0.217s, total 0.217s, 0.000 MB/s]

Check with demo user/tenant:

$ . creds-demo
$ swift stat
Account HEAD failed: http://192.168.122.33:8080/v1/AUTH_9a1681043a7e42e3bb7b6817f32f8b4e 403 Forbidden

$ keystone role-create --name swiftoperator
$ keystone user-role-add --role swiftoperator --user demo --tenant demo 
$ swift stat
(works)


13/ Trove Backups

$ . creds-demo

$ trove list
+--------------------------------------+------+--------+-----------+------+
|                  id                  | name | status | flavor_id | size |
+--------------------------------------+------+--------+-----------+------+
| 8dcccc94-5f61-45b4-8ec8-cee225169bf6 | db0  | ACTIVE |     2     |  2   |
+--------------------------------------+------+--------+-----------+------+

$ trove backup-create db0-backup-1 8dcccc94-5f61-45b4-8ec8-cee225169bf6 
$  trove backup-list
+--------------------------------------+--------------------------------------+--------------+-------------+-----------+
|                  id                  |             instance_id              |     name     | description |   status  |
+--------------------------------------+--------------------------------------+--------------+-------------+-----------+
| aaf58b79-c99a-45fb-880a-6d7f25b16068 | 8dcccc94-5f61-45b4-8ec8-cee225169bf6 | db0-backup-1 |     None    | COMPLETED |
+--------------------------------------+--------------------------------------+--------------+-------------+-----------+

$ trove delete 8dcccc94-5f61-45b4-8ec8-cee225169bf6                  # wait until it has vanished
$ trove list
+----+------+--------+-----------+------+
| id | name | status | flavor_id | size |
+----+------+--------+-----------+------+
+----+------+--------+-----------+------+

$ trove create db0 2 --size 2 --backup aaf58b79-c99a-45fb-880a-6d7f25b16068
$ trove list
+--------------------------------------+------+--------+-----------+------+
|                  id                  | name | status | flavor_id | size |
+--------------------------------------+------+--------+-----------+------+
| d4b59fd7-7b8e-4af7-9add-986364a1d447 | db0  | ACTIVE |     2     |  2   |
+--------------------------------------+------+--------+-----------+------+


Obscure notes
-------------
[1]  May need to use additional options to make it easier to access the vm:
      -net user,net=192.168.122.0/24,host=192.168.122.1,restrict=off
    but reboot with defaults afterwards to avoid any odd settings persisting...
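
     For example, combined with the boot command in section 10/:

     $ sudo kvm -net nic -net user,net=192.168.122.0/24,host=192.168.122.1,restrict=off \
         -hda trusty-server-cloudimg-amd64-disk1.img -hdb my-seed.img -m 512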


Bugs
----

1/ Cirros image no dhcp (fixed)

Look for differences in (working) devstack setup


/etc/nova/nova.conf
[DEFAULT]
+service_neutron_metadata_proxy = True
+neutron_region_name = regionOne


/etc/neutron/neutron.conf
-dhcp_agent_notification = True
-dhcp_agents_per_network = 2

[database]
-connection =    # do in plugin


/etc/neutron/metadata_agent.ini
+root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
+nova_metadata_ip = 192.168.122.33


/etc/neutron/dhcp_agent.ini
[DEFAULT]
-dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq


Making modifications... get:

[cirros image]
Sending select for 10.0.0.2...
No lease, failing
WARN: /etc/rc3.d/S40-network failed

[stack3]:
May  9 13:13:37 localhost dnsmasq-dhcp[3800]: DHCPDISCOVER(tapa7552977-df) fa:16:3e:c9:a7:b9 
May  9 13:13:37 localhost dnsmasq-dhcp[3800]: DHCPOFFER(tapa7552977-df) 10.0.0.4 fa:16:3e:c9:a7:b9 

May  9 13:13:37 localhost dnsmasq-dhcp[3800]: DHCPREQUEST(tapa7552977-df) 10.0.0.4 fa:16:3e:c9:a7:b9 
May  9 13:13:37 localhost dnsmasq-dhcp[3800]: DHCPACK(tapa7552977-df) 10.0.0.4 fa:16:3e:c9:a7:b9 host-10-0-0-4
{there would normally be a delay here of 60s or so}
May  9 13:13:40 localhost dnsmasq-dhcp[3800]: DHCPREQUEST(tapa7552977-df) 10.0.0.4 fa:16:3e:c9:a7:b9 
May  9 13:13:40 localhost dnsmasq-dhcp[3800]: DHCPACK(tapa7552977-df) 10.0.0.4 fa:16:3e:c9:a7:b9 host-10-0-0-4

May  9 13:13:43 localhost dnsmasq-dhcp[3800]: DHCPREQUEST(tapa7552977-df) 10.0.0.4 fa:16:3e:c9:a7:b9 
May  9 13:13:43 localhost dnsmasq-dhcp[3800]: DHCPACK(tapa7552977-df) 10.0.0.4 fa:16:3e:c9:a7:b9 host-10-0-0-4

$ ps auxww|grep masq
nobody    3800  0.0  0.0  28204   968 ?        S    13:01   0:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tapa7552977-df --except-interface=lo --pid-file=/var/lib/neutron/dhcp/9d5b67cd-cffb-42ea-bdeb-046482d45cd1/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/9d5b67cd-cffb-42ea-bdeb-046482d45cd1/host --addn-hosts=/var/lib/neutron/dhcp/9d5b67cd-cffb-42ea-bdeb-046482d45cd1/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/9d5b67cd-cffb-42ea-bdeb-046482d45cd1/opts --leasefile-ro --dhcp-range=set:tag0,10.0.0.0,static,86400s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal

markir@stack3:/var/log$ cat /var/lib/neutron/dhcp/9d5b67cd-cffb-42ea-bdeb-046482d45cd1/host
fa:16:3e:c9:a7:b9,host-10-0-0-4.openstacklocal,10.0.0.4
fa:16:3e:ca:c5:59,host-10-0-0-1.openstacklocal,10.0.0.1
fa:16:3e:69:3c:5c,host-10-0-0-3.openstacklocal,10.0.0.3
markir@stack3:/var/log$ cat /var/lib/neutron/dhcp/9d5b67cd-cffb-42ea-bdeb-046482d45cd1/addn_hosts
10.0.0.4	host-10-0-0-4.openstacklocal host-10-0-0-4
10.0.0.1	host-10-0-0-1.openstacklocal host-10-0-0-1
10.0.0.3	host-10-0-0-3.openstacklocal host-10-0-0-3
markir@stack3:/var/log$ cat /var/lib/neutron/dhcp/9d5b67cd-cffb-42ea-bdeb-046482d45cd1/opts
tag:tag0,option:router,10.0.0.1

Looks ok (but notice devstack does not list 10.0.0.3)


[try upgrading devstack host 13.10 -> 14.04 ]

bugs:

https://bugs.launchpad.net/devstack/+bug/1316328

[dhcp again]

https://bugs.launchpad.net/neutron/+bug/1264932

[cirros]
$ sudo udhcpc -i eth0 -T 61

...gets ip ok, so the guest dhcp options are too aggressive...

Hmm try resetting perms on config dirs on [stack3]

$ sudo chmod 755 /etc/neutron /etc/nova /etc/cinder

(no)

See https://github.com/fornyx/OpenStack-Havana-Install-Guide/issues/5
and also http://openstack.markmail.org/message/o36wemwo5k55qvbg?q=dhcp

Try on [stack3]:

$ sudo iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM --checksum-fill

...appears to FIX (with virtio the bridge sees UDP packets with unfilled
checksums, which the guest's udhcpc discards; the mangle rule fills them in).
However, see if this can be avoided by disabling bridge virtio in nova.

...however cloud-init fails [cirros]:

cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: server returned error: HTTP/1.1 500 Internal Server Error


In metadata-agent.log see:

2014-05-14 11:39:51.189 1487 TRACE neutron.agent.metadata.agent EndpointNotFound: Could not find Service or Region in Service Catalog.

See https://bugzilla.redhat.com/show_bug.cgi?id=950201

suggests it needs neutron_region_name = 'regionOne' NOT 'RegionOne'. Doh. See nova.conf and metadata_agent.ini
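
i.e. the settings shown in the earlier listings:

$ sudo vi /etc/nova/nova.conf
[DEFAULT]
neutron_region_name = regionOne

$ sudo vi /etc/neutron/metadata_agent.ini
[DEFAULT]
auth_region = regionOne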

...appears to fix.


2/ Host displayed as 'Not Assigned' (fixed)


This is related to:

$ sudo vi /etc/trove/trove.conf
...
#network_label_regex = ^NETWORK_LABEL$


FIX: Lose the regex.


3/ Cannot resolve own hostname (fixed)

Neutron does not seem to be setting this up right, so use cloud-init:

(guest) $ sudo vi /etc/cloud/cloud.cfg
manage_etc_hosts: true

FIX: use cloud-init.


4/ No /etc/guest_info injected (fixed)

Openstack is expecting to be able to use a config drive:

(guest) $ sudo vi /etc/cloud/cloud.cfg.d/90_dpkg.cfg
datasource_list: [ ConfigDrive, OpenStack ]

$ sudo vi /etc/nova/nova-compute.conf
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
force_config_drive = always

[libvirt]
inject_partition = -2
cpu_mode = none
virt_type = kvm

FIX: use config drive.


5/ API partially broken (fixed)

$ trove create db0 2 --size 2 --databases db --users user:pass

$ trove list
+--------------------------------------+------+--------+-----------+------+
|                  id                  | name | status | flavor_id | size |
+--------------------------------------+------+--------+-----------+------+
| 5c1e2f20-6078-49cd-bc90-0d5a8847b6ce | db0  | ACTIVE |     2     |  2   |
+--------------------------------------+------+--------+-----------+------+

$ trove database-list 5c1e2f20-6078-49cd-bc90-0d5a8847b6ce
ERROR: The resource could not be found.

Investigation shows the database 'db' and user 'user' have been set up, but we cannot
add, or see, any more users, databases etc. This *does* work in devstack.

This is due to extensions (those database-list commands are extensions) failing to load
because of path issues - we need to tell trove where to find them:

$ tail /etc/trove/trove.conf
...
api_extensions_path = /usr/lib/python2.7/dist-packages/trove/extensions/routes


6/ Backup broken (fixed)

Need to specify swift url and type of backup for the guestagent to use

$ vi /etc/trove/trove-guestagent.conf
[DEFAULT]
...
swift_url = http://172.24.4.1:8080/v1/AUTH_

[mysql]
backup_strategy = MySQLDump

Also need to allow demo user + tenant to use swift:

$ vi /etc/swift/proxy-server.conf

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = member, admin

$ keystone user-role-add --role admin --user demo --tenant demo     # overkill, maybe create a role!


Seeing 'broken pipe' errors in restore.

The backup stream is coming from swift ok; we are piping it into 'sudo mysql',
and the latter is failing with: error: 'Access denied for user 'root'@'localhost' (using password: NO)'

So removing the root password in the initial image fixes the issue.

Or perhaps adding it to /etc/mysql/my.cnf after setting it:

[client]
...
password       = password

