[Openstack] Bug or bad config ? melange IPAM allocating .0, .1 addresses

Troy Toman troy.toman at rackspace.com
Tue Mar 27 12:42:12 UTC 2012


On Mar 27, 2012, at 4:25 AM, Mandar Vaze wrote:

All,

I realized that this question was discussed on the [netstack] mailing list earlier. (Is that separate mailing list now merged with this one? If not, where do I subscribe to the netstack mailing list?)
About a month ago, Doude had asked:

So, does that mean the current trunk doesn't work with the Nova+Quantum+Melange trio?
Is that correct?

http://www.mail-archive.com/netstack@lists.launchpad.net/msg00731.html

But I didn’t see any clear yes/no answer.

Nova+Quantum+Melange should all be working together with the current release candidates (Essex trunk).

Additionally, while I saw references to the "melange" command line tool, I am not sure which (sub)commands need to be executed (and in what order) to get a working setup for nova+quantum+melange.

The Melange client can be found at: https://github.com/openstack/python-melangeclient. This client can be used to set policies, which I'll discuss below.
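Getting the client installed is roughly this (a sketch, assuming a standard Python source install of that era; the setup.py step is an assumption, not something stated in this thread, so check the repository's README for the supported method):

# Fetch and install the Melange CLI client
git clone https://github.com/openstack/python-melangeclient.git
cd python-melangeclient
sudo python setup.py install

# Sanity check: the melange command should now be on the PATH
melange --help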


Regards,
-Mandar

From: openstack-bounces+mandar.vaze=vertex.co.in at lists.launchpad.net [mailto:openstack-bounces+mandar.vaze=vertex.co.in at lists.launchpad.net] On Behalf Of Mandar Vaze
Sent: Tuesday, March 27, 2012 12:22 PM
To: Openstack Mail List (openstack at lists.launchpad.net)
Subject: [Openstack] Bug or bad config ? melange IPAM allocating .0, .1 addresses

I've configured melange IPAM using devstack (added melange and m-svc to ENABLED_SERVICES in stackrc before I executed stack.sh).
I'm using the Quantum network manager.
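For reference, the stackrc change amounts to a plain bash append along these lines (ENABLED_SERVICES is the devstack variable named above; the base list of services it already contains varies by checkout):

# stackrc (or localrc): append the Melange services to whatever is already enabled
ENABLED_SERVICES=$ENABLED_SERVICES,melange,m-svc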

As you can see, the 10.0.0.0 and 10.0.0.1 addresses were assigned to VMs created using the dashboard (Horizon), so I didn't specify any network params when I created the VMs.

Output from “nova-manage network list”

id   IPv4        IPv6       start address   DNS1       DNS2      VlanID       project         uuid
1    None        None       10.0.0.2        None       None      None         default         60a27f40-3fd1-4ed5-99ac-833109fd4713

Output from “nova list”
+--------------------------------------+---------+--------+------------------+
|                  ID                  |   Name  | Status |     Networks     |
+--------------------------------------+---------+--------+------------------+
| 17449258-4b26-4f8b-959e-de512331b22b | t1359-2 | ACTIVE | private=10.0.0.1 |
| 1e13ffa3-e8e5-4b5f-856f-7bd9cbcc872b | t1359-3 | ACTIVE | private=10.0.0.2 |
| 7919a412-6373-4578-8b29-4a93a5c99950 | t1359   | ACTIVE | private=10.0.0.0 |
+--------------------------------------+---------+--------+------------------+

(t1359-3 was created using "nova boot t1359-3 --image 6b8d93bf-4344-4594-b6fa-c2cc5a8a5b1d --flavor 1 --key_name mandar-kp", but I don't think that matters, since .2 would have been allocated to the next VM created via dashboard/horizon as well)

Obviously, I am unable to reach the VMs via ssh, even the one with the .2 IP address.

The melange.conf is unchanged from the one created by stack.sh (included at the end, in case someone needs to refer to it).

Is this a defect or bad configuration?

By default, Melange will hand out any available IP in the block. It doesn't try to derive gateway or broadcast addresses, because there are cases where a subnet block could have valid IPs in the first two addresses of the block. The concept of policies was created to let deployments define rules for reserving addresses. To use them to solve this problem, you could run the following sequence of commands through the CLI:

`melange policy create -t {tenant} name={block_name} desc={policy_name}`
    (this should return the policy_id used in the commands below)
`melange unusable_ip_octet create -t {tenant} policy_id={policy_id} octet=0`
`melange unusable_ip_octet create -t {tenant} policy_id={policy_id} octet=1`
`melange ip_block update -t {tenant} id={block_id} policy_id={policy_id}`
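For example, with placeholder values filled in (the tenant name "demo", the policy_id of 1, the desc string, and the <block_id> are all made up here and need to be replaced with the values from your own deployment):

# Reserve the first two addresses (.0 and .1) of the private block
`melange policy create -t demo name=private_policy desc=reserve_net_and_gateway`
`melange unusable_ip_octet create -t demo policy_id=1 octet=0`
`melange unusable_ip_octet create -t demo policy_id=1 octet=1`
`melange ip_block update -t demo id=<block_id> policy_id=1`

Presumably this only affects future allocations, so instances that already received .0 and .1 would still need those addresses released and reallocated.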


Regards,
-Mandar

===== melange.conf =====

[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = False

# Show debugging output in logs (sets DEBUG log level output)
debug = False

# Address to bind the API server
bind_host = 0.0.0.0

# Port to bind the API server to
bind_port = 9898

# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine
sql_connection = mysql://root:nova@localhost/melange
# sql_connection = mysql://root:root@localhost/melange
#sql_connection = postgresql://melange:melange@localhost/melange

# Period in seconds after which SQLAlchemy should reestablish its connection
# to the database.
#
# MySQL uses a default `wait_timeout` of 8 hours, after which it will drop
# idle connections. This can result in 'MySQL Gone Away' exceptions. If you
# notice this, you can lower this value to ensure that SQLAlchemy reconnects
# before MySQL can drop the connection.
sql_idle_timeout = 3600

#DB Api Implementation
db_api_implementation = "melange.db.sqlalchemy.api"

# Path to the extensions
api_extensions_path = melange/extensions

# Cidr for auto creating first ip block in a network
# If unspecified, auto creating is turned off
# default_cidr = 10.0.0.0/24

#IPV6 Generator Factory, defaults to rfc2462
#ipv6_generator=melange.ipv6.tenant_based_generator.TenantBasedIpV6Generator

#DNS info for a data_center
dns1 = 8.8.8.8
dns2 = 8.8.4.4

#Number of days before deallocated IPs are deleted
keep_deallocated_ips_for_days = 2

#Number of retries for allocating an IP
ip_allocation_retries = 5

# ============ notifier queue kombu connection options ========================

notifier_queue_hostname = localhost
notifier_queue_userid = guest
notifier_queue_password = guest
notifier_queue_ssl = False
notifier_queue_port = 5672
notifier_queue_virtual_host = /
notifier_queue_transport = memory

[composite:melange]
use = call:melange.common.wsgi:versioned_urlmap
/: versions
/v0.1: melangeapp_v0_1
/v1.0: melangeapp_v1_0

[app:versions]
paste.app_factory = melange.versions:app_factory

[pipeline:melangeapi_v0_1]
pipeline = extensions melangeapp_v0_1

[pipeline:melangeapi_v1_0]
pipeline = extensions melangeapp_v1_0

[filter:extensions]
paste.filter_factory = melange.common.extensions:factory

[filter:tokenauth]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 808
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
admin_token = 999888777666

[filter:authorization]
paste.filter_factory = melange.common.auth:AuthorizationMiddleware.factory

[app:melangeapp_v0_1]
paste.app_factory = melange.ipam.service:APIV01.app_factory

[app:melangeapp_v1_0]
paste.app_factory = melange.ipam.service:APIV10.app_factory

#Add this filter to log request and response for debugging
[filter:debug]
paste.filter_factory = melange.common.wsgi:Debug.factory


=========



_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack at lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


