[Openstack] network question on openstack installation

Uwe Sauter uwe.sauter.de at gmail.com
Mon Jun 29 20:44:36 UTC 2015


Go to http://docs.openstack.org/ , select the OpenStack version, then the installation guide
for your distribution and navigate to

6. Add a networking component
  - OpenStack Networking (neutron)
    - Install and configure controller node

and follow the database-related steps:
- create DB
- grant privileges to neutron user
- populate the DB (search for "neutron-db-manage")
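For reference, the database steps from the guide look roughly like this (a sketch only; the password placeholder NEUTRON_DBPASS is an assumption, substitute your own):

```shell
# Create the neutron database and grant access to the neutron user
# (run on the database host). NEUTRON_DBPASS is a placeholder.
mysql -u root -p <<'EOF'
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
EOF
```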

Depending on your distribution, you probably need to use sudo.

The number of config files depends on the file layout you are working with, so this is not an exact
answer.

I did the following:

[on controller node]

cd /etc/neutron
cat plugins/ml2/ml2_conf.ini > plugin.ini
cd plugins/ml2
rm -f ml2_conf.ini
ln -s ../../plugin.ini ml2_conf.ini


[on network and compute nodes]

cd /etc/neutron
cat plugins/ml2/ml2_conf.ini > plugin.ini
cat plugins/openvswitch/ovs_neutron_plugin.ini >> plugin.ini
cd plugins/ml2
rm -f ml2_conf.ini
ln -s ../../plugin.ini ml2_conf.ini
cd ../openvswitch
rm -f ovs_neutron_plugin.ini
ln -s ../../plugin.ini ovs_neutron_plugin.ini
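For illustration, the same symlink scheme can be dry-run in a throwaway directory (a mock only; nothing here touches /etc/neutron), to confirm that both old config paths end up resolving to the merged plugin.ini:

```shell
# Build a mock /etc/neutron layout in a temp directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/plugins/ml2" "$tmp/plugins/openvswitch"
printf '[ml2]\n' > "$tmp/plugins/ml2/ml2_conf.ini"
printf '[ovs]\n' > "$tmp/plugins/openvswitch/ovs_neutron_plugin.ini"

cd "$tmp"
# Merge both plugin configs into one file...
cat plugins/ml2/ml2_conf.ini > plugin.ini
cat plugins/openvswitch/ovs_neutron_plugin.ini >> plugin.ini
# ...and replace the originals with symlinks back to the merged file.
ln -sf ../../plugin.ini plugins/ml2/ml2_conf.ini
ln -sf ../../plugin.ini plugins/openvswitch/ovs_neutron_plugin.ini

readlink plugins/ml2/ml2_conf.ini   # -> ../../plugin.ini
```

Either old path now reads the same merged content, so services that still reference the per-plugin files keep working.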



Then I applied the sed trick that is needed on RDO because of a packaging bug with Juno
(it changes the systemd unit file to load /etc/neutron/plugin.ini instead of
/etc/neutron/plugins/ml2/ml2_conf.ini).
And then I ran 'su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/plugin.ini upgrade juno" neutron' on the controller node.
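The sed trick amounts to something like the following, demonstrated here on a mock unit file (the real unit file name, its path under /usr/lib/systemd/system/, and the exact ExecStart line are assumptions that vary by package version):

```shell
# Mock unit file standing in for the RDO-packaged one.
tmp2=$(mktemp -d)
cat > "$tmp2/neutron-server.service" <<'EOF'
[Service]
ExecStart=/usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
EOF

# Point the service at the merged /etc/neutron/plugin.ini instead.
sed -i 's|/etc/neutron/plugins/ml2/ml2_conf.ini|/etc/neutron/plugin.ini|' \
    "$tmp2/neutron-server.service"

grep ExecStart "$tmp2/neutron-server.service"
```

On a real RDO system you would run the sed against the installed unit file and then `systemctl daemon-reload`.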

As far as I understand the configuration system used in Neutron, all config files are merged
into the same configuration namespace; if the same option appears in the same section of more
than one file, the load order determines which value wins.
So you can have just one ini file in addition to /etc/neutron/neutron.conf (my way) or
several. But then you need to load them all when syncing the DB *and* when starting the services.
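A minimal illustration of that "last file wins" behaviour (simulated here with awk; in Neutron the real merging is done by oslo.config across the --config-file arguments, in the order given):

```shell
# Two config files setting the same option in the same section.
tmp3=$(mktemp -d)
printf '[DEFAULT]\nverbose = false\n' > "$tmp3/a.conf"
printf '[DEFAULT]\nverbose = true\n'  > "$tmp3/b.conf"

# Read the files in load order and keep the last assignment seen,
# which mirrors how a later --config-file overrides an earlier one.
awk -F' *= *' '$1 == "verbose" { v = $2 } END { print v }' \
    "$tmp3/a.conf" "$tmp3/b.conf"
# -> true
```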


Hopefully this brightens the dark side of Neutron configuration,

	Uwe





On 29.06.2015 at 22:19, Yngvi Páll Þorfinnsson wrote:
> Hi Uwe
> No, I didn't drop the keystone ;-)
> 
> But is this the correct way to resync neutron?
> 
> # neutron-db-manage --config-file /etc/neutron/neutron.conf \
> # --config-file /etc/neutron/plugins/ml2/ml2_plugin.ini
> 
> I mean, how many config files are necessary in the cmd?
> 
> best regards
> Yngvi
> 
> -----Original Message-----
> From: Uwe Sauter [mailto:uwe.sauter.de at gmail.com] 
> Sent: 29 June 2015 18:08
> To: YANG LI
> Cc: openstack at lists.openstack.org
> Subject: Re: [Openstack] network question on openstack installation
> 
> It depends on your switch… some drop tagged packets on an access port, others allow tagged packets if the packet's VLAN ID equals the configured VLAN ID.
> 
> I'd reconfigure the provider network type to "flat", but that's personal taste. You could also reconfigure the switch port to be a trunking port (with only one VLAN ID). Currently you're in between those configurations, which might or might not work later on…
> 
> 
> Regards,
> 
> 	Uwe
> 
> 
> On 29.06.2015 at 19:42, YANG LI wrote:
>> Thank you, Uwe. Our provider network actually is untagged, but I did
>> specify a VLAN ID when I created our external network, and everything still works. Will this cause issues later on?
>>
>> neutron net-create --provider:network_type=vlan \
>>   --provider:segmentation_id=<vlan id> \
>>   --provider:physical_network=physnet1 \
>>   --router:external=true public
>>
>>
>> Thanks,
>> Yang
>>> On Jun 27, 2015, at 11:17 AM, Uwe Sauter <uwe.sauter.de at gmail.com> wrote:
>>>
>>> Hi Yang,
>>>
>>> it depends on whether your provider network is tagged or untagged. If it is untagged (the switch port is an "access"
>>> port) then you don't specify the VLAN ID for the external network (as 
>>> it will get tagged by the switch). If the provider network is tagged 
>>> (the switch port is a "trunk" port) then you have to configure the VLAN ID because the switch might refuse the traffic (depending on whether a default VLAN ID is defined on the switch port).
>>>
>>> Regards,
>>>
>>> Uwe
>>>
>>>
>>>
>>> On 27.06.2015 at 13:47, YANG LI wrote:
>>>> Thank you so much, James! This is so helpful. Another confusion I
>>>> have is about network_vlan_ranges. Is this the network VLAN ID range? If
>>>> so, does it have to match the external network? For example, we only have one external VLAN we can use as our provider network, and that VLAN ID is 775 (xxx.xxx.xxx.0/26). Should I define network_vlan_ranges as follows?
>>>>
>>>> [ml2]
>>>> type_drivers=vlan
>>>> tenant_network_types = vlan
>>>> mechanism_drivers=openvswitch
>>>> #
>>>> [ml2_type_vlan]
>>>> # this tells OpenStack that the internal name "physnet1" provides the vlan range 100-199
>>>> network_vlan_ranges=physnet1:775
>>>> #
>>>>
>>>> Thanks,
>>>> Yang
>>>> Sent from my iPhone
>>>>
>>>> On Jun 26, 2015, at 8:54 AM, "James Denton"
>>>> <james.denton at rackspace.com> wrote:
>>>>
>>>>> You can absolutely have instances in the same network span 
>>>>> different compute nodes. As an admin, you can run ‘nova show <instanceid>’ and see the host in the output:
>>>>>
>>>>> root at controller01:~# nova show 7bb18175-87da-4d1f-8dca-2ef07fee9d21 
>>>>> | grep host
>>>>> | OS-EXT-SRV-ATTR:host                 | compute02                              |
>>>>>
>>>>> That info is not available to non-admin users by default.
>>>>>
>>>>> James
>>>>>
>>>>>> On Jun 26, 2015, at 7:38 AM, YANG LI <yangli at clemson.edu>
>>>>>> wrote:
>>>>>>
>>>>>> Thanks, James, for the explanation. It makes more sense now.
>>>>>> It is possible that instances on the same tenant network reside on different compute nodes, right? How do I tell which compute node an instance is on?
>>>>>>
>>>>>> Thanks,
>>>>>> Yang
>>>>>>
>>>>>>> On Jun 24, 2015, at 10:27 AM, James Denton
>>>>>>> <james.denton at rackspace.com> wrote:
>>>>>>>
>>>>>>> Hello.
>>>>>>>
>>>>>>>> all three nodes will have eth0 on the management/api network. Since
>>>>>>>> I am using the ml2 plugin with vlan for the tenant network, I think every
>>>>>>>> compute node should have eth1 as a second NIC on the provider
>>>>>>>> network. Is this correct? I understand the provider network is for instances to get external access to the internet, but how does an instance living on compute1 communicate with an instance living on compute2? Do they also go through the provider network?
>>>>>>>
>>>>>>> In short, yes. If you’re connecting instances to vlan “provider” 
>>>>>>> networks, traffic between instances on different compute nodes 
>>>>>>> will traverse the “provider bridge”, get tagged out eth1, and hit 
>>>>>>> the physical switching fabric. Your external gateway device could also sit in that vlan, and the default route on the instance would direct external traffic to that device.
>>>>>>>
>>>>>>> In reality, every network has ‘provider’ attributes that describe 
>>>>>>> the network type, segmentation id, and bridge interface (for 
>>>>>>> vlan/flat only). So tenant networks that leverage vlans would 
>>>>>>> have provider attributes set by Neutron automatically based on 
>>>>>>> the configuration set in the ML2 config file. If you use Neutron 
>>>>>>> routers that connect to both ‘tenant’ vlan-based networks and external ‘provider’ networks, all of that traffic could traverse the same provider bridge on the controller/network node, but would be tagged accordingly based on the network (ie. vlan 100 for external network, vlan 200 for tenant network).
>>>>>>>
>>>>>>> Hope that’s not too confusing!
>>>>>>>
>>>>>>> James
>>>>>>>
>>>>>>>> On Jun 24, 2015, at 8:54 AM, YANG LI <yangli at clemson.edu>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> I am working on installing OpenStack from scratch, but am confused
>>>>>>>> by the networking part. I want to have one controller node and two compute nodes.
>>>>>>>>
>>>>>>>> the controller node will only handle the following services:
>>>>>>>> glance-api
>>>>>>>> glance-registry
>>>>>>>> keystone
>>>>>>>> nova-api
>>>>>>>> nova-cert
>>>>>>>> nova-conductor
>>>>>>>> nova-consoleauth
>>>>>>>> nova-novncproxy
>>>>>>>> nova-scheduler
>>>>>>>> qpid
>>>>>>>> mysql
>>>>>>>> neutron-server
>>>>>>>>
>>>>>>>> compute nodes will have the following services:
>>>>>>>> neutron-dhcp-agent
>>>>>>>> neutron-l3-agent
>>>>>>>> neutron-metadata-agent
>>>>>>>> neutron-openvswitch-agent
>>>>>>>> neutron-ovs-cleanup
>>>>>>>> openvswitch
>>>>>>>> nova-compute
>>>>>>>>
>>>>>>>> all three nodes will have eth0 on the management/api network. Since
>>>>>>>> I am using the ml2 plugin with vlan for the tenant network, I think every
>>>>>>>> compute node should have eth1 as a second NIC on the provider
>>>>>>>> network. Is this correct? I understand the provider network is for instances to get external access to the internet, but how does an instance living on compute1 communicate with an instance living on compute2? Do they also go through the provider network?
>>>>>>>> _______________________________________________
>>>>>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>>>>> Post to     : openstack at lists.openstack.org
>>>>>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>
> 
> 



