[Openstack-operators] I have finally created an instance, and it works! However, there is no ethernet card

Nhan Cao nhanct92 at gmail.com
Sat Aug 16 02:02:45 UTC 2014


You must add a network interface to the guest (instance).
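For context, if a neutron network already exists, a NIC can be hot-attached to a running instance with the Icehouse-era CLI roughly like this (the network and instance identifiers are placeholders, not values from this thread):

```shell
# List the available neutron networks and note the id of the one
# the instance should join
neutron net-list

# Attach a new virtual NIC on that network to the running instance
# (<net-id> and <instance-name> are placeholders for your own values)
nova interface-attach --net-id <net-id> <instance-name>
```

If the instance was booted with no network at all, it may be simpler to create the network first and boot a fresh instance attached to it.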


2014-08-16 1:43 GMT+07:00 Jeff Silverman <jeff at sweetlabs.com>:

> I have been surfing the internet, and one of the ideas that comes to mind
> is modifying the /etc/neutron/agent.ini file on the compute nodes.  In the
> agent.ini file, there is a comment near the top that is almost helpful:
>
> # L3 requires that an interface driver be set. Choose the one that best
> # matches your plugin.
>
> The only plugin I know about is ML2.  I have no idea whether it is the
> right one for me, and I have no idea how to choose the interface driver
> that best matches my plugin.
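For reference, the Icehouse install guide pairs the ML2 plugin with either the Open vSwitch or the Linux bridge agent, and the interface driver follows that choice; a sketch of the relevant setting (which agent config file it belongs in depends on the deployment) is:

```ini
# With the Open vSwitch agent:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

# Or, with the Linux bridge agent instead:
# interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
```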
>
> Thank you
>
>
> Jeff
>
>
>
>
>
>
> On Fri, Aug 15, 2014 at 10:26 AM, Jeff Silverman <jeff at sweetlabs.com>
> wrote:
>
>> By "defined a network space for your instances", does that mean going
>> through the process as described in
>> http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-compute-node.html
>> ?
>>
>> I got part way through that when I realized that the procedure was going
>> to bridge packets through neutron.  That's not what I want.  I want the
>> packets to go directly to the physical router.  For example, I have two
>> tenants, with IP addresses 10.50.15.80/24 and 10.50.15.90/24, and the
>> router is at 10.50.15.1.  There is a nice picture of what I am trying to do
>> at
>> http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html#nova_network_traffic_in_cloud
>> .  But if the hypervisor doesn't present a virtual device to the guests,
>> then nothing else is going to happen.  That network troubleshooting guide
>> does not explain what to do if the virtual NIC is missing.
>>
>>
>> Thank you
>>
>> Jeff
>>
>>
>>
>> On Fri, Aug 15, 2014 at 9:38 AM, Abel Lopez <alopgeek at gmail.com> wrote:
>>
>>> Curious if you’ve defined a network space for your instances. If you’re
>>> using the traditional flat_network, this is known as the ‘fixed_address’
>>> space.
>>> If you’re using neutron, you would need to create a network and a subnet
>>> (and router with gateway, etc). You’d then assign the instance to a network
>>> at launch time.
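Spelled out with the Icehouse neutron and nova clients, that sequence looks roughly like this (all names, CIDRs, and the external network are example values, not values from this thread):

```shell
# Create a tenant network and a subnet on it
neutron net-create demo-net
neutron subnet-create demo-net 10.50.15.0/24 --name demo-subnet --gateway 10.50.15.1

# Create a router, plug the subnet into it, and set its external gateway
# (ext-net is assumed to be a pre-existing external network)
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router ext-net

# Launch an instance attached to the network at boot time
nova boot --flavor m1.small --image cirros --nic net-id=<demo-net-id> demo-instance
```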
>>>
>>>
>>> On Aug 15, 2014, at 9:17 AM, Jeff Silverman <jeff at sweetlabs.com> wrote:
>>>
>>> <ip_a.png>
>>> For those of you that can't see pictures:
>>> $ sudo ip a
>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
>>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>     inet 127.0.0.1/8 scope host lo
>>>     inet6 ::1/128 scope host
>>>         valid_lft forever preferred_lft forever
>>>
>>> I suspect that the issue is that the hypervisor is not presenting a
>>> virtual ethernet card.
>>>
>>> Thank you
>>>
>>>
>>> Jeff
>>>
>>>
>>>
>>> On Thu, Aug 14, 2014 at 6:57 PM, Nhan Cao <nhanct92 at gmail.com> wrote:
>>>
>>>> can you show output of command:
>>>> ip a
>>>>
>>>>
>>>>
>>>>
>>>> 2014-08-15 7:41 GMT+07:00 Jeff Silverman <jeff at sweetlabs.com>:
>>>>
>>>>> People,
>>>>>
>>>>> I have brought up an instance, and I can connect to it using my
>>>>> browser!  I am so pleased.
>>>>>
>>>>> However, my instance doesn't have an ethernet device, only a loopback
>>>>> device.   My management wants me to use a provider network, which I
>>>>> understand to mean that my instances will have IP addresses in the same
>>>>> space as the controller, block storage, and compute node administrative
>>>>> addresses.  However, I think that discussing addressing is premature until
>>>>> I have a working virtual ethernet card.
>>>>>
>>>>> I am reading through
>>>>> http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-compute-node.html
>>>>> and I think that the ML2 plugin is what I need.  However, I think I do not
>>>>> want a network type of GRE, because that encapsulates the packets and I
>>>>> don't have anything to un-encapsulate them.
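A flat provider network is the usual way to avoid tunneling and put instances directly on the physical segment. A minimal sketch of the ml2 configuration, assuming Open vSwitch and a physical-network label of physnet1 mapped to bridge br-ex (both of which are assumptions, not values from this thread), would be:

```ini
[ml2]
type_drivers = flat
tenant_network_types = flat
mechanism_drivers = openvswitch

[ml2_type_flat]
flat_networks = physnet1

[ovs]
bridge_mappings = physnet1:br-ex
```

The matching network would then be created with neutron net-create using --provider:network_type flat and --provider:physical_network physnet1.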
>>>>>
>>>>> Thank you
>>>>>
>>>>>
>>>>> Jeff
>>>>>
>>>>>
>>>>> --
>>>>> *Jeff Silverman*
>>>>> Systems Engineer
>>>>> (253) 459-2318 (c)
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> OpenStack-operators mailing list
>>>>> OpenStack-operators at lists.openstack.org
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> *Jeff Silverman*
>>> Systems Engineer
>>> (253) 459-2318 (c)
>>>
>>>  _______________________________________________
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
>>>
>>
>>
>> --
>> *Jeff Silverman*
>> Systems Engineer
>> (253) 459-2318 (c)
>>
>>
>
>
> --
> *Jeff Silverman*
> Systems Engineer
> (253) 459-2318 (c)
>
>

