[openstack-dev] [Quantum][LBaaS] Test run of LBaaS service

Dan Wendlandt dan at nicira.com
Wed Feb 20 16:48:38 UTC 2013


Hi Trinath,

The code is still coming together and is not quite ready for testing. I'd
suggest waiting a week until the code and the appropriate devstack support
are merged.

Dan

On Wed, Feb 20, 2013 at 2:36 AM, Trinath Somanchi <
trinath.somanchi at gmail.com> wrote:

> Hi-
>
> >
> > I have the folsom code installed. Can anyone guide me on how to
> > integrate and test LBaaS in the folsom setup I have?
>
> You'll have to run code from master, and cherry-pick onto it the LBaaS
> patches from gerrit.
>
> Can anyone guide me on how to get the code base described above?
>
> I was unable to get the code base with git. I think I'm missing something
> and keep running into the same issue.
>
> Could anyone kindly guide me on this?
>
> Thanks in advance.
>
> -
> Trinath
>
>
> On Tue, Feb 19, 2013 at 2:54 PM, Eugene Nikanorov
> <enikanorov at mirantis.com> wrote:
>
>> Hi Salvatore,
>>
>> Per yesterday's meeting, it was decided that we'll rework the patches to
>> adopt the haproxy-on-the-host approach rather than running haproxy on a VM.
>> It's a temporary, much simplified solution just to make things work for
>> grizzly.
>> Therefore there's no point in reviewing or trying those patches.
>>
>> Thanks,
>> Eugene.
>>
>>
>> On Tue, Feb 19, 2013 at 12:51 PM, Salvatore Orlando <sorlando at nicira.com>
>> wrote:
>>
>>> Hi Eugene,
>>>
>>> Thanks for putting this together.
>>> Even though I am sidelined by other work items, I am assisting Mark and
>>> Dan in reviewing the various LBaaS patches.
>>>
>>> Some comments inline.
>>>
>>> Regards,
>>> Salvatore
>>>
>>> On 19 February 2013 06:15, Trinath Somanchi <trinath.somanchi at gmail.com>
>>> wrote:
>>> > Hi -
>>> >
>>> > I'm very excited about the help with testing LBaaS.
>>> >
>>> > Can you guide me (and other enthusiasts) on how to test LBaaS, and which
>>> > code base to download? I ask because I find several branches related to
>>> > LBaaS.
>>>
>>> There are several patches on gerrit, as we decided to split them up to
>>> simplify review.
>>> You should use all of them. LBaaS API support, as well as the CLI, is
>>> already merged into the master branches (quantum and
>>> python-quantumclient).
>>>
>>> >
>>> > I have the folsom code installed. Can anyone guide me on how to
>>> > integrate and test LBaaS in the folsom setup I have?
>>>
>>> You'll have to run code from master, and cherry-pick onto it the LBaaS
>>> patches from gerrit.
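>>>
>>> For illustration, the workflow is roughly the following (the clone URL is
>>> the quantum repo of the day; the patch set number in the change ref is a
>>> placeholder, and you'd repeat the fetch/cherry-pick for each LBaaS review):
>>>
>>>     git clone https://github.com/openstack/quantum.git && cd quantum
>>>     # fetch a pending change from gerrit and cherry-pick it onto master
>>>     git fetch https://review.openstack.org/openstack/quantum \
>>>         refs/changes/85/20985/1
>>>     git cherry-pick FETCH_HEAD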
>>>
>>> >
>>> > Please help me integrate LBaaS into my folsom installation.
>>>
>>> I don't think you can - at least we haven't tried (and probably we
>>> won't, unless LBaaS goes into the backport roadmap).
>>>
>>> >
>>> > thanks in advance.
>>> >
>>> >
>>> >
>>> >
>>> > On Tue, Feb 19, 2013 at 12:45 AM, Eugene Nikanorov
>>> > <enikanorov at mirantis.com> wrote:
>>> >>
>>> >> Hi Dan, Mark, folks,
>>> >>
>>> >> I know you have been reviewing and testing the LBaaS patches and have
>>> >> run into several problems preventing the service from providing a
>>> >> complete solution.
>>> >> We're currently putting all our efforts into integration testing.
>>> >> Please find the updated instructions on how to set up and run the
>>> >> service below.
>>> >>
>>> >> Let me step through the list of problems that Dan has identified:
>>> >> 1. Strict host key checking.
>>> >> By default, ssh and scp use strict host key checking, so once the host
>>> >> fingerprint changes for a known host, ssh/scp switch into interactive
>>> >> mode and ask whether it is OK to proceed.
>>> >> We've fixed it via the ssh/scp option that disables strict key checking.
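>>> >>
>>> >> Roughly, the invocation looks like this (illustrative command, not the
>>> >> exact code from the patch; the user and address are placeholders):
>>> >>
>>> >>     ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
>>> >>         -i keyfile_path user@<vm_address> <command>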
>>>
>>> During yesterday's meeting, and on gerrit as well, I was told that the
>>> requirement for SSH (and hence also the paramiko dependency) was going
>>> away and was being replaced by an RPC mechanism similar to Quantum's
>>> DHCP/L3 agents.
>>> Is this information incorrect?
>>>
>>> >>
>>> >> 2. "VM getting deleted, but then lbaas code not realizing it was
>>> >> deleted"
>>> >> There was a bug in the code which incorrectly updated the device status
>>> >> in case of error and didn't delete it from the DB.
>>> >> We've fixed it.
>>> >>
>>>
>>> This is good. Another thing I was not sure about is whether we're keeping
>>> isolated VMs or running haproxy in isolated namespaces instead. It seems
>>> we're keeping the VM approach. Do we still have a fixed pool of VMs?
>>>
>>> >> 3. File permissions on the key file
>>> >> The key file is used by ssh/scp, which are run with "sudo ip netns exec
>>> >> <ns> ssh -i keyfile_path ..."
>>> >> I guess ssh/scp are getting sudo privileges in this case, so I wonder
>>> >> what issues could be experienced here.
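>>> >>
>>> >> (One likely suspect, purely as an illustration: ssh refuses to use a
>>> >> private key whose mode is too open, so whoever writes the key file
>>> >> needs something like the following.)
>>> >>
>>> >>     chmod 600 keyfile_path   # ssh aborts on a world-readable key with
>>> >>                              # "WARNING: UNPROTECTED PRIVATE KEY FILE!"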
>>> >>
>>> >> 4. Keypair injection not working
>>> >> We have also hit this issue several times without a stable repro;
>>> >> sometimes it worked and sometimes it didn't.
>>> >> Currently it's our primary concern, which however can be worked around
>>> >> by injecting keys into the image manually.
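>>> >>
>>> >> (For illustration, one way to do the manual injection with guestfish
>>> >> from libguestfs, assuming a locally available qcow2 image; the image
>>> >> name and key path here are placeholders:)
>>> >>
>>> >>     guestfish -a balancer.qcow2 -i <<'EOF'
>>> >>     mkdir-p /root/.ssh
>>> >>     upload /path/to/key.pub /root/.ssh/authorized_keys
>>> >>     chmod 0600 /root/.ssh/authorized_keys
>>> >>     EOF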
>>> >>
>>> >> As an alternative, we tried to use the pexpect library to access the VM
>>> >> via login/password in pseudo-interactive mode, but later decided that
>>> >> using key pairs is a more reliable way to access the VM.
>>> >>
>>> >> 5. Security groups
>>> >> As far as I understood the concern, it's possible that the security
>>> >> group the agent is using to access the balancer VM could prohibit the
>>> >> ICMP packets that we use for the liveness check.
>>> >> So it was changed to netcat probing port 22.
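>>> >>
>>> >> Roughly, the probe amounts to something like the following; an exit
>>> >> code of 0 means the port answered (illustrative command, not the exact
>>> >> agent code; the namespace and address are placeholders):
>>> >>
>>> >>     sudo ip netns exec <ns> nc -z -w 3 <vm_address> 22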
>>> >>
>>> >> The latest code with all these fixes was just posted for review (the
>>> >> HAProxy driver): https://review.openstack.org/#/c/20985/
>>>
>>> This is true if the management traffic between the agents and the load
>>> balancer VMs goes over a tenant network.
>>> Ideally I would avoid this situation in the first place, but that is
>>> probably cumbersome with the VM approach.
>>> A probe on port 22 will tell you whether the port is closed or not. What
>>> would be the behaviour if the port is unreachable?
>>>
>>> The 'default' security group, which is applied to every single VM,
>>> regardless of the nature of the network, does not allow SSH traffic.
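>>>
>>> (So if SSH access over a tenant network is required, the rule would have
>>> to be added explicitly; as an illustration, with the nova CLI:)
>>>
>>>     nova secgroup-add-rule default tcp 22 22 0.0.0.0/0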
>>>
>>>
>>> >>
>>> >> Thanks,
>>> >> Eugene.
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > Regards,
>>> > ----------------------------------------------
>>> > Trinath Somanchi,
>>> > +91 9866 235 130
>>> >
>>>
>>
>>
>>
>
>
> --
> Regards,
> ----------------------------------------------
> Trinath Somanchi,
> +91 9866 235 130
>
>


-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~

