[openstack-dev] [all] re-introducing twisted to global-requirements
Ben Meyer
ben.meyer at rackspace.com
Fri Jan 8 16:44:56 UTC 2016
On 01/07/2016 06:28 PM, Jay Pipes wrote:
> On 01/07/2016 06:12 PM, Ben Meyer wrote:
>> On 01/07/2016 03:32 PM, Jay Pipes wrote:
>>> On 01/07/2016 03:01 PM, Jim Rollenhagen wrote:
>>>> On Thu, Jan 07, 2016 at 02:41:12PM -0500, Sean Dague wrote:
>>>>> On 01/07/2016 02:09 PM, Jim Rollenhagen wrote:
>>>>>> Hi all,
>>>>>>
>>>>>> A change to global-requirements[1] introduces mimic, which is an
>>>>>> http server that can mock various APIs, including nova and ironic,
>>>>>> including control of error codes and timeouts. The ironic team plans
>>>>>> to use this for testing python-ironicclient without standing up a
>>>>>> full ironic environment.
>>>>>>
>>>>>> Here's the catch - mimic is built on twisted. I know twisted was
>>>>>> previously removed from OpenStack (or at least people said "pls no",
>>>>>> I don't know the full history). We didn't intend to stealth-introduce
>>>>>> twisted back into g-r, but it was pointed out to me that it may
>>>>>> appear this way, so here I am letting everyone know. lifeless pointed
>>>>>> out that when tests are failing, people may end up digging into mimic
>>>>>> or twisted code, which most people in this community aren't familiar
>>>>>> with AFAIK, which is a valid point though I hope it isn't required
>>>>>> often.
>>>>>>
>>>>>> So, the primary question here is: do folks have a problem with
>>>>>> adding twisted here? We're holding off on Ironic changes that depend
>>>>>> on this until this discussion has happened, but aren't reverting the
>>>>>> g-r change until we decide one way or another.
>>>>>>
>>>>>> // jim
>>>>>>
>>>>>> [1] https://review.openstack.org/#/c/220268/
>>>>>
>>>>> What is the advantage of running another server like this over using
>>>>> requests-mock (which is used by other OpenStack projects for testing
>>>>> today)? The only difference here seems to be that you actually execute
>>>>> requests code in one case and not in the other.
>>>>>
>>>>> Requests-mock debugging when things go wrong seems a bit simpler.
>>>>>
>>>>> This is less about twisted and more about trying to not introduce yet
>>>>> another way to mock code in the tree that people need to understand.
>>>>>
>>>>> -Sean
>>>>
>>>> We'd be using this for functional tests, not unit, where we can't
>>>> really inject mocks. The idea is that we could run a full functional
>>>> suite against either mimic or a full ironic environment, just by
>>>> changing a test setting.
>>>
>>> I don't really see the point of a separate project like Mimic that has
>>> a whole bunch of reimplementations (mocked out) of all sorts of
>>> OpenStack (and RAX-specific) API services. It's just a great way to
>>> introduce a larger surface area for bugs to creep in -- since you have
>>> to keep the Mimic interfaces up to date with the real interfaces.
>>> Better to keep something like this -- if it is TRULY needed -- in-tree
>>> with the API service itself, so that the chances of divergence are
>>> reduced. This is similar to the fakevirt driver in Nova. It's in tree
>>> for good reason: when someone changes the virt driver interface, the
>>> fakevirt driver goes boom and needs to be changed in a corresponding
>>> fashion in the same patch.
>> A tool like my OpenStackInABox could certainly benefit from the models
>> or services being provided by each project - aside from the complexity
>> that installing all the related dependencies adds, instead of just
>> implementing a simple model with a file and sqlite backend that has
>> minimal dependencies... but I digress, as I haven't looked at how nova's
>> fakevirt driver installs, so maybe that's not as big an issue. I'll
>> certainly look at it for use in OpenStackInABox, but even there I'm
>> aiming for a more complete scenario where you can interact with
>> Keystone, Nova, Swift, etc. (e.g. auth against a fake Keystone, use the
>> token with the fake Nova, which validates it against the same fake
>> Keystone instance, same with a fake Swift...).
>
> But your fake Keystone wouldn't be "authing" anything at all. Your
> fake Nova wouldn't be "validating" anything at all.
In my OpenStackInABox functionality, it does do authing: it generates auth
tokens, maintains them in the model, and can be configured to reject all
tokens, etc., depending on your test.
The same goes for my Swift mock - it stores the data and generates responses
just like real Swift - in accordance with the API documentation in both cases.
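To make that concrete, here's a rough sketch of the kind of token model I
mean (illustrative only - hypothetical names, not the actual
OpenStackInABox/StackInABox classes):

# Illustrative sketch only -- not the real OpenStackInABox/StackInABox code.
import uuid


class FakeKeystoneTokenModel(object):
    """In-memory token store backing a fake Keystone endpoint."""

    def __init__(self):
        self.tokens = {}          # token id -> tenant id
        self.reject_all = False   # per-test switch to force auth failures

    def issue_token(self, tenant_id):
        token = uuid.uuid4().hex
        self.tokens[token] = tenant_id
        return token

    def validate_token(self, token):
        if self.reject_all:
            return False
        return token in self.tokens

    def revoke_token(self, token):
        self.tokens.pop(token, None)


# A test can flip the model into "reject everything" mode:
model = FakeKeystoneTokenModel()
token = model.issue_token('some-tenant')
assert model.validate_token(token)
model.reject_all = True
assert not model.validate_token(token)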
> You aren't *functionally* testing anything of importance above if the
> things you are testing aren't doing what they are supposed to do.
But you are. Sure, you're not going to get the same timing, but it at
least exercises the code paths really well. It also reduces the amount of
code and setup needed to run the tests.
> The only things you are functionally testing are the *clients* to
> those fake HTTP services. And what you are *actually* validating the
> client code for isn't *actually* the real HTTP API service -- it's a
> fake which can have its own surface area for bugginess -- which takes
> me back to my original question: what value does one get from
> *functional* tests of a client that calls a faked-out HTTP API versus
> *unit* tests of said client that simply uses requests-mock (or
> similar) to set the returned value of the HTTP API service to some
> expected value?
It's a fake, yes, but it's still based on the documentation (at least in my
case) and/or observed behaviors (in the case of mimic). The responses build
on each other and maintain state. Example: generate a token and it keeps
the token; invalidate the token and that token now becomes invalid. This is
done in the model, which is cross-project, instead of in test-specific code
or a framework specific to any one project. At least with StackInABox, it's
just as easy to use as requests-mock (or HTTPretty or Python Responses).
Further, the responses are dynamic in nature, so there's no hard-coding of
values - no chance that your real code will accidentally behave differently
in a test because it detects a hard-coded value.
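As a rough sketch (hypothetical endpoints and payloads - not mimic's or
StackInABox's actual interfaces, though the requests-mock calls are real),
the same stateful idea can even sit behind requests-mock itself; the
callbacks consult shared state instead of returning a hard-coded body:

import uuid

import requests
import requests_mock

TOKENS = set()  # shared state consulted by every mocked endpoint


def issue_token(request, context):
    token = uuid.uuid4().hex
    TOKENS.add(token)
    return {'access': {'token': {'id': token}}}


def list_servers(request, context):
    # The fake "nova" validates the token against the same shared state.
    if request.headers.get('X-Auth-Token') not in TOKENS:
        context.status_code = 401
        return {'error': 'invalid token'}
    return {'servers': []}


with requests_mock.Mocker() as m:
    m.post('http://fake-keystone/v2.0/tokens', json=issue_token)
    m.get('http://fake-nova/v2/servers', json=list_servers)

    token = requests.post('http://fake-keystone/v2.0/tokens',
                          json={}).json()['access']['token']['id']
    assert requests.get('http://fake-nova/v2/servers',
                        headers={'X-Auth-Token': token}).status_code == 200

    TOKENS.discard(token)  # "invalidate" the token...
    assert requests.get('http://fake-nova/v2/servers',
                        headers={'X-Auth-Token': token}).status_code == 401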
This works well whether you're testing a client or a service that is itself
a client to other services. For instance, one project I was involved in
used Swift as a back-end. We built a Swift mock to be able to do unit tests
easily. In other projects, I started using a Swift mock based on my
StackInABox functionality - I wrote it once and just re-used it in each
project.
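Again, just as an illustrative sketch (hypothetical names, not my actual
Swift mock), the reusable piece boils down to an in-memory object store
that the code under test talks to instead of real Swift:

# Sketch of a reusable in-memory Swift-style object store (hypothetical API).
class FakeSwiftStore(object):
    """Keeps container/object data in memory for unit tests."""

    def __init__(self):
        self.containers = {}  # container name -> {object name: bytes}

    def put_container(self, container):
        self.containers.setdefault(container, {})

    def put_object(self, container, name, data):
        self.put_container(container)
        self.containers[container][name] = data

    def get_object(self, container, name):
        return self.containers[container][name]


store = FakeSwiftStore()
store.put_object('backups', 'snapshot-0001', b'fake payload')
assert store.get_object('backups', 'snapshot-0001') == b'fake payload'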
Ben
P.S. Here's an example of my StackInABox functionality -
https://github.com/racker/eom/blob/master/tests/test_metrics.py - not
OpenStack-specific, but a good example of the simplicity nonetheless.
For comparison, here's a Keystone mock-up -
https://github.com/racker/eom/blob/master/tests/test_auth.py