deltacloud-dev mailing list archives

From jvlcek <jvl...@redhat.com>
Subject Re: making cimi tests pass on fgcp
Date Thu, 07 Feb 2013 17:40:07 GMT
On 02/05/2013 02:29 AM, Koper, Dies wrote:
> Hi Joe,
>
> Thanks for your reply!
>
>> I think this proposal sounds great. The only concern I have is
>> how to address retries that could potentially never succeed.
> There is already retry logic in current tests (both cimi and api) and I
> have been copying what's done there.
> I believe in some places that logic was looping endlessly and in others
> it retried a fixed number of times (which I had to increase a bit in
> some cases).
> We can definitely look into capping them all and consistently.
> Maybe we can separate that from this topic, as it is already an existing
> issue and addressing it here would make the patch for this issue take
> longer to develop and review.
>
>> I would like to suggest having a retry MAX with a default value
>> that can be customized by someone running the tests. This
>> way the test could be run in a shorter time when the user
>> realizes they may have to do some manual clean-up.
> Retry logic is used in different locations. Are you referring only to
> retry logic in the teardown method?
> Because if you'd decrease the retry max in other locations, it'll just
> mean those tests will fail :)

I was thinking that if special retry logic is added to accommodate the
fgcp driver, it should be configurable, for example with a
user-configurable MAX retry count.
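
Something along these lines could work, as a rough sketch only (the
with_retries helper, the CIMI_RETRY_MAX/CIMI_RETRY_DELAY variables and
the RetryableError class are made-up names for illustration, not
existing helpers in the test suite):

  # Read the cap and delay from the environment so a user who knows
  # they may need manual clean-up can shorten the run.
  RETRY_MAX   = Integer(ENV.fetch('CIMI_RETRY_MAX', 30))
  RETRY_DELAY = Integer(ENV.fetch('CIMI_RETRY_DELAY', 10)) # seconds

  def with_retries(max = RETRY_MAX, delay = RETRY_DELAY)
    attempts = 0
    begin
      attempts += 1
      yield
    rescue RetryableError            # e.g. wraps a 405/409 from the backend
      raise if attempts >= max       # give up once the cap is reached
      sleep delay
      retry
    end
  end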



>
>> I would also like to suggest that the test log the state for
>> each iteration of the retry, how many retries were required
>> and the time it took to detect the expected result. This may
> Yes, that is a good point. When I was working on the patch for the api
> tests I added a lot of puts because sometimes I didn't know whether it
> got stuck in an endless loop or whether the tests were still running.
> The problem with my puts was that they made the current output (a
> one-line test name followed by a '.', 'S' or 'F' status) unreadable,
> so I didn't include them in the patch.
> I'd like to hear a suggestion on how to solve that. If we'd make it
> configurable (log level?), would you use it?
> Again, maybe we can separate that from this topic, as it is already an
> existing issue and addressing it here would make the patch for this
> issue take longer to develop and review.

Perhaps you could keep an internal counter that is reported at the end
of the run, and optionally show a spinning pinwheel while the run is in
progress.
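
As a rough idea (none of these names exist in the suite, they are only
to illustrate), the polling helper could bump a counter per resource
and dump the totals once the run finishes:

  # Illustrative sketch only.
  RETRY_COUNTS = Hash.new(0)

  def poll(label, max: 30, delay: 10)
    spinner = ['|', '/', '-', '\\']
    max.times do |i|
      return true if yield                      # target state reached
      RETRY_COUNTS[label] += 1
      print "\r#{spinner[i % spinner.size]} waiting for #{label}"
      sleep delay
    end
    false
  end

  at_exit do
    RETRY_COUNTS.each { |label, n| puts "#{label}: #{n} retries" }
  end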

>> help us gather some insight into where we may need to try
>> to work to improve the user experience.
> User experience of?

I was thinking it might help identify possible bottlenecks in the
fgcp driver.
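
For example, something as simple as wrapping each poll in
Benchmark.realtime would show which backend operations dominate the run
time (wait_until_started below is a made-up placeholder for whatever
condition the test polls on, not an existing helper):

  require 'benchmark'

  # Print how long each polled operation took to reach its target state.
  def timed_poll(label)
    elapsed = Benchmark.realtime { yield }
    puts format('%-25s %6.1fs', label, elapsed)
  end

  # timed_poll('machine create') { wait_until_started(machine) }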


Hope this helps.
   Joe


> It's waiting for (polling) the backend, not much we can do about it in
> Deltacloud.
>
> Cheers,
> Dies Koper
>
>
>> -----Original Message-----
>> From: jvlcek [mailto:jvlcek@redhat.com]
>> Sent: Tuesday, 5 February 2013 1:41 AM
>> To: dev@deltacloud.apache.org
>> Cc: Koper, Dies
>> Subject: Re: making cimi tests pass on fgcp
>>
>> On 02/04/2013 12:15 AM, Koper, Dies wrote:
>>> Hi Ronelle, Marios, all
>>>
>>> Last week we modified the api tests slightly so they would pass on fgcp.
>>> I would like to do the same with the cimi tests.
>>> Before I complete my patch, I'd like to consult with you to ensure the
>>> direction I'd like to take is acceptable with you.
>>>
>>> Unlike the api tests, I appreciate that the cimi tests (at least the
>>> part* ones) are based on documented scenarios that we need to follow.
>>> So I'm trying to come up with a solution that complies with the
>>> scenarios.
>>> To get the tests to pass on the fgcp I have to work around the same
>>> restriction with the fgcp endpoint API that affected the api tests:
>>> when you create a resource (machine/volume) or delete one, it does not
>>> accept any other resource creation/deletion requests in that system
>>> until the creation/deletion has completed.
>>>
>>> Currently, I'm considering making the following changes:
>>>
>>> 1) For tests that create resources and deletion is not part of the test
>>> scenario, perform the deletion in a teardown operation (as is already
>>> done in most cases). The teardown method would loop through the
>>> resources to stop and destroy them. When a destroy operation returns a
>>> 405 (Method Not Allowed) or 409 (Conflict), the operation is retried
>>> again and again after a number of seconds.
>>>
>>> As the teardown is not part of the scenario, I hope this non-ideal
>>> method is acceptable.
>>>
>>> 2) For tests that create resources, I'd like to add a checkpoint: that
>>> the resource has actually been created (by sleeping and performing a
>>> GET on it until its state becomes AVAILABLE/STARTED/STOPPED). The test
>>> then continues as per the scenario.
>>>
>>> I would say this is actually a better implementation of the scenario:
>>> where e.g. the scenario says the success criterion is "A new Machine
>>> resource is created.", our current test is just checking that the
>>> response of the creation request is 201. There is no check whether the
>>> resource has been created. If it failed during the creation process,
>>> our test would not catch that. With my proposal it would, because we'd
>>> actually be checking that the machine left the CREATING state and
>>> transitioned into a stable success state.
>>>
>>> I expect again that added sleeps will not affect performance with the
>>> mock driver. I can imagine the extra check introduced above does incur
>>> a performance impact, depending on the performance of the backend
>>> cloud provider under testing.
>>>
>>> What do you think?
>>>
>>> Cheers,
>>> Dies Koper
>>>
>> Dies,
>>
>> I think this proposal sounds great. The only concern I have is
>> how to address retries that could potentially never succeed.
>>
>> I would like to suggest having a retry MAX with a default value
>> that can be customized by someone running the tests. This
>> way the test could be run in a shorter time when the user
>> realizes they may have to do some manual clean-up.
>>
>> I would also like to suggest that the test log the state for
>> each iteration of the retry, how many retries were required
>> and the time it took to detect the expected result. This may
>> help us gather some insight into where we may need to try
>> to work to improve the user experience.
>>
>> Let me know if I am wrong but I would expect addressing
>> my 2 suggestions would not be overly complex.
>>
>> Hope this helps and thanks,
>>    Joe VLcek
>

