
Parasol test framework has no support for dealing with random test failures #7

Open
jecisc opened this issue Mar 30, 2017 · 1 comment


@jecisc (Member) commented Mar 30, 2017

Since there are some random failures with tests, it would be nice to let the user define a number of retries for when a test fails. By default there would be 0 retries, but the user could add a method numberOfRetry in his test class to enable it. That way the CI would not fail randomly as often.

I would like to have other opinions about this.
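
To make the idea concrete, here is a minimal sketch of how such a hook could be wired into SUnit, assuming a retry loop in an overridden runCase; the class name and the exact wiring are illustrative, not an existing Parasol API:

```smalltalk
"Illustrative sketch only: assumes a hypothetical ParasolTestCase base class
and wires the proposed numberOfRetry hook into SUnit via TestCase >> #runCase."

ParasolTestCase >> numberOfRetry
	"Subclasses may override this to tolerate random failures. Default: no retries."
	^ 0

ParasolTestCase >> runCase
	| attemptsLeft |
	attemptsLeft := self numberOfRetry.
	[ super runCase ]
		on: TestFailure , Error
		do: [ :ex |
			attemptsLeft > 0
				ifTrue: [
					"Rerun the whole test (setUp / test / tearDown) once more."
					attemptsLeft := attemptsLeft - 1.
					ex retry ]
				ifFalse: [ ex pass ] ]
```

With something like this in place, a flaky test class would only need to override numberOfRetry to answer, say, 2 in order to be rerun up to twice before its failure is reported.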

Rinzwind changed the title from “Allow to tolerate some failures during tests” to “Parasol test framework has no support for dealing with random test failures” on Mar 30, 2017
@Rinzwind (Member) commented

I have tried to reformulate the title of this issue so that it reads more like an issue description rather than a feature-idea description, making it clearer what problem the feature idea needs to solve.

The issue is that Parasol tests are prone to random failures due to all of the parts running concurrently (browser, JavaScript engine in the browser, tests, ...), while there is no support in the test framework for dealing with this. Perhaps the real issue here is not the lack of support for dealing with random failures, as I put it in the title, but the lack of support for dealing with the concurrency.

The feature idea is for the test framework to be able to automatically rerun a failing test to check whether it fails consistently.

A question for me is whether an inconsistently passing (or inconsistently failing) test would be marked as “passed” or as “failed”. I think it would have to be the latter, because the random failure could be due to randomness in the testing, or due to randomness in the code. Within the code it could be due to sources of randomness that are sometimes missed, like converting `IdentitySet with: 'foo' copy with: 'bar' copy` to an Array, which randomly results in `#('foo' 'bar')` or `#('bar' 'foo')` (see the snippet below).

If it's just the test, it's not that important, but if it's the code, it is important and I need to be aware of it. If the test were marked as just “passed”, that would hide the fact that it did not pass consistently. Alternatively, a distinction could be made between “passed”, “failed” and “randomly passed/failed”, but then this is something that would have to be supported by the entire test framework (the class TestCase, the Test Runner in Pharo, Smalltalk-CI, ...).
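
For reference, here is the IdentitySet example in runnable form. The resulting order depends on the identity hashes of the two string copies, so it can differ from one run or image to the next:

```smalltalk
| set |
set := IdentitySet with: 'foo' copy with: 'bar' copy.
set asArray
	"=> randomly #('foo' 'bar') or #('bar' 'foo'), because an IdentitySet
	 stores its elements by identity hash rather than by value"
```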
