
Regression: "play auto-test --timeout" broke in python3 upgrade #1463

Open
davidcostanzo opened this issue Jan 30, 2024 · 1 comment · May be fixed by #1480

@davidcostanzo (Contributor)

Play Version (1.5.x / etc)

1.7.1, master

Operating System (Ubuntu 22)

Linux mouse 5.15.0-89-generic #99-Ubuntu SMP Mon Oct 30 20:42:41 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

JDK

openjdk version "14.0.2" 2020-07-14
OpenJDK Runtime Environment AdoptOpenJDK (build 14.0.2+12)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 14.0.2+12, mixed mode, sharing)

Library Dependencies

N/A

Expected Behavior

By default, "play auto-test" will give up on a test it it takes longer than 90 seconds. Instead of PASS or FAIL, it prints ERROR and moves on to the next test.

You can customize how long it waits by providing a --timeout parameter. Our continuous integration depends on this, as not all of our tests can run in under 90 seconds on all of our test machines. Moreover, when we have a test class that has a lot of cohesive tests, or some long-running tests, we'd rather not split up the cohesive methods into different classes just to fit under the timeout. Instead, we increase the --timeout.
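For reference, the two ways of supplying the timeout that this report relies on. The webclient.timeout conf key comes from the autotest.py snippet quoted below; the conf file path and the example value are assumptions on my part (values are in milliseconds, matching the --timeout=60000 example in the repro steps):

    # In the application's conf file (path assumed to be conf/application.conf):
    webclient.timeout=120000

    # Or on the command line, which takes precedence in autotest.py:
    play auto-test --timeout=120000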

There is a regression from Play 1.6.0 -> Play 1.7.1 that broke this feature. I expect it's due to upgrading from python2 to python3.

Steps to Reproduce

This is reproducible in any Play app. For example, using the yabe sample:

  1. cd samples-and-tests/yabe
  2. play auto-test --timeout=60000

Actual Behavior

The tests don't run; there is an unhandled python exception:

  File "/devel/dcostanz/play1_david_costanzo/play", line 168, in <module>
    status = cmdloader.commands[play_command].execute(command=play_command, app=play_app, args=remaining_args, env=play_env, cmdloader=cmdloader)
  File "/devel/dcostanz/play1_david_costanzo/framework/pym/play/commands/autotest.py", line 28, in execute
    autotest(app, args)
  File "/devel/dcostanz/play1_david_costanzo/framework/pym/play/commands/autotest.py", line 86, in autotest
    if weblcient_timeout >= 0:  
TypeError: '>=' not supported between instances of 'str' and 'int'

Had the command worked, the tests would have run with a timeout of 60 seconds, and for the "yabe" sample all tests would have passed.
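
For context, this kind of mixed-type comparison was silently allowed in Python 2 (any str compared greater than any int), which is why the old code worked; Python 3 raises a TypeError instead. A minimal standalone sketch, not the framework code:

    # Python 2: '60000' >= 0 quietly evaluated to True (strings sorted after ints).
    # Python 3: the same comparison raises TypeError, as in the traceback above.
    timeout = '60000'  # the value parsed from --timeout=60000 is still a string
    try:
        if timeout >= 0:
            print('timeout accepted')
    except TypeError as e:
        print('TypeError:', e)  # '>=' not supported between instances of 'str' and 'int'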

Suggested Fix

The problem seems to be that autotest.py parses the timeout as a str, then wants to use it as an int.

    # Handle timeout parameter
    weblcient_timeout = -1 # <--- initialized to integer
    if app.readConf('webclient.timeout'):
        weblcient_timeout = app.readConf('webclient.timeout')
    
    for arg in args:
        if arg.startswith('--timeout='): # <--- parsed as string
            args.remove(arg)
            weblcient_timeout = arg[10:]
          
    if weblcient_timeout >= 0:   # <--- ERROR: used as integer
        add_options.append('-DwebclientTimeout=' + weblcient_timeout) # <-- used as string

Since the only time it's used as an int is to check if it's uninitialized, I modified my fork to use "None" to mean uninitialized and was able to get this working. I'll submit a PR.
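
A minimal sketch of that fix, using None as the "not specified" sentinel, keeping the value as a string, and correcting the variable name; the authoritative change is in the linked PR:

    # Handle timeout parameter (None means "not specified")
    webclient_timeout = None
    if app.readConf('webclient.timeout'):
        webclient_timeout = app.readConf('webclient.timeout')

    for arg in list(args):  # iterate over a copy so removing is safe
        if arg.startswith('--timeout='):
            args.remove(arg)
            webclient_timeout = arg[len('--timeout='):]  # keep as a string

    if webclient_timeout is not None:
        add_options.append('-DwebclientTimeout=' + webclient_timeout)

This defers parsing and validation of the value to the Java test runner, as described in the commit message below.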

davidcostanzo pushed a commit to davidcostanzo/play1 that referenced this issue Feb 9, 2024
The upgrade to python3 broke the processing of the --timeout parameter
due to a type mismatch (a string was compared to an integer).  The fix
is to always treat the --timeout parameter as a string, deferring to the
test runner to parse it and report errors.

This also fixes a typo in the name of a variable "webclient_timeout"
(it was named "weblcient_timeout").

It also adds some new regression tests for the auto-test functionality.
davidcostanzo pushed a commit to davidcostanzo/play1 that referenced this issue Mar 11, 2024 (same commit message as above)
@davidcostanzo (Contributor, Author)

When I first submitted the PR, there were some problems with the GitHub checks caused by the python3 upgrade. Those were later fixed, but my PR still failed. I saw all of the errors about ModuleNotFoundError: No module named 'win32pdh' and incorrectly concluded that the checks weren't working. I realized today that those are non-fatal problems and that my PR didn't run on Windows because play auto-test has different output on Windows when a test times out. I have adjusted my PR to account for this, and I think the checks are all passing now. I don't know GitHub well, so I'm not sure how to resubmit the PR or whether that's even necessary.
