JT-NM Tested profile? #765

Open
garethsb opened this issue Feb 27, 2023 · 1 comment


@garethsb
Contributor

One item of feedback from JT-NM Tested August 2022 was that the results from the NMOS Testing Tool required several collation and filtering steps to produce the familiar tabular results from JT-NM Tested.

Some suggestions for improving automation were made:

  • save JSON directly to repo used to populate Google Sheet (and maybe PDF for vendor)
  • format JSON for the JT-NM test plan
    • e.g. ordering/grouping of test cases (e.g. transmit vs. receive)
  • process NMOS Testing Tool 'amber' results (warnings, etc.) according to the JT-NM test plan (see below)
  • automatic verification that all expected test results are present, with comparison against the pre-test (see the sketch below)
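
For illustration, a minimal collation sketch covering the first and last points, assuming the testing tool writes one JSON file per suite and that each result carries "suite", "results", "name", "state" and "detail" fields (all of these field names are guesses to be checked against the tool's actual output):

```python
#!/usr/bin/env python3
"""Sketch: collate NMOS Testing Tool JSON results into one table and check
that every expected test case produced a result. The JSON field names used
here are assumptions, not the tool's documented format."""

import csv
import json
import sys
from pathlib import Path


def load_results(results_dir):
    """Yield (suite, name, state, detail) from every JSON file in results_dir."""
    for path in sorted(Path(results_dir).glob("*.json")):
        data = json.loads(path.read_text())
        suite = data.get("suite", path.stem)    # assumed field
        for result in data.get("results", []):  # assumed field
            yield suite, result["name"], result["state"], result.get("detail", "")


def main(results_dir, expected_file, out_csv):
    rows = list(load_results(results_dir))

    # Verify all expected test cases (one "suite/name" per line) are present
    expected = set(Path(expected_file).read_text().split())
    seen = {"{}/{}".format(suite, name) for suite, name, _, _ in rows}
    for missing in sorted(expected - seen):
        print("MISSING RESULT: " + missing, file=sys.stderr)

    # Write the tabular form used to populate the Google Sheet
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Suite", "Test Case", "State", "Detail"])
        writer.writerows(rows)


if __name__ == "__main__":
    main(*sys.argv[1:4])
```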

See https://static.jt-nm.org/documents/JT-NM_Tested_Catalog_NMOS_TR_Full-Online-2022-08.pdf

General statements and terms

NOTE: Unless explicitly noted otherwise in the test plan, the testing tool needs to indicate the ‘PASS’ state for the test case. The test states ‘FAIL’, ‘WARNING’, ‘NOT IMPLEMENTED’ are NOT considered as a pass. (One general exception is that warnings about ‘charset’ will be marked as a pass.)

Approx. 7 test cases have notes explicitly indicating that a warning will be marked as a pass in the JT-NM Tested results.
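
For illustration, that rule could be captured in a single predicate; a sketch, where the result field names, the state strings ("Pass", "Warning", etc.) and the per-test exception list are all assumptions to fill in from the test plan:

```python
# Sketch of the pass rule quoted above. ASSUMPTIONS: each result is a dict
# with "name", "state" and "detail" keys; WARNING_IS_PASS lists the approx.
# 7 test cases whose notes in the test plan allow a warning.

WARNING_IS_PASS = set()  # to be filled in from the JT-NM test plan


def counts_as_pass(result):
    """True if a testing tool result counts as a pass for JT-NM Tested."""
    if result["state"] == "Pass":
        return True
    if result["state"] == "Warning":
        # General exception: warnings about 'charset' are marked as a pass
        if "charset" in result.get("detail", "").lower():
            return True
        # Per-test exceptions explicitly noted in the test plan
        return result["name"] in WARNING_IS_PASS
    # "Fail", "Not Implemented", etc. are never a pass
    return False
```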

Other questions arose with the testing tool config during JT-NM Tested. The test plan Appendix described how to set up the testing tool and included an example UserConfig.py, but some settings were left flexible. Can we standardize timeouts for the different cases? What ranges are acceptable for JT-NM Tested? (E.g. an HTTP_TIMEOUT of 10 seconds shouldn't be needed.)
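
A pinned UserConfig.py could answer that; a sketch, where only HTTP_TIMEOUT is taken from the discussion above, and the other setting names and all of the values are placeholders to be agreed and checked against the testing tool's Config.py:

```python
# UserConfig.py sketch for a standardized JT-NM Tested setup. HTTP_TIMEOUT is
# referenced above; the other setting names, and every value here, are
# assumptions, not agreed JT-NM Tested configuration.

HTTP_TIMEOUT = 1            # seconds; shouldn't need 10 seconds
WS_MESSAGE_TIMEOUT = 1      # assumed setting name
API_PROCESSING_TIMEOUT = 1  # assumed setting name
```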

@garethsb
Contributor Author

One simple approach to the problem of processing 'amber' results would be to downgrade each of the JT-NM-ignored warnings from the testing tool to pass-with-info...

Though for other aspects, automated post-processing is probably required anyway?
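
A minimal sketch of that downgrade, applying the same exceptions as the rule above; whether it runs inside the testing tool or over the exported JSON, the mapping itself is the same (field names and state strings are assumptions, as before):

```python
# Sketch: downgrade a JT-NM-ignored 'Warning' result to a pass, keeping the
# original warning text as info. JTNM_IGNORED_WARNINGS holds the affected
# test case names, to be filled in from the test plan.

JTNM_IGNORED_WARNINGS = set()  # test cases whose warnings JT-NM ignores


def downgrade(result):
    """Rewrite an ignorable 'Warning' result in place to pass-with-info."""
    ignorable = (result["name"] in JTNM_IGNORED_WARNINGS
                 or "charset" in result.get("detail", "").lower())
    if result["state"] == "Warning" and ignorable:
        result["state"] = "Pass"
        result["detail"] = "Info (was Warning): " + result.get("detail", "")
    return result
```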
