
Add capability to run test services on different OCP than Quarkus application #844

Open · mjurc opened this issue Jul 28, 2023 · 5 comments
@mjurc (Member)

mjurc commented Jul 28, 2023

Add the capability to run test services on a different OCP cluster than the tested Quarkus application. This is to allow, for example, running the tested Quarkus version on a different platform than the OCP cluster hosting the test services.

Required steps:

  1. Add a configuration system property, parsed by the quarkus-test-openshift JUnit plugin, with credentials and API URLs for the two clusters.
  2. https://github.com/quarkus-qe/quarkus-test-framework/blob/main/quarkus-test-openshift/src/main/java/io/quarkus/test/bootstrap/inject/OpenShiftClient.java needs to be updated to allow authentication to be configured in code rather than taking the auth token from the default oc client location.
  3. https://github.com/quarkus-qe/quarkus-test-framework/blob/main/quarkus-test-openshift/src/main/java/io/quarkus/test/bootstrap/OpenShiftExtensionBootstrap.java#L26 needs to be split into a test application client and a test service client.
  4. The managed resources in https://github.com/quarkus-qe/quarkus-test-framework/tree/main/quarkus-test-openshift/src/main/java/io/quarkus/test/services/containers + https://github.com/quarkus-qe/quarkus-test-framework/tree/main/quarkus-test-openshift/src/main/java/io/quarkus/test/services/operator need to use the test service client.
  5. https://github.com/quarkus-qe/quarkus-test-framework/tree/main/quarkus-test-openshift/src/main/java/io/quarkus/test/services/quarkus needs to use the test application client.
  6. In https://github.com/quarkus-qe/quarkus-test-framework/tree/main/quarkus-test-openshift/src/main/resources the OpenShift build templates need to expose routes so that the test services are reachable outside of the test service cluster, from the test application cluster.
  7. The deployment templates in quarkus-test-service-* also need to be updated to expose routes, for the same reason.
  8. Tests in the Quarkus test suite that customize test services with an injected client need to be reviewed to make sure they use the correct client (if they modify a test service, they must use the client for that cluster, and vice versa).
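The two-cluster configuration from step 1 could look roughly like this; the property names below are hypothetical illustrations, not existing framework configuration:

```properties
# Hypothetical system properties for the cluster running the tested Quarkus application.
ts.app.openshift.api-url=https://api.app-cluster.example.com:6443
ts.app.openshift.username=<app-cluster-user>
ts.app.openshift.password=<app-cluster-password>

# Hypothetical system properties for the cluster running the test services;
# if unset, the framework could fall back to the application-cluster credentials.
ts.services.openshift.api-url=https://api.services-cluster.example.com:6443
ts.services.openshift.username=<services-cluster-user>
ts.services.openshift.password=<services-cluster-password>
```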
@rsvoboda (Member)

rsvoboda commented Jul 31, 2023

Can we have a time estimate for the individual items, plus an analysis of the necessary changes in https://github.com/quarkus-qe/quarkus-test-suite ?

Will the change be backwards compatible and transparent for users of a single cluster?

@michalvavrik any thoughts on this analysis?

@michalvavrik (Member)

> @michalvavrik any thoughts on this analysis?

Many, but sorry, I don't have time. I need to finish other stuff right now; I'll comment on this later this week.

@michalvavrik (Member)

michalvavrik commented Aug 1, 2023

I glanced at the description; it seems like a whole lot of changes, and if I understand you right (plus a few assumptions):

  • we only need this new capability for containers (and you are adding operators too, though that is not the primary goal at the moment)
  • ideally containers are identifiable by their images, which ideally are not hardcoded (e.g. ${postgresql.latest.image} => postgresql.latest.image)
  • the ARM profile can list the images that need to run on the non-ARM cluster as a system property
  • io.quarkus.test.services.containers.OpenShiftContainerManagedResource#client (not that final property, just the client used) will be selected based on the cluster the container should run in; i.e. if and only if an image is listed for the separate cluster, use the other client in OpenShiftContainerManagedResource. You could even pre-create it and put it into the ServiceContext as a proxy, so that it is only initialized when there is an actual test and so that we always use the correct one.
  • you are going to have an OpenShift client qualifier that determines which client is injected, with the app client as the default
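The proxy idea in the fourth bullet could be sketched as follows. This is a minimal, self-contained illustration; `OcpClient`, `LazyClient`, and `ClientSelection` are hypothetical stand-ins, not actual framework types:

```java
import java.util.Set;
import java.util.function.Supplier;

// Sketch of lazy, per-cluster client selection: the client is only created
// on first use, and the service-cluster client is used if and only if the
// image is listed for the separate cluster.
public class ClientSelection {

    interface OcpClient { String apiUrl(); }

    static final class LazyClient implements Supplier<OcpClient> {
        private final Supplier<OcpClient> factory;
        private OcpClient client; // created on first use only

        LazyClient(Supplier<OcpClient> factory) { this.factory = factory; }

        @Override
        public synchronized OcpClient get() {
            if (client == null) {
                client = factory.get();
            }
            return client;
        }
    }

    private final LazyClient appClient;
    private final LazyClient serviceClient;
    private final Set<String> separateClusterImages;

    ClientSelection(Supplier<OcpClient> appFactory, Supplier<OcpClient> serviceFactory,
                    Set<String> separateClusterImages) {
        this.appClient = new LazyClient(appFactory);
        this.serviceClient = new LazyClient(serviceFactory);
        this.separateClusterImages = separateClusterImages;
    }

    // Pick the client based on whether the image is listed for the separate cluster.
    OcpClient clientFor(String image) {
        return separateClusterImages.contains(image) ? serviceClient.get() : appClient.get();
    }
}
```

The proxy could live in the ServiceContext, so existing injection points stay unchanged and only the resolution logic is new.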

> https://github.com/quarkus-qe/quarkus-test-framework/tree/main/quarkus-test-openshift/src/main/java/io/quarkus/test/services/containers + https://github.com/quarkus-qe/quarkus-test-framework/tree/main/quarkus-test-openshift/src/main/java/io/quarkus/test/services/operator managed resources need to use test service client

If I read it correctly, you basically have a choice: either run all the services on the different cluster, or none? I presume it would be complicated for containers to communicate with each other when on different clusters, since we would need to adjust routes, URLs, etc.?

> 1. Add configuration system property to be parsed by the quarkus-test-openshift JUnit plugin with credentials and API url for the two clusters.

At the least, this feature should only be activated for the test classes that really need it. It must not be enabled for all tests, so that we actually test on ARM what we can. That is why I suggested enabling it based on a list of non-compatible images rather than a single flag.

> https://github.com/quarkus-qe/quarkus-test-framework/blob/main/quarkus-test-openshift/src/main/java/io/quarkus/test/bootstrap/inject/OpenShiftClient.java needs to be updated to allow to have authentication configured by code rather than taking the auth token from default oc client location

Plus, I can't imagine it will ever work if we use the oc tool directly. Having two clusters (say, one in OpenStack and one in AWS [ARM]) will only work if we rely on the fabric8 client. But that is a big refactoring. My point: if we use the oc tool, as we do in many cases, you never know where you are logged in.

@mjurc (Member, Author)

mjurc commented Aug 6, 2023

I'll respond inline:

> I glanced at the description it seems like a whole lot of changes, and if I get it you right (+ few assumptions):
>
> * we only need this new capability for containers (and you are adding operators too, thought it is not primary goal ATM)

This is by design. It's kind of hard for me to figure out how to sanely do operators; for example, in the case of serverless, the Quarkus application needs to be on the same cluster as the operator, since it is the operand. For operators that 'only' provide services, it could be done too. If we made a change to operators, we would need to be aware of it in tests.

> * ideally container images are identifiable by their images that ideally are not hardcoded (e.g. `${postgresql.latest.image}` => `postgresql.latest.image`)
>
> * ARM profile can list images that needs to run on non-ARM cluster as system property

I would actually like to run all the test services on the different cluster whenever a different testing cluster is specified.

> * `io.quarkus.test.services.containers.OpenShiftContainerManagedResource#client` (not that final property, just client used) will be selected based on the cluster it should be in - AKA: if and only if you are listed for separate cluster, use different client in `OpenShiftContainerManagedResource`. You can even pre-create it and put it in the `ServiceContext` as proxy so that it is only initialized when there is actual test and so that we always use correct one.

Yes, I think this is correct.

> * you are going to have OpenShift client qualifier that determines which client is injected and app client is the default one

Not sure what this means. Ideally, the workflow in my mind was this: if the properties (API endpoint, username, password) for the service/application cluster are not set, fall back on the default Kubernetes auth and use it for both the service and application clients, which would largely ensure compatible behaviour in most cases.

We would actually have to inject two clients, though: the 'service' cluster one and the 'application' cluster one. We need to do this because we have tests that configure the test services (for example, the ones configuring Streams/Infinispan with SSL/SASL).
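The fallback described above could be sketched like this. The property names and the `ClusterAuth` record are assumptions for illustration, not existing framework configuration:

```java
// Sketch of the auth fallback: if the (hypothetical) service-cluster
// properties are not set, reuse the application-cluster auth so that
// single-cluster users see no behaviour change.
public class ClusterConfigResolver {

    record ClusterAuth(String apiUrl, String username, String password) { }

    static ClusterAuth resolveServiceCluster(ClusterAuth appCluster) {
        String apiUrl = System.getProperty("ts.services.openshift.api-url");
        if (apiUrl == null || apiUrl.isBlank()) {
            // Fall back: both injected clients end up talking to the same cluster.
            return appCluster;
        }
        return new ClusterAuth(apiUrl,
                System.getProperty("ts.services.openshift.username"),
                System.getProperty("ts.services.openshift.password"));
    }
}
```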

> https://github.com/quarkus-qe/quarkus-test-framework/tree/main/quarkus-test-openshift/src/main/java/io/quarkus/test/services/containers + https://github.com/quarkus-qe/quarkus-test-framework/tree/main/quarkus-test-openshift/src/main/java/io/quarkus/test/services/operator managed resources need to use test service client
>
> If I read it correctly, you basically have a choice, either run all the services on different cluster or none? I presume it would be complicated for containers to communicate between each other when on different clusters for we would need to adjust routes, urls, etc.?

Correct. I don't think this change is the hardest, to be honest; all the OpenShift managed resources are deployed using parameterized templates. IIRC the logic for getting URLs/ports is done around https://github.com/quarkus-qe/quarkus-test-framework/blob/main/quarkus-test-openshift/src/main/java/io/quarkus/test/services/containers/OpenShiftContainerManagedResource.java#L70 . Those would need to be updated to return routes rather than internal OpenShift service URLs.
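The URL resolution could branch roughly like this; a minimal sketch with illustrative names, not the actual managed resource code:

```java
// Sketch: when the test service and the application live on different
// clusters, only an exposed Route is reachable, so the framework has to
// hand out the route host instead of the internal service DNS name.
public class ServiceUrlResolver {

    static String accessibleUrl(String serviceName, String namespace,
                                String routeHost, boolean crossCluster) {
        if (crossCluster) {
            // Only a Route (or similar ingress) is reachable from the other cluster.
            return "https://" + routeHost;
        }
        // Same cluster: the internal service DNS name keeps working.
        return "http://" + serviceName + "." + namespace + ".svc.cluster.local";
    }
}
```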

> 1. Add configuration system property to be parsed by the quarkus-test-openshift JUnit plugin with credentials and API url for the two clusters.
>
> At least this feature should only be activated for test classes that really needs it. It must not be enabled for all the tests so that we actually test on ARM what we can. That is why I suggested to only enable it based on list of non-compatible images rather than one flag.

This is a possible way, but the ARM profile is currently test-suite only. I wanted to do this feature in the most general and useful way possible. The codebase is already complicated as it is, and I think an additional fork in the processing that is not general in configuration would make it even harder to maintain.

> https://github.com/quarkus-qe/quarkus-test-framework/blob/main/quarkus-test-openshift/src/main/java/io/quarkus/test/bootstrap/inject/OpenShiftClient.java needs to be updated to allow to have authentication configured by code rather than taking the auth token from default oc client location
>
> Plus, I can't imagine it will ever work if we use oc tool directly. Having 2 clusters (let say one in OpenStack and one in AWS [arm]) will only work if we rely on fabric8 client. But that is big refactoring. My point - if we use OC tool like we do in many cases - you never know where you are logged in?

This is one thing I didn't really realise: just how much the oc CLI is used directly. I think it will be necessary to change the usages we have right now to fabric8. On the other hand, the vast majority of usages we have is oc apply -f <filepath>, and this is also provided as a method of the fabric8 client, with the upside of getting actual object-oriented Java return values and exceptions as outcomes instead of a bunch of logs.

Also, I think this will be the harder part.
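On the "you never know where you are logged in" point: if shelling out to oc were kept in the short term, every invocation would have to pin the target cluster explicitly (oc, like kubectl, accepts --kubeconfig and --context flags). A minimal sketch of building such a pinned command, with a hypothetical helper class:

```java
import java.util.List;

// Builds an `oc apply -f` command pinned to a specific cluster via an
// explicit kubeconfig, instead of relying on the ambient login state
// of the default oc client location.
public class OcCommand {

    static List<String> applyCommand(String kubeconfigPath, String templatePath) {
        return List.of("oc", "--kubeconfig", kubeconfigPath, "apply", "-f", templatePath);
    }
}
```

Moving to the fabric8 client would remove the need for this kind of pinning entirely, since each client instance carries its own connection configuration.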

@mjurc mjurc self-assigned this Aug 31, 2023
@michalvavrik (Member)

This issue seems like a wonderful opportunity to also address the oc warning: `Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+`.
