{WIP} Create a simple way to define systems to test with a module #69
👎 for storing image information in metadata.json. It's not modulesynced, so a maintainer with multiple modules (thinking selfishly: theforeman, voxpupuli) needs a lot of effort to keep this synced. Even if it were modulesynced, I put a lot of effort into moving this to beaker-hostgenerator so there are sane defaults for all backends I care about.

To me this info is very much tied to the environment. Your CI might use different options than your local environment. For example, there are no public RHEL images available, but in a downstream build environment you could have them. beaker-hostgenerator allowed you to specify this on the command line. Another example is testing with betas: as a Red Hat employee I have access to early builds, so I can verify modules continue to work.

There are also other splits I can think of, say I have n workers and each worker gets total/n combos. In the case of Voxpupuli we might want Puppet 5, Puppet 6 and Puppet nightly workers. To avoid overloading them, you could think of a concurrency limit as well. This all does carry the risk of turning into a job scheduler, though.
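For illustration, the total/n worker split mentioned above could be sketched like this. This is a minimal Ruby sketch; the names and the helper are invented for this example and are not litmus's API:

```ruby
# Hypothetical worker split: the full test matrix is every Puppet
# version against every image, and worker i of n gets roughly total/n
# combos. All names here are illustrative.
puppet_versions = ['puppet5', 'puppet6', 'puppet-nightly']
images = ['centos-7', 'debian-10', 'ubuntu-18.04']

# Full matrix (9 combos with the lists above).
combos = puppet_versions.product(images)

# Worker worker_index (0-based) out of worker_count gets one slice.
def combos_for_worker(combos, worker_index, worker_count)
  slice_size = (combos.length.to_f / worker_count).ceil
  combos.each_slice(slice_size).to_a[worker_index] || []
end

combos_for_worker(combos, 0, 3).each { |puppet, image| puts "#{puppet} on #{image}" }
```

A concurrency limit would then cap how many of a worker's combos run at once, which is where this starts to look like a job scheduler.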
@ekohl thanks for the feedback 👍. As I was typing out the nested structure, it felt clunky.
rspec-puppet-facts uses the tuple

To make this concrete, I think we need a "source" tuple and enrich it with the info to provision it. This means we'll implement something like

IMHO we need to implement a sane baseline (similar to beaker-hostgenerator) that generates the correct config for most users. I think this can live within litmus itself. Then we need an override layer per module. I like environment variables because they're very flexible and you can easily integrate this in your CI. msync/PDK can sync this into

The Puppet version can be derived from the installed Puppet gem by taking its major version. It's already common to have a PUPPET_VERSION in your Gemfile. In my experience you either care about a particular version (adding Puppet n+1 support) anyway, or it all has to work and you push it to CI. Again, a specific env var can override this, but we'd have a sane default.

A side note: I think it's also possible to not set up any repo in case you want to use the distribution version. Again, thinking with my Red Hat on, I can imagine wanting to test with our downstream channels. Not a high priority, but something to consider.
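The "sane default plus env-var override" layer described above could be sketched as follows. PUPPET_VERSION is the variable mentioned in the comment; the helper name and the parsing are illustrative assumptions, not existing litmus code:

```ruby
# Hypothetical sketch: derive the Puppet major version from the
# installed puppet gem, letting a PUPPET_VERSION env var (as set in a
# Gemfile or CI job) override the default.
def puppet_major_version
  override = ENV['PUPPET_VERSION']
  # An override like '6', '6.19.1' or '~> 6.0' wins over the installed gem.
  return override[/\d+/] if override && override[/\d+/]

  require 'puppet/version'
  Puppet.version[/\d+/]
rescue LoadError
  nil # no puppet gem installed and no override set
end
```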
Pushing them down into a commonly formatted directory structure, like we have in our suites, made it easy to maintain for complex tests. For comparison, this is our current method for multi-version testing: https://github.com/simp/pupmod-simp-ntpd/blob/master/.gitlab-ci.yml
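The linked .gitlab-ci.yml approach boils down to one CI job per Puppet-version/OS combination. A rough sketch of that shape, with job names, versions, and the rake task all invented for illustration rather than copied from the SIMP repo:

```yaml
# Hypothetical sketch of per-combo CI jobs in the spirit of the linked
# .gitlab-ci.yml; every identifier below is illustrative.
pup6-centos7:
  variables:
    PUPPET_VERSION: '~> 6.0'
  script:
    - bundle install
    - bundle exec rake 'acceptance:suite[default,centos7]'  # illustrative task name

pup5-centos7:
  variables:
    PUPPET_VERSION: '~> 5.5'
  script:
    - bundle install
    - bundle exec rake 'acceptance:suite[default,centos7]'
```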
From an internal discussion of similar topics:
[Internal link for reference: https://groups.google.com/a/puppet.com/d/msgid/ci-next/CA%2Bu97umqk9bXvaWUpJ-jSb32WGf1QpXZ-wSrr%3DFzgWHkrfF5eQ%40mail.gmail.com?utm_medium=email&utm_source=footer]

I still think the idea has merit in that it disentangles the shape of the nodeset, which needs to be defined by the test writer, from the actual machines being used for the various roles in the nodes, which need to be defined by whoever is running the tests. Apologies for parachuting in here; please ignore if not relevant.
Inspired by docker and the

While I wrote this, I think @DavidS wrote a very similar thing in different wording.
Hi all, I have made a first attempt at provisioning larger setups: #78. Please note it does not install Puppet or do anything else complex. This is the first step towards different deployment scenarios. The next step after this would be to write a deployment.yaml that can call provision. This separates deployment from provisioning.
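Under the separation described above, the two layers might look something like the following sketch. The file layout and every key name here are hypothetical, not the format #78 actually implements:

```yaml
# Hypothetical deployment.yaml: provisioning says *what machines* exist,
# deployment says *what happens* on them. All keys are illustrative.
provision:
  - name: centos-7
    provisioner: docker
    image: centos:7
  - name: debian-10
    provisioner: vagrant
    box: debian/buster64

deployment:
  - install: puppet6
    targets: [centos-7, debian-10]
```

The point of the split is that the deployment step can call the provision step, but a CI system could also swap in its own provisioning while reusing the same deployment.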
This is stale; closing.
As a module developer, I want to be able to define which OS images I use for running acceptance tests, whether for PR testing, release testing, or a simple default test setup. It should allow different OSes to be tested depending on which Puppet version is used or which CI system is being used. I should be able to invoke these testing groups from the command line in a simple fashion.
e.g.
These hashes / testing groups could be stored in metadata.json, e.g.
It is debatable how to split this structure. The above is a nested structure; alternatively, it could be flattened.
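The concrete examples were lost from this thread, but a hypothetical nested shape along the lines described might look like this. The `test_groups` key and all values are invented for illustration, not a format litmus defines:

```json
{
  "test_groups": {
    "pr_testing": {
      "puppet5": ["centos-7", "ubuntu-18.04"],
      "puppet6": ["centos-7", "debian-10", "ubuntu-18.04"]
    },
    "release_testing": {
      "puppet6": ["centos-7", "debian-10", "ubuntu-18.04", "sles-15"]
    }
  }
}
```

A flat alternative would instead list one entry per group/Puppet-version/OS combination, trading nesting depth for repetition.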