
{WIP} Create a simple way to define systems to test with a module #69

Closed
tphoney opened this issue Mar 12, 2019 · 8 comments

tphoney (Contributor) commented Mar 12, 2019

As a module developer, I want to be able to define which OS images are used for running acceptance tests, with separate groups for PR testing, release testing, and a simple default test setup. It should allow different OSes to be tested depending on which Puppet version is used or which CI system is running the tests. I should be able to invoke these testing groups from the command line in a simple fashion, e.g.

bundle exec rake 'litmus:provision_from_metadata[pr]'
or 
bundle exec rake 'litmus:provision_from_metadata[release]'
or 
bundle exec rake 'litmus:provision_from_metadata[default]'

These hashes / testing groups could be stored in metadata.json, e.g.

{
  "testing": {
    "pr": {
      "provisioner": "docker",
      "puppet5_images": [
        "centos7",
        "debian8"
      ],
      "puppet6_images": [
        "debian9"
      ]
    },
    "release": {
      "provisioner": "vagrant",
      "puppet5_images": [
        "special_image_centos7",
        "special_image_debian8"
      ],
      "puppet6_images": [
        "special_image_debian9",
        "special_image_redhat6"
      ]
    },
    "default": {
      "provisioner": "docker",
      "list": [
        "centos7"
      ]
    }
  }
}

It is debatable how to split this structure: the above is one nested layout, but a flatter alternative could also work.


tphoney changed the title from "Create a simple way to define systems to test with a module" to "{WIP} Create a simple way to define systems to test with a module" on Mar 12, 2019
ekohl (Contributor) commented Mar 12, 2019

👎 for storing image information in metadata.json. It is not modulesynced, so a maintainer with multiple modules (thinking selfishly: theforeman, voxpupuli) needs a lot of effort to keep this in sync. Even if it were modulesynced, I put a lot of effort into moving this to beaker-hostgenerator so that there are sane defaults for all the backends I care about.

To me this info is very much tied to the env. Your CI might use different options than your local environment. For example, there are no public RHEL images available but in a downstream build environment you could have these. beaker-hostgenerator allowed you to specify this on the command line. Another example is testing with betas. As a Red Hat employee I have access to early builds so I can verify modules continue to work.

There are also other splits I can think of. Saying I have n workers and each worker gets total/n combos.

In the case of Voxpupuli we might want Puppet 5, Puppet 6 and Puppet Nightly workers. To not overload them, you can think of a concurrency limit as well. This all does have the risk of trying to build a job scheduler though.

tphoney (Contributor, Author) commented Mar 13, 2019

@ekohl thanks for the feedback 👍. As I was typing out the nested structure, it felt clunky.
So we want:

  • a way to modulesync / pdksync the images used for testing
  • no image information stored in metadata.json
  • support for multiple versions of Puppet
  • support for concurrency

ekohl (Contributor) commented Mar 13, 2019

rspec-puppet-facts uses the tuple ${operatingsystem}-${operatingsystemmajrelease}-${architecture} as a base. Then in every test you inherit this and modify where needed. I think we should copy that as the source input.

To make this concrete I think we need a "source" tuple and enrich it with the info to provision it. This means we'll implement something like get_provision_info(tuple, puppet, hypervisor).
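A minimal Ruby sketch of what such a helper could look like. Everything here is invented for illustration — the function name comes from the comment above, but the `BASE_IMAGES` table, its contents, and the return shape are assumptions, not the Litmus API:

```ruby
# Hypothetical sketch: enrich an rspec-puppet-facts style tuple
# (${operatingsystem}-${operatingsystemmajrelease}-${architecture})
# with the information needed to provision it.
BASE_IMAGES = {
  'centos-7-x86_64' => { docker: 'centos:7', vagrant: 'centos/7' },
  'debian-9-x86_64' => { docker: 'debian:9', vagrant: 'debian/stretch64' },
}.freeze

def get_provision_info(tuple, puppet_major, hypervisor)
  # Look up the sane baseline for this tuple; fail loudly on unknown tuples.
  images = BASE_IMAGES.fetch(tuple.downcase) do
    raise ArgumentError, "no default image for #{tuple}"
  end
  {
    provisioner: hypervisor.to_s,
    image:       images.fetch(hypervisor),
    puppet:      puppet_major,
  }
end
```

For example, `get_provision_info('CentOS-7-x86_64', 5, :docker)` would resolve the tuple to the `centos:7` docker image while carrying the requested Puppet major version along.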

IMHO we need to implement a sane base line (similar to beaker-hostgenerator) that generates the correct config for most users. I think this can live within litmus itself.

Then we need an override layer per module. I like environment variables because they're very flexible and you can easily integrate them in your CI. msync/PDK can sync this into .travis.yml, and power users can change it even further on individual runs. Maybe config files can be considered, but I'm not sure how they would look. Perhaps we can consider that something to implement once we find a need for it?
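The override layer described above could be sketched like this. The variable names (`LITMUS_PROVISIONER`, `LITMUS_IMAGES`) and the defaults are made up for illustration; they are not existing Litmus settings:

```ruby
# Sketch of an environment-variable override layer: sane defaults that
# individual CI jobs or local runs can override. Names are hypothetical.
DEFAULTS = { provisioner: 'docker', images: %w[centos7 debian9] }.freeze

def resolve_config(env = ENV)
  {
    provisioner: env.fetch('LITMUS_PROVISIONER', DEFAULTS[:provisioner]),
    # Comma-separated list in the env var, e.g. LITMUS_IMAGES=centos7,debian8
    images: env['LITMUS_IMAGES'] ? env['LITMUS_IMAGES'].split(',') : DEFAULTS[:images],
  }
end
```

A .travis.yml synced by msync/PDK would then just export these variables per build-matrix entry, and a power user could prefix a single run with `LITMUS_PROVISIONER=vagrant`.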

The Puppet version can be derived from the installed Puppet gem by taking its major version. It's already common to have a PUPPET_VERSION in your Gemfile. In my experience you either care about a particular version (adding Puppet n+1 support) anyway or it all has to work and you push it to CI. Again, a specific env var can override this, but we'd have a sane default.
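A small sketch of that derivation, assuming the env-var-first, installed-gem-second precedence described above (the exact variable name and fallback behaviour are assumptions):

```ruby
# Sketch: derive the Puppet major version, preferring an explicit
# PUPPET_VERSION env var (a common convention in module Gemfiles) and
# falling back to the major version of the loaded puppet gem, if any.
def puppet_major_version(env = ENV)
  override = env['PUPPET_VERSION']
  return Integer(override.split('.').first) if override

  spec = Gem.loaded_specs['puppet']
  spec ? spec.version.segments.first : nil
end
```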

A side note: I think it's also possible to not set up any repo in case you want to use the distribution version. Again, thinking with my Red Hat on I can imagine wanting to test with our downstream channels. Not a high priority, but something to consider.

trevor-vaughan commented

beaker-hostgenerator never met our needs for test isolation and complexity. It was fine for single host tests but multi-host tests became a nightmare with files spread out all over the place.

Pushing them down into a commonly formatted directory structure like we have in our suites made it easy to maintain for complex tests.

This is our current method for multi-version testing for comparison https://github.com/simp/pupmod-simp-ntpd/blob/master/.gitlab-ci.yml

DavidS (Contributor) commented Mar 13, 2019

From an internal discussion of similar topics:

grouping things into three buckets:

  1. There's the nodeset "shape" which defines the nodes and roles, which belongs in the module.
  2. There's the mapping of nodes to platforms, which depends on [the particular test being run]
  3. There's the mapping of nodes for specific platforms to hosts provided by some "engine".

[internal link for reference https://groups.google.com/a/puppet.com/d/msgid/ci-next/CA%2Bu97umqk9bXvaWUpJ-jSb32WGf1QpXZ-wSrr%3DFzgWHkrfF5eQ%40mail.gmail.com?utm_medium=email&utm_source=footer ]

I still think the idea has merit in that it disentangles the shape of the nodeset - which needs to be defined by the test writer - and the actual machines being used for the various roles in the nodes - which needs to be defined by who is running the tests.

Apologies for parachuting in here. Please ignore if not relevant.

ekohl (Contributor) commented Mar 13, 2019

Inspired by docker and the box we have in forklift we could inherit. https://github.com/theforeman/forklift/blob/master/vagrant/boxes.d/01-builtin.yaml is an example, but you can also use box: centos7-provision-nightly to inherit from that. In a similar sense litmus could have a node that inherits from centos-7-x86_64 which the generator knows defaults for.
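A sketch of how that inheritance could resolve, in the style of forklift's boxes.d: a box names a parent via a `box` key and the definitions are merged, with the child's keys winning. The box data here is illustrative, not forklift's actual content:

```ruby
# Hypothetical box definitions: a nightly box inherits from the
# centos-7-x86_64 baseline and only overrides what differs.
BOXES = {
  'centos-7-x86_64'           => { 'image' => 'centos:7', 'memory' => 1024 },
  'centos7-provision-nightly' => { 'box' => 'centos-7-x86_64', 'memory' => 2048 },
}.freeze

def resolve_box(name, boxes = BOXES)
  box    = boxes.fetch(name)
  # Recursively resolve the parent, then let the child's keys override it.
  parent = box['box'] ? resolve_box(box['box'], boxes) : {}
  parent.merge(box.reject { |k, _| k == 'box' })
end
```

Resolving `centos7-provision-nightly` would yield the parent's image with the child's larger memory setting, so the generator only needs defaults for the base tuples.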

While I wrote this I think @DavidS wrote a very similar thing in different wording.

tphoney (Contributor, Author) commented Mar 26, 2019

Hi all, I have made a first attempt at provisioning larger setups: #78.

Please note it does not install puppet or do anything else complex. This is the first step towards different deployment scenarios.

The next step after this would be to write a deployment.yaml that can call provision; this separates deployment from provisioning.

  • It also simplifies provisioning systems in Travis / other CI systems.
  • It allows for PDK/modulesync.
  • It does not pollute metadata.json.
  • It opens the possibility of generating provision.yaml from metadata.json.
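The last point could be sketched as follows. The `operatingsystem_support` structure is standard Puppet module metadata, but the provision.yaml layout and the OS-to-image mapping here are assumptions for illustration, not the Litmus format:

```ruby
# Sketch: generate a provision.yaml from a module's metadata.json by
# mapping its supported OS/release pairs to container images.
require 'json'
require 'yaml'

# Hypothetical mapping from metadata.json OS facts to image names.
OS_TO_IMAGE = {
  ['CentOS', '7'] => 'centos7',
  ['Debian', '9'] => 'debian9',
}.freeze

def provision_yaml(metadata_json)
  metadata = JSON.parse(metadata_json)
  images = metadata.fetch('operatingsystem_support', []).flat_map do |os|
    os['operatingsystemrelease'].map { |rel| OS_TO_IMAGE[[os['operatingsystem'], rel]] }
  end.compact
  { 'default' => { 'provisioner' => 'docker', 'images' => images } }.to_yaml
end
```

This keeps metadata.json untouched while still deriving a usable default testing group from it.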

tphoney (Contributor, Author) commented Jun 6, 2019

This is stale. Closing.

tphoney closed this as completed Jun 6, 2019