
(feat) provisioning lists of images from a file #78

Closed
wants to merge 2 commits into from

Conversation

tphoney
Contributor

@tphoney tphoney commented Mar 26, 2019

No description provided.

@tphoney
Contributor Author

tphoney commented Mar 26, 2019

This is related to this commit in MOTD puppetlabs/puppetlabs-motd@6e66d71

Having a simple file for common provisioning steps will simplify AppVeyor and CI templates.
It will also enable more complex deployments, where different sets of provisioned systems can be called at different stages.

The command is called like so:
bundle exec rake 'litmus:provision_list[travis_deb]'

The provision file looks like this:

---
default:
  provisioner: docker 
  images: ['centos:7']
travis_deb:
  provisioner: docker 
  images: ['debian:8', 'debian:9', 'ubuntu:14.04', 'ubuntu:16.04', 'ubuntu:18.04']
travis_el:
  provisioner: docker 
  images: ['centos:6', 'centos:7', 'oraclelinux:6', 'oraclelinux:7', 'scientificlinux/sl:6', 'scientificlinux/sl:7']

@trevor-vaughan

Took a quick look and have the following comments:

  • There should be a way to note that you're provisioning in parallel vs sequentially. For many tests, I just care that it works on each one but I certainly don't want to spin up all of the hosts at the same time. For others, I need multiple servers of different types to be active simultaneously.
    • As an example, we found that OpenLDAP connections worked extremely differently between EL6 and EL7 systems and even further differently between SSSD and pam_ldap subsystem connections. This meant that we needed to be able to test all combinations of those entities.
  • I would probably make this a full and extensible Hash for future growth
  • I would like the ability to include files instead of having to put all of the complexity into a single file. Having an includedir directive would be great.

Contributor

@ekohl ekohl left a comment


It'd be good to add some docs, like a .yaml.example file or in the README.

@@ -83,6 +83,28 @@ def run_local_command(command)
  end
end

desc "provision list of machines from provision.yaml file. 'bundle exec rake 'litmus:provision_list[default]'"
task :provision_list, [:key] do |_task, args|
  provision_hash = YAML.load_file('./provision.yaml')

Some bikeshedding: perhaps litmus.yaml? Also feels like this should be configurable - perhaps an env var with a default?
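
A minimal sketch of that idea, assuming a hypothetical LITMUS_PROVISION_FILE environment variable (not part of this PR):

# Hypothetical: let the provision file path be overridden, falling back to the current default
provision_file = ENV.fetch('LITMUS_PROVISION_FILE', './provision.yaml')
provision_hash = YAML.load_file(provision_file)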

desc "provision list of machines from provision.yaml file. 'bundle exec rake 'litmus:provision_list[default]'"
task :provision_list, [:key] do |_task, args|
provision_hash = YAML.load_file('./provision.yaml')
provisioner = provision_hash['default']['provisioner']

Some nice error if default doesn't exist would be nice. Same below for the key.
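
A sketch of the kind of guards that could produce those nicer errors; the messages and placement here are only illustrative:

# Illustrative guards: fail with a readable message instead of a NoMethodError on a missing key
raise "no 'default' entry found in ./provision.yaml" unless provision_hash.key?('default')
raise "no '#{args[:key]}' entry found in ./provision.yaml" unless provision_hash.key?(args[:key])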

  include PuppetLitmus
  Rake::Task['spec_prep'].invoke
  config_data = { 'modulepath' => File.join(Dir.pwd, 'spec', 'fixtures', 'modules') }
  raise "the provision module was not found in #{config_data['modulepath']}, please amend the .fixtures.yml file" unless File.directory?(File.join(config_data['modulepath'], 'provision'))

This is repeated in various tasks. It'd be good to at least add a helper for this.
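
One possible shape for such a helper; the method name fixture_modulepath_config is hypothetical:

# Hypothetical helper collecting the repeated modulepath lookup and provision-module check
def fixture_modulepath_config
  config_data = { 'modulepath' => File.join(Dir.pwd, 'spec', 'fixtures', 'modules') }
  unless File.directory?(File.join(config_data['modulepath'], 'provision'))
    raise "the provision module was not found in #{config_data['modulepath']}, please amend the .fixtures.yml file"
  end
  config_data
end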

  results = run_task("provision::#{provisioner}", 'localhost', params, config: config_data, inventory: nil)
  results.each do |result|
    if result['status'] != 'success'
      puts "Failed on #{result['node']}\n#{result}"

Should this be on stderr?
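
For reference, Ruby's Kernel#warn writes to stderr, so this could simply become:

warn "Failed on #{result['node']}\n#{result}" # warn prints to $stderr rather than $stdout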

@ekohl
Contributor

ekohl commented Mar 26, 2019

From the Foreman perspective we have various tools where it's important to have a FQDN. This means we always set a hostname to host.example.com. It'd be nice if you had the option to say server.example.com is centos-7-x86_64, client-c7.example.com is centos-7-x86_64 and client-d9.example.com is debian-9-x86_64.

@trevor-vaughan

Completely agree with @ekohl . Setting the hostname to a well-defined value is a must.

@tphoney
Contributor Author

tphoney commented Mar 26, 2019

Hi @ekohl @trevor-vaughan, many, many thanks for the feedback. I want to keep the conversation going and still keep producing useful building blocks for Litmus.

  1. Yeah, the Ruby is shaky / duplicated / has no default handling. This can be fixed later (or sooner).
  2. https://travis-ci.org/puppetlabs/puppetlabs-motd/jobs/511528373 shows us running acceptance tests against multiple OSes in Travis. This simplifies Travis and templating, which I don't think either of you has an issue with.
  3. Calling the file provision.yaml makes sense, because there is a separate issue of deployment, where we will call the file deployment.yaml 🤷‍♂️ maybe.
  4. So, from your comments, the concept of being able to provision a list of systems is sound.

Where I think you do have an issue is deployment, which is a separate problem but pairs tightly with provisioning. Deployment, in my head, is where we start to impart shape and definition on the machines that have been provisioned.
As a concrete example of what I mean, taking @ekohl's real-world scenario:

I want to provision a Debian system, set a hostname, then install Puppet agent 5.
I want to provision 4 Ubuntu systems, then install Puppet agent 6.

This would look like this:

---
ekohl_deb_deployment:
  provision_list: debian_list
  deployment_steps:
    - task:
        run_command: 'hostname www.jim.com'
    - task:
        puppet_agent: 'puppet5'
ekohl_ubuntu_deployment:
  provision_list: ubuntu_list
  deployment_steps:
    - task:
        puppet_agent: 'puppet6'

The above can also be performed with Bolt tasks or a Bolt plan if a very specific deployment pattern is needed; it is just an example of what could be possible.
What I am trying to get across is that provisioning only brings a system up. Deployment is different.

@trevor-vaughan

@tphoney This makes sense. The deployment scenarios that I see are:

  • A single host
  • A group of hosts that can talk to one another
  • Several groups of hosts that get destroyed between tests
    • Basically, when you need to test combinatorics, you simply can't spin up that many hosts at the same time on a given system, so you need to be able to loop through groups of hosts in different situations. The SIMP project enhanced Beaker to support this as 'scenarios' (not to be confused with Beaker scenarios, which were added later).

In terms of provisioning, why wrap it in something else and not just use Bolt, full stop?

@ekohl
Contributor

ekohl commented Mar 26, 2019

Could you explain the example? Some things that don't make sense to me:

  • You specify the same key twice (ekohl_deb_deployment)
  • You provide a list (provision_list: debian_list) but only set a single hostname.

There are also other reasons why this might cause problems long term. For example, in libvirt I have it set up so that the hostname requested via DHCP automatically gets a DNS record (within a certain domain). This allows me to connect to any guest without messing with hosts files. It also automatically allows connections between VMs.

That said, I haven't done any cross-host testing with Beaker, so it may be overkill.

What I expect from a tool is:

  • A clean machine with a certain base image
  • I can configure a hostname
  • I can configure the "hardware" (more important with VMs) - memory, CPUs
  • I can get some console to it (SSH or direct with docker)

If you want to allow testing complex applications, then network connectivity is also important. It may be fine to consider this out of scope initially. That means you may want to version the YAML file (version: 1).
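
A lightweight way the rake task could honour such a version key, purely as an illustration:

# Illustrative check: refuse provision files newer than the format this code understands
raise "unsupported provision.yaml version: #{provision_hash['version']}" if provision_hash['version'].to_i > 1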

@trevor-vaughan

The comment from @ekohl just reminded me that you need to support running tests inside different modules at the same time.

This means that the 'unique' names that are used in Docker/libvirt/Virtualbox/whatever must be globally unique so that you don't have tests trying to tear down systems from other tests that are running at the same time.

@tphoney
Contributor Author

tphoney commented Mar 26, 2019

@ekohl I fixed the example, my fault.
Thanks @trevor-vaughan for the comment.

On to what @ekohl said: for complex deployments, we can utilise Bolt tasks / commands and Bolt plans for these kinds of things, or we could use a file format similar to my example.

In terms of unique names, they are handled by the inventory.yaml file; this is how we reference the various systems, i.e. the handle in the inventory is not necessarily the same as the hostname ('alias', I think, is their term for it).

@trevor-vaughan

@tphoney What's the benefit of yet another random format?

  • If you want something standards-based, you could look at TOSCA.
  • Beaker already has a functional host definition format and it would be great to be able to reuse what we already have.
  • Puppet has a DSL that could easily define all of these artifacts (and, in fact, does using various modules)
    • This seems like a great plug for Bolt+Puppet DSL for standing up complex test environments
  • I don't know if you would want to just join forces with the Test Kitchen team and make that do what you need instead of writing the whole thing from scratch (not sure what the background would be there)

@tphoney
Contributor Author

tphoney commented Apr 2, 2019

Closing this PR and moving the features requested for deployment (setting hostname / hardware / multi-machine setup) into #72. If I missed something, please feel free to add it.

@tphoney tphoney closed this Apr 2, 2019