If you’re running this demo code in a cloud-hosted environment, simply select the provider’s pre-configured, cloud-ready image and skip ahead to setting up the demo tooling with Ansible.
Follow these instructions to run a local VM under KVM + Libvirt.
- rhel-baseos-9.0-beta-0-x86_64-kvm.qcow2
- rhel-8.5-x86_64-kvm.qcow2
- Fedora-Cloud-Base-35-1.2.x86_64.qcow2
The process I’m using is to create a working snapshot on top of the base image, then customize that snapshot to set a simple password for the root user. Ideally we should be using cloud-init to inject a valid user and credentials, but this is a quick hack to speed up the demo deployment.
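As a sketch of that cloud-init alternative (the user name and key below are placeholders, not values from this demo), a minimal #cloud-config user-data file would look something like:

```yaml
#cloud-config
# Hypothetical user-data: creates a "demo" user with sudo rights
# and your SSH public key instead of a root password.
users:
  - name: demo
    groups: wheel
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...your-public-key... demo@workstation
```

Recent virt-install releases can pass a file like this directly via `--cloud-init user-data=./user-data.yaml`, avoiding the need to strip cloud-init with virt-customize at all.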
You will need to update the following variables to suit your local KVM environment:

- VM_BRIDGE to match the network you’re using
- VM_BASE to the location of the base qcow2 image we’re working from
- VM_IMAGE to the location you want to use for the working image
The current process uses virt-customize to strip cloud-init out of the base image and set the root user’s initial password to password.
VM_NAME=Fedora35
VM_MEMORY=2048
VM_CPU=2
OS_VARIANT=fedora34   # use fedora34 if your osinfo-db doesn't know fedora35 yet
VM_BRIDGE=virbr1
VM_BASE=/opt/Virtual/ISO/Fedora-Cloud-Base-35-1.2.x86_64.qcow2
VM_IMAGE=/opt/Virtual/images/Fedora-35.qcow2
# Create our working snapshot
qemu-img create -f qcow2 -F qcow2 \
-o backing_file=${VM_BASE} ${VM_IMAGE}
# Enable root password login and remove cloud-init
virt-customize -a ${VM_IMAGE} \
--root-password password:password --uninstall cloud-init \
--run-command 'echo PermitRootLogin yes >> /etc/ssh/sshd_config'
# Then Create the VM
virt-install --name ${VM_NAME} --memory ${VM_MEMORY} --vcpus ${VM_CPU} \
--disk path=${VM_IMAGE},format=qcow2,bus=scsi,discard=unmap \
--os-variant=${OS_VARIANT} \
--controller=type=scsi,model=virtio-scsi \
--connect qemu:///system \
--virt-type kvm \
--noautoconsole \
--import \
--network type=bridge,source=${VM_BRIDGE} --graphics vnc
virsh console ${VM_NAME}
SSH login as root is normally disabled in a Fedora image; the virt-customize --run-command line above re-enables it. If you’re working with a vanilla cloud image on AWS or Azure, you might need to run the following via a remote console:
echo "PermitRootLogin yes" > /etc/ssh/sshd_config.d/01-permitrootlogin.conf
systemctl restart sshd
I also recommend adding an entry in ~/.ssh/config with the correct IP address for your VM, which will simplify running the Ansible playbook. If you’re using KVM, you should see the IP address in the virsh console.
Host fedora35
    Hostname 192.168.124.138
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    User root
You should then enable login via your SSH key so that Ansible can automate the rest of the setup:
ssh-copy-id fedora35
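Once key-based login works, that Host alias can be referenced directly from a minimal Ansible inventory. The file name, group name, and playbook name below are assumptions for illustration, not taken from this demo’s repository:

```ini
# inventory.ini -- hypothetical layout
[demo]
fedora35 ansible_user=root
```

You would then point the playbook run at it with something like `ansible-playbook -i inventory.ini <playbook>.yml`.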
This is effectively the same as the Fedora version:
VM_NAME=RHEL85
VM_MEMORY=2048
VM_CPU=2
OS_VARIANT=rhel8.5
VM_BRIDGE=virbr1
VM_BASE=/opt/Virtual/ISO/rhel-8.5-x86_64-kvm.qcow2
VM_IMAGE=/opt/Virtual/images/RHEL-85.qcow2
qemu-img create -f qcow2 -F qcow2 \
-o backing_file=${VM_BASE} ${VM_IMAGE}
virt-customize -a ${VM_IMAGE} \
--root-password password:password --uninstall cloud-init
# Then Create the VM
virt-install --name ${VM_NAME} --memory ${VM_MEMORY} --vcpus ${VM_CPU} \
--disk path=${VM_IMAGE},format=qcow2,bus=scsi,discard=unmap \
--os-variant=${OS_VARIANT} \
--controller=type=scsi,model=virtio-scsi \
--connect qemu:///system \
--virt-type kvm \
--noautoconsole \
--import \
--network type=bridge,source=${VM_BRIDGE} --graphics vnc
virsh console ${VM_NAME}
One big difference with RHEL 8.x is that we don’t need to tweak sshd_config.
I also recommend adding an entry in ~/.ssh/config with the correct IP address for your VM, which will simplify running the Ansible playbook.
Host rhel85
    Hostname 192.168.124.160
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    User root
You should then enable login via your SSH key so that Ansible can automate the rest of the setup:
ssh-copy-id rhel85