Milestone 7.1 - Rocky Linux VM
VM Creation
A Rocky 9.1 base VM is created in support of this deliverable, with the following settings:
- Name: rocky
- OS: Linux / Rocky Linux (64-bit)
- CPU: 2
- RAM: 2 GB
- Hard Disk: 20 GB - Thin Provisioned
- Network Adapter: VM Network - Set to default for creation of Base VM snapshot
- CD/DVD Drive: Datastore ISO
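The same VM can also be created from the command line rather than the vSphere UI; a rough sketch with govc, where the guest ID, datastore, and ISO path are assumptions to adjust for the environment:
# Hypothetical govc equivalent of the settings above
govc vm.create -c=2 -m=2048 -g=otherLinux64Guest -net="VM Network" \
  -disk=20GB -iso="[datastore1] ISO/Rocky-9.1-x86_64-dvd.iso" -on=false rocky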
Initial Setup
The VM is powered on, and the initial setup process is performed to install Rocky. This setup process allows for sudo user creation, as well as installation of VMware Tools (via Software Selection). Once the VM reboots, change the hostname and reboot once more, as shown below.
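For example (the hostname below is a placeholder; substitute your own):
sudo hostnamectl set-hostname rocky-base
sudo reboot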
Then, the following script is used for initial provisioning:
#!/usr/bin/env bash
# This is the sys_prep script
# It will clear out all non-relevant information for a new VM
# Update packages and install guest tools
yum update -y
yum install open-vm-tools -y
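# Blank the machine ID so each clone regenerates a unique one on first boot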
echo > /etc/machine-id
# 1. Force logs to rotate and clear old.
/usr/sbin/logrotate -f /etc/logrotate.conf
/bin/rm -f /var/log/*-20* /var/log/*.gz
# 2. Clear the audit log & wtmp.
/bin/cat /dev/null > /var/log/audit/audit.log
/bin/cat /dev/null > /var/log/wtmp
# 3. Remove the udev device rules.
/bin/rm -f /etc/udev/rules.d/70*
# 4. Remove the traces of the template MAC address and UUIDs.
/bin/sed -i '/^\(HWADDR\|UUID\|IPADDR\|NETMASK\|GATEWAY\)=/d' /etc/sysconfig/network-scripts/ifcfg-e*
# 5. Clean /tmp out.
/bin/rm -rf /tmp/*
/bin/rm -rf /var/tmp/*
# 6. Remove the SSH host keys.
/bin/rm -f /etc/ssh/*key*
# 7. Remove the root user's shell history.
/bin/rm -f /root/.bash_history
unset HISTFILE
# 8. Set hostname to localhost
/bin/sed -i "s/HOSTNAME=.*/HOSTNAME=localhost.localdomain/g" /etc/sysconfig/network
/bin/hostnamectl set-hostname localhost.localdomain
# 9. Remove rsyslog.conf remote log server IP.
/bin/sed -i '/1.1.1.1.1/'d /etc/rsyslog.conf
# 10. Miscellaneous cleanup.
yum clean all
rm -v /root/.ssh/known_hosts
# 11. Shutdown the VM. Poweron required to scan new HW addresses.
poweroff
Milestone 7.2 - Network Preconfiguration
Static Route on 480-fw
On 480-fw, a new static route is created to properly route any traffic destined for the BLUE network (and thus blue1-fw). In VyOS configuration mode, the following is set, then committed and saved:
set protocols static route 10.0.5.0/24 next-hop 10.0.17.200
Create VyOS DHCP Pool on blue1-fw
The DHCP pool on blue1-fw will be created via an Ansible playbook.
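This relies on the vyos.vyos collection on the control node. As a sketch, assuming the inventory and playbook below are saved as hosts.yml and blue1-dhcp.yml (hypothetical names), installing the collection and running the play looks like:
ansible-galaxy collection install vyos.vyos
ansible-playbook -i hosts.yml blue1-dhcp.yml -k
Here -k prompts for the SSH password of the vyos user.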
The hosts file is first converted to YAML format, and some additional host variables are added for DHCP configuration:
vyos:
hosts:
10.0.17.200:
mac: 00:50:56:ba:41:8d
hostname: blue1-fw
wan_ip: 10.0.17.200
lan_ip: 10.0.5.2
lan: 10.0.5.0/24
name_server: 10.0.17.4
gateway: 10.0.17.2
dhcp_name_server: 10.0.5.5
shared_network: BLUE1
dhcp_domain: blue1.local
vars:
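    # network_cli with network_os vyos makes Ansible drive the VyOS CLI through
    # the vyos.vyos collection instead of a normal SSH/Python session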
ansible_python_interpreter: /usr/bin/python3
ansible_connection: network_cli
ansible_network_os: vyos
    ansible_user: vyos
Then, the following playbook can be used to directly execute VyOS commands on the host:
---
- name: blue1 vyos network configuration
hosts: vyos
tasks:
- name: Retrieve VyOS version
vyos.vyos.vyos_command:
commands: show version
register: version
- name: Display VyOS version
ansible.builtin.debug:
var: version.stdout_lines
- name: Configure VyOS DHCP
vyos.vyos.vyos_config:
save: true
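        # save: true also writes the committed changes to the startup config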
lines:
- set service dhcp-server global-parameters 'local-address {{ lan_ip }};'
- set service dhcp-server shared-network-name {{ shared_network }} authoritative
- set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} default-router '{{ lan_ip }}'
- set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} name-server '{{ dhcp_name_server }}'
- set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} domain-name '{{ dhcp_domain }}'
- set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} lease '86400'
- set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} range {{ shared_network }}-POOL start '10.0.5.75'
          - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} range {{ shared_network }}-POOL stop '10.0.5.125'
Rocky Postprovisioning
The following inventory file and Ansible playbook are used for postprovisioning of rocky-1 through rocky-3:
linux:
children:
rocky:
hosts:
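        # Hosts are addressed by their current DHCP leases (from the BLUE1
        # pool); lan_ip is the static address assigned during postprovisioning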
10.0.5.75:
hostname: rocky-1
lan_ip: 10.0.5.10
10.0.5.76:
hostname: rocky-2
lan_ip: 10.0.5.11
10.0.5.77:
hostname: rocky-3
lan_ip: 10.0.5.12
vars:
device: ens33
ubuntu:
hosts:
10.0.5.80:
hostname: ubuntu-1
lan_ip: 10.0.5.30
10.0.5.79:
hostname: ubuntu-2
lan_ip: 10.0.5.31
vars:
device: ens33
vars:
ansible_user: reed
public_key: "ssh-rsa ....."
prefix: 24
gateway: 10.0.5.2
name_server: 10.0.5.5
    domain: blue1.local
---
- name: Rocky post-install configuration
hosts: rocky
tasks:
- name: Prepare SSH
block:
- name: Ensure .ssh directory is present
ansible.builtin.file:
path: "/home/{{ ansible_user }}/.ssh"
state: directory
mode: 0700
- name: Create authorized_keys
ansible.builtin.file:
path: "/home/{{ ansible_user }}/.ssh/authorized_keys"
state: touch
mode: 0644
- name: Append public key to authorized_keys
ansible.builtin.blockinfile:
block: "{{ public_key }}"
dest: "/home/{{ ansible_user }}/.ssh/authorized_keys"
- name: Execute tasks as root
block:
- name: Create sudoers drop-in file
ansible.builtin.file:
path: /etc/sudoers.d/480
state: touch
mode: 0440
- name: Add entry to sudoers drop-in file
ansible.builtin.blockinfile:
dest: /etc/sudoers.d/480
block: "{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL"
- name: Set hostname
ansible.builtin.hostname:
name: "{{ hostname }}"
- name: Add host to hostsfile
ansible.builtin.lineinfile:
path: /etc/hosts
line: "127.0.1.1 {{ hostname }}"
- name: Configure network via nmcli
community.general.nmcli:
conn_name: "{{ device }}"
            ip4: "{{ lan_ip }}/{{ prefix }}"
gw4: "{{ gateway }}"
state: present
type: ethernet
dns4:
- "{{ name_server }}"
- "{{ gateway }}"
method4: manual
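        # Fire-and-forget reboot: async/poll return immediately instead of
        # waiting on the SSH session that the reboot tears down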
- name: Restart the VM
ansible.builtin.shell: "sleep 5 && reboot now"
async: 1
poll: 0
      become: true
Ubuntu Setup & Configuration
Two linked clones, ubuntu-1 and ubuntu-2, are created from the ubuntu.22.04.base VM. The following Ansible playbook (driven by the same inventory file shown above for the Rocky hosts) is used for postprovisioning:
---
- name: Ubuntu post-install configuration
hosts: ubuntu
tasks:
- name: Prepare SSH
block:
- name: Ensure .ssh directory is present
ansible.builtin.file:
path: "/home/{{ ansible_user }}/.ssh"
state: directory
mode: 0700
- name: Create authorized_keys
ansible.builtin.file:
path: "/home/{{ ansible_user }}/.ssh/authorized_keys"
state: touch
mode: 0644
- name: Append public key to authorized_keys
ansible.builtin.blockinfile:
block: "{{ public_key }}"
dest: "/home/{{ ansible_user }}/.ssh/authorized_keys"
- name: Execute tasks as root
block:
- name: Create sudoers drop-in file
ansible.builtin.file:
path: /etc/sudoers.d/480
state: touch
mode: 0440
- name: Add entry to sudoers drop-in file
ansible.builtin.blockinfile:
dest: /etc/sudoers.d/480
block: "{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL"
- name: Set hostname
ansible.builtin.hostname:
name: "{{ hostname }}"
- name: Add host to hostsfile
ansible.builtin.lineinfile:
path: /etc/hosts
line: "127.0.1.1 {{ hostname }}"
- name: Remove old netplan file
ansible.builtin.file:
path: /etc/netplan/00-installer-config.yaml
state: absent
- name: Configure network via netplan file
ansible.builtin.template:
src: files/ubuntu-netplan.yml.j2
dest: /etc/netplan/99-blue1-config.yaml
owner: root
group: root
mode: u=rw,g=r,o=r
notify:
- Apply netplan
- name: Restart the VM
ansible.builtin.shell: "sleep 5 && reboot now"
async: 1
poll: 0
become: true
handlers:
- name: Apply netplan
ansible.builtin.shell: "netplan apply"
      become: true
The playbook above relies on this templated netplan configuration for networking:
network:
version: 2
ethernets:
{{ device }}:
addresses:
- {{ lan_ip }}/{{ prefix }}
nameservers:
search: [{{ domain }}]
addresses: [{{ name_server }}, {{ gateway }}]
routes:
- to: default
via: {{ gateway }}
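For reference, rendering this template with the inventory values for ubuntu-1 (device ens33, lan_ip 10.0.5.30, prefix 24) produces:
network:
  version: 2
  ethernets:
    ens33:
      addresses:
        - 10.0.5.30/24
      nameservers:
        search: [blue1.local]
        addresses: [10.0.5.5, 10.0.5.2]
      routes:
        - to: default
          via: 10.0.5.2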