# Create Configuration Manifests

## Creating the IPI Installation Config File

In this section we will create the installation configuration files, which will then be used in the next section to deploy an OCP cluster.
## Preparing the Provisioning VM
We will use your `UserXX-LinuxToolsVM` as the provisioning VM to deploy OCP.

1. Log on to your `UserXX-LinuxToolsVM`.

2. Copy the Root CA's public certificate to the local certificate repository:

   ```bash
   cd $HOME/ocpuserXX # e.g /home/ubuntu/ocpuser01
   sudo cp rootCA.pem /usr/local/share/ca-certificates/rootCA.crt
   ```

3. Update the local CA trust store with the new certificate you created:

   ```bash
   sudo update-ca-certificates
   ```

   Now your `UserXX-LinuxToolsVM` will trust SSL connections to Prism Central; this trust store is also used by the `openshift-install` binary.

4. Create a local SSH key pair, which we will then use to access all the OCP cluster nodes (VMs):

   :::info
   Use an existing SSH key pair that you have access to if required. This key pair can be used to log on to the OCP nodes (VMs) for troubleshooting purposes.
   :::

   ```bash
   ssh-keygen -t rsa -b 2048
   ```
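If you want to script this step rather than answer the interactive prompts, `ssh-keygen` can run non-interactively. A minimal sketch; the scratch directory is illustrative — in the lab you would write to the default `~/.ssh/id_rsa` path instead:

```shell
# Non-interactive variant of the key-generation step (a sketch).
# -N '' sets an empty passphrase, -f picks the output path, -q suppresses output.
# The scratch directory is illustrative; the lab uses the default ~/.ssh path.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$tmpdir/id_rsa" -q

# Both halves of the key pair should now exist:
ls "$tmpdir"   # lists id_rsa and id_rsa.pub
```

The public half (`id_rsa.pub`) is what goes into the `sshKey` field of `install-config.yaml` later in this section.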
### Check before Deployment

Check DNS resolution for the IPI cluster's API and Ingress IPs. The API and Ingress FQDNs should each resolve to exactly one IP. Fix DNS resolution before proceeding.

Template Command:

```bash
nslookup api.ocpuser_0X_.ntnxlab.local
nslookup api.apps.ocpuser_0X_.ntnxlab.local
```

Example Command:

```bash
nslookup api.ocpuser01.ntnxlab.local
nslookup api.apps.ocpuser01.ntnxlab.local
```
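The "exactly one IP" rule can also be checked mechanically instead of by eyeballing the output. A minimal sketch: only the `Address:` lines after the `Name:` line are answer records (the first `Address:` is the DNS server itself). The captured output below is hypothetical, using the sample addresses from this lab, not a live lookup:

```shell
# Count answer records in nslookup output (a sketch).
# Only "Address:" lines after the "Name:" line are answers;
# the first "Address:" line is the DNS server, not an answer.
count_answers() {
  printf '%s\n' "$1" | awk '/^Name:/{in_ans=1} in_ans && /^Address:/{n++} END{print n+0}'
}

# Hypothetical output for a correctly configured API FQDN:
sample="Server:  10.38.11.1
Address: 10.38.11.1#53

Name: api.ocpuser01.ntnxlab.local
Address: 10.38.11.30"

count_answers "$sample"   # prints 1 -> exactly one IP, safe to proceed
```

Any count other than 1 means the DNS records need fixing before the deployment is started.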
## Creating the Installation Manifests (files)

1. In your `UserXX-LinuxToolsVM`, change to the working directory that we created before (if not there already):

   ```bash
   cd $HOME/ocpuserXX
   ```

2. Run the create install-config command:

   :::tip
   Copy the pull_secret value from the Red Hat Console or the `pull_secret.json` file into your clipboard, as you will need to enter it during the interactive command execution.
   :::

   ```bash
   openshift-install create install-config
   ```

3. Now we have to prepare the `install-config.yaml` file by adding the following details:

   - Your self-hosted Root CA's certificate
   - Machine network details (your HPOC's subnet)

   Add the details by opening the file in `VSCode` or using `vim` on the command line:

   ```bash
   vim install-config.yaml
   # if vim is not present, install it with: yum install -y vim
   ```

   <details>
   <summary>Toggle me for a file editing tip</summary>

   Editing instructions:

   - Inside vim, move the cursor to the beginning of the `-----BEGIN CERTIFICATE-----` line
   - Use the `ctrl+v` keyboard combination to enter visual block mode
   - Select the lines down to `-----END CERTIFICATE-----`
   - Press `I` (capital I, using the Shift key)
   - Type in 4 spaces
   - Press the `esc` key
   - Type `:wq!` to save and exit

   </details>
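If you'd rather not indent the certificate by hand in vim, the same result can be produced with `sed`. A minimal sketch; `sample.pem` here stands in for your real `rootCA.pem`, with the contents abbreviated:

```shell
# Indent every line of the certificate by 4 spaces, ready to paste
# under the additionalTrustBundle key (a sketch).
# sample.pem is a stand-in for rootCA.pem; contents are abbreviated.
printf -- '-----BEGIN CERTIFICATE-----\nMIIEJzCC...snipped...\n-----END CERTIFICATE-----\n' > sample.pem
sed 's/^/    /' sample.pem
```

Each output line now starts with four spaces, which is what the YAML block scalar expects.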
Sample `install-config.yaml` file (edit the highlighted parts):

```yaml
additionalTrustBundlePolicy: Always # Change the value to Always
additionalTrustBundle: | # Add this key and paste the indented rootCA.pem contents below
    -----BEGIN CERTIFICATE-----
    MIIEJzCCAw+gAwIBAgIUfW+2AkjS2Ha3i4RWsXbPRO5jIe0wDQYJKoZIhvcNAQEL
    BQAwgaIxCzAJBgNVBAYTAkpQMRAwDgYDVQQIDAdLYXNoaXdhMRAwDgYDVQQHDAdL
    YXNoaXdhMRAwDgYDVQQKDAdudXRhbml4MQ8wDQYDVQQLDAZyb290Y2ExHTAbBgNV
    < snipped >
    Z9ryNVFHsR4OwkHMwnArzyF194hre2SFzGAt/GOV0gM4s+XvQHnYdij7Js1zLhwM
    r/QDXtb4Amt1Cdc7otyuXrWHwQZQ4gWgGKE30mJVsdbYMOS2LKswpFHrcyhJ/JWz
    fY9uz5gEGT1lwOo=
    -----END CERTIFICATE-----
apiVersion: v1
baseDomain: ntnxlab.local
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 2 # Change this to 2 workers
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
credentialsMode: Manual
metadata:
  creationTimestamp: null
  name: ocpuser01
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.38.11.0/26 # Change to the Nutanix IPAM network CIDR
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  nutanix:
    apiVIPs:
    - 10.38.11.30
    ingressVIPs:
    - 10.38.11.31
    prismCentral:
      endpoint:
        address: pc.ntnxlab.local
        port: 9440
      password: techX2024!
      username: admin
    prismElements:
    - endpoint:
        address: 10.38.11.7
        port: 9440
      uuid: 000622c5-9387-880f-2a27-ac1f6b1894ce
    subnetUUIDs:
    - af91396f-ec79-4306-a398-b896565aa075
publish: External
pullSecret: '{"auths":{"cloud.openshift.com":{"auth":"b3BlbnNoxxxxxxx......}}}'
sshKey: |
  ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAxxxx....
```
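A common failure mode here is the pasted certificate losing its indentation, which silently breaks the YAML block scalar. A sketch of a quick check for that; the stub file below mimics the shape of the real `install-config.yaml` rather than your actual file:

```shell
# Every line of the embedded certificate must keep its leading spaces
# to stay inside the additionalTrustBundle block scalar (a sketch).
# The stub stands in for your real install-config.yaml.
cat > icfg-sample.yaml <<'EOF'
additionalTrustBundlePolicy: Always
additionalTrustBundle: |
    -----BEGIN CERTIFICATE-----
    MIIEJzCC...snipped...
    -----END CERTIFICATE-----
apiVersion: v1
EOF

# A certificate marker in column 1 means the indentation was lost:
if grep -qE '^-----(BEGIN|END) CERTIFICATE-----' icfg-sample.yaml; then
  echo "indentation broken"
else
  echo "certificate correctly indented"
fi
```

Run the same `grep` against your real file before moving on; an unindented `-----BEGIN CERTIFICATE-----` line means the vim (or sed) step needs repeating.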
Make a copy of `install-config.yaml` for troubleshooting reference:

```bash
cp install-config.yaml install-config-bkup.yaml
```

Now we can create all the other manifests required for the OCP cluster installation:

```bash
openshift-install create manifests
```

Since we will be using the CSI provisioner with Nutanix HCI storage later in the lab, we need to enable the iSCSI daemon on all the worker nodes (optional for master nodes) using the Machine Config Operator (MCO).
:::caution
The iSCSI daemon is usually disabled when OCP clusters are deployed. It is up to customers to decide when and where to enable it.
In our lab, we have decided to enable it now, so we can use Nutanix CSI to provide storage to applications.
Enabling the iSCSI daemon at this stage avoids any node reboots after the cluster is deployed and serving workloads.
:::
Create the MCO config to start the iSCSI daemon on worker nodes:

```bash
cat << EOF > manifests/99-worker-custom-enable-iscsid.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-enable-iscsid
spec:
  config:
    ignition:
      version: 3.1.0
    systemd:
      units:
      - enabled: true
        name: iscsid.service
EOF
```

The following step is optional. Customers wouldn't usually run workloads on master nodes, but this is how you would prepare master nodes to access PVs/PVCs for their workloads.
Optional: create the MCO config to start the iSCSI daemon on master nodes:

```bash
cat << EOF > manifests/99-master-custom-enable-iscsid.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-custom-enable-iscsid
spec:
  config:
    ignition:
      version: 3.1.0
    systemd:
      units:
      - enabled: true
        name: iscsid.service
EOF
```

Check the contents of the `manifests` and `openshift` directories to make sure all the files are present (including the iSCSI daemon MCO config files):

```bash
ll openshift manifests
```

Confirm the contents of the `manifests/cluster-proxy-01-config.yaml` file to make sure you are using `user-ca-bundle`:

```bash
cat manifests/cluster-proxy-01-config.yaml
```

Make sure the trustedCA's name is `user-ca-bundle`:
```yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  creationTimestamp: null
  name: cluster
spec:
  trustedCA:
    name: user-ca-bundle
status: {}
```

We will now move on to the IPI deployment part.
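The eyeball check of the proxy config can also be scripted. A sketch; the stub directory and file below mirror the expected `manifests/cluster-proxy-01-config.yaml` from this lab rather than reading your real working directory:

```shell
# Fail fast if the trusted CA is not user-ca-bundle (a sketch).
# The stub mirrors manifests/cluster-proxy-01-config.yaml from the lab.
mkdir -p manifests-check
cat > manifests-check/cluster-proxy-01-config.yaml <<'EOF'
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  creationTimestamp: null
  name: cluster
spec:
  trustedCA:
    name: user-ca-bundle
status: {}
EOF

# -A1 prints the line after trustedCA:, which should name user-ca-bundle
if grep -A1 'trustedCA:' manifests-check/cluster-proxy-01-config.yaml | grep -q 'name: user-ca-bundle'; then
  echo "proxy config OK"
else
  echo "trustedCA is not user-ca-bundle" >&2
fi
```

Point the same pipeline at your real `manifests/` directory before starting the deployment.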