
Create Configuration Manifests

Creating IPI Installation Config File

In this section we will create the installation configuration files, which will then be used in the next section to deploy an OCP cluster.

Preparing Provisioning VM

We will use your UserXX-LinuxToolsVM as a provisioning VM to deploy OCP.

  1. Log on to your UserXX-LinuxToolsVM

  2. Copy the Root CA's public certificate to the local certificate repository

    cd $HOME/ocpuserXX # e.g. /home/ubuntu/ocpuser01
    sudo cp rootCA.pem /usr/local/share/ca-certificates/rootCA.crt
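
    Optionally, inspect the certificate before trusting it - a quick sketch, assuming openssl is installed on the VM:

    openssl x509 -in rootCA.pem -noout -subject -enddate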
  3. Update the local CA trust store with the certificate you just added

    sudo update-ca-certificates

    Your UserXX-LinuxToolsVM will now trust SSL connections to Prism Central; the openshift-install binary relies on this trust store.
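
    You can verify the trust with a quick TLS check against Prism Central - a minimal sketch, assuming your Prism Central answers at pc.ntnxlab.local:9440 as used later in this lab:

    curl -sS -o /dev/null -w '%{http_code}\n' https://pc.ntnxlab.local:9440/
    # Any HTTP status code without a certificate error means the CA is trusted;
    # an "SSL certificate problem: self-signed certificate" error means the import did not take effect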

  4. Create a local SSH key pair, which we will then use to access all the OCP cluster nodes (VMs)

    info

    If you already have access to an existing SSH key pair, you may use it instead. This key pair can be used to log on to the OCP nodes (VMs) for troubleshooting purposes.

    ssh-keygen -t rsa -b 2048
    Execution example - accept default values and no passphrase
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/ubuntu/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/ubuntu/.ssh/id_rsa
    Your public key has been saved in /home/ubuntu/.ssh/id_rsa.pub
    The key fingerprint is:
    SHA256:i3TtWk6hyVBmpiLZt4ZE+jNaU1ca6hUDwK9JaEQNspQ ubuntu@ocpuser10-LinuxToolsVM
    The key's randomart image is:
    (randomart image output varies per key)
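
    The openshift-install wizard will ask for this public key later, so confirm it exists and view its contents:

    cat $HOME/.ssh/id_rsa.pub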

Check before Deployment

Check DNS resolution for IPI cluster's API and Ingress IPs.

danger

The API and Ingress FQDNs should each resolve to exactly one IP address.

Fix DNS resolution before proceeding.

nslookup api.ocpuser0X.ntnxlab.local
nslookup api.apps.ocpuser0X.ntnxlab.local
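
To assert the single-IP requirement from the shell, here is a small sketch using dig (assuming dnsutils is installed; substitute your user number for 0X):

dig +short api.ocpuser0X.ntnxlab.local | wc -l        # expect exactly 1
dig +short api.apps.ocpuser0X.ntnxlab.local | wc -l   # expect exactly 1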

Creating the Installation Manifests (files)

  1. In your UserXX-LinuxToolsVM

  2. Change to the working directory that we created before (if not there already)

    cd $HOME/ocpuserXX
  3. Run the create install config command

    tip

    Copy the pull_secret value from the Red Hat Console or the pull_secret.json file to your clipboard, as you will need to paste it during the interactive command execution

    openshift-install create install-config 
    Execution example - use up and down arrow keys to choose options
    # openshift-install create install-config 
    ? SSH Public Key /root/.ssh/id_rsa.pub
    ? Platform nutanix
    ? Prism Central pc.ntnxlab.local # << Your Prism Central FQDN address
    ? Port 9440
    ? Username admin
    ? Password [? for help] ********** # << Your Prism Central password entered as plain text
    INFO Connecting to Prism Central pc.ntnxlab.local
    ? Prism Element PHX-SPOC018-4 # << Choose your Prism Element cluster
    INFO Defaulting to only available network: Primary
    ? Virtual IP Address for API 10.0.0.1 # << Type the IP you reserved for API
    ? Virtual IP Address for Ingress 10.0.0.2 # << Type the IP you reserved for Ingress
    ? Base Domain ntnxlab.local # << Type the name of your base domain
    ? Cluster Name ocpuserXX # << Type the name of your subdomain, i.e. your OCP cluster's name
    ? Pull Secret [? for help] ************************************************
    INFO Install-Config created in: .
  4. Now we have to prepare the install-config.yaml file by adding the following details:

    • Your self-hosted Root CA's certificate
    • Machine network details (your HPOC's subnet)
  1. Add the details by opening the file in VSCode or with vim on the command line

    vim install-config.yaml
    # if vim is not present, install it with: sudo apt install -y vim
    Toggle me for a file editing tip
    Editing instructions
    • Inside vim, move the cursor to the beginning of the line -----BEGIN CERTIFICATE-----
    • Use the ctrl + v keyboard combination to enter visual block mode
    • Select the lines down to -----END CERTIFICATE-----
    • Press I (capital I using the Shift key)
    • Type in 4 spaces
    • Press the esc key; the indent is applied to every selected line
    • Type :wq! and press Enter to save and exit
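
    If you prefer a non-interactive approach, you can indent a copy of the certificate before pasting it into the file - a sketch, assuming rootCA.pem is in your working directory (rootCA-indented.pem is just an illustrative name):

    sed 's/^/    /' rootCA.pem > rootCA-indented.pem
    # paste the contents of rootCA-indented.pem under the additionalTrustBundle: | key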
    Sample install-config.yaml file - Edit the highlighted parts
    additionalTrustBundlePolicy: Always # Change the value to Always
    additionalTrustBundle: | # Add this key
        -----BEGIN CERTIFICATE----- # Copy rootCA.pem contents here, indented by 4 spaces
        MIIEJzCCAw+gAwIBAgIUfW+2AkjS2Ha3i4RWsXbPRO5jIe0wDQYJKoZIhvcNAQEL
        BQAwgaIxCzAJBgNVBAYTAkpQMRAwDgYDVQQIDAdLYXNoaXdhMRAwDgYDVQQHDAdL
        YXNoaXdhMRAwDgYDVQQKDAdudXRhbml4MQ8wDQYDVQQLDAZyb290Y2ExHTAbBgNV
        < snipped >
        Z9ryNVFHsR4OwkHMwnArzyF194hre2SFzGAt/GOV0gM4s+XvQHnYdij7Js1zLhwM
        r/QDXtb4Amt1Cdc7otyuXrWHwQZQ4gWgGKE30mJVsdbYMOS2LKswpFHrcyhJ/JWz
        fY9uz5gEGT1lwOo=
        -----END CERTIFICATE-----
    apiVersion: v1
    baseDomain: ntnxlab.local
    compute:
    - architecture: amd64
      hyperthreading: Enabled
      name: worker
      platform: {}
      replicas: 2 # Change this to 2 workers
    controlPlane:
      architecture: amd64
      hyperthreading: Enabled
      name: master
      platform: {}
      replicas: 3
    credentialsMode: Manual
    metadata:
      creationTimestamp: null
      name: ocpuser01
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      machineNetwork:
      - cidr: 10.38.11.0/26 # Change to the Nutanix IPAM Network CIDR
      networkType: OVNKubernetes
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      nutanix:
        apiVIPs:
        - 10.38.11.30
        ingressVIPs:
        - 10.38.11.31
        prismCentral:
          endpoint:
            address: pc.ntnxlab.local
            port: 9440
          password: techX2024!
          username: admin
        prismElements:
        - endpoint:
            address: 10.38.11.7
            port: 9440
          uuid: 000622c5-9387-880f-2a27-ac1f6b1894ce
        subnetUUIDs:
        - af91396f-ec79-4306-a398-b896565aa075
    publish: External
    pullSecret: '{"auths":{"cloud.openshift.com":{"auth":"b3BlbnNoxxxxxxx......}}}'
    sshKey: |
      ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAxxxx....
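
    After editing, confirm the file still parses as valid YAML, since indentation mistakes in the certificate block are a common failure - a minimal sketch, assuming python3 with PyYAML is available on the VM:

    python3 -c 'import yaml; yaml.safe_load(open("install-config.yaml")); print("YAML OK")'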
  2. Make a copy of install-config.yaml for troubleshooting reference (openshift-install consumes and deletes this file when generating manifests, so the backup is your only record of the settings)

    cp install-config.yaml install-config-bkup.yaml
  3. Now we can create all the other manifests required for OCP cluster installation

    openshift-install create manifests
    Output
    openshift-install create manifests
    INFO Consuming Install Config from target directory
    INFO Manifests created in: manifests and openshift
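
    install-config.yaml has now been consumed by the installer, so only your backup remains - you can confirm with a quick listing:

    ls install-config*
    # expect only install-config-bkup.yaml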
  4. Since we will be using CSI-provisioned storage from Nutanix HCI later in the lab, we need to enable the iSCSI daemon on all worker nodes (optional for master nodes) using the Machine Config Operator (MCO).

    caution

    The iSCSI daemon is usually disabled when OCP clusters are deployed. It is up to customers to decide when and where to enable it.

    In our lab, we have decided to enable it now, so we can use Nutanix CSI to provide storage to applications.

    Enabling iSCSI daemon at this stage prevents any reboot requirements after the cluster is deployed and serving workloads.

    Create MCO config to start iSCSI daemon for worker nodes
    cat << EOF > manifests/99-worker-custom-enable-iscsid.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-custom-enable-iscsid
    spec:
      config:
        ignition:
          version: 3.1.0
        systemd:
          units:
          - enabled: true
            name: iscsid.service
    EOF

    The following step is optional. Customers wouldn't usually run workloads on master nodes, but this is how you would prepare master nodes to access PVs/PVCs for their workloads.

    Optional - create MCO config to start iSCSI daemon for master nodes
    cat << EOF > manifests/99-master-custom-enable-iscsid.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 99-master-custom-enable-iscsid
    spec:
      config:
        ignition:
          version: 3.1.0
        systemd:
          units:
          - enabled: true
            name: iscsid.service
    EOF
  5. Check the contents of the manifests and openshift directories to make sure all the files are present (including the iSCSI daemon MCO config files)

    ll openshift manifests
  6. Confirm the contents of the manifests/cluster-proxy-01-config.yaml file to make sure you are using user-ca-bundle

    cat manifests/cluster-proxy-01-config.yaml

    Make sure the trustedCA's name is user-ca-bundle

    apiVersion: config.openshift.io/v1
    kind: Proxy
    metadata:
      creationTimestamp: null
      name: cluster
    spec:
      trustedCA:
        name: user-ca-bundle
    status: {}
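
    To check just the trustedCA name without reading the whole file, a grep one-liner:

    grep -A1 'trustedCA:' manifests/cluster-proxy-01-config.yaml
    # the line after trustedCA: should read: name: user-ca-bundle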

    We will now move on to the IPI deployment part.