
This page describes how to create a new Kubernetes cluster via OpenStack Magnum by using Terraform or OpenTofu.


Pre-requisites

The following pre-requisites must be satisfied.


There is a set of mandatory inputs required to create a new cluster:

  • flavor_name: the resources (CPU, RAM, disk) configuration for the VMs (see EWC VM plans as a reference)
  • key_pair: the configured SSH key needed to connect to the VMs (see EWC - OpenStack Command-Line client for how to import it)
  • network: the network to be attached to the cluster; private-<tenant> is the local private network within the tenant
  • subnet: the network subnet within the tenant
  • cluster template: an existing Magnum Kubernetes cluster template from the provided set


The available options can also be checked using the OpenStack CLI commands:

$ openstack keypair list
 
$ openstack flavor list 
 
$ openstack network list

$ openstack subnet list


and for the Magnum cluster templates:

$ openstack coe cluster template list

The cluster templates maintained by ECMWF can be recognized by their name, which follows the convention "kubernetes-(k8s version)-(ubuntu version name)" (e.g. kubernetes-1-32-jammy).


The predefined settings of the provided cluster templates can be explored by running the command:

$ openstack coe cluster template show <cluster-template>
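
The same predefined settings can also be read from Terraform or OpenTofu through the provider's cluster template data source (the same data source used in the configuration below). The following is only a minimal sketch: the template name kubernetes-1-32-jammy and the output names are examples, and the templates available in your tenant may differ.

data "openstack_containerinfra_clustertemplate_v1" "inspect" {
  name = "kubernetes-1-32-jammy"
}

# Surface a couple of the template's predefined settings as outputs
output "template_labels" {
  value = data.openstack_containerinfra_clustertemplate_v1.inspect.labels
}

output "template_network_driver" {
  value = data.openstack_containerinfra_clustertemplate_v1.inspect.network_driver
}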

Write configuration files

Create a directory for your configuration and change directory into it:

$ mkdir example-magnum-k8s

$ cd example-magnum-k8s


Create the main configuration file to define the infrastructure:

$ touch main.tf


Open the main.tf file in a text editor and fill it in as needed, as in the following minimal example:

terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

provider "openstack" {
  cloud = "openstack"
}

variable "magnum_cluster_template" {
  description = <<EOT
  The name of the Magnum cluster template to create the Kubernetes cluster with.

  You may view a list of available templates by running `openstack coe cluster template list`
  EOT
  type        = string
  default     = "cluster-template-name"
}

data "openstack_containerinfra_clustertemplate_v1" "clustertemplate" {
  name = var.magnum_cluster_template
}

resource "openstack_containerinfra_cluster_v1" "cluster" {
  name                = "cluster-name"
  cluster_template_id = data.openstack_containerinfra_clustertemplate_v1.clustertemplate.id
  master_count        = "master-count"
  master_flavor       = "master-flavor-name"
  node_count          = "worker-node-count"
  flavor              = "worker-node-flavor-name"
  keypair             = "ssh-keypair-name"
  fixed_network       = "private-network-name"
  fixed_subnet        = "private-subnet-name"
  labels              = {
    monitoring_enabled   = "true"
    auto_healing_enabled = "true"
  }
  merge_labels        = "true"
  create_timeout      = "180"
}


Replace the following fields as desired:



  • cluster-template-name
  • cluster-name
  • master-count
  • master-flavor-name
  • worker-node-count
  • worker-node-flavor-name
  • ssh-keypair-name
  • private-network-name
  • private-subnet-name
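
Note that, instead of hard-coding the network and subnet names, you can optionally look them up through the provider's networking data sources. This is only a sketch: the data sources below come from terraform-provider-openstack, while the names are placeholders for your own tenant's resources.

data "openstack_networking_network_v2" "private" {
  name = "private-network-name"
}

data "openstack_networking_subnet_v2" "private" {
  name = "private-subnet-name"
}

resource "openstack_containerinfra_cluster_v1" "cluster" {
  # ... other arguments as in the example above ...
  fixed_network = data.openstack_networking_network_v2.private.name
  fixed_subnet  = data.openstack_networking_subnet_v2.private.name
}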


For instance, the filled-in configuration could look like this:

terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

provider "openstack" {
  cloud = "openstack"
}

variable "magnum_cluster_template" {
  description = <<EOT
  The name of the Magnum cluster template to create the Kubernetes cluster with.

  You may view a list of available templates by running `openstack coe cluster template list`
  EOT
  type        = string
  default     = "kubernetes-1-32-jammy"
}

data "openstack_containerinfra_clustertemplate_v1" "clustertemplate" {
  name = var.magnum_cluster_template
}

resource "openstack_containerinfra_cluster_v1" "cluster" {
  name                = "mycluster"
  cluster_template_id = data.openstack_containerinfra_clustertemplate_v1.clustertemplate.id
  master_count        = "3"
  master_flavor       = "4cpu-4gbmem-30gbdisk"
  node_count          = "2"
  flavor              = "4cpu-4gbmem-30gbdisk"
  keypair             = "mykeypair"
  fixed_network       = "private-cci1-ewcloud-ms-nmhs-project"
  fixed_subnet        = "cci1-ewcloud-ms-nmhs-project-private"
  labels              = {
    monitoring_enabled   = "true"
    auto_healing_enabled = "true"
  }
  merge_labels        = "true"
  create_timeout      = "180"
}
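
Optionally, recent versions of the OpenStack provider also export a kubeconfig attribute on the cluster resource. As a sketch (check that your provider version exposes this attribute), the generated config can be surfaced as a sensitive output instead of being retrieved with the OpenStack CLI as described in the "Access the cluster" section below:

output "kubeconfig" {
  value     = openstack_containerinfra_cluster_v1.cluster.kubeconfig.raw_config
  sensitive = true
}

After the apply, the config could then be written locally with, for example, terraform output -raw kubeconfig > config.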



Run Terraform or OpenTofu to create a Kubernetes cluster via OpenStack Magnum

Initialize the directory:

Terraform

$ terraform init


OpenTofu

$ tofu init



Review the planned changes:

Terraform

$ terraform plan


OpenTofu

$ tofu plan


Apply the changes to create the Kubernetes cluster:

Terraform

$ terraform apply


OpenTofu

$ tofu apply



The status can then be seen via:

Terraform

$ terraform show


OpenTofu

$ tofu show




Access the cluster

Once the cluster has been created successfully, it is possible to retrieve the cluster certificates and config in order to connect to it.
  1. Create a directory to store the cluster certificate and config:
    $ mkdir -p ./k8s_config_dir
  2. Retrieve the cluster certificate and config
    $ openstack coe cluster config \
        --dir ./k8s_config_dir \
        --force \
        --output-certs \
        mycluster
    
    
  3. You can inspect the folder content to verify:
    $ ls -1 k8s_config_dir/
    ca.pem
    cert.pem
    config
    key.pem


You can then export the Kubernetes config in order to access the cluster via kubectl:

$ export KUBECONFIG=/<path>/k8s_config_dir/config


and then access the cluster via kubectl, e.g.

$ kubectl get nodes
NAME                                                         STATUS   ROLES           AGE   VERSION
mycluster-y3gdbps5sjfy-control-plane-trjgn          Ready    control-plane   94m   v1.32.1
mycluster-y3gdbps5sjfy-control-plane-df4jk          Ready    control-plane   87m   v1.32.1
mycluster-y3gdbps5sjfy-control-plane-hbwdz          Ready    control-plane   89m   v1.32.1
mycluster-y3gdbps5sjfy-default-worker-sdqnk-96vhd   Ready    <none>          91m   v1.32.1
mycluster-y3gdbps5sjfy-default-worker-sdqnk-pf37n   Ready    <none>          91m   v1.32.1



