EWC Kubernetes Self-Service is based on the Rancher Kubernetes distribution. It is considered an EWC community template, not a centrally managed service: users are responsible for maintaining their clusters.
This documentation explains how to provision a Rancher Manager instance and RKE2 Kubernetes clusters.
Prerequisites
The configuration for creating a "Rancher Manager" instance and "RKE2 Kubernetes clusters" is not enabled by default for a tenancy.
If you're interested in using this deployment, please raise a ticket in the EWC Support Portal requesting access to this feature.
To check whether your tenancy is enabled, make sure the following items are present under Tools → Cypher:
- secret/rancher_manager_cloud
- secret/rancher_manager_cloud_id
Rancher Manager deployment
- From the Morpheus portal, go to “Provisioning” → “Instances”
- Press “Add” to start creating the Rancher instance
- Search for “Rancher” in the search bar, select the “Rancher Manager” instance type and press “Next”:
- Fill in the basic parameters, such as “Name”:
- Continue filling in the instance configuration parameters:
- Plan (e.g. one of the EWC VM plans).
- Networks (the instance requires a public IP; this can be achieved either by attaching the private network and using a floating IP, or by attaching the "external-internet" network directly. More information can be found on the page Provision a new instance - web).
- Rancher Dashboard Admin Password: the password needed to access the Rancher Dashboard at the end of the provisioning (username: "admin"). The same password will also be needed to provision and connect to the Kubernetes clusters. It must be at least 12 characters long.
- Security Group: "ssh-https" or "ssh-http-https"
- Floating IP: select "external" or one available in the list (only for EUMETSAT)
- Continue with the default settings in the next panels. After reviewing the settings, press "Complete" to start the provisioning:
- Wait for the deployment to complete; it typically takes 15-20 minutes:
- Once the deployment is complete, you can access the newly created instance by pointing a browser to "https://<rancher instance name>.<tenancy name>.<site char>.ewcloud.host", following the naming convention explained on the page EWC DNS. Log in with username "admin" and the Rancher Dashboard Admin Password you set during the deployment. You will see the following dashboard:
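As a quick sanity check, the dashboard URL can be assembled from its parts following the EWC DNS naming convention above. The instance name, tenancy name and site character below are illustrative assumptions; substitute your own values.

```shell
# Illustrative values (assumptions) -- replace with your own instance name,
# tenancy name and site character per the EWC DNS naming convention.
RANCHER_INSTANCE="my-rancher"
TENANCY="mytenancy"
SITE_CHAR="w"

RANCHER_URL="https://${RANCHER_INSTANCE}.${TENANCY}.${SITE_CHAR}.ewcloud.host"
echo "Rancher Dashboard: ${RANCHER_URL}"

# Optional reachability check once provisioning has finished:
#   curl -sko /dev/null -w '%{http_code}\n' "${RANCHER_URL}"
```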
RKE2 Kubernetes cluster deployment
This workflow must be run from the Rancher Manager machine you already deployed! The libraries it needs are only available there; running it from any other machine will fail.
- From the Morpheus portal, go to “Provisioning” → “Instances”, open the "Rancher" instance page and select "Actions" → "Run workflow":
- Search for the workflow "Deploy RKE2 cluster":
- Fill in the necessary parameters:
- Rancher Dashboard Admin Password: this is the password that was set to access the Rancher Dashboard instance
- RKE2 Cluster Name: name for the cluster that will appear on the dashboard
- RKE2 Cluster Environment: tag for the cluster (dev, stage, prod)
- RKE2 Cluster Kubernetes version: version of the Kubernetes cluster (e.g. 1.28)
- RKE2 Cluster worker node plan: resource plan for the worker nodes (select from the list provided; for more information about plans, see EWC VM plans)
- RKE2 Cluster Kubernetes deployment mode: "Development" (1 controller node) or "High Availability" (3 controller nodes)
- RKE2 Cluster Autoscaler maximum number of nodes: upper limit for scaling up the cluster (the minimum is 2 worker nodes)
- Leave the "Command Options" field empty
Once the parameters are filled in, press "Execute":
- Monitor the execution of the workflow via the progress bar, and check the "History" tab for more details:
- Wait for the workflow to finish. Once completed, the new cluster should be connected and visible in the Rancher Dashboard:
- If needed, you can provision and connect multiple clusters to the Rancher Dashboard: simply re-execute the workflow for each additional cluster.
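To verify a newly connected cluster from your own workstation, you can use kubectl with the kubeconfig file downloaded from the cluster's page in the Rancher Dashboard. A minimal sketch; the file path below is an illustrative assumption, not a path created by the workflow.

```shell
# Illustrative path (assumption): save the kubeconfig downloaded from the
# Rancher Dashboard here, or adjust the path to wherever you stored it.
KUBECONFIG_FILE="$HOME/.kube/rke2-demo.yaml"

if [ -f "$KUBECONFIG_FILE" ]; then
  # A Development cluster should list 1 controller node, a High Availability
  # cluster 3, plus the worker nodes.
  kubectl --kubeconfig "$KUBECONFIG_FILE" get nodes
else
  echo "kubeconfig not found at $KUBECONFIG_FILE"
fi
```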
SSH key quota limits with autoscaling
Please note that the provisioned clusters include an autoscaler component that scales the worker nodes up or down depending on the load.
There is a known bug in Rancher: when the autoscaler deletes worker nodes, their ssh-keys are not cleaned up and remain in the underlying OpenStack project. After frequent scale up/down cycles it is therefore possible to hit the OpenStack ssh-key quota limit, which prevents the autoscaler from creating additional nodes.
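Until the bug is fixed, leftover keys can in principle be removed manually with the OpenStack CLI. The sketch below only prints the cleanup commands (dry run); the cluster name, the sample key names and the assumption that autoscaler-created keypairs are prefixed with the cluster name are all illustrative, not documented Rancher behaviour, so verify them in your own project before deleting anything.

```shell
# Sketch of a manual cleanup, assuming OpenStack CLI access to the tenancy
# and that autoscaler-created keypairs are prefixed with the cluster name
# (verify both assumptions before deleting anything).
CLUSTER_NAME="demo-cluster"

# Sample data for illustration; in a real tenancy replace it with the live
# list:  KEYPAIRS="$(openstack keypair list -f value -c Name)"
KEYPAIRS="demo-cluster-abc123
demo-cluster-def456
my-personal-key"

# Build one delete command per leftover key (dry run): review the printed
# commands and run them yourself only once you are sure they are correct.
CLEANUP="$(echo "$KEYPAIRS" | grep "^${CLUSTER_NAME}-" | sed 's/^/openstack keypair delete /')"
echo "$CLEANUP"
```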
If you hit this problem, please raise an issue in the EWC Support Portal.