Cluster Deployment
  • 15 Feb 2021


Once VCD-CLI and the CSE extension are installed, clusters can be created and managed via the command line.  Once a cluster has been deployed, it can be accessed by copying the file located at /etc/kubernetes/admin.conf on the master node to ~/.kube/config on the local machine.  Additional details and descriptions are available at - and a sample cluster creation command is below.
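The copy step described above can be sketched as follows; the master node address and user shown here are placeholders for your environment:

```shell
# Copy the kubeconfig from the cluster's master node
# (203.0.113.10 is a placeholder address).
mkdir -p ~/.kube
scp root@203.0.113.10:/etc/kubernetes/admin.conf ~/.kube/config

# Verify access to the cluster from the local machine.
kubectl get nodes
```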

Cluster Creation Command
vcd cse cluster create --nodes 2 --cpu 2 --memory 4096 --network cse-cluster-test-151-local --storage-profile 151-Storage --template ubuntu-16.04 --ssh-key ~/.ssh/ --enable-nfs clustername



--nodes
The number of worker nodes to deploy.  This count does not include the master node, so the total vApp will contain one more VM than specified.


--cpu
The number of vCPU cores to assign to each node.

*By default, our templates use 2 vCPUs per node. You can use as little as 1 vCPU per node, but deployment will fail if you do not have at least 2 vCPUs available for use.

*Note that using fewer than 2 vCPUs per node is not recommended for production.


--memory
The amount of memory (in MB) to assign to each node.


--network
The name of the network to attach the cluster nodes to.  Generally speaking, this should be a dedicated network with outbound internet access, either via a configured Source NAT or direct external access.


--storage-profile
The storage profile to use when creating the new cluster.


--template
The template to use as the base for both the master and worker nodes, along with the NFS node if that option is selected at deploy time.


--ssh-key
Path to the public key file to provision as an authorized key for SSH access to the nodes.  This should be provided by the client if possible; if not, a key should be generated for them and the private key sent securely to the client.
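If a key must be generated on the client's behalf, a minimal sketch (the filename here is illustrative):

```shell
# Generate a dedicated RSA keypair; the .pub file is what --ssh-key expects.
ssh-keygen -t rsa -b 4096 -f ./cse-cluster-key -N "" -C "cse-cluster-client"
# Resulting files: cse-cluster-key      (private key, send securely to the client)
#                  cse-cluster-key.pub  (public key, pass to --ssh-key)
```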


--enable-nfs
Instructs the CSE service to deploy an NFS node within the cluster to provide persistent volume support (currently only available on Ubuntu).
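Once the create command completes, the deployment can be checked from the same CLI.  Assuming the cluster name used in the sample command above:

```shell
# List the clusters visible to the current organization and user.
vcd cse cluster list

# Show node details for the newly created cluster.
vcd cse cluster info clustername
```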
