Proxmox

This page describes the Terraform configuration for managing Proxmox. It uses the bpg/proxmox provider to manage three types of Proxmox resources:

  • Access management
  • Cloud images
  • VMs

Upload of Cloud Images

The same Terraform configuration in terraform/proxmox can also be used to upload cloud images to Proxmox from a given source URL. These images must have the .img extension or the upload will fail.
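
For example, the image resource might look roughly like the following, a sketch using the provider's proxmox_virtual_environment_download_file resource; the URL, datastore and node name are placeholders:

# Sketch only: download a cloud image to a Proxmox datastore.
resource "proxmox_virtual_environment_download_file" "debian_cloud_image" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = "pve"
  url          = "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2"
  file_name    = "debian-12-generic-amd64.img" # must end in .img
}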

However, these cloud images cannot be used directly by Packer or Terraform to create VMs. Instead, a template must be created as described in Cloud Images.

VM Management

The Terraform configuration in terraform/cluster is used to create Proxmox VMs for the deployment of server and client cluster nodes. It utilizes a custom module (terraform/modules/vm) that clones an existing VM template and bootstraps it with cloud-init.

Note: The VM template must have cloud-init installed. See Packer for how to create a compatible template.
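
Internally, the clone and cloud-init bootstrap look roughly like the following sketch of the provider's proxmox_virtual_environment_vm resource. This is illustrative only, not the module's exact code; the node name, template ID, addresses, username and key path are placeholders:

resource "proxmox_virtual_environment_vm" "server" {
  name      = "server"
  node_name = "pve"

  clone {
    vm_id = 5003 # cloud-init enabled template
  }

  agent {
    enabled = true # requires qemu-guest-agent in the template
  }

  initialization {
    ip_config {
      ipv4 {
        address = "10.10.10.110/24"
        gateway = "10.10.10.1"
      }
    }

    user_account {
      username = "debian"
      keys     = [trimspace(file(pathexpand("~/.ssh/id_ed25519.pub")))]
    }
  }
}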

While root credentials can be used, this configuration accepts an API token (created previously):

provider "proxmox" {
    endpoint = "https://[ip]:8006/api2/json"
    api_token = "terraform@pam!some_secret=api_token"
    insecure = true

    ssh {
      agent = true
    }
}

The number of VMs provisioned is defined by the length of the servers and clients list variables. The following deploys two nodes in total: one server node and one client node with the given IP addresses. Both nodes are cloned from the given VM template.

template_id = 5003
ip_gateway  = "10.10.10.1"

servers = [
  {
    name       = "server"
    id         = 110
    cores      = 2
    sockets    = 2
    memory     = 4096
    disk_size  = 10
    ip_address = "10.10.10.110/24"
  }
]

clients = [
  {
    name       = "client"
    id         = 111
    cores      = 2
    sockets    = 2
    memory     = 10240
    disk_size  = 15
    ip_address = "10.10.10.111/24"
  }
]

On success, the provisioned VMs are accessible via the configured SSH username and public key.
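
The SSH user and key paths are supplied through the ssh_* variables listed under Variables below, for example (the username and paths here are illustrative):

ssh_username         = "debian"
ssh_public_key_file  = "~/.ssh/id_ed25519.pub"
ssh_private_key_file = "~/.ssh/id_ed25519"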

Note: The VM template must have qemu-guest-agent installed and agent=1 set. Otherwise, Terraform will time out.

Ansible Inventory

Terraform will also generate an Ansible inventory file tf_ansible_inventory in the same directory. Ansible can read this inventory file automatically by adding it to the inventory setting (under [defaults]) in ansible.cfg:

inventory=../terraform/cluster/tf_ansible_inventory,/path/to/other/inventory/files
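
The inventory itself can be rendered from the servers and clients variables, for example with the hashicorp/local provider. This is only a sketch of one possible approach, not necessarily the template used by this configuration:

# Sketch: render a minimal INI-style inventory from the node variables.
# The group names [server] and [client] are illustrative.
resource "local_file" "ansible_inventory" {
  filename = "${path.module}/tf_ansible_inventory"
  content = join("\n", concat(
    ["[server]"],
    [for s in var.servers : split("/", s.ip_address)[0]],
    ["", "[client]"],
    [for c in var.clients : split("/", c.ip_address)[0]]
  ))
}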

Variables

Proxmox

Variable           Description          Type     Default
proxmox_ip         Proxmox IP address   string
proxmox_user       Proxmox user         string   root@pam
proxmox_password   Proxmox password     string

VM

Variable               Description                             Type          Default
proxmox_ip             Proxmox IP address                      string
proxmox_api_token      Proxmox API token                       string
target_node            Proxmox node to start VM in             string        pve
tags                   List of Proxmox VM tags                 list(string)  [prod]
template_id            Template ID to clone                    number
onboot                 Start VM on boot                        bool          false
started                Start VM on creation                    bool          true
servers                List of server config (see above)       list(object)  []
clients                List of client config (see above)       list(object)  []
disk_datastore         Datastore on which to store VM disk     string        volumes
control_ip_address     Control IPv4 address in CIDR notation   string
ip_gateway             IPv4 gateway address                    string
ssh_username           User to SSH into during provisioning    string
ssh_private_key_file   Filepath of private SSH key             string
ssh_public_key_file    Filepath of public SSH key              string

  • The VM template corresponding to template_id must exist
  • The IPv4 addresses must be in CIDR notation with subnet masks (e.g. 10.0.0.2/24)

Notes

Proxmox credentials and LXC bind mounts

Root credentials must be used in place of an API token if you require bind mounts on an LXC container; bind mounts cannot be configured on an LXC via an API token.
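
In that case, the provider block can use the provider's username and password arguments instead of api_token (values are placeholders; var.proxmox_password is the variable listed above):

provider "proxmox" {
    endpoint = "https://[ip]:8006/api2/json"
    username = "root@pam"
    password = var.proxmox_password
    insecure = true

    ssh {
      agent = true
    }
}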