Azure Container Node Pool

This page shows how to write Terraform and Azure Resource Manager templates for a Container Node Pool, and how to configure them securely.

azurerm_kubernetes_cluster_node_pool (Terraform)

The Node Pool in Container can be configured in Terraform with the resource name azurerm_kubernetes_cluster_node_pool. The following sections describe several examples of how to use the resource and its parameters.

Example Usage from GitHub

kubernetes_cluster_node_pool_test.tf#L27
resource "azurerm_kubernetes_cluster_node_pool" "example" {
  name                  = "internal"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
}

main.tf#L5
resource "azurerm_kubernetes_cluster_node_pool" "taas-sv-pool" {
  name                  = var.svpool_name
  kubernetes_cluster_id = var.kubernetes_cluster_id
  #kubernetes_cluster_id = azurerm_kubernetes_cluster
  #kubernetes_cluster_id = module.aks-cluster.azurerm_kubernetes_cluster_id
  enable_auto_scaling   = var.svpool_enable_auto_scaling

aks-cluster-user-nodes.tf#L1
resource "azurerm_kubernetes_cluster_node_pool" "user" {
  count = var.usernodepool_enabled ? 1 : 0

  availability_zones    = [1, 2, 3]
  enable_auto_scaling   = true
  kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id

main.tf#L3
resource "azurerm_kubernetes_cluster_node_pool" "autoscale_node_pool" {
  count                        = var.enable_auto_scaling ? 1 : 0
  name                         = var.node_pool_name
  kubernetes_cluster_id        = var.aks_cluster_id
  vnet_subnet_id               = var.vnet_subnet_id
  availability_zones           = var.availability_zones

main.tf#L58
resource "azurerm_kubernetes_cluster_node_pool" "apppool01_spot" {
  count = var.k8s_properties.apppool01_is_spot ? 1 : 0
  name                  = var.k8s_properties.apppool01_name
  kubernetes_cluster_id = azurerm_kubernetes_cluster.aks-np.id
  vm_size               = var.k8s_properties.apppool01_size
  #node_count            = 1

node_pool.tf#L1
resource "azurerm_kubernetes_cluster_node_pool" "main" {
  for_each              = var.node_pools
  name                  = each.value.name
  kubernetes_cluster_id = azurerm_kubernetes_cluster.main.id
  vm_size               = each.value.vm_size
  node_count            = each.value.node_count

main.tf#L1
resource "azurerm_kubernetes_cluster_node_pool" "spot" {
  for_each = local.spot_node_pools

  lifecycle {
    ignore_changes = [
      node_count,

main.tf#L64
resource "azurerm_kubernetes_cluster_node_pool" "windows" {
  name                  = "win"
  enable_node_public_ip = false
  os_type               = "Windows"
  os_disk_size_gb = 100
  os_disk_type = "Managed"

Review your Terraform file for Azure best practices

Shisho Cloud, our free checker that verifies your Terraform configuration follows best practices, is available (beta).

Parameters

Explanation in Terraform Registry

Manages a Node Pool within a Kubernetes Cluster. Note: Due to the fast-moving nature of AKS, we recommend using the latest version of the Azure Provider when working with AKS; you can find the latest version of the Azure Provider here.

NOTE: Multiple Node Pools are only supported when the Kubernetes Cluster is using Virtual Machine Scale Sets.
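
The GitHub excerpts above are truncated, so here is a minimal self-contained sketch of the pattern they all follow: a VMSS-backed cluster plus an additional user node pool. All names, locations, and sizes are illustrative assumptions, not values from any of the repositories above.

```hcl
resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks"

  # Multiple node pools require a Virtual Machine Scale Sets-backed
  # cluster, which is the default for default_node_pool.
  default_node_pool {
    name       = "system"
    node_count = 1
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}

# An additional user node pool attached to the cluster above,
# with autoscaling bounded by min_count/max_count.
resource "azurerm_kubernetes_cluster_node_pool" "user" {
  name                  = "user"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
  enable_auto_scaling   = true
  min_count             = 1
  max_count             = 3
}
```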

Tips: Best Practices for The Other Azure Container Resources

In addition to azurerm_kubernetes_cluster, Azure Container has other resources that should be configured for security reasons. Please check the examples and precautions for those resources below.


azurerm_kubernetes_cluster

Ensure logging is enabled for AKS

It is better to enable AKS logging to Azure Monitor. This provides useful information regarding access and usage.
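
As a sketch of that recommendation, recent versions of the azurerm provider enable Container Insights logging on the cluster via the oms_agent block pointing at a Log Analytics workspace (older provider versions used an addon_profile block instead). The resource names and location below are assumptions for illustration.

```hcl
resource "azurerm_log_analytics_workspace" "example" {
  name                = "example-law"
  location            = "West Europe"
  resource_group_name = "example-rg"
  sku                 = "PerGB2018"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "West Europe"
  resource_group_name = "example-rg"
  dns_prefix          = "exampleaks"

  default_node_pool {
    name       = "system"
    node_count = 1
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  # Ship cluster logs and metrics to the Log Analytics workspace.
  oms_agent {
    log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id
  }
}
```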

Review your Azure Container settings

In addition to the above, there are other security points you should be aware of. Make sure that your .tf files are protected in Shisho Cloud.

Microsoft.ContainerService/managedClusters/agentPools (Azure Resource Manager)

The managedClusters/agentPools in Microsoft.ContainerService can be configured in Azure Resource Manager with the resource name Microsoft.ContainerService/managedClusters/agentPools. The following sections describe how to use the resource and its parameters.

Example Usage from GitHub

An example could not be found in GitHub.

Parameters

  • apiVersion required - string
  • name required - string

    The name of the agent pool.

  • properties required
      • availabilityZones optional - array

        The list of Availability zones to use for nodes. This can only be specified if the AgentPoolType property is 'VirtualMachineScaleSets'.

      • count optional - integer

        Number of agents (VMs) to host docker containers. Allowed values must be in the range of 0 to 1000 (inclusive) for user pools and in the range of 1 to 1000 (inclusive) for system pools. The default value is 1.

      • creationData optional
          • sourceResourceId optional - string

            This is the ARM ID of the source object to be used to create the target object.

      • enableAutoScaling optional - boolean

        Whether to enable auto-scaler

      • enableEncryptionAtHost optional - boolean

        This is only supported on certain VM sizes and in certain Azure regions. For more information, see: https://docs.microsoft.com/azure/aks/enable-host-encryption

      • enableFIPS optional - boolean

        See Add a FIPS-enabled node pool for more details.

      • enableNodePublicIP optional - boolean

        Some scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is for gaming workloads, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. For more information see assigning a public IP per node. The default is false.

      • enableUltraSSD optional - boolean

        Whether to enable UltraSSD

      • gpuInstanceProfile optional - string

        GPUInstanceProfile to be used to specify GPU MIG instance profile for supported GPU VM SKU.

      • kubeletConfig optional
          • allowedUnsafeSysctls optional - array

            Allowed list of unsafe sysctls or unsafe sysctl patterns (ending in *).

          • containerLogMaxFiles optional - integer

            The maximum number of container log files that can be present for a container. The number must be ≥ 2.

          • containerLogMaxSizeMB optional - integer

            The maximum size (e.g. 10Mi) of container log file before it is rotated.

          • cpuCfsQuota optional - boolean

            The default is true.

          • cpuCfsQuotaPeriod optional - string

            The default is '100ms'. Valid values are a sequence of decimal numbers with an optional fraction and a unit suffix. For example: '300ms', '2h45m'. Supported units are 'ns', 'us', 'ms', 's', 'm', and 'h'.

          • cpuManagerPolicy optional - string

            The default is 'none'. See Kubernetes CPU management policies for more information. Allowed values are 'none' and 'static'.

          • failSwapOn optional - boolean

            If set to true it will make the Kubelet fail to start if swap is enabled on the node.

          • imageGcHighThreshold optional - integer

            To disable image garbage collection, set to 100. The default is 85%.

          • imageGcLowThreshold optional - integer

            This cannot be set higher than imageGcHighThreshold. The default is 80%.

          • podMaxPids optional - integer

            The maximum number of processes per pod.

          • topologyManagerPolicy optional - string

            For more information see Kubernetes Topology Manager. The default is 'none'. Allowed values are 'none', 'best-effort', 'restricted', and 'single-numa-node'.

      • kubeletDiskType optional - string
      • linuxOSConfig optional
          • swapFileSizeMB optional - integer

            The size in MB of a swap file that will be created on each node.

          • sysctls optional
              • fsAioMaxNr optional - integer

                Sysctl setting fs.aio-max-nr.

              • fsFileMax optional - integer

                Sysctl setting fs.file-max.

              • fsInotifyMaxUserWatches optional - integer

                Sysctl setting fs.inotify.max_user_watches.

              • fsNrOpen optional - integer

                Sysctl setting fs.nr_open.

              • kernelThreadsMax optional - integer

                Sysctl setting kernel.threads-max.

              • netCoreNetdevMaxBacklog optional - integer

                Sysctl setting net.core.netdev_max_backlog.

              • netCoreOptmemMax optional - integer

                Sysctl setting net.core.optmem_max.

              • netCoreRmemDefault optional - integer

                Sysctl setting net.core.rmem_default.

              • netCoreRmemMax optional - integer

                Sysctl setting net.core.rmem_max.

              • netCoreSomaxconn optional - integer

                Sysctl setting net.core.somaxconn.

              • netCoreWmemDefault optional - integer

                Sysctl setting net.core.wmem_default.

              • netCoreWmemMax optional - integer

                Sysctl setting net.core.wmem_max.

              • netIpv4IpLocalPortRange optional - string

                Sysctl setting net.ipv4.ip_local_port_range.

              • netIpv4NeighDefaultGcThresh1 optional - integer

                Sysctl setting net.ipv4.neigh.default.gc_thresh1.

              • netIpv4NeighDefaultGcThresh2 optional - integer

                Sysctl setting net.ipv4.neigh.default.gc_thresh2.

              • netIpv4NeighDefaultGcThresh3 optional - integer

                Sysctl setting net.ipv4.neigh.default.gc_thresh3.

              • netIpv4TcpFinTimeout optional - integer

                Sysctl setting net.ipv4.tcp_fin_timeout.

              • netIpv4TcpkeepaliveIntvl optional - integer

                Sysctl setting net.ipv4.tcp_keepalive_intvl.

              • netIpv4TcpKeepaliveProbes optional - integer

                Sysctl setting net.ipv4.tcp_keepalive_probes.

              • netIpv4TcpKeepaliveTime optional - integer

                Sysctl setting net.ipv4.tcp_keepalive_time.

              • netIpv4TcpMaxSynBacklog optional - integer

                Sysctl setting net.ipv4.tcp_max_syn_backlog.

              • netIpv4TcpMaxTwBuckets optional - integer

                Sysctl setting net.ipv4.tcp_max_tw_buckets.

              • netIpv4TcpTwReuse optional - boolean

                Sysctl setting net.ipv4.tcp_tw_reuse.

              • netNetfilterNfConntrackBuckets optional - integer

                Sysctl setting net.netfilter.nf_conntrack_buckets.

              • netNetfilterNfConntrackMax optional - integer

                Sysctl setting net.netfilter.nf_conntrack_max.

              • vmMaxMapCount optional - integer

                Sysctl setting vm.max_map_count.

              • vmSwappiness optional - integer

                Sysctl setting vm.swappiness.

              • vmVfsCachePressure optional - integer

                Sysctl setting vm.vfs_cache_pressure.

          • transparentHugePageDefrag optional - string

            Valid values are 'always', 'defer', 'defer+madvise', 'madvise' and 'never'. The default is 'madvise'. For more information see Transparent Hugepages.

          • transparentHugePageEnabled optional - string

            Valid values are 'always', 'madvise', and 'never'. The default is 'always'. For more information see Transparent Hugepages.

      • maxCount optional - integer

        The maximum number of nodes for auto-scaling

      • maxPods optional - integer

        The maximum number of pods that can run on a node.

      • minCount optional - integer

        The minimum number of nodes for auto-scaling

      • mode optional - string
      • nodeLabels optional - string

        The node labels to be persisted across all nodes in agent pool.

      • nodePublicIPPrefixID optional - string

        This is of the form: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPPrefixes/{publicIPPrefixName}

      • nodeTaints optional - array

        The taints added to new nodes during node pool create and scale. For example, key=value:NoSchedule.

      • orchestratorVersion optional - string

        As a best practice, you should upgrade all node pools in an AKS cluster to the same Kubernetes version. The node pool version must have the same major version as the control plane. The node pool minor version must be within two minor versions of the control plane version. The node pool version cannot be greater than the control plane version. For more information see upgrading a node pool.

      • osDiskSizeGB optional - integer

        OS Disk Size in GB to be used to specify the disk size for every machine in the master/agent pool. If you specify 0, it will apply the default osDisk size according to the vmSize specified.

      • osDiskType optional - string
      • osSKU optional - string
      • osType optional - string
      • podSubnetID optional - string

        If omitted, pod IPs are statically assigned on the node subnet (see vnetSubnetID for more details). This is of the form: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/subnets/{subnetName}

      • powerState optional
          • code optional - string

            Tells whether the cluster is Running or Stopped.

      • proximityPlacementGroupID optional - string

        The ID for Proximity Placement Group.

      • scaleDownMode optional - string

        This also affects the cluster autoscaler behavior. If not specified, it defaults to Delete.

      • scaleSetEvictionPolicy optional - string

        This cannot be specified unless the scaleSetPriority is 'Spot'. If not specified, the default is 'Delete'.

      • scaleSetPriority optional - string

        The Virtual Machine Scale Set priority. If not specified, the default is 'Regular'.

      • spotMaxPrice optional - number

        Possible values are any decimal value greater than zero or -1 which indicates the willingness to pay any on-demand price. For more details on spot pricing, see spot VMs pricing

      • tags optional - string

        The tags to be persisted on the agent pool virtual machine scale set.

      • type optional - string
      • upgradeSettings optional
          • maxSurge optional - string

            This can either be set to an integer (e.g. '5') or a percentage (e.g. '50%'). If a percentage is specified, it is the percentage of the total agent pool size at the time of the upgrade. For percentages, fractional nodes are rounded up. If not specified, the default is 1. For more information, including best practices, see: https://docs.microsoft.com/azure/aks/upgrade-cluster#customize-node-surge-upgrade

      • vmSize optional - string

        VM size availability varies by region. If a node contains insufficient compute resources (memory, cpu, etc) pods might fail to run correctly. For more details on restricted VM sizes, see: https://docs.microsoft.com/azure/aks/quotas-skus-regions

      • vnetSubnetID optional - string

        If this is not specified, a VNET and subnet will be generated and used. If no podSubnetID is specified, this applies to nodes and pods, otherwise it applies to just nodes. This is of the form: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/subnets/{subnetName}

      • workloadRuntime optional - string
  • type required - string
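
Since no GitHub example was found for this resource, here is a minimal hypothetical ARM template fragment using the parameters above: a VMSS-backed user pool with autoscaling across three zones. The apiVersion, cluster name, and pool name are illustrative assumptions.

```json
{
  "type": "Microsoft.ContainerService/managedClusters/agentPools",
  "apiVersion": "2022-09-01",
  "name": "myAksCluster/userpool",
  "properties": {
    "mode": "User",
    "type": "VirtualMachineScaleSets",
    "vmSize": "Standard_DS2_v2",
    "osType": "Linux",
    "count": 1,
    "enableAutoScaling": true,
    "minCount": 1,
    "maxCount": 3,
    "availabilityZones": ["1", "2", "3"]
  }
}
```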

Frequently asked questions

What is Azure Container Node Pool?

Azure Container Node Pool is a resource for Container of Microsoft Azure. Settings can be written in Terraform.

Where can I find the example code for the Azure Container Node Pool?

For Terraform, the gilyas/infracost, praveens-arch/sv-readyapi-cloud-infra and johnarok/azure-aks-sample source code examples are useful. See the Terraform Example section for further details.