Google Dataproc Cluster
This page shows how to write Terraform for a Dataproc Cluster and how to configure it securely.
google_dataproc_cluster (Terraform)
A Dataproc cluster can be configured in Terraform with the resource name google_dataproc_cluster. The following sections describe four examples of how to use the resource and its parameters.
Example Usage from GitHub
resource "google_dataproc_cluster" "mycluster_kkms" {
provider = google-beta
name = "mycluster"
region = "us-central1"
graceful_decommission_timeout = "120s"
resource "google_dataproc_cluster" "good" {
depends_on = [google_project_service.dataproc]
project = google_project.project.project_id
name = "good"
region = "us-central1"
resource "google_dataproc_cluster" "wsb_cluster" {
name = var.cluster_name
region = var.gcp_region
labels = {
env = var.label_env
}
resource "google_dataproc_cluster" "dataproc-cluster" {
#provider = google-beta # In order for "ednpoint_config to work google-beta must be the provider"
name = var.cluster_name
region = var.region
project = var.project_id
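The snippets above are excerpts from their repositories; omitted lines are marked with # .... For orientation, a minimal self-contained configuration might look like the following sketch. The resource name, bucket, and machine types here are illustrative placeholders, not values taken from the repositories above.

resource "google_dataproc_cluster" "minimal" {
  name   = "minimal-cluster"   # placeholder name
  region = "us-central1"

  cluster_config {
    staging_bucket = "my-dataproc-staging-bucket"  # placeholder; GCP auto-creates one if omitted

    master_config {
      num_instances = 1
      machine_type  = "n1-standard-4"
    }

    worker_config {
      num_instances = 2
      machine_type  = "n1-standard-4"
    }
  }
}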
Parameters
- graceful_decommission_timeout (optional, string): The timeout duration which allows graceful decommissioning when you change the number of worker nodes directly through a terraform apply.
- labels (optional, computed, map from string to string): The list of labels (key/value pairs) to be applied to instances in the cluster. GCP generates some itself, including goog-dataproc-cluster-name, which is the name of the cluster.
- name (required, string): The name of the cluster, unique within the project and zone.
- project (optional, computed, string): The ID of the project in which the cluster will exist. If it is not provided, the provider project is used.
- region (optional, string): The region in which the cluster and associated nodes will be created. Defaults to global.
- cluster_config (list block); the nested blocks below are combined in the configuration sketch after this list.
  - bucket (optional, computed, string): The name of the Cloud Storage bucket ultimately used to house the staging data for the cluster. If staging_bucket is specified, bucket will contain that value; otherwise it will be the auto-generated name.
  - staging_bucket (optional, string): The Cloud Storage staging bucket used to stage files, such as Hadoop jars, between client machines and the cluster. Note: if you don't explicitly specify a staging_bucket, GCP will auto-create/assign one for you. However, you are not guaranteed an auto-generated bucket solely dedicated to your cluster; it may be shared with other clusters in the same region/zone that also use the auto-generation option.
  - temp_bucket (optional, computed, string): The Cloud Storage temp bucket used to store ephemeral cluster and job data, such as Spark and MapReduce history files. Note: if you don't explicitly specify a temp_bucket, GCP will auto-create/assign one for you.
  - autoscaling_config (list block)
    - policy_uri (required, string): The autoscaling policy used by the cluster.
  - encryption_config (list block)
    - kms_key_name (required, string): The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
  - gce_cluster_config (list block)
    - internal_ip_only (optional, bool): By default, clusters are not restricted to internal IP addresses and will have ephemeral external IP addresses assigned to each instance. If set to true, all instances in the cluster will only have internal IP addresses. Note: Private Google Access (also known as privateIpGoogleAccess) must be enabled on the subnetwork that the cluster will be launched in.
    - metadata (optional, map from string to string): A map of the Compute Engine metadata entries to add to all instances.
    - network (optional, computed, string): The name or self_link of the Google Compute Engine network the cluster will be part of. Conflicts with subnetwork. If neither is specified, this defaults to the "default" network.
    - service_account (optional, string): The service account to be used by the node VMs. If not specified, the "default" service account is used.
    - service_account_scopes (optional, computed, set of string): The set of Google API scopes to be made available on all of the node VMs under the service_account specified. These can be either FQDNs or scope aliases.
    - subnetwork (optional, string): The name or self_link of the Google Compute Engine subnetwork the cluster will be part of. Conflicts with network.
    - tags (optional, set of string): The list of instance tags applied to instances in the cluster. Tags are used to identify valid sources or targets for network firewalls.
    - zone (optional, computed, string): The GCP zone where your data is stored and used (i.e. where the master and worker nodes will be created). If region is set to 'global' (default), then zone is mandatory; otherwise GCP is able to use Auto Zone Placement to determine it automatically for you. Note: this setting additionally determines and restricts which computing resources are available for use with other configs such as cluster_config.master_config.machine_type and cluster_config.worker_config.machine_type.
  - initialization_action (list block)
    - script (required, string): The script to be executed during initialization of the cluster. The script must be a GCS file with a gs:// prefix.
    - timeout_sec (optional, number): The maximum duration (in seconds) the script is allowed to take to execute its action. GCP will default to a predetermined computed value if not set (currently 300).
  - master_config (list block)
    - image_uri (optional, computed, string): The URI for the image to use for this master/worker.
    - instance_names (optional, computed, list of string): List of master/worker instance names which have been assigned to the cluster.
    - machine_type (optional, computed, string): The name of a Google Compute Engine machine type to create for the master/worker.
    - min_cpu_platform (optional, computed, string): The name of a minimum generation of CPU family for the master/worker. If not specified, GCP will default to a predetermined computed value for each zone.
    - num_instances (optional, computed, number): Specifies the number of master/worker nodes to create. If not specified, GCP will default to a predetermined computed value.
    - accelerators (set block)
      - accelerator_count (required, number): The number of the accelerator cards of this type exposed to this instance. Often restricted to one of 1, 2, 4, or 8.
      - accelerator_type (required, string): The short name of the accelerator type to expose to this instance. For example, nvidia-tesla-k80.
    - disk_config (list block)
      - boot_disk_size_gb (optional, computed, number): Size of the primary disk attached to each node, specified in GB. The primary disk contains the boot volume and system libraries, and the smallest allowed disk size is 10GB. GCP will default to a predetermined computed value if not set (currently 500GB). Note: if SSDs are not attached, it also contains the HDFS data blocks and Hadoop working directories.
      - boot_disk_type (optional, string): The disk type of the primary disk attached to each node. One of "pd-ssd" or "pd-standard". Defaults to "pd-standard".
      - num_local_ssds (optional, computed, number): The number of local SSD disks that will be attached to each master cluster node. Defaults to 0.
  - preemptible_worker_config (list block)
    - instance_names (optional, computed, list of string): List of preemptible instance names which have been assigned to the cluster.
    - num_instances (optional, computed, number): Specifies the number of preemptible nodes to create. Defaults to 0.
    - disk_config (list block)
      - boot_disk_size_gb (optional, computed, number): Size of the primary disk attached to each preemptible worker node, specified in GB. The smallest allowed disk size is 10GB. GCP will default to a predetermined computed value if not set (currently 500GB). Note: if SSDs are not attached, it also contains the HDFS data blocks and Hadoop working directories.
      - boot_disk_type (optional, string): The disk type of the primary disk attached to each preemptible worker node. One of "pd-ssd" or "pd-standard". Defaults to "pd-standard".
      - num_local_ssds (optional, computed, number): The number of local SSD disks that will be attached to each preemptible worker node. Defaults to 0.
  - security_config (list block)
    - kerberos_config (list block)
      - cross_realm_trust_admin_server (optional, string): The admin server (IP or hostname) for the remote trusted realm in a cross-realm trust relationship.
      - cross_realm_trust_kdc (optional, string): The KDC (IP or hostname) for the remote trusted realm in a cross-realm trust relationship.
      - cross_realm_trust_realm (optional, string): The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross-realm trust.
      - cross_realm_trust_shared_password_uri (optional, string): The Cloud Storage URI of a KMS-encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross-realm trust relationship.
      - enable_kerberos (optional, bool): Flag to indicate whether to Kerberize the cluster.
      - kdc_db_key_uri (optional, string): The Cloud Storage URI of a KMS-encrypted file containing the master key of the KDC database.
      - key_password_uri (optional, string): The Cloud Storage URI of a KMS-encrypted file containing the password to the user-provided key. For the self-signed certificate, this password is generated by Dataproc.
      - keystore_password_uri (optional, string): The Cloud Storage URI of a KMS-encrypted file containing the password to the user-provided keystore. For the self-signed certificate, this password is generated by Dataproc.
      - keystore_uri (optional, string): The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
      - kms_key_uri (required, string): The URI of the KMS key used to encrypt various sensitive files.
      - realm (optional, string): The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
      - root_principal_password_uri (required, string): The Cloud Storage URI of a KMS-encrypted file containing the root principal password.
      - tgt_lifetime_hours (optional, number): The lifetime of the ticket-granting ticket, in hours.
      - truststore_password_uri (optional, string): The Cloud Storage URI of a KMS-encrypted file containing the password to the user-provided truststore. For the self-signed certificate, this password is generated by Dataproc.
      - truststore_uri (optional, string): The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
  - software_config (list block)
    - image_version (optional, computed, string): The Cloud Dataproc image version to use for the cluster; this controls the sets of software versions installed onto the nodes when you create clusters. If not specified, defaults to the latest version.
    - optional_components (optional, set of string): The set of optional components to activate on the cluster.
    - override_properties (optional, map from string to string): A list of override and additional properties (key/value pairs) used to modify various aspects of the common configuration files used when creating a cluster.
    - properties (optional, computed, map from string to string): A list of the properties used to set the daemon config files. This will include any values supplied by the user via cluster_config.software_config.override_properties.
  - worker_config (list block)
    - image_uri (optional, computed, string): The URI for the image to use for this master/worker.
    - instance_names (optional, computed, list of string): List of master/worker instance names which have been assigned to the cluster.
    - machine_type (optional, computed, string): The name of a Google Compute Engine machine type to create for the master/worker.
    - min_cpu_platform (optional, computed, string): The name of a minimum generation of CPU family for the master/worker. If not specified, GCP will default to a predetermined computed value for each zone.
    - num_instances (optional, computed, number): Specifies the number of master/worker nodes to create. If not specified, GCP will default to a predetermined computed value.
    - accelerators (set block)
      - accelerator_count (required, number): The number of the accelerator cards of this type exposed to this instance. Often restricted to one of 1, 2, 4, or 8.
      - accelerator_type (required, string): The short name of the accelerator type to expose to this instance. For example, nvidia-tesla-k80.
    - disk_config (list block)
      - boot_disk_size_gb (optional, computed, number): Size of the primary disk attached to each node, specified in GB. The primary disk contains the boot volume and system libraries, and the smallest allowed disk size is 10GB. GCP will default to a predetermined computed value if not set (currently 500GB). Note: if SSDs are not attached, it also contains the HDFS data blocks and Hadoop working directories.
      - boot_disk_type (optional, string): The disk type of the primary disk attached to each node. One of "pd-ssd" or "pd-standard". Defaults to "pd-standard".
      - num_local_ssds (optional, computed, number): The number of local SSD disks that will be attached to each worker cluster node. Defaults to 0.
- timeouts (single block)
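To make the nesting above concrete, here is an illustrative sketch (not an authoritative configuration) that combines several of these blocks. Every concrete value (the subnetwork, service account, KMS key path, image version, and GCS script path) is a placeholder you would replace with your own.

resource "google_dataproc_cluster" "sketch" {
  name   = "sketch-cluster"   # placeholder
  region = "us-central1"

  cluster_config {
    gce_cluster_config {
      zone                   = "us-central1-a"
      internal_ip_only       = true          # requires Private Google Access on the subnetwork
      subnetwork             = "my-subnet"   # placeholder; conflicts with network
      service_account        = "dataproc-sa@my-project.iam.gserviceaccount.com"  # placeholder
      service_account_scopes = ["cloud-platform"]
      tags                   = ["dataproc-node"]
    }

    encryption_config {
      # placeholder key; enables KMS encryption of persistent disks
      kms_key_name = "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key"
    }

    master_config {
      num_instances = 1
      machine_type  = "n1-standard-4"
      disk_config {
        boot_disk_type    = "pd-ssd"
        boot_disk_size_gb = 100
      }
    }

    worker_config {
      num_instances = 2
      machine_type  = "n1-standard-4"
      disk_config {
        boot_disk_size_gb = 100
        num_local_ssds    = 1   # local SSDs hold the HDFS data blocks instead of the boot disk
      }
    }

    preemptible_worker_config {
      num_instances = 0
    }

    software_config {
      image_version       = "2.0-debian10"   # example version; check current Dataproc releases
      optional_components = ["JUPYTER"]
      override_properties = {
        "dataproc:dataproc.allow.zero.workers" = "true"
      }
    }

    initialization_action {
      script      = "gs://my-bucket/init.sh"   # placeholder GCS path
      timeout_sec = 300
    }
  }
}

Note that internal_ip_only = true only works when the chosen subnetwork has Private Google Access enabled, per the gce_cluster_config notes above.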
Explanation in Terraform Registry
Manages a Cloud Dataproc cluster resource within GCP.
- API documentation
- How-to Guides
- Official Documentation

Warning: Due to limitations of the API, all arguments except labels, cluster_config.worker_config.num_instances, and cluster_config.preemptible_worker_config.num_instances are non-updatable. Changing any other argument will cause recreation of the whole cluster!
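In practice, resizing the primary or preemptible worker groups is the main in-place update; pairing a resize with graceful_decommission_timeout lets running work drain before nodes are removed. A minimal sketch, assuming a placeholder cluster name:

resource "google_dataproc_cluster" "resizable" {
  name   = "resizable-cluster"   # placeholder
  region = "us-central1"

  # Allows in-flight work to finish before nodes are decommissioned
  graceful_decommission_timeout = "120s"

  cluster_config {
    worker_config {
      num_instances = 3   # updatable in place; most other changes recreate the cluster
    }
  }
}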
Frequently asked questions
What is Google Dataproc Cluster?
Google Dataproc Cluster is a resource for Dataproc of Google Cloud Platform. Settings can be written in Terraform.
Where can I find the example code for the Google Dataproc Cluster?
For Terraform, the anaik91/tfe, GoogleCloudPlatform/gcpdiag and Kacperek0/wsb-dataproc-infra source code examples are useful. See the Example Usage from GitHub section above for further details.