Google Dataproc Job
This page shows how to write Terraform for a Dataproc Job and how to write it securely.
google_dataproc_job (Terraform)
The Job in Dataproc can be configured in Terraform with the resource name google_dataproc_job. The following sections describe example usage of the resource and explain its parameters.
Example Usage from GitHub
resource "google_dataproc_job" "spark" {
region = google_dataproc_cluster.mycluster.region
force_delete = true
placement {
cluster_name = google_dataproc_cluster.mycluster.name
}
resource "google_dataproc_job" "spark" {
region = google_dataproc_cluster.mycluster.region
force_delete = true
placement {
cluster_name = google_dataproc_cluster.mycluster.name
}
Parameters
- driver_controls_files_uri optional computed - string
  Output-only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.
- driver_output_resource_uri optional computed - string
  Output-only. A URI pointing to the location of the stdout of the job's driver program.
- force_delete optional - bool
  By default, you can only delete inactive jobs within Dataproc. Setting this to true, and calling destroy, will ensure that the job is first cancelled before issuing the delete.
- labels optional - map from string to string
  Optional. The labels to associate with this job.
- project optional computed - string
  The project in which the cluster can be found and jobs subsequently run against. If it is not provided, the provider project is used.
- region optional - string
  The Cloud Dataproc region. This essentially determines which clusters are available for this job to be submitted to. If not specified, defaults to global.
- status optional computed - list of object
  The status of the job.
  - details - string
  - state - string
  - state_start_time - string
  - substate - string
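The snippet below is a minimal sketch of how these top-level arguments fit together: it attaches labels, enables force_delete, and surfaces the computed status through an output. The job name and label values are illustrative placeholders; the cluster reference reuses google_dataproc_cluster.mycluster from the examples on this page.

resource "google_dataproc_job" "labeled_spark" {
  region       = google_dataproc_cluster.mycluster.region
  force_delete = true # allow destroy to cancel a still-running job

  # Placeholder labels attached to the job, e.g. for filtering or billing breakdowns.
  labels = {
    "env"  = "dev"
    "team" = "data"
  }

  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  spark_config {
    main_class    = "org.apache.spark.examples.SparkPi"
    jar_file_uris = ["file:///usr/lib/spark/examples/jars/spark-examples.jar"]
  }
}

# status is output-only; the current state can be surfaced as an output.
output "labeled_spark_state" {
  value = google_dataproc_job.labeled_spark.status[0].state
}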
- hadoop_config list block
  - archive_uris optional - list of string
    HCFS URIs of archives to be extracted in the working directory of .jar, .tar, .tar.gz, .tgz, and .zip.
  - args optional - list of string
    The arguments to pass to the driver.
  - file_uris optional - list of string
    HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks.
  - jar_file_uris optional - list of string
    HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
  - main_class optional - string
    The class containing the main method of the driver. Must be in a provided jar or a jar that is already on the classpath. Conflicts with main_jar_file_uri.
  - main_jar_file_uri optional - string
    The HCFS URI of the jar file containing the driver jar. Conflicts with main_class.
  - properties optional - map from string to string
    A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
  - logging_config list block
    - driver_log_levels required - map from string to string
      The per-package log levels for the driver. This may include the 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
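As a rough illustration of the hadoop_config block (complementing the word-count example further down this page), the sketch below submits a user-supplied MapReduce jar via main_jar_file_uri. The bucket, jar path, arguments, and property value are hypothetical placeholders.

resource "google_dataproc_job" "hadoop_wordcount" {
  region = google_dataproc_cluster.mycluster.region

  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  hadoop_config {
    # Hypothetical jar location; any HCFS URI (gs://, hdfs://, file://) works here.
    main_jar_file_uri = "gs://my-example-bucket/jars/wordcount.jar"
    args              = ["gs://my-example-bucket/input/", "gs://my-example-bucket/output/"]

    properties = {
      "mapreduce.job.maps" = "8" # illustrative Hadoop property
    }

    logging_config {
      driver_log_levels = {
        "root" = "INFO"
      }
    }
  }
}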
- hive_config list block
  - continue_on_failure optional - bool
    Whether to continue executing queries if a query fails. Setting to true can be useful when executing independent parallel queries. Defaults to false.
  - jar_file_uris optional - list of string
    HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
  - properties optional - map from string to string
    A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
  - query_file_uri optional - string
    HCFS URI of a file containing the Hive script to execute as the job. Conflicts with query_list.
  - query_list optional - list of string
    The list of Hive queries or statements to execute as part of the job. Conflicts with query_file_uri.
  - script_variables optional - map from string to string
    Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
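The hive_config sketch below shows the query_file_uri and script_variables alternative to an inline query_list. The bucket, script path, and variable value are hypothetical.

resource "google_dataproc_job" "hive_report" {
  region = google_dataproc_cluster.mycluster.region

  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  hive_config {
    # Hypothetical script location; conflicts with query_list, so only one of the two is set.
    query_file_uri = "gs://my-example-bucket/hive/report.hql"

    # Substituted into the script as if `SET run_date="2021-01-01";` had been issued.
    script_variables = {
      "run_date" = "2021-01-01"
    }

    continue_on_failure = false
  }
}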
- pig_config list block
  - continue_on_failure optional - bool
    Whether to continue executing queries if a query fails. Setting to true can be useful when executing independent parallel queries. Defaults to false.
  - jar_file_uris optional - list of string
    HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
  - properties optional - map from string to string
    A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
  - query_file_uri optional - string
    HCFS URI of a file containing the Pig script to execute as the job. Conflicts with query_list.
  - query_list optional - list of string
    The list of Pig queries or statements to execute as part of the job. Conflicts with query_file_uri.
  - script_variables optional - map from string to string
    Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
  - logging_config list block
    - driver_log_levels required - map from string to string
      The per-package log levels for the driver. This may include the 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
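Similarly, a pig_config can point at a stored Pig Latin script instead of an inline query_list. In the sketch below the bucket, script path, and variable value are hypothetical placeholders.

resource "google_dataproc_job" "pig_cleanup" {
  region = google_dataproc_cluster.mycluster.region

  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  pig_config {
    # Hypothetical Pig Latin script; conflicts with query_list.
    query_file_uri = "gs://my-example-bucket/pig/cleanup.pig"

    # Equivalent to passing name=[value] on the Pig command line.
    script_variables = {
      "input_dir" = "gs://my-example-bucket/raw/"
    }

    logging_config {
      driver_log_levels = {
        "root" = "WARN"
      }
    }
  }
}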
- placement list block
  - cluster_name required - string
    The name of the cluster where the job will be submitted.
  - cluster_uuid optional computed - string
    Output-only. A cluster UUID generated by the Cloud Dataproc service when the job is submitted.
- pyspark_config list block
  - archive_uris optional - list of string
    Optional. HCFS URIs of archives to be extracted in the working directory of .jar, .tar, .tar.gz, .tgz, and .zip.
  - args optional - list of string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
  - file_uris optional - list of string
    Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.
  - jar_file_uris optional - list of string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
  - main_python_file_uri required - string
    Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.
  - properties optional - map from string to string
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
  - python_file_uris optional - list of string
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
  - logging_config list block
    - driver_log_levels required - map from string to string
      The per-package log levels for the driver. This may include the 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
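The pyspark_config sketch below adds python_file_uris and driver arguments to the basic PySpark example shown later on this page. The bucket and file names are hypothetical.

resource "google_dataproc_job" "pyspark_etl" {
  region = google_dataproc_cluster.mycluster.region

  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  pyspark_config {
    # Hypothetical driver and helper modules; helpers are shipped alongside the driver.
    main_python_file_uri = "gs://my-example-bucket/pyspark/etl.py"
    python_file_uris     = ["gs://my-example-bucket/pyspark/helpers.zip"]
    args                 = ["--date", "2021-01-01"]

    properties = {
      "spark.logConf" = "true"
    }
  }
}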
- reference list block
  - job_id optional computed - string
    The job ID, which must be unique within the project. The job ID is generated by the server upon job submission or provided by the user as a means to perform retries without creating duplicate jobs.
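A user-supplied job ID can make retries idempotent. In the sketch below the reference block and its job_id value are illustrative; omit the block to let the server generate an ID.

resource "google_dataproc_job" "spark_with_id" {
  region = google_dataproc_cluster.mycluster.region

  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  # Hypothetical job ID; retries with the same ID do not create duplicate jobs.
  reference {
    job_id = "tf-spark-pi-001"
  }

  spark_config {
    main_class    = "org.apache.spark.examples.SparkPi"
    jar_file_uris = ["file:///usr/lib/spark/examples/jars/spark-examples.jar"]
  }
}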
- scheduling list block
  - max_failures_per_hour required - number
    Maximum number of times per hour a driver may be restarted as a result of the driver exiting with a non-zero code before the job is reported failed.
  - max_failures_total required - number
    Maximum number of times in total a driver may be restarted as a result of the driver exiting with a non-zero code before the job is reported failed.
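A scheduling block bounds how often a failing driver is restarted. The limits in the sketch below are illustrative values, not recommendations.

resource "google_dataproc_job" "restartable_spark" {
  region = google_dataproc_cluster.mycluster.region

  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  # Allow up to 2 driver restarts per hour and 5 in total before the job is reported failed.
  scheduling {
    max_failures_per_hour = 2
    max_failures_total    = 5
  }

  spark_config {
    main_class    = "org.apache.spark.examples.SparkPi"
    jar_file_uris = ["file:///usr/lib/spark/examples/jars/spark-examples.jar"]
  }
}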
- spark_config list block
  - archive_uris optional - list of string
    HCFS URIs of archives to be extracted in the working directory of .jar, .tar, .tar.gz, .tgz, and .zip.
  - args optional - list of string
    The arguments to pass to the driver.
  - file_uris optional - list of string
    HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks.
  - jar_file_uris optional - list of string
    HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
  - main_class optional - string
    The class containing the main method of the driver. Must be in a provided jar or a jar that is already on the classpath. Conflicts with main_jar_file_uri.
  - main_jar_file_uri optional - string
    The HCFS URI of the jar file containing the driver jar. Conflicts with main_class.
  - properties optional - map from string to string
    A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
  - logging_config list block
    - driver_log_levels required - map from string to string
      The per-package log levels for the driver. This may include the 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
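The spark_config sketch below uses main_jar_file_uri instead of main_class; the two settings conflict, so only one may be set. The jar location, arguments, and Spark property value are hypothetical.

resource "google_dataproc_job" "spark_jar" {
  region = google_dataproc_cluster.mycluster.region

  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  spark_config {
    # Hypothetical application jar; main_jar_file_uri conflicts with main_class.
    main_jar_file_uri = "gs://my-example-bucket/jars/my-spark-app.jar"
    args              = ["--mode", "batch"]

    properties = {
      "spark.executor.memory" = "2g"
    }
  }
}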
- sparksql_config list block
  - jar_file_uris optional - list of string
    HCFS URIs of jar files to be added to the Spark CLASSPATH.
  - properties optional - map from string to string
    A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.
  - query_file_uri optional - string
    The HCFS URI of the script that contains SQL queries. Conflicts with query_list.
  - query_list optional - list of string
    The list of SQL queries or statements to execute as part of the job. Conflicts with query_file_uri.
  - script_variables optional - map from string to string
    Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
  - logging_config list block
    - driver_log_levels required - map from string to string
      The per-package log levels for the driver. This may include the 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
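The sparksql_config sketch below runs a stored SQL script with a substituted variable, as an alternative to the inline query_list example later on this page. The bucket, script path, and variable value are hypothetical.

resource "google_dataproc_job" "sparksql_report" {
  region = google_dataproc_cluster.mycluster.region

  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  sparksql_config {
    # Hypothetical SQL script; conflicts with query_list.
    query_file_uri = "gs://my-example-bucket/sql/daily_report.sql"

    # Substituted as if `SET report_date="2021-01-01";` had been issued.
    script_variables = {
      "report_date" = "2021-01-01"
    }
  }
}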
- timeouts single block
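Assuming the resource supports the usual create and delete operation timeouts, they can be raised for long-running submissions as in the sketch below; the durations shown are arbitrary.

resource "google_dataproc_job" "spark_with_timeouts" {
  region = google_dataproc_cluster.mycluster.region

  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  spark_config {
    main_class    = "org.apache.spark.examples.SparkPi"
    jar_file_uris = ["file:///usr/lib/spark/examples/jars/spark-examples.jar"]
  }

  # Assumed create/delete timeouts; adjust or omit if the provider defaults suffice.
  timeouts {
    create = "15m"
    delete = "15m"
  }
}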
Explanation in Terraform Registry
Manages a job resource within a Dataproc cluster within GCE. For more information see the official dataproc documentation.

!> Note: This resource does not support 'update' and changing any attributes will cause the resource to be recreated.

resource "google_dataproc_job" "spark" {
  region       = google_dataproc_cluster.mycluster.region
  force_delete = true
  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  spark_config {
    main_class    = "org.apache.spark.examples.SparkPi"
    jar_file_uris = ["file:///usr/lib/spark/examples/jars/spark-examples.jar"]
    args          = ["1000"]

    properties = {
      "spark.logConf" = "true"
    }

    logging_config {
      driver_log_levels = {
        "root" = "INFO"
      }
    }
  }
}

resource "google_dataproc_job" "pyspark" {
  region       = google_dataproc_cluster.mycluster.region
  force_delete = true
  placement {
    cluster_name = google_dataproc_cluster.mycluster.name
  }

  pyspark_config {
    main_python_file_uri = "gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py"
    properties = {
      "spark.logConf" = "true"
    }
  }
}

output "spark_status" {
  value = google_dataproc_job.spark.status[0].state
}

output "pyspark_status" {
  value = google_dataproc_job.pyspark.status[0].state
}
resource "google_dataproc_job" "pyspark" { ... pyspark_config { main_python_file_uri = "gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py" properties = { "spark.logConf" = "true" } } }For configurations requiring Hadoop Compatible File System (HCFS) references, the options below are generally applicable: - GCS files with the
gs://prefix - HDFS files on the cluster with thehdfs://prefix - Local files on the cluster with thefile://prefix
- main_python_file_uri - (Required) The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- args - (Optional) The arguments to pass to the driver.
- python_file_uris - (Optional) HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- jar_file_uris - (Optional) HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- file_uris - (Optional) HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.
- archive_uris - (Optional) HCFS URIs of archives to be extracted in the working directory of .jar, .tar, .tar.gz, .tgz, and .zip.
- properties - (Optional) A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- logging_config.driver_log_levels - (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.

The spark_config block supports:

resource "google_dataproc_job" "spark" {
  ...
  spark_config {
    main_class    = "org.apache.spark.examples.SparkPi"
    jar_file_uris = ["file:///usr/lib/spark/examples/jars/spark-examples.jar"]
    args          = ["1000"]

    properties = {
      "spark.logConf" = "true"
    }

    logging_config {
      driver_log_levels = {
        "root" = "INFO"
      }
    }
  }
}
- main_class - (Optional) The class containing the main method of the driver. Must be in a provided jar or a jar that is already on the classpath. Conflicts with main_jar_file_uri.
- main_jar_file_uri - (Optional) The HCFS URI of the jar file containing the driver jar. Conflicts with main_class.
- args - (Optional) The arguments to pass to the driver.
- jar_file_uris - (Optional) HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- file_uris - (Optional) HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks.
- archive_uris - (Optional) HCFS URIs of archives to be extracted in the working directory of .jar, .tar, .tar.gz, .tgz, and .zip.
- properties - (Optional) A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- logging_config.driver_log_levels - (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.

The hadoop_config block supports:

resource "google_dataproc_job" "hadoop" {
  ...
  hadoop_config {
    main_jar_file_uri = "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar"
    args = [
      "wordcount",
      "file:///usr/lib/spark/NOTICE",
      "gs://${google_dataproc_cluster.basic.cluster_config[0].bucket}/hadoopjob_output",
    ]
  }
}
- main_class - (Optional) The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris. Conflicts with main_jar_file_uri.
- main_jar_file_uri - (Optional) The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'. Conflicts with main_class.
- args - (Optional) The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- jar_file_uris - (Optional) HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- file_uris - (Optional) HCFS URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- archive_uris - (Optional) HCFS URIs of archives to be extracted in the working directory of .jar, .tar, .tar.gz, .tgz, and .zip.
- properties - (Optional) A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- logging_config.driver_log_levels - (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.

The hive_config block supports:

resource "google_dataproc_job" "hive" {
  ...
  hive_config {
    query_list = [
      "DROP TABLE IF EXISTS dprocjob_test",
      "CREATE EXTERNAL TABLE dprocjob_test(bar int) LOCATION 'gs://${google_dataproc_cluster.basic.cluster_config[0].bucket}/hive_dprocjob_test/'",
      "SELECT * FROM dprocjob_test WHERE bar > 2",
    ]
  }
}
- query_list - (Optional) The list of Hive queries or statements to execute as part of the job. Conflicts with query_file_uri.
- query_file_uri - (Optional) HCFS URI of a file containing the Hive script to execute as the job. Conflicts with query_list.
- continue_on_failure - (Optional) Whether to continue executing queries if a query fails. Setting to true can be useful when executing independent parallel queries. Defaults to false.
- script_variables - (Optional) Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- properties - (Optional) A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- jar_file_uris - (Optional) HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.

The pig_config block supports:

resource "google_dataproc_job" "pig" {
  ...
  pig_config {
    query_list = [
      "LNS = LOAD 'file:///usr/lib/pig/LICENSE.txt ' AS (line)",
      "WORDS = FOREACH LNS GENERATE FLATTEN(TOKENIZE(line)) AS word",
      "GROUPS = GROUP WORDS BY word",
      "WORD_COUNTS = FOREACH GROUPS GENERATE group, COUNT(WORDS)",
      "DUMP WORD_COUNTS",
    ]
  }
}
- query_list - (Optional) The list of Pig queries or statements to execute as part of the job. Conflicts with query_file_uri.
- query_file_uri - (Optional) HCFS URI of a file containing the Pig script to execute as the job. Conflicts with query_list.
- continue_on_failure - (Optional) Whether to continue executing queries if a query fails. Setting to true can be useful when executing independent parallel queries. Defaults to false.
- script_variables - (Optional) Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- properties - (Optional) A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- jar_file_uris - (Optional) HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- logging_config.driver_log_levels - (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.

The sparksql_config block supports:

resource "google_dataproc_job" "sparksql" {
  ...
  sparksql_config {
    query_list = [
      "DROP TABLE IF EXISTS dprocjob_test",
      "CREATE TABLE dprocjob_test(bar int)",
      "SELECT * FROM dprocjob_test WHERE bar > 2",
    ]
  }
}
- query_list - (Optional) The list of SQL queries or statements to execute as part of the job. Conflicts with query_file_uri.
- query_file_uri - (Optional) The HCFS URI of the script that contains SQL queries. Conflicts with query_list.
- script_variables - (Optional) Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- properties - (Optional) A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.
- jar_file_uris - (Optional) HCFS URIs of jar files to be added to the Spark CLASSPATH.
- logging_config.driver_log_levels - (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
Frequently asked questions
What is Google Dataproc Job?
Google Dataproc Job is a resource for Dataproc of Google Cloud Platform. Settings can be written in Terraform.
Where can I find the example code for the Google Dataproc Job?
For Terraform, the yuyatinnefeld/gcp and Gvkrishna/Terraform_GCP source code examples are useful. See the Example Usage from GitHub section above for further details.