Google Data loss prevention Job Trigger
This page shows how to write Terraform for Data loss prevention Job Trigger and how to write it securely.
google_data_loss_prevention_job_trigger (Terraform)
The Job Trigger in Data loss prevention can be configured in Terraform with the resource name google_data_loss_prevention_job_trigger. The following sections describe 2 examples of how to use the resource and its parameters.
Example Usage from GitHub
resource "google_data_loss_prevention_job_trigger" "fail" {
parent = "projects/my-project-name"
description = "Description"
display_name = "Displayname"
triggers {
resource "google_data_loss_prevention_job_trigger" "basic" {
parent = "projects/probable-bebop-315904"
description = "tfhybridtfDesc"
display_name = "tfhybridName"
inspect_job {
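Both snippets above are truncated in their source repositories. As a complement, here is a minimal, self-contained sketch of a complete configuration built from the parameters documented below; the project ID, bucket, dataset, and template names are placeholders rather than values from the original examples.

resource "google_data_loss_prevention_job_trigger" "example" {
  # Hypothetical parent project; replace with your own project ID.
  parent       = "projects/my-project-id"
  description  = "Scan a Cloud Storage prefix for sensitive data every day"
  display_name = "daily-gcs-scan"

  triggers {
    schedule {
      # Run once per day (86400 seconds).
      recurrence_period_duration = "86400s"
    }
  }

  inspect_job {
    # Hypothetical inspect template; create one with google_data_loss_prevention_inspect_template.
    inspect_template_name = "projects/my-project-id/inspectTemplates/my-template"

    storage_config {
      cloud_storage_options {
        file_set {
          # A trailing slash scans the directory non-recursively.
          url = "gs://my-bucket/data/"
        }
      }
    }

    actions {
      save_findings {
        output_config {
          table {
            project_id = "my-project-id"
            dataset_id = "dlp_findings"
            # table_id omitted: a table named dlp_googleapis_yyyy_mm_dd_[dlp_job_id] is generated.
          }
        }
      }
    }
  }
}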
Parameters
- description (optional, string): A description of the job trigger.
- display_name (optional, string): User-set display name of the job trigger.
- id (optional, computed, string)
- last_run_time (optional, computed, string): The timestamp of the last time this trigger executed.
- name (optional, computed, string): The resource name of the job trigger. Set by the server.
- parent (required, string): The parent of the trigger, either in the format 'projects/[[project]]' or 'projects/[[project]]/locations/[[location]]'.
- status (optional, string): Whether the trigger is currently active. Default value: "HEALTHY". Possible values: ["PAUSED", "HEALTHY", "CANCELLED"].
- inspect_job (list block):
  - inspect_template_name (required, string): The name of the template to run when this job is triggered.
  - actions (list block):
    - save_findings (list block): (example below)
      - output_config (list block):
        - output_schema (optional, string): Schema used for writing the findings for Inspect jobs. This field is only used for Inspect and must be unspecified for Risk jobs. Columns are derived from the Finding object. If appending to an existing table, any columns from the predefined schema that are missing will be added; no columns in the existing table will be deleted. If unspecified, then all available columns will be used for a new table or an (existing) table with no schema, and no changes will be made to an existing table that has a schema. Only for use with external storage. Possible values: ["BASIC_COLUMNS", "GCS_COLUMNS", "DATASTORE_COLUMNS", "BIG_QUERY_COLUMNS", "ALL_COLUMNS"].
        - table (list block):
          - dataset_id (required, string): Dataset ID of the table.
          - project_id (required, string): The Google Cloud Platform project ID of the project containing the table.
          - table_id (optional, string): Name of the table. If it is not set, a new one will be generated for you with the following format: 'dlp_googleapis_yyyy_mm_dd_[dlp_job_id]'. The Pacific time zone will be used for generating the date details.
  - storage_config (list block):
    - big_query_options (list block): (example below)
      - table_reference (list block):
        - dataset_id (required, string): The dataset ID of the table.
        - project_id (required, string): The Google Cloud Platform project ID of the project containing the table.
        - table_id (required, string): The name of the table.
    - cloud_storage_options (list block): (example below)
      - bytes_limit_per_file (optional, number): Max number of bytes to scan from a file. If a scanned file's size is bigger than this value, the rest of the bytes are omitted.
      - bytes_limit_per_file_percent (optional, number): Max percentage of bytes to scan from a file. The rest are omitted. The number of bytes scanned is rounded down. Must be between 0 and 100, inclusive. Both 0 and 100 mean no limit.
      - file_types (optional, list of string): List of file type groups to include in the scan. If empty, all files are scanned and available data format processors are applied. In addition, the binary content of the selected files is always scanned as well. Images are scanned only as binary if the specified region does not support image inspection and no fileTypes were specified. Possible values: ["BINARY_FILE", "TEXT_FILE", "IMAGE", "WORD", "PDF", "AVRO", "CSV", "TSV"].
      - files_limit_percent (optional, number): Limits the number of files to scan to this percentage of the input FileSet. The number of files scanned is rounded down. Must be between 0 and 100, inclusive. Both 0 and 100 mean no limit.
      - sample_method (optional, string): How to sample bytes if not all bytes are scanned. Meaningful only when used in conjunction with bytesLimitPerFile. If not specified, scanning starts from the top. Possible values: ["TOP", "RANDOM_START"].
      - file_set (list block):
        - url (optional, string): The Cloud Storage URL of the file(s) to scan, in the format 'gs://<bucket>/<path>'. A trailing wildcard in the path is allowed. If the URL ends in a trailing slash, the bucket or directory represented by the URL will be scanned non-recursively (content in sub-directories will not be scanned). This means that 'gs://mybucket/' is equivalent to 'gs://mybucket/*', and 'gs://mybucket/directory/' is equivalent to 'gs://mybucket/directory/*'.
        - regex_file_set (list block): (example below)
          - bucket_name (required, string): The name of a Cloud Storage bucket.
          - exclude_regex (optional, list of string): A list of regular expressions matching file paths to exclude. All files in the bucket that match at least one of these regular expressions will be excluded from the scan.
          - include_regex (optional, list of string): A list of regular expressions matching file paths to include. All files in the bucket that match at least one of these regular expressions will be included in the set of files, except for those that also match an item in excludeRegex. Leaving this field empty will match all files by default (this is equivalent to including .* in the list).
    - datastore_options (list block): (example below)
      - kind (list block):
        - name (required, string): The name of the Datastore kind.
      - partition_id (list block):
        - namespace_id (optional, string): If not empty, the ID of the namespace to which the entities belong.
        - project_id (required, string): The ID of the project to which the entities belong.
    - timespan_config (list block): (example below)
      - enable_auto_population_of_timespan_config (optional, bool): When the job is started by a JobTrigger, a valid startTime is automatically determined to avoid scanning files that have not been modified since the last time the JobTrigger executed. This is based on the time of the execution of the last run of the JobTrigger.
      - end_time (optional, string): Exclude files or rows newer than this value. If set to zero, no upper time limit is applied.
      - start_time (optional, string): Exclude files or rows older than this value.
      - timestamp_field (list block):
        - name (required, string): Specification of the field containing the timestamp of scanned items. Used for data sources like Datastore and BigQuery. For BigQuery: required to filter out rows based on the given start and end times. If not specified and the table was modified between the given start and end times, the entire table will be scanned. The valid data types of the timestamp field are INTEGER, DATE, TIMESTAMP, and DATETIME BigQuery columns. For Datastore, the only valid data type of the timestamp field is TIMESTAMP. A Datastore entity will be scanned if the timestamp property does not exist or its value is empty or invalid.
- timeouts (single block)
- triggers (list block):
  - schedule (list block): (example below)
    - recurrence_period_duration (optional, string): With this option a job is started on a regular periodic basis. For example: every day (86400 seconds). A scheduled start time will be skipped if the previous execution has not ended when its scheduled time occurs. This value must be set to a time duration greater than or equal to 1 day and can be no longer than 60 days. A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".
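Additional configuration sketches

The fragments below show how the blocks documented above fit together inside the resource. They are illustrative sketches rather than copies of the upstream examples, and every project ID, dataset, bucket, table, and column name is a placeholder. To write findings to BigQuery with an explicit schema, the save_findings action inside inspect_job can look like this:

actions {
  save_findings {
    output_config {
      output_schema = "BASIC_COLUMNS"
      table {
        project_id = "my-project-id"
        dataset_id = "dlp_findings"
        table_id   = "trigger_findings" # optional; a dated table name is generated if omitted
      }
    }
  }
}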
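To scan a BigQuery table instead of Cloud Storage, and to limit the scan to rows stamped within a time window, storage_config can combine big_query_options with timespan_config (a sketch; it assumes a hypothetical 'created_at' timestamp column):

storage_config {
  big_query_options {
    table_reference {
      project_id = "my-project-id"
      dataset_id = "warehouse"
      table_id   = "customer_events"
    }
  }
  timespan_config {
    # Only rows whose created_at value falls inside the window are scanned.
    start_time = "2022-01-01T00:00:00Z"
    end_time   = "2022-12-31T23:59:59Z"
    timestamp_field {
      name = "created_at"
    }
  }
}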
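For Cloud Storage sources, the sampling controls keep large scans cheap. The sketch below scans at most 1 MiB per file, starting at a random offset, for roughly half of the text and CSV files under a placeholder bucket path:

storage_config {
  cloud_storage_options {
    file_set {
      url = "gs://my-bucket/exports/*"
    }
    file_types           = ["TEXT_FILE", "CSV"]
    bytes_limit_per_file = 1048576
    sample_method        = "RANDOM_START"
    files_limit_percent  = 50
  }
}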
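When a URL prefix is not selective enough, file_set can pick objects by regular expression instead (a sketch; the bucket name and patterns are placeholders):

file_set {
  regex_file_set {
    bucket_name   = "my-bucket"
    include_regex = ["exports/.*\\.csv"]
    exclude_regex = ["exports/archive/.*"]
  }
}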
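A Datastore scan names the kind and the partition that holds the entities (a sketch; the kind and namespace are placeholders):

storage_config {
  datastore_options {
    partition_id {
      project_id   = "my-project-id"
      namespace_id = "production" # optional
    }
    kind {
      name = "Customer"
    }
  }
}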
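Finally, the schedule trigger takes a duration string of between 1 and 60 days. For example, to run the job once a week:

triggers {
  schedule {
    # 7 days x 86400 seconds = 604800 seconds
    recurrence_period_duration = "604800s"
  }
}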
Explanation in Terraform Registry
A job trigger configuration. To get more information about JobTrigger, see:
- API documentation
- How-to Guides
Frequently asked questions
What is Google Data loss prevention Job Trigger?
Google Data loss prevention Job Trigger is a resource for Data loss prevention of Google Cloud Platform. Settings can be written in Terraform.
Where can I find the example code for the Google Data loss prevention Job Trigger?
For Terraform, the anaik91/tfe and hemantgadiya/tfhybridjob source code examples are useful. See the Example Usage from GitHub section above for further details.