Optimizing Google Cloud Checks
Example: Changing Check Behavior based on Cloud Storage Bucket Labels
The default settings in Shisho Cloud include check items for Cloud Storage buckets that inspect their public access configuration. While these checks are beneficial in many cases, they often create operational overhead by triggering alerts even for buckets that are intended to be public.
Let's modify this check to change its behavior based on naming conventions or labels attached to the bucket.
Specifically, we'll configure the check so that no alert is raised for a bucket that carries the security-team-accepted-at label, even if it is publicly accessible. For instance, the shisho-20231114-user-01-public bucket in the image below is currently exposed to the internet, but this is acceptable because it has the security-team-accepted-at label attached:
Note that in the Shisho Cloud default settings, an alert is generated because the bucket is public:
Preparation
First, clone the repository containing the Shisho Cloud managed workflows:
git clone --recurse-submodules https://github.com/flatt-security/shisho-cloud-managed-workflows.git
Run the opa test
command in the cloned directory to ensure that all workflow tests pass:
$ cd shisho-cloud-managed-workflows
$ opa test -v .
...
workflows/csp/aws-fsbp/elb/logging/decide_test.rego:
data.policy.aws.elb.logging.test_lb_with_log_will_be_allowed: PASS (10.35925ms)
data.policy.aws.elb.logging.test_lb_without_log_will_be_denied: PASS (12.059792ms)
workflows/notification/slack/security-ja/triage/notify_test.rego:
data.notification.triage.test_severity_selected_successfully: PASS (506.667µs)
data.notification.triage.test_critical_severity_selected_as_failsafe_behavior: PASS (273.083µs)
--------------------------------------------------------------------------------
PASS: 186/186
Identify the Policy to Modify
First, you need to identify which policy defines the check you want to modify. The actual identification is easy; just check the detection source information displayed next to the check result details:
This information tells you which workflow and job detected the result. In this case, you can see that it was detected by the bucket-accessibility job of the workflow with the ID prebundle-googlecloud-storage.
This workflow is defined in the following path in flatt-security/shisho-cloud-managed-workflows:
workflows/cis-benchmark/googlecloud-v1.3.0/storage/bucket-accessibility
Currently, the following files are included in this path:
decide.graphql: Definition of the data to be inspected
decide.rego: Inspection logic
decide_test.rego: Test code
These files are the building blocks of workflows in Shisho Cloud. As illustrated in the image below, Shisho Cloud collects the inspection target data defined by the GraphQL query and executes Rego code using it as input to perform security checks:
For example, decide.graphql, which defines the inspection target data, retrieves a list of buckets together with their ACLs and IAM policies, as shown below:
{
  googleCloud {
    projects {
      cloudStorage {
        buckets {
          metadata {
            id
          }
          name
          acl {
            entity
            role
          }
          iamPolicy {
            bindings {
              role
              members {
                id
              }
            }
          }
        }
      }
    }
  }
}
The inspection logic that uses this data is shown below. It reads the GraphQL query results stored under input and collects the inspection results in a rule named decisions:
package policy.googlecloud.storage.bucket_accessibility

import data.shisho

decisions[d] {
  project := input.googleCloud.projects[_]
  bucket := project.cloudStorage.buckets[_]
  policy_bindings := public_policy_bindings(bucket.iamPolicy.bindings)
  acl_bindings := public_acl_rules(bucket.acl)
  d := shisho.decision.googlecloud.storage.bucket_accessibility({
    "allowed": (count(policy_bindings) + count(acl_bindings)) == 0,
    "subject": bucket.metadata.id,
    "payload": shisho.decision.googlecloud.storage.bucket_accessibility_payload({
      "public_policy_bindings": policy_bindings,
      "public_acl_rules": acl_bindings,
    }),
  })
}

is_public_principal(p) {
  p == "allAuthenticatedUsers"
} else {
  p == "allUsers"
} else = false

is_public_entity(p) := is_public_principal(p)

public_acl_rules(acl) := x {
  x := [{
    "role": a.role,
    "entity": a.entity,
  } |
    a := acl[_]
    is_public_entity(a.entity)
  ]
} else := []

public_policy_bindings(bindings) := x {
  x := [{
    "role": binding.role,
    "principal": member.id,
  } |
    binding := bindings[_]
    member := binding.members[_]
    is_public_principal(member.id)
  ]
} else := []
You have now identified the Shisho Cloud workflow to modify! In the following sections, we will modify this workflow so that its behavior changes based on labels.
Get Labels as Inspection Target Data
Next, let's modify the Shisho Cloud workflow to include Cloud Storage labels in the inspection target data. Access the Shisho Cloud GraphQL Playground.
Paste the query from workflows/cis-benchmark/googlecloud-v1.3.0/storage/bucket-accessibility/decide.graphql
there and search under the buckets
field to find a field called labels
. Modify the GraphQL query to include this field and retrieve labels as inspection target data:
{
  googleCloud {
    projects {
      cloudStorage {
        buckets {
          metadata {
            id
          }
          name
          acl {
            entity
            role
          }
          iamPolicy {
            bindings {
              role
              members {
                id
              }
            }
          }
          labels {
            key
            value
          }
        }
      }
    }
  }
}
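With this change, every bucket handed to the Rego policy carries a labels field: a list of objects with a key and a value. Just to illustrate that shape, a helper added to decide.rego could look up a label like this (bucket_has_label is a name made up for this guide, not part of the managed workflow):
# Succeeds when the bucket carries a label with the given key.
# Assumes bucket.labels has the shape returned by the query above:
#   [{"key": "...", "value": "..."}, ...]
bucket_has_label(bucket, key) {
  l := bucket.labels[_]
  l.key == key
}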
Modify the Tests
Now that the definition of the inspection target data has been modified, let's modify the tests. Tests are used to verify that the inspection logic is working correctly.
In this case, we decided that if the security-team-accepted-at
label is present, the bucket should be allowed even if it's public. Therefore, let's add a test to verify that the check result is allowed
for buckets that have the security-team-accepted-at
label:
package policy.googlecloud.storage.bucket_accessibility

import data.shisho
import future.keywords

test_whether_proper_accessibility_is_configured_for_storage_buckets if {
  # ... snipped ...

  # Check that the label suppresses the alert
  count([d |
    decisions[d]
    shisho.decision.is_allowed(d)
  ]) == 1 with input as {"googleCloud": {"projects": [{"cloudStorage": {"buckets": [{
    "metadata": {"id": "google-cloud-bq-dataset|514893259785|test-1"},
    "labels": [{"key": "security-team-accepted-at", "value": "2023-11-14"}],
    "acl": [
      {
        "role": "WRITER",
        "entity": "allUsers",
      },
      {
        "role": "OWNER",
        "entity": "projectWriters",
      },
    ],
    "iamPolicy": {"bindings": []},
  }]}}]}}
}
At this point, you can confirm that the added test fails when the opa test command is run, since the inspection logic does not evaluate labels yet:
$ opa test workflows/cis-benchmark/googlecloud-v1.3.0/storage/bucket-accessibility ./sdk
...
workflows/cis-benchmark/googlecloud-v1.3.0/storage/bucket-accessibility/decide_test.rego:
data.policy.googlecloud.storage.bucket_accessibility.test_whether_proper_accessibility_is_configured_for_storage_buckets: FAIL (9.491125ms)
--------------------------------------------------------------------------------
PASS: 1/2
FAIL: 1/2
The next step is to modify the inspection logic so that this test passes.
Modify the Inspection Logic to Evaluate Labels
Now, let's modify the inspection logic to evaluate labels. There are several ways to implement this, but this time, we will add a function called should_allow
to evaluate labels as follows:
package policy.googlecloud.storage.bucket_accessibility

import data.shisho

decisions[d] {
  project := input.googleCloud.projects[_]
  bucket := project.cloudStorage.buckets[_]
  policy_bindings := public_policy_bindings(bucket.iamPolicy.bindings)
  acl_bindings := public_acl_rules(bucket.acl)
  d := shisho.decision.googlecloud.storage.bucket_accessibility({
    "allowed": should_allow(bucket, policy_bindings, acl_bindings),
    "subject": bucket.metadata.id,
    "payload": shisho.decision.googlecloud.storage.bucket_accessibility_payload({
      "public_policy_bindings": policy_bindings,
      "public_acl_rules": acl_bindings,
    }),
  })
}

# Allow the bucket if it carries a non-empty security-team-accepted-at label;
# otherwise allow it only when it is not publicly accessible.
should_allow(bucket, policy_bindings, acl_bindings) {
  l := bucket.labels[_]
  l.key == "security-team-accepted-at"
  l.value != ""
} else {
  count(policy_bindings) + count(acl_bindings) == 0
} else := false

is_public_principal(p) {
  p == "allAuthenticatedUsers"
} else {
  p == "allUsers"
} else = false

is_public_entity(p) := is_public_principal(p)

public_acl_rules(acl) := x {
  x := [{
    "role": a.role,
    "entity": a.entity,
  } |
    a := acl[_]
    is_public_entity(a.entity)
  ]
} else := []

public_policy_bindings(bindings) := x {
  x := [{
    "role": binding.role,
    "principal": member.id,
  } |
    binding := bindings[_]
    member := binding.members[_]
    is_public_principal(member.id)
  ]
} else := []
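Note that should_allow only suppresses the alert when the label value is non-empty (l.value != ""), so a public bucket whose security-team-accepted-at label is present but empty is still flagged. If you want to pin that behavior down, a test along the following lines could be added to decide_test.rego; this is a minimal sketch modeled on the existing test, and the bucket ID and input data are made up:
test_public_bucket_with_empty_acceptance_label_is_denied if {
  # A public bucket whose acceptance label has an empty value must not be allowed.
  count([d |
    decisions[d]
    shisho.decision.is_allowed(d)
  ]) == 0 with input as {"googleCloud": {"projects": [{"cloudStorage": {"buckets": [{
    "metadata": {"id": "example-public-bucket"},
    "labels": [{"key": "security-team-accepted-at", "value": ""}],
    "acl": [{
      "role": "WRITER",
      "entity": "allUsers",
    }],
    "iamPolicy": {"bindings": []},
  }]}}]}}
}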
Check the Test Results
Now, the test should pass. Let's run the opa test
command:
$ opa test workflows/cis-benchmark/googlecloud-v1.3.0/storage/bucket-accessibility ./sdk
...
workflows/cis-benchmark/googlecloud-v1.3.0/storage/bucket-accessibility/decide_test.rego:
data.policy.googlecloud.storage.bucket_accessibility.test_whether_proper_accessibility_is_configured_for_storage_buckets: PASS (96.768375ms)
--------------------------------------------------------------------------------
PASS: 2/2
The test now passes! All that's left is to deploy this fix to Shisho Cloud and see it in action.
Deployment
Deploy the modified workflow by running the following command:
$ shishoctl workflow apply --org $SHISHO_ORG_ID -f workflows/cis-benchmark/googlecloud-v1.3.0/storage
Applying 1 manifest files...
succeeded to update a manifest; id=prebundle-googlecloud-storage, snapshot=WS01HF4PBYFYHVRRYEG6B9F0VFB4
Run the Workflow and Verify the Behavior
Trigger the modified workflow immediately with the following command:
$ shishoctl workflow run --org $SHISHO_ORG_ID prebundle-googlecloud-storage
If you access the workflow list page, you should see the execution status of the workflow:
After the execution is complete, access the check results list page. You can confirm that no alerts are generated for buckets with the security-team-accepted-at
label:
Summary
This concludes the example of modifying a Shisho Cloud workflow. You can customize the behavior of checks in Shisho Cloud by modifying workflows in this manner.
These modifications can also be rolled out through continuous deployment from a Git repository. By managing workflow code centrally and approving changes through code review before deployment, you can achieve more secure operations. Give it a try.