Editing article
Title
Summary
Content
<h3>Securing Anthos Workloads With Chronicle Backstory — A comprehensive approach</h3><p>Implementation process, threat detection strategies, and remediation workflows to get started.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/900/0*RpN2FuNx2P5E8wai.jpg" /></figure><p>How cool would it be to have a single pane of glass for securing Anthos and GKE clusters on the go? And to automate action on granular security findings before they escalate?</p><p>The world of multiple clouds and hybrid systems poses distinct security challenges. Google Cloud’s Anthos platform makes it easier to deploy applications across a variety of environments, but securing these workloads calls for a thorough strategy. With the integration of Chronicle Backstory, a threat detection and investigation tool, with Anthos, we can leverage extended detection and response (XDR) capabilities throughout this intricate environment. This blog delves into the technical aspects of using Chronicle Backstory to secure Anthos workloads: data ingestion, threat hunting queries, and the built-in integrations.</p><h3>Implementing Backstory with Anthos</h3><p>Chronicle Backstory uses pre-existing data sources to identify potential threats. Our main goal in integrating it with Anthos is to consume information from two main sources:</p><p><strong>Cloud Audit Logs:</strong> These record the administrative activity in your GCP projects, including that of Anthos clusters.<br><strong>GKE Logs:</strong> Kubernetes Engine (GKE) logs offer valuable information about container activity and possible security incidents in your Anthos workloads.</p><h4>Configuring cloud audit logging for Backstory</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*e6I_qtdeaCqpVhuqPvmYXg.gif" /><figcaption><a href="https://images.app.goo.gl/dM7fV7uT2ZPAD6ZD9">https://images.app.goo.gl/dM7fV7uT2ZPAD6ZD9</a></figcaption></figure><p>We start by enabling Cloud Audit Logs for our GCP project. Run the command below to grant the Chronicle ingestor service account the log writer role (the service account name is illustrative; use the one provisioned for your Chronicle instance):</p><pre>gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:chronicle-ingestor@backstory.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter"</pre>
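<p>Before wiring up the sink, it is worth confirming that audit log entries for your clusters are actually being produced. A quick sanity check (the filter is illustrative; adjust the resource type to your environment):</p><pre># Read back a few recent audit log entries for Kubernetes clusters
gcloud logging read \
  'logName:"cloudaudit.googleapis.com" AND resource.type="k8s_cluster"' \
  --project=PROJECT_ID --limit=5</pre>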
<p>Now that we have a log ingestor in place, we can create a logging sink to route these logs toward Backstory. Note that gcloud has no Backstory-specific sink type; a common pattern is to export to a Pub/Sub topic that your Chronicle feed or forwarder consumes (the topic name below is illustrative). With this setup, Anthos cluster activity from Cloud Audit Logs can be ingested by Backstory.</p><pre>gcloud logging sinks create backstory-sink \
  pubsub.googleapis.com/projects/PROJECT_ID/topics/backstory-export \
  --log-filter='resource.type="k8s_cluster"'</pre><p>Enabling GKE logging for Backstory involves two steps:</p><ul><li>Enable Stackdriver Kubernetes Engine Monitoring for your Anthos cluster.</li><li>Create a sink to export container logs to Backstory. The manifest below sketches the shape of such a configuration; the LoggingDeployment and LoggingSink kinds are illustrative, not stock Kubernetes APIs:</li></ul><pre># Configure the logging agent (illustrative manifest; adapt to your logging operator)
apiVersion: logging.k8s.io/v2
kind: LoggingDeployment
metadata:
  name: backstory-agent
spec:
  sinkRefs:
    - name: "backstory-sink"
      namespace: "logging"
  # Replace with your Backstory ingestion dataset
  outputDataset: "projects/your-project-id/datasets/anthos-logs"
  # Filters to select relevant container logs
  selectors:
    - expression: "resource.type=k8s_container"
---
# Define the logging sink that routes logs to Backstory
apiVersion: logging.k8s.io/v2
kind: LoggingSink
metadata:
  name: backstory-sink
spec:
  # Replace with your Backstory ingestion credentials
  secretRef:
    name: backstory-credentials
  # Secure HTTPS destination for Backstory ingestion
  destination: "https://your-backstory-endpoint.google.com/v2/ingest"
  # Log format for Backstory ingestion
  outputFormat: "json"</pre><p>With this setup, Backstory receives GKE logs from your Anthos workloads that show container activity.</p><h3>Threat Detection With Backstory Queries</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/0*mkKqN1LfvAgBx4jO.jpg" /></figure><p>Backstory is highly proficient at threat detection using Chronicle Query Language (CQL). Here are a few examples:</p><h4>Detecting Suspicious API Calls</h4><p>This query finds instances of unauthorized users making API calls to the Secrets API within Anthos clusters.</p><pre>SELECT resource.labels.cluster_name,
       timestamp,
       protoPayload.methodName,
       protoPayload.request.principalEmail
FROM audit_log
WHERE protoPayload.methodName LIKE '%/v1/secrets%'
  AND NOT protoPayload.request.principalEmail LIKE '%admin@yourdomain.com'
ORDER BY timestamp DESC;</pre><h4>Unusual Container Activity Detection</h4><p>This query finds containers that crash frequently in Anthos workloads, which may be a sign of suspicious activity.</p><pre>SELECT resource.labels.cluster_name,
       container.name,
       timestamp,
       jsonPayload.reason
FROM container
WHERE jsonPayload.reason LIKE '%CrashLoopBackOff%'
ORDER BY timestamp DESC;</pre><pre># Find container executions with unusual resource usage
# (thresholds are computed in subqueries, since aggregates cannot appear directly in WHERE)
SELECT process.name, container.name, resource.usage.cpu.usage_in_cores, resource.usage.memory.usage_in_bytes
FROM logs
WHERE resource.type = "k8s_container"
  AND (resource.usage.cpu.usage_in_cores > (SELECT AVG(resource.usage.cpu.usage_in_cores) + 3 * STDDEV(resource.usage.cpu.usage_in_cores) FROM logs WHERE resource.type = "k8s_container")
    OR resource.usage.memory.usage_in_bytes > (SELECT AVG(resource.usage.memory.usage_in_bytes) + 3 * STDDEV(resource.usage.memory.usage_in_bytes) FROM logs WHERE resource.type = "k8s_container"))
ORDER BY resource.usage.cpu.usage_in_cores DESC, resource.usage.memory.usage_in_bytes DESC</pre>
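<p>Once a query flags a suspicious container, you can pivot into the cluster itself for triage. A minimal sketch using standard kubectl commands (pod and namespace names are placeholders):</p><pre># List pods that are not running, across all namespaces
kubectl get pods --all-namespaces --field-selector=status.phase!=Running

# Inspect events and state for a flagged pod
kubectl describe pod POD_NAME -n NAMESPACE

# Pull logs from the previous (crashed) container instance
kubectl logs POD_NAME -n NAMESPACE --previous</pre>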
<h4>Suspicious Login Attempts</h4><p>This query looks for login attempts made from unusual locations during a specified time period. You can filter further by user account or by unsuccessful attempts.</p><pre>SELECT user_email, source_ip, timestamp
FROM events
WHERE event_type = 'login.attempt'
  AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND geo_country(source_ip) NOT IN ('US', 'GB') -- Replace with trusted countries</pre><h4>Potential Lateral Movement</h4><p>This query looks for user activity that may indicate lateral movement across clusters involving multiple GCP resources. Events can be narrowed down to particular resource kinds or to activity within a constrained time frame.</p><pre>SELECT user_email, resource_type, resource_name, MAX(timestamp) AS last_seen
FROM events
WHERE event_type IN ('resource.create', 'resource.access')
GROUP BY user_email, resource_type, resource_name
HAVING COUNT(*) > 5 -- Adjust threshold based on expected activity</pre><h4>Unusual File Access</h4><p>This query looks for file access events coming from unexpected user accounts or source IP addresses. Additional filters can be applied for particular file types or for access attempts outside business hours.</p><pre>SELECT user_email, source_ip, file_path, timestamp
FROM events
WHERE event_type = 'file.access'
  AND (user_email NOT IN ('admin@example.com', 'service_account@project.com') -- Trusted accounts
    OR geo_country(source_ip) NOT IN ('US')) -- Trusted location</pre><h3>Remediation?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*x9a8vmb4z1_JhlwL" /></figure><p>For automated remediation, Backstory integrates with a number of tools. Here are some examples using Cloud Functions (because that’s what I found closest at hand).</p><p>To isolate infected workloads on the cluster, a Cloud Function is triggered by Backstory findings; the function then taints the affected node (the alert payload fields are illustrative and depend on your alert format):</p><pre>def isolate_workload(data, context):
    # Extract cluster and node details from the Backstory alert payload.
    cluster_name = data['resource']['labels']['cluster_name']
    node_name = data['resource']['labels']['node_name']

    # Use the Kubernetes API to taint the affected node so that
    # no new pods are scheduled onto it.
    from kubernetes import client, config
    config.load_kube_config()
    v1 = client.CoreV1Api()
    v1.patch_node(
        node_name,
        body={"spec": {"taints": [{"effect": "NoSchedule", "key": "infected", "value": "true"}]}},
    )</pre><p>By adding a taint that stops additional pod scheduling, this Cloud Function automatically isolates the compromised node.</p>
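<p>To wire this into Backstory, the function can be deployed with a Pub/Sub trigger on the topic that receives your findings. A sketch of the deployment; the topic name is an assumption, so use whatever topic your Backstory alerts publish to:</p><pre># Deploy the isolation function, triggered by Backstory findings on Pub/Sub
gcloud functions deploy isolate_workload \
  --runtime=python311 \
  --trigger-topic=backstory-findings \
  --entry-point=isolate_workload</pre>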
<p>Further, you can implement something like:</p><pre>import base64
import json

def remediate_backstory_finding(data, context):
    """Cloud Function triggered by a Backstory detection published to Pub/Sub."""
    # Pub/Sub delivers the message payload base64-encoded in data["data"].
    backstory_finding = json.loads(base64.b64decode(data["data"]).decode("utf-8"))

    # Extract relevant details from the detection
    finding_name = backstory_finding["findingName"]
    threat_type = backstory_finding["externalSystems"][0]["threatType"]

    # Implement remediation logic based on threat type
    if threat_type == "MALWARE":
        # Example: Isolate the affected workload
        print(f"Isolating workload associated with finding: {finding_name}")
        # Replace with your specific isolation workflow (e.g., API call to Anthos)
    elif threat_type == "PORT_SCAN":
        # Example: Block suspicious IP addresses
        print(f"Blocking suspicious IP addresses from finding: {finding_name}")
        # Replace with your specific IP blocking workflow (e.g., firewall rule update)
    else:
        print(f"Unrecognized threat type: {threat_type} for finding: {finding_name}")
        # Implement logic for handling unknown threats or sending notifications</pre><p>A Pub/Sub message carrying the Backstory detection details in JSON initiates the function. After the message data is parsed, the threat type and finding name are extracted.<br>The function then carries out remediation actions specific to the threat type: in this case, workload isolation for malware and IP blocking for port scans.</p><h3>Conclusion</h3><p>These are just a few simple examples. You will need to adapt the queries to your particular Anthos environment, security posture, and the threats you want to find. As the integration develops, it is also advisable to refer to the official Backstory documentation for the most recent syntax and functionality.</p><h3>Get in touch</h3><p><a href="https://linktr.ee/imranfosec">imranfosec | Instagram | Linktree</a></p><hr><p><a href="https://medium.com/google-cloud/securing-anthos-workload-with-chronicle-backstory-a-comprehensive-approchg-fcf4a9a3a78b">Securing Anthos Workloads With Chronicle Backstory — A comprehensive approach</a> was originally published in <a href="https://medium.com/google-cloud">Google Cloud - Community</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>
Author
Link
Published date
Image url
Feed url
Guid
Hidden blurb
--- !ruby/object:Feedjira::Parser::RSSEntry
title: Securing Anthos Workload With Chronicle Backstory — A comprehensive approchg
url: https://medium.com/google-cloud/securing-anthos-workload-with-chronicle-backstory-a-comprehensive-approchg-fcf4a9a3a78b?source=rss----e52cf94d98af---4
author: Imran Roshan
categories:
- ai
- cloud-computing
- gcp-security-operations
- google-cloud-platform
- cybersecurity
published: 2024-05-13 02:17:57.000000000 Z
entry_id: !ruby/object:Feedjira::Parser::GloballyUniqueIdentifier
  is_perma_link: 'false'
  guid: https://medium.com/p/fcf4a9a3a78b
carlessian_info:
  news_filer_version: 2
  newspaper: Google Cloud - Medium
  macro_region: Blogs
rss_fields:
- title
- url
- author
- categories
- published
- entry_id
- content
Language
Active
Ricc internal notes
Imported via /Users/ricc/git/gemini-news-crawler/webapp/db/seeds.d/import-feedjira.rb on 2024-05-13 20:10:32 +0200. Content is EMPTY here. Entries: title,url,author,categories,published,entry_id,content. TODO add Newspaper: filename = /Users/ricc/git/gemini-news-crawler/webapp/db/seeds.d/../../../crawler/out/feedjira/Blogs/Google Cloud - Medium/2024-05-13-Securing_Anthos_Workload_With_Chronicle_Backstory — A_comprehens-v2.yaml
Ricc source