♊️ GemiNews 🗞️ (dev)


🗞️Upgrades!!! — Everything new with Kubernetes 1.30

🗿Semantically Similar Articles (by :title_embedding)

Upgrades!!! — Everything new with Kubernetes 1.30

2024-03-28 - Imran Roshan (from Google Cloud - Medium)


[Blogs] 🌎 https://medium.com/google-cloud/upgrades-everything-new-with-kubernetes-1-30-b539ebfad4ea?source=rss----e52cf94d98af---4 [🧠] [v2] article_embedding_description: {:llm_project_id=>"Unavailable", :llm_dimensions=>nil, :article_size=>8967, :llm_embeddings_model_name=>"textembedding-gecko"}
[🧠] [v1/3] title_embedding_description: {:ricc_notes=>"[embed-v3] Fixed on 9oct24. Only seems incompatible at first glance with embed v1.", :llm_project_id=>"unavailable possibly not using Vertex", :llm_dimensions=>nil, :article_size=>8967, :poly_field=>"title", :llm_embeddings_model_name=>"textembedding-gecko"}
[🧠] [v1/3] summary_embedding_description:
[🧠] As per bug https://github.com/palladius/gemini-news-crawler/issues/4 we can state this article belongs to title/summary version: v3 (very few articles updated on 9oct24)

🗿article.to_s

------------------------------
Title: Upgrades!!! — Everything new with Kubernetes 1.30

Author: Imran Roshan
PublishedDate: 2024-03-28
Category: Blogs
NewsPaper: Google Cloud - Medium
Tags: cloud-computing, cybersecurity, kubernetes, google-cloud-platform
{"id"=>1233,
"title"=>"Upgrades!!! — Everything new with Kubernetes 1.30",
"summary"=>nil,
"content"=>"

Upgrades!!! — Everything new with Kubernetes 1.30

New features, enhancements and everything exciting with Kubernetes 1.30

\"\"

Excited? Aren’t we all? This version includes a slew of innovative features aimed at enhancing security, simplifying pod management, and empowering developers. Now let’s explore the main features that take Kubernetes 1.30 to the next level.

Enhanced Security Again

With the introduction of various improvements, Kubernetes 1.30 further establishes itself as a safe platform for workload deployment and management.

User namespaces for greater pod isolation [beta]

This ground-breaking feature, which graduates to beta in 1.30, gives fine-grained control over the identities of users running inside pods. It maps the UIDs (user IDs) and GIDs (group IDs) used inside a pod to different values on the host system. By drastically lowering the attack surface, this isolation makes it much harder for a compromised container to abuse privileges on the underlying host.

apiVersion: v1
kind: Pod
metadata:
  name: my-secure-pod
spec:
  # hostUsers: false asks the kubelet to run this pod in its own user namespace
  hostUsers: false
  containers:
  - name: my-app
    image: my-secure-image:latest

In this example, setting hostUsers: false in the pod spec instructs the kubelet to run the pod in a fresh user namespace, effectively isolating its processes from other processes on the host.
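To make the isolation concrete: user namespaces remap IDs the same way /proc/&lt;pid&gt;/uid_map does, as (inside-start, outside-start, length) ranges, so root inside the pod is an unprivileged ID on the host. A tiny illustrative sketch (the helper name and the 100000 offset are made-up examples, not Kubernetes API):

```python
def to_host_uid(container_uid: int, ns_start: int, host_start: int, length: int) -> int:
    """Translate a UID inside the user namespace to the host UID,
    following the (inside-start, outside-start, length) convention of uid_map."""
    if not (ns_start <= container_uid < ns_start + length):
        raise ValueError("UID not covered by this mapping")
    return host_start + (container_uid - ns_start)

# With a hypothetical mapping "0 100000 65536": root (UID 0) inside the pod
# is the unprivileged UID 100000 on the host.
print(to_host_uid(0, 0, 100000, 65536))  # 100000
```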

Bound service account tokens [beta]

For service account authentication, bound service account tokens (SATs) provide a more secure option than conventional, non-bound tokens. A bound SAT is tied to a particular pod and audience and expires automatically, so it grants access only where, and for as long as, it is needed. As a result, the blast radius of a compromised token, and with it the potential damage, is minimized.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod-with-bound-sat
spec:
  serviceAccountName: my-service-account
  containers:
  - name: my-app
    image: my-app-image:latest
    volumeMounts:
    - name: bound-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: bound-token
    projected:
      sources:
      # A projected serviceAccountToken volume requests a token that is
      # bound to this pod, with an explicit audience and lifetime
      - serviceAccountToken:
          path: token
          audience: my-audience
          expirationSeconds: 3600

Here the pod requests a token for the designated service account (my-service-account) through a projected serviceAccountToken volume; the resulting token is bound to this specific pod, scoped to the my-audience audience, and expires after an hour.
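Under the hood, a bound token is just a JWT whose payload records what it is bound to: an audience, an expiry, and a kubernetes.io claim naming the pod. As an illustration with a fabricated token (the claim values are invented; only the claim shape follows projected tokens), the payload segment can be inspected with an unverified decode, for debugging only:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT, e.g. a projected
    service account token, to inspect its audience and pod binding."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated token for illustration -- header "e30" is base64 for "{}".
claims = {"aud": ["my-audience"], "exp": 1735689600,
          "kubernetes.io": {"namespace": "default",
                            "pod": {"name": "my-pod-with-bound-sat"}}}
fake = "e30." + base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=") + ".sig"
print(jwt_payload(fake)["kubernetes.io"]["pod"]["name"])  # my-pod-with-bound-sat
```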

Node log queries

Understanding node logs is essential for security analysis and troubleshooting. With the beta release of Node Log Query in Kubernetes 1.30, administrators can use the kubelet API to directly query system service logs on nodes. This reduces the attack surface and expedites log collection without requiring additional system access, thereby improving security.

Imagine running the following command to search logs for kubelet process-related errors:

kubectl get --raw "/api/v1/nodes/worker/proxy/logs/?query=kubelet&pattern=error"

With this command, logs from the kubelet process running on the “worker” node that specifically contain the keyword “error” are retrieved.
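For scripting, the same endpoint can be called programmatically through the API server; here is a minimal sketch of how the log-query URL is assembled (the helper name is made up, and tailLines is an assumption based on the NodeLogQuery feature's documented options):

```python
from urllib.parse import urlencode

def node_logs_url(node, query, pattern=None, tail_lines=None):
    """Build the kubelet log-query path served through the API server node proxy."""
    params = {"query": query}          # the service or log file to read
    if pattern:
        params["pattern"] = pattern    # filter lines by this pattern
    if tail_lines:
        params["tailLines"] = tail_lines
    return f"/api/v1/nodes/{node}/proxy/logs/?{urlencode(params)}"

# Reproduces the kubectl --raw path used above
print(node_logs_url("worker", "kubelet", pattern="error"))
# /api/v1/nodes/worker/proxy/logs/?query=kubelet&pattern=error
```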

AppArmor profile configurations using Pod Security Contexts

Within containers, AppArmor profiles offer a potent way to enforce application security policies. By enabling administrators to specify profiles directly within the PodSecurityContext and container.securityContext fields, Kubernetes 1.30 streamlines the configuration of AppArmor. As a result, policy management is simplified and beta AppArmor annotations are no longer required.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod-with-apparmor
spec:
  securityContext:
    appArmorProfile:
      type: Localhost
      # a custom profile that must already be loaded on the node
      localhostProfile: my-restricted-profile
  containers:
  - name: my-app
    image: my-app-image:latest
    securityContext:
      appArmorProfile:
        type: RuntimeDefault

Here, the pod as a whole is assigned the custom my-restricted-profile profile, while the my-app container overrides it with the container runtime's default profile. This provides granular control over AppArmor policies at both the pod and the container level.

Enhanced Pod Management

Node Memory Swap

Node memory swapping is now supported in Kubernetes 1.30. By allowing the kernel to use swap space on nodes for memory management, this may enhance system stability under memory pressure.

In Kubernetes 1.30, the node memory swap feature has been redesigned to prioritize stability while providing more control. With the introduction of LimitedSwap in place of UnlimitedSwap, Kubernetes offers a more controlled and predictable method for handling swap usage on Linux nodes. Don’t forget to assess your unique requirements prior to activating swap and to put appropriate monitoring procedures in place.

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
# ... other kubelet configuration
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap

Container resource based pod autoscaling

This feature enables horizontal pod autoscaling (HPA) driven by the CPU or memory metrics of individual containers rather than whole pods. This makes it possible to scale precisely according to what each container actually needs, and to get the most out of your Kubernetes clusters’ resource allocation and scaling strategy.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: ContainerResource
    containerResource:
      name: cpu
      container: web-container # target container within the pod
      target:
        type: Utilization
        averageUtilization: 80

The HPA watches the CPU usage of the web-container container in every pod of the deployment and scales the deployment to keep that container's average utilization at 80%. The container field is what pins the metric to a single container instead of the pod as a whole.
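The scaling decision itself follows the standard HPA rule, desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), applied here to the per-container metric. A minimal sketch (the function name is illustrative):

```python
import math

def desired_replicas(current_replicas, current_utilization, target_utilization):
    """Core HPA rule: scale the replica count proportionally to the
    ratio of observed metric to target metric, rounding up."""
    return math.ceil(current_replicas * current_utilization / target_utilization)

# 3 replicas whose web-container averages 120% CPU against an 80% target -> 5 replicas
print(desired_replicas(3, 120, 80))  # 5
```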

Dynamic resource allocation

Structured parameters increase the flexibility of resource allocation for pods. By defining resource requests and limits more precisely, developers can optimize the use of available resources.

In this case, the pod claims one GPU through a ResourceClaim (instantiated from a hypothetical gpu-claim-template ResourceClaimTemplate) and uses the standard memory resource definition to request 8Gi of memory.

apiVersion: v1
kind: Pod
metadata:
  name: my-gpu-app
spec:
  resourceClaims:
  - name: gpu
    source:
      resourceClaimTemplateName: gpu-claim-template
  containers:
  - name: gpu-container
    image: my-gpu-image:latest
    resources:
      claims:
      - name: gpu
      requests:
        memory: "8Gi"

DRA in Kubernetes 1.30 opens the door to a more dynamic and effective resource management environment with its structured parameters. As the feature develops, we should anticipate a broader audience and the emergence of a vibrant third-party resource driver ecosystem that meets a variety of application requirements.

To Conclude

Now, obviously I am not part of the AI fleet, so rather than writing down every single feature parameter in detail, I will redirect you to the best thing to exist after Ice Cream. THE DOCUMENTATION!

Connect with me?

Imran Roshan

\"\"

Upgrades!!! — Everything new with Kubernetes 1.30 was originally published in Google Cloud - Community on Medium, where people are continuing the conversation by highlighting and responding to this story.

",
"author"=>"Imran Roshan",
"link"=>"https://medium.com/google-cloud/upgrades-everything-new-with-kubernetes-1-30-b539ebfad4ea?source=rss----e52cf94d98af---4",
"published_date"=>Thu, 28 Mar 2024 10:20:01.000000000 UTC +00:00,
"image_url"=>nil,
"feed_url"=>"https://medium.com/google-cloud/upgrades-everything-new-with-kubernetes-1-30-b539ebfad4ea?source=rss----e52cf94d98af---4",
"language"=>nil,
"active"=>true,
"ricc_source"=>"feedjira::v1",
"created_at"=>Sun, 31 Mar 2024 20:53:34.199540000 UTC +00:00,
"updated_at"=>Mon, 21 Oct 2024 16:56:25.165126000 UTC +00:00,
"newspaper"=>"Google Cloud - Medium",
"macro_region"=>"Blogs"}