Using a volume with CronJobs
Purpose
This note collects the things I find while working with K8s: short notes and links for resolving each problem. Look for the details in its own directory if a topic has one.
CronJob --> creates Job (triggered on schedule) --> Pod
In this flow, the Pod can mount the volume successfully before the script runs. But if you apply the same volume to a standalone pod, it may not work: your command can run faster than the mount completes. Check the details in this link
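A minimal sketch of a CronJob whose pod mounts a volume before the job's script runs; the name, schedule, image, and the PVC `data-pvc` are assumptions for illustration:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-job          # hypothetical name
spec:
  schedule: "0 2 * * *"     # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: busybox
            command: ["sh", "-c", "cp -r /data /backup"]
            volumeMounts:
            - name: data
              mountPath: /data
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: data-pvc   # assumed pre-existing PVC
```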
Do Kubernetes Pods Really Get Evicted Due to CPU Pressure?
Reference article: Do Kubernetes Pods Really Get Evicted Due to CPU Pressure?
Tip
Pods are not directly evicted due to high CPU pressure or usage alone. Instead, Kubernetes relies on CPU throttling mechanisms to manage and limit a pod's CPU usage, ensuring fair resource sharing among pods on the same node.
While high CPU usage by a pod can indirectly contribute to resource pressure and potentially lead to eviction due to memory or other resource shortages, CPU throttling is the primary mechanism used to manage CPU-intensive workloads.
# Restart StatefulSet workload
Related link
Notice
- Do not delete the `statefulset` workload itself: it will scale down to 0 and not come back up. Instead, delete only the pods; they will restart based on the `statefulset` strategy
- Rolling out a `statefulset` does not work when the status of the `statefulset` is `completed`
- Deleting pods in a `statefulset` will not remove the associated volumes
Note
Deleting the PVC after the pods have terminated might trigger deletion of the backing Persistent Volumes depending on the storage class and reclaim policy. You should never assume ability to access a volume after claim deletion.
Note: Use caution when deleting a PVC, as it may lead to data loss.
- Complete deletion of a `StatefulSet`
To delete everything in a `StatefulSet`, including the associated pods, you can run a series of commands similar to the following:
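A sequence similar to the one in the Kubernetes docs; the label `app.kubernetes.io/name=MyApp` is an assumption, replace it with whatever selects your StatefulSet:

```shell
# Find the pods that belong to the StatefulSet
kubectl get pods -l app.kubernetes.io/name=MyApp

# Delete the StatefulSet together with its headless Service
kubectl delete statefulset,service -l app.kubernetes.io/name=MyApp

# Finally delete the PVCs -- depending on the reclaim policy this
# may delete the backing volumes, so expect possible data loss
kubectl delete pvc -l app.kubernetes.io/name=MyApp
```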
Create troubleshooting pods
You can create standalone pods with no deployments for purposes such as:
- Checking and validating networking in a node or the cluster, like DNS resolution and health checks
- Backing up and restoring a DB
- Debugging or accessing an internal service
To do that, you need to use `kubectl`:
1. Use `kubectl` to create the pod manifest
2. Customize your pod; to keep it alive, set the pod's command to `tail -f /dev/null`
3. Run the `apply` command with the manifest
4. Wait a few seconds, then exec into the pod
5. Once you've finished testing, press Ctrl+D to leave the terminal session in the Pod. The pod will continue running afterwards; you can repeat step 4 or delete it
NOTE: `curlimages/curl` is commonly used. Try to pick a small image so the pod starts as fast as possible.
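The steps above can be sketched as follows; the pod name `debug-pod` is an assumption:

```shell
# 1-2. Generate a manifest for a curl pod kept alive with tail
kubectl run debug-pod --image=curlimages/curl --restart=Never \
  --dry-run=client -o yaml --command -- tail -f /dev/null > debug-pod.yaml

# 3. Apply the manifest
kubectl apply -f debug-pod.yaml

# 4. Exec into the pod (Ctrl+D to leave the session)
kubectl exec -it debug-pod -- sh

# 5. Delete the pod when you are done
kubectl delete pod debug-pod
```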
Stop or run the Cronjob with patch
As you can see, a `cronjob` is a scheduled Kubernetes workload that triggers at set times to execute a specific job. But sometimes, during working hours, your test job shouldn't run, so you will care about the suspend state of the job. You can update that state with a `patch` command, and enable the job again by changing `true` back to `false`.
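For example, with placeholder name and namespace:

```shell
# Suspend the CronJob so no new Jobs are created
kubectl patch cronjob <name> -n <namespace> -p '{"spec":{"suspend":true}}'

# Resume it again
kubectl patch cronjob <name> -n <namespace> -p '{"spec":{"suspend":false}}'
```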
Furthermore, you can use `patch` for multiple purposes:
- Update a container's image
- Partially update a node
- Disable a deployment's livenessProbe using a JSON patch
- Update a deployment's replica count
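A few sketches of those patches; the deployment name `my-app`, the container name, and the index are assumptions:

```shell
# Update a container's image (strategic merge patch)
kubectl patch deployment my-app -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"nginx:1.25"}]}}}}'

# Disable a livenessProbe using a JSON patch (container index 0 assumed)
kubectl patch deployment my-app --type=json \
  -p='[{"op":"remove","path":"/spec/template/spec/containers/0/livenessProbe"}]'

# Update the replica count
kubectl patch deployment my-app -p '{"spec":{"replicas":3}}'
```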
Updating resources
You can handle graceful restarts and version rollbacks with the `rollout` command
# Gracefully restart deployments, statefulsets and daemonsets
kubectl rollout restart -n <namespace> <type-workload>/<name>
# Rollback version
kubectl rollout undo <type-workload>/<name>
kubectl rollout undo <type-workload>/<name> --to-revision=2
# Check the rollout status
kubectl rollout status -w <type-workload>/<name>
Kubernetes has metadata values that help distinguish services from each other, specify identifying attributes of objects, attach arbitrary non-identifying metadata to objects, and so on:
- Labels
- Annotations
You can update these with `kubectl` via the `label` and `annotate` commands
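For example, with placeholder resource names:

```shell
# Add or overwrite a label
kubectl label pod <name> environment=dev --overwrite

# Add an annotation
kubectl annotate pod <name> description='debug pod for DNS checks'
```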
Next, you can set up autoscaling for a deployment with the `autoscale` command
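For example:

```shell
# Create a HorizontalPodAutoscaler keeping 2-5 replicas at 80% CPU
kubectl autoscale deployment <name> --min=2 --max=5 --cpu-percent=80
```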
Edit YAML manifest
`kubectl` can help you change a manifest directly from your shell. If you are a Linux or macOS user, you can use `nano` or `vim` as the editor.
When you save and exit, your workload or resource changes immediately
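For example:

```shell
# Open the live manifest in your preferred editor
KUBE_EDITOR=vim kubectl edit deployment <name> -n <namespace>
```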
Delete resource
Use the `delete` command to remove a resource
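For example:

```shell
kubectl delete deployment <name> -n <namespace>
# or delete whatever a manifest file defines
kubectl delete -f manifest.yaml
```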
Health check and interact with cluster, node and workload
Use the `events` command to detect what is happening on a cluster node
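For example:

```shell
# List recent events, newest last
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp
```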
If the status of a workload is not `available` or `running`, you can use `describe` for a verbose check of the workload
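For example:

```shell
# Shows status, events, probe failures, scheduling problems, etc.
kubectl describe pod <name> -n <namespace>
```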
When the problem does not come from the workload status itself, you can check the logs to extract more information
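For example:

```shell
# Follow the last 100 lines of a pod's logs
kubectl logs <pod> -n <namespace> -f --tail=100

# Logs from the previous (crashed) container instance
kubectl logs <pod> -n <namespace> --previous
```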
If you have checked the workload, especially pods and containers, without results, go back and check resource usage on the cluster. Before doing that, make sure you have installed the required tools so the commands are available
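For example (requires metrics-server to be installed, see the setup section in this note):

```shell
kubectl top node
kubectl top pod -n <namespace>
```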
`kubectl` can also help you disable or manipulate nodes, with commands such as `cordon`, `drain`, and `uncordon`
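For example:

```shell
# Mark a node unschedulable
kubectl cordon <node>

# Evict its pods before maintenance
kubectl drain <node> --ignore-daemonsets --delete-emptydir-data

# Make it schedulable again
kubectl uncordon <node>
```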
Tips
To explore more, you can do lots of things with `kubectl`. To read and understand a command, use the built-in manual via the `--help` flag
Setup metrics-server
The metrics server is a part you must install yourself if you self-host your Kubernetes cluster, which means you need to learn how to set up `metrics-server`, and this is quite easy. Read more at metrics-server
Via `kubectl`, you can apply the manifest
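The release manifest can be applied directly (URL as documented in the metrics-server README):

```shell
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```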
Or you can use `helm` to release the `metrics-server` chart
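For example, using the chart repo documented by the metrics-server project:

```shell
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm upgrade --install metrics-server metrics-server/metrics-server -n kube-system
```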
Warning
Your `metrics-server` may get stuck, because it fails TLS authentication with `kube-apiserver`.
But don't worry about it; you can bypass this with a small trick. Read more about the solution at
The solution is to use the `edit` command of `kubectl` to edit the manifest of the `metrics-server` deployment, like this
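For example:

```shell
kubectl edit deployment metrics-server -n kube-system
```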
Now scroll to the `args` of the `metrics-server` container and change them to include
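The flag below disables TLS verification against the kubelet; this is acceptable for labs or self-hosted clusters, but not recommended for production:

```yaml
args:
  # keep the existing args as-is and add this flag:
  - --kubelet-insecure-tls
```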
And now your `metrics-server` will restart and be running after about 30s
To learn more about Kubernetes metrics, read the article Kubernetes' Native Metrics and States
Configure Liveness, Readiness and Startup Probes
Kubernetes implements multiple probe types for health checking your applications. See more at Liveness, Readiness and Startup Probes
If you want to learn about configuration, use this documentation
Tip
Probes have a number of fields that you can use to more precisely control the behavior of startup, liveness and readiness checks
Liveness
Info
Liveness probes determine when to restart a container. For example, liveness probes could catch a deadlock, when an application is running, but unable to make progress.
If a container fails its liveness probe repeatedly, the kubelet restarts the container.
You can set up a `liveness` probe with a command (exec) configuration
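A minimal sketch modeled on the Kubernetes probes documentation; the pod name, image, and file path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: app
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # healthy while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
```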
Or you can use a `liveness` probe with an HTTP request configuration
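A container-level sketch; the path and port are assumptions:

```yaml
livenessProbe:
  httpGet:
    path: /healthz      # assumed health endpoint
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 10
```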
You can use other protocols with `liveness` probes as well, such as TCP
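A TCP-based sketch; the port is an assumption:

```yaml
livenessProbe:
  tcpSocket:
    port: 5432          # probe succeeds if the TCP connection opens
  initialDelaySeconds: 10
  periodSeconds: 15
```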
Readiness
Info
Readiness probes determine when a container is ready to start accepting traffic. This is useful when waiting for an application to perform time-consuming initial tasks, such as establishing network connections, loading files, and warming caches.
If the readiness probe returns a failed state, Kubernetes removes the pod from all matching service endpoints.
You can try configuring a `readiness` probe with
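For example, an exec-based readiness probe; the file path is illustrative:

```yaml
readinessProbe:
  exec:
    command: ["cat", "/tmp/ready"]   # ready once the app creates this file
  initialDelaySeconds: 5
  periodSeconds: 5
```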
Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.
Info
Readiness and liveness probes can be used in parallel for the same container. Using both can ensure that traffic does not reach a container that is not ready for it, and that containers are restarted when they fail.
Note
Readiness probes run on the container during its whole lifecycle.
Startup
Info
A startup probe verifies whether the application within a container has started. It can be used for liveness checks on slow-starting containers, preventing them from being killed by the kubelet before they are up and running.
If such a probe is configured, it disables liveness and readiness checks until it succeeds.
You can configure it for your pod with a configuration like
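A container-level sketch; the endpoint, port, and thresholds are assumptions:

```yaml
startupProbe:
  httpGet:
    path: /healthz      # assumed endpoint
    port: 8080
  failureThreshold: 30  # allow up to 30 * 10s = 300s to start
  periodSeconds: 10
```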
Mostly, startup probes help Kubernetes protect slow-starting containers
Note
This type of probe is only executed at startup, unlike readiness probes, which are run periodically