Commit 2f298d2

Fix trailing whitespace in scheduler section
1 parent e839bf7 commit 2f298d2

7 files changed: +46 -46 lines changed

content/en/docs/concepts/scheduling-eviction/api-eviction.md

Lines changed: 7 additions & 7 deletions

@@ -11,11 +11,11 @@ using a client of the {{
 creates an `Eviction` object, which causes the API server to terminate the Pod.
 
 API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)
-and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).
+and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).
 
 Using the API to create an Eviction object for a Pod is like performing a
 policy-controlled [`DELETE` operation](/docs/reference/kubernetes-api/workload-resources/pod-v1/#delete-delete-a-pod)
-on the Pod.
+on the Pod.
 
 ## Calling the Eviction API
 

@@ -75,13 +75,13 @@ checks and responds in one of the following ways:
 * `429 Too Many Requests`: the eviction is not currently allowed because of the
   configured {{}}.
   You may be able to attempt the eviction again later. You might also see this
-  response because of API rate limiting.
+  response because of API rate limiting.
 * `500 Internal Server Error`: the eviction is not allowed because there is a
   misconfiguration, like if multiple PodDisruptionBudgets reference the same Pod.
 
 If the Pod you want to evict isn't part of a workload that has a
 PodDisruptionBudget, the API server always returns `200 OK` and allows the
-eviction.
+eviction.
 
 If the API server allows the eviction, the Pod is deleted as follows:
 

@@ -103,12 +103,12 @@ If the API server allows the eviction, the Pod is deleted as follows:
 ## Troubleshooting stuck evictions
 
 In some cases, your applications may enter a broken state, where the Eviction
-API will only return `429` or `500` responses until you intervene. This can
-happen if, for example, a ReplicaSet creates pods for your application but new
+API will only return `429` or `500` responses until you intervene. This can
+happen if, for example, a ReplicaSet creates pods for your application but new
 pods do not enter a `Ready` state. You may also notice this behavior in cases
 where the last evicted Pod had a long termination grace period.
 
-If you notice stuck evictions, try one of the following solutions:
+If you notice stuck evictions, try one of the following solutions:
 
 * Abort or pause the automated operation causing the issue. Investigate the stuck
   application before you restart the operation.
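
For reference, the `Eviction` object described in the hunks above is submitted to the Pod's `eviction` subresource. The following is a minimal sketch, not content from this commit: the Pod name `quux`, the `default` namespace, and the grace period are placeholders.

```yaml
# Sketch of an API-initiated eviction (placeholder names).
# The body is POSTed (typically as JSON) to the Pod's eviction subresource:
#   /api/v1/namespaces/default/pods/quux/eviction
apiVersion: policy/v1
kind: Eviction
metadata:
  name: quux             # name of the Pod to evict (placeholder)
  namespace: default
deleteOptions:
  gracePeriodSeconds: 30 # optional; otherwise the Pod's own terminationGracePeriodSeconds applies
```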

content/en/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 25 additions & 25 deletions

@@ -96,7 +96,7 @@ define. Some of the benefits of affinity and anti-affinity include:
 The affinity feature consists of two types of affinity:
 
 - *Node affinity* functions like the `nodeSelector` field but is more expressive and
-  allows you to specify soft rules.
+  allows you to specify soft rules.
 - *Inter-pod affinity/anti-affinity* allows you to constrain Pods against labels
   on other Pods.
 

@@ -305,22 +305,22 @@ Pod affinity rule uses the "hard"
 `requiredDuringSchedulingIgnoredDuringExecution`, while the anti-affinity rule
 uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.
 
-The affinity rule specifies that the scheduler is allowed to place the example Pod
+The affinity rule specifies that the scheduler is allowed to place the example Pod
 on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
-where other Pods have been labeled with `security=S1`.
-For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
-consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
-assign the Pod to any node within Zone V, as long as there is at least one Pod within
-Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1`
+where other Pods have been labeled with `security=S1`.
+For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
+consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
+assign the Pod to any node within Zone V, as long as there is at least one Pod within
+Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1`
 labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.
 
-The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
+The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
 on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
-where other Pods have been labeled with `security=S2`.
-For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
-consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
-assigning the Pod to any node within Zone R, as long as there is at least one Pod within
-Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact
+where other Pods have been labeled with `security=S2`.
+For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
+consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
+assigning the Pod to any node within Zone R, as long as there is at least one Pod within
+Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact
 scheduling into Zone R if there are no Pods with `security=S2` labels.
 
 To get yourself more familiar with the examples of Pod affinity and anti-affinity,
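
The prose in the hunk above refers to an example Pod. A manifest along these lines expresses the two rules it describes; this is a sketch rather than content from this commit, and the Pod name and `pause` image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity        # placeholder name
spec:
  affinity:
    podAffinity:
      # "hard" rule: only schedule into a zone that already runs a Pod labeled security=S1
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values: ["S1"]
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      # "soft" rule: prefer to avoid zones that already run a Pod labeled security=S2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values: ["S2"]
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: registry.k8s.io/pause:3.8   # placeholder image
```
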
@@ -371,12 +371,12 @@ When you want to use it, you have to enable it via the
 {{< /note >}}
 
 Kubernetes includes an optional `matchLabelKeys` field for Pod affinity
-or anti-affinity. The field specifies keys for the labels that should match with the incoming Pod's labels,
+or anti-affinity. The field specifies keys for the labels that should match with the incoming Pod's labels,
 when satisfying the Pod (anti)affinity.
 
 The keys are used to look up values from the pod labels; those key-value labels are combined
 (using `AND`) with the match restrictions defined using the `labelSelector` field. The combined
-filtering selects the set of existing pods that will be taken into Pod (anti)affinity calculation.
+filtering selects the set of existing pods that will be taken into Pod (anti)affinity calculation.
 
 A common use case is to use `matchLabelKeys` with `pod-template-hash` (set on Pods
 managed as part of a Deployment, where the value is unique for each revision).

@@ -405,7 +405,7 @@ spec:
         # Only Pods from a given rollout are taken into consideration when calculating pod affinity.
         # If you update the Deployment, the replacement Pods follow their own affinity rules
         # (if there are any defined in the new Pod template)
-        matchLabelKeys:
+        matchLabelKeys:
         - pod-template-hash
 ```
 

@@ -422,7 +422,7 @@ When you want to use it, you have to enable it via the
 {{< /note >}}
 
 Kubernetes includes an optional `mismatchLabelKeys` field for Pod affinity
-or anti-affinity. The field specifies keys for the labels that should **not** match with the incoming Pod's labels,
+or anti-affinity. The field specifies keys for the labels that should **not** match with the incoming Pod's labels,
 when satisfying the Pod (anti)affinity.
 
 One example use case is to ensure Pods go to the topology domain (node, zone, etc) where only Pods from the same tenant or team are scheduled in.

@@ -438,22 +438,22 @@ metadata:
 ...
 spec:
   affinity:
-    podAffinity:
+    podAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
       # ensure that pods associated with this tenant land on the correct node pool
       - matchLabelKeys:
           - tenant
         topologyKey: node-pool
-    podAntiAffinity:
+    podAntiAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
       # ensure that pods associated with this tenant can't schedule to nodes used for another tenant
       - mismatchLabelKeys:
-          - tenant # whatever the value of the "tenant" label for this Pod, prevent
+          - tenant # whatever the value of the "tenant" label for this Pod, prevent
                    # scheduling to nodes in any pool where any Pod from a different
                    # tenant is running.
         labelSelector:
           # We have to have the labelSelector which selects only Pods with the tenant label,
-          # otherwise this Pod would hate Pods from daemonsets as well, for example,
+          # otherwise this Pod would hate Pods from daemonsets as well, for example,
           # which aren't supposed to have the tenant label.
           matchExpressions:
          - key: tenant
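
For readers following the `matchLabelKeys` hunks above, a fuller Deployment fragment showing where the field sits relative to `labelSelector` and `topologyKey` could look like the sketch below. The names, labels, and image are placeholders, and the field requires the feature gate mentioned in the hunk.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-server      # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-store            # placeholder label
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["web-store"]
            topologyKey: topology.kubernetes.io/zone
            # Restrict the match to Pods from the same rollout, because
            # pod-template-hash is unique per Deployment revision.
            matchLabelKeys:
            - pod-template-hash
      containers:
      - name: web-app
        image: registry.k8s.io/pause:3.8   # placeholder image
```
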
@@ -633,13 +633,13 @@ The following operators can only be used with `nodeAffinity`.
 
 | Operator | Behaviour |
 | :------------: | :-------------: |
-| `Gt` | The supplied value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
-| `Lt` | The supplied value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |
+| `Gt` | The supplied value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
+| `Lt` | The supplied value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |
 
 
 {{}}
-`Gt` and `Lt` operators will not work with non-integer values. If the given value
-doesn't parse as an integer, the pod will fail to get scheduled. Also, `Gt` and `Lt`
+`Gt` and `Lt` operators will not work with non-integer values. If the given value
+doesn't parse as an integer, the pod will fail to get scheduled. Also, `Gt` and `Lt`
 are not available for `podAffinity`.
 {{}}
 
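
As a companion to the `Gt`/`Lt` rows above, a node affinity term using `Gt` could be sketched as follows; the `example.com/cpu-cores` node label, the threshold, and the image are placeholders rather than content from this commit.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gt-operator-demo         # placeholder name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Schedule only onto nodes whose label value parses to an integer greater than 8.
          - key: example.com/cpu-cores   # placeholder node label
            operator: Gt
            values:
            - "8"
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```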

content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md

Lines changed: 4 additions & 4 deletions

@@ -64,7 +64,7 @@ and it cannot be prefixed with `system-`.
 
 A PriorityClass object can have any 32-bit integer value smaller than or equal
 to 1 billion. This means that the range of values for a PriorityClass object is
-from -2147483648 to 1000000000 inclusive. Larger numbers are reserved for
+from -2147483648 to 1000000000 inclusive. Larger numbers are reserved for
 built-in PriorityClasses that represent critical system Pods. A cluster
 admin should create one PriorityClass object for each such mapping that they want.
 
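For context on the value range discussed in this hunk, a user-defined PriorityClass keeps its value at or below 1000000000. The name, value, and description below are placeholders, not content from this commit.

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority            # placeholder name
value: 1000000                   # any 32-bit integer <= 1000000000 for user-defined classes
globalDefault: false
description: "Placeholder class for Pods that should outrank lower-priority workloads."
```
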
@@ -256,9 +256,9 @@ the Node is not considered for preemption.
 
 If a pending Pod has inter-pod {{< glossary_tooltip text="affinity" term_id="affinity" >}}
 to one or more of the lower-priority Pods on the Node, the inter-Pod affinity
-rule cannot be satisfied in the absence of those lower-priority Pods. In this case,
+rule cannot be satisfied in the absence of those lower-priority Pods. In this case,
 the scheduler does not preempt any Pods on the Node. Instead, it looks for another
-Node. The scheduler might find a suitable Node or it might not. There is no
+Node. The scheduler might find a suitable Node or it might not. There is no
 guarantee that the pending Pod can be scheduled.
 
 Our recommended solution for this problem is to create inter-Pod affinity only

@@ -361,7 +361,7 @@ to get evicted. The kubelet ranks pods for eviction based on the following facto
 
 1. Whether the starved resource usage exceeds requests
 1. Pod Priority
-1. Amount of resource usage relative to requests
+1. Amount of resource usage relative to requests
 
 See [Pod selection for kubelet eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction)
 for more details.

content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md

Lines changed: 6 additions & 6 deletions

@@ -9,7 +9,7 @@ weight: 40
 {{< feature-state for_k8s_version="v1.27" state="beta" >}}
 
 Pods were considered ready for scheduling once created. Kubernetes scheduler
-does its due diligence to find nodes to place all pending Pods. However, in a
+does its due diligence to find nodes to place all pending Pods. However, in a
 real-world case, some Pods may stay in a "miss-essential-resources" state for a long period.
 These Pods actually churn the scheduler (and downstream integrators like Cluster AutoScaler)
 in an unnecessary manner.

@@ -79,7 +79,7 @@ Given the test-pod doesn't request any CPU/memory resources, it's expected that
 transited from previous `SchedulingGated` to `Running`:
 
 ```none
-NAME       READY   STATUS    RESTARTS   AGE   IP         NODE
+NAME       READY   STATUS    RESTARTS   AGE   IP         NODE
 test-pod   1/1     Running   0          15s   10.0.0.4   node-2
 ```
 
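For readers unfamiliar with the gate being cleared in the hunk above, a gated Pod is declared roughly as follows. This is a sketch; the gate name `example.com/foo` and the image are placeholders rather than content from this commit.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  # While any scheduling gate remains, the Pod reports SchedulingGated
  # and the scheduler does not try to place it.
  schedulingGates:
  - name: example.com/foo     # placeholder gate name; removed by a controller when ready
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.6
```
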
@@ -94,8 +94,8 @@ scheduling. You can use `scheduler_pending_pods{queue="gated"}` to check the met
 {{< feature-state for_k8s_version="v1.27" state="beta" >}}
 
 You can mutate scheduling directives of Pods while they have scheduling gates, with certain constraints.
-At a high level, you can only tighten the scheduling directives of a Pod. In other words, the updated
-directives would cause the Pods to only be able to be scheduled on a subset of the nodes that it would
+At a high level, you can only tighten the scheduling directives of a Pod. In other words, the updated
+directives would cause the Pods to only be able to be scheduled on a subset of the nodes that it would
 previously match. More concretely, the rules for updating a Pod's scheduling directives are as follows:
 
 1. For `.spec.nodeSelector`, only additions are allowed. If absent, it will be allowed to be set.

@@ -107,8 +107,8 @@ previously match. More concretely, the rules for updating a Pod's scheduling dir
    or `fieldExpressions` are allowed, and no changes to existing `matchExpressions`
    and `fieldExpressions` will be allowed. This is because the terms in
    `.requiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms`, are ORed
-   while the expressions in `nodeSelectorTerms[].matchExpressions` and
-   `nodeSelectorTerms[].fieldExpressions` are ANDed.
+   while the expressions in `nodeSelectorTerms[].matchExpressions` and
+   `nodeSelectorTerms[].fieldExpressions` are ANDed.
 
 4. For `.preferredDuringSchedulingIgnoredDuringExecution`, all updates are allowed.
    This is because preferred terms are not authoritative, and so policy controllers
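
To illustrate rule 1 from the hunk above (only additions to `.spec.nodeSelector` while a Pod is gated), a sketch of an allowed update follows; the gate name and label values are placeholders.

```yaml
# Pod as created: gated, with an initial nodeSelector.
spec:
  schedulingGates:
  - name: example.com/foo                           # placeholder gate name
  nodeSelector:
    disktype: ssd
---
# Allowed update while the Pod is gated: adding a key only tightens the directive.
spec:
  schedulingGates:
  - name: example.com/foo
  nodeSelector:
    disktype: ssd
    topology.kubernetes.io/zone: antarctica-east1   # placeholder zone label
# Removing or changing an existing nodeSelector entry is not an addition,
# so such an update is rejected.
```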

content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md

Lines changed: 2 additions & 2 deletions

@@ -57,8 +57,8 @@ the `NodeResourcesFit` score function can be controlled by the
 Within the `scoringStrategy` field, you can configure two parameters: `requestedToCapacityRatio` and
 `resources`. The `shape` in the `requestedToCapacityRatio`
 parameter allows the user to tune the function as least requested or most
-requested based on `utilization` and `score` values. The `resources` parameter
-comprises both the `name` of the resource to be considered during scoring and
+requested based on `utilization` and `score` values. The `resources` parameter
+comprises both the `name` of the resource to be considered during scoring and
 its corresponding `weight`, which specifies the weight of each resource.
 
 Below is an example configuration that sets
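
The hunk ends just before the example the page refers to. For orientation, a configuration in the shape the prose describes might look like the sketch below, assuming `kubescheduler.config.k8s.io/v1` and a most-requested style curve; the resource names and weights are placeholders.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: RequestedToCapacityRatio
        # Higher score at higher utilization favors bin packing (most requested).
        requestedToCapacityRatio:
          shape:
          - utilization: 0
            score: 0
          - utilization: 100
            score: 10
        resources:
        - name: cpu                    # placeholder resources and weights
          weight: 1
        - name: example.com/device
          weight: 3
```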

content/en/docs/concepts/scheduling-eviction/scheduling-framework.md

Lines changed: 1 addition & 1 deletion

@@ -83,7 +83,7 @@ the Pod is put into the active queue or the backoff queue
 so that the scheduler will retry the scheduling of the Pod.
 
 {{< note >}}
-QueueingHint evaluation during scheduling is a beta-level feature.
+QueueingHint evaluation during scheduling is a beta-level feature.
 The v1.28 release series initially enabled the associated feature gate; however, after the
 discovery of an excessive memory footprint, the Kubernetes project set that feature gate
 to be disabled by default. In Kubernetes {{< skew currentVersion >}}, this feature gate is

content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md

Lines changed: 1 addition & 1 deletion

@@ -99,7 +99,7 @@ your cluster. Those fields are:
 {{< note >}}
 The `MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
 enables `minDomains` for pod topology spread. Starting from v1.28,
-the `MinDomainsInPodTopologySpread` gate
+the `MinDomainsInPodTopologySpread` gate
 is enabled by default. In older Kubernetes clusters it might be explicitly
 disabled or the field might not be available.
 {{< /note >}}
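
For context on the `minDomains` field the note above refers to, a constraint using it could be sketched as follows; `minDomains` only has an effect together with `whenUnsatisfiable: DoNotSchedule`, and the labels and image here are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo              # placeholder name
  labels:
    app: demo
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    minDomains: 3                # treat fewer than 3 matching zones as skew
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```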
