
Commit b8541d2: Tune links in tasks section (1/2)

1 parent fcd8af9
24 files changed: +109 -119 lines changed

content/en/docs/tasks/_index.md

Lines changed: 1 addition & 1 deletion
@@ -12,4 +12,4 @@ show how to do individual tasks. A task page shows how to do a
 single thing, typically by giving a short sequence of steps.
 
 If you would like to write a task page, see
-[Creating a Documentation Pull Request](/docs/home/contribute/create-pull-request/).
+[Creating a Documentation Pull Request](/docs/contribute/new-content/open-a-pr/).

content/en/docs/tasks/debug-application-cluster/audit.md

Lines changed: 2 additions & 6 deletions
@@ -375,7 +375,7 @@ different audit policies.
 
 ### Use fluentd to collect and distribute audit events from log file
 
-[Fluentd](http://www.fluentd.org/) is an open source data collector for unified logging layer.
+[Fluentd](https://www.fluentd.org/) is an open source data collector for unified logging layer.
 In this example, we will use fluentd to split audit events by different namespaces.
 
 {{< note >}}
@@ -503,7 +503,7 @@ different users into different files.
 bin/logstash -f /etc/logstash/config --path.settings /etc/logstash/
 ```
 
-1. create a [kubeconfig file](/docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig/) for kube-apiserver webhook audit backend
+1. create a [kubeconfig file](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) for kube-apiserver webhook audit backend
 
 cat <<EOF > /etc/kubernetes/audit-webhook-kubeconfig
 apiVersion: v1
@@ -537,9 +537,5 @@ plugin which supports full-text search and analytics.
 
 ## {{% heading "whatsnext" %}}
 
-
-Visit [Auditing with Falco](/docs/tasks/debug-application-cluster/falco).
-
 Learn about [Mutating webhook auditing annotations](/docs/reference/access-authn-authz/extensible-admission-controllers/#mutating-webhook-auditing-annotations).
 
-
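For context on the kubeconfig step in the hunk above, a minimal sketch of what such a webhook-backend kubeconfig might contain; the logstash endpoint, port, and empty user are illustrative assumptions, not values from this commit:

```shell
# Sketch only: <logstash-host> is a placeholder for your log collector.
cat <<EOF > /etc/kubernetes/audit-webhook-kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://<logstash-host>:8888
  name: logstash
contexts:
- context:
    cluster: logstash
    user: ""
  name: default-context
current-context: default-context
preferences: {}
users: []
EOF
```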

content/en/docs/tasks/debug-application-cluster/debug-application.md

Lines changed: 9 additions & 11 deletions
@@ -10,10 +10,7 @@ content_type: concept
 
 This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly.
 This is *not* a guide for people who want to debug their cluster. For that you should check out
-[this guide](/docs/admin/cluster-troubleshooting).
-
-
-
+[this guide](/docs/tasks/debug-application-cluster/debug-cluster).
 
 
 
@@ -46,7 +43,8 @@ there are insufficient resources of one type or another that prevent scheduling
 your pod. Reasons include:
 
 * **You don't have enough resources**: You may have exhausted the supply of CPU or Memory in your cluster, in this case
-you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See [Compute Resources document](/docs/user-guide/compute-resources/#my-pods-are-pending-with-event-message-failedscheduling) for more information.
+you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See
+[Compute Resources document](/docs/concepts/configuration/manage-resources-containers/) for more information.
 
 * **You are using `hostPort`**: When you bind a Pod to a `hostPort` there are a limited number of places that pod can be
 scheduled. In most cases, `hostPort` is unnecessary, try using a Service object to expose your Pod. If you do require
@@ -161,13 +159,13 @@ check:
 * Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP.
 * Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080.
 
-
-
 ## {{% heading "whatsnext" %}}
 
+If none of the above solves your problem, follow the instructions in
+[Debugging Service document](/docs/tasks/debug-application-cluster/debug-service/)
+to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are
+actually serving; you have DNS working, iptables rules installed, and kube-proxy
+does not seem to be misbehaving.
 
-If none of the above solves your problem, follow the instructions in [Debugging Service document](/docs/user-guide/debugging-services) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving.
-
-You may also visit [troubleshooting document](/docs/troubleshooting/) for more information.
-
+You may also visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/) for more information.
 
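As a concrete starting point for the pending-Pod checks discussed above, a sketch using standard kubectl commands (`<pod-name>` is a placeholder):

```shell
# FailedScheduling and other scheduling events appear in the Events section.
kubectl describe pod <pod-name>

# Or list recent events for the whole namespace, oldest first.
kubectl get events --sort-by=.metadata.creationTimestamp
```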

content/en/docs/tasks/debug-application-cluster/debug-cluster.md

Lines changed: 1 addition & 4 deletions
@@ -10,10 +10,7 @@ content_type: concept
 This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
 problem you are experiencing. See
 the [application troubleshooting guide](/docs/tasks/debug-application-cluster/debug-application) for tips on application debugging.
-You may also visit [troubleshooting document](/docs/troubleshooting/) for more information.
-
-
-
+You may also visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/) for more information.
 
 
 

content/en/docs/tasks/debug-application-cluster/debug-init-containers.md

Lines changed: 3 additions & 7 deletions
@@ -15,22 +15,18 @@ content_type: task
 
 This page shows how to investigate problems related to the execution of
 Init Containers. The example command lines below refer to the Pod as
-`<pod-name>` and the Init Containers as `<init-container-1>` and
-`<init-container-2>`.
-
-
+`<pod-name>` and the Init Containers as `<init-container-1>` and
+`<init-container-2>`.
 
 ## {{% heading "prerequisites" %}}
 
 
 {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
 
 * You should be familiar with the basics of
-[Init Containers](/docs/concepts/abstractions/init-containers/).
+[Init Containers](/docs/concepts/workloads/pods/init-containers/).
 * You should have [Configured an Init Container](/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container/).
 
-
-
 
 
 ## Checking the status of Init Containers
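A sketch of the status checks this page goes on to describe (`<pod-name>` is a placeholder):

```shell
# Init:N/M in the STATUS column shows how many Init Containers have finished.
kubectl get pod <pod-name>

# Inspect the detailed status of each Init Container.
kubectl get pod <pod-name> --template '{{.status.initContainerStatuses}}'
```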

content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md

Lines changed: 2 additions & 6 deletions
@@ -9,8 +9,6 @@ content_type: task
 
 This page shows how to debug Pods and ReplicationControllers.
 
-
-
 ## {{% heading "prerequisites" %}}
 
 
@@ -20,8 +18,6 @@ This page shows how to debug Pods and ReplicationControllers.
 {{< glossary_tooltip text="Pods" term_id="pod" >}} and with
 Pods' [lifecycles](/docs/concepts/workloads/pods/pod-lifecycle/).
 
-
-
 
 
 ## Debugging Pods
@@ -51,9 +47,9 @@ can not schedule your pod. Reasons include:
 You may have exhausted the supply of CPU or Memory in your cluster. In this
 case you can try several things:
 
-* [Add more nodes](/docs/admin/cluster-management/#resizing-a-cluster) to the cluster.
+* [Add more nodes](/docs/tasks/administer-cluster/cluster-management/#resizing-a-cluster) to the cluster.
 
-* [Terminate unneeded pods](/docs/user-guide/pods/single-container/#deleting_a_pod)
+* [Terminate unneeded pods](/docs/concepts/workloads/pods/#pod-termination)
 to make room for pending pods.
 
 * Check that the pod is not larger than your nodes. For example, if all
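To see whether exhausted capacity really is the blocker before adding nodes or terminating pods, a sketch of the usual check (`<unneeded-pod>` is a placeholder):

```shell
# Compare each node's allocatable capacity with what is already requested.
kubectl describe nodes

# Free capacity by terminating a pod you no longer need.
kubectl delete pod <unneeded-pod>
```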

content/en/docs/tasks/debug-application-cluster/debug-service.md

Lines changed: 4 additions & 9 deletions
@@ -13,9 +13,6 @@ Deployment (or other workload controller) and created a Service, but you
 get no response when you try to access it. This document will hopefully help
 you to figure out what's going wrong.
 
-
-
-
 
 
 ## Running commands in a Pod
@@ -658,7 +655,7 @@ This might sound unlikely, but it does happen and it is supposed to work.
 This can happen when the network is not properly configured for "hairpin"
 traffic, usually when `kube-proxy` is running in `iptables` mode and Pods
 are connected with bridge network. The `Kubelet` exposes a `hairpin-mode`
-[flag](/docs/admin/kubelet/) that allows endpoints of a Service to loadbalance
+[flag](/docs/reference/command-line-tools-reference/kubelet/) that allows endpoints of a Service to loadbalance
 back to themselves if they try to access their own Service VIP. The
 `hairpin-mode` flag must either be set to `hairpin-veth` or
 `promiscuous-bridge`.
@@ -724,15 +721,13 @@ Service is not working. Please let us know what is going on, so we can help
 investigate!
 
 Contact us on
-[Slack](/docs/troubleshooting/#slack) or
+[Slack](/docs/tasks/debug-application-cluster/troubleshooting/#slack) or
 [Forum](https://discuss.kubernetes.io) or
 [GitHub](https://github.com/kubernetes/kubernetes).
 
-
-
 ## {{% heading "whatsnext" %}}
 
-
-Visit [troubleshooting document](/docs/troubleshooting/) for more information.
+Visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/)
+for more information.
 
 
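For the hairpin case discussed in the hunk above, a sketch of how the effective setting can be confirmed; `cbr0` is an assumed bridge name and paths vary by environment:

```shell
# See whether the running kubelet was started with an explicit hairpin mode.
ps auxw | grep kubelet | tr ' ' '\n' | grep -- '--hairpin-mode'

# For hairpin-veth, check hairpin mode on the bridge's interfaces.
for intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done
```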

content/en/docs/tasks/debug-application-cluster/debug-stateful-set.md

Lines changed: 1 addition & 13 deletions
@@ -12,19 +12,13 @@ content_type: task
 ---
 
 
-
 This task shows you how to debug a StatefulSet.
 
-
-
 ## {{% heading "prerequisites" %}}
 
-
 * You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster.
 * You should have a StatefulSet running that you want to investigate.
 
-
-
 
 
 ## Debugging a StatefulSet
@@ -37,18 +31,12 @@ kubectl get pods -l app=myapp
 ```
 
 If you find that any Pods listed are in `Unknown` or `Terminating` state for an extended period of time,
-refer to the [Deleting StatefulSet Pods](/docs/tasks/manage-stateful-set/delete-pods/) task for
+refer to the [Deleting StatefulSet Pods](/docs/tasks/run-application/delete-stateful-set/) task for
 instructions on how to deal with them.
 You can debug individual Pods in a StatefulSet using the
 [Debugging Pods](/docs/tasks/debug-application-cluster/debug-pod-replication-controller/) guide.
 
-
-
 ## {{% heading "whatsnext" %}}
 
-
 Learn more about [debugging an init-container](/docs/tasks/debug-application-cluster/debug-init-containers/).
 
-
-
-

content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md

Lines changed: 4 additions & 3 deletions
@@ -10,7 +10,7 @@ title: Logging Using Elasticsearch and Kibana
 
 On the Google Compute Engine (GCE) platform, the default logging support targets
 [Stackdriver Logging](https://cloud.google.com/logging/), which is described in detail
-in the [Logging With Stackdriver Logging](/docs/user-guide/logging/stackdriver).
+in the [Logging With Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver).
 
 This article describes how to set up a cluster to ingest logs into
 [Elasticsearch](https://www.elastic.co/products/elasticsearch) and view
@@ -90,7 +90,8 @@ Elasticsearch, and is part of a service named `kibana-logging`.
 
 The Elasticsearch and Kibana services are both in the `kube-system` namespace
 and are not directly exposed via a publicly reachable IP address. To reach them,
-follow the instructions for [Accessing services running in a cluster](/docs/concepts/cluster-administration/access-cluster/#accessing-services-running-on-the-cluster).
+follow the instructions for
+[Accessing services running in a cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster).
 
 If you try accessing the `elasticsearch-logging` service in your browser, you'll
 see a status page that looks something like this:
@@ -102,7 +103,7 @@ like. See [Elasticsearch's documentation](https://www.elastic.co/guide/en/elasti
 for more details on how to do so.
 
 Alternatively, you can view your cluster's logs using Kibana (again using the
-[instructions for accessing a service running in the cluster](/docs/user-guide/accessing-the-cluster/#accessing-services-running-on-the-cluster)).
+[instructions for accessing a service running in the cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster)).
 The first time you visit the Kibana URL you will be presented with a page that
 asks you to configure your view of the ingested logs. Select the option for
 timeseries values and select `@timestamp`. On the following page select the
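One way to follow those access instructions from a workstation, as a sketch (the service and namespace names are the ones this page uses; the local port is arbitrary):

```shell
# Open an authenticated proxy to the API server.
kubectl proxy --port=8080 &

# Reach the in-cluster Elasticsearch service through the proxy.
curl http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
```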

content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md

Lines changed: 3 additions & 3 deletions
@@ -317,8 +317,8 @@ After some time, Stackdriver Logging agent pods will be restarted with the new c
 ### Changing fluentd parameters
 
 Fluentd configuration is stored in the `ConfigMap` object. It is effectively a set of configuration
-files that are merged together. You can learn about fluentd configuration on the [official
-site](http://docs.fluentd.org).
+files that are merged together. You can learn about fluentd configuration on the
+[official site](https://docs.fluentd.org).
 
 Imagine you want to add a new parsing logic to the configuration, so that fluentd can understand
 default Python logging format. An appropriate fluentd filter looks similar to this:
@@ -356,7 +356,7 @@ using [guide above](#changing-daemonset-parameters).
 ### Adding fluentd plugins
 
 Fluentd is written in Ruby and allows to extend its capabilities using
-[plugins](http://www.fluentd.org/plugins). If you want to use a plugin, which is not included
+[plugins](https://www.fluentd.org/plugins). If you want to use a plugin, which is not included
 in the default Stackdriver Logging container image, you have to build a custom image. Imagine
 you want to add Kafka sink for messages from a particular container for additional processing.
 You can re-use the default [container image sources](https://git.k8s.io/contrib/fluentd/fluentd-gcp-image)
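A hypothetical sketch of such a custom-image build; the base image tag, registry placeholder, and choice of the `fluent-plugin-kafka` gem are illustrative assumptions, not part of this commit:

```shell
# Extend the fluentd-gcp image with a Kafka output plugin, then publish it.
cat > Dockerfile <<'EOF'
# Base image and tag are assumptions; use the fluentd-gcp release you run.
FROM gcr.io/google-containers/fluentd-gcp:2.0.17
RUN gem install fluent-plugin-kafka --no-document
EOF
docker build -t <your-registry>/fluentd-gcp-kafka:1.0 .
docker push <your-registry>/fluentd-gcp-kafka:1.0
```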

content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md

Lines changed: 16 additions & 8 deletions
@@ -13,9 +13,6 @@ are available in Kubernetes through the Metrics API. These metrics can be either
 by user, for example by using `kubectl top` command, or used by a controller in the cluster, e.g.
 Horizontal Pod Autoscaler, to make decisions.
 
-
-
-
 
 
 ## The Metrics API
@@ -41,11 +38,19 @@ The API requires metrics server to be deployed in the cluster. Otherwise it will
 
 ### CPU
 
-CPU is reported as the average usage, in [CPU cores](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu), over a period of time. This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in both Linux and Windows kernels). The kubelet chooses the window for the rate calculation.
+CPU is reported as the average usage, in
+[CPU cores](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu),
+over a period of time. This value is derived by taking a rate over a cumulative CPU counter
+provided by the kernel (in both Linux and Windows kernels).
+The kubelet chooses the window for the rate calculation.
 
 ### Memory
 
-Memory is reported as the working set, in bytes, at the instant the metric was collected. In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure. However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate. It includes all anonymous (non-file-backed) memory since kubernetes does not support swap. The metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim such pages.
+Memory is reported as the working set, in bytes, at the instant the metric was collected.
+In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure.
+However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate.
+It includes all anonymous (non-file-backed) memory since kubernetes does not support swap.
+The metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim such pages.
 
 ## Metrics Server
 
@@ -54,9 +59,12 @@ It is deployed by default in clusters created by `kube-up.sh` script
 as a Deployment object. If you use a different Kubernetes setup mechanism you can deploy it using the provided
 [deployment components.yaml](https://github.com/kubernetes-sigs/metrics-server/releases) file.
 
-Metric server collects metrics from the Summary API, exposed by [Kubelet](/docs/admin/kubelet/) on each node.
+Metric server collects metrics from the Summary API, exposed by
+[Kubelet](/docs/reference/command-line-tools-reference/kubelet/) on each node.
 
 Metrics Server is registered with the main API server through
-[Kubernetes aggregator](/docs/concepts/api-extension/apiserver-aggregation/).
+[Kubernetes aggregator](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
+
+Learn more about the metrics server in
+[the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).
 
-Learn more about the metrics server in [the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).
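To make the Metrics API and aggregation discussion concrete, a sketch of two ways to read the same data (both assume metrics-server is deployed):

```shell
# Human-readable view served by the resource metrics pipeline.
kubectl top nodes

# Raw Metrics API, served through the aggregation layer.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```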

content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md

Lines changed: 9 additions & 6 deletions
@@ -10,29 +10,32 @@ title: Tools for Monitoring Resources
 To scale an application and provide a reliable service, you need to
 understand how the application behaves when it is deployed. You can examine
 application performance in a Kubernetes cluster by examining the containers,
-[pods](/docs/user-guide/pods), [services](/docs/user-guide/services), and
+[pods](/docs/concepts/workloads/pods/),
+[services](/docs/concepts/services-networking/service/), and
 the characteristics of the overall cluster. Kubernetes provides detailed
 information about an application's resource usage at each of these levels.
 This information allows you to evaluate your application's performance and
 where bottlenecks can be removed to improve overall performance.
 
-
-
 
 
-In Kubernetes, application monitoring does not depend on a single monitoring solution. On new clusters, you can use [resource metrics](#resource-metrics-pipeline) or [full metrics](#full-metrics-pipeline) pipelines to collect monitoring statistics.
+In Kubernetes, application monitoring does not depend on a single monitoring solution.
+On new clusters, you can use [resource metrics](#resource-metrics-pipeline) or
+[full metrics](#full-metrics-pipeline) pipelines to collect monitoring statistics.
 
 ## Resource metrics pipeline
 
 The resource metrics pipeline provides a limited set of metrics related to
-cluster components such as the [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale) controller, as well as the `kubectl top` utility.
+cluster components such as the
+[Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/)
+controller, as well as the `kubectl top` utility.
 These metrics are collected by the lightweight, short-term, in-memory
 [metrics-server](https://github.com/kubernetes-incubator/metrics-server) and
 are exposed via the `metrics.k8s.io` API.
 
 metrics-server discovers all nodes on the cluster and
 queries each node's
-[kubelet](/docs/reference/command-line-tools-reference/kubelet) for CPU and
+[kubelet](/docs/reference/command-line-tools-reference/kubelet/) for CPU and
 memory usage. The kubelet acts as a bridge between the Kubernetes master and
 the nodes, managing the pods and containers running on a machine. The kubelet
 translates each pod into its constituent containers and fetches individual
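As an entry point to the resource metrics pipeline described here, a sketch (again assuming metrics-server is running):

```shell
# Per-pod CPU and memory, collected by metrics-server from each kubelet.
kubectl top pods --all-namespaces
```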
