
Commit a0f21a2: update indexes
Parent: 9eb2bfb

27 files changed, +160 −104 lines

book/chapters/array.adoc

Lines changed: 4 additions & 2 deletions

@@ -1,5 +1,6 @@
 = Array
-
+(((Array)))
+(((Data Structures, Linear, Array)))
 Arrays are one of the most used data structures. You have probably used them a lot, but are you aware of the runtimes of `splice`, `shift`, and other operations? In this chapter, we go deeper into the most common operations and their runtimes.
 
 == Array Basics
@@ -184,4 +185,5 @@ To sum up, the time complexity on an array is:
 ^|_Index/Key_ ^|_Value_ ^|_beginning_ ^|_middle_ ^|_end_ ^|_beginning_ ^|_middle_ ^|_end_
 | Array ^|O(1) ^|O(n) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(n) ^|O(1) ^|O(n)
 |===
-indexterm:[Runtime, Linear]
+(((Linear)))
+(((Runtime, Linear)))
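
For a concrete feel of the runtimes the table summarizes, here is a minimal sketch using plain JavaScript arrays (the complexities shown are the usual ones for dynamic arrays):

[source, javascript]
----
const array = [2, 5, 1, 9, 6, 7];

array[3];              // access by index: O(1)
array.push(10);        // insert at the end: O(1)
array.pop();           // delete from the end: O(1)
array.unshift(0);      // insert at the beginning: O(n), shifts every element right
array.shift();         // delete from the beginning: O(n), shifts every element left
array.splice(3, 0, 4); // insert in the middle: O(n), shifts elements after index 3
----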

book/chapters/big-o-examples.adoc

Lines changed: 16 additions & 9 deletions

@@ -23,7 +23,8 @@ image:image5.png[CPU time needed vs. Algorithm runtime as the input size increas
 The above chart shows how the running time of an algorithm relates to the amount of work the CPU has to perform. As you can see, O(1) and O(log n) are very scalable. However, O(n^2^) and worse can make your computer run for years [big]#😵# on large datasets. We are going to give some examples so you can identify each one.
 
 == Constant
-
+(((Constant)))
+(((Runtime, Constant)))
 Represented as *O(1)*, it means that regardless of the input size, the number of operations executed is always the same. Let’s see an example.
 
 [#constant-example]
@@ -44,7 +45,8 @@ Another more real-life example is adding an element to the beginning of a <
 As you can see, in both examples (array and linked list), whether the input is a collection of 10 elements or 10M, it takes the same amount of time to execute. You can't get any more performant than this!
 
 == Logarithmic
-
+(((Logarithmic)))
+(((Runtime, Logarithmic)))
 Represented in Big O notation as *O(log n)*, this running time means that as the size of the input grows, the number of operations grows very slowly. Logarithmic algorithms are very scalable. One example is *binary search*.
 indexterm:[Runtime, Logarithmic]
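
The `binarySearch` implementation is included from the book's code directory, so it is elided from this diff. As a rough sketch of the idea (an iterative variant here; the book's version is recursive):

[source, javascript]
----
// Binary search on a sorted array of numbers. Returns the index of
// `target` or -1 if not found. O(log n): the search window is cut in
// half on every iteration.
function binarySearch(sortedArray, target) {
  let low = 0;
  let high = sortedArray.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === target) return mid;
    if (sortedArray[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1; // discard the right half
  }
  return -1;
}
----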

@@ -65,7 +67,8 @@ This binary search implementation is a recursive algorithm, which means that the
 Finding the runtime of recursive algorithms is sometimes not obvious. It requires tools like recursion trees or the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Theorem]. The `binarySearch` divides the input in half each time. As a rule of thumb, when an algorithm divides the data in half on each call, you are most likely looking at a logarithmic runtime: _O(log n)_.
 
 == Linear
-
+(((Linear)))
+(((Runtime, Linear)))
 Linear algorithms have one of the most common runtimes, represented as *O(n)*. Usually, an algorithm has a linear running time when it iterates over all the elements in the input.
 
 [#linear-example]
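
The linear example itself is included from the book's code directory and elided from this diff; as a sketch in the same spirit (a single pass that tallies words in a `Map`; the function name is illustrative):

[source, javascript]
----
// One pass over the input, so the runtime is O(n). The map may end up
// holding every word (when there are no duplicates), so the space
// complexity is also O(n).
function countWords(words) {
  const counter = new Map();
  for (const word of words) {
    counter.set(word, (counter.get(word) || 0) + 1);
  }
  return counter;
}
----
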
@@ -90,7 +93,8 @@ As we learned before, the big O cares about the worst-case scenario, where we wo
 Space complexity is also *O(n)* since we are using an auxiliary data structure: a map that, in the worst case (no duplicates), will hold every word.
 
 == Linearithmic
-
+(((Linearithmic)))
+(((Runtime, Linearithmic)))
 An algorithm with a linearithmic runtime is represented as _O(n log n)_. This one is important because it is the best runtime for sorting! Let’s look at merge sort.
 
 [#linearithmic-example]
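
The merge-sort example is likewise included from the code directory; as a hedged sketch of the split/merge structure the next paragraph analyzes:

[source, javascript]
----
// Minimal merge sort sketch. Splitting halves the input each time
// (log n levels), and merging at each level touches all n elements,
// giving O(n log n) overall.
function mergeSort(array) {
  if (array.length <= 1) return array; // base case: already sorted
  const middle = Math.floor(array.length / 2);
  const left = mergeSort(array.slice(0, middle));
  const right = mergeSort(array.slice(middle));
  return merge(left, right);
}

function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    // `<=` keeps equal elements in their original order (stable sort)
    result.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return result.concat(left.slice(i), right.slice(j));
}
----
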
@@ -125,8 +129,8 @@ image:image11.png[Mergesort visualization,width=500,height=600]
 How do we obtain the running time of the merge sort algorithm? Merge sort divides the array in half each time in the split phase, _log n_, and the merge function joins each split back together, _n_. In total, that is *O(n log n)*. There are more formal ways to arrive at this runtime, like using the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Method] and https://www.cs.cornell.edu/courses/cs3110/2012sp/lectures/lec20-master/lec20.html[recursion trees].
 
 == Quadratic
-
-indexterm:[Runtime, Quadratic]
+(((Quadratic)))
+(((Runtime, Quadratic)))
 Quadratic running times, O(n^2^), are the ones to watch out for. They usually don’t scale well when there is a large amount of data to process.
 
 Usually they have double-nested loops, where each loop visits all or most elements in the input. One example is a naïve implementation that finds duplicate words in an array.
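
The naïve version included from the code directory is elided here; a sketch of the shape it describes (two nested loops) might look like:

[source, javascript]
----
// Two nested loops, each visiting up to n elements, so the runtime
// is O(n^2).
function hasDuplicatesNaive(words) {
  for (let i = 0; i < words.length; i++) {
    for (let j = i + 1; j < words.length; j++) {
      if (words[i] === words[j]) return true;
    }
  }
  return false;
}
----
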
@@ -149,7 +153,8 @@ As you can see, we have two nested loops causing the running time to be quadrati
 Let’s say you want to find a duplicated middle name in the phone directory of a city of ~1 million people. With this quadratic solution, you would have to wait ~12 days to get an answer [big]#🐢#, while the <> gets you the answer in seconds! [big]#🚀#
 
 == Cubic
-
+(((Cubic)))
+(((Runtime, Cubic)))
 Cubic, *O(n^3^)*, and higher polynomial functions usually involve many nested loops. An example of a cubic algorithm is a multi-variable equation solver (using brute force):
 
 [#cubic-example]
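
The solver itself is included from the code directory; as a hedged sketch of the brute-force idea (the particular equation and range are made up for illustration):

[source, javascript]
----
// Brute-force search for solutions to 3x + 9y + 8z = 79 (an example
// equation). Three nested loops over the range 0..n each visit n + 1
// values, so the runtime is O(n^3).
function findXYZ(n) {
  const solutions = [];
  for (let x = 0; x <= n; x++) {
    for (let y = 0; y <= n; y++) {
      for (let z = 0; z <= n; z++) {
        if (3 * x + 9 * y + 8 * z === 79) {
          solutions.push({ x, y, z });
        }
      }
    }
  }
  return solutions;
}
----
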
@@ -174,7 +179,8 @@ WARNING: This is just an example; there are better ways to solve multi-variable equ
 As you can see, three nested loops usually translate to O(n^3^). A four-variable equation with four nested loops would be O(n^4^), and so on. When we have a runtime of the form _O(n^c^)_, where _c > 1_, we refer to it as a *polynomial runtime*.
 
 == Exponential
-
+(((Exponential)))
+(((Runtime, Exponential)))
 An exponential runtime, O(2^n^), means that every time the input grows by one, the number of operations doubles. Exponential programs are only usable for a tiny number of elements (<100); otherwise, they might not finish in your lifetime. [big]#💀#
 
 Let’s do an example.
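
The chapter's actual example (`07-sub-sets.js`) is included in the next hunk; as a sketch of the same power-set idea:

[source, javascript]
----
// Build the power set of a string's characters. Each new element
// doubles the number of subsets, so both the output size and the
// runtime are O(2^n).
function powerset(value = '') {
  let subsets = [[]]; // start with the empty subset
  for (const element of value) {
    // every existing subset spawns a copy that also contains `element`
    subsets = subsets.concat(subsets.map((set) => set.concat(element)));
  }
  return subsets;
}
----
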
@@ -203,7 +209,8 @@ include::{codedir}/runtimes/07-sub-sets.js[tag=snippet]
 Every time the input grows by one, the resulting array doubles in size. That’s why it has an *O(2^n^)* runtime.
 
 == Factorial
-
+(((Factorial)))
+(((Runtime, Factorial)))
 Factorial runtime, O(n!), is not scalable at all. Even with input sizes of ~10 elements, it can take a couple of seconds to compute. It’s that slow! [big]*🍯🐝*
 
 .Factorial
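
The example the `.Factorial` caption introduces is elided from this diff. As a hedged illustration of factorial growth (generating every permutation is one classic O(n!) routine; this sketch is an assumption, not necessarily the book's example):

[source, javascript]
----
// There are n! orderings of n elements, so the runtime (and the output
// size) is O(n!). Around n = 10 that is already ~3.6 million results.
function permutations(elements) {
  if (elements.length <= 1) return [elements];
  const result = [];
  elements.forEach((element, index) => {
    // all permutations of the remaining elements, each prefixed by `element`
    const rest = [...elements.slice(0, index), ...elements.slice(index + 1)];
    for (const perm of permutations(rest)) {
      result.push([element, ...perm]);
    }
  });
  return result;
}
----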

book/chapters/bubble-sort.adoc

Lines changed: 2 additions & 0 deletions

@@ -7,6 +7,8 @@ Bubble sort is a simple sorting algorithm that "bubbles up" the biggest values t
 It's also called _sinking sort_ because the most significant values "sink" to the right side of the array.
 This algorithm is adaptive, which means that if the array is already sorted, it will take only _O(n)_ to "sort".
 However, if the array is entirely out of order, it will require _O(n^2^)_ to sort.
+(((Quadratic)))
+(((Runtime, Quadratic)))
 
 == Bubble Sort Implementation
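
The implementation is included from the code directory; as a hedged sketch of the adaptive behavior described above (a `swapped` flag allows an early exit on an already-sorted array):

[source, javascript]
----
// Adaptive bubble sort sketch. If a full pass makes no swaps, the array
// is sorted and we stop: O(n) for sorted input, O(n^2) in the worst case.
function bubbleSort(array) {
  const result = [...array]; // avoid mutating the caller's array
  let swapped;
  do {
    swapped = false;
    for (let i = 1; i < result.length; i++) {
      if (result[i - 1] > result[i]) {
        [result[i - 1], result[i]] = [result[i], result[i - 1]]; // swap
        swapped = true;
      }
    }
  } while (swapped);
  return result;
}
----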

book/chapters/chapter3.adoc

Lines changed: 5 additions & 0 deletions

@@ -11,20 +11,25 @@ include::tree.adoc[]
 
 
 // (g)
+<<<
 include::tree--binary-search-tree.adoc[]
 
+<<<
 include::tree--search.adoc[]
 
+<<<
 include::tree--self-balancing-rotations.adoc[]
 
 :leveloffset: +1
 
+<<<
 include::tree--avl.adoc[]
 
 :leveloffset: -1
 
 // (g)
 // include::map.adoc[]
+<<<
 include::map-intro.adoc[]
 
 :leveloffset: +1

book/chapters/divide-and-conquer--fibonacci.adoc

Lines changed: 2 additions & 1 deletion

@@ -52,7 +52,8 @@ graph G {
 ....
 
 In the diagram, we see the two recursive calls needed to compute each number. So if we follow _O(branches^depth^)_, we get O(2^n^). [big]#🐢#
-
+(((Exponential)))
+(((Runtime, Exponential)))
 NOTE: Fibonacci is not a perfect binary tree since some nodes only have one child instead of two. The exact runtime for recursive Fibonacci is _O(1.6^n^)_ (still exponential time complexity).
 
 Exponential time complexity is pretty bad. Can we do better?
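
As a minimal sketch of the recursion in the diagram (not necessarily the book's included version):

[source, javascript]
----
// Naive recursive Fibonacci: two recursive calls per number, giving
// O(branches^depth) = O(2^n) (more precisely ~O(1.6^n), per the note).
function fib(n) {
  if (n < 2) return n; // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2);
}
----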

book/chapters/dynamic-programming--fibonacci.adoc

Lines changed: 2 additions & 1 deletion

@@ -23,7 +23,8 @@ graph G {
 ....
 
 This graph looks pretty linear now. Its runtime is _O(n)_!
-indexterm:[Runtime, Linear]
+(((Linear)))
+(((Runtime, Linear)))
 
 (((Memoization)))
 TIP: Saving previous results for later is a technique called "memoization". It is very common for optimizing recursive algorithms with overlapping subproblems, and it can make exponential algorithms linear!
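
A minimal sketch of what memoizing the recursion looks like (passing the cache as a default `Map` parameter is an illustrative choice, not necessarily the book's):

[source, javascript]
----
// Memoized Fibonacci: each value 0..n is computed once and then looked
// up, so the runtime drops from exponential to O(n).
function fib(n, memo = new Map()) {
  if (n < 2) return n;
  if (memo.has(n)) return memo.get(n); // reuse an overlapping subproblem
  const result = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, result);
  return result;
}
----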

book/chapters/graph.adoc

Lines changed: 2 additions & 1 deletion

@@ -1,5 +1,6 @@
 = Graph
-
+(((Graph)))
+(((Data Structures, Non-Linear, Graph)))
 Graphs are one of my favorite data structures.
 They have a lot of cool applications, like optimizing routes and analyzing social networks, to name a few. You are probably using apps that use graphs every day.
 First, let’s start with the basics.

book/chapters/insertion-sort.adoc

Lines changed: 3 additions & 1 deletion

@@ -31,4 +31,6 @@ include::{codedir}/algorithms/sorting/insertion-sort.js[tag=sort, indent=0]
 - <>: [big]#✅# Yes
 - Time Complexity: [big]#⛔️# <> _O(n^2^)_
 - Space Complexity: [big]#✅# <> _O(1)_
-indexterm:[Runtime, Quadratic]
+
+(((Quadratic)))
+(((Runtime, Quadratic)))

book/chapters/linked-list.adoc

Lines changed: 5 additions & 2 deletions

@@ -1,5 +1,7 @@
 = Linked List
-
+(((Linked List)))
+(((List)))
+(((Data Structures, Linear, Linked List)))
 A list (or linked list) is a linear data structure where each node is linked to another one.
 
 Linked Lists can be:
@@ -248,8 +250,9 @@ So far, we have seen two linear data structures with different use cases. Here’
 | Linked List (singly) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(1) ^|O(1) ^|O(n) ^|*O(n)* ^|O(n)
 | Linked List (doubly) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(1) ^|O(1) ^|O(n) ^|*O(1)* ^|O(n)
 |===
+(((Linear)))
+(((Runtime, Linear)))
 
-indexterm:[Runtime, Linear]
 If you compare the singly linked list vs. the doubly linked list, you will notice that the main difference is deleting elements from the end: for a singly linked list it is *O(n)*, while for a doubly linked list it is *O(1)*.
 
 Comparing an array with a doubly linked list, both have different use cases:
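
Returning to the delete-from-the-end difference noted above, it comes down to whether a node knows its predecessor. A hedged sketch (the node fields `value`, `next`, `previous` and the list fields `first`/`last` are assumptions, not the book's exact API; both functions assume two or more nodes for brevity):

[source, javascript]
----
// Doubly linked: the last node already knows its predecessor, so
// removing from the end is O(1).
function removeLastDoubly(list) {
  const removed = list.last;
  list.last = removed.previous;
  list.last.next = null;
  return removed.value;
}

// Singly linked: we must walk from the head to find the node *before*
// the last one, so removing from the end is O(n).
function removeLastSingly(list) {
  let current = list.first;
  while (current.next.next) {
    current = current.next;
  }
  const removed = current.next;
  current.next = null;
  list.last = current;
  return removed.value;
}
----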

book/chapters/map-hashmap-vs-treemap.adoc

Lines changed: 4 additions & 1 deletion

@@ -26,4 +26,7 @@ As we discussed so far, there are trade-offs between the implementations
 |===
 {empty}* = Amortized run time. E.g., rehashing might degrade the run time to *O(n)*.
 
-indexterm:[Runtime, Logarithmic]
+(((Linear)))
+(((Runtime, Linear)))
+(((Logarithmic)))
+(((Runtime, Logarithmic)))
