
Commit 2864380: adding index terms

1 parent 6c53fdd commit 2864380

23 files changed: +77 additions, -15 deletions

book/chapters/algorithms-analysis.adoc
Lines changed: 2 additions & 0 deletions

@@ -59,6 +59,8 @@ To give you a clearer picture of how different algorithms perform as the input s
 |Find all permutations of a string |4 sec. |> vigintillion years |> centillion years |∞ |∞
 |=============================================================================================

+indexterm:[Permutation]
+
 However, if you keep the input size constant, you can notice the difference between an efficient algorithm and a slow one. An excellent sorting algorithm is `mergesort`, for instance, and an inefficient one for large inputs is `bubble sort`.
 Organizing 1 million elements with merge sort takes 20 seconds, while bubble sort takes 12 days, ouch!
 The amazing thing is that both programs are measured on the same hardware with the same data!

book/chapters/array.adoc
Lines changed: 2 additions & 0 deletions

@@ -184,3 +184,5 @@ To sum up, the time complexity on an array is:
 ^|_Index/Key_ ^|_Value_ ^|_beginning_ ^|_middle_ ^|_end_ ^|_beginning_ ^|_middle_ ^|_end_
 | Array ^|O(1) ^|O(n) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(n) ^|O(1) ^|O(n)
 |===
+
+indexterm:[Runtime, Linear]

book/chapters/backtracking.adoc
Lines changed: 2 additions & 0 deletions

@@ -43,6 +43,8 @@ Let's do an exercise to explain better how backtracking works.

 > Return all the permutations (without repetitions) of a given word.

+indexterm:[Permutation]
+
 For instance, if you are given the word `art` these are the possible permutations:

 ----
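The chapter's own implementation lives in an `include::` file that is not shown in this diff; a minimal backtracking sketch for the exercise could look like this (assuming all letters are distinct, as in `art`; repeated letters would need duplicate-skipping):

```javascript
// Return all permutations of a word by picking each letter in turn
// and recursing on the remaining letters (backtracking).
function permutations(word, prefix = '') {
  if (word.length === 0) return [prefix]; // nothing left to place
  const result = [];
  for (let i = 0; i < word.length; i++) {
    const rest = word.slice(0, i) + word.slice(i + 1); // remove letter i
    result.push(...permutations(rest, prefix + word[i]));
  }
  return result;
}

console.log(permutations('art'));
// [ 'art', 'atr', 'rat', 'rta', 'tar', 'tra' ]
```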

book/chapters/big-o-examples.adoc
Lines changed: 11 additions & 4 deletions

@@ -47,6 +47,8 @@ As you can see, in both examples (array and linked list) if the input is a colle

 Represented in Big O notation as *O(log n)*, this running time means that as the size of the input grows, the number of operations grows very slowly. Logarithmic algorithms are very scalable. One example is the *binary search*.

+indexterm:[Runtime, Logarithmic]
+
 [#logarithmic-example]
 === Searching on a sorted array

@@ -102,17 +104,18 @@ The ((Merge Sort)), like its name indicates, has two functions merge and sort. L
 .Sort part of the mergeSort
 [source, javascript]
 ----
-include::{codedir}/runtimes/04-merge-sort.js[tag=sort]
+include::{codedir}/algorithms/sorting/merge-sort.js[tag=splitSort]
 ----
-
-Starting with the sort part, we divide the array into two halves and then merge them (line 16) recursively with the following function:
+<1> If the array only has two elements, we can sort them manually.
+<2> We divide the array into two halves.
+<3> Merge the two parts recursively with the `merge` function explained below.

 // image:image10.png[image,width=528,height=380]

 .Merge part of the mergeSort
 [source, javascript]
 ----
-include::{codedir}/runtimes/04-merge-sort.js[tag=merge]
+include::{codedir}/algorithms/sorting/merge-sort.js[tag=merge]
 ----

 The merge function combines two sorted arrays in ascending order. Let’s say that we want to sort the array `[9, 2, 5, 1, 7, 6]`. In the following illustration, you can see what each function does.
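The actual `merge-sort.js` include is not shown in this diff; a sketch consistent with the three callouts above and the `[9, 2, 5, 1, 7, 6]` example could be:

```javascript
// Sort part: split the array in halves, sort each recursively, then merge.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  if (arr.length === 2) {                            // <1> two elements: sort manually
    return arr[0] < arr[1] ? arr : [arr[1], arr[0]];
  }
  const middle = Math.floor(arr.length / 2);
  const left = arr.slice(0, middle);                 // <2> divide into two halves
  const right = arr.slice(middle);
  return merge(mergeSort(left), mergeSort(right));   // <3> merge the sorted halves
}

// Merge part: combine two sorted arrays into one, in ascending order.
function merge(a, b) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < a.length && j < b.length) {
    result.push(a[i] <= b[j] ? a[i++] : b[j++]); // take the smaller head
  }
  return result.concat(a.slice(i), b.slice(j));  // append whatever is left
}

console.log(mergeSort([9, 2, 5, 1, 7, 6])); // [1, 2, 5, 6, 7, 9]
```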
@@ -124,6 +127,8 @@ How do we obtain the running time of the merge sort algorithm? The mergesort div

 == Quadratic

+indexterm:[Runtime, Quadratic]
+
 Running times that are quadratic, O(n^2^), are the ones to watch out for. They usually don’t scale well when they have a large amount of data to process.

 Usually, they have double-nested loops where each one visits all or most elements in the input. One example of this is a naïve implementation to find duplicate words in an array.
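Such a naïve duplicate check could be sketched like this (a minimal illustration of the double-nested loop, not the chapter's included code):

```javascript
// Naive duplicate detection: compare every pair of words, O(n^2).
function hasDuplicates(words) {
  for (let i = 0; i < words.length; i++) {
    for (let j = i + 1; j < words.length; j++) { // inner loop revisits the rest
      if (words[i] === words[j]) return true;
    }
  }
  return false;
}

console.log(hasDuplicates(['cat', 'dog', 'cat'])); // true
console.log(hasDuplicates(['a', 'b', 'c']));       // false
```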
@@ -219,6 +224,8 @@ A factorial is the multiplication of all the numbers less than itself down to 1.

 One classic example of an _O(n!)_ algorithm is finding all the different words that can be formed with a given set of letters.

+indexterm:[Permutation]
+
 .Word's permutations
 // image:image15.png[image,width=528,height=377]
 [source, javascript]

book/chapters/bubble-sort.adoc
Lines changed: 2 additions & 0 deletions

@@ -72,3 +72,5 @@ Bubble sort has a <> running time, as you might infer from the nested
 - <>: [big]#✅# Yes, _O(n)_ when already sorted
 - Time Complexity: [big]#⛔️# <> _O(n^2^)_
 - Space Complexity: [big]#✅# <> _O(1)_
+
+indexterm:[Runtime, Quadratic]
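The "_O(n)_ when already sorted" property above comes from an early-exit check: if a full pass makes no swaps, the array is sorted and we stop. A sketch of that adaptive variant (not the chapter's included source):

```javascript
// Bubble sort with early exit: O(n) best case, O(n^2) worst case, O(1) space.
function bubbleSort(arr) {
  const array = [...arr]; // copy so the input is not mutated
  for (let i = 0; i < array.length; i++) {
    let swapped = false;
    for (let j = 0; j < array.length - 1 - i; j++) {
      if (array[j] > array[j + 1]) {
        [array[j], array[j + 1]] = [array[j + 1], array[j]]; // bubble up
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps: already sorted, done in one O(n) pass
  }
  return array;
}

console.log(bubbleSort([5, 1, 4, 2])); // [1, 2, 4, 5]
```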

book/chapters/divide-and-conquer--fibonacci.adoc
Lines changed: 2 additions & 0 deletions

@@ -2,6 +2,8 @@

 To illustrate how we can solve a problem using divide and conquer, let's write a program to find the n-th fibonacci number.

+indexterm:[Fibonacci]
+
 .Fibonacci Numbers
 ****
 The Fibonacci sequence is a series of numbers that starts with `0, 1`; the next values are calculated as the sum of the previous two. So, we have:
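The straightforward divide-and-conquer solution follows that definition directly (a minimal sketch, not the chapter's included code):

```javascript
// Naive divide-and-conquer Fibonacci.
// Exponential time: fib(n-1) and fib(n-2) recompute the same subproblems.
function fib(n) {
  if (n < 2) return n;            // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2); // divide into two smaller subproblems
}

console.log([0, 1, 2, 3, 4, 5, 6, 7].map(fib)); // [0, 1, 1, 2, 3, 5, 8, 13]
```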

book/chapters/divide-and-conquer--intro.adoc
Lines changed: 2 additions & 0 deletions

@@ -1,6 +1,8 @@
 Divide and conquer is a strategy for solving algorithmic problems.
 It splits the input into manageable parts recursively and finally joins the solved pieces to form the end result.

+indexterm:[Divide and Conquer]
+
 We have already done some divide and conquer algorithms. This list will refresh your memory.

 .Examples of divide and conquer algorithms:

book/chapters/dynamic-programming--fibonacci.adoc
Lines changed: 4 additions & 0 deletions

@@ -4,6 +4,8 @@ Let's solve the same Fibonacci problem but this time with dynamic programming.

 When we have recursive functions doing duplicated work, that's the perfect place for a dynamic programming optimization. We can save (or cache) the results of previous operations and speed up future computations.

+indexterm:[Fibonacci]
+
 .Recursive Fibonacci Implementation using Dynamic Programming
 [source, javascript]
 ----
@@ -24,4 +26,6 @@ graph G {

 This looks pretty linear now. Its runtime is _O(n)_!

+indexterm:[Runtime, Linear]
+
 TIP: Saving previous results for later is a technique called "memoization" and is very common to optimize recursive algorithms with exponential time complexity.
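A memoized version along those lines (a sketch, not the chapter's included source) caches each `fib(n)` so every subproblem is computed once:

```javascript
// Fibonacci with memoization: each n is computed once, so runtime is O(n).
function fib(n, memo = new Map()) {
  if (n < 2) return n;
  if (memo.has(n)) return memo.get(n); // reuse a previously computed result
  const result = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, result);                 // cache it for future calls
  return result;
}

// Instant even for inputs where the naive version would take ages.
console.log(fib(50)); // 12586269025
```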

book/chapters/insertion-sort.adoc
Lines changed: 2 additions & 0 deletions

@@ -36,3 +36,5 @@ include::{codedir}/algorithms/sorting/insertion-sort.js[tag=sort, indent=0]
 - <>: [big]#✅# Yes
 - Time Complexity: [big]#⛔️# <> _O(n^2^)_
 - Space Complexity: [big]#✅# <> _O(1)_
+
+indexterm:[Runtime, Quadratic]
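The included `insertion-sort.js` is not shown in this diff; an in-place sketch matching the quadratic time and constant extra space listed above could be:

```javascript
// Insertion sort: grow a sorted prefix, shifting larger values right.
// Time O(n^2) worst case, space O(1) beyond the output copy.
function insertionSort(arr) {
  const array = [...arr]; // copy so the input is not mutated
  for (let i = 1; i < array.length; i++) {
    const current = array[i];
    let j = i - 1;
    while (j >= 0 && array[j] > current) {
      array[j + 1] = array[j]; // shift bigger elements one slot right
      j--;
    }
    array[j + 1] = current; // drop the element into its sorted position
  }
  return array;
}

console.log(insertionSort([3, 7, 1, 5])); // [1, 3, 5, 7]
```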

book/chapters/linked-list.adoc
Lines changed: 2 additions & 0 deletions

@@ -249,6 +249,8 @@ So far, we have seen two linear data structures with different use cases. Here’
 | Linked List (doubly) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(1) ^|O(1) ^|O(n) ^|*O(1)* ^|O(n)
 |===

+indexterm:[Runtime, Linear]
+
 If you compare the singly linked list vs. the doubly linked list, you will notice that the main difference is deleting elements from the end. For a singly linked list it is *O(n)*, while for a doubly linked list it is *O(1)*.

 Comparing an array with a doubly linked list, both have different use cases:
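Why deleting from the end is *O(n)* on a singly linked list: with only `next` pointers, we must walk from the head to reach the node *before* the last one, while a doubly linked list just follows `last.previous` in *O(1)*. A sketch with hypothetical plain-object nodes (not the chapter's `LinkedList` class):

```javascript
// Remove the last node of a singly linked list: O(n) traversal required.
function removeLastSingly(head) {
  if (!head || !head.next) return null; // list had 0 or 1 nodes
  let current = head;
  while (current.next.next) {
    current = current.next; // walk to the second-to-last node
  }
  current.next = null; // detach the last node
  return head;
}

// Hypothetical 3-node list: a -> b -> c
const c = { value: 'c', next: null };
const b = { value: 'b', next: c };
const a = { value: 'a', next: b };

removeLastSingly(a);
console.log(b.next); // null
```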
