These notes are heavily based on the following books/references. All the text and images that appear in these notes are not to be redistributed or shared in any way.
Final note: All possible mistakes and inconsistencies are mine (EZ) and I would be grateful if you report them immediately.
The following general rules apply to exams:
Note: The schedule may change at the discretion of the instructor.
Components of the course are as follows:
Your grade is determined as follows:
Students of all backgrounds and abilities are welcome in this course.
Readings relevant to the course include:
There are several support options you can take advantage of:
In Design and Analysis of Efficient Algorithms the following concepts will be discussed:
Definition of complexity classes using asymptotic notation:
Practice break
See you next time.
URod Enterprise buys long steel rods and cuts them into shorter pieces to sell. The management of URod Enterprise would like to know the optimal length for the cuts.
The problem is defined as follows:
| Length $i$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Price $p_i$ | 1 | 5 | 8 | 9 | 10 | 17 | 17 | 20 | 24 | 30 |
The following procedure implements the naive recursive top-down approach. \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{@{\quad}l} CutMaster(p, n) \\ \quad if ~ n==0 \\ \qquad return~ 0 \\ \quad q = -\infty \\ \quad for~ i=1 ~to~ n \\ \qquad q=max(q,p[i] + CutMaster(p,n-i)) \\ \quad return~ q \\ \end{array} \end{equation*}
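For concreteness, here is a direct Python transcription of CutMaster (a sketch; the price table is assumed to carry a dummy entry at index 0 so that $p[i]$ matches the table above):

```python
def cut_master(p, n):
    """Naive top-down rod cutting: p[i] is the price of a rod of length i.

    Subproblems are recomputed again and again, so this runs in Theta(2^n) time.
    """
    if n == 0:
        return 0
    q = float("-inf")
    for i in range(1, n + 1):
        q = max(q, p[i] + cut_master(p, n - i))
    return q

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # table above, p[0] unused
print(cut_master(prices, 10))  # optimal revenue for a rod of length 10 -> 30
```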
Idea: Solve subproblems only once.
Subidea: Trade some memory for time.
There are two equivalent ways to implement a dynamic-programming approach:
The bottom-up version is even simpler: \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{@{\quad}l} BottomUpCutMaster(p, n) \\ \quad \text{initialize r[0..n] as new array} \\ \quad r[0]=0 \quad//\quad \text{ no pain, no gain}\\ \quad for~ j=1~ to~ n \\ \qquad q = -\infty \\ \qquad for~ i=1 ~to~ j \\ \qquad\quad q=max(q,p[i] + r[j-i]) \\ \qquad r[j] = q \\ \quad return~r[n] \end{array} \end{equation*}
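The same computation in Python, now in $\Theta(n^2)$ time (a sketch, reusing the price table from the previous example):

```python
def bottom_up_cut_master(p, n):
    """Bottom-up rod cutting: r[j] is the best revenue for a rod of length j."""
    r = [0] * (n + 1)          # r[0] = 0: no pain, no gain
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            q = max(q, p[i] + r[j - i])
        r[j] = q
    return r[n]

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # table above, p[0] unused
print(bottom_up_cut_master(prices, 10))  # same answer as the recursive version
```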
See you next time
In the rod cutting problem we try all possible cuts for computing the optimal solution.
Input: a target amount $V$ and an array $D[1..n]$ of coin denominations.
Output: the minimum number of coins needed to make change for $V$.
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{@{\quad}l} MakeChange(V, D) \\ \quad c[0] = 0 \\ \quad for~\nu = 1~to~V \\ \qquad min = \infty \\ \qquad i = n \\ \qquad while(i>0~and~D[i] <= \nu) \\ \qquad \qquad if~c[\nu - D[i]] < min \\ \qquad \qquad \qquad min = c[\nu - D[i]] \\ \qquad \qquad i = i - 1 \\ \qquad c[\nu] = min + 1 \\ \quad return~c[V] \\ \end{array} \end{equation*}
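A bottom-up Python sketch of the same idea; here $c[\nu]$ is the minimum number of coins for amount $\nu$, and unlike the pseudocode above the inner loop simply scans every denomination that fits (the denominations in the sample call are an illustrative instance, not from the notes):

```python
def make_change(V, D):
    """Minimum number of coins from denominations D needed to pay amount V.

    c[v] holds the optimal count for amount v; float('inf') marks amounts
    that cannot be formed with the given denominations.
    """
    c = [0] + [float("inf")] * V
    for v in range(1, V + 1):
        for d in D:
            if d <= v and c[v - d] + 1 < c[v]:
                c[v] = c[v - d] + 1
    return c[V]

print(make_change(13, [1, 5, 10]))  # 4 coins: 10 + 1 + 1 + 1
```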
In order for a problem to admit a greedy algorithm, it must satisfy two properties:
5 minutes
A draft report has five chapters. The final report may be at most 600 pages long.
Goal: Edit the report so that the overall importance is maximized.
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{c@{\quad} c c c c } Chapter & Pages & Importance & \frac{I}{W} & \frac{W}{I} \\ 1 & 120 & 5 & 0.041 & 24 \\ 2 & 150 & 5 & 0.033 & 30 \\ 3 & 200 & 4 & 0.020 & 50\\ 4 & 150 & 8 & 0.053 & 18.75 \\ 5 & 140 & 3 & 0.021 & 46.6 \\ \end{array} \end{equation*}Given $n$ objects and a knapsack (bag) capacity $C$, where object $i$ has weight $w_i$ and earns profit $p_i$, find values of $x_i$ to maximize the total profit: \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{c} \sum_{i=1}^n x_i p_i \end{array} \end{equation*} Subject to \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{c} \sum_{i=1}^n x_i w_i \leq C, 0\leq x_i \leq 1. \end{array} \end{equation*}
This problem is known as the fractional knapsack problem.
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{@{\quad}l} ContKnapsack(a, C) \\ \quad sort~a \quad \text{ // array sorted by ratio } \\ \quad weight=0 \\ \quad i=1 \\ \quad while(i\leq n~and~weight < C) \\ \qquad \qquad if~(weight + a[i].w \leq C) \quad \text{ // eat it all} \\ \qquad \qquad \qquad weight~ \text{+=}~ a[i].w \quad \text{ // stomach is heavier now} \\ \qquad \qquad else \quad \quad \quad \text{ // 'eat' a chop of it} \\ \qquad \qquad \qquad chop =(C-weight)/a[i].w \\ \qquad \qquad \qquad weight = C \quad \quad \text{ // stomach finally full} \\ \qquad \qquad i~\text{+=}~1 \end{array} \end{equation*}
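A Python sketch of ContKnapsack that also accumulates the profit earned (the pseudocode above tracks only the weight). Items are represented as (weight, profit) pairs, which is an assumed encoding; the sample call uses the report chapters from the example above with a 600-page capacity:

```python
def cont_knapsack(items, C):
    """Fractional knapsack: items is a list of (weight, profit) pairs.

    Greedily take items in decreasing profit/weight ratio; the last item
    taken may be only a fraction ('eat a chop of it').
    """
    items = sorted(items, key=lambda wp: wp[1] / wp[0], reverse=True)
    weight, profit = 0.0, 0.0
    for w, p in items:
        if weight + w <= C:            # eat it all
            weight += w
            profit += p
        else:                          # take only a fraction and stop
            chop = (C - weight) / w
            profit += chop * p
            weight = C
            break
    return profit

# chapters as (pages, importance) from the report example; total importance ~ 21.8
print(cont_knapsack([(120, 5), (150, 5), (200, 4), (150, 8), (140, 3)], 600))
```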
Notation:
Claim: The Fat Guy "shoots me down", so $P' \leq P$.
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{c} \sum_{i=1}^n x'_i p_i \leq \sum_{i=1}^n x_i p_i \end{array} \end{equation*}
During a robbery, a burglar finds much more loot than he had expected and has to decide what to take.
What's the most valuable combination of items he can fit into his bag?
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l c c c} & Item & Weight & Value \\ W=10 & 1 & 6 & \$30 \\ & 2 & 3 & \$14 \\ & 3 & 4 & \$16 \\ & 4 & 2 & \$9 \\ \end{array} \end{equation*}
*It's immoral and punishable by at least one year in prison, regardless of the value of the items taken. The example is used only for the sake of science.
We consider two main versions of the problem:
Does the problem fit any of the paradigms we have considered so far? Let's look at the usual properties:
*The knapsack problem generalizes a wide variety of resource-constrained selection tasks.
5 minutes
During a robbery, a burglar finds much more loot than he had expected and has to decide what to take.
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l c c c c c c c } & Item & Weight & Value & Low Weight & High Value & Val/Wei & Optimal \\ W=10 & 1 & 6 & \$30 & - & 1 & 1 & 1 \\ & 2 & 3 & \$14 & - & - & 1 & - \\ & 3 & 4 & \$16 & - & 1 & - & - \\ & 4 & 2 & \$9 & 5 & - & - & 2\\ \end{array} \end{equation*}
During a robbery, a burglar finds much more loot than he had expected and has to decide what to take.
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l c c c c c c } & Item & Weight & Value & Low Weight & High Value & Optimal \\ W=10 & 1 & 6 & \$30 & - & 1 & 1 \\ & 2 & 3 & \$14 & 1 & - & - \\ & 3 & 4 & \$16 & 1 & 1 & 1 \\ & 4 & 2 & \$9 & 1 & - & -\\ \end{array} \end{equation*}
During a robbery, a burglar finds much more loot than he had expected and has to decide what to take.
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l c c c c c c } & Item & Weight & Value & Low Weight & High Value & Optimal \\ W=10 & 1 & 6 & \$30 & - & 1 & 1 \\ & 2 & 3 & \$14 & 1 & 1 & - \\ & 3 & \cancel{4} 5 & \$16 & 1 & - & 1 \\ & 4 & 2 & \$9 & 1 & - & -\\ \end{array} \end{equation*}
We consider two main versions of the problem:
Does the problem fit any of the paradigms we have considered so far? Let's look at the usual properties:
*The knapsack problem generalizes a wide variety of resource-constrained selection tasks.
We consider two main versions of the problem:
How do we express the optimal solution for these 2 cases?
Notation: $K(w)$ is the maximum value achievable with a bag of capacity w.
We consider two main versions of the problem:
For (1) we express the optimal value recursively as follows: \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{c} K(w) = max_{i:w_i \leq w} \{ K(w-w_i) + v_i \} \end{array} \end{equation*}
We consider two main versions of the problem:
Notation: $K(w, j)$ is the maximum value achievable with a bag of capacity w and items $1, \ldots, j$
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{c} K(w,j) = max \{ K(w-w_j, j-1) + v_j, K(w, j-1) \} \end{array} \end{equation*}
6 minutes
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l c c c} & Item & Weight & Value \\ W=5 & 1 & 2 & \$3 \\ & 2 & 3 & \$4 \\ & 3 & 4 & \$5 \\ & 4 & 5 & \$6 \\ \end{array} \end{equation*}
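A Python sketch of the recurrence $K(w,j)$ for the version without repetition; the sample call is the $W=5$ practice instance from the table above:

```python
def knapsack_01(weights, values, W):
    """0/1 knapsack via K(w, j) = max(K(w - w_j, j-1) + v_j, K(w, j-1))."""
    n = len(weights)
    # K[w][j]: best value achievable with capacity w using items 1..j
    K = [[0] * (n + 1) for _ in range(W + 1)]
    for j in range(1, n + 1):
        for w in range(W + 1):
            K[w][j] = K[w][j - 1]                      # skip item j
            if weights[j - 1] <= w:                    # ... or take item j
                K[w][j] = max(K[w][j], K[w - weights[j - 1]][j - 1] + values[j - 1])
    return K[W][n]

print(knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5))  # the W=5 instance from the table above
```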
In this problem tasks are to be scheduled on one or more shared resources.
Goal: Optimize use of the resource with respect to a given objective.
Suppose we have a set $S = \{a_1, a_2, \ldots, a_n\}$ of $n$ competing activities requiring exclusive use of a shared resource.
Assumption: Activities are sorted in monotonically increasing order of finish time: $$f_1 \leq f_2 \leq f_3 \leq \ldots \leq f_{n-1} \leq f_n $$
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{c | c c c c c c c c c c c} i & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11\\\hline s_i & 1 & 3 & 0 & 5 & 3 & 5 & 6 & 8 & 8 & 2 & 12 \\ f_i & 4 & 5 & 6 & 7 & 9 & 9 & 10 & 11 & 12 & 14 & 16 \\ \end{array} \end{equation*}
6 minutes
Suppose we have a set $S = \{a_1, a_2, \ldots, a_n\}$ of $n$ competing activities requiring exclusive use of a shared resource.
Does the greedy choice property hold for "Activity Selection"?
The algorithm can be written in pseudocode as: \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{@{\quad}l} GreedIsGood(s, f, k, n) \\ \quad m=k+1 \quad \text{ // we're done with k, start with the next activity } \\ \quad while (m \leq n~and~s[m]< f[k]) \\ \quad\quad m=m+1 \\ \quad if ~m \leq n \\ \quad \quad return~\{a_m\} \bigcup GreedIsGood(s, f, m, n) \quad \text{ // solve the remaining subproblem} \\ \quad else~return~\emptyset \end{array} \end{equation*}
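An equivalent iterative Python sketch (activities are assumed to be 0-indexed and, as stated above, already sorted by finish time); the sample call runs it on the 11-activity instance from the table:

```python
def greedy_activity_selection(s, f):
    """Select a maximum-size set of mutually compatible activities.

    s[i], f[i] are start/finish times with f non-decreasing.
    Returns the indices of the chosen activities.
    """
    chosen = [0]                 # the activity that finishes first is always safe
    k = 0                        # index of the last activity added
    for m in range(1, len(s)):
        if s[m] >= f[k]:         # compatible with the last chosen activity
            chosen.append(m)
            k = m
    return chosen

s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(greedy_activity_selection(s, f))  # [0, 3, 7, 10], i.e. a_1, a_4, a_8, a_11
```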
6 minutes
The algorithm can be written in pseudocode as: \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{@{\quad}l} GreedIsGood(s, f, k, n) \\ \quad m=k+1 \quad \text{ // we're done with k, start with the next activity } \\ \quad while (m \leq n~and~s[m]< f[k]) \\ \quad\quad m=m+1 \\ \quad if ~m \leq n \\ \quad \quad return~\{a_m\} \bigcup GreedIsGood(s, f, m, n) \quad \text{ // solve the remaining subproblem} \\ \quad else~return~\emptyset \end{array} \end{equation*}
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{c | c c c c c c c c c c c} i & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11\\\hline s_i & 1 & 3 & 0 & 5 & 3 & 5 & 6 & 8 & 8 & 2 & 12 \\ f_i & 4 & 5 & 6 & 7 & 9 & 9 & 10 & 11 & 12 & 14 & 16 \\ \end{array} \end{equation*}
Often companies need to decide an order (schedule) for some activities (jobs) that are to be performed. Given are:
Definition: A schedule specifies an order in which jobs are processed.
Question: In a problem with $n$ jobs, how many possible schedules are there?
Consider an instance of the problem with $l_1 = 1, l_2 = 2, l_3 = 3$ and suppose they are processed in this order. What are the completion times for each job?
Often companies need to decide an order (schedule) for some activities (jobs) that are to be performed. Given are:
Definition: The completion time $C_j(\sigma)$ of an activity $a_j$ in a schedule $\sigma$ is the sum of the lengths of the activities that precede $a_j$ in $\sigma$, plus the length of $a_j$.
Goal: Determine a schedule that minimizes the sum of weighted completion times over all possible schedules $\sigma$: \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{c} \displaystyle \min_{\sigma} \sum_{j=1}^n w_jC_j(\sigma) \end{array} \end{equation*}
Can a 'greedy' approach work for this problem? To start consider the following cases:
In general, jobs have different weights and duration. From the example, we learned two rules of thumb:
In general, jobs have different weights and duration. From the example, we learned two rules of thumb:
What if we have short low-weight jobs? What about long high-weight jobs?
In general, jobs have different weights and duration. From the example, we learned two rules of thumb:
What if we have short low-weight jobs? What about long high-weight jobs?
Idea: Compute a score that considers both parameters: weight and length.
The insight we have so far suggests the following about the score:
We'll consider two different scores:
Consider this instance of the problem with two activities:
\begin{equation*} \begin{array}{c | c | c} & a_1 & a_2 \\\hline Length & l_1 = 5 & l_2 = 2 \\ Weight & w_1 = 3 & w_2 =1 \\ \end{array} \end{equation*}
Consider this instance of the problem with two activities:
\begin{equation*} \begin{array}{c | c | c} & a_1 & a_2 \\\hline Length & l_1 = 5 & l_2 = 2 \\ Weight & w_1 = 3 & w_2 =1 \\ \end{array} \end{equation*}
What is the sum of weighted completion times in the schedule of FatDiff and FatRatio respectively?
See you next time
Consider this instance of the problem with two activities:
\begin{equation*} \begin{array}{c | c | c} & a_1 & a_2 \\\hline Length & l_1 = 5 & l_2 = 2 \\ Weight & w_1 = 3 & w_2 =1 \\ FatDiff & -2 & -1 \\ FatRatio & 3/5 & 1/2 \\ \end{array} \end{equation*}
What is the sum of weighted completion times in the schedule of FatDiff and FatRatio respectively?
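A small Python check for the instance above: a schedule is just a processing order, and the function returns $\sum_j w_j C_j(\sigma)$.

```python
def weighted_completion(schedule):
    """schedule: list of (length, weight) pairs in processing order."""
    total, time = 0, 0
    for length, weight in schedule:
        time += length                  # completion time of this job
        total += weight * time
    return total

a1, a2 = (5, 3), (2, 1)                 # (length, weight) from the table above
print(weighted_completion([a2, a1]))    # FatDiff order: larger w - l goes first
print(weighted_completion([a1, a2]))    # FatRatio order: larger w / l goes first
```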
In this problem two strings are given and the goal is to identify the longest common subsequence.
Such a task is the basis for many applications in various domains: computational linguistics, bioinformatics, revision control systems, speech recognition, optical character recognition, etc.
Given a sequence $X =\langle x_1, x_2, \ldots, x_m \rangle$,
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{ c c c c c c c c c c c c c } A & K & R & O & K & E & R & A & U & N & A & I & A \\ \end{array} \end{equation*}
Given a sequence $X =\langle x_1, x_2, \ldots, x_m \rangle$,
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{ c c c c c c c c c c c c c } A & K_2 & R & O_4 & K & E & R_7 & A & U & N & A_{11} & I & A \\ \end{array} \end{equation*}
Given two sequences $X$ and $Y$, we say that a sequence $Z$ is a common subsequence
of $X$ and $Y$ if $Z$ is a subsequence of both $X$ and $Y$.
\begin{equation*}
%\setlength\arraycolsep{1.5pt}
\begin{array}{ c c c c c c }
A & B & C & B & D & A & B \\
B & D & C & A & B & A & \\
\end{array}
\end{equation*}
Problem: In the longest-common-subsequence problem,
we are given two sequences
$X =\langle x_1, x_2, \ldots, x_m \rangle$ and $Y =\langle y_1, y_2, \ldots, y_n \rangle$ and wish to find a maximum-
length common subsequence of $X$ and $Y$.
Notation: Given a sequence $ X =\langle x_1, x_2, \ldots, x_m \rangle$ we define the $i$th prefix of $X$ as $ X_i =\langle x_1, x_2, \ldots, x_i \rangle$
Theorem: Let $X =\langle x_1, x_2, \ldots, x_m \rangle$ and $Y =\langle y_1, y_2, \ldots, y_n \rangle$ be sequences and $Z=\langle z_1, z_2, \ldots, z_k \rangle$ an LCS of $X$ and $Y$.
Notation: Given a sequence $ X =\langle x_1, x_2, \ldots, x_m \rangle$ we define the $i$th prefix of $X$ as $ X_i =\langle x_1, x_2, \ldots, x_i \rangle$
Theorem: Let $X =\langle x_1, x_2, \ldots, x_m \rangle$ and $Y =\langle y_1, y_2, \ldots, y_n \rangle$ be sequences and $Z=\langle z_1, z_2, \ldots, z_k \rangle$ an LCS of $X$ and $Y$.
Optimal substructure: An LCS of two sequences contains within it an LCS of prefixes of the two sequences.
Notation: Let's denote with $c[i, j]$ the length of an LCS of sequences $X_i$ and $Y_j$. The recursive formulation is as follows:
\begin{equation*} c[i, j] = \left\{ \begin{array}{ll} 0 & \mbox{if } i = 0 \mbox{ or } j=0, \\ c[i-1, j-1] + 1 & \mbox{if } i,j > 0 \mbox{ and } x_i=y_j, \\ max(c[i, j-1], c[i-1, j]) & \mbox{if } i,j > 0 \mbox{ and } x_i \neq y_j, \end{array} \right. \end{equation*}
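A bottom-up Python sketch of the recurrence; it fills the table $c$ and then reconstructs one LCS by walking back through it (strings are used for the sequences, an assumed representation):

```python
def lcs(X, Y):
    """Length of a longest common subsequence of X and Y, plus one such LCS."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    # reconstruct one LCS by retracing the choices
    i, j, z = m, n, []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            z.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(z))

print(lcs("ABCBDAB", "BDCABA"))  # length 4 for the example pair of sequences above
```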
There are two standard ways to represent a graph $G = (V, E)$:
Two graphs $G=(V, E)$ and $G'=(V', E')$ are isomorphic if there exists a bijection $f: V \rightarrow V'$ such that $(u, v) \in E$ if and only if $(f(u), f(v)) \in E'$.
We say that a graph $G'=(V', E')$ is a subgraph of
$G=(V, E)$ if $V' \subseteq V$ and $E' \subseteq E$.
Given a set $V' \subseteq V$, the subgraph of $G$ induced
by $V'$ is the graph $G'=(V', E')$, where
$E'= \{(u,v) \in E : u,v \in V'\}$.
An undirected graph $G = (V, E)$
is bipartite if $V$ can be partitioned into
two sets $V_1$ and $V_2$ such that $(u, v) \in E$
implies either $u \in V_1$ and $v \in V_2$ or $u \in V_2$ and $v \in V_1$
5 minutes
Breadth-First Search explores in all possible directions:
For each $j \geq 1$, layer $L_j$ consists of all vertices at distance exactly $j$ from $s$. There is a path from $s$ to $t$ iff $t$ appears in some layer.
The algorithm can be written in pseudocode as: \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{@{\quad}l} BFS(G, s) \\ \underline{In}: G=(V, E), s \in V \\ \underline{Out}: For ~ all ~ v \in V, return ~ distance~ v.d \\ s.d = 0 \quad // ~no~ driving~ to~ get~ here \\ for~all~u \in V \setminus \{s\} \\ \quad u.d = \infty \quad //~ cannot~ get~ there \\ add(bag, s) \\ while~ bag~ not~ empty \\ \quad u=get(bag) \\ \quad for~all~(u,v) \in E \qquad \text{// visit neighbors next door }\\ \quad\quad if~v.d=\infty \\ \quad\quad\quad add(bag, v) \\ \quad\quad\quad v.d=u.d + 1 \end{array} \end{equation*}
The algorithm can be written in pseudocode as: \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{@{\quad}l} BFS(G, s) \\ s.d = 0 \quad // ~no~ driving~ to~ get~ here \\ for~all~u \in V \setminus \{s\} \\ \quad u.d = \infty; u.\pi = null \\ enqueue(queue, s) \\ while~queue~ not~ empty \\ \quad u=dequeue(queue) \\ \quad for~all~(u,v) \in E \qquad \text{// visit neighbors next door }\\ \quad\quad if~v.d=\infty \\ \quad\quad\quad enqueue(queue, v) \\ \quad\quad\quad v.d=u.d + 1; v.\pi = u \end{array} \end{equation*}
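A Python sketch of the queue-based version, using an adjacency-list dictionary and collections.deque as the FIFO queue (the small graph in the sample call is an illustrative assumption, not from the notes):

```python
from collections import deque

def bfs(adj, s):
    """Breadth-first search from s; adj maps each vertex to its neighbors.

    Returns dist (shortest number of edges from s) and pred (BFS-tree parent).
    """
    dist = {u: float("inf") for u in adj}
    pred = {u: None for u in adj}
    dist[s] = 0
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:                 # visit neighbors next door
            if dist[v] == float("inf"):  # not discovered yet
                dist[v] = dist[u] + 1
                pred[v] = u
                queue.append(v)
    return dist, pred

adj = {"s": ["a", "b"], "a": ["s", "c"], "b": ["s", "c"], "c": ["a", "b"]}
print(bfs(adj, "s")[0])  # {'s': 0, 'a': 1, 'b': 1, 'c': 2}
```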
7 minutes
The algorithm can be written in pseudocode as: \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{@{\quad}l} BFS(G, s) \\ s.d = 0; s.color = pink; for~all~u \in V \\ \quad u.d = \infty; u.\pi = null \\ enqueue(queue, s) \\ while~queue~ not~ empty \\ \quad u=dequeue(queue) \\ \quad for~all~(u,v) \in E \qquad \text{// adjacent }\\ \quad\quad if~v.color=white \\ \quad\quad\quad v.color=pink \\ \quad\quad\quad enqueue(queue, v) \\ \quad\quad\quad v.d=u.d + 1; v.\pi = u \\ \quad u.color=red \\ \end{array} \end{equation*}
Lemma 1
Let G=(V, E) be a directed or undirected graph, and let $s \in V$ be an arbitrary
vertex. Then, for any edge $(u, v) \in E$,
$\delta (s,v) \leq \delta(s,u) + 1$
Lemma 2
Let $G =(V, E)$ be a directed or undirected graph, and suppose that BFS is run
on $G$ from a given source vertex $s \in V$. Then upon termination,
for each vertex $\nu \in V$, the value $\nu.d$ computed by BFS satisfies $\nu.d \geq \delta(s,\nu)$
Lemma 3
Suppose that during the execution of BFS on a graph $G = (V, E)$, the queue $Q$
contains the vertices $\langle \nu_1, \nu_2, \ldots, \nu_r \rangle$, where $\nu_1$ is the head of $Q$
and $\nu_r$ is the tail.
Then, $\nu_r.d \leq \nu_1.d + 1$ and $\nu_i.d \leq \nu_{i+1}.d$ for $i = 1, 2, \ldots, r-1$.
Goal: Discover each reachable vertex in a graph.
Strategy: search "deeper" in the graph whenever possible.
The algorithm is given below:
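The pseudocode referenced above is not reproduced here; as a stand-in, a recursive Python sketch of standard depth-first search (recording discovery and finish times) looks like this:

```python
def dfs(adj):
    """Depth-first search over all vertices of adj; records discovery/finish times."""
    time = 0
    d, f, color = {}, {}, {u: "white" for u in adj}

    def visit(u):
        nonlocal time
        time += 1
        d[u] = time
        color[u] = "gray"
        for v in adj[u]:                 # go deeper whenever possible
            if color[v] == "white":
                visit(v)
        color[u] = "black"
        time += 1
        f[u] = time

    for u in adj:                        # restart from every undiscovered vertex
        if color[u] == "white":
            visit(u)
    return d, f
```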
In a shortest-path problem we are given:
Definitions
The algorithm can be written in pseudocode as:
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{@{\quad}l} DIJKSTRA(G,w,s) \\ 1 \quad Initialize \\ 2 \quad Q=G.V\qquad // \text{ is this O(1)? } \\ 3 \quad while~ Q~ not~ empty \\ 4 \quad\quad u = extract\text{-}Min(Q) \\ 5 \quad\quad foreach ~ v \in G.Adj[u] \\ 6 \quad\quad\quad RELAX(u,v,w) \\ \end{array} \end{equation*}
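A Python sketch of the same loop using heapq as the priority queue; here Initialize is expanded into setting every distance estimate to $\infty$ except the source:

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths with nonnegative edge weights.

    adj maps u to a list of (v, w(u, v)) pairs; returns the distance estimates d.
    """
    d = {u: float("inf") for u in adj}
    d[s] = 0
    pq = [(0, s)]                        # min-heap keyed by distance estimate
    while pq:
        du, u = heapq.heappop(pq)        # extract-Min
        if du > d[u]:
            continue                     # stale entry, skip it
        for v, w in adj[u]:
            if d[u] + w < d[v]:          # RELAX(u, v, w)
                d[v] = d[u] + w
                heapq.heappush(pq, (d[v], v))
    return d
```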
The algorithm can be written in pseudocode as: \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{@{\quad}l} Bellman-Ford(G,w,s) \\ 1 \quad Initialize \\ 2 \quad for~i=1~to~|V| - 1 \quad\quad // \text{ why |V|-1? } \\ 3 \quad\quad foreach~(u,v) \in E \\ 4 \quad\quad \quad\quad relax(u, v, w) \\ 5 \quad foreach~(u,v) \in E \\ 6 \quad\quad if~v.d>u.d + w(u,v) \quad\quad // \text{ when does this occur? } \\ 7 \quad\quad\quad\quad return~false \\ 8 \quad return~true \end{array} \end{equation*}
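A Python sketch of Bellman-Ford over an edge list; it returns False (plus the estimates) when a negative-weight cycle is reachable from $s$:

```python
def bellman_ford(vertices, edges, s):
    """edges is a list of (u, v, w) triples; relax every edge |V| - 1 times."""
    d = {u: float("inf") for u in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):   # |V| - 1 passes suffice
        for u, v, w in edges:
            if d[u] + w < d[v]:          # RELAX(u, v, w)
                d[v] = d[u] + w
    for u, v, w in edges:                # a further improvement means a negative cycle
        if d[u] + w < d[v]:
            return False, d
    return True, d
```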
Some important theorems follow:
Lemma 2:
Let $G=(V, E)$ be a weighted, directed graph with source $s$.
Assume $G$ contains no negative-weight cycles that are
reachable from $s$.
After the $|V| - 1$ iterations of the for loop, we
have $\nu.d=\delta(s, \nu)$ for
all vertices $\nu$ that are reachable from $s$.
Theorem: Let $G=(V,E)$ be a directed weighted graph. Assume there are no negative-weight cycles.
Bellman-Ford will compute $v.d=\delta(s, v)$ for all $v \in V$.
Many applications use directed acyclic graphs to indicate precedence among events.
A topological ordering of a dag $G = (V, E)$ is a linear ordering of all its vertices such that if $G$ contains an edge $(u,v)$, then $u$ appears before $v$ in the ordering.
A directed graph with no cycles is called directed acyclic graph, or a DAG for short.
A topological sort of a dag $G = (V, E)$ is a linear ordering of all its vertices such that if $G$ contains an edge $(u,v)$, then $u$ appears before $v$ in the ordering.
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l} \text{TOPOLOGICAL-SORT(G)} \\ 1 \quad \text{call DFS(G) to compute finish times u.f} \\ 2 \quad \text{as each vertex is finished, insert it onto the front of a linked list} \\ 3 \quad \text{return the linked list of vertices} \\ \end{array} \end{equation*}
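A Python sketch of TOPOLOGICAL-SORT: run DFS and prepend each vertex to the output as it finishes (the tiny example DAG in the call is just illustrative):

```python
def topological_sort(adj):
    """Return the vertices of a DAG in topological order (reverse DFS finish order)."""
    order, visited = [], set()

    def visit(u):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                visit(v)
        order.insert(0, u)               # 'insert onto the front of a linked list'

    for u in adj:
        if u not in visited:
            visit(u)
    return order

print(topological_sort({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
# a valid order, e.g. ['a', 'c', 'b', 'd']
```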
Very often graph algorithms start with a decomposition of a graph into its connected components.
For a directed graph $G=(V, E)$, a strongly connected component is a maximal set of vertices $C \subseteq V$ such that for every pair of vertices $u$ and $v$ in $C$, vertices $u$ and $v$ are reachable from each other.
Very often graph algorithms start with a decomposition of a graph into its connected components.
For a directed graph $G=(V, E)$, a strongly connected component is a maximal set of vertices $C \subseteq V$ such that for every pair of vertices $u$ and $v$ in $C$, vertices $u$ and $v$ are reachable from each other.
What are the strongly connected components of the following graph?
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l} STRONGLY\text{-}CONNECTED\text{-}COMPONENTS(G) \\ 1 \quad call~ DFS(G)~ to~ compute~ finishing~ times~ u.f \\ 2 \quad compute~ G^T \\ 3 \quad call~ DFS(G^T),~ but~ in~ the ~main~ loop~ of~ DFS,~ consider~ the~ vertices~ in~ order~ of~ decreasing~ u.f \\ 4 \quad output~ vertices~ of~ DFS~ trees~ in~ line~ 3~ as~ a~ separate~ s.c.c \\ \end{array} \end{equation*}
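A Python sketch of the two-pass procedure above: DFS on $G$ to record finish order, transpose the graph, then DFS on $G^T$ in order of decreasing finish time; each tree of the second pass is one strongly connected component.

```python
def strongly_connected_components(adj):
    """Return the SCCs of a directed graph given as an adjacency-list dict."""
    finish, visited = [], set()

    def visit(u, graph, out):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                visit(v, graph, out)
        out.append(u)                    # u is finished

    for u in adj:                        # first pass: finish times on G
        if u not in visited:
            visit(u, adj, finish)

    adj_t = {u: [] for u in adj}         # compute the transpose G^T
    for u in adj:
        for v in adj[u]:
            adj_t[v].append(u)

    visited.clear()
    components = []
    for u in reversed(finish):           # second pass: decreasing finish time
        if u not in visited:
            comp = []
            visit(u, adj_t, comp)
            components.append(comp)
    return components
```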
Find the strongly connected components of the following graph:
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l} \text{STRONGLY-CONNECTED-COMPONENTS(G)} \\ 1 \quad \text{call DFS(G) to compute finish times u.f} \\ 2 \quad \text{compute}~G^T \\ 3 \quad \text{call}~DFS(G^T)\text{, but consider the vertices in order of decreasing u.f} \\ 4 \quad \text{output vertices of DFS trees in line 3 as a separate s.c.c} \\ \end{array} \end{equation*}
Find the strongly connected components of the following graph:
Problems that model situations involving resources and activities.
Resources
Activities
Goal: Allocate resources to activities to achieve the best possible value of performance.
Kantina Dukat produces high-quality wine and raki*.
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{c@{\quad}| c c c} Operation & Vlosh & Moskat & Production Time \\\hline Harvest & 1 & 0 & 4 \\ Fermentation & 0 & 2 & 12 \\ Distillation & 3 & 2 & 18 \\ Profit ~(lekë) & 3000 & 5000 & \\ \end{array} \end{equation*}
The management wishes to determine the quantities for the two drinks in order to maximize their total profit, subject to the restrictions imposed by the capacities above.
Assume the following models a problem with two decision variables: \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l@{\quad} r c r c r} \max & x_1 & + & x_2 & & \\ \mathrm{s.t.} & 4x_1 & - & x_2 & \leq & 8 \\ & 2x_1 & + & x_2 & \leq & 10 \\ & 5x_1 & - & 2x_2 & \geq & -2 \\ & & & x_1,x_2 & \geq & 0 \end{array} \end{equation*}
Any assignment of the $x_1$ and $x_2$ that satisfies all the constraints is a feasible solution to the linear program.
The standard form of a linear program is given by:
We wish to find $n$ real numbers $x_1,x_2, \ldots, x_n$ that \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l@{\quad} l c r c r} maximize & Z=\sum_{j=1}^{n} c_j x_j & \\ subject~to & \sum_{j=1}^{n} a_{ij} x_j \leq b_i & for~i=1,2,\ldots,m \\ & x_j \geq 0 & for~j=1,2,\ldots,n \\ \end{array} \end{equation*}
It is always possible to convert a linear program into standard form.
For certain purposes we prefer the slack form in which some of the constraints are equality constraints.
Let \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l@{\quad} r } \sum_{j=1}^{n} a_{ij} x_j \leq b_i \end{array} \end{equation*} be some inequality constraints. We introduce a new variable $s$ and rewrite the former as \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l@{\quad} r } s = b_i - \sum_{j=1}^{n} a_{ij} x_j \\ s \geq 0 \end{array} \end{equation*}
We call $s$ a slack variable as it measures the difference between the left-hand and right-hand sides in the former equation.
[CLRS] notation: When converting from standard to slack form, it uses $x_{n+i}$ (instead of $s$) to denote the slack variable associated with the $i$th inequality. \begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l@{\quad} r } x_{n+i} = b_i - \sum_{j=1}^{n} a_{ij} x_j \\ x_{n+i} \geq 0 \end{array} \end{equation*}
A given linear program in standard form can be converted into slack form.
Maximize $Z = 2x_1 - 3x_2 + 3x_3$
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{l@{\quad} r c r c r c r} % \max & x_1 & + & x_2 & & \\ \mathrm{s.t.} & x_1 & + & x_2 & - & x_3 & \leq & 7 \\ & -x_1 & - & x_2 & + & x_3 & \leq & -7 \\ & x_1 & - & 2x_2 & + & 2x_3 & \leq & 4 \\ & & & x_1,x_2, x_3 & \geq & 0 & & \end{array} \end{equation*} |
Notation: We'll use $\sum_{j=1}^{n} a_{ij} x_j + s_i = b_i$ and the simplex tableau, consisting of the augmented matrix corresponding to the constraint equations together with the coefficients of the objective function in the form:
Maximize $Z = 4x_1 + 6x_2$
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{r c r c r } -x_1 & + & x_2 & \leq & 11 \\ x_1 & + & x_2 & \leq & 27 \\ 2x_1 & + & 5x_2 & \leq & 90 \\ x_1, & x_2 & \geq & 0 & \end{array} \end{equation*} | \begin{array}{c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad}c} x_1 & x_2 & s_1 & s_2 & s_3 & b \\\hline -1 & 1 & 1 & 0 & 0 & 11 \\ 1 & 1 & 0 & 1 & 0 & 27 \\ 2 & 5 & 0 & 0 & 1 & 90 \\\hline -4 & -6 & 0 & 0 & 0 & 0 \end{array} |
To solve a linear programming problem in standard form, follow these steps:
Find the maximum value of $z=2x_1 - x_2 + 2x_3$.
Subject to \begin{array}{c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad}c} 2x_1 & + & x_2 & & & \leq & 10 \\ x_1 & + & 2x_2 & - & 2x_3 & \leq & 20 \\ & & x_2 & + & 2x_3 & \leq & 5 \\ & & x_1, & x_2, & x_3, & \geq & 0 \\ \end{array} |
\begin{array}{c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad}|c} x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & b & Base \\\hline & & & & & & & s_1\\ & & & & & & & s_2 \\ & & & & & & & s_3 \\\hline \end{array} |
Find the maximum value of $z=3x_1 + 2x_2 + x_3$.
Subject to \begin{array}{c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad}c} 4x_1 & + & x_2 & + & x_3 & = & 30 \\ 2x_1 & + & 3x_2 & + & x_3 & \leq & 60 \\ x_1 & + & 2x_2 & + & 3x_3 & \leq & 40 \\ & & x_1, & x_2, & x_3, & \geq & 0 \\ \end{array} |
\begin{array}{c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad}|c} x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & b & Base \\\hline & & & & & & & s_1\\ & & & & & & & s_2 \\ & & & & & & & s_3 \\\hline \end{array} |
URDoor produces aluminium windows and doors.
\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{|l| c c c c | } \hline Process & Window & Door & Accessories & Time Available \\\hline Molding & 1 & 2 & 3/2 & 12000 \\ Trimming & 2/3 & 2/3 & 1 & 4600 \\ Packaging & 1/2 & 1/3 & 1/2 & 2400 \\ Profit & 11 & 16 & 15 & - \\\hline \end{array} \end{equation*}
Profit is $Z=11x_1 + 16x_2 + 15x_3$
\begin{array}{c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad}c} x_1 & + & 2x_2 & + & \frac{3}{2}x_3 & \leq & 12000 \\ \frac{2}{3}x_1 & + & \frac{2}{3}x_2 & + & x_3 & \leq & 4600 \\ \frac{1}{2}x_1 & + & \frac{1}{3}x_2 & + & \frac{1}{2}x_3 & \leq & 2400 \\ \end{array} |
\begin{array}{c@{\quad}| c@{\quad}| c@{\quad}| c@{\quad}| c@{\quad}| c@{\quad}| c@{\quad}|c}
x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & \quad b \quad & Base \\\hline
& & & & & & & s_1\\
& & & & & & & s_2 \\
& & & & & & & s_3 \\\hline
\end{array}
\begin{array}{c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad}|c} x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & b & Base \\\hline 1 & 2 & \frac{3}{2} & 1 & 0 & 0 & 12000 & s_1\\ \frac{2}{3} & \frac{2}{3} & 1 & 0 & 1 & 0 & 4600 & s_2 \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{2} & 0 & 0 & 1& 2400 & s_3 \\\hline -11 & -16 & -15 & 0 & 0 & 0 & 0 & \\ \end{array} |
Assume you are given the following linear equations to solve:
\begin{array}{r@{\quad} c@{\quad} r@{\quad} c@{\quad} c@{\quad} r@{\quad}r c | c r@{\quad} c@{\quad} r@{\quad} c@{\quad} r@{\quad} c@{\quad}r} x & - & 2y & + & 3z & = & 9 & & & x & - & 2y & + & 3z & = & 9\\ -x & + & 3y & & & = & -4 & & & & & y & + & 3z & = & 5 \\ 2x & - & 5y & + & 5z & = & 17 & & & & & & & z & = & 2 \\ \end{array}
Which one is easier to solve?
We can transform a system of linear equations into an equivalent one using row-operations:
1. Interchange two equations.
2. Multiply an equation by a nonzero constant.
3. Add a multiple of an equation to another equation.
A matrix in row-echelon form has the following properties:
1. All rows consisting entirely of zeros occur at the bottom of the matrix.
2. For each row that does not consist entirely of zeros, the first nonzero entry is 1.
3. For two successive (nonzero) rows, the leading 1 in the
higher row is farther to the left than the leading 1 in the lower row.
A matrix in row-echelon form is in reduced row-echelon form
if every column that has a leading 1 has zeros in every position above and below its leading 1.
1. Write the augmented matrix of the system of linear equations.
2. Use elementary row operations to rewrite the augmented matrix in row-echelon form.
3. Write the system of linear equations corresponding to the matrix in row-echelon form,
and use back-substitution to find the solution.
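A Python sketch of the procedure: forward elimination on the augmented matrix (with partial pivoting for numerical safety, and assuming a unique solution), followed by back-substitution. The sample call solves the $3 \times 3$ system from the motivating example above.

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with back-substitution."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]     # augmented matrix
    for k in range(n):
        # partial pivoting: bring the largest pivot into row k
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                    # eliminate below the pivot
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                   # back-substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

print(gauss_solve([[1, -2, 3], [-1, 3, 0], [2, -5, 5]], [9, -4, 17]))  # [1.0, -1.0, 2.0]
```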
Solve the system \begin{array}{c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad}} & & x_2 & + & x_3 & - & 2x_4 & = & -3 \\ x_1 & + & 2x_2 & - & x_3 & & & = & 2 \\ 2x_1 & + & 4x_2 & + & x_3 & - & 3x_4 & = & -2 \\ x_1 & - & 4x_2 & - & 7x_3 & - & x_4 & = & -19 \\ \end{array}
Given a linear program with an objective function to maximize, we can transform it into a minimization problem with the same optimal solution.
Minimize $z=0.12x_1 + 0.15x_2$. Subject to \begin{array}{c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad}c} 60x_1 & + & 60x_2 & \geq & 300 \\ 12x_1 & + & 6x_2 & \geq & 36 \\ 10x_1 & + & 30x_2 & \geq & 90 \\ \end{array}
Minimize $z=0.12x_1 + 0.15x_2$. Subject to \begin{array}{c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad}c} 60x_1 & + & 60x_2 & \geq & 300 \\ 12x_1 & + & 6x_2 & \geq & 36 \\ 10x_1 & + & 30x_2 & \geq & 90 \\ \end{array}
Maximize $z=\quad y_1 + \quad y_2 + \quad y_3$, subject to \begin{array}{c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad} c@{\quad}c} \quad y_1 & + & \quad y_2 & + & \quad y_3 & \quad & \quad \\ \quad y_1 & + & \quad y_2 & + & \quad y_3 & \quad & \quad \\ \end{array}
(thetartan.org)
The divide and conquer algorithmic technique has three steps:
Typical examples include: mergesort, quicksort, binary search, tree operations, Hanoi Towers solutions, etc.
Application: Collaborative filtering
Goal: compare the similarity of two rankings.
Consider the following scenario:
How "similar" are their tastes?
Input: A sequence of $n$ numbers $a_1, \ldots , a_n$.
Goal: Determine the number of inversions in the sequence $a_1, \ldots , a_n$.
\begin{equation}
\begin{array}{| c@{\quad} | c@{\quad} | c@{\quad} | c@{\quad} | c@{\quad} | c@{\quad} |}
\hline
1 & 3 & 5 & 2 & 4 & 6 \\\hline
\end{array}
\end{equation}
\begin{equation}
\begin{array}{| c@{\quad} | c@{\quad} | c@{\quad} | c@{\quad} | c@{\quad} | c@{\quad} |}
\hline
1 & 2 & 3 & 4 & 5 & 6 \\\hline
\end{array}
\end{equation}
We can try to apply the divide and conquer approach:
We can try to apply the divide and conquer approach:
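A merge-sort-based Python sketch that counts inversions while merging: each time an element of the right half is placed before remaining elements of the left half, all those pairs are inversions.

```python
def count_inversions(a):
    """Return (sorted copy of a, number of inversions), in O(n log n) time."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, inv_l = count_inversions(a[:mid])
    right, inv_r = count_inversions(a[mid:])
    merged, inv = [], inv_l + inv_r
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            inv += len(left) - i        # right[j] is inverted with all remaining left elements
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, inv

print(count_inversions([1, 3, 5, 2, 4, 6])[1])  # 3 inversions in the first ranking above
```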
Input: A set $P$ of $n \geq 2$ points in the plane.
Output: Pair of points with the smallest Euclidean distance.
Definitions:
Typical applications include: traffic-control systems, robotics, computer vision, etc.
Each recursive step takes as input
Combine
Combine
Find the shortest path from $s$ to $t$ that avoids the obstacle.
Given a set of points in the plane, find the farthest pair of points.
The convex hull of a set of points $Q$, denoted with $CH(Q)$ is the smallest convex polygon $P$ for
which each point in $Q$ is either on the boundary of $P$ or in its interior.
Solves the problem by maintaining a stack:
Points are traversed in order of increasing polar angle with respect to the starting point $p_0$
(The polar angle is defined later in these notes.)\begin{equation*} %\setlength\arraycolsep{1.5pt} \begin{array}{rl} GS(Q) \\ 1&Init~ p_0 \\ 2&Sort~points~by~polar~angle \\ 3&Init~stack~ S \\ 4&push(S, p_0) \\ 5&push(S, p_1) \\ 6&push(S, p_2) \\ 7&for~ i=3~ to~ m \\ 8&\quad while~ angle~ formed~ by ~next(S), top(S), p_i~nonleft~turn \\ 9&\quad\quad \quad pop(S) \\ 10&\quad push(S,p_i) \\ 11&return~S \end{array} \end{equation*}
Given a finite set of points $\{p_1,\ldots, p_n\}$ in the Euclidean plane, the cell $R_k$ for the point $p_k$ consists of every point in the Euclidean plane whose distance to $p_k$ is less than or equal to its distance to any other point $p_j$, $j \neq k$.
Used in biology, ecology, computational chemistry, medical diagnosis, epidemiology, materials science, urban planning, networking, computer graphics, machine learning, etc.
Given a set of points in the plane, find a triangulation such that no point is inside the circumcircle of any triangle.
Used in path planning, terrain modelling, automated driving, simulations, etc.
In the plane, the polar angle $\theta$ is the counterclockwise angle from the $x$-axis at which a point in the $xy$-plane lies.
Such angle $\theta$ is usually measured in radians. The radian is a unit of angular measure defined such that an angle of one radian subtended from the center of a unit circle produces an arc with arc length $1$.
A full angle is therefore $2\pi$ radians, so there are 360 degrees per $2\pi$ radians, equal to 180 degrees/$\pi$ or 57.29577951 degrees/radian.
https://mathworld.wolfram.com/Angle.html
A planar polygon is convex if it contains all the line segments connecting any pair of its points. Thus, for example, a regular pentagon is convex (left figure), while an indented pentagon is not (right figure).
A planar polygon that is not convex is said to be a concave polygon.
Consider three points: $P_1$, $P_2$ and $P_3$. We have to decide whether $P_1P_2P_3$ represents a "right turn" (i.e. a turn in clockwise order) or a "left turn" (i.e. a turn in counter-clockwise order).
Given $P_1=(x_1,y_1)$, $P_2=(x_2,y_2)$ and $P_3=(x_3,y_3)$, we compute $(x_2-x_1)(y_3-y_1)-(y_2-y_1)(x_3-x_1)$:
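As a tiny Python helper, the sign of that cross product decides the turn:

```python
def turn(p1, p2, p3):
    """> 0: left (counter-clockwise) turn, < 0: right (clockwise) turn, 0: collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)

print(turn((0, 0), (1, 0), (1, 1)))   # 1 > 0: a left turn
```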
Input: Two $n$-digit nonnegative integers, $x$ and $y$.
Output: The product $x \cdot y$
How many primitive operations? (Asymptotic notation is fine.)
Idea: Let's "cut" our numbers into parts.
Suppose $x$ and $y$ are $n$-bit integers. We need to compute the product $x \cdot y$.
Andrei Kolmogorov, one of the giants of $20$th century mathematics, conjectured that "there is no algorithm to multiply two $n$-digit numbers in subquadratic time". In 1960, at Moscow University, he restated his "$n^2$ conjecture" and posed several related problems.
About a week later, a 23-year-old student named Anatolii Karatsuba presented Kolmogorov with a remarkable counterexample: multiplication in $O(n^{\log_2 3})$ time.
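A Python sketch of Karatsuba's trick on ordinary integers: three recursive half-size multiplications instead of four (here the numbers are split on bit boundaries rather than decimal digits, an implementation choice).

```python
def karatsuba(x, y):
    """Multiply nonnegative integers x and y with three half-size products."""
    if x < 10 or y < 10:                 # small enough: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)  # split x into high/low halves
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)                # high * high
    b = karatsuba(xl, yl)                # low * low
    c = karatsuba(xh + xl, yh + yl) - a - b   # both cross terms via one product
    return (a << (2 * m)) + (c << m) + b

print(karatsuba(1234, 5678), 1234 * 5678)  # both 7006652
```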
Suppose $X$ and $Y$ are $n \times n$ matrices of integers. In the product $Z=X \cdot Y$, entry $z_{ij}$ is the (dot) product of $i$th row of $X$ and $j$th column of $Y$: \begin{equation} z_{ij} = \sum_{k=1}^n x_{ik} y_{kj} \end{equation}
What would be the running time of a *naive* algorithm for computing the product?
Suppose $X$ and $Y$ are $n \times n$ matrices of integers. In the product $Z=X \cdot Y$, entry $z_{ij}$ is the (dot) product of $i$th row of $X$ and $j$th column of $Y$:
\begin{equation} \begin{array}{l} \hline \text{Input: Two matrices $X$ and $Y$} \\ \text{Output: The matrix product $X \cdot Y$} \\ \hline \text{for $i=1$ to $n$ do} \\ \quad \text{for $j=1$ to $n$ do} \\ \quad \quad Z[i][j]=0 \\ \quad \quad \text{for $k=1$ to $n$ do} \\ \quad \quad \quad Z[i][j] = Z[i][j] + X[i][k] * Y[k][j] \\ \text{return $Z$} \end{array} \end{equation}
Idea: Divide a square matrix into smaller square submatrices.
\begin{equation} X = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \end{equation} | \begin{equation} Y = \begin{pmatrix} E & F \\ G & H \end{pmatrix} \end{equation} | \begin{equation} Z= X \cdot Y = \begin{pmatrix} A \cdot E + B \cdot G & A \cdot F + B \cdot H \\ C \cdot E + D \cdot G & C \cdot F + D \cdot H \end{pmatrix} \end{equation} |
with $A, B, \ldots, H$ all $\frac{n}{2} \times \frac{n}{2}$ matrices.
Idea: Divide a square matrix into smaller square submatrices
\begin{equation} \begin{array}{l} \hline \text{Input: Two matrices $X$ and $Y$} \\ \text{Output: The matrix product $Z=X \cdot Y$} \\ \hline \text{if $n=1$ then} \\ \quad \text{return $1 \times 1$ matrix with $X[1][1] * Y[1][1]$} \\ \text{else} \\ \quad \text{Set $A, B, C, D$ as submatrices of X} \\ \quad \text{Set $E, F, G, H$ as submatrices of Y} \\ \quad \text{recursively compute the $8$ matrix products} \\ \quad \text{return result of the computation} \end{array} \end{equation}
Idea: Save one recursive call in exchange for additional matrix additions/subtractions.
Idea: Save one recursive call in exchange for additional matrix additions/subtractions.
\begin{equation} \begin{array}{l} \hline \text{Input: Two matrices $X$ and $Y$} \\ \text{Output: The matrix product $Z=X \cdot Y$} \\ \hline \text{if $n=1$ then} \\ \quad \text{return $1 \times 1$ matrix with $X[1][1] * Y[1][1]$} \\ \text{else} \\ \quad \text{Set $A, B, C, D$ as submatrices of X} \\ \quad \text{Set $E, F, G, H$ as submatrices of Y} \\ \quad \text{recursively compute the $7$ products $P_i$} \\ \quad \text{return result of the additions/subtractions involving $P_i$(s)} \end{array} \end{equation}
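A NumPy-based sketch of Strassen's scheme, assuming $n$ is a power of two; the seven products below are one standard choice (the notes do not fix a particular set), and the final check against the built-in product is just for illustration.

```python
import numpy as np

def strassen(X, Y):
    """Multiply two n x n matrices (n a power of 2) with 7 recursive products."""
    n = X.shape[0]
    if n == 1:
        return X * Y
    h = n // 2
    A, B, C, D = X[:h, :h], X[:h, h:], X[h:, :h], X[h:, h:]
    E, F, G, H = Y[:h, :h], Y[:h, h:], Y[h:, :h], Y[h:, h:]
    P1 = strassen(A, F - H)
    P2 = strassen(A + B, H)
    P3 = strassen(C + D, E)
    P4 = strassen(D, G - E)
    P5 = strassen(A + D, E + H)
    P6 = strassen(B - D, G + H)
    P7 = strassen(A - C, E + F)
    Z11 = P5 + P4 - P2 + P6
    Z12 = P1 + P2
    Z21 = P3 + P4
    Z22 = P5 + P1 - P3 - P7
    return np.block([[Z11, Z12], [Z21, Z22]])

X = np.random.randint(0, 10, (4, 4))
Y = np.random.randint(0, 10, (4, 4))
print(np.array_equal(strassen(X, Y), X @ Y))  # True
```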