\section{Terms That Need More Explanation Than A Footnote}
\subsection{Potential}\label{sec:potential}
Potential is the energy change that occurs when the position of an object changes \cite{potential}. There are many kinds of potential, such as electric potential, gravitational potential and elastic
potential. Let me explain the concept with an example. Say you are walking up a set of stairs. As your muscles move you one step upwards, the energy they expend is converted into
gravitational potential. Now imagine you turn around and walk down instead. Notice how that is easier? That is because the gravitational potential is
converted back into kinetic energy, so your muscles have to deliver less energy to get you down. A potential is usually tied to a force, like the gravitational force.
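To put a number on this: near the Earth's surface the gravitational potential energy of a mass $m$ at height $h$ is $U = mgh$. As a rough illustration (the mass and step height below are made-up example values), climbing one step costs
\[
\Delta U = mg\,\Delta h \approx 70\,\mathrm{kg} \times 9.81\,\mathrm{m/s^2} \times 0.2\,\mathrm{m} \approx 137\,\mathrm{J},
\]
and walking back down releases that same $137\,\mathrm{J}$ again, which is why descending feels easier.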
\subsection{Asymptotic Runtime}\label{sec:runtime}
Asymptotic runtime is what we use in computer science to indicate how fast an algorithm is. We do it this way because concrete time measurements (seconds, minutes, hours) are very machine
dependent: how fast your CPU, RAM and GPU are matters a lot for the runtime. Therefore, we need something machine independent to compare algorithms by, and that is what asymptotic
runtime is. We have three notations for asymptotic runtime: $\Omega$, the lower bound of the runtime (not faster than); $O$, the upper bound of the runtime (not slower than); and
$\Theta$, the tight bound (not slower but also not faster than). After these three symbols we write the runtime as an algebraic expression in which the letters stand for the input size. $O(n)$, for
instance, means that on an input of size $n$ the algorithm takes at most on the order of $n$ operations (it is not slower than linear), whereas $\Omega(n^3)$ means that on an input of size $n$ it needs at least on the order of $n^3$
operations. Now this is not an exact match, as the real runtime contains constants and other terms, but for asymptotic runtime we look at the most dominant term, as it outgrows all the
other terms as the input size increases.
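For example (the runtime here is made up purely for illustration), suppose an algorithm performs exactly
\[
T(n) = 3n^2 + 5n + 7
\]
operations. For large $n$ the $3n^2$ term dwarfs $5n$ and $7$, and dropping the constant factor $3$ as well leaves $T(n) = \Theta(n^2)$.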
You can see this for yourself by plotting the functions $y = ax$ and $z = x^2$ on a graphical calculator: no matter which constant $a$ you put in front of the $x$,
$z = x^2$ will at some point (note that we do not specify when or where) become larger than $y = ax$. What you need to remember from all this is that polynomials are faster than exponentials ($n^2 < 2^n$)
and logarithms are faster than polynomials ($\log(n) < n$). $n!$ is very slow, $\log(n)$ is very fast.
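To get a feel for how differently these functions grow, here is a small Python sketch (just an illustration, any language would do) that tabulates them for a few input sizes:
\begin{verbatim}
import math

# Tabulate the growth of log n, n^2, 2^n and n! as n increases.
print(f"{'n':>4} {'log2(n)':>8} {'n^2':>6} {'2^n':>10} {'n!':>20}")
for n in [1, 2, 5, 10, 15, 20]:
    print(f"{n:>4} {math.log2(n):>8.2f} {n**2:>6} "
          f"{2**n:>10} {math.factorial(n):>20}")
\end{verbatim}
Already at $n = 20$, $2^n$ has passed a million and $n!$ has 19 digits, while $\log_2(n)$ has barely moved past $4$.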
\subsection{Complex Numbers}\label{sec:complex}
As you all know, in the real numbers ($\mathbb{R}$) square roots of negative numbers are not allowed, as they do not exist. But what would happen if we allowed them to exist? Then we move into the realm of
complex numbers. A complex number consists of two parts, a real part and an imaginary part, written in the form $a + bi$ where $i = \sqrt{-1}$. Complex numbers have all kinds of properties, but what
we need them for is rotations. This is captured in Euler's formula $e^{it} = \cos(t) + i\sin(t)$ \cite{eulerFormula}, which means that as $t$ increases we rotate around the origin (of the complex
plane), tracing out a circle with radius one (the unit circle). Now if you set $t = \pi$ the result is $-1$, so we are halfway around; rearranged as $e^{i\pi} + 1 = 0$, this is Euler's identity. We have come full circle (hah) when $t = 2\pi$, since $e^{2\pi i} = 1$.
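A quick way to convince yourself of this is to evaluate the formula numerically. The sketch below (a minimal illustration using Python's built-in \texttt{cmath} module) walks around the unit circle a quarter turn at a time:
\begin{verbatim}
import cmath, math

# e^{it} for t = 0, pi/2, pi, 3pi/2, 2pi: four quarter turns
# around the unit circle, ending back where we started.
for k in range(5):
    t = k * math.pi / 2
    z = cmath.exp(1j * t)
    print(f"t = {t:5.3f}  ->  e^(it) = {z.real:+.3f} {z.imag:+.3f}i")
\end{verbatim}
Up to floating-point rounding, $t = \pi$ lands on $-1$ and $t = 2\pi$ lands back on $1$, which is exactly the half circle and full circle described above.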