mirror of https://github.com/Askill/claude.git
Merge branch 'master' of https://github.com/TechWizzart/claude
commit 7fdef53e59
@ -5,4 +5,21 @@
Potential is the energy change that occurs when the position of an object changes \cite{potential}. There are many potentials, such as electric potential, gravitational potential and elastic potential. Let me explain the concept with an example. Say you are walking up a set of stairs. As your muscles move to bring you one step upwards, the energy your muscles use is converted into gravitational potential. Now imagine you turn around and go down instead. Notice how that is easier? That is because the gravitational potential is converted back into energy, so your muscles have to deliver less energy to get you down. A potential is usually tied to a force, like the gravitational force.
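
To put an illustrative number on this (the mass and step height are arbitrary example values, not model parameters): lifting a mass of $m = 70\,\mathrm{kg}$ up one step of height $h = 0.2\,\mathrm{m}$ against the gravitational acceleration $g \approx 9.81\,\mathrm{m/s^2}$ stores
\[
\Delta E = m g h \approx 70 \cdot 9.81 \cdot 0.2 \approx 137\,\mathrm{J}
\]
of gravitational potential, which is released again on the way down.
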
\subsection{Asymptotic Runtime} \label{sec:runtime}
Asymptotic runtime is what we use in computer science to indicate how fast an algorithm works. We do it this way because concrete time indications (seconds, minutes, hours) are very machine dependent: it matters a lot for the runtime whether your CPU, RAM and GPU are fast or not. Therefore, we needed something machine independent to compare algorithms by, and that is what asymptotic runtime is. We have three notations for asymptotic runtime: $\Omega$, the lower bound of the runtime (not faster than); $O$, the upper bound of the runtime (not slower than); and $\Theta$, the tight bound (not slower but also not faster than). After these notations we write the runtime as a function of the input size. $O(n)$, for instance, means that for an input of size $n$ the algorithm performs at most on the order of $n$ operations, whereas $\Omega(n^3)$ means that for an input of size $n$ it needs at least on the order of $n^3$ operations. This is not an exact match, as there are constants and other terms in the real runtime, but for asymptotic runtime we look only at the most dominant factor, as that outgrows all the other factors as the input size increases. You can see this by plotting the functions $y = x$ and $z = x^2$ on a graphing calculator: no matter which constant $a$ you put in front of the $x$, $z = x^2$ will at some point (note we do not specify when or where) become larger than $y = ax$. What you need to remember from all this is that polynomials grow slower than exponentials ($n^2 < 2^n$ for large $n$) and logarithms grow slower than polynomials ($\log(n) < n$), so polynomial algorithms are faster than exponential ones and logarithmic ones are faster still. $n!$ is very slow, $\log(n)$ is very fast.
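
To make this concrete, here is a small Python sketch (purely illustrative; the constant $a = 1000$ is an arbitrary choice) showing the dominant term overtaking a heavily scaled linear term once the input gets large enough:
\begin{verbatim}
# Compare a scaled linear function with a quadratic one: the quadratic
# eventually wins no matter how large the constant in front of the
# linear term is.
a = 1000  # arbitrary constant in front of the linear term

for n in [10, 100, 1_000, 10_000, 100_000]:
    linear = a * n
    quadratic = n ** 2
    print(n, linear, quadratic, quadratic > linear)

# The last column flips to True as soon as n exceeds a = 1000.
\end{verbatim}
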
\subsection{Complex Numbers} \label{sec:complex}
As you all know, in the real numbers ($\mathbb{R}$) roots of negative numbers are not allowed, as they do not exist there. But what would happen if we allowed them to exist? Then we move into the realm of complex numbers. A complex number consists of two parts, a real part and an imaginary part, written in the form $a + bi$ where $i = \sqrt{-1}$. Complex numbers have all kinds of properties, but what we need them for is rotation. This is captured in Euler's formula $e^{it} = \cos(t) + i\sin(t)$ \cite{eulerFormula}, which means that for time $t$ we rotate around the origin (of the complex plane), tracing a circle with radius one (the unit circle). If you set $t = \pi$ the result is $-1$: half a rotation. We have come full circle (hah) when $t = 2\pi$.
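
You can check this in two lines of Python with the standard \texttt{cmath} module (purely illustrative):
\begin{verbatim}
import cmath

# Half a rotation on the unit circle lands on -1.
print(cmath.exp(1j * cmath.pi))      # approximately (-1+0j)

# A full rotation brings us back to the starting point 1.
print(cmath.exp(2j * cmath.pi))      # approximately (1+0j)
\end{verbatim}
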
@ -51,4 +51,8 @@ only applies to variables in code, every symbol in equations are explained at th
\item $p_0$: The pressure of a latitude, longitude, atmospheric layer gridcell from the previous calculation round.
\item $\alpha_a$: The thermal diffusivity constant for air.
\item $\alpha_p$: The thermal diffusivity constant for the planet surface.
\item $smooth_t$: The smoothing parameter for the temperature.
\item $smooth_u$: The smoothing parameter for the $u$ component of the velocity.
\item $smooth_v$: The smoothing parameter for the $v$ component of the velocity.
\item $smooth_w$: The smoothing parameter for the $w$ component of the velocity.
\end{itemize}

@ -206,6 +206,17 @@ chapter={3},
pages={49}
}

@article{fft,
title={An algorithm for the machine calculation of complex Fourier series},
volume={19},
DOI={10.1090/s0025-5718-1965-0178586-1},
number={90},
journal={Mathematics of Computation},
author={Cooley, James W. and Tukey, John W.},
year={1965},
pages={297--301}
}

%General internet sources
@misc{latlong,
title={Geographic coordinate system},

@ -227,6 +238,16 @@ year={2020},
month={Jun}
}

@misc{eulerFormula,
title={Euler's formula},
howpublished="\url{https://en.wikipedia.org/wiki/Euler%27s_formula}",
journal={Wikipedia},
publisher={Wikimedia Foundation},
author={Boldt, Axel},
year={2020},
month={September}
}

@misc{mole,
title={SI Units - Amount of Substance},
howpublished="\url{https://www.nist.gov/pml/weights-and-measures/si-units-amount-substance}",

@ -351,4 +372,15 @@ publisher={Wikimedia Foundation},
author={Boldt, Axel},
year={2020},
month={Aug}
}

@article{numpy,
title={The NumPy array: a structure for efficient numerical computation},
author={Van Der Walt, Stefan and Colbert, S Chris and Varoquaux, Gael},
journal={Computing in Science \& Engineering},
volume={13},
number={2},
pages={22},
year={2011},
publisher={IEEE Computer Society}
}

@ -1,4 +1,4 @@
\section{Advection} \label{sec:adv}
Advection is a fluid transporting something with it as it flows. This can be temperature, gas, solids or other fluids. In our case we will be looking at temperature.

\subsection{Thermal Diffusion}

@ -106,7 +106,10 @@ definitions can be found in \autoref{alg:model constants}. What the $adv$ boolea
$\delta y \leftarrow \frac{2\pi r}{nlat}$ \Comment*[l]{How far apart the gridpoints in the y direction are (degrees latitude)}
$\alpha_a \leftarrow 2 \cdot 10^{-5}$ \Comment*[l]{The diffusivity constant for the atmosphere}
$\alpha_p \leftarrow 1.5 \cdot 10^{-6}$ \Comment*[l]{The diffusivity constant for the planet surface}
$smooth_t \leftarrow 0.9$ \Comment*[l]{The smoothing parameter for the temperature}
$smooth_u \leftarrow 0.8$ \Comment*[l]{The smoothing parameter for the $u$ component of the velocity}
$smooth_v \leftarrow 0.8$ \Comment*[l]{The smoothing parameter for the $v$ component of the velocity}
$smooth_w \leftarrow 0.3$ \Comment*[l]{The smoothing parameter for the $w$ component of the velocity}
$count \leftarrow 0$ \;
\For{$j \in [0, top]$}{
$heights[j] \leftarrow count$ \Comment*[l]{The height of a layer}

@ -109,4 +109,48 @@ data or work in the other magnitude.
Note that the function \texttt{interpolate} takes three arguments: the first is the data points that we want to have values for, the second is the data points that we know, and the third is the values at the data points that we know. This function may or may not exist in your programming language of choice, which might mean that you have to write it yourself. The formula that we use for interpolation can be found in \autoref{eq:interpolation}, though you still need to figure out what value you need for $\lambda$ (see \autoref{sec:interpolation}). This is left as an exercise for the reader.
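
If your language does not provide it, a minimal sketch of such a function in Python could look as follows (the function and argument names are mine, not prescribed by the model, and it assumes the known points are sorted):
\begin{verbatim}
def interpolate(query_points, known_points, known_values):
    # Linear interpolation following z = (1 - lambda) * x + lambda * y:
    # for each query point, find the surrounding known points and blend
    # their values.
    result = []
    for q in query_points:
        for (x0, y0), (x1, y1) in zip(zip(known_points, known_values),
                                      zip(known_points[1:], known_values[1:])):
            if x0 <= q <= x1:
                lam = (q - x0) / (x1 - x0) if x1 != x0 else 0.0
                result.append((1 - lam) * y0 + lam * y1)
                break
    return result

print(interpolate([2.5], [0, 1, 2, 3], [0, 10, 20, 30]))  # [25.0]
\end{verbatim}
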
\subsection{Clamping the Velocities}
Due to the boundaries in the advection calculations (see \autoref{sec:adv}) we get instabilities, because the velocity calculations are executed on more cells than the advection. This means that air tries to displace temperature (advect it) by flowing faster towards those cells, but does not actually carry any temperature, because we turned advection off for those cells. We need to fix this to get rid of the weirdness around the edges, which is done in \autoref{alg:velocity clamped}. Here $bla:$ means from index $bla$ to the last valid index; if the $:$ is in front of $bla$, it means from the first valid index to $bla$.

\begin{algorithm}
\autoref{alg:velocity}
$u[(adv\_boun, -adv\_boun - 1), :, :] \leftarrow 0.5u[(adv\_boun, -adv\_boun - 1), :, :]$ \;
$v[(adv\_boun, -adv\_boun - 1), :, :] \leftarrow 0.5v[(adv\_boun, -adv\_boun - 1), :, :]$ \;
$w[(adv\_boun, -adv\_boun - 1), :, :] \leftarrow 0.5w[(adv\_boun, -adv\_boun - 1), :, :]$ \;

$u[:adv\_boun, :, :] \leftarrow 0 $\;
$v[:adv\_boun, :, :] \leftarrow 0 $\;
$w[:adv\_boun, :, :] \leftarrow 0 $\;

$u[-adv\_boun:, :, :] \leftarrow 0$ \;
$v[-adv\_boun:, :, :] \leftarrow 0$ \;
$w[-adv\_boun:, :, :] \leftarrow 0$ \;
\caption{Clamping the Velocities}
\label{alg:velocity clamped}
\end{algorithm}
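
In NumPy-style array code this could look roughly as follows (a sketch only; it assumes $u$, $v$, $w$ are three-dimensional arrays and $adv\_boun$ is the integer boundary width used above):
\begin{verbatim}
import numpy as np

def clamp_velocities(u, v, w, adv_boun):
    # Halve the velocities on the first and last latitude rows that still
    # take part in advection, then zero everything outside that band,
    # mirroring the slicing in the algorithm above.
    for a in (u, v, w):
        a[(adv_boun, -adv_boun - 1), :, :] *= 0.5
        a[:adv_boun, :, :] = 0.0
        a[-adv_boun:, :, :] = 0.0
\end{verbatim}
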
\subsection{Smoothing all the things}
On a planet-wide scale you have a lot of variation in the data. To counteract that we filter out the high-frequency data, that is, the variation that occurs so sporadically that it means nothing on this scale. We do this for the radiation (temperature) and the velocity, which is shown in \autoref{alg:smootht} and \autoref{alg:smoothv} respectively. It is worth mentioning that \autoref{alg:smootht} is executed after we do the calculations for $T_a$ (shown in \autoref{alg:optical depth}), while \autoref{alg:smoothv} is done after \autoref{alg:velocity} but before \autoref{alg:velocity clamped}.

\begin{algorithm}
$T_a \leftarrow \texttt{Smooth}(T_a, smooth_t)$ \;
\caption{Smoothing the atmospheric temperature}
\label{alg:smootht}
\end{algorithm}

\begin{algorithm}
$u \leftarrow \texttt{Smooth}(u, smooth_u)$ \;
$v \leftarrow \texttt{Smooth}(v, smooth_v)$ \;
$w \leftarrow \texttt{Smooth}(w, smooth_w)$ \;
\caption{Smoothing the velocity}
\label{alg:smoothv}
\end{algorithm}

@ -189,4 +189,133 @@ Whereas if $z$ is close to $x$ then $\lambda$ will have a value on the lower end
\begin{equation}
z = (1 - \lambda)x + \lambda y
\label{eq:interpolation}
\end{equation}

\subsection{3D smoothing} \label{sec:3dsmooth}
As you can imagine, the temperature, pressure and the like vary quite a lot over the whole planet. That is something we want, but only up to a point: we want to limit how much variation we allow to exist. For this we are going to use Fast Fourier Transforms, also known as FFTs. A Fourier Transform decomposes a wave into its frequencies. The fast bit comes from the algorithm we use to calculate it: doing it the obvious way is very slow, in the order of $O(n^2)$ (for what that means, please visit \autoref{sec:runtime}), whereas the FFT reduces the running time to $O(n\log(n))$. There are various ways to calculate the FFT, but we use the Cooley–Tukey algorithm \cite{fft}. To explain it, let us first dive into a normal Fourier Transform.

The best way to explain what a Fourier Transform does is to apply it to a sound. Sound is vibrations travelling through the air that reach your ear and make the inner part of your ear vibrate. If you plot the air pressure reaching your ear versus time, the result has the form of a sinusoidal wave. However, it is only a straightforward sinusoidal wave (as if you plotted the $\cos$ function) if the tone is pure. That is often not the case, and sounds are combinations of tones. This gives waves that are sinusoidal but not very similar to the $\cos$ function. The FT takes such an ``impure'' wave and splits it up into a set of waves that are all of pure tone. To do that we need complex numbers, which are explained in \autoref{sec:complex}.

With that explanation out of the way, we now know that with Euler's formula (\autoref{eq:euler}) we can rotate on the complex plane. If we rotate one full circle per second, the formula changes to \autoref{eq:euler rotate}, as the circumference of the unit circle is $2\pi$ and $t$ is in seconds. This rotates in the counter-clockwise direction, but we want to rotate in the clockwise direction, so we need to add a $-$ to the exponent. If we also want to control how fast the rotation happens (which is called the frequency), we change the equation to \autoref{eq:euler freq}. Note that the frequency unit is $Hz$, which is defined as $s^{-1}$, so a frequency of $10 Hz$ means 10 revolutions per second. Now we take our wave, which we call $g(t)$, and plonk it in front of the equation we have so far, which results in \autoref{eq:euler wave}. Visually, this means that we take the sound wave and wrap it around the origin. This might sound strange at first, but bear with me. If you track the center of mass (the average of all the points that form the graph) you will notice that it hovers around the origin. If you now change the frequency of the rotation ($f$) you will see that the center of mass moves a bit, usually staying near the origin. However, if the frequency of the rotation matches a frequency of the wave, then the center of mass is suddenly a relatively long distance away from the origin. This indicates that we have found a frequency that composes the sound wave. Now how do we track the center of mass? That is done using integration, as in \autoref{eq:euler int}. To get to the final form, we forget about the fraction part. The center of mass will then still hover around the origin for most frequencies, but takes a huge value when the rotation is at the same frequency as one of the waves in the sound wave. The larger the difference between $t_2$ and $t_1$, the larger the value of the Fourier Transform. The final equation is given in \autoref{eq:ft}.

\begin{subequations}
\begin{equation}
e^{ix} = \cos(x) + i\sin(x)
\label{eq:euler}
\end{equation}
\begin{equation}
e^{2\pi it}
\label{eq:euler rotate}
\end{equation}
\begin{equation}
e^{-2\pi ift}
\label{eq:euler freq}
\end{equation}
\begin{equation}
g(t)e^{-2\pi ift}
\label{eq:euler wave}
\end{equation}
\begin{equation}
\frac{1}{t_2 - t_1}\int^{t_2}_{t_1}g(t)e^{-2\pi ift}\,dt
\label{eq:euler int}
\end{equation}
\begin{equation}
\hat{g}(f) = \int^{t_2}_{t_1}g(t)e^{-2\pi ift}\,dt
\label{eq:ft}
\end{equation}
\end{subequations}
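
A small numerical illustration of the centre-of-mass idea (illustrative only; the sample count and the two test tones at $3$ and $7\,Hz$ are arbitrary choices):
\begin{verbatim}
import numpy as np

# One second of a signal made of two pure tones, 3 Hz and 7 Hz.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
g = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

def fourier_magnitude(g, t, f):
    # Wrap the wave around the origin at rotation frequency f and measure
    # how far the "centre of mass" ends up from the origin.
    return abs(np.sum(g * np.exp(-2j * np.pi * f * t)))

for f in [2, 3, 4, 7, 10]:
    print(f, round(fourier_magnitude(g, t, f), 1))
# The values at f = 3 and f = 7 stand out; the others stay near zero.
\end{verbatim}
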
These Fourier Transforms have the great property that if you add waves together, you still get the relatively large distances of the center of mass from the origin at the original frequencies. It is this property that enables us to find the frequencies that compose a sound wave.

Now that we know what a Fourier Transform is, we need to make it a Fast Fourier Transform, as you can imagine that calculating such a thing is quite difficult. Some smart people have thought
|
||||||
|
about this and they came up with quite a fast algorithm, the Cooley-Tukey algorithm \cite{fft}, named after the people that thought of it. They use something we know as a Discrete Fourier
|
||||||
|
Transform which is described by \autoref{eq:dft}. Here $N$ is the total amount of samples from the continuous sound wave. This means that $0 \leq k \leq N - 1$ and $0 \leq n \leq N - 1$.
|
||||||
|
|
||||||
|
Now with the DFT out of the way we can discuss the algorithm. It makes use of a clever property of the DFT. If you replace $k$ by $(N + k)$, as in \autoref{eq:dftex}, you can split up the
|
||||||
|
exponent which will transform one of the two parts into $1$ due to $e^{-i2\pi n} = 1$ for any integer $n$. This means that $X_{N + k} = X_k \Rightarrow X_{k + iN} = X_k$ for any integer $i$.
|
||||||
|
This symmetry as it is called can be exploited to produce a divide and conquer algorithm, which will recursivley calculate the FT which gives a running time of $N\log(n)$ as shown in
|
||||||
|
\autoref{alg:FFT}.
|
||||||
|
|
||||||
|
\begin{subequations}
\begin{equation}
X_k = \sum_{n = 0}^{N - 1} x_ne^{-\frac{i2\pi}{N}kn}
\label{eq:dft}
\end{equation}
\begin{equation}
X_{N + k} = \sum_{n = 0}^{N - 1} x_ne^{-\frac{i2\pi(N + k)}{N}n} = \sum_{n = 0}^{N - 1} x_ne^{-i2\pi n}e^{-\frac{i2\pi}{N}kn} = \sum_{n = 0}^{N - 1} x_ne^{-\frac{i2\pi}{N}kn}
\label{eq:dftex}
\end{equation}
\end{subequations}
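
A direct, unoptimised translation of \autoref{eq:dft} into Python (a sketch to make the notation concrete, not the fast version):
\begin{verbatim}
import numpy as np

def naive_dft(x):
    # Literal O(N^2) evaluation of X_k = sum_n x_n * exp(-i*2*pi*k*n / N).
    N = len(x)
    return np.array([sum(x[n] * np.exp(-2j * np.pi * k * n / N)
                         for n in range(N))
                     for k in range(N)])

x = np.random.rand(8)
print(np.allclose(naive_dft(x), np.fft.fft(x)))  # True
\end{verbatim}
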
\begin{algorithm}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{array $A$, integer $i$, integer $N$, integer $s$}
\Output{array $B$ of length $N$ containing the DFT}
\uIf{$N = 1$}{
$B[0] \leftarrow A[0]$ \;
} \uElse{
$B[0], \dots, B[\frac{N}{2} - 1] \leftarrow \texttt{FFT}(A, i, \frac{N}{2}, 2s)$ \;
$B[\frac{N}{2}], \dots, B[N - 1] \leftarrow \texttt{FFT}(A, i + s, \frac{N}{2}, 2s)$ \;
\For{$k = 0$ to $\frac{N}{2} - 1$}{
$t \leftarrow B[k]$ \;
$B[k] \leftarrow t + e^{-2i\pi \frac{k}{N}} B[k + \frac{N}{2}]$ \;
$B[k + \frac{N}{2}] \leftarrow t - e^{-2i\pi \frac{k}{N}} B[k + \frac{N}{2}]$ \;
}
}
\Return{B}
\caption{One dimensional Fast Fourier Transformation}
\label{alg:FFT}
\end{algorithm}
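
A possible Python rendering of \autoref{alg:FFT} (same recursive structure and argument names as the pseudocode; it assumes the input length is a power of two):
\begin{verbatim}
import numpy as np

def fft(A, i=0, N=None, s=1):
    # Radix-2 Cooley-Tukey FFT of the N samples A[i], A[i+s], A[i+2s], ...
    if N is None:
        N = len(A)          # must be a power of two
    if N == 1:
        return [A[i]]
    # Even-indexed samples go in the first half, odd-indexed in the second.
    B = fft(A, i, N // 2, 2 * s) + fft(A, i + s, N // 2, 2 * s)
    for k in range(N // 2):
        t = B[k]
        twiddle = np.exp(-2j * np.pi * k / N)
        B[k] = t + twiddle * B[k + N // 2]
        B[k + N // 2] = t - twiddle * B[k + N // 2]
    return B

x = np.random.rand(8)
print(np.allclose(fft(list(x)), np.fft.fft(x)))  # True
\end{verbatim}
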
There is just one problem left: the algorithm in \autoref{alg:FFT} can only handle one dimension, and we need to support multidimensional arrays. So let us first define a multidimensional DFT in \autoref{eq:NFFT}. Here $M$ is the number of indices for the first dimension and $N$ the number of indices for the second dimension. This can of course be extended in the same way to a $p$-dimensional array.

\begin{equation}
X_{k,l} = \sum_{m = 0}^{M - 1}\sum_{n = 0}^{N - 1} x_{m, n}e^{-2i\pi(\frac{mk}{M} + \frac{nl}{N})}
\label{eq:NFFT}
\end{equation}

It is at this point that the algorithm becomes very complicated. Therefore I would like to invite you to use a library for these kinds of calculations, like NumPy \cite{numpy} for Python. If you really want to use your own version, you will need to wait a bit while I try to decode the literature on the algorithm for it. It is one heck of a thing to decode, so I decided to treat it at a later point; it will then be found here, so stay tuned. Sorry about that, it's quite complex...
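
For reference, with NumPy the forward and inverse multidimensional transforms are a single call each; the array shape below is just an example:
\begin{verbatim}
import numpy as np

a = np.random.rand(32, 64, 8)       # e.g. (nlat, nlon, nlevels)
spectrum = np.fft.fftn(a)           # multidimensional DFT
back = np.fft.ifftn(spectrum).real  # inverse transform, real part only
print(np.allclose(a, back))         # True
\end{verbatim}
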
With that out of the way (or rather on the TODO list), we need to create a smoothing operation out of it. We do this in \autoref{alg:smooth}. Keep in mind that \texttt{FFT} is the call to the multidimensional Fast Fourier Transform algorithm, \texttt{IFFT} is the call to its inverse (also on the TODO list) and the $int()$ function ensures that the number in brackets is an integer. Also note that the inverse FFT might give complex answers, while we only want real answers, which is what $.real$ ensures: we only take the real part and return that.

\begin{algorithm}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Array $a$, smoothing factor $s$}
\Output{Array $A$ with less variation}
$nlat \leftarrow a.length$ \;
$nlon \leftarrow a[0].length$ \;
$nlevels \leftarrow a[0][0].length$ \;
$temp \leftarrow \texttt{FFT}(a)$ \;
$temp[int(nlat s):int(nlat(1 - s)),:,:] \leftarrow 0$ \;
$temp[:,int(nlon s):int(nlon(1 - s)),:] \leftarrow 0$ \;
\Return $\texttt{IFFT}(temp).real$ \;
\caption{Smoothing function}
\label{alg:smooth}
\end{algorithm}
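
A NumPy sketch of \autoref{alg:smooth} (a literal transcription of the pseudocode; note that the zeroed slice only selects a non-empty middle band of coefficients if $int(nlat \cdot s) < int(nlat \cdot (1 - s))$, so depending on how the smoothing parameter is defined the two bounds may need to be swapped):
\begin{verbatim}
import numpy as np

def smooth(a, s):
    # Go to the frequency domain, zero a block of coefficients along the
    # latitude and longitude axes, transform back and keep the real part.
    nlat, nlon, nlevels = a.shape
    temp = np.fft.fftn(a)
    temp[int(nlat * s):int(nlat * (1 - s)), :, :] = 0
    temp[:, int(nlon * s):int(nlon * (1 - s)), :] = 0
    return np.fft.ifftn(temp).real
\end{verbatim}
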