Gaps, layout and minor shifts in Chapters 6 and 7

parent 364c651e
@@ -326,15 +326,15 @@ imshow(mix);
\end{pmatrix}
\]
and this implies
\begin{eqnarray*}
M &=& V \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} V^\top \\
&=& \frac{1}{\abs{\nabla u(\x)}}
\begin{pmatrix}
u_x & -u_y \\
u_y & u_x
\end{pmatrix}
\begin{pmatrix}
\lambda_1 & 0 \\
0 & \lambda_2
\end{pmatrix}
\frac{1}{\abs{\nabla u(\x)}}
\begin{pmatrix}
u_x & u_y \\
-u_y & u_x
\end{pmatrix} \\
&=& \frac{1}{\abs{\nabla u(\x)}^2}
\begin{pmatrix}
\lambda_1 u_x & -\lambda_2 u_y \\
\lambda_1 u_y & \lambda_2 u_x
\end{pmatrix}
\begin{pmatrix}
u_x & u_y \\
-u_y & u_x
\end{pmatrix} \\
&=& \frac{1}{\abs{\nabla u(\x)}^2}
\begin{pmatrix}
\lambda_1 u_x^2 + \lambda_2 u_y^2 & (\lambda_1 - \lambda_2) u_x u_y \\
(\lambda_1 - \lambda_2) u_x u_y & \lambda_2 u_x^2 + \lambda_1 u_y^2
\end{pmatrix}.
\end{eqnarray*}
If $\abs{\nabla u(\x)} = 0$, we set $M = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. It is also a good idea to apply pre-smoothing to $u$ before computing $M$.
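The closed form derived above translates directly into code. Below is a small numpy sketch (the function name and interface are illustrative, not from the lecture) that assembles M from the two gradient components and the two eigenvalues, with the identity fallback for a vanishing gradient:

```python
import numpy as np

def diffusion_tensor(ux, uy, lam1, lam2):
    """Assemble M = V diag(lam1, lam2) V^T for the gradient (ux, uy).

    Uses the closed form from the derivation above; for a vanishing
    gradient we fall back to the identity matrix.
    """
    norm2 = ux**2 + uy**2          # |grad u|^2
    if norm2 == 0.0:
        return np.eye(2)
    return np.array([
        [lam1 * ux**2 + lam2 * uy**2, (lam1 - lam2) * ux * uy],
        [(lam1 - lam2) * ux * uy,     lam2 * ux**2 + lam1 * uy**2],
    ]) / norm2
```

The result can be checked against the factored form V diag(lam1, lam2) V^T, where the columns of V are the unit gradient direction and its 90-degree rotation.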
@@ -47,7 +47,9 @@
\draw (6.5,-5.72) node[circle,fill,inner sep=1pt] {};
\end{tikzpicture}
\end{center}
In the above sketch, one can see that the edges are enhanced by subtracting $f''$ from $f$, scaled by a suitably chosen factor $\tau > 0$. If some of the resulting values lie outside of the color space, rescaling might be necessary. In higher dimensions, we use the Laplacian operator instead of the second derivative, which yields
\[ f - \tau \cdot \Delta f. \]
@@ -57,7 +59,9 @@
={} &g*f - \tau \cdot ((\Delta g) * f) \\
={} &(g - \tau \cdot \Delta g) * f.
\end{align*}
Tip: Choose a $\tau$ that is not too large and apply the procedure iteratively.
\vgap{140mm}
\exRef{ex:laplaceSharpening}

\section{Deconvolution}\index{Deconvolution}
\begin{center}
@@ -80,12 +84,14 @@
\end{align*}
This leads to a $\abs{\Omega} \times \abs{\Omega}$ system of linear equations. In 1D:
\begin{align*}
\pap{
\begin{pmatrix}
g(0) & g(-1) & \dots & g(-n) \\
g(1) & g(0) & \ddots & \vdots \\
\vdots & \ddots & \ddots & g(-1) \\
g(n) & \dots & g(1) & g(0)
\end{pmatrix}
}
\begin{pmatrix}
f(0) \\
f(1) \\
\vdots \\
f(n)
\end{pmatrix}
=
\begin{pmatrix}
u(0) \\
u(1) \\
\vdots \\
u(n)
\end{pmatrix}
\end{align*}
A matrix whose entries are constant along each diagonal, like the one featured above, is called a \emph{Toeplitz matrix}\index{Toeplitz matrix}.
\vgap{30mm}
\exRef{Ex.~\ref{ex:deblurring}}

\pa{Continuous Case}:\\
@@ -144,13 +151,13 @@
\end{flalign*}
\pa{Boundedness:}\\
Suppose $A \colon \Ell^1(\Omega) \to \Ell^1(\Omega)$.
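The sharpening rule $f - \tau \cdot \Delta f$ from the beginning of this section can be sketched in a few lines of numpy. The function name, the periodic boundary handling via np.roll, and the clipping back into [0, 1] are implementation choices of this sketch, not prescribed by the text:

```python
import numpy as np

def laplace_sharpen(f, tau=0.2, iterations=1):
    """Sharpen an image by computing f - tau * Laplacian(f), iteratively.

    Assumptions of this sketch: periodic boundaries (np.roll) and
    clipping into [0, 1] instead of rescaling the color space.
    """
    f = np.asarray(f, dtype=float)
    for _ in range(iterations):
        # 5-point stencil discrete Laplacian
        lap = (np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0)
               + np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1) - 4.0 * f)
        f = np.clip(f - tau * lap, 0.0, 1.0)
    return f
```

On a flat region the Laplacian vanishes and the image is unchanged; across an edge the contrast is pushed apart, which is exactly the over/undershoot visible in the sketch above.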
Then,
\begin{eqnarray*}
\norm{Af}_1 &=& \norm{u}_1 = \int_\Omega \abs{u(x)} \d x \\
&=& \int_\Omega \abs{\int_\Omega k(x,y) f(y) \d y} \d x \\
&\leq& \vwhite{5mm}{\int_\Omega \int_\Omega \abs{k(x,y)} \cdot \abs{f(y)} \d y \d x} \\
&=& \vwhite{5mm}{\int_\Omega \left( \int_\Omega \abs{k(x,y)} \d x \right) \abs{f(y)} \d y} \\
&\leq& \underbrace{\left( \sup_{y \in \Omega} \int_\Omega \abs{k(x,y)} \d x \right)}_{=\,\norm{A}_{1,1}} \cdot \underbrace{\int_\Omega \abs{f(y)} \d y}_{=\,\norm{f}_1}.
\end{eqnarray*}
The value $\norm{A}_{1,1}$ (the continuous column sum norm of $A$) is the best possible constant $C$ satisfying $\norm{Af}_1 \leq C \cdot \norm{f}_1$.
@@ -198,7 +205,18 @@
Our integral operator $A$ with continuous kernel $k$ can now be uniformly approximated by finite matrices (in the operator norm, $\norm{A - A_n} \to 0$). Then $A$ itself is compact. What does this mean for solving equations of the form $Af = u$?

Let $f_1, f_2, \ldots \in X$ with $\norm{f_n} = 1$ and $\norm{f_m - f_n} \geq 1$ for all $m \neq n$. (We list some examples of such function sequences below.) Then the sequence $(f_n)_{n\in\N} \subset X$ is bounded, implying, by the compactness of $A$, that $(Af_n)$ has a convergent subsequence.
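A discrete sanity check of the boundedness estimate: for a matrix K standing in for the kernel k(x, y), the maximal absolute column sum plays the role of the continuous column sum norm, and the inequality holds for every vector f. The random test below only illustrates the inequality, it is not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(50, 50))          # discrete stand-in for the kernel k(x, y)
bound = np.abs(K).sum(axis=0).max()    # column sum norm, analogue of ||A||_{1,1}
assert np.isclose(bound, np.linalg.norm(K, 1))  # numpy's induced 1-norm

for _ in range(100):
    f = rng.normal(size=50)
    # ||K f||_1 <= ||K||_{1,1} * ||f||_1, with a tiny floating-point slack
    assert np.abs(K @ f).sum() <= bound * np.abs(f).sum() + 1e-12
```

`np.linalg.norm(K, 1)` is exactly the induced matrix 1-norm (maximal absolute column sum), matching the hand-derived bound.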
Since each convergent sequence is also a Cauchy sequence, there exist for every $\varepsilon > 0$ natural numbers $m, n$ with $m \neq n$ such that
\[ \norm{Af_m - Af_n} < \varepsilon \qquad\text{even though}\qquad \norm{f_m - f_n} \geq 1. \]
\vgap{50mm}
Thus, arbitrarily small changes in $Af$ can imply large changes in $f$. Put differently, the solution ($f$) does not depend continuously on the input data ($Af$), making $Af = u$ an \emph{ill-posed problem} in the sense of Hadamard.
\vgap{40mm}
\exRef{Ex.~\ref{ex:compactOperators}}
\begin{thinkbox}{Examples of bounded sequences with $\norm{f_m - f_n} \geq 1$ for all $m \neq n$}
\begin{itemize}
\item $X = \Ell^1([0,1])$: The so-called \emph{Rademacher functions} are given by
$$f_n(x) \coloneqq \sgn(\sin(2^n \pi x)).$$
\begin{center}
@@ -238,10 +256,9 @@
\item $X = \ell^2(\N)$: Same example as for $\ell^1(\N)$. Analogously to the continuous case, the sequences again form an ONS and we have $\norm{f_n}_2 = 1$ as well as $\norm{f_m - f_n}_2 = \sqrt{2}$.
\end{itemize}
\end{thinkbox}
We will try solving the problem regardless, with the help of the Fourier transform:
\begin{align*}
& &g*f &= u \\
@@ -251,17 +268,20 @@
&\implies &f &= (2\pi)^{-d/2} \cF^{-1}\left(\frac{\cF(u)}{\cF(g)}\right).
\end{align*}
We have obtained a closed-form solution for $f$ -- so where is the problem?
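The Rademacher example from the thinkbox can be verified numerically. On a midpoint grid of [0, 1] the discrete L^1 norms are exact for these piecewise constant functions (grid size and function names are choices of this sketch):

```python
import numpy as np

N = 4096
x = (np.arange(N) + 0.5) / N              # midpoint grid on [0, 1]

def rademacher(n):
    """Rademacher function f_n(x) = sgn(sin(2^n pi x)) on the grid."""
    return np.sign(np.sin(2**n * np.pi * x))

def norm1(f):
    """Discrete L^1([0,1]) norm: mean of |f| over the midpoint grid."""
    return np.abs(f).mean()

f1, f2 = rademacher(1), rademacher(2)
assert np.isclose(norm1(f1), 1.0)         # ||f_n||_1 = 1
assert np.isclose(norm1(f1 - f2), 1.0)    # ||f_m - f_n||_1 >= 1
```

f1 and f2 differ (with |f1 - f2| = 2) on exactly half of the interval, which gives the L^1 distance of 1 claimed in the text.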
In a smoothing filter $A \colon f \mapsto g*f$, the kernel $g$ is continuous. This implies little to no rapid changes in the function values of $g$, which in the frequency domain translates to small values at high frequencies, i.e.\ $\hat g(z) \approx 0$ for large $\abs{z}$. This in turn causes problems when dividing by $\hat g$, which happens in our closed-form solution. Small perturbations in $\hat u(z)$ will cause large perturbations in the quotient $\hat u(z) / \hat g(z)$, particularly for large frequencies $z$.

Alternatively: If $\abs{\hat g(z)}$ is small for large frequencies, then $A \colon f \mapsto g*f$ is a low-pass filter. For any high-frequency function $h$, we then have
\[ A(f+h) = Af + \underbrace{Ah}_{\approx \, 0} \approx Af, \]
even though $f+h \not\approx f$. This is the same problem we have identified using the compactness of $A$.
\vgap{20mm}
\noindent\begin{minipage}{0.5\linewidth}
To solve this problem, our {\bf first approach} is to approximate the function $1/x$ using
\[
R_\alpha(x) = \begin{cases}
@@ -302,7 +322,7 @@
\]
which approximates $f$ for values of $\alpha$ that are close to $0$. Our {\bf second approach} will be using variational calculus. We have two wishes:
\begin{enumerate}
\item $g*f \approx u$, and
\item
@@ -314,8 +334,8 @@
We first consider 1.\ and 2.a):
\begin{align*}
J(f) &\ \coloneqq\ \underbrace{\norm{g*f - u}_2^2}_{\text{data term}} + \lambda \underbrace{\norm{f}_2^2}_{\text{regularizer}} \\
&\ =\ \int_{\R^d} \abs{(g*f)(\x) - u(\x)}^2 \d\x + \lambda \int_{\R^d} \abs{f(\x)}^2 \d\x.
\end{align*}
This is known as $\Ell^2$-deconvolution or $\Ell^2$-deblurring.
@@ -354,11 +374,11 @@
which is a real-valued minimization problem with respect to $\abs{t}$. Setting the derivative w.r.t.~$\abs{t}$ equal to zero leads to
\begin{align*}
&0 = \frac{\partial I_\z(t)}{\partial \abs{t}} = 2 \left((2\pi)^d \hat g(\z) \overline{\hat g(\z)} + \lambda\right) \abs{t} - (2\pi)^{d/2} \cdot 2 \cdot \abs{\overline{\hat g(\z)} \hat u(\z)} \\
\implies{} &\abs{t} = \frac{(2\pi)^{d/2} \abs{\overline{\hat g(\z)} \hat u(\z)}}{(2\pi)^d \hat g(\z) \overline{\hat g(\z)} + \lambda} \quad\text{and}\quad \arg(t) = \arg(\overline{\hat g(\z)} \hat u(\z)) \\
\implies{} &t = \frac{(2\pi)^{d/2} \overline{\hat g(\z)} \hat u(\z)}{(2\pi)^d \hat g(\z) \overline{\hat g(\z)} + \lambda} \eqqcolon \hat f_{\mathrm{de}}(\z).
\end{align*}
With the help of the inverse Fourier transform and the convolution theorem, we retrieve $f_\mathrm{de}$:
\begin{equation} \label{eq:fde}
f_\mathrm{de}(\x) = \underbrace{\cF^{-1}\left(\frac{\overline{\hat g(\z)}}{(2\pi)^d \abs{\hat g(\z)}^2 + \lambda}\right)}_{\eqqcolon \, h} *\; u.
\end{equation}
The $\Ell^2$ deconvolution of $u = g*f$ is $f_\mathrm{de} = h * u$, and thus by the associativity of convolution,
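In the Fourier domain, the regularized solution derived above is a pointwise filter: conj(ghat) * uhat / (|ghat|^2 + lam). The numpy sketch below applies it to a blurred and slightly noisy box signal; the parameters are illustrative, and np.fft's convention absorbs the $(2\pi)^d$ factors:

```python
import numpy as np

n = 256
x = np.arange(n)
f = (np.abs(x - n / 2) < 30).astype(float)        # ground-truth box signal
g = np.exp(-0.5 * ((x - n / 2) / 4.0) ** 2)
g /= g.sum()                                      # normalized Gaussian kernel
ghat = np.fft.fft(np.fft.ifftshift(g))

rng = np.random.default_rng(1)
u = np.fft.ifft(ghat * np.fft.fft(f)).real        # blurred data u = g * f ...
u = u + 1e-6 * rng.normal(size=n)                 # ... with a tiny perturbation

# L^2 deconvolution in the Fourier domain (lam > 0 is the regularization weight)
lam = 1e-8
fde_hat = np.conj(ghat) * np.fft.fft(u) / (np.abs(ghat) ** 2 + lam)
f_de = np.fft.ifft(fde_hat).real
```

The denominator is now bounded below by lam, so the noise amplification that destroyed the naive inverse is capped; only some mild ringing near the edges of the box remains.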
@@ -381,6 +401,7 @@
\implies{} &(A^* A + \abs{\lambda} I) f_\text{de} = A^* u \\
\implies{} &f_\text{de} = (A^* A + \abs{\lambda} I)^{-1} A^* u.
\end{align*}
The result is the same as \eqref{eq:fde}, as close inspection shows.\\[4mm]
The inverse, which we used in the last step, exists because $(A^* A + \abs{\lambda} I) x = 0$ could only be solved by eigenvectors of $A^* A$ corresponding to the eigenvalue $-\abs{\lambda}$. But since $A^* A$ is positive definite, its spectrum lies on the positive real axis, hence $-\abs{\lambda}$ is not an eigenvalue and $(A^* A + \abs{\lambda} I) x = 0$ has no non-trivial solution.
\end{thinkbox}
@@ -422,7 +443,8 @@
$\dfrac{\overline{\hat g(\z)}}{(2\pi)^d \abs{\hat g(\z)}^2}$ &
$\dfrac{\overline{\hat g(\z)}}{(2\pi)^d \abs{\hat g(\z)}^2 + \lambda}$ &
$\dfrac{\overline{\hat g(\z)}}{(2\pi)^d \abs{\hat g(\z)}^2 + \lambda \abs{\z}^2}$
\end{tabular}
\end{center}
\vgap{60mm}
\section{References and Further Reading}
\begin{itemize}
\item \cite[Section 4.1]{bredies}
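The agreement between the operator form and the Fourier form can be checked numerically in finite dimensions: for a real circulant (periodic) blur matrix A, the adjoint A* is the transpose, and the DFT diagonalizes A, so the normal-equations solution must coincide with the Fourier-domain filter. Matrix size, kernel width, and lambda below are arbitrary choices of this sketch:

```python
import numpy as np

n = 64
rng = np.random.default_rng(2)
k = np.minimum(np.arange(n), n - np.arange(n))      # periodic distance to 0
g = np.exp(-0.5 * (k / 3.0) ** 2)
g /= g.sum()                                        # periodic Gaussian kernel
A = np.array([[g[(i - j) % n] for j in range(n)] for i in range(n)])  # circulant
u = rng.normal(size=n)
lam = 1e-2

# Operator form: f_de = (A^T A + lam I)^{-1} A^T u
f_ne = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ u)

# Fourier form: fde_hat = conj(ghat) * uhat / (|ghat|^2 + lam)
ghat = np.fft.fft(g)
f_ft = np.fft.ifft(np.conj(ghat) * np.fft.fft(u) / (np.abs(ghat) ** 2 + lam)).real

assert np.allclose(f_ne, f_ft)
```

Since the DFT turns the circulant A into multiplication by ghat (and A^T into multiplication by its conjugate), both expressions apply the same filter, which the final assertion confirms.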