initial abstract

Simon Meister 2017-10-12 22:00:26 +02:00
parent db8d976028
commit 8b1bf5c14b


@@ -1,9 +1,31 @@
% INFO
% Abstract.tex
% For the abstract of the thesis or dissertation
\begin{abstract}
This is the abstract of the thesis.
It summarizes the content of the scientific work in a neutral, brief, and concise manner.
\end{abstract}
Many state-of-the-art energy-minimization approaches to optical flow and scene flow estimation
rely on a rigid scene model, in which the scene is represented as an ensemble of distinct,
rigidly moving objects, a static background and a moving camera.
Such a physical scene model significantly reduces the search space of the optimization problem,
enabling highly accurate motion estimation.
With the advent of deep learning methods, it has become popular to re-purpose generic deep networks
for classical computer vision problems involving pixel-wise estimation.
Following this trend, many recent end-to-end deep learning approaches to optical flow
and scene flow directly predict full-resolution
depth and flow fields with a generic network for dense, pixel-wise prediction,
thereby ignoring the inherent structure of the underlying motion estimation problem
and any physical constraints within the scene.
We introduce an end-to-end deep learning approach for dense motion estimation
that respects the structure of the scene as being composed of distinct objects,
thus unifying end-to-end deep networks and a strong physical scene model.
Building on recent advances in region-based convolutional networks (R-CNNs), we integrate motion
estimation with instance segmentation.
Given two consecutive frames from a monocular RGB-D camera,
our resulting end-to-end deep network detects objects with accurate per-pixel masks
and estimates the 3D motion of each detected object between the frames.
By additionally estimating a global camera motion in the same network, we compose a dense
optical flow field based on instance-level motion predictions.
We demonstrate the effectiveness of our approach on the KITTI 2015 optical flow benchmark.
\end{abstract}
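
As a minimal, hedged sketch of the flow composition described in the abstract (the notation below is assumed for illustration and is not defined in the abstract itself): with a pinhole camera of intrinsics $K$, per-pixel depth $z(\mathbf{x})$ from the RGB-D input, and a predicted rigid motion $(R_k, t_k)$ for the instance $k$ whose mask covers pixel $\mathbf{x}$ (the estimated global camera motion playing the same role for background pixels), a dense optical flow field can be composed as

% Hedged sketch, not part of the thesis source: K, z and (R_k, t_k) are assumed
% notation; the thesis may parameterize the instance and camera motions differently.
\begin{align*}
P(\mathbf{x}) &= z(\mathbf{x})\, K^{-1} \tilde{\mathbf{x}} && \text{back-project pixel } \mathbf{x} \text{ to a 3D point}\\
P'(\mathbf{x}) &= R_k\, P(\mathbf{x}) + t_k && \text{apply the rigid motion of instance } k\\
\mathbf{w}(\mathbf{x}) &= \pi\bigl(K\, P'(\mathbf{x})\bigr) - \mathbf{x} && \text{re-project and take the displacement as flow}
\end{align*}

where $\tilde{\mathbf{x}}$ is the homogeneous pixel coordinate and $\pi([x, y, z]^\top) = (x/z,\, y/z)^\top$ denotes perspective projection.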