edit abstract

Simon Meister 2017-10-13 14:56:58 +02:00
parent 146dd61ed2
commit 1b80c15568


@@ -1,10 +1,10 @@
\begin{abstract}
Many state-of-the-art energy-minimization approaches to optical flow and scene flow estimation
-rely on a rigid scene model, where the scene is represented as an ensemble of distinct,
-rigidly moving objects, a static background and a moving camera.
-By using a physical scene model, the search space of the optimization problem is significantly
-reduced, enabling higly accurate motion estimation.
+rely on a (piecewise) rigid scene model, where the scene is represented as an ensemble of distinct,
+rigidly moving components, a static background and a moving camera.
+By constraining the optimization problem with a physically sound scene model,
+these approaches enable highly accurate motion estimation.
With the advent of deep learning methods, it has become popular to re-purpose generic deep networks
for classical computer vision problems involving pixel-wise estimation.
@@ -17,7 +17,8 @@ and any physical constraints within the scene.
We introduce an end-to-end deep learning approach for dense motion estimation
that respects the structure of the scene as being composed of distinct objects,
-thus unifying end-to-end deep networks and a strong physical scene model.
+thus combining the representation learning benefits of end-to-end deep networks
+with a physically plausible scene model.
Building on recent advances in region-based convolutional networks (R-CNNs), we integrate motion
estimation with instance segmentation.
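
For reference, the (piecewise) rigid scene model that the revised abstract refers to is conventionally parameterized by one rigid motion per scene component. The LaTeX snippet below is a generic sketch of that standard formulation; the symbols $R_k$, $t_k$, $X$, $\pi$ and $w$ are chosen here for illustration and are not notation taken from the paper.

% Sketch of a piecewise rigid scene model (illustrative notation, not the paper's):
% a 3D point X on rigid component k moves with rotation R_k and translation t_k,
\begin{equation}
  X' = R_k X + t_k, \qquad R_k \in SO(3), \quad t_k \in \mathbb{R}^3,
\end{equation}
% and the induced image motion at pixel x = \pi(X) is the difference of projections
\begin{equation}
  w(x) = \pi\left(R_k X + t_k\right) - \pi(X).
\end{equation}
% Each component thus contributes only six motion parameters, which is the
% search-space reduction that the energy-minimization approaches exploit.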