From 1b80c155688ae667ad66aeed465699c04fb55bdc Mon Sep 17 00:00:00 2001
From: Simon Meister
Date: Fri, 13 Oct 2017 14:56:58 +0200
Subject: [PATCH] edit abstract

---
 abstract.tex | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/abstract.tex b/abstract.tex
index 608def8..0dd926b 100644
--- a/abstract.tex
+++ b/abstract.tex
@@ -1,10 +1,10 @@
 \begin{abstract}
 Many state of the art energy-minimization approaches to
 optical flow and scene flow estimation
-rely on a rigid scene model, where the scene is represented as an ensemble of distinct,
-rigidly moving objects, a static background and a moving camera.
-By using a physical scene model, the search space of the optimization problem is significantly
-reduced, enabling higly accurate motion estimation.
+rely on a (piecewise) rigid scene model, where the scene is represented as an ensemble of distinct,
+rigidly moving components, a static background and a moving camera.
+By constraining the optimization problem with a physically sound scene model,
+these approaches enable highly accurate motion estimation.
 With the advent of deep learning methods,
 it has become popular to re-purpose generic deep networks for
 classical computer vision problems involving pixel-wise estimation.
@@ -17,7 +17,8 @@
 and any physical constraints within the scene.
 We introduce an end-to-end deep learning approach for dense motion estimation
 that respects the structure of the scene as being composed of distinct objects,
-thus unifying end-to-end deep networks and a strong physical scene model.
+thus combining the representation learning benefits of end-to-end deep networks
+with a physically plausible scene model.
 Building on recent advances in region-based convolutional networks (R-CNNs),
 we integrate motion estimation with instance segmentation.