Multiple View Geometry
Rescooped by Yezi from Unmanned Aerial Vehicles (UAV)

Swarm technology for mapping by SenseFly

The technology behind senseFly’s multiple-drone operation system first emerged in 2010 at the Laboratory of Intelligent Systems, EPFL, when a team of robotics researchers showcased the first outdoor aerial collective system, with up to 10 robots flying together. The technology was then adapted by senseFly’s R&D team and first successfully demonstrated last June at the Paris Air Show, when two eBees mapped Le Bourget.

It is now fully integrated in senseFly’s ground control software, eMotion 2. Operators can use a single interface to control multiple drones, allowing greater areas to be covered in less time. These drones have automated in-flight collision avoidance and share start and landing waypoints while coordinating their altitudes.

Other new and exciting enhancements in our system include:
- 3D flight planning. Maintaining a constant distance over the ground on steep terrain minimizes variation in image pixel resolution, providing better image quality and results.
- Google Earth visualization. The ability to check a flight plan within a 3D environment increases the safety of the operation.
- Flight data management. Review previous flights, find flight logs and corresponding images, create geotags and transfer projects for automatic image reconstruction.
- 3D viewer and editor for image processing. Provides the ability to view and edit in a unique 3D environment in order to improve processing results.
- Optional Canon S110. Great lens, user access to RAW files, manual setting of exposure parameters.
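The "constant over ground distance" idea in the 3D flight planning feature can be sketched in a few lines. This is an illustration of the principle only, not senseFly's actual algorithm: each waypoint's absolute altitude is set to the terrain elevation plus a fixed above-ground-level (AGL) height, so the ground sampling distance stays roughly uniform on slopes.

```python
# Illustrative sketch (not senseFly's implementation): keep a constant
# above-ground-level (AGL) altitude over varying terrain so the ground
# sampling distance, and hence image resolution, stays roughly uniform.

def waypoint_altitudes(terrain_elevations_m, agl_m):
    """Absolute altitude for each waypoint: terrain height + desired AGL."""
    return [elev + agl_m for elev in terrain_elevations_m]

# A steep slope rising from 400 m to 700 m, flown at 120 m AGL:
terrain = [400, 450, 520, 610, 700]
altitudes = waypoint_altitudes(terrain, 120)
print(altitudes)  # [520, 570, 640, 730, 820]
```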

Via Nigel Brown
Rescooped by Yezi from Computer Vision Highlights

GTSAM 2.3.1


GTSAM is a library of C++ classes that implement smoothing and mapping (SAM) in robotics and vision, using factor graphs and Bayes networks as the underlying computing paradigm rather than sparse matrices. It can be used to solve SLAM or structure from motion optimization problems.
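The smoothing-and-mapping formulation GTSAM implements can be illustrated with a toy example. The sketch below uses plain NumPy rather than GTSAM's C++ API: each factor (a prior, odometry measurements, a loop closure) contributes one residual row, and the MAP estimate is the least-squares solution, which is what a factor-graph optimizer computes at each linearization step.

```python
# Toy illustration of the smoothing-and-mapping (SAM) idea behind GTSAM,
# in pure NumPy (not the GTSAM API). Unknowns: three 1-D robot poses
# x0, x1, x2. Factors: prior x0 ~ 0, odometry x1 - x0 ~ 1.0 and
# x2 - x1 ~ 1.0, and a loop closure measuring x2 - x0 ~ 2.2.
import numpy as np

A = np.array([
    [1.0,  0.0, 0.0],   # prior on x0
    [-1.0, 1.0, 0.0],   # odometry x0 -> x1
    [0.0, -1.0, 1.0],   # odometry x1 -> x2
    [-1.0, 0.0, 1.0],   # loop closure x0 -> x2
])
b = np.array([0.0, 1.0, 1.0, 2.2])

# Least-squares solve of the factor residuals; the conflicting odometry
# and loop-closure measurements are reconciled by spreading the error.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)
```

In a real problem the factors are nonlinear (poses live on SE(3) or Sim(3)) and the Jacobian `A` is sparse, which is exactly where GTSAM's factor-graph machinery pays off.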


Via CVScoops
Rescooped by Yezi from Computer Vision Highlights

LSD-SLAM: Large-Scale Direct Monocular SLAM (ECCV '14)

LSD-SLAM is a novel, direct monocular SLAM technique: instead of using keypoints, it operates directly on image intensities for both tracking and mapping. The camera is tracked using direct image alignment, while geometry is estimated in the form of semi-dense depth maps, obtained by filtering over many pixelwise stereo comparisons. We then build a Sim(3) pose graph of keyframes, which allows building scale-drift-corrected, large-scale maps including loop closures. LSD-SLAM runs in real time on a CPU, and even on a modern smartphone.
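The "direct" alignment idea can be shown with a toy example. This sketch is not the LSD-SLAM code: it aligns two 1-D "images" by comparing raw intensities and picking the shift that minimizes the photometric error, rather than extracting and matching keypoints. (LSD-SLAM minimizes the same kind of photometric error, but over a 6-DoF warp with Gauss-Newton instead of a brute-force search.)

```python
# Toy illustration (not the LSD-SLAM code) of direct alignment:
# instead of matching keypoints, compare raw intensities and pick the
# shift that minimizes the photometric error between two 1-D "images".

def photometric_error(ref, cur, shift):
    """Sum of squared intensity differences after shifting `cur`."""
    overlap = range(max(0, -shift), min(len(ref), len(cur) - shift))
    return sum((ref[i] - cur[i + shift]) ** 2 for i in overlap)

def align(ref, cur, max_shift=3):
    """Brute-force direct alignment over integer shifts."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: photometric_error(ref, cur, s))

ref = [0, 0, 10, 30, 10, 0, 0, 0]
cur = [0, 0, 0, 10, 30, 10, 0, 0]   # same signal, shifted right by 1
print(align(ref, cur))  # 1
```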


Via CVScoops
Rescooped by Yezi from Computer Vision Highlights

Multiple View Geometry - Lecture 1 (Prof. Daniel Cremers)

First lecture in a series on Multiple View Geometry by Prof. Dr. Daniel Cremers (TU München), published since May 2013.
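A central object in any multiple view geometry course is the epipolar constraint: for calibrated cameras related by rotation R and translation t, corresponding normalized image points satisfy x2ᵀ E x1 = 0, where E = [t]× R is the essential matrix. The sketch below (a synthetic example, not course material) verifies this numerically.

```python
# Numerical check of the epipolar constraint x2^T E x1 = 0 with
# E = [t]_x R, for a synthetic 3-D point and camera motion X2 = R X1 + t.
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix [t]_x, so skew(t) @ v = t x v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

X = np.array([0.5, -0.2, 4.0])             # 3-D point in camera-1 frame
theta = 0.1                                # small rotation about the y-axis
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.3, 0.0, 0.0])              # baseline between the cameras

x1 = X / X[2]                              # normalized image point, camera 1
X2 = R @ X + t                             # same point in camera-2 frame
x2 = X2 / X2[2]                            # normalized image point, camera 2

E = skew(t) @ R                            # essential matrix
print(x2 @ E @ x1)                         # ~0, up to floating-point error
```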


Via CVScoops
Indranil Sinharoy's comment, July 18, 2013 6:21 PM
This is great!!