Thesis projects at the Machine Vision and Pattern Recognition (MVPR) laboratory

Below you can find thesis topics available at MVPR.

Please note that thesis projects are not paid jobs.

Recommended templates for writing an MSc or BSc thesis or a course project report: thesis_fin.doc, thesis_fin.pdf

LaTeX templates can be found in Documents and Templates.

Available topics

Optimal Codebook for Visual Bag of Words

Visual Bag-of-Words (BoW) has become one of the key tools for Google-style image-based search. In this project you will work with existing code and data and write a machine learning algorithm that iteratively or randomly generates, tests and searches for optimal codebooks for BoW-based image matching. In particular, codebooks based on linear filters will be considered.
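
As a rough illustration of the random-search idea, the sketch below samples candidate codebooks of linear filters, encodes images as BoW histograms, and keeps the codebook that best separates images of different classes. It is a minimal sketch with randomly generated placeholder patches and a crude class-separation objective, not the lab's actual code or data (Python/NumPy is used here only for brevity; the project itself would use Matlab or C/C++).

  import numpy as np

  # Hypothetical data: each image is a set of flattened 8x8 grayscale patches.
  rng = np.random.default_rng(0)
  images = [rng.standard_normal((200, 64)) for _ in range(10)]
  labels = np.array([i % 2 for i in range(10)])        # two hypothetical classes

  def bow_histogram(patches, codebook):
      # Each codebook row is a linear filter; a patch is assigned to the
      # filter giving the strongest (absolute) response.
      responses = np.abs(patches @ codebook.T)          # (n_patches, n_words)
      words = responses.argmax(axis=1)
      hist = np.bincount(words, minlength=len(codebook)).astype(float)
      return hist / hist.sum()

  def score(codebook):
      # Crude objective: same-class histogram distances should be small,
      # different-class distances large (higher score is better).
      hists = np.array([bow_histogram(im, codebook) for im in images])
      d = np.linalg.norm(hists[:, None] - hists[None, :], axis=2)
      same = labels[:, None] == labels[None, :]
      return d[~same].mean() - d[same].mean()

  # Random search over codebooks of 32 linear filters.
  best, best_score = None, -np.inf
  for _ in range(100):
      cand = rng.standard_normal((32, 64))
      s = score(cand)
      if s > best_score:
          best, best_score = cand, s
  print("best separation score:", best_score)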

  • C/C++ and/or Matlab skills are required.
  • Supervisor: Prof Joni Kamarainen
  • Please contact the supervisor for more details.

Image Alignment Using Pairwise Matching

The main idea of this project is to extend our previous image alignment method, which uses a single global seed, to align images pairwise and then build a tree structure so that each image can be aligned to any other image via the tree paths. For more details, see our BMVC paper.
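
To illustrate the tree idea, the following sketch composes pairwise transformations along tree paths so that any image can be mapped into the coordinate frame of any other. The parent links and the translation-only homographies are made-up placeholders (Python/NumPy for brevity).

  import numpy as np

  # Hypothetical pairwise alignment tree: parent[i] is the image that image i was
  # matched against, and H_to_parent[i] is the 3x3 homography mapping image i's
  # coordinates to its parent's coordinates (identity for the root).
  parent = {0: None, 1: 0, 2: 0, 3: 1}
  H_to_parent = {0: np.eye(3),
                 1: np.array([[1, 0, 20], [0, 1,  5], [0, 0, 1]], float),
                 2: np.array([[1, 0, -7], [0, 1, 12], [0, 0, 1]], float),
                 3: np.array([[1, 0,  3], [0, 1, -4], [0, 0, 1]], float)}

  def to_root(i):
      # Compose homographies along the tree path from image i up to the root.
      H = np.eye(3)
      while parent[i] is not None:
          H = H_to_parent[i] @ H
          i = parent[i]
      return H

  def align(i, j):
      # Homography mapping image i into image j via the tree: i -> root -> j.
      return np.linalg.inv(to_root(j)) @ to_root(i)

  p = np.array([10.0, 10.0, 1.0])           # a point in image 3 (homogeneous)
  q = align(3, 2) @ p                        # the same point expressed in image 2
  print(q / q[2])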

  • C/C++ and/or Matlab skills are required.
  • Supervisor: Prof Joni Kamarainen
  • Please contact the supervisor for more details.

Model Predictive Visual Servoing

Visual servoing means the real-time control of robots using vision (images). Model predictive control is a modern control paradigm which uses optimization to choose control actions by predicting the future evolution of a system and optimizing a control objective over the prediction horizon. Using model predictive control for visual servoing is still in its infancy. In this project you can learn about the state of the art of visual control and develop it further.
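
As a very small illustration of the principle (not the method to be developed in the thesis), the sketch below optimizes a control sequence over a finite horizon for a single image point and applies only the first action, receding-horizon style. The interaction matrix, the simple dynamics and the cost weights are assumptions chosen for simplicity (Python/SciPy for brevity).

  import numpy as np
  from scipy.optimize import minimize

  dt, N = 0.1, 10                      # sample time and prediction horizon
  L = np.array([[-1.0, 0.0],           # assumed interaction (image Jacobian) matrix:
                [0.0, -1.0]])          # maps camera velocity u=(vx, vy) to feature velocity
  s0 = np.array([50.0, -30.0])         # current image feature error (pixels)
  s_ref = np.zeros(2)                  # target: drive the error to zero

  def cost(u_flat):
      u = u_flat.reshape(N, 2)
      s, J = s0.copy(), 0.0
      for k in range(N):
          s = s + dt * L @ u[k]                    # predicted feature motion
          J += np.sum((s - s_ref) ** 2) + 0.01 * np.sum(u[k] ** 2)
      return J

  res = minimize(cost, np.zeros(2 * N), method="L-BFGS-B")
  u_opt = res.x.reshape(N, 2)
  print("apply first control action:", u_opt[0])   # receding horizon principle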

  • The thesis should review existing literature on visual servoing using model predictive control. An existing robotic system should also be developed further to perform model predictive visual servoing.
  • The implementation can be done on MVPR's MELFA robot arm.
  • C++ skills required. Matlab skills might be useful in prototyping.
  • Supervisor: Prof Ville Kyrki

Learning Grasp Affordances from Vision

To grasp an object, a robot needs to know where to place its fingers. The set of good finger placements for a particular object is called its grasp affordances. Determining grasp affordances from visual input for unknown objects has recently gained much interest in the robotics research community, as this would allow robots to operate in normal household environments.

  • The thesis should review the recent work on learning grasp affordances from vision and implement an example system using a state-of-the-art approach.
  • The implementation can be done on MVPR's MELFA robot arm, Weiss robotics/Schunk gripper, and Kinect sensor.
  • C++ and/or Matlab skills are required.
  • Supervisor: Prof Ville Kyrki
  • Please contact the supervisor for more details.

3D Interest Points from Stereo Images

Interest points have been a hot topic in computer vision for some time, and they are the low-level operators behind image-based search engines. In this project you will work on a cutting-edge topic by developing such interest points further, to be used with stereo image pairs. This means that at the low level you utilise existing interest point detectors, but then select only those interest points which match between the left and right views and add 3D (depth) information to them. In this project you will learn about stereo imaging and state-of-the-art methods for image-based search.
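
A minimal sketch of the matching-and-depth step is given below (Python/OpenCV for brevity). The file names, focal length and baseline are placeholders, and the stereo pair is assumed to be rectified.

  import cv2
  import numpy as np

  left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # placeholder file names
  right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
  f, B = 700.0, 0.12                      # assumed focal length (px) and baseline (m)

  orb = cv2.ORB_create(1000)
  kp_l, des_l = orb.detectAndCompute(left, None)
  kp_r, des_r = orb.detectAndCompute(right, None)

  matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
  points_3d = []
  for m in matcher.match(des_l, des_r):
      (xl, yl), (xr, yr) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
      disparity = xl - xr
      # For a rectified pair, a valid match lies on (almost) the same row
      # and has positive disparity.
      if abs(yl - yr) < 1.0 and disparity > 0:
          Z = f * B / disparity                       # depth in metres
          points_3d.append(((xl - left.shape[1] / 2) * Z / f,
                            (yl - left.shape[0] / 2) * Z / f, Z))
  print(len(points_3d), "3D interest points")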

  • C/C++ and/or Matlab skills are required.
  • Supervisor: Prof Joni Kamarainen
  • Please contact the supervisor for more details.

What Really is Important in Objects

This work is based on the hypothesis that certain local features are more important than others. Such features can be found in many natural objects, and they appear similarly in objects of the same class (car, face, etc.). In this work you will study these important features and build a detector which finds them automatically. First you will use our existing framework to select points which appear similarly in other examples of the same class, and then you will devise a detector specifically for these points.
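
The selection step could, in principle, look something like the sketch below, which is not the lab's framework: each local feature of one example is scored by how closely it matches its nearest neighbour in the other examples of the same class. The descriptors are random placeholders (Python/NumPy for brevity).

  import numpy as np

  rng = np.random.default_rng(1)
  examples = [rng.standard_normal((80, 128)) for _ in range(5)]  # 5 class examples,
                                                                 # 80 descriptors each

  def consistency_scores(target, others):
      scores = np.zeros(len(target))
      for other in others:
          # distance from every target descriptor to its nearest neighbour
          d = np.linalg.norm(target[:, None, :] - other[None, :, :], axis=2)
          scores += d.min(axis=1)
      return -scores / len(others)       # higher score = more class-consistent

  scores = consistency_scores(examples[0], examples[1:])
  important = np.argsort(scores)[::-1][:10]   # 10 most class-consistent features
  print("indices of candidate 'important' features:", important)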

  • C++ and/or Matlab skills are required.
  • Supervisor: Prof Joni Kamarainen
  • Please contact the supervisor for more details.

Symbolic Description of 3D Objects

In this work, you will learn some cutting-edge technologies. You will learn about local features, which are used in modern image-based search technologies. Moreover, you will learn about stereo images, which can now be produced by off-the-shelf cameras. Your task is to describe a 3D object, captured by a stereo camera, using local symbols - a kind of “3D interest point”.

  • C++ and/or Matlab skills are required.
  • Supervisor: Prof Joni Kamarainen
  • Please contact the supervisor for more details.

Detection of Things from Images

How can we detect that there is a “thing”, or several things, in an image? A thing is something which is interesting for humans and/or important for automatic image retrieval. Humans, cars and buildings are all “things”, but how can they be detected automatically? Can you make an object-specific detector, or even a detector for all things? That will be experimentally investigated in your work. Here you can play with a cutting-edge problem that also interests certain big companies at the moment.

  • C++ and/or Matlab skills are required.
  • Supervisor: Prof Joni Kamarainen
  • Please contact the supervisor for more details.

Probabilistic Robotic Manipulation

One of the difficulties in using robots in home-like environments is the complexity of such environments and the difficulty of observing them. Thus, adaptation to the environment and tolerance of uncertainty in the robot's knowledge would be very valuable. In practice, the uncertainty is usually modeled using probabilistic models. Robotic manipulation in uncertain conditions has recently gained a lot of attention from researchers worldwide, and we in Lappeenranta are at the forefront of this research.

  • The thesis should review the recent work on manipulation under uncertainty and implement an example system using a state-of-the-art approach.
  • The implementation can be done on MVPR's MELFA robot arm, Weiss robotics/Schunk gripper, and Point Grey stereo camera.
  • C++ and/or Matlab skills are required.
  • Working knowledge of basic probability theory is required (e.g., by completing the Pattern Recognition course).
  • Supervisor: Prof Ville Kyrki
  • Please contact the supervisor for more details.

Lost in Probabilities

Probability theory and statistics provide the fundamental background for solving the most difficult problems in computer vision, pattern recognition, machine learning and engineering in general. Do you still feel uncomfortable when encountering probabilities? Do you nevertheless want to master probabilities and the important concepts related to them? In this project you will learn the basics of working with probabilities. You will learn what a likelihood is, how to combine likelihoods, and how to transform likelihoods into probabilities, and you will play with the concept of a “probability score” developed in our laboratory. How can likelihoods be reliably converted into probability scores, and are the probability scores probabilities themselves? Your main task is to briefly review basic probability theory, spot the most important concepts, review the related literature for the selected concepts, explain the main results and program simple examples to explain and verify the theory. The main emphasis is on single and multiple Gaussian densities, which have proven very effective for computer vision problems in our laboratory. If you enjoy mathematics and enjoy learning new and powerful things, this could be your project.
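
As a taste of the basic machinery, the sketch below uses made-up single-Gaussian class models to combine independent likelihoods and normalise them into posterior probabilities with Bayes' rule and equal priors. It does not reproduce the laboratory's probability score (Python/SciPy for brevity).

  import numpy as np
  from scipy.stats import norm

  # Two hypothetical classes modelled with single Gaussian densities.
  classes = {"A": norm(loc=0.0, scale=1.0), "B": norm(loc=2.0, scale=1.5)}
  observations = np.array([1.1, 0.7, 1.6])     # independent measurements

  # Likelihood of each class: product of per-observation densities
  # (sum of log-densities for numerical stability).
  loglik = {c: d.logpdf(observations).sum() for c, d in classes.items()}

  # Convert likelihoods into probabilities with equal priors:
  # P(c | x) = L(c) / sum_c' L(c').
  m = max(loglik.values())
  unnorm = {c: np.exp(v - m) for c, v in loglik.items()}
  Z = sum(unnorm.values())
  posterior = {c: v / Z for c, v in unnorm.items()}
  print(posterior)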

  • Requires Matlab programming (examples)
  • Supervisor: Prof Joni Kämäräinen
  • Please contact the supervisor for more details

Robust Object Class Descriptors

This project is a continuation of a master's thesis previously completed in our laboratory:

  • Ville Kangas. Comparison of Local Feature Detectors and Descriptors for Visual Object Categorization, 2011.

In the previous work, it was shown that the existing descriptors perform poorly across multiple images of the same class (e.g. motorbike, car, etc.). Moreover, an automatic evaluation framework was developed. In this work, you will utilise the existing framework to develop and evaluate new descriptors that are more suitable for the task, studying some of the state-of-the-art technologies of computer vision and image analysis along the way.

  • Requires programming
  • Supervisor: Prof Joni Kämäräinen
  • Please contact the supervisor for more details

3D Geometric Transformations

This topic will introduce you to the wonderful world of 3D transformations, i.e. how you can manipulate (translate, rotate, scale) 3D graphical objects. A background in, or interest in, computer graphics will be appreciated. With your code you can, for example, rotate 3D face images or register a set of 3D face images close to each other.
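
The core machinery is small; for instance, the minimal sketch below builds homogeneous 4x4 translation, rotation and scaling matrices, composes them, and applies the result to a toy three-vertex object (Python/NumPy for brevity).

  import numpy as np

  def translate(tx, ty, tz):
      T = np.eye(4); T[:3, 3] = [tx, ty, tz]; return T

  def rotate_z(theta):
      c, s = np.cos(theta), np.sin(theta)
      R = np.eye(4); R[:2, :2] = [[c, -s], [s, c]]; return R

  def scale(s):
      S = np.eye(4); S[0, 0] = S[1, 1] = S[2, 2] = s; return S

  points = np.array([[1.0, 0.0, 0.0],          # a toy 3D object (3 vertices)
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
  homog = np.hstack([points, np.ones((3, 1))])  # to homogeneous coordinates

  # Transforms compose by matrix product; the rightmost one is applied first.
  M = translate(0, 0, 5) @ rotate_z(np.pi / 4) @ scale(2.0)
  transformed = (M @ homog.T).T[:, :3]
  print(transformed)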

  • Experiments implemented in Matlab or as a combination of Matlab and C graphics libraries
  • Supervisor: Prof Joni Kämäräinen
  • Please contact the supervisor for more details

Collision Detection of a Robotic Arm and Hand

Robots are beginning to appear in everyday environments. The unpredictability of home-like environments generates challenges for safe, collision-free interaction. Force sensors can be used to detect collisions, although this is especially challenging when a robot is manipulating objects.
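
A very simple baseline for the detection part might look like the sketch below: flag samples where the high-frequency part of the measured force exceeds a threshold. The signal and the threshold are synthetic placeholders, and a real system would need model-based residuals (Python/NumPy for brevity).

  import numpy as np

  fs, T = 500.0, 4.0                               # sample rate (Hz), duration (s)
  t = np.arange(0, T, 1.0 / fs)
  force = 0.2 * np.sin(2 * np.pi * 0.5 * t) + 0.05 * np.random.randn(t.size)
  force[t > 2.5] += 3.0                            # simulated collision at t = 2.5 s

  window = int(0.05 * fs)                          # 50 ms moving average as baseline
  baseline = np.convolve(force, np.ones(window) / window, mode="same")
  residual = np.abs(force - baseline)

  threshold = 1.0                                  # newtons, tuned per setup
  collision_idx = np.flatnonzero(residual > threshold)
  if collision_idx.size:
      # approximate detection time (the moving average smears the step slightly)
      print("collision detected around t = %.3f s" % t[collision_idx[0]])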

  • The thesis should review sensor-based collision detection for robots, concentrating on force sensors. An experimental system for detecting collisions should be developed based on current state-of-the-art knowledge.
  • The implementation can be done on MVPR's MELFA robot arm equipped with a JR3 force sensor.
  • C++ skills are required; Matlab skills are optional.
  • Supervisor: Prof Ville Kyrki
  • Please contact the supervisor for more details

Constructing Uncertain Spline Models From Stereo

Splines are a good and general model for complex 3D shapes. Stereo cameras can be used to acquire models of unknown objects, but the accuracy of the data varies depending on the existence of noticeable features on the object. This uncertainty could be modeled and taken into account in the optimization of the spline, as well as used to describe the uncertainty of the resulting spline.
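
One simple way to picture this is a weighted smoothing-spline fit, where each depth sample is down-weighted according to its stereo uncertainty. The 1-D sketch below uses synthetic data and a standard stereo noise model purely as an illustration of the idea (Python/SciPy for brevity).

  import numpy as np
  from scipy.interpolate import UnivariateSpline

  # Assumed stereo noise model: depth noise grows roughly quadratically with depth,
  # sigma_Z ~ Z^2 * sigma_d / (f * B).
  f, B, sigma_d = 700.0, 0.12, 0.5          # focal length (px), baseline (m), disparity noise (px)
  x = np.linspace(0.0, 1.0, 60)             # parameter along an object contour
  true_depth = 1.0 + 0.3 * np.sin(2 * np.pi * x)
  sigma_Z = true_depth ** 2 * sigma_d / (f * B)
  measured = true_depth + np.random.randn(x.size) * sigma_Z

  # UnivariateSpline squares the weights inside its objective, so w = 1/sigma
  # corresponds to inverse-variance weighting, and s ~ number of samples is then
  # a reasonable smoothing level; uncertain (far) samples count less.
  spline = UnivariateSpline(x, measured, w=1.0 / sigma_Z, s=x.size)
  print("fitted depth at x=0.5:", spline(0.5))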

  • The thesis should review spline models (especially uncertainty with splines) as well as uncertainty in stereo imaging. The results from the literature should then be used to develop an experimental system where the uncertainty of stereo is taken into account in spline optimization.
  • A Bumblebee 2 stereo camera is available for acquiring stereo data, with software for generating stereo point clouds.
  • Matlab (or C/C++ if preferred) skills are required.
  • Supervisor: Prof Ville Kyrki
  • Please contact the supervisor for more details.

Camera-Projector System Calibration Using a Flat Display

In this work you will learn how to calibrate a digital camera and a projector. You will implement a GUI-based calibration system which uses a display instead of a physical calibration pattern. You will learn about the geometry between the world, the camera and the projector.
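
For the camera side, a standard checkerboard calibration applies even when the pattern is shown on a display; a minimal sketch is given below (Python/OpenCV for brevity). The file names, board geometry and square size are placeholders, and the projector calibration is not covered here.

  import cv2
  import numpy as np
  import glob

  cols, rows, square = 9, 6, 0.025              # assumed inner corners and square size (m)
  objp = np.zeros((rows * cols, 3), np.float32)
  objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

  obj_points, img_points, size = [], [], None
  for fname in glob.glob("capture_*.png"):      # photos of the displayed pattern
      gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
      found, corners = cv2.findChessboardCorners(gray, (cols, rows))
      if found:
          corners = cv2.cornerSubPix(
              gray, corners, (11, 11), (-1, -1),
              (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
          obj_points.append(objp)
          img_points.append(corners)
          size = gray.shape[::-1]

  rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
  print("reprojection RMS:", rms)
  print("camera matrix:\n", K)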

  • Requires programming (mainly Matlab)
  • Supervisor: Prof Joni Kämäräinen
  • Please contact the supervisor for more details

Reserved topics

Visual Object Categorisation with Very Simple Features (* reserved Sept 2011 *)

Visual object categorisation means systems which can automatically detect which learned objects appear in input images. This is a hot topic in computer vision and artificial intelligence - see http://www.vision.ee.ethz.ch/~bleibe/teaching/tutorial-aaai08/ for a brief tutorial.

People are using fancier and fancier methods to solve this problem, but in this project we go back to the very basics and utilise the simplest possible approaches. My claim is that these simple methods, when properly utilised, are not actually that bad at all. As a result you will learn how new applications, such as Google image search, really work.
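
To give a flavour of how simple such a baseline can be, here is a sketch using only global grey-level histograms and a nearest-neighbour rule. The images are random placeholders, so the output here is meaningless; with a real dataset the same code gives a sensible baseline (Python/NumPy for brevity).

  import numpy as np

  rng = np.random.default_rng(2)
  train = [(rng.integers(0, 256, (64, 64)), c) for c in (0, 1) for _ in range(20)]
  test_img, test_label = rng.integers(0, 256, (64, 64)), 0

  def hist_feature(img, bins=32):
      # global grey-level histogram, normalised to sum to one
      h, _ = np.histogram(img, bins=bins, range=(0, 256))
      return h / h.sum()

  feats = np.array([hist_feature(im) for im, _ in train])
  labels = np.array([c for _, c in train])
  d = np.linalg.norm(feats - hist_feature(test_img), axis=1)
  print("predicted class:", labels[d.argmin()], " true class:", test_label)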

  • Experiments implemented in Matlab (good knowledge of any programming language is sufficient)
  • Supervisor: Prof Joni Kamarainen
  • Please contact the supervisor for more details

Assisted 3D reconstruction from single or multiple images (* reserved May 2011 *)

In this work, you will learn how to create real 3D models from photographs. You will understand the basic principles of 3D reconstruction and 3D graphics and learn to use the necessary programming libraries and tools.

  • Supervisor: Prof Joni Kämäräinen
  • Please contact the supervisor for more details

System Identification of Bacteriorhodopsin-Based Photoelectric Sensors (** reserved Nov 2010 **)

The MolComp research group has developed optoelectronic sensors based on biomolecular membranes (bacteriorhodopsin). The sensors generate both an electrical and an optical response under light stimulus. A measurement platform has been developed for measuring the sensor properties. Partial models for the sensor responses and the sensors themselves are available.

  • The thesis should review feasible grey-box and black-box (blind) system identification methods to study the true phenomenon generating the photoelectric response. The purpose is to continue current research on response and sensor modelling through the use of signal processing and system identification.
  • The work requires Matlab programming skills. Experience in electronics and/or signal processing is an advantage.
  • Supervisor: Assoc. Prof. Lasse Lensu
  • Please contact the supervisor for more details

Image Repainting With General Codes (* Reserved May 2010 *)

In this project you are assigned to generate Picasso-style images by re-painting any given image using a specially generated codebook. Imagine that your face is reconstructed from the faces of other people - could your mother still recognise you? During this project you will find the answer.
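
The core repainting loop is conceptually simple; the sketch below replaces each image patch with its nearest codebook patch. A random image and a random codebook stand in for real data (in the project the codebook would be built from other images, e.g. other people's faces); Python/NumPy is used for brevity.

  import numpy as np

  rng = np.random.default_rng(3)
  p = 8
  image = rng.random((128, 128))
  codebook = rng.random((256, p, p))                    # 256 candidate patches
  flat_codes = codebook.reshape(256, -1)

  repainted = np.zeros_like(image)
  for i in range(0, image.shape[0], p):
      for j in range(0, image.shape[1], p):
          patch = image[i:i + p, j:j + p].reshape(-1)
          # nearest codebook entry in the Euclidean sense
          best = np.linalg.norm(flat_codes - patch, axis=1).argmin()
          repainted[i:i + p, j:j + p] = codebook[best]
  print("mean absolute repainting error:", np.abs(repainted - image).mean())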

  • Supervisor: Prof Joni Kämäräinen
  • Please contact the supervisor for more details

Learning Grasp Affordances from Vision

To grasp an object, a robot needs to know where to place its fingers. The set of good finger placements for a particular object is called its grasp affordances. Determining grasp affordances from visual input for unknown objects has recently gained much interest in the robotics research community, as this would allow robots to operate in normal household environments.

  • The thesis should review the recent work on learning grasp affordances from vision and implement an example system using a state-of-the-art approach.
  • The implementation can be done on MVPR's MELFA robot arm, Weiss robotics/Schunk gripper, and Point Grey stereo camera.
  • C++ and/or Matlab skills are required.
  • Supervisor: Prof Ville Kyrki
  • Please contact the supervisor for more details.

Learning Robot Environment Through Simulation

Traditionally, simulators have been used extensively in robotics to develop robotic systems without the need to build expensive hardware. However, simulators can also be used as a “memory” for a robot, that is, the simulation is the robot's internal mental view of the world. This allows the robot to try out actions in simulation before executing them for real. Moreover, after experiencing the real trial, the simulation model can be updated based on the difference between the predicted and measured sensor readings of the robot.
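
The update step can be thought of as fitting simulator parameters to observed outcomes. The toy sketch below uses a one-parameter hand-written "simulator" and made-up measurements (nothing like the lab's actual simulator) to estimate a friction coefficient by minimising the prediction error (Python/SciPy for brevity).

  import numpy as np
  from scipy.optimize import minimize_scalar

  def simulate(push_force, mu, mass=1.0, g=9.81, dt=0.5):
      # crude one-parameter model: predicted sliding distance for a short push
      a = max(push_force / mass - mu * g, 0.0)
      return 0.5 * a * dt ** 2

  pushes = np.array([12.0, 15.0, 20.0])        # executed actions (N)
  measured = np.array([0.31, 0.68, 1.30])      # "real" sensor readings (m)

  def prediction_error(mu):
      predicted = np.array([simulate(f, mu) for f in pushes])
      return np.sum((predicted - measured) ** 2)

  # update the model parameter so that predictions match the measurements
  res = minimize_scalar(prediction_error, bounds=(0.0, 2.0), method="bounded")
  print("updated friction estimate:", res.x)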

  • The thesis should formulate the problem of updating a simulation model based on differences between predicted and measured sensor readings for contact sensors. An experimental system should be developed to demonstrate the approach.
  • To simplify the task, the experimental system should use simulation for both memory (internal representation) and real trials (trying out).
  • A robot simulator is already available at the lab.
  • Matlab (or C++ if preferred) skills are required.
  • Supervisor: Prof Ville Kyrki
  • Please contact the supervisor for more details.

Visualisation and Classification with Biomolecular Vision System (* Reserved Jun 2010 *)

The MolComp research group has developed colour-sensitive light sensors and a simple camera based on biomolecular membranes (bacteriorhodopsin). A measurement and demo platform has also been developed for demonstrating the function of the sensors. The equipment for the work is readily available.

  • BSc thesis project: To develop a visualisation and classification application as a part of the demo platform for the biomolecular vision system.
    • The work will include Matlab programming to visualise the sensor responses. Also a simple supervised learning system will be built to classify colours.
  • MSc thesis project: To study the suitability of different methods for the purpose of classifying the sensor responses.
    • The work will include Matlab programming to study the applicability of Support Vector Machines, Gaussian mixture models, and the Self-Organising Map. Partial implementations of the methods already exist.
  • Supervisor: Assoc. Prof. Lasse Lensu
  • Please contact the supervisor for more details