Research Credentials

Research Activity:

Research Projects:

This project integrates real-time image analysis, adaptive motion control, and synchronous communication to enable novel control and security mechanisms as well as improved energy efficiency. The application developed in this project is a crane that autonomously avoids collisions with stationary and moving objects. Combining these components in real time allows the crane to load a vehicle while the vehicle is in motion.

MobiTrick is a portable device for traffic enforcement (e.g., tolling). Mobile systems are constrained in size and resources, leaving little room for a large number of sensors, so the work in this project focuses on stereo vision with two different types of cameras (e.g., color and infrared). The system adapts itself to changing conditions when deployed at a new location, which requires adaptive calibration and online learning. Since energy is always scarce in embedded devices, the system is designed to be highly energy efficient, implementing new approaches to context-aware dynamic power management.
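
The idea behind context-aware dynamic power management can be illustrated with a minimal sketch: the capture and processing rate is throttled down once the scene has been static for a while. The thresholds, names, and camera interface below are assumptions chosen for illustration, not the actual MobiTrick design.

    # Illustrative sketch of context-aware dynamic power management: adapt the
    # capture/processing rate to scene activity so the device sleeps longer
    # when nothing is happening. All names, thresholds, and the camera
    # interface are assumptions; this is not the MobiTrick implementation.
    import time

    HIGH_RATE_S = 0.1    # frame period while the scene is active (10 fps)
    LOW_RATE_S  = 1.0    # frame period while the scene is idle (1 fps)
    IDLE_LIMIT  = 50     # static frames required before throttling down
    MOTION_THR  = 5.0    # mean absolute pixel difference counted as activity

    def motion_score(prev_frame, frame):
        # Placeholder activity measure: mean absolute difference of pixel
        # values (frames are flat sequences of intensities in this sketch).
        return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

    def monitoring_loop(capture_frame):
        # capture_frame is an assumed callable returning the current frame.
        prev, idle = capture_frame(), 0
        while True:
            frame = capture_frame()
            idle = 0 if motion_score(prev, frame) > MOTION_THR else idle + 1
            prev = frame
            # Duty-cycle the loop: longer sleeps save energy in static scenes.
            time.sleep(LOW_RATE_S if idle > IDLE_LIMIT else HIGH_RATE_S)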

Automatic detection and recognition of number plates is required for a number of purposes, including law enforcement, parking lot allocation, and gate entry control. Performing this task without large, bulky, and expensive sensors or hardware is challenging. In this project, we developed an automatic number plate detection and recognition system that is based entirely on image processing. We extract FHOG features, an improved form of HOG (Histogram of Oriented Gradients) features, from a set of training images and use them to train a number plate detector based on a Structured Support Vector Machine (SSVM). The extracted plates are further enhanced using a number of image processing operations, and the characters are then read with the open-source Tesseract OCR engine. Our proposed method achieves an overall accuracy of over 96%, outperforming existing methods in both accuracy and processing time.
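
As a rough illustration of this pipeline, the sketch below trains an FHOG/SSVM plate detector and passes enhanced plate crops to Tesseract. The project description does not name specific libraries; dlib (whose simple object detector combines FHOG features with a max-margin structured SVM) and pytesseract are assumed here purely for illustration, and the file names are hypothetical.

    # Minimal sketch of the plate detection/recognition pipeline described
    # above. Library choice (dlib, pytesseract) and file names are
    # illustrative assumptions, not those of the original system.
    import cv2
    import dlib
    import pytesseract

    # 1. Train an FHOG/SSVM plate detector from annotated training images
    #    ("plates.xml" is a hypothetical dlib-style annotation file).
    options = dlib.simple_object_detector_training_options()
    options.C = 5  # SVM regularization; tuned on validation data in practice
    dlib.train_simple_object_detector("plates.xml", "plate_detector.svm", options)

    # 2. Detect plates in a test image.
    detector = dlib.simple_object_detector("plate_detector.svm")
    gray = cv2.cvtColor(cv2.imread("car.jpg"), cv2.COLOR_BGR2GRAY)

    for box in detector(gray):
        plate = gray[box.top():box.bottom(), box.left():box.right()]
        # 3. Enhance the cropped plate before OCR (simple upscaling and
        #    Otsu binarization here).
        plate = cv2.resize(plate, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
        _, plate = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # 4. Character recognition with the Tesseract OCR engine.
        print(pytesseract.image_to_string(plate, config="--psm 7").strip())

In a full system, the OCR output would additionally be validated against known plate formats before being reported.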

This project aims at retrieving key information from movies using deep learning. The extracted information includes salient tags describing the movie and detailed information describing its overall context, which are also used to segment the movie accordingly. This information can serve a number of purposes, including efficient archiving, query-based search, content censorship, and recommendation systems. The research was carried out at the Fraunhofer Institute for Integrated Circuits (IIS) in Erlangen, Germany, and funded by the European Research Consortium for Informatics and Mathematics (ERCIM).
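
The project description does not specify the models used. As one illustrative ingredient of such a pipeline, the sketch below segments a video by embedding sampled frames with a pretrained CNN and cutting where consecutive embeddings diverge; the model choice, sampling step, and threshold are assumptions, not the project's actual method.

    # Illustrative sketch of deep-feature video segmentation: embed sampled
    # frames with a pretrained CNN and cut where consecutive embeddings
    # diverge. Model, sampling rate, and threshold are assumptions only.
    import cv2
    import torch
    from torchvision import models, transforms

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = torch.nn.Identity()   # use pooled 512-d features as embedding
    model.eval()

    preprocess = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    def segment(video_path, step=25, threshold=0.7):
        """Return frame indices where a new segment likely starts."""
        cap = cv2.VideoCapture(video_path)
        boundaries, prev_emb, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                with torch.no_grad():
                    emb = model(preprocess(rgb).unsqueeze(0)).squeeze(0)
                if prev_emb is not None:
                    sim = torch.nn.functional.cosine_similarity(emb, prev_emb, dim=0)
                    if sim < threshold:   # low similarity -> likely scene change
                        boundaries.append(idx)
                prev_emb = emb
            idx += 1
        cap.release()
        return boundaries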

Books:

Book Chapters:

Technical Blog Articles:

Journal Publications: 

Conference Publications: