Tuesday, September 13, 2011

Hydra Application for the middleware uOS

The group's middleware, uOS, focuses on the adaptability of services. But that adaptability only becomes visible when there are applications dealing with the various devices present in the environment. The Hydra application aims to fill part of that role. The undergraduate student currently working on it is Lucas Augusto de Almeida (me). I joined the team at the beginning of 2011, while searching for a project to work on for my graduation thesis.

Hydra will expose a computer's peripherals as individual resources that can be controlled by different devices in the smart space, provided they have the proper drivers. Currently, the aim is to offer Mouse, Keyboard, Camera and Screen resources to the smart space. So far, the mouse and keyboard drivers are finished, and I'm studying how to write the Camera and Screen driver code.
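To make the idea concrete, here is a minimal sketch of how a device might register its peripherals as individually controllable resources. This is only an illustration of the architecture described above, not the real uOS/Hydra API: all class and method names (ResourceDriver, MouseDriver, HydraDevice, offer, control) are hypothetical.

```python
class ResourceDriver:
    """Base class for a driver that controls one peripheral resource."""
    def __init__(self, name):
        self.name = name

    def handle(self, command, **params):
        raise NotImplementedError


class MouseDriver(ResourceDriver):
    """Hypothetical driver exposing the mouse pointer as a resource."""
    def __init__(self):
        super().__init__("mouse")
        self.x, self.y = 0, 0

    def handle(self, command, **params):
        if command == "move":
            self.x += params.get("dx", 0)
            self.y += params.get("dy", 0)
            return (self.x, self.y)
        raise ValueError("unknown command: " + command)


class HydraDevice:
    """Registers drivers and offers them as named resources to the smart space."""
    def __init__(self):
        self._drivers = {}

    def offer(self, driver):
        self._drivers[driver.name] = driver

    def list_resources(self):
        return sorted(self._drivers)

    def control(self, resource, command, **params):
        return self._drivers[resource].handle(command, **params)


hydra = HydraDevice()
hydra.offer(MouseDriver())
print(hydra.list_resources())                        # ['mouse']
print(hydra.control("mouse", "move", dx=10, dy=5))   # (10, 5)
```

In the real scenario the `control` call would arrive over the network from another device in the smart space; the point here is just that each peripheral sits behind its own driver and is addressed by name.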

To illustrate the usefulness of Hydra, I'll present two scenarios.

Let's suppose there's a meeting going on. While Madison presents some slides, Ethan doesn't understand part of the presentation and requests control over the mouse to better explain his doubts. Madison uses Hydra to offer control over the computer's mouse pointer, and Ethan then controls it using his own cellphone or computer.

A teacher wants to show her students how to configure some software. She offers her screen as a resource, and the students use Hydra to capture it and display it on their own monitors. Then everyone can closely watch every step involved.

That's it. The idea is to finish development and testing by the end of November. Let's see how things roll!

Monday, September 5, 2011

Tracking, localization and recognition of users in a SmartSpace

One of the group's recent activities is in the Computer Vision area. It is being developed by two Computer Science students of Universidade de Brasília: Danilo Ávila (me) and Tales Porto.
Currently, I work at "SEA Tecnologia" developing web systems. Tales works at "Mirante Tecnologia", also developing web systems. We joined this research group at the beginning of this year (2011).

[Photos: Danilo Ávila and Tales Porto]

At the UnBiquitous group we're trying to build a system to track, localize and identify people in our SmartSpace, called LAICO. We're using some open source libraries, like OpenCV, OpenGL and OpenNI, and the Microsoft Kinect sensor as a camera device.
Basically, we use the Kinect depth data and the OpenNI library and drivers to identify new users in the scene and track them. When a new user enters the scene, an event is generated and a callback function handles it. This callback function gets the new user's pixels, creates a new RGB image from the Kinect RGB data, and transfers this image to a recognition module.
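The core of that callback, isolating the new user's pixels into an RGB image, can be sketched with NumPy. OpenNI-style trackers give us a per-pixel label map aligned with the RGB frame; everything else here (the function name, the tiny synthetic frame) is illustrative, not our actual code.

```python
import numpy as np

def extract_user_image(rgb, labels, user_id):
    """Return an RGB image containing only the pixels labeled user_id;
    every other pixel is blacked out. `labels` is the per-pixel user map
    (same height and width as the RGB frame)."""
    mask = (labels == user_id)
    out = np.zeros_like(rgb)
    out[mask] = rgb[mask]
    return out

# Tiny synthetic 2x2 frame: user 1 occupies the left column.
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
labels = np.array([[1, 0],
                   [1, 0]])

user_img = extract_user_image(rgb, labels, user_id=1)
print(user_img[0, 0].tolist())  # [255, 0, 0]  (user pixel kept)
print(user_img[0, 1].tolist())  # [0, 0, 0]    (background masked out)
```

The resulting image, containing only the user, is what gets handed to the recognition module.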

[Figures: a user tracked and identified; the Kinect sensor]

The recognition module receives the user's image and performs face recognition. Our algorithm uses the Viola-Jones method for face detection and the Eigenfaces algorithm for face recognition, both implemented in the OpenCV library. When recognition is performed, we return the name of the identified user and the confidence of the recognition back to the tracker module.
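The Eigenfaces idea itself is simple enough to sketch in a few lines of NumPy: subtract the mean face, take the principal components of the training set, project every face into that subspace, and recognize by nearest neighbor there. This is only the concept, not OpenCV's implementation, and the 4-pixel synthetic "faces" and names below are made up for illustration.

```python
import numpy as np

# Synthetic 4-pixel "faces" stand in for real flattened face images.
train = np.array([
    [10.0, 10, 90, 90],   # person A, sample 1
    [12.0,  9, 88, 91],   # person A, sample 2
    [90.0, 90, 10, 10],   # person B, sample 1
    [88.0, 92, 11,  9],   # person B, sample 2
])
names = ["Madison", "Madison", "Ethan", "Ethan"]

mean_face = train.mean(axis=0)
centered = train - mean_face
# Rows of Vt are the principal components -- the "eigenfaces".
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:2]                   # keep the 2 strongest components
train_proj = centered @ eigenfaces.T  # training faces in eigenspace

def recognize(face):
    """Project the face into eigenspace and return the nearest training
    face's name plus the distance (smaller distance = more confident)."""
    proj = (face - mean_face) @ eigenfaces.T
    dists = np.linalg.norm(train_proj - proj, axis=1)
    i = int(np.argmin(dists))
    return names[i], float(dists[i])

name, dist = recognize(np.array([11.0, 10, 89, 90]))
print(name)  # Madison
```

OpenCV's EigenFaceRecognizer follows the same scheme, reporting the nearest-neighbor distance as its confidence value.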
Although recognition is only required for new users, we keep trying to recognize users who have already been recognized, to improve the confidence of the result. We then add a label to each user with his name and the recognition confidence.
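That re-recognition loop amounts to keeping, for each tracked user, the best result seen so far, so repeated attempts can only improve the label. A hypothetical sketch (the class and names are illustrative, not our actual code):

```python
class TrackedUser:
    """Keeps the best recognition result seen so far for one tracked user.
    Following the Eigenfaces convention, a LOWER distance means a MORE
    confident recognition."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.name = None
        self.best_distance = float("inf")

    def update_recognition(self, name, distance):
        # Only a better (lower-distance) result replaces the current label.
        if distance < self.best_distance:
            self.name = name
            self.best_distance = distance

    def label(self):
        return "%s (%.1f)" % (self.name, self.best_distance)


user = TrackedUser(1)
user.update_recognition("Danilo", 80.0)  # first, shaky recognition
user.update_recognition("Danilo", 35.0)  # a better frame improves the label
user.update_recognition("Tales", 60.0)   # a worse result is ignored
print(user.label())  # Danilo (35.0)
```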
At this moment the system can identify and track users in the smart space with no problem. But there are some issues that can't be solved using just one Kinect, like occlusion, or when the user's face can't be captured by the Kinect because of his position.
Now we are trying to improve the recognition confidence and to integrate our system with the middleware UbiquitOS.