Augmented reality to enable personalized guides in cultural heritage sites


‘Wearable Vision for Retrieving Architectural Details in Augmented Tourist Experiences’ by Stefano Alletto, Davide Abati, Giuseppe Serra and Rita Cucchiara (University of Modena and Reggio Emilia, Italy)

Best Paper award at Intetain 2015, 7th International Conference on Intelligent Technologies for Interactive Entertainment. 

Alletto et al. have developed a new multimedia tool that uses augmented reality to enhance tourists’ cultural heritage experience, feeding them extra information about what they see and highlighting the most noteworthy aspects. Their approach is egocentric, meaning that the augmented display that users wear follows them everywhere. This removes the constraints of current multimedia enhancements in museums and other cultural heritage sites, especially ones that let visitors roam freely outdoors, where the experience becomes unguided and thus far harder to manage with non-egocentric approaches.


The system the paper describes is designed precisely for this unconstrained outdoor use, taking advantage of the rapid development of wearable devices. A head-mounted camera emulates the user’s vision, recording the objects they interact with and the people and events they focus their attention on — essentially everything that is relevant to them. Keeping up with the user in real time makes it possible to feed them relevant information moment to moment.

Alletto et al. propose a system that can retrieve architectural details from images and provide the tourist with an augmented experience. The main idea behind the framework is that a tourist may not be able to immediately identify all the details in an artwork, and may otherwise have to rely on a guide to do so. With augmented vision, the user’s attention is not divided between the artwork and an external guide (such as a handbook), and no additional effort is required on their part. Using a wearable computing board and a glass-mounted camera, the user can ask the system to provide details about the scene they are looking at.
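To give a flavour of how such a retrieval step might work, here is a minimal, hypothetical sketch: camera frames and a gallery of known architectural details are represented as feature vectors, and the system returns the detail whose descriptor is closest to the current frame. The function name, the toy descriptors, and the labels are all illustrative assumptions — the paper’s actual pipeline is more sophisticated and is described in the full text.

```python
import numpy as np

def retrieve_details(query_feature, gallery_features, gallery_labels, k=1):
    """Return the labels of the k gallery details closest to the query frame.

    Hypothetical nearest-neighbour retrieval over precomputed image
    descriptors; a stand-in for the paper's actual retrieval method.
    """
    # Euclidean distance from the query descriptor to every gallery descriptor
    dists = np.linalg.norm(gallery_features - query_feature, axis=1)
    nearest = np.argsort(dists)[:k]
    return [gallery_labels[i] for i in nearest]

# Toy gallery: three architectural details described by 4-D feature vectors.
gallery = np.array([
    [1.0, 0.0, 0.0, 0.0],  # rose window
    [0.0, 1.0, 0.0, 0.0],  # portal relief
    [0.0, 0.0, 1.0, 0.0],  # column capital
])
labels = ["rose window", "portal relief", "column capital"]

# A frame from the head-mounted camera, encoded the same way.
query = np.array([0.9, 0.1, 0.0, 0.0])
print(retrieve_details(query, gallery, labels))  # → ['rose window']
```

In a real deployment the descriptors would come from a visual feature extractor rather than being hand-written, but the matching step keeps this shape: encode the current view, compare it against a database of annotated details, and surface the closest match to the wearer.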

If you wish to explore the system used for locating the user, and the retrieval method used to find particular points of interest, the full paper is available on EUDL.

Learn more about the current edition of Intetain.

Michal Dudic
