Principal Investigators: Prof. Yan Shuicheng, Prof. Xu Changsheng
This project aims to develop novel approaches to cross-media semantic analysis and association for generating multimedia-based immersive scenes. The specific research tasks are: (1) web-based image/video annotation and search; (2) personalized multimedia summarization; and (3) topic-related immersive multimedia scene creation from text information. The target application is a web-based, multimedia-assisted mediation tool that enables people speaking different languages to communicate easily and smoothly without relying on translation tools. The technologies developed in this project will advance the state of the art in multimedia annotation, search, and summarization, as well as complement and assist machine translation.
Download video: Multimodal-based Immersive Scene Creation