The Science and Technology of Music

Esteban Maestre: repovizz: A framework for remote storage, visual browsing, annotation, and exchange of multi-modal data


Esteban Maestre presents a talk on the software tool repovizz.

When Jun 10, 2014
from 12:30 PM to 1:30 PM
Where A-832, New Music Building, 527 Sherbrooke Street West.


repovizz is an integrated online system for the structural formatting, remote storage, browsing, exchange, annotation, and visualization of synchronous, multi-modal, time-aligned data. Motivated by a growing need for data-driven collaborative research, repovizz aims to resolve difficulties commonly encountered when sharing or browsing large collections of multi-modal datasets. In its current state, repovizz is designed to hold time-aligned streams of heterogeneous data: audio, video, motion capture, physiological signals, extracted descriptors, annotations, et cetera. The most popular audio and video formats are supported, while CSV is used for streams other than audio or video (e.g. motion capture or physiological signals). The data itself is structured via customized XML files, allowing the user to (re-)organize multi-modal data in any hierarchical manner. Datasets are stored in an online database, allowing the user to interact with the data remotely through a powerful HTML5 visual interface accessible from any current web browser; this is a key aspect of repovizz, since data can be explored, annotated, or visualized from any location.
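To illustrate the idea of structuring time-aligned streams via a hierarchical XML description, the sketch below builds a small hypothetical datapack using Python's standard library. The element names, attributes, and file names are illustrative assumptions, not the actual repovizz schema.

```python
# Hypothetical repovizz-style datapack: a hierarchical XML description
# grouping heterogeneous, time-aligned streams (audio, mocap, annotations).
# Element/attribute names are invented for illustration only.
import xml.etree.ElementTree as ET

# Root node of the datapack: one recording session.
root = ET.Element("Datapack", name="ViolinTake01")

# Audio stream in a common container format.
ET.SubElement(root, "Stream", kind="audio", file="take01.wav")

# Motion-capture markers stored as CSV, grouped under their own branch
# of the hierarchy, one child node per marker.
mocap = ET.SubElement(root, "Group", name="MotionCapture")
for marker in ("bow_tip", "bow_frog", "bridge"):
    ET.SubElement(mocap, "Stream", kind="mocap",
                  file=marker + ".csv", samplerate="240")

# Free-text annotations are just another time-aligned stream.
ET.SubElement(root, "Stream", kind="annotation", file="bowing_labels.csv")

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

Because the hierarchy is ordinary XML, the same data can be regrouped (e.g. by modality or by performer) simply by moving `Stream` nodes under different `Group` branches, which is the kind of user-driven reorganization the abstract describes.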

repovizz has been developed at Universitat Pompeu Fabra over the past few years in the context of large-scale research projects, and it is now approaching a beta launch. In this seminar we will give an overview of the main capabilities of repovizz and its current state of development, followed by a short tutorial.



Marie Curie Fellow, McGill University

Universitat Pompeu Fabra

Esteban Maestre was born in Barcelona, Spain, in 1979. He received the B.Sc. and M.Sc. degrees in Electrical Engineering from Universitat Politècnica de Catalunya, Barcelona, in 2000 and 2003; and the D.E.A. and Ph.D. degrees in Computer Science and Digital Communication from Universitat Pompeu Fabra, Barcelona, in 2006 and 2009. From 2001 to 2006, he was a Lecturer at Universitat Politècnica de Catalunya. Esteban worked as a Junior Researcher at Philips Research Laboratories Aachen, Germany, during 2003 and 2004. 

From 2004 to 2013, he was a Researcher (Music Technology Group) and a Lecturer (Department of Information and Communication Technologies) at Universitat Pompeu Fabra. Between 2008 and 2014, Esteban spent three and a half years at the Center for Computer Research in Music and Acoustics, Stanford University, working on physical modeling synthesis and gesture rendering for automatic control of bowed-string physical models. Between 2012 and 2013, Esteban was a Visiting Researcher at the Department of Mathematics, Universidad Federico Santa María, Santiago, Chile.

Through a Marie Curie IOF personal fellowship, Esteban now pursues his research at the Computational Acoustics Modeling Lab / Center for Interdisciplinary Research in Music Media and Technology of McGill University. His research interests include sound analysis and synthesis, acoustics modeling, gesture control of virtual musical instruments, and music performance modeling.

