STARs Abstracts




STAR 1: A Survey on Video-based Graphics and Video Visualization

Borgo, Rita (Swansea University)

Chen, Min (Swansea University)

Grundy, Edward (Swansea University)

Daubney, Ben (Swansea University)

Heidemann, Gunther (Universität Stuttgart)

Höferlin, Benjamin (Universität Stuttgart)

Höferlin, Markus (Universität Stuttgart)

Jänicke, Heike (Heidelberg University)

Weiskopf, Daniel (Universität Stuttgart)

Xie, Xianghua (Swansea University)

In recent years, a collection of new techniques that take videos as input data has emerged in computer graphics and visualization. In this survey, we report the state of the art in video-based graphics and video visualization. We provide a comprehensive review of techniques for making photo-realistic or artistic computer-generated imagery from videos, as well as methods for creating summary and/or abstract visual representations to reveal important features and events in videos. We propose a new taxonomy to categorize the concepts and techniques in this newly emerged body of knowledge. To support this review, we also give a concise overview of the major advances in automated video analysis, as some techniques in this field (e.g., feature extraction, detection, and tracking) have been featured in video-based modeling and rendering pipelines for graphics and visualization.

Session Chair: Professor Brian Wyvill

STAR 2: Computational Plenoptic Imaging

Wetzstein, Gordon (University of British Columbia)

Ihrke, Ivo (Universität des Saarlandes / MPI Informatik)

Lanman, Douglas (MIT Media Lab)

Heidrich, Wolfgang (University of British Columbia)

The plenoptic function is a ray-based model for light that includes the color spectrum as well as spatial, temporal, and directional variation. Although digital light sensors have evolved greatly in recent years, one fundamental limitation remains: all standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons; in the process, all visual information is irreversibly lost, except for a two-dimensional, spatially varying subset — the common photograph. In this state of the art report, we review approaches that optically encode the dimensions of the plenoptic function transcending those captured by traditional photography and reconstruct the recorded information computationally.

Session Chair: Professor Terry Hewitt

STAR 3: Visualization for the Physical Sciences

Lipsa, Dan R. (Swansea University)

Laramee, Robert S (Swansea University)

Cox, Simon J. (Aberystwyth University)

Roberts, Jonathan C. (Bangor University)

Walker, Rick (Bangor University)

Close collaboration with other scientific fields is seen as an important goal for the visualization community by leading researchers in visualization. Yet, engaging in a scientific collaboration can be challenging. The physical sciences, with their array of research directions, provide many exciting challenges for visualization scientists, which in turn create ample possibilities for collaboration. We present the first survey of its kind to provide a comprehensive view of existing work on visualization for the physical sciences. We introduce a novel classification scheme based on application area, data dimensionality, and main challenge addressed, and apply this scheme to each contribution from the literature. Our classification highlights mature areas in visualization for the physical sciences and suggests directions for future work. Our survey serves as a useful starting point for those interested in visualization for the physical sciences, namely astronomy, chemistry, earth sciences, and physics.

Session Chair: Professor David Duce

STAR 4: Believable Virtual Characters in Human-Computer Dialogs

Jung, Yvonne (Fraunhofer IGD)

Kuijper, Arjan (Fraunhofer IGD)

Fellner, Dieter W. (Fraunhofer IGD, TU Darmstadt)

Kipp, Michael (DFKI, Saarbrücken)

Miksatko, Jan (DFKI, Saarbrücken)

Gratch, Jonathan (Institute for Creative Technologies, University of Southern California)

Thalmann, Daniel (Institute for Media Innovation, NTU, Singapore)

For many application areas — where a task is most naturally expressed by talking, or where standard input devices are difficult to use or not available at all — virtual characters are well suited as an intuitive man-machine interface due to their inherent ability to simulate verbal as well as nonverbal communicative behavior. This type of interface is made possible by multimodal dialog systems, which extend common speech dialog systems with additional modalities, just as in human-human interaction. Multimodal dialog systems consist of at least an auditory and a graphical component, and communication is based on speech and nonverbal communication alike. However, employing virtual characters as personal and believable dialog partners in multimodal dialogs entails several challenges, because it requires reliable and consistent behavior not only in motion and dialog but also in nonverbal communication and affective components. Modeling the "mind" and creating intelligent communicative behavior on the encoding side is an active field of research in artificial intelligence. The visual representation of a character and its perceivable behavior, from a decoding perspective — such as facial expressions and gestures — belongs to the domain of computer graphics and likewise raises many open issues concerning natural communication. Therefore, in this report we give a comprehensive overview of how to go from communication models to actual animation and rendering.

Session Chair: Professor Nigel John

STAR 5: A Survey on Temporal Coherence Methods in Real-Time Rendering

Scherzer, Daniel (LBI for Virtual Archaeology)

Yang, Lei (Hong Kong University of Science and Technology)

Mattausch, Oliver (Vienna University of Technology)

Nehab, Diego (IMPA, Brazil)

Sander, Pedro (Hong Kong University of Science and Technology)

Wimmer, Michael (Vienna University of Technology)

Eisemann, Elmar (Telecom ParisTech / CNRS LTCI)

Nowadays, there is a strong trend towards rendering to higher-resolution displays and at high frame rates. This development aims at delivering more detail and better accuracy, but it also comes at a significant cost. Although graphics cards continue to evolve with an ever-increasing amount of computational power, the processing gain is counteracted to a high degree by increasingly complex and sophisticated pixel computations. For real-time applications, the direct consequence is that image resolution and temporal resolution are often the first candidates to bow to performance constraints (e.g., although full HD is possible, the PS3 and Xbox often render at lower resolutions). In order to achieve high-quality rendering at a lower cost, one can exploit temporal coherence (TC). The underlying observation is that a higher resolution and frame rate do not necessarily imply a much higher workload, but rather a larger amount of redundancy and a higher potential for amortizing rendering over several frames. In this STAR, we investigate methods that make use of this principle and provide practical and theoretical advice on how to exploit temporal coherence for performance optimization. These methods not only allow us to incorporate more computationally intensive shading effects into many existing applications, but also offer exciting opportunities for extending high-end graphics applications to lower-spec consumer-level hardware. To this end, we first introduce the notion and main concepts of TC, including an overview of historical methods. We then describe a key data structure, the so-called reprojection cache, with several supporting algorithms that facilitate reusing shading information from previous frames, and finally illustrate its usefulness in various applications.

Session Chair: Professor Brian Wyvill

STAR 6: Interactive Character Animation using Simulated Physics

Geijtenbeek, Thomas (Utrecht University)

Pronost, Nicolas (Utrecht University)

Egges, Arjan (Utrecht University)

Overmars, Mark (Utrecht University)

Physics simulation offers the possibility of truly responsive and realistic animation. Despite the wide adoption of physics simulation for the animation of passive phenomena, commercial applications still resort to kinematics-based approaches for the animation of actively controlled characters. In recent years, however, research on interactive character animation using simulated physics has resulted in tremendous improvements in controllability, efficiency, flexibility, and visual fidelity. In this review, we present a structured evaluation of relevant aspects, approaches, and techniques regarding interactive character animation using simulated physics, based on over two decades of research. We conclude by pointing out some open research areas and possible future directions.

Session Chair: Professor Nigel John

STAR 7: Acoustic Rendering and Auditory-Visual Cross-Modal Perception and Interaction

Hulusic, Vedad (University of Warwick)

Harvey, Carlo (University of Warwick)

Tsingos, Nicolas (Dolby Laboratories, USA)

Debattista, Kurt (University of Warwick)

Walker, Steve (Arup, UK)

Howard, David (University of York)

Chalmers, Alan (University of Warwick)

In recent years, research in the field of three-dimensional sound generation has been primarily focused on new applications of spatialised sound. In the computer graphics community, such techniques are most commonly applied to virtual, immersive environments. However, the field is more varied and diverse than this, and other research tackles the problem in a more complete, and computationally expensive, manner. Simulation of light and sound wave propagation is still unachievable at a physically accurate spatio-temporal quality in real time. Although the Human Visual System (HVS) and the Human Auditory System (HAS) are exceptionally sophisticated, they also have certain perceptual and attentional limitations. Researchers in psychology have been investigating these limitations for several years and have produced findings that may be exploited in other fields. This STAR provides a comprehensive overview of the major techniques for generating spatialised sound and, in addition, discusses perceptual and cross-modal influences to consider. We also describe current limitations and provide an in-depth look at the emerging topics in the field.

Session Chair: Professor Terry Hewitt