Computer Graphics

Computer graphics is concerned with digital models of three-dimensional geometric objects as well as images. These shapes and images may approximate the real world or may be synthetic, i.e., exist only in the computer. Goals of computer graphics research are the generation of plausible and informative images, and their computation with reasonable resources, i.e., in a short amount of time and with little storage. The models and algorithms for this task combine knowledge from different areas of mathematics and computer science.

Prof. Alexa elected as Fellow of Eurographics

Each year, the European Association for Computer Graphics elects up to three members for their longstanding contributions to be Fellows of the Association. Prof. Alexa has been elected as one of two new Fellows in 2018. The citation and more information are available online.

SIGGRAPH Asia 2017: Localized Solutions of Sparse Linear Systems for Geometry Processing

Computing solutions to linear systems is a fundamental building block of many geometry processing algorithms. In many cases, the Cholesky factorization of the system matrix is computed and the system is subsequently solved, possibly for many right-hand sides, using forward and back substitution. We demonstrate how to exploit sparsity in both the right-hand side and the set of desired solution values to obtain significant speedups. The method is easy to implement and potentially useful in any scenario where linear problems have to be solved locally. We show that this technique benefits geometry processing operations, in particular the solution of diffusion problems. All problems profit significantly from sparse computations in terms of runtime, which we demonstrate with timings for a set of numerical experiments.
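
As a point of reference, here is a minimal baseline sketch in Python/SciPy (not the authors' code): the system matrix is factorized once and then reused for many right-hand sides via forward and back substitution. The paper's contribution is to restrict the substitution to the entries that can actually become nonzero when both the right-hand side and the set of requested solution values are sparse.

```python
# Baseline sketch (not the paper's localized substitution): factor once,
# reuse the factorization for many locally supported right-hand sides.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# A sparse symmetric positive definite system matrix (e.g., a Laplacian plus a mass term).
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")

# Factorize once. SciPy exposes a sparse LU; a sparse Cholesky factorization,
# as used in the paper, could be substituted via scikit-sparse / CHOLMOD.
factor = spla.splu(A)

for seed in range(5):
    rng = np.random.default_rng(seed)
    b = np.zeros(n)
    b[rng.choice(n, size=3, replace=False)] = 1.0  # sparse, locally supported source
    x = factor.solve(b)  # forward + back substitution over the full factor
    # The paper's localized substitution would only visit the factor entries that
    # can become nonzero, and only compute the solution values that are requested.
```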

See the project page for more details.

UIST 2017: HeatSpace – Automatic Placement of Displays by Empirical Analysis of User Behavior


We present HeatSpace, a system that records and empirically analyzes user behavior in a space and automatically suggests positions and sizes for new displays. The system uses depth cameras to capture 3D geometry and users’ perspectives over time. To derive possible display placements, it calculates volumetric heatmaps describing geometric persistence and planarity of structures inside the space. It evaluates visibility of display poses by calculating a volumetric heatmap describing occlusions, position within users’ field of view, and viewing angle. Optimal display size is calculated through a heatmap of average viewing distance. Based on the heatmaps and user constraints we sample the space of valid display placements and jointly optimize their positions. This can be useful when installing displays in multi-display environments such as meeting rooms, offices, and train stations.
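
Purely for illustration (this is not the authors' implementation, and all names, values, and weights below are invented), combining the individual heatmaps into a single score for a candidate display pose could look like this:

```python
import numpy as np

def placement_score(persistence, planarity, visibility, distance_fit,
                    weights=(1.0, 1.0, 1.0, 1.0)):
    """Combine heatmap values sampled at a candidate display pose.

    All inputs are assumed to be normalized to [0, 1]; higher is better.
    This scoring function is a hypothetical stand-in for the paper's
    heatmap-based evaluation, purely to illustrate the idea.
    """
    w = np.asarray(weights)
    scores = np.array([persistence, planarity, visibility, distance_fit])
    return float(np.dot(w, scores) / w.sum())

# Rank a few hypothetical candidate poses by their combined heatmap values.
candidates = {
    "wall_A": placement_score(0.9, 0.95, 0.7, 0.8),
    "wall_B": placement_score(0.6, 0.90, 0.9, 0.5),
    "pillar": placement_score(0.8, 0.40, 0.6, 0.9),
}
best = max(candidates, key=candidates.get)
print(best, candidates[best])
```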

Please see the paper for details.

SMI 2017: Unsharp Masking Geometry Improves 3D Prints

Mass-market digital manufacturing devices are severely limited in accuracy and material, resulting in a significant gap between the appearance of the virtual and the real shape. In imaging as well as in rendering of shapes, it is common to enhance features so that they are more apparent. We provide an approach to feature enhancement that operates directly on the geometry of a given shape, with particular focus on improving the visual appearance for 3D printing. The technique is based on unsharp masking, modified to handle arbitrary free-form geometry in a stable, efficient way without causing large-scale deformation. On a series of manufactured shapes we show how features are lost as the size of the object decreases, and how our technique can compensate for this. We evaluate this effect in a human-subject experiment and find a significant preference for the modified geometry.
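
The basic principle of unsharp masking transfers to meshes: low-pass filter the vertex positions, then push every vertex away from its filtered position. The sketch below uses plain uniform Laplacian smoothing and a single global gain; the paper's formulation is considerably more careful (stable, efficient, and without large-scale deformation), so treat this only as an illustration of the principle.

```python
import numpy as np

def unsharp_mask_vertices(V, neighbors, gain=1.5, smoothing_steps=10, step=0.5):
    """Feature enhancement by unsharp masking of vertex positions.

    V         : (n, 3) array of vertex positions
    neighbors : list of index lists; neighbors[i] are the 1-ring neighbors of vertex i

    Uses uniform Laplacian smoothing as the low-pass filter; the paper's method
    is more elaborate, so this is an illustration of unsharp masking only.
    """
    S = V.copy()
    for _ in range(smoothing_steps):
        S_new = S.copy()
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                # Move towards the average of the 1-ring neighbors (low-pass step).
                S_new[i] = S[i] + step * (S[nbrs].mean(axis=0) - S[i])
        S = S_new
    # Unsharp masking: original + gain * (original - smoothed)
    return V + gain * (V - S)
```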

CHI 2017: Changing the Appearance of Real-World Objects by Modifying Their Surroundings

We present an approach to alter the perceived appearance of physical objects by controlling their surrounding space. Many real-world objects cannot easily be equipped with displays or actuators in order to change their shape. While common approaches such as projection mapping enable changing the appearance of objects without modifying them, certain surface properties (e.g., highly reflective or transparent surfaces) can make employing these techniques difficult. In this work, we present a conceptual design exploration of how the appearance of an object can be changed by altering only the space around it, rather than the object itself. In a proof-of-concept implementation, we place objects onto a tabletop display and track them together with users to display perspective-corrected 3D graphics for augmentation. This enables controlling properties such as the perceived size, color, or shape of objects. We characterize the design space of our approach and demonstrate potential applications. For example, we change the contour of a wallet to notify users when their bank account is debited. We envision our approach gaining in importance with the increasing ubiquity of display surfaces.

Please see our project page for more details.


Eurographics 2017: Diffusion Diagrams: Voronoi Cells and Centroids from Diffusion

We define Voronoi cells and centroids based on heat diffusion. These heat cells and heat centroids coincide with the common definitions in Euclidean spaces. On curved surfaces they compare favorably with definitions based on geodesics: they are smooth and can be computed in a stable way with a single linear solve. We analyze the numerics of this approach and show that diffusion diagrams converge quadratically to the smooth case under mesh refinement, which is better than other common discretizations of distance measures in curved spaces. By factorizing the system matrix in a preprocess, computing Voronoi diagrams or centroids amounts to just back-substitution. We show how to localize this operation so that the complexity is linear in the size of the cells rather than in the underlying mesh. We provide several example applications that show how to benefit from this approach.
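
A minimal sketch of the underlying computation, assuming a sparse mass matrix M and a positive semi-definite cotangent Laplacian L have already been assembled (e.g., with a mesh processing library): one implicit diffusion step per site, then every vertex is assigned to the site with the largest heat value. The paper additionally localizes the back-substitution; the sketch simply reuses a single factorization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def heat_voronoi_cells(M, L, sites, t):
    """Assign each vertex to the site with the largest diffused heat.

    M     : (n, n) sparse mass matrix
    L     : (n, n) sparse positive semi-definite (cotangent) Laplacian
    sites : list of vertex indices acting as Voronoi sites
    t     : diffusion time (short times approximate geodesic cells)

    Illustrative only; the paper derives the precise relation between these
    heat cells and geodesic Voronoi cells and localizes the linear solves.
    """
    n = M.shape[0]
    A = (M + t * L).tocsc()
    factor = spla.splu(A)           # factor once, reuse for every site
    heat = np.empty((len(sites), n))
    for k, s in enumerate(sites):
        delta = np.zeros(n)
        delta[s] = 1.0
        heat[k] = factor.solve(M @ delta)   # one implicit diffusion step
    return np.argmax(heat, axis=0)          # cell label per vertex
```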

See the project page for more details.

Optimal Discrete Slicing

Slicing is the procedure necessary to prepare a shape for layered manufacturing. There are degrees of freedom in this process, such as the starting point of the slicing sequence and the thickness of each slice. The choice of these parameters influences the manufacturing process and its result: the number of slices significantly affects the time needed for manufacturing, while their thickness affects the error. Assuming a discrete setting, we measure the error as the number of voxels that are incorrectly assigned due to slicing. We provide an algorithm that, for a given shape and a given set of available slice heights, generates a slicing that is provably optimal: it produces sequences with minimal error for any possible number of slices. The algorithm is fast and flexible; it can accommodate a user-driven importance modulation of the error function and allows interactive exploration of the desired quality/time tradeoff. We demonstrate the practical importance of our optimization on several 3D-printed results.
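
For intuition, the discrete problem can be phrased as a dynamic program over slice boundaries. The sketch below is a generic dynamic program of that kind with a made-up error function; it is not the algorithm or the voxel-based error metric from the paper.

```python
import math

def optimal_slicing(height, thicknesses, slice_error):
    """Pick slice boundaries 0 = h_0 < h_1 < ... < h_k = height minimizing total error.

    height      : object height in discrete voxel layers
    thicknesses : available slice thicknesses, in voxel layers (e.g. [1, 2, 4])
    slice_error : function (bottom, top) -> error of one slice covering [bottom, top)

    Generic dynamic program for illustration only; the paper's algorithm computes
    optimal slicings for every possible slice count and uses a voxel-based error.
    """
    best = [math.inf] * (height + 1)
    choice = [0] * (height + 1)
    best[0] = 0.0
    for h in range(1, height + 1):
        for t in thicknesses:
            if t <= h and best[h - t] + slice_error(h - t, h) < best[h]:
                best[h] = best[h - t] + slice_error(h - t, h)
                choice[h] = t
    if math.isinf(best[height]):
        return math.inf, []          # height not reachable with the given thicknesses
    # Reconstruct the slice sequence from the recorded choices.
    slices, h = [], height
    while h > 0:
        slices.append(choice[h])
        h -= choice[h]
    return best[height], slices[::-1]

# Example with a hypothetical error that penalizes slices near the detailed region at h = 10.
err, slices = optimal_slicing(20, [1, 2, 4],
                              lambda a, b: (b - a) * abs(10 - (a + b) / 2) * 0.01)
print(err, slices)
```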

The technical background is described in a paper that has now appeared in ACM TOG.

UIST 2016: Changing the Appearance of Physical Interfaces Through Controlled Transparency


We present physical interfaces that change their appearance through controlled transparency. These transparency-controlled physical interfaces are well suited for applications where communication through optical appearance is sufficient, such as ambient display scenarios. They transition between perceived shapes within milliseconds, require no mechanically moving parts and consume little energy. We build 3D physical interfaces with individually controllable parts by laser cutting and folding a single sheet of transparency-controlled material. We explore the benefits of transparency-controlled physical interfaces by characterizing their design space and showcase four physical prototypes.

Please see our project page for more details.

Our work was featured on Fast Company Co.Design, Vice Motherboard and Futurism.

CG&A Special Issue: Measuring Visual Salience of 3D Printed Objects


We investigate human viewing behavior on physical realizations of 3D objects. Using an eye tracker with a scene camera and fiducial markers, we gather fixations on the surface of the presented stimuli. This data is used to validate assumptions regarding visual saliency that so far have only been analyzed experimentally using flat stimuli. We provide a way to compare fixation sequences from different subjects as well as a model for generating test sequences of fixations unrelated to the stimuli. In this way we can show that human observers agree in their fixations for the same object under similar viewing conditions, as expected based on similar results for flat stimuli. We also develop a simple procedure to validate computational models for visual saliency of 3D objects and use it to show that popular models of mesh saliency based on center-surround patterns fail to predict fixations.

Please see our project page for more details.

NPAR 2015: The Markov Pen – Online Synthesis of Freehand Drawing Styles

teaser

Learning expressive curve styles from example is crucial for interactive or computer-based narrative illustrations. We propose a method for online synthesis of free-hand drawing styles along arbitrary base paths by means of an autoregressive Markov model. The choice of further curve progression is made while drawing, by sampling from a series of previously learned feature distributions conditioned on local curvature. The algorithm requires no user-adjustable parameters other than one short example style. It may be used as a custom “random brush” designer in any task that requires rapid placement of a large number of detail-rich shapes that would be tedious to create manually.
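
To give a flavor of such an autoregressive sampler, here is a simplified sketch (not the paper's model: the only feature is the previous turning angle, and the stroke walks freely instead of following a base path): learn a distribution of turning angles conditioned on a quantized previous angle, then sample one angle per step while drawing.

```python
import numpy as np
from collections import defaultdict

def learn_angle_model(angles, bins=8):
    """Collect turning angles of an example stroke, grouped by the previous angle's bin."""
    model = defaultdict(list)
    edges = np.linspace(-np.pi, np.pi, bins + 1)
    for prev, cur in zip(angles[:-1], angles[1:]):
        b = int(np.clip(np.digitize(prev, edges) - 1, 0, bins - 1))
        model[b].append(cur)
    return model, edges, bins

def sample_stroke(model, edges, bins, n_steps, step_len=1.0, seed=0):
    """Autoregressively sample a polyline whose turning angles follow the learned model."""
    rng = np.random.default_rng(seed)
    pts = [np.zeros(2)]
    heading, prev_angle = 0.0, 0.0
    for _ in range(n_steps):
        b = int(np.clip(np.digitize(prev_angle, edges) - 1, 0, bins - 1))
        pool = model.get(b) or [a for v in model.values() for a in v]
        angle = rng.choice(pool)                 # sample the next turning angle
        heading += angle
        pts.append(pts[-1] + step_len * np.array([np.cos(heading), np.sin(heading)]))
        prev_angle = angle
    return np.array(pts)

# Example: learn from a wiggly example stroke and synthesize a new one in that style.
example_angles = (0.4 * np.sin(np.linspace(0, 12 * np.pi, 200))
                  + 0.05 * np.random.default_rng(1).normal(size=200))
model, edges, bins = learn_angle_model(example_angles)
stroke = sample_stroke(model, edges, bins, n_steps=300)
```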


See the project page for more details.