Ante Qu

353 Serra Mall, Room 376 · Stanford, CA 94305

Office: Gates 376

I am a second-year PhD student in the Computer Science Department at Stanford University, where I am advised by Doug L. James and am fortunate to be supported by an NSF Graduate Research Fellowship.

My primary research interests are in physics-based simulation for computer graphics. I develop numerical simulations to synthesize sound for computer animation.

I was formerly a full-time software engineer at Microsoft, where I worked on the Office Graphics (PowerPoint, Word, Excel) team. I graduated in 2015 with an AB in Physics from Princeton University, where I was advised by Jason Fleischer. I have also worked on research projects at Columbia University, the Princeton Plasma Physics Laboratory, and Adobe Research.

Outside of work, I also enjoy board games, bridge, hiking, and food.

Research

Publications

Toward Wave-based Sound Synthesis for Computer Animation
Jui-Hsien Wang, Ante Qu, Timothy R. Langlois, and Doug L. James
2018, ACM Transactions on Graphics (SIGGRAPH 2018)
PDF Video Project Webpage
We explore an integrated approach to sound generation that supports a wide variety of physics-based simulation models and computer-animated phenomena. Targeting high-quality offline sound synthesis, we seek to resolve animation-driven sound radiation with near-field scattering and diffraction effects. The core of our approach is a sharp-interface finite-difference time-domain (FDTD) wavesolver, with a series of supporting algorithms to handle rapidly deforming and vibrating embedded interfaces arising in physics-based animation sound. Once the solver rasterizes these interfaces, it must evaluate acceleration boundary conditions (BCs) that involve model and phenomena-specific computations. We introduce acoustic shaders as a mechanism to abstract away these complexities, and describe a variety of implementations for computer animation: near-rigid objects with ringing and acceleration noise, deformable (finite element) models such as thin shells, bubble-based water, and virtual characters. Since time-domain wave synthesis is expensive, we only simulate pressure waves in a small region about each sound source, then estimate a far-field pressure signal. To further improve scalability beyond multi-threading, we propose a fully time-parallel sound synthesis method that is demonstrated on commodity cloud computing resources. In addition to presenting results for multiple animation phenomena (water, rigid, shells, kinematic deformers, etc.) we also propose 3D automatic dialogue replacement (3DADR) for virtual characters so that pre-recorded dialogue can include character movement, and near-field shadowing and scattering sound effects.
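A rough, self-contained sketch of the FDTD time stepping at the core of such a solver is shown below. It is only a toy update for the 1D scalar wave equation; the paper's solver is a 3D sharp-interface method with moving rasterized interfaces and acoustic-shader boundary conditions, and the grid resolution, wave speed, and point source here are arbitrary illustrative choices.

```python
import numpy as np

# Toy 1D FDTD update for the scalar wave equation p_tt = c^2 * p_xx.
# This only illustrates the time-stepping scheme; the paper's solver is a 3D
# sharp-interface FDTD method with moving embedded boundaries and
# acoustic-shader boundary conditions, none of which appear here.
c = 343.0              # speed of sound (m/s)
dx = 1e-3              # grid spacing (m)
dt = 0.5 * dx / c      # time step satisfying the CFL condition
n = 400                # number of grid cells
steps = 1000

p_prev = np.zeros(n)   # pressure at time t - dt
p_curr = np.zeros(n)   # pressure at time t
lam2 = (c * dt / dx) ** 2

for s in range(steps):
    lap = np.zeros(n)
    lap[1:-1] = p_curr[2:] - 2.0 * p_curr[1:-1] + p_curr[:-2]
    p_next = 2.0 * p_curr - p_prev + lam2 * lap
    # toy point source standing in for an animation-driven boundary condition
    p_next[n // 2] += 1e-3 * np.sin(2.0 * np.pi * 2000.0 * s * dt)
    p_prev, p_curr = p_curr, p_next
```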

Multi-Scale Simulation of Nonlinear Thin-Shell Sound with Wave Turbulence
Gabriel Cirio, Ante Qu, George Drettakis, Eitan Grinspun, and Changxi Zheng
2018, ACM Transactions on Graphics (SIGGRAPH 2018)
PDF Video Project Webpage
Thin shells — solids that are thin in one dimension compared to the other two — often emit rich nonlinear sounds when struck. Strong excitations can even cause chaotic thin-shell vibrations, producing sounds whose energy spectrum diffuses from low to high frequencies over time — a phenomenon known as wave turbulence. It is all these nonlinearities that grant shells such as cymbals and gongs their characteristic “glinting” sound. Yet, simulation models that efficiently capture these sound effects remain elusive.

    We propose a physically based, multi-scale reduced simulation method to synthesize nonlinear thin-shell sounds. We first split nonlinear vibrations into two scales, with a small low-frequency part simulated in a fully nonlinear way, and a high-frequency part containing many more modes approximated through time-varying linearization. This allows us to capture interesting nonlinearities in the shells’ deformation, tens of times faster than previous approaches. Furthermore, we propose a method that enriches simulated sounds with wave turbulent sound details through a phenomenological diffusion model in the frequency domain, and thereby sidestep the expensive simulation of chaotic high-frequency dynamics. We show several examples of our simulations, illustrating the efficiency and realism of our model.
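A rough picture of the frequency-domain diffusion idea is sketched below. It is a generic 1D diffusion update over frequency bins, not the paper's calibrated wave-turbulence model; the bin count, diffusion coefficient, and damping are arbitrary.

```python
import numpy as np

# Toy illustration of vibration energy diffusing from low to high frequencies
# over time. This is a generic explicit diffusion update over frequency bins,
# not the paper's phenomenological wave-turbulence model; all constants are
# arbitrary.
n_bins = 256
energy = np.zeros(n_bins)
energy[:16] = 1.0      # a strike deposits energy in the lowest-frequency bins
D = 0.2                # arbitrary diffusion coefficient (per step, D < 0.5 for stability)

spectra = [energy.copy()]
for step in range(200):
    lap = np.zeros(n_bins)
    lap[1:-1] = energy[2:] - 2.0 * energy[1:-1] + energy[:-2]
    energy = energy + D * lap    # energy gradually spreads toward higher bins
    energy *= 0.999              # mild damping
    spectra.append(energy.copy())
```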

Other Work

Phase Retrieval by Flattening the Wavefront
Ante Qu
2015, Undergraduate Thesis, presented at OSA COSI 2015
PDF
Many objects of interest in imaging, such as biological cells or turbulent air, are phase-only objects that are transparent and thus produce little to no contrast in wide-field microscopes. The phase accumulated by this light carries important information about the refractive index and the thickness of the object. We propose a method for retrieving the phase by using a spatial light modulator (SLM) to conjugate the phase of the object, flattening the wavefront of light passing through the SLM and the object. After we flatten the wavefront, the resulting configuration on the SLM is the conjugate of the phase image, which we can easily invert to recover the original phase image. This method retrieves the phase without using any prior knowledge about the object.

    Our algorithm performs a decomposition of the image into basis functions and searches for the coefficients that yield the flattest output intensity pattern. This algorithm takes advantage of the fact that a relatively small number of basis elements can store the majority of the information in the image. Popular phase retrieval methods such as the Gerchberg–Saxton algorithm can only converge to the phase image under light that is sufficiently coherent. From our simulations, we find that our method consistently produces correlations of over 99% with the original phase image, using either incoherent or coherent light and only 10% as many basis elements as the number of pixels in the image. We believe this result is a strong indication that this method will be able to reliably retrieve a direct phase image in the laboratory.
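The toy simulation below sketches the flattening idea in 1D: the object phase is expanded in a small cosine basis, and we search for SLM coefficients whose phase cancels it, using a simple flatness cost. The basis, the cost function (normalized on-axis far-field power), and the optimizer are illustrative stand-ins, not the thesis' experimental configuration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1D illustration of phase retrieval by flattening the wavefront.
# The unknown object phase is expanded in a small cosine basis; we search for
# SLM coefficients whose phase makes the combined wavefront as flat as
# possible. As a stand-in for a measured flatness criterion, the cost below is
# one minus the normalized on-axis far-field power, which is largest exactly
# when the total phase is constant.
n = 256
x = np.linspace(-1.0, 1.0, n)
basis = np.array([np.cos(np.pi * k * x) for k in range(1, 9)])  # 8 basis elements

rng = np.random.default_rng(0)
true_coeffs = rng.normal(scale=0.5, size=basis.shape[0])
phi_obj = true_coeffs @ basis            # the unknown phase object

def flatness_cost(slm_coeffs):
    phi_total = phi_obj + slm_coeffs @ basis
    field = np.exp(1j * phi_total)
    return 1.0 - np.abs(field.mean()) ** 2   # zero iff the wavefront is flat

res = minimize(flatness_cost, np.zeros(basis.shape[0]), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-12})
phi_recovered = -(res.x @ basis)         # conjugate of the SLM pattern ~= object phase
corr = np.corrcoef(phi_recovered, phi_obj)[0, 1]
print(f"correlation with the true phase: {corr:.4f}")
```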

Multi-threaded GPU Acceleration of ORBIT with Minimal Code Modifications
Ante Qu, Stephane Ethier, Eliot Feibush, and Roscoe White
2013, Princeton Plasma Physics Laboratory Report 4996, presented at APS DPP 2013
PDF
The guiding center code ORBIT was originally developed 30 years ago to study the drift-orbit effects of charged particles in the strong guiding magnetic fields of tokamaks. Today, ORBIT remains a very active tool in magnetic-confinement fusion research and continues to adapt to the latest toroidal devices, such as the NSTX-Upgrade, for which it plays a very important role in the study of energetic particle effects. Although the capabilities of ORBIT have improved throughout the years, the code still remains a serial application, which has now become an impediment to the lengthy simulations required for the NSTX-U project. In this work, multi-threaded parallelism is introduced in the core of the code with the goal of achieving the largest performance improvement while minimizing changes made to the source code. To that end, we introduce compiler directives in the most compute-intensive parts of the code, which constitutes the stable core that seldom changes. Standard OpenMP directives are used for shared-memory CPU multi-threading while newly developed OpenACC directives and CUDA Fortran code are used for Graphics Processing Unit (GPU) multi-threading. Our data shows that the fully-optimized CUDA Fortran version is 53.6 times faster than the original code.
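ORBIT itself is Fortran, parallelized with OpenMP and OpenACC directives and CUDA Fortran. The snippet below is only a loose Python analogue (using numba) of the same directive-style, minimal-modification approach: each particle's update is independent, so annotating the hot loop is enough to multi-thread it. The uniform field and simple push are placeholders, not ORBIT's guiding-center equations.

```python
import numpy as np
from numba import njit, prange

# Loose Python/numba analogue of directive-based parallelization: the serial
# particle loop becomes multi-threaded by annotating it, with essentially no
# other source changes. The uniform "field" and leapfrog-style push are toy
# placeholders, not ORBIT's guiding-center drift equations.
@njit(parallel=True)
def push_particles(x, v, efield, dt, steps):
    for _ in range(steps):
        for i in prange(x.shape[0]):   # particles are independent within a step
            v[i] += efield * dt
            x[i] += v[i] * dt

n_particles = 1_000_000
x = np.zeros(n_particles)
v = np.random.default_rng(1).normal(size=n_particles)
push_particles(x, v, 0.1, 1e-3, 100)
```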

Patent

Sketch-effects hatching (Pending)
Wen Shi and Ante Qu, US20180033170A1

Teaching