Meet our Rising Stars 2024!

Written by WiGRAPH Executive Team
Posted on May 31, 2024

Meet the 10 up-and-coming researchers selected to participate in WiGRAPH’s Rising Stars program, a two-year program of mentorship and workshops co-located with SIGGRAPH 2024 and 2025 that helps them explore potential career trajectories as they enter the job market.

Click on any of our Rising Stars to learn more about them!

Meet Honglin Chen

PhD Student, Columbia University

Research Vision: Physics-based animation and deformation tools are central to creating realistic visual effects and interactions in virtual worlds. In my research, I develop techniques that transform the traditional simulation and editing pipeline into a more robust, efficient, accurate, and easy-to-use one.

My passion for animation and geometry has always been a motivating force for my research. I enjoy rethinking the algorithmic choices people have made, including numerical tools, shape discretization, optimization techniques, and more, and drawing new inspiration for real-world creative tasks.

My broader goal is to push the frontiers of physical simulation technology, bridging the gap between the real and virtual worlds and democratizing physics-based content creation.

Bio: Honglin is a third-year PhD student in Computer Science at Columbia University, advised by Changxi Zheng. Her research lies at the intersection of physics-based simulation, geometric computing, optimization, and (a little bit of) machine learning. Before embarking on her PhD journey, Honglin obtained her MSc from the University of Toronto, advised by David I.W. Levin, and her B.Eng. from Zhejiang University. She has interned at Adobe, Meta, Nvidia, and Microsoft Research Asia during her studies. When not stuck in front of her computer, Honglin enjoys running in the park, drawing (poorly), reading comics, and visiting museums and art exhibitions in New York City.


Meet Nicole Feng

PhD Student, Carnegie Mellon

Research Vision: Geometric problems characterize both natural phenomena (material structure, plant growth, geological formation, etc.) and human-generated data (manufactured objects, art, digital assets). Algorithms for geometric problems are key to successfully manipulating the world around us, enabling modeling, simulation, and performance analysis. Yet we often lack complete answers to fundamental geometric questions such as, "How far is this point inside a given shape?" These queries are essential to tasks like reconstruction and robotic control, to name a few, and they can be difficult to answer, especially since data is rarely free of defects. My work identifies and answers these fundamental geometric questions, enabling reliable computation with geometric data. During my PhD, I've developed algorithms for robust inside/outside computation, signed distance, and animation tools.

Bio: Nicole is a Computer Science PhD student at Carnegie Mellon University, where she is advised by Keenan Crane and develops algorithms for robust geometry processing. Nicole received her B.S. in Applied and Computational Mathematics from the California Institute of Technology, where she did research with Peter Schröder on fluid simulation. She has also worked with Julie Dorsey and the Yale Graphics Group on sketching.


Meet Yao Feng

PhD Student, MPIIS and ETH Zürich

Research Vision: My primary research integrates computer graphics, computer vision, and machine learning to develop foundation models for digital humans. This involves capturing, modeling, and understanding 3D humans. During my PhD, I developed frameworks to capture human faces and bodies from single images with detailed accuracy for both the body and the face. Additionally, I have worked on capturing and modeling the human body, clothing, face, and hair from videos using hybrid 3D representations, which enables easy transfer of clothing and hairstyles between avatars. I also employ Large Vision Language Models to understand and predict 3D human poses from images or text, blending traditional pose estimation with interactive applications and enabling LLMs to apply their extensive knowledge to human pose reasoning. Looking forward, I see digital humans as beneficial to multiple research fields and plan to explore areas such as robotics and biomechanics to further digital human research.

Bio: Yao Feng is a PhD student under the supervision of Professor Michael J. Black at the Max Planck Institute for Intelligent Systems and Marc Pollefeys at ETH Zürich. She is also a research scientist at Meshcapade, focusing on building foundation models for digital humans. Previously, Yao completed her Master's at Shanghai Jiao Tong University and her Bachelor's degree at Chongqing University of Posts and Telecommunications. She has also interned at Meta Reality Labs, working on avatar creation.


Meet Geonsun Lee

PhD Student, University of Maryland, College Park

Research Vision: Extended Reality (XR) is transforming the future of work, allowing users to operate in virtual or augmented environments from anywhere. XR significantly enhances remote collaboration and communication through immersive experiences with shared 3D avatars, environments, and multimodal cues, surpassing traditional video meetings. However, there is limited understanding of how collaborative activities, such as document interaction, space design, and XR-mediated meetings, should transition into XR.

My research addresses this gap by developing interfaces and interactions that enhance user perception and communication. Utilizing gesture and facial recognition, eye tracking, and large language models, I create solutions for various collaborative activities in XR. I have developed interfaces supporting communication and collaboration in mixed reality training, education scenarios, virtual world design, and social VR meetings. My current focus is on amplifying social communication with XR interfaces, making users more effective, expressive, and socially connected.

My goal is to create XR tools that facilitate natural, efficient interactions, making virtual communication more inclusive and expressive, thereby enhancing professional and social experiences in virtual environments.

Bio: Geonsun Lee is a Ph.D. Candidate at the University of Maryland, College Park, under the supervision of Prof. Dinesh Manocha. Her research focuses on developing interfaces and interactions that enhance multi-user experiences in Extended Reality (XR). She completed her Master's degree in Computer Science and Engineering at Korea University, where she also earned a Bachelor's degree with a double major in Business Administration and Computer Science and Engineering. She has previously interned at Adobe Research, Meta Reality Labs, and Dolby Laboratories.


Meet Sunmin Lee

PhD Student, Seoul National University

Research Vision: My research aims to enable virtual agents to move like us, indistinguishable from real humans in terms of motion quality and versatility. Since human motion is a complex result of anatomy, physical laws, social norms, and style, real-world data and underlying principles offer complementary insights. Aligned with this idea, one aspect of my research focuses on integrating motion capture data and fundamental physical principles to generate plausible motions, an approach we have shown to be especially effective for motions that involve interacting with the environment.

Another major focus of my PhD study is the relationship between motion and character morphology. Starting with the question of whether we can separate and manipulate information about movement that is independent of the character's configuration, I worked on constructing a skeleton-agnostic latent motion space for bipedal animations and performing various animation tasks directly within this space. I aim to explore further how to reflect a character's unique style or user preferences, and to extend this work beyond humans to animals and fantastical creatures.

I am also interested in building a system that allows users to manipulate character animation intuitively and interactively, as they desire, through various modalities.

Bio: Sunmin is a PhD candidate at Seoul National University, advised by Jungdam Won and previously by Jehee Lee. She obtained her Bachelor’s degree in Computer Science and Engineering at SNU as well. Her research focuses on understanding and synthesizing character motion. As a huge fan of dance, she has always been curious about how and why people move the way they do and how people use movement to express themselves. Naturally, when she encountered character animation research during her undergraduate years, she was quickly fascinated by it and began her PhD journey to explore it further. During her doctoral program, she interned twice at Meta Reality Labs and will intern at Nvidia this summer. She is a recipient of the ACM SIGGRAPH Conference Grant. Outside of research, Sunmin is a music and book lover and a yoga beginner.


Meet Yifei Li

PhD Candidate, MIT CSAIL

Research Vision: Design problems in engineering and technology that require direct interaction with the physical world for evaluation often involve long prototyping and testing cycles, slowing down the development process and increasing costs. An example is turbine design, where thrust output must be measured in an actual fluid environment. My research focuses on developing computational pipelines to solve inverse design problems in complex physical systems by leveraging techniques from differentiable physical simulation, numerical methods, and artificial intelligence. The benefits are twofold: significant reductions in prototyping time and costs, and the capacity to explore a wider design space more thoroughly than previously possible. Through my research, I aim to push the limits of computational design by shortening prototyping cycles, enhancing design efficiency, and facilitating the real-world application of optimized digital designs. I envision a future where computational tools not only make design more accessible but also empower individuals to create with ease and precision.

Bio: Yifei Li is a fourth-year Ph.D. student at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), advised by Prof. Wojciech Matusik. She holds an M.S. in EECS from MIT and a B.S. in Computer Science from Carnegie Mellon University. Her research specializes in computer graphics, focusing on solving design and inverse problems in complex physical systems. Her work leverages techniques from differentiable physical simulation, computational design, and machine learning. She is a recipient of the MIT Stata Family Presidential Fellowship. She has gained industry experience at Meta Reality Labs, NVIDIA, Facebook AI Research, Google, and Activision Blizzard. Outside of research, she enjoys ballet as a member of the Harvard Ballet Company, bouldering, and playing the piano.


Meet Barbara Roessle

PhD Candidate, Technical University of Munich

Research Vision: 3D reconstruction and photorealistic novel view synthesis are of key importance for robotics and virtual reality applications, enabling interactive communication, learning, or driving experiences. Neural Radiance Fields (NeRF) and 3D Gaussian Splatting have shown impressive novel view synthesis results; nonetheless, their applicability to in-the-wild scenes remains challenging: recorded data is often incomplete and suffers from inconsistencies due to dynamics such as changing lighting conditions. My research vision is to close this gap by learning effective priors to reconstruct scenes from casually captured, imperfect data. Beyond reconstructing the observed areas, I aim to generate realistic 3D content for incomplete regions. Overall, the goal is to obtain high-quality scene representations from low-effort capture, enabling straightforward applicability to in-the-wild scenarios, which is highly relevant for various virtual reality use cases.

Bio: Barbara Roessle is a PhD candidate advised by Prof. Matthias Niessner in the Visual Computing and Artificial Intelligence Lab at the Technical University of Munich, Germany. Her research focuses on photorealistic 3D reconstruction of large-scale, real-world scenes. Additionally, she works on the generation of realistic 3D assets and scenes. Her work has been published at premier computer graphics and vision conferences such as SIGGRAPH, CVPR, and ICCV. Prior to her PhD studies, she developed software for autonomous driving at BMW, focusing on localization and sensor fusion. She obtained a Master’s degree in computer science and a Bachelor’s degree in electrical engineering from the Universities of Applied Sciences in Ulm and Esslingen, Germany.


Meet Ticha Sethapakdi

PhD Candidate, MIT CSAIL

Research Vision: My research is centered around two questions: How do we develop new technologies that enable innovative creative practices? And how can creative practice inform the design of new technologies? To answer these questions, I combine my artistic and technical backgrounds to develop systems that support novel creative workflows and expressive fabrication processes. In my work, I build design tools that give users a framework for interfacing with new materials (such as materials that change color in response to external stimuli) and for streamlining complex creative tasks (such as optimizing the use of fabrication materials during the design process). My research bridges knowledge from a number of research areas, including Human-Computer Interaction (HCI), computer graphics, and physics. I believe a deep understanding of the symbiotic relationship between the arts and computing is crucial for advancing creative technologies, and my goal is to deepen that understanding by continuing to bridge these domains.

Bio: Ticha is a PhD Candidate at MIT CSAIL. She received her BFA in Art and MS in Computer Science from Carnegie Mellon University. Her research has been supported by the MIT Stata Presidential Fellowship and Siebel Scholars program. In addition to being published at premier Human-Computer Interaction and Computer Graphics venues, including ACM CHI and ACM SIGGRAPH, her work has received media coverage and recognition from outlets such as MIT News and Fast Company. Outside of research, Ticha is active in community-building at MIT. She served as Co-Lead of the MIT EECS Graduate Application Assistance Program from 2021–2023 and is currently Co-President of the MIT EECS Graduate Students Association. Outside of community-building, Ticha tries to find time for crafting, foraging, and making music with her small band.


Meet Dongqing Wang

PhD Candidate, EPFL

Research Vision: My research interests lie in Neural Inverse Rendering and extending it to enable visually plausible and controllable Virtual Reality. Specifically, my focus is on representing real-world 3D objects or scenarios through a neural representation that allows for visually plausible rendering of objects or scenes with editable viewpoints, illumination, material properties, etc.

At the beginning of my PhD, we proposed a novel representation for transparent materials, addressing the artifacts in refractive objects that arise in existing neural representations. We later worked on enhancing the editability of 360-degree neural radiance fields by utilizing 3D diffusion models.

Currently, our focus is to enable illumination and environmental control, as well as realistic object material editing for existing 3D neural scenes, by combining physically based modeling of light transport, deep learning, and generative models.

Bio: Dongqing is a third-year PhD candidate in the Image and Visual Representation Lab at EPFL, Switzerland, where she is supervised by Prof. Sabine Süsstrunk. She earned her Bachelor of Science degree in Computer Science with summa cum laude honors and a minor in Music, as well as a Master of Engineering degree, from Cornell University under the supervision of Prof. Steve Marschner. In the summer of 2024, she is interning at Meta Reality Labs in Zurich, working under the guidance of Dr. Lukas Bode and Prof. Adrian Jarabo.

In her free time, Dongqing enjoys musical theater, live music concerts, and making sounds with various instruments. She also likes reading, running, and occasionally savoring a good cocktail.


Meet Xinyue Wei

PhD Student, UC San Diego

Research Vision: My research interests lie in mesh reconstruction and processing algorithms designed to meet the strict requirements of downstream applications. 3D meshes are essential in various fields, including gaming, CAD design, AR/VR, and physical simulation. They are typically obtained through scanning, 3D reconstruction, and 3D generation techniques. However, many of these meshes cannot be directly used in applications with strict requirements, such as manifoldness and intersection-free geometry, which are critical for modeling, soft-body simulation, and more. My work focuses on developing techniques to generate high-quality meshes with desirable properties. I have been working on both optimization-based and feed-forward mesh reconstruction methods, which can generate manifold, intersection-free geometry with high-fidelity textures. Additionally, I am interested in creating mesh processing algorithms, such as approximate convex decomposition and mesh simplification, that endow meshes with specific attributes.

Bio: Xinyue Wei is a third-year PhD student in Electrical and Computer Engineering at UC San Diego, under the supervision of Prof. Hao Su. Her research focuses on 3D reconstruction and mesh processing. Xinyue holds a B.E. in Computer Science and Technology from Tongji University. She has gained experience through internships at Johns Hopkins University, Tencent AI Lab, and Adobe. In her spare time, Xinyue enjoys photography, singing, and playing the piano.