The first workshop on

AI for 3D Content Creation

Room S06 - Monday, Oct. 2nd, Full Day @ ICCV2023, Paris, France

Remote attendees can join via Zoom!

Developing algorithms capable of generating realistic, high-quality 3D data at scale has been a long-standing problem in Computer Vision and Graphics. We anticipate that generative models that can reliably synthesize meaningful 3D content will completely revolutionize the workflow of artists and content creators, and will also enable new levels of creativity through "generative art". Although there has recently been considerable success in generating photorealistic images, the quality and generality of 3D generative models lag behind their 2D counterparts. Additionally, efficiently controlling what is generated, and scaling these approaches to complex scenes with several static and dynamic objects, remain open challenges.

In this workshop, we seek to bring together researchers working on generative models for 3D shapes, humans, and scenes to discuss the latest advances, existing limitations, and next steps towards developing generative pipelines capable of producing fully controllable 3D environments with multiple humans interacting with each other or with objects in the scene. In the last few years, there has been significant progress in generating 3D objects, humans, and scenes independently, but only recently has the research community shifted its attention towards generating meaningful dynamics and interactions between humans, or between humans and other scene elements. To this end, our workshop will cover the following topics:

  • What is the best representation for generating meaningful variations of 3D objects with texture and high quality details?
  • What is the best representation to enable intuitive control over the generated objects?
  • How to synthesize realistic humans performing plausible actions?
  • How to generate fully controllable 3D environments, where it would be possible to manipulate both the appearance of the scene elements as well as their spatial composition?
  • What is the best representation for generating plausible dynamics and interactions between humans or humans and objects?
  • What are the ethical implications that arise from artificially generated 3D content, and how can we address them?

  • Oct 1 2023: Remote attendees can join the workshop through here. In case there are issues with the link, you can use the following Meeting ID: 95263783114 and Passcode: 849452 to join remotely.
  • Oct 1 2023: The poster session will take place in Room S06.
  • Sep 25, 2023: The workshop will take place on Monday, October 2nd, in Room S06! For more details please refer here.
  • Sep 18, 2023: The list of accepted papers has been released! Check them out!
  • Aug 2, 2023: We are hosting the OmniObject3D Challenge! The submission portal is open until 23:59 UTC, September 15, 2023!
  • July 13, 2023: We have extended the paper submission deadline by a couple of days! The new paper and supplemental material deadline is July 23 (AoE)!
  • June 7, 2023: The paper submission is now open.
  • April 3, 2023: Workshop website launched, with the tentative list of the invited speakers announced.

The workshop will take place on Monday, October 2nd, in Room S06! Additional details will be provided to remote attendees as soon as possible. Note that all times in the schedule below are in CET.

08:45 - 09:00 Welcome and Opening Remarks
09:00 - 09:40 Daniel Ritchie Neurosymbolic Models for 3D Content Creation
09:40 - 10:20 Matthew Tancik NeRFs in Practice: From Tooling to Production
10:20 - 10:40 Naureen Mahmood Keep it SMPL
10:40 - 10:55 Coffee Break
10:55 - 11:15 Kai-Hung Chang Application of Deep Graph Learning to Architectural Design and Structural Engineering
11:15 - 11:55 Jiajun Wu TBD
11:55 - 12:35 Gul Varol Is Human Motion a Language without Words?
12:35 - 13:05 Ben Poole TBD
13:15 - 14:10 Lunch Break
14:10 - 14:50 Rana Hanocka TBD
14:50 - 15:10 Terrance DeVries 3D Content Creation in Production at Luma
15:10 - 15:40 OmniObject3D Challenge
15:40 - 17:30 Poster Session
17:35 - 17:55 Matt Deitke Building 3D Foundation Models with Objaverse-XL
17:55 - 18:00 Closing Remarks
Call for Papers

We accept both archival and non-archival paper submissions. The accepted archival papers will be included in the ICCV2023 conference proceedings, while the non-archival papers will only be presented at the workshop. We welcome papers that have already been accepted to the ICCV main conference or to previous conferences; these works will be included in the non-archival paper track. Every accepted paper will have the opportunity to host a poster presentation at the workshop.

We accept two forms of papers:
  • Long paper: Long papers should not exceed 8 pages excluding references and should use the official ICCV template. Long papers are for presenting mature work. A long paper should not only describe novel ideas but also include extensive experimental evaluations that support them.
  • Short paper: Short papers should not exceed 4 pages excluding references and should also use the official ICCV template. Short papers are intended for presenting ideas that are still at an early stage. Although comprehensive analyses and experiments are not necessary for short papers, they should include some basic experiments to support their claims. Moreover, in the short paper track, we encourage submissions focusing on creative contributions demonstrating applications of existing technology to 3D content creation pipelines. For example, we look forward to submissions showcasing how ongoing research on 3D generative AI can facilitate the workflow of experienced as well as novice users in fields such as architectural engineering, product design, education, art, and entertainment.

All submissions should be anonymized. Papers with more than 4 pages (excluding references) will be reviewed as long papers, and papers with more than 8 pages (excluding references) will be rejected without review. Supplementary material is optional; supported formats are pdf, mp4, and zip. All papers that were not previously presented at a major conference will be peer-reviewed by three experts in the field in a double-blind manner. If you are submitting a previously accepted conference paper, please also attach a copy of the acceptance notification email to the supplementary material documents.

Submission Website:

All submissions should follow the ICCV paper format:

Paper Review Timeline:

Paper submission and supplemental material deadline: Sunday, July 23, 2023 (AoE)
Notification to authors: Monday, August 7, 2023
Camera-ready deadline: Saturday, August 19, 2023

Keynote Speakers
Ben Poole
Google Brain
Matthew Tancik
Jiajun Wu
Daniel Ritchie
Brown University
Rana Hanocka
University of Chicago
Gul Varol
École des Ponts ParisTech
Spotlight Speakers
Naureen Mahmood
CEO of Meshcapade
Terrance DeVries
Founding Member and Research Scientist at Luma AI
Kai-Hung Chang
ML Engineer at Google
Matt Deitke
Researcher at Allen Institute for AI
Organizers
Despoina Paschalidou
Georgios Pavlakos
UC Berkeley
Amlan Kar
University of Toronto and NVIDIA Research
Kaichun Mo
NVIDIA Research
Davis Rempe
NVIDIA Research
Paul Guerrero
Adobe Research
Siyu Tang
ETH Zurich
Leonidas Guibas
Relevant Previous Workshops