DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-quality Implicit Neural Representations

Dogyun Park, Sihyeon Kim, Sojin Lee, Hyunwoo J. Kim
MLV LAB, Korea University
ICLR 2024

TL;DR: We propose a latent diffusion model that generates hierarchically decomposed positional embeddings of implicit neural representations, enabling high-quality generation across various data domains.


Abstract

Recent studies have introduced a new class of generative models for synthesizing implicit neural representations (INRs) that capture arbitrary continuous signals in various domains. These models opened the door for domain-agnostic generative models, but they often fail to achieve high-quality generation. We observed that the existing methods generate the weights of neural networks to parameterize INRs and evaluate the network with fixed positional embeddings (PEs). Arguably, this architecture limits the expressive power of generative models and results in low-quality INR generation. To address this limitation, we propose Domain-agnostic Latent Diffusion Model for INRs (DDMI) that generates adaptive positional embeddings instead of neural networks' weights. Specifically, we develop a Discrete-to-continuous space Variational AutoEncoder (D2C-VAE), which seamlessly connects discrete data and continuous signal functions in a shared latent space. Additionally, we introduce a novel conditioning mechanism for evaluating INRs with hierarchically decomposed PEs to further enhance expressive power. Extensive experiments across four modalities, i.e., 2D images, 3D shapes, Neural Radiance Fields, and videos, with seven benchmark datasets, demonstrate the versatility of DDMI and its superior performance compared to the existing INR generative models.


Method Overview

PE generation

To generate high-quality implicit neural representations, we propose generating adaptive positional embeddings (PEs) with a diffusion model instead of generating the weights of INRs. This shifts the primary expressive power from the MLPs to the PEs, which we observe yields generations with finer detail. To further enhance expressive capacity, we hierarchically decompose the PEs into multiple scales, forming hierarchically decomposed basis fields (HDBFs), and modulate the MLPs in a coarse-to-fine manner via coarse-to-fine conditioning (CFC).
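
For intuition, here is a minimal PyTorch sketch of this idea (an illustrative sketch, not our released implementation): multi-scale feature grids stand in for the HDBFs, positional embeddings are bilinearly sampled from them at query coordinates, and each scale conditions the MLP in turn from coarse to fine. All module names, shapes, and the number of scales are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineINR(nn.Module):
    # Illustrative decoder: multi-scale basis fields condition an MLP coarse-to-fine.
    def __init__(self, num_scales=3, feat_dim=64, hidden_dim=256, out_dim=3):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim + hidden_dim, hidden_dim), nn.ReLU())
            for _ in range(num_scales)
        )
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, basis_fields, coords):
        # basis_fields: list of feature grids [B, feat_dim, H_s, W_s], coarse -> fine
        # coords: query coordinates in [-1, 1], shape [B, N, 2]
        grid = coords.unsqueeze(1)                            # [B, 1, N, 2] for grid_sample
        h = coords.new_zeros(coords.shape[0], coords.shape[1], self.hidden_dim)
        for bf, block in zip(basis_fields, self.blocks):
            pe = F.grid_sample(bf, grid, align_corners=True)  # [B, feat_dim, 1, N]
            pe = pe.squeeze(2).transpose(1, 2)                # [B, N, feat_dim]
            h = block(torch.cat([pe, h], dim=-1))             # inject this scale's PE
        return self.head(h)                                   # signal value per coordinate

Because the MLP only reads interpolated features at continuous coordinates, the same decoder can be queried at any resolution.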


DDMI

In the first stage, we learn a latent space of continuous signal functions using our D2C-VAE framework. We then approximate the distribution of this latent space with a latent diffusion model. After training, DDMI generates hierarchically decomposed basis fields (HDBFs) as positional embeddings, from which the MLPs read out the signal values at given coordinates.
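
A rough sketch of what sampling looks like end to end (hypothetical module names and interfaces, not our released API):

import torch

@torch.no_grad()
def sample_signal(diffusion, vae_decoder, inr_mlp, coords, latent_shape):
    z = torch.randn(latent_shape)                     # start from Gaussian noise
    for t in reversed(range(diffusion.num_timesteps)):
        z = diffusion.p_sample(z, t)                  # one reverse (denoising) step
    basis_fields = vae_decoder(z)                     # D2C-VAE decoder -> coarse-to-fine HDBFs
    return inr_mlp(basis_fields, coords)              # read out signal values at the coordinates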


Applications

2D image generation

DDMI can generate images with more fine-grained details than previous INR generative models.


Arbitrary-scale image generation

With DDMI, we can freely control the scale of generated images, e.g., render at arbitrary resolutions or zoom in, simply by using different sets of coordinates.
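
Concretely, rendering at a new scale only requires building a different coordinate grid. A small sketch (resolutions and ranges below are arbitrary examples):

import torch

def make_coord_grid(height, width, x_range=(-1.0, 1.0), y_range=(-1.0, 1.0)):
    # Build a [1, height*width, 2] grid of (x, y) query coordinates.
    ys = torch.linspace(y_range[0], y_range[1], height)
    xs = torch.linspace(x_range[0], x_range[1], width)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([grid_x, grid_y], dim=-1).reshape(1, -1, 2)

coords_lr   = make_coord_grid(256, 256)                              # 256 x 256 render
coords_hr   = make_coord_grid(1024, 1024)                            # same sample at 1024 x 1024
coords_zoom = make_coord_grid(256, 256, (-1.0, 0.0), (-1.0, 0.0))    # zoom into one quadrant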


3D generation

DDMI can generate view-consistent and finely detailed 3D objects, represented as occupancy functions or Neural Radiance Fields (NeRFs).

Occupancy Function

Neural Radiance Fields
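
For the occupancy-function results, a mesh can be extracted by querying the generated INR on a dense 3D grid and running marching cubes. A sketch under the same illustrative interface as above (a 3D analogue of the 2D decoder; all names are hypothetical):

import torch
from skimage import measure

@torch.no_grad()
def extract_mesh(occupancy_mlp, basis_fields, resolution=128, threshold=0.5):
    # Query the generated occupancy INR on a dense grid over [-1, 1]^3.
    axis = torch.linspace(-1.0, 1.0, resolution)
    zs, ys, xs = torch.meshgrid(axis, axis, axis, indexing="ij")
    coords = torch.stack([xs, ys, zs], dim=-1).reshape(1, -1, 3)     # [1, R^3, 3]
    occ = occupancy_mlp(basis_fields, coords)                        # occupancy in [0, 1]
    volume = occ.reshape(resolution, resolution, resolution).cpu().numpy()
    # Marching cubes turns the occupancy volume into a triangle mesh.
    verts, faces, normals, _ = measure.marching_cubes(volume, level=threshold)
    return verts, faces, normals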

Text to shape generation

DDMI can also be easily adapted to generate diverse and semantically aligned 3D objects from a text prompt.

"A rocking chair"

"A two-layered table"


Video generation

Using DDMI, we can also generate high-quality videos (16 frames each).
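
Here the video is an INR over space and time, so 16 frames simply correspond to 16 sampled time coordinates. A small sketch of the coordinate grid (frame count and resolution are illustrative):

import torch

def make_video_coords(num_frames=16, height=128, width=128):
    # A video INR maps (x, y, t) to RGB; 16 frames = 16 sampled time values.
    ts = torch.linspace(-1.0, 1.0, num_frames)
    ys = torch.linspace(-1.0, 1.0, height)
    xs = torch.linspace(-1.0, 1.0, width)
    grid_t, grid_y, grid_x = torch.meshgrid(ts, ys, xs, indexing="ij")
    coords = torch.stack([grid_x, grid_y, grid_t], dim=-1)           # [T, H, W, 3]
    return coords.reshape(1, -1, 3)                                  # [1, T*H*W, 3]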


BibTeX

@inproceedings{park2024ddmi,
  author    = {Dogyun Park and Sihyeon Kim and Sojin Lee and Hyunwoo J. Kim},
  title     = {DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations},
  booktitle = {The Twelfth International Conference on Learning Representations},
  year      = {2024},
}