Abstract
Diffusion models have surpassed Generative Adversarial Networks (GANs) in generating high-quality, high-resolution images with greater detail and diversity. However, they still demand substantial time and computational resources. Recent work indicates that the quality of generated images is tied to the sampling of time steps, with each phase of the generation process affecting training and output differently. To address this challenge, this study introduces evolutionary algorithms to optimize the sequence of time-step sampling probabilities during the training phase of diffusion models. Because the conventional sampling-probability sequence is continuous-valued and high-dimensional, this paper simplifies the search space and redefines the search objective of the evolutionary algorithm, making the search process more efficient. Experimental results demonstrate that the proposed method not only speeds up the training of diffusion models but also reveals that effective time-step sampling probability sequences from adjacent training phases have similar distributions, indicating that a stable sequence of time steps exists that can consistently accelerate network convergence across extended training. These findings improve training efficiency and reduce computational cost while maintaining the quality of generated images.
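To make the idea concrete, here is a minimal sketch (not the authors' implementation) of evolving a coarse time-step sampling distribution for diffusion training. The binning into `N_BINS`, the (mu + lambda)-style evolution, and the toy `fitness` function are all illustrative assumptions; in the paper, fitness would be some proxy for training progress, which the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000      # total diffusion time steps
N_BINS = 10   # coarse bins to shrink the search space (assumption)
POP = 8       # population size
GENS = 20     # number of generations

def weights_to_probs(w):
    """Map non-negative bin weights to a per-step sampling distribution."""
    w = np.maximum(w, 1e-8)
    bin_p = w / w.sum()
    # spread each bin's probability mass uniformly over its time steps
    per_step = np.repeat(bin_p / (T // N_BINS), T // N_BINS)
    return per_step / per_step.sum()

def fitness(w):
    """Toy stand-in for the real objective (assumption): the paper's
    fitness would measure how well sampling under these probabilities
    accelerates training. Here we just reward mid-range emphasis."""
    p = weights_to_probs(w)
    steps = np.arange(T)
    return -np.abs((p * steps).sum() - T / 2)

# simple (mu + lambda) evolution over the low-dimensional bin weights
pop = rng.random((POP, N_BINS))
for g in range(GENS):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-POP // 2:]]                 # keep best half
    children = np.abs(elite + 0.1 * rng.standard_normal(elite.shape))  # mutate
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(w) for w in pop])]
probs = weights_to_probs(best)

# during diffusion training, draw each minibatch's time steps from the
# evolved distribution instead of the usual uniform distribution
batch_t = rng.choice(T, size=64, p=probs)
```

The binning is one way to realize the search-space simplification the abstract describes: it reduces the search from T continuous probabilities to N_BINS weights, which keeps the evolutionary search tractable.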
Original language | English |
---|---|
Title of host publication | 2025 International Joint Conference on Neural Networks (IJCNN 2025) |
Number of pages | 8 |
Publication status | Published - 18 Apr 2025 |
Event | International Joint Conference on Neural Networks (IJCNN 2025) - Rome, Italy. Duration: 30 Jun 2025 → 05 Jul 2025 |
Conference
Conference | International Joint Conference on Neural Networks (IJCNN 2025) |
---|---|
Abbreviated title | IJCNN 2025 |
Country/Territory | Italy |
City | Rome |
Period | 30 Jun 2025 → 05 Jul 2025 |