TY - JOUR
T1 - InSituNet: Deep Image Synthesis for Parameter Space Exploration of Ensemble Simulations
T2 - IEEE Transactions on Visualization and Computer Graphics
AU - He, Wenbin
AU - Wang, Junpeng
AU - Guo, Hanqi
AU - Wang, Ko-Chih
AU - Shen, Han-Wei
AU - Raj, Mukund
AU - Nashed, Youssef S.G.
AU - Peterka, Tom
N1 - Funding Information:
This work was supported in part by US Department of Energy Los Alamos National Laboratory contract 47145 and UT-Battelle LLC contract 4000159447, program manager Laura Biven.
Publisher Copyright:
© 1995-2012 IEEE.
PY - 2020/1
Y1 - 2020/1
AB - We propose InSituNet, a deep-learning-based surrogate model to support parameter space exploration for ensemble simulations that are visualized in situ. In situ visualization, generating visualizations at simulation time, is becoming prevalent in handling large-scale simulations because of I/O and storage constraints. However, in situ visualization approaches limit the flexibility of post-hoc exploration because the raw simulation data are no longer available. Although multiple image-based approaches have been proposed to mitigate this limitation, those approaches lack the ability to explore the simulation parameters. Our approach allows flexible exploration of parameter space for large-scale ensemble simulations by taking advantage of recent advances in deep learning. Specifically, we design InSituNet as a convolutional regression model to learn the mapping from the simulation and visualization parameters to the visualization results. With the trained model, users can generate new images for different simulation parameters under various visualization settings, which enables in-depth analysis of the underlying ensemble simulations. We demonstrate the effectiveness of InSituNet in combustion, cosmology, and ocean simulations through quantitative and qualitative evaluations.
KW - In situ visualization
KW - deep learning
KW - ensemble visualization
KW - image synthesis
KW - parameter space exploration
UR - http://www.scopus.com/inward/record.url?scp=85075636284&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85075636284&partnerID=8YFLogxK
U2 - 10.1109/TVCG.2019.2934312
DO - 10.1109/TVCG.2019.2934312
M3 - Article
C2 - 31425097
AN - SCOPUS:85075636284
SN - 1077-2626
VL - 26
SP - 23
EP - 33
JO - IEEE Transactions on Visualization and Computer Graphics
JF - IEEE Transactions on Visualization and Computer Graphics
IS - 1
M1 - 8805426
ER -