Full Text Article

Large-Scale Visualization of 3D Unstructured Groundwater Model Using Cave Automated Virtual Environment

Received Date: June 25, 2024 Accepted Date: July 25, 2024 Published Date: July 28, 2024

doi: 10.17303/jdmt.2024.1.202

Citation: Mark Starovoyotv, Devender Rapolu, Shizhong Yang, Sudhir Trivedi, Frank T.-C. Tsai (2024) Large-Scale Visualization of 3D Unstructured Groundwater Model Using Cave Automated Virtual Environment. J Data Sci Mod Tech 1: 1-14

The immersive three-dimensional (3D) virtual reality (VR) visualization of groundwater models allows us to deepen our understanding of aquifer systems and provide better solutions to pressing groundwater-related problems, such as groundwater recharge, water quality, and sustainability. Visualization assists in accurately developing groundwater models and revealing important subsurface features, including faulting, folding, and unconformity. However, assessing model accuracy poses challenges due to the complexity of geology and groundwater systems. This research demonstrates a workflow to visualize and analyze raw 3D unstructured groundwater model data using an immersive Cave Automated Virtual Environment (CAVE). To visualize the unstructured groundwater model data, the raw dataset is converted into interactive CAVE-compatible formats utilizing a set of tools: ParaView, Blender, and Unity. This enables researchers to immerse themselves in the data, identifying influential patterns and relationships. The resulting insights can inform the development of sophisticated machine-learning models for groundwater level prediction. The CAVE’s immersive capabilities allow intuitive exploration from various perspectives, providing a more holistic understanding of the factors affecting groundwater levels. These insights are crucial to improve predictive models. The CAVE results also facilitate collaborative analysis and have potential applications in training and education. This research demonstrates the value of immersive VR tools such as the CAVE for unraveling intricacies within high-dimensional scientific data to drive real-world forecasting and modeling applications.

Keywords: Cave Automated Virtual Environment (CAVE); Groundwater model; Visualization

Abbreviations

CAVE: Cave Automated Virtual Environment; GW: Groundwater; VR: Virtual Reality; 3D: Three Dimensional

The Cave Automated Virtual Environment (CAVE) is a room-sized, 1:1 human-scale immersive environment [1]. It is widely used in hospitals, strategic planning, and pilot safety training, where VR systems have proven efficient, economical, and safe. The Windows interface provides drag-and-drop capability and a much smaller user learning curve. In academic settings, faculty and students can visually interact with their data in real-time 3D by wearing stereo glasses and using interactive gloves while viewing the data. Virtual reality solutions such as the CAVE combine stereoscopic projection technology with 3D graphics to provide the illusion of complete presence. CAVE users can walk into the data and gain new insights into their models or problems, and can interact with a virtual environment safely and efficiently in their daily work. CAVEs can simulate complex environments, allowing researchers to evaluate and visualize designs in real time, and can be used to analyze the behavior of complex systems, such as those used in transportation, energy, and water management. In civil engineering, CAVEs provide a virtual environment that allows researchers to evaluate and visualize designs in an immersive, 3D setting; with only a two-dimensional visualization, it is possible to miss issues or areas for improvement. CAVEs also enable researchers to examine the behavior of intricate systems that can be challenging to replicate in real life. Additionally, CAVEs can be used for training, allowing engineers to experience real-world scenarios in a safe and controlled environment.

In geoscience applications, reliable prediction from groundwater (GW) models has crucial importance across various fields, including civil engineering, earth science, agriculture, energy, and sustainability. Since GW systems are complex and many factors influence GW levels, it is difficult to generate accurate predictions. An emerging approach is to utilize advanced 3D data visualization combined with machine learning techniques. Virtual reality systems like the CAVE allow users to visualize and interact dynamically with complex, high-dimensional datasets. This can provide critical insights into patterns, relationships, and influential factors that are difficult to discern through traditional two-dimensional (2D) analysis. These insights can then inform the development of sophisticated machine-learning models for enhanced predictive capabilities.

This research demonstrates a technical workflow to implement 3D visualization. We employ a set of tools, namely ParaView [2], Blender [3], and Unity [4], to process raw, unstructured GW model data into an optimized format suitable for immersive visualization and interaction in a CAVE virtual environment. This enables researchers to thoroughly analyze the 3D dynamics of GW levels and gain the understanding needed to engineer accurate models for forecasting future levels. CAVEs can provide researchers with an immersive 3D visualization of GW flow, allowing them to identify potential issues or improvement areas quickly and accurately. Additionally, CAVEs let researchers assess different solutions and create simulations of different GW flow scenarios, which helps make informed decisions about managing GW resources.

The demonstrated GW CAVE model is significant for geoscience, civil engineering, and safe learning environments. To the best of the authors’ knowledge, no CAVE-related applications have been used to visualize 3D GW dynamics. The CAVE is a walk-in 3D virtual reality (VR) center. Scientists can walk into the data, which allows them to gain new insights into their models or problems. CAVE applications support: (1) Data analysis: examining data in real time to gain a better understanding of it; (2) Immersion: an information delivery process that involves the whole body and consciousness of a participant (or multiple users); and (3) Detailed view: high-resolution 3D images, simulations, and displays that provide more detail. As a result, detailed visual representations are essential to simplifying complex environments.

The methodology outlines a detailed process for converting raw GW model data into a CAVE-compatible format by integrating ParaView, Blender, and Unity. Here is an explanation of the detailed steps with additional points on parameters and settings during data processing:

ParaView Data Preparation: After loading the .vtu file (raw GW model data) into ParaView, set the "Solid Color" representation to "Cell_value" to visualize the data values, and then save the data as a .pvd file (ParaView data file) for better performance and portability. Ensure that the .pvd file is the only visible data source in the pipeline browser.
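To illustrate what the saved collection file actually contains: a .pvd file is just XML wrapping a list of dataset references. The sketch below (plain Python with hypothetical filenames) writes such a wrapper by hand; in practice, ParaView's "Save Data" dialog generates this file for you.

```python
import xml.etree.ElementTree as ET

def write_pvd(pvd_path, vtu_files):
    """Write a minimal ParaView .pvd collection file referencing .vtu datasets."""
    root = ET.Element("VTKFile", type="Collection", version="0.1",
                      byte_order="LittleEndian")
    collection = ET.SubElement(root, "Collection")
    for step, vtu in enumerate(vtu_files):
        # Each DataSet entry points at one unstructured-grid file on disk.
        ET.SubElement(collection, "DataSet",
                      timestep=str(step), group="", part="0", file=vtu)
    ET.ElementTree(root).write(pvd_path, xml_declaration=True, encoding="utf-8")

# Hypothetical filenames for illustration only.
write_pvd("gw_model.pvd", ["gw_model.vtu"])
```

Because the .pvd file only stores references, it stays tiny even when the underlying .vtu data is hundreds of megabytes, which is part of why it improves portability.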

Export from ParaView: Export the scene as an .x3d file (Extensible 3D graphics file format). Enable the "Export Color Legends" option to include color mapping information.

Import into Blender: Import the .x3d file into a clean Blender scene. Adjust the origin of the imported geometry to the world origin using the "Set Origin" and "Snap" tools. Merge overlapping vertices using the "Merge by Distance" operation in Edit mode. If needed, separate the interior and exterior of the geometry for better unwrapping.
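The "Merge by Distance" step can be understood as collapsing every pair of vertices closer than a threshold into one. The pure-Python sketch below, using a simple spatial hash, illustrates the idea; Blender's own operator (exposed in scripting as `bmesh.ops.remove_doubles`) is what the workflow actually uses.

```python
def merge_by_distance(vertices, threshold=1e-4):
    """Collapse vertices closer than `threshold`, mimicking Blender's
    "Merge by Distance". Returns (unique_vertices, index_map) so that
    faces can be re-indexed after the merge."""
    cell = threshold                  # spatial-hash cell size
    grid = {}                         # hash cell -> indices into `unique`
    unique, index_map = [], []
    for v in vertices:
        key = tuple(int(c // cell) for c in v)
        found = None
        # Check this cell and its 26 neighbours for a close-enough vertex.
        for nb in _neighbours(key):
            for i in grid.get(nb, []):
                u = unique[i]
                if sum((a - b) ** 2 for a, b in zip(u, v)) <= threshold ** 2:
                    found = i
                    break
            if found is not None:
                break
        if found is None:
            found = len(unique)
            unique.append(v)
            grid.setdefault(key, []).append(found)
        index_map.append(found)
    return unique, index_map

def _neighbours(key):
    """Yield the 27 hash cells surrounding (and including) `key`."""
    x, y, z = key
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                yield (x + dx, y + dy, z + dz)
```

For example, `merge_by_distance([(0.0, 0.0, 0.0), (0.0, 0.0, 0.00005), (1.0, 0.0, 0.0)], threshold=1e-3)` collapses the first two vertices into one, leaving two unique vertices. Choosing the threshold matters: too small leaves duplicate seams from the X3D export, too large welds genuinely distinct model features.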

UV Unwrapping: Create seams on the exterior geometry to define the UV islands. Unwrap the exterior geometry using the "Unwrap" operator. Create seams on the interior geometry to separate vertical walls into individual islands. Then unwrap the interior geometry.
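To show what unwrapping produces, the toy sketch below projects each vertex onto the plane of the geometry's two largest extents and normalizes the result into [0, 1] UV space. This is a deliberate simplification: Blender's "Unwrap" operator uses seam-aware angle-based flattening, but the output has the same shape, one 2D UV coordinate per vertex.

```python
def planar_unwrap(vertices):
    """Toy stand-in for Blender's "Unwrap": project each vertex onto the
    plane spanned by the two axes with the largest extent, then normalize
    into [0, 1] x [0, 1] UV space."""
    mins = [min(v[i] for v in vertices) for i in range(3)]
    maxs = [max(v[i] for v in vertices) for i in range(3)]
    extents = [maxs[i] - mins[i] for i in range(3)]
    drop = extents.index(min(extents))      # flatten the thinnest axis
    keep = [i for i in range(3) if i != drop]
    uvs = []
    for v in vertices:
        uv = []
        for axis in keep:
            span = extents[axis] or 1.0     # guard against zero extent
            uv.append((v[axis] - mins[axis]) / span)
        uvs.append(tuple(uv))
    return uvs
```

A flat quad such as `[(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0)]` drops the z axis and maps cleanly onto the unit UV square; real interior walls need the seams described above so each wall gets its own island instead of overlapping in UV space.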

Texture Baking: Set the Render Engine to Cycles for better quality baking. In the Bake settings, select "Diffuse" as the Bake Type and deselect "Direct" and "Indirect" (select only "Color"). Set up a Principled BSDF material with a Color Attribute node connected to the "Col" attribute. To generate a new image for texture baking, add an Image Texture node and create the necessary image. Bake the texture using the "Bake" operator in the Render settings, and save the baked texture image.

Export to FBX: Export the geometry and materials as an FBX file for import into Unity.

Import into Unity: Import the FBX file into Unity. Apply the baked texture to the imported geometry. Then set up materials, lighting, and other required components for rendering in the CAVE environment.

Parameters and Settings

ParaView: Set the "Solid Color" representation to "Cell_value" to visualize data values correctly. Then enable "Export Color Legends" during .x3d export to include color mapping information.

Blender: Use the "Merge by Distance" operator with an appropriate distance threshold to merge overlapping vertices. Create seams on the geometry to define UV islands for optimal unwrapping. Set the Render Engine to "Cycles" for better quality texture baking. In the Bake settings, select "Diffuse" as the Bake Type and deselect "Direct" and "Indirect" to bake only the color information. Then adjust the resolution and other settings of the Image Texture node for the desired texture quality.

Unity: Import the FBX file with appropriate import settings (e.g., scale, materials, etc.). Apply the baked texture to the imported geometry using the appropriate material settings. Then configure lighting, shaders, and other rendering settings as required for the CAVE environment.

A specialized workflow utilizing scientific visualization and 3D modeling software tools was implemented to generate a CAVE-compatible dataset from the original 3D unstructured data inputs.

The raw data was provided in the VTU format [5], an unstructured grid containing GW model data. This large, complex dataset was loaded into the ParaView software for initial processing. ParaView is an open-source visualization application specialized for working with large multidimensional datasets. It provides an extensive toolset enabling interactive manipulation and optimization of 3D models.
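Since the ASCII variant of VTU is XML-based, its structure can be inspected with standard tools. The hedged sketch below parses a tiny hand-written unstructured grid (a single tetrahedron) with Python's standard library to show how points and cells are laid out; a real file like this model is of course read with ParaView or the VTK library instead.

```python
import xml.etree.ElementTree as ET

# A hand-written single-tetrahedron grid illustrating the ASCII VTU layout.
MINIMAL_VTU = """\
<VTKFile type="UnstructuredGrid" version="0.1">
  <UnstructuredGrid>
    <Piece NumberOfPoints="4" NumberOfCells="1">
      <Points>
        <DataArray type="Float32" NumberOfComponents="3" format="ascii">
          0 0 0   1 0 0   0 1 0   0 0 1
        </DataArray>
      </Points>
      <Cells>
        <DataArray type="Int32" Name="connectivity" format="ascii">0 1 2 3</DataArray>
        <DataArray type="Int32" Name="offsets" format="ascii">4</DataArray>
        <DataArray type="UInt8" Name="types" format="ascii">10</DataArray>
      </Cells>
    </Piece>
  </UnstructuredGrid>
</VTKFile>
"""

def read_vtu(text):
    """Read point coordinates and cell connectivity from an ASCII .vtu string."""
    piece = ET.fromstring(text).find("UnstructuredGrid/Piece")
    n_points = int(piece.get("NumberOfPoints"))
    n_cells = int(piece.get("NumberOfCells"))
    coords = [float(x) for x in piece.find("Points/DataArray").text.split()]
    points = [tuple(coords[i:i + 3]) for i in range(0, len(coords), 3)]
    conn = [int(x) for x in
            piece.find("Cells/DataArray[@Name='connectivity']").text.split()]
    return n_points, n_cells, points, conn
```

The `types` array identifies the cell shape (10 is VTK's tetrahedron code), and `offsets` marks where each cell's vertex list ends in `connectivity`; large models simply repeat this pattern across many thousands of cells, often in binary rather than ASCII encoding.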

The VTU [5] data was divided into manageable sections in ParaView. Various enhancements were applied, including scene shadows and perspective transformations. The data was then exported into the standard X3D format to facilitate transfer between software tools. X3D is an open 3D graphics standard designed for web and VR applications.

The Blender software package was utilized to convert the X3D virtual environment into a format compatible with the CAVE system. Blender is a powerful, open-source computer graphics tool for 3D modeling and animation. It allows developers to create optimized 3D assets ready for integration into interactive VR environments. Specifically, the X3D files were imported into Blender and converted into FBX files containing all necessary components like lighting, textures, camera data, and poses. The FBX format is widely supported across visualization and game engine tools.

The resulting FBX files were loaded into the Unity real-time development platform to create a fully interactive CAVE application. Unity provided critical functionality to import the 3D assets from Blender, arrange them into dynamic scenes, add interactivity through C# scripting, and deploy the final build to the CAVE VR system. This combination of ParaView, Blender, and Unity tools enabled efficient conversion of the raw, unstructured dataset into an optimized, interactive virtual environment tailored for the CAVE’s advanced 3D simulation and visualization capabilities.

Virtual Reality (VR) content creation workflow: (1) ParaView processing: ParaView is an open-source, widely used multi-platform data analysis and visualization application. It is designed specifically for interactive scientific visualization and analysis of large datasets. In CAVE content creation, ParaView is critical for refining and visualizing 3D models and scenes before they are passed on to game engines like Unity. Its visualization capabilities allow fine-tuning materials, textures, lighting, and other attributes of 3D assets in an interactive visual environment. Users can manipulate and enhance scenes in formats like VTU and X3D in ParaView before exporting them for use downstream in the pipeline. This enables elevated levels of quality and realism in the final VR experience. The ability to work with large datasets makes ParaView suitable for complex 3D environments.

(2) Blender processing: Among the many features of Blender is its ability to model, animate, simulate, and render animations in three dimensions. It provides a comprehensive toolset for creating 3D assets and animations from scratch. In the VR production pipeline, Blender is used early on to model the core 3D objects, characters, and environments that will populate the VR experience. Its modeling and animation capabilities allow the crafting of rich interactive assets optimized for real-time rendering in game engines. The base 3D assets created in Blender are exported as FBX files and imported into Unity to develop a fully interactive experience. Blender provides a critical starting point for generating high-quality 3D content to serve as the foundation for the final VR application. (3) Unity processing: Unity is a cross-platform, real-time development platform for creating 2D, 3D, VR, and AR interactive experiences and games. Unity has become popular for developing VR applications across mobile, desktop, and console platforms. In the workflow, Unity is the game engine for importing 3D assets created in Blender, arranging them into interactive scenes, programming behaviors and functionality, adding multimedia content, and publishing the finished application. Key capabilities like high-quality real-time rendering, physics simulation, scripting, and multi-platform publishing make it well-suited for creating immersive, performant VR experiences from the 3D building blocks produced in Blender. The interoperability between ParaView and Blender, and between Blender and Unity, creates an efficient end-to-end pipeline for producing VR content, with each tool providing specific strengths. Figure 1 depicts the dataset processing flowchart.

The workflow for creating 3D virtual reality content uses various software tools. It starts with ParaView: the VTU [5] files are loaded into ParaView, an open-source scientific visualization and analysis application, where the materials and textures of the 3D models can be further refined. The VTU [5] file is converted into ParaView data (PVD), and the scene is exported to X3D, an open standard file format designed for the representation and communication of 3D graphics. The X3D files can then be converted to the FBX format for additional processing before being passed to Unity. Modeling and refinement of the 3D assets was done using Blender, an open-source 3D graphics program that allows you to create and animate complex 3D models. In Blender, the model is unwrapped and baked, and the materials are assigned to the vertices. The modeled assets are then exported from Blender as FBX files; FBX is a standard file format for transferring 3D data between different programs. These FBX files are brought into Unity, a real-time 3D development platform, as shown in Figure 2. In Unity, the 3D models can be arranged into scenes, materials and textures can be added, and interactivity can be programmed to create a complete virtual reality experience. Unity also allows exporting the model to ASCII; here, it is exported as FBX ASCII files containing the Unity scene data.

The Unity project must be added to the Room of Shadows folder, and the Unity package must be imported. After importing the Unity package, add the FBX file and import the materials to the scene. After importing the scene, go to File and click on Build and Run, specifying the file path to the project folder to generate the EXE file. Then open Trackd and click Start to connect, followed by opening DTrack to check the connection status. Next, open the GetReal3D plugin and import the EXE file. After importing, click launch to launch the application on the CAVE system.

This workflow allows for processing complex 3D models in ParaView and moving files between programs using standardized file formats like X3D. The assets refined in Blender are imported into Unity to develop fully interactive VR experiences and to visualize and refine the materials. The final output is a set of FBX ASCII files.

The outlined methodology has successfully transformed raw groundwater (GW) model data into an immersive Cave Automatic Virtual Environment (CAVE) visualization. This interactive 3D visualization empowers researchers and stakeholders, enabling them to comprehend the underground water flow patterns within the study area. This offers a significant advantage over traditional 2D visualization tools like ArcGIS, which, while powerful and widely utilized, primarily operates in a 2D environment, limiting the ability to comprehend and fully explore 3D subsurface processes.

CAVE visualization, on the other hand, provides a truly immersive and interactive 3D experience, allowing users to metaphorically 'step inside' the data and explore it from multiple angles and perspectives. One significant advantage of the CAVE visualization over ArcGIS is the ability to perceive depth and spatial relationships more intuitively. In ArcGIS, users must rely on contour lines, cross-sections, or 3D surface representations to infer the third dimension, which can be cognitively challenging and prone to misinterpretations. The CAVE visualization, however, presents the data in its native 3D form, enhancing the users' spatial understanding and facilitating pattern recognition in a more natural and intuitive way [6].

Another advantage of CAVE visualization is the ability to seamlessly integrate multiple data layers and visualize their interactions. For example, users can simultaneously explore GW flow patterns, contaminant plumes, and surface water bodies, all in one view. In contrast, ArcGIS may require users to switch between multiple 2D layers or rely on specialized 3D extensions, which can be cumbersome and potentially obscure vital relationships [7].

Concrete evidence of the effectiveness of the CAVE visualization comes from several case studies and user feedback. One such case study involved the visualization of a contaminant plume in a GW aquifer [7], where users reported a significantly improved comprehension of the plume's 3D extent and concentration gradients compared to traditional 2D visualizations. Another case study visualized the interaction between GW and surface water bodies [6]. Users stated that the immersive experience provided by the CAVE visualization helped them grasp the complex interplay between these systems, which is crucial for understanding ecosystem dynamics and managing water resources effectively.

We completed a proof-of-concept large-scale 3D visualization of the GW model in the SUBR CAVE system. The raw data is an unstructured VTU [5] file (398 MB) representing the GW system in the New Orleans area. The raw data was loaded into ParaView, clipped into four sub-divisions, and converted into X3D format data using shadows and perspective transformation techniques. A recent data test was also processed on ParaView 5.11.0, where a virtual reality environment was loaded directly from our VTU [5] file, and a 3D viewer-level CAVE test was completed. A GPU PC was assembled to process the big data promptly and save time: a configuration with a 2 TB SSD, 128 GB RAM, an Intel i9 CPU, and a GeForce RTX 4090 GPU allows us to install the most recent versions of the CAVE and ParaView data visualization software and run them smoothly.
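As a rough illustration of the clipping step, the sketch below partitions a point cloud into four quadrant sub-divisions around its centroid. This is only a simplified analogue: ParaView's interactive Clip filter operates on whole unstructured cells and arbitrary planes, not just points and axis-aligned splits.

```python
def clip_into_quadrants(points):
    """Partition a 3D point cloud into four sub-divisions by the sign of
    (x, y) relative to the cloud's centroid -- a simplified stand-in for
    the interactive Clip filter used in ParaView."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    quadrants = {(sx, sy): [] for sx in (0, 1) for sy in (0, 1)}
    for p in points:
        # Key encodes which side of the centroid the point falls on.
        quadrants[(int(p[0] >= cx), int(p[1] >= cy))].append(p)
    return quadrants
```

Splitting the 398 MB dataset this way keeps each sub-division small enough to convert and render interactively, at the cost of having to manage seams where the sub-divisions meet.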

After adding the Unity project to the Room of Shadows folder we have set up for the simulation content, and once the project files are copied to the folder, we need to import the relevant Unity package that contains the assets and plugins required for the CAVE system. After importing the package, locate the FBX file containing the 3D model scene and import it into the Unity project folder. Also, ensure that all associated texture and material files are imported, so they are correctly applied to the scene.

With the core scene assets now imported, the next major step is to build and run the project by going to File -> Build and Run within the Unity editor. Using the build target location in the file explorer, we can ensure that the resulting executable file remains within the Room of Shadows project folder. Building will generate the necessary EXE and supporting files to run the application. The application is built at this point, but we still need to connect to the CAVE system’s tracking services. Open the Trackd application and hit "Start" to initialize the tracking server. Next, launch DTrack and look at the Log section; it should display connections from Trackd, indicating that components like the head tracker are connected.

Finally, open the GetReal3D plugin within the CAVE system, which allows importing the newly built EXE file: navigate to it in the file explorer and load it in. Now hit "Launch" within the GetReal3D interface. The simulation content scene will initialize and display at full scale on the walls of the CAVE for interaction and visualization.

The workflow produced a CAVE-compatible interactive environment from the original large, unstructured VTU [5] data. The implementation allows researchers to visually explore the complex dynamics of the GW model across the New Orleans region. Users can manipulate variables, isolate areas of interest, and identify patterns and relationships to inform predictive models. The direct interaction and immersion within the data afforded by the CAVE system enable more intuitive understanding compared to traditional two-dimensional analysis methods. Figure 3 shows the unwrapping (UV unwrapping) of the material. Our tested CAVE display result is shown in Figure 4 after applying the material to the model. Overall, the results demonstrate that the CAVE visualization, enabled by the integration of ParaView, Blender, and Unity, provided a powerful and engaging way to explore and communicate GW model data, offering significant advantages over traditional 2D visualization tools like ArcGIS in terms of spatial understanding, pattern recognition, and data integration.

The successful implementation of the Cave Automatic Virtual Environment (CAVE) visualization for groundwater (GW) model data not only demonstrates its potential to enhance our comprehension of intricate subsurface processes but also highlights its unique features. Unlike traditional 2D visualization tools like ArcGIS, the CAVE environment offers a truly 3D and interactive experience, providing distinct advantages in spatial understanding, pattern recognition, and data integration. This novel approach to visualization can spark excitement and curiosity in academic and professional researchers.

The real-world applications of CAVE visualization technology hold immense promise, particularly in GW management. For instance, it has been used in decision-making processes for environmental remediation [6]. In this context, it provided an immersive and intuitive representation of contaminant plumes and GW flow patterns, enabling stakeholders and decision-makers to fully comprehend the spatial extent and implications of contamination. This led to the development of more informed and effective remediation strategies. This advantage over traditional 2D visualizations like ArcGIS is not merely a novelty but a notable change in communicating complex subsurface conditions and securing stakeholder buy-in.

Another application is hydrogeological education and training. CAVE visualization can be a powerful pedagogical tool, enabling students and professionals to better understand GW dynamics, aquifer characteristics, and interactions between GW and surface water systems. For example, when learning about wetlands and saltwater contamination in rivers, the use of CAVE visualization can significantly improve students' understanding and retention of complex subsurface concepts. Compared to 2D visualizations in ArcGIS or other Geographic Information System (GIS) software, the CAVE environment offers a more immersive and experiential learning approach.

Looking beyond GW applications, the transformative potential of CAVE visualization technology becomes evident in various scientific and engineering domains where 3D data visualization is crucial. Its adaptability to geological modeling, fluid dynamics simulations, medical imaging, and surgical planning opens new and exciting avenues for its utilization and further research. This optimistic outlook can inspire our researchers to explore and innovate with this technology.

While CAVE visualization offers numerous benefits over traditional 2D visualization tools like ArcGIS, it is important to acknowledge its limitations and constraints. One significant limitation is the computational resources required to render and interact with large-scale, high-resolution data sets in real time. The processing power and memory requirements for such visualizations can be substantial, potentially limiting the scalability and accessibility of the technology compared to more lightweight software like ArcGIS. Furthermore, the cost of projectors for CAVE systems can be prohibitively expensive. This transparency about the technology's current limitations is crucial for fostering a realistic understanding of its capabilities, ensuring researchers are well-informed.

Additionally, the accuracy of the visualization depends on the quality of the data from the GW models. Inaccuracies or uncertainties in the model data may propagate into the CAVE visualization, potentially leading to misrepresentations or misinterpretations of the subsurface dynamics. This issue is not unique to CAVE visualization and equally concerns traditional 2D visualizations like ArcGIS.

Another limitation is the potential for visual fatigue and discomfort experienced by some users during prolonged exposure to immersive environments. While this issue can be mitigated through proper ergonomic design and user training, it is a factor that must be considered when deploying CAVE visualization technology in real-world settings [1]. In contrast, traditional 2D visualization tools like ArcGIS may be less prone to causing visual fatigue.

Furthermore, developing and deploying CAVE visualization systems can be resource-intensive, requiring specialized hardware, software, and expertise. This may limit the technology's widespread adoption, particularly in resource-constrained environments or developing regions, where more accessible software like ArcGIS may be more feasible.

Future research efforts should address these limitations by exploring more efficient rendering techniques, improving data acquisition and modeling methods, and developing user-friendly interfaces and ergonomic designs. Additionally, collaborations between domain experts, computer scientists, and visualization specialists can help refine and enhance CAVE visualization technology for specific applications, potentially closing the gap with more widely adopted tools like ArcGIS. The integration of various plugins can enhance the capabilities and accessibility of CAVE visualization, leading to potential increased utilization in scientific and engineering fields. This can be achieved through advancements in both hardware and software.

This research highlights the potential of immersive VR environments like CAVE to unlock deeper insights from complex 3D datasets. Converting raw data into an interactive format allows engineers to analyze relationships within the model thoroughly. Specialized software tools enable flexible processing and optimization of a wide variety of initial data. The CAVE experience provides unique capabilities for collaborative, multi-sensory analysis. Engineers working on predictive models can directly manipulate the 3D data and leverage their specialized expertise through intuitive interaction. The approach could generalize to diverse scientific and engineering domains grappling with high-dimensional data.

Here is a detailed comparison of launching applications with a typical over-the-counter VR headset and an immersive CAVE system. The process of deploying simulation content between VR headsets and a CAVE system involves notable differences in setup complexity and physical execution. The VR headset experiences rely on a clean pipeline of importing scenes into a Unity or equivalent builder environment and constructing an application that targets the capabilities of a standalone headset tied to a controlling PC. Only a few variables need to align, like ensuring the engine software is compatible with a particular headset’s drivers and APIs, with most of the heavy lifting already managed through extensive third-party VR SDKs or built-in operating system integration. From there, it is only a matter of directly executing the output application binary on the host PC machine to launch into the headset’s display buffer, targeting resolution and interaction specifics. External tracking might augment immersion but is not necessary in all cases, with many devices using inside-out position and rotation data captured from onboard cameras. The end-user pipeline is simplified outside of constructing engaging VR content. Figure 5 shows the connectivity of ParaView to the VR headsets.

In contrast, CAVE requires extensive software application deployment and physical operation setup. Scenes and models must still be imported, but now they are targeted at a room-scale environment tracked from many emitter points, requiring rendering to multiple viewing planes arranged at various angles. Launching requires properly initializing the complex laser projectors and tracking server infrastructure before even attempting to call the simulation code. Rather than a single, contained headset display, the graphical output needs distribution across many machines, capturing head and controller input and translating that into shifts in perspective lines and depth handling within the surrounding 3D stereo projections. Once frame sources are aligned precisely, the operator can run CAVE management utilities to load in, configure, and execute the simulation build. The overall pipeline complexity jumps due to the reliance on the multiple moving parts of laser projectors, render nodes, tracking hardware, and specialized image distribution beyond just the singular host computer. When implemented fully, the shortcomings of physical components tend to fade into the background as users become entirely immersed in the life-size replicated environment. The setup trade-off brings scale, embodiment, and shared presence into a mix usually reserved for individual experiences. Figure 6 shows an internal view of the GW model in a room-sized environment.

This research demonstrated a workflow for visualizing a sizeable 3D GW model structure dataset in an immersive CAVE environment using ParaView, Blender, and Unity software. The findings highlight the potential of CAVE technology for enhancing the analysis and understanding of complex 3D models and data in fields like civil engineering. Converting the unstructured water data in VTU [5] format through X3D to a CAVE-compatible FBX file enabled an interactive virtual environment well-suited for visualization and simulation.

The study highlights the capabilities of ParaView, Blender, and Unity for refining and manipulating 3D assets for VR use. ParaView contributes a robust scientific visualization toolset for processing the raw model data, Blender provides an essential stage for refining the core 3D geometry, and Unity ties the assets together into a fully interactive experience optimized for the CAVE. The combined use of these specialized tools allows efficient, high-quality VR data visualization production.
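The Blender step of the pipeline can run unattended. As a hedged sketch, the helper below only builds the command line for a headless X3D-to-FBX conversion (it does not invoke Blender); it assumes a Blender version whose bundled add-ons still provide the `import_scene.x3d` and `export_scene.fbx` operators, and the executable path and file names are placeholders.

```python
def blender_x3d_to_fbx_cmd(blender_exe, x3d_in, fbx_out):
    """Build an argv list that runs Blender headless to convert X3D to FBX.

    --background and --python-expr are Blender's standard batch-mode flags.
    """
    expr = (
        "import bpy; "
        "bpy.ops.wm.read_factory_settings(use_empty=True); "  # start from an empty scene
        f"bpy.ops.import_scene.x3d(filepath={x3d_in!r}); "
        f"bpy.ops.export_scene.fbx(filepath={fbx_out!r})"
    )
    return [blender_exe, "--background", "--python-expr", expr]

# Example (hypothetical paths): pass the result to subprocess.run(...)
cmd = blender_x3d_to_fbx_cmd("blender", "gw_model.x3d", "gw_model.fbx")
```

Scripting this hop keeps the ParaView-to-Unity pipeline repeatable when the GW model is re-exported after each update.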

This virtual CAVE environment could enable researchers to thoroughly analyze GW models, assess solutions, and create simulations for civil engineering and geoscience applications. The immersive 3D view facilitates identifying issues and areas for improvement. It also aids in training by simulating real-world scenarios safely. This workflow presents a model for working with complex 3D data in VR across disciplines like engineering, science, and medicine.

A notable limitation is the technical challenge of processing large datasets for real-time CAVE environments. Specialized computing hardware, e.g., a faster multicore CPU and a larger memory capacity, is required for a smooth user experience. High upfront costs are also associated with purchasing and maintaining advanced VR systems. These constraints pose challenges for widespread adoption. Further work is needed to improve accessibility through more automated processing pipelines and cost-effective CAVE solutions. Integrating modern techniques such as augmented reality into standard workstations could help democratize advanced 3D data analysis.
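As an illustration of the preprocessing such hardware constraints motivate, the toy sketch below thins a dense vertex set by grid clustering, keeping one centroid per occupied cell. It is a stand-in for the decimation filters a real pipeline would apply (e.g., in ParaView) before the mesh reaches the CAVE render nodes; the cell size and sample points are invented.

```python
from collections import defaultdict

def grid_decimate(vertices, cell_size):
    """Reduce a point set by snapping vertices into cubic grid cells and
    keeping one representative (the centroid) per occupied cell.
    """
    buckets = defaultdict(list)
    for v in vertices:
        # Integer cell index along each axis (floor division handles negatives).
        key = tuple(int(c // cell_size) for c in v)
        buckets[key].append(v)
    # One centroid per cell: average each coordinate column.
    return [
        tuple(sum(col) / len(pts) for col in zip(*pts))
        for pts in buckets.values()
    ]
```

Coarsening the mesh this way trades geometric detail for frame rate, which is often the deciding factor for interactivity in a multi-wall stereo display.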

Despite these limitations, the CAVE visualization approach proved far more effective at conveying the intricate dynamics of groundwater (GW) systems than traditional two-dimensional visualizations. By harnessing immersive technologies, we can unlock novel insights and support more effective decision-making in GW management and other domains. The approach offers distinct advantages over widely used tools such as ArcGIS, particularly in enhancing spatial understanding and data integration. Combined with advanced visualization techniques, immersive technologies can transform our ability to navigate and interpret complex environmental systems, leading to more informed decision-making.

This research presents, for the first time, a technical workflow to visualize and analyze raw unstructured GW model data in an immersive CAVE environment. The improved understanding of complex 3D dynamics gained from CAVE interaction can drive more accurate water forecasting and geoscience models and inform further machine-learning models built on dynamic data. The approach also has promising training and educational applications. It demonstrates the potential of VR systems such as the CAVE to extract actionable insights from high-dimensional scientific data, and it provides an inside view of the New Orleans GW model structure in VTU format [5]. Ongoing improvements in software, hardware, and process automation can help make such platforms more accessible. Further research may investigate their utility for enhancing automated predictive models across engineering domains through intuitive, immersive 3D analysis. The emergence of CAVE visualization heralds a new era in understanding and communicating the intricate dynamics that govern groundwater systems. By leveraging immersive technologies, this study has brought previously unexplored insights to light and set the stage for more robust decision-making processes in groundwater management and allied fields, with clear advantages over traditional two-dimensional visualization tools in spatial understanding, pattern recognition, and data integration.

We acknowledge support from DOE/NNSA award No. DE-NA0004112, NSF award No. 2019561, NSF award No. 2216805, NSF award No. 2154344, NSF award No. OIA-2019511, NSF award No. OIA-1946231, NSF award No. 1915520, Louisiana BoR/NSF award No. LEQSF(2023)-SURE-298 and NSF(2024)-SURE-302, and LONI and its supercomputer allocation of loni_mat_bio20.

  1. Hin LTW, Subramaniam R, Anthony S (2005) Cave Automated Virtual Environment, in IGI Global eBooks, 327-49.
  2. Lyon A Jr, Kowalkowski J, Jones C (2017) Integrating Visualization Applications, such as ParaView, into HEP Software Frameworks for In-situ Event Displays, Journal of Physics: Conference Series, 898: 072041.
  3. Markova KT, Dovramadjiev T, Jecheva GV (2017) Computer parametric designing in Blender software for creating 3D paper models, Annual Journal of Technical University of Varna, 1: 77-84.
  4. Davis C, Collins J, Fraser J, Zhang H, Yao S, et al. (2022) CAVE-VR and Unity Game Engine for Visualizing City Scale 3D Meshes, IEEE 19th Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 733-34.
  5. Chen YH, Tsai FTC, Jafari NH (2021) Multi-objective Optimization of Relief Well Operations to Improve Levee Safety, Journal of Geotechnical and Geoenvironmental Engineering, 147: 1-17.
  6. Havenith HB, Cerfontaine P, Mreyen AS (2017) How virtual reality can help visualize and assess geohazards, International Journal of Digital Earth, 12: 173-89.
  7. Ling M, Chen J (2014) Environmental visualization: applications to site characterization, remedial programs, and litigation support, Environmental Earth Sciences, 72:3839-46.