
Discussing with your simulator
This vision is not just a distant dream; it is within reach, thanks to advances in AI. With the convergence of cutting-edge AI technologies like large language models (LLMs), retrieval-augmented generation (RAG), and AI agents, we are exploring how to create a seamless dialogue between humans and computational simulators. The goal is to enable users to interact with their simulations naturally, receive AI-driven suggestions, and accelerate decision-making—while keeping the scientific rigor intact.
Key components
Realizing this vision requires integrating multiple AI technologies into a smooth interaction between humans and simulators. The key components are:
- LLM-based chatbot interface: A natural language interface that allows users to interact with the system as if they were speaking to a colleague. This interface interprets questions, commands, and requests in plain language and converts them into actions within the simulator.
- RAG (retrieval-augmented generation): A powerful framework that augments LLMs by pulling in relevant knowledge from structured sources such as documentation, previous simulation runs, research papers, and code libraries. This allows the system to provide contextual and up-to-date answers based on existing knowledge.
- Simulator controller (AI agent layer): AI agents act as the executors of simulation tasks. They translate natural language commands into specific simulator actions, such as running simulations, adjusting model parameters, or even analyzing and suggesting corrections to previous outputs.
- Knowledge base: A structured database that stores simulation data, model parameters, and common workflows, all indexed for rapid retrieval. This knowledge base enables the system to learn from past simulations and improve future ones.
- Feedback loop: The AI system can suggest next steps, identify anomalies in the results, and even ask clarifying questions, fostering a collaborative environment where the simulation and user work together to refine the model.
- Differentiable simulators (optional): Simulators that support gradient-based optimization by enabling the backpropagation of gradients through their internal computations. This allows the system to efficiently compute sensitivities, calibrate models to observed data, and couple with other simulators in complex, multi-physics scenarios—mirroring techniques used in modern machine learning workflows.
- Visualization tools (optional): Integrated plotting and summarization tools let the system present results directly within the conversation, enabling real-time understanding of simulation outputs.
- Security and safety filters: Ensuring that all commands are valid and that unsafe actions are prevented is crucial for maintaining the integrity and reliability of the simulations.
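As a concrete (and heavily simplified) illustration of the RAG and knowledge-base components, the sketch below ranks stored documents by keyword overlap with a query. The `retrieve` function, document ids, and contents are all hypothetical; a real system would use embedding-based retrieval over indexed simulation data rather than word matching.

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query and return the top_k ids."""
    query_words = set(query.lower().split())
    scores = {
        doc_id: len(query_words & set(text.lower().split()))
        for doc_id, text in documents.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Keep only documents that share at least one word with the query.
    return [doc_id for doc_id in ranked if scores[doc_id] > 0][:top_k]

# Hypothetical knowledge base: short text snippets indexed by id.
knowledge_base = {
    "upwind-notes": "first order upwind discretization for convection dominated flow",
    "solver-guide": "newton solver tolerance and convergence settings",
    "spe1-setup": "spe1 benchmark multiphase reservoir model setup",
}

hits = retrieve("which discretization is used for convection dominated problems",
                knowledge_base)
```

Retrieved snippets like these would then be injected into the LLM's context so its answer is grounded in the simulator's own documentation rather than in its training data alone.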
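The agent layer and the safety filters naturally combine: a structured command derived from a user request is validated against known parameter bounds before any simulator action runs. Everything here, including the command schema, the `SAFE_BOUNDS` table, and the stubbed `execute` function, is an illustrative assumption rather than an actual simulator API.

```python
# Hypothetical safe ranges for simulator parameters.
SAFE_BOUNDS = {
    "injection_rate": (100.0, 500.0),  # barrels/day
    "porosity": (0.05, 0.35),          # fraction
}

def validate(command):
    """Return a list of safety violations; an empty list means the command is safe."""
    errors = []
    for name, value in command.get("parameters", {}).items():
        if name not in SAFE_BOUNDS:
            errors.append(f"unknown parameter: {name}")
        else:
            lo, hi = SAFE_BOUNDS[name]
            if not lo <= value <= hi:
                errors.append(f"{name}={value} outside safe range [{lo}, {hi}]")
    return errors

def execute(command):
    """Run the command only if the safety filter passes (simulator call is stubbed)."""
    errors = validate(command)
    if errors:
        return "rejected: " + "; ".join(errors)
    return f"running {command['action']} with {command['parameters']}"

ok = execute({"action": "simulate", "parameters": {"injection_rate": 250.0}})
bad = execute({"action": "simulate", "parameters": {"injection_rate": 5000.0}})
```

Keeping validation in deterministic code, outside the LLM, is the design point: the language model proposes actions, but only commands that pass explicit checks ever reach the simulator.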
Example use cases
Here are just a few ways in which our vision could transform the way simulations are set up, analyzed, and optimized:
- Model setup assistance:
User Request: "Based on the SPE1 benchmark, set up a multiphase flow simulation for sandstone reservoirs, varying injection rates between 100 and 500 barrels/day."
System Action: The AI agent configures the simulation, automatically suggesting default values for parameters and displaying the setup for user approval.
- Exploratory runs:
User Request: "Run a sensitivity study on porosity between 15% and 25%, and plot the effect on breakthrough time."
System Action: The AI agent initiates the study, running multiple simulations with varying porosity values and presenting the results as plots.
- Debugging and optimization help:
User Request: "Why is the Newton solver struggling after 3000 days?"
System Action: The system retrieves logs, analyzes the residual behavior, and suggests potential solutions. For example, it may trace convergence problems to strong nonlinearity and propose adjusting solver settings, tightening the iteration tolerance, modifying the initial guess, or revisiting boundary conditions.
- Learning and documentation retrieval:
User Request: "What discretization method does this simulator use for convection-dominated problems?"
System Action: The RAG system fetches relevant documentation or academic papers, providing a clear and concise summary of the discretization methods used.
- Autonomous scenario analysis:
User Request: "Find the parameter set that minimizes production costs while maintaining a recovery factor above 50%."
System Action: The AI agent autonomously designs an optimization workflow, adjusting parameters to meet the cost-recovery criteria and providing a list of suggested solutions.
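The exploratory-run use case above amounts to a simple parameter sweep that an agent could assemble and launch. In the sketch below, `breakthrough_time` is a made-up placeholder standing in for a real simulator call, and all numbers are purely illustrative.

```python
def breakthrough_time(porosity):
    """Placeholder model: breakthrough time grows with pore volume (not a real simulator)."""
    pore_volume = porosity * 1.0e6   # illustrative reservoir bulk volume, m^3
    injection_rate = 500.0           # illustrative injection rate, m^3/day
    return pore_volume / injection_rate  # days until injected fluid breaks through

def sensitivity_study(porosity_values):
    """Run the 'simulator' for each porosity and collect (porosity, time) pairs."""
    return [(phi, breakthrough_time(phi)) for phi in porosity_values]

results = sensitivity_study([0.15, 0.20, 0.25])
# Higher porosity means more pore volume to fill, hence later breakthrough.
```

In the envisioned system, the agent would generate such a sweep from the user's request, dispatch the runs to the actual simulator, and hand the collected results to the visualization tools for plotting.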
Challenges
As with any pioneering research, there are challenges to overcome:
- Precision and reliability: LLMs are not always reliable for complex numerical reasoning. Ensuring the simulator remains the authority on the results is key.
- Latency: Running simulations can take time, and managing user interaction during long wait times is a design challenge.
- Contextual state tracking: Keeping track of the context across complex workflows (e.g., multi-step simulations) remains a challenge for AI agents.
- Data privacy and IP: Protecting the integrity of proprietary simulation data and research outputs is a critical consideration.
Disclaimer: In the spirit of our research area, this text was generated using AI tools and subsequently curated and refined by human experts.