
By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA
AI is rapidly transforming engineering workflows. However, a fundamentally important issue rarely gets discussed: AI explanations are only as trustworthy as the simulations they rely on.
AI systems designed to support engineering decisions rely on mathematical models to estimate how physical objects or systems respond to various kinds of excitation. These models act as the AI’s evidence generators, producing estimates of the data on which decisions are based. Clearly, explainable AI (XAI) is unattainable if the models producing that evidence are themselves unexplainable. This reality imposes rigorous technical requirements on how we formulate and apply mathematical models.
Sources of Error
Mathematical models should be viewed as transformations of a set of data (the input) into the quantities of interest (QoI). The transformations comprise a set of operators that depend on established science, subjective choices, and sets of physical parameters determined by experimental means. To achieve information integrity, we must have the capability to control three sources of error:
- Model Form Errors: These occur when simplifying assumptions are introduced during the formulation of a mathematical model. Examples include assuming linear material behavior, adopting small-strain approximations, neglecting the difference between deformed and undeformed configurations, and defining hypotheses of failure initiation.
- Calibration Errors: These arise from measurement errors in physical parameters and often from inadequate recording, reporting, or archiving of experimental data.
- Discretization Errors: These represent the difference between the exact solution of the mathematical model and its numerical approximation, measured by the relative error in the QoI. They are governed by the discretization choices, such as the finite element mesh and the polynomial degrees assigned to the elements.
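In schematic notation (an illustrative summary of the preceding paragraph and list, not a formula taken from the references), the transformation can be written as

```latex
% Schematic only: F is the transformation (the model), D the input data,
% p_1,...,p_k the calibrated physical parameters, F_h the numerical approximation of F.
\mathrm{QoI} = \mathcal{F}(D;\, p_1, p_2, \dots, p_k),
\qquad
\mathrm{QoI}_h = \mathcal{F}_h(D;\, \hat{p}_1, \hat{p}_2, \dots, \hat{p}_k)
```

In this notation, model form errors reside in the choice of the operators that make up F, calibration errors in the measured parameters, and discretization errors in the difference between the QoI and its numerical approximation.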
Controlling Model Form Errors
It is useful to think of any mathematical model as a special case of a more comprehensive model. For example, the classical (Euler–Bernoulli) beam theory, typically taught in first- or second-year mechanical engineering courses, is a special case of the three-dimensional linear theory of elasticity. Simplifying assumptions about the mode of deformation allow us to determine the deformed shape by solving a fourth-order ordinary differential equation. These assumptions also impose limitations on the allowable slenderness ratios, as well as on the types of loading and constraint conditions that can be modeled. These limitations can be eliminated by adopting more advanced beam models or—thanks to our twenty-first-century computational power—by using a fully three-dimensional elasticity model. Removing the restrictions on the assumed mode of deformation is often necessary when high-frequency vibration or the effects of material nonlinearities have to be considered.
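For reference, the governing equation of the classical beam model for a prismatic, linearly elastic beam under transverse load is the fourth-order equation

```latex
E I \frac{d^{4} w}{d x^{4}} = q(x)
```

where w is the transverse deflection, E the modulus of elasticity, I the moment of inertia of the cross section, and q the distributed transverse load.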
The problem of model choice consists of selecting a model that accounts for all factors that significantly influence the quantities of interest, while keeping the model as simple as possible. This is feasible in engineering practice only when the software implementation supports a seamless transition between models — that is, when the definition of the finite elements is independent of the definition of the mathematical model.
Controlling Discretization Errors
Discretization errors are controlled by the choice of the finite element mesh and the polynomial degrees assigned to the elements. These errors are also influenced by the functions that map reference elements to the elements of the mesh and by the errors introduced through numerical integration when computing the stiffness matrices and load vectors. Here, we assume that the errors due to mapping and numerical integration are negligibly small compared with the approximation errors, which are governed by the mesh density and the polynomial degrees. For a justification of this assumption we refer to [1].
Ideally, the designer of a finite element mesh would consider not only how to partition the domain into elements but also how to accelerate convergence. This requires estimating the regularity of the exact solution from information contained in the input data [1]. It should be possible to train AI agents to perform this task.
A robust method for estimating and controlling discretization errors in terms of the quantities of interest is to obtain a convergent sequence of finite element solutions. This can be achieved either by successive mesh refinements while keeping the polynomial degrees assigned to the elements fixed, or by keeping the mesh fixed and increasing the polynomial degrees. The rate of convergence of the latter approach is at least twice that of the former.
By computing the quantities of interest for each finite element solution, we obtain a convergent sequence of values. The estimated limit of this sequence provides an estimate of the exact value [1].
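As a minimal illustration of estimating the limit of such a sequence, the Python sketch below applies Aitken's delta-squared extrapolation to three successive QoI values. The numbers are hypothetical, and Aitken extrapolation is used here only as a generic device for convergent sequences; the extrapolation procedures discussed in [1] are based on the estimated rate of convergence with respect to the number of degrees of freedom.

```python
def estimate_qoi_limit(q):
    """Aitken delta-squared estimate of the limit of a convergent QoI sequence.

    q: at least three QoI values computed from a hierarchic sequence of
       finite element solutions (increasing polynomial degree, or successive
       mesh refinements). Returns (estimated limit, estimated relative error
       of the most refined solution).
    """
    q1, q2, q3 = q[-3:]                       # three most refined solutions
    denom = (q3 - q2) - (q2 - q1)
    if denom == 0.0:                          # sequence already (numerically) converged
        return q3, 0.0
    q_limit = q3 - (q3 - q2) ** 2 / denom     # extrapolated limit
    rel_err = abs(q3 - q_limit) / abs(q_limit) if q_limit != 0.0 else float("inf")
    return q_limit, rel_err


# Hypothetical maximum stress values (MPa) from solutions at p = 3, 4, 5:
limit, err = estimate_qoi_limit([412.7, 418.9, 420.4])
print(f"estimated limit: {limit:.1f}, estimated relative error: {err:.2%}")
```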
Controlling Calibration Errors
Ideally, calibration errors should be limited to those associated with the measurement of physical parameters in a well-run professional laboratory. In reality, however, there are problems associated with the recording, reporting, and archiving of data. Much valuable information is lost through improper handling of experimental data. One of the important objectives of simulation governance is to ensure that experimental data are properly documented and archived [2].
The Legacy Software Bottleneck
Legacy finite element software products were designed in the 1960s and 1970s, before the theoretical foundations of numerical simulation were fully established and under severe computational constraints that no longer exist. Consequently, such software does not possess the technical capabilities required to support explainable AI. Specifically:
- Architectural Obsolescence: Legacy simulation codes were not designed to expose structure, a posteriori error estimates, and validity bounds in a machine-interpretable form, creating a fundamental barrier to integration with explainable AI.
- Entangled Uncertainties: Model-form assumptions and numerical discretization errors are inseparably intertwined, preventing reliable attribution of why a prediction was obtained or why it may be wrong.
- No Domain Awareness: AI systems have no principled mechanism to determine when they are operating outside the validated conditions. Simulation evidence is reliable only when the model is operated within its domain of calibration.
- Upstream Explainability: No amount of post-hoc explanation at the AI layer can compensate for simulations that cannot explain or bound their own predictions.
These deficiencies create a critical barrier to the goal of developing verifiable, self-reflective, and trustworthy AI systems.
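As a rough sketch (hypothetical names and values, not the interface of any existing product), the following Python fragment illustrates the kind of machine-interpretable record a simulation would have to expose for an AI layer to reason about it: the QoI with its estimated relative error, the model form assumptions, and the domain of calibration, together with a simple check of whether a query lies inside that domain.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class SimulationEvidence:
    """Hypothetical record of simulation evidence intended for an AI layer."""
    qoi_name: str
    qoi_value: float
    estimated_relative_error: float          # from a convergent sequence of solutions
    model_form: str                          # simplifying assumptions adopted
    calibration_domain: Dict[str, Tuple[float, float]] = field(default_factory=dict)

    def in_calibration_domain(self, query: Dict[str, float]) -> bool:
        """True only if every calibrated parameter is queried within its validated range."""
        return all(lo <= query.get(name, float("nan")) <= hi
                   for name, (lo, hi) in self.calibration_domain.items())

# Example with illustrative values only:
evidence = SimulationEvidence(
    qoi_name="max principal stress [MPa]",
    qoi_value=420.9,
    estimated_relative_error=0.002,
    model_form="3D linear elasticity, small strain",
    calibration_domain={"temperature [C]": (20.0, 120.0), "load ratio": (0.0, 0.6)},
)
print(evidence.in_calibration_domain({"temperature [C]": 85.0, "load ratio": 0.4}))   # True
print(evidence.in_calibration_domain({"temperature [C]": 200.0, "load ratio": 0.4}))  # False
```

With such a record, a query outside the calibrated ranges can be flagged rather than silently answered.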

StressCheck: The Perfect Fit for XAI
At present, StressCheck is the only commercial finite element software designed to control both model form errors and numerical approximation errors. Its architecture was developed after finite element analysis (FEA) became established as a branch of applied mathematics. By supporting hierarchic sequences of finite element spaces and models, it enables the reliable estimation of relative error in the quantities of interest. This unique capability removes the upstream explainability bottleneck, allowing numerical simulations to provide the transparent, machine-interpretable evidence that explainable AI demands.
Takeaway
Information integrity across AI-enabled pipelines is achieved only if the meaning, validity, uncertainty, and provenance of simulation results are preserved in AI integration [3]. In other words, explainable AI requires simulations that can explain themselves. At present, StressCheck is the only commercial finite element software capable of supporting XAI integration.
References
[1] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation. John Wiley & Sons, Inc., Hoboken, NJ, 2021.
[2] Szabó, B. and Actis, R. Simulation governance: Technical requirements for mechanical design. Computer Methods in Applied Mechanics and Engineering, 249, pp. 158–168, 2012.
[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers & Mathematics with Applications, 162, pp. 206–214, 2024.
Related Blog Posts:
- A Memo from the 5th Century BC
- Obstacles to Progress
- Why Finite Element Modeling is Not Numerical Simulation?
- XAI Will Force Clear Thinking About the Nature of Mathematical Models
- The Story of the P-version in a Nutshell
- Why Worry About Singularities?
- Questions About Singularities
- A Low-Hanging Fruit: Smart Engineering Simulation Applications
- The Demarcation Problem in the Engineering Sciences
- Model Development in the Engineering Sciences
- Certification by Analysis (CbA) – Are We There Yet?
- Not All Models Are Wrong
- Digital Twins
- Digital Transformation
- Simulation Governance
- Variational Crimes
- The Kuhn Cycle in the Engineering Sciences
- Finite Element Libraries: Mixing the “What” with the “How”
- A Critique of the World Wide Failure Exercise
- Meshless Methods
- Isogeometric Analysis (IGA)
- Chaos in the Brickyard Revisited
- Why Is Solution Verification Necessary?
- Variational Crimes and Refloating the Costa Concordia
- Lessons From a Failed Model Development Project
- Where Do You Get the Courage to Sign the Blueprint?
- Great Expectations: Agentic AI in Mechanical Engineering
- The Differences Between Calibration and Tuning
- Honored in the Breach
- Remembering Ivo Babuška
- Turtle Shells and Legacy Finite Element Codes: Evolutionary Constraints in the Age of Explainable AI