The capability and reach of artificial intelligence (AI) are continually expanding to meet the needs of increasingly complex systems across many industries, including the space industry. As a result, engineers face new challenges, such as verification and validation, as they are tasked with integrating AI into systems. Part of the complexity stems from the recognition that AI models are only as effective as the data they’re trained with – if that data is insufficient, inaccurate, or biased, the model’s outputs will be too.
At a high level, there are three crucial ways AI and simulation are intersecting in space. The first is addressing the challenge of insufficient data, as simulation models can synthesize data that might be difficult or expensive to collect. The second is using AI models as approximations for complex high-fidelity simulations that are computationally expensive, also referred to as reduced-order modeling. The third is the use of AI models in embedded systems for applications such as supervisory logic, signal processing, and embedded vision, where simulation has become a key part of the design process.
As engineers find new ways to develop more effective AI models for space applications, this article explores how simulation and AI combine to solve challenges of time, model reliability, and data quality.
Challenge 1: Data for training and validating AI models
Collecting real-world data and turning it into a good, clean, catalogued dataset is difficult and time-consuming – particularly for space missions. Engineers also must be mindful that while most AI models are static (they run using fixed parameter values), they are constantly exposed to new data, and that data might not be captured in the training set.
Projects are more likely to fail without robust data to help train a model, making data preparation with known data and best-guess approximations a crucial step in the AI workflow. Bad data can leave an engineer spending hours trying to determine why the model is not working, with no promise of insightful results.
Simulation can help engineers overcome these challenges. In recent years, data-centric AI has refocused the AI community on the importance of training data – along with verification and validation for reliability. Rather than spending all of a project’s time tweaking the AI model’s architecture and parameters, time spent improving the training data and testing thoroughly can often yield larger improvements in accuracy. The use of simulation to augment existing training data has multiple benefits:
- Computational simulation is in general much less costly than physical experiments and helps engineers avoid expensive hardware mistakes
- The engineer has full control over the environment and can simulate scenarios that are difficult, too dangerous, or impossible to create in the real world
- The simulation gives access to internal states that might not be measured in an experimental setup, which can be very useful when debugging why an AI model doesn’t perform well in certain situations, including fault detection
With a model’s performance so dependent on the quality of its training data, engineers can improve outcomes with an iterative process: simulate data, update the AI model, observe which conditions it cannot predict well, and collect more simulated data for those conditions. Of course, engineers need to anchor their simulated environment and model to real-world testing to the extent possible to validate the quality of the simulated data.
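That iterative loop can be sketched in a few lines. The example below is a hypothetical toy, not a space workflow: a simple function (`simulate`) stands in for a physics simulation, and a polynomial fit stands in for a trained AI model. Each iteration finds the region where the model predicts worst and generates more simulated data there.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x):
    """Stand-in for a physics simulation (hypothetical toy function)."""
    return np.sin(3 * x) + 0.1 * x**2

# Start with a sparse set of simulated training points.
x_train = rng.uniform(0, 3, 10)
y_train = simulate(x_train)

# Dense grid used to check where the model struggles.
x_test = np.linspace(0, 3, 200)
y_test = simulate(x_test)

for _ in range(5):
    # "AI model": a polynomial fit standing in for a trained network.
    coeffs = np.polyfit(x_train, y_train, 7)
    err = np.abs(np.polyval(coeffs, x_test) - y_test)
    # Find the region the model predicts worst and simulate more data there.
    worst = x_test[np.argmax(err)]
    x_new = rng.uniform(max(0, worst - 0.2), min(3, worst + 0.2), 5)
    x_train = np.concatenate([x_train, x_new])
    y_train = np.concatenate([y_train, simulate(x_new)])

final_err = np.max(np.abs(np.polyval(np.polyfit(x_train, y_train, 7), x_test) - y_test))
print(f"max error after targeted augmentation: {final_err:.3f}")
```

In practice the simulation would be a Simulink or Simscape model and the AI model a network trained in MATLAB, but the loop structure – fit, find weak spots, augment, refit – is the same.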
Using industry tools such as Simulink and Simscape, engineers can generate simulated data that mirrors real-world scenarios. The combination of Simulink and MATLAB enables engineers to simulate their data in the same environment they build their AI models, meaning they can automate more of the process and not have to worry about switching toolchains.
Challenge 2: Approximating complex systems with AI
When designing algorithms that interact with physical systems, such as an algorithm to control a hydraulic valve, a simulation-based model of the system is key to enabling rapid design iteration for your algorithms. In the controls field, this is called the plant model; in the wireless field, the channel model; in reinforcement learning, the environment model. Whatever you call it, the idea is the same: create a simulation-based model accurate enough to recreate the physical system and environment your algorithms interact with.
The problem with this approach is that to achieve the necessary accuracy engineers have historically built high-fidelity models from first principles, which for complex systems can take a long time to build and simulate. Long-running simulations mean less design iteration will be possible, so there may not be enough time to evaluate potentially better design alternatives.
With AI, engineers can take the high-fidelity model of the physical system they’ve built and approximate it with an AI model (a reduced-order model). In other situations, they might just train the AI model from experimental data, completely bypassing the creation of a physics-based model. The benefit is that the reduced-order model is much less computationally expensive than the first-principles model, so the engineer can explore more of the design space. Physics-based models can always be used later in the process to validate the design determined using the AI model.
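A minimal sketch of the idea, under hypothetical assumptions: `high_fidelity` stands in for an expensive first-principles simulation, and a polynomial surrogate (standing in for a trained AI model) is fit to a handful of its samples, then used to sweep the design space cheaply before the candidate design is validated against the expensive model.

```python
import numpy as np

def high_fidelity(x):
    """Stand-in for an expensive first-principles simulation
    (hypothetical: some cost metric vs. a design parameter)."""
    return (x - 1.7)**2 + 0.3 * np.sin(5 * x)

# Sample the expensive model sparsely...
x_samples = np.linspace(0, 3, 15)
y_samples = high_fidelity(x_samples)

# ...and fit a cheap reduced-order surrogate (a polynomial here,
# standing in for a trained neural network).
surrogate = np.poly1d(np.polyfit(x_samples, y_samples, 6))

# Explore the design space densely with the cheap surrogate:
# 10,000 evaluations would be costly with the full model.
x_dense = np.linspace(0, 3, 10_000)
best = x_dense[np.argmin(surrogate(x_dense))]

# Validate the candidate design against the high-fidelity model.
print(f"surrogate optimum: x = {best:.3f}, "
      f"high-fidelity value: {high_fidelity(best):.3f}")
```

The final call back to `high_fidelity` mirrors the workflow described above: the physics-based model is kept in the loop to validate the design the reduced-order model found.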
Recent advances in AI also combine AI training techniques with models that have physics-based principles embedded within them. Such models can be useful when there are certain aspects of the physical system the engineer wishes to retain while approximating the rest of the system with a more data-centric approach.
Challenge 3: AI for algorithm development
Engineers in applications such as control systems have come to rely more on simulation when designing their algorithms. In some cases, these engineers are developing virtual sensors: observers that estimate a value that isn’t directly measured, using signals from the available sensors. A variety of approaches are used, including linear models and Kalman filters – and AI shows promise for these types of computations.
But the ability of the traditional methods to capture the nonlinear behavior present in many real-world systems is limited, so engineers are researching AI-based approaches that have the flexibility to model the complexities. They may consider using data (either measured or simulated) to train an AI model that can predict the unobserved state from the observed states, and then integrate that AI model with the system.
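A virtual sensor of this kind can be sketched as follows. Everything here is a hypothetical toy: the training data is synthetic, the "unmeasured" state is a made-up nonlinear function of two measured signals, and a least-squares fit over nonlinear features stands in for a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated training data (hypothetical): two measured signals and
# one state for which no physical sensor exists.
n = 500
measured = rng.uniform(-1, 1, (n, 2))
# The "unmeasured" state depends nonlinearly on the measurements.
unmeasured = measured[:, 0] * measured[:, 1] + 0.5 * measured[:, 0]**2

def features(m):
    """Nonlinear features that let a simple least-squares model capture
    the nonlinearity (standing in for a trained network)."""
    return np.column_stack([m[:, 0], m[:, 1],
                            m[:, 0] * m[:, 1],
                            m[:, 0]**2, m[:, 1]**2])

# Train the virtual sensor on the simulated data.
w, *_ = np.linalg.lstsq(features(measured), unmeasured, rcond=None)

# At run time, the virtual sensor estimates the state from live measurements.
live = np.array([[0.3, -0.4]])
estimate = features(live) @ w
print(f"virtual-sensor estimate: {estimate[0]:.4f}")  # true value is -0.0750
```

The same pattern – train offline on measured or simulated data, then evaluate online against live sensor signals – carries over when the linear fit is replaced by a machine-learning model.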
In this case, the AI model is included as part of the decision-making or controls algorithm that ends up on the physical hardware, which has performance and memory limitations and typically must be programmed in a lower-level language such as C/C++. These requirements can impose restrictions – often more constraining for space applications – on the types of machine-learning models appropriate for such applications, so engineers may need to try multiple models and compare trade-offs in accuracy and on-device performance. Verification and validation requirements must also be considered, which can make non-mission-critical embedded applications more suitable for early AI deployments.
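That accuracy-versus-footprint trade study can be illustrated with a toy comparison (hypothetical: a `tanh`-shaped response stands in for the behavior being approximated, and polynomial degree stands in for model size, since coefficient count roughly tracks memory and cycle cost on a target processor):

```python
import numpy as np

def sensor_response(x):
    """Hypothetical nonlinear response to approximate on the flight computer."""
    return np.tanh(2 * x) + 0.05 * x

x = np.linspace(-2, 2, 200)
y = sensor_response(x)

# Candidate models of increasing size: more coefficients cost more
# memory and cycles on the target but reduce approximation error.
errors = []
for degree in (1, 3, 5, 9):
    coeffs = np.polyfit(x, y, degree)
    errors.append(np.max(np.abs(np.polyval(coeffs, x) - y)))
    print(f"{degree + 1:2d} coefficients -> max error {errors[-1]:.4f}")
```

An engineer would run the same comparison over real model families (trees, shallow networks, and so on), then pick the smallest model that still meets the accuracy requirement.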
At the forefront of research in this area, reinforcement learning takes this approach a step further. Rather than learning just the estimator that feeds into a controller or supervisory logic algorithm, reinforcement learning learns the entire control strategy. This has been shown to be a powerful technique in challenging applications such as robotics, autonomous systems, and lunar or planetary landings, but it requires an accurate model of the environment, which may not be readily available, as well as the computational power to run a large number of simulations.
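The "learn the entire control strategy from a simulated environment" idea can be shown at its smallest scale with tabular Q-learning. The environment below is a deliberately simplistic, hypothetical stand-in: a lander on a 1-D altitude grid that must reach the pad, where real applications would use a high-fidelity simulation and a far more capable learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy environment (hypothetical): a lander on a 1-D altitude grid must
# reach the pad at cell 0. Actions: 0 = descend one cell, 1 = hold.
N_STATES, GOAL = 6, 0
Q = np.zeros((N_STATES, 2))

def step(s, a):
    s2 = max(s - 1, 0) if a == 0 else s
    reward = 1.0 if s2 == GOAL else -0.1  # small penalty for loitering
    return s2, reward, s2 == GOAL

alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):                      # episodes, each a full simulation run
    s = N_STATES - 1
    for _ in range(50):
        # Epsilon-greedy action selection, then a standard Q-learning update.
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
        if done:
            break

# The learned policy: best action per state (should descend toward the pad).
policy = np.argmax(Q, axis=1)
print(policy)
```

Even in this trivial case, learning the policy took 500 simulated episodes, which hints at why an accurate and fast environment model, plus substantial compute, is a prerequisite for reinforcement learning on realistic problems.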
In addition to virtual sensors and reinforcement learning, AI algorithms are increasingly used in embedded vision, signal processing, and wireless applications. For example, in Rendezvous, Proximity Operations, and Docking (RPOD), engineers can build an accurate simulated target vehicle and use it to generate synthetic imagery for training computer vision and perception algorithms.
The future of AI for simulation in space
Overall, as models grow in size and complexity to serve increasingly complex applications, AI and simulation will become even more essential tools in the engineer’s toolbox. Industry tools like MathWorks’ Simulink and MATLAB have empowered engineers to optimize their workflows and cut their development time by incorporating techniques such as synthetic data generation, reduced-order modeling, and embedded AI algorithms for controls, dynamics, signal processing, and embedded vision.
With the ability to develop, test, and validate models in an accurate and affordable way, before hardware is introduced, these methodologies will only continue to grow in use.
About the author: Ossi Saarela is space segment manager at MathWorks