MIT researchers have developed a new optimization technique that uses tabular foundation models to solve high-dimensional engineering problems 10-100 times faster than traditional methods.
Engineers often face a daunting challenge: optimizing systems with hundreds of variables where each test is expensive and time-consuming. Whether designing safer vehicles or optimizing power grids, the computational burden can be overwhelming. MIT researchers have developed a breakthrough approach that dramatically accelerates this process by leveraging foundation models trained on tabular data.
The Challenge of High-Dimensional Optimization
Many engineering problems boil down to the same fundamental headache—too many variables to tune and too few opportunities to test them. Consider automotive safety design: engineers must integrate thousands of parts, and countless design choices can affect how a vehicle performs in a collision. Traditional optimization tools begin to struggle when searching for the best combination among hundreds of variables.
Classic Bayesian optimization, while effective for many problems, faces two major limitations. First, it requires retraining a surrogate model after each iteration, which becomes computationally intractable as the solution space grows. Second, scientists must build an entirely new model from scratch for each different scenario they want to tackle.
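To make that retraining cost concrete, here is a minimal sketch of a classic Bayesian optimization loop with a Gaussian-process surrogate. The objective, kernel, and hyperparameters are illustrative stand-ins, not the researchers' setup; the point is that `gp_posterior` re-inverts the kernel matrix over all observations on every iteration, the step whose cost grows rapidly with the number of evaluations and dimensions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_query, noise=1e-4):
    # Refit: invert the kernel matrix over ALL observations so far.
    # This O(n^3) step repeats on every iteration of classic BO.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_inv = np.linalg.inv(K)
    K_s = rbf_kernel(X_query, X_train)
    mu = K_s @ K_inv @ y_train
    var = 1.0 - np.einsum("ij,jk,ik->i", K_s, K_inv, K_s)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def objective(x):
    # Toy stand-in for an expensive simulation (e.g. a crash test).
    return -np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 2))           # initial design points
y = np.array([objective(x) for x in X])

for _ in range(20):
    cand = rng.uniform(0, 1, size=(256, 2))  # random candidate pool
    mu, sigma = gp_posterior(X, y, cand)     # surrogate refit + predict
    ucb = mu + 2.0 * sigma                   # upper-confidence-bound acquisition
    x_next = cand[np.argmax(ucb)]            # most promising candidate
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print(float(y.max()))  # best objective value found
```

Even in this toy two-dimensional case the surrogate is rebuilt from scratch twenty times; in hundreds of dimensions with many more iterations, that refit dominates the runtime.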
A Foundation Model Approach
The MIT team's solution centers on using a generative AI system known as a tabular foundation model as the surrogate model inside a Bayesian optimization algorithm. As Rosen Yu, a graduate student in computational science and engineering and lead author of the paper, explains: "A tabular foundation model is like a ChatGPT for spreadsheets. The input and output of these models are tabular data, which in the engineering domain is much more common to see and use than language."
These foundation models are pre-trained on enormous amounts of tabular data, making them well-equipped to tackle a range of prediction problems. Crucially, they can be deployed as-is without the need for constant retraining, dramatically increasing optimization efficiency.
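Structurally, swapping in a pretrained tabular foundation model turns the per-iteration refit into a single forward pass. The sketch below is an assumption-laden illustration, not the paper's implementation: the hypothetical `InContextSurrogate` class stubs the pretrained model with a simple nearest-neighbor rule so the loop is runnable, whereas a real system would load a TabPFN-style network once and call it for inference only.

```python
import numpy as np

class InContextSurrogate:
    """Stand-in for a pretrained tabular foundation model.

    A real model would run one forward pass that conditions on the
    observed (X, y) table and predicts at the query rows: no gradient
    updates, no refit. Here a k-nearest-neighbor rule stubs that
    forward pass so the control flow is runnable.
    """

    def predict(self, X_obs, y_obs, X_query, k=3):
        d = np.linalg.norm(X_query[:, None, :] - X_obs[None, :, :], axis=-1)
        idx = np.argsort(d, axis=1)[:, :k]
        mu = y_obs[idx].mean(axis=1)            # in-context mean estimate
        sigma = y_obs[idx].std(axis=1) + 1e-3   # crude uncertainty proxy
        return mu, sigma

def objective(x):
    # Toy stand-in for an expensive simulation.
    return -np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(1)
surrogate = InContextSurrogate()     # loaded once, never retrained
X = rng.uniform(0, 1, size=(5, 2))
y = np.array([objective(x) for x in X])

for _ in range(20):
    cand = rng.uniform(0, 1, size=(256, 2))
    mu, sigma = surrogate.predict(X, y, cand)  # pure inference each round
    x_next = cand[np.argmax(mu + 2.0 * sigma)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))
```

The loop is identical to classic Bayesian optimization except that the expensive model-fitting step has disappeared, which is where the efficiency gain comes from.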
Smart Feature Selection
One of the most innovative aspects of the approach is its ability to automatically identify which variables matter most for improving performance. "A car might have 300 design criteria, but not all of them are the main driver of the best design if you are trying to increase some safety parameters. Our algorithm can smartly select the most critical features to focus on," Yu says.
The system estimates which variables (or combinations of variables) most influence the outcome and concentrates the search on those high-impact variables instead of exploring everything equally. For instance, if enlarging the front crumple zone consistently improves the car's crash rating across sampled designs, the algorithm flags crumple-zone size as a critical feature and devotes more of its search budget to it.
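One illustrative sketch of this idea (not the paper's algorithm): score each variable's apparent influence on the observed outcomes, keep the top-scoring ones, and restrict the search to that subspace while pinning the remaining variables at the best design found so far. The toy objective, the correlation-based score, and the subspace size are all assumptions made for the example.

```python
import numpy as np

def objective(x):
    # Toy 10-variable problem where only dimensions 0 and 3 matter.
    return -((x[0] - 0.7) ** 2 + (x[3] - 0.2) ** 2)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(64, 10))        # sampled designs
y = np.array([objective(x) for x in X])     # their evaluated outcomes

# Crude sensitivity score: absolute correlation between each variable
# and the outcome (a stand-in for a real importance estimate).
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
top = np.argsort(scores)[-2:]               # keep the 2 highest-impact variables

# Search only the selected subspace; pin the rest to the best design so far.
best = X[np.argmax(y)].copy()
cand = np.tile(best, (4096, 1))
cand[:, top] = rng.uniform(0, 1, size=(4096, len(top)))
vals = np.array([objective(x) for x in cand])
print(sorted(top.tolist()), float(vals.max()))
```

Because the search budget is spent on two variables instead of ten, the same number of evaluations covers the influential subspace far more densely.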
Testing and Results
The researchers tested their method against five state-of-the-art optimization algorithms on 60 benchmark problems, including realistic scenarios like power grid design and car crash testing. Their method consistently found the best solution between 10 and 100 times faster than the other algorithms.
"When an optimization problem gets more and more dimensions, our algorithm really shines," Yu notes. The technique delivers even greater speedups for more complicated problems, making it particularly useful for demanding applications like materials development or drug discovery.
However, the method did not outperform the baselines on every problem. On robotic path planning, for example, it fell short, likely because similar scenarios were poorly represented in the foundation model's training data, which the researchers see as an area for future improvement.
Broader Implications
The research represents a significant shift in how foundation models can be applied beyond traditional domains like language and perception. "At a higher level, this work points to a broader shift: using foundation models not just for perception or language, but as algorithmic engines inside scientific and engineering tools, allowing classical methods like Bayesian optimization to scale to regimes that were previously impractical," says Faez Ahmed, associate professor of mechanical engineering and a core member of the MIT Center for Computational Science and Engineering.
Wei Chen, the Wilson-Cook Professor in Engineering Design at Northwestern University, who was not involved in the research, praised the work: "The approach presented in this work, using a pretrained foundation model together with high-dimensional Bayesian optimization, is a creative and promising way to reduce the heavy data requirements of simulation-based design. Overall, this work is a practical and powerful step toward making advanced design optimization more accessible and easier to apply in real-world settings."
The researchers plan to study methods that could boost the performance of tabular foundation models and apply their technique to problems with thousands or even millions of dimensions, such as the design of a naval ship. Their work, titled "GIT-BO: High-Dimensional Bayesian Optimization using Tabular Foundation Models," will be presented at the International Conference on Learning Representations.
The research team included Rosen Yu, Cyril Picard (a former MIT postdoc and research scientist), and Faez Ahmed. The work was supported by the MIT Center for Computational Science and Engineering and the Department of Mechanical Engineering.
For more information about this research and related work in computational science and engineering at MIT, visit the MIT Center for Computational Science and Engineering and the Department of Mechanical Engineering websites.