
- Design variables: A design variable is a specification that is controllable by the designer (e.g., thickness, material, etc.) and is often bounded by maximum and minimum values. Sometimes these bounds are treated as constraints.
- Constraints: A constraint is a condition that must be satisfied for the design to be feasible. Constraints can reflect physical laws, resource limitations, user requirements, or bounds on the validity of the analysis models. They can be handled explicitly by the solution algorithm or incorporated into the objective using Lagrange multipliers.
- Objectives: An objective is a numerical value or function that is to be maximized or minimized. For example, a designer may wish to maximize profit or minimize weight. Many solution methods work only with a single objective. When using these methods, the designer typically assigns weights to the various objectives and sums them to form a single objective. Other methods allow multi-objective optimization, such as the calculation of a Pareto frontier.
- Pareto Frontier: It is relatively simple to determine an optimal solution for single-objective methods (the solution with the lowest objective function). With multiple objectives, however, we must evaluate solutions on a "Pareto frontier." A solution lies on the Pareto frontier when no objective can be improved without at least one other objective worsening. Once a set of solutions has converged to the Pareto frontier, further testing is required to determine which candidate force field is optimal for the problems of interest. Be aware that searches with a limited number of parameters might "cram" a lot of important physics into a few parameters.
- Models: The designer must also choose models to relate the constraints and the objectives to the design variables. They may include finite element analysis, reduced order metamodels, etc.
- Reliability: The probability that a component will perform its required function under stated conditions for a specified period of time.
- Metamodeling: A metamodel (or surrogate model) provides a quick way to approximate a function response when an analytical solution is not available, or is computationally expensive.
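The Pareto-frontier concept above can be made concrete with a short sketch. The following function (illustrative, not from any particular library) filters a list of candidate designs, each scored on two objectives to be minimized, down to its nondominated set:

```python
def pareto_front(points):
    """Return the nondominated subset of a list of objective tuples
    (all objectives to be minimized)."""
    front = []
    for p in points:
        # p is dominated if some other point q is at least as good in
        # every objective and strictly better in at least one
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# e.g. (weight, cost) pairs for four candidate designs
candidates = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(candidates))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

Any point on the returned frontier can only improve one objective at the expense of another; choosing among them is the designer's trade-off decision.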

These methods are referred to as “zeroth-order methods” because they require only evaluation of the function, f(**X**), in each iterative step. Some examples of zeroth-order methods are the Bracketing Method and the Golden Section Search Method. Some population-based methods can also be categorized as zeroth-order methods^{[1]}.

The Bracketing Method is a zeroth-order method that uses progressively smaller intervals to converge to an optimal solution. The interval is set up such that the x value corresponding to the optimal value of f lies within the interval. The interval is then divided into any number of sub-intervals of any given length. The value of f is calculated at each dividing point, and the best sub-interval is chosen as the next interval. This process iterates until the convergence criterion is met^{[1]}.
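The interval-shrinking idea above can be sketched with the related Golden Section Search, which shrinks the bracket by the golden ratio each step and needs only function evaluations (the test function here is illustrative):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by repeatedly shrinking
    the bracketing interval; zeroth-order: only f is evaluated."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            # minimum lies in [a, d]; reuse c as the new d
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # minimum lies in [c, b]; reuse d as the new c
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

x_star = golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
print(round(x_star, 4))  # ~ 2.0
```

Each iteration discards the sub-interval that cannot contain the minimum, so the bracket width shrinks geometrically.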

In addition to evaluation of f(X), first-order methods require the calculation of the gradient vector ∇f(X) in each iterative step. Some examples of first-order methods are the Steepest Descent or Cauchy Method and the Conjugate Gradient Method.

The Steepest Descent method uses a search direction along the negative of the gradient. The negative of the gradient gives the direction of maximum decrease, hence steepest descent. The step length along the search direction can be determined through zeroth-order methods or from direct calculation. The direct calculation is done by setting the derivative with respect to the step length equal to zero and solving for it. This method is guaranteed to converge to a local minimum, but convergence may be slow because previous iterations are not considered in determining the search direction of subsequent iterations. The rate of convergence can be estimated using the condition number of the Hessian matrix. If the condition number of the Hessian is large, convergence will be slow^{[1]}.
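For a quadratic function the exact step length can be written in closed form, which makes the method easy to sketch. The following example (a generic illustration, not tied to any particular software) minimizes f(x) = ½ xᵀAx − bᵀx, whose gradient is Ax − b:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-8, max_iter=500):
    """Minimize f(x) = 0.5 x^T A x - b^T x (A symmetric positive
    definite) by stepping along the negative gradient with the exact
    step length alpha = (g.g)/(g.A.g), found by setting df/dalpha = 0."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = A @ x - b                        # gradient of the quadratic
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (g @ (A @ g))      # exact line search
        x = x - alpha * g
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))
print(x)  # solves A x = b
```

The condition number of A here is modest, so convergence is quick; an ill-conditioned A would produce the slow zig-zagging behavior described above.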

The Conjugate Gradient Method is similar to the Steepest Descent Method except that it takes previous iterations into consideration when choosing search directions. The conjugate direction is determined by adding the search direction of the previous iteration, scaled by some value, to the steepest descent direction of the current iteration. The constant used to scale the previous search direction can be determined using either the Fletcher-Reeves formula or the Polak-Ribiere formula^{[1]}.
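The mixing of the previous direction can be sketched for the same quadratic test problem used above; the scaling constant beta below is the Fletcher-Reeves ratio (this is a generic illustration):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Minimize 0.5 x^T A x - b^T x with the conjugate gradient
    method; beta (Fletcher-Reeves) mixes the previous search
    direction into the new one."""
    x = x0.astype(float)
    g = A @ x - b
    d = -g                                   # first step: steepest descent
    for _ in range(len(b)):                  # exact in n steps for SPD A
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (d @ (A @ d))      # exact step length
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves formula
        d = -g_new + beta * d                # conjugate direction
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2))
print(x)  # solves A x = b in at most 2 steps
```

Unlike steepest descent, the search directions are mutually conjugate with respect to A, so a quadratic in n variables is minimized in at most n iterations.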

Second-order methods take advantage of the Hessian matrix (the matrix of second derivatives) of the function to improve the search direction and rate of convergence. Some examples of second-order methods are Newton's Method, the Davidon-Fletcher-Powell (DFP) method, and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method^{[1]}.
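Newton's Method is the simplest illustration of using the Hessian: each iteration solves a linear system for the search direction. The test function below is illustrative:

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=100):
    """Newton's method: solve H p = -g for the search direction p,
    exploiting curvature (Hessian) information for fast convergence
    near the minimum."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)  # Newton step
        x = x + p
    return x

# f(x) = x1^4 + x2^4 + x1^2 + x2^2: strictly convex, minimum at (0, 0)
grad = lambda x: 4 * x ** 3 + 2 * x
hess = lambda x: np.diag(12 * x ** 2 + 2)
print(newton_minimize(grad, hess, [2.0, -1.0]))  # ~ [0, 0]
```

Quasi-Newton methods such as DFP and BFGS follow the same pattern but build up an approximation of the Hessian (or its inverse) from gradient differences instead of computing it exactly.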

Population-based methods generate a population of points throughout the design space. Some methods then identify a region containing the best points and generate a new population there, continuing until convergence is reached (e.g., the Monte Carlo method). Others generate a population and then "evolve" the points: the weakest members of the new population are eliminated and the remainder is evolved again until convergence is reached (e.g., the genetic algorithm).
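The "generate, keep the best region, regenerate" loop described above can be sketched as a simple random search with a shrinking sampling interval (the function, bounds, and shrink factor here are illustrative choices, not a standard implementation):

```python
import random

def shrinking_monte_carlo(f, lo, hi, n_points=200, n_rounds=20, shrink=0.5):
    """Sample a population of 1-D points uniformly, keep the best one,
    recenter a smaller interval around it, and repeat."""
    best = None
    for _ in range(n_rounds):
        pop = [random.uniform(lo, hi) for _ in range(n_points)]
        best = min(pop, key=f)               # best point of this round
        half = shrink * (hi - lo) / 2        # shrink the search interval
        lo, hi = best - half, best + half
    return best

random.seed(0)
x = shrinking_monte_carlo(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
print(round(x, 3))  # ~ 3.0
```

Because the sampling interval halves each round, the population concentrates geometrically around the minimizer, at the cost of many function evaluations per round.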

See Monte-Carlo

Genetic algorithms are based on the principles of natural selection and natural genetics, meaning reproduction, crossover, and mutation are involved in the search procedure. The design variables are represented as strings of binary numbers which mirror chromosomes in genetics. These strings allow the individual binary digits, or bits, to be adjusted during the reproduction, mutation, and crossover stages^{[2]}. A population of points is used, and the number of initial points is typically two to four times the number of design variables. These points are evaluated to provide a fitness value, and above-average points are selected and added to a new population of points. Points in this new population undergo the second stage in the algorithm, known as crossover. In this stage, information from two "parent" points, or strings, is combined to produce a new "child" point. The mutation operator is optional. It selects points based on a user-defined probability and alters a bit in the point's binary string, thereby maintaining diversity in the population. The process is iterated until convergence is reached. GAs differ from other optimization techniques in that they work with a coding of the parameter set and not the parameters themselves, search a population of points instead of a single point, and use objective function knowledge instead of derivatives or other auxiliary knowledge^{[3]}.
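The selection, crossover, and mutation stages described above can be sketched in a minimal binary-string GA. Everything here (the fitness function, bit length, population size, and probabilities) is an illustrative choice, not a prescription:

```python
import random

def decode(bits, lo=0.0, hi=31.0):
    """Map a binary string to a real design-variable value in [lo, hi]."""
    return lo + int(bits, 2) * (hi - lo) / (2 ** len(bits) - 1)

def genetic_algorithm(fitness, n_bits=5, pop_size=20, n_gen=40,
                      p_cross=0.9, p_mut=0.02):
    rng = random.Random(1)
    pop = ["".join(rng.choice("01") for _ in range(n_bits))
           for _ in range(pop_size)]
    for _ in range(n_gen):
        scores = [fitness(decode(s)) for s in pop]

        def select():
            # tournament selection: the fitter of two random strings survives
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[i] if scores[i] >= scores[j] else pop[j]

        children = []
        while len(children) < pop_size:
            a, b = select(), select()
            if rng.random() < p_cross:        # single-point crossover
                cut = rng.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for s in (a, b):                  # flip each bit with prob p_mut
                children.append("".join(
                    bit if rng.random() > p_mut else "10"[int(bit)]
                    for bit in s))
        pop = children[:pop_size]
    return max(pop, key=lambda s: fitness(decode(s)))

# maximize f(x) = -(x - 10)^2 on [0, 31]; optimum at x = 10
best = genetic_algorithm(lambda x: -(x - 10.0) ** 2)
print(decode(best))
```

Note that the algorithm never touches derivatives: it manipulates only the bit strings and their fitness values, which is exactly the distinction drawn in reference [3].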

One of the most prevalent uses of optimization occurs in the design of structures. Common applications in this area include the reduction of weight subject to strength requirements, maximizing energy absorption in crashworthiness scenarios, and topology optimization. In a simple case, analytical solutions for objective functions are solved while altering design variables subject to constraints. This optimization can be performed in software such as MATLAB. In a more complex example, objective functions can be evaluated by finite element software such as Abaqus, LS-DYNA, etc. Because the optimization algorithm sometimes requires hundreds or even thousands of iterations, optimization can become infeasible when directly coupled to a computationally expensive finite element simulation. A suitable alternative is the use of metamodels.

Metamodels offer a fast analytical approximation of a complex, expensive objective function over a predefined design space. In order to create a metamodel, objective function values must be calculated using a full-scale simulation at "training points" sampled throughout the design space. These training points can be found using a Design of Experiments (DoE). Some tutorials for DoE can be found at the following links: 1, 2, and 3. Once the DoE points and their objective function values are found, the data is used to "train" the metamodel. After training, the metamodel can be used directly in place of the full-scale simulations to calculate objective functions much faster. Metamodels can also be used in Monte Carlo simulations to quantify uncertainty when many calculations are necessary.
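A simple polynomial response surface illustrates the train-then-replace workflow above. Here the "expensive" model is a stand-in function and the one-dimensional quadratic form is an illustrative choice (real metamodels are often multidimensional polynomials, kriging, or radial basis functions):

```python
import numpy as np

def fit_quadratic_metamodel(X, y):
    """Fit y ~ c0 + c1*x + c2*x^2 by least squares to training-point
    responses sampled from the expensive model (1-D for clarity)."""
    A = np.column_stack([np.ones_like(X), X, X ** 2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: coeffs[0] + coeffs[1] * x + coeffs[2] * x ** 2

# stand-in for an expensive simulation, and a small DoE over the space
expensive = lambda x: (x - 1.0) ** 2 + 2.0
X_train = np.linspace(-2.0, 4.0, 7)          # training points
surrogate = fit_quadratic_metamodel(X_train, expensive(X_train))
print(surrogate(3.0), expensive(3.0))        # surrogate matches the model
```

Once trained, `surrogate` costs a handful of arithmetic operations per call, so an optimizer or Monte Carlo loop can query it thousands of times at negligible expense.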

A comparative study of metamodeling methods for multi objective crashworthiness optimization

Authors: Howie Fang, Masoud Rais-Rohani, Z. Liu, and Mark Horstemeyer

Analytical Model for Axial Crushing of Multi-cell Multi-corner Tubes (Multi-CRUSH)

Contributors: Ali Najafi and Masoud Rais-Rohani

Topology Optimization of Continuum Structures Using Element Exchange Method

Authors: Mohammad Rouhi and Masoud Rais-Rohani

Element Exchange Method for Topology Optimization

Authors: Mohammad Rouhi, Masoud Rais-Rohani and Thomas Neil Williams

A Goal-Oriented, Inverse Decision-Based Design Method for Multi-Component Product Design

Authors: Tate R. Fonville, Anand B. Nellippallil, Mark F. Horstemeyer, Janet K. Allen, Farrokh Mistree

Product Design Optimization with Microstructure-property Modeling and Associated Uncertainties

Author(s): Kiran N. Solanki

Optimization algorithms can be used for model calibration. For example, the DMGfit (for metals), TP (for polymers), and MSFfit routines employ optimization algorithms to automatically fit the plasticity-damage and fatigue models. The constants of interest are selected, and a Monte Carlo optimization routine is performed to generate candidate constants. A single-element simulation then produces the model stress-strain curve. The curve is compared to the input data for fit quality, and this process is repeated until a satisfactory fit is achieved or a maximum number of iterations is reached. The resulting optimized constants are then output.
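The calibration loop described above can be summarized as Monte Carlo curve fitting: sample candidate constants, run the model, and keep the set with the lowest error against the experimental curve. The sketch below is generic; the model, bounds, and data are illustrative and the actual DMGfit/TP/MSFfit routines are more sophisticated:

```python
import random

def monte_carlo_calibrate(model, x_data, y_data, bounds, n_iter=2000, seed=0):
    """Randomly sample candidate constants within bounds, evaluate the
    model, and keep the set with the lowest sum-of-squares error
    against the reference curve."""
    rng = random.Random(seed)
    best_c, best_err = None, float("inf")
    for _ in range(n_iter):
        c = [rng.uniform(lo, hi) for lo, hi in bounds]   # candidate constants
        err = sum((model(x, c) - y) ** 2
                  for x, y in zip(x_data, y_data))       # fit comparison
        if err < best_err:
            best_c, best_err = c, err
    return best_c

# illustrative "model": linear hardening with two constants
model = lambda x, c: c[0] + c[1] * x
x_data = [0.0, 1.0, 2.0, 3.0]
y_data = [1.0, 3.0, 5.0, 7.0]   # generated with constants (1, 2)
c = monte_carlo_calibrate(model, x_data, y_data, [(0, 5), (0, 5)])
print(c)  # close to [1.0, 2.0]
```

In the real routines the `model(x, c)` call corresponds to a single-element simulation of the constitutive model, which dominates the cost of each iteration.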

The Embedded Atom Method (EAM) and Modified Embedded Atom Method (MEAM) potentials can be optimized based upon electronic-scale calculation results and experimental data. See MEAM Potential Calibration

This is an emerging topic at CAVS. The pages describing the progress are currently available only to the members of the research team.

- ↑ Rais-Rohani, Masoud, “Handout #3: Mathematical Programming Methods for Unconstrained Optimization,” Design Optimization Class, Mississippi State University, Spring 2012.
- ↑ Rao, S.S., “Genetic Algorithms,” Engineering Optimization: Theory and Practice, John Wiley and Sons, Inc., 2009, pp. 694-702.
- ↑ Goldberg, D.E., Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley Longman, 1989.