Solving Multiobjective Optimization Problems with Inequality Constraints Using an Augmented Lagrangian Function

This paper presents a technique for solving multiobjective optimization problems subject to inequality constraints. The technique converts the original problem into an unconstrained single-objective optimization problem by employing an augmented Lagrangian function and an ϵ-constraint method. Specifically, the ϵ-constraint method transforms the problem with multiple objectives into a single-objective one, while the augmented Lagrangian function changes the constrained optimization problem into an unconstrained one. We provide two propositions, complete with proofs, to verify the admissibility and Pareto optimality of the derived solutions. Furthermore, we conduct a comparative analysis with two established methods, NSGA-II and BoostDMS, focusing on the convergence and distribution of solutions across fifty test problems sourced from the literature. The collective theoretical and empirical evidence suggests that our proposed method is superior for solving multiobjective optimization problems.


Introduction
Recently, the application of multiobjective optimization concepts has been instrumental in addressing specific challenges in various domains, including physics, economics, transportation, and social choice [1,31,39,43]. Research has been conducted on single- and multiple-objective optimization problems under constraints. The main aim of researchers is the development of methods able to solve fully non-linear optimization problems. To handle difficult constraint functions, many methods combine the objective functions and the constraint functions through a penalty function [36,37,41,42]. This transforms the initial problem formulation into an unconstrained single-objective optimization problem. For this last formulation, many methods have been proposed to reach the optimal solutions.
Many studies have used penalty functions to ease the computation of optimal solutions, but the Lagrangian penalty function is one of the best known and most widely used in the resolution of constrained optimization problems. Many works in the literature have shown the performance of the Lagrangian penalty function in helping to reach optimal solutions in single-objective optimization. Birgin et al. proposed several results on this topic: global minimization using an augmented Lagrangian method with variable lower-level constraints [5]; practical augmented Lagrangian methods for constrained optimization [4]; and optimality properties of an augmented Lagrangian method. The main contributions of this paper are the following:
• Introduction of an algorithm developed using the ϵ-constraint approach in conjunction with the augmented Lagrangian method to solve continuous multiobjective problems with convex constraints.
• Analysis of the global convergence of the point sequences generated by the algorithm towards Pareto stationarity, under key assumptions such as the convexity of the problem and the continuity and differentiability of the objective functions and constraints.
• Evaluation of the performance of the proposed method compared to state-of-the-art approaches already established in the literature.
To assess the performance of the proposed method, we compare the obtained numerical solutions to those generated by NSGA-II [12] and BoostDMS [16], using various performance metrics. The results demonstrate that our method outperforms the other methods on the majority of test problems.
The rest of this paper is organized as follows. In Section 2, some preliminaries are given. Section 3 introduces the augmented Lagrangian penalty method based on the ϵ-constraint approach. In Section 4, we present an algorithm for its application, accompanied by a theoretical convergence study. In Section 5, we present numerical results on test problems from the literature. We end in Section 6 with a conclusion and some remarks.

Preliminaries
As a minimization problem can be transformed into a maximization problem and vice versa, we present our results for the case of minimization. A multiobjective optimization problem can be formulated as
\[
\min_{x \in X} F(x) = \big(f_1(x), f_2(x), \dots, f_q(x)\big)^\top \quad \text{s.t.} \quad g_i(x) \le 0,\ i = 1, \dots, m, \tag{1}
\]
where F : R^n → R^q and g_i : R^n → R are continuous and differentiable functions, and X ⊆ R^n is a non-empty convex set. Let 𝒳 denote the feasible set of problem (1), defined as
\[
\mathcal{X} = \{ x \in X : g_i(x) \le 0,\ i = 1, \dots, m \}.
\]
For the rest of this work, we consider problem (1), whose objective functions are convex and whose constraint functions are continuous.
The following definitions present the basic concepts of optimal solutions in multiobjective programming.

Definition 1 ([18])
A feasible solution x* ∈ 𝒳 is called efficient or Pareto optimal if there is no other x ∈ 𝒳 such that f_j(x) ≤ f_j(x*) for all j = 1, …, q and f_j(x) < f_j(x*) for at least one index j.

Definition 2 ([18])
A feasible solution x* ∈ 𝒳 is called weakly efficient or weakly Pareto optimal if there is no other x ∈ 𝒳 such that f_j(x) < f_j(x*) for all j = 1, …, q.
Now, a necessary condition for Pareto optimality, namely Pareto stationarity, is given by the following definition.

Definition 3
A point x* ∈ X is called Pareto-stationary for problem (1) if, for all x ∈ X,
\[
\max_{j=1,\dots,q} \nabla f_j(x^*)^\top (x - x^*) \ge 0.
\]
Note also that, if x* is not Pareto-stationary, there exists x ∈ X such that \(\max_{j=1,\dots,q} \nabla f_j(x^*)^\top (x - x^*) < 0\). Thus, a well-known equivalent characterization of a Pareto-stationary point from the point of view of the projection is given by the following lemma.

Lemma 1
A point x* ∈ X is Pareto-stationary for problem (1) if and only if, for all x ∈ R^n,
\[
\max_{j=1,\dots,q} \nabla f_j(x^*)^\top \big( \Pi_X[x] - x^* \big) \ge 0,
\]
where Π_X[x] is the projection operator of the point x onto the convex set X.
Note that, in the single-objective case, a point x* is Pareto-stationary if ∇f(x*)^⊤(x − x*) ≥ 0 for all x ∈ X, and its equivalent characterization from the point of view of the projection is given by the following relation:
\[
x^* = \Pi_X\big[ x^* - \alpha \nabla f(x^*) \big] \quad \text{for all } \alpha > 0.
\]

Definition 4
We call the ideal point of problem (1) the vector z* ∈ R^q whose components z*_j are obtained by individually minimizing each objective function f_j, j = 1, 2, …, q, under the constraints of problem (1). That is to say:
\[
z_j^* = \min_{x \in \mathcal{X}} f_j(x), \quad j = 1, \dots, q.
\]
The ϵ-constraint approach is a method used to transform problem (1) into the following single-objective form:
\[
\min_{x \in \mathcal{X}} f_p(x) \quad \text{s.t.} \quad f_j(x) \le \epsilon_j,\ j = 1, \dots, q,\ j \ne p, \tag{4}
\]
where ϵ_j ∈ [ϵ_j^min, ϵ_j^max], and ϵ_j^min and ϵ_j^max are determined by respectively minimizing and maximizing f_j(x) under the constraint x ∈ 𝒳. Let us set g_i(x) = f_i(x) − ϵ_i, i = 1, …, q, i ≠ p; appending these to the original constraints, problem (4) is reworded as follows:
\[
\min_{x \in X} f_p(x) \quad \text{s.t.} \quad g_i(x) \le 0,\ i = 1, \dots, m, \tag{5}
\]
where the g_i now collect the original constraints together with the ϵ-constraints. Any solution to problem (5) is a Pareto-optimal solution for any upper-bound vector ϵ = (ϵ_1, …, ϵ_{p−1}, ϵ_{p+1}, …, ϵ_q) with ϵ_j ∈ [ϵ_j^min, ϵ_j^max].
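To fix ideas, the following minimal Python sketch illustrates the ϵ-constraint scalarization (4)-(5) on a hypothetical bi-objective problem; the problem data f1, f2, g, the box bounds, and the 20-point sweep over ϵ are our illustrative assumptions, not the implementation used in this paper.

```python
# Sketch of the epsilon-constraint scalarization (4)-(5) on a toy
# bi-objective problem (all problem data here are hypothetical).
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: x[0] ** 2 + x[1] ** 2                  # priority objective f_p
f2 = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2      # second objective
g = lambda x: x[0] + x[1] - 3.0                       # constraint g(x) <= 0

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
cons_g = {"type": "ineq", "fun": lambda x: -g(x)}     # scipy expects fun >= 0

# epsilon bounds: minimize and maximize f2 over the feasible set
eps_min = minimize(f2, np.zeros(2), bounds=bounds, constraints=[cons_g]).fun
eps_max = -minimize(lambda x: -f2(x), np.zeros(2), bounds=bounds,
                    constraints=[cons_g]).fun

front = []
for eps in np.linspace(eps_min, eps_max, 20):
    # problem (5): minimize f1 subject to f2(x) - eps <= 0 and g(x) <= 0
    cons_eps = {"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}
    res = minimize(f1, np.zeros(2), bounds=bounds,
                   constraints=[cons_g, cons_eps])
    if res.success:
        front.append((f1(res.x), f2(res.x)))
```

Each inner solve yields one point of the approximated Pareto front, so sweeping ϵ over [ϵ^min, ϵ^max] traces the front.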

Main results
The method presented in this paper is based on the projected gradient, which requires some conditions to be satisfied before use: the decision space must be convex and compact, and the objective and constraint functions must be differentiable. The two assumptions below formalize these requirements.

Assumption 1
The set X ⊆ R n is closed and convex.

Assumption 2
The objective function F has bounded level sets in the multiobjective sense, i.e., the set \(\{x \in \mathbb{R}^n : F(x) \leqq F(x_0)\}\) is compact.

Principle
As a reminder, the augmented Lagrangian method developed in the work of E. G. Birgin [4] allows the transformation of a constrained optimization problem into an unconstrained one. Let us consider problem (5) and pose
\[
L_{\rho}(x, \mu) = f_p(x) + \frac{1}{2\rho} \sum_{i=1}^{m} \Big[ \max\big(0,\ \mu_i + \rho\, g_i(x)\big) \Big]^2. \tag{6}
\]
Problem (5) after penalization becomes
\[
\min_{x \in X} L_{\rho_k}(x, \mu^k),
\]
where ρ_k > 0 is a penalization parameter and μ^k ∈ R^m_+ is a vector of approximate Lagrange multipliers associated with the constraints. These parameters are updated at each iteration as follows [25]:
\[
\mu_i^{k+1} = \min\Big( \mu_{\max},\ \max\big(0,\ \mu_i^k + \rho_k\, g_i(x^{k+1})\big) \Big), \qquad
\rho_{k+1} = \begin{cases} \rho_k & \text{if the infeasibility measure has sufficiently decreased}, \\ \gamma \rho_k,\ \gamma > 1, & \text{otherwise}. \end{cases}
\]
Under the assumption that the objective functions and constraints are differentiable, the function defined by equation (6) is also differentiable. Consequently, the gradient of the function (6) can be expressed as follows:
\[
\nabla_x L_{\rho}(x, \mu) = \nabla f_p(x) + \sum_{i=1}^{m} \max\big(0,\ \mu_i + \rho\, g_i(x)\big)\, \nabla g_i(x).
\]
A stationary point of the augmented Lagrangian sub-problem is characterized by the following definitions.
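As an illustration, here is a minimal Python sketch of the function (6), its gradient, and the safeguarded parameter updates described above; the constants gamma, tau, and mu_max are placeholder values of ours, and the infeasibility measure V is the usual choice in Birgin-style augmented Lagrangian methods, assumed here rather than quoted from the paper.

```python
# Sketch of the augmented Lagrangian (6), its gradient, and the updates
# of mu and rho; gamma, tau and mu_max are illustrative placeholders.
import numpy as np

def aug_lagrangian(x, mu, rho, f_p, grad_f_p, g_list, grad_g_list):
    """Value and gradient of L_rho(x, mu) for constraints g_i(x) <= 0."""
    val = f_p(x)
    grad = np.asarray(grad_f_p(x), dtype=float).copy()
    for mu_i, g_i, dg_i in zip(mu, g_list, grad_g_list):
        t = max(0.0, mu_i + rho * g_i(x))          # max(0, mu_i + rho g_i(x))
        val += t ** 2 / (2.0 * rho)
        grad += t * np.asarray(dg_i(x), dtype=float)
    return val, grad

def update_parameters(x_new, mu, rho, g_list, v_prev,
                      gamma=10.0, tau=0.5, mu_max=1e6):
    """Safeguarded multiplier update and penalty-increase rule."""
    g_vals = np.array([g_i(x_new) for g_i in g_list])
    mu_new = np.clip(mu + rho * g_vals, 0.0, mu_max)
    v_new = np.maximum(g_vals, -mu / rho)          # infeasibility measure V_i
    if np.linalg.norm(v_new, np.inf) <= tau * np.linalg.norm(v_prev, np.inf):
        rho_new = rho                               # enough progress: keep rho
    else:
        rho_new = gamma * rho                       # otherwise increase rho
    return mu_new, rho_new, v_new
```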

Definition 5
Let ϵ be fixed. A point x* ∈ X is a Pareto-stationary point if and only if
\[
\Pi_X\big( x^* - \nabla_x L_{\rho}(x^*, \mu) \big) = x^*,
\]
where Π_X(x) is the projection of x onto the convex set X.
Thus, the following definition gives a characterization of α-stationarity for the augmented Lagrangian sub-problem [25].

Definition 6
Let α ≥ 0 and ϵ be fixed. A point x* ∈ X is said to be α-Pareto-stationary for the Lagrangian sub-problem if and only if
\[
\big\| \Pi_X\big( x^* - \nabla_x L_{\rho}(x^*, \mu) \big) - x^* \big\| \le \alpha.
\]
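For instance, when the set X is a box, the projection Π_X reduces to a componentwise clip, and the α-stationarity test of Definition 6 can be sketched as follows (the box structure is our assumption for illustration).

```python
# Sketch of the alpha-stationarity test of Definition 6 for a box-shaped X.
import numpy as np

def is_alpha_stationary(x, grad_L, lower, upper, alpha):
    """Check || Pi_X(x - grad_L) - x || <= alpha, with Pi_X a clip."""
    proj = np.clip(x - grad_L, lower, upper)   # Pi_X(x - grad L)
    return np.linalg.norm(proj - x) <= alpha
```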

Algorithm
The following algorithm presents the augmented Lagrangian method based on the ϵ-constraint approach.
[Algorithm 1 (ALBϵ): augmented Lagrangian method based on the ϵ-constraint approach. The listing comprises steps 1-13: a loop over the constraints i = 1, 2, …, m (steps 4-6), the solution of the Lagrangian sub-problem (step 7), a conditional parameter update (steps 8-11), and the penalty update (step 13).]

Algorithm 1 starts with the selection of a priority objective function that will be minimized. The other objective functions are individually minimized under the initial constraints to obtain the components of the vector ϵ. Each objective function is then transformed into a new constraint by considering the corresponding ϵ value as its maximum value. After that, the initial problem is transformed into a parametric single-objective optimization problem, as shown in equation (5). Next, an arbitrary choice of μ is made to fix the lower and upper bounds of the value of μ. In the same way, μ^0 and ρ_0 are chosen; they are updated at each iteration in accordance with steps 8 to 11 and step 13 of Algorithm 1. Step 7 is dedicated to solving the Lagrangian sub-problem. Under the assumptions presented above, we apply the projected gradient method [17,19,23,24,44], one of the iterative methods suited to the sub-problem of Algorithm 1; a sketch is given below. The step length in the iterative step is determined using Armijo's rule [21,26]. The well-defined nature of the algorithm follows directly from Lemma 4 in the work of Fliege and Svaiter [23]. Note that a backtracking Armijo-type line search is a decreasing method with respect to the partial order, i.e., the values of the objective function always decrease componentwise in a finite number of iterations.
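The following is a hedged Python sketch of this inner solver, projected gradient with Armijo backtracking, again assuming a box-shaped X; the constants beta, sigma, and the iteration cap are illustrative, not the settings of the paper.

```python
# Sketch of the projected gradient inner solver (step 7 of Algorithm 1)
# with Armijo backtracking; beta, sigma and max_iter are illustrative.
import numpy as np

def project_box(x, lower, upper):
    return np.clip(x, lower, upper)

def projected_gradient(lag, x0, lower, upper, alpha_tol=1e-6,
                       beta=0.5, sigma=1e-4, max_iter=500):
    """Minimize x -> lag(x) = (value, gradient) over the box [lower, upper]."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        val, grad = lag(x)
        # stop at alpha_k-stationarity in the sense of Definition 6
        if np.linalg.norm(project_box(x - grad, lower, upper) - x) <= alpha_tol:
            break
        t = 1.0
        while True:                                 # Armijo backtracking
            x_trial = project_box(x - t * grad, lower, upper)
            new_val, _ = lag(x_trial)
            if new_val <= val + sigma * grad @ (x_trial - x) or t < 1e-12:
                break
            t *= beta
        x = x_trial
    return x
```

In the outer loop, lag would be the map x ↦ aug_lagrangian(x, μ^k, ρ_k, …) from the sketch given above, and alpha_tol would be the current tolerance α_k.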

Convergence Study
The following propositions establish the admissibility and optimality of the solutions given by Algorithm 1. Proposition 2 shows that an iterate x^{k+1} of Algorithm 1 is admissible, i.e., that x^{k+1} belongs to the space 𝒳 = {x ∈ R^n | g(x) ≤ 0}; Proposition 3 shows that a limit point x* of the sequence {x^k} generated by Algorithm 1 is weakly Pareto optimal for problem (5).
Lemma 2
Let x* be a limit point of the sequence {x^{k+1}} generated by Algorithm 1. If x* ∈ 𝒳, then, for any index i with g_i(x*) < 0, we have max(0, μ_i^k + ρ_k g_i(x^{k+1})) = 0 for all sufficiently large k in the subsequence K such that lim_{k∈K} x^{k+1} = x*.

Proof
From the instructions of the Algorithm, we have μ_i^k ≥ 0 for any k. Two possible cases arise.
(a) ρ_k bounded. In this case, there is a k_0 such that ρ_k = ρ_{k_0} for any k ≥ k_0. Thus, for …
When η → ∞, since β ∈ [0, 1), we have …, which implies that min … . By definition, the sequence μ_i^k is bounded, which implies … . Thus, for k ∈ K sufficiently large, we have … . Hence the result.

Proposition 2
Let {x^{k+1}} be a sequence generated by Algorithm 1 and let x* be a limit point of this sequence. Then every such point x* is a feasible point for problem (4).
We will then demonstrate that any limit point generated by Algorithm 1 is Pareto-stationary.

Proposition 3
Let {x^{k+1}} be a sequence generated by Algorithm 1 and let x* be a limit point of this sequence. Then any such limit point x* is Pareto-stationary for problem (4).
Proof
Suppose, by contradiction, that x* is not Pareto-stationary for problem (4). From the instructions of Algorithm 1, posing … and …, it follows that, at each iteration, … From the properties of the projection, we obtain, for all y ∈ X, … By adding and subtracting x^{k+1} and rearranging, we get (23). Using the convexity of g, the last two terms of (23) can be bounded as follows: … Now, considering the term …, we have … Otherwise, if g_i(x^{k+1}) ≥ 0, we have … However, we can rewrite … Recalling that, by the α-stationarity of the sub-problem solution (Definition 6), ‖Π_X(x^{k+1} − ∇_x L_{ρ_k}(x^{k+1}, μ^k)) − x^{k+1}‖ ≤ α_k, equation (23) can therefore be rewritten as follows: … Now, there are two possible cases:
(i) ρ_k bounded: according to Lemma 2, we have μ_i^k = 0;
(ii) ρ_k → ∞: there exists a k_0 such that, for all k ≥ k_0, …
In each case (i) and (ii), since g(x*) ≤ 0, for k ∈ K sufficiently large and by continuity of ∇f_p^ϵ and ∇g, we obtain … This contradicts the initial hypothesis. □

Numerical Experiments
In this section, we present some numerical results of ALBϵ on test problems, together with a comparison against two other methods, namely NSGA-II and BoostDMS.
Table 1 gives the parameters used for each method. In total, we performed the simulation on 50 test problems, listed in Table 2. For those test problems from the literature that do not have bound constraints, we defined search bounds. For the implementation of ALBϵ, we used the parameters λ^0 = (0, 1, 1, …

Table 2. List of Multiobjective optimization test problems
Problems | n | q | Parameter bounds | Source

As in [6,11,38], we used the metrics proposed there for the comparison of the algorithms: the purity metric, the spread metrics, and the hyper-volume indicator. The purity metric measures the quality of the Pareto front generated by an algorithm; it gives the percentage of non-dominated solutions generated by a method [11]. The purity metric is given by the following formula:
\[
Purity_{p,s} = \frac{|F_{p,s} \cap F_p|}{|F_{p,s}|},
\]
with F_{p,s} the set of solutions generated by a solver s ∈ S for a problem p ∈ P, where S is the set of solvers and P the set of test problems. F_p represents the set of solutions generated by all solvers for problem p (F_p = ∪_{s∈S} F_{p,s}) with the dominated points removed.
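A possible implementation of the purity computation is sketched below; pooling the fronts and removing dominated points follows the description above, while the exact point matching via np.allclose is our simplification.

```python
# Sketch of the purity metric: fraction of a solver's points that survive
# in the pooled nondominated front F_p (minimization is assumed).
import numpy as np

def dominates(a, b):
    """True if objective vector a dominates b (componentwise, minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def purity(fronts_by_solver):
    """fronts_by_solver: dict mapping solver name -> (N_s, q) array F_{p,s}."""
    pooled = np.vstack(list(fronts_by_solver.values()))
    ref = [y for y in pooled if not any(dominates(z, y) for z in pooled)]
    scores = {}
    for s, F in fronts_by_solver.items():
        hits = sum(any(np.allclose(y, r) for r in ref) for y in F)
        scores[s] = hits / len(F) if len(F) else 0.0
    return scores
```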
The spread metrics used are Γ-Spread and ∆-Spread. The Γ-Spread metric measures the maximum spacing of the solutions generated by a solver [11]. It is given by the following formula:
\[
\Gamma_{p,s} = \max_{j \in \{1,\dots,m\}} \ \max_{i \in \{0,\dots,N\}} \ \delta_{i,j},
\]
where N represents the number of solutions generated by a solver, m is the number of objective functions, and the values of f_j(x_k) are arranged in ascending order. The ∆-Spread metric measures the distribution of the solutions generated by a solver [11]. It is calculated by the following formula:
\[
\Delta_{p,s} = \max_{j \in \{1,\dots,m\}} \frac{\delta_{0,j} + \delta_{N,j} + \sum_{i=1}^{N-1} \big| \delta_{i,j} - \bar{\delta}_j \big|}{\delta_{0,j} + \delta_{N,j} + (N-1)\,\bar{\delta}_j},
\]
where \(\bar{\delta}_j\) is the average of the δ_{i,j} with i = 1, …, N − 1, and δ_{0,j} and δ_{N,j} involve the extreme points indexed by 0 and N + 1. We used the technique proposed in [11] to compute the extreme points for problems that do not have an analytic front: we first removed the dominated points from the union of all the fronts; then, for each component of the objective function, we selected the pair corresponding to the largest pairwise distance measured using f_j(·).
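Both spread metrics can be sketched compactly as below; we assume the front array already contains the extreme points, which is our convention for the sketch.

```python
# Sketch of the Gamma-Spread (maximum gap) and Delta-Spread (gap
# uniformity) metrics; the extreme points are assumed to be included.
import numpy as np

def gamma_spread(front):
    """front: (N+2, m) array of objective vectors, extremes included."""
    gaps = np.diff(np.sort(front, axis=0), axis=0)   # delta_{i,j} per column
    return gaps.max()

def delta_spread(front):
    deltas = np.diff(np.sort(front, axis=0), axis=0)
    vals = []
    for j in range(front.shape[1]):
        d = deltas[:, j]
        d0, dN, mid = d[0], d[-1], d[1:-1]           # extreme and interior gaps
        dbar = mid.mean() if mid.size else 0.0
        denom = d0 + dN + mid.size * dbar
        vals.append((d0 + dN + np.abs(mid - dbar).sum()) / denom
                    if denom > 0 else 0.0)
    return max(vals)
```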
For a minimization problem, the hyper-volume indicator measures the volume of the part of the objective space dominated by the computed approximation of the Pareto front of a problem, bounded by a reference point W^p ∈ R^q [22]. It is given by the following formula:
\[
H(F_{p,s}) = Volume\Big( \bigcup_{y \in F_{p,s}} \{ z \in \mathbb{R}^q : y \le z \le W^p \} \Big),
\]
where Volume(·) denotes the Lebesgue measure. As in [6], the coordinates of the reference point W^p are determined by the relation
\[
W^p_j = \max\big\{ f_j(y) : y \in \textstyle\bigcup_{s \in S} F_{p,s} \big\}, \quad j = 1, \dots, q.
\]
The scaling of H(F_{p,s}) is given by
\[
\bar{H}(F_{p,s}) = \frac{H(F_{p,s})}{\prod_{j=1}^{q} \big( W^p_j - l^p_j \big)}, \quad \text{with } l^p_j = \min\big\{ f_j(y) : y \in \textstyle\bigcup_{s \in S} F_{p,s} \big\}.
\]
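For q = 2 the indicator admits a simple sweep computation, sketched below for a nondominated front in the minimization sense; higher-dimensional hypervolume requires a dedicated algorithm and is not attempted here.

```python
# Sketch of the 2-D hypervolume dominated by a nondominated front,
# bounded by the reference point ref (minimization).
import numpy as np

def hypervolume_2d(front, ref):
    """front: (N, 2) nondominated points; ref: reference point W^p."""
    F = front[np.argsort(front[:, 0])]     # sort by first objective
    hv, level = 0.0, ref[1]                # current upper level in f2
    for a, b in F:
        if a >= ref[0] or b >= level:
            continue                        # no additional contribution
        hv += (ref[0] - a) * (level - b)    # rectangle added by this point
        level = b
    return hv
```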
We then use the performance profiles proposed in [16,38,6] to assess the performance on the four metrics presented above; we refer the reader to the articles cited above for more information on performance profiles. Recall that a performance profile is presented as a diagram of a cumulative distribution function ρ_s(τ), defined as follows:
\[
\rho_s(\tau) = \frac{1}{|P|} \Big| \big\{ p \in P : r_{p,s} \le \tau \big\} \Big|, \qquad \text{with } r_{p,s} = \frac{t_{p,s}}{\min\{ t_{p,s'} : s' \in S \}}.
\]
Since performance profiles are designed for metrics whose lowest value indicates better performance, for metrics such as purity and hypervolume (where larger is better) we set t_{p,s} = 1/t_{p,s}, as proposed in [6].
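A minimal sketch of this computation is given below; the (problems x solvers) matrix layout is our convention, and the reciprocal transformation for purity and hypervolume is applied by the caller.

```python
# Sketch of performance profiles rho_s(tau) from a matrix of metric
# values t[p, s], where smaller values indicate better performance.
import numpy as np

def performance_profile(t, taus):
    """t: (num_problems, num_solvers) array; taus: iterable of thresholds."""
    ratios = t / t.min(axis=1, keepdims=True)        # ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau)   # rho_s(tau) per solver
                      for s in range(t.shape[1])]
                     for tau in taus])
```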

Performance Profiles
Now, we present the performance profiles for the four metrics above on the 50 test problems. On these four metrics, we notice that the ALBϵ method obtains better values on the purity and Γ-Spread metrics. For the ∆-Spread metric, ALBϵ wins over BoostDMS. We see that, except for the bimodal problem KD1, the ALBϵ method wins over the two other methods on the purity metric. For the Γ-Spread metric, the ALBϵ method is better on Min-Ex, Cosh, KD1, and ZDT1. For the ∆-Spread metric, only the ZDT1 problem is won by the ALBϵ method. For hypervolume, ALBϵ is competitive with both algorithms. From the theoretical and numerical results of the ALBϵ method, we can say that it is competitive for solving multiobjective optimization problems, since it gives better results in the generation of non-dominated solutions.
Regarding the maximum spacing, the behaviour can be attributed to the ϵ-constraint approach: since we choose a priority function, this choice can affect the maximum spacing of the solutions. For the diversity of solutions and the hypervolume metric, we observe that the ALBϵ method remains competitive.

Conclusion
In this work, an augmented Lagrangian function was combined with an ϵ-constraint approach to solve constrained multiobjective optimization problems. The approach consists of transforming constrained multiobjective optimization problems into unconstrained single-objective optimization problems. The proposed algorithm is deterministic, and the Pareto optimality of the provided solutions has been justified. First, we presented the algorithm and some theoretical convergence results through several propositions. Then, we reported the numerical results on fifty test problems from the literature, solved with our method and with NSGA-II and BoostDMS, which are well-known and widely used methods. Finally, we compared the numerical performance of these three methods, focusing on the convergence and distribution of the obtained solutions. According to this study, it appears that ALBϵ is the best choice for the resolution of multiobjective optimization problems when the admissible solution set is bounded and convex and the objective space is compact. The next phase of our research will focus on extending the algorithm to tackle non-convex objective functions on the one hand, and non-differentiable cases on the other. Furthermore, we are interested in using this method to solve real-life problems.
Figures 1, 2 and 3 represent the performance profiles on purity for the ALBϵ, NSGA-II, and BoostDMS methods. We can observe that, on purity, ALBϵ is better than NSGA-II and BoostDMS in terms of efficiency. Figures 4, 5 and 6 represent the performance profiles of the Γ-Spread metric for the three methods, where we can observe the dominance of the ALBϵ method over the NSGA-II and BoostDMS methods. Considering Figures 7, 8 and 9 and Figures 10, 11 and 12, which represent the performance profiles of the ∆-Spread metric and the hyper-volume indicator, we notice that the ALBϵ method remains competitive.

Table 1.
Value of used parameters for methods

Table 3.
Statistical study of performance values on Purity

Table 4.
Statistical study of performance values on Γ-Spread

Table 5.
Statistical study of performance values on ∆-Spread

Study of problems with an analytical front
This concerns the problems SCH, COSH, Min-Ex, ZDT1 (with 30 variables) and KD1. We use these problems to compare the four metrics against the true front. For the KD1 problem, the global minimum is obtained …