Accepted Manuscript

DNMA: A double normalization-based multiple aggregation method for multi-expert multi-criteria decision making

Huchang Liao, Xingli Wu

PII: S0305-0483(18)30228-7
DOI: https://doi.org/10.1016/j.omega.2019.04.001
Reference: OME 2058
To appear in: Omega
Received date: 28 February 2018
Accepted date: 4 April 2019

Please cite this article as: Huchang Liao, Xingli Wu, DNMA: A double normalization-based multiple aggregation method for multi-expert multi-criteria decision making, Omega (2019), doi: https://doi.org/10.1016/j.omega.2019.04.001

Highlights
We develop a target-based linear and a target-based vector normalization technique.
Three aggregation models based on the normalization techniques are addressed.
A double normalization-based multiple aggregation (DNMA) method is proposed.
The DNMA method is implemented to solve two case studies.
The revised manuscript (#OMEGA_2018_216.R2)

DNMA: A double normalization-based multiple aggregation method for multi-expert multi-criteria decision making

Huchang Liao 1,2, Xingli Wu 1
1 Business School, Sichuan University, Chengdu 610064, China
2 Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia

Abstract

This paper develops a comprehensive algorithm for multi-expert multi-criteria decision making problems involving quantitative and qualitative criteria of benefit, cost, or target types. We focus on using probabilistic linguistic term sets to express the qualitative evaluations because of their strength in expressing complex individual and collective linguistic assessments. Firstly, we develop a target-based linear normalization technique and a target-based vector normalization technique. A weight adjustment method is proposed to achieve the trade-off between criteria after normalization. Given that the two target-based normalization techniques have different advantages, we then propose a ranking method, consisting of three subordinate models, based on these two target-based normalization approaches and three aggregation techniques. Reliable results for a multi-expert multi-criteria decision making problem are determined by integrating the subordinate utility values and the ranks of alternatives. The proposed method is implemented to solve a green enterprise ranking problem and an excavation scheme selection problem for shallow buried tunnels, respectively. The advantages of the proposed method are highlighted through comparative analyses with other ranking methods.
Keywords: Multiple criteria analysis; target-based normalization; probabilistic linguistic term set; double normalization-based multiple aggregation method

Corresponding author. E-mail addresses:
[email protected] (H.C. Liao);
[email protected] (X.L. Wu).

1 Introduction

Multi-Expert Multi-Criteria Decision Making (MEMCDM) is a process of ranking a finite set of alternatives that are evaluated by multiple experts over multiple criteria. This process contains three phases: (1) collecting evaluation values, (2) normalizing evaluation values, and (3) aggregating the normalized evaluation values. Generally, an MEMCDM problem consists of both quantitative and qualitative criteria [15]. Using linguistic terms to evaluate alternatives with respect to qualitative criteria is consistent with people's habits and cognition [22]. The Probabilistic Linguistic Term Set (PLTS) [29], which is characterized by a set of linguistic terms associated with different probabilities reflecting their relative reliabilities, is an effective tool to express qualitative evaluations. It can not only express vague linguistic evaluations, such as "good" and "between medium and high", but also clearly represent relatively precise and complex linguistic expressions, such as "between young and very young, with 20% certainty that it is young and 80% certainty that it is very young". Besides, the PLTS can express the collective opinions of a group, such as "20% of experts judge that the product quality is very good, 30% of experts are sure that it is between medium and good, but 50% of them evaluate it as bad". Because of its flexibility and comprehensiveness, the PLTS has attracted growing attention for expressing complex linguistic evaluations [2, 23, 33-35, 38] and has found successful applications, such as hospital evaluation based on patients' satisfaction [23], investment decision in the sharing economy [35], and innovation design selection [33]. There are two categories of ranking methods for MEMCDM problems: the outranking methods and the utility value-based ranking methods [25, 26].
The outranking methods are limited in dealing with massive numbers of alternatives due to their complicated calculations [12, 13]. The utility value-based ranking methods, such as the Simple Multi-Attribute Rating Technique (SMART) [14], MULTIMOORA (MULTIplicative Multi-Objective Optimization by Ratio Analysis) [9], TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) [6, 11] and VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje, Serbian for multiple criteria optimization compromise solution) [28], are effective in handling MEMCDM problems. They vary in their normalization and aggregation approaches, which may further lead to different decision results [18]. These utility value-based ranking methods are limited by using onefold normalization methods, which can bias the results through faulty normalized values for aggregation. For details about the discussion on ranking methods, please refer to Sect. 2.2.

To overcome the defects mentioned above and make the decision results more reliable, this paper proposes a novel utility value-based ranking method, namely, the Double Normalization-based Multiple Aggregation (DNMA) method. This method is characterized by two normalization techniques and three aggregation tools. It can solve general MEMCDM problems which include quantitative and qualitative criteria of benefit, cost or target types. The values of quantitative criteria are expressed as numerical values while the values of qualitative criteria are comprehensively depicted by PLTSs. We highlight the study by the following innovative work:

(1) We present a target-based linear normalization formula and a target-based vector normalization formula.

(2) We develop a weight-adjustment method for the trade-off between criteria after normalization.
(3) We make a suitable combination of two kinds of normalization approaches and three types of aggregation operators to develop three subordinate aggregation models with different functions.

(4) We propose a new aggregation method based on the weighted Euclidean distance operator, which can integrate the subordinate utility values and the ranks of alternatives.

(5) We construct a framework of the DNMA method, based on which we solve two case studies concerning the ranking of green small and medium-sized iron and steel enterprises and the excavation scheme selection for shallow buried tunnels.

The paper is organized as follows: Section 2 reviews the operations on PLTSs and representative ranking methods. Section 3 introduces the target-based linear and vector normalization approaches and proposes the DNMA method. Section 4 illustrates the proposed method by two case studies about the green enterprise selection and the excavation scheme selection for shallow buried tunnels. Final conclusions are pointed out in Section 5.

2 Preliminaries

2.1 The PLTSs

This section reviews the background of the PLTSs and some operations.

Some criteria are essentially qualitative and hard to measure by means of precise numerical values. To express vague opinions on qualitative criteria, Zadeh [36] proposed the fuzzy linguistic approach based on membership functions. Linguistic terms are in line with people's thinking habits and thus are straightforward for making evaluations. $S = \{s_\alpha \mid \alpha = 0, 1, \ldots, 2\tau\}$ and $S = \{s_\alpha \mid \alpha = -\tau, \ldots, -1, 0, 1, \ldots, \tau\}$ are two widely used Linguistic Term Sets (LTSs), where $s_\alpha \leq s_\beta$ if $\alpha \leq \beta$. To represent hesitant linguistic information, Rodríguez et al. [31] introduced the hesitant fuzzy linguistic term set, which allows experts to make judgments in more than one linguistic term with the same weight. To represent the general situation in which preferences for different linguistic terms exist, Pang et al.
[29] extended the hesitant fuzzy linguistic term set by associating each linguistic term with a probability, and developed the Probabilistic Linguistic Term Set (PLTS). Let $S$ be an LTS. A PLTS is [29]

$h_S(p) = \left\{ s^{(l)}\left(p^{(l)}\right) \,\middle|\, s^{(l)} \in S,\; p^{(l)} \geq 0,\; l = 1, 2, \ldots, L,\; \sum_{l=1}^{L} p^{(l)} \leq 1 \right\}$  (1)

where $s^{(l)}(p^{(l)})$ is the $l$-th linguistic term $s^{(l)}$ associated with the probability $p^{(l)}$, and $L$ is the number of all different linguistic terms in $h_S(p)$. The linguistic terms $s^{(l)}$ ($l = 1, 2, \ldots, L$) in $h_S(p)$ are arranged in ascending order.

For calculation convenience, we normalize the PLTS $h_S(p)$ as $\hat{h}_S(p) = \{ s^{(l)}(\hat{p}^{(l)}) \mid \sum_{l=1}^{L} \hat{p}^{(l)} = 1 \}$, where $\hat{p}^{(l)} = p^{(l)} \big/ \sum_{l=1}^{L} p^{(l)}$ for all $l = 1, 2, \ldots, L$ [29]. In addition, two different normalized PLTSs $\hat{h}_S^1(p) = \{ s_1^{(l)}(\hat{p}_1^{(l)}) \mid l = 1, 2, \ldots, L_1 \}$ and $\hat{h}_S^2(p) = \{ s_2^{(l)}(\hat{p}_2^{(l)}) \mid l = 1, 2, \ldots, L_2 \}$ are processed to be of the same probability set $P = \{ p^{(1)}, p^{(2)}, \ldots, p^{(L)} \}$ by an adjusting process [34, 35]. The adjusted PLTSs are $h_S^1(p) = \{ s^{*1(l)}(p^{(l)}) \mid l = 1, 2, \ldots, L \}$ and $h_S^2(p) = \{ s^{*2(l)}(p^{(l)}) \mid l = 1, 2, \ldots, L \}$. The linguistic terms and the sums of their probabilities are not changed in the adjusted PLTSs. For example, for two normalized PLTSs $\hat{h}_S^1(p) = \{ s_{-1}(0.3), s_0(0.5), s_1(0.2) \}$ and $\hat{h}_S^2(p) = \{ s_{-2}(0.5), s_1(0.5) \}$, their adjusted PLTSs are $h_S^1(p) = \{ s_{-1}(0.3), s_0(0.2), s_0(0.3), s_1(0.2) \}$ and $h_S^2(p) = \{ s_{-2}(0.3), s_{-2}(0.2), s_1(0.3), s_1(0.2) \}$, respectively. The probability sets of these two adjusted PLTSs are the same: $P = \{0.3, 0.2, 0.3, 0.2\}$.

Let $h_S^1(p) = \{ s^{*1(l)}(p^{(l)}) \mid l = 1, 2, \ldots, L \}$ and $h_S^2(p) = \{ s^{*2(l)}(p^{(l)}) \mid l = 1, 2, \ldots, L \}$ be the adjusted PLTSs of $\hat{h}_S^1(p)$ and $\hat{h}_S^2(p)$, respectively. The distance between $h_S^1(p)$ and $h_S^2(p)$ can be defined as [35]:

$d\left(h_S^1(p), h_S^2(p)\right) = \sqrt{ \sum_{l=1}^{L} p^{(l)} \left( \alpha^{*1(l)} - \alpha^{*2(l)} \right)^2 }$  (2)

where $\alpha^{*1(l)}$ and $\alpha^{*2(l)}$ are the subscripts of the linguistic terms $s^{*1(l)}$ and $s^{*2(l)}$, respectively.
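As a minimal illustration of the adjusting process and the distance measure of Eq. (2), the Python sketch below represents a PLTS as a list of (subscript, probability) pairs; the helper names are ours, and the distance is assumed to take the form $\sqrt{\sum_l p^{(l)}(\alpha^{*1(l)}-\alpha^{*2(l)})^2}$ as reconstructed from [35].

```python
from math import sqrt, isclose

def normalize_plts(h):
    """Scale the probabilities of a PLTS (list of (subscript, prob)) to sum to 1."""
    total = sum(p for _, p in h)
    return [(a, p / total) for a, p in h]

def adjust(h1, h2):
    """Split terms so both normalized PLTSs share the same ordered probability set."""
    h1, h2 = list(h1), list(h2)
    out1, out2, i, j = [], [], 0, 0
    while i < len(h1) and j < len(h2):
        (a1, p1), (a2, p2) = h1[i], h2[j]
        p = min(p1, p2)          # both PLTSs contribute a slice of probability p
        out1.append((a1, p))
        out2.append((a2, p))
        if isclose(p1, p2):
            i, j = i + 1, j + 1
        elif p1 < p2:
            i, h2[j] = i + 1, (a2, p2 - p)
        else:
            j, h1[i] = j + 1, (a1, p1 - p)
    return out1, out2

def distance(h1, h2):
    """Distance between two normalized PLTSs as in Eq. (2)."""
    a1, a2 = adjust(h1, h2)
    return sqrt(sum(p * (s1 - s2) ** 2 for (s1, p), (s2, _) in zip(a1, a2)))

# The paper's example: {s_-1(0.3), s_0(0.5), s_1(0.2)} vs. {s_-2(0.5), s_1(0.5)}
h1 = [(-1, 0.3), (0, 0.5), (1, 0.2)]
h2 = [(-2, 0.5), (1, 0.5)]
```

Running `adjust(h1, h2)` reproduces the adjusted PLTSs of the worked example, with the shared probability set (0.3, 0.2, 0.3, 0.2).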
For an MEMCDM problem, suppose that there are $Q$ experts $e_q$ ($q = 1, 2, \ldots, Q$) whose weight vector is $\lambda = (\lambda^{(1)}, \lambda^{(2)}, \ldots, \lambda^{(Q)})^T$ with $\sum_{q=1}^{Q} \lambda^{(q)} = 1$. If the weights of the experts are equal or not given, we let $\lambda^{(q)} = 1/Q$, $q = 1, 2, \ldots, Q$. Suppose that $h_S^{(q)}(p) = \{ s^{(q)}(p^{(q)}) \mid s^{(q)} \in S \}$ ($q = 1, 2, \ldots, T$) are PLTSs on the LTS $S$ given by $T$ experts, and there are $Q - T$ experts who do not give any evaluation. To integrate the experts' judgments into collective opinions, Wu and Liao [33] presented an aggregation formula expressed as

$h_S(p) = \left\{ s^{(l)}\left(p^{(l)}\right) \,\middle|\, s^{(l)} \in S,\; p^{(l)} = \sum_{q=1}^{T} v^{(q)} \lambda^{(q)},\; l = 1, \ldots, L \right\}$

where $v^{(q)}$ is the weight of $s^{(l)}$ in $h_S^{(q)}(p)$ and

$v^{(q)} = \begin{cases} p^{(q)}, & \text{if } s^{(l)} \in h_S^{(q)}(p) \\ 0, & \text{if } s^{(l)} \notin h_S^{(q)}(p) \end{cases}$  (3)

2.2 The analysis of traditional ranking methods

This part reviews some representative ranking methods for MEMCDM problems. The outranking methods, such as ELECTRE [12, 30, 32], PROMETHEE [7], TODIM [16] and GLDS [34], are based on pairwise comparisons of alternatives under each criterion. The utility value-based ranking methods rank alternatives by aggregating criteria values, and they vary in their normalization and aggregation approaches. In this paper, we focus on the utility value-based ranking methods due to their applicability and simplicity.

There are mainly two widely used normalization techniques, namely, the linear normalization model and the vector normalization model. Jahan and Edwards [18] illustrated that different normalization models can produce different results. The SMART [14] is a rather simple ranking method which uses a linear normalization model to eliminate the different dimensions among criteria and employs a weighted average operator to integrate the normalized criteria values.
Considering that the weighted average operator, the weighted geometric operator and the weighted maximum operator have different effects in reflecting the performances of alternatives, the MULTIMOORA [9] applies these three aggregation operators to derive three kinds of subordinate utility values based on the vector normalization, and the final rankings are determined by aggregating the subordinate ranks. Based on the vector normalization and the weighted average aggregation, the TOPSIS [6, 11] determines the optimal solution by calculating the distance of each alternative from the reference points. Opricovic and Tzeng [27] claimed that the solution obtained by the TOPSIS may not be the closest to the ideal solution since the TOPSIS ignores the relative importance between the distance of each alternative to the ideal point and that to the negative-ideal point. The VIKOR [24, 28] computes the "individual regret" values of alternatives by the weighted maximum formula, after deriving the "group utility" values of alternatives based on the linear normalization and weighted average aggregation. However, the subordinate ranks are not taken into consideration in the VIKOR when integrating the two types of utility values, which makes the results less stable. Given that the criteria may be of benefit, cost or target forms in practice, Jahan et al. [17, 19] extended the linear normalization into the target-based linear normalization. On this basis, the target-based TOPSIS [17], target-based VIKOR [19] and target-based MULTIMOORA [20] were proposed. Table 1 clearly illustrates the differences among the above-mentioned utility value-based ranking methods.

Table 1.
The characteristics of utility value-based ranking methods

MCDM method | Normalization | Aggregation function | Criteria type | Criteria form | Theory
SMART | Linear | Arithmetic | Quantitative and qualitative | Max, min | Addition
TOPSIS | Vector | Arithmetic | Quantitative or qualitative | Max, min | Distance to ideal
VIKOR | Linear | Arithmetic, max | Quantitative or qualitative | Max, min | Distance to ideal
MULTIMOORA | Vector | Arithmetic, max, geometric | Quantitative or qualitative | Max, min | Addition
Target-based TOPSIS | Linear | Arithmetic | Quantitative | Max, min, target | Distance to ideal
Target-based VIKOR | Linear | Arithmetic, max | Quantitative | Max, min, target | Distance to ideal
Target-based MULTIMOORA | Linear | Arithmetic, max, geometric | Quantitative or qualitative | Max, min, target | Addition
The proposed method | Vector, linear | Arithmetic, max, geometric | Quantitative and qualitative | Max, min, target | Distance to ideal

From Table 1, we can find that the common defect of the existing methods is that they eliminate the effects of the different criteria dimensions based on only one normalization approach, which may bias the results since each normalization method may lose the original information to some extent. Furthermore, calculating the utility values by different aggregation operators is useful, but there is still the challenge of integrating the subordinate utility values with the ranks of alternatives comprehensively to derive the final rankings of alternatives.

To handle probabilistic linguistic information, researchers have extended the traditional ranking methods to the probabilistic linguistic context, such as the PL-TOPSIS [29], PL-ORESTE [33], PL-LINMAP [23] and PL-MULTIMOORA [35]. Considering the consensus reaching process, Zhang et al. [38] introduced an aggregation-based method to solve probabilistic linguistic group decision-making problems, but they ignored the normalization process both in the consensus reaching process and in the selection process.
These methods related to PLTSs cannot avoid the above-mentioned defects of the traditional ranking methods. To overcome these drawbacks and solve MEMCDM problems with numerical values and PLTSs simultaneously, in Sect. 3 we propose a new ranking method that considers different normalization techniques and aggregation models.

3 DNMA: A comprehensive method for hybrid MEMCDM problems

This section proposes a ranking method, named DNMA, to solve hybrid MEMCDM problems with both quantitative and qualitative criteria. The values of quantitative criteria are numerical numbers while the values of qualitative criteria are expressed as PLTSs. Considering that there are benefit, cost and target-based criteria, we propose an improved target-based linear normalization formula and a target-based vector normalization formula. To derive reliable results, we combine these two target-based normalization approaches with three aggregation operators to obtain different utility values of alternatives in appropriate ways. Furthermore, we introduce a new aggregation formula that integrates the utility values and ranks of alternatives to obtain the final ranking of alternatives.

3.1 Description of the hybrid MEMCDM problems

A hybrid MEMCDM problem contains a finite set of alternatives $A = \{a_1, a_2, \ldots, a_m\}$ ($m \geq 2$), a set of qualitative and quantitative criteria $C = \{c_1, c_2, \ldots, c_n\}$ ($n \geq 2$) with the weight vector $W = (\omega_1, \omega_2, \ldots, \omega_n)^T$, and a set of experts $E = \{e_1, e_2, \ldots, e_Q\}$ ($Q \geq 2$). It is easy to collect the numerical values for quantitative criteria, while evaluations need to be made on the performances of alternatives over the qualitative criteria.

Suppose that there are $g$ qualitative criteria $C_1 = \{c_1, c_2, \ldots, c_g\}$ whose values are evaluated by each expert and expressed as linguistic expressions $ll_{ij}^{(q)}$ ($i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, g$, $q = 1, 2, \ldots, Q$).
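To sketch how the aggregation formula of Eq. (3) in Sect. 2.1 pools individual judgments into a collective PLTS, the following Python fragment assumes each expert's PLTS is a list of (subscript, probability) pairs; the function name and data layout are ours, not part of the method.

```python
def aggregate_experts(expert_pltss, expert_weights):
    """Eq. (3): the collective probability of a linguistic term is the sum over
    experts of (the term's probability for that expert) * (the expert's weight).
    Experts who gave no evaluation are simply omitted from the input lists."""
    collective = {}
    for h, lam in zip(expert_pltss, expert_weights):
        for a, p in h:
            collective[a] = collective.get(a, 0.0) + p * lam
    return sorted(collective.items())  # ascending by linguistic subscript

# Three equally weighted experts on a symmetric LTS: one says s_1 for sure,
# one hesitates between s_0 and s_1, one says s_-1 for sure.
h = aggregate_experts(
    [[(1, 1.0)], [(0, 0.5), (1, 0.5)], [(-1, 1.0)]],
    [1 / 3, 1 / 3, 1 / 3],
)
```

The collective PLTS assigns $s_{-1}$ probability 1/3, $s_0$ probability 1/6, and $s_1$ probability 1/2, mirroring the "share of experts" reading of a PLTS described in the introduction.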
Based on the transformation function [31] and the probability information, we can convert each linguistic evaluation $ll_{ij}^{(q)}$ to a PLTS $h_S^{ij(q)}(p)$ for $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, g$, $q = 1, 2, \ldots, Q$. Then, the experts' individual linguistic evaluations can be aggregated into the collective PLTSs $h_S^{ij}(p)$ ($i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, g$) by Eq. (3). For the quantitative criteria $C_2 = \{c_{g+1}, c_{g+2}, \ldots, c_n\}$, the values of the alternatives are expressed as numerical values $x_{ij}$, $i = 1, 2, \ldots, m$, $j = g+1, g+2, \ldots, n$. Then, a decision matrix $D$ is established from all $h_S^{ij}(p)$ and $x_{ij}$, shown as:

$D = \begin{pmatrix} h_S^{11}(p) & \cdots & h_S^{1g}(p) & x_{1,g+1} & \cdots & x_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ h_S^{i1}(p) & \cdots & h_S^{ig}(p) & x_{i,g+1} & \cdots & x_{in} \\ \vdots & & \vdots & \vdots & & \vdots \\ h_S^{m1}(p) & \cdots & h_S^{mg}(p) & x_{m,g+1} & \cdots & x_{mn} \end{pmatrix}$  (4)

3.2 The target-based normalization techniques

Normalization is critical in solving MEMCDM problems. Only when all criteria are dimensionless can we aggregate the values of an alternative over its criteria. Even if the same LTS is utilized to make evaluations under all qualitative criteria, the linguistic values on these criteria should also be normalized. A proper normalization method can enhance the effectiveness of the final decision [4]. In this section, we propose a target-based linear normalization formula and a target-based vector normalization formula. Both numerical values and PLTSs can be managed by these formulas.

3.2.1 The target-based linear normalization

The linear normalization eliminates the units of criteria by comparing the responses with the maximum-minimum interval. It has been used in the VIKOR [28] and the extended TOPSIS [29]. Based on the distance between each value and the target value, Jahan et al. [19] proposed a target-based linear normalization formula as:

$y_{ij}^1 = 1 - \frac{\left| x_{ij} - r_j \right|}{\max\left( \max_i x_{ij},\, r_j \right) - \min\left( \min_i x_{ij},\, r_j \right)}$  (5)

In particular, $r_j = \max_i x_{ij}$ if $c_j$ is of benefit form, and $r_j = \min_i x_{ij}$ if $c_j$ is of cost form.
where $x_{ij}$ is the value of alternative $a_i$ with respect to criterion $c_j$ and $r_j$ is the target value on criterion $c_j$.

Considering both quantitative and qualitative values, and motivated by the normalization formula used in Ref. [20], we improve Eq. (5) as:

$y_{ij}^1 = 1 - \frac{d_{ij}}{\max_i d_{ij}}, \qquad d_{ij} = \begin{cases} d\left( h_S^{ij}(p),\, h_S^{j*}(p) \right), & \text{if } j = 1, 2, \ldots, g \\ \left| x_{ij} - r_j \right|, & \text{if } j = g+1, g+2, \ldots, n \end{cases}$  (6)

where $h_S^{j*}(p)$ is the target value on qualitative criterion $c_j$ ($j = 1, 2, \ldots, g$), and $r_j$ is the target value on quantitative criterion $c_j$ ($j = g+1, g+2, \ldots, n$).

The target-based linear normalization can reflect the closeness between each alternative and the target solution under each criterion. Furthermore, the normalized values are the same for different convertible units with the same criterion function [27], such as the length $x_{ij}$ [m] or $\varphi_{ij}$ [km], and the temperature $x_{ij}$ [°C] or $\varphi_{ij}$ [°F]. These "convertible" units are related as $\varphi_{ij} = \beta(x_{ij})$, with $\beta$ an increasing affine conversion, and the normalized value $N_{ij}$ is

$N_{ij} = 1 - \frac{\left| \varphi_{ij} - \varphi_j \right|}{\max_i \left| \varphi_{ij} - \varphi_j \right|} = 1 - \frac{\left| \beta(x_{ij}) - \beta(r_j) \right|}{\max_i \left| \beta(x_{ij}) - \beta(r_j) \right|} = y_{ij}^1$

where $\varphi_j$ is the target value on criterion $c_j$. Thus, it is reasonable to aggregate the linear normalized values of an alternative on all criteria directly because they only represent the similarity between the judgments of the alternatives and the ideal solution. However, the linear normalized values lose the distribution of the original values. This defect can be illustrated by Example 1.

Example 1. Suppose that there are three projects $a_1$, $a_2$ and $a_3$ evaluated against the internal rate of return $c_1$ (in %) and the payback period $c_2$ (in years), and the decision matrix is given as:

$D_1 = \begin{pmatrix} 1 & 5 \\ 6 & 5.5 \\ 11 & 6 \end{pmatrix}$

By Eq. (6), we obtain $y_{11}^1 = 0$, $y_{21}^1 = 0.5$, $y_{31}^1 = 1$, $y_{12}^1 = 1$, $y_{22}^1 = 0.5$ and $y_{32}^1 = 0$. If the weight vector of the criteria is $W = (0.5, 0.5)^T$, based on the weighted average operator (shown as Eq. (15)), we obtain $y_1 = 0.5$, $y_2 = 0.5$ and $y_3 = 0.5$. Then, $a_1 \sim a_2 \sim a_3$. However, we could not accept this result.
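The computations of Example 1 can be reproduced with a few lines of Python; the function name and data layout are ours, and only the quantitative branch of Eq. (6) is sketched.

```python
def linear_normalize(values, target):
    """Target-based linear normalization (Eq. (6)), quantitative branch:
    y = 1 - |x - r| / max_i |x - r|."""
    d = [abs(v - target) for v in values]
    dmax = max(d)
    return [1 - di / dmax for di in d]

# Example 1: c1 (internal rate of return, benefit -> target = max value),
#            c2 (payback period, cost -> target = min value)
c1 = linear_normalize([1, 6, 11], target=11)    # [0.0, 0.5, 1.0]
c2 = linear_normalize([5, 5.5, 6], target=5)    # [1.0, 0.5, 0.0]
overall = [0.5 * a + 0.5 * b for a, b in zip(c1, c2)]
```

All three aggregated utilities come out as 0.5, reproducing the counter-intuitive tie $a_1 \sim a_2 \sim a_3$: the small spread under $c_2$ is stretched to the full [0, 1] range just like the large spread under $c_1$.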
There are great differences among the values of $c_1$ for the different projects, and $x_{11}$ is so inferior that we cannot select $a_1$. There are only small differences among the values of $c_2$, and $x_{32}$ is not so bad considering that $c_2$ is a cost criterion. Thus, the target-based linear normalization is unable to reflect the real differences between different data.

3.2.2 The target-based vector normalization

The vector normalization, which has been employed in the MULTIMOORA [9] and the classical TOPSIS [11], is shown as

$y_{ij}^2 = \begin{cases} x_{ij} \Big/ \sqrt{\sum_{i=1}^{m} x_{ij}^2}, & \text{if } c_j \text{ is a benefit criterion} \\ 1 - x_{ij} \Big/ \sqrt{\sum_{i=1}^{m} x_{ij}^2}, & \text{if } c_j \text{ is a cost criterion} \end{cases}$  (7)

where $x_{ij}$ is the value of alternative $a_i$ with respect to criterion $c_j$.

The vector normalization maps the values of the alternatives on a criterion into the unit interval $[0, 1]$. Compared with the target-based linear normalization, the dimensionless number $y_{ij}^2$ can maintain the distribution of the original values $x_{ij}$. Brauers and Zavadskas [8] proved that the vector normalization formula is a robust option. However, it fails to essentially eliminate the evaluation units of the criteria, in two respects:

(1) It cannot eliminate the influence of different convertible units with the same criterion function. Supposing that the "convertible" units are related as $\varphi_{ij} = \beta_1 x_{ij} + \beta_2$ with $\beta_1 > 0$, we have $y_{ij}^2 = x_{ij} \big/ \sqrt{\sum_{i=1}^{m} x_{ij}^2}$ but $N_{ij} = \varphi_{ij} \big/ \sqrt{\sum_{i=1}^{m} \varphi_{ij}^2}$. If $\beta_2 \neq 0$, then $y_{ij}^2 \neq N_{ij}$.

(2) It is unable to eliminate the influence of different units with respect to different criteria on the results of an MEMCDM method which integrates the information based on a fully compensatory aggregation operator. This defect can be verified by Example 2.

Example 2. Suppose that there are three production lines $a_1$, $a_2$ and $a_3$ evaluated against the cost $c_1$ (million) and the production $c_2$ (number of packages). The decision matrix is given as:

$D_2 = \begin{pmatrix} 43 & 1100 \\ 42 & 1050 \\ 41 & 900 \end{pmatrix}$

By Eq.
(7), we obtain $y_{11}^2 = 0.409$, $y_{21}^2 = 0.423$, $y_{31}^2 = 0.437$, $y_{12}^2 = 0.623$, $y_{22}^2 = 0.594$ and $y_{32}^2 = 0.509$. Suppose that the weight vector of the criteria is $W = (0.5, 0.5)^T$. Based on the weighted average operator (shown as Eq. (15)), we obtain $y_1 = 0.515$, $y_2 = 0.505$ and $y_3 = 0.475$. Then, $a_1 \succ a_2 \succ a_3$. By Eq. (6), we get $y_{11}^1 = 0$, $y_{21}^1 = 0.5$, $y_{31}^1 = 1$, $y_{12}^1 = 1$, $y_{22}^1 = 0.75$ and $y_{32}^1 = 0$. Then, $a_2 \succ a_1 \sim a_3$. The results derived by the two normalization functions are not consistent.

Eq. (7) derives normalized values from the original numerical values, but the size of the criteria's units is ignored. From $D_2$ in Example 2, we can find that the degree of dispersion of the numerical values under criterion $c_2$ is larger than that of the numerical values under criterion $c_1$. This phenomenon is reflected by the vector normalization; for example, $\left| y_{12}^2 - y_{22}^2 \right| > \left| y_{11}^2 - y_{21}^2 \right|$. In fact, there is a big separation between 43 million and 42 million of cost but a small division between 1100 and 1050 production packages. The differences of the alternatives on cost are shrunk by the vector normalization (which is only able to measure the differences between numbers but ignores the unit differences). The linear normalization derives the normalized values by comparison with the target values. The linear normalized values reflect the proportion of each alternative in the whole rather than the size of the numerical values. By Eq. (6), there is $\left| y_{12}^1 - y_{22}^1 \right| < \left| y_{11}^1 - y_{21}^1 \right|$. In Example 2, the values of the three alternatives under criterion $c_1$ are evenly distributed. Under criterion $c_1$, the value of $a_2$ is in the middle position, while under criterion $c_2$ its value is close to the maximum. Therefore, we prefer to select $a_2$ instead of $a_1$. In this case, the vector normalization is invalid.
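The figures of Example 2 can be checked with a short sketch of the classical vector normalization of Eq. (7); the helper name is ours.

```python
from math import sqrt

def vector_normalize(values, benefit=True):
    """Classical vector normalization (Eq. (7)) for one criterion."""
    norm = sqrt(sum(v * v for v in values))
    return [v / norm if benefit else 1 - v / norm for v in values]

cost = vector_normalize([43, 42, 41], benefit=False)      # ~[0.409, 0.423, 0.437]
prod = vector_normalize([1100, 1050, 900], benefit=True)  # ~[0.623, 0.594, 0.509]
overall = [0.5 * a + 0.5 * b for a, b in zip(cost, prod)]
# The vector-normalized gap on production, |0.623 - 0.594|, exceeds the gap on
# cost, |0.409 - 0.423|, even though a full million of cost separates a1 and a2.
```

With equal weights the weighted averages rank $a_1 \succ a_2 \succ a_3$, reproducing the ordering the text argues against.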
To fill the gap of normalizing all the benefit, cost and target-based criteria values by vector normalization, we introduce a target-based vector normalization formula based on the distance between each judgment and its corresponding target value, shown as:

$y_{ij}^2 = \begin{cases} 1 - \dfrac{\left| e\left(h_S^{ij}(p)\right) - e\left(h_S^{j*}(p)\right) \right|}{\sqrt{\sum_{i=1}^{m} \left( e\left(h_S^{ij}(p)\right) - e\left(h_S^{j*}(p)\right) \right)^2}}, & \text{if } j = 1, 2, \ldots, g \\[2ex] 1 - \dfrac{\left| x_{ij} - r_j \right|}{\sqrt{\sum_{i=1}^{m} \left( x_{ij} - r_j \right)^2}}, & \text{if } j = g+1, g+2, \ldots, n \end{cases}$  (8)

where $e\left(h_S^{ij}(p)\right)$ is the expected value of $h_S^{ij}(p) = \{ s_{ij}^{(l)}(p^{(l)}) \mid s_{ij}^{(l)} \in S \}$ with

$e\left(h_S^{ij}(p)\right) = \sum_{l=1}^{L} \frac{\alpha_{ij}^{(l)}}{2\tau}\, p^{(l)} \Big/ \sum_{l=1}^{L} p^{(l)}$  (9)

where $\tau$ is the scale of the LTS $S$.

Like the target-based linear normalization values, the target-based vector normalization values also express the similarity between the judgments of the alternatives and the ideal solution. However, Problem (1) presented above regarding the classical vector normalization is not avoided by the target-based vector normalization. That is to say, if the "convertible" units of a criterion are related as $\varphi_{ij} = \beta_1 x_{ij} + \beta_2$ with $\beta_2 \neq 0$, the target-based vector normalization is ineffective.

According to the analysis above, both the target-based linear normalization and the target-based vector normalization have their own advantages and limitations. Since the target-based linear normalization can reflect the proportions within a set of data but cannot maintain the distribution of the original values, it is suitable for normalizing the values of criteria for which the amount of data is large enough and the distribution is uneven. Since the target-based vector normalization can maintain the distribution of the original values but cannot reflect the proportions within a set of data, it is not suitable for handling criteria whose convertible units are disproportionate. Given this reality, we deem that the target-based vector normalization is more suitable for qualitative criteria than for quantitative criteria.
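A minimal sketch of the quantitative branch of Eq. (8), as reconstructed above, applied to the data of Example 2 with targets $r_1 = 41$ (cost) and $r_2 = 1100$ (benefit); the function name is ours.

```python
from math import sqrt

def target_vector_normalize(values, target):
    """Target-based vector normalization (Eq. (8)), quantitative branch:
    y = 1 - |x - r| / sqrt(sum_i (x - r)^2)."""
    d = [v - target for v in values]
    denom = sqrt(sum(x * x for x in d))
    return [1 - abs(x) / denom for x in d]

# Example 2 data: cost targets the minimum, production targets the maximum
cost = target_vector_normalize([43, 42, 41], target=41)
prod = target_vector_normalize([1100, 1050, 900], target=1100)
```

Unlike Eq. (6), the denominator here is the Euclidean norm of the deviations from the target, so the relative spacing of the deviations is preserved (e.g. the gap 2 vs. 1 under cost stays visible in the normalized values).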
3.2.3 The trade-off between criteria after normalization

Before aggregating the values under different criteria, the scale used to measure the criteria should be unified. However, normalization changes the units of the criteria and the decision matrices. As the metrics change, the criteria weights need to be updated to accurately reflect the trade-off between criteria [4]. Belton and Gear [5] claimed that the meaning of a criterion weight is the value of a unit on its scale. Therefore, the maximum normalized value should be unified as 1 for each criterion [5]. This part aims to adjust the criteria weights and the normalized values to make a trade-off between criteria.

(1) By the target-based linear normalization technique, the distribution of the original values may be lost, as illustrated by Example 1. In Example 1, the gaps between the alternatives under criterion $c_2$ are expanded after the linear normalization. When aggregating the normalized values under $c_1$ and $c_2$ with the original criteria weights, this implies that the weight of $c_2$ is expanded. This is the source of the biased results in Example 1. To solve this problem, we need to adjust the weights of the criteria before aggregation. The degree of dispersion of the original data under each criterion should be measured, and the standard deviation is an effective tool in this regard. Generally, a low standard deviation indicates a small spread of data while a high standard deviation implies a large spread of values [20]. If the standard deviation of a set of data under a criterion is low, the gaps between these data would be enlarged by the linear normalization. Therefore, we should reduce the weight of this criterion in the aggregation process.

Let $x_{ij}$ be the numerical value of alternative $a_i$ under the quantitative criterion $c_j$, and $h_S^{ij}(p)$ be the PLTS of alternative $a_i$ under the qualitative criterion $c_j$.
The standard deviation of the values $x_{ij}$ or $h_S^{ij}(p)$, $i = 1, 2, \ldots, m$, under criterion $c_j$ can be calculated by:

$\sigma_j = \begin{cases} \sqrt{\dfrac{1}{m} \sum_{i=1}^{m} \left( \dfrac{x_{ij}}{\max_i x_{ij}} - \dfrac{1}{m} \sum_{i=1}^{m} \dfrac{x_{ij}}{\max_i x_{ij}} \right)^2}, & \text{if } c_j \text{ is a quantitative criterion} \\[2ex] \sqrt{\dfrac{1}{m} \sum_{i=1}^{m} \left( \dfrac{e\left(h_S^{ij}(p)\right)}{\max_i e\left(h_S^{ij}(p)\right)} - \dfrac{1}{m} \sum_{i=1}^{m} \dfrac{e\left(h_S^{ij}(p)\right)}{\max_i e\left(h_S^{ij}(p)\right)} \right)^2}, & \text{if } c_j \text{ is a qualitative criterion} \end{cases}$  (10)

where $\sigma_j$ is deemed the standard deviation of criterion $c_j$ and $e\left(h_S^{ij}(p)\right)$ is the expected value of $h_S^{ij}(p)$.

The weight adjustment coefficient of $c_j$ can be determined as

$\hat{\omega}_j = \sigma_j \Big/ \sum\nolimits_{j=1}^{n} \sigma_j$  (11)

Let $\omega_j$ be the weight of criterion $c_j$, which is determined by the decision-makers in advance. When aggregating the linear normalized values under different criteria, the criterion weight should be adjusted as:

$\omega_j^* = \left( \hat{\omega}_j + \omega_j \right) \Big/ \sum\nolimits_{j=1}^{n} \left( \hat{\omega}_j + \omega_j \right)$  (12)

For Example 1, by Eqs. (10) and (11), we obtain the weight adjustment coefficients of $c_1$ and $c_2$ as $\hat{\omega}_1 \approx 0.9$ and $\hat{\omega}_2 \approx 0.1$, respectively. By Eq. (12), we obtain the adjusted weights of the two criteria as $\omega_1^* \approx 0.7$ and $\omega_2^* \approx 0.3$, respectively. Based on the new weights, by Eq. (15), we obtain $y_1 \approx 0.2$, $y_2 = 0.5$ and $y_3 \approx 0.7$. This result fits our intuition that $a_3$ is the best alternative, having a large internal rate of return and a not-too-long payback period. This shows the effectiveness of the weight adjustment method for the trade-off between criteria regarding the linear normalization. With the weight adjustment tool, the defect that the target-based linear normalization loses the distribution of the original values can be avoided.
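The weight adjustment of Eqs. (10)-(12) can be sketched as follows. Note that Eqs. (10) and (12) are reconstructed here (dispersion of max-scaled values, and an additive blend of the coefficient with the original weight); with this reading, Example 1 yields coefficients near 0.85/0.15 and blended weights near 0.67/0.33, consistent with the rounded 0.9/0.1 and 0.7/0.3 reported in the text. The function names are ours.

```python
from math import sqrt

def dispersion(values):
    """Eq. (10), quantitative branch (as reconstructed): standard deviation
    of the values after scaling each by the column maximum."""
    scaled = [v / max(values) for v in values]
    mean = sum(scaled) / len(scaled)
    return sqrt(sum((s - mean) ** 2 for s in scaled) / len(scaled))

def adjusted_weights(columns, weights):
    """Eqs. (11)-(12): dispersion-based coefficients, blended with the
    decision-makers' weights and renormalized to sum to 1."""
    sigma = [dispersion(col) for col in columns]
    coef = [s / sum(sigma) for s in sigma]        # Eq. (11)
    blended = [c + w for c, w in zip(coef, weights)]
    return [b / sum(blended) for b in blended]    # Eq. (12)

# Example 1: c1 = internal rate of return, c2 = payback period, base weights 0.5/0.5
w = adjusted_weights([[1, 6, 11], [5, 5.5, 6]], [0.5, 0.5])
```

Because $c_1$ has the larger spread, its weight grows at the expense of $c_2$, which repairs the tie of Example 1: the weighted utilities now rank $a_3 \succ a_2 \succ a_1$.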
In addition, the maximum entry of the target-based linear normalization values also should be adjusted to meet the requirement of 1. Before aggregating the normalized values under different criteria to derive the comprehensive performance of an alternative, yij1 and yij2 should be adjusted by T yij1 yij1 max yij1 IP i 2 (13) y yij max yij2 2 ij i CR 3.3 A novel ranking method: DNMA US This section proposes the DNMA method based on two types of target-based normalization approaches and three kinds of aggregation functions. Given that both the target-based linear normalization and the target-based AN vector normalization have their advantages and limitations, we combine them with different aggregation techniques to obtain different utility values of alternatives. We aim to strengthen each approach but eliminate the biases caused M by single normalization techniques through reasonable combination. Furthermore, we introduce a new aggregation formula to integrate the utility values and ranks of alternatives derived by different aggregation models. ED 3.3.1 The subordinate aggregation models In the following, we develop three kinds of aggregation models based on the two target-based normalization PT techniques. CE (1) The Complete Compensatory Model (CCM) Zeleny [37] proposed a measurement r ( x; p) which is an aggregation function to measure the distance of AC * alternative ai to the ideal solution a : 1 p n r ( x; p) j x ij rj p , 1 p (14) j 1 where j is the weight of c j . If p 1 , the weighted value of each criterion is equally important. If p 2 , the greater the weighted value is, the greater the importance of it would be. If p , the greatest value max j xij rj is the dominant element i 14 ACCEPTED MANUSCRIPT that r ( x; p) max j xij rj . With the increase of p , the predominance of the larger value j xij rj i becomes greater and greater. 
The measurements r(x; p) with p = 1 and p = \infty have been used in the VIKOR method, and the measurement r(x; p) with p = 2 has been applied in the classical TOPSIS method. Since each criterion already has a weight, there is no reason to attach additional importance to larger weighted values. Thus, we employ the measurement r(x; p) with p = 1 as the first aggregation function of our method. From Example 2, we can find that the target-based linear normalization is superior to the target-based vector normalization when combined with the linear aggregation operator to fuse the values of each alternative under all criteria. Thus, we first define the CCM based on the weighted average operator as:

u_1(a_i) = \sum_{j=1}^{n} \omega_j^{*} \hat{y}_{ij}^{1}   (15)

where \omega_j^{*} is the adjusted weight of criterion c_j, and \hat{y}_{ij}^{1} is the adjusted target-based linear normalization value. The alternatives can be ranked by listing u_1(a_i), i = 1, 2, ..., m, in descending order, and we get the first type of ranks, r_1(a_i), i = 1, 2, ..., m.

Note. Here we let the ranks of alternatives obtained in this paper be the Besson's mean ranks [32]. That is, if an object a_i ranks in the u-th position, then r(a_i) = u; if both a_i and a_t rank in the u-th position, then r(a_i) = r(a_t) = (u + (u + 1))/2 = u + 0.5. For example, if a_1 is preferred to a_2, and a_2 is indifferent to a_3, then r(a_1) = 1 and r(a_2) = r(a_3) = 2.5.

(2) The Un-Compensatory Model (UCM)

To avoid the situation where the selected solution has an extremely poor performance under some criterion, we employ the measurement r(x; p) with p = \infty, namely the weighted maximum operator, together with the target-based linear normalized values, to compose the second aggregation function:

u_2(a_i) = \max_j \omega_j^{*}\left(1 - \hat{y}_{ij}^{1}\right)   (16)

The alternatives are ranked by listing u_2(a_i), i = 1, 2, ..., m, in ascending order, and we get the second type of ranks, r_2(a_i), i = 1, 2, ..., m.

(3) The Incomplete Compensatory Model (ICM)

The CCM is a complete compensatory aggregation method.
Thus, the poor performance of an alternative under some criteria can be completely compensated by its good performance under other criteria. By this model, the derived best alternative has the largest comprehensive value with respect to all criteria. However, this alternative may perform poorly under some criteria, which may fail to meet the decision-making requirements: in some cases, we require that the selected alternative is not bad under any criterion.

The UCM is an un-compensatory aggregation method. It aims to capture the worst performance of an alternative over all criteria. However, this method ignores the comprehensive performance of the alternatives. The above two aggregation functions can be applied together to select compromise solutions effectively. In this way, the selected compromise solutions not only have good comprehensive performances but also do not perform badly under any criterion. However, in some MEMCDM problems, we need to rank the alternatives, not just find the optimal one. When ranking alternatives, both the comprehensive performance and the negative performance of each alternative should be considered. The multiplicative aggregation operator has the characteristic of incomplete compensation: small values cannot be completely compensated by large values. This feature meets the aggregation requirement of practical decision-making problems in which we prefer an alternative with moderate performances under all criteria to one that performs very badly under some criteria and very well under others.

Considering that the linear normalization values may be zero, it is ineffective to combine them by the multiplicative aggregation operator. The target-based vector normalization values, by contrast, are generally nonzero. Therefore, we employ the target-based vector normalization values to propose the third aggregation function, Eq. (17), based on the weighted geometric operator:

u_3(a_i) = \prod_{j=1}^{n}\left(\hat{y}_{ij}^{2}\right)^{\omega_j}   (17)

where \omega_j is the original weight of c_j, and \hat{y}_{ij}^{2} is the adjusted target-based vector normalized value. The alternatives are ranked by listing u_3(a_i), i = 1, 2, ..., m, in descending order, and we obtain the third type of ranks, r_3(a_i), i = 1, 2, ..., m.

Suppose that alternatives a_1 and a_2 perform similarly in total, but a_1 performs moderately under all criteria while a_2 performs very well under some criteria and very badly under others. Using the ICM, we obtain that a_1 is superior to a_2, which is in line with our cognition: the good performances of an alternative under some criteria cannot fully compensate for its poor performances under other criteria. This is a distinctive property of the weighted geometric operator compared with the weighted average operator and the weighted maximum operator. The weighted average operator would derive a_1 ~ a_2 (indifference) due to complete compensation, while the weighted maximum operator would derive that a_1 is absolutely superior to a_2 due to non-compensation.

3.3.2 The integration of subordinate utilities and ranks

In the final phase, we need to obtain a comprehensive ranking by integrating the results of the above three models. We can take these three models as three criteria: CCM (denoted by T_1), UCM (denoted by T_2) and ICM (denoted by T_3). Each alternative a_i then has two kinds of values with respect to each criterion T_y (y = 1, 2, 3): the utility value u_y(a_i) and the rank r_y(a_i).
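To make the three subordinate models concrete, the following sketch evaluates the weighted average, weighted maximum and weighted geometric aggregations of Eqs. (15)-(17), together with the Besson mean ranks, on a small hypothetical matrix of adjusted normalization values. For brevity, the same matrix stands in for both the linear and the vector normalization results, and the same weight vector is used as both the adjusted and the original weights.

```python
import math

def besson_mean_ranks(values, descending=True):
    """Besson mean ranks: tied values share the mean of their positions."""
    order = sorted(values, reverse=descending)
    return [sum(i + 1 for i, v in enumerate(order) if v == u) / order.count(u)
            for u in values]

def ccm(y1, w_adj):
    """Eq. (15): weighted average (complete compensation), adjusted weights."""
    return [sum(w * y for w, y in zip(w_adj, row)) for row in y1]

def ucm(y1, w_adj):
    """Eq. (16): weighted maximum regret (no compensation); smaller is better."""
    return [max(w * (1 - y) for w, y in zip(w_adj, row)) for row in y1]

def icm(y2, w_orig):
    """Eq. (17): weighted geometric mean (incomplete compensation), original weights."""
    return [math.prod(y ** w for w, y in zip(w_orig, row)) for row in y2]

# Hypothetical adjusted normalization values: two alternatives, two criteria.
Y = [[1.0, 0.5], [0.4, 1.0]]
w = [0.6, 0.4]
u1, u2, u3 = ccm(Y, w), ucm(Y, w), icm(Y, w)
r1 = besson_mean_ranks(u1)                    # descending: larger u1 is better
r2 = besson_mean_ranks(u2, descending=False)  # ascending: smaller u2 is better
r3 = besson_mean_ranks(u3)
```

Note how the geometric model punishes the alternative with the low 0.4 entry more than the weighted average does, which is exactly the incomplete-compensation behavior discussed above.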
Obviously, this is a multi-criteria decision-making problem composed of two decision matrices: the utility value decision matrix D(u) = [u_y(a_i)]_{m \times 3} and the ranking decision matrix D(r) = [r_y(a_i)]_{m \times 3}:

D(u) = \begin{pmatrix} u_1(a_1) & u_2(a_1) & u_3(a_1) \\ \vdots & \vdots & \vdots \\ u_1(a_m) & u_2(a_m) & u_3(a_m) \end{pmatrix}, \qquad D(r) = \begin{pmatrix} r_1(a_1) & r_2(a_1) & r_3(a_1) \\ \vdots & \vdots & \vdots \\ r_1(a_m) & r_2(a_m) & r_3(a_m) \end{pmatrix}

Below we clarify the reasons for considering both the utility value decision matrix and the ranking decision matrix.

(1) If we only consider the utility values derived by the three subordinate models, the results will be sensitive to the relative importance of these models.

Example 3. Suppose that there are three alternatives, and the utility value decision matrix is given below:

D(u) = \begin{pmatrix} 0.7 & 0.7 & 0.85 \\ 0.6 & 0.5 & 0.8 \\ 0.5 & 0.4 & 0.7 \end{pmatrix}

For the CCM, the ranking is a_1 ≻ a_2 ≻ a_3; for the UCM, the ranking is a_3 ≻ a_2 ≻ a_1; for the ICM, the ranking is a_1 ≻ a_2 ≻ a_3. If we deem the three models to be of the same importance, we can calculate the comprehensive utility value of each alternative by the formula u(a_i) = u_1(a_i) + (1 − u_2(a_i)) + u_3(a_i), in which only the utility values are considered. Then, we obtain u(a_1) = 1.85, u(a_2) = 1.9 and u(a_3) = 1.8, that is, a_2 ≻ a_1 ≻ a_3. These results differ from those obtained when considering the CCM, UCM or ICM separately. If we deem the three subordinate models to be of different importance, we can assign a weight to each of them. In this regard, u(a_i) = w_1 u_1(a_i) + w_2 (1 − u_2(a_i)) + w_3 u_3(a_i), where w_1 + w_2 + w_3 = 1. If we assign w_1 = 0.2, w_2 = 0.5 and w_3 = 0.3, then a_2 ~ a_3 ≻ a_1; if we assign w_1 = 0.4, w_2 = 0.2 and w_3 = 0.4, then a_1 ≻ a_2 ≻ a_3. The results are sensitive to the relative weights of the three subordinate models, but it is difficult to determine a reasonable weight vector. This problem also exists in the VIKOR method, as verified in Sect. 4.1.4.
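Example 3's weight sensitivity can be reproduced directly; the utility matrix and the weight vectors are those stated in the example, and the UCM value is complemented because a smaller maximum regret is better.

```python
def overall(u_rows, w):
    # u(ai) = w1*u1 + w2*(1 - u2) + w3*u3
    return [w[0] * u1 + w[1] * (1 - u2) + w[2] * u3 for u1, u2, u3 in u_rows]

D_u = [(0.7, 0.7, 0.85), (0.6, 0.5, 0.8), (0.5, 0.4, 0.7)]  # rows a1, a2, a3

equal = overall(D_u, (1, 1, 1))        # unweighted sum: 1.85, 1.90, 1.80 -> a2 best
w_a   = overall(D_u, (0.2, 0.5, 0.3))  # a2 = a3 > a1
w_b   = overall(D_u, (0.4, 0.2, 0.4))  # a1 > a2 > a3
```

Three defensible weight vectors yield three different winners, which is the instability the integration step of the DNMA is designed to dampen.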
(2) If we determine the results only based on the subordinate ranks derived by the three models, the final results may be biased, since the ranks cannot clearly reflect the differences among alternatives.

Example 4. Suppose that there are three alternatives a_1, a_2 and a_3. If their utility values are u_1(a_1) = 0.81, u_1(a_2) = 0.85 and u_1(a_3) = 0.5, then their ranks are r_1(a_1) = 2, r_1(a_2) = 1 and r_1(a_3) = 3. The ranks cannot convey the fact that the performance of a_1 is extremely close to that of a_2. This problem can be found in the MULTIMOORA method, as illustrated in Sect. 4.1.4.

Considering the different dimensions of the three subordinate utility values, we are supposed to normalize them before aggregating. To preserve the original distribution of the subordinate utility values u_y(a_i) (y = 1, 2, 3), we normalize them by the traditional vector normalization:

u_y^{N}(a_i) = u_y(a_i) \Big/ \sqrt{\sum_{i=1}^{m}\left(u_y(a_i)\right)^{2}}, \quad y = 1, 2, 3   (18)

A parameter \varphi \in [0,1] is introduced to reflect the relative importance of the subordinate utility values and the subordinate ranks. We let \varphi = 0.5 in this paper. The final utility value of each alternative can be defined by a weighted Euclidean distance formula as:

DN_i = w_1\sqrt{\varphi\left(\dfrac{u_1^{N}(a_i)}{\max_i u_1^{N}(a_i)}\right)^{2}+(1-\varphi)\left(\dfrac{m-r_1(a_i)+1}{m}\right)^{2}} - w_2\sqrt{\varphi\left(\dfrac{u_2^{N}(a_i)}{\max_i u_2^{N}(a_i)}\right)^{2}+(1-\varphi)\left(\dfrac{r_2(a_i)}{m}\right)^{2}} + w_3\sqrt{\varphi\left(\dfrac{u_3^{N}(a_i)}{\max_i u_3^{N}(a_i)}\right)^{2}+(1-\varphi)\left(\dfrac{m-r_3(a_i)+1}{m}\right)^{2}}   (19)

where w_1, w_2 and w_3 are the weights of the CCM, UCM and ICM, respectively, with w_1 + w_2 + w_3 = 1. The second term enters with a negative sign and with r_2(a_i)/m because u_2(a_i) measures the maximum regret of a_i (the smaller, the better) and r_2(a_i) is obtained in ascending order. We can determine the weights according to the preferences of the decision-makers regarding the alternatives' comprehensive performances or their worst performances.
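A minimal sketch of this integration step, fed with the subordinate utilities and Besson mean ranks that Table 5 reports for Case 1 (equal model weights, φ = 0.5); rounded to two decimals, it reproduces the final utility column of that table.

```python
import math

def dnma_integration(u1, u2, u3, r1, r2, r3, w=(1/3, 1/3, 1/3), phi=0.5):
    """Eqs. (18)-(19): vector-normalize the subordinate utilities, then combine
    utility ratios and rank positions; the UCM (regret) term is subtracted."""
    m = len(u1)
    def ratio(u):  # u_y^N(a_i) / max_i u_y^N(a_i)
        norm = math.sqrt(sum(v * v for v in u))
        un = [v / norm for v in u]
        mx = max(un)
        return [v / mx for v in un]
    q1, q2, q3 = ratio(u1), ratio(u2), ratio(u3)
    dn = []
    for i in range(m):
        t1 = math.sqrt(phi * q1[i] ** 2 + (1 - phi) * ((m - r1[i] + 1) / m) ** 2)
        t2 = math.sqrt(phi * q2[i] ** 2 + (1 - phi) * (r2[i] / m) ** 2)
        t3 = math.sqrt(phi * q3[i] ** 2 + (1 - phi) * ((m - r3[i] + 1) / m) ** 2)
        dn.append(w[0] * t1 - w[1] * t2 + w[2] * t3)
    return dn

# Subordinate utility values and ranks of Case 1 (Table 5), rows a1..a7.
u1 = [0.43, 0.23, 0.42, 0.35, 0.26, 0.24, 0.53]
u2 = [0.17, 0.15, 0.15, 0.15, 0.21, 0.17, 0.06]
u3 = [0.88, 0.77, 0.84, 0.84, 0.78, 0.73, 0.93]
r1 = [2, 7, 3, 4, 5, 6, 1]
r2 = [5.5, 3, 3, 3, 7, 5.5, 1]
r3 = [2, 6, 3.5, 3.5, 5, 7, 1]
dn = dnma_integration(u1, u2, u3, r1, r2, r3)
# rounded: [0.31, 0.12, 0.32, 0.27, 0.04, 0.05, 0.59]
```

Because the vector normalization of Eq. (18) is a uniform scaling, it cancels inside the ratio to the maximum; it is kept here to mirror the stated procedure.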
If the decision-makers pay more attention to the comprehensive performances of the alternatives, a big weight can be assigned to the CCM; if the decision-makers are unwilling to take risks, that is, the selected alternative should not perform badly under any criterion, a big weight can be assigned to the UCM; if the decision-makers pay attention to both the comprehensive performance and the decision risks, a big weight can be assigned to the ICM. The weights can also be determined based on the relative importance of the target-based linear normalization and the target-based vector normalization: if the linear normalization is more effective in normalizing the values of the criteria, big weights can be assigned to the CCM and the UCM; otherwise, a big weight should be assigned to the ICM. In conclusion, the weights of the CCM, UCM and ICM are determined by the joint consideration of the normalization techniques and the aggregation approaches. The final rank set R = {r(a_1), r(a_2), ..., r(a_m)} is determined in descending order of DN_i, i = 1, 2, ..., m.

3.4 Procedure of the DNMA method

When using the DNMA method to handle MEMCDM problems, we first need to normalize the decision matrix and obtain the target-based linear normalization values and the target-based vector normalization values, respectively. Then, the criterion weights and the normalization values should be adjusted to achieve the trade-off between criteria. Afterwards, the adjusted normalization values are aggregated by the three aggregation models, and three types of utility values and ranks of the alternatives are derived. Finally, a weighted Euclidean distance formula is applied to integrate the subordinate utility values and ranks, and the collective ranking is obtained. To clarify the proposed DNMA method, we summarize its procedure in Fig. 1.
[Fig. 1 summarizes the DNMA workflow: the numerical values of the quantitative criteria and the experts' linguistic evaluations of the qualitative criteria are transformed into target-based linear and target-based vector normalization values; the criterion weights and the normalization values are adjusted; the weighted average, weighted maximum and weighted geometric operators then produce the CCM, UCM and ICM subordinate utility values and subordinate ranks, which are integrated into the final ranks.]

Fig. 1. The procedure of the DNMA method

The procedure is illustrated in detail as follows:

Step 1. (MEMCDM problem formalization) Determine the alternatives and criteria, and collect the numerical values x_ij of the quantitative criteria. Invite experts to evaluate the performance of each alternative under each qualitative criterion. The experts are free to make assessments by linguistic values l^{ij(q)} for the qualitative criteria. Go to the next step.

Step 2. (Decision matrix construction) Translate the linguistic information into the PLTSs h_S^{ij(q)}(p) according to the transformation function and the probability information. Determine the weights of the experts and aggregate the experts' linguistic evaluations into collective ones, h_S^{ij}(p), by Eq. (3). Then, establish the decision matrix D, which is composed of the PLTSs and the numerical values. Go to the next step.

Step 3. (Normalization) Distinguish the criteria into benefit, cost and target forms. Based on the decision matrix D, calculate the target-based linear normalization values by Eq. (6) based on the distance measure given as Eq. (2), and the target-based vector normalization values by Eq. (8) based on the expected function given as Eq. (9). Go to the next step.

Step 4.
(Trade-off between criteria) Determine the weights of the criteria, which can be assigned by experts directly or derived by weight-determining methods such as the analytic hierarchy process [3]. Then, adjust the criterion weights based on the standard deviations of the criteria by Eqs. (10)-(12). The target-based linear and vector normalization values are adjusted by Eq. (13) so that the maximum entry under each criterion is 1. Go to the next step.

Step 5. (Aggregation) Compute the subordinate utility values u_1(a_i), u_2(a_i) and u_3(a_i) (i = 1, 2, ..., m) based on the CCM given as Eq. (15), the UCM given as Eq. (16) and the ICM given as Eq. (17), respectively, and then determine the subordinate ranks r_y(a_i), y = 1, 2, 3; i = 1, 2, ..., m. Go to the next step.

Step 6. (Integration) Calculate the normalized subordinate utility values u_y^N(a_i), y = 1, 2, 3; i = 1, 2, ..., m, by Eq. (18). Determine the weights of the CCM, the UCM and the ICM. Then, integrate the normalized subordinate utility values and the subordinate ranks by Eq. (19) and derive the collective utility value of each alternative, DN_i, i = 1, 2, ..., m. Determine the final ranking according to the descending order of DN_i and end the algorithm.

4 Case study and comparative analyses

In this section, two cases, concerning the ranking of green small and medium-sized iron and steel enterprises and the excavation scheme selection for shallow buried tunnels, are solved by the DNMA method. The advantages of the DNMA method are highlighted by comparative analyses.

4.1 Case 1: ranking the green small and medium-sized iron and steel enterprises

4.1.1 Description of Case 1

The iron and steel industry is a pillar industry for economic development in China. Its characteristics of high consumption, high emission and high pollution make the iron and steel industry a key point of national energy conservation and emission reduction.
The green development of iron and steel enterprises is urgent for changing the mode of economic development in China and solving global climate and environmental problems. However, constrained by production capacity, capital and processing technologies, the energy saving and emission reduction performance of small and medium-sized iron and steel enterprises is poor. In some cities of China, the energy saving and environmental protection management of small and medium-sized steel mills is extremely weak; some enterprises are even in a state of no institution, no personnel and no system. They have become a weak point in establishing green iron and steel enterprises.

To promote the green development of iron and steel enterprises and deal with overcapacity, the government of a city in China decides to eliminate some small and medium-sized iron and steel enterprises. Suppose that there are seven small and medium-sized iron and steel enterprises (denoted by A = {a_1, a_2, ..., a_7}) in this city. The government decides to rank the set A based on the comprehensive ability of green development. Considering the particularity of small and medium-sized iron and steel enterprises, the evaluation criteria are selected as follows:

- Governance and recovery capacity of industrial effluent c_1 (qualitative, max);
- Energy management capability c_2 (qualitative, max): an important factor affecting energy saving and emission reduction, including staff training, investment in research, and the supervision mechanism;
- Recovery and utilization capacity of coal gas c_3 (qualitative, max): determined by the governance attitude and the equipment technology of a company;
- Investment in green research and development as a proportion of total investment (%) c_4 (target).
The green research and development includes developing sewage and waste gas treatment technologies, developing green equipment, and green training costs. Suppose that the target value of c_4 is 5%;

- Annual gaseous pollutant discharge amount (kg) c_5 (min): gas, carbon monoxide, sulfur dioxide, and hydrogen sulfide;
- Annual liquid pollutant discharge amount (kg) c_6 (min): oil pollution and organic matter.

Suppose that there is no interaction between the above criteria. The first three are qualitative criteria, while the latter three are quantitative criteria. We obtain the data of c_4 from the enterprises, and the data of c_5 and c_6 from the Resource and Environment Inspection Department. Three experts e_q (q = 1, 2, 3) are invited to assess the performance of the enterprises with respect to each qualitative criterion. The LTS they use is the same for all qualitative criteria: S = {s_{-3} = very bad, s_{-2} = bad, s_{-1} = a little bad, s_0 = medium, s_1 = a little good, s_2 = good, s_3 = very good}. The evaluation results are shown in Table 2.

Table 2.
The assessments expressed in PLTSs on the enterprises for the qualitative criteria of Case 1 (columns a_1 to a_7)

e1  c1  {s1(0.5), s2(0.5)} | {s1(1)} | {s0(0.5), s1(0.5)} | {s1(1)} | {s2(1)} | {s1(0.5), s0(0.5)} | {s0(1)}
    c2  {s1(0.8), s0(0.2)} | {s0(1)} | {s0(0.5), s1(0.5)} | {s0(1)} | {s1(0.5), s0(0.5)} | {s0(0.3), s1(0.7)} | {s1(1)}
    c3  {s2(0.8), s3(0.2)} | {s1(1)} | {s1(0.5), s0(0.5)} | {s0(1)} | {s1(1)} | {s1(1)} | {s1(0.7), s2(0.3)}
e2  c1  {s1(1)} | {s0(1)} | {s0(1)} | {s1(1)} | {s2(0.5), s1(0.5)} | {s1(1)} | {s0(0.5), s1(0.5)}
    c2  {s0(1)} | {s1(1)} | {s1(1)} | {s1(1)} | {s1(0.5), s0(0.5)} | {s0(1)} | {s1(0.5), s2(0.5)}
    c3  {s2(1)} | {s2(1)} | {s2(0.4), s1(0.6)} | {s0(1)} | {s1(0.7), s0(0.3)} | {s0(0.5), s1(0.5)} | {s1(0.5), s2(0.5)}
e3  c1  {s1(0.2), s2(0.8)} | {s1(1)} | {s0(0.7), s1(0.3)} | {s1(1)} | {s2(0.5), s1(0.5)} | {s0(1)} | {s1(1)}
    c2  {s1(1)} | {s0(1)} | {s0(0.2), s1(0.8)} | {s0(1)} | {s1(0.5), s0(0.5)} | {s0(0.5), s1(0.5)} | {s1(0.5), s2(0.5)}
    c3  {s2(0.5), s3(0.5)} | {s1(1)} | {s1(1)} | {s0(1)} | {s1(1)} | {s0(0.3), s1(0.7)} | {s1(1)}

4.1.2 Solving Case 1 by the DNMA method

As Step 1 is given above, we start our computations from Step 2.

Step 2. Suppose that the three experts have the same importance, i.e., λ^(1) = λ^(2) = λ^(3) = 1/3. By Eq. (3), the three experts' evaluations of the enterprises on the qualitative criteria can be aggregated into collective opinions expressed in PLTSs. Integrating the collective PLTSs of criteria c_1, c_2 and c_3 and the numerical values of criteria c_4, c_5 and c_6, we build the decision matrix (rows a_1 to a_7; columns c_1 to c_6):

D =
{s1(0.6), s2(0.4)} | {s1(0.6), s0(0.4)} | {s2(0.8), s3(0.2)}          | 12.9 | 113200 | 4450000
{s1(0.3), s0(0.7)} | {s1(0.3), s0(0.7)} | {s1(0.7), s2(0.3)}          |  13.3 | 312800 | 8900500
{s0(0.7), s1(0.3)} | {s0(0.2), s1(0.8)} | {s2(0.1), s1(0.7), s0(0.2)} |  0.8 | 582000 | 2009000
{s1(1)}            | {s1(0.3), s0(0.7)} | {s0(1)}                     |  6.7 | 756206 |  990400
{s2(0.7), s1(0.3)} | {s1(0.5), s0(0.5)} | {s1(0.9), s0(0.1)}          |  2.5 | 155024 | 1680500
{s1(0.5), s0(0.5)} | {s0(0.3), s1(0.7)} | {s2(0.3), s1(0.7)}          |   15 | 680150 | 7443000
{s0(0.5), s1(0.5)} | {s1(0.7), s2(0.3)} | {s1(0.7), s2(0.3)}          |  4.8 | 300205 | 3650000

Step 3.
The target-based linear normalization values and the target-based vector normalization values are computed by Eq. (6) and Eq. (8), respectively. The results are shown in Tables 3 and 4, respectively.

Table 3. The target-based linear normalization values for Case 1

      c1     c2     c3     c4     c5     c6
a1    1      0      1      0.21   1      0.56
a2    0.46   0.16   0.74   0.17   0.69   0
a3    0.65   0.75   0.11   0.58   0.27   0.87
a4    0.87   0.16   0.37   0.83   0      1
a5    0      0.05   0.11   0.75   0.93   0.91
a6    0.39   0.68   0      0      0.12   0.18
a7    0.71   1      0.74   0.98   0.71   0.66

Table 4. The target-based vector normalization values for Case 1

      c1     c2     c3     c4     c5     c6
a1    1      0.8    1      0.69   1      0.74
a2    0.82   0.83   0.91   0.68   0.84   0.4
a3    0.89   0.95   0.71   0.84   0.63   0.92
a4    0.96   0.83   0.79   0.93   0.49   1
a5    0.68   0.8    0.71   0.9    0.97   0.95
a6    0.8    0.94   0.67   0.61   0.55   0.51
a7    0.91   1      0.91   0.99   0.85   0.8

Step 4. Suppose that the experts assign the weights of the criteria as ω_1 = 0.29, ω_2 = 0.26, ω_3 = 0.16, ω_4 = 0.12, ω_5 = 0.09 and ω_6 = 0.08. By Eqs. (10) and (11), we obtain the weight adjustment coefficients of the criteria as ψ_1 = 0.14, ψ_2 = 0.1, ψ_3 = 0.15, ψ_4 = 0.22, ψ_5 = 0.20 and ψ_6 = 0.20. By Eq. (12), we obtain the adjusted weights of the criteria as ω_1^* = 0.21, ω_2^* = 0.17, ω_3^* = 0.17, ω_4^* = 0.17, ω_5^* = 0.14 and ω_6^* = 0.13. From Tables 3 and 4, we can find that only the maximum target-based linear normalization value and the maximum target-based vector normalization value under criterion c_4 are smaller than 1. Therefore, we only adjust the normalized values under this criterion.

Step 5. Let the CCM, the UCM and the ICM have the same importance for this case. The subordinate utility values are calculated by Eq. (15), Eq. (16) and Eq. (17), respectively, based on which the three sets of subordinate ranks are determined. The results are shown in Table 5.

Table 5.
The calculation results of Case 1 derived by the DNMA method

      CCM             UCM             ICM             Final utility  Final
      u1(ai)  r1(ai)  u2(ai)  r2(ai)  u3(ai)  r3(ai)  values         ranks
a1    0.43    2       0.17    5.5     0.88    2       0.31           3
a2    0.23    7       0.15    3       0.77    6       0.12           5
a3    0.42    3       0.15    3       0.84    3.5     0.32           2
a4    0.35    4       0.15    3       0.84    3.5     0.27           4
a5    0.26    5       0.21    7       0.78    5       0.04           7
a6    0.24    6       0.17    5.5     0.73    7       0.05           6
a7    0.53    1       0.06    1       0.93    1       0.59           1

Step 6. By Eq. (19), the subordinate normalized utility values and the subordinate ranks are integrated. The final utility values and the final ranks of the enterprises are shown in Table 5. Thus, the ranking is a_7 ≻ a_3 ≻ a_1 ≻ a_4 ≻ a_2 ≻ a_6 ≻ a_5, and enterprise a_7 is the optimal one.

4.1.3 Solving Case 1 by other MEMCDM methods

We employ three representative utility value-based ranking methods, i.e., the MULTIMOORA, TOPSIS and VIKOR, to deal with Case 1. For the convenience of comparison, we extend these methods to the hybrid context with both quantitative and qualitative information.

(1) Solving Case 1 by the extended MULTIMOORA

The MULTIMOORA is characterized by three subordinate aggregation methods, namely the Ratio System (RS), the Reference Point (RP) and the Full Multiplicative Form (FMF), based on the vector normalization technique. The process of solving the case by the extended MULTIMOORA is as follows:

We first normalize the decision matrix D and obtain y^2_ij, i = 1, 2, ..., 7, j = 1, 2, ..., 6, by the target-based vector normalization operator shown as Eq. (8). The results are the same as those in Table 4. Then we compute the utility values RS_i by the RS model, the utility values RP_i by the RP model, and the utility values FMF_i by the FMF model:

RS_i = \sum_{j=1}^{n}\omega_j y_{ij}^{2}, \qquad RP_i = \max_j \omega_j\left(1-y_{ij}^{2}\right), \qquad FMF_i = \prod_{j=1}^{n}\left(y_{ij}^{2}\right)^{\omega_j}

Afterwards, we can determine the three sets of subordinate ranks based on the three types of subordinate utility values, respectively.
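The three submodels can be sketched as follows; the inputs are the Table 4 normalization values and the criterion weights stated in Step 4, with the c_4 column used as reported (without the maximum-entry adjustment), which matches the Table 6 figures.

```python
import math

weights = [0.29, 0.26, 0.16, 0.12, 0.09, 0.08]  # original criterion weights
# Target-based vector normalization values (Table 4), rows a1..a7.
Y = [
    [1.00, 0.80, 1.00, 0.69, 1.00, 0.74],
    [0.82, 0.83, 0.91, 0.68, 0.84, 0.40],
    [0.89, 0.95, 0.71, 0.84, 0.63, 0.92],
    [0.96, 0.83, 0.79, 0.93, 0.49, 1.00],
    [0.68, 0.80, 0.71, 0.90, 0.97, 0.95],
    [0.80, 0.94, 0.67, 0.61, 0.55, 0.51],
    [0.91, 1.00, 0.91, 0.99, 0.85, 0.80],
]

rs  = [sum(w * y for w, y in zip(weights, row)) for row in Y]          # Ratio System
rp  = [max(w * (1 - y) for w, y in zip(weights, row)) for row in Y]    # Reference Point
fmf = [math.prod(y ** w for w, y in zip(weights, row)) for row in Y]   # Full Multiplicative Form
```

The computed values agree with Table 6, e.g. RS_7 = 0.9288, RP_7 = 0.0261 and FMF_7 ≈ 0.93 for the best enterprise a_7.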
Finally, the ranking of the alternatives is determined by fusing the three kinds of ranks through the dominance theory [10]. The results are shown in Table 6.

Table 6. The calculation results of Case 1 derived by the extended MULTIMOORA

      RSi      Rank   RPi      Rank   FMFi   Rank   Final rank
a1    0.89     2      0.052    4.5    0.88   2      2.5
a2    0.7884   6      0.052    4.5    0.77   6      6
a3    0.8498   4      0.0464   3      0.84   3.5    4
a4    0.8563   3      0.0459   2      0.84   3.5    2.5
a5    0.7901   5      0.0928   7      0.78   5      5
a6    0.7471   7      0.058    6      0.73   7      7
a7    0.9288   1      0.0261   1      0.93   1      1

(2) Solving Case 1 by the extended TOPSIS

The TOPSIS aims to find the optimal solution that is nearest to the positive ideal solution and farthest from the negative ideal solution. The classical TOPSIS [11] normalizes the decision matrix by vector normalization. Jahan et al. [17] developed a target-based TOPSIS based on the linear normalization shown as Eq. (5). To solve the case with both quantitative and qualitative criteria, we extend the TOPSIS by combining it with both the target-based linear normalization model and the target-based vector normalization model proposed in this paper. The steps of the two extended TOPSIS methods are as follows:

(a) Solving the case by the TOPSIS with the target-based linear normalization

Normalizing the decision matrix D by the target-based linear normalization given as Eq. (6), we obtain y^1_ij, i = 1, 2, ..., 7, j = 1, 2, ..., 6, as shown in Table 3. Then, y^1_ij is adjusted to \hat{y}^1_ij. The separation of each enterprise from the ideal solution is computed by

D_i^{+} = \sqrt{\sum_{j=1}^{n}\left(\omega_j^{*}\left(\hat{y}_{ij}^{1}-y_j^{+}\right)\right)^{2}}

while the separation of each enterprise from the negative-ideal solution is computed by

D_i^{-} = \sqrt{\sum_{j=1}^{n}\left(\omega_j^{*}\left(\hat{y}_{ij}^{1}-y_j^{-}\right)\right)^{2}}

where y_j^{+} = \max_i \hat{y}_{ij}^{1}, y_j^{-} = \min_i \hat{y}_{ij}^{1}, and \omega_j^{*} is the adjusted criterion weight. The relative closeness to the ideal solution is computed by RC_i = D_i^{-}/(D_i^{+}+D_i^{-}). The enterprises are ranked in descending order of RC_i. The results are shown in Table 7.
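The separation and closeness computation shared by both extended TOPSIS variants can be sketched generically; the normalized matrix and weights below are hypothetical, not the Case 1 data.

```python
import math

def topsis_rc(Y, weights):
    """Relative closeness to the ideal solution for a matrix of normalized
    values (rows = alternatives); ideal/anti-ideal taken column-wise."""
    y_pos = [max(col) for col in zip(*Y)]
    y_neg = [min(col) for col in zip(*Y)]
    rc = []
    for row in Y:
        d_pos = math.sqrt(sum((w * (y - p)) ** 2 for w, y, p in zip(weights, row, y_pos)))
        d_neg = math.sqrt(sum((w * (y - q)) ** 2 for w, y, q in zip(weights, row, y_neg)))
        rc.append(d_neg / (d_pos + d_neg))
    return rc

# Hypothetical normalized matrix: two alternatives, two criteria.
rc = topsis_rc([[1.0, 0.8], [0.2, 1.0]], [0.6, 0.4])  # roughly [0.857, 0.143]
```

Only the input matrix (Table 3 with adjusted c_4, or Table 4) and the weight vector differ between the two variants.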
(b) Solving the case by the TOPSIS with the target-based vector normalization

Normalizing the decision matrix D by the target-based vector normalization given as Eq. (8), we obtain y^2_ij, i = 1, 2, ..., 7, j = 1, 2, ..., 6, which are the same as those in Table 4. Following the same process as above, we compute the values of D_i^+, D_i^- and RC_i. The results derived by the TOPSIS with the target-based vector normalization are also shown in Table 7.

Table 7. The calculation results of Case 1 derived by the extended TOPSIS

                                          a1     a2     a3     a4     a5     a6     a7
TOPSIS with linear normalization    RCi   0.58   0.40   0.53   0.53   0.45   0.30   0.77
                                    Rank  2      6      3.5    3.5    5      7      1
TOPSIS with vector normalization    RCi   0.85   0.76   0.82   0.81   0.77   0.72   0.91
                                    Rank  2      6      3      4      5      7      1

(3) Solving Case 1 by the extended VIKOR

The VIKOR determines compromise solutions by considering both the "group utility" and the "individual regret" values. It normalizes the decision matrix by linear normalization. Here, we extend the VIKOR by combining it with the target-based linear normalization technique to solve the case with both quantitative and qualitative information. The process is as follows:

Normalizing the decision matrix D by Eq. (6), we obtain the target-based linear normalized values y^1_ij, i = 1, 2, ..., 7, j = 1, 2, ..., 6, which are shown in Table 3. The "group utility" value of each enterprise is computed by GU_i = \sum_{j=1}^{n}\omega_j^{*}\hat{y}_{ij}^{1}, the "individual regret" value of the "opponent" of each enterprise is calculated by IR_i = \max_j \omega_j^{*}(1-\hat{y}_{ij}^{1}), and the compromise value of each enterprise is derived by

CV_i = \gamma\,\dfrac{GU_i-GU^{-}}{GU^{+}-GU^{-}} + (1-\gamma)\,\dfrac{IR^{+}-IR_i}{IR^{+}-IR^{-}}

where GU^{+} = \max_i GU_i, GU^{-} = \min_i GU_i, IR^{+} = \max_i IR_i and IR^{-} = \min_i IR_i. The parameter \gamma denotes the relative importance between the "group utility" value and the "individual regret" value. The results are shown in Table 8, where three different values of \gamma are used.

Table 8.
The calculation results of Case 1 derived by the extended VIKOR

      GUi    IRi    CVi (γ=0.3)  Rank   CVi (γ=0.5)  Rank   CVi (γ=0.7)  Rank
a1    0.43   0.17   0.38         4      0.47         3      0.55         3
a2    0.23   0.15   0.31         5      0.22         5      0.13         5
a3    0.42   0.15   0.49         2      0.53         2      0.58         2
a4    0.35   0.15   0.43         3      0.43         4      0.42         4
a5    0.26   0.21   0.04         7      0.06         7      0.09         7
a6    0.24   0.17   0.20         6      0.15         6      0.11         6
a7    0.53   0.06   1.00         1      1.00         1      1.00         1

4.1.4 Comparative analysis

All the above methods imply that alternative a_7 is the optimal green enterprise, while a_2, a_5 and a_6 are the three worst green enterprises. We compare the DNMA with the other three MEMCDM methods based on the calculation results for Case 1 as follows:

(1) Comparing the DNMA with the MULTIMOORA: The MULTIMOORA determines the final ranking based on three subordinate ranks. For this case, we obtain the subordinate utility values of alternative a_2 as RS_2 = 0.7884, RP_2 = 0.0522 and FMF_2 = 0.77, and the corresponding subordinate ranks r_RS(a_2) = 6, r_RP(a_2) = 4.5 and r_FMF(a_2) = 6. For alternative a_5, we obtain RS_5 = 0.7901, RP_5 = 0.0928 and FMF_5 = 0.78, and r_RS(a_5) = 5, r_RP(a_5) = 7 and r_FMF(a_5) = 5. Based on the dominance theory, the final ranks are r(a_2) = 6 and r(a_5) = 5. However, we can find that the utility values of a_2 and a_5 derived by the RS model and the FMF model are extremely close, whereas there is an obvious difference between the utility values derived by the RP model. In fact, the ranks should be r(a_2) = 5 and r(a_5) = 6. Therefore, it is unreasonable to consider the subordinate ranks only. The DNMA considers both the subordinate ranks and the utility values of the alternatives to derive the final ranking by the extended weighted Euclidean distance operator, Eq. (19). It increases both accuracy and simplicity compared with the dominance theory in the MULTIMOORA. The CCM in the DNMA and the RS in the MULTIMOORA use the same aggregation operator, but they are based on different normalization techniques. The utility values and ranks of the alternatives derived by the two models are different.
This shows that the normalization technique should be combined with appropriate aggregation operators to strengthen its advantages.

(2) Comparing the DNMA with the TOPSIS: From Table 7, we find that the results are similar when different normalization operators are used in the TOPSIS. This shows that both normalization techniques are suitable for the decision matrix of Case 1. The extended TOPSIS determines that alternative a_1 dominates a_3. Given that the "group utility" values of a_1 and a_3 are similar but a_1 performs badly on criterion c_2, we would prefer a_3 to a_1 in practice. This inconsistency is attributed to the defect that the TOPSIS ignores the "individual regret" values. The results of the extended TOPSIS for Case 1 are similar to those derived by the CCM and the ICM, respectively, but the DNMA further considers the "individual regret" through the UCM, which is critical in practice.

(3) Comparing the DNMA with the VIKOR: From Table 8, we find that the results are sensitive to the parameter \gamma, which represents the relative importance of the "group utility" and the "individual regret" values. It is hard to select a reasonable parameter value for integrating the "group utility" and the "individual regret" values into a collective one. This defect is avoided by the DNMA, which considers both the utility values and the ranks to derive the final ranking.

Based on the above analyses, we can conclude that the results of Case 1 derived by the DNMA are more reliable than those of the other utility value-based ranking methods, owing to the double normalization techniques, the three aggregation models and the utility integration method.

4.2 Case 2: the excavation scheme selection for shallow buried tunnels

To further illustrate the practicality and advantages of the DNMA method, we use it to solve another case.

4.2.1 Description of Case 2

A shallow buried tunnel generally has a buried depth of less than 50 meters.
The stress distribution and deformation of the surrounding rock are complex, which makes it difficult to master the deformation law of the surrounding rock during tunnel excavation. Besides, the bearing capacity of the surrounding rock is poor in a shallow buried tunnel. During the construction process, the formation stress induced by excavation would rapidly spread to the surface, which makes the surface subsidence difficult to control. There are many excavation methods, which have different properties and are suitable for different situations of shallow buried tunnels. Failure cases have occurred in China due to improper excavation schemes. For example, a large deformation took place in the old Fort tunnel of the Zhang Ji Railway, and landslides occurred in the Nanshan tunnel on the Zhengxi Passenger Dedicated Line [21]. Therefore, how to select an appropriate excavation scheme is critical for shallow buried tunnel construction.

There is a tunnel construction project for highway construction in Aba Prefecture, Sichuan Province, China. The buried depth of this tunnel is from 5 to 20 meters; thus, it belongs to the shallow buried tunnels. Its surrounding rock is weathered limestone, with a uniaxial compressive strength of 43 MPa. There is joint fissure development, and the rock mass integrity coefficient is 0.36. According to these properties, the surrounding rock is identified as sub-class V1. Four excavation schemes can be applied to this tunnel, including the CRoss Diaphragm (CRD) method (a_1), the Center Diaphragm (CD) method (a_2), the Three-bench Seven-step Excavation (TSE) method (a_3) and the Double Side Drift (DSD) method (a_4). The advantages and disadvantages of these four excavation schemes are described in Table 9.

Table 9.
Table 9. The advantages and disadvantages of four excavation schemes

Scheme    | Interpretation                                                                                                  | Advantages                                   | Disadvantages
CRD (a1)  | The tunnel section is divided into three steps: upper, middle and lower, and a certain distance is maintained between the steps to excavate and support. | Small ground subsidence; short closed time   | Complex and slow process; high cost
CD (a2)   | The tunnel section is divided into left and right sections from the middle, each of which forms an independent closed unit after excavation and support. | Short closed time                            | Dismantling difficulties; complex process
TSE (a3)  | The tunnel section is divided into four parts: upper, lower, left and right, each of which is closed in time by temporary support and an inverted arch.  | Face stability; brief process                | Mechanical inflexibility; large ground subsidence
DSD (a4)  | The tunnel section is divided into three small sections: left, middle and right, by two middle walls.           | Safety in construction; small ground subsidence | Difficult section closure; long construction period

To select an appropriate excavation scheme for this tunnel construction project, five criteria are applied to measure the performances of the above-mentioned schemes: construction cost (c1), construction speed (c2), construction safety (c3), ground subsidence (c4) and technical proficiency (c5). Ground subsidence refers to the severity of ground subsidence when implementing a certain excavation scheme. Technical proficiency refers to the project team's proficiency level in excavation schemes. Suppose that these criteria are independent. c1 and c4 are cost criteria, while c2, c3 and c5 are benefit criteria. c2, c3, c4 and c5 are qualitative criteria. Although c1 is a quantitative criterion, precise data are hard to collect. Therefore, we invite three tunnel excavation engineers to evaluate the performances of the four excavation schemes.
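The engineers' evaluations in the next subsection are probabilistic linguistic term sets (PLTSs), which are eventually compared numerically. As a hedged illustration only (not necessarily the expectation function used in this paper), one common choice maps each term s_alpha of a seven-term scale to (alpha + 3)/6 in [0, 1] and takes the probability-weighted mean:

```python
# Illustrative sketch: scoring a PLTS on a seven-term scale s_{-3}..s_{3}.
# The mapping (alpha + 3)/6 and the probability-weighted mean are common
# choices in the PLTS literature, not confirmed to be this paper's exact formula.

def plts_score(plts):
    """plts: dict mapping subscript alpha (int, -3..3) -> probability."""
    total_p = sum(plts.values())  # partial PLTSs may have probabilities summing to < 1
    return sum(((a + 3) / 6) * p for a, p in plts.items()) / total_p

# Example: {s2(0.7), s3(0.3)} -> 0.7*(5/6) + 0.3*1
print(round(plts_score({2: 0.7, 3: 0.3}), 3))  # 0.883
```

A higher score indicates a term set concentrated on higher linguistic terms; dividing by the total probability normalizes partial assessments.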
The LTS that the experts used for criteria c1, c3 and c5 is:

S1 = {s-3 = very low, s-2 = low, s-1 = a little low, s0 = medium, s1 = a little high, s2 = high, s3 = very high}

The LTS for criterion c2 is:

S2 = {s-3 = very slow, s-2 = slow, s-1 = a little slow, s0 = medium, s1 = a little fast, s2 = fast, s3 = very fast}

The LTS for criterion c4 is:

S3 = {s-3 = very small, s-2 = small, s-1 = a little small, s0 = medium, s1 = a little big, s2 = big, s3 = very big}

The evaluation results expressed in PLTSs by the three tunnel excavation engineers are shown in the following matrices (rows correspond to a1-a4 and columns to c1-c5):

D(1) =
{s2(0.7), s3(0.3)}  {s2(0.8), s1(0.2)}  {s1(1)}             {s3(1)}             {s1(0.5), s2(0.5)}
{s0(1)}             {s0(1)}             {s1(1)}             {s2(1)}             {s0(1)}
{s2(0.5), s1(0.5)}  {s2(0.5), s3(0.5)}  {s1(0.3), s0(0.7)}  {s2(0.8), s3(0.2)}  {s0(1)}
{s3(1)}             {s3(0.6), s2(0.4)}  {s1(0.5), s2(0.5)}  {s1(0.5), s0(0.5)}  {s1(0.5), s0(0.5)}

D(2) =
{s2(0.5), s3(0.5)}  {s2(1)}             {s1(0.5), s2(0.5)}  {s3(1)}             {s1(1)}
{s0(0.4), s1(0.6)}  {s0(0.7), s1(0.3)}  {s1(1)}             {s2(0.5), s1(0.5)}  {s0(1)}
{s2(1)}             {s2(0.5), s3(0.5)}  {s0(1)}             {s2(0.6), s3(0.4)}  {s0(0.5), s1(0.5)}
{s3(1)}             {s2(1)}             {s2(1)}             {s1(0.5), s0(0.5)}  {s1(1)}

D(3) =
{s2(1)}             {s2(1)}             {s1(0.7), s2(0.3)}  {s3(1)}             {s1(1)}
{s0(0.5), s1(0.5)}  {s0(1)}             {s1(1)}             {s2(1)}             {s1(0.5), s0(0.5)}
{s2(1)}             {s3(1)}             {s0(1)}             {s2(0.5), s3(0.5)}  {s0(0.2), s1(0.8)}
{s2(0.5), s3(0.5)}  {s3(0.8), s2(0.2)}  {s2(1)}             {s1(0.2), s0(0.8)}  {s1(0.5), s0(0.5)}

4.2.2 Solving Case 2 by the DNMA method

We can solve Case 2 by the DNMA method. The calculation process is briefly described as follows. Suppose that the weights of the three tunnel excavation engineers are 0.4, 0.3 and 0.3, respectively.
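As an illustration of how the individual matrices can be merged, the following sketch pools each entry's probability distributions with the expert weights 0.4, 0.3 and 0.3. This simple weighted pooling is an assumption, but it reproduces the collective matrix entries up to the one-decimal rounding used in the manuscript:

```python
# Sketch (assumed rule): pool the experts' PLTS distributions by expert weight.
# Example entry (a1, c1): D(1) = {s2: 0.7, s3: 0.3}, D(2) = {s2: 0.5, s3: 0.5},
# D(3) = {s2: 1.0}, with expert weights 0.4, 0.3, 0.3.

def pool_plts(distributions, weights):
    """Weighted average of probability distributions over linguistic terms."""
    pooled = {}
    for dist, w in zip(distributions, weights):
        for term, p in dist.items():
            pooled[term] = pooled.get(term, 0.0) + w * p
    return pooled

entry = pool_plts(
    [{2: 0.7, 3: 0.3}, {2: 0.5, 3: 0.5}, {2: 1.0}],
    [0.4, 0.3, 0.3],
)
print({t: round(p, 2) for t, p in entry.items()})  # {2: 0.73, 3: 0.27}
```

Rounded to one decimal, this gives {s2(0.7), s3(0.3)}, matching the (a1, c1) entry of the collective matrix below.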
The collective decision matrix is

D =
{s2(0.7), s3(0.3)}  {s2(0.9), s1(0.1)}  {s1(0.8), s2(0.2)}  {s3(1)}             {s1(0.8), s2(0.2)}
{s0(0.7), s1(0.3)}  {s0(0.9), s1(0.1)}  {s1(1)}             {s2(0.8), s1(0.2)}  {s1(0.2), s0(0.8)}
{s2(0.8), s1(0.2)}  {s2(0.4), s3(0.6)}  {s1(0.1), s0(0.9)}  {s2(0.6), s3(0.4)}  {s0(0.6), s1(0.4)}
{s2(0.2), s3(0.8)}  {s3(0.5), s2(0.5)}  {s1(0.2), s2(0.8)}  {s1(0.4), s0(0.6)}  {s1(0.6), s0(0.4)}

Suppose that the experts assign the weights of the criteria c1 to c5 as 0.25, 0.15, 0.1, 0.4 and 0.1, and the weights of the three subordinate models as w1 = 0.3, w2 = 0.3 and w3 = 0.4. The adjusted weights of the criteria are 0.24, 0.20, 0.13, 0.32 and 0.11, respectively.

The subordinate utility values and subordinate rank sets derived by the three subordinate models are shown in Table 10.

Table 10. The calculation results of Case 2 derived by the DNMA method

      CCM              UCM              ICM              Utility   Final
      u1(ai)  r1(ai)   u2(ai)  r2(ai)   u3(ai)  r3(ai)   values    ranks
a1    0.58    1        0.21    2        0.76    1        0.52      1
a2    0.55    2        0.13    1        0.69    2        0.49      2
a3    0.54    3        0.32    4        0.47    4        0.11      4
a4    0.30    4        0.24    3        0.57    3        0.15      3

The final utility values of the excavation schemes are shown in Table 10; thus the ranking is a1 ≻ a2 ≻ a4 ≻ a3, which implies that a1 is the optimal excavation scheme for the tunnel construction project.

4.2.3 Solving Case 2 by other MEMCDM methods

In this section, we solve Case 2 by the extended MULTIMOORA, the extended TOPSIS and the extended VIKOR, respectively. The calculation results are summarized in Table 11. The parameter in the extended VIKOR is set to 0.3. Table 11.
The calculation results of Case 2 derived by four ranking methods

      MULTIMOORA                    TOPSIS (Linear)   TOPSIS (Vector)   VIKOR                            DNMA
      RSi    RPi    FMFi   r(ai)   RCi     r(ai)     RCi     r(ai)     GUi    IRi    CVi    r(ai)      NDi    r(ai)
a1    0.58   0.21   0.76   1       0.63    1         0.55    1         0.80   0.12   0.69   2          0.52   1
a2    0.55   0.13   0.69   2       0.54    2         0.53    2         0.70   0.15   0.97   1          0.49   2
a3    0.54   0.32   0.47   4       0.52    3         0.40    3.5       0.63   0.33   0.26   4          0.11   4
a4    0.30   0.24   0.57   3       0.37    4         0.40    3.5       0.59   0.16   0.30   3          0.15   3

The methods other than the VIKOR all show that a1 is the optimal excavation scheme. In Case 2, the results are sensitive to the parameter of the extended VIKOR. Based on the principle of compromise solutions [24, 28], the VIKOR derives that a1 and a2 are compromise solutions, while the whole ranking of the alternatives remains uncertain. The different results derived by the extended TOPSIS under the two kinds of normalized values indicate that one normalization technique is more suitable than the other for Case 2. The results derived by the extended MULTIMOORA and the DNMA are the same. Since the "individual regret" values of the alternatives are ignored, the TOPSIS derives that a3 dominates a4. However, a3 performs badly under criterion c4, and c4 has a large weight; if we selected a3 rather than a4, we would run a big risk of ground subsidence. Besides, the problem of ground subsidence is highly valued in the tunnel construction project of Case 2. Based on the above analysis, we can conclude that the results derived by the DNMA are reliable. This conclusion coincides with that obtained in Case 1.

4.3 Discussions

The essential difference between utility value-based ranking methods lies in normalization and aggregation. Different normalization techniques have their advantages and limitations, and different aggregation operators have different functions. The quality of a ranking method depends on a reasonable combination of normalization and aggregation.
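To make the normalization side of this combination concrete, the two target-based techniques can be sketched as follows. The formulas below are the common target-based forms from the materials-selection literature [17, 18]; the paper's exact definitions and its weight adjustment may differ, so treat this as illustrative:

```python
import math

# Illustrative target-based normalizations (assumed forms, after [17, 18]).
# Both map a criterion value x to [0, 1], with the value 1 attained at the target T.
# For a benefit criterion T is the best (largest) value; for a cost criterion the
# smallest; for a target criterion, the desired value itself.

def linear_norm(xs, target):
    """Target-based linear normalization: 1 - |x - T| / max|x - T|."""
    dmax = max(abs(x - target) for x in xs)
    return [1 - abs(x - target) / dmax for x in xs]

def vector_norm(xs, target):
    """Target-based vector normalization: 1 - |x - T| / ||x - T||_2."""
    dnorm = math.sqrt(sum((x - target) ** 2 for x in xs))
    return [1 - abs(x - target) / dnorm for x in xs]

xs, target = [2.0, 4.0, 6.0, 8.0], 8.0  # benefit criterion: target = best value
print([round(v, 3) for v in linear_norm(xs, target)])  # [0.0, 0.333, 0.667, 1.0]
print([round(v, 3) for v in vector_norm(xs, target)])  # [0.198, 0.465, 0.733, 1.0]
```

Note that the linear form produces an exact zero for the worst alternative, which is why it cannot feed a weighted geometric operator directly, while the vector form keeps the worst value above zero.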
As mentioned in Sect. 2.2, existing ranking methods suffer from various problems. This paper develops a new ranking method, named DNMA, based on two normalization techniques and three aggregation tools. There are two reasons for adopting double normalization rather than a single normalization:

(1) For a given decision matrix, if the results derived by the target-based linear normalization method are apparently different from those derived by the target-based vector normalization method, we should further analyze the applicability of the two normalization techniques to the matrix. We can then assign a higher weight to the more suitable normalization. If we considered only one normalization, the results would be biased.

(2) In general, the target-based linear normalization with the weight adjustment method can be used for both qualitative and quantitative criteria, whereas the target-based vector normalization is not suitable for criteria whose convertible units are disproportionate. Thus, the linear normalization has wider applications, which is why it has been used in many ranking methods, such as the TOPSIS and the VIKOR. However, it is limited in combining with aggregation operators because zero values may exist. For example, the target-based linear normalization fails to combine with the weighted geometric operator.

We apply the Spearman rank correlation coefficient [20] to show the connection between the rankings derived by the different MEMCDM methods mentioned above. Figure 2 illustrates the Spearman rank correlation coefficients between the DNMA and the other methods in terms of the alternatives' rankings for Case 1. The Spearman coefficient lies between -1 and 1; a larger coefficient indicates a stronger correspondence between the compared rankings. The extended VIKOR with the parameter 0.5 and the extended VIKOR with the parameter 0.7 agree exactly with the ranking derived by the DNMA.
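The coefficient itself is straightforward to reproduce with the classic formula ρ = 1 − 6Σd²/(n(n² − 1)), using mid-ranks for ties. The first rank vector below is the DNMA's Case 2 ranking a1 ≻ a2 ≻ a4 ≻ a3; the other vectors are illustrative stand-ins for compared methods rather than data taken from the paper:

```python
# Classic Spearman rank correlation: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)).
# Ties are handled by mid-ranks (e.g. two alternatives sharing places 3 and 4
# both receive rank 3.5). With ties, this classic formula differs slightly
# from the Pearson-correlation-of-ranks definition.

def spearman(r1, r2):
    n = len(r1)
    d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(spearman([1, 2, 4, 3], [1, 2, 4, 3]))      # 1.0  (identical rankings)
print(spearman([1, 2, 4, 3], [1, 2, 3.5, 3.5]))  # 0.95 (one tie, mid-ranks)
print(spearman([1, 2, 4, 3], [2, 1, 4, 3]))      # 0.8  (top two swapped)
```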
Besides, the average Spearman coefficient between the ranking deduced by the DNMA and those deduced by the other methods is 0.90, which is the largest among the average Spearman coefficients of all the methods. This implies the high reliability of our method.

[Figure 2 is a bar chart of the Spearman rank correlation coefficients for Case 1: VIKOR with target-based vector normalization (parameter 0.7): 1.00; VIKOR with target-based vector normalization (parameter 0.5): 1.00; VIKOR with target-based vector normalization (parameter 0.3): 0.96; TOPSIS with target-based vector normalization: 0.86; TOPSIS with target-based linear normalization: 0.83; MULTIMOORA with target-based vector normalization: 0.77.]

Fig. 2 Correlation coefficients between the ranking derived by the proposed method and the other MEMCDM methods for Case 1

Figure 3 shows the Spearman rank correlation coefficients between the ranking deduced by the DNMA and those deduced by the other methods for Case 2. The DNMA again has a large average Spearman coefficient compared with the other methods.

[Figure 3 is a bar chart of the corresponding coefficients for Case 2: VIKOR with target-based vector normalization (parameter 0.3): 0.80; TOPSIS with target-based vector normalization: 0.95; TOPSIS with target-based linear normalization: 0.80; MULTIMOORA with target-based vector normalization: 1.00.]

Fig. 3 Correlation coefficients between the ranking derived by the proposed method and the other MEMCDM methods for Case 2

The comparison between the ranking derived by the DNMA and those deduced by the other methods shows high correspondence, as illustrated in Figs. 2 and 3. This implies that the ranking deduced by the DNMA is credible.

Based on the comparative analyses of the two cases, we summarize the advantages of the DNMA as follows:

(1) High breadth. The DNMA method can simultaneously deal with quantitative and qualitative criteria in benefit, cost and target forms.

(2) High reliability.
① In the DNMA, we can derive convincing results by using two kinds of normalization techniques. ② We make a trade-off between criteria after normalization. This step is important because the decision matrix changes after normalization, yet it is ignored by most existing ranking methods. ③ The three aggregation models with different functions further improve the reliability of the DNMA. ④ The final integration method considers both the subordinate utility values and the subordinate ranks in deriving the final utility values of alternatives. It avoids the defect of the MULTIMOORA, which considers only the subordinate ranks, and the drawback of the VIKOR, which considers only the subordinate utility values.

(3) High flexibility. Experts can flexibly make evaluations in linguistic terms or expressions. Decision-makers can adjust the weights of the subordinate aggregation models to reflect their preferences on the "group utility" and "individual regret" values of alternatives.

5 Conclusions

A hybrid MEMCDM problem includes both quantitative and qualitative criteria. In addition, criteria differ in form: benefit, cost or target. Considering these preconditions, this paper proposed a new ranking method, named DNMA for short. Compared with existing ranking methods, the DNMA is characterized by two normalization techniques and three aggregation tools. We found that the proposed target-based linear normalization can reflect the proportions of the original data but cannot maintain their distribution, while the target-based vector normalization has the opposite characteristic. The weighted average operator, the weighted maximum operator and the weighted geometric operator have the functions of complete compensation, non-compensation and incomplete compensation, respectively.
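These three compensation behaviors can be illustrated with a small sketch. The weights and scores are hypothetical, and the operators are written in the spirit of the three models rather than as the paper's exact formulas:

```python
# Illustrative comparison of the three operators' compensation behavior.
# Scores are normalized criterion values in [0, 1]; weights sum to 1.

def weighted_average(scores, weights):     # complete compensation
    return sum(w * s for s, w in zip(scores, weights))

def weighted_max_regret(scores, weights):  # non-compensation (smaller is better)
    return max(w * (1 - s) for s, w in zip(scores, weights))

def weighted_geometric(scores, weights):   # incomplete compensation
    prod = 1.0
    for s, w in zip(scores, weights):
        prod *= s ** w
    return prod

w = [0.5, 0.5]
a = [0.9, 0.1]  # strong on one criterion, very weak on the other
b = [0.5, 0.5]  # balanced

for f in (weighted_average, weighted_max_regret, weighted_geometric):
    print(f.__name__, round(f(a, w), 3), round(f(b, w), 3))
# weighted_average     0.5   0.5   -> the bad score is fully offset: a ties b
# weighted_max_regret  0.45  0.25  -> driven by the worst weighted score: b wins
# weighted_geometric   0.3   0.5   -> the bad score is only partially offset: b wins
```

The weighted average lets a strong score fully compensate a weak one, the maximum-regret operator is governed entirely by the worst weighted performance, and the geometric operator penalizes the weak score only partially, mirroring the roles the CCM, UCM and ICM play in the DNMA.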
We made an appropriate combination of the two normalization techniques and the three aggregation tools to strengthen their advantages and avoid their weaknesses. In this regard, three aggregation models were proposed. A new function based on the weighted Euclidean distance operator was developed to aggregate the subordinate results derived by the three aggregation models. The advantages of the DNMA were verified by two practical cases and by comparative analyses between the proposed method and other existing ranking methods.

In the future, we shall analyze the appropriate application situations of the linear normalization and the vector normalization in depth, and explore more effective aggregation techniques to integrate the linear normalized values and the vector normalized values rather than using the simple average operator. A large sample is needed to emphasize the advantages of the proposed method. Considering the interaction between criteria is also a challenge.

Acknowledgements

The work was supported by the National Natural Science Foundation of China (71771156).

References

[1] Ahn BS, Park H. Establishing dominance between strategies with interval judgments of state probabilities. Omega 2014;49(12):53-59.
[2] Bai CZ, Zhang R, Qian LX, Wu YN. Comparisons of probabilistic linguistic term sets for multi-criteria decision making. Knowledge-Based Systems 2017;119:284-291.
[3] Barker TJ, Zabinsky ZB. A multicriteria decision making model for reverse logistics using analytical hierarchy process. Omega 2011;39(5):558-573.
[4] Belton V, Gear T. The legitimacy of rank reversal—a comment. Omega 1985;13(3):143-144.
[5] Belton V, Gear T. On a short-coming of Saaty's method of analytic hierarchies. Omega 1983;11(3):228-230.
[6] Bilbao-Terol A, Arenas-Parra M, Cañal-Fernández V, Antomil-Ibias J. Using TOPSIS for assessing the sustainability of government bond funds. Omega 2014;49(12):1-17.
[7] Brans JP, Vincke P.
Note—A preference ranking organisation method. Management Science 1985;31(6):647-656.
[8] Brauers WKM, Zavadskas EK. The MOORA method and its application to privatization in a transition economy. Control and Cybernetics 2006;35(2):445-469.
[9] Brauers WKM, Zavadskas EK. Project management by MULTIMOORA as an instrument for transition economies. Technological and Economic Development of Economy 2010;16(1):5-24.
[10] Brauers WKM, Zavadskas EK. MULTIMOORA optimization used to decide on a bank loan to buy property. Technological and Economic Development of Economy 2011;17(1):174-188.
[11] Chen SJ, Hwang CL. Fuzzy multiple attribute decision making: methods and applications. Springer-Verlag, Berlin 1992.
[12] Corrente S, Figueira JR, Greco S, Słowiński R. A robust ranking method extending ELECTRE III to hierarchy of interacting criteria, imprecise weights and stochastic analysis. Omega 2017;73:1-17.
[13] Corrente S, Greco S, Słowiński R. Multiple criteria hierarchy process with ELECTRE and PROMETHEE. Omega 2013;41(5):820-846.
[14] Edwards W, Barron FH. SMARTS and SMARTER: improved simple methods for multiattribute utility measurement. Organizational Behavior and Human Decision Processes 1994;60(1):306-325.
[15] Feng B, Lai F. Multi-attribute group decision making with aspirations: a case study. Omega 2014;44:136-147.
[16] Gomes LFAM, Lima MMPP. TODIM: basics and application to multicriteria ranking of projects with environmental impacts. Foundations of Computing and Decision Sciences 1992;16(4):113-127.
[17] Jahan A, Bahraminasab M, Edwards KL. A target-based normalization technique for materials selection. Materials & Design 2012;35:647-654.
[18] Jahan A, Edwards KL. A state-of-the-art survey on the influence of normalization techniques in ranking: improving the materials selection process in engineering design. Materials & Design 2015;65:335-342.
[19] Jahan A, Mustapha F, Ismail MY, Sapuan SM, Bahraminasab M.
A comprehensive VIKOR method for material selection. Materials & Design 2011;32(3):1215-1221.
[20] Hafezalkotob A, Hafezalkotob A. Comprehensive MULTIMOORA method with target-based attributes and integrated significant coefficients for materials selection in biomedical applications. Materials & Design 2015;87:949-959.
[21] Li BP, Gao SM, Wang R, Qin W. Application of analytic hierarchy process (AHP) in the selection of excavation methods for shallow tunnels. Tunnel Construction 2013;33(9):726-730.
[22] Li ZM, Xu JP, Lev B, Gang J. Multi-criteria group individual research output evaluation based on context-free grammar judgments with assessing attitude. Omega 2015;57:282-293.
[23] Liao HC, Jiang LS, Xu ZS, Xu JP, Herrera F. A linear programming method for multiple criteria decision making with probabilistic linguistic information. Information Sciences 2017;415:341-355.
[24] Liao HC, Xu ZS, Zeng XJ. Hesitant fuzzy linguistic VIKOR method and its application in qualitative multiple criteria decision making. IEEE Transactions on Fuzzy Systems 2015;23(5):1343-1355.
[25] Liao HC, Xu ZS, Herrera-Viedma E, Herrera F. Hesitant fuzzy linguistic term set and its application in decision making: a state-of-the-art survey. International Journal of Fuzzy Systems 2017;20(7):2084-2110.
[26] Mulliner E, Smallbone K, Maliene V. An assessment of sustainable housing affordability using a multiple criteria decision making method. Omega 2013;41(2):270-279.
[27] Opricovic S, Tzeng GH. Compromise solution by MCDM methods: a comparative analysis of VIKOR and TOPSIS. European Journal of Operational Research 2004;156(2):445-455.
[28] Opricovic S. Multicriteria optimization of civil engineering systems. Faculty of Civil Engineering, Belgrade 1998.
[29] Pang Q, Wang H, Xu ZS. Probabilistic linguistic term sets in multi-attribute group decision making. Information Sciences 2016;369:128-143.
[30] Papadopoulos A, Karagiannidis A.
Application of the multi-criteria analysis method ELECTRE III for the optimisation of decentralised energy systems. Omega 2008;36(5):766-776.
[31] Rodríguez RM, Martínez L, Herrera F. Hesitant fuzzy linguistic term sets for decision making. IEEE Transactions on Fuzzy Systems 2012;20:109-119.
[32] Roy B. Classement et choix en présence de points de vue multiples (la méthode ELECTRE). Revue Française d'Informatique et de Recherche Opérationnelle 1968;2(8):57-75.
[33] Wu XL, Liao HC. An approach to quality function deployment based on probabilistic linguistic term sets and ORESTE method for multi-expert multi-criteria decision making. Information Fusion 2018;43:13-26.
[34] Wu XL, Liao HC. A consensus-based probabilistic linguistic gained and lost dominance score method. European Journal of Operational Research 2019;272(3):1017-1027.
[35] Wu XL, Liao HC, Xu ZS, Hafezalkotob A, Herrera F. Probabilistic linguistic MULTIMOORA: a multi-criteria decision making method based on the probabilistic linguistic expectation function and the improved Borda rule. IEEE Transactions on Fuzzy Systems 2018;26(6):3688-3702.
[36] Zadeh LA. The concept of a linguistic variable and its application to approximate reasoning—Part I. Information Sciences 1975;8:199-249.
[37] Zeleny M. Multiple criteria decision making. McGraw-Hill, New York 1982.
[38] Zhang YX, Xu ZS, Liao HC. A consensus process for group decision making with probabilistic linguistic preference relations. Information Sciences 2017;414:260-275.