Distance-Based Knowledge Measure for Intuitionistic Fuzzy Sets with Its Application in Decision Making

Much attention has been paid to constructing an applicable knowledge measure or uncertainty measure for Atanassov's intuitionistic fuzzy sets (AIFSs). However, many of these measures were developed from intuitionistic fuzzy entropy, which cannot really reflect the knowledge amount associated with an AIFS. Some knowledge measures were constructed based on the distinction between an AIFS and its complementary set, which may lead to information loss in decision making. In this paper, the knowledge amount of an AIFS is quantified by calculating the distance from the AIFS to the AIFS with maximum uncertainty. Axiomatic properties for the definition of knowledge measure are extended to a more general level. The new knowledge measure is then developed based on an intuitionistic fuzzy distance measure. The properties of the proposed distance-based knowledge measure are investigated through mathematical analysis and numerical examples. The proposed knowledge measure is finally applied to solve multi-attribute group decision-making (MAGDM) problems with intuitionistic fuzzy information. The new MAGDM method is used to evaluate the threat level of malicious code. Experimental results in malicious code threat evaluation demonstrate the effectiveness and validity of the proposed method.


Introduction
Atanassov [1,2] developed the concept of the intuitionistic fuzzy set on the basis of Zadeh's fuzzy set [3]. Atanassov's intuitionistic fuzzy sets (AIFSs) relax the condition that the membership degree and the non-membership degree sum to 1. AIFSs are a generalization of fuzzy sets and, in turn, a particular case of other types of generalized fuzzy sets [4,5]. Moreover, AIFSs are identical to interval-valued fuzzy sets (IVFSs) from a mathematical perspective [6]. In an AIFS, the hesitation degree is the difference between one and the sum of the membership and non-membership grades. The hesitation degree contributes much serviceability to the depiction of uncertain information. Researchers have paid much attention to intuitionistic fuzzy set theory because of its advantages in modeling uncertain information systems [7]. The theory of intuitionistic fuzzy sets has been successfully applied in many fields, including uncertainty reasoning [8] and decision making [9,10]. The connection between AIFSs and other theories of uncertainty is also attracting increasing interest [11][12][13][14][15][16][17][18].
Zadeh [3] first introduced the notion of entropy to fuzzy sets to measure the uncertainty or fuzziness in a fuzzy set. The notion of fuzzy entropy defined for fuzzy sets is partially similar to the concept of Shannon entropy [19], which was initially defined in probability theory. De Luca and Termini [20] developed the axiomatic definition of entropy and then proposed a kind of non-probabilistic fuzzy entropy. Burillo and Bustince [21] first axiomatically defined the measure of intuitionistic entropy, which was determined merely by the hesitation degree. Unlike the entropy measures created by Burillo and Bustince [21], the entropy measure for intuitionistic fuzzy sets developed by Szmidt and Kacprzyk [22] was defined based on the ratio of two distance values. An axiomatic definition of intuitionistic fuzzy entropy was also presented by Szmidt and Kacprzyk [22]. Following their work, many authors [23][24][25][26] have concentrated on the definition of entropy measures. Some research has also focused on the entropy of AIFSs and its application in the evaluation of the attribute weighting vector [9,10]. It has been pointed out by Szmidt et al. [27] that an entropy measure cannot capture all the uncertainty hidden in an AIFS. Thus, it may be difficult to develop a satisfactory uncertainty measure for AIFSs by entropy alone. The difference between entropy and hesitation in measuring the uncertainty of AIFSs has been pointed out by Pal et al. [28], who claimed that the combination of entropy and hesitation may furnish an effective way to measure the total uncertainty hidden in an AIFS.
Generally, a knowledge measure is related to the useful information provided by an AIFS. From the perspective of information theory, more information indicates a greater amount of knowledge, which is helpful for decision making. Therefore, the notion of knowledge measure can be regarded as the complementary concept of the total uncertainty measure, rather than of the entropy measure. This means that less total uncertainty always accompanies a greater amount of knowledge. With the purpose of making an evident distinction between types of intuitionistic fuzzy information, Szmidt et al. [27] took both intuitionistic fuzzy entropy and hesitation into consideration to develop a knowledge measure for AIFSs, in which the intuitionistic fuzzy entropy was defined by quantifying the ratio between the nearer and farther distances. This knowledge measure has been used to estimate the weight of each attribute in multi-attribute decision making (MADM) problems [29]. Nguyen [30] developed a novel knowledge measure by measuring the distance from an AIFS to the most uncertain AIFS. It seems that this knowledge measure can describe the fuzziness and intuitionism in AIFSs well. However, the use of the normalized Euclidean distance brings another problem, namely that the relation between fuzziness and knowledge cannot be completely reflected. Recently, Guo [29] put forward a new axiomatic definition for the knowledge measure of AIFSs. A new and highly robust model was introduced in [31] to quantify the knowledge amount of an AIFS. Guo's model [31] quantifies knowledge by measuring the difference between an AIFS and its complement, and it has been widely used to define entropy measures for AIFSs [32,33]. However, measuring the distinction between an AIFS and its complement may lead to information loss in decision making. Moreover, the combination of the two parts in Guo's model [31] lacks a clear physical interpretation. Das et al. [34] performed a comprehensive review of axiomatic definitions of information measures of AIFSs and investigated their relationships, covering entropy measures, knowledge measures, distance measures, and similarity measures.
The above analysis demonstrates that the topic of knowledge measures for AIFSs is still open for debate and attracting considerable attention. Most research on knowledge and uncertainty measures of AIFSs focuses on the difference between an AIFS and its complement. Only a few knowledge measures are constructed by measuring the distinction between an AIFS and the AIFS with maximum or minimum uncertainty. Although Nguyen [30] opened up this new way of studying knowledge measures of AIFSs, further exploration is needed to improve this kind of knowledge measure and realize a desirable knowledge measure for AIFSs. This motivates us to present a new method to measure the knowledge of AIFSs based on a novel intuitionistic fuzzy distance, which is defined via the transformation from an intuitionistic fuzzy value (IFV) to an interval value. An axiomatic definition of the knowledge measure of AIFSs will also be formulated from a more general point of view. Moreover, we will further explore the proposed knowledge measure's properties and compare it with other measures based on numerical examples to demonstrate its performance. We will then apply it to the problem of intuitionistic fuzzy multi-attribute group decision making (MAGDM).
The remainder of this study is structured as follows. Several concepts regarding AIFSs are explained in Section 2. In Section 3, a new type of distance measure for AIFSs is developed, followed by the proposal and discussion of the distance-based knowledge measure in Section 4. In Section 5, the proposed distance and knowledge measures are used to develop a new method for solving MAGDM problems under intuitionistic fuzzy conditions. An application of the new method is presented in Section 6 to illustrate its performance. Some conclusions are presented in Section 7.

Preliminaries
Here, we briefly recount some background knowledge about AIFSs for ease of subsequent exposition.

Definition 1. Letting a non-empty set X = {x_1, x_2, · · · , x_n} be the universe of discourse, a fuzzy set A in X is defined as follows [3]:

A = {⟨x, µ_A(x)⟩ | x ∈ X},

where µ_A : X → [0, 1] is the membership degree of x to A.

Definition 2. The intuitionistic fuzzy set B in X = {x_1, x_2, · · · , x_n} as defined by Atanassov can be expressed as [1]:

B = {⟨x, µ_B(x), v_B(x)⟩ | x ∈ X},

where µ_B : X → [0, 1] and v_B : X → [0, 1] are the membership degree and non-membership degree, respectively, with the condition

0 ≤ µ_B(x) + v_B(x) ≤ 1, ∀x ∈ X.

The hesitation degree of AIFS B defined in X is denoted π_B. For all x ∈ X, the hesitation degree is calculated by the following expression:

π_B(x) = 1 − µ_B(x) − v_B(x).

Apparently, π_B(x) ∈ [0, 1], ∀x ∈ X. π_B(x) is also referred to as the intuitionistic index of x to B. A greater π_B(x) indicates more vagueness. It is apparent that when π_B(x) = 0, ∀x ∈ X, the AIFS degenerates into an ordinary fuzzy set.
For two AIFSs A and B defined in X, the following relations were defined in [1]:

(1) A ⊆ B if and only if ∀x ∈ X, µ_A(x) ≤ µ_B(x) and v_A(x) ≥ v_B(x);
(2) A = B if and only if A ⊆ B and B ⊆ A;
(3) the complement A^C can be obtained by A^C = {⟨x, v_A(x), µ_A(x)⟩ | x ∈ X}.

It has been proved that AIFSs and IVFSs are mathematically identical [4,6]; they can be converted to each other. Thus, for an AIFS B defined in X and x ∈ X, we can use the interval [µ_B(x), 1 − v_B(x)] to express the membership and non-membership grades of x with respect to B. We can see this as the interval-valued interpretation of an AIFS, in which µ_B(x) and 1 − v_B(x) represent the lower and upper bounds of the membership degree, respectively. Apparently, the correspondence relation between AIFSs and IVFSs holds only from the mathematical point of view. If we explore their conceptual explanation and practical application, they may differ in the description of uncertainty [9,35].
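This two-way conversion is easy to sketch in code (a minimal illustration; the tuple encoding of an IFV as a pair (µ, v) is our own convention):

```python
def ifv_to_interval(mu, nu):
    """Map an IFV <mu, nu> to its membership interval [mu, 1 - nu]."""
    return (mu, 1 - nu)

def interval_to_ifv(lower, upper):
    """Map a membership interval [lower, upper] back to the IFV <lower, 1 - upper>."""
    return (lower, 1 - upper)
```

The width of the interval equals the hesitation degree π = 1 − µ − v, which is why a wider interval signals more vagueness.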
In what follows, AIFSs(X) is used to denote the set consisting of all AIFSs defined in X. Generally, the couple ⟨µ_B(x), v_B(x)⟩ is also called an IFV for clarity.

Definition 3.
For two IFVs a = ⟨µ_a, v_a⟩ and b = ⟨µ_b, v_b⟩, the partial order between them is defined as a ≤ b if and only if µ_a ≤ µ_b and v_a ≥ v_b. For all IFVs, based on this partial order, we can obtain the smallest IFV as ⟨0, 1⟩, denoted by 0, and the largest IFV as ⟨1, 0⟩, denoted by 1.
For a linear order of IFVs, i.e., to rank multiple IFVs, Chen and Tan [36] defined the score function of an IFV a = ⟨µ_a, v_a⟩ as S(a) = µ_a − v_a. Following the concept of the score function, Hong and Choi [37] developed an accuracy function H(a) = µ_a + v_a to depict the accuracy of an IFV. Xu [38] then proposed a ranking-order relation between two IFVs a and b, which can be equivalently stated as follows.
If S(a) is greater than S(b), then a is greater than b, and vice versa. If S(a) and S(b) are equal, we consider two cases: (1) if H(a) is equal to H(b), then a and b are equal; (2) if H(a) is greater than H(b), then a is greater than b, and vice versa.
Based on the above order relation, the linear order of multiple IFVs can be obtained.
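The comparison rule above is effectively a lexicographic order on the pair (S(a), H(a)); a brief sketch, assuming IFVs encoded as (µ, v) tuples:

```python
def score(a):
    """Chen-Tan score S(a) = mu - nu."""
    mu, nu = a
    return mu - nu

def accuracy(a):
    """Hong-Choi accuracy H(a) = mu + nu."""
    mu, nu = a
    return mu + nu

def rank_ifvs(ifvs):
    """Xu's linear order: sort by score, then by accuracy, largest first."""
    return sorted(ifvs, key=lambda a: (score(a), accuracy(a)), reverse=True)
```

For three IFVs sharing the score 0.25, the accuracy breaks the tie: `rank_ifvs([(0.5, 0.25), (0.75, 0.5), (0.25, 0.0)])` puts (0.75, 0.5) first.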
We know that similarity and distance measures are important in the research of fuzzy set theory [39]. Similarly, the construction of similarity and distance measures for AIFSs plays an important role in the theory of AIFSs [23,[40][41][42][43][44][45][46][47][48][49][50], and such measures are helpful for the comparison of intuitionistic fuzzy information [24,25]. Similarity measure and distance measure are usually regarded as a pair of dual concepts. Thus, distance measures can be used to define similarity measures, and vice versa.

New Intuitionistic Fuzzy Distance Measure
In past years, numerous similarity and distance measures have been advanced [7,39,45]. However, some may lead to unreasonable results in practical applications [7]. Some newly defined distance/similarity measures have complicated expressions [39,45], which makes them unsuitable for constructing knowledge measures for AIFSs. Thus, it is necessary to define a desirable distance measure to assist us in developing a new knowledge measure. Here, we propose a new distance measure for AIFSs by borrowing a distance measure for interval values. It has been claimed that an AIFS can be represented in the form of an interval-valued fuzzy set [5]. Based on such a relation, an intuitionistic fuzzy distance measure can be developed based on interval comparison.

Interval-Comparison-Based Distance Measure for AIFSs
An AIFS B defined in X = {x_1, x_2, · · · , x_n} indicates that the membership degree of x_i to B is uncertain, with lower and upper bounds µ_B(x_i) and 1 − v_B(x_i), respectively. That is to say, the membership grade of x_i to B lies in the interval [µ_B(x_i), 1 − v_B(x_i)], i = 1, 2, · · · , n. Thus, we can measure the distance between AIFSs A and B defined in X by comparing the corresponding intervals.

In [51], the authors reviewed distances between interval values. They pointed out that the distance measure d_TD proposed in [52] is not a metric distance, since for an interval value a = [a_1, a_2], d_TD(a, a) = 0 does not always hold. Thus, Irpino and Verde [51] proposed a Wasserstein distance from the point of view of a one-dimensional uniform distribution, rather than the two-dimensional uniform distribution adopted in [52]. The following definition gives the Wasserstein distance measure between interval values.

Definition 6. Given two interval values a = [a_1, a_2] and b = [b_1, b_2] with a, b ⊆ [0, 1], the distance between them is defined as [51]:

d_W(a, b) = sqrt((m_a − m_b)^2 + (1/3)(r_a − r_b)^2),

where m_a = (a_1 + a_2)/2 and r_a = (a_2 − a_1)/2 are the midpoint and half-width of a, and m_b and r_b are defined analogously.

Applying this distance to the interval representations of two IFVs A_{x_i} = ⟨µ_A(x_i), v_A(x_i)⟩ and B_{x_i} = ⟨µ_B(x_i), v_B(x_i)⟩ gives d_I(A_{x_i}, B_{x_i}). The maximum value of d_I(A_{x_i}, B_{x_i}) is 1, obtained when A_{x_i} = ⟨0, 1⟩, B_{x_i} = ⟨1, 0⟩ or A_{x_i} = ⟨1, 0⟩, B_{x_i} = ⟨0, 1⟩. Thus, the relation 0 ≤ d_I(A_{x_i}, B_{x_i}) ≤ 1 holds.
According to the analysis above, we can define a new distance measure for Atanassov's intuitionistic fuzzy sets. Given two AIFSs A and B defined in X = {x_1, x_2, · · · , x_n}, the distance between them is calculated by the following expression:

D_I(A, B) = (1/n) Σ_{i=1}^{n} d_I(A_{x_i}, B_{x_i}).

Theorem 1. D_I(A, B) is a distance measure between A and B.
For the sake of readability, we provide the proof of Theorem 1 in Appendix A. Considering the weight of x_i, i = 1, 2, · · · , n, the weighted distance between AIFSs A and B is

D_I^w(A, B) = Σ_{i=1}^{n} w_i d_I(A_{x_i}, B_{x_i}),

where w_i is the weight of x_i, i = 1, 2, · · · , n, with w_i ∈ [0, 1] and Σ_{i=1}^{n} w_i = 1.
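Under the interval-valued reading of IFVs, the distance can be sketched as follows (a minimal sketch assuming the Wasserstein interval distance in its midpoint/half-width form; the function names are ours):

```python
import math

def d_interval(a, b):
    """Wasserstein distance between two intervals, via midpoints and half-widths."""
    ma, ra = (a[0] + a[1]) / 2, (a[1] - a[0]) / 2
    mb, rb = (b[0] + b[1]) / 2, (b[1] - b[0]) / 2
    return math.sqrt((ma - mb) ** 2 + (ra - rb) ** 2 / 3)

def d_ifv(p, q):
    """Distance between two IFVs through their interval representations [mu, 1 - nu]."""
    return d_interval((p[0], 1 - p[1]), (q[0], 1 - q[1]))

def D_I(A, B, w=None):
    """(Weighted) distance between two AIFSs given as equal-length lists of IFVs."""
    n = len(A)
    w = w if w is not None else [1.0 / n] * n
    return sum(wi * d_ifv(p, q) for wi, p, q in zip(w, A, B))
```

With the default uniform weights this reduces to the unweighted average; the extreme pair ⟨0, 1⟩ and ⟨1, 0⟩ attains the maximum distance 1.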
Its proof can be implemented in the same way as the proof of Theorem 1.

Comparative Analysis
To demonstrate the ability of the new distance measure to distinguish information in the form of intuitionistic fuzzy sets, we use numerical examples to conduct a comparative analysis. Owing to the complementary relation between distance and similarity measures, several widely used measures defined for two AIFSs in X = {x_1, x_2, · · · , x_n} will be used for comparison.
Example 1. Using Equations (11) and (12), we note that the Hamming and Euclidean distances cannot be used to determine the pattern of sample B. The newly proposed measure D_I can classify B as pattern A_1 because the distance between B and A_1 is the least.

Example 2. Three patterns are represented by AIFSs defined in X = {x_1, x_2, x_3, x_4}, together with a sample B to be classified. Using Equation (13), we find that the class of B cannot be determined based on the distance measure proposed by Wang and Xin [23]. Based on our proposed distance measure, the minimum distance between B and the three patterns is D_I(A_1, B) = 0.0806; therefore, sample B is classified to pattern A_1.
Example 3. Three patterns expressed by AIFSs defined in X = {x_1, x_2} are considered, together with an unknown sample B to be recognized. Using Equation (14), it is obvious that sample B is identical to pattern A_3; however, based on the cosine similarity, sample B may be classified as A_1, A_2, and A_3 simultaneously, which is counter-intuitive. Our distance measure classifies sample B as A_3 due to the zero distance between them.
The above examples show that our proposed distance measure is effective in differentiating the information conveyed by different AIFSs. It can be easily proved that the choice of attribute weights will not change the conclusion obtained based on each example. Moreover, we note that the cosine similarity may be undefined when there is a zero denominator. The developed distance measures can overcome such deficiencies, so these examples indicate that the proposed distance measures are reasonable and effective in discriminating intuitionistic fuzzy information.

Suppose that A is an AIFS defined in X; its knowledge measure K should intuitively satisfy some properties. It is rational that the knowledge measure K must be a non-negative function determined by µ_A(x) and v_A(x). The knowledge amount of A should be identical to that of its complement, i.e., K(A) = K(A^C). When the AIFS A reduces to a classical Zadeh fuzzy set, a negative correlation should exist between the knowledge measure and fuzziness. It is well understood that the fuzziness of a Zadeh fuzzy set determines its fuzzy entropy, and both are negatively correlated with |µ_A(x) − v_A(x)| [22]. So the knowledge measure K(A) should be monotonically increasing with respect to |µ_A(x) − v_A(x)|. Moreover, a crisp set provides the maximum amount of information, so the knowledge amount of a crisp set reaches the maximum value K_max = 1. Conversely, the case that ∀x ∈ X, µ_A(x) = v_A(x) = 0 means full ignorance, so the knowledge amount reaches its minimum value K_min = 0. In addition, in the case of µ_A(x_i) = v_A(x_i) = a, a smaller a indicates a greater hesitation degree π_A(x_i), which leads to greater uncertainty and a smaller knowledge amount.
Considering these intuitive properties, we give the following definition to describe the axiomatic properties of the knowledge measure for AIFSs.

Definition 7. If a mapping K : AIFSs(X) → [0, 1] satisfies the following properties, it is called a knowledge measure of an AIFS A defined in X = {x_1, x_2, · · · , x_n}:

(KP1) K(A) = 1 if and only if A is a crisp set;
(KP2) K(A) = 0 if and only if π_A(x) = 1, ∀x ∈ X;
(KP3) K(A) increases with |µ_A(x) − v_A(x)| when the hesitation degree is fixed, and decreases with the hesitation degree π_A(x) when |µ_A(x) − v_A(x)| is fixed;
(KP4) K(A^C) = K(A).

Since knowledge and entropy measures are always regarded as two complementary concepts, we discuss these properties by comparing them with those of the entropy measure. The third property defined in [22] for intuitionistic fuzzy entropy, denoted E, states that E(A) ≤ E(B) if A is less fuzzy than B, which is related to property KP3. However, this property of intuitionistic fuzzy entropy does not consider the influence of the hesitation degree. It may not be sensible to discuss the relationship between fuzziness and intuitionistic fuzzy entropy if the hesitation degree is not fixed. Moreover, the condition expressed through |µ_A(x) − v_A(x)| is more general than the third property listed in [22]. Thus, for the relation between knowledge and fuzziness, our proposed axiomatic property is made more general by relaxing the formal constraint via |µ(x) − v(x)|. However, such relaxation does not cause an unreliable measure of the knowledge amount because of the limitation imposed by the hesitation degree, which will be illustrated later. This also demonstrates the possibility and reasonability of further exploring the relation between the entropy measure and the knowledge measure of AIFSs. We point out that the entropy of an AIFS reaches its peak value when the membership degree and non-membership degree are identical for all elements [22]. This is analogous to the entropy measure of fuzzy sets, which solely concerns the relation between membership degree and non-membership degree. Therefore, entropy and knowledge measures are not just complementary concepts; they differ not only in viewpoint but also in the aspects they focus on.
Fuzzy entropy merely depicts the difference between an AIFS and a crisp set, which is denoted as fuzziness, while the knowledge measure is defined to measure the closeness between an AIFS and a crisp set, which takes both fuzziness and hesitancy into account.
Following the axiomatic properties in Definition 7, we can create knowledge measures for AIFSs through a mapping F : [0, 1] × [0, 1] → [0, 1]. We can effortlessly obtain many functions F satisfying the required conditions, such as F(x, y) = (|x − y| + x + y)/2 and F(x, y) = x^2 + y^2. Using these functions, we can construct knowledge measures for AIFSs. In this way, many knowledge measures can be created, but most may lack specific physical meaning. This motivates us to construct knowledge measures with both clear physical significance and axiomatic mathematical properties.
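As an illustration, such a construction can be sketched as follows, under the assumption that F is applied to each element's membership/non-membership pair and the results are averaged (an illustrative reading on our part, not necessarily the paper's exact formula):

```python
def knowledge_from_F(A, F):
    """Average F(mu, nu) over all elements of an AIFS given as (mu, nu) pairs."""
    return sum(F(mu, nu) for mu, nu in A) / len(A)

# The two example component functions from the text:
F1 = lambda x, y: (abs(x - y) + x + y) / 2   # equals max(x, y)
F2 = lambda x, y: x * x + y * y
```

Both example functions send a crisp element to 1 and the fully ignorant pair (0, 0) to 0, as the axioms require.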

Construction of Knowledge Measure
From the second property, KP2, we can conclude that the AIFS F = {⟨x, 0, 0⟩ | x ∈ X} conveys the least knowledge. The amount of knowledge conveyed by an AIFS A can thus be reflected by the distance between A and F: the greater the distance between them, the greater the knowledge amount conveyed by A. This prompts us to devise a knowledge measure based on the distance from A to F.

For an AIFS A = {⟨x, µ_A(x), v_A(x)⟩} defined in X = {x}, the distance between A and F can be computed by the proposed distance measure. Since the IFV ⟨0, 0⟩ corresponds to the interval [0, 1], Equation (15) can be further written as

d_I(A, F) = sqrt(((µ_A(x) − v_A(x))/2)^2 + (1/3)((µ_A(x) + v_A(x))/2)^2).

Considering that the maximum value of d_I(A, F) is 1/√3, attained when A is a crisp set, the distance between A and F can be normalized by multiplying by √3, giving the following form:

√3 · d_I(A, F) = (1/2) · sqrt(3(µ_A(x) − v_A(x))^2 + (µ_A(x) + v_A(x))^2).

We can then construct a knowledge measure for AIFSs defined in the discourse universe X = {x} as K_I(A) = √3 · d_I(A, F). Generally, for an AIFS A = {⟨x, µ_A(x), v_A(x)⟩ | x ∈ X} defined in X = {x_1, x_2, · · · , x_n}, its knowledge amount can be quantified by

K_I(A) = (1/(2n)) Σ_{i=1}^{n} sqrt(3(µ_A(x_i) − v_A(x_i))^2 + (µ_A(x_i) + v_A(x_i))^2).

Theorem 3. K_I(A) is a knowledge measure for AIFSs defined in X. Theorem 3 is proved in Appendix B.
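A numeric sketch of this construction, assuming the closed form obtained by normalizing the distance from A to F = {⟨x, 0, 0⟩} by √3 (the tuple encoding of IFVs is ours):

```python
import math

def k_i_single(mu, nu):
    """Normalized distance from the IFV <mu, nu> to the fully ignorant IFV <0, 0>."""
    return math.sqrt(3 * (mu - nu) ** 2 + (mu + nu) ** 2) / 2

def K_I(A):
    """Knowledge amount of an AIFS given as a list of (mu, nu) pairs."""
    return sum(k_i_single(mu, nu) for mu, nu in A) / len(A)
```

A crisp element gives 1, full ignorance gives 0, and a complement pair ⟨v, µ⟩ gives the same value as ⟨µ, v⟩, matching the intended axioms.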

Numerical Examples
Here, the performance of the proposed knowledge measure K I will be examined considering some numerical examples.

Example 4.
Example 4. Four AIFSs A_1, A_2, A_3, and A_4 are defined in the universe X = {x}. The entropy measures presented in [23,[55][56][57][58]] cannot discriminate these AIFSs, since those measures are defined according to the difference between the membership degree and non-membership degree. The membership and non-membership degrees are identical in each of these four AIFSs, so they would all be assigned the maximal entropy, implying a minimal knowledge amount. However, according to the proposed knowledge measure K_I, these four different AIFSs differ greatly from each other from the viewpoint of knowledge amount. This is helpful for handling such extreme cases with identical supporting and opposing degrees. From the definition of K_I, we find that when µ_A(x) = v_A(x) for all x ∈ X, the calculation of K_I assumes the following form:

K_I(A) = (1/n) Σ_{i=1}^{n} µ_A(x_i),

which indicates that the knowledge amount increases with µ_A(x_i) under the condition µ_A(x_i) = v_A(x_i), ∀i ∈ {1, 2, · · · , n}. This useful feature coincides with intuitive analysis.
To further demonstrate the discriminability of the knowledge measure K_I, we give the following example.

Example 5. De et al. [59] defined an exponent operation for an AIFS A defined in X. Given a nonnegative real number m, A^m is defined as

A^m = {⟨x, (µ_A(x))^m, 1 − (1 − v_A(x))^m⟩ | x ∈ X}.   (21)

Based on the operation in Equation (21), we obtain the AIFSs A^{0.5}, A^2, A^3, and A^4. Considering the characterization of linguistic variables, we can regard AIFS A as "LARGE" in X. Correspondingly, AIFSs A^{0.5}, A^2, A^3, and A^4 can be regarded as "More or less LARGE," "Very LARGE," "Quite very LARGE," and "Very very LARGE," respectively.
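The exponent operation can be sketched as follows (assuming the standard De-Biswas-Roy form, with membership µ^m and non-membership 1 − (1 − v)^m):

```python
def ifv_power(mu, nu, m):
    """De et al. exponent of an IFV: <mu^m, 1 - (1 - nu)^m>."""
    return mu ** m, 1 - (1 - nu) ** m
```

Raising to m > 1 concentrates the membership grade (e.g., "Very LARGE" for m = 2), while m = 0.5 dilates it ("More or less LARGE"); the result remains a valid IFV.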
Intuitively, from A 0.5 to A 4 , the uncertainty hidden in them becomes less and the knowledge amount conveyed by them increases. Therefore, the following relations hold: To make a comparison, the entropy and knowledge measures listed in Table 1 are used. It is worth nothing that some of the entropy measures in the table are initially designed for interval valued fuzzy sets [56,57]. These entropy measures are modified for AIFSs based on their connection with interval values fuzzy sets. We present the results obtained based on different measures in Table 2 to facilitate comparative analysis.
From Table 2, we can see the relations induced by entropy measures E_ZL, E_ZB, E_BB, E_SK, E_HC, E_S, and E_ZJ. Because they indicate that the entropy of AIFS A^{0.5} is less than that of AIFS A, entropy measures E_ZL, E_ZB, E_ZE, E_BB, E_SK, and E_ZJ do not perform as well as the other entropy measures. From the point of view of knowledge amount, the results obtained by K_SKB, K_N, and K_G are not so reasonable, since the counter-intuitive relations K_SKB(A^{0.5}) > K_SKB(A), K_N(A^{0.5}) > K_N(A), and K_G(A^{0.5}) > K_G(A) exist. However, our developed knowledge measure K_I produces the rational result K_I(A^{0.5}) < K_I(A) < K_I(A^2) < K_I(A^3) < K_I(A^4). Thus, it is demonstrated that half of the entropy measures in Table 1 cannot reflect the uncertainty hidden in these AIFSs. Although several knowledge measures have been presented, they are not able to distinguish the nuances of knowledge amount in different AIFSs. Thus, our developed knowledge measure outperforms the other knowledge measures by providing persuasive results that comply with intuitive analysis.
Through the operation shown in Equation (21), the AIFSs B^{0.5}, B^2, B^3, and B^4 related to B can be generated. Using the measures in Table 1, we obtain the comparative results shown in Table 3. It can be seen that AIFS B is still assigned more entropy than AIFS B^{0.5} when entropy measures E_ZL, E_ZB, E_ZE, E_BB, E_SK, and E_ZJ are considered. The orders obtained based on these entropy measures do not satisfy the intuitive analysis in Equation (22), while the other entropy measures induce desirable results. In this example, E_HC and E_S perform well, but the measure E_ZE performs poorly. This illustrates that these entropy measures are not robust enough.
Moreover, the results produced by knowledge measures K_SKB, K_N, and K_G are also unreasonable, whereas our proposed knowledge measure K_I yields the intuitive order. Thus, the knowledge measures K_SKB, K_N, and K_G are still not suitable for differentiating the knowledge amounts conveyed by AIFSs. The effectiveness of the proposed knowledge measure K_I is once again indicated by this example.
From the above examples, we conclude that entropy measures E_ZL, E_ZB, E_ZE, E_BB, E_HC, E_S, E_SK, and E_ZJ perform poorly because of their lack of robustness and discriminability. The proposed knowledge measure performs much better than knowledge measures K_SKB, K_N, and K_G. The performances of entropy measures E_A, E_ZC, E_ZD, E_VS, E_LDL and the proposed knowledge measure K_I in Table 3 seem to show that less entropy indicates a greater knowledge amount. Nevertheless, the relationship between entropy and knowledge measures is limited and conditional, as discussed previously.
The above analysis indicates an effective way to define a knowledge measure for AIFSs based on a metric distance measure d_AIFS for AIFSs.

New Method for Solving MAGDM Problems
Since the inception of AIFSs, many researchers have been dedicated to exploring applications of AIFSs along with their mathematical mechanisms. One important application area of AIFSs is multi-attribute group decision making (MAGDM) [28,30,36,38,62,63]. In MAGDM problems, because of the limitation of experts' knowledge and time pressure, uncertain or incomplete information may be provided in the evaluation of each alternative. Therefore, a suitable model should be constructed to depict the incomplete information. By introducing the hesitancy degree, AIFSs can describe the uncertainty caused both by fuzziness and by lack of knowledge. Moreover, incomplete information can be aggregated in a direct way with the help of intuitionistic fuzzy aggregation operators. Thus, AIFSs are accepted by many researchers as an effective tool for solving MAGDM problems. The application of AIFSs in solving MAGDM problems has attracted many researchers because of a series of open topics in this area, such as the determination of attribute weights, effective aggregation operators for AIFSs, the ranking of alternatives based on IFVs, and the construction of intuitionistic fuzzy models from incomplete information.
Here, we put forth a new method with which to solve intuitionistic fuzzy MAGDM problems. We develop the approach based on the proposed intuitionistic fuzzy distance measure and distance-based knowledge measure. The intuitionistic fuzzy MAGDM problem is depicted as follows. G = {G_1, G_2, · · · , G_m} is the set of all threat levels (the alternatives). A = {A_1, A_2, · · · , A_n} is the set containing all attributes to be considered in evaluating the threat level. E = {E_1, E_2, · · · , E_s} is the set of all decision makers evaluating the threat levels. The weight of attribute A_i is w_i, i = 1, 2, · · · , n, with Σ_{i=1}^{n} w_i = 1. All attribute weights are expressed by the weight vector w = (w_1, w_2, · · · , w_n)^T. Each decision maker is assigned a weighting factor λ_j, j = 1, 2, · · · , s, with Σ_{j=1}^{s} λ_j = 1. Decision maker E_k (k = 1, 2, · · · , s) gives a decision matrix R^k = (r^k_ij)_{m×n} expressed by IFVs, where r^k_ij = ⟨µ^k_ij, v^k_ij⟩ is an IFV representing the evaluation result of alternative G_i according to attribute A_j.
If the attribute weights are unknown, this MAGDM problem can be solved by the following steps.
Step 1. Determine attribute weights.

In most cases, the weighting factor of each attribute is partly known or completely unknown due to limited time and expert knowledge. Thus, determining the weighting vector of all attributes is necessary. Several approaches have been put forward to assess the importance of attributes in decision making.
Li et al. [62] developed a TOPSIS-based method to obtain interval-valued weight factors for all attributes, which may cause information loss in the process of decision making. Wei [64] proposed an optimization model to derive the attribute weighting vector, implemented by maximizing the deviation between all evaluation results under an attribute. Regarding the hesitance degree as an entropy measure, Ye [10] developed an entropy-based method to evaluate the attribute weight vector.
Note that Wei's method [64] is based on the idea of maximizing the deviation, while Ye's method [10] is based on the idea of minimizing the entropy. Combining Wei's [64] and Ye's ideas [10], Xia and Xu [9] proposed an entropy-/cross-entropy-based model to determine the attribute weighting vector, in which they utilize the cross-entropy to describe the deviation between IFVs. Borrowing the idea of Xia and Xu [9], we develop a model using the proposed distance measure D I and the knowledge measure K I to determine attribute weights.
For decision maker E_k, the average divergence of alternative G_i from all other alternatives under attribute A_j can first be measured. Based on the distance measure D_I and the knowledge measure K_I, the average divergence and the knowledge amount of all information provided by E_k under attribute A_j can then be measured, respectively. Considering the weighting factor of each decision maker, we can obtain the total difference among all alternatives and the total amount of knowledge with respect to attribute A_j. Generally, if the evaluation information of all alternatives under an attribute differs greatly, this attribute provides much discriminative information, and thus it should be more important. Conversely, if there is little difference among the evaluation results of all alternatives with respect to one attribute, then this attribute is less important. Likewise, a greater amount of knowledge conveyed by the information under an attribute indicates that the information provided is more helpful for decision making, and therefore that attribute is more important. Based on the above analysis, we establish an optimization model with which to calculate the weighting vector w of all attributes, where H is a set containing all of the known incomplete information about the attribute weights.
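When no weight information is available at all, a natural closed form is to take each attribute's weight proportional to its total divergence plus its total knowledge amount; the sketch below assumes exactly that (the normalization is an assumption on our part, not the paper's stated formula, and the argument names are ours):

```python
def attribute_weights(div_totals, know_totals):
    """w_j proportional to DIV_j + K_j, normalized so the weights sum to 1."""
    combined = [d + k for d, k in zip(div_totals, know_totals)]
    total = sum(combined)
    return [c / total for c in combined]
```

An attribute with both high divergence among alternatives and high knowledge content then receives the largest weight.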
In particular, if there is no additional information about the weighting vector, i.e., each attribute's weighting factor is completely unknown, the weighting factor of attribute A_j (j = 1, 2, · · · , n) can be calculated directly from the total divergence and total knowledge amounts.

Step 2. Use the intuitionistic fuzzy weighted averaging (IFWA) operator proposed in [38] and the weighting vector λ = (λ_1, λ_2, · · · , λ_s)^T to aggregate the individual intuitionistic fuzzy decision matrices (r^k_ij) = (⟨µ^k_ij, v^k_ij⟩), k = 1, 2, · · · , s, into a collective decision matrix with intuitionistic fuzzy information, denoted R = (r_ij)_{m×n}.

Step 3. Use the IFWA operator with the attribute weighting vector w to aggregate the evaluation values of each alternative G_i in R into an overall IFV Z_i, i = 1, 2, · · · , m.
Step 4. Calculate both the score function and accuracy function of IFVs Z 1 , Z 2 , · · · , Z m .
Step 5. Rank all alternatives according to the score function and accuracy function of IFVs Z 1 , Z 2 , · · · , Z m to obtain the priority order.
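Steps 4 and 5 use the standard score and accuracy functions of an IFV (µ, v): score s = µ − v, accuracy h = µ + v; alternatives are ranked by score, with accuracy breaking ties. A minimal sketch, with our own function names:

```python
def score(ifv):
    """Score function of an IFV (mu, nu): s = mu - nu."""
    mu, nu = ifv
    return mu - nu

def accuracy(ifv):
    """Accuracy function of an IFV (mu, nu): h = mu + nu."""
    mu, nu = ifv
    return mu + nu

def rank_alternatives(Z):
    """Return 0-based indices of alternatives from best to worst,
    ordered by score, then by accuracy for ties."""
    return sorted(range(len(Z)),
                  key=lambda i: (score(Z[i]), accuracy(Z[i])),
                  reverse=True)
```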

Application on Evaluation of Malicious Code Threat
Here, the method proposed in Section 5 for solving MAGDM problems is applied to the evaluation of malicious code threat degree. Example 6. In a cyber-defense battle, the cyber-defense unit aims to attack the target posing the highest threat. In cyberspace security, researchers need to evaluate the threats caused by malicious code. In this way, the most dangerous threat can be addressed first, followed by the other threats.
The weighting vector of the four experts is λ = (0.3, 0.2, 0.3, 0.2) T . The associated weighting factor for the hybrid aggregation of the four experts is η = (0.155, 0.345, 0.345, 0.155) T , derived by the normal-distribution-based method shown in [63]. The threat degree of each malicious code evaluated by the four experts is expressed by four intuitionistic fuzzy decision matrices. We then use the proposed method shown in Equation (31) to establish the weighting vector of the five attributes. We solve this problem by the following steps: (1) Using the distance measure D I and knowledge measure K I to obtain the average divergence and the amount of knowledge under all attributes for all decision makers, we obtain the divergence matrix and the knowledge matrix, respectively. The element div ij in matrix DIV represents the overall average divergence provided by E i under A j , and k ij in matrix K represents the knowledge amount provided by E i under A j . (3) Aggregating all decision makers' decision matrices with the proposed IFWA operator, we obtain the aggregated decision matrix. (6) According to the score grades, we obtain the ranking order R of all malicious codes' threat degrees. Based on the method proposed in [9], when E 1.5 M and CE 1.5 M are used, the attribute weights are obtained as w a = (0.1940, 0.2238, 0.1330, 0.2117, 0.2375) T , and the final ranking order is R a : G 4 ≻ G 5 ≻ G 1 ≻ G 3 ≻ G 2 . When E 1 N and CE 1 N are used, the attribute weights are obtained as w b = (0.1931, 0.2219, 0.1325, 0.2133, 0.2392) T , with a corresponding final ranking order. Notably, the final ranking order obtained using the method proposed in Section 5 is not completely identical to that obtained in [9]. However, all methods yield the same optimal alternative, G 4 . Since solving the MAGDM problem aims at obtaining the best choice, the order of the other alternatives may not be of concern.
We can use the similarity between two weighting vectors, defined as the cosine of the angle between them and denoted Sim. The consensus level between two ranking orders R 1 and R 2 is calculated by Spearman's rank correlation coefficient [65]: ρ(R 1 , R 2 ) = 1 − 6 ∑ p i=1 (r (1) i − r (2) i ) 2 / (p(p 2 − 1)), where p is the number of alternatives, and r (1) i and r (2) i are the positions of alternative G i in ranking orders R 1 and R 2 , respectively.
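Both comparison indices are straightforward to compute. A minimal sketch of the cosine similarity between weighting vectors and Spearman's rank correlation coefficient, with our own function names:

```python
import numpy as np

def sim(w1, w2):
    """Cosine of the angle between two weighting vectors."""
    w1, w2 = np.asarray(w1, float), np.asarray(w2, float)
    return float(w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2)))

def spearman_rho(r1, r2):
    """Spearman's rank correlation between two ranking orders;
    r1[i] and r2[i] are the positions (1..p) of alternative G_i."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    p = len(r1)
    d2 = float(np.sum((r1 - r2) ** 2))
    return 1.0 - 6.0 * d2 / (p * (p * p - 1))
```

For example, two rankings of five alternatives that differ only by swapping the last two positions give ρ = 0.9, matching the consensus level reported below.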
These results indicate that the attribute weights obtained by the proposed method are quite similar to those yielded in [9]. Moreover, the ranking orders are at a high consensus level. It is demonstrated that the proposed method is effective for solving MAGDM problems.

Case 2.
We suppose that the attribute weights are partially known through certain relations. We can then use the following optimization model to obtain the attribute weighting vector: max T = (3.5614, 3.6045, 3.4614, 3.3783, 3.7011)w , which yields the weighting vector w = (0.1, 0.2, 0.15, 0.2, 0.35) T .
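The model above maximizes a linear objective over the feasible set H of partially known weights. Since the example's constraint set is not reproduced here, the sketch below assumes a simple special case of H, namely box bounds on each weight plus the normalization constraint; for that case a greedy allocation (fill the largest-coefficient weights first) is optimal. The bounds in the test are hypothetical, not the paper's.

```python
import numpy as np

def max_linear_weights(c, lower, upper):
    """Maximize T = c . w subject to sum(w) = 1 and lower <= w <= upper.
    This is a box-constrained special case of the weight model; a greedy
    allocation by descending coefficient is optimal here."""
    c = np.asarray(c, float)
    lo = np.asarray(lower, float)
    up = np.asarray(upper, float)
    w = lo.copy()                       # start every weight at its lower bound
    remaining = 1.0 - w.sum()
    for j in np.argsort(-c):            # largest objective coefficient first
        add = min(up[j] - w[j], remaining)
        w[j] += add
        remaining -= add
    if remaining > 1e-12:
        raise ValueError("infeasible bounds: sum of upper bounds < 1")
    return w
```

For general linear relations in H, a linear programming solver would be used instead of the greedy step.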
Using the weighting vector w, we obtain the aggregated threat grades of each malicious code by the IFWA operator, and then the corresponding ranking order. If there is only one expert in an MAGDM problem, we do not need to fuse the results of different experts. Thus, we can deal with such cases by determining the attribute weight vector and then aggregating all the results under different attributes. We use another example to compare the proposed method with other methods. Example 7. The cyber-defense unit will attack the malicious code with the maximum threat grade. In cyberspace security, researchers evaluate their own protection capabilities by evaluating malicious code, and can judge the order in which malicious codes are difficult to discover in the system.
Five pieces of malicious code are available for evaluation: G 1 , a backdoor; G 2 , a Trojan-PWS; G 3 , a worm; G 4 , a Trojan-Spy; G 5 , a Trojan-Downloader.
The cyber security researchers evaluate these five pieces of malicious code based on four attributes: A 1 , resource consumption; A 2 , self-starting ability; A 3 , concealment ability; A 4 , self-protection ability.
The results of the evaluation using intuitionistic fuzzy information are given below. The elements div i and k i in vectors DIV and K represent the average divergence degree and the knowledge amount under attribute A i , respectively.
(2) The weighting factor of attribute A i can then be calculated, giving the weighting vector w = (0.2563, 0.2142, 0.2696, 0.2600) T . (5) Thus, we rank all alternatives in order R. For further analysis, we compare these results with the solutions of Xia and Xu's method [9]. The weighting vector they obtained is w c = (0.2659, 0.2486, 0.2370, 0.2486) T , and the ranking order is R c : G 5 ≻ G 2 ≻ G 3 ≻ G 4 ≻ G 1 . These ranking orders differ slightly owing to the different intuitionistic fuzzy measures used, but they yield the same optimal alternative, G 5 .
We also obtain Sim(w, w c ) = 0.9975 and ρ(R, R c ) = 0.9, indicating that the results achieved by the method proposed in Section 5 are quite close to those in [9]. By comparing the score grades of the five IFVs, the ranking order of the five malicious codes' threat degrees can be obtained. Using the method proposed in [9], the attribute weights are yielded as w d = (0.19, 0.16, 0.35, 0.30) T , with a corresponding ranking order R d . This shows that the weighting vector obtained by the proposed method is very close to that obtained by Xia and Xu in [9] when partial information on the attribute weights is provided; the similarity degree between them is Sim(w, w d ) = 0.9996. It can also be seen that the order yielded by our proposed method is identical to R d , a phenomenon that appears to be caused by the incomplete information.
These illustrative examples reveal the necessity of utilizing distance and knowledge measures to establish attribute weights. They further demonstrate that our proposed method handles intuitionistic fuzzy MAGDM problems reasonably and effectively, and illustrate the applicability of the proposed knowledge measure. We note that the method proposed in [9] uses more complex entropy/cross-entropy measures with additional parameters but without specific physical meaning. Moreover, the hybrid aggregation operator used in [8] needs an associated weight vector to aggregate intuitionistic fuzzy information. Compared with the entropy/cross-entropy measures in [9], our distance and knowledge measures, with relatively concise expressions and specific physical meaning, can also obtain reasonable solutions with the help of the original IFWA operator. Thus, our proposed method is more practical and easier to implement for solving MAGDM problems.

Conclusions
In this paper, we propose a knowledge measure based on our proposed intuitionistic fuzzy distance measure for the purpose of measuring the knowledge amount of AIFSs more accurately. The axiomatic definition of knowledge measure is refined from a more general view, after which we investigate the properties of the new distance-based knowledge measure. Mathematical analysis and numerical examples are provided to illustrate the proposed knowledge measure's properties. To demonstrate the applicability of the proposed distance-based knowledge measure, we apply it to develop a new method of solving MAGDM problems with intuitionistic fuzzy information. Application examples combined with comparative analysis illustrate the effectiveness and rationality of our method.
In this paper, we present only one knowledge measure based on our proposed distance measure. The main features of the proposed knowledge measure are its succinct expression, good properties, and evident physical significance. This offers a new perspective on knowledge measures and uncertainty measures. Other kinds of knowledge measures can be obtained if other distance measures are applied; exploring reasonable distance measures is thus critical for defining knowledge measures. Conversely, based on the relation between distance measures and uncertainty measures, we can also develop new distance measures from reasonable knowledge measures. Furthermore, integrated research on distance measures, similarity measures, knowledge measures, and uncertainty measures is also attractive and worthwhile.

Data Availability Statement:
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A
Proof of Theorem 1.
(1) Given D I (A, B) = 0, the corresponding equality holds for every i ∈ {1, 2, · · · , n}. We then obtain µ A (x i ) = µ B (x i ) and v A (x i ) = v B (x i ) by adding and subtracting the resulting equations, respectively, i = 1, 2, · · · , n. Hence, for all elements x ∈ X, µ A (x) = µ B (x) and v A (x) = v B (x) hold simultaneously, which indicates that A = B.
Conversely, for two AIFSs A and B defined in X = {x 1 , x 2 , · · · , x n } with A = B, we have D I (A, B) = 0. We can conclude from the above analysis that D I (A, B) = 0 ⇔ A = B.
(2) It is straightforward that D I (A, B) = D I (B, A).
(3) Three AIFSs A, B, and C defined in X = {x 1 , x 2 , · · · , x n } can be expressed as A = { x, µ A (x), v A (x) |x ∈ X }, B = { x, µ B (x), v B (x) |x ∈ X }, and C = { x, µ C (x), v C (x) |x ∈ X }, respectively. Considering the condition A ⊆ B ⊆ C, we have the relations µ A (x) ≤ µ B (x) ≤ µ C (x) and v A (x) ≥ v B (x) ≥ v C (x) for all x ∈ X. The distance between AIFSs A and B and the distance between AIFSs A and C can then be written accordingly. We then construct a function f (x, y) with two variables, where 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ a ≤ 1, and 0 ≤ b ≤ 1, and obtain its partial derivatives with respect to x and y.
Under the conditions a = µ A (x i ) and b = v A (x i ), for all i ∈ {1, 2, · · · , n}, the corresponding inequalities hold. Therefore, we have D I (A, B) ≤ D I (A, C).