International Journal of Fuzzy Logic and Intelligent Systems 2022; 22(3): 303-324

Published online September 25, 2022

https://doi.org/10.5391/IJFIS.2022.22.3.303

© The Korean Institute of Intelligent Systems

## Novel Bipolar Soft Rough-Set Approximations and Their Application in Solving Decision-Making Problems

Rizwan Gul1, Muhammad Shabir1, Wali Khan Mashwani2, and Hayat Ullah2

1Department of Mathematics, Quaid-i-Azam University, Islamabad, Pakistan
2Institute of Numerical Sciences, Academic Block-III Kohat University of Science & Technology, Khyber Pakhtunkhwa, Pakistan

Correspondence to:
Rizwan Gul (rgul@math.qau.edu.pk)

Received: December 23, 2021; Revised: March 11, 2022; Accepted: April 15, 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

The rough set (RS) theory is a successful approach for studying uncertainty in data. In contrast, bipolar soft sets (BSSs) can deal with both the uncertainty and the bipolarity of data in many situations. In 2018, Karaaslan and Çağman proposed bipolar soft rough sets (BSRSs), a hybridization of RS and BSS. However, certain shortcomings of the BSRS violate Pawlak’s RS theory. To overcome these shortcomings, the concept of the modified bipolar soft rough set (MBSRS) is proposed in this study. This idea is investigated through illustrative examples, and its important properties are inspected in depth. Furthermore, certain significant measures associated with the MBSRS are provided. Finally, an application of the MBSRS to multi-attribute group decision-making (MAGDM) problems is proposed. In addition, an algorithm for selecting the optimal alternative among various alternatives is presented, accompanied by a practical example. A brief comparative analysis of the proposed approach with some existing techniques is also provided to indicate the validity, flexibility, and superiority of the suggested MAGDM model.

Keywords: Bipolar soft set, Bipolar soft rough set, MBSR approximations, MAGDM

## 1. Introduction

In modern society, a plethora of ideas in engineering, economics, environmental science, social science, medical science, and many other disciplines involve uncertainty in the information that is collected and studied for various purposes. In classical mathematics, all mathematical concepts must be precise; therefore, classical mathematics is not always a successful tool for dealing with uncertain problems. For researchers, this uncertainty has become a barrier to addressing complex problems in various domains. A plethora of theories have been proposed to address this uncertainty, including fuzzy set (FS) theory [1], RS theory [2], and decision-making (DM) theory. However, each of these theories has internal issues that may be related to the insufficiency of the parameterization techniques mentioned in [3].

Molodtsov [3] offered an alternative technique to cope with uncertainty, known as the “soft set” (SS). Data parameters play a vital role in scrutinizing and analyzing data and in making decisions. The SS theory is an adequate parameterization tool; therefore, it overcomes the difficulties faced by the older approaches. Because of its diverse applications, this theory has received the attention of many researchers, and rapid growth in the study of SSs has been observed in the last few years. A few SS operations were pioneered by Maji et al. [4]. Ali et al. [5] introduced various novel SS operations and enhanced the concept of SS complements. Al-Shami and El-Shafei [6] proposed a T-soft equality relationship. By merging FSs and SSs, Maji et al. [7] established the concept of fuzzy soft sets.

A plethora of researchers have considered diverse hybrid fusions of RSs, FSs, and SSs for engineering, information management, medical diagnosis, and multi-criteria decision-making (MCDM) applications. Feng et al. [8] explored the link between RS and SS theories and introduced soft rough sets (SRSs), which provide better and more efficient approximations than the RS theory. Shabir et al. [9] redesigned SRSs and proposed modified soft rough sets (MRSs). Greco and his colleagues [10–14] offered dominance-based RSs as an extension of the RS. Du and Hu [15] pioneered the dominance-based FS. In 2019, Shaheen et al. [16] proposed a dominance-based SRS and highlighted its use in DM. Feng [17] applied SRSs to multi-criteria group decision-making (MCGDM). Ayub et al. [18] initiated a new RS approach to DM, known as linear Diophantine fuzzy RSs. Riaz et al. [19] introduced the idea of linear Diophantine fuzzy soft RSs to select sustainable material-handling equipment. In 2021, Hashmi et al. [20] established the concept of spherical linear Diophantine fuzzy SRSs with applications in MCDM. Akram and Ali [21] proposed hybrid models for DM based on rough Pythagorean fuzzy bipolar soft information.

In many types of data analyses, bipolarity is an important factor to consider when designing mathematical formulas for certain problems. The positive and negative sides of the data are provided by the bipolarity. The positive side deals with conceivable ideas, whereas the negative side deals with unconceivable ideas. The philosophy of bipolarity considers that the human judgment is built upon positive and negative sides, and the stronger side is preferred. SS, FS, and RS are not effective approaches for dealing with this bipolarity.

Owing to the importance of bipolarity, Shabir and Naz [22] introduced the notion of bipolar soft set (BSS) with application to DM. BSS has grown in popularity among researchers as a result of this study. In 2015, Karaaslan and Karataş [23] redesigned the BSS with different approximations, allowing them to investigate the topological axioms of the BSS. Subsequently, Karaaslan et al. [24] proposed a theory of bipolar soft groups. In addition, Naz and Shabir [25] pioneered the idea of fuzzy BSS and investigated their algebraic structures. The notions of bipolar soft topological spaces were then further developed by Öztürk [26]. Abdullah et al. [27] developed a bipolar fuzzy SS by combining the SS and bipolar FS and applied it to the DM problem. Alkouri et al. [28] proposed a bipolar complex FS and addressed its applicability to DM.

Karaaslan and Çağman [29] developed BSRSs in 2018. Furthermore, they addressed the applicability of BSRSs to the DM. Shabir and Gul [30] established and discussed the modified rough bipolar soft sets (MRBSs) in MCGDM. Gul et al. [31] presented a new technique for determining the roughness of BSSs and examined their applicability to MCGDM. Mahmood et al. [32] suggested a complex fuzzy N-SS and DM algorithm. In [33], Malik and Shabir pioneered the concept of rough fuzzy BSS and used them to rectify DM problems. Malik and Shabir [34] created a consensus model using rough bipolar fuzzy approximations. Al-Shami [35] conceived the idea of belonging and non-belonging relations between a BSS and an ordinary point. Riaz and Tehrim [36] proposed bipolar fuzzy soft mapping and analyzed its applicability to bipolar disorders. In [37], the authors suggested bipolar N-SS, an extension of N-SS, and addressed its applicability to DM. Gul and Shabir [38] introduced a new concept of the roughness of a crisp set based on (α, β)-indiscernibility of the bipolar fuzzy relation.

### 1.1 Motivation

By analyzing all the preceding arguments, we can see that the BSSs can manage the bipolarity of the data by employing two mappings; one of them addresses the positivity of the data, while the other measures the negativity of the data. Bearing in mind the connection between RS and BSS, two initiatives have been established to investigate the roughness of BSS: the first by Karaaslan and Çağman [29], and the second by Shabir and Gul [30]. In this paper, we propose a new technique for improving the roughness of BSSs. This new approach is known as the “modified bipolar soft rough sets” (MBSRS). In addition, we discuss the application of MBSRS to DM problems.

### 1.2 Aim of the Suggested Model

The major objective of this study is to propose an innovative variant of BSRS approximations that overcomes some of the shortcomings of the Karaaslan and Çağman BSRS model (see Example 2.7).

The key contributions of this study are as follows:

• A novel concept of MBSRS is proposed, which overcomes the deficiencies of the existing BSRS model.

• Many essential properties of the MBSRS are thoroughly investigated.

• Some key MBSRS-related measures are proposed to quantify the uncertainty of the MBSRS.

• A fair comparison between the results provided by the MBSRS and the BSRS is provided.

• A robust MAGDM method is established in the framework of the MBSRS, and its applicability is validated through a real-world application.

• To illustrate the merits of the suggested technique, a rigorous comparison with some other current methodologies is performed.

### 1.3 Organization of the Paper

The remainder of this paper is organized as follows. Section 2 outlines the fundamental concepts necessary for comprehending our research. Section 3 introduces the novel MBSR approximation operators and studies their significant structural properties. Section 4 discusses the MBSRS-related measures. In Section 5, we describe the general methodology of MAGDM in the MBSRS framework, introduce a DM algorithm for selecting the optimal alternative, and illustrate the proposed DM approach to demonstrate how it can be effectively used in various real-life problems. Section 6 compares the proposed DM method with other DM approaches. Section 7 concludes the study with an overview and suggestions for future research.

## 2. Preliminaries

This section reviews the key concepts used in this study. Throughout this study, we use U, ℘, and 2^U to represent the universe, the parameter set, and the power set of U, respectively.

### Definition 2.1 ([2])

The pair (U, σ) is called an approximation space, where U is a non-empty finite universe and σ is an equivalence relation on U.

For any X ⊆ U, the lower and upper approximations of X with respect to (U, σ) are characterized as follows:

σ̲(X) = {x ∈ U : [x]σ ⊆ X},
σ̄(X) = {x ∈ U : [x]σ ∩ X ≠ ∅},

where

[x]σ = {y ∈ U : (x, y) ∈ σ}.

Moreover, the boundary region of is given as

Bndσ(X) = σ̄(X) − σ̲(X).

Set X is called a rough set with respect to σ if σ̲(X) ≠ σ̄(X); otherwise, it is called definable.

Some recent rough approximations were defined on binary relations to extend the scope of applications of the RS theory; see, for example, [3941].
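As a concrete illustration of Definition 2.1, the two approximations can be computed directly from the equivalence classes of σ. The following sketch uses an illustrative universe and partition (not data from the paper):

```python
def eq_class(x, classes):
    """Return the equivalence class [x]_sigma that contains x."""
    return next(c for c in classes if x in c)

def lower_approx(X, U, classes):
    # sigma-lower(X) = {x in U : [x]_sigma is a subset of X}
    return {x for x in U if eq_class(x, classes) <= X}

def upper_approx(X, U, classes):
    # sigma-upper(X) = {x in U : [x]_sigma meets X}
    return {x for x in U if eq_class(x, classes) & X}

U = {1, 2, 3, 4, 5, 6}
classes = [{1}, {2, 5}, {3}, {4}, {6}]   # hypothetical partition of U
X = {2, 3, 4}

lo, up = lower_approx(X, U, classes), upper_approx(X, U, classes)
# lo = {3, 4} and up = {2, 3, 4, 5} differ, so X is rough here
```

Since lo ≠ up, X is rough with respect to this partition, and the boundary region is up − lo = {2, 5}.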

### Definition 2.2 ([3])

An SS over U is a pair (f, ℘), where f : ℘ → 2^U. Therefore, an SS over U provides a parameterized collection of subsets of U.

### Definition 2.3 ([4])

The set ¬℘ = {¬e : e ∈ ℘}, where ¬e = not e, is called the NOT set of the parameters of ℘.

### Definition 2.4 ([22])

A BSS over U is a triplet (f, g : ℘), where f : ℘ → 2^U and g : ¬℘ → 2^U are mappings such that f(e) ∩ g(¬e) = ∅ for all e ∈ ℘.

In other words, a BSS over U provides a pair of parameterized families of subsets of U.

A BSS (f, g : ℘) over U can be represented through a pair of binary tables, one for each of the functions f and g. In both tables, rows are indexed by the objects of U and columns by the parameters. We use the following keys for the tables of f and g:

aij = 1 if xi ∈ f(ej), and aij = 0 if xi ∉ f(ej);
bij = 1 if xi ∈ g(¬ej), and bij = 0 if xi ∉ g(¬ej),

where aij and bij are the entries in the ith row and jth column of the respective tables.

The collection of all BSSs over U is denoted accordingly.
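The binary-table encoding above can be sketched as follows; the universe and the values of f and g are hypothetical, chosen only to show the keys aij and bij and the disjointness condition of Definition 2.4:

```python
U = ["u1", "u2", "u3", "u4"]
f = {"e1": {"u1", "u2"}, "e2": {"u3"}}          # positive side: f(e)
g = {"e1": {"u3", "u4"}, "e2": {"u1", "u4"}}    # negative side: g(not-e), indexed by e

# Definition 2.4 requires f(e) and g(not-e) to be disjoint for every parameter e
assert all(f[e] & g[e] == set() for e in f)

# binary tables: rows indexed by objects of U, columns by the parameters
a = [[1 if u in f[e] else 0 for e in f] for u in U]   # table of f
b = [[1 if u in g[e] else 0 for e in g] for u in U]   # table of g
```

With this data, row u1 of table a is [1, 0] (u1 is in f(e1) but not f(e2)), matching the keys above.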

### Definition 2.5 ([29])

(f, g : ℘) is called a full BSS if ∪e∈℘ f(e) = U = ∪¬e∈¬℘ g(¬e).

### Definition 2.6 ([29])

For a BSS (f, g : ℘) over U, the object β = (U, f, g : ℘) is called a bipolar soft approximation space. Based on β, the following four operators are defined for any X ⊆ U:

S_β+(X) = {u ∈ U : ∃ e ∈ ℘, u ∈ f(e) ⊆ X},
S_β-(X) = {u ∈ U : ∃ ¬e ∈ ¬℘, u ∈ g(¬e), g(¬e) ∩ Xc ≠ ∅},
S¯β+(X) = {u ∈ U : ∃ e ∈ ℘, u ∈ f(e), f(e) ∩ X ≠ ∅},
S¯β-(X) = {u ∈ U : ∃ ¬e ∈ ¬℘, u ∈ g(¬e) ⊆ Xc},

which are regarded as the soft lower positive, soft lower negative, soft upper positive, and soft upper negative approximations of X, respectively. Moreover,

S_β(X) = (S_β+(X), S_β-(X)),
S¯β(X) = (S¯β+(X), S¯β-(X))

are called the BSR approximations of X. Moreover, X is termed a BSRS if S_β(X) ≠ S¯β(X); otherwise, X is called bipolar soft definable.

Definition 2.6 does not fulfill the criteria of Pawlak’s RSs. For instance:

• (1) The upper approximation of a non-empty set may be empty.

• (2) The upper approximation of a subset of the universe may not contain that subset, which cannot occur in Pawlak’s RS theory.

The following example explains this observation:

### Example 2.7

Let β = (U, f, g : ℘), where U = {u1, u2, u3, u4, u5, u6} and ℘ = {e1, e2, e3, e4}. The tabular representation of (f, g : ℘) is presented in Tables 1(a) and 1(b).

From the above tables, it can be seen that u4 does not have any of the properties mentioned in ℘ or ¬℘. For a subset X of U, the soft lower positive, soft upper positive, soft lower negative, and soft upper negative approximations of X are computed as

S_β+(X) = {u3}, S¯β+(X) = {u1, u2, u3, u5},
S_β-(X) = {u2, u3, u6}, S¯β-(X) = ∅.

Object u4 is not a member of the soft lower positive approximation or the soft upper positive approximation. However, according to the given information, u4 should be a member of S¯β+(X) because u4 ∈ X. Similarly, object u1 is not a member of the soft lower negative approximation or the soft upper negative approximation. However, according to the given information, u1 should be a member of S¯β-(X) because u1 ∈ Xc.

From Table 1(a), it is clear that: [u1] = {u1}, [u2] = {u2, u5} = [u5], [u3] = {u3}, [u4] = {u4} and [u6] = {u6}. Now, according to Definition 2.1, we have and . Furthermore, u1 is a member of . However, there is no element in that is equivalent to u1, and thus its membership in is difficult to justify.

Similarly, as presented in Table 1(b), it is clear that: [u1] = [u4] = {u1, u4}, [u2] = {u2}, [u3] = [u6] = {u3, u6}, and [u5] = {u5}. Again, according to Definition 2.1, we have and . We can also observe that u2 is a member of . However, there is no element in which is equivalent to u2; therefore, its membership in is difficult to justify.

Another unusual situation may also occur: for a non-empty subset of U, the soft lower positive and soft upper positive approximations may both be empty. For instance, if we assume X = {u4}, then S_β+(X) = ∅ and S¯β+(X) = ∅. In other words, u4 is an unfortunate object that will never be an element of S_β+(X) or S¯β+(X) for any X ⊆ U.
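The anomalies above are easy to reproduce in a small sketch of the Definition 2.6 operators; the data below are hypothetical (not the paper's Table 1), constructed so that one object lies outside every f(e):

```python
def bsr_approximations(f, g, X, U):
    """The four BSR operators of Definition 2.6 (Karaaslan and Cagman [29])."""
    Xc = U - X
    lo_pos = {u for e in f for u in f[e] if f[e] <= X}    # soft lower positive
    lo_neg = {u for e in g for u in g[e] if g[e] & Xc}    # soft lower negative
    up_pos = {u for e in f for u in f[e] if f[e] & X}     # soft upper positive
    up_neg = {u for e in g for u in g[e] if g[e] <= Xc}   # soft upper negative
    return lo_pos, lo_neg, up_pos, up_neg

# Hypothetical data: u4 lies outside every f(e), so it can never reach the
# soft upper positive approximation, even when u4 belongs to X.
U = {"u1", "u2", "u3", "u4"}
f = {"e1": {"u1", "u2"}, "e2": {"u3"}}
g = {"e1": {"u3"}, "e2": {"u1"}}
X = {"u3", "u4"}

lo_pos, lo_neg, up_pos, up_neg = bsr_approximations(f, g, X, U)
# up_pos = {"u3"}: u4 is in X but absent, reproducing shortcoming (2) above
```

Here X is not contained in its own upper positive approximation, exactly the behavior that motivates the modified operators of the next section.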

## 3. Modified Bipolar Soft Rough Approximations

In this section, to overcome the shortcomings mentioned in Example 2.7, we offer a new type of BSR approximation, known as the MBSR approximations. The significant structural properties of these novel MBSR approximations are also investigated, with counterexamples where appropriate.

### Definition 3.1

Let β = (U, f, g : ℘) be a bipolar soft approximation space. Based on β, the following operators are defined for any X ⊆ U:

SL_β+(X) = ∪{f(e) : e ∈ ℘ and f(e) ⊆ X},
SU¯β+(X) = (SL_β+(Xc))c,
SU¯β-(X) = ∪{g(¬e) : ¬e ∈ ¬℘ and g(¬e) ⊆ Xc},
SL_β-(X) = (SU¯β-(Xc))c,

which are regarded as the modified soft β-lower positive, modified soft β-upper positive, modified soft β-upper negative, and modified soft β-lower negative approximations of X, respectively. Here, Xc = U − X denotes the complement of X. Moreover, the ordered pairs

MBS_β(X) = (SL_β+(X), SL_β-(X)),
MBS¯β(X) = (SU¯β+(X), SU¯β-(X))

are called the MBSR approximations of X with respect to β. Moreover, when MBS_β(X) ≠ MBS¯β(X), X is termed an MBSRS; otherwise, X is said to be modified bipolar soft β-definable. The corresponding positive, boundary, and negative regions with respect to the MBSR approximations are given as

MPOSβ(X) = (SL_β+(X), SU¯β-(X)),
MBNDβ(X) = (SU¯β+(X) \ SL_β+(X), SL_β-(X) \ SU¯β-(X)),
MNEGβ(X) = ((SU¯β+(X))c, (SL_β-(X))c).

### Remark 3.2

From Definition 3.1, we observe that X is modified bipolar soft β-definable when SL_β+(X) = SU¯β+(X) and SL_β-(X) = SU¯β-(X).

### Remark 3.3

From Definition 3.1, we conclude that

• The soft lower positive and modified soft β-lower positive approximations of X are identical. That is, S_β+(X) = SL_β+(X).

• The soft upper negative and modified soft β-upper negative approximations of X are identical. That is, S¯β-(X) = SU¯β-(X).

Here, we provide the following example to clarify the concept of MBSR approximations.

### Example 3.4

Let β = (U, f, g : ℘), where U = {u1, u2, u3, u4, u5, u6} and ℘ = {e1, e2, e3, e4}. The maps f and g are characterized as follows:

f : ℘ → 2^U, with f(e1) = {u1, u6}, f(e2) = {u3, u4}, f(e3) = ∅, f(e4) = {u2, u5};
g : ¬℘ → 2^U, with g(¬e1) = {u3, u5}, g(¬e2) = {u5}, g(¬e3) = {u2, u6}, g(¬e4) = {u4}.

For X = {u3, u4, u5}, according to Definition 3.1, we can evaluate the MBSR approximations of X as follows.

SL_β+(X)={u3,u4},SU¯β+(X)={u2,u3,u4,u5},SU¯β-(X)={u2,u6},SL_β-(X)={u1,u2,u6}.

Therefore,

MBS_β(X)=({u3,u4},{u1,u2,u6}),MBS¯β(X)=({u2,u3,u4,u5},{u2,u6}).

Consequently, X is an MBSRS because MBS_β(X) ≠ MBS¯β(X). Moreover, by direct calculation, we obtain

MPOSβ(X) = ({u3, u4}, {u2, u6}),
MBNDβ(X) = ({u2, u5}, {u1}),
MNEGβ(X) = ({u1, u6}, {u3, u4, u5}).
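Definition 3.1 can be sketched directly from its defining unions and complements. The data below are those of Example 3.4; the target set X = {u3, u4, u5} is an assumption, chosen because it is consistent with the four approximations stated above:

```python
def mbsr(f, g, X, U):
    """MBSR approximations of Definition 3.1: the lower positive operator is a
    union of f(e)'s inside X, and the remaining three follow by duality."""
    Xc = U - X
    def union_inside(h, Y):                 # union of all h(e) contained in Y
        return set().union(*[h[e] for e in h if h[e] <= Y])
    sl_pos = union_inside(f, X)             # SL_beta+(X)
    su_pos = U - union_inside(f, Xc)        # SU-bar_beta+(X) = (SL_beta+(Xc))^c
    su_neg = union_inside(g, Xc)            # SU-bar_beta-(X)
    sl_neg = U - union_inside(g, X)         # SL_beta-(X) = (SU-bar_beta-(Xc))^c
    return sl_pos, su_pos, su_neg, sl_neg

U = {"u1", "u2", "u3", "u4", "u5", "u6"}
f = {"e1": {"u1", "u6"}, "e2": {"u3", "u4"}, "e3": set(), "e4": {"u2", "u5"}}
g = {"e1": {"u3", "u5"}, "e2": {"u5"}, "e3": {"u2", "u6"}, "e4": {"u4"}}
X = {"u3", "u4", "u5"}          # assumed target set, consistent with Example 3.4

sl_pos, su_pos, su_neg, sl_neg = mbsr(f, g, X, U)
# reproduces ({u3,u4}, {u2,u3,u4,u5}, {u2,u6}, {u1,u2,u6}) as in Example 3.4
```

The boundary and negative regions then follow by the set differences and complements given after Definition 3.1.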

### Remark 3.5

The containment relationship between MBS_β(X) and MBS¯β(X) is given by SL_β+(X) ⊆ SU¯β+(X) and SU¯β-(X) ⊆ SL_β-(X).

To determine the containment relationship between the modified soft β-lower and modified soft β-upper positive approximations of X, one can obtain the following properties:

### Theorem 3.6

It is assumed that β = (U, f, g : ℘) is a bipolar soft approximation space and X, Y ⊆ U. Then, the following properties hold.

• (1) SL_β+(X)XSU¯β+(X);

• (2) SL_β+(∅) = ∅;

• (3) SU¯β+(U)=U;

• (4) X ⊆ Y ⟹ SL_β+(X) ⊆ SL_β+(Y);

• (5) XYSU¯β+(X)SU¯β+(Y);

• (6) SL_β+(X ∩ Y) ⊆ SL_β+(X) ∩ SL_β+(Y);

• (7) SL_β+(X ∪ Y) ⊇ SL_β+(X) ∪ SL_β+(Y);

• (8) SU¯β+(XY)SU¯β+(X)SU¯β+(Y);

• (9) SU¯β+(XY)SU¯β+(X)SU¯β+(Y);

• (10) SL_β+(Xc)=(SU¯β+(X))c.

Proof
• (1) According to Definition 3.1, SL_β+(X) ⊆ X is obvious. For the next inclusion, note that SU¯β+(Xc) = (SL_β+(X))c. But SL_β+(X) ⊆ X, so Xc ⊆ (SL_β+(X))c and hence Xc ⊆ SU¯β+(Xc). Replacing Xc by X, we have X ⊆ SU¯β+(X). Consequently, we obtain SL_β+(X) ⊆ X ⊆ SU¯β+(X).

• (2) By definition, SL_β+(∅) = ∪{f(e) : f(e) ⊆ ∅} = ∅.

• (3) By definition, SU¯β+(U)=(SL_β+(Uc))c=(SL_β+())c=()c=U.

• (4) Assume that u ∈ SL_β+(X). Thus, there exists some f(e) such that u ∈ f(e) ⊆ X. But X ⊆ Y, so it follows that f(e) ⊆ Y. That is, u ∈ f(e) ⊆ Y. Hence, u ∈ SL_β+(Y). Consequently, SL_β+(X) ⊆ SL_β+(Y).

• (5) Since X ⊆ Y, we have Yc ⊆ Xc. By part (4), it follows that SL_β+(Yc) ⊆ SL_β+(Xc). Therefore, (SL_β+(Xc))c ⊆ (SL_β+(Yc))c. This gives SU¯β+(X) ⊆ SU¯β+(Y).

• (6) Let u ∈ SL_β+(X ∩ Y). Thus, there exists some f(e) such that u ∈ f(e) ⊆ X ∩ Y. This implies that f(e) ⊆ X and f(e) ⊆ Y. Consequently, u ∈ SL_β+(X) and u ∈ SL_β+(Y). So, u ∈ SL_β+(X) ∩ SL_β+(Y). Hence, SL_β+(X ∩ Y) ⊆ SL_β+(X) ∩ SL_β+(Y), as required.

• (7) Let u ∈ SL_β+(X) ∪ SL_β+(Y). Then u ∈ SL_β+(X) or u ∈ SL_β+(Y), so there exists some f(e) with u ∈ f(e) ⊆ X or u ∈ f(e) ⊆ Y. In either case, f(e) ⊆ X ∪ Y, so u ∈ SL_β+(X ∪ Y). Hence, SL_β+(X) ∪ SL_β+(Y) ⊆ SL_β+(X ∪ Y).

• (8) By Definition 3.1, we have

SU¯β+(X ∩ Y) = (SL_β+((X ∩ Y)c))c = (SL_β+(Xc ∪ Yc))c ⊆ (SL_β+(Xc) ∪ SL_β+(Yc))c [by part (7)] = (SL_β+(Xc))c ∩ (SL_β+(Yc))c = SU¯β+(X) ∩ SU¯β+(Y).

Hence, SU¯β+(XY)SU¯β+(X)SU¯β+(Y).

• (9) By Definition 3.1,

SU¯β+(X ∪ Y) = (SL_β+((X ∪ Y)c))c = (SL_β+(Xc ∩ Yc))c ⊇ (SL_β+(Xc) ∩ SL_β+(Yc))c [by part (6)] = (SL_β+(Xc))c ∪ (SL_β+(Yc))c = SU¯β+(X) ∪ SU¯β+(Y).

Therefore, SU¯β+(XY)SU¯β+(X)SU¯β+(Y).

• (10) By definition of modified soft β-upper positive approximation of , we have SU¯β+(X)=(SL_β+(Xc))c. This implies that SL_β+(Xc)=(SU¯β+(X))c.

This completes the proof.

The next example indicates that the inclusions in parts (6)–(9) in Theorem 3.6 may hold strictly.

### Example 3.7

Let β = (U, f, g : ℘), where U = {u1, u2, u3, u4, u5, u6} and ℘ = {e1, e2, e3, e4, e5, e6}. The mappings f and g are as follows:

f : ℘ → 2^U, with f(e1) = {u1, u2, u3}, f(e2) = {u1, u4}, f(e3) = {u1, u3}, f(e4) = {u3, u5, u6}, f(e5) = {u2, u4}, f(e6) = {u1, u2, u5};
g : ¬℘ → 2^U, with g(¬e1) = {u4, u5}, g(¬e2) = {u5}, g(¬e3) = {u2, u6}, g(¬e4) = {u4}, g(¬e5) = {u3, u6}, g(¬e6) = {u3, u4, u6}.

Now, consider subsets X and Y of U for which, by direct calculation, we obtain:

SL_β+(X) = {u1, u2, u4}, SL_β+(Y) = {u1, u3},
SL_β+(X ∩ Y) = ∅, SL_β+(X ∪ Y) = {u1, u2, u3, u4, u5}.

Clearly, SL_β+(X ∩ Y) = ∅ ⊊ {u1} = SL_β+(X) ∩ SL_β+(Y), which indicates that the inclusion in part (6) of Theorem 3.6 may be strict. Similarly, SL_β+(X) ∪ SL_β+(Y) = {u1, u2, u3, u4} ⊊ {u1, u2, u3, u4, u5} = SL_β+(X ∪ Y), which shows that the inclusion in part (7) of Theorem 3.6 may hold strictly.

Now, for another choice of X, Y ⊆ U, we obtain

SU¯β+(X) = U, SU¯β+(Y) = {u4, u5, u6}, SU¯β+(X ∩ Y) = ∅.

Clearly, SU¯β+(X ∩ Y) = ∅ ⊊ {u4, u5, u6} = SU¯β+(X) ∩ SU¯β+(Y), which shows that the inclusion in part (8) of Theorem 3.6 may be strict.

Similarly, for a suitable choice of X and Y, we have

SU¯β+(X) = {u1}, SU¯β+(Y) = {u3, u6}, SU¯β+(X ∪ Y) = {u1, u3, u5, u6}.

Clearly, SU¯β+(X) ∪ SU¯β+(Y) = {u1, u3, u6} ⊊ {u1, u3, u5, u6} = SU¯β+(X ∪ Y), implying that the inclusion in part (9) of Theorem 3.6 may be strict.
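Because the universe in Example 3.7 is small, the inclusions of Theorem 3.6 can also be checked exhaustively. The sketch below brute-forces parts (1) and (6)-(9) over every pair of subsets of U, using the map f from Example 3.7:

```python
from itertools import combinations

U = frozenset({"u1", "u2", "u3", "u4", "u5", "u6"})
f = {"e1": {"u1", "u2", "u3"}, "e2": {"u1", "u4"}, "e3": {"u1", "u3"},
     "e4": {"u3", "u5", "u6"}, "e5": {"u2", "u4"}, "e6": {"u1", "u2", "u5"}}

def sl(X):                     # SL_beta+(X): union of all f(e) contained in X
    return set().union(*[v for v in f.values() if v <= X])

def su(X):                     # SU-bar_beta+(X) = (SL_beta+(X^c))^c
    return U - sl(U - X)

subsets = [set(c) for r in range(len(U) + 1) for c in combinations(sorted(U), r)]
for X in subsets:
    assert sl(X) <= X <= su(X)                  # part (1)
    for Y in subsets:
        assert sl(X & Y) <= sl(X) & sl(Y)       # part (6)
        assert sl(X) | sl(Y) <= sl(X | Y)       # part (7)
        assert su(X & Y) <= su(X) & su(Y)       # part (8)
        assert su(X) | su(Y) <= su(X | Y)       # part (9)
```

All 64 × 64 pairs satisfy the inclusions, while the examples above show that each inclusion can be strict.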

To determine the containment relationship between the modified soft β-upper negative and modified soft β-lower negative approximations of X, we obtain the following results:

### Theorem 3.8

Suppose that β = (U, f, g : ℘) is a bipolar soft approximation space and X, Y ⊆ U. Then, the following properties hold.

• (1) SU¯β-(X)XcSL_β-(X);

• (2) SU¯β-(U)=;

• (3) SL_β-(∅) = U;

• (4) X ⊆ Y ⟹ SU¯β-(X) ⊇ SU¯β-(Y);

• (5) X ⊆ Y ⟹ SL_β-(X) ⊇ SL_β-(Y);

• (6) SU¯β-(X ∩ Y) ⊇ SU¯β-(X) ∪ SU¯β-(Y);

• (7) SU¯β-(X ∪ Y) ⊆ SU¯β-(X) ∩ SU¯β-(Y);

• (8) SL_β-(X ∩ Y) ⊇ SL_β-(X) ∪ SL_β-(Y);

• (9) SL_β-(X ∪ Y) ⊆ SL_β-(X) ∩ SL_β-(Y);

• (10) SU¯β-(Xc)=(SL_β-(X))c.

Proof
• (1) According to Definition 3.1, SU¯β-(X) ⊆ Xc is obvious. For the other inclusion, note that SL_β-(Xc) = (SU¯β-(X))c. But SU¯β-(X) ⊆ Xc, so X ⊆ (SU¯β-(X))c and hence X ⊆ SL_β-(Xc). Using Xc instead of X, we have Xc ⊆ SL_β-(X). Consequently, SU¯β-(X) ⊆ Xc ⊆ SL_β-(X).

• (2) By definition, SU¯β-(U)={g(¬e),¬e:g(¬e)Uc=}=.

• (3) By definition, SL_β-()=(SU¯β-(c))c=(SU¯β-(U))c=()c=U.

• (4) Let u ∈ SU¯β-(Y) = ∪{g(¬e) : ¬e ∈ ¬℘, g(¬e) ⊆ Yc}. Therefore, there exists some g(¬e) such that u ∈ g(¬e) ⊆ Yc. Because X ⊆ Y, we have Yc ⊆ Xc, so g(¬e) ⊆ Xc. Therefore, u ∈ SU¯β-(X). Consequently, SU¯β-(Y) ⊆ SU¯β-(X).

• (5) As X ⊆ Y, we have Yc ⊆ Xc. By part (4), it follows that SU¯β-(Xc) ⊆ SU¯β-(Yc). Thus, (SU¯β-(Yc))c ⊆ (SU¯β-(Xc))c. Hence, SL_β-(Y) ⊆ SL_β-(X).

• (6) Let u ∈ SU¯β-(X) ∪ SU¯β-(Y). Then u ∈ SU¯β-(X) or u ∈ SU¯β-(Y), so there exists some g(¬e) such that u ∈ g(¬e) and either g(¬e) ⊆ Xc or g(¬e) ⊆ Yc. In either case, g(¬e) ⊆ Xc ∪ Yc = (X ∩ Y)c, and hence u ∈ SU¯β-(X ∩ Y). Therefore, SU¯β-(X) ∪ SU¯β-(Y) ⊆ SU¯β-(X ∩ Y).

• (7) Suppose that u ∈ SU¯β-(X ∪ Y). Thus, there exists some g(¬e) such that u ∈ g(¬e) ⊆ (X ∪ Y)c = Xc ∩ Yc. This implies that g(¬e) ⊆ Xc and g(¬e) ⊆ Yc. Consequently, u ∈ SU¯β-(X) and u ∈ SU¯β-(Y). Therefore, u ∈ SU¯β-(X) ∩ SU¯β-(Y). Hence, we obtain SU¯β-(X ∪ Y) ⊆ SU¯β-(X) ∩ SU¯β-(Y).

• (8) By Definition 3.1,

SL_β-(X ∩ Y) = (SU¯β-((X ∩ Y)c))c = (SU¯β-(Xc ∪ Yc))c ⊇ (SU¯β-(Xc) ∩ SU¯β-(Yc))c [by part (7)] = (SU¯β-(Xc))c ∪ (SU¯β-(Yc))c = SL_β-(X) ∪ SL_β-(Y).

Hence, SL_β-(X ∩ Y) ⊇ SL_β-(X) ∪ SL_β-(Y).

• (9) By Definition 3.1, it follows that

SL_β-(X ∪ Y) = (SU¯β-((X ∪ Y)c))c = (SU¯β-(Xc ∩ Yc))c ⊆ (SU¯β-(Xc) ∪ SU¯β-(Yc))c [by part (6)] = (SU¯β-(Xc))c ∩ (SU¯β-(Yc))c = SL_β-(X) ∩ SL_β-(Y).

Hence, SL_β-(X ∪ Y) ⊆ SL_β-(X) ∩ SL_β-(Y).

• (10) By definition of modified soft β-lower negative approximation of , we have SL_β-(X)=(SU¯β-(Xc))c.

This indicates that SU¯β-(Xc)=(SL_β-(X))c.

This completes the proof.

The following example indicates that the inclusions in parts (6) to (9) of Theorem 3.8 may be strict.

### Example 3.9

Let β = (U, f, g : ℘) be as given in Example 3.7. Take subsets X and Y of U for which, by direct computation, we obtain

SU¯β-(X) = {u3, u4, u5, u6}, SL_β-(X) = U,
SU¯β-(Y) = {u4, u5}, SL_β-(Y) = U,
SU¯β-(X ∩ Y) = {u2, u3, u4, u5, u6}, SL_β-(X ∪ Y) = {u1, u3, u4, u5}.

Clearly, SU¯β-(X ∩ Y) = {u2, u3, u4, u5, u6} ⊋ {u3, u4, u5, u6} = SU¯β-(X) ∪ SU¯β-(Y); thus, the inclusion in part (6) of Theorem 3.8 may hold strictly.

Similarly, SL_β-(X ∪ Y) = {u1, u3, u4, u5} ⊊ U = SL_β-(X) ∩ SL_β-(Y), which shows that the inclusion in part (9) of Theorem 3.8 may be strict.

Now, consider subsets X and Y of U for which

SU¯β-(X) = {u2, u6}, SU¯β-(Y) = {u3, u4, u6}, SU¯β-(X ∪ Y) = ∅.

Clearly, SU¯β-(X ∪ Y) = ∅ ⊊ {u6} = SU¯β-(X) ∩ SU¯β-(Y), which implies that the inclusion in part (7) of Theorem 3.8 may hold strictly.

Finally, consider subsets X and Y of U for which

SL_β-(X) = {u1, u2, u5}, SL_β-(Y) = {u1, u3, u4, u5}, SL_β-(X ∩ Y) = U.

Clearly, SL_β-(X) ∪ SL_β-(Y) = {u1, u2, u3, u4, u5} ⊊ U = SL_β-(X ∩ Y), which shows that the inclusion in part (8) of Theorem 3.8 may be strict.
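The negative-side inclusions of Theorem 3.8 can be brute-forced in the same exhaustive way, using the map g from Example 3.7:

```python
from itertools import combinations

U = frozenset({"u1", "u2", "u3", "u4", "u5", "u6"})
g = {"e1": {"u4", "u5"}, "e2": {"u5"}, "e3": {"u2", "u6"},
     "e4": {"u4"}, "e5": {"u3", "u6"}, "e6": {"u3", "u4", "u6"}}

def su_neg(X):                 # SU-bar_beta-(X): union of all g(not-e) inside X^c
    return set().union(*[v for v in g.values() if v <= U - X])

def sl_neg(X):                 # SL_beta-(X) = (SU-bar_beta-(X^c))^c
    return U - su_neg(U - X)

subsets = [set(c) for r in range(len(U) + 1) for c in combinations(sorted(U), r)]
for X in subsets:
    assert su_neg(X) <= U - X <= sl_neg(X)              # part (1)
    for Y in subsets:
        assert su_neg(X) | su_neg(Y) <= su_neg(X & Y)   # part (6)
        assert su_neg(X | Y) <= su_neg(X) & su_neg(Y)   # part (7)
        assert sl_neg(X) | sl_neg(Y) <= sl_neg(X & Y)   # part (8)
        assert sl_neg(X | Y) <= sl_neg(X) & sl_neg(Y)   # part (9)
```

Every pair of subsets satisfies the inclusions, and Example 3.9 exhibits cases where each one is strict.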

### Remark 3.10

In BSRSs [29] and MRBSs [30], we observe the following:

• The positive lower and upper approximations of the empty set coincide. However, in our proposed MBSRS model, SL_β+(∅) = SU¯β+(∅) does not hold in general.

• The negative lower and upper approximations of the empty set coincide. However, in our proposed MBSRS model, SL_β-(∅) = SU¯β-(∅) does not hold in general.

• The positive lower and upper approximations of U coincide. However, in our proposed MBSRS model, SL_β+(U) = SU¯β+(U) does not hold in general.

• The negative lower and upper approximations of U coincide. However, in our proposed MBSRS model, SL_β-(U) = SU¯β-(U) does not hold in general.

The following result shows the condition under which the modified soft β-lower positive and modified soft β-upper positive approximations of U and ∅ coincide:

### Proposition 3.11

Let β = (U, f, g : ℘) be such that ∪e∈℘ f(e) = U. Then,

• (1) SL_β+(U)=U=SU¯β+(U);

• (2) SL_β+()==SU¯β+().

Proof
• (1) From part (3) of Theorem 3.6, it follows that SU¯β+(U) = U. Now, according to Definition 3.1, we have SL_β+(U) = ∪{f(e) : e ∈ ℘, f(e) ⊆ U} = ∪e∈℘ f(e). Since ∪e∈℘ f(e) = U, it follows that SL_β+(U) = U. Hence, SL_β+(U) = U = SU¯β+(U).

• (2) From part (1) of Theorem 3.6, it follows that SL_β+(∅) = ∅. Now, by Definition 3.1 and part (1), SU¯β+(∅) = (SL_β+(∅c))c = (SL_β+(U))c = Uc = ∅. That is, SU¯β+(∅) = ∅. Consequently, SL_β+(∅) = ∅ = SU¯β+(∅).

The next result shows the condition under which the modified soft β-upper negative and modified soft β-lower negative approximations of ∅︀ and coincide.

### Proposition 3.12

Let β = (U, f, g : ℘) be such that ∪¬e∈¬℘ g(¬e) = U. Then,

• (1) SL_β-()=U=SU¯β-();

• (2) SL_β-(U)==SU¯β-(U).

Proof
• (1) From part (3) of Theorem 3.8, it follows that SL_β-(∅) = U. Now, by Definition 3.1, SU¯β-(∅) = ∪{g(¬e) : ¬e ∈ ¬℘, g(¬e) ⊆ ∅c = U} = ∪¬e∈¬℘ g(¬e) = U. That is, SU¯β-(∅) = U. Thus, SL_β-(∅) = U = SU¯β-(∅).

• (2) From part (2) of Theorem 3.8, we have SU¯β-(U) = ∅. Now, by Definition 3.1, we get SL_β-(U) = (SU¯β-(Uc))c = (SU¯β-(∅))c = Uc = ∅. That is, SL_β-(U) = ∅. Hence, SL_β-(U) = ∅ = SU¯β-(U).

### Proposition 3.13

Let β = (U, f, g : ℘) be a bipolar soft approximation space. Then, the following are equivalent:

• (1) (f, g : ℘) is a full BSS;

• (2) MBS_β(U) = (U, ∅);

• (3) MBS¯β(U)=(U,);

• (4) MBS_β(∅) = (∅, U);

• (5) MBS¯β()=(,U).

Proof

Direct consequence of Proposition 3.11 and Proposition 3.12.

The next result shows the relationship between the soft upper positive, soft lower negative, modified soft β-upper positive, and modified soft β-lower negative approximations of X.

### Proposition 3.14

Let (f, g : ℘) be a full BSS and β = (U, f, g : ℘) the corresponding bipolar soft approximation space. Then, for any X ⊆ U, the following properties hold.

• (1) SU¯β+(X)S¯β+(X);

• (2) SL_β-(X) ⊆ S_β-(X).

Proof
• (1) Let x ∉ S¯β+(X). Then, for all e ∈ ℘ with x ∈ f(e), we have f(e) ∩ X = ∅, that is, f(e) ⊆ Xc. Since (f, g : ℘) is a full BSS, x ∈ f(e) for some e ∈ ℘; hence x ∈ f(e) ⊆ Xc, which gives x ∈ SL_β+(Xc). Therefore, x ∉ (SL_β+(Xc))c = SU¯β+(X). Consequently, SU¯β+(X) ⊆ S¯β+(X).

• (2) Assume that x ∉ S_β-(X). Then, for all ¬e ∈ ¬℘ with x ∈ g(¬e), we have g(¬e) ∩ Xc = ∅, that is, g(¬e) ⊆ X. Since (f, g : ℘) is a full BSS, x ∈ g(¬e) for some ¬e ∈ ¬℘; hence x ∈ SU¯β-(Xc). Thus, x ∉ (SU¯β-(Xc))c = SL_β-(X). Hence, SL_β-(X) ⊆ S_β-(X).

### Remark 3.15

The above proposition reveals that the modified soft β-upper positive approximation of X is contained in the soft upper positive approximation of X, and the modified soft β-lower negative approximation of X is contained in the soft lower negative approximation of X; the modified approximations are thus finer.

The next example shows that the inclusions in parts (1) and (2) of the above proposition might be strict.

### Example 3.16

Let β = (U, f, g : ℘), where U = {u1, u2, u3, u4, u5} and ℘ = {e1, e2, e3, e4, e5, e6}. The mappings f and g are as follows:

f : ℘ → 2^U, with f(e1) = {u1, u3}, f(e2) = {u1, u4, u5}, f(e3) = {u2}, f(e4) = {u2, u4, u5}, f(e5) = {u1, u2}, f(e6) = {u3, u5};
g : ¬℘ → 2^U, with g(¬e1) = {u2, u5}, g(¬e2) = {u3}, g(¬e3) = {u3, u4, u5}, g(¬e4) = {u1, u3}, g(¬e5) = {u5}, g(¬e6) = {u2, u4}.

If we take a suitable X ⊆ U, then

SU¯β+(X)={u1,u3,u4,u5},S¯β+(X)={u1,u2,u3,u4,u5}.

Clearly, SU¯β+(X) ⊊ S¯β+(X), which indicates that the inclusion in part (1) of Proposition 3.14 may hold strictly.

Now, for another X ⊆ U, we have

SL_β-(X)={u2,u4,u5},S_β-(X)={u2,u3,u4,u5}.

Clearly, SL_β-(X) = {u2, u4, u5} ⊊ {u2, u3, u4, u5} = S_β-(X), indicating that the inclusion in part (2) of Proposition 3.14 may be strict.

### Theorem 3.17

Let β = (U, f, g : ℘) be a bipolar soft approximation space and X ⊆ U. Then, the following properties hold.

• (1) SL_β+(SL_β+(X)) = SL_β+(X);

• (2) SU¯β+(SU¯β+(X))=SU¯β+(X);

• (3) SL_β+(SU¯β+(X))SU¯β+(X);

• (4) SU¯β+(SL_β+(X))SL_β+(X).

Proof
• (1) From part (1) of Theorem 3.6, it follows that SL_β+(SL_β+(X)) ⊆ SL_β+(X). For the reverse inclusion, let u ∈ SL_β+(X). Then, there exists some e such that u ∈ f(e) ⊆ X. Therefore, f(e) ⊆ SL_β+(X), so u ∈ f(e) ⊆ SL_β+(X). Thus, u ∈ SL_β+(SL_β+(X)). Consequently, SL_β+(X) ⊆ SL_β+(SL_β+(X)).

• (2) By Definition 3.1, we have

SU¯β+(SU¯β+(X)) = (SL_β+((SU¯β+(X))c))c = (SL_β+(SL_β+(Xc)))c = (SL_β+(Xc))c [by part (1)] = SU¯β+(X).

Hence, SU¯β+(SU¯β+(X))=SU¯β+(X).

The proofs of parts (3) and (4) are quite clear from part (1) of Theorem 3.6.

### Remark 3.18

From parts (1) and (2) of the above theorem, it can be observed that the modified soft β-lower positive approximation of SL_β+(X) and the modified soft β-upper positive approximation of SU¯β+(X) with respect to β are invariant.

The next example shows that the inclusions in parts (3) and (4) of the above theorem may hold strictly.

### Example 3.19

If we consider X = {u1} in Example 3.7, then

SU¯β+(X) = {u1}, SL_β+(SU¯β+(X)) = ∅.

Clearly, SL_β+(SU¯β+(X)) = ∅ ⊊ {u1} = SU¯β+(X), indicating that the inclusion in part (3) of Theorem 3.17 may be strict.

Similarly, if we take X = {u1, u3} in Example 3.7, then

SL_β+(X) = {u1, u3}, SU¯β+(SL_β+(X)) = {u1, u3, u5, u6}.

Clearly, SU¯β+(SL_β+(X)) = {u1, u3, u5, u6} ⊋ {u1, u3} = SL_β+(X), showing that the inclusion in part (4) of Theorem 3.17 holds strictly.

### Theorem 3.20

Let β = (U, f, g : ℘) be a bipolar soft approximation space and X ⊆ U. Then, the following properties hold.

• (1) SU¯β-(SL_β-(X))=(SL_β-(X))c;

• (2) SL_β-(SU¯β-(X))=(SU¯β-(X))c;

• (3) SU¯β-(SU¯β-(X))(SU¯β-(X))c;

• (4) SL_β-(SL_β-(X)) ⊇ (SL_β-(X))c.

Proof
• (1) By part (1) of Theorem 3.8, we have SU¯β-(SL_β-(X)) ⊆ (SL_β-(X))c. For the reverse inclusion, let u ∈ Y = (SL_β-(X))c = SU¯β-(Xc) = ∪{g(¬e) : ¬e ∈ ¬℘, g(¬e) ⊆ X}. Then, there exists some g(¬e) such that u ∈ g(¬e) ⊆ X. Since every such g(¬e) is contained in ∪{g(¬e′) : g(¬e′) ⊆ X} = Y = (SL_β-(X))c, we obtain u ∈ SU¯β-(Yc) = SU¯β-(SL_β-(X)). Hence, Y ⊆ SU¯β-(SL_β-(X)), that is, (SL_β-(X))c ⊆ SU¯β-(SL_β-(X)). Consequently, SU¯β-(SL_β-(X)) = (SL_β-(X))c.

• (2) By Definition 3.1, it follows that

SL_β-(SU¯β-(X)) = (SU¯β-((SU¯β-(X))c))c = (SU¯β-(SL_β-(Xc)))c [by part (10) of Theorem 3.8] = ((SL_β-(Xc))c)c [by part (1)] = SL_β-(Xc) = (SU¯β-(X))c [by part (10) of Theorem 3.8].

Hence, SL_β-(SU¯β-(X))=(SU¯β-(X))c.

The proofs of parts (3) and (4) are quite clear from part (1) of Theorem 3.8.

The next example shows that the inclusions in parts (3) and (4) of the above theorem may hold strictly.

### Example 3.21

If we consider X = {u3, u5, u6} in Example 3.7, then

SU¯β-(X)={u4},SL_β-(X)={u1,u2,u4},SU¯β-(SU¯β-(X))={u2,u3,u5,u6},SL_β-(SL_β-(X))={u1,u2,u3,u5,u6}.

Clearly, SU¯β-(SU¯β-(X)) = {u2, u3, u5, u6} ⊊ {u1, u2, u3, u5, u6} = (SU¯β-(X))c, indicating that the inclusion in part (3) of Theorem 3.20 may be strict. Also, SL_β-(SL_β-(X)) = {u1, u2, u3, u5, u6} ⊋ {u3, u5, u6} = (SL_β-(X))c, which shows that the inclusion in part (4) of Theorem 3.20 may hold strictly.

A comparison between BSR-approximations and MBSR approximations is presented in Table 2.

## 4. Uncertainty Measures Associated with MBSRS

In [2], Pawlak identified two quantitative measures for quantifying the inaccuracy of RS approximations, which can help convey how precisely the information is connected to a certain equivalence relation for a certain classification. Generally, the existence of a boundary region causes uncertainty in a set: the greater the boundary region of a set, the lower its accuracy.

According to Pawlak [2], the accuracy and roughness measures of X are defined as

A(X) = |σ̲(X)| / |σ̄(X)|, R(X) = 1 − A(X),

where | • | denotes the cardinality of a set.

In other words, A(X) captures the degree of completeness of the knowledge of the set X, whereas R(X) is viewed as the degree of incompleteness of the knowledge of the set X.

As a generalization of these measures, the present section introduces some measures in the framework of the MBSRS and investigates some of its fundamental properties.

### Definition 4.1

Let β = (U, f, g : ℘) be a bipolar soft approximation space and ∅ ≠ X ⊆ U. Then, the accuracy measure of X in the MBSRS environment is characterized as follows:

AM(X)=(Xβ+,Xβ-),

where

Xβ+ = |SL_β+(X)| / |SU¯β+(X)|,

and

Xβ- = |SU¯β-(X)| / |SL_β-(X)|.

The roughness measure for in the MBSRS environment is characterized as follows:

RM(X)=(1,1)-(Xβ+,Xβ-)=(1-Xβ+,1-Xβ-).

Clearly, 0 ≤ Xβ+ ≤ 1 and 0 ≤ Xβ- ≤ 1.

### Remark 4.2

If AM(X) is the accuracy of X, then:

• (i) X is an MBSRS-definable set if and only if AM(X) = (1, 1).

• (ii) If AM(X) ≠ (1, 1), then the set X has a nonempty boundary region and is consequently an MBSRS.

From Definition 4.1, we can prove the following results:

### Proposition 4.3

Let β = (U, f, g : ℘) and ∅ ≠ X ⊆ U. Then,

• (1) AM(X) = (0, 0) ⟺ SL_β+(X) = ∅ = SU¯β-(X);

• (2) AM(X)=(1,1)SL_β+(X)=SU¯β+(X) and SU¯β-(X)=SL_β-(X);

• (3) (0, 0) ≤ AM(X) ≤ (1, 1).

Proof

Straightforward.

In 2001, Gediga and Düntsch [42] introduced a measure of the precision of the approximation of X, which is given by

P(X) = |σ̲(X)| / |X|.

This is simply the relative number of elements of X that can be approximated by σ. It is important to note that P(X) requires complete knowledge of X, whereas A(X) does not.

It can be generalized in the context of the MBSRS as follows:

### Definition 4.4

Let β = (U, f, g : ℘) and ∅ ≠ X ⊆ U. Then, the precision measure of X in the MBSRS environment is defined as

PM(X)=(Xβ+,Xβ-),

where

Xβ+ = |SL_β+(X)| / |X|,

and

Xβ- = |SU¯β-(X)| / |X|.

Clearly, 0 ≤ Xβ+ ≤ 1 and 0 ≤ Xβ- ≤ 1.

From the above definition, we can derive the following properties of :

### Proposition 4.5

Let β = (U, f, g : ℘) and ∅ ≠ X ⊆ U. Then,

• (1) PM(X) = (0, 0) ⟺ SL_β+(X) = ∅ = SU¯β-(X);

• (2) PM(X)=(1,1)SL_β+(X)=X=SU¯β-(X);

• (3) Xβ+Xβ+ and Xβ-Xβ-;

• (4) .

Proof

Straightforward.

Yao [43] revised some of the properties of the accuracy measure given by Pawlak [2] and proposed another measure, known as the measure of the completeness of knowledge, which is given by

$$C_k(X) = \frac{|\underline{\sigma}(X)| + |\underline{\sigma}(X^c)|}{|U|}.$$

In the MBSRS framework, it can be characterized as

### Definition 4.6

Let $(f, g : ℘)$ be a BSS over $U$ and $X \subseteq U$. Then, the measure of the completeness of knowledge of $X$ under the MBSRS environment is defined as

$$MC_k(X) = (X^{\#\beta+}, X^{\#\beta-}),$$

where

$$X^{\#\beta+} = \frac{|\underline{S}^{+}_{\beta}(X)| + |\underline{S}^{+}_{\beta}(X^c)|}{|U|}$$

and

$$X^{\#\beta-} = \frac{|\overline{S}^{-}_{\beta}(X)| + |\overline{S}^{-}_{\beta}(X^c)|}{|U|}.$$

Clearly, $0 < X^{\#\beta+} \le 1$ and $0 < X^{\#\beta-} \le 1$. In other words, $MC_k(X)$ cannot be zero for any $X \subseteq U$.
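This measure needs the approximations of both $X$ and its complement $X^c$. A minimal sketch, using the approximation sets of the worked example later in this section (the function name is an illustrative assumption):

```python
# MBSRS completeness-of-knowledge measure (Definition 4.6): a minimal sketch.
# Inputs: lower-positive approximations of X and X^c, upper-negative
# approximations of X and X^c, and the universe U.

def completeness_measure(lp_X, lp_Xc, un_X, un_Xc, U):
    return ((len(lp_X) + len(lp_Xc)) / len(U),
            (len(un_X) + len(un_Xc)) / len(U))

U = {'u1', 'u2', 'u3', 'u4', 'u5', 'u6'}
MCk = completeness_measure({'u3', 'u4'}, {'u1', 'u6'},
                           {'u2', 'u6'}, {'u3', 'u4', 'u5'}, U)
print(MCk)  # approximately (0.667, 0.833)
```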

The following proposition shows the condition under which $MC_k(X)$ attains its maximum value.

### Proposition 4.7

Let $(f, g : ℘)$ be a full BSS over $U$. Then, $MC_k(X) = (1, 1)$ whenever $X = \emptyset$ or $X = U$.

Proof

Because $(f, g : ℘)$ is a full BSS, $\bigcup_{e} f(e) = U$ and $\bigcup_{\neg e} g(\neg e) = U$. We now prove the required result for the two cases.

Case 1

When $X = \emptyset$, then

$$X^{\#\beta+} = \frac{|\underline{S}^{+}_{\beta}(\emptyset)| + |\underline{S}^{+}_{\beta}(\emptyset^c)|}{|U|} = \frac{|\underline{S}^{+}_{\beta}(\emptyset)| + |\underline{S}^{+}_{\beta}(U)|}{|U|} = \frac{|\emptyset| + |U|}{|U|} \quad (\text{by Proposition 3.11}) = \frac{0 + |U|}{|U|} = 1.$$

Similarly,

$$X^{\#\beta-} = \frac{|\overline{S}^{-}_{\beta}(\emptyset)| + |\overline{S}^{-}_{\beta}(\emptyset^c)|}{|U|} = \frac{|\overline{S}^{-}_{\beta}(\emptyset)| + |\overline{S}^{-}_{\beta}(U)|}{|U|} = \frac{|U| + |\emptyset|}{|U|} \quad (\text{by Proposition 3.12}) = \frac{|U| + 0}{|U|} = 1.$$

Therefore, $MC_k(X) = (X^{\#\beta+}, X^{\#\beta-}) = (1, 1)$.

Case 2

When $X = U$, then

$$X^{\#\beta+} = \frac{|\underline{S}^{+}_{\beta}(U)| + |\underline{S}^{+}_{\beta}(U^c)|}{|U|} = \frac{|\underline{S}^{+}_{\beta}(U)| + |\underline{S}^{+}_{\beta}(\emptyset)|}{|U|} = \frac{|U| + |\emptyset|}{|U|} \quad (\text{by Proposition 3.11}) = \frac{|U| + 0}{|U|} = 1.$$

Also,

$$X^{\#\beta-} = \frac{|\overline{S}^{-}_{\beta}(U)| + |\overline{S}^{-}_{\beta}(U^c)|}{|U|} = \frac{|\overline{S}^{-}_{\beta}(U)| + |\overline{S}^{-}_{\beta}(\emptyset)|}{|U|} = \frac{|\emptyset| + |U|}{|U|} \quad (\text{by Proposition 3.12}) = \frac{0 + |U|}{|U|} = 1.$$

Consequently, $MC_k(X) = (X^{\#\beta+}, X^{\#\beta-}) = (1, 1)$. Hence, in both cases, we obtain $MC_k(X) = (1, 1)$.

Here, we elaborate on the following example to explain the concepts of $A_M(X)$, $P_M(X)$, and $MC_k(X)$.

The MBSR approximations of $X$ are as follows:

$$\underline{S}^{+}_{\beta}(X) = \{u_3, u_4\}, \quad \overline{S}^{+}_{\beta}(X) = \{u_2, u_3, u_4, u_5\}, \quad \overline{S}^{-}_{\beta}(X) = \{u_2, u_6\}, \quad \underline{S}^{-}_{\beta}(X) = \{u_1, u_2, u_6\}.$$

Also,

$$\underline{S}^{+}_{\beta}(X^c) = \{u_1, u_6\}, \qquad \overline{S}^{-}_{\beta}(X^c) = \{u_3, u_4, u_5\}.$$

Therefore,

$$A_M(X) = (X^{\beta+}, X^{\beta-}) = \left(\tfrac{2}{4}, \tfrac{2}{3}\right) = (0.500, 0.666),$$

$$P_M(X) = (X^{\beta+}, X^{\beta-}) = \left(\tfrac{2}{3}, \tfrac{2}{3}\right) = (0.666, 0.666),$$

$$MC_k(X) = (X^{\#\beta+}, X^{\#\beta-}) = \left(\tfrac{2+2}{6}, \tfrac{2+3}{6}\right) = \left(\tfrac{4}{6}, \tfrac{5}{6}\right) = (0.666, 0.833).$$
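These three values can be checked mechanically. The script below recomputes them from the listed approximation sets; $|U| = 6$ and $|X| = 3$ are inferred from the stated fractions, since the set $X$ itself is not listed in the text.

```python
# Recomputing AM(X), PM(X), and MCk(X) for the approximation sets above.
lower_pos = {'u3', 'u4'}
upper_pos = {'u2', 'u3', 'u4', 'u5'}
upper_neg = {'u2', 'u6'}
lower_neg = {'u1', 'u2', 'u6'}
lower_pos_c = {'u1', 'u6'}        # lower-positive approximation of X^c
upper_neg_c = {'u3', 'u4', 'u5'}  # upper-negative approximation of X^c
size_U, size_X = 6, 3             # inferred from the stated fractions

AM = (len(lower_pos) / len(upper_pos), len(upper_neg) / len(lower_neg))
PM = (len(lower_pos) / size_X, len(upper_neg) / size_X)
MCk = ((len(lower_pos) + len(lower_pos_c)) / size_U,
       (len(upper_neg) + len(upper_neg_c)) / size_U)
print(AM, PM, MCk)  # (0.5, 0.67) (0.67, 0.67) (0.67, 0.83) up to rounding
```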

Group decision-making (GDM) is an efficient strategy for dealing with complicated DM problems in which numerous experts decide on a set of alternatives. The aim is to integrate the opinions expressed by the experts to find the alternative that is most agreeable to the group of experts as a whole. In a complex society, GDM techniques must consider various criteria and attributes. As a result, with rapid development in numerous domains, studies on GDM that specifically involve multiple attributes have become a main focus and have achieved tremendous progress.

In general, MAGDM is a technique in which a team of experts (DMs) collaborates to determine the optimum option from a set of available alternatives that are categorized based on their features in a specific context.

In this section, we describe the design of a robust MAGDM technique using MBSRS. We provide a brief statement of a MAGDM problem within the context of the MBSRS and then provide a generic mathematical formulation for the MAGDM problem based on MBSRS theory.

### 5.1 Problem Description

Assume that $U = \{u_1, u_2, \ldots, u_n\}$ is a set of $n$ objects and $E = \{e_1, e_2, \ldots, e_m\}$ is the set of all possible object attributes. Suppose we have a group of experts consisting of $k$ invited DMs. Each expert needs to examine all the objects of $U$ and is requested to point out only "the optimal alternatives" as his/her evaluation result, according to his/her experience and professional knowledge. Therefore, each expert's primary evaluation result is a subset of $U$. Let $X_1, X_2, \ldots, X_k$ represent the primary evaluations of the $k$ DMs, respectively, and let the BSSs $(f_q, g_q : ℘)$, $q = 1, 2, \ldots, r$, encode the actual results that were previously obtained for such problems at various times or locations. For simplicity, we assume that the evaluations of the experts are equally important. The DM question for this MAGDM problem is then: "how to reconcile (or compromise) the differences in the evaluations expressed by individual experts so as to find the alternative that is most acceptable to the group of experts as a whole."

### 5.2 Mathematical Modelling

In this subsection, we provide step-by-step mathematical modelling and the procedure of the MAGDM method using MBSRS theory.

Definition 5.1

Let $\underline{MBS}_{B_q}(X_j) = (\underline{S}^{+}_{\beta}f_q(X_j), \underline{S}^{-}_{\beta}g_q(X_j))$ and $\overline{MBS}_{B_q}(X_j) = (\overline{S}^{+}_{\beta}f_q(X_j), \overline{S}^{-}_{\beta}g_q(X_j))$ be the lower and upper MBSR approximations of $X_j$ ($j = 1, 2, \ldots, k$) with respect to the BSS $(f_q, g_q : ℘)$, $q = 1, 2, \ldots, r$.

Then,

$$[\tilde{M}] = \begin{pmatrix} [\underline{S}^{+}_{\beta}f_1(X_1), \underline{S}^{-}_{\beta}g_1(X_1)] & [\underline{S}^{+}_{\beta}f_1(X_2), \underline{S}^{-}_{\beta}g_1(X_2)] & \cdots & [\underline{S}^{+}_{\beta}f_1(X_k), \underline{S}^{-}_{\beta}g_1(X_k)] \\ [\underline{S}^{+}_{\beta}f_2(X_1), \underline{S}^{-}_{\beta}g_2(X_1)] & [\underline{S}^{+}_{\beta}f_2(X_2), \underline{S}^{-}_{\beta}g_2(X_2)] & \cdots & [\underline{S}^{+}_{\beta}f_2(X_k), \underline{S}^{-}_{\beta}g_2(X_k)] \\ \vdots & \vdots & \ddots & \vdots \\ [\underline{S}^{+}_{\beta}f_r(X_1), \underline{S}^{-}_{\beta}g_r(X_1)] & [\underline{S}^{+}_{\beta}f_r(X_2), \underline{S}^{-}_{\beta}g_r(X_2)] & \cdots & [\underline{S}^{+}_{\beta}f_r(X_k), \underline{S}^{-}_{\beta}g_r(X_k)] \end{pmatrix},$$

$$[\bar{M}] = \begin{pmatrix} [\overline{S}^{+}_{\beta}f_1(X_1), \overline{S}^{-}_{\beta}g_1(X_1)] & [\overline{S}^{+}_{\beta}f_1(X_2), \overline{S}^{-}_{\beta}g_1(X_2)] & \cdots & [\overline{S}^{+}_{\beta}f_1(X_k), \overline{S}^{-}_{\beta}g_1(X_k)] \\ [\overline{S}^{+}_{\beta}f_2(X_1), \overline{S}^{-}_{\beta}g_2(X_1)] & [\overline{S}^{+}_{\beta}f_2(X_2), \overline{S}^{-}_{\beta}g_2(X_2)] & \cdots & [\overline{S}^{+}_{\beta}f_2(X_k), \overline{S}^{-}_{\beta}g_2(X_k)] \\ \vdots & \vdots & \ddots & \vdots \\ [\overline{S}^{+}_{\beta}f_r(X_1), \overline{S}^{-}_{\beta}g_r(X_1)] & [\overline{S}^{+}_{\beta}f_r(X_2), \overline{S}^{-}_{\beta}g_r(X_2)] & \cdots & [\overline{S}^{+}_{\beta}f_r(X_k), \overline{S}^{-}_{\beta}g_r(X_k)] \end{pmatrix}$$

are called the lower and upper MBSR approximation matrices, respectively. Here

$$\underline{S}^{+}_{\beta}f_q(X_j) = (\underline{u}^{f_q}_{1j}, \underline{u}^{f_q}_{2j}, \ldots, \underline{u}^{f_q}_{nj}), \qquad \underline{S}^{-}_{\beta}g_q(X_j) = (\underline{u}^{g_q}_{1j}, \underline{u}^{g_q}_{2j}, \ldots, \underline{u}^{g_q}_{nj}),$$

$$\overline{S}^{+}_{\beta}f_q(X_j) = (\overline{u}^{f_q}_{1j}, \overline{u}^{f_q}_{2j}, \ldots, \overline{u}^{f_q}_{nj}), \qquad \overline{S}^{-}_{\beta}g_q(X_j) = (\overline{u}^{g_q}_{1j}, \overline{u}^{g_q}_{2j}, \ldots, \overline{u}^{g_q}_{nj}),$$

where

$$\underline{u}^{f_q}_{ij} = \begin{cases} 1, & \text{if } u_i \in \underline{S}^{+}_{\beta}f_q(X_j), \\ 0, & \text{otherwise}, \end{cases} \qquad \underline{u}^{g_q}_{ij} = \begin{cases} \frac{1}{2}, & \text{if } u_i \in \underline{S}^{-}_{\beta}g_q(X_j), \\ 0, & \text{otherwise}, \end{cases}$$

$$\overline{u}^{f_q}_{ij} = \begin{cases} \frac{1}{2}, & \text{if } u_i \in \overline{S}^{+}_{\beta}f_q(X_j), \\ 0, & \text{otherwise}, \end{cases} \qquad \overline{u}^{g_q}_{ij} = \begin{cases} 1, & \text{if } u_i \in \overline{S}^{-}_{\beta}g_q(X_j), \\ 0, & \text{otherwise}. \end{cases}$$
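The indicator weights of Definition 5.1 (weight 1 for lower-positive and upper-negative membership, weight ½ for lower-negative and upper-positive membership) translate directly into code. A minimal sketch; the ordering of the universe is an assumed convention:

```python
# Encoding an approximation set as a weighted indicator vector (Definition 5.1).

def encode(subset, universe, weight):
    """Weighted indicator vector of `subset` over an ordered universe."""
    return [weight if u in subset else 0 for u in universe]

U = ['C1', 'C2', 'C3', 'C4', 'C5']
v_lower_pos = encode({'C1', 'C2', 'C3'}, U, 1)    # lower-positive: weight 1
v_lower_neg = encode({'C2', 'C4', 'C5'}, U, 0.5)  # lower-negative: weight 1/2
print(v_lower_pos)  # [1, 1, 1, 0, 0]
print(v_lower_neg)  # [0, 0.5, 0, 0.5, 0.5]
```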
Definition 5.2

Let $[\tilde{M}]$ and $[\bar{M}]$ be the lower and upper MBSR approximation matrices with respect to $\underline{MBS}_{B_q}(X_j)$ and $\overline{MBS}_{B_q}(X_j)$, where $j = 1, 2, \ldots, k$ and $q = 1, 2, \ldots, r$.

Then,

$$V_f = \sum_{j=1}^{k} \sum_{q=1}^{r} \left( \underline{S}^{+}_{\beta}f_q(X_j) + \overline{S}^{+}_{\beta}f_q(X_j) \right), \qquad V_g = \sum_{j=1}^{k} \sum_{q=1}^{r} \left( \underline{S}^{-}_{\beta}g_q(X_j) + \overline{S}^{-}_{\beta}g_q(X_j) \right)$$

are called the positive and negative MBSR approximation vectors, respectively. Here, the operation $+$ denotes componentwise vector summation.

Definition 5.3

Let $V_f$ and $V_g$ be the positive and negative MBSR approximation vectors, respectively. Then,

$$V_d = V_f - V_g = (\delta_1, \delta_2, \ldots, \delta_n)$$

is said to be the decision vector, where each $\delta_i$ is called the score value of $u_i$.

• (i) $u_i$ is considered an optimal alternative if $\delta_i = \max_{1 \le l \le n} \delta_l$.

• (ii) $u_i$ is considered the worst alternative if $\delta_i = \min_{1 \le l \le n} \delta_l$.

When there is more than one optimal alternative, we may choose any one of them.

### 5.3 Proposed Algorithm

We now present a DM algorithm for the established MAGDM problem considered in subsection 5.2. The related steps are as follows.

• Step 1: Take the primary evaluations $X_j$ of the experts; $j = 1, 2, \ldots, k$.

• Step 2: Construct the BSSs $(f_q, g_q : ℘)$; $q = 1, 2, \ldots, r$ using the actual results.

• Step 3: Calculate $\underline{MBS}_{B_q}(X_j)$ and $\overline{MBS}_{B_q}(X_j)$ for $j = 1, 2, \ldots, k$ and $q = 1, 2, \ldots, r$.

• Step 4: Compute $[\tilde{M}]$ and $[\bar{M}]$ by Definition 5.1.

• Step 5: Calculate $V_f$ and $V_g$ from Definition 5.2.

• Step 6: Compute $V_d$ by Definition 5.3.

• Step 7: Find $\max_{1 \le i \le n} \delta_i$. The alternative with the highest score value should be chosen for the final selection.

A flowchart depicting the above algorithm is shown in Figure 1.
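Taken together, Steps 4 through 7 amount to a weighted vote over the approximation sets. The sketch below assumes the MBSR approximations have already been computed (Step 3) and bundled as tuples; the function name and the toy data are illustrative assumptions, not from the paper.

```python
# Steps 4-7 of the proposed MAGDM algorithm, starting from precomputed MBSR
# approximation sets. Each entry of `approxs` bundles, for one (q, j) pair,
# the sets (lower_pos, upper_pos, lower_neg, upper_neg).

def decision_vector(approxs, universe):
    Vf = [0.0] * len(universe)
    Vg = [0.0] * len(universe)
    for lower_pos, upper_pos, lower_neg, upper_neg in approxs:
        for i, u in enumerate(universe):
            Vf[i] += (u in lower_pos) + 0.5 * (u in upper_pos)  # positive vector
            Vg[i] += 0.5 * (u in lower_neg) + (u in upper_neg)  # negative vector
    return [f - g for f, g in zip(Vf, Vg)]

# Toy data (assumed): one expert evaluation approximated by one BSS.
universe = ['C1', 'C2', 'C3']
approxs = [({'C1'}, {'C1', 'C2'}, {'C2', 'C3'}, {'C3'})]
Vd = decision_vector(approxs, universe)
best = universe[max(range(len(universe)), key=lambda i: Vd[i])]
print(Vd, best)  # [1.5, 0.0, -1.5] C1
```

In a real run, `approxs` would contain one tuple per expert evaluation and per BSS, exactly as in the matrices of Definition 5.1.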

### 5.4 An Illustrative Example (Faculty Appointment Problem)

In this subsection, we present a case study to illustrate the principal methodology of the proposed algorithm and its related concepts.

Example 5.4

The appointment of faculty members to senior positions in universities involves very complicated evaluations and DM. A candidate may be judged by various attributes, such as research productivity, managerial skills, and the ability to work under pressure. To make accurate judgments about the candidates based on these attributes, it is wise to consult experts for their professional opinions.

Let $U = \{C_1, C_2, C_3, C_4, C_5\}$ be the set of five candidates who may fit a senior faculty position at a certain university. To hire the person most suitable for this position, a panel of experts is set up. The panel evaluates the candidates according to the set of attributes $E = \{e_1, e_2, e_3, e_4, e_5, e_6\}$, where $e_1$ = research productivity, $e_2$ = managerial skills, $e_3$ = impact on the research community, $e_4$ = ability to work under pressure, $e_5$ = academic leadership qualities, and $e_6$ = contribution to X University.

• Step 1: Let the panel of three experts give their primary evaluations for the candidates as

$$X_1 = \{C_1, C_2, C_3\}, \quad X_2 = \{C_1, C_3, C_5\}, \quad X_3 = \{C_2, C_4, C_5\}.$$

• Step 2: The actual results from two different meetings and times for the candidates are represented in the form of the BSSs $(f_1, g_1 : ℘)$ and $(f_2, g_2 : ℘)$, where the positive membership map of each BSS denotes the expertise of the candidates and the negative membership map denotes the non-expertise of the candidates in a certain attribute:

$$f_1 : E \to 2^U, \quad e \mapsto \begin{cases} \{C_1, C_3\}, & \text{if } e = e_1, \\ \{C_1, C_4, C_5\}, & \text{if } e = e_2, \\ \{C_2\}, & \text{if } e = e_3, \\ \{C_2, C_4, C_5\}, & \text{if } e = e_4, \\ \{C_1, C_2\}, & \text{if } e = e_5, \\ \{C_3, C_5\}, & \text{if } e = e_6, \end{cases} \qquad g_1 : \neg E \to 2^U, \quad \neg e \mapsto \begin{cases} \{C_2, C_5\}, & \text{if } \neg e = \neg e_1, \\ \{C_3\}, & \text{if } \neg e = \neg e_2, \\ \{C_3, C_4, C_5\}, & \text{if } \neg e = \neg e_3, \\ \{C_1, C_3\}, & \text{if } \neg e = \neg e_4, \\ \{C_5\}, & \text{if } \neg e = \neg e_5, \\ \{C_2, C_4\}, & \text{if } \neg e = \neg e_6, \end{cases}$$

and

$$f_2 : E \to 2^U, \quad e \mapsto \begin{cases} \{C_2, C_3\}, & \text{if } e = e_1, \\ \{C_1, C_3\}, & \text{if } e = e_2, \\ \{C_2, C_3, C_4\}, & \text{if } e = e_3, \\ \{C_5\}, & \text{if } e = e_4, \\ \{C_1, C_5\}, & \text{if } e = e_5, \\ \{C_3, C_4, C_5\}, & \text{if } e = e_6, \end{cases} \qquad g_2 : \neg E \to 2^U, \quad \neg e \mapsto \begin{cases} \{C_4\}, & \text{if } \neg e = \neg e_1, \\ \{C_4, C_5\}, & \text{if } \neg e = \neg e_2, \\ \{C_1\}, & \text{if } \neg e = \neg e_3, \\ \{C_1, C_3, C_4\}, & \text{if } \neg e = \neg e_4, \\ \{C_2, C_3\}, & \text{if } \neg e = \neg e_5, \\ \{C_1, C_2\}, & \text{if } \neg e = \neg e_6. \end{cases}$$

• Step 3: The lower and upper MBSR-approximations of $X_j$ ($j = 1, 2, 3$) with respect to $(f_q, g_q : ℘)$ ($q = 1, 2$) can be calculated as follows:

$$\begin{aligned} \underline{MBS}_{B_1}(X_1) &= (\{C_1, C_2, C_3\}, \{C_2, C_4, C_5\}), & \overline{MBS}_{B_1}(X_1) &= (\{C_1, C_2, C_3, C_4, C_5\}, \{C_5\}), \\ \underline{MBS}_{B_1}(X_2) &= (\{C_1, C_3, C_5\}, \{C_2, C_4\}), & \overline{MBS}_{B_1}(X_2) &= (\{C_1, C_3, C_4, C_5\}, \{C_2, C_4\}), \\ \underline{MBS}_{B_1}(X_3) &= (\{C_2, C_4, C_5\}, \{C_1, C_3\}), & \overline{MBS}_{B_1}(X_3) &= (\{C_2, C_4, C_5\}, \{C_1, C_3\}). \end{aligned}$$

Similarly,

$$\begin{aligned} \underline{MBS}_{B_2}(X_1) &= (\{C_1, C_2, C_3\}, \{C_4, C_5\}), & \overline{MBS}_{B_2}(X_1) &= (\{C_1, C_2, C_3, C_4\}, \{C_4, C_5\}), \\ \underline{MBS}_{B_2}(X_2) &= (\{C_1, C_3, C_5\}, \{C_2, C_3, C_4, C_5\}), & \overline{MBS}_{B_2}(X_2) &= (\{C_1, C_2, C_3, C_4, C_5\}, \{C_4\}), \\ \underline{MBS}_{B_2}(X_3) &= (\{C_5\}, \{C_1, C_2, C_3\}), & \overline{MBS}_{B_2}(X_3) &= (\{C_2, C_4, C_5\}, \{C_1\}). \end{aligned}$$

• Step 4: According to Definition 5.1, the lower and upper MBSR-approximation matrices can be calculated as follows:

$$[\tilde{M}] = \begin{pmatrix} [(1,1,1,0,0), (0,\tfrac{1}{2},0,\tfrac{1}{2},\tfrac{1}{2})] & [(1,0,1,0,1), (0,\tfrac{1}{2},0,\tfrac{1}{2},0)] & [(0,1,0,1,1), (\tfrac{1}{2},0,\tfrac{1}{2},0,0)] \\ [(1,1,1,0,0), (0,0,0,\tfrac{1}{2},\tfrac{1}{2})] & [(1,0,1,0,1), (0,\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2})] & [(0,0,0,0,1), (\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2},0,0)] \end{pmatrix},$$

$$[\bar{M}] = \begin{pmatrix} [(\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2}), (0,0,0,0,1)] & [(\tfrac{1}{2},0,\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2}), (0,1,0,1,0)] & [(0,\tfrac{1}{2},0,\tfrac{1}{2},\tfrac{1}{2}), (1,0,1,0,0)] \\ [(\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2},0), (0,0,0,1,1)] & [(\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2}), (0,0,0,1,0)] & [(0,\tfrac{1}{2},0,\tfrac{1}{2},\tfrac{1}{2}), (1,0,0,0,0)] \end{pmatrix}.$$

• Step 5: By Definition 5.2, $V_f$ and $V_g$ can be calculated as follows:

$$V_f = (6, 5.5, 6, 4, 6.5), \qquad V_g = (3, 3, 2.5, 5, 3.5).$$

• Step 6: According to Definition 5.3, we get:

$$V_d = V_f - V_g = (3, 2.5, 3.5, -1, 3).$$

• Step 7: As $\max_{1 \le i \le 5} \delta_i = \delta_3 = 3.5$, $C_3$ is the most suitable candidate for the senior faculty position. Accordingly, we can obtain the preference order of the five candidates as follows:

$$C_3 \succ C_1 = C_5 \succ C_2 \succ C_4.$$

A graphical representation of the candidate preference order is shown in Figure 2.
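As a mechanical check, the score vectors of Steps 5 through 7 can be recomputed directly from the twelve approximation pairs listed in Step 3. The data below are transcribed from the example; the tuple layout is an assumed convention.

```python
# Recomputing Steps 5-7 of Example 5.4 from the Step 3 approximations.
U = ['C1', 'C2', 'C3', 'C4', 'C5']

# (lower_pos, lower_neg, upper_pos, upper_neg) for each (q, j); q=1,2, j=1,2,3.
approxs = [
    ({'C1','C2','C3'}, {'C2','C4','C5'}, {'C1','C2','C3','C4','C5'}, {'C5'}),
    ({'C1','C3','C5'}, {'C2','C4'},      {'C1','C3','C4','C5'},      {'C2','C4'}),
    ({'C2','C4','C5'}, {'C1','C3'},      {'C2','C4','C5'},           {'C1','C3'}),
    ({'C1','C2','C3'}, {'C4','C5'},      {'C1','C2','C3','C4'},      {'C4','C5'}),
    ({'C1','C3','C5'}, {'C2','C3','C4','C5'}, {'C1','C2','C3','C4','C5'}, {'C4'}),
    ({'C5'},           {'C1','C2','C3'}, {'C2','C4','C5'},           {'C1'}),
]

Vf = [sum((u in lp) + 0.5 * (u in up) for lp, _, up, _ in approxs) for u in U]
Vg = [sum(0.5 * (u in ln) + (u in un) for _, ln, _, un in approxs) for u in U]
Vd = [f - g for f, g in zip(Vf, Vg)]
best = U[max(range(len(U)), key=lambda i: Vd[i])]
print(Vf)        # [6.0, 5.5, 6.0, 4.0, 6.5]
print(Vg)        # [3.0, 3.0, 2.5, 5.0, 3.5]
print(Vd, best)  # [3.0, 2.5, 3.5, -1.0, 3.0] C3
```

The recomputed $V_f$ and $V_g$ agree with Step 5, and $C_3$ comes out on top, matching the conclusion of Step 7.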

### 6.1 Advantages of the Proposed Technique

The key advantages of the proposed technique over existing methods are summarized below.

• (i) The suggested technique considers the positive and negative aspects of each alternative in the form of a BSS. This hybrid model is more generalized and suitable for DM problems that involve bipolar judgments.

• (ii) Using the MBSR-approximations, this approach provides another way to obtain the group preference evaluation based on the individual preference evaluation for a considered MAGDM problem.

• (iii) This technique is also ideal because the DMs are liberated from any external restrictions and requirements in this approach.

• (iv) Our proposed technique effectively solves MAGDM problems when the weight information for the attribute is entirely unknown.

• (v) The suggested approach considers not only the opinions of DMs but also past experiences (primary evaluations) by MBSR-approximations in actual scenarios. Therefore, it is a more comprehensive approach for a better interpretation of available information and thus makes decisions using artificial intelligence.

• (vi) The proposed MAGDM approach is easy to understand and apply to real-life DM issues.

• (vii) If we compare our proposed technique with the methods presented in [10,12,17,44–48], we see that these methods are incapable of detecting bipolarity in the DM process, which is a key element of human thinking and behavior.

### 6.2 Comparison with Some Other Approaches

In this subsection, we re-solve the uncertainty problem given in Example 5.4 using the algorithm given by Shabir and Gul [30] and compare the results with the DM technique proposed in this article. Applying the algorithm of Shabir and Gul [30] to Example 5.4, we obtain the following preference order of the candidates:

$$C_1 = C_2 = C_3 = C_4 = C_5.$$

In other words, the preference order of the candidates could not be detected.

Now, applying the algorithm given in Karaaslan and Çağman [29] to Example 5.4, we obtain the following preference order of the candidates:

$$C_1 = C_2 = C_3 \succ C_5 \succ C_4.$$

From Table 3, we can see that our proposed algorithm is capable of identifying the most suitable candidate for a senior faculty position and achieves a clear distinction among the candidates. Considering the above advantages, we recommend the approach given in this paper and suggest applying it in the DM process for uncertainty problems.

The RS theory is emerging as a powerful theory and has diverse applications in many areas. On the other hand, the BSS is a suitable mathematical model for handling uncertainty along with bipolarity, that is, the positivity and negativity of the data. In this study, we developed an alternative strategy for the roughness of BSSs called "MBSRS," which eliminates various limitations of the BSRS introduced by Karaaslan and Çağman [29].

This study makes the following main contributions.

• We begin by defining some novel types of BSRS approximation operators for a given BSS.

• The fundamental structural properties of the newly proposed approximation operators have also been thoroughly investigated, with various examples.

• In addition, certain uncertainty measures related to MBSRS are also offered.

• Meanwhile, based on the MBSRS, we offer a generic framework for the MAGDM method, which refines the primary evaluations of the entire group of experts and enables us to select the optimal object in a more reliable manner.

• A DM algorithm is then presented with two key benefits. Firstly, it manages the bipolarity of the data, accompanied by uncertainty. Secondly, it considers the views of any (finite) number of experts on any (finite) number of alternatives.

• Moreover, a practical application of the proposed MAGDM approach demonstrates the credibility of this methodology.

• Finally, a comparison of the proposed model with a few existing techniques is carried out, demonstrating that the MBSRS approach improves on the traditional approaches and that this modification can be used to make a correct decision.

The current work is also limited and plenty of meaningful study issues need further in-depth exploration. The following research directions will be the focus of our future studies.

• Researchers may examine the algebraic structures of MBSRS based on the characterized ideas and procedures in this study.

• We would like to examine the topological properties and similarity measures of MBSRS to establish a solid foundation for future research investigations and to improve working approaches.

• The notions of the MBSRS can be generalized to multi-granulation MBSRS.

• Furthermore, we will focus on the applications of the suggested approach to a broader range of selection problems, such as TOPSIS, VIKOR, ELECTRE, AHP, COPRAS, and PROMETHEE.

• We might also look at various hybridizations of the suggested technique to improve result accuracy and apply these procedures to real-world problems with large data sets. In this way, we can try to demonstrate the utility of the suggested strategy.

• The MBSRS can be extended in a fuzzy environment, and effective DM techniques might be developed.

Fig. 1.

Summary of the proposed algorithm for MAGDM.

Fig. 2.

Preference order of the candidates.

Table 1. Tabular representation of (f, g : ℘).

(a)

| f | u1 | u2 | u3 | u4 | u5 | u6 |
|------|----|----|----|----|----|----|
| e1 | 1 | 0 | 0 | 0 | 0 | 1 |
| e2 | 0 | 0 | 1 | 0 | 0 | 0 |
| e3 | 0 | 0 | 0 | 0 | 0 | 0 |
| e4 | 1 | 1 | 0 | 0 | 1 | 0 |

(b)

| g | u1 | u2 | u3 | u4 | u5 | u6 |
|------|----|----|----|----|----|----|
| ¬e1 | 0 | 0 | 0 | 0 | 1 | 0 |
| ¬e2 | 0 | 0 | 0 | 0 | 0 | 0 |
| ¬e3 | 0 | 1 | 1 | 0 | 0 | 1 |
| ¬e4 | 0 | 0 | 1 | 0 | 0 | 1 |

Table 2. Comparison between BSR-approximations and MBSR-approximations.

| BSR-approximations | MBSR-approximations |
|---|---|
| $X \subseteq \overline{S}^{+}_{\beta}(X)$ (need not hold) | $X \subseteq \overline{S}^{+}_{\beta}(X)$ |
| $\overline{S}^{+}_{\beta}(U) = U$ (need not hold) | $\overline{S}^{+}_{\beta}(U) = U$ |
| $\overline{S}^{+}_{\beta}(\emptyset) = \emptyset$ (need not hold) | $\overline{S}^{+}_{\beta}(\emptyset) = \emptyset$ |
| $\overline{S}^{+}_{\beta}(X \cup Y) \supseteq \overline{S}^{+}_{\beta}(X) \cup \overline{S}^{+}_{\beta}(Y)$ (need not hold) | $\overline{S}^{+}_{\beta}(X \cup Y) \supseteq \overline{S}^{+}_{\beta}(X) \cup \overline{S}^{+}_{\beta}(Y)$ |
| $\overline{S}^{-}_{\beta}(X \cup Y) \subseteq \overline{S}^{-}_{\beta}(X) \cap \overline{S}^{-}_{\beta}(Y)$ (need not hold) | $\overline{S}^{-}_{\beta}(X \cup Y) \subseteq \overline{S}^{-}_{\beta}(X) \cap \overline{S}^{-}_{\beta}(Y)$ |

Table 3. Results obtained using various approaches for Example 5.4.

| Approach | Preference order of the candidates |
|---|---|
| Shabir and Gul [30] | Cannot be determined ($C_1 = C_2 = C_3 = C_4 = C_5$) |
| Karaaslan and Çağman [29] | $C_1 = C_2 = C_3 \succ C_5 \succ C_4$ |
| Our proposed method | $C_3 \succ C_1 = C_5 \succ C_2 \succ C_4$ |

1. Zadeh, LA (1965). Fuzzy sets. Information and Control. 8, 338-353. https://doi.org/10.1016/S0019-9958(65)90241-X
2. Pawlak, Z (1982). Rough sets. International Journal of Computer & Information Sciences. 11, 341-356. https://doi.org/10.1007/BF01001956
3. Molodtsov, D (1999). Soft set theory: first results. Computers & Mathematics with Applications. 37, 19-31. https://doi.org/10.1016/S0898-1221(99)00056-5
4. Maji, PK, Biswas, R, and Roy, AR (2003). Soft set theory. Computers and Mathematics with Applications. 45, 555-562. https://doi.org/10.1016/S0898-1221(03)00016-6
5. Ali, MI, Feng, F, Liu, X, Min, WK, and Shabir, M (2009). On some new operations in soft set theory. Computers & Mathematics with Applications. 57, 1547-1553. https://doi.org/10.1016/j.camwa.2008.11.009
6. Al-Shami, TM, and El-Shafei, ME (2020). T-soft equality relation. Turkish Journal of Mathematics. 44, 1427-1441. https://doi.org/10.3906/mat-2005-117
7. Maji, PK, Biswas, R, and Roy, AR (2001). Fuzzy soft sets. Journal of Fuzzy Mathematics. 9, 589-602.
8. Feng, F, Liu, X, Leoreanu-Fotea, V, and Jun, YB (2011). Soft sets and soft rough sets. Information Sciences. 181, 1125-1137. https://doi.org/10.1016/j.ins.2010.11.004
9. Shabir, M, Ali, MI, and Shaheen, T (2013). Another approach to soft rough sets. Knowledge-Based Systems. 40, 72-80. https://doi.org/10.1016/j.knosys.2012.11.012
10. Greco, S, Matarazzo, B, and Slowinski, R (1999). The use of rough sets and fuzzy sets in MCDM. Multicriteria Decision Making. Boston, MA: Springer, pp. 397-455 https://doi.org/10.1007/978-1-4615-5025-9_14
11. Greco, S, Matarazzo, B, and Slowinski, R (1999). Rough approximation of a preference relation by dominance relations. European Journal of Operational Research. 117, 63-83. https://doi.org/10.1016/S0377-2217(98)00127-1
12. Greco, S, Matarazzo, B, and Slowinski, R (2001). Rough sets theory for multicriteria decision analysis. European Journal of Operational Research. 129, 1-47. https://doi.org/10.1016/S0377-2217(00)00167-3
13. Greco, S, Matarazzo, B, and Slowinski, R (2002). Rough approximation by dominance relations. International Journal of Intelligent Systems. 17, 153-171. https://doi.org/10.1002/int.10014
14. Slowinski, R, Greco, S, and Matarazzo, B (2002). Rough set analysis of preference-ordered data. Rough Sets and Current Trends in Computing. Heidelberg, Germany: Springer, pp. 44-59 https://doi.org/10.1007/3-540-45813-1_6
15. Du, WS, and Hu, BQ (2017). Dominance-based rough fuzzy set approach and its application to rule induction. European Journal of Operational Research. 261, 690-703. https://doi.org/10.1016/j.ejor.2016.12.004
16. Shaheen, T, Mian, B, Shabir, M, and Feng, F (2019). A novel approach to decision analysis using dominance-based soft rough sets. International Journal of Fuzzy Systems. 21, 954-962. https://doi.org/10.1007/s40815-019-00612-2
17. Feng, F (2011). Soft rough sets applied to multicriteria group decision making. Annals of Fuzzy Mathematics and Informatics. 2, 69-80.
18. Ayub, S, Shabir, M, Riaz, M, Mahmood, W, Bozanic, D, and Marinkovic, D (2022). Linear diophantine fuzzy rough sets: a new rough set approach with decision making. Symmetry. 14. article no 525
19. Riaz, M, Hashmi, MR, Kalsoom, H, Pamucar, D, and Chu, YM (2020). Linear Diophantine fuzzy soft rough sets for the selection of sustainable material handling equipment. Symmetry. 12. article no 1215
20. Hashmi, MR, Tehrim, ST, Riaz, M, Pamucar, D, and Cirovic, G (2021). Spherical linear diophantine fuzzy soft rough sets with multi-criteria decision making. Axioms. 10. article no 185
21. Akram, M, and Ali, G (2020). Hybrid models for decisionmaking based on rough Pythagorean fuzzy bipolar soft information. Granular Computing. 5, 1-15. https://doi.org/10.1007/s41066-018-0132-3
22. Shabir, M, and Naz, M. (2013) . On bipolar soft sets. [Online]. Available: https://arxiv.org/abs/1303.1344
23. Karaaslan, F, and Karatas, S (2015). A new approach to bipolar soft sets and its applications. Discrete Mathematics, Algorithms and Applications. 7. article no 1550054
24. Karaaslan, F, Ahmad, I, and Ullah, A (2016). Bipolar soft groups. Journal of Intelligent & Fuzzy Systems. 31, 651-662. https://doi.org/10.3233/IFS-162178
25. Naz, M, and Shabir, M (2014). On fuzzy bipolar soft sets, their algebraic structures and applications. Journal of Intelligent & Fuzzy Systems. 26, 1645-1656. https://doi.org/10.3233/IFS-130844
26. Ozturk, TY (2018). On bipolar soft topological spaces. Journal of New Theory. 2018, 64-75.
27. Abdullah, S, Aslam, M, and Ullah, K (2014). Bipolar fuzzy soft sets and its applications in decision making problem. Journal of Intelligent & Fuzzy Systems. 27, 729-742. https://doi.org/10.3233/IFS-131031
28. Alkouri, AUM, Massa’deh, MO, and Ali, M (2020). On bipolar complex fuzzy sets and its application. Journal of Intelligent & Fuzzy Systems. 39, 383-397. https://doi.org/10.3233/JIFS-191350
29. Karaaslan, F, and Cagman, N (2018). Bipolar soft rough sets and their applications in decision making. Afrika Matematika. 29, 823-839. https://doi.org/10.1007/s13370-018-0580-6
30. Shabir, M, and Gul, M (2020). Modified rough bipolar soft sets. Journal of Intelligent & Fuzzy Systems. 39, 4259-4283. https://doi.org/10.3233/JIFS-200317
31. Gul, R, Shabir, M, Naz, M, and Aslam, M (2021). A novel approach toward roughness of bipolar soft sets and their applications in MCGDM. IEEE Access. 9, 135102-135120. https://doi.org/10.1109/ACCESS.2021.3116097
32. Mahmood, T, and Ali, Z (2021). A novel complex fuzzy N-soft sets and their decision-making algorithm. Complex & Intelligent Systems. 7, 2255-2280. https://doi.org/10.1007/s40747-021-00373-2
33. Malik, N, and Shabir, M (2019). Rough fuzzy bipolar soft sets and application in decision-making problems. Soft Computing. 23, 1603-1614. https://doi.org/10.1007/s00500-017-2883-1
34. Malik, N, and Shabir, M (2019). A consensus model based on rough bipolar fuzzy approximations. Journal of Intelligent & Fuzzy Systems. 36, 3461-3470. https://doi.org/10.3233/JIFS-181223
35. Al-Shami, TM (2021). Bipolar soft sets: relations between them and ordinary points and their applications. Complexity. 2021. article no 6621854
36. Riaz, M, and Tehrim, ST (2019). Bipolar fuzzy soft mappings with application to bipolar disorders. International Journal of Biomathematics. 12. article no 1950080
37. Kamaci, H, and Petchimuthu, S (2020). Bipolar N-soft set theory with applications. Soft Computing. 24, 16727-16743. https://doi.org/10.1007/s00500-020-04968-8
38. Gul, R, and Shabir, M (2020). Roughness of a set by (α, β)-indiscernibility of Bipolar fuzzy relation. Computational and Applied Mathematics. 39. article no 160
39. Al-shami, TM (2021). An improvement of rough sets’ accuracy measure using containment neighborhoods with a medical application. Information Sciences. 569, 110-124. https://doi.org/10.1016/j.ins.2021.04.016
40. Al-shami, TM, and Ciucci, D (2022). Subset neighborhood rough sets. Knowledge-Based Systems. 237. article no 107868
41. Al-shami, TM (2021). Improvement of the approximations and accuracy measure of a rough set using somewhere dense sets. Soft Computing. 25, 14449-14460. https://doi.org/10.1007/s00500-021-06358-0
42. Gediga, G, and Duntsch, I (2001). Rough approximation quality revisited. Artificial Intelligence. 132, 219-234. https://doi.org/10.1016/S0004-3702(01)00147-3
43. Yao, YY (2010). Notes on rough set approximations and associated measures. Journal of Zhejiang Ocean University (Natural Science). 29, 399-410.
44. Cagman, N, Enginoglu, S, and Citak, F (2011). Fuzzy soft set theory and its applications. Iranian Journal of Fuzzy Systems. 8, 137-147.
45. Cagman, N, and Enginoglu, S (2010). Soft matrix theory and its decision making. Computers & Mathematics with Applications. 59, 3308-3314. https://doi.org/10.1016/j.camwa.2010.03.015
46. Celik, Y, and Yamak, S (2013). Fuzzy soft set theory applied to medical diagnosis using fuzzy arithmetic operations. Journal of Inequalities and Applications. 2013. article no 82
47. Gogoi, K, Dutta, AK, and Chutia, C (2014). Application of fuzzy soft set theory in day to day problems. International Journal of Computer Applications. 85, 27-31.
48. Herawan, T (2012). Soft set-based decision making for patients suspected influenza-like illness. International Journal of Modern Physics: Conference Series. 9, 259-270. https://doi.org/10.1142/S2010194512005302

Rizwan Gul received a B.S. degree in mathematics from Kohat University of Science & Technology, Kohat, Pakistan in 2017 and a M.Phil. degree in pure mathematics from Quaid-i-Azam University, Islamabad, Pakistan, in 2020. He is currently a Ph.D. scholar with the Department of Mathematics, Quaid-i-Azam University, Islamabad, Pakistan. His current research interests include fuzzy algebraic structures, decision analysis, soft sets, rough sets, and their hybrid algebraic structures with their applications.

E-mail: rgul@math.qau.edu.pk

Muhammad Shabir received the M.Sc., M.Phil. and Ph.D. degrees in mathematics from Quaid-i-Azam University, Islamabad, Pakistan, in 1981, 1984, and 1996, respectively. He is currently working as a visiting professor with the Department of Mathematics, Quaid-i-Azam University. He has published more than 200 research papers in international journals. He is also a co-author of the book "Fuzzy Semirings with Applications to Automata Theory" (New York, NY, USA: Springer). His current research interests include fuzzy algebraic structures, soft algebraic structures, and their applications. He has supervised 17 Ph.D. and 120 M.Phil. theses.

E-mail: mshabirbhatti@yahoo.co.uk

Wali Khan Mashwani received a Ph.D. degree in mathematics from the University of Essex, UK. He is currently working as a professor with the Institute of Numerical Sciences, Kohat University of Science & Technology. His research interests include Evolutionary Computation, Single and Multi-objective Evolutionary Optimization, MOEA/D, Neural networks, Game Theory, Numerical Analysis, and Mathematical Programming. He has published 150 papers in SCI and ESCI, Scopus Journals, and different peer-review conference proceedings and book chapters.

E-mail: walikhan@kust.edu.pk

Hayat Ullah received his B.S. and M.Phil. degrees in mathematics from Kohat University of Science & Technology, Kohat, Pakistan in 2017 and 2022, respectively. His current research interests include decision analysis, soft sets, rough sets, and their hybrid algebraic structures.

E-mail: hayatkhattak335@gmail.com


Keywords: Bipolar soft set, Bipolar soft rough set, MBSR approximations, MAGDM

### 1. Introduction

In modern society, there is a plethora of ideas in engineering, economics, environmental science, social science, medical science, and many other disciplines, and the information collected and studied for various purposes carries uncertainty. In classical mathematics, all concepts must be precise; therefore, classical mathematics is not always a successful tool for dealing with uncertain issues. For researchers, this uncertainty has become a barrier to addressing complex problems in various domains. A plethora of theories has been proposed to address this uncertainty, including fuzzy set (FS) theory [1], RS theory [2], and decision-making (DM) theory. However, each of these theories has internal issues that may be related to the insufficiency of the parameterization techniques mentioned in [3].

Molodtsov [3] offered an alternative technique to cope with uncertainty, known as the "soft set" (SS). Data parameters play a vital role in scrutinizing and analyzing data or making decisions, and the SS theory is an adequate parameterization tool; therefore, it overcomes the difficulties faced when using older approaches. Because of its diverse applications, this theory has received the attention of many researchers, and rapid growth in the study of SSs has been observed in the last few years. A few SS operations were pioneered by Maji et al. [4]. Ali et al. [5] introduced various novel SS operations and enhanced the concept of SS complements. Al-Shami and El-Shafei [6] proposed a T-soft equality relation. By merging FSs and SSs, Maji et al. [7] established the concept of fuzzy soft sets.

A plethora of researchers have considered a diverse hybrid fusion of RSs, FSs, and SSs for engineering, information management, medical diagnosis, and multi-criteria decision-making (MCDM) applications. Feng et al. [8] explored the link between RS and SS theories and introduced soft rough sets (SRSs), which provide better and more efficient approximations than the RS theory. Shabir et al. [9] redesigned SRSs and proposed modified soft rough sets (MRSs). Greco and his colleagues [10–14] offered dominance-based RS as an extension of the RS. Du and Hu [15] pioneered dominance-based FS. In 2019, Shaheen et al. [16] proposed a dominance-based SRS and highlighted its use in DM. Feng [17] applied SRSs to multi-criteria group decision-making (MCGDM). Ayub et al. [18] initiated a new RS approach with DM, known as linear Diophantine fuzzy RSs. Riaz et al. [19] introduced the idea of linear Diophantine fuzzy soft RSs to select sustainable material handling equipment. In 2021, Hashmi et al. [20] established the concept of spherical linear Diophantine fuzzy SRSs with applications in MCDM. Akram and Ali [21] proposed hybrid models for DM based on rough Pythagorean fuzzy bipolar soft information.

In many types of data analyses, bipolarity is an important factor to consider when designing mathematical formulas for certain problems. The positive and negative sides of the data are provided by the bipolarity. The positive side deals with conceivable ideas, whereas the negative side deals with unconceivable ideas. The philosophy of bipolarity considers that human judgment is built upon positive and negative sides, and the stronger side is preferred. SS, FS, and RS are not effective approaches for dealing with this bipolarity.

Owing to the importance of bipolarity, Shabir and Naz [22] introduced the notion of bipolar soft set (BSS) with application to DM. BSS has grown in popularity among researchers as a result of this study. In 2015, Karaaslan and Karataş [23] redesigned the BSS with different approximations, allowing them to investigate the topological axioms of the BSS. Subsequently, Karaaslan et al. [24] proposed a theory of bipolar soft groups. In addition, Naz and Shabir [25] pioneered the idea of fuzzy BSS and investigated their algebraic structures. The notions of bipolar soft topological spaces were then further developed by Öztürk [26]. Abdullah et al. [27] developed a bipolar fuzzy SS by combining the SS and bipolar FS and applied it to the DM problem. Alkouri et al. [28] proposed a bipolar complex FS and addressed its applicability to DM.

Karaaslan and Çağman [29] developed BSRSs in 2018. Furthermore, they addressed the applicability of BSRSs to the DM. Shabir and Gul [30] established and discussed the modified rough bipolar soft sets (MRBSs) in MCGDM. Gul et al. [31] presented a new technique for determining the roughness of BSSs and examined their applicability to MCGDM. Mahmood et al. [32] suggested a complex fuzzy N-SS and DM algorithm. In [33], Malik and Shabir pioneered the concept of rough fuzzy BSS and used them to rectify DM problems. Malik and Shabir [34] created a consensus model using rough bipolar fuzzy approximations. Al-Shami [35] conceived the idea of belonging and non-belonging relations between a BSS and an ordinary point. Riaz and Tehrim [36] proposed bipolar fuzzy soft mapping and analyzed its applicability to bipolar disorders. In [37], the authors suggested bipolar N-SS, an extension of N-SS, and addressed its applicability to DM. Gul and Shabir [38] introduced a new concept of the roughness of a crisp set based on (α, β)-indiscernibility of the bipolar fuzzy relation.

### 1.1 Motivation

By analyzing all the preceding arguments, we can see that the BSSs can manage the bipolarity of the data by employing two mappings; one of them addresses the positivity of the data, while the other measures the negativity of the data. Bearing in mind the connection between RS and BSS, two initiatives have been established to investigate the roughness of BSS: the first by Karaaslan and Çağman [29], and the second by Shabir and Gul [30]. In this paper, we propose a new technique for improving the roughness of BSSs. This new approach is known as the “modified bipolar soft rough sets” (MBSRS). In addition, we discuss the application of MBSRS to DM problems.

### 1.2 Aim of the Suggested Model

The major objective of this study is to propose an innovative variant of BSRS approximations that overcomes some of the shortcomings of the Karaaslan and Çağman BSRS model (see Example 2.7).

The key contributions of this study are as follows:

• A novel concept of MBSRS is proposed, which overcomes the deficiencies of the existing BSRS model.

• Many essential properties of the MBSRS are thoroughly investigated.

• Some key MBSRS-related measures are proposed to quantify the uncertainty of the MBSRS.

• A fair comparison between the results provided by MBSRS and BSRS is provided.

• A robust MAGDM method is established in the framework of MBSRS, and its applicability is validated through real-world applications.

• To illustrate the merits of the suggested technique, a rigorous comparison with some other current methodologies is performed.

### 1.3 Organization of the Paper

The remainder of this paper is organized as follows. Section 2 outlines the fundamental concepts necessary for comprehending our research. Section 3 introduces the novel MBSR approximation operators and studies their significant structural properties. Section 4 discusses the MBSRS-related measures. In Section 5, we describe the general methodology of MAGDM in the MBSRS framework, introduce a DM algorithm to select the optimal alternative, and illustrate the proposed DM approach to demonstrate how it can be effectively used in real-life problems. Section 6 compares the proposed DM method with other DM approaches. Section 7 concludes the study with an overview and suggestions for future research.

### 2. Preliminary Concepts

This section reviews the key concepts used in this study. Throughout this study, we use U, ℘, and 2U to represent the universe, the parameter set, and the power set of U, respectively.

### Definition 2.1 ([2])

The pair (U, σ) is called an approximation space, where U is a non-empty finite universe and σ is an equivalence relation on U.

For any X ⊆ U, the lower and upper approximations of X with respect to (U, σ) are characterized as follows:

$σ_★(X)={x∈U:[x]σ⊆X},$$σ^★(X)={x∈U:[x]σ∩X≠∅},$

where

$[x]σ={y∈U:(x,y)∈σ}.$

Moreover, the boundary region of X is given as

$ℬndσ(X)=σ^★(X)-σ_★(X).$

The set X is called a rough set with respect to σ if $σ_★(X)≠σ^★(X)$; otherwise, it is called definable.

Some recent rough approximations were defined on binary relations to extend the scope of applications of the RS theory; see, for example, [39–41].
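As a quick illustration of Definition 2.1, the following Python sketch (the helper is ours, not from the paper) computes the Pawlak approximations from a family of equivalence classes, here using the classes induced by Table 1(a) of Example 2.7:

```python
def pawlak_approximations(U, classes, X):
    """Lower/upper Pawlak approximations of X for a partition of U.

    `classes` maps each element x to its equivalence class [x]_sigma.
    """
    lower = {x for x in U if classes[x] <= X}    # [x] contained in X
    upper = {x for x in U if classes[x] & X}     # [x] meets X
    return lower, upper

# Equivalence classes induced by Table 1(a) of Example 2.7:
U = {"u1", "u2", "u3", "u4", "u5", "u6"}
classes = {"u1": {"u1"}, "u2": {"u2", "u5"}, "u3": {"u3"},
           "u4": {"u4"}, "u5": {"u2", "u5"}, "u6": {"u6"}}
lower, upper = pawlak_approximations(U, classes, {"u2", "u3"})
boundary = upper - lower    # Bnd_sigma(X)
```

The boundary region is non-empty here, so the chosen set is rough with respect to this partition.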

### Definition 2.2 ([3])

An SS over U is a pair (f, ℘), where f : ℘ → 2U. Therefore, an SS over U provides a parameterized collection of subsets of U.

### Definition 2.3 ([4])

An object of the form ℵ = {¬e : e ∈ ℘}, where ¬e = not e, is called the NOT set of parameters of ℘.

### Definition 2.4 ([22])

A BSS over U is an object of the form (f, g : ℘), where f : ℘ → 2U and g : ℵ → 2U are mappings such that f(e) ∩ g(¬e) = ∅ for all e ∈ ℘.

In other words, a BSS over U provides a pair of parameterized families of subsets of U.

A BSS (f, g : ℘) over U can be represented through a pair of binary tables, one for each of the functions f and g. In both tables, rows are indexed by the objects of U and columns by the parameters. We use the following keys for the tables of f and g:

$aij={1,if xi∈f(ej),0,if xi∉f(ej),bij={1,if xi∈g(¬ej),0,if xi∉g(¬ej),$

where aij and bij are the entries in the ith row and jth column of the respective tables.

Thus, BSS(U) denotes the collection of all BSSs over U.
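As a small illustration (with hypothetical toy data, not from the paper), a BSS can be stored as two dictionaries, its disjointness condition checked, and the two binary tables generated directly from the keys above:

```python
# A BSS (f, g : P) over U as two dicts; note the condition f(e) ∩ g(¬e) = ∅.
f = {"e1": {"u1", "u3"}, "e2": {"u2"}}          # positive parameter function
g = {"e1": {"u2", "u4"}, "e2": {"u1", "u4"}}    # g[e] stands for g(¬e)
U = ["u1", "u2", "u3", "u4"]

assert all(f[e].isdisjoint(g[e]) for e in f)    # BSS disjointness condition

# Binary tables: a[i][j] = 1 iff u_i ∈ f(e_j); b[i][j] = 1 iff u_i ∈ g(¬e_j)
params = sorted(f)
a = [[int(u in f[e]) for e in params] for u in U]
b = [[int(u in g[e]) for e in params] for u in U]
```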

### Definition 2.5 ([29])

(f, g : ℘) is called a full BSS if: $∪e∈℘f(e)=U=∪¬e∈ℵg(¬e)$.

### Definition 2.6 ([29])

For a BSS (f, g : ℘) over U, the object β = (U, (f, g : ℘)) is called a bipolar soft approximation space. Based on β, the following four operators are defined for any X ⊆ U:

$S_β+(X)= {u∈U:∃e∈℘,[u∈f(e)⊆X]},S_β-(X)= {u∈U:∃¬e∈ℵ, [u∈g(¬e),g(¬e)∩Xc≠∅]},S¯β+(X)= {u∈U:∃e∈℘, [u∈f(e),f(e)∩X≠∅]},S¯β-(X)= {u∈U:∃¬e∈ℵ,[u∈g(¬e)⊆Xc]}}$

are regarded as the soft lower positive, soft lower negative, soft upper positive, and soft upper negative approximations of X, respectively. Moreover,

$ℬS_β(X)=(S_β+(X),S_β-(X)),ℬS¯β(X)=(S¯β+(X),S¯β-(X))}$

are called the BSR approximations of X. Moreover, X is termed a BSRS if $ℬS_β(X)≠ℬS¯β(X)$; otherwise, X is called bipolar soft definable.
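A direct transcription of the four operators of Definition 2.6 into Python (a sketch with hypothetical toy data of our own) makes the first shortcoming discussed below easy to reproduce: when an object occurs in no f(e), the soft upper positive approximation of a non-empty set containing it can be empty.

```python
def bsr(f, g, U, X):
    """BSR approximation operators of Definition 2.6."""
    Xc = U - X
    low_pos = {u for u in U if any(u in fe and fe <= X for fe in f.values())}
    low_neg = {u for u in U if any(u in ge and ge & Xc for ge in g.values())}
    up_pos = {u for u in U if any(u in fe and fe & X for fe in f.values())}
    up_neg = {u for u in U if any(u in ge and ge <= Xc for ge in g.values())}
    return (low_pos, low_neg), (up_pos, up_neg)

# Toy data: u3 occurs in no f(e), so the upper positive approximation
# of the non-empty set X = {u3} comes out empty.
U = {"u1", "u2", "u3"}
f = {"e1": {"u1"}}
g = {"ne1": {"u2"}}
(lower_pos, lower_neg), (upper_pos, upper_neg) = bsr(f, g, U, {"u3"})
```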

Definition 2.6 does not fulfill the criteria of Pawlak’s RSs. For instance:

• (1) The upper approximation of a non-empty set may be empty.

• (2) The upper approximation of a subset of the universe may not contain the set itself, which never occurs in Pawlak’s RS theory.

The following example explains this observation:

### Example 2.7

Let (f, g : ℘) be a BSS over U, where U = {u1, u2, u3, u4, u5, u6} and ℘ = {e1, e2, e3, e4}. The tabular representation of (f, g : ℘) is presented in Tables 1(a) and 1(b).

From the above tables, it can be seen that u4 does not have any of the properties mentioned in ℘ or ℵ. For a subset X of U, the soft lower positive, soft upper positive, soft lower negative, and soft upper negative approximations of X are given as

$S_β+(X)={u3}, S¯β+(X)={u1,u2,u3,u5}.S_β-(X)={u2,u3,u6},S¯β-(X)={}.$

Object u4 is not a member of the soft lower positive approximation or of the soft upper positive approximation. However, according to the given information, u4 should be a member of the soft upper positive approximation because u4 ∈ X. Similarly, object u1 is not a member of the soft lower negative approximation or of the soft upper negative approximation. However, according to the given information, u1 should be a member of the soft upper negative approximation because u1 ∈ Xc.

From Table 1(a), it is clear that: [u1] = {u1}, [u2] = {u2, u5} = [u5], [u3] = {u3}, [u4] = {u4} and [u6] = {u6}. Now, according to Definition 2.1, we can compute the Pawlak approximations $σ_★(X)$ and $σ^★(X)$. Furthermore, u1 is a member of $S¯β+(X)$. However, there is no element in X that is equivalent to u1, and thus its membership in $S¯β+(X)$ is difficult to justify.

Similarly, as presented in Table 1(b), it is clear that: [u1] = [u4] = {u1, u4}, [u2] = {u2}, [u3] = [u6] = {u3, u6}, and [u5] = {u5}. Again, the Pawlak approximations can be computed according to Definition 2.1. We can also observe that u2 is a member of $S_β-(X)$. However, there is no element in Xc that is equivalent to u2; therefore, its membership in $S_β-(X)$ is difficult to justify.

Another unusual situation may occur: for a non-empty subset X of U, both $S_β+(X)$ and $S¯β+(X)$ may be empty sets. For instance, if we take X = {u4}, then $S_β+(X)=∅$ and $S¯β+(X)=∅$. In other words, u4 is an unfortunate object and will never be an element of $S_β+(X)$ or $S¯β+(X)$ for any X ⊆ U.

### 3. Novel Type of Bipolar Soft Rough Sets Approximation

In this section, to overcome the shortcomings mentioned in Example 2.7, we offer a new type of BSR approximation known as the MBSR approximations. The significant structural properties of these novel MBSR approximations are also investigated, with counterexamples showing that certain inclusions may be strict.

### Definition 3.1

Let (f, g : ℘) be a BSS over U and β = (U, (f, g : ℘)) be a bipolar soft approximation space. Based on β, the following operators are defined for any X ⊆ U:

$SL_β+(X)=∪{f(e),e∈℘:f(e)⊆X},SU¯β+(X)=(SL_β+(Xc))c,SU¯β-(X)=∪{g(¬e),¬e∈ℵ:g(¬e)⊆Xc},SL_β-(X)=(SU¯β-(Xc))c}$

are regarded as the modified soft β-lower positive, modified soft β-upper positive, modified soft β-upper negative, and modified soft β-lower negative approximations of X, respectively. Moreover, the ordered pairs

$MBS_β(X)=(SL_β+(X),SL_β-(X)),MBS¯β(X)=(SU¯β+(X),SU¯β-(X))}$

are called the MBSR approximations of X with respect to β. Moreover, when $MBS_β(X)≠MBS¯β(X)$, X is termed an MBSRS; otherwise, X is said to be modified bipolar soft β-definable. The corresponding positive, boundary, and negative regions with respect to the MBSR approximations are given as

$MPOSβ(X)=(SL_β+(X),SU¯β-(X)),$$MBNDβ(X)=(SU¯β+(X)\SL_β+(X),SL_β-(X)\SU¯β-(X)),$$ℳNℰGβ(X)=((SU¯β+(X))c,(SL_β-(X))c).$
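The four operators can be transcribed directly into Python. The sketch below (helper names are ours) is checked against the data of Example 3.4 below, taking X = {u3, u4, u5}, the set consistent with that example's stated results:

```python
def mbsr(f, g, U, X):
    """Modified bipolar soft rough approximations of Definition 3.1."""
    Xc = U - X
    sl_pos = set().union(*(fe for fe in f.values() if fe <= X))
    su_pos = U - set().union(*(fe for fe in f.values() if fe <= Xc))
    su_neg = set().union(*(ge for ge in g.values() if ge <= Xc))
    sl_neg = U - set().union(*(ge for ge in g.values() if ge <= X))
    return sl_pos, su_pos, su_neg, sl_neg

# Data of Example 3.4 (g keys stand for the NOT parameters ¬e):
U = {"u1", "u2", "u3", "u4", "u5", "u6"}
f = {"e1": {"u1", "u6"}, "e2": {"u3", "u4"}, "e3": set(), "e4": {"u2", "u5"}}
g = {"ne1": {"u3", "u5"}, "ne2": {"u5"}, "ne3": {"u2", "u6"}, "ne4": {"u4"}}
sl_pos, su_pos, su_neg, sl_neg = mbsr(f, g, U, {"u3", "u4", "u5"})
boundary = (su_pos - sl_pos, sl_neg - su_neg)   # MBND region
```

The upper approximations are obtained through the duality $SU¯β+(X)=(SL_β+(Xc))c$ and $SL_β-(X)=(SU¯β-(Xc))c$, exactly as in the definition.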

### Remark 3.2

From Definition 3.1, we observe that X is modified bipolar soft β-definable when $SL_β+(X)=SU¯β+(X)$ and $SL_β-(X)=SU¯β-(X)$.

### Remark 3.3

From Definition 3.1, we conclude that

• The soft lower positive and modified soft β-lower positive approximations of X are identical. That is, $S_β+(X)=SL_β+(X)$.

• The soft upper negative and modified soft β-upper negative approximations of X are identical. That is, $S¯β-(X)=SU¯β-(X)$.

Here, we provide the following example to clarify the concept of MBSR approximations.

### Example 3.4

Let (f, g : ℘) be a BSS over U, where U = {u1, u2, u3, u4, u5, u6} and ℘ = {e1, e2, e3, e4}. The maps f and g are characterized as follows:

$f:℘→2U,e↦{{u1,u6},if e=e1,{u3,u4},if e=e2,{},if e=e3,{u2,u5},if e=e4,g:ℵ→2U,¬e↦{{u3,u5},if ¬e=¬e1,{u5},if ¬e=¬e2,{u2,u6},if ¬e=¬e3,{u4},if ¬e=¬e4.$

According to Definition 3.1, we can evaluate the MBSR approximations of X = {u3, u4, u5} as follows.

$SL_β+(X)={u3,u4},SU¯β+(X)={u2,u3,u4,u5},SU¯β-(X)={u2,u6},SL_β-(X)={u1,u2,u6}.$

Therefore,

$MBS_β(X)=({u3,u4},{u1,u2,u6}),MBS¯β(X)=({u2,u3,u4,u5},{u2,u6}).$

Consequently, X is an MBSRS because $MBS_β(X)≠MBS¯β(X)$. Moreover, by direct calculations, we obtain

$MPOSβ(X)=({u3,u4},{u2,u6}),MBNDβ(X)=({u2,u5},{u1}),ℳNℰGβ(X)=({u1,u6},{u3,u4,u5}).$

### Remark 3.5

The relationship between the containment of $MBS_β(X)$ and $MBS¯β(X)$ is given by $SL_β+(X)⊆SU¯β+(X)$ and $SL_β-(X)⊇SU¯β-(X)$.

For the containment relationships between the modified soft β-lower and modified soft β-upper positive approximations of X, one can obtain the following properties:

### Theorem 3.6

Assume that β = (U, (f, g : ℘)) is a bipolar soft approximation space and X, Y ⊆ U. Then, the following properties hold.

• (1) $SL_β+(X)⊆X⊆SU¯β+(X)$;

• (2) $SL_β+(∅)=∅$;

• (3) $SU¯β+(U)=U$;

• (4) $X⊆Y⇒SL_β+(X)⊆SL_β+(Y)$;

• (5) $X⊆Y⇒SU¯β+(X)⊆SU¯β+(Y)$;

• (6) $SL_β+(X∩Y)⊆SL_β+(X)∩SL_β+(Y)$;

• (7) $SL_β+(X∪Y)⊇SL_β+(X)∪SL_β+(Y)$;

• (8) $SU¯β+(X∩Y)⊆SU¯β+(X)∩SU¯β+(Y)$;

• (9) $SU¯β+(X∪Y)⊇SU¯β+(X)∪SU¯β+(Y)$;

• (10) $SL_β+(Xc)=(SU¯β+(X))c$.

Proof
• (1) According to Definition 3.1, $SL_β+(X)⊆X$ is obvious. For the next inclusion, note that $SU¯β+(Xc)=(SL_β+(X))c$. But $SL_β+(X)⊆X$, so $(SL_β+(X))c⊇Xc$ and hence $SU¯β+(Xc)⊇Xc$. Replacing Xc by X, we have $X⊆SU¯β+(X)$. Consequently, we obtain $SL_β+(X)⊆X⊆SU¯β+(X)$.

• (2) By definition, $SL_β+(∅)=∪{f(e),e∈℘:f(e)⊆∅}=∅$.

• (3) By definition, $SU¯β+(U)=(SL_β+(Uc))c=(SL_β+(∅))c=(∅)c=U$.

• (4) Assume that $u∈SL_β+(X)$. Thus, there exists some f(e) such that $u∈f(e)⊆X$. But $X⊆Y$, so it follows that $u∈f(e)⊆Y$. That is, $u∈SL_β+(Y)$. Consequently, $SL_β+(X)⊆SL_β+(Y)$.

• (5) Since $X⊆Y$, we have $Yc⊆Xc$. By part (4), it follows that $SL_β+(Yc)⊆SL_β+(Xc)$. Therefore, we have $(SL_β+(Xc))c⊆(SL_β+(Yc))c$. This gives $SU¯β+(X)⊆SU¯β+(Y)$.

• (6) Let $u∈SL_β+(X∩Y)$. Thus, there exists some f(e) such that $u∈f(e)⊆X∩Y$. This implies that $f(e)⊆X$ and $f(e)⊆Y$. Consequently, $u∈SL_β+(X)$ and $u∈SL_β+(Y)$. So, $u∈SL_β+(X)∩SL_β+(Y)$. Hence, $SL_β+(X∩Y)⊆SL_β+(X)∩SL_β+(Y)$, as required.

• (7) On the contrary, suppose that $u∉SL_β+(X∪Y)$. Then $u∉f(e)$ for every $f(e)⊆X∪Y$; in particular, $u∉f(e)$ for every $f(e)⊆X$ and for every $f(e)⊆Y$. Consequently, $u∉SL_β+(X)$ and $u∉SL_β+(Y)$. This implies that $u∉SL_β+(X)∪SL_β+(Y)$. Hence, $SL_β+(X)∪SL_β+(Y)⊆SL_β+(X∪Y)$.

• (8) By Definition 3.1, we have

$SU¯β+(X∩Y)=(SL_β+(X∩Y)c)c=(SL_β+(Xc∪Yc))c⊆(SL_β+(Xc)∪SL_β+(Yc))c, By part (7)=(SL_β+(Xc))c∩(SL_β+(Yc))c=SU¯β+(X)∩SU¯β+(Y).$

Hence, $SU¯β+(X∩Y)⊆SU¯β+(X)∩SU¯β+(Y)$.

• (9) By Definition 3.1,

$SU¯β+(X∪Y)=(SL_β+(X∪Y)c)c=(SL_β+(Xc∩Yc))c⊇(SL_β+(Xc)∩SL_β+(Yc))c, By part (6)=(SL_β+(Xc))c∪(SL_β+(Yc))c=SU¯β+(X)∪SU¯β+(Y).$

Therefore, $SU¯β+(X∪Y)⊇SU¯β+(X)∪SU¯β+(Y)$.

• (10) By the definition of the modified soft β-upper positive approximation of X, we have $SU¯β+(X)=(SL_β+(Xc))c$. This implies that $SL_β+(Xc)=(SU¯β+(X))c$.

This completes the proof.

The next example indicates that the inclusions in parts (6)–(9) in Theorem 3.6 may hold strictly.

### Example 3.7

Let (f, g : ℘) be a BSS over U, where U = {u1, u2, u3, u4, u5, u6} and ℘ = {e1, e2, e3, e4, e5, e6}. The mappings f and g are as follows:

$f:℘→2U,e↦{{u1,u2,u3},if e=e1,{u1,u4},if e=e2,{u1,u3},if e=e3,{u3,u5,u6},if e=e4,{u2,u4},if e=e5,{u1,u2,u5},if e=e6,g:ℵ→2U,¬e↦{{u4,u5},if ¬e=¬e1,{u5},if ¬e=¬e2,{u2,u6},if ¬e=¬e3,{u4},if ¬e=¬e4,{u3,u6},if ¬e=¬e5,{u3,u4,u6},if ¬e=¬e6.$

Now, if we consider X = {u1, u2, u4} and Y = {u1, u3, u5}, then X ∩ Y = {u1} and X ∪ Y = {u1, u2, u3, u4, u5}. Now, by direct calculation, we obtain:

$SL_β+(X)={u1,u2,u4},SL_β+(Y)={u1,u3},SL_β+(X∩Y)=∅,SL_β+(X∪Y)={u1,u2,u3,u4,u5}.$

Clearly, $SL_β+(X∩Y)=∅⊂{u1}=SL_β+(X)∩SL_β+(Y)$; that is, $SL_β+(X∩Y)⊂SL_β+(X)∩SL_β+(Y)$, which indicates that the inclusion in part (6) of Theorem 3.6 may be strict. Similarly, $SL_β+(X∪Y)={u1,u2,u3,u4,u5}⊃{u1,u2,u3,u4}=SL_β+(X)∪SL_β+(Y)$; that is, $SL_β+(X∪Y)⊃SL_β+(X)∪SL_β+(Y)$, which shows that the inclusion in part (7) of Theorem 3.6 may hold strictly.

Now, if we take X = {u1, u2, u3} and Y = {u4, u5, u6}, then X ∩ Y = ∅. So,

$SU¯β+(X)=U,SU¯β+(Y)={u4,u5,u6},SU¯β+(X∩Y)=∅.$

Clearly, $SU¯β+(X∩Y)=∅⊂{u4,u5,u6}=SU¯β+(X)∩SU¯β+(Y)$; that is, $SU¯β+(X∩Y)⊂SU¯β+(X)∩SU¯β+(Y)$, which shows that the inclusion in part (8) of Theorem 3.6 may be strict.

Similarly, if we assume X = {u1} and Y = {u3, u6}, then X ∪ Y = {u1, u3, u6}. Therefore,

$SU¯β+(X)={u1},SU¯β+(Y)={u3,u6},SU¯β+(X∪Y)={u1,u3,u5,u6}.$

Clearly, $SU¯β+(X∪Y)={u1,u3,u5,u6}⊃{u1,u3,u6}=SU¯β+(X)∪SU¯β+(Y)$; that is, $SU¯β+(X∪Y)⊃SU¯β+(X)∪SU¯β+(Y)$, implying that the inclusion in part (9) of Theorem 3.6 may be strict.

To determine the relationship between the containment of the modified soft β-upper negative and modified soft β-lower negative approximations of X, we obtain the following results:

### Theorem 3.8

Suppose that β = (U, (f, g : ℘)) is a bipolar soft approximation space and X, Y ⊆ U. Then, the following properties hold.

• (1) $SU¯β-(X)⊆Xc⊆SL_β-(X)$;

• (2) $SU¯β-(U)=∅$;

• (3) $SL_β-(∅)=U$;

• (4) $X⊆Y⇒SU¯β-(Y)⊆SU¯β-(X)$;

• (5) $X⊆Y⇒SL_β-(Y)⊆SL_β-(X)$;

• (6) $SU¯β-(X∩Y)⊇SU¯β-(X)∪SU¯β-(Y)$;

• (7) $SU¯β-(X∪Y)⊆SU¯β-(X)∩SU¯β-(Y)$;

• (8) $SL_β-(X∩Y)⊇SL_β-(X)∪SL_β-(Y)$;

• (9) $SL_β-(X∪Y)⊆SL_β-(X)∩SL_β-(Y)$;

• (10) $SU¯β-(Xc)=(SL_β-(X))c$.

Proof
• (1) According to Definition 3.1, $SU¯β-(X)⊆Xc$ is obvious. For the other inclusion, note that $SL_β-(Xc)=(SU¯β-(X))c$. But $SU¯β-(X)⊆Xc$, so $(SU¯β-(X))c⊇X$ and hence $SL_β-(Xc)⊇X$. Using Xc instead of X, we have $SL_β-(X)⊇Xc$. Consequently, $SU¯β-(X)⊆Xc⊆SL_β-(X)$.

• (2) By definition, $SU¯β-(U)=∪{g(¬e),¬e∈ℵ:g(¬e)⊆Uc=∅}=∅$.

• (3) By definition, $SL_β-(∅)=(SU¯β-(∅c))c=(SU¯β-(U))c=(∅)c=U$.

• (4) Let $u∈SU¯β-(Y)=∪{g(¬e),¬e∈ℵ:g(¬e)⊆Yc}$. Therefore, there exists some g(¬e) such that $u∈g(¬e)⊆Yc$. Because $X⊆Y$, we have $Yc⊆Xc$, so $g(¬e)⊆Xc$. Therefore, $u∈SU¯β-(X)$. Consequently, $SU¯β-(Y)⊆SU¯β-(X)$.

• (5) As $X⊆Y$, we have $Yc⊆Xc$. By part (4), we can infer that $SU¯β-(Xc)⊆SU¯β-(Yc)$. Thus, it implies that $(SU¯β-(Yc))c⊆(SU¯β-(Xc))c$. Hence, $SL_β-(Y)⊆SL_β-(X)$.

• (6) Assume that $u∉SU¯β-(X∩Y)=∪{g(¬e),¬e∈ℵ:g(¬e)⊆(X∩Y)c=Xc∪Yc}$. Therefore, $u∉g(¬e)$ for every $g(¬e)⊆Xc∪Yc$; in particular, $u∉g(¬e)$ for every $g(¬e)⊆Xc$ and for every $g(¬e)⊆Yc$. Consequently, $u∉SU¯β-(X)$ and $u∉SU¯β-(Y)$. This implies that $u∉SU¯β-(X)∪SU¯β-(Y)$. Hence, $SU¯β-(X∩Y)⊇SU¯β-(X)∪SU¯β-(Y)$.

• (7) Suppose that $u∈SU¯β-(X∪Y)=∪{g(¬e),¬e∈ℵ:g(¬e)⊆(X∪Y)c=Xc∩Yc}$. Thus, there exists some g(¬e) such that $u∈g(¬e)⊆Xc∩Yc$. This implies that $g(¬e)⊆Xc$ and $g(¬e)⊆Yc$. Consequently, $u∈SU¯β-(X)$ and $u∈SU¯β-(Y)$. Therefore, $u∈SU¯β-(X)∩SU¯β-(Y)$. Hence, we obtain $SU¯β-(X∪Y)⊆SU¯β-(X)∩SU¯β-(Y)$.

• (8) By Definition 3.1,

$SL_β-(X∩Y)=(SU¯β-(X∩Y)c)c=(SU¯β-(Xc∪Yc))c By part (7)⊇(SU¯β-(Xc)∩SU¯β-(Yc))c=(SU¯β-(Xc))c∪(SU¯β-(Yc))c=SL_β-(X)∪SL_β-(Y).$

Hence, $SL_β-(X∩Y)⊇SL_β-(X)∪SL_β-(Y)$.

• (9) By Definition 3.1, it follows that

$SL_β-(X∪Y)=(SU¯β-(X∪Y)c)c=(SU¯β-(Xc∩Yc))c By part (6)⊆(SU¯β-(Xc)∪SU¯β-(Yc))c=(SU¯β-(Xc))c∩(SU¯β-(Yc))c=SL_β-(X)∩SL_β-(Y).$

Hence, $SL_β-(X∪Y)⊆SL_β-(X)∩SL_β-(Y)$.

• (10) By the definition of the modified soft β-lower negative approximation of X, we have $SL_β-(X)=(SU¯β-(Xc))c$.

This indicates that $SU¯β-(Xc)=(SL_β-(X))c$.

This completes the proof.

The following example indicates that the inclusions in parts (6) to (9) of Theorem 3.8 may be strict.

### Example 3.9

Let β = (U, (f, g : ℘)) be as given in Example 3.7. If we take X = {u1, u2} and Y = {u1, u6}, then X ∩ Y = {u1} and X ∪ Y = {u1, u2, u6}. By direct computation, we obtain

$SU¯β-(X)={u3,u4,u5,u6},SL_β-(X)=U,SU¯β-(Y)={u4,u5},SL_β-(Y)=U,SU¯β-(X∩Y)={u2,u3,u4,u5,u6},SL_β-(X∪Y)={u1,u3,u4,u5}.$

Clearly, $SU¯β-(X∩Y)={u2,u3,u4,u5,u6}⊃{u3,u4,u5,u6}=SU¯β-(X)∪SU¯β-(Y)$, So, $SU¯β-(X∩Y)⊃SU¯β-(X)∪SU¯β-(Y)$, which indicates that inclusion in Part (6) of Theorem 3.8 may hold strictly.

Similarly, $SL_β-(X∪Y)={u1,u3,u4,u5}⊂U=SL_β-(X)∩SL_β-(Y)$; that is, $SL_β-(X∪Y)⊂SL_β-(X)∩SL_β-(Y)$, which shows that the inclusion in part (9) of Theorem 3.8 may be strict.

Now, if we consider X = {u3, u4, u5} and Y = {u2, u5}, then X ∪ Y = {u2, u3, u4, u5}. Therefore,

$SU¯β-(X)={u2,u6},SU¯β-(Y)={u3,u4,u6},SU¯β-(X∪Y)=∅.$

Clearly, $SU¯β-(X∪Y)=∅⊂{u6}=SU¯β-(X)∩SU¯β-(Y)$; that is, $SU¯β-(X∪Y)⊂SU¯β-(X)∩SU¯β-(Y)$, which implies that the inclusion in part (7) of Theorem 3.8 may hold strictly.

Now, if we assume X = {u3, u4, u6} and Y = {u2, u6}, then X ∩ Y = {u6}. Then

$SL_β-(X)={u1,u2,u5},SL_β-(Y)={u1,u3,u4,u5},SL_β-(X∩Y)=U.$

Clearly, $SL_β-(X∩Y)=U⊃{u1,u2,u3,u4,u5}=SL_β-(X)∪SL_β-(Y)$; that is, $SL_β-(X∩Y)⊃SL_β-(X)∪SL_β-(Y)$, which shows that the inclusion in part (8) of Theorem 3.8 may be strict.

### Remark 3.10

In BSRSs [29] and MRBSs [30], the lower and upper approximations of ∅ and U always coincide. In our proposed MBSRS model, however:

• $SL_β+(∅)=SU¯β+(∅)$ does not hold in general.

• $SL_β-(∅)=SU¯β-(∅)$ does not hold in general.

• $SL_β+(U)=SU¯β+(U)$ does not hold in general.

• $SL_β-(U)=SU¯β-(U)$ does not hold in general.

The following result shows the condition under which the modified soft β-lower positive and modified soft β-upper positive approximations of U and ∅ coincide:

### Proposition 3.11

Let (f, g : ℘) be a BSS over U such that $∪e∈℘f(e)=U$. Then,

• (1) $SL_β+(U)=U=SU¯β+(U)$;

• (2) $SL_β+(∅)=∅=SU¯β+(∅)$.

Proof
• (1) From part (3) of Theorem 3.6, it follows that $SU¯β+(U)=U$. Now, according to Definition 3.1, we have $SL_β+(U)=∪{f(e),e∈℘:f(e)⊆U}=∪e∈℘f(e)$. Since $∪e∈℘f(e)=U$, it follows that $SL_β+(U)=U$. Hence, $SL_β+(U)=U=SU¯β+(U)$.

• (2) From part (1) of Theorem 3.6, it follows that $SL_β+(∅)⊆∅$, so $SL_β+(∅)=∅$. Now, by Definition 3.1, $SU¯β+(∅)=(SL_β+(∅c))c=(SL_β+(U))c=Uc=∅$. That is, $SU¯β+(∅)=∅$. Consequently, $SL_β+(∅)=∅=SU¯β+(∅)$.

The next result shows the condition under which the modified soft β-upper negative and modified soft β-lower negative approximations of ∅︀ and coincide.

### Proposition 3.12

Let (f, g : ℘) be a BSS over U such that $∪¬e∈ℵg(¬e)=U$. Then,

• (1) $SL_β-(∅)=U=SU¯β-(∅)$;

• (2) $SL_β-(U)=∅=SU¯β-(U)$.

Proof
• (1) From part (3) of Theorem 3.8, it follows that $SL_β-(∅)=U$. Now, by Definition 3.1, $SU¯β-(∅)=∪{g(¬e),¬e∈ℵ:g(¬e)⊆(∅)c=U}=∪¬e∈ℵg(¬e)$. Since $∪¬e∈ℵg(¬e)=U$, it follows that $SU¯β-(∅)=U$. Thus, $SL_β-(∅)=U=SU¯β-(∅)$.

• (2) From part (2) of Theorem 3.8, we have $SU¯β-(U)=∅$. Now, by Definition 3.1, we get $SL_β-(U)=(SU¯β-(Uc))c=(SU¯β-(∅))c=Uc=∅$. That is, $SL_β-(U)=∅$. Hence, $SL_β-(U)=∅=SU¯β-(U)$.

### Proposition 3.13

Let (f, g : ℘) be a BSS over U with bipolar soft approximation space β. Then, the following are equivalent:

• (1) (f, g : ℘) is a full BSS;

• (2) $MBS_β(U)=(U,∅)$;

• (3) $MBS¯β(U)=(U,∅)$;

• (4) $MBS_β(∅)=(∅,U)$;

• (5) $MBS¯β(∅)=(∅,U)$.

Proof

Direct consequence of Proposition 3.11 and Proposition 3.12.

The next result shows the relationship between the soft upper positive, soft lower negative, modified soft β-upper positive, and modified soft β-lower negative approximations of X.

### Proposition 3.14

Let (f, g : ℘) be a full BSS with bipolar soft approximation space β. Then, for any X ⊆ U, the following properties hold.

• (1) $SU¯β+(X)⊆S¯β+(X)$;

• (2) $SL_β-(X)⊆S_β-(X)$.

Proof
• (1) Let $x∉S¯β+(X)$. Then, for every e with $x∈f(e)$, we have $f(e)∩X=∅$; that is, $f(e)⊆Xc$. Since (f, g : ℘) is full, there exists some e with $x∈f(e)$, so $x∈SL_β+(Xc)$. Therefore, $x∉(SL_β+(Xc))c=SU¯β+(X)$. Consequently, $SU¯β+(X)⊆S¯β+(X)$.

• (2) Assume that $x∉S_β-(X)$. Then, for every $¬e$ with $x∈g(¬e)$, we have $g(¬e)∩Xc=∅$; that is, $g(¬e)⊆X$. Since (f, g : ℘) is full, there exists some $¬e$ with $x∈g(¬e)$, which follows that $x∈SU¯β-(Xc)$. Thus, $x∉(SU¯β-(Xc))c=SL_β-(X)$. Hence, $SL_β-(X)⊆S_β-(X)$.

### Remark 3.15

The above proposition reveals that the soft upper positive approximation of X is finer than the modified soft β-upper positive approximation of X. Similarly, the soft lower negative approximation of X is finer than the modified soft β-lower negative approximation of X.

The next example shows that the inclusions in parts (1) and (2) of the above proposition might be strict.

### Example 3.16

Let (f, g : ℘) be a BSS over U, where U = {u1, u2, u3, u4, u5} and ℘ = {e1, e2, e3, e4, e5, e6}. The mappings f and g are as follows:

$f:℘→2U,e↦{{u1,u3},if e=e1,{u1,u4,u5},if e=e2,{u2},if e=e3,{u2,u4,u5},if e=e4,{u1,u2},if e=e5,{u3,u5},if e=e6,g:ℵ→2U,¬e↦{{u2,u5},if ¬e=¬e1,{u3},if ¬e=¬e2,{u3,u4,u5},if ¬e=¬e3,{u1,u3},if ¬e=¬e4,{u5},if ¬e=¬e5,{u2,u4},if ¬e=¬e6.$

If we take X = {u1, u3, u4}, then

$SU¯β+(X)={u1,u3,u4,u5},S¯β+(X)={u1,u2,u3,u4,u5}.$

Clearly, $SU¯β+(X)⊂S¯β+(X)$, which indicates that the inclusion in part (1) of Proposition 3.14 may hold strictly.

Now, if X = {u1, u3}, then

$SL_β-(X)={u2,u4,u5},S_β-(X)={u2,u3,u4,u5}.$

Clearly, $SL_β-(X)⊂S_β-(X)$, indicating that the inclusion in part (2) of Proposition 3.14 may be strict.

### Theorem 3.17

Let (f, g : ℘) be a BSS over U with bipolar soft approximation space β and X ⊆ U. Then, the following properties hold.

• (1) $SL_β+(SL_β+(X))=SL_β+(X)$;

• (2) $SU¯β+(SU¯β+(X))=SU¯β+(X)$;

• (3) $SL_β+(SU¯β+(X))⊆SU¯β+(X)$;

• (4) $SU¯β+(SL_β+(X))⊇SL_β+(X)$.

Proof
• (1) From part (1) of Theorem 3.6, it follows that $SL_β+(SL_β+(X))⊆SL_β+(X)$. For the reverse inclusion, let $u∈SL_β+(X)$. Then, there exists some e such that $u∈f(e)⊆X$. Every element of this f(e) also belongs to $SL_β+(X)$ through the same f(e), so $f(e)⊆SL_β+(X)$. Thus, $u∈f(e)⊆SL_β+(X)$, which gives $u∈SL_β+(SL_β+(X))$. Consequently, $SL_β+(SL_β+(X))=SL_β+(X)$.

• (2) By Definition 3.1, we have

$SU¯β+(SU¯β+(X))=(SL_β+(SU¯β+(X))c)c=[SL_β+([SL_β+(Xc)]c)c]c=[SL_β+(SL_β+(Xc))]c=[SL_β+(Xc)]c by part (1)=SU¯β+(X).$

Hence, $SU¯β+(SU¯β+(X))=SU¯β+(X)$.

The proofs of parts (3) and (4) are quite clear from part (1) of Theorem 3.6.

### Remark 3.18

From parts (1) and (2) of the above theorem, it can be observed that the modified soft β-lower positive approximation of $SL_β+(X)$ and the modified soft β-upper positive approximation of $SU¯β+(X)$ with respect to β are invariant.

The next example shows that the inclusions in parts (3) and (4) of the above theorem may hold strictly.

### Example 3.19

If we consider X = {u1} in Example 3.7, then

$SU¯β+(X)={u1},SL_β+(SU¯β+(X))=∅.$

Clearly, $SL_β+(SU¯β+(X))=∅⊂{u1}=SU¯β+(X)$, indicating that the inclusion in part (3) of Theorem 3.17 may be strict.

Similarly, if we consider a set X in Example 3.7 for which $SL_β+(X)={u1,u3}$, then

$SL_β+(X)={u1,u3},SU¯β+(SL_β+(X))={u1,u3,u4,u6}.$

Clearly, $SU¯β+(SL_β+(X))={u1,u3,u4,u6}⊃{u1,u3}=SL_β+(X)$, showing that the inclusion in part (4) of Theorem 3.17 may hold strictly.

### Theorem 3.20

Let (f, g : ℘) be a BSS over U with bipolar soft approximation space β and X ⊆ U. Then, the following properties hold.

• (1) $SU¯β-(SL_β-(X))=(SL_β-(X))c$;

• (2) $SL_β-(SU¯β-(X))=(SU¯β-(X))c$;

• (3) $SU¯β-(SU¯β-(X))⊆(SU¯β-(X))c$;

• (4) $SL_β-(SL_β-(X))⊇(SL_β-(X))c$.

Proof
• (1) By part (1) of Theorem 3.8, we have $SU¯β-(SL_β-(X))⊆(SL_β-(X))c$. For the reverse inclusion, let $Y=(SL_β-(X))c$ and $u∈Y=SU¯β-(Xc)=∪{g(¬e),¬e∈ℵ:g(¬e)⊆X}$. Then, there exists some $¬e∈ℵ$ such that $u∈g(¬e)⊆X$. Every element of this g(¬e) also belongs to $SU¯β-(Xc)=Y$, so $g(¬e)⊆Y$. Thus, $u∈SU¯β-(Yc)=SU¯β-(SL_β-(X))$. Hence, $Y⊆SU¯β-(SL_β-(X))$; that is, $(SL_β-(X))c⊆SU¯β-(SL_β-(X))$. Consequently, $SU¯β-(SL_β-(X))=(SL_β-(X))c$.

• (2) By Definition 3.1, it follows that

$SL_β-(SU¯β-(X))=(SU¯β-(SU¯β-(X))c)c=[SU¯β-(SL_β-(Xc))]c by part (10) of Theorem 3.8=[[SL_β-(Xc)]c]c by part (1)=SL_β-(Xc)=(SU¯β-(X))c by part (10) of Theorem 3.8.$

Hence, $SL_β-(SU¯β-(X))=(SU¯β-(X))c$.

The proofs of parts (3) and (4) are quite clear from part (1) of Theorem 3.8.

The next example shows that the inclusions in parts (3) and (4) of the above theorem may hold strictly.

### Example 3.21

If we consider X = {u3, u5, u6} in Example 3.7, then

$SU¯β-(X)={u4},SL_β-(X)={u1,u2,u4},SU¯β-(SU¯β-(X))={u2,u3,u5,u6},SL_β-(SL_β-(X))={u1,u2,u3,u5,u6}.$

Clearly, $SU¯β-(SU¯β-(X))={u2,u3,u5,u6}⊂{u1,u2,u3,u5,u6}=(SU¯β-(X))c$, indicating that the inclusion in part (3) of Theorem 3.20 may be strict. Also, $SL_β-(SL_β-(X))={u1,u2,u3,u5,u6}⊃{u3,u5,u6}=(SL_β-(X))c$, which shows that the inclusion in part (4) of Theorem 3.20 may hold strictly.

A comparison between BSR-approximations and MBSR approximations is presented in Table 2.

### 4. Some Important Measures Associated with MBSRS

Pawlak identified two quantitative measures for quantifying the inaccuracy of RS approximations in [2], which might assist in obtaining a sense of how precisely the information is connected to a certain equivalence relation for a certain classification. Generally, the existence of a boundary region causes uncertainty in a set. The greater the boundary region of a set, the lower the accuracy of the set.

According to Pawlak [2], the accuracy and roughness measures of X are defined as

$A(X)=|σ_★(X)|/|σ^★(X)|,R(X)=1-A(X),$

where | • | denotes the cardinality of the set.

In other words, A(X) captures the degree of completeness of the knowledge of the set X, whereas R(X) is viewed as the degree of incompleteness of the knowledge of the set X.

As a generalization of these measures, the present section introduces some measures in the framework of the MBSRS and investigates some of its fundamental properties.

### Definition 4.1

Let β = (U, (f, g : ℘)) be a bipolar soft approximation space and ∅ ≠ X ⊆ U. Then, the accuracy measure of X in the MBSRS environment is characterized as follows:

$AM(X)=(Xβ+,Xβ-),$

where

$Xβ+=|SL_β+(X)|/|SU¯β+(X)|,$

and

$Xβ-=|SU¯β-(X)|/|SL_β-(X)|.$

The roughness measure for in the MBSRS environment is characterized as follows:

$RM(X)=(1,1)-(Xβ+,Xβ-)=(1-Xβ+,1-Xβ-).$

Clearly, $0≤Xβ+≤1$ and $0≤Xβ-≤1$.

### Remark 4.2

Let $AM(X)$ be the accuracy measure of X. Then,

• (i) X is a modified bipolar soft β-definable set if and only if $AM(X)=(1,1)$.

• (ii) If $AM(X)≠(1,1)$, then the set X has a nonempty boundary region and consequently X is an MBSRS.

From Definition 4.1, we can prove the following results:

### Proposition 4.3

Let $∅≠X⊆U$. Then,

• (1) $AM(X)=(0,0)⇔SL_β+(X)=∅=SU¯β-(X)$;

• (2) $AM(X)=(1,1)⇔SL_β+(X)=SU¯β+(X)$ and $SU¯β-(X)=SL_β-(X)$;

• (3) .

Proof

Straightforward.

Gediga and Düntsch [42], in 2001, introduced a measure of the precision of the approximation of X, which is given by

$P(X)=|σ_★(X)|/|X|.$

This is simply the relative number of elements of X that can be approximated by σ. It is important to note that P(X) requires complete knowledge of X, whereas this is not needed for A(X).

It can be generalized in the context of the MBSRS as follows:

### Definition 4.4

Let β = (U, (f, g : ℘)) and ∅ ≠ X ⊆ U. Then, the precision measure of X in the MBSRS environment is defined as

$PM(X)=(X★β+,X★β-),$

where

$X★β+=|SL_β+(X)|/|X|,$

and

$X★β-=|SU¯β-(X)|/|X|.$

Clearly, $0≤X★β+≤1$ and $0≤X★β-≤1$.

From the above definition, we can derive the following properties of $PM(X)$:

### Proposition 4.5

Let $∅≠X⊆U$. Then,

• (1) $PM(X)=(0,0)⇔SL_β+(X)=∅=SU¯β-(X)$;

• (2) $PM(X)=(1,1)⇔SL_β+(X)=X=SU¯β-(X)$;

• (3) $X★β+≥Xβ+$ and $X★β-≥Xβ-$;

• (4) .

Proof

Straightforward.

Yao [43] revised some of the properties of the accuracy measure given by Pawlak [2] and proposed another measure known as the measure of the completeness of knowledge, which is given by

$Ck(X)=(|σ_★(X)|+|σ_★(Xc)|)/|U|.$

In the MBSRS framework, it can be characterized as

### Definition 4.6

Let β = (U, (f, g : ℘)) and X ⊆ U. Then, the measure of the completeness of knowledge of X under the MBSRS environment is defined as

$MCk(X)=(X#β+,X#β-),$

where

$X#β+=(|SL_β+(X)|+|SL_β+(Xc)|)/|U|,$

and

$X#β-=(|SU¯β-(X)|+|SU¯β-(Xc)|)/|U|.$

Clearly, $0≤X#β+≤1$ and $0≤X#β-≤1$.

The following proposition shows the condition under which $MCk(X)$ attains its maximum value.

### Proposition 4.7

Let (f, g : ℘) be a full BSS. Then, $MCk(X)=(1,1)$ whenever $X=∅$ or $X=U$.

Proof

Because (f, g : ℘) is a full BSS, $∪e∈℘f(e)=U$ and $∪¬e∈ℵg(¬e)=U$. Now, we prove the required result for the two cases.

Case 1

When $X=∅$, then

$X#β+=(|SL_β+(∅)|+|SL_β+(∅c)|)/|U|=(|SL_β+(∅)|+|SL_β+(U)|)/|U|=(|∅|+|U|)/|U| (by Proposition 3.11)=(0+|U|)/|U|=1.$

Similarly,

$X#β-=(|SU¯β-(∅)|+|SU¯β-(∅c)|)/|U|=(|SU¯β-(∅)|+|SU¯β-(U)|)/|U|=(|U|+|∅|)/|U| (by Proposition 3.12)=(|U|+0)/|U|=1.$

Therefore, $MCk(X)=(X#β+,X#β-)=(1,1)$.

Case 2

When $X=U$, then

$X#β+=(|SL_β+(U)|+|SL_β+(Uc)|)/|U|=(|SL_β+(U)|+|SL_β+(∅)|)/|U|=(|U|+|∅|)/|U| (by Proposition 3.11)=(|U|+0)/|U|=1.$

Also,

$X#β-=(|SU¯β-(U)|+|SU¯β-(Uc)|)/|U|=(|SU¯β-(U)|+|SU¯β-(∅)|)/|U|=(|∅|+|U|)/|U| (by Proposition 3.12)=(0+|U|)/|U|=1.$

Consequently, $MCk(X)=(X#β+,X#β-)=(1,1)$. Hence, in both cases we obtain $MCk(X)=(1,1)$.

Here, we elaborate on the following example to explain the concepts of AM(X), PM(X), and MCk(X).

### Example 4.8. (Continued from Example 3.4)

The MBSR approximations of X = {u3, u4, u5} are as follows:

$SL_β+(X)={u3,u4},SU¯β+(X)={u2,u3,u4,u5},SU¯β-(X)={u2,u6},SL_β-(X)={u1,u2,u6}.$

Also,

$SL_β+(Xc)={u1,u6},SU¯β-(Xc)={u3,u4,u5}.$

Therefore,

$AM(X)=(Xβ+,Xβ-)=(2/4,2/3)=(0.500,0.666),PM(X)=(X★β+,X★β-)=(2/3,2/3)=(0.666,0.666),MCk(X)=(X#β+,X#β-)=((2+2)/6,(2+3)/6)=(4/6,5/6)=(0.666,0.833).$
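The three measures can be reproduced from the cardinalities above with exact arithmetic; a minimal sketch (variable names are ours):

```python
from fractions import Fraction

# Cardinalities from Example 4.8: |U| = 6 and X = {u3, u4, u5}.
sl_pos, su_pos = 2, 4          # |SL+(X)|, |SU+(X)|
su_neg, sl_neg = 2, 3          # |SU-(X)|, |SL-(X)|
sl_pos_c, su_neg_c = 2, 3      # |SL+(X^c)|, |SU-(X^c)|
size_X, size_U = 3, 6

AM = (Fraction(sl_pos, su_pos), Fraction(su_neg, sl_neg))      # accuracy
PM = (Fraction(sl_pos, size_X), Fraction(su_neg, size_X))      # precision
MCk = (Fraction(sl_pos + sl_pos_c, size_U),
       Fraction(su_neg + su_neg_c, size_U))                    # completeness
```

Using `Fraction` avoids the rounding visible in the decimal values above (e.g., 2/3 rather than 0.666).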

### 5. MAGDM Using MBSRS

Group decision-making (GDM) is an efficient strategy for dealing with complicated DM problems in which numerous experts decide on a set of alternatives. The aim is to integrate the opinions expressed by experts to find an alternative that is most agreeable to the group of experts as a whole. GDM techniques must consider various criteria and attributes in a complex society. As a result, with rapid development in numerous domains, studies on GDM that specifically involve multiple attributes are the main focus and have achieved tremendous progress.

In general, MAGDM is a technique in which a team of experts (DMs) collaborates to determine the optimum option from a set of available alternatives that are categorized based on their features in a specific context.

In this section, we describe the design of a robust MAGDM technique using MBSRS. We provide a brief statement of a MAGDM problem within the context of the MBSRS and then provide a generic mathematical formulation for the MAGDM problem based on MBSRS theory.

### 5.1 Problem Description

Assume that U = {u1, u2, . . ., un} is a set of n objects, and ℘ = {e1, e2, . . ., em} is the set of all possible object attributes. Suppose we have a group of experts consisting of k invited DMs. Each expert needs to examine all the objects of U and will be requested to point out only “the optimal alternatives” as his/her evaluation result, according to his/her experience and professional knowledge. Therefore, each expert’s primary evaluation result is a subset of U. Let X1, X2, . . ., Xk represent the primary evaluations of the k DMs, respectively, and let the BSSs Bq (q = 1, 2, . . ., r) be the actual results that were previously obtained for such problems at various times or locations. For simplicity, we assume that the evaluations of all experts are equally important. The DM problem for this MAGDM setting is then: “how to reconcile (or compromise) the differences in the evaluations expressed by individual experts to find the alternative that is most acceptable to the group of experts as a whole”.

### 5.2 Mathematical Modelling

In this subsection, we provide step-by-step mathematical modelling and the procedure of the MAGDM method using MBSRS theory.

Definition 5.1

Let $MBS_Bq(Xj)=(SL_β+fq(Xj),SL_β-gq(Xj))$ and $MBS¯Bq(Xj)=(SU¯β+fq(Xj),SU¯β-gq(Xj))$ be the lower and upper MBSR approximations of Xj related to Bq (q = 1, 2, . . ., r).

Then,

$[M˜]=([SL_β+f1(X1),SL_β-g1(X1)][SL_β+f1(X2),SL_β-g1(X2)]⋯[SL_β+f1(Xk),SL_β-g1(Xk)][SL_β+f2(X1),SL_β-g2(X1)][SL_β+f2(X2),SL_β-g2(X2)]⋯[SL_β+f2(Xk),SL_β-g2(Xk)]⋮⋮⋱⋮[SL_β+fr(X1),SL_β-gr(X1)][SL_β+fr(X2),SL_β-gr(X2)]⋯[SL_β+fr(Xk),SL_β-gr(Xk)]),$$[M]˜=([SU¯β+f1(X1),SU¯β-g1(X1)][SU¯β+f1(X2),SU¯β-g1(X2)]⋯[SU¯β+f1(Xk),SU¯β-g1(Xk)][SU¯β+f2(X1),SU¯β-g2(X1)][SU¯β+f2(X2),SU¯β-g2(X2)]⋯[SU¯β+f2(Xk),SU¯β-g2(Xk)]⋮⋮⋱⋮[SU¯β+fr(X1),SU¯β-gr(X1)][SU¯β+fr(X2),SU¯β-gr(X2)]⋯[SU¯β+fr(Xk),SU¯β-gr(Xk)])$

are called the lower and upper MBSR approximation matrices, respectively. Here

$SL_β+fq(Xj)=(u1j_fq,u2j_fq,…,unj_fq),$$SL_β-gq(Xj)=(u1j_gq,u2j_gq,…,unj_gq),$$SU¯β+fq(Xj)=(u1j¯fq,u2j¯fq,…,unj¯fq),$$SU¯β-gq(Xj)=(u1j¯gq,u2j¯gq,…,unj¯gq),$

where

$uij_fq={1,if ui∈SL_β+fq(Xj),0,otherwise,$$uij_gq={1/2,if ui∈SL_β-gq(Xj),0,otherwise,$$uij¯fq={1/2,if ui∈SU¯β+fq(Xj),0,otherwise,$$uij¯gq={1,if ui∈SU¯β-gq(Xj),0,otherwise.$
Definition 5.2

Let [$M~$] and $[M]˜$ be the lower and upper MBSR approximation matrices with respect to $MBS_Bq(Xj)$ and $MBS¯Bq(Xj)$, where j = 1, 2, . . ., k and q = 1, 2, . . ., r.

Then,

$Vf=⊕j=1k⊕q=1r(SL_β+fq(Xj)⊕SU¯β+fq(Xj)),$$Vg=⊕j=1k⊕q=1r(SL_β-gq(Xj)⊕SU¯β-gq(Xj))$

are called the positive and negative MBSR approximation vectors, respectively. Here, the operation ⊕ represents vector summation.

Definition 5.3

Let Vf and Vg be the positive and negative MBSR approximation vectors, respectively. Then,

$Vd=Vf-Vg=(δ1,δ2,⋯,δn),$

is said to be a decision vector where each δi is called the score value ( ) of .

• (i) $u_i$ is considered an optimal alternative if its score value $\delta_i$ is the maximum of $\{\delta_1, \delta_2, \ldots, \delta_n\}$.

• (ii) $u_i$ is considered the worst alternative if its score value $\delta_i$ is the minimum of $\{\delta_1, \delta_2, \ldots, \delta_n\}$.

When there is more than one optimal alternative, we may choose any one of them.
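The decision rule of Definition 5.3 thus reduces to a componentwise subtraction followed by an argmax (argmin for the worst alternative), with ties for the maximum broken arbitrarily. A sketch with hypothetical names:

```python
def decide(universe, Vf, Vg):
    """Form V_d = V_f - V_g and return the decision vector together with
    the optimal and worst alternatives (as lists, to allow ties)."""
    Vd = [f - g for f, g in zip(Vf, Vg)]
    best, worst = max(Vd), min(Vd)
    optimal = [u for u, d in zip(universe, Vd) if d == best]
    worst_alts = [u for u, d in zip(universe, Vd) if d == worst]
    return Vd, optimal, worst_alts

Vd, optimal, worst = decide(["a", "b", "c"], [3.0, 2.0, 3.0], [1.0, 0.5, 2.0])
print(Vd, optimal, worst)  # -> [2.0, 1.5, 1.0] ['a'] ['c']
```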

### 5.3 Proposed Algorithm

We now present a DM algorithm for the established MAGDM problem considered in subsection 5.2. The related steps are as follows.

• Step 1: Take the primary evaluations $X_j$ of the experts; $j = 1, 2, \ldots, k$.

• Step 2: Construct the BSSs $(f_q, g_q; \wp)$; $q = 1, 2, \ldots, r$, using the actual results.

• Step 3: Calculate $\underline{MBS}_{B_q}(X_j)$ and $\overline{MBS}_{B_q}(X_j)$ $\forall j = 1, 2, \ldots, k$ and $q = 1, 2, \ldots, r$.

• Step 4: Compute $[\underline{M}]$ and $[\overline{M}]$ by Definition 5.1.

• Step 5: Calculate $V_f$ and $V_g$ from Definition 5.2.

• Step 6: Compute $V_d$ by Definition 5.3.

• Step 7: Find $\max_{1 \le i \le n} \delta_i$. The alternative with the highest score value should be chosen for the final selection.

A flowchart depicting the above algorithm is shown in Figure 1.
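Assuming the MBSR approximations of Step 3 are already available (computing them requires the MBSRS operators defined earlier in the paper), Steps 4–7 can be condensed into a single routine. The data layout — dicts keyed by $(q, j)$ pairs — and the function name `magdm_rank` are our own assumptions:

```python
def magdm_rank(universe, lower_pos, upper_pos, lower_neg, upper_neg):
    """Steps 4-7 of the algorithm: apply the Definition 5.1 weights,
    aggregate into V_f and V_g (Definition 5.2), score with
    V_d = V_f - V_g (Definition 5.3), and rank by score."""
    n = len(universe)
    Vf, Vg = [0.0] * n, [0.0] * n
    for key in lower_pos:  # one pass per (BSS index q, expert index j)
        for i, u in enumerate(universe):
            Vf[i] += (1.0 if u in lower_pos[key] else 0.0) \
                   + (0.5 if u in upper_pos[key] else 0.0)
            Vg[i] += (0.5 if u in lower_neg[key] else 0.0) \
                   + (1.0 if u in upper_neg[key] else 0.0)
    Vd = [f - g for f, g in zip(Vf, Vg)]
    ranking = sorted(zip(universe, Vd), key=lambda pair: -pair[1])
    return Vd, ranking

# Toy run with one BSS (q=1) and one expert (j=1):
Vd, ranking = magdm_rank(
    ["u1", "u2"],
    lower_pos={(1, 1): {"u1"}}, upper_pos={(1, 1): {"u1", "u2"}},
    lower_neg={(1, 1): {"u2"}}, upper_neg={(1, 1): {"u2"}},
)
print(Vd, ranking)  # -> [1.5, -1.0] [('u1', 1.5), ('u2', -1.0)]
```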

### 5.4 An Illustrative Example (Faculty Appointment Problem)

In this subsection, we present a case study to illustrate the principal methodology of the proposed algorithm and its related concepts.

Example 5.4

The appointment of faculty members to senior positions in universities involves very complicated evaluations and DM. A candidate may be judged by various attributes, such as research productivity, managerial skills, and the ability to work under pressure. To make accurate judgments about the candidates based on these attributes, it is wise to consult experts for their professional opinions.

Let $U = \{C_1, C_2, C_3, C_4, C_5\}$ be the set of five candidates who may fit a senior faculty position at a certain university. To hire the person most suitable for this position, a panel of experts is set up. The panel evaluates the candidates according to the set of attributes $\wp = \{e_1, e_2, e_3, e_4, e_5, e_6\}$, where $e_1$ = research productivity, $e_2$ = managerial skills, $e_3$ = impact on the research community, $e_4$ = ability to work under pressure, $e_5$ = academic leadership qualities, and $e_6$ = contribution to X University.

• Step 1: The panel of three experts gives the following primary evaluations $X_1, X_2, X_3$ for the candidates:

$X1={C1,C2,C3},X2={C1,C3,C5} and X3={C2,C4,C5}.$

• Step 2: The actual results, obtained in two different meetings at different times, are represented for the candidates in the form of the BSSs $(f_1, g_1; \wp)$ and $(f_2, g_2; \wp)$, where the positive membership map of a BSS denotes the expertise of the candidates in a certain attribute and the negative membership map denotes their non-expertise:

$$f_1: \wp \to 2^U,\; e \mapsto \begin{cases} \{C_1, C_3\}, & \text{if } e = e_1,\\ \{C_1, C_4, C_5\}, & \text{if } e = e_2,\\ \{C_2\}, & \text{if } e = e_3,\\ \{C_2, C_4, C_5\}, & \text{if } e = e_4,\\ \{C_1, C_2\}, & \text{if } e = e_5,\\ \{C_3, C_5\}, & \text{if } e = e_6, \end{cases} \qquad g_1: \aleph \to 2^U,\; \neg e \mapsto \begin{cases} \{C_2, C_5\}, & \text{if } \neg e = \neg e_1,\\ \{C_3\}, & \text{if } \neg e = \neg e_2,\\ \{C_3, C_4, C_5\}, & \text{if } \neg e = \neg e_3,\\ \{C_1, C_3\}, & \text{if } \neg e = \neg e_4,\\ \{C_5\}, & \text{if } \neg e = \neg e_5,\\ \{C_2, C_4\}, & \text{if } \neg e = \neg e_6, \end{cases}$$

and

$$f_2: \wp \to 2^U,\; e \mapsto \begin{cases} \{C_2, C_3\}, & \text{if } e = e_1,\\ \{C_1, C_3\}, & \text{if } e = e_2,\\ \{C_2, C_3, C_4\}, & \text{if } e = e_3,\\ \{C_5\}, & \text{if } e = e_4,\\ \{C_1, C_5\}, & \text{if } e = e_5,\\ \{C_3, C_4, C_5\}, & \text{if } e = e_6, \end{cases} \qquad g_2: \aleph \to 2^U,\; \neg e \mapsto \begin{cases} \{C_4\}, & \text{if } \neg e = \neg e_1,\\ \{C_4, C_5\}, & \text{if } \neg e = \neg e_2,\\ \{C_1\}, & \text{if } \neg e = \neg e_3,\\ \{C_1, C_3, C_4\}, & \text{if } \neg e = \neg e_4,\\ \{C_2, C_3\}, & \text{if } \neg e = \neg e_5,\\ \{C_1, C_2\}, & \text{if } \neg e = \neg e_6. \end{cases}$$

• Step 3: The lower and upper MBSR-approximations of $X_j$ with respect to the BSSs $(f_q, g_q; \wp)$ $(q = 1, 2)$ can be calculated as follows:

$$\underline{MBS}_{B_1}(X_1) = (\{C_1, C_2, C_3\}, \{C_2, C_4, C_5\}), \quad \overline{MBS}_{B_1}(X_1) = (\{C_1, C_2, C_3, C_4, C_5\}, \{C_5\}),$$
$$\underline{MBS}_{B_1}(X_2) = (\{C_1, C_3, C_5\}, \{C_2, C_4\}), \quad \overline{MBS}_{B_1}(X_2) = (\{C_1, C_3, C_4, C_5\}, \{C_2, C_4\}),$$
$$\underline{MBS}_{B_1}(X_3) = (\{C_2, C_4, C_5\}, \{C_1, C_3\}), \quad \overline{MBS}_{B_1}(X_3) = (\{C_2, C_4, C_5\}, \{C_1, C_3\}).$$

Similarly,

$$\underline{MBS}_{B_2}(X_1) = (\{C_1, C_2, C_3\}, \{C_4, C_5\}), \quad \overline{MBS}_{B_2}(X_1) = (\{C_1, C_2, C_3, C_4\}, \{C_4, C_5\}),$$
$$\underline{MBS}_{B_2}(X_2) = (\{C_1, C_3, C_5\}, \{C_2, C_3, C_4, C_5\}), \quad \overline{MBS}_{B_2}(X_2) = (\{C_1, C_2, C_3, C_4, C_5\}, \{C_4\}),$$
$$\underline{MBS}_{B_2}(X_3) = (\{C_5\}, \{C_1, C_2, C_3\}), \quad \overline{MBS}_{B_2}(X_3) = (\{C_2, C_4, C_5\}, \{C_1\}).$$

• Step 4: According to Definition 5.1, lower and upper MBSR-approximation matrices can be calculated as follows:

$$[\underline{M}] = \begin{pmatrix} [(1,1,1,0,0),(0,\tfrac12,0,\tfrac12,\tfrac12)] & [(1,0,1,0,1),(0,\tfrac12,0,\tfrac12,0)] & [(0,1,0,1,1),(\tfrac12,0,\tfrac12,0,0)] \\ [(1,1,1,0,0),(0,0,0,\tfrac12,\tfrac12)] & [(1,0,1,0,1),(0,\tfrac12,\tfrac12,\tfrac12,\tfrac12)] & [(0,0,0,0,1),(\tfrac12,\tfrac12,\tfrac12,0,0)] \end{pmatrix},$$

$$[\overline{M}] = \begin{pmatrix} [(\tfrac12,\tfrac12,\tfrac12,\tfrac12,\tfrac12),(0,0,0,0,1)] & [(\tfrac12,0,\tfrac12,\tfrac12,\tfrac12),(0,1,0,1,0)] & [(0,\tfrac12,0,\tfrac12,\tfrac12),(1,0,1,0,0)] \\ [(\tfrac12,\tfrac12,\tfrac12,\tfrac12,0),(0,0,0,1,1)] & [(\tfrac12,\tfrac12,\tfrac12,\tfrac12,\tfrac12),(0,0,0,1,0)] & [(0,\tfrac12,0,\tfrac12,\tfrac12),(1,0,0,0,0)] \end{pmatrix}.$$

• Step 5: By Definition 5.2, $V_f$ and $V_g$ can be calculated as follows:

$Vf=(6,5.5,6,4,6.5),Vg=(3,3,2.5,5,3.5).$

• Step 6: According to Definition 5.3, we get:

$Vd=(3,2.5,4.5,-1,3).$

• Step 7: Since $\max_{1 \le i \le 5} \delta_i = \delta_3 = 4.5$, candidate $C_3$ is the most suitable for the senior faculty position. Accordingly, we obtain the following preference order of the five candidates:

$$C_3 \succ C_1 = C_5 \succ C_2 \succ C_4.$$

A graphical representation of the candidate preference order is shown in Figure 2.

### 6.1 Advantages of the Proposed Technique

The key advantages of the proposed technique over existing methods are summarized below.

• (i) The suggested technique considers positive and negative aspects of each alternative in the form of BSS. This hybrid model is more generalized and suitable for dealing with aggressive DM.

• (ii) Using the MBSR-approximations, this approach provides another way to obtain the group preference evaluation based on the individual preference evaluation for a considered MAGDM problem.

• (iii) This technique is also ideal because the DMs are liberated from any external restrictions and requirements in this approach.

• (iv) Our proposed technique effectively solves MAGDM problems when the weight information for the attribute is entirely unknown.

• (v) The suggested approach considers not only the opinions of DMs but also past experiences (primary evaluations) by MBSR-approximations in actual scenarios. Therefore, it is a more comprehensive approach for a better interpretation of available information and thus makes decisions using artificial intelligence.

• (vi) The proposed MAGDM approach is easy to understand and apply to real-life DM issues.

• (vii) If we compare our proposed technique with the methods presented in [10,12,17,44–48], we see that these methods are incapable of detecting bipolarity in the DM process, which is a key element of human thinking and behavior.

### 6.2 Comparison with Some Other Approaches

In this subsection, we re-solve the uncertainty problem given in Example 5.4 using the algorithm of Shabir and Gul [30] and compare the results with the DM technique proposed in this article. Applying that algorithm to Example 5.4 yields the following preference order of the candidates:

$C1=C2=C3=C4=C5.$

In other words, the preference order of the candidates could not be detected.

Now, applying the algorithm given in Karaaslan and Çağman [29] to Example 5.4, we obtain the following preference order of the candidates:

$C1=C2=C3≻C5≻C4.$

From Table 3, we can see that our proposed algorithm identifies the most suitable candidate for the senior faculty position and achieves a clear distinction among the candidates. Considering the above advantages, we recommend the approach given in this paper and suggest applying it in the DM process for uncertainty problems.

### 7. Conclusion

The RS theory is emerging as a powerful theory and has diverse applications in many areas. On the other hand, the BSS is a suitable mathematical model for handling uncertainty along with bipolarity, that is, the positivity and negativity for the data. In this study, we developed an alternative strategy for the roughness of BSS called “MBSRS,” which eliminates various limitations of BSRS introduced by Karaaslan and Çağman [29].

This study makes the following main contributions.

• We begin by defining some novel types of BSRS approximation operators for a given BSS.

• The fundamental structural properties of the newly proposed approximation operators have also been thoroughly investigated, with various examples.

• In addition, certain uncertainty measures related to MBSRS are also offered.

• Meanwhile, based on the MBSRS, we offer a generic framework for the MAGDM method, which refines the primary evaluations of the entire group of experts and enables us to select the optimal object in a more reliable manner.

• A DM algorithm is then presented with two key benefits. Firstly, it manages the bipolarity of the data, accompanied by uncertainty. Secondly, it considers the views of any (finite) number of experts on any (finite) number of alternatives.

• Moreover, a practical application of the proposed MAGDM approach demonstrates the credibility of this methodology.

• Finally, a comparison of the proposed model with a few existing techniques is carried out, demonstrating that the MBSRS approach outperforms the traditional approaches and that this modification can be used to make correct decisions.

The current work has its limitations, and several meaningful research issues require further in-depth exploration. The following research directions will be the focus of our future studies.

• Researchers may examine the algebraic structures of MBSRS based on the characterized ideas and procedures in this study.

• We would like to examine the topological properties and similarity measures of MBSRS to establish a solid foundation for future research investigations and to improve working approaches.

• The notions of the MBSRS can be generalized to multi-granulation MBSRS.

• Furthermore, we will focus on applications of the suggested approach to a broader range of selection problems, such as TOPSIS, VIKOR, ELECTRE, AHP, COPRAS, and PROMETHEE.

• We might also look at various hybridizations of the suggested technique to improve result accuracy, and apply these procedures to real-world problems with large data sets, thereby demonstrating the utility of the suggested strategy.

• The MBSRS can be extended in a fuzzy environment, and effective DM techniques might be developed.

### Fig 1.

Figure 1. Summary of the proposed algorithm for MAGDM.

The International Journal of Fuzzy Logic and Intelligent Systems 2022; 22: 303-324https://doi.org/10.5391/IJFIS.2022.22.3.303

### Fig 2.

Figure 2. Preference order of the candidates.


Tabular representation of $(f, g : \wp)$.

(a)

| $f$ | $u_1$ | $u_2$ | $u_3$ | $u_4$ | $u_5$ | $u_6$ |
|---|---|---|---|---|---|---|
| $e_1$ | 1 | 0 | 0 | 0 | 0 | 1 |
| $e_2$ | 0 | 0 | 1 | 0 | 0 | 0 |
| $e_3$ | 0 | 0 | 0 | 0 | 0 | 0 |
| $e_4$ | 1 | 1 | 0 | 0 | 1 | 0 |

(b)

| $g$ | $u_1$ | $u_2$ | $u_3$ | $u_4$ | $u_5$ | $u_6$ |
|---|---|---|---|---|---|---|
| $\neg e_1$ | 0 | 0 | 0 | 0 | 1 | 0 |
| $\neg e_2$ | 0 | 0 | 0 | 0 | 0 | 0 |
| $\neg e_3$ | 0 | 1 | 1 | 0 | 0 | 1 |
| $\neg e_4$ | 0 | 0 | 1 | 0 | 0 | 1 |

Comparison between BSR-approximations and MBSR-approximations.

| Property | BSR-approximations | MBSR-approximations |
|---|---|---|
| $X \subseteq \overline{SU}_{\beta^+}(X)$ | need not hold | holds |
| $\overline{SU}_{\beta^+}(U) = U$ | need not hold | holds |
| $\overline{SU}_{\beta^+}(\emptyset) = \emptyset$ | need not hold | holds |
| $\overline{SU}_{\beta^+}(X \cup Y) = \overline{SU}_{\beta^+}(X) \cup \overline{SU}_{\beta^+}(Y)$ | need not hold | holds |
| $\overline{SU}_{\beta^-}(X \cap Y) = \overline{SU}_{\beta^-}(X) \cap \overline{SU}_{\beta^-}(Y)$ | need not hold | holds |

Results obtained using various approaches for Example 5.4.

| Approach | Preference order of the candidates |
|---|---|
| Shabir and Gul [30] | Cannot handle ($C_1 = C_2 = C_3 = C_4 = C_5$) |
| Karaaslan and Çağman [29] | $C_1 = C_2 = C_3 \succ C_5 \succ C_4$ |
| Our proposed method | $C_3 \succ C_1 = C_5 \succ C_2 \succ C_4$ |

### References

1. Zadeh, LA (1965). Fuzzy sets. Information and Control. 8, 338-353. https://doi.org/10.1016/S0019-9958(65)90241-X
2. Pawlak, Z (1982). Rough sets. International Journal of Computer & Information Sciences. 11, 341-356. https://doi.org/10.1007/BF01001956
3. Molodtsov, D (1999). Soft set theory: first results. Computers & Mathematics with Applications. 37, 19-31. https://doi.org/10.1016/S0898-1221(99)00056-5
4. Maji, PK, Biswas, R, and Roy, AR (2003). Soft set theory. Computers and Mathematics with Applications. 45, 555-562. https://doi.org/10.1016/S0898-1221(03)00016-6
5. Ali, MI, Feng, F, Liu, X, Min, WK, and Shabir, M (2009). On some new operations in soft set theory. Computers & Mathematics with Applications. 57, 1547-1553. https://doi.org/10.1016/j.camwa.2008.11.009
6. Alshami, T, and El-Shafei, M (2020). T-soft equality relation. Turkish Journal of Mathematics. 44, 1427-1441. https://doi.org/10.3906/mat-2005-117
7. Maji, PK, Biswas, R, and Roy, AR (2001). Fuzzy soft sets. Journal of Fuzzy Mathematics. 9, 589-602.
8. Feng, F, Liu, X, Leoreanu-Fotea, V, and Jun, YB (2011). Soft sets and soft rough sets. Information Sciences. 181, 1125-1137. https://doi.org/10.1016/j.ins.2010.11.004
9. Shabir, M, Ali, MI, and Shaheen, T (2013). Another approach to soft rough sets. Knowledge-Based Systems. 40, 72-80. https://doi.org/10.1016/j.knosys.2012.11.012
10. Greco, S, Matarazzo, B, and Slowinski, R (1999). The use of rough sets and fuzzy sets in MCDM. Multicriteria Decision Making. Boston, MA: Springer, pp. 397-455 https://doi.org/10.1007/978-1-4615-5025-9_14
11. Greco, S, Matarazzo, B, and Slowinski, R (1999). Rough approximation of a preference relation by dominance relations. European Journal of Operational Research. 117, 63-83. https://doi.org/10.1016/S0377-2217(98)00127-1
12. Greco, S, Matarazzo, B, and Slowinski, R (2001). Rough sets theory for multicriteria decision analysis. European Journal of Operational Research. 129, 1-47. https://doi.org/10.1016/S0377-2217(00)00167-3
13. Greco, S, Matarazzo, B, and Slowinski, R (2002). Rough approximation by dominance relations. International Journal of Intelligent Systems. 17, 153-171. https://doi.org/10.1002/int.10014
14. Slowinski, R, Greco, S, and Matarazzo, B (2002). Rough set analysis of preference-ordered data. Rough Sets and Current Trends in Computing. Heidelberg, Germany: Springer, pp. 44-59 https://doi.org/10.1007/3-540-45813-1_6
15. Du, WS, and Hu, BQ (2017). Dominance-based rough fuzzy set approach and its application to rule induction. European Journal of Operational Research. 261, 690-703. https://doi.org/10.1016/j.ejor.2016.12.004
16. Shaheen, T, Mian, B, Shabir, M, and Feng, F (2019). A novel approach to decision analysis using dominance-based soft rough sets. International Journal of Fuzzy Systems. 21, 954-962. https://doi.org/10.1007/s40815-019-00612-2
17. Feng, F (2011). Soft rough sets applied to multicriteria group decision making. Annals of Fuzzy Mathematics and Informatics. 2, 69-80.
18. Ayub, S, Shabir, M, Riaz, M, Mahmood, W, Bozanic, D, and Marinkovic, D (2022). Linear diophantine fuzzy rough sets: a new rough set approach with decision making. Symmetry. 14. article no 525
19. Riaz, M, Hashmi, MR, Kalsoom, H, Pamucar, D, and Chu, YM (2020). Linear Diophantine fuzzy soft rough sets for the selection of sustainable material handling equipment. Symmetry. 12. article no 1215
20. Hashmi, MR, Tehrim, ST, Riaz, M, Pamucar, D, and Cirovic, G (2021). Spherical linear diophantine fuzzy soft rough sets with multi-criteria decision making. Axioms. 10. article no 185
21. Akram, M, and Ali, G (2020). Hybrid models for decisionmaking based on rough Pythagorean fuzzy bipolar soft information. Granular Computing. 5, 1-15. https://doi.org/10.1007/s41066-018-0132-3
22. Shabir, M, and Naz, M. (2013) . On bipolar soft sets. [Online]. Available: https://arxiv.org/abs/1303.1344
23. Karaaslan, F, and Karatas, S (2015). A new approach to bipolar soft sets and its applications. Discrete Mathematics, Algorithms and Applications. 7. article no 1550054
24. Karaaslan, F, Ahmad, I, and Ullah, A (2016). Bipolar soft groups. Journal of Intelligent & Fuzzy Systems. 31, 651-662. https://doi.org/10.3233/IFS-162178
25. Naz, M, and Shabir, M (2014). On fuzzy bipolar soft sets, their algebraic structures and applications. Journal of Intelligent & Fuzzy Systems. 26, 1645-1656. https://doi.org/10.3233/IFS-130844
26. Ozturk, TY (2018). On bipolar soft topological spaces. Journal of New Theory. 2018, 64-75.
27. Abdullah, S, Aslam, M, and Ullah, K (2014). Bipolar fuzzy soft sets and its applications in decision making problem. Journal of Intelligent & Fuzzy Systems. 27, 729-742. https://doi.org/10.3233/IFS-131031
28. Alkouri, AUM, Massa’deh, MO, and Ali, M (2020). On bipolar complex fuzzy sets and its application. Journal of Intelligent & Fuzzy Systems. 39, 383-397. https://doi.org/10.3233/JIFS-191350
29. Karaaslan, F, and Cagman, N (2018). Bipolar soft rough sets and their applications in decision making. Afrika Matematika. 29, 823-839. https://doi.org/10.1007/s13370-018-0580-6
30. Shabir, M, and Gul, M (2020). Modified rough bipolar soft sets. Journal of Intelligent & Fuzzy Systems. 39, 4259-4283. https://doi.org/10.3233/JIFS-200317
31. Gul, R, Shabir, M, Naz, M, and Aslam, M (2021). A novel approach toward roughness of bipolar soft sets and their applications in MCGDM. IEEE Access. 9, 135102-135120. https://doi.org/10.1109/ACCESS.2021.3116097
32. Mahmood, T, and Ali, Z (2021). A novel complex fuzzy N-soft sets and their decision-making algorithm. Complex & Intelligent Systems. 7, 2255-2280. https://doi.org/10.1007/s40747-021-00373-2
33. Malik, N, and Shabir, M (2019). Rough fuzzy bipolar soft sets and application in decision-making problems. Soft Computing. 23, 1603-1614. https://doi.org/10.1007/s00500-017-2883-1
34. Malik, N, and Shabir, M (2019). A consensus model based on rough bipolar fuzzy approximations. Journal of Intelligent & Fuzzy Systems. 36, 3461-3470. https://doi.org/10.3233/JIFS-181223
35. Al-Shami, TM (2021). Bipolar soft sets: relations between them and ordinary points and their applications. Complexity. 2021. article no 6621854
36. Riaz, M, and Tehrim, ST (2019). Bipolar fuzzy soft mappings with application to bipolar disorders. International Journal of Biomathematics. 12. article no 1950080
37. Kamaci, H, and Petchimuthu, S (2020). Bipolar N-soft set theory with applications. Soft Computing. 24, 16727-16743. https://doi.org/10.1007/s00500-020-04968-8
38. Gul, R, and Shabir, M (2020). Roughness of a set by (α, β)-indiscernibility of Bipolar fuzzy relation. Computational and Applied Mathematics. 39. article no 160
39. Al-shami, TM (2021). An improvement of rough sets’ accuracy measure using containment neighborhoods with a medical application. Information Sciences. 569, 110-124. https://doi.org/10.1016/j.ins.2021.04.016
40. Al-shami, TM, and Ciucci, D (2022). Subset neighborhood rough sets. Knowledge-Based Systems. 237. article no 107868
41. Al-shami, TM (2021). Improvement of the approximations and accuracy measure of a rough set using somewhere dense sets. Soft Computing. 25, 14449-14460. https://doi.org/10.1007/s00500-021-06358-0
42. Gediga, G, and Duntsch, I (2001). Rough approximation quality revisited. Artificial Intelligence. 132, 219-234. https://doi.org/10.1016/S0004-3702(01)00147-3
43. Yao, YY (2010). Notes on rough set approximations and associated measures. Journal of Zhejiang Ocean University (Natural Science). 29, 399-410.
44. Cagman, N, Enginoglu, S, and Citak, F (2011). Fuzzy soft set theory and its applications. Iranian Journal of Fuzzy Systems. 8, 137-147.
45. Cagman, N, and Enginoglu, S (2010). Soft matrix theory and its decision making. Computers & Mathematics with Applications. 59, 3308-3314. https://doi.org/10.1016/j.camwa.2010.03.015
46. Celik, Y, and Yamak, S (2013). Fuzzy soft set theory applied to medical diagnosis using fuzzy arithmetic operations. Journal of Inequalities and Applications. 2013. article no 82
47. Gogoi, K, Dutta, AK, and Chutia, C (2014). Application of fuzzy soft set theory in day to day problems. International Journal of Computer Applications. 85, 27-31.
48. Herawan, T (2012). Soft set-based decision making for patients suspected influenza-like illness. International Journal of Modern Physics: Conference Series. 9, 259-270. https://doi.org/10.1142/S2010194512005302