Original Article

International Journal of Fuzzy Logic and Intelligent Systems 2022; 22(2): 202-212

Published online June 25, 2022

https://doi.org/10.5391/IJFIS.2022.22.2.202

© The Korean Institute of Intelligent Systems

Analysis of the Three-Way Decision Rules in Online Learning: Taking a Calculus Course as an Example

Songlin Yang1,2, Ting Xu1, and Yongcun Shao1

1Wenzheng College of Soochow University, Suzhou, China
2Soochow College, Soochow University, Suzhou, China

Correspondence to :
Songlin Yang (songliny@suda.edu.cn)

Received: January 6, 2021; Revised: July 26, 2021; Accepted: April 27, 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

To investigate the credibility of online examinations, we randomly chose 30 engineering students in Wenzheng College of Soochow University as a sample and established an information system S. According to pedagogy theory, students’ scores are related to their intelligence levels, course basis, learning methods, and learning time. Therefore, in the information system S, “score of early semester,” “average score of class quizzes,” “completion ratio of watching videos,” and “completion grade of homework” were taken as the condition attributes, and “score of final exam” was taken as the decision attribute. We applied the three-way decision rules theory to provide positive, negative, and boundary decision rules for this information system S. Furthermore, we obtained the confidence of the decision rules. The results of this study validate the credibility of online examinations and have certain guiding significance for online teaching and learning.

Keywords: Three-way decision rules, Information systems, Condition attributes, Decision attributes, Confidence, Online learning, Online teaching

The coronavirus disease 2019 (COVID-19) pandemic has disrupted the original offline teaching mode in colleges and universities. Under the background of “suspending classes without stopping teaching, suspending classes without stopping learning,” we developed an online teaching mode of calculus courses using the Superstar network teaching platform and successfully completed the online teaching process, including lesson preparation, teaching, and examinations. We found some problems during online teaching, for example, “lesson-brushing” [1,2]. We know that some students just wanted to complete their study tasks; they opened the course videos and did other things after signing in. Some students used the so-called “lesson brush artifact” to complete their study tasks. We also found problems in the online examinations; for example, as there are no monitors, some students may copy the answers or cheat in other ways. Therefore, it is important to evaluate online learning and student satisfaction. In [3–5], the authors evaluated the impact of shifting from traditional learning to online learning during the COVID-19 pandemic on undergraduate students and examined the positive and negative aspects of online learning from the students’ perspectives. Thom et al. [6] examined the lessons learned in online learning during the COVID-19 pandemic. However, it is also necessary to evaluate the credibility of online examinations. In this study, we investigated the credibility of the online examination for the calculus course we had developed. According to pedagogy theory [7], students’ scores are related to the following factors: intelligence level, course basis, psychological state, learning method, and learning time. In particular, we focused on the following four aspects: students’ proficiency in the course, appropriate learning methods, degree of class attendance, and completion of homework.

To investigate the credibility of the online examination, we randomly selected 30 engineering students at Wenzheng College of Soochow University as a sample and established an information system S = (U, C ∪ D, f). Online learning data for these students are presented in Table 1. We used rough-set theory to analyze Table 1 and extract useful information hidden in these data. Rough-set theory, which is a logic-mathematical method proposed by Pawlak [8,9], is an effective tool for extracting and analyzing useful information hidden in data. In recent years, rough-set theory has been widely implemented in many fields of natural and social sciences [10–14]. In 2009, Yao [15,16] introduced a third decision, the boundary decision, based on a two-way decision and proposed the three-way decision theory. The three-way decision theory provides a rational interpretation of the three regions in rough sets. Corresponding to the positive, negative, and boundary regions in rough sets, this theory shows the regions of acceptance, rejection, and non-commitment in a ternary classification [17]. In this study, we use the three-way decision rules to analyze and extract useful information hidden in Table 1.

Remark 1.1

“Score of Early Semester” in Table 1 refers to the students’ calculus (1) course scores in the last semester. “Average Score of Class Quizzes” is a weighted average of the students’ scores from six class quizzes and two chapter quizzes during this semester; each class quiz consists of three or four problems to be completed in 15 to 20 minutes. The student signs in with the student number and name within the allotted time and uploads the completed quiz to the teacher. These quizzes can appropriately reflect the students’ learning situation. “Completion Ratio of Watching Videos” is the average completion ratio of students watching the teaching videos in this semester; “Completion Grade of Homework” refers to the students’ completion grade of homework in this semester. “Score of Final Exam” is the student’s final exam score in this semester.

Combining Table 1 with the concerns mentioned earlier, the following question arises.

Question 1.2

Does Table 1 accurately reflect that the “Score of Final Exam” is derived from the “Score of Early Semester,” “Average Score of Class Quizzes,” “Completion Ratio of Watching Videos,” and “Completion Grade of Homework”?

To discuss Question 1.2 theoretically, useful information hidden in Table 1 must be extracted. This leads us to establish an information system S = (U, C ∪ D, f) based on Table 1, where “Score of Early Semester,” “Average Score of Class Quizzes,” “Completion Ratio of Watching Videos,” and “Completion Grade of Homework” are taken as the condition attributes, and “Score of Final Exam” is taken as the decision attribute. Thus, the authenticity of the students’ scores described in Question 1.2 can be converted to the decision rule question described in the following question.

Question 1.3

Let S = (U, C ∪ D, f) be an information system established based on Table 1. In S = (U, C ∪ D, f), is the decision attribute derived from the condition attributes? Furthermore, how can we characterize the confidence that the decision attribute is derived from the condition attributes?

Our discussion centers on Questions 1.2 and 1.3. We apply the three-way decision rules to the information system S = (U, C ∪ D, f) and use confidence to characterize the three-way decision rules, which answers Question 1.3. Based on these results, we check whether “Score of Final Exam” is derived from “Score of Early Semester,” “Average Score of Class Quizzes,” “Completion Ratio of Watching Videos,” and “Completion Grade of Homework” in Table 1. The results of this study provide a new theoretical analysis method for online teaching and learning information for calculus, which will be helpful for teachers to develop other online courses accurately.

The basic concepts of rough-set theory and decision rules can be found in [8,13,14,16,18,19].

Notation 1.4

(1) For a finite set B, |B| denotes the cardinality of B.

(2) For a collection ℱ1, ℱ2, ⋯, ℱk of families of sets, ∧{ℱi : i = 1, 2, ⋯, k} denotes the family of sets

∧{ℱi : i = 1, 2, ⋯, k} = {∩{Fi : i = 1, 2, ⋯, k} : Fi ∈ ℱi, i = 1, 2, ⋯, k}.
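As a quick aid to intuition, Notation 1.4(2) can be sketched in Python. The function name meet, the toy families F1 and F2, and the choice to drop empty intersections (as is usual when the ℱi are partitions) are our own illustration, not part of the paper:

```python
from itertools import product

def meet(*families):
    """All intersections F_1 ∩ ... ∩ F_k with one member set chosen from
    each family (Notation 1.4(2)); empty intersections are dropped."""
    blocks = []
    for choice in product(*families):
        inter = set.intersection(*map(set, choice))
        if inter and inter not in blocks:
            blocks.append(inter)
    return blocks

# Two partitions of {1,...,6}: by parity and by "value <= 3".
F1 = [{1, 3, 5}, {2, 4, 6}]
F2 = [{1, 2, 3}, {4, 5, 6}]
print(meet(F1, F2))  # [{1, 3}, {5}, {2}, {4, 6}]
```

With partitions as input, the result is exactly their common refinement, which is how ∧ is used in Notation 3.2 below.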

Definition 1.5

S = (U, C ∪ D, f) is called an information system, where

(1) U, a nonempty finite set, is called the universe of discourse.

(2) C ∪ D is a finite set of attributes, where C and D are disjoint nonempty sets of condition attributes and decision attributes, respectively.

(3) f is an information function defined on U × (C ∪ D). For each u ∈ U and x ∈ C, f(u, x) is called a condition attribute value. For each u ∈ U and x ∈ D, f(u, x) is called a decision attribute value.

To apply the three-way decision rules to the theoretical analysis of the online learning data of the calculus (2) course, we must establish an information system S = (U, C ∪ D, f) based on Table 1.

(1) Let U = {u1, u2, ⋯, u30}, where u1, u2, ⋯, u30 denote the 30 students.

(2) Let C = {c1, c2, c3, c4}, where c1, c2, c3, c4 denote the “Score of Early Semester”, “Average Score of Class Quizzes”, “Completion Ratio of Watching Videos”, and “Completion Grade of Homework”, respectively.

(3) Let D = {d}, where d denotes “Score of Final Exam”.

According to the general rule of statistical grouping (e.g., refer to [20]), we divide the values of each attribute into four groups. For the condition attributes c1, c2, and c3, the group interval is

h = ((max + 0.5) − (min − 0.5))/4,

where max and min are the maximum and minimum values, respectively, of the grouped data.
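The grouping rule can be sketched in Python; the helper name group_intervals and the two-element sample below are our own illustration of the formula above:

```python
def group_intervals(values, k=4):
    """Equidistant grouping: widen the observed range by 0.5 on each side
    and split it into k groups of width h = ((max+0.5) - (min-0.5)) / k."""
    lo, hi = min(values) - 0.5, max(values) + 0.5
    h = (hi - lo) / k
    return h, [(lo + i * h, lo + (i + 1) * h) for i in range(k)]

# "Score of Early Semester" (c1) ranges over 42..95, as in item (i).
h, cuts = group_intervals([42, 95])
print(h)     # 13.5
print(cuts)  # [(41.5, 55.0), (55.0, 68.5), (68.5, 82.0), (82.0, 95.5)]
```

Each group is left-closed and right-open except the first, matching the interval lists in (i)-(iii).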

(i) For the condition attribute c1, the group interval h_{c1} = ((95 + 0.5) − (42 − 0.5))/4 = 13.5. According to the equidistant grouping method, the open interval (41.5, 95.5) can be divided into four intervals: (41.5, 55.0), [55.0, 68.5), [68.5, 82.0), and [82.0, 95.5). Let c_1^1 indicate “Score of Early Semester” between 82.0 and 95.5, let c_2^1 indicate “Score of Early Semester” between 68.5 and 82.0, let c_3^1 indicate “Score of Early Semester” between 55.0 and 68.5, and let c_4^1 indicate “Score of Early Semester” between 41.5 and 55.0.

(ii) For the condition attribute c2, the group interval h_{c2} = ((80 + 0.5) − (25 − 0.5))/4 = 14.0. According to the equidistant grouping method, the open interval (24.5, 80.5) can be divided into four intervals: (24.5, 38.5), [38.5, 52.5), [52.5, 66.5), and [66.5, 80.5). Let c_1^2 indicate “Average Score of Class Quizzes” between 66.5 and 80.5, let c_2^2 indicate “Average Score of Class Quizzes” between 52.5 and 66.5, let c_3^2 indicate “Average Score of Class Quizzes” between 38.5 and 52.5, and let c_4^2 indicate “Average Score of Class Quizzes” between 24.5 and 38.5.

(iii) For the condition attribute c3, the group interval h_{c3} = ((99 + 0.5) − (21 − 0.5))/4 ≈ 19.8. According to the equidistant grouping method, the open interval (20.5, 99.5) can be divided into four intervals: (20.5, 40.3), [40.3, 60.1), [60.1, 79.9), and [79.9, 99.5). Let c_1^3 indicate “Completion Ratio of Watching Videos” between 79.9 and 99.5, let c_2^3 indicate “Completion Ratio of Watching Videos” between 60.1 and 79.9, let c_3^3 indicate “Completion Ratio of Watching Videos” between 40.3 and 60.1, and let c_4^3 indicate “Completion Ratio of Watching Videos” between 20.5 and 40.3.

(iv) For the condition attribute c4, let c_1^4 indicate “Completion Grade of Homework” A, let c_2^4 indicate “Completion Grade of Homework” B, let c_3^4 indicate “Completion Grade of Homework” C, and let c_4^4 indicate “Completion Grade of Homework” D.

(v) For the decision attribute d, the group interval h_d = ((91 + 0.5) − (41 − 0.5))/4 ≈ 12.8. According to the equidistant grouping method, the open interval (40.5, 91.5) can be divided into four intervals: I_{d1} = [78.9, 91.5), I_{d2} = [66.1, 78.9), I_{d3} = [53.3, 66.1), and I_{d4} = (40.5, 53.3). Let d1 indicate “Score of Final Exam” between 78.9 and 91.5, let d2 indicate “Score of Final Exam” between 66.1 and 78.9, let d3 indicate “Score of Final Exam” between 53.3 and 66.1, and let d4 indicate “Score of Final Exam” between 40.5 and 53.3.

We now define the information function f: for each u ∈ U, if the “Score of Final Exam” of u lies in I_{dj}, we set f(u, d) = dj (j = 1, 2, 3, 4). Similarly, for each u ∈ U and each condition attribute ci, we set f(u, ci) = c_j^i according to the group that contains the attribute value of u (i, j = 1, 2, 3, 4). Consequently, the information function f defined on U × (C ∪ D) is constructed.
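The assignment of group codes can be sketched in Python; the helper name encode, the cut-point list, and the textual code labels are our own illustration based on the grouping in (i):

```python
import bisect

def encode(value, cuts, codes):
    """Assign a raw attribute value its group code by locating the
    left-closed interval that contains it (cut points from (i)-(v))."""
    return codes[bisect.bisect_right(cuts, value)]

# Cut points and codes for c1; the codes run from c_4^1 for the lowest
# interval up to c_1^1 for the highest, as in (i).
c1_cuts = [55.0, 68.5, 82.0]
c1_codes = ["c_4^1", "c_3^1", "c_2^1", "c_1^1"]
print(encode(56, c1_cuts, c1_codes))  # c_3^1 (u1's early-semester score in Table 1)
```

Because bisect_right is used, a value exactly equal to a cut point (e.g., 55) falls into the higher group, matching the left-closed intervals above.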

From (i)-(v), an information system S = (U, C ∪ D, f) based on Table 1 is established. Furthermore, S = (U, C ∪ D, f) can be expressed as a decision table (see Table 2), whose columns are labeled by the elements of C ∪ D and whose rows are labeled by the elements of U. For each u ∈ U and x ∈ C ∪ D, f(u, x) lies at the intersection of the row labeled u and the column labeled x.

Now, we provide some simple results that are useful for our discussion.

Remark 3.1

An information system S = (U, C ∪ D, f) can be expressed as a data table, called a decision table, whose columns are labeled by the elements of C ∪ D and whose rows are labeled by the elements of U; f(u, x) lies at the intersection of the row labeled u and the column labeled x.

Notation 3.2

Let S = (U, C ∪ D, f) be an information system.

(1) For x ∈ C ∪ D, we define the equivalence relation ~x on U as follows: ui ~x uj ⇐⇒ f(ui, x) = f(uj, x). U/x denotes the family comprising all equivalence classes with respect to ~x.

(2) ∧{U/c : cC} is a partition of U denoted as U/C. The equivalence relation induced by U/C is denoted as C.
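Notation 3.2 can be sketched in Python; the function name partition and the toy three-object table are our own illustration, not the paper's data:

```python
from collections import defaultdict

def partition(universe, table, attrs):
    """U/C for C = attrs: objects that agree on every attribute in attrs
    fall into the same equivalence class (Notation 3.2); with a single
    attribute this is U/x."""
    classes = defaultdict(set)
    for u in universe:
        classes[tuple(table[u][a] for a in attrs)].add(u)
    return list(classes.values())

# A toy decision-table fragment (attribute values are hypothetical).
table = {
    "u1": {"c1": 3, "c2": 4},
    "u2": {"c1": 2, "c2": 3},
    "u3": {"c1": 3, "c2": 4},
}
U = list(table)
print(partition(U, table, ["c1", "c2"]))  # two classes: {u1, u3} and {u2}
```

Grouping by the tuple of attribute values computes the meet ∧{U/c : c ∈ C} in a single pass, since two objects share a block exactly when they agree on every attribute.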

Definition 3.3

Let S = (U, C ∪ D, f) be an information system and u ∈ U. An equivalence class containing u with respect to the equivalence relation C is denoted by C(u) and is called a condition attribute granule.

Definition 3.4

Let S = (U, C ∪ D, f) be an information system and X ⊆ U.

(1) apr_(X) = {u : u ∈ U and C(u) ⊆ X} is the lower approximation of X (with respect to C).

(2) apr̄(X) = {u : u ∈ U and C(u) ∩ X ≠ ∅} is the upper approximation of X (with respect to C).

Lemma 3.5

Let S = (U, C ∪ D, f) be an information system and X ⊆ U. Then, the following holds:

(1) apr_(X) = ∪{C(u) : u ∈ U and C(u) ⊆ X}.

(2) apr̄(X) = ∪{C(u) : u ∈ U and C(u) ∩ X ≠ ∅}.
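Lemma 3.5 suggests a direct computation; the function name approximations and the toy granules are our own illustration:

```python
def approximations(granules, X):
    """Lower and upper approximations of X built from the condition
    granules C(u), following Lemma 3.5: the lower approximation is the
    union of granules contained in X, the upper approximation the union
    of granules that meet X."""
    lower, upper = set(), set()
    for g in granules:
        if g <= X:
            lower |= g
        if g & X:
            upper |= g
    return lower, upper

# Toy granules of a 5-element universe and a target set X (hypothetical).
granules = [{1, 2}, {3}, {4, 5}]
X = {1, 2, 3, 4}
print(approximations(granules, X))  # ({1, 2, 3}, {1, 2, 3, 4, 5})
```

Note that {4, 5} meets X without being contained in it, so it enters the upper approximation only; this gap is exactly the boundary region of Definition 3.6.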

Definition 3.6

Let S = (U, C ∪ D, f) be an information system and X ⊆ U.

(1) POS(X) = apr_(X) is the positive region of X.

(2) NEG(X) = U − apr̄(X) is the negative region of X.

(3) BND(X) = apr̄(X) − apr_(X) is the boundary region of X.
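Definition 3.6 can be sketched the same way; the function name regions and the toy data are our own illustration:

```python
def regions(granules, universe, X):
    """POS, NEG, and BND of X (Definition 3.6), computed directly from
    the lower and upper approximations of X."""
    lower, upper = set(), set()
    for g in granules:
        if g <= X:
            lower |= g
        if g & X:
            upper |= g
    return lower, universe - upper, upper - lower

# Toy data: granules of U = {1,...,5} and a target set X (hypothetical).
granules = [{1, 2}, {3}, {4, 5}]
U = {1, 2, 3, 4, 5}
pos, neg, bnd = regions(granules, U, {1, 2, 4})
print(pos, neg, bnd)  # {1, 2} {3} {4, 5}
```

The three regions always partition U (up to empty parts), which is what makes the ternary accept/reject/non-commit classification of [17] possible.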

Definition 3.7

Let S = (U, C ∪ D, f) be an information system, u ∈ U and X ∈ U/D.

(1) If a rule allows us to accept u as a member of X, this rule is called a positive decision rule, denoted Des(C(u)) →_P Des(X).

(2) If a rule allows us to reject u as a member of X, this rule is called a negative decision rule, denoted Des(C(u)) →_N Des(X).

(3) If a rule leaves it uncertain whether to accept u as a member of X, this rule is called a boundary decision rule, denoted Des(C(u)) →_B Des(X).

Here, the positive, negative, and boundary decision rules are called the three-way decision rules.

In an information system S = (U, C ∪ D, f), the positive, negative, and boundary regions of a decision class X ∈ U/D can be constructed.

Lemma 3.8

Let S = (U, C ∪ D, f) be an information system, u ∈ U and X ∈ U/D. Then, the following holds:

(1) If C(u) ⊆ POS(X), then Des(C(u)) →_P Des(X).

(2) If C(u) ⊆ NEG(X), then Des(C(u)) →_N Des(X).

(3) If C(u) ⊆ BND(X), then Des(C(u)) →_B Des(X).

We provide an easier method to derive the positive, negative, and boundary decision rules.

Lemma 3.9

Let S = (U, C ∪ D, f) be an information system, u ∈ U and X ∈ U/D. Then, the following holds:

(1) u ∈ POS(X) if and only if C(u) ⊆ POS(X).

(2) u ∈ NEG(X) if and only if C(u) ⊆ NEG(X).

(3) u ∈ BND(X) if and only if C(u) ⊆ BND(X).

The following theorem can be obtained from Lemmas 3.8 and 3.9.

Theorem 3.10

Let S = (U, C ∪ D, f) be an information system, u ∈ U and X ∈ U/D. Then, the following holds:

(1) If u ∈ POS(X), then Des(C(u)) →_P Des(X).

(2) If u ∈ BND(X), then Des(C(u)) →_B Des(X).

(3) If u ∈ NEG(X), then Des(C(u)) →_N Des(X).

The confidence of the three-way decision rules on an information system was introduced by Yao and his colleagues [12,15].

Definition 3.11

Let S = (U, C ∪ D, f) be an information system, u ∈ U, X ∈ U/D and Λ ∈ {P, B, N}. Put

conf(Des(C(u)) →_Λ Des(X)) = |C(u) ∩ X|/|C(u)|.

Then, conf(Des(C(u)) →_Λ Des(X)) is the confidence of the decision rule Des(C(u)) →_Λ Des(X).
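Definition 3.11 can be sketched in Python; exact fractions keep the confidences in the same form as Table 4. The function name confidence and the toy sets are our own illustration:

```python
from fractions import Fraction

def confidence(granule, X):
    """conf(Des(C(u)) ->_Λ Des(X)) = |C(u) ∩ X| / |C(u)| (Definition 3.11)."""
    return Fraction(len(granule & X), len(granule))

# A toy granule C(u) and decision class X (hypothetical names).
C_u = {"a", "b", "c"}
X = {"a", "b", "d"}
print(confidence(C_u, X))  # 2/3
```

Using Fraction rather than floating point avoids rounding, so the three cases of Proposition 3.12 (conf = 1, conf = 0, or strictly in between) can be tested exactly.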

We have the following proposition:

Proposition 3.12

Let S = (U, C ∪ D, f) be an information system, u ∈ U, Λ ∈ {P, B, N} and X ∈ U/D. Then, the following holds:

(1) conf(Des(C(u)) →_Λ Des(X)) = 1 if and only if Des(C(u)) →_P Des(X).

(2) conf(Des(C(u)) →_Λ Des(X)) = 0 if and only if Des(C(u)) →_N Des(X).

(3) 0 < conf(Des(C(u)) →_Λ Des(X)) < 1 if and only if Des(C(u)) →_B Des(X).

In the following sections, the information system S = (U, C ∪ D, f) is the one expressed by Table 2.

In this section, we discuss three-way decision rules for S = (U, C ∪ D, f). First, we provide some related partitions of U obtained from Table 2.

Proposition 4.1

The following partitions of U hold for S = (U, C ∪ D, f).

(1) U/C = {{u3}, {u4}, {u6}, {u7}, {u11}, {u13}, {u14}, {u16}, {u17}, {u18}, {u19}, {u20}, {u21}, {u23}, {u1, u24}, {u2, u9}, {u12, u15}, {u5, u25, u29}, {u8, u10, u28}, {u22, u26, u27, u30}}.

(2) U/d = {{u2, u3, u5, u7, u10, u13, u18, u22, u25, u26, u27, u28, u29, u30}, {u1, u4, u8, u9, u11, u12, u14, u17, u23}, {u6, u15, u16, u24}, {u19, u20, u21}}.

From Proposition 4.1, we obtain the condition attribute granules in S = (U, C ∪ D, f).

Proposition 4.2

The following condition attribute granules hold for S = (U, C ∪ D, f).

(1) C(u) = {u} for each u ∈ {u3, u4, u6, u7, u11, u13, u14, u16, u17, u18, u19, u20, u21, u23}.

(2) C(u) = X for each u ∈ X and X ∈ {{u1, u24}, {u2, u9}, {u12, u15}, {u5, u25, u29}, {u8, u10, u28}, {u22, u26, u27, u30}}.

Let U/d = {D1, D2, D3, D4}, where D1 = {u2, u3, u5, u7, u10, u13, u18, u22, u25, u26, u27, u28, u29, u30}, D2 = {u1, u4, u8, u9, u11, u12, u14, u17, u23}, D3 = {u6, u15, u16, u24} and D4 = {u19, u20, u21}. For u ∈ Di, the value of the decision attribute is di (i = 1, 2, 3, 4). By Lemma 3.5 and Propositions 4.1 and 4.2, we have the upper and lower approximations of D1, D2, D3, D4.

Proposition 4.3

The following upper and lower approximations of D1, D2, D3, D4 hold for S = (U, C ∪ D, f).

(1) apr_(D1) = {u3, u5, u7, u13, u18, u22, u25, u26, u27, u29, u30},

apr̄(D1) = {u2, u3, u5, u7, u8, u9, u10, u13, u18, u22, u25, u26, u27, u28, u29, u30}.

(2) apr_(D2) = {u4, u11, u14, u17, u23},

apr̄(D2) = {u1, u2, u4, u8, u9, u10, u11, u12, u14, u15, u17, u23, u24, u28}.

(3) apr_(D3) = {u6, u16},

apr̄(D3) = {u1, u6, u12, u15, u16, u24}.

(4) apr_(D4) = {u19, u20, u21},

apr̄(D4) = {u19, u20, u21}.

Now, by Definition 3.6 and Proposition 4.3, we discuss the three-way decision rules for S = (U, C ∪ D, f). We have the following positive, negative, and boundary regions of D1, D2, D3, D4.

Theorem 4.4

The following positive, negative, and boundary regions of D1, D2, D3, D4 hold for S = (U, C ∪ D, f).

(1) POS(D1) = {u3, u5, u7, u13, u18, u22, u25, u26, u27, u29, u30}.

NEG(D1) = {u1, u4, u6, u11, u12, u14, u15, u16, u17, u19, u20, u21, u23, u24}.

BND(D1) = {u2, u8, u9, u10, u28}.

(2) POS(D2) = {u4, u11, u14, u17, u23}.

NEG(D2) = {u3, u5, u6, u7, u13, u16, u18, u19, u20, u21, u22, u25, u26, u27, u29, u30}.

BND(D2) = {u1, u2, u8, u9, u10, u12, u15, u24, u28}.

(3) POS(D3) = {u6, u16}.

NEG(D3) = {u2, u3, u4, u5, u7, u8, u9, u10, u11, u13, u14, u17, u18, u19, u20, u21, u22, u23, u25, u26, u27, u28, u29, u30}.

BND(D3) = {u1, u12, u15, u24}.

(4) POS(D4) = {u19, u20, u21}.

NEG(D4) = {u1, u2, u3, u4, u5, u6, u7, u8, u9, u10, u11, u12, u13, u14, u15, u16, u17, u18, u22, u23, u24, u25, u26, u27, u28, u29, u30}.

BND(D4) = ∅.

By Theorems 3.10 and 4.4, we can easily obtain the three-way decision rules for S = (U, C ∪ D, f). Table 3 lists Des(C(u)) →_Λ Des(Di) for all u ∈ U and i ∈ {1, 2, 3, 4}, where Λ ∈ {P, N, B}. Here, the decision rule Des(C(u)) →_Λ Des(Di) is simply denoted by C(u) →_Λ Di for u ∈ U, Λ ∈ {P, N, B} and all i ∈ {1, 2, 3, 4} (this notation is also used in Table 4).

The three-way decision rules for S = (U, C ∪ D, f) can also be characterized by their confidence. In other words, we can use conf(Des(C(u)) →_Λ Des(Di)) to characterize Des(C(u)) →_Λ Des(Di), where u ∈ U, Λ ∈ {P, N, B} and i ∈ {1, 2, 3, 4}. For example, for u1, C(u1) = {u1, u24}, C(u1) ∩ D1 = ∅, C(u1) ∩ D2 = {u1}, C(u1) ∩ D3 = {u24} and C(u1) ∩ D4 = ∅. By Definition 3.11,

conf(Des(C(u1)) →_Λ Des(D1)) = |C(u1) ∩ D1|/|C(u1)| = 0,

conf(Des(C(u1)) →_Λ Des(D2)) = |C(u1) ∩ D2|/|C(u1)| = 1/2,

conf(Des(C(u1)) →_Λ Des(D3)) = |C(u1) ∩ D3|/|C(u1)| = 1/2,

conf(Des(C(u1)) →_Λ Des(D4)) = |C(u1) ∩ D4|/|C(u1)| = 0.

Thus, we obtain the confidence of the three-way decision rules on S = (U, C ∪ D, f) (see Table 4).
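The computation above can be reproduced in Python for the whole row of u1; the function name rule_and_conf is our own, while C(u1) and the classes D1-D4 are taken from Propositions 4.1 and 4.2:

```python
from fractions import Fraction

def rule_and_conf(granule, X):
    """Classify the decision rule for C(u) versus a decision class X via
    its confidence (Proposition 3.12): conf = 1 -> P, conf = 0 -> N,
    otherwise B."""
    c = Fraction(len(granule & X), len(granule))
    return ("P" if c == 1 else "N" if c == 0 else "B"), c

# C(u1) and the decision classes D1-D4 from Proposition 4.1.
C_u1 = {"u1", "u24"}
D = {
    "D1": {"u2", "u3", "u5", "u7", "u10", "u13", "u18", "u22", "u25",
           "u26", "u27", "u28", "u29", "u30"},
    "D2": {"u1", "u4", "u8", "u9", "u11", "u12", "u14", "u17", "u23"},
    "D3": {"u6", "u15", "u16", "u24"},
    "D4": {"u19", "u20", "u21"},
}
for name, X in D.items():
    print(name, *rule_and_conf(C_u1, X))  # N 0, B 1/2, B 1/2, N 0 (row u1 of Tables 3 and 4)
```

Iterating the same loop over all 30 granules reproduces Tables 3 and 4 in full.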

Tables 3 and 4 give the three-way decision rules on S = (U, C ∪ D, f) and the confidence of the three-way decision rules, respectively, which answers Question 1.3 (see Conclusion 5.2 for more detailed explanations).

In this section, we focus on Question 1.2. First, we provide a remark on the three-way decision rules on S = (U, C ∪ D, f).

Remark 5.1

The following are some semantic interpretations of the three-way decision rules on S = (U, C ∪ D, f) in the sense of [15], where i ∈ {1, 2, 3, 4}. By Proposition 3.12,

(1) For u ∈ U, if Des(C(u)) →_P Des(Di) (equivalently, conf(Des(C(u)) →_Λ Des(Di)) = 1), then the decision attribute value derived from the condition attribute values is di.

(2) For u ∈ U, if Des(C(u)) →_N Des(Di) (equivalently, conf(Des(C(u)) →_Λ Des(Di)) = 0), then the decision attribute value derived from the condition attribute values is not di.

(3) For u ∈ U, if Des(C(u)) →_B Des(Di) (equivalently, 0 < conf(Des(C(u)) →_Λ Des(Di)) < 1), then it is uncertain whether the decision attribute value derived from the condition attribute values is di.

From Table 3 (or Table 4) and Remark 5.1, we have the following more detailed explanations of the three-way decision rules on S = (U, C ∪ D, f).

Conclusion 5.2

The following are true for S = (U, C ∪ D, f).

(1) For u = u1, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →_N Des(Di), where i = 1, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →_B Des(Dj), where j = 2, 3.

(2) For u = u2, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →_N Des(Di), where i = 3, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →_B Des(Dj), where j = 1, 2.

(3) For u = u3, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →_P Des(D1).

(4) For u = u4, the decision attribute value derived from the condition attribute values is d2 because Des(C(u)) →_P Des(D2).

(5) For u = u5, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →_P Des(D1).

(6) For u = u6, the decision attribute value derived from the condition attribute values is d3 because Des(C(u)) →_P Des(D3).

(7) For u = u7, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →_P Des(D1).

(8) For u = u8, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →_N Des(Di), where i = 3, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →_B Des(Dj), where j = 1, 2.

(9) For u = u9, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →_N Des(Di), where i = 3, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →_B Des(Dj), where j = 1, 2.

(10) For u = u10, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →_N Des(Di), where i = 3, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →_B Des(Dj), where j = 1, 2.

(11) For u = u11, the decision attribute value derived from the condition attribute values is d2 because Des(C(u)) →_P Des(D2).

(12) For u = u12, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →_N Des(Di), where i = 1, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →_B Des(Dj), where j = 2, 3.

(13) For u = u13, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →_P Des(D1).

(14) For u = u14, the decision attribute value derived from the condition attribute values is d2 because Des(C(u)) →_P Des(D2).

(15) For u = u15, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →_N Des(Di), where i = 1, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →_B Des(Dj), where j = 2, 3.

(16) For u = u16, the decision attribute value derived from the condition attribute values is d3 because Des(C(u)) →_P Des(D3).

(17) For u = u17, the decision attribute value derived from the condition attribute values is d2 because Des(C(u)) →_P Des(D2).

(18) For u = u18, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →_P Des(D1).

(19) For u = u19, the decision attribute value derived from the condition attribute values is d4 because Des(C(u)) →_P Des(D4).

(20) For u = u20, the decision attribute value derived from the condition attribute values is d4 because Des(C(u)) →_P Des(D4).

(21) For u = u21, the decision attribute value derived from the condition attribute values is d4 because Des(C(u)) →_P Des(D4).

(22) For u = u22, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →_P Des(D1).

(23) For u = u23, the decision attribute value derived from the condition attribute values is d2 because Des(C(u)) →_P Des(D2).

(24) For u = u24, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →_N Des(Di), where i = 2, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →_B Des(Dj), where j = 1, 3.

(25) For u = u25, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →_P Des(D1).

(26) For u = u26, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →_P Des(D1).

(27) For u = u27, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →_P Des(D1).

(28) For u = u28, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →_N Des(Di), where i = 3, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →_B Des(Dj), where j = 1, 2.

(29) For u = u29, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →_P Des(D1).

(30) For u = u30, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →_P Des(D1).

In S = (U, C ∪ D, f), Conclusion 5.2 gives a theoretical answer to Question 1.3.

Conclusion 5.3

For online teaching data of calculus (2), the final scores of most students are credible.

For students 2019488007, 2019488008, 2019488009, 2019488011, 2019488013, 2019488022, 2019488027, 2019488028, 2019488032, 2019488034, 2019488036, 2019488038, 2019488039, 2019488040, 2019488041, 2019488045, 2019488049, 2019488050, 2019488054, 2019488057, and 2019488060, the scores are credible. For students 2019488002, 2019488005, 2019488014, 2019488015, 2019488020, 2019488025, 2019488030, 2019488046, and 2019488055, the scores are uncertain.

Thus, the scores of the online exam are mostly credible. This provides a theoretical basis for online teaching and examinations.

Although this paper focuses on the theoretical analysis of online learning data of calculus (2), it still has practical significance for other online courses. However, a more practical analysis is required to determine whether this theory is suitable for investigating other problems.

The authors express gratitude to the reviewers for their thoughtful comments and valuable suggestions. This project was supported by the National Natural Science Foundation of China (Grant No. 61472469).
Table 1. Online learning data of calculus (2) course.

Order | Student number | Score of Early Semester | Average Score of Class Quizzes | Completion Ratio of Watching Videos (%) | Completion Grade of Homework | Score of Final Exam
1 | 2019488002 | 56 | 35 | 53 | C | 67
2 | 2019488005 | 79 | 45 | 85 | A | 87
3 | 2019488007 | 81 | 25 | 70 | C | 83
4 | 2019488008 | 84 | 50 | 45 | B | 74
5 | 2019488009 | 77 | 45 | 21 | B | 81
6 | 2019488011 | 68 | 35 | 52 | D | 56
7 | 2019488013 | 86 | 35 | 45 | A | 90
8 | 2019488014 | 84 | 50 | 95 | A | 74
9 | 2019488015 | 81 | 50 | 89 | A | 78
10 | 2019488020 | 91 | 45 | 88 | A | 84
11 | 2019488022 | 67 | 35 | 43 | A | 73
12 | 2019488025 | 69 | 35 | 50 | C | 68
13 | 2019488027 | 81 | 35 | 90 | B | 84
14 | 2019488028 | 63 | 80 | 96 | B | 73
15 | 2019488030 | 69 | 35 | 57 | C | 66
16 | 2019488032 | 65 | 55 | 94 | C | 62
17 | 2019488034 | 69 | 55 | 85 | A | 74
18 | 2019488036 | 84 | 55 | 42 | A | 85
19 | 2019488038 | 56 | 45 | 72 | D | 51
20 | 2019488039 | 42 | 25 | 71 | C | 49
21 | 2019488040 | 53 | 35 | 98 | D | 41
22 | 2019488041 | 74 | 50 | 81 | B | 91
23 | 2019488045 | 81 | 70 | 99 | A | 71
24 | 2019488046 | 65 | 35 | 51 | C | 57
25 | 2019488049 | 69 | 50 | 39 | B | 86
26 | 2019488050 | 74 | 45 | 83 | B | 79
27 | 2019488054 | 73 | 40 | 97 | B | 81
28 | 2019488055 | 95 | 50 | 94 | A | 91
29 | 2019488057 | 80 | 45 | 37 | B | 79
30 | 2019488060 | 77 | 50 | 90 | B | 90

Table 2. Decision table.

U | c1 | c2 | c3 | c4 | d
u1 | c_3^1 | c_4^2 | c_3^3 | c_3^4 | d2
u2 | c_2^1 | c_3^2 | c_1^3 | c_1^4 | d1
u3 | c_2^1 | c_4^2 | c_2^3 | c_3^4 | d1
u4 | c_1^1 | c_3^2 | c_3^3 | c_2^4 | d2
u5 | c_2^1 | c_3^2 | c_4^3 | c_2^4 | d1
u6 | c_3^1 | c_4^2 | c_3^3 | c_4^4 | d3
u7 | c_1^1 | c_4^2 | c_3^3 | c_1^4 | d1
u8 | c_1^1 | c_3^2 | c_1^3 | c_1^4 | d2
u9 | c_2^1 | c_3^2 | c_1^3 | c_1^4 | d2
u10 | c_1^1 | c_3^2 | c_1^3 | c_1^4 | d1
u11 | c_3^1 | c_4^2 | c_3^3 | c_1^4 | d2
u12 | c_2^1 | c_4^2 | c_3^3 | c_3^4 | d2
u13 | c_2^1 | c_4^2 | c_1^3 | c_2^4 | d1
u14 | c_3^1 | c_1^2 | c_1^3 | c_2^4 | d2
u15 | c_2^1 | c_4^2 | c_3^3 | c_3^4 | d3
u16 | c_3^1 | c_2^2 | c_1^3 | c_3^4 | d3
u17 | c_2^1 | c_2^2 | c_1^3 | c_1^4 | d2
u18 | c_1^1 | c_2^2 | c_3^3 | c_1^4 | d1
u19 | c_3^1 | c_3^2 | c_2^3 | c_4^4 | d4
u20 | c_4^1 | c_4^2 | c_2^3 | c_3^4 | d4
u21 | c_4^1 | c_4^2 | c_1^3 | c_4^4 | d4
u22 | c_2^1 | c_3^2 | c_1^3 | c_2^4 | d1
u23 | c_2^1 | c_1^2 | c_1^3 | c_1^4 | d2
u24 | c_3^1 | c_4^2 | c_3^3 | c_3^4 | d3
u25 | c_2^1 | c_3^2 | c_4^3 | c_2^4 | d1
u26 | c_2^1 | c_3^2 | c_1^3 | c_2^4 | d1
u27 | c_2^1 | c_3^2 | c_1^3 | c_2^4 | d1
u28 | c_1^1 | c_3^2 | c_1^3 | c_1^4 | d1
u29 | c_2^1 | c_3^2 | c_4^3 | c_2^4 | d1
u30 | c_2^1 | c_3^2 | c_1^3 | c_2^4 | d1

Table 3. Three-way decision rules on S = (U, C ∪ D, f).

u | C(u) →_Λ D1 | C(u) →_Λ D2 | C(u) →_Λ D3 | C(u) →_Λ D4
u1 | Λ = N | Λ = B | Λ = B | Λ = N
u2 | Λ = B | Λ = B | Λ = N | Λ = N
u3 | Λ = P | Λ = N | Λ = N | Λ = N
u4 | Λ = N | Λ = P | Λ = N | Λ = N
u5 | Λ = P | Λ = N | Λ = N | Λ = N
u6 | Λ = N | Λ = N | Λ = P | Λ = N
u7 | Λ = P | Λ = N | Λ = N | Λ = N
u8 | Λ = B | Λ = B | Λ = N | Λ = N
u9 | Λ = B | Λ = B | Λ = N | Λ = N
u10 | Λ = B | Λ = B | Λ = N | Λ = N
u11 | Λ = N | Λ = P | Λ = N | Λ = N
u12 | Λ = N | Λ = B | Λ = B | Λ = N
u13 | Λ = P | Λ = N | Λ = N | Λ = N
u14 | Λ = N | Λ = P | Λ = N | Λ = N
u15 | Λ = N | Λ = B | Λ = B | Λ = N
u16 | Λ = N | Λ = N | Λ = P | Λ = N
u17 | Λ = N | Λ = P | Λ = N | Λ = N
u18 | Λ = P | Λ = N | Λ = N | Λ = N
u19 | Λ = N | Λ = N | Λ = N | Λ = P
u20 | Λ = N | Λ = N | Λ = N | Λ = P
u21 | Λ = N | Λ = N | Λ = N | Λ = P
u22 | Λ = P | Λ = N | Λ = N | Λ = N
u23 | Λ = N | Λ = P | Λ = N | Λ = N
u24 | Λ = B | Λ = N | Λ = B | Λ = N
u25 | Λ = P | Λ = N | Λ = N | Λ = N
u26 | Λ = P | Λ = N | Λ = N | Λ = N
u27 | Λ = P | Λ = N | Λ = N | Λ = N
u28 | Λ = B | Λ = B | Λ = N | Λ = N
u29 | Λ = P | Λ = N | Λ = N | Λ = N
u30 | Λ = P | Λ = N | Λ = N | Λ = N

Table 4. Confidence of the three-way decision rules on S = (U, C ∪ D, f).

u | C(u) →_Λ D1 | C(u) →_Λ D2 | C(u) →_Λ D3 | C(u) →_Λ D4
u1 | conf = 0 | conf = 1/2 | conf = 1/2 | conf = 0
u2 | conf = 1/2 | conf = 1/2 | conf = 0 | conf = 0
u3 | conf = 1 | conf = 0 | conf = 0 | conf = 0
u4 | conf = 0 | conf = 1 | conf = 0 | conf = 0
u5 | conf = 1 | conf = 0 | conf = 0 | conf = 0
u6 | conf = 0 | conf = 0 | conf = 1 | conf = 0
u7 | conf = 1 | conf = 0 | conf = 0 | conf = 0
u8 | conf = 2/3 | conf = 1/3 | conf = 0 | conf = 0
u9 | conf = 1/2 | conf = 1/2 | conf = 0 | conf = 0
u10 | conf = 2/3 | conf = 1/3 | conf = 0 | conf = 0
u11 | conf = 0 | conf = 1 | conf = 0 | conf = 0
u12 | conf = 0 | conf = 1/2 | conf = 1/2 | conf = 0
u13 | conf = 1 | conf = 0 | conf = 0 | conf = 0
u14 | conf = 0 | conf = 1 | conf = 0 | conf = 0
u15 | conf = 0 | conf = 1/2 | conf = 1/2 | conf = 0
u16 | conf = 0 | conf = 0 | conf = 1 | conf = 0
u17 | conf = 0 | conf = 1 | conf = 0 | conf = 0
u18 | conf = 1 | conf = 0 | conf = 0 | conf = 0
u19 | conf = 0 | conf = 0 | conf = 0 | conf = 1
u20 | conf = 0 | conf = 0 | conf = 0 | conf = 1
u21 | conf = 0 | conf = 0 | conf = 0 | conf = 1
u22 | conf = 1 | conf = 0 | conf = 0 | conf = 0
u23 | conf = 0 | conf = 1 | conf = 0 | conf = 0
u24 | conf = 1/2 | conf = 0 | conf = 1/2 | conf = 0
u25 | conf = 1 | conf = 0 | conf = 0 | conf = 0
u26 | conf = 1 | conf = 0 | conf = 0 | conf = 0
u27 | conf = 1 | conf = 0 | conf = 0 | conf = 0
u28 | conf = 2/3 | conf = 1/3 | conf = 0 | conf = 0
u29 | conf = 1 | conf = 0 | conf = 0 | conf = 0
u30 | conf = 1 | conf = 0 | conf = 0 | conf = 0

  1. Zeng, L. Analysis of the implementation effect and the influential factors of online teaching in the background of epidemic prevention. Higher Education Exploration. 7, 85-9.
  2. Wang, X, Yang, H, Cui, Y, and Zuo, C (2020). Analysis of College Students’ Online Learning Identity Degree during the ‘Fight against COVID-19’ Period. Modern Educational Technology. 30, 105-112.
  3. Maqableh, M, and Alia, M (2021). Evaluation online learning of undergraduate students under lockdown amidst COVID-19 pandemic: the online learning experience and students’ satisfaction. Children and Youth Services Review. 128. article no. 106160
    CrossRef
  4. Duo, Y (2022). Research on the equivalence of online test scores and traditional test scores: a meta-analysis based on international empirical research from 2000 to 2020. Distance Education in China. 2022, 73-84.
  5. Ilgaz, H, and Afacan Adanir, G (2020). Providing online exams for online learners: does it really matter for them?. Education and Information Technologies. 25, 1255-1269. https://doi.org/10.1007/s10639-019-10020-6
    CrossRef
  6. Thom, ML, Kimble, BA, Qua, K, and Wish-Baratz, S (2021). Is remote near-peer anatomy teaching an effective teaching strategy? Lessons learned from the transition to online learning during the Covid-19 pandemic. Anatomical Sciences Education. 14, 552-561. https://doi.org/10.1002/ase.2122
    Pubmed KoreaMed CrossRef
  7. Wang, DJ, and Guo, WA (2016). Pedagogy. Beijing, China: People’s Education Press
  8. Pawlak, Z (1982). Rough sets. International Journal of Computer & Information Sciences. 11, 341-356. https://doi.org/10.1007/BF01001956
    CrossRef
  9. Pawlak, Z (2002). Rough sets, decision algorithms and Bayes’ theorem. European Journal of Operational Research. 136, 181-189. https://doi.org/10.1016/S0377-2217(01)00029-7
    CrossRef
  10. Lee, CH, and Seo, SH (2004). Efficient extraction of hierarchically structured rules using rough sets. International Journal of Fuzzy Logic and Intelligent Systems. 4, 205-210. https://doi.org/10.5391/IJFIS.2004.4.2.205
    CrossRef
  11. Stepaniuk, J (2009). Rough–Granular Computing in Knowledge Discovery and Data Mining. Heidelberg, Germany: Springer
  12. Yao, Y, and Zhao, Y (2008). Attribute reduction in decision-theoretic rough set models. Information Sciences. 178, 3356-3373. https://doi.org/10.1016/j.ins.2008.05.010
    CrossRef
  13. Ge, X, and Qian, J (2009). Some investigations on higher mathematics scores for Chinese university students. International Journal of Computer and Information Science and Engineering. 3, 46-49.
  14. Yang, S, and Ge, Y (2010). Remarks on some properties of decision rules. International Journal of Mathematical and Computational Sciences. 4, 178-181.
  15. Yao, Y (2009). Three-way decision: an interpretation of rules in rough set theory. Rough Sets and Knowledge Technology. Heidelberg, Germany: Springer, pp. 642-649 https://doi.org/10.1007/978-3-642-02962-281
    CrossRef
  16. Yao, Y (2010). Three-way decisions with probabilistic rough sets. Information Sciences. 180, 341-353. https://doi.org/10.1016/j.ins.2009.09.021
    CrossRef
  17. Ma, JM, Zhang, HY, and Qian, YH (2019). Three-way decisions with reflexive probabilistic rough fuzzy sets. Granular Computing. 4, 363-375. https://doi.org/10.1007/s41066-018-0125-2
    CrossRef
  18. Kim, YC (2014). The properties of L-lower approximation operators. International Journal of Fuzzy Logic and Intelligent Systems. 14, 57-65. https://doi.org/10.5391/IJFIS.2014.14.1.57
    CrossRef
  19. Yun, SM, and Lee, SJ (2015). Intuitionistic fuzzy rough approximation operators. International Journal of Fuzzy Logic and Intelligent Systems. 15, 208-215. https://doi.org/10.5391/IJFIS.2015.15.3.208
    CrossRef
  20. Montgomery, DC (1996). Introduction to Statistical Quality Control. Hoboken, NJ: John Wiley & Sons

Songlin Yang, Ph.D. Work Unit: WENZHENG College of Soochow University and Soochow College, Soochow University Research interests: Differential Geometry, Topology and Rough Set Theory

E-mail: songliny@suda.edu.cn


Ting Xu, Master Work Unit: WENZHENG College of Soochow University Research interests: Computational Mathematics and Operations Research

E-mail: tingxu_wz@sina.com


Yongcun Shao, Master Work Unit: WENZHENG College of Soochow University Research interests: Functional Analysis and Rough Set Theory

E-mail: wzj015@suda.edu.cn



1. Introduction

The coronavirus disease 2019 (COVID-19) pandemic disrupted the original offline teaching mode in colleges and universities. Under the policy of “suspending classes without stopping teaching, suspending classes without stopping learning,” we developed an online teaching mode for calculus courses using the Superstar network teaching platform and successfully completed the online teaching process, including lesson preparation, teaching, and examinations. However, we found some problems during online teaching, for example, “lesson brushing” [1,2]: some students merely wanted to complete their study tasks, so they opened the course videos and did other things after signing in, and some used a so-called “lesson brush artifact” to complete their tasks. We also found problems in the online examinations; for example, because there are no proctors, some students may copy answers or cheat in other ways. Therefore, it is important to evaluate online learning and student satisfaction. In [3-5], the authors evaluated the impact of shifting from traditional learning to online learning during the COVID-19 pandemic on undergraduate students and examined the positive and negative aspects of online learning from the students’ perspectives. Thom et al. [6] examined the lessons learned in online learning during the COVID-19 pandemic. However, it is also necessary to evaluate the credibility of online examinations. In this study, we investigated the credibility of the online examination for the calculus course we had developed. According to pedagogy theory [7], students’ scores are related to the following factors: intelligence level, course basis, psychological state, learning method, and learning time. In particular, we focused on the following four aspects: students’ proficiency in the course, appropriate learning methods, degree of class attendance, and completion of homework.

To investigate the credibility of the online examination, we randomly selected 30 engineering students at Wenzheng College of Soochow University as a sample and established an information system S = (U, C ∪ D, f). The online learning data for these students are presented in Table 1. We used rough-set theory to analyze Table 1 and extract the useful information hidden in these data. Rough-set theory, a logic-mathematical method proposed by Pawlak [8,9], is an effective tool for extracting and analyzing useful information hidden in data, and in recent years it has been widely applied in many fields of the natural and social sciences [10-14]. In 2009, Yao [15,16] introduced a third decision, the boundary decision, in addition to the two-way decision, and proposed the three-way decision theory. The three-way decision theory provides a rational interpretation of the three regions in rough sets: corresponding to the positive, negative, and boundary regions, it yields the regions of acceptance, rejection, and non-commitment in a ternary classification [17]. In this study, we use the three-way decision rules to analyze and extract the useful information hidden in Table 1.

Remark 1.1

“Score of Early Semester” in Table 1 refers to the students’ calculus (1) course scores in the last semester. “Average Score of Class Quizzes” is a weighted average of the students’ scores from six class quizzes and two chapter quizzes during this semester; three or four problems in the class quiz are completed in 15 to 20 minutes. Then, the student signs in with the student number and name within the allotted time and uploads the completed quiz to the teacher. These quizzes can appropriately reflect the students’ learning situation. “Completion Ratio of Watching Videos” is the average completion ratio of students watching the teaching videos in this semester; “Completion Grade of Homework” refers to the students’ completion grade of homework in this semester. “Score of Final Exam” is the student’s final exam score in this semester.

Combining Table 1 with the queries mentioned earlier, the following question arises.

Question 1.2

Does Table 1 accurately reflect that the “Score of Final Exam” is derived from the “Score of Early Semester,” “Average Score of Class Quizzes,” “Completion Ratio of Watching Videos,” and “Completion Grade of Homework”?

To discuss Question 1.2 theoretically, the useful information hidden in Table 1 must be extracted. This leads us to establish an information system S = (U, C ∪ D, f) based on Table 1, where “Score of Early Semester,” “Average Score of Class Quizzes,” “Completion Ratio of Watching Videos,” and “Completion Grade of Homework” are taken as the condition attributes, and “Score of Final Exam” is taken as the decision attribute. Thus, the authenticity of the students’ scores described in Question 1.2 can be converted into the decision rule question described in the following question.

Question 1.3

Let S = (U, C ∪ D, f) be the information system established based on Table 1. In S = (U, C ∪ D, f), is the decision attribute derived from the condition attributes? Furthermore, how can we characterize the confidence that the decision attribute is derived from the condition attributes?

Our discussion centers on Questions 1.2 and 1.3. We apply the three-way decision rules to the information system S = (U, C ∪ D, f) and use confidence to characterize the three-way decision rules, which answers Question 1.3. On this basis, we check whether the “Score of Final Exam” in Table 1 is accurately derived from the “Score of Early Semester,” “Average Score of Class Quizzes,” “Completion Ratio of Watching Videos,” and “Completion Grade of Homework.” The results of this study provide a new theoretical method for analyzing online teaching and learning information for calculus, which will help teachers develop other online courses accurately.

The basic concepts of rough-set theory and decision rules can be found in [8,13,14,16,18,19].

Notation 1.4

(1) For a finite set B, |B| denotes the cardinality of B.

(2) For a collection ℱ1, ℱ2, ⋯, ℱk of families of sets, ∧{ℱi : i = 1, 2, ⋯, k} denotes the family of sets:

∧{ℱi : i = 1, 2, ⋯, k} = {∩{Fi : i = 1, 2, ⋯, k} : Fi ∈ ℱi, i = 1, 2, ⋯, k}.
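As a concrete illustration, the meet ∧ of Notation 1.4(2) can be computed directly from this definition. The Python sketch below is our own illustration (the function name is not from the paper); it intersects one member from each family and, so that the meet of partitions of U is again a partition of U, drops empty intersections:

```python
from itertools import product

def meet(*families):
    # all intersections F1 ∩ ... ∩ Fk with Fi drawn from the i-th family;
    # empty intersections are dropped so that a meet of partitions of U
    # is again a partition of U
    out = []
    for choice in product(*families):
        inter = set.intersection(*map(set, choice))
        if inter and inter not in out:
            out.append(inter)
    return out

# the meet of two partitions of {1, 2, 3, 4} refines both of them:
print(meet([{1, 2}, {3, 4}], [{1, 3}, {2, 4}]))  # → [{1}, {2}, {3}, {4}]
```

This is exactly the operation used later to form U/C = ∧{U/c : c ∈ C}.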

Definition 1.5

S = (U, C ∪ D, f) is called an information system, where

(1) U, a nonempty finite set, is called the universe of discourse.

(2) C ∪ D is a finite set of attributes, where C and D are disjoint nonempty sets of condition attributes and decision attributes, respectively.

(3) f is an information function defined on U × (C ∪ D). For each u ∈ U and x ∈ C, f(u, x) is called a condition attribute value. For each u ∈ U and x ∈ D, f(u, x) is called a decision attribute value.

2. Establishment of the Information System S = (U, C ∪ D, f)

To apply the three-way decision rules to the theoretical analysis of the online learning data of the calculus (2) course, we must establish an information system S = (U, C ∪ D, f) based on Table 1.

(1) Let U = {u1, u2, ⋯, u30} represent the set of the 30 sampled students.

(2) Let C = {c1, c2, c3, c4}, where c1, c2, c3, c4 denote the “Score of Early Semester”, “Average Score of Class Quizzes”, “Completion Ratio of Watching Videos”, and “Completion Grade of Homework”, respectively.

(3) Let D = {d}, where d denotes “Score of Final Exam”.

According to the general rule of statistical grouping (e.g., refer to [20]), we divide the values of each attribute into four groups. For the condition attributes c1, c2, and c3, the group interval is

h = ((max + 0.5) - (min - 0.5))/4,

where max and min are the maximum and minimum values, respectively, of the set of group data.
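The group interval can be computed mechanically. The following sketch (the function and argument names are ours, not the paper's) reproduces the values hc1 = 13.5, hc2 = 14.0, and hc3 ≈ 19.8 used in (i)-(iii):

```python
def group_interval(values, k=4):
    # h = ((max + 0.5) - (min - 0.5)) / k, the equidistant group width
    return ((max(values) + 0.5) - (min(values) - 0.5)) / k

print(group_interval([42, 95]))  # c1: scores range 42..95 → 13.5
print(group_interval([25, 80]))  # c2: scores range 25..80 → 14.0
print(group_interval([21, 99]))  # c3: ratios range 21..99 → 19.75 ≈ 19.8
```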

(i) For the condition attribute c1, the group interval is hc1 = ((95 + 0.5) - (42 - 0.5))/4 = 13.5. According to the equidistant grouping method, the open interval (41.5, 95.5) can be divided into four intervals: (41.5, 55.0), [55.0, 68.5), [68.5, 82.0), and [82.0, 95.5). Let c11 indicate “Score of Early Semester” between 82.0 and 95.5, let c21 indicate “Score of Early Semester” between 68.5 and 82.0, let c31 indicate “Score of Early Semester” between 55.0 and 68.5, and let c41 indicate “Score of Early Semester” between 41.5 and 55.0.

(ii) For the condition attribute c2, the group interval is hc2 = ((80 + 0.5) - (25 - 0.5))/4 = 14.0. According to the equidistant grouping method, the open interval (24.5, 80.5) can be divided into four intervals: (24.5, 38.5), [38.5, 52.5), [52.5, 66.5), and [66.5, 80.5). Let c12 indicate “Average Score of Class Quizzes” between 66.5 and 80.5, let c22 indicate “Average Score of Class Quizzes” between 52.5 and 66.5, let c32 indicate “Average Score of Class Quizzes” between 38.5 and 52.5, and let c42 indicate “Average Score of Class Quizzes” between 24.5 and 38.5.

(iii) For the condition attribute c3, the group interval is hc3 = ((99 + 0.5) - (21 - 0.5))/4 ≈ 19.8. According to the equidistant grouping method, the open interval (20.5, 99.5) can be divided into four intervals: (20.5, 40.3), [40.3, 60.1), [60.1, 79.9), and [79.9, 99.5). Let c13 indicate “Completion Ratio of Watching Videos” between 79.9 and 99.5, let c23 indicate “Completion Ratio of Watching Videos” between 60.1 and 79.9, let c33 indicate “Completion Ratio of Watching Videos” between 40.3 and 60.1, and let c43 indicate “Completion Ratio of Watching Videos” between 20.5 and 40.3.

(iv) For the condition attribute c4, let c14 indicate “Completion Grade of Homework” A, let c24 indicate “Completion Grade of Homework” B, let c34 indicate “Completion Grade of Homework” C, and let c44 indicate “Completion Grade of Homework” D.

(v) For the decision attribute d, the group interval is hd = ((91 + 0.5) - (41 - 0.5))/4 ≈ 12.8. According to the equidistant grouping method, the open interval (40.5, 91.5) can be divided into four intervals: Id1 = [78.9, 91.5), Id2 = [66.1, 78.9), Id3 = [53.3, 66.1), and Id4 = (40.5, 53.3). Let d1 indicate “Score of Final Exam” between 78.9 and 91.5, let d2 indicate “Score of Final Exam” between 66.1 and 78.9, let d3 indicate “Score of Final Exam” between 53.3 and 66.1, and let d4 indicate “Score of Final Exam” between 40.5 and 53.3.

We now define the information function f: for each u ∈ U, if the value of u on d lies in Idj, we set f(u, d) = dj (j = 1, 2, 3, 4). Similarly, for each u ∈ U, we define f(u, ci) = cji for the appropriate i, j = 1, 2, 3, 4. Consequently, the information function f on U × (C ∪ D) is constructed.

From (i)-(v), an information system S = (U, C ∪ D, f) based on Table 1 is established. Furthermore, S = (U, C ∪ D, f) can be expressed as a decision table (see Table 2), whose columns are labeled by the elements of C ∪ D and whose rows are labeled by the elements of U. For each u ∈ U and x ∈ C ∪ D, f(u, x) lies at the intersection of the row labeled u and the column labeled x.
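The construction of f in (i)-(v) can be sketched as a small discretization routine. The code below is our illustration, not part of the paper: the cut points are the interval boundaries fixed above, and counting how many cut points a value passes assigns group 4 (lowest interval) through group 1 (highest interval).

```python
import bisect

# interior cut points of the four intervals for c1, c2, c3 and d, from (i)-(v)
CUTS = {
    "c1": [55.0, 68.5, 82.0],
    "c2": [38.5, 52.5, 66.5],
    "c3": [40.3, 60.1, 79.9],
    "d":  [53.3, 66.1, 78.9],
}
GRADE = {"A": 1, "B": 2, "C": 3, "D": 4}  # homework grade A..D → group of c4

def group(attr, value):
    # 0 cut points passed → group 4 (lowest interval), 3 passed → group 1
    return 4 - bisect.bisect_right(CUTS[attr], value)

def discretize(c1, c2, c3, c4, d):
    return (group("c1", c1), group("c2", c2), group("c3", c3),
            GRADE[c4], group("d", d))

# student u1 (row 1 of Table 1): raw record 56, 35, 53, C, 67
print(discretize(56, 35, 53, "C", 67))  # → (3, 4, 3, 3, 2), i.e. row u1 of Table 2
```

The half-open intervals [a, b) of the paper correspond to bisect_right, so a value exactly on a boundary falls into the higher interval.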

3. Preliminaries of Rough Sets and Decision Rules

Now, we provide some simple results that are useful for our discussion.

Remark 3.1

An information system S = (U, C ∪ D, f) can be expressed as a data table, called a decision table, whose columns are labeled by the elements of C ∪ D and whose rows are labeled by the elements of U; f(u, x) lies at the intersection of the row labeled u and the column labeled x.

Notation 3.2

Let S = (U, C ∪ D, f) be an information system.

(1) For x ∈ C ∪ D, we define the equivalence relation ~ on U as follows: ui ~ uj ⇐⇒ f(ui, x) = f(uj, x). U/x denotes the family comprising all equivalence classes with respect to ~.

(2) ∧{U/c : c ∈ C} is a partition of U, denoted by U/C. The equivalence relation induced by U/C is denoted by C.

Definition 3.3

Let S = (U, C ∪ D, f) be an information system and u ∈ U. The equivalence class containing u with respect to the equivalence relation C is denoted by C(u) and is called a condition attribute granule.

Definition 3.4

Let S = (U, C ∪ D, f) be an information system and let X ⊆ U.

(1) apr_(X) = {u : u ∈ U and C(u) ⊆ X} is the lower approximation of X (with respect to C).

(2) apr¯(X) = {u : u ∈ U and C(u) ∩ X ≠ ∅} is the upper approximation of X (with respect to C).

Lemma 3.5

Let S = (U, C ∪ D, f) be an information system and let X ⊆ U. Then, the following hold:

(1) apr_(X) = ∪{C(u) : u ∈ U and C(u) ⊆ X}.

(2) apr¯(X) = ∪{C(u) : u ∈ U and C(u) ∩ X ≠ ∅}.

Definition 3.6

Let S = (U, C ∪ D, f) be an information system and let X ⊆ U.

(1) POS(X) = apr_(X) is the positive region of X.

(2) NEG(X) = U - apr¯(X) is the negative region of X.

(3) BND(X) = apr¯(X) - apr_(X) is the boundary region of X.
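Definitions 3.4 and 3.6 translate directly into code. The sketch below is our own illustration (names are not from the paper); it takes the partition U/C as a list of condition classes and computes the lower and upper approximations and the three regions of a set X:

```python
def lower(classes, X):
    # apr_(X): union of the condition classes contained in X
    return {u for c in classes if c <= X for u in c}

def upper(classes, X):
    # apr¯(X): union of the condition classes that meet X
    return {u for c in classes if c & X for u in c}

def regions(classes, X):
    lo, up = lower(classes, X), upper(classes, X)
    U = {u for c in classes for u in c}
    return lo, U - up, up - lo  # POS(X), NEG(X), BND(X)

# toy check with three of the condition classes that appear in Section 4:
classes = [{"u2", "u9"}, {"u8", "u10", "u28"}, {"u4"}]
X = {"u9", "u8", "u4"}  # the students of this toy universe lying in some decision class
pos, neg, bnd = regions(classes, X)
assert pos == {"u4"} and neg == set()
assert bnd == {"u2", "u9", "u8", "u10", "u28"}
```

Only {u4} is contained in X, so it forms the positive region; the two mixed classes fall entirely into the boundary region.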

Definition 3.7

Let S = (U, C ∪ D, f) be an information system, u ∈ U and X ∈ U/D.

(1) If a rule allows us to accept u as a member of X, this rule is called a positive decision rule, which is denoted by Des(C(u)) →P Des(X).

(2) If a rule allows us to reject u as a member of X, this rule is called a negative decision rule, which is denoted by Des(C(u)) →N Des(X).

(3) If a rule allows us to make an uncertain decision about whether we accept u as a member of X, then this rule is called a boundary decision rule, which is denoted by Des(C(u)) →B Des(X).

Here, the positive, negative, and boundary decision rules are called the three-way decision rules.

In an information system S = (U, C ∪ D, f), the positive, negative, and boundary regions of a decision class X ∈ U/D can be constructed.

Lemma 3.8

Let S = (U, C ∪ D, f) be an information system, u ∈ U and X ∈ U/D. Then, the following hold:

(1) If C(u) ⊆ POS(X), then Des(C(u)) →P Des(X).

(2) If C(u) ⊆ NEG(X), then Des(C(u)) →N Des(X).

(3) If C(u) ⊆ BND(X), then Des(C(u)) →B Des(X).

We provide an easier method to derive the positive, negative, and boundary decision rules.

Lemma 3.9

Let S = (U, C ∪ D, f) be an information system, u ∈ U and X ∈ U/D. Then, the following hold:

(1) u ∈ POS(X) if and only if C(u) ⊆ POS(X);

(2) u ∈ NEG(X) if and only if C(u) ⊆ NEG(X);

(3) u ∈ BND(X) if and only if C(u) ⊆ BND(X).

The following theorem can be obtained from Lemmas 3.8 and 3.9.

Theorem 3.10

Let S = (U, C ∪ D, f) be an information system, u ∈ U and X ∈ U/D. Then, the following hold:

(1) If u ∈ POS(X), then Des(C(u)) →P Des(X).

(2) If u ∈ BND(X), then Des(C(u)) →B Des(X).

(3) If u ∈ NEG(X), then Des(C(u)) →N Des(X).
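By Lemma 3.9 and Theorem 3.10, the rule type for u can be read off from C(u) alone, without first building the regions. A minimal sketch of this shortcut (the function name is ours):

```python
def three_way(u, classes, X):
    # C(u): the condition attribute granule containing u
    C_u = next(c for c in classes if u in c)
    if C_u <= X:
        return "P"  # positive rule: accept u as a member of X
    if not (C_u & X):
        return "N"  # negative rule: reject u as a member of X
    return "B"      # boundary rule: non-commitment

classes = [{"u2", "u9"}, {"u4"}, {"u19"}]
X = {"u4", "u9"}  # a toy decision class
print(three_way("u4", classes, X),
      three_way("u2", classes, X),
      three_way("u19", classes, X))  # → P B N
```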

The confidence of the three-way decision rules on an information system was introduced by Yao and his colleagues [12,15].

Definition 3.11

Let S = (U, C ∪ D, f) be an information system, u ∈ U, X ∈ U/D and Λ ∈ {P, B, N}. Put

conf(Des(C(u)) →Λ Des(X)) = |C(u) ∩ X| / |C(u)|.

Then, conf(Des(C(u)) →Λ Des(X)) is the confidence of the decision rule Des(C(u)) →Λ Des(X).

We have the following proposition:

Proposition 3.12

Let S = (U, C ∪ D, f) be an information system, u ∈ U, Λ ∈ {P, B, N} and X ∈ U/D. Then, the following hold:

(1) conf(Des(C(u)) →Λ Des(X)) = 1 if and only if Des(C(u)) →P Des(X);

(2) conf(Des(C(u)) →Λ Des(X)) = 0 if and only if Des(C(u)) →N Des(X);

(3) 0 < conf(Des(C(u)) →Λ Des(X)) < 1 if and only if Des(C(u)) →B Des(X).
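Proposition 3.12 means that the confidence value alone determines the rule type. A sketch with exact rational arithmetic (our code, not the paper's):

```python
from fractions import Fraction

def conf(C_u, X):
    # confidence |C(u) ∩ X| / |C(u)| of the rule Des(C(u)) →Λ Des(X)
    return Fraction(len(C_u & X), len(C_u))

def rule_type(C_u, X):
    c = conf(C_u, X)
    return "P" if c == 1 else "N" if c == 0 else "B"

# a three-element condition granule of which two members lie in X:
C_u = {"u8", "u10", "u28"}
X = {"u10", "u28", "u22"}
print(conf(C_u, X), rule_type(C_u, X))  # → 2/3 B
```

Using Fraction keeps the confidences exact (2/3, 1/2, ...) instead of floating-point approximations, matching the values reported in Table 4.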

In the following sections, the information system S = (U, C ∪ D, f) is the one expressed by Table 2.

4. Three-Way Decision Rules on the Information System S = (U, C ∪ D, f)

In this section, we discuss the three-way decision rules for S = (U, C ∪ D, f). First, we provide some related partitions of U obtained from Table 2.

Proposition 4.1

The following partitions of U hold for S = (U, C ∪ D, f).

(1) U/C = {{u3}, {u4}, {u6}, {u7}, {u11}, {u13}, {u14}, {u16}, {u17}, {u18}, {u19}, {u20}, {u21}, {u23}, {u1, u24}, {u2, u9}, {u12, u15}, {u5, u25, u29}, {u8, u10, u28}, {u22, u26, u27, u30}}.

(2) U/d = {{u2, u3, u5, u7, u10, u13, u18, u22, u25, u26, u27, u28, u29, u30}, {u1, u4, u8, u9, u11, u12, u14, u17, u23}, {u6, u15, u16, u24}, {u19, u20, u21}}.

From Proposition 4.1, we obtain the condition attribute granules in S = (U, C ∪ D, f).

Proposition 4.2

The following condition attribute granules hold for S = (U, C ∪ D, f).

(1) C(u) = {u} for each u ∈ {u3, u4, u6, u7, u11, u13, u14, u16, u17, u18, u19, u20, u21, u23}.

(2) C(u) = X for each uX and X ∈ {{u1, u24}, {u2, u9}, {u12, u15}, {u5, u25, u29}, {u8, u10, u28}, {u22, u26, u27, u30}}.

Let U/d = {D1, D2, D3, D4}, where D1 = {u2, u3, u5, u7, u10, u13, u18, u22, u25, u26, u27, u28, u29, u30}, D2 = {u1, u4, u8, u9, u11, u12, u14, u17, u23}, D3 = {u6, u15, u16, u24} and D4 = {u19, u20, u21}. For u ∈ Di, the value of the decision attribute is di (i = 1, 2, 3, 4). By Lemma 3.5 and Propositions 4.1 and 4.2, we obtain the upper and lower approximations of D1, D2, D3, D4.
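Proposition 4.1(1) and Proposition 4.2(2) can be checked mechanically from Table 2. In the sketch below (our encoding, not the paper's), each student ui is the number i and each row of Table 2 is the tuple of group indices (the j in cji, plus the index of d):

```python
from collections import defaultdict

# rows of Table 2 as (c1, c2, c3, c4, d) group indices per student
TABLE2 = {
     1: (3, 4, 3, 3, 2),  2: (2, 3, 1, 1, 1),  3: (2, 4, 2, 3, 1),
     4: (1, 3, 3, 2, 2),  5: (2, 3, 4, 2, 1),  6: (3, 4, 3, 4, 3),
     7: (1, 4, 3, 1, 1),  8: (1, 3, 1, 1, 2),  9: (2, 3, 1, 1, 2),
    10: (1, 3, 1, 1, 1), 11: (3, 4, 3, 1, 2), 12: (2, 4, 3, 3, 2),
    13: (2, 4, 1, 2, 1), 14: (3, 1, 1, 2, 2), 15: (2, 4, 3, 3, 3),
    16: (3, 2, 1, 3, 3), 17: (2, 2, 1, 1, 2), 18: (1, 2, 3, 1, 1),
    19: (3, 3, 2, 4, 4), 20: (4, 4, 2, 3, 4), 21: (4, 4, 1, 4, 4),
    22: (2, 3, 1, 2, 1), 23: (2, 1, 1, 1, 2), 24: (3, 4, 3, 3, 3),
    25: (2, 3, 4, 2, 1), 26: (2, 3, 1, 2, 1), 27: (2, 3, 1, 2, 1),
    28: (1, 3, 1, 1, 1), 29: (2, 3, 4, 2, 1), 30: (2, 3, 1, 2, 1),
}

cond_classes = defaultdict(set)
for u, row in TABLE2.items():
    cond_classes[row[:4]].add(u)  # group students by condition attribute values

# the non-singleton condition classes of Proposition 4.2(2):
multi = sorted(sorted(c) for c in cond_classes.values() if len(c) > 1)
print(multi)
# → [[1, 24], [2, 9], [5, 25, 29], [8, 10, 28], [12, 15], [22, 26, 27, 30]]
```

There are 20 condition classes in total, in agreement with the partition U/C of Proposition 4.1(1).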

Proposition 4.3

The following upper and lower approximations of D1, D2, D3, D4 hold for S = (U, C ∪ D, f).

(1) apr_(D1) = {u3, u5, u7, u13, u18, u22, u25, u26, u27, u29, u30}.

apr¯(D1) = {u2, u3, u5, u7, u8, u9, u10, u13, u18, u22, u25, u26, u27, u28, u29, u30}.

(2) apr_(D2) = {u4, u11, u14, u17, u23}.

apr¯(D2) = {u1, u2, u4, u8, u9, u10, u11, u12, u14, u15, u17, u23, u24, u28}.

(3) apr_(D3) = {u6, u16}.

apr¯(D3) = {u1, u6, u12, u15, u16, u24}.

(4) apr_(D4) = {u19, u20, u21}.

apr¯(D4) = {u19, u20, u21}.

Now, by Definition 3.6 and Proposition 4.3, we discuss the three-way decision rules for S = (U, C ∪ D, f). We have the following positive, negative, and boundary regions of D1, D2, D3, D4.

Theorem 4.4

The following positive, negative, and boundary regions of D1, D2, D3, D4 hold for S = (U, C ∪ D, f).

(1) POS(D1) = {u3, u5, u7, u13, u18, u22, u25, u26, u27, u29, u30}.

NEG(D1) = {u1, u4, u6, u11, u12, u14, u15, u16, u17, u19, u20, u21, u23, u24}.

BND(D1) = {u2, u8, u9, u10, u28}.

(2) POS(D2) = {u4, u11, u14, u17, u23}.

NEG(D2) = {u3, u5, u6, u7, u13, u16, u18, u19, u20, u21, u22, u25, u26, u27, u29, u30}.

BND(D2) = {u1, u2, u8, u9, u10, u12, u15, u24, u28}.

(3) POS(D3) = {u6, u16}.

NEG(D3) = {u2, u3, u4, u5, u7, u8, u9, u10, u11, u13, u14, u17, u18, u19, u20, u21, u22, u23, u25, u26, u27, u28, u29, u30}.

BND(D3) = {u1, u12, u15, u24}.

(4) POS(D4) = {u19, u20, u21}.

NEG(D4) = {u1, u2, u3, u4, u5, u6, u7, u8, u9, u10, u11, u12, u13, u14, u15, u16, u17, u18, u22, u23, u24, u25, u26, u27, u28, u29, u30}.

BND(D4) = ∅.

By Theorems 3.10 and 4.4, we can easily obtain the three-way decision rules for S = (U, C ∪ D, f). Table 3 lists Des(C(u)) →Λ Des(Di) for all u ∈ U and i ∈ {1, 2, 3, 4}, where Λ ∈ {P, N, B}. Here, the decision rule Des(C(u)) →Λ Des(Di) is simply denoted by C(u) →Λ Di for u ∈ U, Λ ∈ {P, N, B} and i ∈ {1, 2, 3, 4} (this notation is also used in Table 4).

The three-way decision rules for S = (U, C ∪ D, f) can also be characterized by their confidence. In other words, we can use conf(Des(C(u)) →Λ Des(Di)) to characterize Des(C(u)) →Λ Des(Di), where u ∈ U, Λ ∈ {P, N, B} and i ∈ {1, 2, 3, 4}. For example, for u1, C(u1) = {u1, u24}, C(u1) ∩ D1 = ∅, C(u1) ∩ D2 = {u1}, C(u1) ∩ D3 = {u24} and C(u1) ∩ D4 = ∅. By Definition 3.11,

conf(Des(C(u1)) →Λ Des(D1)) = |C(u1) ∩ D1| / |C(u1)| = 0,
conf(Des(C(u1)) →Λ Des(D2)) = |C(u1) ∩ D2| / |C(u1)| = 1/2,
conf(Des(C(u1)) →Λ Des(D3)) = |C(u1) ∩ D3| / |C(u1)| = 1/2,
conf(Des(C(u1)) →Λ Des(D4)) = |C(u1) ∩ D4| / |C(u1)| = 0.

Thus, we obtain the confidence of the three-way decision rules on S = (U, C ∪ D, f) (see Table 4).
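The worked example for u1 can be checked directly. The sketch below (our code, not the paper's) uses C(u1) = {u1, u24} and the decision classes D1-D4 listed at the start of this section, with each student ui encoded as the number i:

```python
from fractions import Fraction

D = {  # the partition U/d = {D1, D2, D3, D4} of Section 4
    1: {2, 3, 5, 7, 10, 13, 18, 22, 25, 26, 27, 28, 29, 30},
    2: {1, 4, 8, 9, 11, 12, 14, 17, 23},
    3: {6, 15, 16, 24},
    4: {19, 20, 21},
}
C_u1 = {1, 24}  # the condition attribute granule C(u1)

# the four confidences conf(Des(C(u1)) →Λ Des(Di)), i = 1..4:
row = [Fraction(len(C_u1 & D[i]), len(C_u1)) for i in (1, 2, 3, 4)]
print(row)  # → [Fraction(0, 1), Fraction(1, 2), Fraction(1, 2), Fraction(0, 1)]
```

The computed row 0, 1/2, 1/2, 0 is exactly the entry for u1 in Table 4.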

Tables 3 and 4 give the three-way decision rules on S = (U, C ∪ D, f) and their confidence, respectively, which answers Question 1.3 (see Conclusion 5.2 for a more detailed explanation).

5. Conclusion

In this section, we focus on Question 1.2. First, we provide a remark on the three-way decision rules on S = (U, C ∪ D, f).

Remark 5.1

The following are some semantic interpretations of the three-way decision rules on S = (U, C ∪ D, f) in the sense of [15], where i ∈ {1, 2, 3, 4}. By Proposition 3.12,

(1) For u ∈ U, if Des(C(u)) →P Des(Di) (equivalently, conf(Des(C(u)) →P Des(Di)) = 1), then the decision attribute value derived from the condition attribute values is di.

(2) For u ∈ U, if Des(C(u)) →N Des(Di) (equivalently, conf(Des(C(u)) →Λ Des(Di)) = 0), then the decision attribute value derived from the condition attribute values is not di.

(3) For u ∈ U, if Des(C(u)) →B Des(Di) (equivalently, 0 < conf(Des(C(u)) →Λ Des(Di)) < 1), then it is uncertain whether the decision attribute value derived from the condition attribute values is di.

From Table 3 (or Table 4) and Remark 5.1, we obtain the following more detailed explanations of the three-way decision rules on S = (U, C ∪ D, f).

Conclusion 5.2

The following are true for S = (U, C ∪ D, f).

(1) For u = u1, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →N Des(Di), where i = 1, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →B Des(Dj), where j = 2, 3.

(2) For u = u2, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →N Des(Di), where i = 3, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →B Des(Dj), where j = 1, 2.

(3) For u = u3, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →P Des(D1).

(4) For u = u4, the decision attribute value derived from the condition attribute values is d2 because Des(C(u)) →P Des(D2).

(5) For u = u5, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →P Des(D1).

(6) For u = u6, the decision attribute value derived from the condition attribute values is d3 because Des(C(u)) →P Des(D3).

(7) For u = u7, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →P Des(D1).

(8) For u = u8, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →N Des(Di), where i = 3, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →B Des(Dj), where j = 1, 2.

(9) For u = u9, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →N Des(Di), where i = 3, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →B Des(Dj), where j = 1, 2.

(10) For u = u10, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →N Des(Di), where i = 3, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →B Des(Dj), where j = 1, 2.

(11) For u = u11, the decision attribute value derived from the condition attribute values is d2 because Des(C(u)) →P Des(D2).

(12) For u = u12, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →N Des(Di), where i = 1, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →B Des(Dj), where j = 2, 3.

(13) For u = u13, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →P Des(D1).

(14) For u = u14, the decision attribute value derived from the condition attribute values is d2 because Des(C(u)) →P Des(D2).

(15) For u = u15, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →N Des(Di), where i = 1, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →B Des(Dj), where j = 2, 3.

(16) For u = u16, the decision attribute value derived from the condition attribute values is d3 because Des(C(u)) →P Des(D3).

(17) For u = u17, the decision attribute value derived from the condition attribute values is d2 because Des(C(u)) →P Des(D2).

(18) For u = u18, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →P Des(D1).

(19) For u = u19, the decision attribute value derived from the condition attribute values is d4 because Des(C(u)) →P Des(D4).

(20) For u = u20, the decision attribute value derived from the condition attribute values is d4 because Des(C(u)) →P Des(D4).

(21) For u = u21, the decision attribute value derived from the condition attribute values is d4 because Des(C(u)) →P Des(D4).

(22) For u = u22, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →P Des(D1).

(23) For u = u23, the decision attribute value derived from the condition attribute values is d2 because Des(C(u)) →P Des(D2).

(24) For u = u24, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →N Des(Di), where i = 1, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →B Des(Dj), where j = 2, 3.

(25) For u = u25, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →P Des(D1).

(26) For u = u26, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →P Des(D1).

(27) For u = u27, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →P Des(D1).

(28) For u = u28, the decision attribute value derived from the condition attribute values is not di because Des(C(u)) →N Des(Di), where i = 3, 4. However, it is uncertain whether the decision attribute value derived from the condition attribute values is dj because Des(C(u)) →B Des(Dj), where j = 1, 2.

(29) For u = u29, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →P Des(D1).

(30) For u = u30, the decision attribute value derived from the condition attribute values is d1 because Des(C(u)) →P Des(D1).

In S = (U, C ∪ D, f), Conclusion 5.2 gives a theoretical answer to Question 1.3.

Conclusion 5.3

For the online teaching data of calculus (2), the final scores of most students are credible.

For the students 2019488007, 2019488008, 2019488009, 2019488011, 2019488013, 2019488022, 2019488027, 2019488028, 2019488032, 2019488034, 2019488036, 2019488038, 2019488039, 2019488040, 2019488041, 2019488045, 2019488049, 2019488050, 2019488054, 2019488057, and 2019488060, the scores are credible. For the students 2019488002, 2019488005, 2019488014, 2019488015, 2019488020, 2019488025, 2019488030, 2019488046, and 2019488055, the scores are uncertain.

Thus, the scores of the online exam are mostly credible. This provides a theoretical basis for online teaching and examinations.

Although this paper focuses on the theoretical analysis of the online learning data of a calculus (2) course, the approach also has practical significance for other online courses. Further empirical analysis is required, however, to determine whether this theory is suitable for investigating other problems.

Table 1. Online learning data of calculus (2) course.

| Order | Student number | Score of early semester | Average score of class quizzes | Completion ratio of watching videos (%) | Completion grade of homework | Score of final exam |
|---|---|---|---|---|---|---|
| 1 | 2019488002 | 56 | 35 | 53 | C | 67 |
| 2 | 2019488005 | 79 | 45 | 85 | A | 87 |
| 3 | 2019488007 | 81 | 25 | 70 | C | 83 |
| 4 | 2019488008 | 84 | 50 | 45 | B | 74 |
| 5 | 2019488009 | 77 | 45 | 21 | B | 81 |
| 6 | 2019488011 | 68 | 35 | 52 | D | 56 |
| 7 | 2019488013 | 86 | 35 | 45 | A | 90 |
| 8 | 2019488014 | 84 | 50 | 95 | A | 74 |
| 9 | 2019488015 | 81 | 50 | 89 | A | 78 |
| 10 | 2019488020 | 91 | 45 | 88 | A | 84 |
| 11 | 2019488022 | 67 | 35 | 43 | A | 73 |
| 12 | 2019488025 | 69 | 35 | 50 | C | 68 |
| 13 | 2019488027 | 81 | 35 | 90 | B | 84 |
| 14 | 2019488028 | 63 | 80 | 96 | B | 73 |
| 15 | 2019488030 | 69 | 35 | 57 | C | 66 |
| 16 | 2019488032 | 65 | 55 | 94 | C | 62 |
| 17 | 2019488034 | 69 | 55 | 85 | A | 74 |
| 18 | 2019488036 | 84 | 55 | 42 | A | 85 |
| 19 | 2019488038 | 56 | 45 | 72 | D | 51 |
| 20 | 2019488039 | 42 | 25 | 71 | C | 49 |
| 21 | 2019488040 | 53 | 35 | 98 | D | 41 |
| 22 | 2019488041 | 74 | 50 | 81 | B | 91 |
| 23 | 2019488045 | 81 | 70 | 99 | A | 71 |
| 24 | 2019488046 | 65 | 35 | 51 | C | 57 |
| 25 | 2019488049 | 69 | 50 | 39 | B | 86 |
| 26 | 2019488050 | 74 | 45 | 83 | B | 79 |
| 27 | 2019488054 | 73 | 40 | 97 | B | 81 |
| 28 | 2019488055 | 95 | 50 | 94 | A | 91 |
| 29 | 2019488057 | 80 | 45 | 37 | B | 79 |
| 30 | 2019488060 | 77 | 50 | 90 | B | 90 |

Table 2. Decision table.

| U | c1 | c2 | c3 | c4 | d |
|---|---|---|---|---|---|
| u1 | c31 | c42 | c33 | c34 | d2 |
| u2 | c21 | c32 | c13 | c14 | d1 |
| u3 | c21 | c42 | c23 | c34 | d1 |
| u4 | c11 | c32 | c33 | c24 | d2 |
| u5 | c21 | c32 | c43 | c24 | d1 |
| u6 | c31 | c42 | c33 | c44 | d3 |
| u7 | c11 | c42 | c33 | c14 | d1 |
| u8 | c11 | c32 | c13 | c14 | d2 |
| u9 | c21 | c32 | c13 | c14 | d2 |
| u10 | c11 | c32 | c13 | c14 | d1 |
| u11 | c31 | c42 | c33 | c14 | d2 |
| u12 | c21 | c42 | c33 | c34 | d2 |
| u13 | c21 | c42 | c13 | c24 | d1 |
| u14 | c31 | c12 | c13 | c24 | d2 |
| u15 | c21 | c42 | c33 | c34 | d3 |
| u16 | c31 | c22 | c13 | c34 | d3 |
| u17 | c21 | c22 | c13 | c14 | d2 |
| u18 | c11 | c22 | c33 | c14 | d1 |
| u19 | c31 | c32 | c23 | c44 | d4 |
| u20 | c41 | c42 | c23 | c34 | d4 |
| u21 | c41 | c42 | c13 | c44 | d4 |
| u22 | c21 | c32 | c13 | c24 | d1 |
| u23 | c21 | c12 | c13 | c14 | d2 |
| u24 | c31 | c42 | c33 | c34 | d3 |
| u25 | c21 | c32 | c43 | c24 | d1 |
| u26 | c21 | c32 | c13 | c24 | d1 |
| u27 | c21 | c32 | c13 | c24 | d1 |
| u28 | c11 | c32 | c13 | c14 | d1 |
| u29 | c21 | c32 | c43 | c24 | d1 |
| u30 | c21 | c32 | c13 | c24 | d1 |

Table 3. Three-way decision rules on S = (U, C ∪ D, f).

| u | C(u) →Λ D1 | C(u) →Λ D2 | C(u) →Λ D3 | C(u) →Λ D4 |
|---|---|---|---|---|
| u1 | Λ = N | Λ = B | Λ = B | Λ = N |
| u2 | Λ = B | Λ = B | Λ = N | Λ = N |
| u3 | Λ = P | Λ = N | Λ = N | Λ = N |
| u4 | Λ = N | Λ = P | Λ = N | Λ = N |
| u5 | Λ = P | Λ = N | Λ = N | Λ = N |
| u6 | Λ = N | Λ = N | Λ = P | Λ = N |
| u7 | Λ = P | Λ = N | Λ = N | Λ = N |
| u8 | Λ = B | Λ = B | Λ = N | Λ = N |
| u9 | Λ = B | Λ = B | Λ = N | Λ = N |
| u10 | Λ = B | Λ = B | Λ = N | Λ = N |
| u11 | Λ = N | Λ = P | Λ = N | Λ = N |
| u12 | Λ = N | Λ = B | Λ = B | Λ = N |
| u13 | Λ = P | Λ = N | Λ = N | Λ = N |
| u14 | Λ = N | Λ = P | Λ = N | Λ = N |
| u15 | Λ = N | Λ = B | Λ = B | Λ = N |
| u16 | Λ = N | Λ = N | Λ = P | Λ = N |
| u17 | Λ = N | Λ = P | Λ = N | Λ = N |
| u18 | Λ = P | Λ = N | Λ = N | Λ = N |
| u19 | Λ = N | Λ = N | Λ = N | Λ = P |
| u20 | Λ = N | Λ = N | Λ = N | Λ = P |
| u21 | Λ = N | Λ = N | Λ = N | Λ = P |
| u22 | Λ = P | Λ = N | Λ = N | Λ = N |
| u23 | Λ = N | Λ = P | Λ = N | Λ = N |
| u24 | Λ = N | Λ = B | Λ = B | Λ = N |
| u25 | Λ = P | Λ = N | Λ = N | Λ = N |
| u26 | Λ = P | Λ = N | Λ = N | Λ = N |
| u27 | Λ = P | Λ = N | Λ = N | Λ = N |
| u28 | Λ = B | Λ = B | Λ = N | Λ = N |
| u29 | Λ = P | Λ = N | Λ = N | Λ = N |
| u30 | Λ = P | Λ = N | Λ = N | Λ = N |

Table 4. Confidence of the three-way decision rules on S = (U, C ∪ D, f).

| u | C(u) →Λ D1 | C(u) →Λ D2 | C(u) →Λ D3 | C(u) →Λ D4 |
|---|---|---|---|---|
| u1 | conf = 0 | conf = 1/2 | conf = 1/2 | conf = 0 |
| u2 | conf = 1/2 | conf = 1/2 | conf = 0 | conf = 0 |
| u3 | conf = 1 | conf = 0 | conf = 0 | conf = 0 |
| u4 | conf = 0 | conf = 1 | conf = 0 | conf = 0 |
| u5 | conf = 1 | conf = 0 | conf = 0 | conf = 0 |
| u6 | conf = 0 | conf = 0 | conf = 1 | conf = 0 |
| u7 | conf = 1 | conf = 0 | conf = 0 | conf = 0 |
| u8 | conf = 2/3 | conf = 1/3 | conf = 0 | conf = 0 |
| u9 | conf = 1/2 | conf = 1/2 | conf = 0 | conf = 0 |
| u10 | conf = 2/3 | conf = 1/3 | conf = 0 | conf = 0 |
| u11 | conf = 0 | conf = 1 | conf = 0 | conf = 0 |
| u12 | conf = 0 | conf = 1/2 | conf = 1/2 | conf = 0 |
| u13 | conf = 1 | conf = 0 | conf = 0 | conf = 0 |
| u14 | conf = 0 | conf = 1 | conf = 0 | conf = 0 |
| u15 | conf = 0 | conf = 1/2 | conf = 1/2 | conf = 0 |
| u16 | conf = 0 | conf = 0 | conf = 1 | conf = 0 |
| u17 | conf = 0 | conf = 1 | conf = 0 | conf = 0 |
| u18 | conf = 1 | conf = 0 | conf = 0 | conf = 0 |
| u19 | conf = 0 | conf = 0 | conf = 0 | conf = 1 |
| u20 | conf = 0 | conf = 0 | conf = 0 | conf = 1 |
| u21 | conf = 0 | conf = 0 | conf = 0 | conf = 1 |
| u22 | conf = 1 | conf = 0 | conf = 0 | conf = 0 |
| u23 | conf = 0 | conf = 1 | conf = 0 | conf = 0 |
| u24 | conf = 0 | conf = 1/2 | conf = 1/2 | conf = 0 |
| u25 | conf = 1 | conf = 0 | conf = 0 | conf = 0 |
| u26 | conf = 1 | conf = 0 | conf = 0 | conf = 0 |
| u27 | conf = 1 | conf = 0 | conf = 0 | conf = 0 |
| u28 | conf = 2/3 | conf = 1/3 | conf = 0 | conf = 0 |
| u29 | conf = 1 | conf = 0 | conf = 0 | conf = 0 |
| u30 | conf = 1 | conf = 0 | conf = 0 | conf = 0 |
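The confidence values above follow from the standard measure conf(C(u) → Di) = |C(u) ∩ Di| / |C(u)|. A minimal Python sketch (an illustration, not the authors' code) for the equivalence class of u8 from Table 2:

```python
from fractions import Fraction

def confidence(C, D):
    """Confidence of the rule Des(C) -> Des(D): |C ∩ D| / |C|."""
    return Fraction(len(C & D), len(C))

# C(u8) = {u8, u10, u28}: these three objects share all condition values.
C_u8 = {"u8", "u10", "u28"}
# Only the decision-class members inside C(u8) matter for the intersection:
D1 = {"u10", "u28"}  # objects of C(u8) with decision value d1
D2 = {"u8"}          # objects of C(u8) with decision value d2

print(confidence(C_u8, D1))  # 2/3, matching the u8 row of Table 4
print(confidence(C_u8, D2))  # 1/3
```

A positive rule (C(u) ⊆ Di) always yields conf = 1 and a negative rule yields conf = 0, so every P cell of Table 3 corresponds to conf = 1 here, and every N cell to conf = 0.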

