
## Original Article


International Journal of Fuzzy Logic and Intelligent Systems 2022; 22(1): 78-88

Published online March 25, 2022

https://doi.org/10.5391/IJFIS.2022.22.1.78

© The Korean Institute of Intelligent Systems

## Neutrosophic Conditional Probabilities: Theories and Applications

Ahmad M. H. Al-khazaleh1 and Shawkat Alkhazaleh2

1Department of Mathematics, Faculty of Science, Al-Albayt University, Al-Mafraq, Jordan
2Department of Mathematics, Faculty of Science and Information Technology, Jadara University, Irbid, Jordan

Correspondence to :
Shawkat Alkhazaleh (shmk79@gmail.com)

Received: August 19, 2021; Revised: September 11, 2021; Accepted: September 23, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

Data can be uncertain, and levels of precision naturally differ across data. In such cases, neutrosophic set expressions offer an alternative way to represent imprecise data. In this paper, a general definition of neutrosophic conditional probability is introduced as a generalization of classical conditional probability, and the properties of this neutrosophic conditional probability are presented. The concepts of joint distribution function, regular conditional probabilities, marginal density function, expected value, and joint density function of the classical type are generalized to the neutrosophic type for two neutrosophic random variables, in both the discrete and continuous cases. Various properties and examples are presented to demonstrate the significance of this study.

Keywords: Conditional probability, Neutrosophic conditional probability, Neutrosophic distribution function, Marginal neutrosophic density function, Neutrosophic expected value, Joint neutrosophic density function

### 1. Introduction

Crispness is a basic requirement in classical mathematics, whereas real problems involve uncertain data. Solving such problems therefore calls for mathematical principles that accommodate uncertainty rather than crispness. Consequently, many scientists and engineers have been interested in uncertainty modeling to describe and extract the useful information hidden in uncertain data. To help them deal with this uncertainty, numerous theories such as fuzzy set theory [1], intuitionistic fuzzy set theory [2], rough set theory [3], and neutrosophic set theory [4,5] have been proposed in recent years.

Smarandache proposed the theory of neutrosophic logic as a general framework for the unification of many existing logics, such as intuitionistic fuzzy logic. This theory aims to be a new mathematical tool for handling problems involving imprecise, indeterminate, and inconsistent data. The main idea of neutrosophic logic is to characterize each logical statement in a three-dimensional neutrosophic space, wherein the dimensions represent the truth T, falsehood F, and indeterminacy I of the statement under consideration.

The neutrosophic probability given by Smarandache [6] is a generalization of classical and imprecise probability, in which the chance that an event A occurs is t% true, where t varies in the subset T; i% indeterminate, where i varies in the subset I; and f% false, where f varies in the subset F. In addition, whereas $n_{\sup} \le 1$ in classical probability, $n_{\sup} \le 3^{+}$ in neutrosophic probability. Moreover, in imprecise probability, the probability of an event is a subset T of [0, 1], not a fixed number p in [0, 1], with the opposite subset F also taken from the unit interval [0, 1]; there is no indeterminate subset I in imprecise probability.

Although neutrosophic probability theory is an important tool with real-life applications, it has not received significant attention, though it has been the focus of some studies. For more information about neutrosophic probability, see [6–8].

In 2013, Smarandache [9], for the first time, introduced the notions of neutrosophic measure and neutrosophic integral. The neutrosophic measure is a generalization of the classical measure for the case where the space contains some indeterminacy, and the neutrosophic integral is defined with respect to the neutrosophic measure. Hanafy et al. [10–13] studied the correlation coefficient under uncertainty. Thereafter, Salama et al. [14], in 2014, introduced and studied the concepts of correlation and correlation coefficient of neutrosophic data in probability spaces, together with some of their properties; they also introduced and studied the neutrosophic simple linear regression model and discussed its possible application to data processing. Applying neutrosophic probability in physics, Yuhua [15], in 2015, determined the neutrosophic probability of the accelerating expansion of the partial universe. Patro and Smarandache [16], in 2016, presented problems and solutions related to the neutrosophic statistical distribution, and Smarandache et al. [17], in 2017, used proportional conflict redistribution rule number 5 (PCR5) to combine information from two sources providing subjective probabilities of an event A in terms of a chance that A occurs, an indeterminate chance that A occurs, and a chance that A does not occur. Likewise, in 2017, Guo et al. [18] proposed an evidence fusion method based on neutrosophic probability analysis in the DSmT framework, introducing some basic theories, including DST, DSmT, and the dissimilarity measure of evidence. Also in 2017, Gafar and El-Henawy [19] presented a framework combining ant colony optimization and entropy theory and used it to define a neutrosophic variable from concrete data; their paper incorporated a hybrid search model combining ant colony optimization and information-theoretic measures to model a neutrosophic variable.
Taking a new step towards the study of neutrosophic probabilities, in 2018, Alhabib et al. [20] introduced and studied some neutrosophic probability distributions by generalizing classical probability distributions such as the Poisson, exponential, and uniform distributions to the neutrosophic type. Subsequently, in 2019, Alhasan and Smarandache studied the neutrosophic Weibull distribution and the Weibull family, along with the functions related to the neutrosophic Weibull, such as the inverse Weibull, Rayleigh, three-parameter Weibull, beta Weibull, five-parameter Weibull, and six-parameter Weibull distributions under the neutrosophic case. A general definition of neutrosophic random variables was introduced by Zeina and Hatip [21] in 2021; they studied the properties of this concept and generalized the probability distribution function, cumulative distribution function, expected value, variance, standard deviation, mean deviation, rth quartiles, moment generating function, and characteristic function from crisp logic to neutrosophic logic. In this paper, as a generalization of classical conditional probability, we introduce a general definition of neutrosophic conditional probability and its properties. In addition, we generalize, from the classical type to the neutrosophic type, the concepts of joint distribution function, regular conditional probabilities, marginal density function, expected value, and joint density function, using two neutrosophic random variables in both the discrete and continuous cases. The significance of this study is demonstrated through numerous properties and examples.

### 2. Preliminary

In this section, we recall the definitions related to this work: the neutrosophic set, neutrosophic probability, and neutrosophic random variables.

### 2.1 General Definitions

Definition 2.1 [4,5]

Let X be a non-empty fixed set. A neutrosophic set A is an object having the form

$A = \{\langle x, (\mu_A(x), \delta_A(x), \gamma_A(x))\rangle : x \in X\},$

where $\mu_A(x)$, $\delta_A(x)$, and $\gamma_A(x)$ represent the degree of membership, degree of indeterminacy, and degree of non-membership, respectively, of each element $x \in X$ to the set A.

Definition 2.2 [7]

A classical neutrosophic number has the form $a + bI$, where a and b are real or complex numbers and I is the indeterminacy, such that $0 \cdot I = 0$ and $I^2 = I$, which results in $I^n = I$ for all positive integers n.

Definition 2.3 [9]

The neutrosophic probability of an event A occurring is

$NP(A) = (\mathrm{ch}(A), \mathrm{ch}(\mathrm{neut}A), \mathrm{ch}(\mathrm{anti}A)) = (T, I, F),$

where T, I, and F are standard or nonstandard subsets of the nonstandard unit interval.
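To make the triple concrete, the following minimal Python sketch represents $NP(A) = (T, I, F)$ with single numbers instead of subsets; the class name, field names, and sample values are illustrative assumptions, not from the paper.

```python
from dataclasses import dataclass

# Hypothetical container for a neutrosophic probability triple NP(A) = (T, I, F),
# narrowed to single numbers rather than subsets for illustration.
@dataclass
class NeutrosophicProb:
    truth: float          # ch(A)
    indeterminacy: float  # ch(neut A)
    falsity: float        # ch(anti A)

    def nsum(self) -> float:
        # Unlike classical probability, the components may sum to as much as 3.
        return self.truth + self.indeterminacy + self.falsity

np_A = NeutrosophicProb(truth=0.6, indeterminacy=0.2, falsity=0.3)
print(np_A.nsum())  # exceeds 1, which is allowed in neutrosophic probability
```

Here the component sum is 1.1, which classical probability would forbid but neutrosophic probability permits.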

### 2.2 Neutrosophic Random Variable

A neutrosophic random (stochastic) variable is subject to change due to both randomness and indeterminacy, while the classical random (stochastic) variable is subject to change only due to randomness. The values of this variable represent the possible outcomes and possible indeterminacies. Randomness and indeterminacy can be either objective or subjective.

Definition 2.4 [9]

A neutrosophic random variable is a variable that may have an indeterminate outcome.

Definition 2.5 [9]

A neutrosophic random (stochastic) process represents the evolution of some neutrosophic random values over time. This is a collection of random neutrosophic variables.

Definition 2.6 [21]

Consider the crisp random variable X, having a real value, defined as $X : \Omega \to \mathbb{R}$, where Ω is the event space. We define the neutrosophic random variable $X_N$ as $X_N : \Omega \to R(I)$ with $X_N = X + I$, where I is the indeterminacy.

Definition 2.7 [21]

Consider the neutrosophic random variable $X_N = X + I$, where the CDF of X is $F_X(x) = P(X \le x)$. Then, the CDF and PDF of the neutrosophic random variable are defined as follows:

• $F_{X_N}(x) = F_X(x - I)$,

• $f_{X_N}(x) = f_X(x - I)$.
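The argument shift in Definition 2.7 can be sketched numerically. The exponential distribution, the rate, and the value of I below are illustrative assumptions, with the indeterminacy I treated as a fixed real number.

```python
import math

lam = 2.0   # illustrative rate of a crisp exponential distribution
I = 0.5     # illustrative fixed indeterminacy

def F_X(x: float) -> float:
    # crisp exponential CDF
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

def f_X(x: float) -> float:
    # crisp exponential PDF
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def F_XN(x: float) -> float:
    # neutrosophic CDF per Definition 2.7: shift the argument by I
    return F_X(x - I)

def f_XN(x: float) -> float:
    # neutrosophic PDF per Definition 2.7
    return f_X(x - I)

print(F_XN(1.5) == F_X(1.0), f_XN(0.5) == f_X(0.0))
```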


Definition 2.9 [21]

Consider the neutrosophic random variable $X_N = X + I$. Its expected value is defined as $E(X_N) = E(X) + I$.

Proposition 2.1 [21]

The expected value of a neutrosophic random variable has the following properties:

• $E(aX_N + b + cI) = aE(X_N) + b + cI$, for all $a, b, c \in \mathbb{R}$,

• if $X_N$ and $Y_N$ are two neutrosophic random variables, then $E(X_N \pm Y_N) = E(X_N) \pm E(Y_N)$,

• $E[(a + bI)X_N] = E(aX_N + bIX_N) = aE(X_N) + bIE(X_N)$, for all $a, b \in \mathbb{R}$, and

• $|E(X_N)| \le E|X_N|$.

Definition 2.10 [21]

Consider the neutrosophic random variable $X_N = X + I$. Then, its variance equals the variance of X, i.e., $V(X_N) = V(X)$.
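Definitions 2.9 and 2.10 can be checked with a quick Monte Carlo experiment, again treating I as a fixed real offset; the normal distribution and its parameters are illustrative choices.

```python
import random
import statistics

random.seed(0)
I = 0.7                                               # illustrative indeterminacy
xs = [random.gauss(1.0, 2.0) for _ in range(20_000)]  # crisp X samples
xns = [x + I for x in xs]                             # X_N = X + I

# E(X_N) - E(X) should equal I, and V(X_N)/V(X) should equal 1
mean_shift = statistics.fmean(xns) - statistics.fmean(xs)
var_ratio = statistics.pvariance(xns) / statistics.pvariance(xs)
print(round(mean_shift, 6), round(var_ratio, 6))
```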

### 3. Neutrosophic Conditional Probabilities

Smarandache [9] discussed neutrosophic conditional probability by comparing it with classical probability. In classical probability, if A and B are independent events, then $P(A \mid B) = P(A)$. The same holds for neutrosophic independent events, i.e., $NP(A \mid B) = NP(A)$. Additionally, in classical probability, the Bayesian rule is

$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)},$

whereas the neutrosophic Bayesian rule is

$NP(A \mid B) = \big(\mathrm{ch}(A \mid B),\ \mathrm{ch}(\mathrm{indeterm}\,A \mid B),\ \mathrm{ch}(\bar{A} \mid B)\big).$

In this section, we introduce the concepts of the neutrosophic distribution function, neutrosophic regular conditional probabilities, and the neutrosophic marginal density function. The properties of these concepts are proved, and some examples are given.

### Definition 3.1

Let (Ω, ℑ, P) be a probability space, and let (X, Y) be a two-dimensional real random variable whose distribution function is given by $F_{XY}(x, y) = P(X \le x, Y \le y)$. Then, we define the neutrosophic distribution function as

$F_{X_NY_N}(x,y) = P(X_N \le x, Y_N \le y) = P(X + I \le x, Y + I \le y) = P(X \le x - I, Y \le y - I).$

### Definition 3.2

Since $\mathbb{R}^2$ is a complete separable metric space, regular conditional probabilities exist. Given the sub-σ-algebra σ(X) or σ(Y) and the associated conditional distribution functions denoted by

$F_{X/Y}(x \mid \{\omega : Y = y\}),\quad F_{Y/X}(y \mid \{\omega : X = x\}),$

the neutrosophic regular conditional probabilities are defined as

$F_{[X/Y]_N}(x - I \mid \{\omega : Y_N = y - I\}),\quad F_{[Y/X]_N}(y - I \mid \{\omega : X_N = x - I\}).$

### Remark 3.1

Let $f(x, y)\,\Delta x\,\Delta y + o(x, y, \Delta x, \Delta y) = P(x < X \le x + \Delta x,\ y < Y \le y + \Delta y)$. This equation can be rewritten as follows:

$f(x-I, y-I)\,\Delta(x-I)\,\Delta(y-I) + o(x-I, y-I, \Delta(x-I), \Delta(y-I)) = P(x-I < X \le x-I+\Delta(x-I),\ y-I < Y \le y-I+\Delta(y-I)),$

where $o(x-I, y-I, \Delta(x-I), \Delta(y-I)) \to 0$ as $(\Delta(x-I), \Delta(y-I)) \to (0, 0)$.

For the neutrosophic marginal density function of XN,

$f_{X_N}(x-I)\,\Delta(x-I) + \bar{o}(x-I, \Delta(x-I)) = P(x-I < X \le x-I + \Delta(x-I)),$

where $f_{X_N}(x-I) = \int_{\mathbb{R}} f(x-I, y-I)\,dy$.

Now,

$P(y-I < Y \le y-I+\Delta(y-I) \mid x-I < X \le x-I+\Delta(x-I)) = \dfrac{P(x-I < X \le x-I+\Delta(x-I),\ y-I < Y \le y-I+\Delta(y-I))}{P(x-I < X \le x-I+\Delta(x-I))}.$

Letting Δ(xI) tend to 0, we have

$\lim_{\Delta(x-I)\to 0} P(y-I < Y \le y-I+\Delta(y-I) \mid x-I < X \le x-I+\Delta(x-I)) = \dfrac{f_{X_NY_N}(x-I, y-I)}{f_{X_N}(x-I)}\,\Delta(y-I) + o(\Delta(y-I)).$

This relation shows that the function

$f_{Y_N/X_N}(y-I \mid x-I) = \dfrac{f_{X_NY_N}(x-I, y-I)}{f_{X_N}(x-I)}$

is the neutrosophic conditional density of YN given XN. Similarly, the neutrosophic conditional density of XN given YN is calculated by

$f_{X_N/Y_N}(x-I \mid y-I) = \dfrac{f_{X_NY_N}(x-I, y-I)}{f_{Y_N}(y-I)}.$

Following are some theorems related to the expected value of a neutrosophic random variable.

### Theorem 3.1

(Linearity of the expected value of two neutrosophic random variables).

$E(X_N + Y_N) = E(X) + E(Y) + 2I.$
Proof
Case 1

Continuous

$\begin{aligned}
E(X_N+Y_N) &= E([X+I]+[Y+I])\\
&= \int\!\!\int \big([x+I]+[y+I]\big)\, f_{X_NY_N}(x-I,y-I)\,dx\,dy\\
&= \int\!\!\int [x+I]\, f_{X_NY_N}(x-I,y-I)\,dx\,dy + \int\!\!\int [y+I]\, f_{X_NY_N}(x-I,y-I)\,dx\,dy\\
&= \int\!\!\int x\, f_{X_NY_N}(x-I,y-I)\,dx\,dy + I\int\!\!\int f_{X_NY_N}(x-I,y-I)\,dx\,dy\\
&\quad + \int\!\!\int y\, f_{X_NY_N}(x-I,y-I)\,dx\,dy + I\int\!\!\int f_{X_NY_N}(x-I,y-I)\,dx\,dy\\
&= E(X)+I+E(Y)+I = E(X)+E(Y)+2I.
\end{aligned}$
Case 2

Discrete

$\begin{aligned}
E(X_N+Y_N) &= E([X+I]+[Y+I]) = \sum_{X_N}\sum_{Y_N} \big([x+I]+[y+I]\big)\, f_{X_NY_N}(x-I,y-I)\\
&= \sum_{X_N}\sum_{Y_N} [x+I]\, f_{X_NY_N}(x-I,y-I) + \sum_{X_N}\sum_{Y_N} [y+I]\, f_{X_NY_N}(x-I,y-I)\\
&= \sum_{X_N}\sum_{Y_N} x\, f_{X_NY_N}(x-I,y-I) + I\sum_{X_N}\sum_{Y_N} f_{X_NY_N}(x-I,y-I)\\
&\quad + \sum_{X_N}\sum_{Y_N} y\, f_{X_NY_N}(x-I,y-I) + I\sum_{X_N}\sum_{Y_N} f_{X_NY_N}(x-I,y-I)\\
&= E(X)+I+E(Y)+I = E(X)+E(Y)+2I.
\end{aligned}$
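The identity of Theorem 3.1 can be verified on a small discrete example; the joint pmf values and the indeterminacy I = 0.25 below are hypothetical choices, not from the paper.

```python
I = 0.25
# hypothetical joint pmf of (X, Y); the probabilities sum to 1
pmf = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

EX = sum(x * p for (x, y), p in pmf.items())
EY = sum(y * p for (x, y), p in pmf.items())
# E(X_N + Y_N): each outcome of X and Y is shifted by I before averaging
E_sum_N = sum(((x + I) + (y + I)) * p for (x, y), p in pmf.items())

print(E_sum_N, EX + EY + 2 * I)  # the two values agree
```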

### Theorem 3.2

(Expected value of the product of two neutrosophic random variables).

$E(X_NY_N) = E(XY) + IE(X) + IE(Y) + I^2.$

### Proof

Case 1

Continuous

$\begin{aligned}
E(X_NY_N) &= E([X+I][Y+I]) = \int\!\!\int [x+I][y+I]\, f_{X_NY_N}(x-I,y-I)\,dx\,dy\\
&= \int\!\!\int \big[xy + Ix + Iy + I^2\big]\, f_{X_NY_N}(x-I,y-I)\,dx\,dy\\
&= \int\!\!\int xy\, f_{X_NY_N}(x-I,y-I)\,dx\,dy + I\int\!\!\int x\, f_{X_NY_N}(x-I,y-I)\,dx\,dy\\
&\quad + I\int\!\!\int y\, f_{X_NY_N}(x-I,y-I)\,dx\,dy + I^2\int\!\!\int f_{X_NY_N}(x-I,y-I)\,dx\,dy\\
&= E(XY)+IE(X)+IE(Y)+I^2.
\end{aligned}$
Case 2

Discrete

$\begin{aligned}
E(X_NY_N) &= E([X+I][Y+I]) = \sum_{X_N}\sum_{Y_N} [x+I][y+I]\, f_{X_NY_N}(x-I,y-I)\\
&= \sum_{X_N}\sum_{Y_N} \big[xy + Ix + Iy + I^2\big]\, f_{X_NY_N}(x-I,y-I)\\
&= \sum_{X_N}\sum_{Y_N} xy\, f_{X_NY_N}(x-I,y-I) + I\sum_{X_N}\sum_{Y_N} x\, f_{X_NY_N}(x-I,y-I)\\
&\quad + I\sum_{X_N}\sum_{Y_N} y\, f_{X_NY_N}(x-I,y-I) + I^2\sum_{X_N}\sum_{Y_N} f_{X_NY_N}(x-I,y-I)\\
&= E(XY)+IE(X)+IE(Y)+I^2.
\end{aligned}$
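Theorem 3.2 admits the same style of numeric check on a hypothetical pmf (the values and I are illustrative):

```python
I = 0.25
pmf = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}  # hypothetical pmf

EX = sum(x * p for (x, y), p in pmf.items())
EY = sum(y * p for (x, y), p in pmf.items())
EXY = sum(x * y * p for (x, y), p in pmf.items())
# E(X_N Y_N) computed directly from the shifted outcomes
E_prod_N = sum((x + I) * (y + I) * p for (x, y), p in pmf.items())

print(E_prod_N, EXY + I * EX + I * EY + I ** 2)  # the two values agree
```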

In classical probability, $E(X) = E(E(X \mid Y))$. In the following theorem, we show that $E(E(X_N \mid Y_N)) = E[X_N]$.

### Theorem 3.3

$E(E(X_N \mid Y_N)) = E[X_N]$.

### Proof

Case 1

Continuous

$\begin{aligned}
E(E(X_N \mid Y_N)) &= \int_{Y_N} E[X_N \mid Y_N = y]\, f_{Y_N}(y)\,dy\\
&= \int_{Y_N}\!\!\int_{X_N} x\, f_{X_N/Y_N}(x \mid y)\, f_{Y_N}(y)\,dx\,dy\\
&= \int_{Y_N}\!\!\int_{X_N} x\, \frac{f_{X_NY_N}(x, y)}{f_{Y_N}(y)}\, f_{Y_N}(y)\,dx\,dy\\
&= \int_{X_N} x \int_{Y_N} f_{X_NY_N}(x, y)\,dy\,dx = \int_{X_N} x\, f_{X_N}(x)\,dx = E[X_N].
\end{aligned}$
Case 2

Discrete

$\begin{aligned}
E(E(X_N \mid Y_N)) &= \sum_{Y_N} E[X_N \mid Y_N = y]\, P\{Y_N = y\}\\
&= \sum_{Y_N}\sum_{X_N} x\, P[X_N = x \mid Y_N = y]\, P\{Y_N = y\}\\
&= \sum_{Y_N}\sum_{X_N} x\, \frac{P[X_N = x, Y_N = y]}{P\{Y_N = y\}}\, P\{Y_N = y\}\\
&= \sum_{X_N} x \sum_{Y_N} P[X_N = x, Y_N = y] = \sum_{X_N} x\, P[X_N = x] = E[X_N].
\end{aligned}$
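The tower property of Theorem 3.3 can be checked discretely on the same kind of hypothetical pmf, with $X_N = X + I$ for a fixed real I.

```python
I = 0.25
pmf = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}  # hypothetical pmf

# marginal of Y
pY = {}
for (x, y), p in pmf.items():
    pY[y] = pY.get(y, 0.0) + p

def E_XN_given(y0):
    # conditional expectation of X_N = X + I given Y = y0
    return sum((x + I) * p / pY[y0] for (x, y), p in pmf.items() if y == y0)

tower = sum(E_XN_given(y0) * pY[y0] for y0 in pY)   # E(E(X_N | Y_N))
E_XN = sum((x + I) * p for (x, y), p in pmf.items())  # E(X_N)
print(tower, E_XN)  # the two values agree
```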

### Theorem 3.4

For any two neutrosophic random variables $X_N$ and $Y_N$ whose expectations exist, if $r(x, y)$ is a function of x and y, then

$E\big(E[r(x,y) \mid Y_N]\big) = E[r(x,y)].$

### Proof

Case 1

Continuous

$\begin{aligned}
E(E(r(x,y) \mid Y_N)) &= \int_{Y_N} E[r(x,y) \mid Y_N = y]\, f_{Y_N}(y)\,dy\\
&= \int_{Y_N}\!\!\int_{X_N} r(x,y)\, \frac{f_{X_NY_N}(x, y)}{f_{Y_N}(y)}\, f_{Y_N}(y)\,dx\,dy\\
&= \int_{X_N}\!\!\int_{Y_N} r(x,y)\, f_{X_NY_N}(x, y)\,dy\,dx = E[r(x,y)].
\end{aligned}$
Case 2

Discrete

$\begin{aligned}
E(E(r(x,y) \mid Y_N)) &= \sum_{Y_N} E[r(x,y) \mid Y_N = y]\, P\{Y_N = y\}\\
&= \sum_{Y_N}\sum_{X_N} r(x,y)\, \frac{P[X_N = x, Y_N = y]}{P\{Y_N = y\}}\, P\{Y_N = y\}\\
&= \sum_{X_N}\sum_{Y_N} r(x,y)\, P[X_N = x, Y_N = y] = E[r(x,y)].
\end{aligned}$

In classical probability, two discrete random variables X and Y are independent if $P_{XY}(x, y) = P_X(x)\,P_Y(y)$ for all x, y. Equivalently, two continuous random variables X and Y are independent if $F_{XY}(x, y) = F_X(x)\,F_Y(y)$ for all x, y. Based on this information and the definition of a neutrosophic random variable, we can give the following definition:

### Definition 3.3

Two discrete neutrosophic random variables $X_N$ and $Y_N$ are neutrosophic independent if

$P_{X_NY_N}(x,y) = P_{X_N}(x)\,P_{Y_N}(y) = P_X(x-I)\,P_Y(y-I),\quad \forall x, y.$

Equivalently, two continuous neutrosophic random variables $X_N$ and $Y_N$ are neutrosophic independent if

$F_{X_NY_N}(x,y) = F_{X_N}(x)\,F_{Y_N}(y) = F_X(x-I)\,F_Y(y-I),\quad \forall x, y.$

### Theorem 3.5

If XN and YN are two neutrosophic independent random variables, then E(XNYN) = E(XN)E(YN).

### Proof

Case 1

Continuous

$\begin{aligned}
E(X_NY_N) &= E([X+I][Y+I]) = \int_{X_N}\!\!\int_{Y_N} [x+I][y+I]\, f_{X_NY_N}(x-I, y-I)\,dx\,dy\\
&= \int_{X_N}\!\!\int_{Y_N} [x+I][y+I]\, f_{X_N}(x-I)\, f_{Y_N}(y-I)\,dx\,dy\\
&= \int_{X_N} [x+I]\, f_{X_N}(x-I)\,dx \int_{Y_N} [y+I]\, f_{Y_N}(y-I)\,dy = E(X_N)E(Y_N).
\end{aligned}$
Case 2

Discrete

$\begin{aligned}
E(X_NY_N) &= E([X+I][Y+I]) = \sum_{X_N}\sum_{Y_N} [x+I][y+I]\, f_{X_NY_N}(x-I, y-I)\\
&= \sum_{X_N}\sum_{Y_N} [x+I][y+I]\, f_{X_N}(x-I)\, f_{Y_N}(y-I)\\
&= \sum_{X_N} [x+I]\, f_{X_N}(x-I) \sum_{Y_N} [y+I]\, f_{Y_N}(y-I) = E(X_N)E(Y_N).
\end{aligned}$
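Theorem 3.5 can be illustrated with a joint pmf that is independent by construction; the marginals and I below are hypothetical.

```python
I = 0.25
pX = {0: 0.4, 1: 0.6}   # hypothetical marginal of X
pY = {0: 0.3, 1: 0.7}   # hypothetical marginal of Y
# product pmf, so X and Y (hence X_N and Y_N) are independent by construction
pmf = {(x, y): pX[x] * pY[y] for x in pX for y in pY}

E_XN = sum((x + I) * px for x, px in pX.items())
E_YN = sum((y + I) * py for y, py in pY.items())
E_prod = sum((x + I) * (y + I) * p for (x, y), p in pmf.items())

print(E_prod, E_XN * E_YN)  # the two values agree
```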

### Corollary 3.1

If XN and YN are two discrete neutrosophic independent random variables, then

$P_{X_N/Y_N}(x-I \mid y-I) = P_{X_N}(x-I).$

### Proof

$P_{X_N/Y_N}(x-I \mid y-I) = P(X_N = x-I \mid Y_N = y-I) = \dfrac{P_{X_NY_N}(x-I, y-I)}{P_{Y_N}(y-I)} = \dfrac{P_{X_N}(x-I)\,P_{Y_N}(y-I)}{P_{Y_N}(y-I)} = P_{X_N}(x-I).$

### Corollary 3.2

If XN and YN are two continuous neutrosophic independent random variables, then

$F_{X_N/Y_N}(x-I \mid y-I) = F_{X_N}(x-I).$

### Proof

$F_{X_N/Y_N}(x-I \mid y-I) = F(X_N = x-I \mid Y_N = y-I) = \dfrac{F_{X_NY_N}(x-I, y-I)}{F_{Y_N}(y-I)} = \dfrac{F_{X_N}(x-I)\,F_{Y_N}(y-I)}{F_{Y_N}(y-I)} = F_{X_N}(x-I).$

### Corollary 3.3

For any two neutrosophic random variables $X_N$ and $Y_N$,

$\mathrm{cov}(X_N, Y_N) = E[X_NY_N] - E[X_N]\,E[Y_N].$

### Proof

$\begin{aligned}
\mathrm{cov}(X_N, Y_N) &= E[(X_N - E[X_N])(Y_N - E[Y_N])]\\
&= E[X_NY_N] - E[X_N]E[Y_N] - E[X_N]E[Y_N] + E[X_N]E[Y_N]\\
&= E[X_NY_N] - E[X_N]E[Y_N].
\end{aligned}$

### Remark 3.2

If $X_N = Y_N$, then

$\mathrm{cov}(X_N, X_N) = E[X_NX_N] - E[X_N]\,E[X_N] = E[X_N^2] - (E[X_N])^2 = \mathrm{Var}(X_N).$

### Corollary 3.4

For any two neutrosophic random variables $X_N$ and $Y_N$,

$\mathrm{Var}(X_N + Y_N) = \mathrm{Var}(X_N) + \mathrm{Var}(Y_N) + 2\,\mathrm{Cov}(X_N, Y_N).$

In particular, if $X_N$ and $Y_N$ are neutrosophic independent, the covariance term vanishes and $\mathrm{Var}(X_N + Y_N) = \mathrm{Var}(X_N) + \mathrm{Var}(Y_N)$.

### Proof

$\begin{aligned}
\mathrm{Var}(X_N+Y_N) &= E\big[(X_N + Y_N - E[X_N+Y_N])^2\big] = E\big[((X_N - E[X_N]) + (Y_N - E[Y_N]))^2\big]\\
&= E\big[(X_N - E[X_N])^2\big] + E\big[(Y_N - E[Y_N])^2\big] + 2E\big[(X_N - E[X_N])(Y_N - E[Y_N])\big]\\
&= \mathrm{Var}(X_N) + \mathrm{Var}(Y_N) + 2\,\mathrm{Cov}(X_N, Y_N).
\end{aligned}$
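A numeric check of the variance decomposition, using a hypothetical pmf; note that in the standard expansion the covariance cross term enters with a plus sign, $\mathrm{Var}(X_N + Y_N) = \mathrm{Var}(X_N) + \mathrm{Var}(Y_N) + 2\,\mathrm{Cov}(X_N, Y_N)$.

```python
I = 0.25
pmf = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}  # hypothetical pmf

E_XN = sum((x + I) * p for (x, y), p in pmf.items())
E_YN = sum((y + I) * p for (x, y), p in pmf.items())
var_XN = sum(((x + I) - E_XN) ** 2 * p for (x, y), p in pmf.items())
var_YN = sum(((y + I) - E_YN) ** 2 * p for (x, y), p in pmf.items())
cov = sum(((x + I) - E_XN) * ((y + I) - E_YN) * p for (x, y), p in pmf.items())
# variance of the sum, computed directly from the joint pmf
var_sum = sum((((x + I) + (y + I)) - (E_XN + E_YN)) ** 2 * p
              for (x, y), p in pmf.items())

print(var_sum, var_XN + var_YN + 2 * cov)  # the two values agree
```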

### Example 3.1

Let X and Y be random variables with a probability density function as follows (Figure 1):

$f_{XY}(x,y) = x^2 + 2y^2,\quad 0 < x < 1,\ 0 < y < 1.$

We can then write a joint neutrosophic density function as follows:

For 0 < xI < 1, 0 < yI < 1,

$f_{X_NY_N}(x,y) = f_{XY}(x-I, y-I) = (x-I)^2 + 2(y-I)^2 = x^2 - 2Ix + 2y^2 - 4Iy + 3I^2,$

where $I < x < 1 + I$ and $I < y < 1 + I$.

To prove that $f_{X_NY_N}(x, y)$ is a joint neutrosophic density function, we verify that it integrates to 1:

$\begin{aligned}
\int_I^{1+I}\!\!\int_I^{1+I} f_{X_NY_N}(x,y)\,dx\,dy &= \int_I^{1+I}\!\!\int_I^{1+I} \big[(x-I)^2 + 2(y-I)^2\big]\,dx\,dy\\
&= \int_I^{1+I} \Big[\frac{(x-I)^3}{3} + 2(y-I)^2\,x\Big]_{x=I}^{x=1+I}\,dy\\
&= \int_I^{1+I} \Big[\frac{1}{3} + 2(y-I)^2\Big]\,dy\\
&= \Big[\frac{y}{3} + \frac{2(y-I)^3}{3}\Big]_I^{1+I} = \frac{1}{3} + \frac{2}{3} = 1.
\end{aligned}$

We can find the marginal neutrosophic density function $f_{X_N}(x)$ as follows:

$f_{X_N}(x) = \int_I^{1+I} f_{X_NY_N}(x,y)\,dy = \int_I^{1+I} \big[(x-I)^2 + 2(y-I)^2\big]\,dy = \Big[(x-I)^2\,y + \frac{2(y-I)^3}{3}\Big]_I^{1+I} = (x-I)^2 + \frac{2}{3}.$

To verify that $f_{X_N}(x) = (x-I)^2 + \frac{2}{3}$ is a PDF,

$\int_I^{1+I} \Big[(x-I)^2 + \frac{2}{3}\Big]\,dx = \Big[\frac{(x-I)^3}{3} + \frac{2}{3}x\Big]_I^{1+I} = \frac{1}{3} + \frac{2}{3} = 1.$

We can also find the marginal neutrosophic density function $f_{Y_N}(y)$ as follows:

$f_{Y_N}(y) = \int_I^{1+I} f_{X_NY_N}(x,y)\,dx = \int_I^{1+I} \big[(x-I)^2 + 2(y-I)^2\big]\,dx = \Big[\frac{(x-I)^3}{3} + 2(y-I)^2\,x\Big]_I^{1+I} = \frac{1}{3} + 2(y-I)^2.$

To verify that $f_{Y_N}(y) = \frac{1}{3} + 2(y-I)^2$ is a PDF,

$\int_I^{1+I} f_{Y_N}(y)\,dy = \int_I^{1+I} \Big[\frac{1}{3} + 2(y-I)^2\Big]\,dy = \Big[\frac{y}{3} + \frac{2(y-I)^3}{3}\Big]_I^{1+I} = \frac{1}{3} + \frac{2}{3} = 1.$
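As a numeric companion to Example 3.1, a midpoint-rule double integration of $f_{X_NY_N}(x, y) = (x-I)^2 + 2(y-I)^2$ over $[I, 1+I] \times [I, 1+I]$ should give a value close to 1; the choice I = 0.5 and the grid size are illustrative.

```python
I = 0.5          # illustrative indeterminacy
n = 400          # grid resolution of the midpoint rule
h = 1.0 / n
total = 0.0
for i in range(n):
    for j in range(n):
        x = I + (i + 0.5) * h       # midpoint of cell i in [I, 1+I]
        y = I + (j + 0.5) * h       # midpoint of cell j in [I, 1+I]
        total += ((x - I) ** 2 + 2 * (y - I) ** 2) * h * h
print(round(total, 4))  # close to 1
```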

### Example 3.2

Let X and Y be discrete random variables with a probability mass function as follows (Figure 2):

$P_{XY}(x,y) = q^{y-2}p^2,\quad x < y;\ x = 1, 2, \ldots,\ y = 2, 3, \ldots,$

where $0 < p < 1$ and $q = 1 - p$.

Then we can write a joint neutrosophic density function as follows:

$P_{X_NY_N}(x,y) = P_{XY}(x-I, y-I) = q^{y-I-2}p^2,\quad x-I < y-I;\ x = I+1, I+2, \ldots,\ y = I+2, I+3, \ldots$

We can find the marginal neutrosophic mass function $P_{X_N}(x)$ as follows:

$P_{X_N}(x) = P[X = x-I] = \sum_{y=x+1}^{\infty} q^{y-I-2}p^2 = p^2 q^{x-I-1}\big(1 + q + q^2 + \cdots\big) = p^2 q^{x-I-1}\,\frac{1}{1-q} = p\,q^{x-I-1},\quad x = I+1, I+2, \ldots$

To verify that $P_{X_N}(x) = p\,q^{x-I-1}$, $x = I+1, I+2, \ldots$, is a probability mass function,

$\sum_{x=I+1}^{\infty} p\,q^{x-I-1} = p\big(1 + q + q^2 + \cdots\big) = \frac{p}{1-q} = 1.$

We can also find the marginal neutrosophic mass function $P_{Y_N}(y)$ as follows (the sum has $y - I - 1$ equal terms):

$P_{Y_N}(y) = P[Y = y-I] = \sum_{x=I+1}^{y-1} q^{y-I-2}p^2 = (y-I-1)\,p^2 q^{y-I-2},\quad y = I+2, I+3, \ldots$

To verify that $P_{Y_N}(y) = (y-I-1)\,p^2 q^{y-I-2}$, $y = I+2, I+3, \ldots$, is a probability mass function,

$\sum_{y=I+2}^{\infty} (y-I-1)\,p^2 q^{y-I-2} = p^2\big(1 + 2q + 3q^2 + \cdots\big) = \frac{p^2}{(1-q)^2} = 1.$
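The joint pmf of Example 3.2 can also be checked numerically, assuming $p = 1 - q$ as in the closed-form sums; p = 0.3 (matching Figure 2), an integer indeterminacy I = 2, and the truncation point are illustrative choices.

```python
p = 0.3
q = 1 - p
I = 2           # illustrative integer indeterminacy
CUT = 400       # truncation point for the infinite sums

def joint(x: int, y: int) -> float:
    # P_{X_N Y_N}(x, y) = q^(y - I - 2) * p^2, defined for x < y
    return q ** (y - I - 2) * p * p

# total mass over the (truncated) support: should be close to 1
total = sum(joint(x, y)
            for x in range(I + 1, CUT)
            for y in range(x + 1, CUT + 1))
# X-marginal at x = I + 3 should match p * q^(x - I - 1) = p * q^2
marg = sum(joint(I + 3, y) for y in range(I + 4, 2 * CUT))
print(round(total, 6), round(marg, 6), round(p * q ** 2, 6))
```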

Conditional probability is an important topic that contributes to solving numerous everyday problems, and many researchers have studied such probabilities under uncertain conditions. In this paper, conditional probabilities are studied, for the first time, under neutrosophic theory. We introduced neutrosophic conditional probability as a generalization of classical conditional probability and presented its properties. The concepts of joint distribution function, regular conditional probabilities, marginal density function, expected value, and joint density function are generalized from the classical type to the neutrosophic type in both the discrete and continuous cases. Numerous properties and examples are provided to demonstrate the significance of this study. We suggest that researchers use these results and apply them to bivariate probability distributions. We also suggest extending these results to the concept of n-valued neutrosophic logic.

The authors would like to acknowledge the financial support received from Al-Albayt University and Jadara University.

### Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Fig. 1.

Simulation of $f_{X_NY_N}(x, y)$ in Example 3.1 with different indeterminacy I.

Fig. 2.

Simulation of $P_{X_NY_N}(x, y)$ in Example 3.2 with different indeterminacy I, where p = 0.3.

1. Zadeh, LA (1965). Fuzzy sets. Information and Control. 8, 338-353. https://doi.org/10.1016/S0019-9958(65)90241-X
2. Atanassov, KT (1986). Intuitionistic fuzzy sets. Fuzzy Sets and Systems. 20, 87-96. https://doi.org/10.1016/S0165-0114(86)80034-3
3. Pawlak, Z (1982). Rough sets. International Journal of Computer & Information Sciences. 11, 341-356. https://doi.org/10.1007/BF01001956
4. Smarandache, F (1998). Neutrosophy: Neutrosophic Probability, Set, and Logic. Rehoboth, NM: American Research Press
5. Smarandache, F (1999). A Unifying Field in Logics: Neutrosophy: Neutrosophic Probability, Set and Logic. Rehoboth, NM: American Research Press
6. Smarandache, F (2013). Introduction to Neutrosophic Measure, Neutrosophic Integral, and Neutrosophic Probability. Craiova, Romania: Sitech
7. Smarandache, F (2014). Introduction to Neutrosophic Statistics. Craiova, Romania: Sitech
8. Smarandache, F (2020). Plithogeny, Plithogenic Set, Logic, Probability, and Statistics. Delhi, India: Indo American Books
9. Smarandache, F (2013). Neutrosophic measure and neutrosophic integral. Neutrosophic Sets and Systems. 1. article no. 2
10. Hanafy, IM, Salama, AA, Khaled, OM, and Mahfouz, KM (2014). Correlation of neutrosophic sets in probability spaces. Journal of Applied Mathematics, Statistics and Informatics. 10, 45-52. https://doi.org/10.2478/jamsi-2014-0004
11. Hanafy, IM, Salama, AA, and Mahfouz, K (2012). Correlation coefficients of generalized intuitionistic fuzzy sets by centroid method. IOSR Journal of Mechanical and Civil Engineering. 3, 11-14. https://doi.org/10.9790/1684-0351114
12. Hanafy, IM, Salama, AA, and Mahfouz, K (2012). Correlation of neutrosophic data. International Refereed Journal of Engineering and Science (IRJES). 1, 39-43.
13. Hanafy, IM, Salama, AA, and Mahfouz, M (2013). Correlation coefficients of neutrosophic sets by centroid method. International Journal of Probability and Statistics. 2, 9-12.
14. Salama, AA, Khaled, OM, and Mahfouz, KM (2014). Neutrosophic correlation and simple linear regression. Neutrosophic Sets and Systems. 5, 3-8.
15. Yuhua, F (2015). Examples of neutrosophic probability in physics. Neutrosophic Sets and Systems. 7, 32-33.
16. Patro, SK, and Smarandache, F (2016). The neutrosophic statistical distribution, more problems, more solutions. Neutrosophic Sets and Systems. 12, 73-79.
17. Smarandache, F, Abbas, N, Chibani, Y, Hadjadji, B, and Omar, ZA (2017). PCR5 and neutrosophic probability in target identification. Neutrosophic Sets and Systems. 16, 76-79.
18. Guo, Q, Wang, H, He, Y, Deng, Y, and Smarandache, F (2017). An evidence fusion method with importance discounting factors based on neutrosophic probability analysis in DSmT framework. Neutrosophic Sets and Systems. 17, 64-73.
19. Gafar, MG, and El-Henawy, I (2017). Integrated framework of optimization technique and information theory measures for modeling neutrosophic variables. Neutrosophic Sets and Systems. 15, 80-89.
20. Alhabib, R, Ranna, MM, Farah, H, and Salama, AA (2018). Some neutrosophic probability distributions. Neutrosophic Sets and Systems. 22, 30-38.
21. Zeina, MB, and Hatip, A (2021). Neutrosophic random variables. Neutrosophic Sets and Systems. 39, 44-52.

Ahmad M. H. Al-khazaleh received his B.Sc. degree in Mathematics from Al-Albayt University, Jordan, M.Sc. degree in Mathematics and statistics from Al-Albayt University, Jordan, and Ph.D. degree in Statistics from The National University of Malaysia (UKM). He has worked as a lecturer in the Department of Mathematics at Al-Albayt University and as an assistant professor at the same university. His research interests include probability, statistics, fuzzy probability and fuzzy statistics.

E-mail: ahmed 2005kh@yahoo.com

Shawkat Alkhazaleh is an Associate Professor in the Department of Mathematics at Jadara University in Jordan. He received his M.A. and Ph.D. from the National University of Malaysia (UKM). He specialized in fuzzy sets and soft fuzzy sets and topics related to uncertainty and has done extensive research in this field. In addition to being a faculty member at the College of Science and Information Technology, he is currently the Vice Dean for scientific research at Jadara University.

E-mail: shmk79@gmail.com

### Article

#### Original Article

International Journal of Fuzzy Logic and Intelligent Systems 2022; 22(1): 78-88

Published online March 25, 2022 https://doi.org/10.5391/IJFIS.2022.22.1.78

Copyright © The Korean Institute of Intelligent Systems.

## Neutrosophic Conditional Probabilities: Theories and Applications

Ahmad M. H. Al-khazaleh1 and Shawkat Alkhazaleh2

1Department of Mathematics, Faculty of Science, Al-Albayt University, Al-Mafraq, Jordan
2Department of Mathematics, Faculty of Science and Information Technology, Jadara University, Irbid, Jordan

Correspondence to:Shawkat Alkhazaleh (shmk79@gmail.com)

Received: August 19, 2021; Revised: September 11, 2021; Accepted: September 23, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

Data could be uncertain, and the levels of precision of data are intuitively different. Neutrosophic set expressions are considered an alternative to represent imprecise data in such cases. In this paper, a general definition of neutrosophic conditional probability is introduced as a generalization of the classical conditional probability. Additionally, the properties of this neutrosophic conditional probability are presented. The concepts of joint distribution function, regular conditional probabilities, marginal density function, expected value, and joint density function in the classical type are generalized to a neutrosophic type with two discrete and continuous neutrosophic random variables. Various properties and examples are presented to demonstrate the significance of this study.

Keywords: Conditional probability, Neutrosophic conditional probability, Neutrosophic distribution function, Marginal neutrosophic density function, Neutrosophic expected value, Joint neutrosophic density function

### 1. Introduction

Crisp is the most important requirement in classical mathematics, whereas real problems involve uncertain data. Thus, the solution to these problems involves the use of mathematical principles based on uncertainty (not crisp). Therefore, many scientists and engineers have been interested in uncertainty modeling to describe and debrief useful information hidden in uncertain data. To help them deal with this uncertainty, numerous theories such as the fuzzy set theory [1], intuitionistic fuzzy set theory [2], rough set theory [3], and neutrosophic set theory [4,5] have been proposed in recent years.

Smarandache proposed the theory of neutrosophic logic as a general framework for the unification of many existing logics, such as the intuitionistic fuzzy logic. This theory aims to be a new mathematical tool for handling problems involving imprecise, indeterminant, and inconsistent data. The main idea of neutrosophic logic is to distinguish each logical statement in a three-dimensional neutrosophic space, wherein each dimension of the space represents the truth T, falsehood F, and indeterminacy I of the statement under consideration.

The neutrosophic probability given by Smarandache [6], which is a generalization of the classical and imprecise probabilities in which the chance that an event A occurs is t% true-where t varies in the subset T, i% is indeterminate - where i varies in the subset I, and f% is false - where f varies in the subset F. In addition, if nsup ≤ 1 is in classical probability, then nsup ≤ 3+ is in the neutrosophic probability. Moreover, the probability of an event is a subset T in [0, 1] in imprecise probability, not a fixed number p in [0, 1] and the opposite subset F from the unit interval [0, 1]. There is no indeterminate subset I in the imprecise probability.

Although the neutrosophic probability theory is one of the most important tools and has applications in real life, it has not received significant attention. However, it has been the focus of some studies. For more information about neutrosophic probability, see [68].

In 2003, Smarandache [9], for the first time, introduced the notions of neutrosophic measure and neutrosophic integral. Neutrosophic measure is a generalization of the classical measure when the space contains some indeterminacy, and the neutrosophic integral is defined on the neutrosophic measure. Hanafy et al. [1013] studied the correlation coefficient under uncertainty. Thereafter, Salama et al. [14] in 2014, introduced and studied the concepts of correlation and correlation coefficient of neutrosophic data in probability spaces and some of their properties. In addition, they introduced and studied the neutrosophic simple linear regression model and provided a possibility of its application to data processing. By applying the neutrosophic probability in physics, Yuhua [15] in 2015, determined the neutrosophic probability of accelerating the expansion of the partial universe. Some problems and solutions related to the neutrosophic statistical distribution, given by Patro and Smarandache [16] in 2016 and Smarandache et al. [17] in 2017, used proportional conflict redistribution rule number 5 (PCR5) to combine the information of two sources providing subjective probabilities of an event A occurring with a chance that A occurs, an indeterminate chance that A occurs, and a chance that A does not occur. Likewise, in 2017, Guo et al. [18] proposed an evidence fusion method based on neutrosophic probability analysis in the DSmT framework. They also introduced some basic theories, including DST, DSmT, and the dissimilarity measure of evidence. Consequently, in 2017, Gafar and El-Henawy [19] presented a framework of ant colony optimization and entropy theory and used it to define a neutrosophic variable from concrete data. In their paper, they exhibited the incorporation of a hybrid search model amongst ant colony optimization and information theory measures to demonstrate a neutrosophic variable. 
Taking a new step in the study of neutrosophic probabilities, in 2018 Alhabib et al. [20] introduced and studied some neutrosophic probability distributions by generalizing classical probability distributions, such as the Poisson, exponential, and uniform distributions, to the neutrosophic type. Subsequently, in 2019, Alhasan and Smarandache studied the neutrosophic Weibull distribution and the Weibull family, together with the related distributions (the inverse Weibull, Rayleigh, three-parameter Weibull, beta Weibull, five-parameter Weibull, and six-parameter Weibull distributions) under the neutrosophic case. A general definition of neutrosophic random variables was introduced by Zeina and Hatip [21] in 2021. They studied the properties of this concept and generalized the probability distribution function, cumulative distribution function, expected value, variance, standard deviation, mean deviation, rth quartiles, moment generating function, and characteristic function from crisp logic to neutrosophic logic. In this paper, as a generalization of the classical conditional probability, we introduce a general definition of neutrosophic conditional probability and study its properties. In addition, we generalize, from the classical type to the neutrosophic type, the concepts of joint distribution function, regular conditional probabilities, marginal density function, expected value, and joint density function, using two neutrosophic random variables in both the discrete and continuous cases. The significance of this study is demonstrated through numerous properties and examples.

### 2. Preliminary

In this section, we recall the definitions that are related to this work. The neutrosophic set, neutrosophic probability, and neutrosophic random variables are defined.

### 2.1 General Definitions

Definition 2.1 [4,5]

Let X be a non-empty fixed set. A neutrosophic set A is an object having the form

$A = \{\langle x, (\mu_A(x), \delta_A(x), \gamma_A(x))\rangle : x \in X\},$

where μA(x), δA(x), and γA(x) represent the degree of membership, degree of indeterminacy, and degree of non-membership, respectively, of each element x ∈ X to the set A.

Definition 2.2 [7]

A classical neutrosophic number has the form a + bI, where a and b are real or complex numbers and I is the indeterminacy, such that 0 · I = 0 and I² = I, which results in Iⁿ = I for all positive integers n.

Definition 2.3 [9]

The neutrosophic probability of an event A occurring is

$NP(A) = (\operatorname{ch}(A),\ \operatorname{ch}(\mathrm{neut}A),\ \operatorname{ch}(\mathrm{anti}A)) = (T, I, F),$

where T, I, and F are standard or nonstandard subsets of the nonstandard unit interval.

### 2.2 Neutrosophic Random Variable

A neutrosophic random (stochastic) variable is subject to change due to both randomness and indeterminacy, while the classical random (stochastic) variable is subject to change only due to randomness. The values of this variable represent the possible outcomes and possible indeterminacies. Randomness and indeterminacy can be either objective or subjective.

Definition 2.4 [9]

A neutrosophic random variable is a variable that may have an indeterminate outcome.

Definition 2.5 [9]

A neutrosophic random (stochastic) process represents the evolution of some neutrosophic random values over time. This is a collection of random neutrosophic variables.

Definition 2.6 [21]

Consider the crisp random variable X, having a real value, which is defined as follows: X : Ω → R, where Ω is the event space. We define the neutrosophic random variable XN as follows: XN : Ω → R(I) and XN = X + I where I is indeterminacy.

Definition 2.7 [21]

Consider the neutrosophic random variable XN = X + I, where the CDF of X is FX(x) = P(X ≤ x). Then, the CDF and PDF of the neutrosophic random variable are defined as follows:

• FXN(x) = FX(x − I),

• fXN(x) = fX(x − I).
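
For illustration, when the indeterminacy I is replaced by a fixed numeric value (an assumption; in general I is symbolic), Definition 2.7 says the neutrosophic CDF is simply the crisp CDF shifted by I. A minimal Python sketch with a hypothetical exponential crisp variable X:

```python
import math

I = 0.5    # assumed crisp value for the indeterminacy (illustrative only)
lam = 2.0  # rate of a hypothetical exponential crisp random variable X

def F_X(x):
    """Crisp exponential CDF: F_X(x) = 1 - exp(-lam*x) for x >= 0."""
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

def F_XN(x):
    """Neutrosophic CDF of X_N = X + I: F_{X_N}(x) = F_X(x - I)."""
    return F_X(x - I)

# The CDF of X_N is the CDF of X shifted right by I:
print(F_XN(1.0) == F_X(0.5))  # True: F_{X_N}(1) = F_X(1 - 0.5)
```

The same shift applies to the density fXN(x) = fX(x − I).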

Definition 2.9 [21]

Consider the neutrosophic random variable XN = X + I. Its expected value is defined as follows: E(XN) = E(X) + I.

Proposition 2.1 [21]

Properties of the expected value of a neutrosophic random variable:

• E(aXN + b + cI) = aE(XN) + b + cI, ∀a, b, c ∈ R,

• if XN and YN are two neutrosophic random variables, then E(XN ± YN) = E(XN) ± E(YN),

• E[(a + bI)XN] = E(aXN + bIXN) = E(aXN) + E(bIXN) = aE(XN) + bIE(XN), ∀a, b ∈ R, and

• |E(XN)| ≤ E|XN|.

Definition 2.10 [21]

Consider the neutrosophic random variable XN = X + I. Then, its variance equals the variance of X, i.e., V(XN) = V(X).
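
Under the assumption that I is a fixed number (in general it is symbolic), Definition 2.10 can be checked numerically: shifting every outcome by the same constant changes the mean but not the spread. A minimal Python sketch:

```python
import random

random.seed(1)
I = 0.4  # assumed crisp value of the indeterminacy (illustrative only)
xs = [random.gauss(0.0, 1.0) for _ in range(20_000)]  # a crisp sample of X

def var(v):
    """Population variance of a sample."""
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / len(v)

# Definition 2.10: shifting by I leaves the variance unchanged, V(X_N) = V(X)
print(abs(var([x + I for x in xs]) - var(xs)) < 1e-9)  # True
```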

### 3. Neutrosophic Conditional Probabilities

Smarandache [9] discussed neutrosophic conditional probability by comparing it with classical probability. In classical probability, if A and B are independent events, then P(A | B) = P(A). This carries over to neutrosophic independent events, i.e., NP(A | B) = NP(A). Additionally, in classical probability, the Bayesian rule is

$P(A \mid B) = \dfrac{P(B \mid A)\,P(A)}{P(B)},$

whereas the neutrosophic Bayesian rule is

$NP(A \mid B) = (\operatorname{ch}(A \mid B),\ \operatorname{ch}(\mathrm{indeterm}\,A \mid B),\ \operatorname{ch}(\bar{A} \mid B)).$

In this section, we introduce the concepts of the neutrosophic distribution function, neutrosophic regular conditional probabilities, and neutrosophic marginal density function. The properties of these concepts are proved, and some examples are provided.

### Definition 3.1

Let (Ω, ℑ, P) be a probability space, and let (X, Y) be a two-dimensional real random variable whose distribution function is given by FXY(x, y) = P(X ≤ x, Y ≤ y). Then, we define the neutrosophic distribution function as

$F_{X_NY_N}(x,y) = P(X_N \le x,\, Y_N \le y) = P(X+I \le x,\, Y+I \le y) = P(X \le x-I,\, Y \le y-I).$

### Definition 3.2

Let R² be a completely separable metric space; then regular conditional probabilities exist. Given the sub-σ-algebra σ(X) or σ(Y) and the related conditional distribution functions, denoted by

$F_{X/Y}(x \mid \{\omega : Y = y\}), \quad F_{Y/X}(y \mid \{\omega : X = x\}),$

the neutrosophic regular conditional probabilities are defined as

$F_{[X/Y]_N}(x-I \mid \{\omega-I : Y_N = y-I\}), \quad F_{[Y/X]_N}(y-I \mid \{\omega-I : X_N = x-I\}).$

### Remark 3.1

Let f(x, y)ΔxΔy + o(x, y, Δx, Δy) = P(x < X ≤ x + Δx, y < Y ≤ y + Δy); then this equation can be rewritten as follows:

$f(x-I,y-I)\,\Delta(x-I)\,\Delta(y-I) + o(x-I,y-I,\Delta(x-I),\Delta(y-I)) = P(x-I < X \le x-I+\Delta(x-I),\; y-I < Y \le y-I+\Delta(y-I)),$

where o(x − I, y − I, Δ(x − I), Δ(y − I)) → 0 as (Δ(x − I), Δ(y − I)) → (0, 0).

For the neutrosophic marginal density function of XN,

$f_{X_N}(x-I)\,\Delta(x-I) + \bar{o}(x-I,\Delta(x-I)) = P(x-I < X \le x-I+\Delta(x-I)),$

where $f_{X_N}(x-I) = \int_{\mathbb{R}} f(x-I,y-I)\,dy$.

Now,

$P(y-I < Y \le y-I+\Delta(y-I) \mid x-I < X \le x-I+\Delta(x-I)) = \dfrac{f(x-I,y-I)\,\Delta(x-I)\,\Delta(y-I) + o(x-I,y-I,\Delta(x-I),\Delta(y-I))}{f_{X_N}(x-I)\,\Delta(x-I) + \bar{o}(x-I,\Delta(x-I))}.$

Letting Δ(xI) tend to 0, we have

$\lim_{\Delta(x-I)\to 0} P(y-I < Y \le y-I+\Delta(y-I) \mid x-I < X \le x-I+\Delta(x-I)) = \dfrac{f(x-I,y-I)}{f_{X_N}(x-I)}\,\Delta(y-I) + o(\Delta(y-I)).$

This relation shows that the function

$f_{Y_N/X_N}(y-I \mid x-I) = \dfrac{f_{X_NY_N}(x-I,y-I)}{f_{X_N}(x-I)}$

is the neutrosophic conditional density of YN given XN. Similarly, the neutrosophic conditional density of XN given YN is calculated by

$f_{X_N/Y_N}(x-I \mid y-I) = \dfrac{f_{X_NY_N}(x-I,y-I)}{f_{Y_N}(y-I)}.$
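
Treating I as a fixed number (an assumption; in general I is symbolic), the conditional density above is simply the ratio of the joint neutrosophic density to the marginal. The following Python sketch, using a hypothetical polynomial joint density, checks numerically that the resulting conditional density integrates to 1 in y:

```python
I = 0.5      # assumed crisp value of the indeterminacy (illustrative only)
n = 2000
h = 1.0 / n  # midpoint-rule step over an interval of length 1

def f_joint(x, y):
    # a hypothetical joint neutrosophic density supported on (I, 1+I) x (I, 1+I)
    return (x - I) ** 2 + 2 * (y - I) ** 2

def f_XN(x):
    """Marginal f_{X_N}(x): integrate the joint over y (midpoint rule)."""
    return sum(f_joint(x, I + (j + 0.5) * h) * h for j in range(n))

x0 = I + 0.4
denom = f_XN(x0)

# f_{Y_N/X_N}(y | x0) = f_{X_N Y_N}(x0, y) / f_{X_N}(x0); it integrates to 1 in y
total = sum(f_joint(x0, I + (j + 0.5) * h) / denom * h for j in range(n))
print(round(total, 6))  # 1.0
```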

Following are some theorems related to the expected value of a neutrosophic random variable:

### Theorem 3.1

(Linearity of the expected value of two neutrosophic random variables).

$E(X_N+Y_N) = E(X)+E(Y)+2I.$
Proof
Case 1

Continuous

$$\begin{aligned} E(X_N+Y_N) &= E([X+I]+[Y+I]) = \int_{X_NY_N} ([x+I]+[y+I])\, f_{X_NY_N}(x-I,y-I)\,dx\,dy \\ &= \int_{X_NY_N} [x+I]\, f_{X_NY_N}(x-I,y-I)\,dx\,dy + \int_{X_NY_N} [y+I]\, f_{X_NY_N}(x-I,y-I)\,dx\,dy \\ &= \int_{X_NY_N} x\, f_{X_NY_N}(x-I,y-I)\,dx\,dy + I\int_{X_NY_N} f_{X_NY_N}(x-I,y-I)\,dx\,dy \\ &\quad + \int_{X_NY_N} y\, f_{X_NY_N}(x-I,y-I)\,dx\,dy + I\int_{X_NY_N} f_{X_NY_N}(x-I,y-I)\,dx\,dy \\ &= E(X)+I+E(Y)+I = E(X)+E(Y)+2I. \end{aligned}$$
Case 2

Discrete

$$\begin{aligned} E(X_N+Y_N) &= E([X+I]+[Y+I]) = \sum_{X_NY_N} ([x+I]+[y+I])\, f_{X_NY_N}(x-I,y-I) \\ &= \sum_{X_NY_N} [x+I]\, f_{X_NY_N}(x-I,y-I) + \sum_{X_NY_N} [y+I]\, f_{X_NY_N}(x-I,y-I) \\ &= \sum_{X_NY_N} x\, f_{X_NY_N}(x-I,y-I) + I\sum_{X_NY_N} f_{X_NY_N}(x-I,y-I) \\ &\quad + \sum_{X_NY_N} y\, f_{X_NY_N}(x-I,y-I) + I\sum_{X_NY_N} f_{X_NY_N}(x-I,y-I) \\ &= E(X)+I+E(Y)+I = E(X)+E(Y)+2I. \end{aligned}$$

### Theorem 3.2

(Expected value of the product of two neutrosophic random variables).

$E(X_NY_N) = E(XY)+I\,E(X)+I\,E(Y)+I^2.$

### Proof

Case 1

Continuous

$$\begin{aligned} E(X_NY_N) &= E([X+I][Y+I]) = \int_{X_NY_N} [x+I][y+I]\, f_{X_NY_N}(x-I,y-I)\,dx\,dy \\ &= \int_{X_NY_N} [xy+Ix+Iy+I^2]\, f_{X_NY_N}(x-I,y-I)\,dx\,dy \\ &= \int_{X_NY_N} xy\, f_{X_NY_N}(x-I,y-I)\,dx\,dy + I\int_{X_NY_N} x\, f_{X_NY_N}(x-I,y-I)\,dx\,dy \\ &\quad + I\int_{X_NY_N} y\, f_{X_NY_N}(x-I,y-I)\,dx\,dy + I^2\int_{X_NY_N} f_{X_NY_N}(x-I,y-I)\,dx\,dy \\ &= E(XY)+I\,E(X)+I\,E(Y)+I^2. \end{aligned}$$
Case 2

Discrete

$$\begin{aligned} E(X_NY_N) &= E([X+I][Y+I]) = \sum_{X_NY_N} [x+I][y+I]\, f_{X_NY_N}(x-I,y-I) \\ &= \sum_{X_NY_N} [xy+Ix+Iy+I^2]\, f_{X_NY_N}(x-I,y-I) \\ &= \sum_{X_NY_N} xy\, f_{X_NY_N}(x-I,y-I) + I\sum_{X_NY_N} x\, f_{X_NY_N}(x-I,y-I) \\ &\quad + I\sum_{X_NY_N} y\, f_{X_NY_N}(x-I,y-I) + I^2\sum_{X_NY_N} f_{X_NY_N}(x-I,y-I) \\ &= E(XY)+I\,E(X)+I\,E(Y)+I^2. \end{aligned}$$
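
Treating I as a fixed number (an assumption; I is symbolic in general), Theorems 3.1 and 3.2 reduce to per-sample algebraic identities and can be sanity-checked by simulation. A Python sketch with hypothetical uniform crisp variables X and Y:

```python
import random

random.seed(0)
I = 0.3   # assumed crisp indeterminacy (illustrative only)
N = 10_000

xs = [random.random() for _ in range(N)]  # X ~ Uniform(0, 1)
ys = [random.random() for _ in range(N)]  # Y ~ Uniform(0, 1)

mean = lambda v: sum(v) / len(v)
EX, EY = mean(xs), mean(ys)
EXY = mean([x * y for x, y in zip(xs, ys)])

# Theorem 3.1: E(X_N + Y_N) = E(X) + E(Y) + 2I (exact per-sample identity)
lhs1 = mean([(x + I) + (y + I) for x, y in zip(xs, ys)])
print(abs(lhs1 - (EX + EY + 2 * I)) < 1e-8)  # True

# Theorem 3.2: E(X_N Y_N) = E(XY) + I E(X) + I E(Y) + I^2
lhs2 = mean([(x + I) * (y + I) for x, y in zip(xs, ys)])
print(abs(lhs2 - (EXY + I * EX + I * EY + I ** 2)) < 1e-8)  # True
```

Both identities hold exactly for any sample (up to floating-point rounding), since they follow from expanding the products before averaging.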

In classical probability, E(X) = E(E(X | Y)). In the following theorem, we show that E(E(XN | YN)) = E[XN].

### Theorem 3.3

$E(E(X_N \mid Y_N)) = E[X_N].$

### Proof

Case 1

Continuous

$$\begin{aligned} E(E(X_N \mid Y_N)) &= \int_{Y_N} E[X_N \mid Y_N=y]\, f(y)\,dy = \int_{Y_N} E[X_N \mid Y=y-I]\, f(y-I)\,dy \\ &= \int_{Y_N}\!\int_{X_N} (x-I)\, f[X_N=x \mid Y=y-I]\, f(y-I)\,dy\,dx \\ &= \int_{Y_N}\!\int_{X_N} (x-I)\, \frac{f[X=x-I, Y=y-I]}{f(y-I)}\, f(y-I)\,dy\,dx \\ &= \int_{Y_N}\!\int_{X_N} (x-I)\, f[X=x-I, Y=y-I]\,dy\,dx \\ &= \int_{X_N} (x-I) \int_{Y_N} f[X=x-I, Y=y-I]\,dy\,dx \\ &= \int_{X_N} (x-I)\, f[X=x-I]\,dx = E[X_N]. \end{aligned}$$
Case 2

Discrete

$$\begin{aligned} E(E(X_N \mid Y_N)) &= \sum_{Y_N} E[X_N \mid Y_N=y]\, P\{Y_N=y\} = \sum_{Y_N} E[X_N \mid Y=y-I]\, P\{Y=y-I\} \\ &= \sum_{Y_N}\sum_{X_N} (x-I)\, P[X_N=x \mid Y=y-I]\, P\{Y=y-I\} \\ &= \sum_{Y_N}\sum_{X_N} (x-I)\, \frac{P[X=x-I, Y=y-I]}{P\{Y=y-I\}}\, P\{Y=y-I\} \\ &= \sum_{Y_N}\sum_{X_N} (x-I)\, P[X=x-I, Y=y-I] \\ &= \sum_{X_N} (x-I) \sum_{Y_N} P[X=x-I, Y=y-I] \\ &= \sum_{X_N} (x-I)\, P[X=x-I] = E[X_N]. \end{aligned}$$
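
Theorem 3.3 is the neutrosophic analogue of the tower property, and it can be illustrated empirically: for any finite sample, averaging the group means of X_N over the groups determined by Y_N recovers the overall mean of X_N. A Python sketch assuming a crisp I and a hypothetical crisp pair (X, Y):

```python
import random

random.seed(3)
I = 1.0   # assumed crisp indeterminacy (illustrative only)
N = 20_000

# hypothetical crisp pair: Y ~ Bernoulli(0.5); given Y = k, X ~ Uniform(0, 1 + k)
pairs = []
for _ in range(N):
    y = random.randint(0, 1)
    x = random.random() * (1 + y)
    pairs.append((x + I, y + I))  # neutrosophic versions X_N, Y_N

mean = lambda v: sum(v) / len(v)

# inner expectation E[X_N | Y_N = y], estimated per group of equal Y_N values
groups = {}
for xn, yn in pairs:
    groups.setdefault(yn, []).append(xn)
inner = {yn: mean(v) for yn, v in groups.items()}

lhs = mean([inner[yn] for _, yn in pairs])  # E(E(X_N | Y_N))
rhs = mean([xn for xn, _ in pairs])         # E(X_N)
print(abs(lhs - rhs) < 1e-9)  # True: the tower property holds for the sample
```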

### Theorem 3.4

For any two neutrosophic random variables XN and YN whose expectations exist, if r(x, y) is a function of x and y, then

$E(E(r(x,y) \mid Y_N)) = E[r(x,y)].$

### Proof

Case 1

Continuous

$$\begin{aligned} E(E(r(x,y) \mid Y_N)) &= \int_{Y_N} E[r(x,y) \mid Y_N=y]\, f(y)\,dy = \int_{Y_N} E[r(x,y) \mid Y=y-I]\, f(y-I)\,dy \\ &= \int_{Y_N}\!\int_{X_N} r(x,y)\, f[X_N=x \mid Y=y-I]\, f(y-I)\,dy\,dx \\ &= \int_{Y_N}\!\int_{X_N} r(x,y)\, \frac{f[X=x-I, Y=y-I]}{f(y-I)}\, f(y-I)\,dy\,dx \\ &= \int_{Y_N}\!\int_{X_N} r(x,y)\, f[X=x-I, Y=y-I]\,dy\,dx \\ &= \int_{X_N} r(x,y) \int_{Y_N} f[X=x-I, Y=y-I]\,dy\,dx \\ &= \int_{X_N} r(x,y)\, f[X=x-I]\,dx = E[r(x,y)]. \end{aligned}$$
Case 2

Discrete

$$\begin{aligned} E(E(r(x,y) \mid Y_N)) &= \sum_{Y_N} E[r(x,y) \mid Y_N=y]\, P\{Y_N=y\} = \sum_{Y_N} E[r(x,y) \mid Y=y-I]\, P\{Y=y-I\} \\ &= \sum_{Y_N}\sum_{X_N} r(x,y)\, P[X_N=x \mid Y=y-I]\, P\{Y=y-I\} \\ &= \sum_{Y_N}\sum_{X_N} r(x,y)\, \frac{P[X=x-I, Y=y-I]}{P\{Y=y-I\}}\, P\{Y=y-I\} \\ &= \sum_{Y_N}\sum_{X_N} r(x,y)\, P[X=x-I, Y=y-I] \\ &= \sum_{X_N} r(x,y) \sum_{Y_N} P[X=x-I, Y=y-I] \\ &= \sum_{X_N} r(x,y)\, P[X=x-I] = E[r(x,y)]. \end{aligned}$$

In classical probability, two discrete random variables X and Y are independent if PXY(x, y) = PX(x)PY(y), ∀x, y. Equivalently, two continuous random variables X and Y are independent if FXY(x, y) = FX(x)FY(y), ∀x, y. Based on this and the definition of a neutrosophic random variable, we give the following definition:

### Definition 3.3

Two discrete neutrosophic random variables X and Y are neutrosophic independent if

$P_{X_NY_N}(x,y) = P_{X_N}(x)\,P_{Y_N}(y) = P_X(x-I)\,P_Y(y-I), \quad \forall x, y.$

Equivalently, X and Y are continuous, neutrosophic independent if

$F_{X_NY_N}(x,y) = F_{X_N}(x)\,F_{Y_N}(y) = F_X(x-I)\,F_Y(y-I), \quad \forall x, y.$

### Theorem 3.5

If XN and YN are two neutrosophic independent random variables, then E(XNYN) = E(XN)E(YN).

### Proof

Case 1

Continuous

$$\begin{aligned} E(X_NY_N) &= E([X+I][Y+I]) = \int_{X_N}\!\int_{Y_N} [x+I][y+I]\, f_{X_NY_N}(x-I,y-I)\,dx\,dy \\ &= \int_{X_N}\!\int_{Y_N} [x+I][y+I]\, f_{X_N}(x-I)\, f_{Y_N}(y-I)\,dx\,dy \\ &= \int_{X_N} [x+I]\, f_{X_N}(x-I)\,dx \int_{Y_N} [y+I]\, f_{Y_N}(y-I)\,dy \\ &= E(X_N)\,E(Y_N). \end{aligned}$$
Case 2

Discrete

$$\begin{aligned} E(X_NY_N) &= E([X+I][Y+I]) = \sum_{X_N}\sum_{Y_N} [x+I][y+I]\, f_{X_NY_N}(x-I,y-I) \\ &= \sum_{X_N}\sum_{Y_N} [x+I][y+I]\, f_{X_N}(x-I)\, f_{Y_N}(y-I) \\ &= \sum_{X_N} [x+I]\, f_{X_N}(x-I) \sum_{Y_N} [y+I]\, f_{Y_N}(y-I) = E(X_N)\,E(Y_N). \end{aligned}$$
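
For a crisp assumed value of I (the symbolic case is not captured here), Theorem 3.5 can also be checked by simulation with two independently drawn hypothetical uniform samples:

```python
import random

random.seed(2)
I = 0.25  # assumed crisp indeterminacy (illustrative only)
N = 100_000

xs = [random.random() for _ in range(N)]  # X ~ Uniform(0, 1)
ys = [random.random() for _ in range(N)]  # Y ~ Uniform(0, 1), drawn independently

mean = lambda v: sum(v) / len(v)

# Theorem 3.5: for independent X_N and Y_N, E(X_N Y_N) = E(X_N) E(Y_N)
lhs = mean([(x + I) * (y + I) for x, y in zip(xs, ys)])
rhs = mean([x + I for x in xs]) * mean([y + I for y in ys])
print(abs(lhs - rhs) < 0.01)  # True up to Monte Carlo error
```

Unlike Theorems 3.1 and 3.2, this equality is not a per-sample identity; it holds only up to the sampling error of the empirical covariance, which shrinks as N grows.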

### Corollary 3.1

If XN and YN are two discrete neutrosophic independent random variables, then

$P_{X_N/Y_N}(x-I \mid y-I) = P_{X_N}(x-I).$

### Proof

$$\begin{aligned} P_{X_N/Y_N}(x-I \mid y-I) &= P(X_N=x-I \mid Y_N=y-I) = \frac{P_{X_NY_N}(x-I,y-I)}{P_{Y_N}(y-I)} \\ &= \frac{P_{X_N}(x-I)\,P_{Y_N}(y-I)}{P_{Y_N}(y-I)} = P_{X_N}(x-I). \end{aligned}$$

### Corollary 3.2

If XN and YN are two continuous neutrosophic independent random variables, then

$F_{X_N/Y_N}(x-I \mid y-I) = F_{X_N}(x-I).$

### Proof

$$\begin{aligned} F_{X_N/Y_N}(x-I \mid y-I) &= F(X_N=x-I \mid Y_N=y-I) = \frac{F_{X_NY_N}(x-I,y-I)}{F_{Y_N}(y-I)} \\ &= \frac{F_{X_N}(x-I)\,F_{Y_N}(y-I)}{F_{Y_N}(y-I)} = F_{X_N}(x-I). \end{aligned}$$

### Corollary 3.3

If XN and YN are two neutrosophic independent random variables, then

$\operatorname{cov}(X_N,Y_N) = E[X_NY_N] - E[X_N]\,E[Y_N].$

### Proof

$$\begin{aligned} \operatorname{cov}(X_N,Y_N) &= E[(X_N-E[X_N])(Y_N-E[Y_N])] \\ &= E[X_NY_N] - E[X_N\,E[Y_N]] - E[E[X_N]\,Y_N] + E[E[X_N]\,E[Y_N]] \\ &= E[X_NY_N] - E[Y_N]\,E[X_N] - E[X_N]\,E[Y_N] + E[X_N]\,E[Y_N] \\ &= E[X_NY_N] - E[X_N]\,E[Y_N]. \end{aligned}$$

### Remark 3.2

If XN = YN, then

$\operatorname{cov}(X_N,X_N) = E[X_NX_N] - E[X_N]\,E[X_N] = E[X_N^2] - (E[X_N])^2 = \operatorname{Var}(X_N).$

### Corollary 3.4

If XN and YN are two neutrosophic independent random variables, then

$\operatorname{Var}(X_N+Y_N) = \operatorname{Var}(X_N) + \operatorname{Var}(Y_N) + 2\operatorname{Cov}(X_N,Y_N).$

### Proof

$$\begin{aligned} \operatorname{Var}(X_N+Y_N) &= E[(X_N+Y_N-E[X_N+Y_N])^2] = E[(X_N+Y_N-E[X_N]-E[Y_N])^2] \\ &= E[((X_N-E[X_N])+(Y_N-E[Y_N]))^2] \\ &= E[(X_N-E[X_N])^2 + (Y_N-E[Y_N])^2 + 2(X_N-E[X_N])(Y_N-E[Y_N])] \\ &= \operatorname{Var}(X_N) + \operatorname{Var}(Y_N) + 2\operatorname{Cov}(X_N,Y_N). \end{aligned}$$

### Example 3.1

Let X and Y be random variables with a probability density function as follows (Figure 1):

$f_{XY}(x,y) = x^2 + 2y^2, \quad 0 < x < 1,\ 0 < y < 1.$

We can then write a joint neutrosophic density function as follows:

For 0 < x − I < 1, 0 < y − I < 1,

$f_{X_NY_N}(x,y) = f_{XY}(x-I,y-I) = (x-I)^2 + 2(y-I)^2 = x^2 - 2Ix + I^2 + 2y^2 - 4Iy + 2I^2,$

where I < x < 1 + I and I < y < 1 + I.

To prove that fXNYN(x, y) is a joint neutrosophic density function:

$$\begin{aligned} \int_I^{1+I}\!\!\int_I^{1+I} f_{X_NY_N}(x,y)\,dx\,dy &= \int_I^{1+I}\!\!\int_I^{1+I} [(x-I)^2 + 2(y-I)^2]\,dx\,dy \\ &= \int_I^{1+I} \left[\frac{(x-I)^3}{3} + 2x(y-I)^2\right]_I^{1+I} dy \\ &= \int_I^{1+I} \left[\frac{1}{3} + 2(1+I)(y-I)^2 - 2I(y-I)^2\right] dy \\ &= \int_I^{1+I} \left[\frac{1}{3} + 2(y-I)^2\right] dy = \left[\frac{y}{3} + \frac{2(y-I)^3}{3}\right]_I^{1+I} \\ &= \frac{1+I}{3} + \frac{2}{3} - \frac{I}{3} = \frac{1}{3} + \frac{2}{3} = 1. \end{aligned}$$

We can find the marginal neutrosophic density function fXN(x) as follows:

$$\begin{aligned} f_{X_N}(x) &= \int_I^{1+I} f_{X_NY_N}(x,y)\,dy = \int_I^{1+I} [(x-I)^2 + 2(y-I)^2]\,dy \\ &= \left[(x-I)^2 y + \frac{2(y-I)^3}{3}\right]_I^{1+I} = (x-I)^2(1+I) + \frac{2}{3} - (x-I)^2 I \\ &= (x-I)^2 + \frac{2}{3}. \end{aligned}$$

To verify that $f_{X_N}(x) = (x-I)^2 + \frac{2}{3}$ is a density function:

$$\int_I^{1+I} \left[(x-I)^2 + \frac{2}{3}\right] dx = \left[\frac{(x-I)^3}{3} + \frac{2}{3}x\right]_I^{1+I} = \frac{1}{3} + \frac{2}{3}(1+I) - \frac{2}{3}I = \frac{1}{3} + \frac{2}{3} = 1.$$

Similarly, we can find the marginal neutrosophic density function fYN(y) as follows:

$$\begin{aligned} f_{Y_N}(y) &= \int_I^{1+I} f_{X_NY_N}(x,y)\,dx = \int_I^{1+I} [(x-I)^2 + 2(y-I)^2]\,dx \\ &= \left[\frac{(x-I)^3}{3} + 2(y-I)^2 x\right]_I^{1+I} = \frac{1}{3} + 2(y-I)^2(1+I) - 2(y-I)^2 I \\ &= \frac{1}{3} + 2(y-I)^2. \end{aligned}$$

To verify that $f_{Y_N}(y) = \frac{1}{3} + 2(y-I)^2$ is a density function:

$$\int_I^{1+I} f_{Y_N}(y)\,dy = \left[\frac{y}{3} + \frac{2(y-I)^3}{3}\right]_I^{1+I} = \frac{1+I}{3} + \frac{2}{3} - \frac{I}{3} = \frac{1}{3} + \frac{2}{3} = 1.$$
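
The closed forms obtained in Example 3.1 can be cross-checked by numeric integration for an assumed crisp value of I (the symbolic case is not captured by this sketch):

```python
I = 0.7  # assumed crisp indeterminacy (illustrative only)

def f_joint(x, y):
    """Joint neutrosophic density from Example 3.1."""
    return (x - I) ** 2 + 2 * (y - I) ** 2

n = 400
h = 1.0 / n

# midpoint-rule double integral over (I, 1+I) x (I, 1+I); should be ~ 1
total = 0.0
for i in range(n):
    x = I + (i + 0.5) * h
    for j in range(n):
        y = I + (j + 0.5) * h
        total += f_joint(x, y) * h * h
print(round(total, 4))  # 1.0

# marginal f_{X_N}(x) = (x - I)^2 + 2/3, checked at one point
x0 = I + 0.25
marg = sum(f_joint(x0, I + (j + 0.5) * h) * h for j in range(n))
print(abs(marg - ((x0 - I) ** 2 + 2.0 / 3.0)) < 1e-4)  # True
```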

### Example 3.2

Let X and Y be random variables with a joint probability mass function as follows (Figure 2):

$P_{XY}(x,y) = q^{\,y-2}p^2; \quad x < y;\ x = 1, 2, 3, \ldots,\ y = x+1, x+2, \ldots,$

where 0 < p < 1 and q = 1 − p.

Then we can write the joint neutrosophic mass function as follows:

$P_{X_NY_N}(x,y) = P_{XY}(x-I,y-I) = q^{\,y-I-2}p^2, \quad x-I < y-I;\ x = I+1, I+2, \ldots,\ y = x+1, x+2, \ldots$

We can find the marginal neutrosophic mass function PXN(x) as follows:

$$\begin{aligned} P_{X_N}(x) &= P[X=x-I] = \sum_{y=x+1}^{\infty} q^{\,y-I-2}p^2 = p^2 q^{\,x-1-I} + p^2 q^{\,x-I} + p^2 q^{\,x+1-I} + \cdots \\ &= p^2 q^{\,x-1-I}(1+q+q^2+q^3+\cdots) = p^2 q^{\,x-1-I}\,\frac{1}{1-q} = p\,q^{\,x-1-I}, \quad x = I+1, I+2, \ldots \end{aligned}$$

To verify that $P_{X_N}(x) = p\,q^{\,x-1-I}$, x = I + 1, I + 2, …, is a probability mass function:

$$\sum_{x=I+1}^{\infty} p\,q^{\,x-1-I} = p\,(q^0 + q^1 + q^2 + \cdots) = \frac{p}{1-q} = 1.$$

We can also find the marginal neutrosophic mass function PYN(y) as follows:

$$P_{Y_N}(y) = P[Y=y-I] = \sum_{x=I+1}^{y-1} q^{\,y-I-2}p^2 = (y-I-1)\,p^2 q^{\,y-I-2}; \quad y = I+2, I+3, \ldots,$$

since the sum has y − I − 1 equal terms (x runs from I + 1 to y − 1).

To verify that $P_{Y_N}(y) = (y-I-1)\,p^2 q^{\,y-I-2}$, y = I + 2, I + 3, …, is a probability mass function:

$$\sum_{y=I+2}^{\infty} (y-I-1)\,p^2 q^{\,y-I-2} = p^2(1 + 2q + 3q^2 + \cdots) = \frac{p^2}{(1-q)^2} = 1.$$
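
Example 3.2 can likewise be checked numerically for an assumed integer value of I (so that both supports remain integer-valued); note that the sum defining the y-marginal has y − I − 1 terms, since x runs from I + 1 to y − 1:

```python
I = 2      # assumed integer indeterminacy (illustrative only)
p = 0.3
q = 1 - p
TOP = 400  # truncation point for the infinite sums

# marginal of X_N: P_{X_N}(x) = p q^(x-1-I), x = I+1, I+2, ...
sx = sum(p * q ** (x - 1 - I) for x in range(I + 1, TOP))
print(round(sx, 6))  # 1.0

# marginal of Y_N: P_{Y_N}(y) = (y - I - 1) p^2 q^(y-I-2), y = I+2, I+3, ...
sy = sum((y - I - 1) * p ** 2 * q ** (y - I - 2) for y in range(I + 2, TOP))
print(round(sy, 6))  # 1.0
```

Both truncated sums agree with 1 to many decimal places, since the geometric tails beyond TOP are negligible.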

### 4. Conclusion

Conditional probabilities are an important topic that contributes to solving numerous everyday problems. Many researchers have studied these probabilities under uncertain conditions. In this paper, conditional probabilities are studied, for the first time, under the neutrosophic theory. We introduced a neutrosophic conditional probability as a generalization of the classical conditional probability and presented its properties. The concepts of joint distribution function, regular conditional probabilities, marginal density function, expected value, and joint density function were generalized from the classical type to the neutrosophic type in both the discrete and continuous cases. Numerous properties and examples are provided to demonstrate the significance of this study. We suggest that researchers use these results and apply them to bivariate probability distributions. We also suggest extending these results to the concept of n-valued neutrosophic logic.

### Fig 1.

Figure 1. Simulation of fXNYN(x, y) in Example 3.1 with different indeterminacy I.


### Fig 2.

Figure 2. Simulation of PXNYN(x, y) in Example 3.2 with different indeterminacy I, where p = 0.3.


### References

1. Zadeh, LA (1965). Fuzzy sets. Information and Control. 8, 338-353. https://doi.org/10.1016/S0019-9958(65)90241-X
2. Atanassov, KT (1986). Intuitionistic fuzzy sets. Fuzzy Sets and Systems. 20, 87-96. https://doi.org/10.1016/S0165-0114(86)80034-3
3. Pawlak, Z (1982). Rough sets. International Journal of Computer & Information Sciences. 11, 341-356. https://doi.org/10.1007/BF01001956
4. Smarandache, F (1998). Neutrosophy: Neutrosophic Probability, Set, and Logic. Rehoboth, NM: American Research Press
5. Smarandache, F (1999). A Unifying Field in Logics: Neutrosophy: Neutrosophic Probability, Set and Logic. Rehoboth, NM: American Research Press
6. Smarandache, F (2013). Introduction to Neutrosophic Measure, Neutrosophic Integral, and Neutrosophic Probability. Craiova, Romania: Sitech
7. Smarandache, F (2014). Introduction to Neutrosophic Statistics. Craiova, Romania: Sitech
8. Smarandache, F (2020). Plithogeny, Plithogenic Set, Logic, Probability, and Statistics. Delhi, India: Indo American Books
9. Smarandache, F (2013). Neutrosophic measure and neutrosophic integral. Neutrosophic Sets and Systems. 1. article no. 2
10. Hanafy, IM, Salama, AA, Khaled, OM, and Mahfouz, KM (2014). Correlation of neutrosophic sets in probability spaces. Journal of Applied Mathematics, Statistics and Informatics. 10, 45-52. https://doi.org/10.2478/jamsi-2014-0004
11. Hanafy, IM, Salama, AA, and Mahfouz, K (2012). Correlation coefficients of generalized intuitionistic fuzzy sets by centroid method. IOSR Journal of Mechanical and Civil Engineering. 3, 11-14. https://doi.org/10.9790/1684-0351114
12. Hanafy, IM, Salama, AA, and Mahfouz, K (2012). Correlation of neutrosophic data. International Refereed Journal of Engineering and Science (IRJES). 1, 39-43.
13. Hanafy, IM, Salama, AA, and Mahfouz, M (2013). Correlation coefficients of neutrosophic sets by centroid method. International Journal of Probability and Statistics. 2, 9-12.
14. Salama, AA, Khaled, OM, and Mahfouz, KM (2014). Neutrosophic correlation and simple linear regression. Neutrosophic Sets and Systems. 5, 3-8.
15. Yuhua, F (2015). Examples of neutrosophic probability in physics. Neutrosophic Sets and Systems. 7, 32-33.
16. Patro, SK, and Smarandache, F (2016). The neutrosophic statistical distribution, more problems, more solutions. Neutrosophic Sets and Systems. 12, 73-79.
17. Smarandache, F, Abbas, N, Chibani, Y, Hadjadji, B, and Omar, ZA (2017). PCR5 and neutrosophic probability in target identification. Neutrosophic Sets and Systems. 16, 76-79.
18. Guo, Q, Wang, H, He, Y, Deng, Y, and Smarandache, F (2017). An evidence fusion method with importance discounting factors based on neutrosophic probability analysis in DSmT framework. Neutrosophic Sets and Systems. 17, 64-73.
19. Gafar, MG, and El-Henawy, I (2017). Integrated framework of optimization technique and information theory measures for modeling neutrosophic variables. Neutrosophic Sets and Systems. 15, 80-89.
20. Alhabib, R, Ranna, MM, Farah, H, and Salama, AA (2018). Some neutrosophic probability distributions. Neutrosophic Sets and Systems. 22, 30-38.
21. Zeina, MB, and Hatip, A (2021). Neutrosophic random variables. Neutrosophic Sets and Systems. 39, 44-52.