International Journal of Fuzzy Logic and Intelligent Systems 2021; 21(2): 101-122

Published online June 25, 2021

https://doi.org/10.5391/IJFIS.2021.21.2.101

© The Korean Institute of Intelligent Systems

Data Stream Classification Algorithms for Workload Orchestration in Vehicular Edge Computing: A Comparative Evaluation

Mutaz Al-Tarawneh

Computer Engineering Department, Mutah University, Karak, Jordan

Correspondence to :
Mutaz Al-Tarawneh (mutaz.altarawneh@mutah.edu.jo)

Received: February 3, 2021; Revised: March 28, 2021; Accepted: April 26, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper reports on the use of online data stream classification algorithms to support workload orchestration in vehicular edge computing environments. These algorithms can be used to predict the ability of available computational nodes to successfully handle computational tasks generated from vehicular applications. Several online data stream classification algorithms have been evaluated based on synthetic datasets generated from simulated vehicular edge computing environments. In addition, a multi-criteria decision analysis technique was utilized to rank the different algorithms based on their performance metrics. The evaluation results demonstrate that the considered algorithms can handle online classification operations with various trade-offs and dominance relations with respect to their obtained performance. In addition, the utilized multi-criteria decision analysis technique can efficiently rank various algorithms and identify the most appropriate algorithms to augment workload orchestration. Furthermore, the evaluation results show that the leveraging bagging algorithm, with an extremely fast decision tree base estimator, is able to maintain marked online classification performance and persistent competitive ranking among its counterparts for all datasets. Hence, it can be considered a promising choice to reinforce workload orchestration in vehicular edge computing environments.

Keywords: Online classification, Data stream, Performance, Ranking

Together with the rapid development of artificial intelligence and data mining techniques, Internet of Things (IoT) technology has transformed traditional transportation systems into intelligent transportation systems (ITS) [1]. In the ITS paradigm, vehicles are equipped with various types of processing, sensing, detection, and communication devices, leading to more intelligent and connected vehicles. Therefore, a plethora of intelligent applications, such as autonomous driving, smart navigation, and infotainment, has been developed [2-4]. However, these applications are computationally intensive, with processing requirements that surpass the processing capacity of in-vehicle hardware resources. This discrepancy between the application requirements and the computing power of the in-vehicle hardware resources can hinder the development of ITS-centered applications [5]. To remedy this situation, cloud computing has been employed within vehicular networks, and applications have been allowed to utilize the powerful hardware resources of remote cloud data centers [6]. However, cloud data centers may incur very long transmission delays that violate the extremely low latency requirements of some applications in vehicular networks. To overcome this issue, vehicular edge computing (VEC), which combines vehicular networks with multi-access edge computing (MEC), has been proposed [7]. In the VEC paradigm, several edge servers are deployed at the network edge, providing adequate computing power to handle the increasing processing requirements of in-vehicle applications.

Hence, VEC is considered a promising concept that provides a baseline for various applications to enhance the quality of service (QoS) based on computation offloading [8]. It can improve the user’s quality of experience by offloading computational tasks to the edge servers, which are expected to handle the processing demand of vehicular applications. Thus, it can handle delay-sensitive vehicular applications [9,10]. The envisioned VEC scenarios encompass a dynamically changing process characterized by an evolving stream of offloading requests generated from a diverse set of applications with varying processing demands and delay requirements, an instantaneously changing network state, and an oscillatory utilization and capacity of the hardware resources of both edge and cloud servers [11]. In such a dynamic and heterogeneous environment, workload orchestration plays a vital role in handling the incoming offloading requests generated from different applications and deciding on the computational server to which each request should be offloaded, considering the application characteristics, network status, and utilization levels of both edge and cloud hardware resources. Clearly, as the number of vehicles and in-vehicle applications increases, the demand for the networking and hardware resources of the VEC system will also increase. The increased demand for these resources will ultimately lead to increased competition for resources, hindering the decision-making process involved in the operation of workload orchestration [12]. Consequently, predicting the status of hardware resources and their ability to successfully handle offloaded requests is a challenging yet inevitable process. In this regard, machine learning (ML) algorithms can provide efficient tools for making predictions in dynamically changing environments [13]. The ultimate goal of the prediction process is to help the workload orchestrator determine whether offloading an incoming request to a particular server will eventually succeed or fail. In other words, the prediction process associated with workload orchestration is a binary classification problem in which a class label is assigned to each incoming offloading request. The class label will be set to either success or failure. Whereas a success indicates that a particular computational server is predicted to successfully handle the offloaded request and deliver the execution results back to the request source, a failure indicates that the offloading process is expected to fail because of either vehicle mobility, network congestion, or unavailability of processing resources. In this context, the work in [13] is among the first efforts to propose an ML-based workload orchestrator for VEC systems. The authors used several classification algorithms and tested their ability to aid in the workload orchestration process. However, they used these algorithms in a batch-learning scheme, in which an ML model is built under the assumption that the entire dataset (i.e., all offloading requests) is available in memory. Batch learning schemes suffer from several limitations. First, the training phase may last for a long period and require a significant amount of processing and memory resources. Second, the performance of the trained model is sensitive to the training dataset size. Third, in the batch learning scheme, the training data are assumed to be static and do not change over time; once a model is trained, it cannot acquire knowledge or experience from new samples. In other words, when there is a change in the statistical properties of the model’s input (i.e., concept drift), a new model must be constructed [14].

Consequently, as the offloading requests in VEC environments represent a continuous stream of data, online data stream classification algorithms are viable alternatives to offline (i.e., batch) classification algorithms in VEC-based workload orchestration for several reasons. First, online data stream classification algorithms are designed to handle unbounded streams of data and incrementally learn from incoming data; they must be continuously updated while making predictions when required [15]. Second, online data stream classification algorithms can be utilized in real-time applications that cannot be handled by classical (i.e., batch) learning algorithms [16]. Third, they are able to detect concept drift, allowing the trained models to continuously adapt and evolve with dynamically changing data streams [17,18]. Therefore, online data stream classification is perceived as a suitable tool for workload orchestration in VEC systems, where the environmental state and application behavior may change over time in an unpredictable manner. Many algorithms for data stream classification have been proposed [19-21]. Although some of these algorithms have been studied in multiple fields [16,22-25], the evaluation of their performance and the trade-offs involved in their performance metrics and memory costs have not been addressed for VEC environments. Hence, this work seeks to identify which online data stream classification algorithms may be suitable for VEC environments through a rigorous comparative evaluation. In addition, it aims to analyze the trade-offs associated with key performance metrics, such as the algorithm’s accuracy, precision, recall, kappa statistic, and memory requirements [26]. The evaluation of online data stream classification algorithms involves multiple, often conflicting criteria. Hence, it can be modeled as a multi-criteria decision analysis (MCDA) problem [27]. MCDA is a formal process that can be employed in decision-making by allowing identification of the alternatives, selection of the evaluation indicators (i.e., criteria), and aggregation of the obtained performance scores under the considered performance evaluation criteria [28,29]. In this context, composite indicators (CIs) are a family of MCDA methods that can be used to assign scores to various alternatives and rank them appropriately [30-32]. Hence, this work employs a CI-based approach to rank various online data stream classification algorithms based on their obtained performance metrics. The experiments were carried out using several online data stream classification algorithms (Section 2) on synthetically generated datasets. All tests were performed using the scikit-multiflow evaluation platform [33]. The main contributions of this study can be summarized as follows.

  • Performing a rigorous evaluation of the performance of several online data stream classification algorithms on synthetic datasets generated from a simulated VEC environment.

  • Applying an MCDA technique to rank the various algorithms based on their observed performance trade-offs and dominance relations.

  • Identifying the most suitable algorithm or set of algorithms to augment workload orchestration in VEC environments.

The remainder of this paper is organized as follows. Section 2 briefly explains the online data stream classification algorithms considered in this study. Section 3 describes the research methodology and the research tools utilized. Section 4 presents the results of the evaluation and trade-off analysis. Section 5 summarizes and concludes this paper.

Traditionally, ML algorithms have been used to construct models based on static datasets. However, there is a growing need for mechanisms capable of handling vast volumes of data that arrive as streams. In this regard, new data samples can be obtained at any time, and storing these data samples is inappropriate. On the one hand, learning from continuous data streams requires the ML model to be created and continuously updated throughout the stream. On the other hand, it is important to address concept drift, in which the statistical properties of the incoming data change over time [34,35]. In addition, for VEC environments, the obtained ML model must be updated instantaneously, requiring algorithms that can achieve adequate accuracy levels under limited processing power and memory space [26]. In this work, several data stream classification algorithms, with distinct model construction mechanisms, complexity, and memory requirements, are considered. The algorithms are summarized as follows.

2.1 Bayes Learning Algorithms

In this category, the naive Bayes (NB) algorithm was used [20]. The NB algorithm performs Bayesian prediction under the assumption that all inputs, that is, the features of the input data samples, are independent. The NB algorithm is a simple classification algorithm with low processing requirements. Given n different classes, the trained NB classifier predicts, for every incoming data sample, the class to which it belongs with the highest probability.
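As a concrete illustration of how such an incremental classifier is tested and trained one sample at a time, the following minimal sketch uses the NaiveBayes implementation of the scikit-multiflow platform adopted in this study; the SEAGenerator stream and the sample budget are illustrative placeholders, not the datasets described in Section 3.

```python
# Minimal sketch of incremental (test-then-train) naive Bayes classification with
# scikit-multiflow. The stream generator and parameter values are illustrative
# placeholders, not the VEC datasets used in this paper.
import numpy as np
from skmultiflow.bayes import NaiveBayes
from skmultiflow.data import SEAGenerator

stream = SEAGenerator(random_state=1)      # placeholder binary-class stream
nb = NaiveBayes()

correct, seen = 0, 0
while seen < 1000 and stream.has_more_samples():
    X, y = stream.next_sample()            # one instance at a time
    if seen > 0:                           # test first ...
        correct += int(nb.predict(X)[0] == y[0])
    nb.partial_fit(X, y, classes=[0, 1])   # ... then train incrementally
    seen += 1

print("prequential accuracy:", correct / (seen - 1))
```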

2.2 Lazy Learning Algorithms

The most commonly used lazy data stream classification algorithms are the k-nearest neighbors (kNN) classifier [36], the kNN classifier with an adaptive windowing (ADWIN) change detector (kNN-ADWIN), and the self-adjusting memory coupled with the kNN classifier (SAM-kNN) [37,38]. In the data stream setting, the kNN algorithm operates by keeping track of a window with a fixed number of recently observed training data samples. When a new input data sample arrives, the kNN algorithm searches within the recently stored samples and finds the closest neighbors using a particular distance measure. Then, the class label of the incoming data sample can be assigned accordingly. The kNN-ADWIN classifier is an improvement over the regular kNN classifier because of its resistance to concept drift. It employs the ADWIN change detector to determine which previous data samples to keep and which ones to discard (i.e., forget), which in turn regulates the sample window size. In addition, SAM-kNN is a refinement over the other two algorithms, in which a SAM model constructs an ensemble of models targeting either current or previous concepts. These dedicated models can be applied according to the requirements of the current situation (i.e., concept). To accomplish this, the SAM model constructs both short-term (STM) and long-term (LTM) memories. Whereas the STM is built for the current concept, the LTM is used to store information about previous concepts. A cleaning process is then used to control the size of the STM while maintaining consistency between the LTM and STM.
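To make the windowing mechanism concrete, the following self-contained sketch (an illustrative assumption, not the scikit-multiflow implementation) keeps only the most recent labelled samples in a fixed-size window and classifies each new instance by a majority vote among its k nearest stored samples.

```python
# Illustrative sketch of the sliding-window kNN mechanism described above; the class
# name, default parameters, and distance measure are assumptions for demonstration.
from collections import deque, Counter
import numpy as np

class SlidingWindowKNN:
    def __init__(self, k=5, max_window_size=1000):
        self.k = k
        self.window = deque(maxlen=max_window_size)    # old samples are forgotten

    def learn_one(self, x, y):
        self.window.append((np.asarray(x, dtype=float), y))

    def predict_one(self, x):
        if not self.window:
            return 0                                   # default class before any training data
        x = np.asarray(x, dtype=float)
        dists = [(np.linalg.norm(x - xi), yi) for xi, yi in self.window]
        neighbours = sorted(dists, key=lambda d: d[0])[: self.k]
        return Counter(y for _, y in neighbours).most_common(1)[0][0]
```

The kNN-ADWIN variant would additionally let the ADWIN detector shrink this window when a change is detected, instead of relying only on a fixed size.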

2.3 Tree-Based Learning Algorithms

Tree-based algorithms are widely used in data-stream classification applications. In this study, three main tree-based algorithms are used: the Hoeffding tree (HT) [39], Hoeffding adaptive tree (HAT) [40], and extremely fast decision tree (EFDT) [41]. The HT algorithm is an incremental, anytime decision tree induction algorithm that has the ability to learn from massive online data streams, assuming that the distribution generating the incoming data samples is static and does not change over time. It relies on the fact that a small set of input samples can often be sufficient to select an optimal splitting attribute. This idea is mathematically supported by the Hoeffding bound, which quantifies the number of input samples required to estimate some statistics within a predefined precision. A theoretically attractive feature of the HT algorithm that is not shared by other online decision tree algorithms is that it has profound performance guarantees. Relying on the Hoeffding bound, the output of an HT learner is asymptotically nearly identical to that of a batch-based learner using infinitely many input data samples. The HAT algorithm utilizes the ADWIN change detector to monitor the performance of different tree branches. The branches whose performance decreases are replaced with new branches if the new branches are more accurate. Furthermore, the EFDT classification algorithm builds a tree in an incremental manner. It selects and deploys a split once it is confident that the split is useful and subsequently revisits that split decision, replacing it if it then becomes evident that a more useful split is present. The EFDT is able to learn quickly from static distributions and ultimately learn the asymptotic batch tree if the distribution generating the input data samples is stationary.
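The split decision described above can be summarized with a short worked sketch of the Hoeffding bound; the range R, confidence parameter δ, sample count, and information-gain values below are illustrative assumptions, not values used by the evaluated implementations.

```python
# Hedged sketch of the Hoeffding-bound split test used by HT/EFDT-style learners.
import math

def hoeffding_bound(R, delta, n):
    """With probability 1 - delta, the true mean of a random variable with range R
    lies within this radius of its empirical mean over n observations."""
    return math.sqrt((R ** 2) * math.log(1.0 / delta) / (2.0 * n))

# Hypothetical information-gain estimates for the two best candidate attributes.
best_gain, second_best_gain = 0.32, 0.25
eps = hoeffding_bound(R=math.log2(2), delta=1e-7, n=800)   # gain range for two classes
if best_gain - second_best_gain > eps:
    print("confident enough to split on the best attribute (eps = %.4f)" % eps)
```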

2.4 Rule-Based Learning Algorithms

The very fast decision rule (VFDR) classifier is an online data stream classification algorithm [42]. The learning process of the VFDR algorithm is similar to that of the HT algorithm, but it builds a collection of rules instead of a tree. The ultimate goal of VFDR is to create a collection of rules that constitute a highly interpretable classifier. Each rule is represented as a combination of conditions based on attribute values and a structure for maintaining sufficient statistics. These statistics are used to determine the class predicted by the associated rule for incoming data samples.

2.5 Ensemble Learning Algorithms

In this study, several ensemble learning methods are evaluated. These algorithms include Oza bagging (OB) [43], Oza bagging with the ADWIN change detector (OB-ADWIN) [43], leveraging bagging (LB) [44], online synthetic minority oversampling technique bagging (SMOTE-B) [45], online under-over bagging (UO-B) [45], adaptive random forest (ARF) [46], and the dynamic weighted majority classifier (DWMC) [47]. For instance, OB is an online ensemble learning algorithm that can be considered a refinement of traditional batch-based bagging to handle incremental learning. In traditional batch-based bagging, M classifiers are trained on M distinct datasets created by drawing N samples from an N-sized training dataset with replacement. In the online learning setting, there is no training dataset but rather a stream of input data samples, so drawing input samples with replacement cannot be performed trivially. Online OB mimics the training phase by using each arriving input data sample to train each base estimator k times, where k is drawn from a binomial distribution. Because the input data stream can be assumed to be infinite, and given that the binomial distribution tends to a Poisson(λ = 1) distribution with infinite samples, the authors of [43] showed that the process adopted by the online OB algorithm is a good approximation of drawing with replacement. OB-ADWIN is an improvement over the OB algorithm in which the ADWIN change detector is employed. In addition, the LB algorithm is based on the OB algorithm, in which a Poisson distribution is used to simulate the re-sampling process. The LB algorithm attempts to obtain better classification results by modifying the parameter of the Poisson distribution obtained from the binomial distribution when assuming an infinite input data stream. Specifically, the LB algorithm changes the λ parameter of the Poisson distribution to six instead of one. The new value of λ leads to greater diversity in the input space by attributing a different range of weights to the input data samples. To achieve further improvement over the OB algorithm, LB uses output detection codes. In the detection codes, each class label is coded using an n-bit-long binary code, and each bit is associated with one of the n classifiers. As a new input data sample is analyzed, each classifier is trained on its associated bit. This assists the LB algorithm, to some extent, with error correction.
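The re-sampling step that distinguishes OB (λ = 1) from LB (λ = 6) can be sketched as follows; the helper functions are illustrative assumptions, with the base models only required to expose a scikit-multiflow-style partial_fit/predict interface.

```python
# Minimal sketch of the Poisson-based online bagging update described above.
import numpy as np

rng = np.random.default_rng(0)

def online_bagging_update(models, x, y, lam=1.0, classes=(0, 1)):
    """Show the incoming sample (x, y) to every base model k ~ Poisson(lam) times;
    lam = 1 mimics OB, while lam = 6 mimics the leveraging bagging (LB) re-sampling."""
    X = np.asarray(x, dtype=float).reshape(1, -1)
    for model in models:
        k = rng.poisson(lam)                 # simulated 'drawing with replacement' weight
        for _ in range(k):
            model.partial_fit(X, np.array([y]), classes=list(classes))

def ensemble_predict(models, x):
    X = np.asarray(x, dtype=float).reshape(1, -1)
    votes = [m.predict(X)[0] for m in models]
    return max(set(votes), key=votes.count)  # simple majority vote
```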

The ARF algorithm is an adaptation of the traditional batch-based random forest algorithm to the online data stream setting. In ARF, multiple decision trees are generated, and the decision on how to classify each incoming data sample is made through a weighted voting system, in which the decision tree with the best performance (in terms of either accuracy or the kappa statistic) receives a more substantial weight, that is, a higher priority in the classification decision. The DWMC algorithm relies on four mechanisms to handle concept drift: it trains the online base learners of the ensemble; it assigns weights to the base learners depending on their performance; it removes base learners based on their performance; and it adds new learners based on the global performance of the whole ensemble.

In the UO-B and SMOTE-B algorithms, the basic idea is to use existing algorithms for online ensemble-based learning and combine them with batch-mode techniques for cost-sensitive bagging. Such a combination is useful for classifying imbalanced data streams in which the vast majority of incoming data samples belong to the same class. The UO-B and SMOTE-B algorithms are adaptations of the UnderOverBagging and SMOTEBagging ensemble methods [48] to online data stream settings. Given an imbalanced dataset with N samples, where N+ samples come from the minority class S+ and N− samples come from the majority class S−, batch bagging-based learning algorithms rely on under-sampling (underbagging) the majority class, over-sampling (overbagging) the minority class, or UnderOverBagging, which is a uniform combination of both underbagging and overbagging. Furthermore, the re-sampling rate (a) varies over the bagging iterations, boosting the diversity among the base learners. In SMOTEBagging, the majority class is sampled with replacement at a rate of 100% (i.e., N− samples from the majority class are generated), while C·N+ samples from the minority class are generated for each base learner, for some C > 1, among which a% are generated by re-sampling, while the rest of the samples are obtained by the SMOTE technique [49]. In the online data stream setting, where no dataset is available, the sampling process is simulated by presenting each incoming data sample to the base model k times, where k is sampled from a Poisson(λ) distribution. The value of the parameter λ is chosen in a manner that allows the online bagging scheme to mimic the behavior of its batch-based counterpart.

3.1 VEC System Model

Figure 1 shows the system model assumed in this work. This model is based on the work proposed in [13,50], where a model and a simulation environment for a VEC system were presented. In this model, in-vehicle applications are assumed to periodically generate offloading requests for some computational tasks. The workload orchestrator is assumed to offload the current request (i.e., task) to one of the following: (A) the road side unit (RSU), via the wireless local area network (WLAN) that currently covers the offloading vehicle; (B) the cloud server, through the RSU using the wide area network (WAN); or (C) the cloud server, via the cellular network (CN).

It is assumed that there are three main in-vehicle application types. These applications and their corresponding tasks are uniquely characterized by a set of parameters, as shown in Table 1. The usage percentage is the percentage of vehicles running each application, task input file size is the mean size of the input data required by the offloaded task, task output file size is the average size of the output result generated after executing the offloaded task, virtual machine (VM) utilization indicates the percentage of the processing power consumed (i.e., utilized) by the offloaded task, and the number of instructions per task represents the expected processing demand of the offloaded task. As shown in Table 1, the characteristics of different applications were chosen such that adequate diversification in their task offloading frequency, processing demand, and network bandwidth requirements is achieved. Such a diverse set of applications will cause the simulated VEC environment to go through various states that cover a wide range of computational server loading and network bandwidth utilization levels.

3.2 Dataset Construction

To carry out a comparative evaluation between different data stream classification algorithms, synthetic datasets were generated based on the simulation tool of [13]. The datasets were created by simulating a VEC system with 2,400 vehicles and application characteristics, as shown in Table 1. In addition, the VEC simulation was accompanied by a hypothetical workload orchestrator that offloads the incoming requests according to Table 2. Each entry in Table 2 represents the probability that the orchestrator will make a particular decision for an incoming offloading request. As shown, these probabilities are application-dependent; for instance, there is a 0.60 probability of offloading tasks generated from App-1 to the VEC server, as compared with probabilities of 0.30 and 0.23 for App-2 and App-3, respectively. These offloading probabilities were set to mimic a relatively realistic situation that aligns with the application characteristics shown in Table 1. For instance, both App-2 and App-3 are more computationally intensive than App-1; they have a greater number of instructions per task. In addition, the average input/output file size is higher than that of App-1. Hence, they were assigned higher probabilities of being offloaded to the resource-enriched cloud server via the high-bandwidth WAN connection. Overall, the hypothetical orchestration behavior shown in Table 2 was observed to generate synthetic datasets with an adequate number of success and failure instances, leading to a more appropriate setting for evaluating the performance and discrimination ability of different online data stream classification algorithms. For each offloading request, a record-keeping process was performed. Record keeping involves recording the VEC environmental state when the offloading decision is made, in addition to the offloading outcome. The environmental state variables depend on the decision made by the workload orchestrator, as shown in Table 3. The environmental state variables contain information about application characteristics related to the offloaded task, average utilization of the VMs instantiated on the VEC server, upload and download delays associated with network connections used for task offloading, and the number of tasks recently offloaded to the selected server.

The offloading outcome is set to success if the task associated with the offloaded request is offloaded successfully and its result is delivered to the request source. It is set to failure if the execution result is not delivered successfully to the requesting vehicle. The failure of the offloading process may be due to several reasons, such as high resource utilization of the selected server, unavailability of network bandwidth, or vehicle mobility. Ultimately, the records obtained during the simulation process were used to create three different datasets, with each dataset corresponding to one of the three possible offloading decisions. These datasets are denoted as the Edge, CloudRSU, and CloudCN datasets. Each dataset stores the environmental state variables associated with each offloading request, along with the offloading outcome. The stored entries appear in chronological order. Table 4 lists some sample entries in the constructed datasets.

3.3 Evaluation Methodology

This section explains the main steps implemented to evaluate the performance of the considered data stream classification algorithms on the obtained datasets. Figure 2 depicts the evaluation procedure followed by the scikit-multiflow evaluation platform [33].

This evaluation procedure was performed for every data stream classification algorithm on each of the obtained datasets. As shown in Figure 2, the dataset is first loaded as a stream and then passed to the instantiated classification algorithm for online testing, incremental learning, and evaluation. In the evaluation platform, the instances contained in the obtained stream maintain the chronological order captured from the simulated VEC environment. In addition, each instance contains a copy of the dynamic state (i.e., application characteristics, designated server utilization, and network status) of the VEC environment upon making the orchestration decision. As a result, each tested algorithm in the evaluation platform is exposed to state variables that capture the state of the simulated VEC environment when making a classification decision. In this study, the classification algorithms were evaluated using the prequential evaluation method, also known as the interleaved test-then-train method. The prequential evaluation method is designed specifically for online data stream settings, in the sense that each incoming input data sample (i.e., instance) serves two purposes, and input instances are analyzed sequentially, in order of their arrival, and become immediately inaccessible. In this method, each incoming data sample is first used to test the classification algorithm (i.e., to make a prediction), after which the same sample is used to train the classification algorithm. The considered performance metrics are incrementally updated after each observed instance, allowing a continuous update of the performance of each tested algorithm in addition to real-time tracking of its adaptability to unseen instances. Hence, the classification algorithm is always tested, and its performance metrics are updated, on input data samples that have not yet been seen. The performance of the classification algorithms is quantified in terms of a number of widely used performance measures, including accuracy, precision, recall, F-score, kappa statistic, and the memory size of the ML model obtained online. These measures are defined as follows:

  • Classification accuracy: the percentage of correctly classified input samples.

    $\text{Accuracy} = \frac{TP + TN}{TP + FP + FN + TN}$,

    where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. TP is the number of correctly classified positive (i.e., success) instances. TN is the number of correctly classified negative (i.e., failure) instances. FP is the number of negative instances that are misclassified as positive. FN is the number of positive instances that are misclassified as negative.

  • Precision: the proportion of positive class predictions that actually belong to the positive class.

    $\text{Precision} = \frac{TP}{TP + FP}$.

  • Recall: the proportion of all positive instances in the observed stream that are correctly predicted as positive.

    $\text{Recall} = \frac{TP}{TP + FN}$.

  • F-score: is the harmonic mean of precision and recall.

    $\text{F-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$.

  • Kappa statistic (κ): is a robust classification accuracy measure that takes into account the probability of agreement by chance, indicating an improvement over a majority class classifier, which predicts all incoming samples fall in the majority class [51]. It plays a crucial role in evaluating classification accuracy, especially for imbalanced data streams.

    $\kappa = \frac{p_0 - p_c}{1 - p_c}$,

    where $p_0$ is the prequential accuracy of the classifier and $p_c$ is the probability that a chance classifier makes a correct prediction [52]. If $\kappa = 1$, the classification algorithm is always correct.

  • Model size: is a measure of the memory space occupied by the classification model obtained throughout the prequential evaluation process.
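To make the evaluation procedure of Figure 2 concrete, the following hedged sketch shows how such a prequential run is configured in scikit-multiflow; the file name edge_dataset.csv is a hypothetical placeholder for one of the constructed datasets, and the metric identifiers follow scikit-multiflow 0.5 and should be checked against the installed version.

```python
# Hedged sketch of a prequential (interleaved test-then-train) evaluation in
# scikit-multiflow. File name and parameter values are illustrative assumptions.
from skmultiflow.data import FileStream
from skmultiflow.trees import ExtremelyFastDecisionTreeClassifier
from skmultiflow.meta import LeveragingBaggingClassifier
from skmultiflow.evaluation import EvaluatePrequential

stream = FileStream('edge_dataset.csv')       # last column holds the success/failure label

lb_efdt = LeveragingBaggingClassifier(
    base_estimator=ExtremelyFastDecisionTreeClassifier(),
    n_estimators=10)                          # LB ensemble with EFDT base classifiers

evaluator = EvaluatePrequential(
    max_samples=25000,                        # stream length used in this study
    pretrain_size=200,
    metrics=['accuracy', 'precision', 'recall', 'f1', 'kappa', 'model_size'],
    show_plot=False)

evaluator.evaluate(stream=stream, model=[lb_efdt], model_names=['LB+EFDT'])
```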

Typically, the online classification of data streams does not have a single optimal solution (i.e., algorithm) that optimizes all of the involved performance metrics, but rather a plethora of possible solutions with noticeable trade-offs between different performance metrics. For such scenarios, the notion of optimality is captured through Pareto optimality, which considers an algorithm superior (or inferior) to another algorithm only if it is superior (or inferior) with respect to all metrics. The trade-offs in the algorithm selection process are captured by pairs of algorithms in which each is superior with respect to some metrics but inferior in others; such algorithms are mutually non-dominated and form the Pareto front of the algorithm performance optimization problem [53]. Hence, this work identifies, for each dataset, the set of non-dominated algorithms (SNDA) based on the obtained performance metrics. Having obtained the SNDA, a CI-based MCDA procedure is followed to assign scores and ranks to the various algorithms. The main steps followed by the CI-based procedure are shown in Figure 3. First, the set of non-dominated algorithms is used to construct a performance matrix (SNDA matrix).

The rows of the matrix represent the alternatives (i.e., algorithms), whereas the columns indicate the performance of these algorithms under each performance metric or indicator. Second, the performance metrics are assigned weights that can be considered trade-off coefficients, meaning that they represent the decrease in one indicator (x) that can be compensated for by another indicator (y). Third, the performance matrix is normalized using one of eight possible normalization techniques: the rank, percentile rank, standardized, min-max, logistic, categorical (−1,0,1), categorical (0.1,0.2,0.4,0.6,0.8,1), and target methods [29]. The normalization step helps put all performance indicators on the same scale and assign a score to each alternative with respect to its counterparts. Fourth, the scores assigned after the normalization process are combined using a particular aggregation function to generate an index for each alternative (i.e., algorithm); these aggregation functions include the additive, geometric, harmonic, minimum, and median functions [29]. Fifth, the indices assigned to the different algorithms are used to assign ranks to these algorithms. The ranks are in the range of 1 (best rank) to n (worst rank), where n is the number of elements in the SNDA. The outcome of the CI-based procedure thus scores and ranks the SNDA and identifies which algorithms are best suited to the classification problem in question. In this work, the steps outlined in the flowchart are performed for each possible combination of normalization and aggregation methods. Hence, an algorithm may be assigned different ranks under different combinations. To combine the results obtained under different combinations of normalization and aggregation, a rank frequency metric (rfm) is defined as shown in Eq. 6.

$rfm_{A\text{-}i} = \frac{n_{rank_i}}{n_{combinations}} \times 100\%$,

where $rfm_{A\text{-}i}$ is the rank frequency metric of algorithm A for rank i, which gives the proportion of indices that place algorithm A in the ith position, $n_{rank_i}$ is the number of times algorithm A is assigned the ith rank, and $n_{combinations}$ is the number of possible combinations of normalization and aggregation methods under which algorithm A is ranked.
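The scoring-and-ranking loop behind Eq. 6 can be sketched as follows; the particular normalization and aggregation functions shown are a small illustrative subset of the eight normalization and five aggregation methods listed above, and the weight vector is an assumption of the example, not the exact CI tool used in this study.

```python
# Illustrative sketch of the CI-based ranking loop: every (normalization, aggregation)
# combination yields one ranking of the SNDA, and the rank frequency metric counts how
# often each algorithm lands in each rank.
import numpy as np
from itertools import product

def minmax(M):
    return (M - M.min(axis=0)) / (M.max(axis=0) - M.min(axis=0) + 1e-12)

def ranknorm(M):
    return np.argsort(np.argsort(M, axis=0), axis=0).astype(float) + 1.0

normalizations = [minmax, ranknorm]
aggregations = [lambda S, w: S @ w,                      # additive
                lambda S, w: np.prod(S ** w, axis=1),    # geometric
                lambda S, w: S.min(axis=1)]              # minimum

def rank_frequency(perf, weights):
    """perf: (algorithms x indicators) matrix, every indicator oriented higher-is-better."""
    n = perf.shape[0]
    counts = np.zeros((n, n))                            # counts[a, i]: algorithm a got rank i+1
    for norm, agg in product(normalizations, aggregations):
        scores = agg(norm(perf), weights)
        for rank, alg in enumerate(np.argsort(-scores)): # best score -> rank 1
            counts[alg, rank] += 1
    return 100.0 * counts / (len(normalizations) * len(aggregations))   # rfm in percent
```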

4.1 Edge Dataset Results

Table 5 summarizes the performance of the considered online data stream classification algorithms on a stream generated based on the Edge dataset, in terms of accuracy, precision, recall, F-score, kappa and the model size. The generated stream consisted of 25,000 samples with 17,333 and 7,667 success and failure instances, respectively. All algorithms were tested using their default set of parameters defined in the scikit-multiflow evaluation platform [33]. In addition, the ensemble-based algorithms were further tested on different sets of base estimators. For instance, the default base estimator for the LB algorithm is the kNN classifier.

However, it was also tested in other configurations, in which its base estimator is set to either HT (LB+HT), HAT (LB+HAT), ARF (LB+ARF), EFDT (LB+EFDT), or VFDRC (LB+VFDRC). The same applies to all other ensemble-based algorithms. As shown, the considered algorithms exhibited different performance values under the metrics used, and no single algorithm surpasses all other algorithms in terms of all the performance metrics. In this regard, the LB+ARF ensemble algorithm achieved the highest accuracy, precision, F-score, and kappa values, while the NB algorithm outperformed the other algorithms in terms of the recall and model size metrics. Figure 4 shows a box plot of the obtained performance values for each metric. It visualizes the degree of variation between the algorithms for each performance metric.

In general, the algorithms vary widely in terms of their kappa statistics. In addition, the considered algorithms achieved better precision, recall, and F-score values compared with their accuracy values. Evidently, the class imbalance observed in the Edge dataset (i.e., stream) has a direct effect on the performance of the classification algorithms; only those algorithms capable of handling class imbalance and concept drift achieved relatively high performance compared with the other algorithms. Specifically, the ARF algorithm and the ensemble algorithms whose base estimator is set to either HT, HAT, ARF, EFDT, or VFDRC have relatively better performance metrics compared with all other algorithms.

In Figure 4, the model size score (MSS) is computed according to Eq. 7.

$MSS_i = 1 - \frac{MS_i - MS_{min}}{MS_{max} - MS_{min}}$,

where $MSS_i$ is the score assigned to algorithm i, $MS_i$ is the model size under the ith algorithm, and $MS_{min}$ and $MS_{max}$ are the minimum and maximum model sizes, respectively. The formula given in Eq. 7 normalizes the model size and converts it into an ascending attribute that favors algorithms with smaller model sizes. It also puts the model size on the same scale as the other performance metrics. As shown in Figure 4, the vast majority of classification algorithms achieved high model size scores (i.e., low model sizes compared with the other algorithms). Nevertheless, a few algorithms have significantly larger model sizes (i.e., low model size scores).
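As a small worked example of Eq. 7, the model sizes below are taken from Table 5 (NB, LB+EFDT, and SAM-kNN, in kB); the resulting scores approximately reproduce the corresponding entries of Table 6.

```python
# Worked example of the model size score (MSS) in Eq. 7 using three entries of Table 5.
model_sizes = {'NB': 12, 'LB+EFDT': 2063, 'SAM-kNN': 981137}
ms_min, ms_max = min(model_sizes.values()), max(model_sizes.values())
mss = {name: 1 - (ms - ms_min) / (ms_max - ms_min) for name, ms in model_sizes.items()}
print(mss)   # NB ~ 1.0, LB+EFDT ~ 0.998, SAM-kNN = 0.0
```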

Because there is no single algorithm that is superior in all performance metrics, but rather a set of algorithms in which some of them are superior in some metrics and inferior in other metrics, it is important to identify the SNDA. Table 6 shows the SNDA for the Edge dataset classification, along with their performance metrics.

As shown in Table 6, this set consists of 25 algorithms with various trade-offs between the performance metrics considered. To determine the most viable options (i.e., algorithms) for the edge data stream, the information shown in Table 6 was fed to the CI-based MCDA process to obtain the rank frequency metric (rfm) for each algorithm in the SNDA of the edge data stream. Figure 5 shows the rfm values associated with various algorithms from the SNDA of the edge data stream. The results are shown for those algorithms that have been ranked at least once in the first three ranks (i.e., algorithms with non-zero values for their rfmA-1, rfmA-2, or rfmA-3). As depicted in Figure 5, the LB+ARF, LB+EFDT, and LB+HAT algorithms have higher values of rfmA-1, rfmA-2, and rfmA-3 than do their counterparts. In other words, they always compete for the first three ranks among the various combinations of normalization and aggregation methods. Evidently, the LB+ARF algorithm occupied the first rank for the majority of combinations and can be considered the most promising choice for the edge data stream classification problem. Conversely, the LB+HT, LB+VFDRC, DWMC+EFDT, OB-ADWIN+HAT, and OB+HAT algorithms occupied ranks that were equal to or worse than the fourth rank for a large portion of the normalization and aggregation combinations.

4.2 Cloud via RSU Dataset Results

This section presents the performance metrics of the various data stream classification algorithms on a stream generated from the CloudRSU dataset. The generated stream consists of 25,000 samples with 17,961 and 7,039 success and failure instances, respectively. Table 7 shows the accuracy, precision, recall, F-score, kappa, and model size metrics for the different algorithms. The algorithms obtained different performance values under the considered metrics. As with the edge data stream, there is no single algorithm that surpasses all other algorithms in terms of all performance metrics. In this regard, the LB+ARF algorithm achieved the highest accuracy, F-score, and kappa values, the LB+EFDT algorithm achieved the highest precision value, and the UO-B and NB algorithms achieved the highest recall and the smallest model size, respectively. Figure 6 shows a box plot of the obtained performance metrics for the various algorithms. The vast majority of algorithms have comparable performance metrics, with the kappa statistic having generally lower values compared with the other metrics.

Although the majority of the considered algorithms, especially the ensemble-based ones, have comparable performance metrics, no single algorithm surpasses all other algorithms with respect to the considered metrics. Hence, it is necessary to analyze the dominance relationships between these algorithms and identify the SNDA. Table 8 lists the SNDA for the stream generated from the CloudRSU dataset. This set contains 22 algorithms. It was then used to obtain the rank frequency metric values, as shown in Figure 7. The results are for those algorithms that have occupied one of the first three ranks at least once under all possible combinations of normalization and aggregation methods.

The results shown in Figure 7 illustrate that the LB+EFDT, LB+HAT, and ARF algorithms dominate other algorithms in terms of the number of times they occupy one of the top three ranks. Evidently, the LB+EFDT algorithm ranks robustly within the first three ranks, with a high share of combinations assigning it to the first rank. Hence, it can be considered a suitable data stream classification algorithm for streams generated from the CloudRSU dataset.

4.3 Cloud via CN Dataset Results

This section presents the results obtained for the various algorithms on the data stream generated from the CloudCN dataset. Similar to the other two cases, the generated stream consisted of 25,000 instances. The numbers of success and failure instances in this stream were 16,188 and 8,812, respectively. Table 9 summarizes the performance metrics obtained using the studied algorithms. As shown, the algorithms vary in terms of their obtained performance values, with no single algorithm dominating the others in all metrics. The LB+ARF and LB+EFDT algorithms achieved the highest accuracy, precision, F-score, and kappa values, while the NB algorithm attained the smallest model size.

Figure 8 shows the variability observed among the algorithms with respect to all performance metrics. As depicted, a large portion of the considered algorithms have very similar performance values. In addition, the values obtained under the kappa statistic are lower than those of the other metrics. The results of the algorithm dominance are shown in Table 10. As illustrated, the SNDA for the considered stream consists of 23 algorithms with various trade-offs between performance metrics.

Figure 9 shows the ranking results obtained by feeding the information given in Table 10 to the CI-based MCDA procedure. It considers only algorithms with non-zero values of rfmA−1, rfmA−2, or rfmA−3. The results depicted in Figure 9 show that the LB+VFDRC, LB+HT, and LB+EFDT surpass all other algorithms with respect to their rank frequency metric. While the LB+EFDT ensemble-based algorithm is assigned to the first rank for a larger number of combinations as compared to the LB+VFDRC, the latter is never assigned to a rank worse than third under all combinations of normalization and aggregation. In addition, the LB+HT and LB+EFDT algorithms occupied ranking positions worse than or equal to the fourth rank for a noticeable portion of scoring and ranking combinations. Moreover, algorithms other than the best three algorithms were allocated the fourth or worse ranks for a relatively large number of combinations, hindering their potential applicability to the considered online data stream classification task.

In summary, the results shown in Tables 6, 8, and 10 illustrate that, for each classification task, there is a set of non-dominated algorithms with comparable performance metrics that can be utilized in real-life applications. The ranking results in Figures 5, 7, and 9 provide insight into the algorithms that are superior to the other members of the corresponding non-dominated set, considering the trade-offs observed under all performance metrics. Although the results presented in Sections 4.1, 4.2, and 4.3 highlight the most suitable algorithms for each classification task, it is worth noting that LB+EFDT (i.e., the leveraging bagging algorithm whose base estimator is set to the extremely fast decision tree) demonstrates consistent competence in occupying one of the top-ranking positions for each of the considered datasets because of its competitive performance metrics, in addition to its reasonably acceptable model size. This algorithm maintains an ensemble of n EFDT base classifiers, where n is set to 10 by default in the utilized evaluation platform [33]. To classify an incoming instance, every individual classifier produces a prediction (i.e., a vote), and the final classification result is obtained by aggregating the individual predictions. Following Condorcet’s jury theorem [54-56], there is theoretical proof that the error rate of an ensemble approaches 0, in the limit, provided that two conditions are satisfied. First, the individual base classifiers must perform better than random guessing. Second, the individual classification models must exhibit diversity, that is, they should not produce correlated errors. In the case of the LB+EFDT algorithm, the EFDT algorithm is used as the base classification model. The EFDT is a refinement of the HT algorithm. Whereas the HT algorithm delays the selection of a split at a node until it is confident that it has determined the best split, the EFDT algorithm selects and deploys a split as soon as it is confident that the split is useful. In addition, the HT algorithm never revisits its splitting decisions, while the EFDT algorithm revisits its splitting decisions, replacing a split if it later becomes evident that a better split is available. Hence, the EFDT algorithm can learn rapidly from incoming instances with an inbuilt tolerance against concept drift, utilizing the most advantageous splitting decision recognized to date [41]. Consequently, the EFDT algorithm is considered a viable base classifier that meets the first condition implied by Condorcet’s jury theorem.

The LB algorithm employs online bagging to train its base models (i.e., classifiers). In this regard, as each incoming classification instance is observed, online re-sampling is performed by presenting that instance to each model k ~ Poisson (λ) times and updating each model accordingly. The value of k is regarded as the weight of the incoming instance. To increase online re-sampling, the LB algorithm uses a higher value of the λ parameter of the Poisson distribution, which is set to six by default. With this value of λ, the LB algorithm causes more randomness in the weights of the incoming instances. Hence, it achieves a higher input space diversity by assigning a different range of weights to each incoming instance. In addition, the LB algorithm leverages the bagging performance by adding randomization at the output of the ensemble using output codes. As shown in Section 2.5, the LB algorithm assigns to each possible class label a binary string of length n, where n is the number of base classifiers in the ensemble. Each base classifier learns one bit in a binary string. Unlike standard ensemble methods, the LB algorithm utilizes random output codes instead of deterministic ones. In other words, while the base classifiers in the standard methods predict the same function, using output codes allows each classifier in the LB ensemble to predict a different function [44]. This would minimize the influence of correlations between the base classifiers and, in turn, increase the diversity of the ensemble [57,58]. Hence, the introduction of randomization to the input and output of the ensemble classifiers yields, to some extent, a diversified ensemble satisfying the second condition imposed by Condorcet’s jury theorem. Furthermore, the LB algorithm uses the ADWIN algorithm to deal with concept drift, with one ADWIN instance for each classifier in the ensemble. Each time a concept drift is detected, the worst classifier is reset. Hence, the LB algorithm always monitors the quality of its online learning process and adheres to the current class distribution of incoming classification instances. In general, the inherent diversity among its base classifiers compensates for the classification errors induced by any individual classifier. This can be clearly observed in Table 5, where a single EFDT classifier achieved an accuracy of 0.7936, and the accuracy of the LB+EFDT ensemble reached 0.9326. Altogether, the LB+EFDT algorithm demonstrated the ability to maintain a noticeable performance under all considered datasets and performance metrics. Hence, it could be considered a feasible choice for online data stream classification in real-life VEC environments.
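For reference, a hedged configuration sketch of the LB+EFDT ensemble discussed above is given below; the parameter names follow scikit-multiflow 0.5 (w is the Poisson λ used for online re-sampling and enable_code_matrix switches on the random output codes) and should be verified against the installed version.

```python
# Configuration sketch (an assumption, not the paper's exact setup) of the LB+EFDT
# ensemble in scikit-multiflow.
from skmultiflow.trees import ExtremelyFastDecisionTreeClassifier
from skmultiflow.meta import LeveragingBaggingClassifier

lb_efdt = LeveragingBaggingClassifier(
    base_estimator=ExtremelyFastDecisionTreeClassifier(),
    n_estimators=10,           # ensemble of 10 EFDT base classifiers
    w=6,                       # lambda of the Poisson re-sampling distribution
    enable_code_matrix=True)   # random output codes at the ensemble output
```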

VEC has recently become an integral part of intelligent transportation systems. In these systems, workload orchestration plays a crucial role in selecting the most appropriate computational node to execute tasks generated from vehicular applications. The continuous streams of tasks generated from vehicular applications pose significant challenges in data management, execution, and storage. Online ML algorithms can address continuous data streams and manage large data sizes. Hence, this paper has addressed the potential application of online data stream classification algorithms to support workload orchestration in VEC environments. These algorithms can be used to predict the ability of the available computational nodes to successfully handle a particular task. In this work, various online data stream classification algorithms were evaluated based on synthetic datasets generated from simulated VEC environments. In addition, a MCDA technique was employed to rank various algorithms based on the obtained performance metrics. The evaluation results show that the considered algorithms can handle online classification operations with noticeable trade-offs and dominance relations with respect to their obtained performance. In addition, the employed multi-criteria decision analysis technique demonstrated the ability to rank different algorithms and identify the most appropriate algorithms to handle online classification in VEC environments.

Future work will consider implementing a workload orchestration algorithm based on the findings of this study to show how online data stream classification can enhance orchestration performance in dynamically changing VEC environments.

Fig. 1.

System model.


Fig. 2.

Evaluation flowchart.


Fig. 3.

CI MCDA flowchart.


Fig. 4.

Box plot of performance metrics on the Edge dataset.


Fig. 5.

Algorithm ranking for the Edge dataset.


Fig. 6.

Box plot of performance metrics on the CloudRSU dataset.


Fig. 7.

Algorithm ranking for the CloudRSU dataset.


Fig. 8.

Box plot of performance metrics on the CloudCN dataset.


Fig. 9.

Algorithm ranking for the CloudCN dataset.


Table. 1.

In-vehicle application characteristics.

Characteristic | App-1 | App-2 | App-3
Usage percent (%) | 30 | 35 | 35
Task input file size (kB) | 20 | 40 | 20
Task output file size (kB) | 20 | 20 | 80
Task generation rate (per second) | 0.3 | 0.2 | 0.07
Number of instructions per task (× 10^9) | 3 | 10 | 20
VM utilization on RSU (%) | 6 | 20 | 40
VM utilization on Cloud (%) | 1.6 | 4 | 8

Table. 2.

Hypothetical orchestration probabilities.

Application | VEC server | Cloud via RSU | Cloud via CN
App-1 | 0.60 | 0.23 | 0.17
App-2 | 0.30 | 0.53 | 0.17
App-3 | 0.23 | 0.60 | 0.17

Table. 3.

VEC environment state variables per offloading decision.

Variable | Decision
Number of instructions per task | All
Number of recently uploaded tasks | All
Input file size (kB) | All
Output file size (kB) | All
Edge server utilization (%) | VEC server
WLAN upload delay (ms) | VEC server
WLAN download delay (ms) | VEC server
WAN upload delay (ms) | Cloud via RSU
WAN download delay (ms) | Cloud via RSU
CN upload delay (ms) | Cloud via CN
CN download delay (ms) | Cloud via CN

Table. 4.

Sample dataset entries.

Variable | Edge (Success) | Edge (Failure) | CloudRSU (Success) | CloudRSU (Failure) | CloudCN (Success) | CloudCN (Failure)
Number of instructions per task | 3,057 | 3,178 | 9,531 | 19,242 | 3,196 | 9,598
Number of previously offloaded tasks | 21 | 29 | - | - | - | -
Number of offloaded tasks | - | - | 11 | 84 | 19 | 2
Input file size | 19 | 21 | 21 | 88 | 19 | 21
Output file size | 18 | 21 | 42 | 22 | 20 | 36
Edge server utilization | 25 | 20.9 | - | - | - | -
WLAN upload delay | 23.02 | 84.45 | - | - | - | -
WLAN download delay | 36.20 | 28.44 | - | - | - | -
WAN upload delay | - | - | 6.18 | 16.30 | - | -
WAN download delay | - | - | 12.07 | 55.65 | - | -
CN upload delay | - | - | - | - | 21.6 | 58
CN download delay | - | - | - | - | 68.6 | 20.2

Table. 5.

Performance comparison on the Edge dataset.

Algorithm | Accuracy | Precision | Recall | F-score | Kappa | Model size (kB)
NB | 0.6748 | 0.6763 | 0.9857 | 0.8022 | 0.0419 | 12
kNN | 0.6875 | 0.7486 | 0.8022 | 0.7745 | 0.2675 | 502
kNN-ADWIN | 0.6875 | 0.7487 | 0.8021 | 0.7745 | 0.2675 | 513
SAM-kNN | 0.7468 | 0.7788 | 0.8681 | 0.8210 | 0.3927 | 981137
VFDRC | 0.7199 | 0.7697 | 0.8296 | 0.7985 | 0.3413 | 81
HT | 0.7393 | 0.7655 | 0.8798 | 0.8187 | 0.3627 | 145
HAT | 0.7964 | 0.8252 | 0.8826 | 0.8529 | 0.5233 | 102
EFDT | 0.7936 | 0.8371 | 0.8511 | 0.8440 | 0.5385 | 361
ARF | 0.8991 | 0.9169 | 0.9333 | 0.9250 | 0.7707 | 1502
DWMC | 0.7794 | 0.8149 | 0.8671 | 0.8402 | 0.4849 | 60
LB | 0.7241 | 0.8047 | 0.7620 | 0.7828 | 0.4054 | 4857
SMOTE-B | 0.6727 | 0.8035 | 0.6604 | 0.7250 | 0.3302 | 60337
UO-B | 0.7109 | 0.7123 | 0.9374 | 0.8095 | 0.2542 | 3453
OB | 0.6872 | 0.7449 | 0.7946 | 0.7689 | 0.2864 | 4446
OB-ADWIN | 0.6961 | 0.7542 | 0.7954 | 0.7743 | 0.3106 | 4440
LB+HT | 0.9173 | 0.9302 | 0.9519 | 0.9409 | 0.8031 | 1069
LB+HAT | 0.9304 | 0.9415 | 0.9590 | 0.9502 | 0.8349 | 1037
LB+ARF | 0.9380 | 0.9552 | 0.9552 | 0.9552 | 0.8595 | 24580
LB+EFDT | 0.9326 | 0.9469 | 0.9562 | 0.9515 | 0.8409 | 2063
LB+VFDRC | 0.9161 | 0.9305 | 0.9494 | 0.9399 | 0.8005 | 756
DWMC+HT | 0.8412 | 0.8710 | 0.9046 | 0.8875 | 0.6183 | 163
DWMC+HAT | 0.8530 | 0.8840 | 0.9066 | 0.8952 | 0.6495 | 288
DWMC+ARF | 0.9201 | 0.9393 | 0.9457 | 0.9425 | 0.8117 | 7581
DWMC+EFDT | 0.8700 | 0.8903 | 0.9264 | 0.9080 | 0.6871 | 444
DWMC+VFDRC | 0.8309 | 0.8789 | 0.8765 | 0.8777 | 0.6040 | 114
SMOTE-B+HT | 0.7981 | 0.8791 | 0.8212 | 0.8492 | 0.5449 | 56153
SMOTE-B+HAT | 0.8249 | 0.8951 | 0.8462 | 0.8700 | 0.6027 | 56311
SMOTE-B+ARF | 0.8816 | 0.9247 | 0.9024 | 0.9134 | 0.7262 | 80722
SMOTE-B+EFDT | 0.8565 | 0.9081 | 0.8820 | 0.8949 | 0.6693 | 56513
SMOTE-B+VFDRC | 0.7893 | 0.8609 | 0.8295 | 0.8449 | 0.5166 | 56147
UO-B+HT | 0.7872 | 0.7727 | 0.9787 | 0.8636 | 0.4018 | 371
UO-B+HAT | 0.8231 | 0.8876 | 0.8524 | 0.8696 | 0.5949 | 56278
UO-B+ARF | 0.8800 | 0.9234 | 0.9014 | 0.9123 | 0.7227 | 81812
UO-B+EFDT | 0.8118 | 0.7959 | 0.9792 | 0.8781 | 0.4844 | 926
UO-B+VFDRC | 0.7973 | 0.7909 | 0.9612 | 0.8678 | 0.4505 | 418
OB+HT | 0.7874 | 0.8116 | 0.9022 | 0.8545 | 0.4638 | 1661
OB+HAT | 0.8448 | 0.8579 | 0.9297 | 0.8924 | 0.6158 | 746
OB+ARF | 0.8406 | 0.8907 | 0.8774 | 0.8840 | 0.6295 | 11315
OB+EFDT | 0.8151 | 0.8427 | 0.9011 | 0.8709 | 0.5467 | 3267
OB+VFDRC | 0.7790 | 0.7964 | 0.9145 | 0.8514 | 0.4286 | 872
OB-ADWIN+HT | 0.8300 | 0.8510 | 0.9144 | 0.8816 | 0.5816 | 693
OB-ADWIN+HAT | 0.8542 | 0.8676 | 0.9315 | 0.8984 | 0.6415 | 567
OB-ADWIN+ARF | 0.9020 | 0.9198 | 0.9404 | 0.9300 | 0.7667 | 15946
OB-ADWIN+EFDT | 0.8604 | 0.8793 | 0.9254 | 0.9018 | 0.6615 | 816
OB-ADWIN+VFDRC | 0.8221 | 0.8417 | 0.9151 | 0.8769 | 0.5585 | 556

Table. 6.

Non-dominated set of algorithms for the Edge dataset.

Algorithm | Accuracy | Kappa | Precision | Recall | F-score | Model size score
NB | 0.6748 | 0.0419 | 0.6763 | 0.9857 | 0.8022 | 1
kNN | 0.6875 | 0.2675 | 0.7486 | 0.8022 | 0.7745 | 0.9995
kNN-ADWIN | 0.6875 | 0.2675 | 0.7487 | 0.8021 | 0.7745 | 0.9995
SAM-kNN | 0.7468 | 0.3927 | 0.7788 | 0.8681 | 0.8210 | 0
VFDRC | 0.7199 | 0.3413 | 0.7697 | 0.8296 | 0.7985 | 0.9999
HT | 0.7393 | 0.3627 | 0.7655 | 0.8798 | 0.8187 | 0.9999
HAT | 0.7964 | 0.5233 | 0.8252 | 0.8826 | 0.8529 | 0.9999
EFDT | 0.7936 | 0.5385 | 0.8371 | 0.8511 | 0.8440 | 0.9996
ARF | 0.8991 | 0.7707 | 0.9169 | 0.9333 | 0.9250 | 0.9985
DWMC | 0.7794 | 0.4849 | 0.8149 | 0.8671 | 0.8402 | 1
UO-B | 0.7109 | 0.2542 | 0.7123 | 0.9374 | 0.8095 | 0.9965
LB+HT | 0.9173 | 0.8031 | 0.9302 | 0.9519 | 0.9409 | 0.9989
LB+HAT | 0.9304 | 0.8349 | 0.9415 | 0.9590 | 0.9502 | 0.9990
LB+ARF | 0.9380 | 0.8595 | 0.9552 | 0.9552 | 0.9552 | 0.9750
LB+EFDT | 0.9326 | 0.8409 | 0.9469 | 0.9562 | 0.9515 | 0.9979
LB+VFDRC | 0.9161 | 0.8005 | 0.9305 | 0.9494 | 0.9399 | 0.9992
DWMC+HT | 0.8412 | 0.6183 | 0.8710 | 0.9046 | 0.8875 | 0.9998
DWMC+HAT | 0.8530 | 0.6495 | 0.8840 | 0.9066 | 0.8952 | 0.9997
DWMC+EFDT | 0.8700 | 0.6871 | 0.8903 | 0.9264 | 0.9080 | 0.9996
DWMC+VFDRC | 0.8309 | 0.6040 | 0.8789 | 0.8765 | 0.8777 | 0.9999
UO-B+HT | 0.7872 | 0.4018 | 0.7727 | 0.9787 | 0.8636 | 0.9996
UO-B+EFDT | 0.8118 | 0.4844 | 0.7959 | 0.9792 | 0.8781 | 0.9991
UO-B+VFDRC | 0.7973 | 0.4505 | 0.7909 | 0.9612 | 0.8678 | 0.9996
OB+HAT | 0.8448 | 0.6158 | 0.8579 | 0.9297 | 0.8924 | 0.9993
OB-ADWIN+HAT | 0.8542 | 0.6415 | 0.8676 | 0.9315 | 0.8984 | 0.9994

Table. 7.

Performance comparison on the CloudRSU dataset.

Algorithm | Accuracy | Precision | Recall | F-score | Kappa | Model size (kB)
NB | 0.9095 | 0.9334 | 0.9409 | 0.9371 | 0.7754 | 11
kNN | 0.6980 | 0.7522 | 0.8634 | 0.8040 | 0.1599 | 463
kNN-ADWIN | 0.6952 | 0.7530 | 0.8557 | 0.8011 | 0.1604 | 474
SAM-kNN | 0.7525 | 0.7781 | 0.9162 | 0.8415 | 0.2934 | 980406
VFDRC | 0.9087 | 0.9475 | 0.9239 | 0.9356 | 0.7791 | 54
HT | 0.9267 | 0.9456 | 0.9525 | 0.9490 | 0.8182 | 152
HAT | 0.9238 | 0.9347 | 0.9609 | 0.9476 | 0.8080 | 72
EFDT | 0.9298 | 0.9575 | 0.9441 | 0.9508 | 0.8288 | 143
ARF | 0.9522 | 0.9631 | 0.9705 | 0.9668 | 0.8814 | 1572
DWMC | 0.9253 | 0.9418 | 0.9548 | 0.9483 | 0.8137 | 52
LB | 0.7205 | 0.8062 | 0.8035 | 0.8048 | 0.3127 | 4467
SMOTE-B | 0.6543 | 0.7811 | 0.7189 | 0.7487 | 0.1961 | 55935
UO-B | 0.7264 | 0.7325 | 0.9745 | 0.8363 | 0.0958 | 3129
OB | 0.6952 | 0.7505 | 0.8616 | 0.8022 | 0.1519 | 4456
OB-ADWIN | 0.7145 | 0.7658 | 0.8671 | 0.8133 | 0.2164 | 3844
LB+HT | 0.9467 | 0.9611 | 0.9660 | 0.9635 | 0.8702 | 1179
LB+HAT | 0.9540 | 0.9690 | 0.9669 | 0.9679 | 0.8868 | 430
LB+ARF | 0.9614 | 0.9726 | 0.9737 | 0.9731 | 0.9049 | 18477
LB+EFDT | 0.9599 | 0.9743 | 0.9697 | 0.9720 | 0.9015 | 1467
LB+VFDRC | 0.9488 | 0.9650 | 0.9636 | 0.9643 | 0.8739 | 448
DWMC+HT | 0.9349 | 0.9498 | 0.9610 | 0.9554 | 0.8378 | 110
DWMC+HAT | 0.9380 | 0.9498 | 0.9646 | 0.9571 | 0.8453 | 234
DWMC+ARF | 0.9561 | 0.9674 | 0.9716 | 0.9695 | 0.8915 | 5115
DWMC+EFDT | 0.9416 | 0.9552 | 0.9638 | 0.9595 | 0.8549 | 167
DWMC+VFDRC | 0.9186 | 0.9548 | 0.9305 | 0.9425 | 0.8031 | 135
SMOTE-B+HT | 0.9282 | 0.9458 | 0.9546 | 0.9502 | 0.8218 | 52406
SMOTE-B+HAT | 0.9251 | 0.9433 | 0.9528 | 0.9480 | 0.8139 | 52585
SMOTE-B+ARF | 0.9517 | 0.9665 | 0.9662 | 0.9663 | 0.8810 | 69819
SMOTE-B+EFDT | 0.9392 | 0.9539 | 0.9618 | 0.9578 | 0.8492 | 52699
SMOTE-B+VFDRC | 0.9151 | 0.9510 | 0.9295 | 0.9401 | 0.7941 | 52373
UO-B+HT | 0.9269 | 0.9325 | 0.9681 | 0.9500 | 0.8142 | 388
UO-B+HAT | 0.9288 | 0.9349 | 0.9680 | 0.9512 | 0.8194 | 554
UO-B+ARF | 0.9446 | 0.9498 | 0.9742 | 0.9618 | 0.8607 | 15765
UO-B+EFDT | 0.9320 | 0.9392 | 0.9679 | 0.9533 | 0.8282 | 517
UO-B+VFDRC | 0.9257 | 0.9378 | 0.9600 | 0.9488 | 0.8133 | 322
OB+HT | 0.9262 | 0.9348 | 0.9601 | 0.9473 | 0.8184 | 1322
OB+HAT | 0.9252 | 0.9362 | 0.9612 | 0.9485 | 0.8118 | 647
OB+ARF | 0.9613 | 0.9613 | 0.9700 | 0.9656 | 0.8770 | 17768
OB+EFDT | 0.9416 | 0.9568 | 0.9619 | 0.9593 | 0.8553 | 1514
OB+VFDRC | 0.9182 | 0.9478 | 0.9376 | 0.9427 | 0.7998 | 827
OB-ADWIN+HT | 0.9247 | 0.9366 | 0.9600 | 0.9482 | 0.8107 | 399
OB-ADWIN+HAT | 0.9246 | 0.9355 | 0.9611 | 0.9481 | 0.8100 | 648
OB-ADWIN+ARF | 0.9516 | 0.9633 | 0.9695 | 0.9664 | 0.8802 | 19609
OB-ADWIN+EFDT | 0.9374 | 0.9515 | 0.9618 | 0.9566 | 0.8444 | 645
OB-ADWIN+VFDRC | 0.9241 | 0.9471 | 0.9471 | 0.9471 | 0.8128 | 312

Table 8. Non-dominated set of algorithms for the CloudRSU dataset.

Algorithm  Accuracy  Kappa  Precision  Recall  F-score  Model size score
NB  0.9095  0.7754  0.9334  0.9409  0.9371  1
VFDRC  0.9087  0.7791  0.9475  0.9239  0.9356  0.999956283
HT  0.9267  0.8182  0.9456  0.9525  0.9490  0.999855844
HAT  0.9238  0.808  0.9347  0.9609  0.9476  0.999937923
EFDT  0.9298  0.8288  0.9575  0.9441  0.9508  0.999865371
ARF  0.9522  0.8814  0.9631  0.9705  0.9668  0.998408233
DWMC  0.9253  0.8137  0.9418  0.9548  0.9483  0.999957844
UO-B  0.7264  0.0958  0.7325  0.9745  0.8363  0.996819955
LB+HT  0.9467  0.8702  0.9611  0.966  0.9635  0.998808409
LB+HAT  0.954  0.8868  0.969  0.9669  0.9679  0.999572642
LB+ARF  0.9614  0.9049  0.9726  0.9737  0.9731  0.981164191
LB+EFDT  0.9599  0.9015  0.9743  0.9697  0.9720  0.998514415
DWMC+HT  0.9349  0.8378  0.9498  0.961  0.9554  0.999899143
DWMC+HAT  0.938  0.8453  0.9498  0.9646  0.9571  0.999772316
DWMC+ARF  0.9561  0.8915  0.9674  0.9716  0.9695  0.994794179
DWMC+EFDT  0.9416  0.8549  0.9552  0.9638  0.9595  0.999840483
DWMC+VFDRC  0.9186  0.8031  0.9548  0.9305  0.9425  0.999873969
UO-B+HT  0.9269  0.8142  0.9325  0.9681  0.9500  0.999615492
UO-B+HAT  0.9288  0.8194  0.9349  0.968  0.9512  0.999446182
UO-B+ARF  0.9446  0.8607  0.9498  0.9742  0.9618  0.983930994
UO-B+EFDT  0.932  0.8282  0.9392  0.9679  0.9533  0.999483728
OB+ARF  0.9613  0.877  0.9613  0.97  0.9656  0.981887726

Table 9. Performance comparison on the CloudCN dataset.

Algorithm  Accuracy  Precision  Recall  F-score  Kappa  Model size (kB)
NB  0.8969  0.9156  0.9258  0.9207  0.7736  11
kNN  0.7236  0.7415  0.8784  0.8042  0.3456  445
kNN-ADWIN  0.7214  0.7408  0.8749  0.8023  0.3415  452
SAM-kNN  0.7084  0.7349  0.8585  0.7919  0.3149  979425
VFDRC  0.8961  0.9310  0.9063  0.9185  0.7752  102
HT  0.9194  0.9176  0.9616  0.9391  0.8202  115
HAT  0.9224  0.9254  0.9570  0.9409  0.8279  86
EFDT  0.9203  0.9270  0.9515  0.9391  0.8238  156
ARF  0.9420  0.9381  0.9746  0.9560  0.8712  880
DWMC  0.9017  0.9111  0.9396  0.9251  0.7823  52
LB  0.7205  0.8062  0.8035  0.8048  0.3127  4467
SMOTE-B  0.6996  0.7761  0.7519  0.7638  0.3515  50811
UO-B  0.7051  0.6931  0.9755  0.8104  0.2246  4045
OB  0.7252  0.7434  0.8775  0.8049  0.3507  4465
OB-ADWIN  0.7306  0.7486  0.8777  0.8080  0.3656  3921
LB+HT  0.9462  0.9434  0.9753  0.9591  0.8807  711
LB+HAT  0.9455  0.9444  0.9730  0.9585  0.8793  704
LB+ARF  0.9581  0.9586  0.9774  0.9679  0.9077  11654
LB+EFDT  0.9581  0.9609  0.9749  0.9678  0.9078  2081
LB+VFDRC  0.9472  0.9462  0.9737  0.9598  0.8832  500
DWMC+HT  0.9288  0.9270  0.9659  0.9461  0.8416  198
DWMC+HAT  0.9330  0.9323  0.9666  0.9491  0.8513  360
DWMC+ARF  0.9449  0.9388  0.9786  0.9583  0.8775  4565
DWMC+EFDT  0.9323  0.9328  0.9648  0.9485  0.8499  417
DWMC+VFDRC  0.9232  0.9362  0.9455  0.9408  0.8313  161
SMOTE-B+HT  0.9263  0.9270  0.9617  0.9440  0.8363  47269
SMOTE-B+HAT  0.9336  0.9341  0.9653  0.9494  0.8528  47389
SMOTE-B+ARF  0.9402  0.9286  0.9709  0.9493  0.8673  62317
SMOTE-B+EFDT  0.9336  0.9324  0.9674  0.9496  0.8525  48022
SMOTE-B+VFDRC  0.9273  0.9296  0.9603  0.9447  0.8389  47313
UO-B+HT  0.9269  0.9113  0.9824  0.9455  0.8348  281
UO-B+HAT  0.9308  0.9158  0.9833  0.9484  0.8439  656
UO-B+ARF  0.9306  0.9122  0.9877  0.9484  0.8429  16995
UO-B+EFDT  0.9300  0.9157  0.9821  0.9477  0.8422  668
UO-B+VFDRC  0.9301  0.9281  0.9667  0.9470  0.8445  346
OB+HT  0.9237  0.9153  0.9718  0.9427  0.8287  1138
OB+HAT  0.9329  0.9303  0.9688  0.9492  0.8508  753
OB+ARF  0.9375  0.9319  0.9745  0.9527  0.8606  14264
OB+EFDT  0.9330  0.9301  0.9692  0.9492  0.8509  1980
OB+VFDRC  0.9179  0.9305  0.9433  0.9369  0.8194  823
OB-ADWIN+HT  0.9326  0.9267  0.9726  0.9491  0.8495  351
OB-ADWIN+HAT  0.9344  0.9309  0.9705  0.9503  0.8540  703
OB-ADWIN+ARF  0.9383  0.9331  0.9743  0.9533  0.8625  12298
OB-ADWIN+EFDT  0.9345  0.9321  0.9693  0.9503  0.8545  689
OB-ADWIN+VFDRC  0.9291  0.9324  0.9598  0.9459  0.8431  452

Table 10. Non-dominated set of algorithms for the CloudCN dataset.

Algorithm  Accuracy  Kappa  Precision  Recall  F-score  Model size score
NB  0.8969  0.7736  0.9156  0.9258  0.9207  1
VFDRC  0.8961  0.7752  0.931  0.9063  0.9185  0.999906689
HT  0.9194  0.8202  0.9176  0.9616  0.9391  0.999893732
HAT  0.9224  0.8279  0.9254  0.957  0.9409  0.999923127
EFDT  0.9203  0.8238  0.927  0.9515  0.9391  0.999851544
ARF  0.942  0.8712  0.9381  0.9746  0.9560  0.999113041
DWMC  0.9017  0.7823  0.9111  0.9396  0.9251  0.999957801
UO-B  0.7051  0.2246  0.6931  0.9755  0.8104  0.99588126
LB+HT  0.9462  0.8807  0.9434  0.9753  0.9591  0.999284837
LB+HAT  0.9455  0.8793  0.9444  0.973  0.9585  0.999292781
LB+ARF  0.9581  0.9077  0.9586  0.9774  0.9679  0.988112142
LB+EFDT  0.9581  0.9078  0.9609  0.9749  0.9678  0.997886041
LB+VFDRC  0.9472  0.8832  0.9462  0.9737  0.9598  0.999501079
DWMC+HT  0.9288  0.8416  0.927  0.9659  0.9461  0.999809508
DWMC+HAT  0.933  0.8513  0.9323  0.9666  0.9491  0.999644093
DWMC+ARF  0.9449  0.8775  0.9388  0.9786  0.9583  0.995349983
DWMC+EFDT  0.9323  0.8499  0.9328  0.9648  0.9485  0.999585037
DWMC+VFDRC  0.9232  0.8313  0.9362  0.9455  0.9408  0.999847256
UO-B+HT  0.9269  0.8348  0.9113  0.9824  0.9455  0.999723845
UO-B+HAT  0.9308  0.8439  0.9158  0.9833  0.9484  0.999341514
UO-B+ARF  0.9306  0.8429  0.9122  0.9877  0.9484  0.982658624
UO-B+VFDRC  0.9301  0.8445  0.9281  0.9667  0.9470  0.999657887
OB-ADWIN+HT  0.9326  0.8495  0.9267  0.9726  0.9490  0.999652466

  1. Guo, H, Liu, J, Ren, J, and Zhang, Y (2020). Intelligent task offloading in vehicular edge computing networks. IEEE Wireless Communications. 27, 126-132. https://doi.org/10.1109/MWC.001.1900489
  2. Yurtsever, E, Lambert, J, Carballo, A, and Takeda, K (2020). A survey of autonomous driving: common practices and emerging technologies. IEEE Access. 8, 58443-58469. https://doi.org/10.1109/ACCESS.2020.2983149
  3. Neil, J, Cosart, L, and Zampetti, G (2020). Precise timing for vehicle navigation in the smart city: an overview. IEEE Communications Magazine. 58, 54-59. https://doi.org/10.1109/MCOM.001.1900596
  4. Sabella, D, Brevi, D, Bonetto, E, Ranjan, A, Manzalini, A, and Salerno, D. MEC-based infotainment services for smart roads in 5G environments, 1-6. https://doi.org/10.1109/VTC2020-Spring48590.2020.9128807
  5. Atallah, RF, Assi, CM, and Khabbaz, MJ (2019). Scheduling the operation of a connected vehicular network using deep reinforcement learning. IEEE Transactions on Intelligent Transportation Systems. 20, 1669-1682. https://doi.org/10.1109/TITS.2018.2832219
  6. Sun, Y, Guo, X, Zhou, S, Jiang, Z, Liu, X, and Niu, Z . Learning-based task offloading for vehicular cloud computing systems., Proceedings of 2018 IEEE International Conference on Communications (ICC), 2018, Kansas City, MO, Array, pp.1-7. https://doi.org/10.1109/ICC.2018.8422661
  7. Liu, L, Chen, C, Pei, Q, Maharjan, S, and Zhang, Y (2020). Vehicular edge computing and networking: a survey. Mobile Networks and Applications. https://doi.org/10.1007/s11036-020-01624-1
  8. Zhang, K, Mao, Y, Leng, S, Maharjan, S, and Zhang, Y . Optimal delay constrained offloading for vehicular edge computing networks., Proceedings of 2017 IEEE International Conference on Communications (ICC), 2017, Paris, France, Array, pp.1-6. https://doi.org/10.1109/ICC.2017.7997360
  9. Liu, S, Liu, L, Tang, J, Yu, B, Wang, Y, and Shi, W (2019). Edge computing for autonomous driving: opportunities and challenges. Proceedings of the IEEE. 107, 1697-1716. https://doi.org/10.1109/JPROC.2019.2915983
  10. Payalan, YF, and Guvensan, MA (2020). Towards nextgeneration vehicles featuring the vehicle intelligence. IEEE Transactions on Intelligent Transportation Systems. 21, 30-47. https://doi.org/10.1109/TITS.2019.2917866
  11. Sonmez, C, Ozgovde, A, and Ersoy, C (2019). Fuzzy workload orchestration for edge computing. IEEE Transactions on Network and Service Management. 16, 769-782. https://doi.org/10.1109/TNSM.2019.2901346
  12. Wang, Y, Wang, S, Zhang, S, and Cen, H (2019). An edge-assisted data distribution method for vehicular network services. IEEE Access. 7, 147713-147720. https://doi.org/10.1109/ACCESS.2019.2946484
  13. Sonmez, C, Tunca, C, Ozgovde, A, and Ersoy, C (2021). Machine learning-based workload orchestrator for vehicular edge computing. IEEE Transactions on Intelligent Transportation Systems. 22, 2239-2251. https://doi.org/10.1109/TITS.2020.3024233
  14. Rojas, JS, Pekar, A, Rendon, A, and Corrales, JC (2020). Smart user consumption profiling: Incremental learning-based OTT service degradation. IEEE Access. 8, 207426-207442. https://doi.org/10.1109/ACCESS.2020.3037971
  15. Nallaperuma, D, Nawaratne, R, Bandaragoda, T, Adikari, A, Nguyen, S, Kempitiya, T, De Silva, D, Alahakoon, D, and Pothuhera, D (2019). Online incremental machine learning platform for big data-driven smart traffic management. IEEE Transactions on Intelligent Transportation Systems. 20, 4679-4690. https://doi.org/10.1109/TITS.2019.2924883
  16. Adhikari, U, Morris, TH, and Pan, S (2018). Applying Hoeffding adaptive trees for real-time cyber-power event and intrusion classification. IEEE Transactions on Smart Grid. 9, 4049-4060. https://doi.org/10.1109/TSG.2017.2647778
  17. Li, X, Zhou, Y, Jin, Z, Yu, P, and Zhou, S (2020). A classification and novel class detection algorithm for concept drift data stream based on the cohesiveness and separation index of Mahalanobis distance. Journal of Electrical and Computer Engineering. 2020. article no 4027423
  18. Rutkowski, L, Jaworski, M, and Duda, P (2020). Basic concepts of data stream mining. Stream Data Mining: Algorithms and Their Probabilistic Properties. Cham, Switzerland: Springer, pp. 13-33 https://doi.org/10.1007/978-3-030-13962-9_2
  19. Gepperth, A, and Hammer, B . Incremental learning algorithms and applications., Proceedings of the 24th European Symposium on Artificial Neural Networks (ESANN), 2016, Bruges, Belgium.
  20. Yang, Q, Gu, Y, and Wu, D . Survey of incremental learning., Proceedings of 2019 Chinese Control And Decision Conference (CCDC), 2019, Nanchang, China, Array, pp.399-404. https://doi.org/10.1109/CCDC.2019.8832774
  21. Wankhade, KK, Dongre, SS, and Jondhale, KC (2020). Data stream classification: a review. Iran Journal of Computer Science. 3, 239-260. https://doi.org/10.1007/s42044-020-00061-3
  22. El Mrabet, Z, Selvaraj, DF, and Ranganathan, P . Adaptive Hoeffding tree with transfer learning for streaming synchrophasor data sets., Proceedings of 2019 IEEE International Conference on Big Data (Big Data), 2019, Los Angeles, CA, Array, pp.5697-5704. https://doi.org/10.1109/BigData47090.2019.9005720
  23. Nixon, C, Sedky, M, and Hassan, M . Practical application of machine learning based online intrusion detection to internet of things networks., Proceedings of 2019 IEEE Global Conference on Internet of Things (GCIoT), 2019, Dubai, UAE, Array, pp.1-5. https://doi.org/10.1109/GCIoT47977.2019.9058410
  24. Da Costa, VGT, Santana, EJ, Lopes, JF, and Barbon, S (2019). Evaluating the four-way performance trade-off for stream classification. Green, Pervasive, and Cloud Computing. Cham, Switzerland: Springer, pp. 3-17 https://doi.org/10.1007/978-3-030-19223-5_1
  25. Lopes, JF, Santana, EJ, da Costa, VGT, Zarpelao, BB, and Junior, SB (2020). Evaluating the four-way performance trade-off for data stream classification in edge computing. IEEE Transactions on Network and Service Management. 17, 1013-1025. https://doi.org/10.1109/TNSM.2020.2983921
  26. Bifet, A, Gavalda, R, Holmes, G, and Pfahringer, B (2018). Machine Learning for Data Streams: With Practical Examples in MOA. Cambridge, MA: MIT Press
  27. Greco, S, Figueira, J, and Ehrgott, M (2016). Multiple Criteria Decision Analysis. New York, NY: Springer
  28. Cinelli, M, Kadzinski, M, Gonzalez, M, and Slowinski, R (2020). How to support the application of multiple criteria decision analysis? Let us start with a comprehensive taxonomy. Omega. 96. article no 102261
  29. Cinelli, M, Spada, M, Kim, W, Zhang, Y, and Burgherr, P (2021). MCDA Index Tool: an interactive software to develop indices and rankings. Environment Systems and Decisions. 41, 82-109. https://doi.org/10.1007/s10669-020-09784-x
  30. Diaz-Balteiro, L, Gonzalez-Pachon, J, and Romero, C (2017). Measuring systems sustainability with multi-criteria methods: a critical review. European Journal of Operational Research. 258, 607-616. https://doi.org/10.1016/j.ejor.2016.08.075
  31. El Gibari, S, Gomez, T, and Ruiz, F (2019). Building composite indicators using multicriteria methods: a review. Journal of Business Economics. 89, 1-24. https://doi.org/10.1007/s11573-018-0902-z
  32. Greco, S, Ishizaka, A, Tasiou, M, and Torrisi, G (2019). On the methodological framework of composite indices: a review of the issues of weighting, aggregation, and robustness. Social Indicators Research. 141, 61-94. https://doi.org/10.1007/s11205-017-1832-9
  33. Montiel, J, Read, J, Bifet, A, and Abdessalem, T (2018). Scikit-multiflow: a multi-output streaming framework. The Journal of Machine Learning Research. 19, 2915-2914.
  34. Webb, GI, Hyde, R, Cao, H, Nguyen, HL, and Petitjean, F (2016). Characterizing concept drift. Data Mining and Knowledge Discovery. 30, 964-994. https://doi.org/10.1007/s10618-015-0448-4
  35. Demsar, J, and Bosnic, Z (2018). Detecting concept drift in data streams using model explanation. Expert Systems with Applications. 92, 546-559. https://doi.org/10.1016/j.eswa.2017.10.003
  36. Forster, K, Monteleone, S, Calatroni, A, Roggen, D, and Troster, G . Incremental kNN classifier exploiting correct-error teacher for activity recognition., Proceedings of 2010 9th International Conference on Machine Learning and Applications, 2010, Washington, DC, Array, pp.445-450. https://doi.org/10.1109/ICMLA.2010.72
  37. Bifet, A, and Gavalda, R . Learning from time-changing data with adaptive windowing., Proceedings of the 2007 SIAM International Conference on Data Mining, 2007, Minneapolis, MN, Array, pp.443-448. https://doi.org/10.1137/1.9781611972771.42
  38. Losing, V, Hammer, B, and Wersing, H . KNN classifier with self adjusting memory for heterogeneous concept drift., Proceedings of 2016 IEEE 16th International Conference on Data Mining (ICDM), 2016, Barcelona, Spain, Array, pp.291-300. https://doi.org/10.1109/ICDM.2016.0040
  39. Hulten, G, Spencer, L, and Domingos, P . Mining time-changing data streams., Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2001, San Francisco, CA, Array, pp.97-106. https://doi.org/10.1145/502512.502529
  40. Bifet, A, and Gavalda, R (2009). Adaptive learning from evolving data streams. Advances in Intelligent Data Analysis. Heidelberg, Germany: Springer, pp. 249-260 https://doi.org/10.1007/978-3-642-03915-7_22
  41. Manapragada, C, Webb, GI, and Salehi, M . Extremely fast decision tree., Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, London, UK, Array, pp.1953-1962. https://doi.org/10.1145/3219819.3220005
  42. Kosina, P, and Gama, J (2015). Very fast decision rules for classification in data streams. Data Mining and Knowledge Discovery. 29, 168-202. https://doi.org/10.1007/s10618-013-0340-z
  43. Oza, NC . Online bagging and boosting., Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 2001, Waikoloa, HI, Array, pp.2340-2345. https://doi.org/10.1109/ICSMC.2005.1571498
  44. Bifet, A, Holmes, G, and Pfahringer, B (2010). Leveraging bagging for evolving data streams. Machine Learning and Knowledge Discovery in Databases. Heidelberg, Germany: Springer, pp. 135-150 https://doi.org/10.1007/978-3-642-15880-3_15
  45. Wang, B, and Pineau, J (2016). Online bagging and boosting for imbalanced data streams. IEEE Transactions on Knowledge and Data Engineering. 28, 3353-3366. https://doi.org/10.1109/TKDE.2016.2609424
  46. Gomes, HM, Bifet, A, Read, J, Barddal, JP, Enembreck, F, Pfharinger, B, Holmes, G, and Abdessalem, T (2017). Adaptive random forests for evolving data stream classification. Machine Learning. 106, 1469-1495. https://doi.org/10.1007/s10994-017-5642-8
  47. Kolter, JZ, and Maloof, MA (2007). Dynamic weighted majority: an ensemble method for drifting concepts. The Journal of Machine Learning Research. 8, 2755-2790.
  48. Wang, S, and Yao, X . Diversity analysis on imbalanced data sets by using ensemble models., Proceedings of 2009 IEEE Symposium on Computational Intelligence and Data Mining, 2009, Nashville, TN, Array, pp.324-331. https://doi.org/10.1109/CIDM.2009.4938667
  49. Chawla, NV, Bowyer, KW, Hall, LO, and Kegelmeyer, WP (2002). SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research. 16, 321-357. https://doi.org/10.1613/jair.953
  50. Sonmez, C, Ozgovde, A, and Ersoy, C (2018). Edgecloudsim: an environment for performance evaluation of edge computing systems. Transactions on Emerging Telecommunications Technologies. 29. article no. e3493
  51. Vasiloudis, T, Beligianni, F, and De Francisci Morales, G . BoostVHT: boosting distributed streaming decision trees., Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 2017, Singapore, Array, pp.899-908. https://doi.org/10.1145/3132847.3132974
  52. Bifet, A, de Francisci Morales, G, Read, J, Holmes, G, and Pfahringer, B . Efficient online evaluation of big data stream classifiers., Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015, Sydney, Australia, Array, pp.59-68. https://doi.org/10.1145/2783258.2783372
  53. Ishibuchi, H, Masuda, H, and Nojima, Y . Selecting a small number of non-dominated solutions to be presented to the decision maker., Proceedings of 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2014, San Diego, CA, Array, pp.3816-3821. https://doi.org/10.1109/SMC.2014.6974525
  54. Hansen, LK, and Salamon, P (1990). Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence. 12, 993-1001. https://doi.org/10.1109/34.58871
  55. Ladha, KK (1993). Condorcet’s jury theorem in light of de Finetti’s theorem. Social Choice and Welfare. 10, 69-85. https://doi.org/10.1007/BF00187434
  56. van Rijn, JN, Holmes, G, Pfahringer, B, and Vanschoren, J (2018). The online performance estimation framework: heterogeneous ensemble learning for data streams. Machine Learning. 107, 149-176. https://doi.org/10.1007/s10994-017-5686-9
  57. Lv, Y, Peng, S, Yuan, Y, Wang, C, Yin, P, Liu, J, and Wang, C (2019). A classifier using online bagging ensemble method for big data stream learning. Tsinghua Science and Technology. 24, 379-388. https://doi.org/10.26599/TST.2018.9010119
  58. Kolarik, M, Sarnovsky, M, and Paralic, J . Diversity in ensemble model for classification of data streams with concept drift., Proceedings of 2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI), 2021, Herl’any, Slovakia, Array, pp.000355-000360. https://doi.org/10.1109/SAMI50585.2021.9378625


1. Introduction

In addition to the rapid development of artificial intelligence and data mining techniques, Internet of Things (IoT) technology has transformed traditional transportation systems into intelligent transportation systems (ITS) [1]. In the ITS paradigm, vehicles are equipped with various types of processing, sensing, detection, and communication devices, leading to more intelligent and connected vehicles. Therefore, a plethora of intelligent applications, such as autonomous driving, smart navigation, and infotainment, has been developed [24]. However, these applications are computationally intensive, with processing requirements that surpass the processing capacity of in-vehicle hardware resources. This discrepancy between the application requirements and the computing power of the in-vehicle hardware resources can hinder the development of ITS-centered applications [5]. To remedy this situation, cloud computing has been employed within vehicular networks, and applications have been allowed to utilize the powerful hardware resources of remote cloud data centers [6]. However, cloud data centers may incur very long transmission delays that violate the extremely low latency requirements of some applications in vehicular networks. To overcome this issue, vehicular edge computing (VEC), which combines vehicular networks with multi-access edge computing (MEC), has been proposed [7]. In the VEC paradigm, several edge servers are deployed at the network edge, providing adequate computing power to handle the increasing processing requirements of in-vehicle applications.

Hence, VEC is considered a promising concept that provides a baseline for various applications to enhance the quality of service (QoS) based on computation offloading [8]. It can improve the user’s quality of experience by offloading computational tasks to the edge servers, which are expected to handle the processing demand of vehicular applications. Thus, it can handle delay-sensitive vehicular applications [9,10]. The envisioned VEC scenarios encompass a dynamically changing process characterized by an evolving stream of offloading requests generated from a diverse set of applications with varying processing demands and delay requirements, an instantaneously changing network state, and an oscillatory utilization and capacity of the hardware resources of both edge and cloud servers [11]. In such a dynamic and heterogeneous environment, workload orchestration plays a vital role in handling the incoming offloading requests generated from different applications and deciding on the computational server to which each request should be offloaded, considering the application characteristics, network status, and utilization levels of both edge and cloud hardware resources. Apparently, as the number of vehicles and in-vehicle applications increases, the demand for the networking and hardware resources of the VEC system will also increase. The increased demand for these resources will ultimately lead to increased competition for resources, hindering the decision-making process involved in the operation of workload orchestration [12]. Consequently, predicting the status of hardware resources and their ability to successfully handle offloaded requests is a challenging yet inevitable process. In this regard, machine learning (ML) algorithms can provide efficient tools for making predictions in dynamically changing environments [13]. The ultimate goal of the prediction process is to help the workload orchestrator determine whether offloading an incoming request to a particular server will eventually succeed or fail. In other words, the prediction process associated with workload orchestration is a binary classification problem in which a class label is assigned to each incoming offloading request. The class label will be set to either success or failure. Whereas a success indicates that a particular computational server is predicted to successfully handle the offloaded request and deliver the execution results back to the request source, a failure indicates that the offloading process is expected to fail because of either vehicle mobility, network congestion, or unavailability of processing resources. In this context, the work in [13] is among the first efforts to propose an ML-based workload orchestrator for VEC systems. They used several classification algorithms and tested their ability to aid in the workload orchestration process. However, they used these algorithms in a batch-learning scheme. In this scheme, an ML model is built under the assumption that the entire dataset (i.e., offloading requests) is available in memory. However, batch learning schemes suffer from several limitations. First, the training phase may last for a long period and require a significant amount of processing and memory resources. Second, the performance of the trained model is sensitive to the training dataset size. Third, in the batch learning scheme, the training data are assumed to be static and do not change over time; once a model is trained, it cannot acquire knowledge or experience from new samples. 
In other words, when there is a change in the statistical properties of the model’s input (i.e., concept drift), a new model must be constructed [14]. Consequently, as the offloading requests in VEC environments represent a continuous stream of data, online data stream classification algorithms are viable alternatives to offline (i.e., batch) classification algorithms in VEC-based workload orchestration for several reasons. First, online data stream classification algorithms are designed to handle unbounded streams of data and incrementally learn from incoming data; they must be continuously updated while making predictions when required [15]. Second, online data stream classification algorithms can be utilized in real-time applications that cannot be handled by classical (i.e., batch) learning algorithms [16]. Third, they are able to detect concept drift, allowing the trained models to continuously adapt and evolve with dynamically changing data streams [17,18]. Therefore, online data stream classification is perceived as a suitable tool for workload orchestration in VEC systems, where the environmental state and application behavior may change over time in an unpredictable manner. Many algorithms for data stream classification have been proposed [19–21]. Although some of these algorithms have been studied in multiple fields [16,22–25], the evaluation of their performance and the trade-offs involved in their performance metrics and memory costs have not been addressed for VEC environments. Hence, this work seeks to identify which online data stream classification algorithms may be suitable for VEC environments through rigorous comparative evaluation. In addition, it aims to analyze the trade-offs associated with key performance metrics, such as the algorithm’s accuracy, precision, recall, kappa statistic, and memory requirements [26]. The evaluation of online data stream classification algorithms involves multiple, often conflicting criteria; hence, it can be modeled as a multi-criteria decision analysis (MCDA) problem [27]. MCDA is a formal process that can be employed in decision-making by allowing identification of the alternatives, selection of the evaluation indicators (i.e., criteria), and aggregation of the obtained performance scores under the considered evaluation criteria [28,29]. In this context, composite indicators (CIs) form a family of MCDA methods that can be used to assign scores to various alternatives and rank them appropriately [30–32]. Hence, this work employs a CI-based approach to rank various online data stream classification algorithms based on their obtained performance metrics. The experiments were carried out using several online data stream classification algorithms (Section 2) on synthetically generated datasets, and all tests were performed using the scikit-multiflow evaluation platform [33]. The main contributions of this study can be summarized as follows.

  • Performing a rigorous evaluation of the performance of several online data stream classification algorithms on synthetic datasets generated from a simulated VEC environment.

  • Applying an MCDA technique to rank the various algorithms based on their observed performance trade-offs and dominance relations.

  • Identifying the most suitable algorithm or set of algorithms to augment workload orchestration in VEC environments.

The remainder of this paper is organized as follows. Section 2 briefly explains the online data stream classification algorithms considered in this study. Section 3 describes the research methodology and the research tools used. Section 4 presents the evaluation results and the trade-off analysis. Section 5 summarizes and concludes this paper.

2. Data Stream Classification

Traditionally, ML algorithms have been used to construct models based on static datasets. However, there is a growing need for mechanisms capable of handling vast volumes of data that arrive as streams. In this setting, new data samples can arrive at any time, and storing all of them for later batch training is impractical. Learning from continuous data streams therefore requires the ML model to be created and continuously updated throughout the stream. It is also important to address concept drift, in which the statistical properties of the incoming data change over time [34,35]. In addition, for VEC environments, the obtained ML model must be updated instantaneously, requiring algorithms that can achieve adequate accuracy levels under limited processing power and memory space [26]. In this work, several data stream classification algorithms, with distinct model construction mechanisms, complexity, and memory requirements, are considered. The algorithms are summarized as follows.

2.1 Bayes Learning Algorithms

In this category, the naive Bayes (NB) algorithm was used [20]. The NB algorithm performs Bayesian prediction under the assumption that all inputs, that is, the features of the input data samples, are independent. It is a simple classification algorithm with low processing requirements. Given n different classes, the trained NB classifier predicts, for every incoming data sample, the class to which the sample most probably belongs.

2.2 Lazy Learning Algorithms

The most commonly used lazy data stream classification algorithms are the k-nearest neighbors (kNN) classifier [36], the kNN classifier with the adaptive windowing (ADWIN) change detector (kNN-ADWIN), and the self-adjusting memory kNN classifier (SAM-kNN) [37,38]. In the data stream setting, the kNN algorithm operates by keeping track of a window with a fixed number of recently observed training data samples. When a new input data sample arrives, the kNN algorithm searches within the recently stored samples and finds the closest neighbors using a particular distance measure. The class label of the incoming data sample is then assigned accordingly. The kNN-ADWIN classifier improves on the regular kNN classifier through its resistance to concept drift: it employs the ADWIN change detector to determine which previous data samples to keep and which ones to discard (i.e., forget), which in turn regulates the sample window size. The SAM-kNN classifier is a further refinement, in which a self-adjusting memory (SAM) model constructs an ensemble of models targeting either current or previous concepts. These dedicated models can be applied according to the requirements of the current situation (i.e., concept). To accomplish this, the SAM model maintains both a short-term memory (STM) and a long-term memory (LTM). Whereas the STM is built for the current concept, the LTM is used to store information about previous concepts. A cleaning process is then used to control the size of the STM while maintaining consistency between the LTM and STM.
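To make the windowing idea concrete, the following is a minimal sketch of a fixed-window kNN stream classifier in Python (class and parameter names are illustrative; the ADWIN-driven window resizing used by kNN-ADWIN is omitted):

```python
from collections import Counter, deque
import numpy as np

class WindowedKNN:
    """Minimal sliding-window kNN stream classifier (illustrative sketch only)."""
    def __init__(self, k=5, window_size=1000):
        self.k = k
        self.window = deque(maxlen=window_size)  # stores (x, y) pairs; oldest samples are forgotten

    def partial_fit(self, x, y):
        self.window.append((np.asarray(x, dtype=float), y))

    def predict(self, x):
        if not self.window:
            return None                          # no training samples observed yet
        x = np.asarray(x, dtype=float)
        dists = [(np.linalg.norm(x - xi), yi) for xi, yi in self.window]
        neighbors = sorted(dists, key=lambda d: d[0])[: self.k]
        return Counter(label for _, label in neighbors).most_common(1)[0][0]
```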

2.3 Tree-Based Learning Algorithms

Tree-based algorithms are widely used in data stream classification applications. In this study, three main tree-based algorithms are used: the Hoeffding tree (HT) [39], the Hoeffding adaptive tree (HAT) [40], and the extremely fast decision tree (EFDT) [41]. The HT algorithm is an incremental, anytime decision tree induction algorithm that is able to learn from massive online data streams, assuming that the distribution generating the incoming data samples is static and does not change over time. It relies on the fact that a small set of input samples is often sufficient to select an optimal splitting attribute. This idea is mathematically supported by the Hoeffding bound, which quantifies the number of input samples required to estimate some statistic within a predefined precision. A theoretically attractive feature of the HT algorithm that is not shared by other online decision tree algorithms is its sound performance guarantees: relying on the Hoeffding bound, the output of an HT learner is asymptotically nearly identical to that of a batch learner trained on infinitely many input data samples. The HAT algorithm utilizes the ADWIN change detector to monitor the performance of different tree branches; branches whose performance decreases are replaced with new branches if the new branches are more accurate. Furthermore, the EFDT algorithm builds a tree incrementally. It selects and deploys a split once it is confident that the split is useful, and it subsequently revisits that decision, replacing the split if a more useful one becomes evident. The EFDT is able to learn quickly from static distributions and ultimately learns the asymptotic batch tree if the distribution generating the input data samples is stationary.
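The split decision can be illustrated with a short sketch of the Hoeffding bound itself, assuming the split heuristic is information gain (whose range is log2 of the number of classes) and a confidence parameter δ (the values below are illustrative):

```python
import math

def hoeffding_bound(value_range, delta, n):
    """epsilon such that the true mean lies within epsilon of the sample mean
    with probability 1 - delta, after n independent observations."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

# Example: with information gain (range log2(2) = 1 for two classes) and delta = 1e-7,
# a split is accepted once the gain difference between the two best attributes exceeds epsilon.
n_seen = 500
eps = hoeffding_bound(value_range=1.0, delta=1e-7, n=n_seen)   # roughly 0.127 here
best_gain, second_gain = 0.32, 0.11
if best_gain - second_gain > eps:
    print("split on the best attribute")
```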

2.4 Rule-Based Learning Algorithms

The very fast decision rule (VFDR) classifier is an online data stream classification algorithm [42]. Its learning process is similar to that of the HT algorithm, but it builds a collection of rules instead of a tree. The ultimate goal of VFDR is to create a compact set of rules that constitutes a highly interpretable classifier. Each rule is represented as a combination of conditions based on attribute values, together with a structure for maintaining sufficient statistics. These statistics are used to determine the class predicted by the associated rule for incoming data samples.

2.5 Ensemble Learning Algorithms

In this study, several ensemble learning methods are evaluated. These algorithms include Oza bagging (OB) [43], Oza bagging with the ADWIN change detector (OB-ADWIN) [43], leveraging bagging (LB) [44], online synthetic minority over-sampling technique bagging (SMOTE-B) [45], online under-over bagging (UO-B) [45], adaptive random forest (ARF) [46], and the dynamic weighted majority classifier (DWMC) [47]. For instance, OB is an online ensemble learning algorithm that can be considered a refinement of traditional batch-based bagging for incremental learning. In traditional batch-based bagging, M classifiers are trained on M distinct datasets created by drawing N samples, with replacement, from an N-sized training dataset. In the online setting there is no training dataset, but rather a stream of input data samples, so drawing samples with replacement cannot be performed directly. Online OB instead mimics the training phase by presenting each arriving input data sample to each base estimator k times, where k is drawn from a binomial distribution. Because the input data stream can be assumed to be infinite, and the binomial distribution tends to a Poisson (λ = 1) distribution with infinitely many samples, [43] showed that the process adopted by online OB is a good approximation of drawing with replacement. OB-ADWIN is an improvement over the OB algorithm in which the ADWIN change detector is employed. In addition, the LB algorithm builds on the OB algorithm, using a Poisson distribution to simulate the re-sampling process. The LB algorithm attempts to obtain better classification results by modifying the parameter of the Poisson distribution obtained under the infinite-stream assumption: it sets λ to six instead of one. The larger λ leads to greater diversity in the input space by attributing a wider range of weights to the input data samples. To achieve a further improvement over OB, LB also uses output detection codes, in which each class label is coded as an n-bit binary code and each bit is associated with one of n classifiers. As a new input data sample is analyzed, each classifier is trained on its associated bit, which provides the LB algorithm with some degree of error correction.
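A minimal sketch of the Poisson-based re-weighting that distinguishes OB (λ = 1) from LB (λ = 6) is given below; the base learners are assumed to expose the partial_fit/predict interface used by scikit-multiflow estimators, and the output detection codes used by LB are omitted:

```python
from collections import Counter
import numpy as np

class OnlineBagging:
    """Minimal online bagging sketch: each incoming sample is shown to every base
    learner k times, with k ~ Poisson(lam). lam=1 mimics Oza bagging (OB), while
    lam=6 mimics the heavier re-weighting used by leveraging bagging (LB)."""
    def __init__(self, base_learners, lam=1.0, seed=42):
        self.learners = base_learners            # estimators exposing partial_fit / predict
        self.lam = lam
        self.rng = np.random.default_rng(seed)

    def partial_fit(self, X, y, classes=None):
        for learner in self.learners:
            k = self.rng.poisson(self.lam)       # simulated 'sampling with replacement'
            for _ in range(k):
                learner.partial_fit(X, y, classes=classes)
        return self

    def predict(self, X):
        # Simple majority vote over the members (assumes each learner has seen data).
        votes = [learner.predict(X)[0] for learner in self.learners]
        return Counter(votes).most_common(1)[0][0]
```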

The ARF algorithm is an adaptation of the traditional batch-based random forest algorithm to the online data stream setting. In ARF, multiple decision trees are generated, and the classification of each incoming data sample is decided through weighted voting, in which the decision trees with the best performance (in terms of either accuracy or the kappa statistic) receive larger weights, that is, a higher priority in the classification decision. The DWMC algorithm relies on four mechanisms to handle concept drift: it trains the online base learners of the ensemble, assigns weights to the base learners depending on their performance, removes poorly performing learners, and adds new learners based on the global performance of the whole ensemble.
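The performance-weighted voting used by such ensembles can be sketched as follows (function and variable names are illustrative; the weights could be recent accuracy or kappa values):

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine member predictions using performance-based weights.
    predictions: list of class labels, one per ensemble member.
    weights: matching list of non-negative scores (e.g., recent accuracy or kappa)."""
    scores = defaultdict(float)
    for label, weight in zip(predictions, weights):
        scores[label] += weight
    return max(scores, key=scores.get)

# Example: three trees vote 'success', 'failure', 'success' with recent accuracies
# 0.93, 0.71 and 0.88; the weighted vote returns 'success'.
print(weighted_vote(['success', 'failure', 'success'], [0.93, 0.71, 0.88]))
```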

The basic idea of the UO-B and SMOTE-B algorithms is to take existing algorithms for online ensemble-based learning and combine them with batch-mode techniques for cost-sensitive bagging. Such a combination is useful for classifying imbalanced data streams, in which the vast majority of incoming data samples belong to the same class. The UO-B and SMOTE-B algorithms are adaptations of the UnderOverBagging and SMOTEBagging ensemble methods [48] to online data stream settings. Given an imbalanced dataset with N samples, where N+ samples come from the minority class S+ and N− samples from the majority class S−, batch bagging-based learning algorithms rely on under-sampling the majority class (underbagging), over-sampling the minority class (overbagging), or UnderOverBagging, which is a uniform combination of both. Furthermore, the re-sampling rate (a) varies over the bagging iterations, boosting the diversity among the base learners. In SMOTEBagging, the majority class is sampled with replacement at a rate of 100% (i.e., N− samples from the majority class are generated), while CN+ samples from the minority class are generated for each base learner, for some C > 1, among which a% are generated by re-sampling and the rest are obtained by the SMOTE technique [49]. In the online data stream setting, where no dataset is available, the sampling process is simulated by presenting each incoming data sample to the base model k times, where k is sampled from a Poisson(λ) distribution. The value of λ is chosen in a manner that allows the online bagging scheme to mimic the behavior of its batch-based counterpart.

3. Research Tools and Methodology

3.1 VEC System Model

Figure 1 shows the system model assumed in this work. This model is based on the work proposed in [13,50], where a model and a simulation environment for a VEC system were proposed. In this model, in-vehicle applications are assumed to periodically generate offloading requests for some computational tasks. In this work, the workload orchestrator is assumed to offload the current request (i.e., task) to either: (A) the road side unit (RSU) via the wireless local area network (WLAN) that currently covers the offloading vehicle, (B) the cloud server through the RSU - using the wide area network (WAN), or (C) the cloud server via the cellular network (CN).

It is assumed that there are three main in-vehicle application types. These applications and their corresponding tasks are uniquely characterized by a set of parameters, as shown in Table 1. The usage percentage is the percentage of vehicles running each application, task input file size is the mean size of the input data required by the offloaded task, task output file size is the average size of the output result generated after executing the offloaded task, virtual machine (VM) utilization indicates the percentage of the processing power consumed (i.e., utilized) by the offloaded task, and the number of instructions per task represents the expected processing demand of the offloaded task. As shown in Table 1, the characteristics of different applications were chosen such that adequate diversification in their task offloading frequency, processing demand, and network bandwidth requirements is achieved. Such a diverse set of applications will cause the simulated VEC environment to go through various states that cover a wide range of computational server loading and network bandwidth utilization levels.

3.2 Dataset Construction

To carry out a comparative evaluation between different data stream classification algorithms, synthetic datasets were generated based on the simulation tool of [13]. The datasets were created by simulating a VEC system with 2,400 vehicles and application characteristics, as shown in Table 1. In addition, the VEC simulation was accompanied by a hypothetical workload orchestrator that offloads the incoming requests according to Table 2. Each entry in Table 2 represents the probability that the orchestrator will make a particular decision for an incoming offloading request. As shown, these probabilities are application-dependent; for instance, there is a 0.60 probability of offloading tasks generated from App-1 to the VEC server, as compared with probabilities of 0.30 and 0.23 for App-2 and App-3, respectively. These offloading probabilities were set to mimic a relatively realistic situation that aligns with the application characteristics shown in Table 1. For instance, both App-2 and App-3 are more computationally intensive than App-1; they have a greater number of instructions per task. In addition, the average input/output file size is higher than that of App-1. Hence, they were assigned higher probabilities of being offloaded to the resource-enriched cloud server via the high-bandwidth WAN connection. Overall, the hypothetical orchestration behavior shown in Table 2 was observed to generate synthetic datasets with an adequate number of success and failure instances, leading to a more appropriate setting for evaluating the performance and discrimination ability of different online data stream classification algorithms. For each offloading request, a record-keeping process was performed. Record keeping involves recording the VEC environmental state when the offloading decision is made, in addition to the offloading outcome. The environmental state variables depend on the decision made by the workload orchestrator, as shown in Table 3. The environmental state variables contain information about application characteristics related to the offloaded task, average utilization of the VMs instantiated on the VEC server, upload and download delays associated with network connections used for task offloading, and the number of tasks recently offloaded to the selected server.

The offloading outcome is set to success if the task associated with the offloaded request is offloaded successfully and its result is delivered to the request source. It is set to failure if the execution result is not delivered successfully to the requesting vehicle. The failure of the offloading process may be due to several reasons, such as high resource utilization of the selected server, unavailability of network bandwidth, or vehicle mobility. Ultimately, the records obtained during the simulation process were used to create three different datasets, with each dataset corresponding to one of the three possible offloading decisions. These datasets are denoted as the Edge, CloudRSU, and CloudCN datasets. Each dataset stores the environmental state variables associated with each offloading request, along with the offloading outcome. The stored entries appear in chronological order. Table 4 lists some sample entries in the constructed datasets.
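For illustration only, a single record of a constructed dataset could look as follows; the field names and values below are hypothetical placeholders, whereas the actual state variables recorded per offloading target are those listed in Table 3:

```python
# One illustrative record (hypothetical field names and values; see Table 3 for the
# actual environmental state variables captured for each offloading decision).
record = {
    "app_type": "App-2",             # application generating the offloaded task
    "task_input_kb": 25,             # task input file size
    "task_output_kb": 10,            # task output file size
    "task_instructions": 10_000,     # expected processing demand of the task
    "server_vm_utilization": 67.5,   # average utilization of the selected server's VMs (%)
    "upload_delay_ms": 48.2,         # recent upload delay on the network used for offloading
    "download_delay_ms": 35.9,       # recent download delay on the same network
    "recent_offloaded_tasks": 14,    # tasks recently offloaded to the selected server
    "outcome": "failure",            # class label: success or failure
}
```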

3.3 Evaluation Methodology

This section explains the main steps implemented to evaluate the performance of the considered data stream classification algorithms on the obtained datasets. Figure 2 depicts the evaluation procedure followed by the scikit-multiflow evaluation platform [33].
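A minimal sketch of this interleaved test-then-train loop is given below, assuming one of the constructed datasets has been exported to a CSV file; the file name and label encoding are illustrative, and the classifier class name follows recent scikit-multiflow releases:

```python
import pandas as pd
from skmultiflow.data import DataStream
from skmultiflow.trees import HoeffdingTreeClassifier  # class name in recent scikit-multiflow releases

# Load one of the constructed datasets (file name is illustrative) and wrap it as a stream.
df = pd.read_csv("edge_dataset.csv")                  # state variables plus an 'outcome' column (assumed layout)
y = (df["outcome"] == "success").astype(int).values   # encode the class label numerically
X = df.drop(columns=["outcome"]).values
stream = DataStream(X, y)

model = HoeffdingTreeClassifier()
n, correct = 0, 0
while stream.has_more_samples():
    X_i, y_i = stream.next_sample()                   # one instance, in chronological order
    if n > 0:                                         # test first (skip the very first, untrained prediction) ...
        correct += int(model.predict(X_i)[0] == y_i[0])
    model.partial_fit(X_i, y_i, classes=stream.target_values)   # ... then train on the same instance
    n += 1

print("prequential accuracy:", correct / (n - 1))
```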

This evaluation procedure was performed for every data stream classification algorithm on each of the obtained datasets. As shown in Figure 2, the dataset is first loaded as a stream and then passed to the instantiated classification algorithm for online testing, incremental learning, and evaluation. In the evaluation platform, the instances contained in the obtained stream maintain the chronological order captured from the simulated VEC environment. In addition, each instance contains a copy of the dynamic state (i.e., application characteristics, designated server utilization, and network status) of the VEC environment upon making the orchestration decision. As a result, each tested algorithm in the evaluation platform is exposed to state variables that capture the state of the simulated VEC environment when making a classification decision. In this study, the classification algorithms were evaluated using the prequential evaluation method, also known as the interleaved test-then-train method. The prequential evaluation method is designed specifically for online data stream settings, in the sense that each incoming input data sample (i.e., instance) serves two purposes, and the input instances are analyzed sequentially, in order of arrival, and become immediately inaccessible. In this method, each incoming data sample is first used to test the classification algorithm (i.e., to make a prediction), after which the same sample is used to train the algorithm. The considered performance metrics are incrementally updated after each observed instance, allowing a continuous update of the performance of each tested algorithm in addition to real-time tracking of its adaptability to unseen instances. Hence, the classification algorithm is always tested, and its performance metrics are updated, on input data samples that it has not yet seen. The performance of the classification algorithms is quantified in terms of a number of widely used performance measures, including accuracy, precision, recall, F-score, the kappa statistic, and the memory size of the ML model obtained online. These measures are defined as follows:

  • Classification accuracy: the percentage of correctly classified input samples.

    \text{Accuracy} = \frac{TP + TN}{TP + FP + FN + TN},

    where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. TP is the number of correctly classified positive (i.e., success) instances. TN is the number of correctly classified negative (i.e., failure) instances. FP is the number of negative instances that are misclassified as positive. FN is the number of positive instances that are misclassified as negative.

  • Precision: measures the number of positive class predictions that actually belong to the positive class.

    \text{Precision} = \frac{TP}{TP + FP}.

  • Recall: quantifies the number of positive class predictions made out of all positive instances in the observed stream.

    \text{Recall} = \frac{TP}{TP + FN}.

  • F-score: is the harmonic mean of precision and recall.

    \text{F-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}.

  • Kappa statistic (κ): is a robust classification accuracy measure that takes into account the probability of agreement by chance, indicating the improvement over a majority-class classifier that predicts that all incoming samples fall in the majority class [51]. It plays a crucial role in evaluating classification accuracy, especially for imbalanced data streams; a short computational sketch of these measures follows this list.

    \kappa = \frac{p_0 - p_c}{1 - p_c},

    where p_0 is the prequential accuracy of the classifier and p_c is the probability that a chance classifier makes a correct prediction [52]. If κ = 1, the classification algorithm is always correct.

  • Model size: is a measure of the memory space occupied by the classification model obtained throughout the prequential evaluation process.
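All of the measures above can be computed incrementally from the prequential confusion counts; a minimal sketch, with the chance-agreement term p_c computed from the observed class marginals:

```python
def stream_metrics(tp, tn, fp, fn):
    """Compute the reported metrics from prequential confusion counts."""
    total = tp + tn + fp + fn
    accuracy  = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall    = tp / (tp + fn) if (tp + fn) else 0.0
    f_score   = (2 * precision * recall / (precision + recall)
                 if (precision + recall) else 0.0)
    # Chance agreement of a classifier that guesses according to the observed marginals.
    p_pos = ((tp + fp) / total) * ((tp + fn) / total)
    p_neg = ((fn + tn) / total) * ((fp + tn) / total)
    p0, pc = accuracy, p_pos + p_neg
    kappa = (p0 - pc) / (1 - pc) if pc < 1 else 1.0
    return accuracy, precision, recall, f_score, kappa

# Example: a heavily imbalanced stream, where kappa is far less flattering than accuracy.
print(stream_metrics(tp=1700, tn=100, fp=150, fn=50))
```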

Typically, the online classification of data streams does not have a single optimal solution (i.e., algorithm) that optimizes all of the involved performance metrics, but rather a set of possible solutions with noticeable trade-offs between the different metrics. For such scenarios, the notion of optimality is captured through Pareto optimality, which considers an algorithm inferior (or superior) to another algorithm only if it is inferior (or superior) in all metrics. The trade-offs in the algorithm selection process are captured by pairs of algorithms in which each is superior with respect to some metrics but inferior with respect to others; such algorithms are called non-dominated, and together they form the Pareto front of the algorithm performance optimization problem [53]. Hence, this work identifies, for each dataset, the set of non-dominated algorithms (SNDA) based on the obtained performance metrics.
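The SNDA itself can be extracted with a simple pairwise dominance check; a minimal sketch, assuming every metric is ascending (i.e., the raw model size has already been converted into an ascending score such as the model size score of Eq. 7):

```python
def dominates(a, b):
    """True if a is at least as good as b on every metric and strictly better on at least one.
    Both are tuples of metrics where larger values are better."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(perf):
    """perf: dict mapping algorithm name -> metric tuple. Returns the SNDA."""
    return {name: m for name, m in perf.items()
            if not any(dominates(other, m) for o_name, other in perf.items() if o_name != name)}

# Toy example with (accuracy, kappa, model size score):
perf = {"A": (0.93, 0.84, 0.99), "B": (0.90, 0.80, 0.98), "C": (0.88, 0.60, 1.00)}
print(sorted(non_dominated(perf)))   # ['A', 'C'] -- B is dominated by A
```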

Having obtained the SNDA, a CI-based MCDA procedure is followed to assign scores and ranks to the various algorithms. The main steps of this procedure are shown in Figure 3. First, the set of non-dominated algorithms is used to construct a performance matrix (SNDA matrix). The rows of the matrix represent the alternatives (i.e., algorithms), whereas the columns indicate the performance of these algorithms under each performance metric or indicator. Second, the performance metrics are assigned weights that can be considered trade-off coefficients, meaning that they represent the decrease in one indicator (x) that can be compensated for by another indicator (y). Third, the performance matrix is normalized using one of eight possible normalization techniques: the rank, percentile rank, standardized, min-max, logistic, categorical (−1, 0, 1), categorical (0.1, 0.2, 0.4, 0.6, 0.8, 1), and target methods [29]. The normalization step places all performance indicators on the same scale and assigns each alternative a score with respect to its counterparts. Fourth, the scores obtained after the normalization step are combined using a particular aggregation function to generate an index for each alternative (i.e., algorithm); these aggregation functions include the additive, geometric, harmonic, minimum, and median functions [29]. Fifth, the indices assigned to the different algorithms are used to rank them, with ranks ranging from 1 (best) to n (worst), where n is the number of elements in the SNDA. The outcome of the CI-based procedure is therefore a score and a rank for each member of the SNDA, identifying the algorithms best suited to the classification problem in question. In this work, the steps outlined in the flowchart are performed for each possible combination of normalization and aggregation methods; hence, an algorithm may be assigned different ranks under different combinations. To combine the results obtained under the different combinations, a rank frequency metric (rfm) is defined as shown in Eq. 6.

rfm_{A-i} = \frac{n_{rank_i}}{n_{combinations}} \times 100\%,

where rfm_{A-i} is the rank frequency metric of algorithm A for rank i, which shows the proportion of indices that rank algorithm A in the ith position, n_{rank_i} is the number of times algorithm A is chosen for the ith rank, and n_{combinations} is the number of possible combinations of normalization and aggregation methods under which algorithm A is ranked.
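A compact sketch of the CI-based ranking and the rank frequency metric is given below, using two of the eight normalization methods, two of the aggregation functions, and equal weights; the performance values are rounded from three of the algorithms in Table 8 and restricted to four indicators (accuracy, kappa, F-score, model size score) for brevity. This is a toy re-implementation for illustration, not the MCDA Index Tool of [29]:

```python
import numpy as np

def minmax(col):             # min-max normalization (one of the eight options)
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if hi > lo else np.ones_like(col)

def rank_norm(col):          # rank-based normalization (larger value -> larger rank score)
    return (col.argsort().argsort() + 1) / len(col)

def additive(mat, w):        # weighted arithmetic mean
    return mat @ w

def geometric(mat, w):       # weighted geometric mean
    return np.exp(np.log(np.clip(mat, 1e-12, None)) @ w)

def ranks(scores):           # rank 1 = best (highest index)
    return scores.argsort()[::-1].argsort() + 1

names = ["LB+ARF", "LB+EFDT", "LB+HAT"]
perf = np.array([[0.9614, 0.9049, 0.9731, 0.9812],
                 [0.9599, 0.9015, 0.9720, 0.9985],
                 [0.9540, 0.8868, 0.9679, 0.9996]])
w = np.full(perf.shape[1], 1 / perf.shape[1])          # equal trade-off coefficients

rank_table = []
for norm in (minmax, rank_norm):
    for agg in (additive, geometric):
        normalized = np.column_stack([norm(perf[:, j]) for j in range(perf.shape[1])])
        rank_table.append(ranks(agg(normalized, w)))
rank_table = np.array(rank_table)                       # combinations x algorithms

for i, name in enumerate(names):                        # rfm of Eq. 6: share of combinations per rank
    first = np.mean(rank_table[:, i] == 1) * 100
    print(f"{name}: ranked first in {first:.0f}% of the combinations")
```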

4. Results and Analysis

4.1 Edge Dataset Results

Table 5 summarizes the performance of the considered online data stream classification algorithms on a stream generated based on the Edge dataset, in terms of accuracy, precision, recall, F-score, kappa and the model size. The generated stream consisted of 25,000 samples with 17,333 and 7,667 success and failure instances, respectively. All algorithms were tested using their default set of parameters defined in the scikit-multiflow evaluation platform [33]. In addition, the ensemble-based algorithms were further tested on different sets of base estimators. For instance, the default base estimator for the LB algorithm is the kNN classifier.

However, it was also tested in other configurations, where its base estimator is set to HT (LB+HT), HAT (LB+HAT), ARF (LB+ARF), EFDT (LB+EFDT), or VFDRC (LB+VFDRC); the same applies to all other ensemble-based algorithms. As shown, the considered algorithms exhibited different values under the metrics used, and no single algorithm surpasses all others in terms of every performance metric. In this regard, the LB+ARF ensemble achieved the highest accuracy, precision, F-score, and kappa values, while the NB algorithm outperformed the other algorithms in terms of the recall and model size metrics. Figure 4 shows a box plot of the obtained values for each metric, visualizing the degree of variation between the algorithms for each performance metric.

In general, the algorithms vary widely in terms of their kappa statistics. In addition, the considered algorithms achieved better precision, recall, and F-score values compared with their accuracy values. The class imbalance observed in the Edge dataset (i.e., stream) has a direct effect on the performance of the classification algorithms; only those algorithms capable of handling class imbalance and concept drift achieved relatively high performance compared with the other algorithms. Specifically, the ARF algorithm and the ensemble algorithms whose base estimator is set to HT, HAT, ARF, EFDT, or VFDRC have relatively better performance metrics than all other algorithms.

In Figure 4, the model size score (MSS) is computed according to Eq. 7.

MSS_i = 1 - \frac{MS_i - MS_{min}}{MS_{max} - MS_{min}},

where MSS_i is the score assigned to algorithm i, MS_i is the model size under the ith algorithm, and MS_{min} and MS_{max} are the minimum and maximum model sizes, respectively. The formula given in Eq. 7 normalizes the model size and converts it into an ascending attribute that favors algorithms with lower model sizes. It also puts the model size on the same scale as the other performance metrics. As shown in Figure 4, the vast majority of classification algorithms achieved high model size scores (i.e., low model sizes compared with other algorithms). Nevertheless, a few algorithms have significantly larger model sizes (i.e., low model size scores).
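Eq. 7 in code form (a minimal helper; the model sizes below are illustrative):

```python
def model_size_scores(sizes_kb):
    """Convert raw model sizes (kB) into the ascending model size score of Eq. 7."""
    lo, hi = min(sizes_kb), max(sizes_kb)
    return [1 - (s - lo) / (hi - lo) for s in sizes_kb]

# Example with three illustrative model sizes (kB): the smallest model scores 1, the largest 0.
print(model_size_scores([11, 1037, 980406]))
```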

Because no single algorithm is superior in all performance metrics, but rather different algorithms are superior in some metrics and inferior in others, it is important to identify the SNDA. Table 6 shows the SNDA for the Edge dataset classification, along with the corresponding performance metrics.
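A simple maximization Pareto filter is sufficient to extract such a set once all indicators, including the model size score, are expressed as "higher is better" attributes; the sketch below is one straightforward way to do this and is not tied to any particular library.

```python
import numpy as np

def non_dominated(P):
    """Return the row indices of P that no other row dominates, where every
    column of P is a 'higher is better' performance indicator."""
    keep = []
    for i, row in enumerate(P):
        dominated = any(np.all(P[j] >= row) and np.any(P[j] > row)
                        for j in range(len(P)) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Toy usage: columns are (accuracy, kappa, precision, recall, F-score, model size score).
P = np.array([[0.90, 0.77, 0.92, 0.93, 0.92, 0.999],
              [0.69, 0.29, 0.74, 0.79, 0.77, 0.998],   # dominated by the first row
              [0.67, 0.04, 0.68, 0.99, 0.80, 1.000]])  # kept: best recall and size score
print(non_dominated(P))                                # -> [0, 2]
```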

As shown in Table 6, this set consists of 25 algorithms with various trade-offs between the considered performance metrics. To determine the most viable algorithms for the Edge data stream, the information in Table 6 was fed to the CI-based MCDA process to obtain the rank frequency metric (rfm) of each algorithm in the SNDA. Figure 5 shows the rfm values of the algorithms in this set; results are shown only for algorithms that occupied one of the first three ranks at least once (i.e., algorithms with non-zero rfm_{A-1}, rfm_{A-2}, or rfm_{A-3} values). As depicted in Figure 5, the LB+ARF, LB+EFDT, and LB+HAT algorithms have higher rfm_{A-1}, rfm_{A-2}, and rfm_{A-3} values than their counterparts; in other words, they always compete for the first three ranks across the various combinations of normalization and aggregation methods. The LB+ARF algorithm occupied the first rank for the majority of combinations and can be considered the most promising choice for the Edge data stream classification problem. Conversely, the LB+HT, LB+VFDRC, DWMC+EFDT, OB-ADWIN+HAT, and OB+HAT algorithms occupied ranks equal to or worse than the fourth rank for a large portion of the normalization and aggregation combinations.

4.2 Cloud via RSU Dataset Results

This section presents the performance metrics of the various data stream classification algorithms on a stream generated from the CloudRSU dataset. The generated stream consists of 25,000 samples with 17,961 success and 7,039 failure instances. Table 7 shows the accuracy, precision, recall, F-score, kappa, and model size metrics for the different algorithms, which obtained different performance values under the considered metrics. Similar to the Edge data stream, no single algorithm surpasses the others in terms of all performance metrics. In this regard, the LB+ARF algorithm achieved the highest accuracy, F-score, and kappa values; the LB+EFDT algorithm achieved the highest precision; and the UO-B and NB algorithms achieved the highest recall and the smallest model size, respectively. Figure 6 shows a box plot of the obtained performance metrics for the various algorithms. The vast majority of algorithms have comparable performance metrics, with the kappa statistic having generally lower values than the other metrics.

Although the majority of the considered algorithms, especially the ensemble-based ones, have comparable performance metrics, no single algorithm surpasses all others with respect to the considered metrics. Hence, it is necessary to analyze the dominance relationships between these algorithms and identify the SNDA. Table 8 lists the SNDA for the stream generated from the CloudRSU dataset; this set contains 22 algorithms. The set was then used to obtain the rank frequency metrics shown in Figure 7. The results are for those algorithms that occupied one of the first three ranks at least once across all possible combinations of normalization and aggregation methods.

The results shown in Figure 7 illustrate that the LB+EFDT, LB+HAT, and ARF algorithms dominate other algorithms in terms of the number of times they occupy one of the top three ranks. Evidently, the LB+EFDT algorithm ranks robustly within the first three ranks, with a high share of combinations assigning it to the first rank. Hence, it can be considered a suitable data stream classification algorithm for streams generated from the CloudRSU dataset.

4.3 Cloud via CN Dataset Results

This section presents the results obtained for the various algorithms on the data stream generated from the CloudCN dataset. Similar to the other two cases, the generated stream consisted of 25,000 instances, with 16,188 success and 8,812 failure instances. Table 9 summarizes the performance metrics obtained by the studied algorithms. As shown, the algorithms vary in terms of their obtained performance values, with no single algorithm dominating the others in all metrics. The LB+ARF and LB+EFDT algorithms achieved the highest accuracy, F-score, precision, and kappa values, while the NB algorithm had the smallest model size.

Figure 8 shows the variability observed among the algorithms with respect to all performance metrics. As depicted, a large portion of the considered algorithms have very similar performance values. In addition, the values obtained under the kappa statistic are lower than those of the other metrics. The results of the algorithm dominance are shown in Table 10. As illustrated, the SNDA for the considered stream consists of 23 algorithms with various trade-offs between performance metrics.

Figure 9 shows the ranking results obtained by feeding the information in Table 10 to the CI-based MCDA procedure; it considers only algorithms with non-zero rfm_{A-1}, rfm_{A-2}, or rfm_{A-3} values. The results depicted in Figure 9 show that the LB+VFDRC, LB+HT, and LB+EFDT algorithms surpass all other algorithms with respect to their rank frequency metrics. While the LB+EFDT algorithm is assigned the first rank for a larger number of combinations than the LB+VFDRC algorithm, the latter is never assigned a rank worse than third under any combination of normalization and aggregation. In addition, the LB+HT and LB+EFDT algorithms occupied positions equal to or worse than the fourth rank for a noticeable portion of the scoring and ranking combinations. Moreover, algorithms other than the best three were allocated the fourth or worse ranks for a relatively large number of combinations, which hinders their potential applicability to the considered online data stream classification task.

In summary, the results shown in Tables 6, 8, and 10 illustrate that, for each classification task, there is a set of non-dominated algorithms with comparable performance metrics that can be utilized in real-life applications. The ranking results in Figures 5, 7, and 9 provide insight into which algorithms are superior to the other members of the corresponding non-dominated set, considering the trade-offs observed across all performance metrics. Although the results presented in Sections 4.1, 4.2, and 4.3 highlight the most suitable algorithms for each classification task, it is worth noting that the LB+EFDT algorithm (i.e., the leveraging bagging algorithm with an extremely fast decision tree base estimator) consistently occupies one of the top-ranking positions for each of the considered datasets, owing to its competitive performance metrics and reasonably small model size.

The LB+EFDT algorithm maintains an ensemble of n EFDT base classifiers, where n is set to 10 by default in the utilized evaluation platform [33]. To classify an incoming instance, every individual classifier produces a prediction (i.e., a vote), and the final classification result is obtained by aggregating the individual predictions. Following Condorcet's jury theorem [54-56], there is theoretical proof that the error rate of an ensemble approaches zero in the limit, provided that two conditions are satisfied. First, the individual base classifiers must perform better than random guessing. Second, the individual classification models must exhibit diversity; that is, they should not produce correlated errors. In the case of the LB+EFDT algorithm, the EFDT algorithm is used as the base classification model. The EFDT is a refinement of the HT algorithm. Whereas the HT algorithm delays the selection of a split at a node until it is confident that it has determined the best split, the EFDT algorithm selects and deploys a split as soon as it is confident that the split is useful. In addition, the HT algorithm never revisits its splitting decisions, while the EFDT algorithm revisits them, replacing a split if it later becomes evident that a better one is available. Hence, the EFDT algorithm can learn rapidly from incoming instances with an inbuilt tolerance against concept drift, utilizing the most advantageous splitting decision recognized to date [41]. Consequently, the EFDT algorithm is a viable base classifier that meets the first condition implied by Condorcet's jury theorem.
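As a rough, idealized illustration of the first condition, the following sketch computes the accuracy of a strict majority vote over ten independent classifiers, each as accurate as the single EFDT on the Edge stream (0.7936, Table 5); real base learners are partially correlated, which is why the observed LB+EFDT accuracy (0.9326) falls short of this optimistic bound.

```python
from math import comb

def majority_vote_accuracy(p_single, n):
    """Probability that a strict majority of n independent classifiers,
    each correct with probability p_single, casts the correct vote."""
    return sum(comb(n, k) * p_single**k * (1 - p_single)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_vote_accuracy(0.7936, 10))   # ~0.96 under the independence assumption
```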

The LB algorithm employs online bagging to train its base models (i.e., classifiers). In this regard, as each incoming classification instance is observed, online re-sampling is performed by presenting that instance to each model k ~ Poisson (λ) times and updating each model accordingly. The value of k is regarded as the weight of the incoming instance. To increase online re-sampling, the LB algorithm uses a higher value of the λ parameter of the Poisson distribution, which is set to six by default. With this value of λ, the LB algorithm causes more randomness in the weights of the incoming instances. Hence, it achieves a higher input space diversity by assigning a different range of weights to each incoming instance. In addition, the LB algorithm leverages the bagging performance by adding randomization at the output of the ensemble using output codes. As shown in Section 2.5, the LB algorithm assigns to each possible class label a binary string of length n, where n is the number of base classifiers in the ensemble. Each base classifier learns one bit in a binary string. Unlike standard ensemble methods, the LB algorithm utilizes random output codes instead of deterministic ones. In other words, while the base classifiers in the standard methods predict the same function, using output codes allows each classifier in the LB ensemble to predict a different function [44]. This would minimize the influence of correlations between the base classifiers and, in turn, increase the diversity of the ensemble [57,58]. Hence, the introduction of randomization to the input and output of the ensemble classifiers yields, to some extent, a diversified ensemble satisfying the second condition imposed by Condorcet’s jury theorem. Furthermore, the LB algorithm uses the ADWIN algorithm to deal with concept drift, with one ADWIN instance for each classifier in the ensemble. Each time a concept drift is detected, the worst classifier is reset. Hence, the LB algorithm always monitors the quality of its online learning process and adheres to the current class distribution of incoming classification instances. In general, the inherent diversity among its base classifiers compensates for the classification errors induced by any individual classifier. This can be clearly observed in Table 5, where a single EFDT classifier achieved an accuracy of 0.7936, and the accuracy of the LB+EFDT ensemble reached 0.9326. Altogether, the LB+EFDT algorithm demonstrated the ability to maintain a noticeable performance under all considered datasets and performance metrics. Hence, it could be considered a feasible choice for online data stream classification in real-life VEC environments.
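The re-sampling step described above can be sketched as follows; the helper name and the use of partial_fit with a sample weight are illustrative assumptions about how an online-bagging-style update might look, not the exact scikit-multiflow internals.

```python
import numpy as np

rng = np.random.default_rng(0)

def leveraging_bagging_update(ensemble, x, y, lam=6.0):
    """Online re-sampling used by leveraging bagging: each incoming instance (x, y)
    is presented to every base model k ~ Poisson(lam) times (lam = 6 by default),
    so each model sees a differently weighted copy of the same instance."""
    for model in ensemble:
        k = rng.poisson(lam)
        if k > 0:
            # Equivalent to showing the instance k times; assumes the base models
            # expose a partial_fit that accepts a sample weight.
            model.partial_fit([x], [y], sample_weight=[k])
    return ensemble
```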

5. Conclusion

VEC has recently become an integral part of intelligent transportation systems. In these systems, workload orchestration plays a crucial role in selecting the most appropriate computational node to execute tasks generated from vehicular applications. The continuous streams of tasks generated by vehicular applications pose significant challenges in data management, execution, and storage. Online ML algorithms can handle continuous data streams and manage large data volumes. Hence, this paper has addressed the potential application of online data stream classification algorithms to support workload orchestration in VEC environments, where such algorithms can be used to predict the ability of the available computational nodes to successfully handle a particular task. In this work, various online data stream classification algorithms were evaluated based on synthetic datasets generated from simulated VEC environments, and an MCDA technique was employed to rank the algorithms based on the obtained performance metrics. The evaluation results show that the considered algorithms can handle online classification operations with noticeable trade-offs and dominance relations with respect to their obtained performance. In addition, the employed multi-criteria decision analysis technique demonstrated the ability to rank the different algorithms and identify the most appropriate ones for online classification in VEC environments.

Future work will consider implementing a workload orchestration algorithm based on the findings of this study to show how online data stream classification can enhance orchestration performance in dynamically changing VEC environments.

Figure 1. System model.

Figure 2. Evaluation flowchart.

Figure 3. CI MCDA flowchart.

Figure 4. Box plot of performance metrics on the Edge dataset.

Figure 5. Algorithm ranking for the Edge dataset.

Figure 6. Box plot of performance metrics on the CloudRSU dataset.

Figure 7. Algorithm ranking for the CloudRSU dataset.

Figure 8. Box plot of performance metrics on the CloudCN dataset.

Figure 9. Algorithm ranking for the CloudCN dataset.

Table 1. In-vehicle application characteristics.

Characteristic | App-1 | App-2 | App-3
Usage percent | 30 | 35 | 35
Task input file size (kB) | 20 | 40 | 20
Task output file size (kB) | 20 | 20 | 80
Task generation rate (per second) | 0.3 | 0.2 | 0.07
Number of instructions per task (× 10^9) | 3 | 10 | 20
VM utilization on RSU (%) | 6 | 20 | 40
VM utilization on Cloud (%) | 1.6 | 4 | 8


Table 2. Hypothetical orchestration probabilities.

Application | VEC server | Cloud via RSU | Cloud via CN
App-1 | 0.60 | 0.23 | 0.17
App-2 | 0.30 | 0.53 | 0.17
App-3 | 0.23 | 0.60 | 0.17


Table 3. VEC environment state variables per offloading decision.

Variable | Decision
Number of instructions per task | All
Number of recently uploaded tasks | All
Input file size (kB) | All
Output file size (kB) | All
Edge server utilization (%) | VEC server
WLAN upload delay (ms) | VEC server
WLAN download delay (ms) | VEC server
WAN upload delay (ms) | Cloud via RSU
WAN download delay (ms) | Cloud via RSU
CN upload delay (ms) | Cloud via CN
CN download delay (ms) | Cloud via CN


Table 4. Sample dataset entries.

Variable | Edge dataset (Success) | Edge dataset (Failure) | CloudRSU dataset (Success) | CloudRSU dataset (Failure) | CloudCN dataset (Success) | CloudCN dataset (Failure)
Number of instructions per task | 3,057 | 3,178 | 9,531 | 19,242 | 3,196 | 9,598
Number of previously offloaded tasks | 21 | 29 | - | - | - | -
Number of offloaded tasks | - | - | 11 | 84 | 1 | 92
Input file size | 19 | 21 | 21 | 88 | 19 | 21
Output file size | 18 | 21 | 42 | 22 | 20 | 36
Edge server utilization | 25 | 20.9 | - | - | - | -
WLAN upload delay | 23.02 | 84.45 | - | - | - | -
WLAN download delay | 36.20 | 28.44 | - | - | - | -
WAN upload delay | - | - | 6.18 | 16.30 | - | -
WAN download delay | - | - | 12.07 | 55.65 | - | -
CN upload delay | - | - | - | - | 21.6 | 58
CN download delay | - | - | - | - | 68.6 | 20.2


Table 5. Performance comparison on the Edge dataset.

Algorithm | Accuracy | Precision | Recall | F-score | Kappa | Model size (kB)
NB | 0.6748 | 0.6763 | 0.9857 | 0.8022 | 0.0419 | 12
kNN | 0.6875 | 0.7486 | 0.8022 | 0.7745 | 0.2675 | 502
kNN-ADWIN | 0.6875 | 0.7487 | 0.8021 | 0.7745 | 0.2675 | 513
SAM-kNN | 0.7468 | 0.7788 | 0.8681 | 0.8210 | 0.3927 | 981137
VFDRC | 0.7199 | 0.7697 | 0.8296 | 0.7985 | 0.3413 | 81
HT | 0.7393 | 0.7655 | 0.8798 | 0.8187 | 0.3627 | 145
HAT | 0.7964 | 0.8252 | 0.8826 | 0.8529 | 0.5233 | 102
EFDT | 0.7936 | 0.8371 | 0.8511 | 0.8440 | 0.5385 | 361
ARF | 0.8991 | 0.9169 | 0.9333 | 0.9250 | 0.7707 | 1502
DWMC | 0.7794 | 0.8149 | 0.8671 | 0.8402 | 0.4849 | 60
LB | 0.7241 | 0.8047 | 0.7620 | 0.7828 | 0.4054 | 4857
SMOTE-B | 0.6727 | 0.8035 | 0.6604 | 0.7250 | 0.3302 | 60337
UO-B | 0.7109 | 0.7123 | 0.9374 | 0.8095 | 0.2542 | 3453
OB | 0.6872 | 0.7449 | 0.7946 | 0.7689 | 0.2864 | 4446
OB-ADWIN | 0.6961 | 0.7542 | 0.7954 | 0.7743 | 0.3106 | 4440
LB+HT | 0.9173 | 0.9302 | 0.9519 | 0.9409 | 0.8031 | 1069
LB+HAT | 0.9304 | 0.9415 | 0.9590 | 0.9502 | 0.8349 | 1037
LB+ARF | 0.9380 | 0.9552 | 0.9552 | 0.9552 | 0.8595 | 24580
LB+EFDT | 0.9326 | 0.9469 | 0.9562 | 0.9515 | 0.8409 | 2063
LB+VFDRC | 0.9161 | 0.9305 | 0.9494 | 0.9399 | 0.8005 | 756
DWMC+HT | 0.8412 | 0.8710 | 0.9046 | 0.8875 | 0.6183 | 163
DWMC+HAT | 0.8530 | 0.8840 | 0.9066 | 0.8952 | 0.6495 | 288
DWMC+ARF | 0.9201 | 0.9393 | 0.9457 | 0.9425 | 0.8117 | 7581
DWMC+EFDT | 0.8700 | 0.8903 | 0.9264 | 0.9080 | 0.6871 | 444
DWMC+VFDRC | 0.8309 | 0.8789 | 0.8765 | 0.8777 | 0.6040 | 114
SMOTE-B+HT | 0.7981 | 0.8791 | 0.8212 | 0.8492 | 0.5449 | 56153
SMOTE-B+HAT | 0.8249 | 0.8951 | 0.8462 | 0.8700 | 0.6027 | 56311
SMOTE-B+ARF | 0.8816 | 0.9247 | 0.9024 | 0.9134 | 0.7262 | 80722
SMOTE-B+EFDT | 0.8565 | 0.9081 | 0.8820 | 0.8949 | 0.6693 | 56513
SMOTE-B+VFDRC | 0.7893 | 0.8609 | 0.8295 | 0.8449 | 0.5166 | 56147
UO-B+HT | 0.7872 | 0.7727 | 0.9787 | 0.8636 | 0.4018 | 371
UO-B+HAT | 0.8231 | 0.8876 | 0.8524 | 0.8696 | 0.5949 | 56278
UO-B+ARF | 0.8800 | 0.9234 | 0.9014 | 0.9123 | 0.7227 | 81812
UO-B+EFDT | 0.8118 | 0.7959 | 0.9792 | 0.8781 | 0.4844 | 926
UO-B+VFDRC | 0.7973 | 0.7909 | 0.9612 | 0.8678 | 0.4505 | 418
OB+HT | 0.7874 | 0.8116 | 0.9022 | 0.8545 | 0.4638 | 1661
OB+HAT | 0.8448 | 0.8579 | 0.9297 | 0.8924 | 0.6158 | 746
OB+ARF | 0.8406 | 0.8907 | 0.8774 | 0.8840 | 0.6295 | 11315
OB+EFDT | 0.8151 | 0.8427 | 0.9011 | 0.8709 | 0.5467 | 3267
OB+VFDRC | 0.7790 | 0.7964 | 0.9145 | 0.8514 | 0.4286 | 872
OB-ADWIN+HT | 0.8300 | 0.8510 | 0.9144 | 0.8816 | 0.5816 | 693
OB-ADWIN+HAT | 0.8542 | 0.8676 | 0.9315 | 0.8984 | 0.6415 | 567
OB-ADWIN+ARF | 0.9020 | 0.9198 | 0.9404 | 0.9300 | 0.7667 | 15946
OB-ADWIN+EFDT | 0.8604 | 0.8793 | 0.9254 | 0.9018 | 0.6615 | 816
OB-ADWIN+VFDRC | 0.8221 | 0.8417 | 0.9151 | 0.8769 | 0.5585 | 556


Table 6. Non-dominated set of algorithms for the Edge dataset.

Algorithm | Accuracy | Kappa | Precision | Recall | F-score | Model size score
NB | 0.6748 | 0.0419 | 0.6763 | 0.9857 | 0.8022 | 1
kNN | 0.6875 | 0.2675 | 0.7486 | 0.8022 | 0.7745 | 0.9995
kNN-ADWIN | 0.6875 | 0.2675 | 0.7487 | 0.8021 | 0.7745 | 0.9995
SAM-kNN | 0.7468 | 0.3927 | 0.7788 | 0.8681 | 0.8210 | 0
VFDRC | 0.7199 | 0.3413 | 0.7697 | 0.8296 | 0.7985 | 0.9999
HT | 0.7393 | 0.3627 | 0.7655 | 0.8798 | 0.8187 | 0.9999
HAT | 0.7964 | 0.5233 | 0.8252 | 0.8826 | 0.8529 | 0.9999
EFDT | 0.7936 | 0.5385 | 0.8371 | 0.8511 | 0.8440 | 0.9996
ARF | 0.8991 | 0.7707 | 0.9169 | 0.9333 | 0.9250 | 0.9985
DWMC | 0.7794 | 0.4849 | 0.8149 | 0.8671 | 0.8402 | 1
UO-B | 0.7109 | 0.2542 | 0.7123 | 0.9374 | 0.8095 | 0.9965
LB+HT | 0.9173 | 0.8031 | 0.9302 | 0.9519 | 0.9409 | 0.9989
LB+HAT | 0.9304 | 0.8349 | 0.9415 | 0.9590 | 0.9502 | 0.9990
LB+ARF | 0.9380 | 0.8595 | 0.9552 | 0.9552 | 0.9552 | 0.9750
LB+EFDT | 0.9326 | 0.8409 | 0.9469 | 0.9562 | 0.9515 | 0.9979
LB+VFDRC | 0.9161 | 0.8005 | 0.9305 | 0.9494 | 0.9399 | 0.9992
DWMC+HT | 0.8412 | 0.6183 | 0.8710 | 0.9046 | 0.8875 | 0.9998
DWMC+HAT | 0.8530 | 0.6495 | 0.8840 | 0.9066 | 0.8952 | 0.9997
DWMC+EFDT | 0.8700 | 0.6871 | 0.8903 | 0.9264 | 0.9080 | 0.9996
DWMC+VFDRC | 0.8309 | 0.6040 | 0.8789 | 0.8765 | 0.8777 | 0.9999
UO-B+HT | 0.7872 | 0.4018 | 0.7727 | 0.9787 | 0.8636 | 0.9996
UO-B+EFDT | 0.8118 | 0.4844 | 0.7959 | 0.9792 | 0.8781 | 0.9991
UO-B+VFDRC | 0.7973 | 0.4505 | 0.7909 | 0.9612 | 0.8678 | 0.9996
OB+HAT | 0.8448 | 0.6158 | 0.8579 | 0.9297 | 0.8924 | 0.9993
OB-ADWIN+HAT | 0.8542 | 0.6415 | 0.8676 | 0.9315 | 0.8984 | 0.9994


Table 7. Performance comparison on the CloudRSU dataset.

Algorithm | Accuracy | Precision | Recall | F-score | Kappa | Model size (kB)
NB | 0.9095 | 0.9334 | 0.9409 | 0.9371 | 0.7754 | 11
kNN | 0.6980 | 0.7522 | 0.8634 | 0.8040 | 0.1599 | 463
kNN-ADWIN | 0.6952 | 0.7530 | 0.8557 | 0.8011 | 0.1604 | 474
SAM-kNN | 0.7525 | 0.7781 | 0.9162 | 0.8415 | 0.2934 | 980406
VFDRC | 0.9087 | 0.9475 | 0.9239 | 0.9356 | 0.7791 | 54
HT | 0.9267 | 0.9456 | 0.9525 | 0.9490 | 0.8182 | 152
HAT | 0.9238 | 0.9347 | 0.9609 | 0.9476 | 0.8080 | 72
EFDT | 0.9298 | 0.9575 | 0.9441 | 0.9508 | 0.8288 | 143
ARF | 0.9522 | 0.9631 | 0.9705 | 0.9668 | 0.8814 | 1572
DWMC | 0.9253 | 0.9418 | 0.9548 | 0.9483 | 0.8137 | 52
LB | 0.7205 | 0.8062 | 0.8035 | 0.8048 | 0.3127 | 4467
SMOTE-B | 0.6543 | 0.7811 | 0.7189 | 0.7487 | 0.1961 | 55935
UO-B | 0.7264 | 0.7325 | 0.9745 | 0.8363 | 0.0958 | 3129
OB | 0.6952 | 0.7505 | 0.8616 | 0.8022 | 0.1519 | 4456
OB-ADWIN | 0.7145 | 0.7658 | 0.8671 | 0.8133 | 0.2164 | 3844
LB+HT | 0.9467 | 0.9611 | 0.9660 | 0.9635 | 0.8702 | 1179
LB+HAT | 0.9540 | 0.9690 | 0.9669 | 0.9679 | 0.8868 | 430
LB+ARF | 0.9614 | 0.9726 | 0.9737 | 0.9731 | 0.9049 | 18477
LB+EFDT | 0.9599 | 0.9743 | 0.9697 | 0.9720 | 0.9015 | 1467
LB+VFDRC | 0.9488 | 0.9650 | 0.9636 | 0.9643 | 0.8739 | 448
DWMC+HT | 0.9349 | 0.9498 | 0.9610 | 0.9554 | 0.8378 | 110
DWMC+HAT | 0.9380 | 0.9498 | 0.9646 | 0.9571 | 0.8453 | 234
DWMC+ARF | 0.9561 | 0.9674 | 0.9716 | 0.9695 | 0.8915 | 5115
DWMC+EFDT | 0.9416 | 0.9552 | 0.9638 | 0.9595 | 0.8549 | 167
DWMC+VFDRC | 0.9186 | 0.9548 | 0.9305 | 0.9425 | 0.8031 | 135
SMOTE-B+HT | 0.9282 | 0.9458 | 0.9546 | 0.9502 | 0.8218 | 52406
SMOTE-B+HAT | 0.9251 | 0.9433 | 0.9528 | 0.9480 | 0.8139 | 52585
SMOTE-B+ARF | 0.9517 | 0.9665 | 0.9662 | 0.9663 | 0.8810 | 69819
SMOTE-B+EFDT | 0.9392 | 0.9539 | 0.9618 | 0.9578 | 0.8492 | 52699
SMOTE-B+VFDRC | 0.9151 | 0.9510 | 0.9295 | 0.9401 | 0.7941 | 52373
UO-B+HT | 0.9269 | 0.9325 | 0.9681 | 0.9500 | 0.8142 | 388
UO-B+HAT | 0.9288 | 0.9349 | 0.9680 | 0.9512 | 0.8194 | 554
UO-B+ARF | 0.9446 | 0.9498 | 0.9742 | 0.9618 | 0.8607 | 15765
UO-B+EFDT | 0.9320 | 0.9392 | 0.9679 | 0.9533 | 0.8282 | 517
UO-B+VFDRC | 0.9257 | 0.9378 | 0.9600 | 0.9488 | 0.8133 | 322
OB+HT | 0.9262 | 0.9348 | 0.9601 | 0.9473 | 0.8184 | 1322
OB+HAT | 0.9252 | 0.9362 | 0.9612 | 0.9485 | 0.8118 | 647
OB+ARF | 0.9613 | 0.9613 | 0.9700 | 0.9656 | 0.8770 | 17768
OB+EFDT | 0.9416 | 0.9568 | 0.9619 | 0.9593 | 0.8553 | 1514
OB+VFDRC | 0.9182 | 0.9478 | 0.9376 | 0.9427 | 0.7998 | 827
OB-ADWIN+HT | 0.9247 | 0.9366 | 0.9600 | 0.9482 | 0.8107 | 399
OB-ADWIN+HAT | 0.9246 | 0.9355 | 0.9611 | 0.9481 | 0.8100 | 648
OB-ADWIN+ARF | 0.9516 | 0.9633 | 0.9695 | 0.9664 | 0.8802 | 19609
OB-ADWIN+EFDT | 0.9374 | 0.9515 | 0.9618 | 0.9566 | 0.8444 | 645
OB-ADWIN+VFDRC | 0.9241 | 0.9471 | 0.9471 | 0.9471 | 0.8128 | 312


Table 8. Non-dominated set of algorithms for the CloudRSU dataset.

Algorithm | Accuracy | Kappa | Precision | Recall | F-score | Model size score
NB | 0.9095 | 0.7754 | 0.9334 | 0.9409 | 0.9371 | 1
VFDRC | 0.9087 | 0.7791 | 0.9475 | 0.9239 | 0.9356 | 0.999956283
HT | 0.9267 | 0.8182 | 0.9456 | 0.9525 | 0.9490 | 0.999855844
HAT | 0.9238 | 0.8080 | 0.9347 | 0.9609 | 0.9476 | 0.999937923
EFDT | 0.9298 | 0.8288 | 0.9575 | 0.9441 | 0.9508 | 0.999865371
ARF | 0.9522 | 0.8814 | 0.9631 | 0.9705 | 0.9668 | 0.998408233
DWMC | 0.9253 | 0.8137 | 0.9418 | 0.9548 | 0.9483 | 0.999957844
UO-B | 0.7264 | 0.0958 | 0.7325 | 0.9745 | 0.8363 | 0.996819955
LB+HT | 0.9467 | 0.8702 | 0.9611 | 0.9660 | 0.9635 | 0.998808409
LB+HAT | 0.9540 | 0.8868 | 0.9690 | 0.9669 | 0.9679 | 0.999572642
LB+ARF | 0.9614 | 0.9049 | 0.9726 | 0.9737 | 0.9731 | 0.981164191
LB+EFDT | 0.9599 | 0.9015 | 0.9743 | 0.9697 | 0.9720 | 0.998514415
DWMC+HT | 0.9349 | 0.8378 | 0.9498 | 0.9610 | 0.9554 | 0.999899143
DWMC+HAT | 0.9380 | 0.8453 | 0.9498 | 0.9646 | 0.9571 | 0.999772316
DWMC+ARF | 0.9561 | 0.8915 | 0.9674 | 0.9716 | 0.9695 | 0.994794179
DWMC+EFDT | 0.9416 | 0.8549 | 0.9552 | 0.9638 | 0.9595 | 0.999840483
DWMC+VFDRC | 0.9186 | 0.8031 | 0.9548 | 0.9305 | 0.9425 | 0.999873969
UO-B+HT | 0.9269 | 0.8142 | 0.9325 | 0.9681 | 0.9500 | 0.999615492
UO-B+HAT | 0.9288 | 0.8194 | 0.9349 | 0.9680 | 0.9512 | 0.999446182
UO-B+ARF | 0.9446 | 0.8607 | 0.9498 | 0.9742 | 0.9618 | 0.983930994
UO-B+EFDT | 0.9320 | 0.8282 | 0.9392 | 0.9679 | 0.9533 | 0.999483728
OB+ARF | 0.9613 | 0.8770 | 0.9613 | 0.9700 | 0.9656 | 0.981887726


Table 9. Performance comparison on the CloudCN dataset.

Algorithm | Accuracy | Precision | Recall | F-score | Kappa | Model size (kB)
NB | 0.8969 | 0.9156 | 0.9258 | 0.9207 | 0.7736 | 11
kNN | 0.7236 | 0.7415 | 0.8784 | 0.8042 | 0.3456 | 445
kNN-ADWIN | 0.7214 | 0.7408 | 0.8749 | 0.8023 | 0.3415 | 452
SAM-kNN | 0.7084 | 0.7349 | 0.8585 | 0.7919 | 0.3149 | 979425
VFDRC | 0.8961 | 0.9310 | 0.9063 | 0.9185 | 0.7752 | 102
HT | 0.9194 | 0.9176 | 0.9616 | 0.9391 | 0.8202 | 115
HAT | 0.9224 | 0.9254 | 0.9570 | 0.9409 | 0.8279 | 86
EFDT | 0.9203 | 0.9270 | 0.9515 | 0.9391 | 0.8238 | 156
ARF | 0.9420 | 0.9381 | 0.9746 | 0.9560 | 0.8712 | 880
DWMC | 0.9017 | 0.9111 | 0.9396 | 0.9251 | 0.7823 | 52
LB | 0.7205 | 0.8062 | 0.8035 | 0.8048 | 0.3127 | 4467
SMOTE-B | 0.6996 | 0.7761 | 0.7519 | 0.7638 | 0.3515 | 50811
UO-B | 0.7051 | 0.6931 | 0.9755 | 0.8104 | 0.2246 | 4045
OB | 0.7252 | 0.7434 | 0.8775 | 0.8049 | 0.3507 | 4465
OB-ADWIN | 0.7306 | 0.7486 | 0.8777 | 0.8080 | 0.3656 | 3921
LB+HT | 0.9462 | 0.9434 | 0.9753 | 0.9591 | 0.8807 | 711
LB+HAT | 0.9455 | 0.9444 | 0.9730 | 0.9585 | 0.8793 | 704
LB+ARF | 0.9581 | 0.9586 | 0.9774 | 0.9679 | 0.9077 | 11654
LB+EFDT | 0.9581 | 0.9609 | 0.9749 | 0.9678 | 0.9078 | 2081
LB+VFDRC | 0.9472 | 0.9462 | 0.9737 | 0.9598 | 0.8832 | 500
DWMC+HT | 0.9288 | 0.9270 | 0.9659 | 0.9461 | 0.8416 | 198
DWMC+HAT | 0.9330 | 0.9323 | 0.9666 | 0.9491 | 0.8513 | 360
DWMC+ARF | 0.9449 | 0.9388 | 0.9786 | 0.9583 | 0.8775 | 4565
DWMC+EFDT | 0.9323 | 0.9328 | 0.9648 | 0.9485 | 0.8499 | 417
DWMC+VFDRC | 0.9232 | 0.9362 | 0.9455 | 0.9408 | 0.8313 | 161
SMOTE-B+HT | 0.9263 | 0.9270 | 0.9617 | 0.9440 | 0.8363 | 47269
SMOTE-B+HAT | 0.9336 | 0.9341 | 0.9653 | 0.9494 | 0.8528 | 47389
SMOTE-B+ARF | 0.9402 | 0.9286 | 0.9709 | 0.9493 | 0.8673 | 62317
SMOTE-B+EFDT | 0.9336 | 0.9324 | 0.9674 | 0.9496 | 0.8525 | 48022
SMOTE-B+VFDRC | 0.9273 | 0.9296 | 0.9603 | 0.9447 | 0.8389 | 47313
UO-B+HT | 0.9269 | 0.9113 | 0.9824 | 0.9455 | 0.8348 | 281
UO-B+HAT | 0.9308 | 0.9158 | 0.9833 | 0.9484 | 0.8439 | 656
UO-B+ARF | 0.9306 | 0.9122 | 0.9877 | 0.9484 | 0.8429 | 16995
UO-B+EFDT | 0.9300 | 0.9157 | 0.9821 | 0.9477 | 0.8422 | 668
UO-B+VFDRC | 0.9301 | 0.9281 | 0.9667 | 0.9470 | 0.8445 | 346
OB+HT | 0.9237 | 0.9153 | 0.9718 | 0.9427 | 0.8287 | 1138
OB+HAT | 0.9329 | 0.9303 | 0.9688 | 0.9492 | 0.8508 | 753
OB+ARF | 0.9375 | 0.9319 | 0.9745 | 0.9527 | 0.8606 | 14264
OB+EFDT | 0.9330 | 0.9301 | 0.9692 | 0.9492 | 0.8509 | 1980
OB+VFDRC | 0.9179 | 0.9305 | 0.9433 | 0.9369 | 0.8194 | 823
OB-ADWIN+HT | 0.9326 | 0.9267 | 0.9726 | 0.9491 | 0.8495 | 351
OB-ADWIN+HAT | 0.9344 | 0.9309 | 0.9705 | 0.9503 | 0.8540 | 703
OB-ADWIN+ARF | 0.9383 | 0.9331 | 0.9743 | 0.9533 | 0.8625 | 12298
OB-ADWIN+EFDT | 0.9345 | 0.9321 | 0.9693 | 0.9503 | 0.8545 | 689
OB-ADWIN+VFDRC | 0.9291 | 0.9324 | 0.9598 | 0.9459 | 0.8431 | 452


Table 10. Non-dominated set of algorithms for the CloudCN dataset.

Algorithm | Accuracy | Kappa | Precision | Recall | F-score | Model size score
NB | 0.8969 | 0.7736 | 0.9156 | 0.9258 | 0.9207 | 1
VFDRC | 0.8961 | 0.7752 | 0.9310 | 0.9063 | 0.9185 | 0.999906689
HT | 0.9194 | 0.8202 | 0.9176 | 0.9616 | 0.9391 | 0.999893732
HAT | 0.9224 | 0.8279 | 0.9254 | 0.9570 | 0.9409 | 0.999923127
EFDT | 0.9203 | 0.8238 | 0.9270 | 0.9515 | 0.9391 | 0.999851544
ARF | 0.9420 | 0.8712 | 0.9381 | 0.9746 | 0.9560 | 0.999113041
DWMC | 0.9017 | 0.7823 | 0.9111 | 0.9396 | 0.9251 | 0.999957801
UO-B | 0.7051 | 0.2246 | 0.6931 | 0.9755 | 0.8104 | 0.99588126
LB+HT | 0.9462 | 0.8807 | 0.9434 | 0.9753 | 0.9591 | 0.999284837
LB+HAT | 0.9455 | 0.8793 | 0.9444 | 0.9730 | 0.9585 | 0.999292781
LB+ARF | 0.9581 | 0.9077 | 0.9586 | 0.9774 | 0.9679 | 0.988112142
LB+EFDT | 0.9581 | 0.9078 | 0.9609 | 0.9749 | 0.9678 | 0.997886041
LB+VFDRC | 0.9472 | 0.8832 | 0.9462 | 0.9737 | 0.9598 | 0.999501079
DWMC+HT | 0.9288 | 0.8416 | 0.9270 | 0.9659 | 0.9461 | 0.999809508
DWMC+HAT | 0.9330 | 0.8513 | 0.9323 | 0.9666 | 0.9491 | 0.999644093
DWMC+ARF | 0.9449 | 0.8775 | 0.9388 | 0.9786 | 0.9583 | 0.995349983
DWMC+EFDT | 0.9323 | 0.8499 | 0.9328 | 0.9648 | 0.9485 | 0.999585037
DWMC+VFDRC | 0.9232 | 0.8313 | 0.9362 | 0.9455 | 0.9408 | 0.999847256
UO-B+HT | 0.9269 | 0.8348 | 0.9113 | 0.9824 | 0.9455 | 0.999723845
UO-B+HAT | 0.9308 | 0.8439 | 0.9158 | 0.9833 | 0.9484 | 0.999341514
UO-B+ARF | 0.9306 | 0.8429 | 0.9122 | 0.9877 | 0.9484 | 0.982658624
UO-B+VFDRC | 0.9301 | 0.8445 | 0.9281 | 0.9667 | 0.9470 | 0.999657887
OB-ADWIN+HT | 0.9326 | 0.8495 | 0.9267 | 0.9726 | 0.9490 | 0.999652466

References

  1. Guo, H, Liu, J, Ren, J, and Zhang, Y (2020). Intelligent task offloading in vehicular edge computing networks. IEEE Wireless Communications. 27, 126-132. https://doi.org/10.1109/MWC.001.1900489
  2. Yurtsever, E, Lambert, J, Carballo, A, and Takeda, K (2020). A survey of autonomous driving: common practices and emerging technologies. IEEE Access. 8, 58443-58469. https://doi.org/10.1109/ACCESS.2020.2983149
  3. Neil, J, Cosart, L, and Zampetti, G (2020). Precise timing for vehicle navigation in the smart city: an overview. IEEE Communications Magazine. 58, 54-59. https://doi.org/10.1109/MCOM.001.1900596
  4. Sabella, D, Brevi, D, Bonetto, E, Ranjan, A, Manzalini, A, and Salerno, D . MEC-based infotainment services for smart roads in 5G environments., Proceedings of 2020 IEEE Vehicular Technology Conference (VTC2020-Spring), 2020, pp.1-6. https://doi.org/10.1109/VTC2020-Spring48590.2020.9128807
  5. Atallah, RF, Assi, CM, and Khabbaz, MJ (2019). Scheduling the operation of a connected vehicular network using deep reinforcement learning. IEEE Transactions on Intelligent Transportation Systems. 20, 1669-1682. https://doi.org/10.1109/TITS.2018.2832219
  6. Sun, Y, Guo, X, Zhou, S, Jiang, Z, Liu, X, and Niu, Z . Learning-based task offloading for vehicular cloud computing systems., Proceedings of 2018 IEEE International Conference on Communications (ICC), 2018, Kansas City, MO, Array, pp.1-7. https://doi.org/10.1109/ICC.2018.8422661
  7. Liu, L, Chen, C, Pei, Q, Maharjan, S, and Zhang, Y (2020). Vehicular edge computing and networking: a survey. Mobile Networks and Applications. https://doi.org/10.1007/s11036-020-01624-1
  8. Zhang, K, Mao, Y, Leng, S, Maharjan, S, and Zhang, Y . Optimal delay constrained offloading for vehicular edge computing networks., Proceedings of 2017 IEEE International Conference on Communications (ICC), 2017, Paris, France, Array, pp.1-6. https://doi.org/10.1109/ICC.2017.7997360
  9. Liu, S, Liu, L, Tang, J, Yu, B, Wang, Y, and Shi, W (2019). Edge computing for autonomous driving: opportunities and challenges. Proceedings of the IEEE. 107, 1697-1716. https://doi.org/10.1109/JPROC.2019.2915983
  10. Payalan, YF, and Guvensan, MA (2020). Towards nextgeneration vehicles featuring the vehicle intelligence. IEEE Transactions on Intelligent Transportation Systems. 21, 30-47. https://doi.org/10.1109/TITS.2019.2917866
  11. Sonmez, C, Ozgovde, A, and Ersoy, C (2019). Fuzzy workload orchestration for edge computing. IEEE Transactions on Network and Service Management. 16, 769-782. https://doi.org/10.1109/TNSM.2019.2901346
  12. Wang, Y, Wang, S, Zhang, S, and Cen, H (2019). An edge-assisted data distribution method for vehicular network services. IEEE Access. 7, 147713-147720. https://doi.org/10.1109/ACCESS.2019.2946484
  13. Sonmez, C, Tunca, C, Ozgovde, A, and Ersoy, C (2021). Machine learning-based workload orchestrator for vehicular edge computing. IEEE Transactions on Intelligent Transportation Systems. 22, 2239-2251. https://doi.org/10.1109/TITS.2020.3024233
  14. Rojas, JS, Pekar, A, Rendon, A, and Corrales, JC (2020). Smart user consumption profiling: Incremental learning-based OTT service degradation. IEEE Access. 8, 207426-207442. https://doi.org/10.1109/ACCESS.2020.3037971
  15. Nallaperuma, D, Nawaratne, R, Bandaragoda, T, Adikari, A, Nguyen, S, Kempitiya, T, De Silva, D, Alahakoon, D, and Pothuhera, D (2019). Online incremental machine learning platform for big data-driven smart traffic management. IEEE Transactions on Intelligent Transportation Systems. 20, 4679-4690. https://doi.org/10.1109/TITS.2019.2924883
  16. Adhikari, U, Morris, TH, and Pan, S (2018). Applying Hoeffding adaptive trees for real-time cyber-power event and intrusion classification. IEEE Transactions on Smart Grid. 9, 4049-4060. https://doi.org/10.1109/TSG.2017.2647778
  17. Li, X, Zhou, Y, Jin, Z, Yu, P, and Zhou, S (2020). A classification and novel class detection algorithm for concept drift data stream based on the cohesiveness and separation index of Mahalanobis distance. Journal of Electrical and Computer Engineering. 2020. article no 4027423
  18. Rutkowski, L, Jaworski, M, and Duda, P (2020). Basic concepts of data stream mining. Stream Data Mining: Algorithms and Their Probabilistic Properties. Cham, Switzerland: Springer, pp. 13-33 https://doi.org/10.1007/978-3-030-13962-9_2
  19. Gepperth, A, and Hammer, B . Incremental learning algorithms and applications., Proceedings of the 24th European Symposium on Artificial Neural Networks (ESANN), 2016, Bruges, Belgium.
  20. Yang, Q, Gu, Y, and Wu, D . Survey of incremental learning., Proceedings of 2019 Chinese Control And Decision Conference (CCDC), 2019, Nanchang, China, Array, pp.399-404. https://doi.org/10.1109/CCDC.2019.8832774
  21. Wankhade, KK, Dongre, SS, and Jondhale, KC (2020). Data stream classification: a review. Iran Journal of Computer Science. 3, 239-260. https://doi.org/10.1007/s42044-020-00061-3
  22. El Mrabet, Z, Selvaraj, DF, and Ranganathan, P . Adaptive Hoeffding tree with transfer learning for streaming synchrophasor data sets., Proceedings of 2019 IEEE International Conference on Big Data (Big Data), 2019, Los Angeles, CA, Array, pp.5697-5704. https://doi.org/10.1109/BigData47090.2019.9005720
  23. Nixon, C, Sedky, M, and Hassan, M . Practical application of machine learning based online intrusion detection to internet of things networks., Proceedings of 2019 IEEE Global Conference on Internet of Things (GCIoT), 2019, Dubai, UAE, Array, pp.1-5. https://doi.org/10.1109/GCIoT47977.2019.9058410
  24. Da Costa, VGT, Santana, EJ, Lopes, JF, and Barbon, S (2019). Evaluating the four-way performance trade-off for stream classification. Green, Pervasive, and Cloud Computing. Cham, Switzerland: Springer, pp. 3-17 https://doi.org/10.1007/978-3-030-19223-5_1
  25. Lopes, JF, Santana, EJ, da Costa, VGT, Zarpelao, BB, and Junior, SB (2020). Evaluating the four-way performance trade-off for data stream classification in edge computing. IEEE Transactions on Network and Service Management. 17, 1013-1025. https://doi.org/10.1109/TNSM.2020.2983921
  26. Bifet, A, Gavalda, R, Holmes, G, and Pfahringer, B (2018). Machine Learning for Data Streams: With Practical Examples in MOA. Cambridge, MA: MIT Press
  27. Greco, S, Figueira, J, and Ehrgott, M (2016). Multiple Criteria Decision Analysis. New York, NY: Springer
  28. Cinelli, M, Kadzinski, M, Gonzalez, M, and Slowinski, R (2020). How to support the application of multiple criteria decision analysis? Let us start with a comprehensive taxonomy. Omega. 96. article no 102261
  29. Cinelli, M, Spada, M, Kim, W, Zhang, Y, and Burgherr, P (2021). MCDA Index Tool: an interactive software to develop indices and rankings. Environment Systems and Decisions. 41, 82-109. https://doi.org/10.1007/s10669-020-09784-x
  30. Diaz-Balteiro, L, Gonzalez-Pachon, J, and Romero, C (2017). Measuring systems sustainability with multi-criteria methods: a critical review. European Journal of Operational Research. 258, 607-616. https://doi.org/10.1016/j.ejor.2016.08.075
  31. El Gibari, S, Gomez, T, and Ruiz, F (2019). Building composite indicators using multicriteria methods: a review. Journal of Business Economics. 89, 1-24. https://doi.org/10.1007/s11573-018-0902-z
  32. Greco, S, Ishizaka, A, Tasiou, M, and Torrisi, G (2019). On the methodological framework of composite indices: a review of the issues of weighting, aggregation, and robustness. Social Indicators Research. 141, 61-94. https://doi.org/10.1007/s11205-017-1832-9
  33. Montiel, J, Read, J, Bifet, A, and Abdessalem, T (2018). Scikit-multiflow: a multi-output streaming framework. The Journal of Machine Learning Research. 19, 2915-2914.
  34. Webb, GI, Hyde, R, Cao, H, Nguyen, HL, and Petitjean, F (2016). Characterizing concept drift. Data Mining and Knowledge Discovery. 30, 964-994. https://doi.org/10.1007/s10618-015-0448-4
  35. Demsar, J, and Bosnic, Z (2018). Detecting concept drift in data streams using model explanation. Expert Systems with Applications. 92, 546-559. https://doi.org/10.1016/j.eswa.2017.10.003
  36. Forster, K, Monteleone, S, Calatroni, A, Roggen, D, and Troster, G . Incremental kNN classifier exploiting correct-error teacher for activity recognition., Proceedings of 2010 9th International Conference on Machine Learning and Applications, 2010, Washington, DC, Array, pp.445-450. https://doi.org/10.1109/ICMLA.2010.72
  37. Bifet, A, and Gavalda, R . Learning from time-changing data with adaptive windowing., Proceedings of the 2007 SIAM International Conference on Data Mining, 2007, Minneapolis, MN, Array, pp.443-448. https://doi.org/10.1137/1.9781611972771.42
  38. Losing, V, Hammer, B, and Wersing, H . KNN classifier with self adjusting memory for heterogeneous concept drift., Proceedings of 2016 IEEE 16th International Conference on Data Mining (ICDM), 2016, Barcelona, Spain, Array, pp.291-300. https://doi.org/10.1109/ICDM.2016.0040
  39. Hulten, G, Spencer, L, and Domingos, P . Mining time-changing data streams., Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2001, San Francisco, CA, Array, pp.97-106. https://doi.org/10.1145/502512.502529
  40. Bifet, A, and Gavalda, R (2009). Adaptive learning from evolving data streams. Advances in Intelligent Data Analysis. Heidelberg, Germany: Springer, pp. 249-260 https://doi.org/10.1007/978-3-642-03915-7_22
  41. Manapragada, C, Webb, GI, and Salehi, M . Extremely fast decision tree., Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, London, UK, Array, pp.1953-1962. https://doi.org/10.1145/3219819.3220005
  42. Kosina, P, and Gama, J (2015). Very fast decision rules for classification in data streams. Data Mining and Knowledge Discovery. 29, 168-202. https://doi.org/10.1007/s10618-013-0340-z
  43. Oza, NC . Online bagging and boosting., Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 2001, Waikoloa, HI, Array, pp.2340-2345. https://doi.org/10.1109/ICSMC.2005.1571498
  44. Bifet, A, Holmes, G, and Pfahringer, B (2010). Leveraging bagging for evolving data streams. Machine Learning and Knowledge Discovery in Databases. Heidelberg, Germany: Springer, pp. 135-150 https://doi.org/10.1007/978-3-642-15880-3_15
  45. Wang, B, and Pineau, J (2016). Online bagging and boosting for imbalanced data streams. IEEE Transactions on Knowledge and Data Engineering. 28, 3353-3366. https://doi.org/10.1109/TKDE.2016.2609424
  46. Gomes, HM, Bifet, A, Read, J, Barddal, JP, Enembreck, F, Pfharinger, B, Holmes, G, and Abdessalem, T (2017). Adaptive random forests for evolving data stream classification. Machine Learning. 106, 1469-1495. https://doi.org/10.1007/s10994-017-5642-8
  47. Kolter, JZ, and Maloof, MA (2007). Dynamic weighted majority: an ensemble method for drifting concepts. The Journal of Machine Learning Research. 8, 2755-2790.
  48. Wang, S, and Yao, X . Diversity analysis on imbalanced data sets by using ensemble models., Proceedings of 2009 IEEE Symposium on Computational Intelligence and Data Mining, 2009, Nashville, TN, Array, pp.324-331. https://doi.org/10.1109/CIDM.2009.4938667
  49. Chawla, NV, Bowyer, KW, Hall, LO, and Kegelmeyer, WP (2002). SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research. 16, 321-357. https://doi.org/10.1613/jair.953
  50. Sonmez, C, Ozgovde, A, and Ersoy, C (2018). Edgecloudsim: an environment for performance evaluation of edge computing systems. Transactions on Emerging Telecommunications Technologies. 29. article no. e3493
  51. Vasiloudis, T, Beligianni, F, and De Francisci Morales, G . BoostVHT: boosting distributed streaming decision trees., Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 2017, Singapore, Array, pp.899-908. https://doi.org/10.1145/3132847.3132974
  52. Bifet, A, de Francisci Morales, G, Read, J, Holmes, G, and Pfahringer, B . Efficient online evaluation of big data stream classifiers., Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015, Sydney, Australia, Array, pp.59-68. https://doi.org/10.1145/2783258.2783372
  53. Ishibuchi, H, Masuda, H, and Nojima, Y . Selecting a small number of non-dominated solutions to be presented to the decision maker., Proceedings of 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2014, San Diego, CA, Array, pp.3816-3821. https://doi.org/10.1109/SMC.2014.6974525
  54. Hansen, LK, and Salamon, P (1990). Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence. 12, 993-1001. https://doi.org/10.1109/34.58871
  55. Ladha, KK (1993). Condorcet’s jury theorem in light of de Finetti’s theorem. Social Choice and Welfare. 10, 69-85. https://doi.org/10.1007/BF00187434
  56. van Rijn, JN, Holmes, G, Pfahringer, B, and Vanschoren, J (2018). The online performance estimation framework: heterogeneous ensemble learning for data streams. Machine Learning. 107, 149-176. https://doi.org/10.1007/s10994-017-5686-9
  57. Lv, Y, Peng, S, Yuan, Y, Wang, C, Yin, P, Liu, J, and Wang, C (2019). A classifier using online bagging ensemble method for big data stream learning. Tsinghua Science and Technology. 24, 379-388. https://doi.org/10.26599/TST.2018.9010119
  58. Kolarik, M, Sarnovsky, M, and Paralic, J . Diversity in ensemble model for classification of data streams with concept drift., Proceedings of 2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI), 2021, Herl’any, Slovakia, Array, pp.000355-000360. https://doi.org/10.1109/SAMI50585.2021.9378625
