Software-Defined Networking (SDN) enables the implementation of reliable, flexible, and efficient network security mechanisms that make use of novel techniques such as artificial intelligence (AI) and machine learning (ML). In particular, these techniques, together with SDN, are key enablers for the design of anomaly detection methods based on efficient traffic flow monitoring. In this paper, we tackle this problem by proposing an efficient anomaly detection framework, denoted as DeepGuard, which improves the detection performance of cyberattacks in SDN-based networks by adopting a fine-grained traffic flow monitoring mechanism. Specifically, the proposed framework utilizes a deep reinforcement learning technique, i.e., the Double Deep Q-Network (DDQN), to learn traffic flow matching strategies that maximize traffic flow granularity while proactively protecting the SDN data plane from being overloaded. Afterwards, by implementing the learned optimal traffic flow matching control policy, the most beneficial traffic information for anomaly detection is acquired at runtime, thereby improving cyberattack detection performance. The performance of the proposed framework is validated by extensive experiments, and the results show that DeepGuard yields significant improvements over existing traffic flow matching mechanisms regarding the level of traffic flow granularity. In the case of distributed denial-of-service (DDoS) attacks, DeepGuard achieves remarkable attack detection performance while effectively preventing forwarding performance degradation in the SDN data plane.
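To make the DDQN component concrete, the following is a minimal sketch of a Double-DQN update step in PyTorch. The state encoding (switch-load features, current flow-match granularity) and the action set (granularity levels) are illustrative assumptions, not DeepGuard's exact design; the reward is where granularity would be traded off against data-plane overload.

```python
# Minimal Double-DQN update sketch (PyTorch). State/action encoding and
# network sizes are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

N_FEATURES, N_ACTIONS, GAMMA = 8, 4, 0.99  # assumed sizes

def make_qnet():
    return nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online, target = make_qnet(), make_qnet()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)

def ddqn_step(s, a, r, s_next, done):
    """One Double-DQN update: the online net selects the next action,
    the target net evaluates it (this decoupling reduces overestimation)."""
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_a = online(s_next).argmax(dim=1, keepdim=True)
        next_q = target(s_next).gather(1, next_a).squeeze(1)
        y = r + GAMMA * (1 - done) * next_q
    loss = nn.functional.smooth_l1_loss(q, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example batch; the reward would trade off flow granularity against overload.
batch = 32
s = torch.randn(batch, N_FEATURES)
a = torch.randint(0, N_ACTIONS, (batch,))
r = torch.randn(batch)
s2 = torch.randn(batch, N_FEATURES)
d = torch.zeros(batch)
ddqn_step(s, a, r, s2, d)
```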
In recent years, the Tactile Internet (TI) has become a familiar concept. It is expected to create many new opportunities and applications that reshape our life and economy. However, the biggest challenge in realizing the TI, the "1-millisecond challenge", remains unsolved and requires additional research effort. In this paper, we dissect what has been done and what remains to be done for the "TI ecosystem". We also investigate the TI concept from the perspective of the "network latency evolution" and analyze the architecture and emerging technologies needed to meet the strict requirements of the TI.
Currently, wireless sensor networks (WSNs) provide practical solutions for various applications, including smart agriculture and healthcare, by wirelessly connecting the numerous nodes or sensors of sensing systems, whose readings are transmitted to backends via multiple hops for data analysis. One key limitation of these sensors is the self-contained energy provided by the embedded battery, a consequence of their tiny size, limited accessibility, and low-cost constraints. Therefore, a key challenge is to efficiently control the energy consumption of the sensors, or in other words, to prolong the overall network lifetime of a large-scale sensor farm. Studies have worked toward optimizing energy in communication, and one promising approach focuses on clustering. In this approach, a cluster of sensors is formed, consisting of a cluster head (CH) and cluster members (CMs), with the latter transmitting the sensing data over a short range to the CH. The CH then aggregates the data and forwards it to the base station (BS) using a multihop method. However, maintaining equal-sized clusters regardless of key parameters such as distance and density potentially shortens the network lifetime. Thus, this study investigates the application of fuzzy logic (FL) to determine various parameters and membership functions and thereby obtain appropriate clustering criteria. We propose an FL-based clustering architecture consisting of four stages: competition radius (CR) determination, CH election, CM joining, and determination of selection criteria for the next CH (relaying). A performance analysis was conducted against state-of-the-art distributed clustering protocols, i.e., the multiobjective optimization fuzzy clustering algorithm (MOFCA), energy-efficient unequal clustering (EEUC), distributed unequal clustering using FL (DUCF), and the energy-aware unequal clustering fuzzy (EAUCF) scheme. The proposed method displayed promising performance in terms of network lifetime and energy usage.
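As an illustration of the CH election stage, the sketch below fuzzifies residual energy and distance to the BS with triangular membership functions and applies a small Mamdani-style rule base to produce a "CH chance" score; the node with the highest score within a competition radius would become CH. The membership shapes and rules here are assumptions, not the tuned ones from the paper.

```python
# Illustrative fuzzy CH-election scoring; membership functions and rule
# outputs are assumed values for demonstration only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def ch_chance(energy, dist, e_max=1.0, d_max=100.0):
    e, d = energy / e_max, dist / d_max
    e_low, e_med, e_high = tri(e, -0.5, 0, 0.5), tri(e, 0, 0.5, 1), tri(e, 0.5, 1, 1.5)
    d_near, d_far = tri(d, -0.5, 0, 0.6), tri(d, 0.4, 1, 1.5)
    # Rule base: min for AND, weighted average for defuzzification.
    rules = [
        (min(e_high, d_near), 0.9),  # high energy, near BS -> strong candidate
        (min(e_high, d_far), 0.7),
        (min(e_med, d_near), 0.6),
        (min(e_med, d_far), 0.4),
        (e_low, 0.1),                # low energy -> weak candidate
    ]
    w = sum(fire for fire, _ in rules)
    return sum(fire * out for fire, out in rules) / (w + 1e-9)

# Example: 80% energy at 30 m from the BS vs. 30% energy at 90 m.
print(ch_chance(0.8, 30.0), ch_chance(0.3, 90.0))
```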
Congestion control is necessary for enhancing the quality of service in wireless sensor networks (WSNs). With advances in sensing technology, a substantial amount of data traversing a WSN can easily cause congestion, especially given limited resources. As a consequence, network throughput decreases due to significant packet loss and increased delays. Moreover, congestion not only adversely affects the data traffic and transmission success rate but also excessively dissipates energy, which in turn reduces sensor node and, hence, network lifespans. Typical congestion control strategies are designed to address congestion caused by transient events. However, on many occasions, congestion is caused by repeated anomalies and, as a consequence, persists for an extended period. This paper thus proposes a congestion control strategy that can eliminate both types of congestion. The study adopted a fuzzy logic algorithm for resolving congestion in three key areas: optimal path selection, traffic rate adjustment that incorporates a momentum indicator, and an optimal timeout setting for a circuit breaker to limit persistent congestion. With fuzzy logic, decisions can be made efficiently based on probabilistic weights derived from fitness functions of congestion-relevant parameters. The simulation and experimental results reported herein demonstrate that the proposed strategy outperforms state-of-the-art strategies in terms of the traffic rate, transmission delay, queue utilization, and energy efficiency.
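The sketch below illustrates the momentum-indicator idea in the rate-adjustment component: queue-occupancy changes are smoothed into a momentum term, and the sending rate backs off in proportion to both the queue level and its trend. Note that this is a crisp proportional simplification for brevity; the paper's actual controller uses fuzzy inference, and all gains and thresholds here are assumed.

```python
# Momentum-based rate adjustment sketch. Gains, targets, and the crisp
# (non-fuzzy) control law are illustrative assumptions.
class RateController:
    def __init__(self, rate, beta=0.8, k=0.5, q_target=0.5):
        self.rate, self.beta, self.k, self.q_target = rate, beta, k, q_target
        self.momentum, self.prev_q = 0.0, 0.0

    def update(self, q_util):
        """q_util: queue utilization in [0, 1] reported by the next hop."""
        dq = q_util - self.prev_q
        self.prev_q = q_util
        # Exponentially weighted momentum of queue growth.
        self.momentum = self.beta * self.momentum + (1 - self.beta) * dq
        # Back off in proportion to both the level and the trend of the queue.
        pressure = (q_util - self.q_target) + self.momentum
        self.rate = max(0.1, self.rate * (1 - self.k * pressure))
        return self.rate

ctrl = RateController(rate=100.0)  # packets per second
for q in [0.3, 0.5, 0.7, 0.9, 0.95]:  # queue filling up
    print(round(ctrl.update(q), 1))
```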
In this paper, we propose a novel framework, namely Q-TRANSFER, to address the problem of insufficient training data sets in modern networking platforms and thereby push the application of deep learning in the context of communication networking. In particular, we adopt deep transfer learning, i.e., network-transfer, as a key technique to mitigate the data set insufficiency problem. Being aware of negative transfer, we aim to maximise the positive transfer rather than merely alleviate the negative transfer. We examine a basic networking deep transfer learning system and formulate the optimisation problem of extracting the most beneficial knowledge from a source deep learning model. To solve this optimisation problem, we propose to employ a reinforcement learning approach, i.e., the Q-learning algorithm. As a case study, a DDoS attack detection method using a multilayer perceptron (MLP) is used to demonstrate the effectiveness and capabilities of the Q-TRANSFER framework. Results obtained from extensive experiments confirm that the most beneficial attack detection knowledge is derived from the source deep learning model by applying the Q-learning algorithm. The efficiency is increased by up to 43.58% compared to traditional deep transfer learning methods. To the best of our knowledge, this is the first study on optimising knowledge transfer for deep learning applications in the field of networking.
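To illustrate the Q-learning idea, the toy sketch below lets an agent pick how many layers of the source MLP to transfer, observes the resulting validation accuracy as the reward, and updates a tabular Q function. This single-state (bandit-style) encoding of states, actions, and rewards is an assumption for illustration; the paper's formulation may differ.

```python
# Toy Q-learning over transfer choices; the reward function is a synthetic
# stand-in for actually training the target model.
import random

N_LAYERS = 4                       # assumed source-model depth
actions = list(range(N_LAYERS + 1))  # transfer 0..N_LAYERS layers
Q = {a: 0.0 for a in actions}      # single-state problem -> Q over actions
alpha, eps = 0.3, 0.2

def val_accuracy(n_transferred):
    """Stand-in for training the target model with n layers transferred;
    in practice this is the expensive step the Q-learning loop amortizes."""
    return 0.6 + 0.15 * n_transferred - 0.04 * n_transferred ** 2 \
           + random.gauss(0, 0.01)

for episode in range(200):
    a = random.choice(actions) if random.random() < eps \
        else max(Q, key=Q.get)                 # epsilon-greedy selection
    r = val_accuracy(a)                        # reward = target-task accuracy
    Q[a] += alpha * (r - Q[a])                 # bandit-style Q update

print("best number of layers to transfer:", max(Q, key=Q.get))
```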
Software-Defined Networking (SDN) introduces centralized network control and management by separating the data plane from the control plane, which facilitates traffic flow monitoring, security analysis, and policy formulation. However, it is challenging to choose a proper degree of traffic flow handling granularity while proactively protecting forwarding devices from becoming overloaded. In this paper, we propose a novel traffic flow matching control framework called Q-DATA that applies reinforcement learning to enhance traffic flow monitoring performance in SDN-based networks and prevent traffic forwarding performance degradation. We first describe and analyse an SDN-based traffic flow matching control system that applies a reinforcement learning approach based on the Q-learning algorithm to maximize traffic flow granularity. It also considers the forwarding performance status of the SDN switches, derived from a Support Vector Machine-based algorithm. Next, we outline the Q-DATA framework, which incorporates the optimal traffic flow matching policy derived from the traffic flow matching control system to efficiently provide the most detailed traffic flow information that other mechanisms require. Our approach is realized as a REST SDN application and evaluated in an SDN environment. Comprehensive experiments show that, compared to the default behavior of common SDN controllers and to our previous DATA mechanism, the Q-DATA framework yields a remarkable improvement in protecting SDN switches from traffic forwarding performance degradation while still providing the most detailed traffic flow information on demand.
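A small sketch of the switch-status side of such a framework follows: an SVM (scikit-learn's SVC, an assumed stand-in) classifies whether a switch is approaching forwarding-performance degradation from simple load features. The feature choice (flow-table occupancy, packet-in rate) and the synthetic labels are illustrative.

```python
# SVM-based switch overload classification sketch; data and features are
# synthetic assumptions for illustration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic training data: [flow-table occupancy, normalized packet-in rate]
X = rng.uniform(0, 1, size=(200, 2))
y = ((0.7 * X[:, 0] + 0.5 * X[:, 1]) > 0.8).astype(int)  # 1 = overloaded

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.9, 0.8], [0.2, 0.1]]))  # likely [1, 0]
```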
In this paper, we consider multiscale methods for nonlinear elasticity. In particular, we investigate the Generalized Multiscale Finite Element Method (GMsFEM) for a strain-limiting elasticity problem. As a special case of the naturally implicit constitutive theory of nonlinear elasticity, the strain-limiting relation describes an interesting class of material bodies for which strains remain bounded (even infinitesimal) while stresses can become arbitrarily large. The nonlinearity and material heterogeneities can create multiscale features in the solution, and multiscale methods are therefore necessary. To handle the resulting monotone quasilinear elliptic equation, we use linearization based on the Picard iteration. We consider two types of basis functions, offline and online, following the general framework of GMsFEM. The offline basis functions depend nonlinearly on the solution. Thus, we design an indicator function and recompute the offline basis functions when the indicator predicts that the material property changes significantly during the iterations. On the other hand, we use residual-based online basis functions to reduce the error substantially when updating the basis functions is necessary. Our numerical results show that this combination of offline and online basis functions gives accurate solutions with only a few basis functions per coarse region and adaptive updates of the basis functions in selected iterations.
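The Picard linearization can be shown on a toy problem: the sketch below solves a 1D quasilinear equation -(kappa(|u'|) u')' = f with kappa(s) = 1/(1+s), which has the strain-limiting flavor of bounded response under growing stress. This is a single-scale finite-difference illustration of the linearization step only, not the GMsFEM offline/online construction.

```python
# 1D Picard iteration for -(kappa(|u'|) u')' = f on (0,1), u(0)=u(1)=0.
# The coefficient law and discretization are illustrative assumptions.
import numpy as np

n = 100
h = 1.0 / n
f = np.ones(n - 1)                      # constant load at interior nodes
u = np.zeros(n + 1)

def kappa(s):
    return 1.0 / (1.0 + s)              # monotone, bounded-strain-type law

for it in range(50):
    grad = np.diff(u) / h               # u' on each cell
    k = kappa(np.abs(grad))             # freeze coefficient (Picard step)
    # Standard 3-point stencil with variable cell coefficients.
    main = (k[:-1] + k[1:]) / h**2
    off = -k[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u_new = np.zeros(n + 1)
    u_new[1:-1] = np.linalg.solve(A, f)  # solve the linearized problem
    if np.max(np.abs(u_new - u)) < 1e-10:
        break
    u = u_new

print("Picard iterations:", it + 1, " max(u) =", u.max())
```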
Wireless sensor networks (WSNs) have evolved to become an integral part of the contemporary Internet of Things (IoT) paradigm. The sensor node activities of sensing phenomena in their immediate environments and reporting the findings to a centralized base station (BS) remain a core platform sustaining heterogeneous service-centric applications. However, the adversarial threat to the sensors of the IoT paradigm remains significant. Denial-of-service (DoS) attacks, comprising a large volume of network packets targeting given sensor nodes of the network, may cripple routine operations and cause catastrophic losses to emergency services. This paper presents an intelligent DoS detection framework comprising modules for data generation, feature ranking and generation, and training and testing. The proposed framework is experimentally tested under actual IoT attack scenarios, and its detection accuracy exceeds that of traditional classification techniques.
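The following compact sketch illustrates the feature-ranking and training/testing modules such a framework describes, using scikit-learn as an assumed stand-in: features are ranked by mutual information, the top-k are kept, and a random forest flags DoS flows. The feature semantics and data are synthetic.

```python
# Feature-ranking + classification pipeline sketch; data is synthetic and
# the model choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))                # e.g., packet-rate statistics
y = (X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)  # synthetic DoS label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(SelectKBest(mutual_info_classif, k=5),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```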
The explosive rise of intelligent devices with ubiquitous connectivity has dramatically increased Internet of Things (IoT) traffic in the cloud environment and created potential attack surfaces for cyberattacks. Traditional security approaches are insufficient and inefficient in addressing security threats in cloud-based IoT networks. In this vein, software-defined networking (SDN), network function virtualization (NFV), and machine learning techniques introduce numerous advantages that can effectively resolve cybersecurity matters for cloud-based IoT systems. In this paper, we propose a collaborative and intelligent network-based intrusion detection system (NIDS) architecture, namely SeArch, for SDN-based cloud IoT networks. It comprises a hierarchy of intelligent IDS nodes working in collaboration to detect anomalies and formulate policies for the SDN-based IoT gateway devices to stop malicious traffic as fast as possible. We first describe the new NIDS architecture with a comprehensive analysis of system resource and path selection optimizations. Next, the system process logic is investigated in detail through its main consecutive procedures, including initialization, runtime operation, and database update. Afterward, we implement the proposed solution in an SDN-based environment and perform a variety of experiments. Finally, the evaluation results show that the SeArch architecture yields outstanding performance in anomaly detection and mitigation, as well as in bottleneck handling, in SDN-based cloud IoT networks in comparison with existing solutions.
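The collaboration pattern can be sketched as follows: a lightweight edge-layer detector handles the bulk of flows and escalates only low-confidence verdicts to a heavier upper-layer model, which then yields a block/allow decision for the gateway. Both detectors, the features, and the confidence threshold are assumptions for illustration.

```python
# Hierarchical NIDS collaboration sketch; detectors and thresholds are
# illustrative stubs, not the paper's trained models.
from dataclasses import dataclass

@dataclass
class Verdict:
    malicious: bool
    confidence: float

def edge_detect(flow):
    """Cheap threshold check at the IoT gateway layer (illustrative)."""
    score = min(1.0, flow["pkt_rate"] / 1000.0)
    return Verdict(score > 0.5, abs(score - 0.5) * 2)

def core_detect(flow):
    """Heavier analysis at the upper IDS layer (stub for a trained model)."""
    return Verdict(flow["pkt_rate"] > 400 and flow["dst_entropy"] < 0.2, 0.95)

def classify(flow, escalate_below=0.6):
    v = edge_detect(flow)
    if v.confidence < escalate_below:   # collaborate: escalate unsure flows
        v = core_detect(flow)
    return "block" if v.malicious else "allow"

print(classify({"pkt_rate": 520.0, "dst_entropy": 0.1}))
```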
Maximizing network coverage is among the key factors in designing efficient sensor-deployment algorithms for wireless sensor networks (WSNs). In this study, we consider a WSN in which mobile sensor nodes (SNs) are randomly deployed over a two-dimensional region containing coverage holes, i.e., areas without any SNs. To improve the network coverage, we propose a novel distributed deployment algorithm, the coverage hole-healing algorithm (CHHA), which maximizes the area coverage by healing the coverage holes such that the total SN moving distance is minimized. Once the network is formed after an initial random placement of the SNs, CHHA detects coverage holes, including hole-boundary SNs, based on computational geometry, i.e., Delaunay triangulation. The distributed deployment feature of CHHA applies the concept of virtual forces to decide the movement of mobile SNs to heal the coverage holes. The simulation results show that our proposed algorithm is capable of exactly detecting coverage holes and improving area coverage by healing them. The results also demonstrate the effectiveness of CHHA compared with other competitive approaches, namely, VFA, VEDGE, and HEAL, in terms of total moving distance.
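A minimal sketch of the Delaunay-based hole-detection step: after triangulating the node positions, any triangle whose circumradius exceeds the sensing radius has an uncovered circumcenter (by the empty-circumcircle property, its nearest SN is a triangle vertex at distance equal to the circumradius), marking those vertices as hole-boundary candidates. The virtual-force movement step is omitted, and parameter values are illustrative.

```python
# Delaunay-triangulation hole detection sketch; positions and radii are
# synthetic illustrative values.
import numpy as np
from scipy.spatial import Delaunay

def circumradius(p, q, r):
    a, b, c = (np.linalg.norm(q - r), np.linalg.norm(p - r),
               np.linalg.norm(p - q))
    area = abs((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])) / 2.0
    return a * b * c / (4.0 * area + 1e-12)

rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, size=(40, 2))     # random SN positions
sensing_r = 12.0
tri = Delaunay(pts)

boundary = set()
for simplex in tri.simplices:               # each simplex = triangle indices
    p, q, r = pts[simplex]
    if circumradius(p, q, r) > sensing_r:   # uncovered circumcenter -> hole
        boundary.update(simplex.tolist())

print("hole-boundary SNs:", sorted(boundary))
```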
In this paper, we investigate the physical layer security (PLS) performance for the Internet of Things (IoT), which is modeled as an IoT sensor network (ISN). The considered system consists of multiple power transfer stations (PTSs), multiple IoT sensor nodes (SNs), one legitimate fusion center (LFC) and multiple eavesdropping fusion centers (EFCs), which attempt to extract the information transmitted by the SNs without mounting an active attack. The SNs and the EFCs are equipped with a single antenna, while the LFC is equipped with multiple antennas. Specifically, the SNs harvest energy from the PTSs and then use the harvested energy to transmit the information to the LFC. In this research, the energy harvesting (EH) process is considered under the following two strategies: 1) the SN harvests energy from all PTSs, and 2) the SN harvests energy from the best PTS. To guarantee security for the considered system before the SN sends the packet, the SN's power is controlled by a suitable power policy based on the channel state information (CSI), harvested energy, and security constraints. An algorithm for finding the nearly optimal EH time is implemented. Accordingly, the analytical expressions for the existence probability of secrecy capacity and the secrecy outage probability (SOP) are derived using the statistical characteristics of the signal-to-noise ratio (SNR). In addition, we analyze the secrecy performance for various system parameters, such as the location of system elements, the number of PTSs, and the number of EFCs. Finally, the results of Monte Carlo simulations are provided to confirm the correctness of our analysis and derivation.
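The Monte Carlo validation step can be sketched briefly: under Rayleigh fading, maximal-ratio combining over the LFC's antennas sums exponential channel gains, the strongest of several single-antenna EFCs sets the eavesdropping SNR, and the SOP is the fraction of trials in which the secrecy capacity falls below a target rate. All parameter values below are illustrative assumptions.

```python
# Monte Carlo secrecy-outage sketch; mean SNRs, antenna counts, and the
# target rate are assumed illustrative values.
import numpy as np

rng = np.random.default_rng(3)
N_trials, N_ant, N_efc = 200_000, 4, 3
snr_bar_L, snr_bar_E, R_th = 10.0, 2.0, 1.0   # mean SNRs, target rate (b/s/Hz)

g_L = rng.exponential(1.0, (N_trials, N_ant)).sum(axis=1)   # MRC at the LFC
g_E = rng.exponential(1.0, (N_trials, N_efc)).max(axis=1)   # best eavesdropper

C_s = np.maximum(0.0, np.log2(1 + snr_bar_L * g_L)
                      - np.log2(1 + snr_bar_E * g_E))       # secrecy capacity
print("SOP ~", np.mean(C_s < R_th))
```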
The transmission of safety applications is clearly the top priority in VANETs. However, non-safety applications, which improve the quality of experience as well as traffic efficiency, also deserve attention. In VANETs, safety applications are transmitted in the control channel interval, whereas non-safety applications can be transmitted only in the service channel interval. In this paper, we propose a priority-based multichannel Medium Access Control protocol to support non-safety applications in the service channel interval for Vehicle-to-Infrastructure communication in the presence of a roadside unit. The proposed Medium Access Control also allocates service channels according to priority and divides vehicles that require the same service channel into four priority groups to enhance the Enhanced Distributed Channel Access mechanism. We use a mathematical model and simulations to evaluate the performance under the influence of velocity with different numbers of vehicles in the network.
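The grouping idea can be sketched as follows: vehicles requesting the same service channel are ranked by application priority, split into four groups, and each group is mapped to progressively less aggressive contention parameters so higher-priority traffic wins channel access more often. The parameter values are illustrative assumptions, not the standard's EDCA settings.

```python
# Priority-group assignment sketch; contention parameters are assumed
# EDCA-style values for illustration only.
EDCA = {  # group -> (AIFSN, CWmin); illustrative, not standardized values
    0: (2, 3), 1: (3, 7), 2: (6, 15), 3: (9, 15),
}

def assign_group(requests):
    """Sort vehicles on a service channel by app priority into four groups."""
    ranked = sorted(requests, key=lambda v: v["priority"])
    quarter = max(1, len(ranked) // 4)
    return {v["id"]: min(3, i // quarter) for i, v in enumerate(ranked)}

vehicles = [{"id": k, "priority": p} for k, p in
            [("a", 1), ("b", 4), ("c", 2), ("d", 3), ("e", 4)]]
groups = assign_group(vehicles)
print({vid: (g, EDCA[g]) for vid, g in groups.items()})
```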
Preserving coverage is one of the most essential functions for guaranteeing quality of service in wireless sensor networks. Under this key constraint, the energy consumption of the sensors, including their transmission behaviour, poses the challenging problem of how to use them efficiently while achieving good coverage performance. This research proposes a point-coverage-aware clustering protocol (PCACP) with energy optimisations that takes a holistic view of sensor activation, network clustering, and multi-hop communication to improve energy efficiency, i.e., to extend the network lifetime while preserving and maximising network coverage. The simulation results demonstrate the effectiveness of PCACP, which substantially improves performance. Across a diversity of deployments with scalability concerns, PCACP outperformed other competitive protocols, i.e., low-energy adaptive clustering hierarchy (LEACH), the coverage-preserving clustering protocol (CPCP), energy-aware distributed clustering (EADC), and energy- and coverage-aware distributed clustering (ECDC), in terms of conserving energy, sensing-point coverage ratios, and overall network lifetime.
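A brief sketch of the point-coverage bookkeeping underlying such a protocol: given target sensing points and sensor positions, compute which points each sensor covers and greedily activate sensors until all points are covered, keeping the rest asleep to save energy. Greedy activation is an illustrative simplification of the protocol's clustering logic, and all values are synthetic.

```python
# Point-coverage-aware sensor activation sketch; greedy selection is an
# assumed simplification, positions and radii are synthetic.
import numpy as np

rng = np.random.default_rng(4)
sensors = rng.uniform(0, 100, (30, 2))
points = rng.uniform(0, 100, (50, 2))       # sensing points to preserve
r_sense = 20.0

# covers[i, j] == True iff sensor i covers point j.
covers = np.linalg.norm(sensors[:, None, :] - points[None, :, :],
                        axis=2) <= r_sense

active, uncovered = [], set(range(len(points)))
while uncovered:
    gains = [len(uncovered & set(np.flatnonzero(covers[i])))
             for i in range(len(sensors))]
    best = int(np.argmax(gains))
    if gains[best] == 0:
        break                               # remaining points are uncoverable
    active.append(best)
    uncovered -= set(np.flatnonzero(covers[best]))

print(f"{len(active)} active sensors, coverage ratio = "
      f"{1 - len(uncovered)/len(points):.2f}")
```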
In this paper, we investigate radio frequency (RF) energy harvesting (EH) in wireless sensor networks (WSNs) using non-orthogonal multiple access (NOMA) uplink transmission, with regard to a probable secrecy outage during transmission between the sensor nodes (SNs) and the base station (BS) in the presence of eavesdroppers (EAVs). In particular, the communication protocol is divided into two phases: 1) first, the SNs harvest energy from multiple power transfer stations (PTSs), and then, 2) the cluster heads are elected to transmit information to the BS using the harvested energy. In the first phase, we derive a 2D RF energy model to harvest energy for the SNs. During the second phase, the communication faces multiple EAVs that attempt to capture the information of legitimate users; thus, we propose a strategy to select cluster heads and implement the NOMA technique in the transmission of the cluster heads to enhance the secrecy performance. For the performance evaluation, the exact closed-form expressions for the secrecy outage probability (SOP) at the cluster heads are derived. A nearly optimal EH time algorithm for the cluster head is also proposed. In addition, the impacts of system parameters, such as the EH time, the EH efficiency coefficient, the distance between the cluster heads and the BS, and the number of SNs as well as EAVs on the SOP, are investigated. Finally, Monte Carlo simulations are performed to show the accuracy of the theoretical analysis; it is also shown that the secrecy performance of NOMA in RF EH WSNs can be improved using the optimal EH time.
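The first (energy-harvesting) phase can be sketched with the standard linear EH model, E = eta * P * d^(-m) * t, summed over all PTSs or taken from the best PTS, mirroring the two strategies such systems consider. The efficiency eta, path-loss exponent m, and geometry below are illustrative assumptions, not the paper's 2D model.

```python
# Linear RF-EH sketch for the two harvesting strategies; all parameter
# values and the bounded path-loss model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
pts_pos = rng.uniform(0, 50, (5, 2))        # power transfer stations
sn_pos = rng.uniform(0, 50, (10, 2))        # sensor nodes
P_tx, eta, m, t_eh = 1.0, 0.7, 2.7, 0.3     # W, efficiency, exponent, seconds

d = np.linalg.norm(sn_pos[:, None, :] - pts_pos[None, :, :], axis=2)
gain = np.maximum(d, 1.0) ** (-m)           # bounded path-loss model

E_all = eta * P_tx * t_eh * gain.sum(axis=1)   # strategy 1: all PTSs
E_best = eta * P_tx * t_eh * gain.max(axis=1)  # strategy 2: best PTS only
print("harvested energy (all PTSs): ", np.round(E_all, 4))
print("harvested energy (best PTS):", np.round(E_best, 4))
```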
The deployment of sensor nodes (SNs) to form a network with coverage ability is one of the most important challenges in wireless sensor networks. In this paper, we study an efficient distributed deployment algorithm for barrier coverage improvement with mobile sensors, in which the SNs can be relocated after the initial deployment. To achieve the maximum number of barriers, we propose a distributed algorithm that constructs k-barrier coverage by relocating the SNs. Unlike existing approaches, we propose a novel clustering technique based on the network area to reduce the number of information exchange messages. Then, based on the SN clusters, we propose a heuristic method that assigns the SNs evenly to the clusters according to each cluster's required number of SNs and selects the SNs to move by computing the optimal relocation with minimal moving distance. The main goal of this approach is to relocate the SNs to form the maximum number of barriers at minimum relocation cost, in terms of the sensor energy consumed by communication and movement. The simulation results demonstrate the effectiveness of our algorithm compared with other competing approaches.
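Once each cluster's required barrier positions are known, assigning mobile SNs to targets so that the total moving distance is minimized is a classic assignment problem, solvable with the Hungarian method as sketched below. The positions are synthetic, and the paper's heuristic additionally balances SNs across clusters before this step.

```python
# Minimum-total-distance SN-to-position assignment sketch via the Hungarian
# method; positions are synthetic illustrative values.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(6)
sn_pos = rng.uniform(0, 100, (12, 2))       # current mobile-SN positions
targets = np.column_stack([np.linspace(5, 95, 12),
                           np.full(12, 50.0)])  # slots along a barrier line

cost = cdist(sn_pos, targets)               # pairwise moving distances
rows, cols = linear_sum_assignment(cost)    # min-total-distance matching
print("total moving distance:", cost[rows, cols].sum().round(2))
```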