The process of estimating the number of individuals within a defined area, commonly referred to as people counting, is of paramount importance in the realm of safety, security and crisis management. It serves as a crucial tool for accurately monitoring crowd dynamics and facilitating well-informed decision-making during critical situations. In our current study, we place special emphasis on the WiFi fingerprint technique, leveraging probe request messages emitted by smart devices as a proxy for people counting. However, it is essential to recognize the evolving landscape of privacy regulations and the concerted efforts by major smart-device manufacturers to enhance user privacy, exemplified by the introduction of MAC address randomization techniques. In this context, we designed a crowd monitoring solution that exploits Bloom filters to ensure formal deniability, aligning with the stringent requirements set forth by regulations such as the European GDPR. Our proposed solution not only addresses the essential task of people counting but also incorporates advanced privacy-preserving mechanisms. Importantly, it seamlessly integrates with trajectory-based crowd monitoring, offering a comprehensive approach to managing crowds while respecting individual privacy rights.
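The counting-with-deniability idea can be illustrated with a minimal sketch. This is a generic Bloom filter, not the paper's actual construction; the parameters `m` and `k` and the MAC-style strings are illustrative. Hashed probe-request identifiers are folded into a fixed bit array, membership queries admit false positives (which is what provides deniability), and the crowd size is recovered from the fraction of set bits rather than from any stored identifier.

```python
import hashlib
import math

class BloomFilter:
    """Fixed-size bit array; items are only ever hashed in, never stored."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k          # m bits, k hash functions
        self.bits = [0] * m

    def _indices(self, item):
        # k bit positions derived from salted SHA-256 digests
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        # may answer True for items never added (false positives);
        # this uncertainty is what yields formal deniability
        return all(self.bits[idx] for idx in self._indices(item))

    def estimate_count(self):
        # standard cardinality estimate from the fraction of set bits
        x = sum(self.bits)
        if x == self.m:
            return float("inf")
        return -(self.m / self.k) * math.log(1 - x / self.m)
```

With roughly 100 distinct (randomized) MAC addresses inserted, `estimate_count()` returns a value close to 100 even though no address can be read back out of the filter.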
The new wave of device-level cyber-attacks has targeted critical IoT applications, such as power distribution systems integrated with the Internet communications infrastructure. These systems utilise the Group Domain of Interpretation (GDOI) protocol as designated by the International Electrotechnical Commission (IEC) power utility standards IEC 61850 and IEC 62351. However, GDOI cannot protect against novel threats, such as IoT device-level attacks that modify device firmware and configuration files to create malicious command-and-control communication. As a consequence, these attacks can compromise substations with potentially catastrophic consequences. With this in mind, this article proposes a permissioned/private blockchain-based authentication framework that provides a solution to current security threats such as IoT device-level attacks. Our work improves the GDOI protocol applied in critical IoT applications by achieving decentralized and distributed device authentication. The security of our proposal is demonstrated against known attacks as well as through formal mechanisms via the joint use of the AVISPA and SPAN tools. The proposed approach adds negligible authentication latency, thus ensuring appropriate scalability as the number of nodes increases.
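The firmware-tampering threat model above can be made concrete with a toy sketch, under loud assumptions: this is a bare hash-chained registry, not the paper's permissioned blockchain or its GDOI integration, and all function names are hypothetical. Each block commits a device identity together with a firmware digest; verification replays the chain, so a tampered firmware image fails authentication.

```python
import hashlib
import json

GENESIS = "0" * 64

def block_hash(block):
    # hash over the canonical JSON of the block's committed fields
    payload = json.dumps(
        {k: block[k] for k in ("index", "prev", "device_id", "fw_hash")},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, device_id, firmware):
    # register a device by committing its firmware digest to the ledger
    prev = chain[-1]["hash"] if chain else GENESIS
    block = {
        "index": len(chain),
        "prev": prev,
        "device_id": device_id,
        "fw_hash": hashlib.sha256(firmware).hexdigest(),
    }
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def verify_device(chain, device_id, firmware):
    # 1) chain integrity: every block must link to and hash its predecessor
    for i, b in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else GENESIS
        if b["prev"] != prev or b["hash"] != block_hash(b):
            return False
    # 2) the presented firmware must match a registered commitment
    fw = hashlib.sha256(firmware).hexdigest()
    return any(b["device_id"] == device_id and b["fw_hash"] == fw for b in chain)
```

A device presenting modified firmware, or a ledger whose blocks have been rewritten, is rejected, which is the property a decentralized replica of this registry would enforce without a single key server.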
As a wireless ad hoc network, VANET is susceptible to various threats, including eavesdropping and tampering, due to its insecure wireless channels. The group key agreement protocol is widely used in VANET because it allows participants to communicate securely over insecure network environments. However, excessive reliance on a trusted authority (TA) in traditional group key protocols may cause a single point of failure. Additionally, high computational and communication costs are common in existing protocols. To address these issues, we have designed a lightweight group key agreement protocol using blockchain technology and the Chinese remainder theorem (CRT). In our protocol, blockchain technology is used to facilitate faster negotiation of the group key between Roadside Units (RSUs) and the vehicles within their communication range. To avoid a single point of failure, the TA only provides services during the user joining and leaving phases. To reduce computational and communication costs during identity authentication, the RSU can perform batch authentication on vehicles. At the same time, participating vehicles only need to extract the correct session key from the return message broadcast by the RSU. Our protocol also supports dynamic management of vehicles. Formal security proof and performance analysis indicate that our scheme meets the basic security requirements of group key protocol design in VANET. Meanwhile, the analysis of computational and communication costs shows that our scheme is more effective in VANET group scenarios.
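The CRT-based key distribution pattern alluded to above can be sketched as follows. This is the generic CRT group-key idea, not necessarily the paper's exact construction, and the toy moduli and secrets are illustrative (a real deployment would use cryptographically sized, pairwise-coprime moduli). The RSU combines per-vehicle masked keys into a single broadcast value; each vehicle recovers the session key with one modular reduction and one XOR, which is what keeps the vehicle-side cost low.

```python
from functools import reduce

def crt(residues, moduli):
    # solve x ≡ r_i (mod n_i) for pairwise-coprime moduli n_i
    N = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, n in zip(residues, moduli):
        Ni = N // n
        x += r * Ni * pow(Ni, -1, n)   # three-argument pow: modular inverse (Python 3.8+)
    return x % N

def rsu_broadcast(session_key, member_secrets, member_moduli):
    # mask the key with each vehicle's pre-shared secret;
    # each masked value must be smaller than that vehicle's modulus
    residues = [session_key ^ s for s in member_secrets]
    return crt(residues, member_moduli)

def vehicle_recover(broadcast_value, n_i, s_i):
    # one reduction modulo the vehicle's own n_i, then unmask with its secret
    return (broadcast_value % n_i) ^ s_i
```

Every vehicle recovers the same session key from the single broadcast, so the message size the RSU sends does not grow with per-vehicle unicasts.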
In this paper, we investigate the successful transmission probability of an aerial cellular network in which an unmanned aerial vehicle (UAV) acting as a macrocell base station (UAV-BS) serves other UAVs as aerial users. Beamforming-capable antennas are mounted on the UAVs to increase the throughput of the network. Inner forces such as control errors, and outer forces such as air conditions, cause random hovering fluctuations. We assume Rician fading over the links between the UAVs and derive the distribution of the channels under hovering fluctuations. We also derive closed-form expressions for the successful transmission probability. Defining an optimization problem on the average successful transmission probability of the network, we obtain the best placement of the UAV-BS along with the resource allocation. The problem turns out to be non-convex and time-consuming to solve via exhaustive numerical search. Instead, we solve the optimization problem for its lower bound: maximizing the achieved lower bound is equivalent to maximizing the main problem. We then apply approximations to convert it into a low-complexity problem. Using the structure of the low-complexity problem, we obtain the power allocated to each UAV, after which the problem becomes convex and is solved via the KKT conditions to obtain the location of the UAV-BS. The theoretical results show that optimizing the lower-bound probability achieves a suboptimal solution for the power assignment and placement problem, which is verified by simulation results.
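The central quantity, the probability that the received SNR over a Rician-faded link exceeds a decoding threshold, can be estimated by Monte Carlo simulation as a sanity check against the paper's closed forms. This sketch is generic: the Rician factor `K`, mean power `omega`, and dB values are illustrative assumptions, and it models only a single link without the beamforming or placement aspects.

```python
import math
import random

def rician_power_sample(K, omega=1.0, rng=random):
    # |h|^2 for a Rician channel with factor K (LOS/scattered power ratio)
    # and mean power omega: LOS component v plus complex Gaussian scatter
    v = math.sqrt(K * omega / (K + 1))
    sigma = math.sqrt(omega / (2 * (K + 1)))
    x = rng.gauss(v, sigma)
    y = rng.gauss(0.0, sigma)
    return x * x + y * y

def success_probability(snr_db, threshold_db, K, trials=200_000, seed=1):
    # P( |h|^2 * SNR > gamma ) estimated over `trials` fading realizations
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    gamma = 10 ** (threshold_db / 10)
    hits = sum(rician_power_sample(K, rng=rng) * snr > gamma
               for _ in range(trials))
    return hits / trials
```

As expected, the estimate approaches 1 when the mean SNR comfortably exceeds the threshold and degrades as the margin shrinks, which is the behaviour the closed-form expressions capture analytically.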
So far, various data-driven approaches have been presented to obtain channel state information (CSI) in mmWave multiple-input-multiple-output (MIMO) wireless networks. In almost all previous works, training and testing channels were assumed to have the same distribution, which may not be the case in practice. In this paper, we address this challenge by proposing a learning framework that combines a long short-term memory (LSTM) network and a deep neural network (DNN) for estimating CSI in a dynamic wireless communication environment. Furthermore, we use federated learning (FL) to train the learning-based channel estimation (CE) model. More specifically, we introduce a two-stage downlink pilot transmission procedure, where in the initial stage, long-frame-length downlink pilot signals are used to train the introduced LSTM-DNN model. Following that, users receive shorter-frame-length pilot signals that can be used for CSI estimation. To speed up the training procedure of the proposed network, we first generate a pre-trained model and then modify it according to the collected data samples. Simulation results demonstrate that, when the channel distribution is unavailable, the proposed approach performs significantly better than the most recent channel estimation algorithms in terms of estimation performance and computational complexity.
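The server-side step of the federated training described above can be sketched in a few lines. This is standard FedAvg aggregation on flattened parameter vectors, shown as a generic mechanism rather than the paper's specific LSTM-DNN training loop; the client weights and sample counts below are made-up inputs.

```python
def fed_avg(client_weights, client_sizes):
    """Server-side FedAvg: average clients' flattened parameter vectors,
    weighting each client by its number of local training samples."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# two users report locally updated parameters and their local sample counts;
# the server broadcasts the weighted average as the next global CE model
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], [50, 50])   # → [2.0, 3.0]
```

In the paper's setting, each user would run local gradient steps on its own pilot observations between aggregation rounds, so raw channel measurements never leave the device.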
Internet of Things (IoT) based 6G is expected to revolutionize our world. Various candidate technologies have been proposed to meet the requirements of 6G-based IoT systems; symbiotic radio (SR) is one of them. This paper aims to use symbiotic radio technology to support the passive Internet of Things and enhance uplink transmission performance. In SR, the IoT tag is parasitic on the primary transmission: the tag's transmission shares not only the radio spectrum of the primary transmission, but also the power and infrastructure of the neighboring smartphone primary system, which enhances the spectrum and energy efficiency of the system. The IoT tags' information is then sent to the cloud for analysis through the macro base station (MBS) or the wireless access point (WAP), where smartphones are used as relays to transmit this information to the MBS or WAP. In this paper, two optimization problems are formulated to maximize the total throughput of the system. The first problem seeks the optimum mode selection between the LTE and Wi-Fi networks (MBS or WAP) for transmitting an expected tag information load from the smartphone, aiming to maximize the system throughput; a matching game algorithm is used to solve it. The second problem seeks the optimum clustering of tags, where the tags are divided into virtual clusters, and determines which smartphone's LTE/Wi-Fi downlink signal all cluster members can ride to maximize the system throughput; a double deep Q-network (DDQN) model is proposed to solve this optimization problem with low complexity. Simulation results show that our proposed algorithms increase the total system data rate on average by 90% over the system using the LTE network first, and by 20% over the system without the DDQN algorithm. Furthermore, our proposed algorithms enhance the capacity of the system on average by 100% over the system using the LTE network first without the DDQN algorithm.
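The matching-game component of the mode-selection problem can be illustrated with classic deferred acceptance. This sketch is one-to-one Gale-Shapley with hypothetical names; the paper's tag-load-to-network assignment would be a many-to-one variant with throughput-based preference lists, but the core propose/reject mechanism is the same.

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """One-to-one deferred acceptance.
    proposer_prefs / reviewer_prefs: dicts mapping each side's agents
    to complete preference lists over the other side."""
    # precompute each reviewer's ranking for O(1) comparisons
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)            # proposers not yet matched
    next_idx = {p: 0 for p in proposer_prefs}
    match = {}                             # reviewer -> current proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_idx[p]]  # best option not yet tried
        next_idx[p] += 1
        if r not in match:
            match[r] = p                    # reviewer tentatively accepts
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])           # reviewer trades up
            match[r] = p
        else:
            free.append(p)                  # proposal rejected, try next
    return {p: r for r, p in match.items()}  # proposer -> reviewer
```

With complete preference lists on both sides, the result is a stable matching: no tag-network pair would both prefer each other over their assigned partners, which is the equilibrium notion the matching-game formulation targets.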
This article proposes an algorithm for autonomous navigation of mobile robots that merges Reinforcement Learning with the Extended Kalman Filter (EKF) as a localization technique, namely EKF-DQN, aiming to accelerate learning and improve the reward values obtained during the learning process. More specifically, Deep Q-Networks (DQN) are used to control the trajectory of an autonomous vehicle in an indoor environment. Owing to the ability of the EKF to predict states, the algorithm uses it as a learning accelerator for the DQN network, predicting states ahead and inserting this information into the replay memory. For the safety of the navigation process, a visual safety system is also proposed that avoids collisions of the mobile vehicle with people moving in the environment. The efficiency of the proposed algorithm is verified through computer simulations using the CoppeliaSim simulator with code insertion in Python. The simulation results show that the EKF-DQN algorithm accelerates the maximization of the obtained rewards and provides a higher success rate in fulfilling the proposed mobile robot mission compared with the DQN and Q-Learning algorithms.
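The replay-memory augmentation idea can be sketched minimally. This is a generic illustration under stated assumptions, not the article's implementation: the EKF is reduced to its linear predict step (x' = Fx + Bu, covariance omitted), all matrices and function names are hypothetical, and the DQN itself is not shown. The point is simply that model-predicted look-ahead transitions are pushed into the same buffer the learner samples from.

```python
import random
from collections import deque

class ReplayBuffer:
    """Bounded experience memory sampled uniformly by the learner."""
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)

    def push(self, transition):
        self.buf.append(transition)

    def sample(self, k):
        return random.sample(self.buf, k)

def ekf_predict(state, control, F, B):
    # linear EKF predict step: x' = F x + B u (matrices as lists of lists)
    n = len(state)
    return [
        sum(F[i][j] * state[j] for j in range(n))
        + sum(B[i][j] * control[j] for j in range(len(control)))
        for i in range(n)
    ]

def augment_replay(buffer, state, action, reward, control, F, B):
    # store a transition whose next state is the EKF's look-ahead
    # prediction rather than an observed environment step
    predicted = ekf_predict(state, control, F, B)
    buffer.push((state, action, reward, predicted))
    return predicted
```

Feeding these predicted transitions alongside real ones gives the DQN extra, cheap experience per environment step, which is the mechanism behind the reported learning acceleration.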