Anu Jagannath and 2 more

The year 2019 witnessed the rollout of the 5G standard, which promises significant data rate improvements over 4G. While 5G is still in its infancy, the research community has increasingly shifted its focus toward communication technologies beyond 5G. The recent emergence of machine learning approaches for enhancing wireless communications and empowering them with much-desired intelligence holds immense potential for redefining wireless communication for 6G. The evolving communication systems will be bottlenecked in terms of latency, throughput, and reliability by the underlying signal processing at the physical layer. In this position paper, we motivate the need to redesign iterative signal processing algorithms by leveraging deep unfolding techniques to fulfill the physical layer requirements of 6G networks. To this end, we begin by presenting the service requirements and the key challenges posed by the envisioned 6G communication architecture. We outline the deficiencies of traditional algorithmic principles and data-hungry deep learning (DL) approaches in the context of 6G networks. Specifically, we present deep unfolded signal processing by sketching the interplay between domain knowledge and DL. The deep unfolded approaches reviewed in this article are positioned explicitly in the context of the requirements imposed by the next generation of cellular networks. Finally, this article motivates open research challenges to truly realize hardware-efficient edge intelligence for future 6G networks.
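
To make the idea of deep unfolding concrete, the sketch below unrolls the classical ISTA sparse-recovery iteration into a fixed number of layers with learnable weights (a LISTA-style network). This is a generic illustration of the unfolding principle rather than an algorithm taken from the article; the layer count, dimensions, and initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UnfoldedISTA(nn.Module):
    """Illustrative deep-unfolded ISTA (LISTA-style): each iteration of the
    classical sparse-recovery update becomes one layer with learnable weights."""
    def __init__(self, m, n, num_layers=5):
        super().__init__()
        self.num_layers = num_layers
        # Learnable surrogates for the matrices in the hand-crafted ISTA update.
        self.W = nn.Parameter(torch.randn(n, m) * 0.01)             # measurement-to-signal map
        self.S = nn.Parameter(torch.eye(n))                         # recurrent refinement matrix
        self.theta = nn.Parameter(torch.full((num_layers,), 0.1))   # per-layer soft threshold

    @staticmethod
    def soft_threshold(x, theta):
        # Elementwise shrinkage operator used in every unfolded layer.
        return torch.sign(x) * torch.relu(torch.abs(x) - theta)

    def forward(self, y):
        # y: (batch, m) measurements; returns a (batch, n) sparse estimate.
        x = self.soft_threshold(y @ self.W.t(), self.theta[0])
        for k in range(1, self.num_layers):
            x = self.soft_threshold(y @ self.W.t() + x @ self.S.t(), self.theta[k])
        return x
```

Each layer mirrors one iteration of the hand-crafted solver, so the trained network retains the structure and small depth of the original algorithm while learning its parameters from data; this interpretability and low complexity is what makes unfolded designs attractive for latency-constrained physical-layer processing.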

Anu Jagannath and 1 more

Future communication networks must contend with scarce spectrum to accommodate the extensive growth of heterogeneous wireless devices. Efforts are underway to address spectrum coexistence, enhance spectrum awareness, and bolster authentication schemes. Wireless signal recognition is becoming increasingly significant for spectrum monitoring, spectrum management, and secure communications, among other applications. Consequently, comprehensive spectrum awareness on the edge has the potential to serve as a key enabler for the emerging beyond-5G (fifth generation) networks. State-of-the-art studies in this domain (i) focus only on a single task - modulation or signal (protocol) classification - which in many cases provides insufficient information for a system to act on, (ii) consider either radar or communication waveforms (a homogeneous waveform category), and (iii) do not address edge deployment during the neural network design phase. In this work, for the first time in the wireless communication domain, we exploit the potential of a deep neural network based multi-task learning (MTL) framework to simultaneously learn modulation and signal classification tasks while considering heterogeneous wireless signals, such as radar and communication waveforms, in the electromagnetic spectrum. The proposed MTL architecture benefits from the mutual relation between the two tasks to improve both classification accuracy and learning efficiency with a lightweight neural network model. We additionally include experimental evaluations of the model with over-the-air collected samples and demonstrate first-hand insight on model compression along with the deep learning pipeline for deployment on resource-constrained edge devices. We demonstrate significant computational, memory, and accuracy improvements of the proposed model over two reference architectures. In addition to modeling a lightweight MTL model suitable for resource-constrained embedded radio platforms, we provide a comprehensive heterogeneous wireless signals dataset for public use.
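
As a rough illustration of the hard-parameter-sharing MTL idea described above, the sketch below shares a 1-D convolutional backbone over raw I/Q samples between two task-specific heads, one for modulation classification and one for signal (protocol) classification. The layer sizes, class counts, and loss weighting are assumptions for illustration and do not reproduce the authors' exact architecture.

```python
import torch.nn as nn

class MTLSignalClassifier(nn.Module):
    """Illustrative hard-parameter-sharing MTL network: a shared 1-D conv
    backbone over I/Q samples feeds two task-specific classification heads."""
    def __init__(self, num_mod_classes=8, num_sig_classes=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.mod_head = nn.Linear(64, num_mod_classes)   # modulation task
        self.sig_head = nn.Linear(64, num_sig_classes)   # signal/protocol task

    def forward(self, iq):
        # iq: (batch, 2, num_samples) in-phase/quadrature channels
        feats = self.backbone(iq)
        return self.mod_head(feats), self.sig_head(feats)

# Joint training objective (weighted sum of per-task losses), e.g.:
# mod_logits, sig_logits = model(iq_batch)
# loss = ce(mod_logits, mod_labels) + 0.5 * ce(sig_logits, sig_labels)
```

Because both heads reuse the same backbone features, the two tasks regularize each other, which is the mechanism behind the accuracy and model-size benefits the abstract attributes to MTL.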

Anu Jagannath and 1 more

A computationally efficient framework to fingerprint real-world Bluetooth devices is presented in this work. Despite the active research in this area, a generalizable framework suitable for real-world deployment - in terms of both performance in a new and evolving environment and hardware efficiency of the architecture - is lacking. We propose an embedding-assisted attentional framework (Mbed-ATN) suitable for fingerprinting actual Bluetooth devices, analyze its generalization capability in different settings, and demonstrate the effect of sample length and anti-aliasing decimation on its performance. The embedding module serves as a dimensionality reduction unit that maps the high-dimensional 3D input tensor to a 1D feature vector for further processing by the ATN module. Furthermore, unlike prior research in this field, we closely evaluate the complexity of the model and test its fingerprinting capability on a real-world Bluetooth dataset collected under a different time frame and experimental setting from the one used for training. Our study reveals 7.3x and 65.2x lower memory usage with the proposed Mbed-ATN architecture in contrast to Oracle at input sample lengths of M=10 kS and M=100 kS, respectively. Further, the proposed Mbed-ATN requires 16.9x fewer FLOPs and 7.5x fewer trainable parameters than Oracle. Finally, we show that with anti-aliasing decimation and at a greater input sample length of 1 MS, the proposed Mbed-ATN framework achieves a 5.32x higher TPR, 37.9% fewer false alarms, and 6.74x higher accuracy in the challenging real-world setting.
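
A minimal sketch of the embedding-plus-attention pipeline described above is given below, assuming a convolutional embedding block that compresses the 3D input tensor to a 1D feature vector and a lightweight feature-wise attention gate standing in for the ATN module. The layer sizes, gating form, and device count are illustrative assumptions, not the authors' exact Mbed-ATN design.

```python
import torch.nn as nn

class MbedATNSketch(nn.Module):
    """Rough sketch of the embedding + attention idea: an embedding block
    reduces the high-dimensional input tensor to a 1-D feature vector, and a
    simple attention gate re-weights it before device classification."""
    def __init__(self, in_channels=2, feat_dim=128, num_devices=10):
        super().__init__()
        # Embedding module: dimensionality reduction to a 1-D feature vector.
        self.embed = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU(),
        )
        # Attention module (here a feature-wise gate for illustration).
        self.attn = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.classifier = nn.Linear(feat_dim, num_devices)

    def forward(self, x):
        # x: (batch, channels, H, W), e.g. a framed I/Q capture shaped as a 3-D tensor
        z = self.embed(x)           # (batch, feat_dim) 1-D feature vector
        z = z * self.attn(z)        # attention weights re-scale each feature
        return self.classifier(z)   # per-device logits
```

Doing the dimensionality reduction before attention keeps the attention module small, which is consistent with the memory, FLOP, and parameter savings the abstract reports relative to the reference architecture.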
