Accepted Papers

Analysis of Lossy Generative Data Compression for Robust Remote Deep Inference
Mathew Williams1, Silvija Kokalj-Filipovic1, and Armani Rodriguez1
1 Rowan University, Glassboro, NJ, USA

Networks of wireless sensors, including the Internet of Things (IoT), motivate lossy compression of sensor data to match the available network bandwidth (BW). Hence, sensor data intended for inference by a remote deep learning (RDL) model is likely to be reconstructed, with distortion, from a compressed representation received by the remote user over a wireless channel. Our focus is a particular type of lossy compression algorithm based on DL models, known as learned compression (LC). The link between information loss and compression rate in LCs has not yet been studied in the framework of information theory, nor is it practically associated with any metadata that could describe the type and level of information loss to downstream users. This may make such compression undetectable yet potentially harmful. We study the robustness of an RDL classification model against lossy compression of its input, including robustness under an adversarial attack. We apply different compression methods to MNIST images, such as JPEG and a hierarchical LC, each with different compression ratios. For each lossy reconstruction and its uncompressed original, several techniques for topological feature characterization based on persistent homology are used to highlight important differences among compression approaches that may affect the robust accuracy of a DL classifier trained on the original data. We conclude that LC is preferred in the described context, because it achieves the same accuracy as the originals (with and without an adversarial attack) on a trained DL MNIST classifier while using only 1/4 of the BW. We show that the calculated topological features differ between JPEG and comparable LC reconstructions, with the latter closer to the features of the original, and that the attack induces a distribution shift in those features.
Finally, most LC models are generative, meaning that we can generate multiple statistically independent compressed representations of a data point, which opens the possibility of inference error correction at the RDL model. Due to space limitations, we leave this aspect for future work.
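To make the topological-feature idea concrete, the following toy sketch computes a crude Betti-0 curve: the number of connected components of an image's superlevel sets across intensity thresholds. This is a simplified stand-in for the persistent-homology characterizations the abstract refers to, not the authors' code; the image and thresholds are illustrative.

```python
from collections import deque

def connected_components(mask):
    """Count 4-connected components of True pixels in a 2D boolean grid."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

def betti0_curve(image, thresholds):
    """Component count of {pixel >= t} for each threshold t."""
    return [connected_components([[p >= t for p in row] for row in image])
            for t in thresholds]

# Toy "digit": two bright blobs on a dark background.
img = [[0, 0, 0, 0, 0],
       [0, 9, 9, 0, 0],
       [0, 9, 9, 0, 8],
       [0, 0, 0, 0, 8],
       [0, 0, 0, 0, 0]]
print(betti0_curve(img, [1, 5, 10]))  # components vanish as t rises
```

Comparing such curves (or, in the paper, full persistence diagrams) for an original image and its JPEG and LC reconstructions is one way differences between compression approaches become visible.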

How Can the Adversary Effectively Distinguish Cellular IoT Devices from Non-IoT Devices Using LSTM Networks?
Zhengping Jay Luo1, Will A. Pitera1, Shangqing Zhao2, Zhuo Lu3, and Yalin Sagduyu4
1 Rider University, Lawrenceville, NJ, USA
2 The University of Oklahoma, Tulsa, OK, USA
3 University of South Florida, Tampa, FL, USA
4 Virginia Tech, Blacksburg, VA, USA

The Internet of Things (IoT) has become a key enabler for connecting edge devices with each other and to the Internet. Massive IoT services provided by cellular networks support applications such as smart metering and smart cities. The security of massive IoT devices operating alongside traditional devices such as smartphones and laptops has become a major concern, and protecting these IoT devices from being identified by malicious attackers is often their first line of defense. In this paper, we present an effective attack method for identifying cellular IoT devices within cellular networks. Inspired by the characteristics of Long Short-Term Memory (LSTM) networks, we develop a method that not only captures context information but also adapts to dynamic changes in the environment over time. Experimental validation on public datasets shows high detection rates within fewer than 10 epochs of training.
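The LSTM property the abstract appeals to, retaining context across a traffic sequence while adapting to new inputs, comes from its gating structure. The sketch below is a minimal NumPy forward pass of one LSTM cell over a toy sequence; dimensions and the random features are illustrative, not from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. Stacked gate order: input, forget, output, candidate.
    W: (4H, D), U: (4H, H), b: (4H,)."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i, f, o = (sigmoid(z[k * H:(k + 1) * H]) for k in range(3))
    g = np.tanh(z[3 * H:4 * H])
    c = f * c_prev + i * g      # forget stale context, admit new evidence
    h = o * np.tanh(c)          # exposed state summarizing the sequence
    return h, c

rng = np.random.default_rng(0)
D, H, T = 6, 4, 10              # feature dim, hidden dim, sequence length
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):              # e.g., per-packet features of one device
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h)                        # final state, fed to a classifier head
```

In an IoT-vs-non-IoT classifier, the final hidden state would feed a small dense layer producing the device-type decision.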

Machine Learning-Based Jamming Detection and Classification in Wireless Networks
Enrico Testi1, Luca Arcangeloni1, and Andrea Giorgetti1
1 University of Bologna, Cesena, Italy

The development of novel tools to detect, classify, and counteract the new generation of smart jammers in the Internet of Things (IoT) is of paramount importance. Detection and classification have to be performed in a short time, with high reliability, while preserving the privacy of network users. In this work, we propose a novel machine learning (ML)-based jamming detection and classification algorithm that can be implemented in the network gateway (GW). The proposed method is based on an energy detector (ED), the extraction of problem-tailored features, dimensionality reduction, and multi-class classification. Extensive numerical evaluations assess the detection and classification performance while varying the number of principal components selected through dimensionality reduction, the observation window length, the shadowing intensity, and the signal-to-jammer ratio (SJR). Our solution reaches remarkably high accuracy, up to 99%, outperforming a state-of-the-art solution. This is a very promising result considering that the approach does not need to inspect the decoded information, thus preserving the privacy of the network users.
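The first two stages of such a pipeline can be sketched as follows: an energy detector on a window of I/Q samples, then PCA for dimensionality reduction of per-window feature vectors. The threshold, dimensions, and signals are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def energy_detector(iq, threshold):
    """Flag a window of I/Q samples whose average power exceeds threshold."""
    return np.mean(np.abs(iq) ** 2) > threshold

def pca_project(X, k):
    """Reduce feature rows of X to their top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
# Unit-power complex noise vs. a strong tone-like jammer in noise.
noise = (rng.normal(size=256) + 1j * rng.normal(size=256)) / np.sqrt(2)
jammer = 3.0 * np.exp(2j * np.pi * 0.1 * np.arange(256)) + noise
print(energy_detector(noise, 2.0), energy_detector(jammer, 2.0))

X = rng.normal(size=(100, 12))      # toy feature vectors, one per window
print(pca_project(X, 3).shape)      # reduced features for the classifier
```

The reduced features would then feed a multi-class classifier that labels the jammer type, all without decoding user payloads.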

Machine Learning Assisted Physical Layer Secret Key Generation in the One-Time-Pad Encryption Scheme
Liquan Chen1, Yufan Song1, Tianyu Lu1, and Peng Zhang1
1 Southeast University, Nanjing, China

The one-time-pad (OTP) technique is widely used in encryption to achieve information-theoretic security, and the physical-layer secret key generation (SKG) technique can provide the random keys OTP requires. However, these keys are susceptible to eavesdroppers in correlated channel scenarios. To address this issue, this paper proposes a machine learning (ML)-assisted SKG scheme for OTP encryption. We provide a complete and rigorous analysis of information leakage under the influence of space-time correlation, and a derivation based on information theory proves the security of the key generation scheme. To reduce computational complexity, we generate the initial key with ML. The simulation results show that the probability of information being leaked in the next frame increases if information is leaked in the current frame, and they verify that our ML-assisted SKG scheme can generate keys securely for OTP under different space-time correlations.
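A minimal sketch of the two building blocks involved, channel-based key bits and XOR one-time-pad encryption, is given below. The 1-bit median quantizer and the sinusoidal fading model are illustrative simplifications; the paper's ML-based initial key generation and correlation analysis are not reproduced here.

```python
import numpy as np

def quantize_to_bits(rss):
    """1-bit quantizer: channel samples above the median become 1."""
    return (rss > np.median(rss)).astype(np.uint8)

def otp(message_bits, key_bits):
    """XOR one-time pad; applying it twice recovers the message."""
    return message_bits ^ key_bits

rng = np.random.default_rng(2)
gain = np.sin(np.linspace(0, 8 * np.pi, 128))          # shared fading
alice = quantize_to_bits(gain + 0.05 * rng.normal(size=128))
bob   = quantize_to_bits(gain + 0.05 * rng.normal(size=128))
print("bit disagreement rate:", np.mean(alice != bob))

msg = rng.integers(0, 2, 128).astype(np.uint8)
cipher = otp(msg, alice)          # Alice encrypts with her key bits
print(np.array_equal(otp(cipher, alice), msg))
```

In a correlated-channel scenario an eavesdropper's observations are partially correlated with `gain`, which is exactly the leakage the paper quantifies.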

A Key Generation Scheme for IoV Communication Based on Neural Network Autoencoders
Liquan Chen1, Han Wang1, Tianyu Lu1, and Zeyu Xu1
1 Southeast University, Nanjing, China

In recent years, the Internet of Vehicles (IoV) has seen increasingly wide use. Due to the highly dynamic and point-to-point characteristics of IoV communication, IoV needs a secure and effective key generation mechanism. Physical-layer key generation has become a promising technology, known for its light weight and information-theoretic security. IoV communication is usually realized with Wi-Fi, ZigBee, LoRa, and other technologies. Based on ESP32 devices, this paper explores the use of Wi-Fi communication in the vehicle-to-everything (V2X) scenario of IoV. Focusing on this scenario, we conduct channel modeling based on line-of-sight (LoS) propagation and multipath fading and present Secure-Vehicle-Key, an environment-adaptive key generation scheme using neural network autoencoders. The scheme dynamically balances reliability and confidentiality to meet the requirements of different vehicular network situations. Compared with a reconciliation scheme implemented with Slepian-Wolf low-density parity-check (LDPC) codes, our method reduces the bit disagreement rate (BDR) of key generation by 30%-40% and passes the NIST randomness test with excellent results.
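The role of the autoencoder here is essentially denoising: both parties reconstruct a cleaner version of the shared channel profile before quantizing, which lowers the BDR. The sketch below uses a moving-average filter as a crude stand-in for the learned autoencoder (it is emphatically not the paper's Secure-Vehicle-Key network), just to show how denoising before quantization reduces disagreement.

```python
import numpy as np

def bdr(a, b):
    """Bit disagreement rate between two bit strings."""
    return float(np.mean(a != b))

def denoise(x, w=5):
    """Moving-average filter standing in for the paper's autoencoder,
    which learns to reconstruct channel measurements with noise removed."""
    return np.convolve(x, np.ones(w) / w, mode="same")

rng = np.random.default_rng(3)
gain = np.sin(np.linspace(0, 6 * np.pi, 400))         # shared V2X channel
alice_raw = gain + 0.4 * rng.normal(size=400)          # independent noise
bob_raw   = gain + 0.4 * rng.normal(size=400)
q = lambda x: (x > np.median(x)).astype(int)           # 1-bit quantizer

print("BDR raw:     ", bdr(q(alice_raw), q(bob_raw)))
print("BDR denoised:", bdr(q(denoise(alice_raw)), q(denoise(bob_raw))))
```

A learned autoencoder can go further than a fixed filter by adapting its reconstruction to the current LoS/multipath conditions, which is the environment-adaptive balance the abstract describes.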

Exploring Adversarial Attacks on Learning-based Localization
Frost Mitchell1, Phillip Smith1, Aditya Bhaskara1, and Sneha Kumar Kasera1
1 University of Utah, Salt Lake City, UT, USA

We investigate the robustness of a convolutional neural network (CNN) RF transmitter localization model in the face of adversarial actors that may poison or spoof sensor data to disrupt or defeat the algorithm. We train the CNN to estimate transmitter locations from sensor coordinates and received signal strength (RSS) measurements drawn from a real-world dataset. We consider attacks from adversaries with varying capabilities, ranging from naive, random attacks to omniscient, worst-case attacks. We apply countermeasures based on statistical outlier detection and train the CNN against adversarial attacks to improve performance. Adversarial training is shown to completely neutralize some attacks and to improve accuracy by up to 65% in other cases. Our evaluation of countermeasures indicates that combining statistical techniques with adversarial training provides a more robust defense against adversarial attacks.
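One simple statistical countermeasure of the kind mentioned above is a modified-z-score (median absolute deviation) filter that discards implausible sensor readings before they reach the localization model. The sketch below is an illustrative example of this family of filters, not the paper's specific countermeasure; the RSS values and cutoff are assumptions.

```python
import numpy as np

def mad_filter(rss, k=3.5):
    """Drop readings whose modified z-score exceeds k: a simple statistical
    outlier countermeasure applied before RSS values reach the CNN."""
    med = np.median(rss)
    mad = np.median(np.abs(rss - med))
    z = 0.6745 * (rss - med) / (mad + 1e-9)
    return rss[np.abs(z) < k]

# Four consistent sensors plus one spoofed, implausibly strong reading.
readings = np.array([-61.0, -63.0, -60.0, -62.0, -20.0])  # dBm
print(mad_filter(readings))   # the -20 dBm reading is rejected
```

Such a filter handles naive, random attacks well; the omniscient adversary in the paper motivates combining it with adversarial training, since worst-case perturbations can be crafted to stay inside the statistical cutoff.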

Approximate Wireless Communication for Federated Learning
Xiang Ma1, Haijian Sun2, Rose Hu1, and Yi Qian3
1 Utah State University, Logan, USA
2 University of Georgia, Athens, USA
3 University of Nebraska-Lincoln, Lincoln, USA

This paper presents an approximate wireless communication scheme for federated learning (FL) model aggregation in the uplink. We consider a realistic channel that introduces bit errors during FL model exchange in wireless networks, and our study demonstrates that random bit errors during model transmission can significantly affect FL performance. To overcome this challenge, we propose an approximate communication scheme based on a mathematical and statistical proof that machine learning (ML) model gradients are bounded under certain constraints. This bound enables a novel encoding scheme for the float-to-binary representation of gradient values and their QAM constellation mapping. Moreover, since FL gradients are error-resilient, the proposed scheme simply delivers gradients with errors when the channel quality is satisfactory, eliminating extensive error-correcting codes and/or retransmissions; the direct benefits are less overhead and lower latency, making the scheme well suited for resource-constrained devices in wireless networks. Through simulations, we show that the proposed scheme effectively reduces the impact of bit errors on FL performance and saves at least half the time compared with transmission using error correction and retransmission to achieve the same learning performance. In addition, we investigate the effectiveness of bit-protection mechanisms in high-order modulation when Gray coding is employed and find that this approach considerably enhances learning performance.
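The two ideas, a boundedness-based gradient encoding and Gray-coded constellation indexing, can be sketched as follows. The clip bound, bit width, and uniform quantizer are illustrative choices standing in for the paper's encoding, not its exact scheme.

```python
import numpy as np

def quantize(g, bound, bits):
    """Clip gradients to [-bound, bound], then map to integer levels;
    the clip bound plays the role of the paper's proven gradient bound."""
    levels = 2 ** bits - 1
    g = np.clip(g, -bound, bound)
    return np.round((g + bound) / (2 * bound) * levels).astype(int)

def dequantize(q, bound, bits):
    levels = 2 ** bits - 1
    return q / levels * (2 * bound) - bound

def gray(n):
    """Gray-code the level index so a nearest-neighbor constellation error
    flips a single bit of the transmitted gradient index."""
    return n ^ (n >> 1)

rng = np.random.default_rng(4)
g = rng.normal(0, 0.3, 8)                     # toy gradient entries
q = quantize(g, bound=1.0, bits=4)            # 16-level index per entry
symbols = gray(q)                             # indices for 16-QAM mapping
err = np.max(np.abs(dequantize(q, 1.0, 4) - np.clip(g, -1, 1)))
print("max quantization error:", err)         # bounded by half a step
```

Because a nearest-neighbor symbol error under Gray coding flips only one low-significance bit of the index, the resulting gradient perturbation stays small, which is why uncorrected errors are tolerable for FL aggregation.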

Hierarchical Over-the-Air Federated Learning with Differential Privacy
Zixi Wang1, Arick Grootveld1, and M. Cenk Gursoy1
1 Syracuse University, Syracuse, USA

Federated learning (FL) is a burgeoning field that examines the cooperative interaction of machine learning (ML) models with users, enabling the training of a global model while each user retains its data locally. With differential privacy (DP), FL also becomes an enabler for training ML models in a more private manner. While there has been a growing body of work exploring various aspects of FL, most studies, especially in the context of hierarchical federated learning (HFL), treat the different levels of the hierarchy as a composition of two DP mechanisms. In this paper, we introduce a DP-based privacy-preserving method with hierarchical over-the-air FL and address both the communication and privacy aspects in an end-to-end fashion.
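The per-user privacy step in DP-based FL is typically the Gaussian mechanism: clip each local update in L2 norm, then add noise scaled to the clipping bound before (over-the-air) aggregation. The sketch below shows that standard mechanism; the clip norm, noise multiplier, and user count are illustrative, and it does not model the paper's end-to-end hierarchical analysis.

```python
import numpy as np

def dp_sanitize(update, clip_norm, sigma, rng):
    """Clip a local model update in L2 norm, then add Gaussian noise scaled
    to the clipping bound (the standard Gaussian mechanism)."""
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, sigma * clip_norm, update.shape)

rng = np.random.default_rng(5)
updates = [rng.normal(0, 1, 16) for _ in range(10)]   # 10 users' updates
noisy = [dp_sanitize(u, clip_norm=1.0, sigma=0.5, rng=rng) for u in updates]
aggregate = np.mean(noisy, axis=0)   # over-the-air superposition averages
print(aggregate[:4])                 # server sees only the noisy aggregate
```

In over-the-air aggregation the channel itself performs the summation, so the server never observes individual noisy updates, only their superposition, which is part of what an end-to-end treatment of communication and privacy can exploit.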

Increasing the Robustness of a Machine Learning-based IoT Malware Detection Method with Adversarial Training
József Sándor1, Roland Nagy1, and Levente Buttyán1
1 Budapest University of Technology and Economics, Budapest, Hungary

We study the robustness of SIMBIoTA-ML, a recently proposed machine learning-based IoT malware detection solution, against adversarial samples. First, we propose two adversarial sample creation strategies that modify existing malware binaries by appending extra bytes that are never executed but make the modified samples dissimilar to the originals. We show that SIMBIoTA-ML is robust against the first strategy but can be misled by the second. To overcome this problem, we propose adversarial training, i.e., extending the training set of SIMBIoTA-ML with samples crafted using the adversarial evasion strategies. We measure the detection accuracy of SIMBIoTA-ML trained on such an extended training set and show that it remains high both for the original malware samples and for the adversarial samples.
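The evasion idea above can be illustrated with a toy sketch: append never-executed bytes to a binary and watch a byte-level similarity score drop. The Jaccard n-gram similarity here is a crude stand-in for the TLSH-based similarity features SIMBIoTA-ML builds on, and the padding strategy is a simplified illustration, not either of the paper's two strategies.

```python
import random

def append_padding(binary, n, rng):
    """Append n random trailing bytes. As in the abstract, the appended
    bytes are never executed, yet they perturb similarity digests."""
    return binary + bytes(rng.randrange(256) for _ in range(n))

def ngram_similarity(a, b, n=4):
    """Jaccard similarity over byte n-grams: a crude stand-in for the
    TLSH-style similarity underlying SIMBIoTA-ML's features."""
    A = {a[i:i + n] for i in range(len(a) - n + 1)}
    B = {b[i:i + n] for i in range(len(b) - n + 1)}
    return len(A & B) / len(A | B)

rng = random.Random(6)
sample = bytes(rng.randrange(256) for _ in range(512))   # toy "binary"
evasive = append_padding(sample, 256, rng)               # behavior unchanged
print(ngram_similarity(sample, sample),                  # identical: 1.0
      ngram_similarity(sample, evasive))                 # padded: below 1.0
```

Adversarial training then amounts to regenerating such padded variants of known malware and adding them to the training set, so the detector's decision boundary no longer hinges on the perturbed similarity scores.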