Deep-learning side-channel attacks can reveal a device's encryption keys by analyzing its power consumption with neural networks. However, the portability of deep-learning side-channel attacks suffers when the training data (from the training device) and the test data (from the test device) differ. Recent studies have examined the portability of deep-learning side-channel attacks in the face of hardware discrepancies between the two devices.
In this paper, we investigate the portability of deep-learning side-channel attacks against software discrepancies between the training device and the test device. Specifically, we examine four factors that can lead to software discrepancies: random delays, instruction rewriting, optimization levels, and code obfuscation. Our experimental results show that software discrepancies caused by each factor can significantly degrade the attack performance of deep-learning side-channel attacks, and can even prevent an attacker from recovering keys. To mitigate the impact of software discrepancies, we investigate three mitigation methods from the perspective of an attacker: adjusting Points of Interest, domain adaptation, and multi-domain training. Our results indicate that multi-domain training is the most effective of the three, but it can be difficult to scale given the diversity of software discrepancies.
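The contrast between single-domain and multi-domain training can be illustrated with a toy simulation. The sketch below is purely hypothetical: it uses simulated traces in which a software-induced delay shifts the leakage position, and a nearest-mean "template" classifier stands in for a neural network; all parameters are invented, not taken from the paper.

```python
import math
import random

random.seed(0)

def make_traces(delay, n=200, length=50):
    """Toy power traces: a software delay shifts where the
    key-dependent leakage appears (hypothetical model, not real data)."""
    traces, labels = [], []
    for _ in range(n):
        bit = random.randint(0, 1)                # toy 1-bit "key" class
        x = [random.gauss(0.0, 1.0) for _ in range(length)]
        x[10 + delay] += (2 * bit - 1) * 3.0      # leakage at a shifted POI
        traces.append(x)
        labels.append(bit)
    return traces, labels

def train_templates(X, y):
    """Per-class mean trace; a nearest-mean stand-in for a DL classifier."""
    tpl = {}
    for c in (0, 1):
        rows = [x for x, lab in zip(X, y) if lab == c]
        tpl[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return tpl

def accuracy(tpl, X, y):
    preds = [min(tpl, key=lambda c: math.dist(x, tpl[c])) for x in X]
    return sum(p == lab for p, lab in zip(preds, y)) / len(y)

# Single-domain training: traces from one software variant (delay 0).
single = train_templates(*make_traces(delay=0))

# Multi-domain training: pool traces from several delay variants.
X_pool, y_pool = [], []
for d in (0, 5, 10):
    X, y = make_traces(d)
    X_pool += X
    y_pool += y
multi = train_templates(X_pool, y_pool)

# "Test device" whose software discrepancy shifts the leakage by 10.
Xt, yt = make_traces(delay=10)
single_acc = accuracy(single, Xt, yt)
multi_acc = accuracy(multi, Xt, yt)
print(f"single-domain accuracy: {single_acc:.2f}")
print(f"multi-domain accuracy:  {multi_acc:.2f}")
```

In this toy setup, the single-domain model degenerates to guessing on the shifted test traces, while the pooled model still recovers the class, mirroring the qualitative finding that multi-domain training is the most robust of the three mitigations.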
Our attack demonstrates the ability of an adversary to receive signals close to the victim receiver and generate, in real time, spoofing signals for an arbitrary location without modifying the navigation message contents. We exploit the common reception and transmission time methods that GNSS receivers use to estimate pseudoranges, thereby potentially rendering any cryptographic authentication useless. We build a proof-of-concept real-time spoofer capable of receiving authenticated GNSS signals and generating spoofing signals for any arbitrary location and motion, without requiring high-speed communication networks or modifying the message contents. Our evaluations show that it is possible to spoof a victim receiver to locations as far as 4000 km away from its actual location, with any dynamic motion path. This work further highlights the fundamental limitations of securing a broadcast signaling-based localization system even when all communications are cryptographically protected.
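The pseudorange manipulation underlying such an attack can be sketched numerically. A receiver measures each pseudorange as the signal's travel time scaled by the speed of light, so a spoofer that retransmits every satellite's signal with a carefully chosen extra delay shifts the measured pseudoranges, and hence the position solution, without touching the authenticated message contents. The snippet below is a simplified illustration: the satellite and receiver coordinates are invented, and receiver clock bias and atmospheric delays are ignored.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def spoof_delays(sats, true_pos, fake_pos):
    """Per-satellite retransmission delay (seconds) that moves each
    measured pseudorange from the true position's value to the fake
    position's value. Simplified geometry: no clock bias, no atmosphere."""
    return [(math.dist(s, fake_pos) - math.dist(s, true_pos)) / C
            for s in sats]

# Invented example geometry (meters, Earth-centered coordinates).
sats = [(20_000e3, 0.0, 0.0),
        (0.0, 20_000e3, 0.0),
        (0.0, 0.0, 20_000e3),
        (12e6, 12e6, 12e6)]
true_pos = (6_371e3, 0.0, 0.0)   # on Earth's surface
fake_pos = (0.0, 6_371e3, 0.0)   # thousands of km away

delays = spoof_delays(sats, true_pos, fake_pos)
for d in delays:
    print(f"{d * 1e3:+.3f} ms")
```

Negative values correspond to signals the spoofer must replay earlier, which in practice bounds how quickly it must process and retransmit the received signals.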
Satellite communications are increasingly crucial for telecommunications, navigation, and Earth observation. However, many widely used satellites do not cryptographically secure the downlink, opening the door for radio spoofing attacks. Recent developments in software-defined radio hardware have enabled attacks on wireless systems including GNSS, which can be effectively spoofed using only cheap hardware available off the shelf. However, these conclusions do not generalize well to other satellite systems such as high data rate backhauls or satellite-to-customer connections, where the spoofing requirements are currently unknown.
In this paper, we present a systematic review of spoofing attacks against satellite downlink communications systems. We establish a threat model linking attack feasibility and impact to required budget through real-world experiments and channel simulations. Our results show that nearly all evaluated satellite systems were overshadowable at a distance of 1 km in the worst case, for a budget of ~2000 USD or less.
We evaluate how key challenges surrounding modulation schemes, antenna directionality, and legitimate satellite signal strength can be overcome in practice through antenna sidelobe targeting, overshadowing, and automatic gain control takeover. We also show that, surprisingly, protocols designed to be more robust against channel noise are significantly less robust against an overshadowing attacker. We conclude with a discussion of physical-layer countermeasures specifically applicable to satellite systems that cannot be cryptographically upgraded.
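The budget question reduces to a link-budget comparison: the attacker must deliver more power at the victim antenna than the legitimate satellite does. The sketch below uses the standard free-space path loss formula to estimate the attacker EIRP needed to overshadow a GEO Ku-band downlink from 1 km; the frequency, satellite EIRP, and overshadowing margin are illustrative assumptions, not measured values from the paper.

```python
import math

def fspl_db(d_km, f_mhz):
    # Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

def rx_dbw(eirp_dbw, d_km, f_mhz):
    # Received power at an isotropic antenna (rx gain and losses ignored).
    return eirp_dbw - fspl_db(d_km, f_mhz)

# Illustrative assumptions:
f = 11_700.0                                        # Ku-band downlink, MHz
sat = rx_dbw(eirp_dbw=52.0, d_km=38_000.0, f_mhz=f) # GEO satellite signal
margin = 10.0                                       # overshadowing margin, dB

# EIRP the attacker needs at 1 km to exceed the satellite by `margin`:
attacker_eirp = sat + margin + fspl_db(1.0, f)
print(f"satellite signal at ground: {sat:.1f} dBW")
print(f"attacker EIRP needed:       {attacker_eirp:.1f} dBW "
      f"({10 ** (attacker_eirp / 10) * 1e3:.1f} mW)")
```

Because the satellite signal arrives over tens of thousands of kilometers while the attacker transmits over one, the required EIRP comes out in the milliwatt range under these assumptions, which is consistent with the low budgets reported above.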
Investigating the security of Wi-Fi devices often requires writing scripts that send unexpected or malformed frames, to subsequently monitor how the devices respond. Such tests generally use Linux and off-the-shelf Wi-Fi dongles. Typically, the dongle is put into monitor mode to get access to the raw content of received Wi-Fi frames and to inject, i.e., transmit, customized frames.
In this paper, we demonstrate that monitor mode on Linux may, unbeknownst to the user, silently alter injected Wi-Fi frames or even drop selected frames instead of sending them. We discuss cases where this causes security testing tools to misbehave, leading users to believe that a device under test is secure while in reality it is vulnerable to an attack. To remedy this problem, we create a script to test raw frame injection, and we extend the Radiotap standard to gain more control over frame injection. Our extension is now part of the Radiotap standard and has been implemented in Linux. We tested it using commercial Wi-Fi dongles and using openwifi, an open implementation of Wi-Fi on top of software-defined radios. With our improved setup, we reproduced tests for the KRACK and FragAttacks vulnerabilities, and discovered previously unknown vulnerabilities in three smartphones.
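To make the injection problem concrete, the sketch below builds the raw bytes such a test script hands to the driver: a minimal Radiotap header (version, pad, length, and an empty present-flags bitmap) followed by an 802.11 management frame. A deauthentication frame is used purely as an example, and the MAC addresses are invented; whether these bytes reach the air unmodified is exactly what the injection test must verify.

```python
import struct

def radiotap_header() -> bytes:
    # Minimal Radiotap header: u8 version (0), u8 pad, u16 length (LE),
    # u32 present-flags bitmap (LE, empty: no optional fields follow).
    return struct.pack("<BBHI", 0, 0, 8, 0)

def dot11_deauth(dst: bytes, src: bytes, bssid: bytes) -> bytes:
    # 802.11 management frame, subtype deauthentication (type 0, subtype 12).
    frame_ctrl = struct.pack("<H", 0x00C0)  # version 0, mgmt, deauth
    duration = struct.pack("<H", 0)
    seq_ctrl = struct.pack("<H", 0)         # sequence number, often rewritten
    reason = struct.pack("<H", 7)           # class-3 frame from nonassoc. STA
    return frame_ctrl + duration + dst + src + bssid + seq_ctrl + reason

broadcast = b"\xff" * 6
ap = bytes.fromhex("001122334455")          # invented BSSID
frame = radiotap_header() + dot11_deauth(broadcast, ap, ap)
print(len(frame), frame.hex())

# On Linux, the bytes would be injected on a monitor-mode interface
# (e.g. "wlan0mon", requires root) with a raw packet socket:
#   s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
#   s.bind(("wlan0mon", 0))
#   s.send(frame)
```

An injection test then captures the frame on a second monitor-mode device and compares the received bytes against the transmitted ones, revealing any fields the driver or firmware silently rewrote.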