## 1 Introduction

### 1.1 Problem statement

In this project, we propose using policy gradient methods to spoof devices in wireless networks by impersonating their wireless transmission fingerprints. Physical-layer authentication relies on detecting unique imperfections in the signals transmitted by radio devices in order to fingerprint and identify them. These imperfections originate in the analog components of a radio device and can differentiate radio devices even when their manufacturer and make/model are identical. Radio fingerprints are usually considered hard to reproduce or replay, because the replicating or replaying device suffers from its own impairments, which disturb the features in the RF fingerprint.

### 1.2 System Model

We consider a wireless environment with a set $T$ of $|T|$ transmitters authorized to transmit to a single receiver $R$. $R$ is equipped with a pre-trained neural network-based authenticator $D_{R}$ that uses raw IQ samples of the received signals to make a binary authentication decision at the physical layer, indicating whether the signal under consideration is from an authorized transmitter. An adversarial transmitter $T_{A}$ wants to communicate with $R$, and it attempts to do so by impersonating one of the $|T|$ authorized transmitters.

In a wireless communication system, there are three main sources of non-linearity imparted on the intended transmitted signal: if $x(t)$ is the signal at the beginning of the transmitter chain, the signal at the end of the receiver chain has the form $y(t)=f_{R}(f_{C}(f_{T}(x(t))))$, where $f_{R}$, $f_{C}$ and $f_{T}$ are fingerprints introduced by the receiver hardware, the channel and the transmitter hardware respectively. Physical-layer wireless authentication systems in the literature are mostly designed to differentiate transmitters based on $f_{T}$ (for example, in [1]), so we emulate a similar setting in our approach. We assume that we can place an adversarial receiver $R_{A}$ close enough to $R$ that the channel from $T_{A}$ to $R_{A}$ is similar to the channel from the authorized transmitters to $R$; thus $R_{A}$ receives signals with an $f_{C}$ similar to that of the signals $R$ receives. Furthermore, as a simplification, in this project we assume that we can find an $R_{A}$ device with an $f_{R}$ similar to that of $R$, which is not unreasonable since high-quality wireless receivers of the same make/model exhibit a small variance in $f_{R}$ [2]. Effectively, then, $D_{R}$ and the adversary's discriminator $D$ (introduced below) will have learned to discriminate based on the same $f_{T}$.

Now $R_{A}$ builds a discriminator $D$ that tries to distinguish between the signals it receives from the authorized transmitters and those it receives from $T_{A}$. Note that $R_{A}$ has access to ground truth, since $T_{A}$ includes a flag in its transmitted signals. For each signal received from $T_{A}$, $R_{A}$ transmits its classification decision back to $T_{A}$ as feedback. $T_{A}$ tries to build a generator $G$ whose purpose is to distort the complex IQ samples of the input discrete-time signal $z(n)$ at $T_{A}$ such that, after transmission, the signal is classified as authorized at $D$. At convergence, $T_{A}$ should be able to generate signals good enough to fool $D_{R}$ (at $R$) into believing that they are from an authorized transmitter.

### 1.3 Proposed Solution

The idea is to model $D$ and $G$ as neural networks. $D$ takes the $N_{S}$ most recent samples of the sampled signal $y(n)$ and outputs a scalar value through a sigmoid that can be thresholded to obtain a binary decision on whether the signal is authorized. We also plan to attempt a variant with $|T|+1$ classes: $|T|$ classes corresponding to the authorized transmitters and a single class for unauthorized transmitters. Since $D$ is aware of the ground truth, it is straightforward to train it through gradient descent in a classical supervised fashion. However, this supervised method is not available for $G$, as there is no ground truth, so we propose to train it using policy gradient methods.
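To make the binary variant concrete, the following is a minimal sketch of a discriminator over windows of IQ samples. It uses a single-layer logistic model trained with cross-entropy; the actual network is a deeper neural network whose architecture is not specified here, and the class names and toy data below are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BinaryDiscriminator:
    """Logistic sketch of D: the N_S most recent complex IQ samples are
    flattened into 2*N_S real features (I and Q parts), and a sigmoid
    output gives P(authorized). A real D would be a deeper network."""

    def __init__(self, n_s, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = 0.01 * rng.standard_normal(2 * n_s)
        self.b = 0.0
        self.lr = lr

    def features(self, iq_window):
        return np.concatenate([iq_window.real, iq_window.imag])

    def predict(self, iq_window):
        """Probability that the window came from an authorized transmitter."""
        return sigmoid(self.features(iq_window) @ self.w + self.b)

    def train_step(self, iq_window, label):
        """One supervised step on binary cross-entropy (label is 0 or 1)."""
        x = self.features(iq_window)
        grad = sigmoid(x @ self.w + self.b) - label   # dBCE/dlogit
        self.w -= self.lr * grad * x
        self.b -= self.lr * grad

# Toy usage: authorized windows have positive mean I, unauthorized negative.
rng = np.random.default_rng(1)
disc = BinaryDiscriminator(n_s=16)
for _ in range(500):
    label = rng.integers(0, 2)
    mean = 1.0 if label else -1.0
    window = (mean + 0.3 * rng.standard_normal(16)) \
             + 1j * 0.3 * rng.standard_normal(16)
    disc.train_step(window, label)

auth = (1.0 + 0.3 * rng.standard_normal(16)) + 1j * 0.3 * rng.standard_normal(16)
print(disc.predict(auth) > 0.5)  # → True
```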

We can model the neural network $G$ as the policy $\pi$ of a Markov Decision Process.

The action $a_{n}\in\mathbb{R}^{2}$ is the output of the neural network $\pi(s_{n})$, where $s_{n}$ is the state at time $n$. The generator $G$ can then be viewed as the source of the signal $a(n)=a_{n}[0]+ja_{n}[1]$. The signal $a(n)$ is transmitted to the receiver to obtain $y(n)=f_{R}(f_{C}(f_{T}(a(n))))$. The reward $r_{n}$ at time $n$ is the feedback from $D$ based on the $N_{S}$ most recent received samples $\{y(n),y(n-1),\dots,y(n-N_{S}+1)\}$. Depending on the main type of distortion that the impersonator tries to mimic, different definitions of the state $s_{n}$ can be used.
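For concreteness, one interaction step can be sketched as follows, using the Saleh power-amplifier model (which we also use in our evaluation, Sec. 2) as $f_{T}$ and treating $f_{C}$ and $f_{R}$ as identity. The default coefficients are commonly used values from the literature, not measurements.

```python
import numpy as np

def saleh_pa(x, a_a=2.1587, b_a=1.1517, a_p=4.0033, b_p=9.1040):
    """Saleh AM/AM and AM/PM power-amplifier non-linearity, applied
    sample-wise; it plays the role of f_T here. The default coefficients
    are commonly used values from the literature."""
    r = np.abs(x)
    gain = a_a / (1.0 + b_a * r**2)                # AM/AM compression
    phase = a_p * r**2 / (1.0 + b_p * r**2)        # AM/PM rotation
    return gain * x * np.exp(1j * phase)

# One interaction step under a perfect channel (f_C and f_R = identity):
a_n = np.array([0.5, -0.25])       # action a_n from the policy pi(s_n)
tx = a_n[0] + 1j * a_n[1]          # complex sample a(n)
y_n = saleh_pa(tx)                 # received sample y(n) = f_T(a(n))
```

Note the compressive behavior: large-amplitude inputs come out with reduced gain, which is exactly the kind of sample-wise fingerprint the generator must learn to pre-distort.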

1. The state is a vector $s_{n}=\left[\text{Re}\{z(n)\},\text{Im}\{z(n)\}\right]$ containing the real and imaginary parts of the most recent sample of the signal $z(n)$. This is applicable when the distortion of each sample is independent of the other samples; for example, the distortion imparted by the power amplifier in the RF chain has this property [1].

2. The state is a sequence $s_{n}=\{\left[\text{Re}\{z(n)\},\text{Im}\{z(n)\}\right],a_{n-1},\dots,a_{n-N_{P}}\}$ containing the current sample and the $N_{P}$ most recent actions, where $N_{P}$ is a hyperparameter. This state is applicable when the distortion of the most recent sample depends on both the current sample and previously transmitted samples.

3. The state is $s_{n}=\{\left[\text{Re}\{z(n)\},\text{Im}\{z(n)\}\right],H_{n-1}\}$, where $H_{n-1}$ is the hidden state of $G(s_{n-1})$ when $G$ is modeled as a recurrent neural network. This state can in theory capture any type of non-linearity.
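The first two state definitions can be sketched directly; the third needs no explicit construction because the RNN generator carries the history in its hidden state. This is a minimal illustration with hypothetical helper names.

```python
import numpy as np

def state_v1(z_n):
    """Definition 1: only the current sample (memoryless distortion)."""
    return np.array([z_n.real, z_n.imag])

def state_v2(z_n, past_actions, n_p):
    """Definition 2: current sample plus the N_P most recent actions
    (finite-memory distortion). past_actions is a list of 2-vectors."""
    return np.concatenate([[z_n.real, z_n.imag],
                           np.asarray(past_actions[-n_p:]).ravel()])

# Definition 3 needs no explicit history: the recurrent generator keeps
# it in its hidden state H_{n-1}.
s1 = state_v1(1 + 2j)                                 # array([1., 2.])
s2 = state_v2(1 + 2j, [np.array([0.1, 0.2]),
                       np.array([0.3, 0.4])], n_p=2)  # length-6 vector
```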

Assume we have collected a trajectory $\tau$ defined as a sequence of states, actions, and rewards, $\{s_{0},a_{0},r_{0},s_{1},a_{1},r_{1},\dots,s_{T}\}$. Now, the goal is to tune the parameters $\theta$ (weights and biases) of $G$:

 $\displaystyle\operatorname*{maximize}_{\theta}\ \mathbb{E}_{\tau}[r_{n}\,|\,\theta]$ (1)

To solve this problem, we can use a policy gradient method: we repeatedly estimate the gradient of the policy’s performance with respect to its parameters and use that to update its parameters. To estimate the gradients, we will use a score function gradient estimator. With the introduction of a baseline $b(s)$ to reduce variance, an estimate $\hat{g}$ for $\nabla_{\theta}\mathbb{E}_{\tau}[r_{n}|{\theta}]$ is [3]

 $\displaystyle\nabla_{\theta}\mathbb{E}_{\tau}[r_{n}|\theta]\approx\hat{g}=\sum_{n=0}^{T-1}\nabla_{\theta}\log\pi_{\theta}(a_{n}|s_{n})\left(r_{n}-b(s_{n})\right)$ (2)

Now the policy update can be performed with stochastic gradient ascent, $\theta\leftarrow\theta+\epsilon\hat{g}$. This process is repeated for a number of iterations, collecting a new trajectory at each iteration, in an algorithm termed Vanilla Policy Gradient, until $G$ converges to a satisfactory state. The algorithm also allows the baseline $b(s)$ to be trained along with $\theta$ [3].
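The update loop can be illustrated on a toy one-dimensional problem, assuming a Gaussian policy with a fixed standard deviation and a running-average baseline. The actual generator is an LSTM over IQ samples; this sketch only demonstrates the score-function update of Eq. (2).

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0       # single policy parameter: mean of the Gaussian policy
sigma = 0.5       # fixed exploration noise
eps = 0.05        # step size epsilon
baseline = 0.0    # running-average baseline b(s)

# Toy objective: the reward peaks when the action is near 2.0.
for _ in range(300):
    g_hat, rewards = 0.0, []
    for _ in range(10):                              # one short trajectory
        a = theta + sigma * rng.standard_normal()    # sample a ~ pi_theta
        r = -(a - 2.0) ** 2                          # reward from environment
        score = (a - theta) / sigma**2               # grad_theta log pi_theta(a)
        g_hat += score * (r - baseline)
        rewards.append(r)
    theta += eps * g_hat / 10                        # stochastic gradient ascent
    baseline = 0.9 * baseline + 0.1 * np.mean(rewards)

print(f"theta = {theta:.2f}")  # close to 2.0
```

The baseline subtraction does not change the expected gradient, but it visibly reduces the variance of the per-trajectory estimate, which is why we include $b(s)$ in Eq. (2).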

In our case, $N_{S}$ symbols of the signal have to be processed (perturbed) by the generator before transmission, so a reward $r_{n}$ is not immediately available for every state $s_{n}$. To estimate $r_{n}$, we propose performing a Monte-Carlo search from $s_{n}$ to the final state using a roll-out policy $G_{\beta}$, which may or may not be equal to $G$ [4].
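This reward estimate can be sketched in the style of SeqGAN [4]; the roll-out policy and discriminator below are stand-in functions, not the trained models.

```python
import numpy as np

def mc_reward(prefix, n_s, rollout_policy, discriminator, k=8):
    """Estimate the reward r_n for a partially generated signal by
    completing it k times with the roll-out policy G_beta and averaging
    the discriminator's scores (SeqGAN-style Monte-Carlo search)."""
    scores = []
    for _ in range(k):
        tail = rollout_policy(n_s - len(prefix))   # complete to N_S samples
        full = np.concatenate([prefix, tail])
        scores.append(discriminator(full))         # P(authorized)
    return float(np.mean(scores))

# Toy usage; the roll-out policy and discriminator are stand-ins.
rng = np.random.default_rng(0)
rollout = lambda m: rng.standard_normal(m) + 1j * rng.standard_normal(m)
disc = lambda sig: 1.0 / (1.0 + np.exp(-np.mean(sig.real)))
r = mc_reward(np.zeros(4, dtype=complex), n_s=16,
              rollout_policy=rollout, discriminator=disc)
print(0.0 <= r <= 1.0)  # → True
```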

## 2 Experimental Evaluation

For a preliminary evaluation of our proposed approach, we made the following assumptions:

1. $R_{A}$ is absent, and $R$ itself provides a scalar value denoting the probability that the signal from $T_{A}$ is authorized. This value is used to construct the reward.
2. The channels between $T$, $T_{A}$ and $R$ are perfect (there are no channel effects).
3. The transmitter fingerprint is due to the power amplifier non-linearity, modeled by the Saleh model [1].
4. We use state definition 3 from Sec. 1.3.
5. $G_{\beta}$ is initialized to $G$ and is periodically updated to match $G$.

To model the generator we used a simple LSTM recurrent neural network that outputs the mean and covariance of a two-dimensional Gaussian distribution, from which we sample to obtain the action and which we also use to compute the action probability when calculating gradients. We also explored three possible discriminator architectures:

1. Binary discriminator (BDisc) - a single sigmoid output.
2. Multi-class discriminator (MDisc) - $|T|+1$ classes, the last one representing an outlier.
3. One-vs-All discriminator (OvA) - a model with $|T|$ binary discriminator networks, each predicting whether the input belongs to a given transmitter.

Although OvA should perform better in theory, the binary discriminator was the most stable and was used for this evaluation; we expect to fine-tune the architecture and hyper-parameters of OvA for future iterations. To train BDisc, we assumed there were 5 known unauthorized transmitters whose signals were used as negative samples. It achieved an average testing accuracy of 80% (over both authorized and impersonator signals) and an impersonator rejection accuracy of 95%.

To begin with, the generator was initialized to act like an autoencoder, so that it adds no perturbation (this can be done by training with an MSE loss). When training the generator using the proposed approach, the main obstacle was stability. As shown in Fig. 3, on some occasions it converged to a state where it fooled the discriminator with 100% accuracy (possible because of the perfect-channel assumption); on most occasions, however, it failed to converge at all. We hope to address this issue in the following ways:

1. Gradient clipping - clip gradients of large magnitude.
2. Limiting the action space of the generator - this is important because in practice each sample of a signal must be restricted to a certain region of the complex plane to remain decodable by the receiver.
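Both stabilization measures can be sketched as simple projections; the norm bound and the radius below are hypothetical design parameters, not tuned values.

```python
import numpy as np

def clip_gradient(g, max_norm=1.0):
    """Rescale the policy-gradient estimate g_hat when its norm exceeds
    max_norm (a hypothetical bound)."""
    norm = np.linalg.norm(g)
    return g * (max_norm / norm) if norm > max_norm else g

def clip_action(a, z, radius=0.1):
    """Project the perturbed sample a back into a disc of the given
    radius around the clean sample z, so the symbol stays decodable
    (the radius is a hypothetical design parameter)."""
    delta = a - z
    mag = abs(delta)
    return z + delta * (radius / mag) if mag > radius else a

# clip_action(1.5 + 0j, 1.0 + 0j) -> (1.1+0j): pulled back to the boundary.
```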

The current code for this project can be found at https://github.com/samurdhilbk/siggan.

## References

• [1] S. S. Hanna and D. Cabric (2019) Deep learning based transmitter identification using power amplifier nonlinearity. In 2019 International Conference on Computing, Networking and Communications (ICNC), pp. 674–680.
• [2] K. Sankhe, M. Belgiovine, F. Zhou, L. Angioloni, F. Restuccia, S. D'Oro, T. Melodia, S. Ioannidis, and K. Chowdhury (2020) No radio left behind: radio fingerprinting through deep learning of physical-layer hardware impairments. IEEE Transactions on Cognitive Communications and Networking 6 (1), pp. 165–178.
• [3] J. Schulman (2016) Optimizing expectations: from deep reinforcement learning to stochastic computation graphs. Ph.D. Thesis, EECS Department, University of California, Berkeley.
• [4] L. Yu, W. Zhang, J. Wang, and Y. Yu (2017) SeqGAN: sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence.