Penetrating Physical Layer Wireless Authentication with a Generative Adversarial Attack

1 Introduction

1.1 Problem statement

In this project, we propose the use of policy gradient methods to spoof devices in wireless networks by impersonating their wireless transmission fingerprints. Physical-layer authentication relies on detecting unique imperfections in the signals transmitted by radio devices in order to fingerprint and identify them. These imperfections arise in the analog components of a radio device and can differentiate devices even when they are of the same make and model. Radio fingerprints are usually considered hard to reproduce or replay, because the replicating or replaying device suffers from its own impairments, which disturb the features of the RF fingerprint.

1.2 System Model

We consider a wireless environment in which a set T of |T| authorized transmitters transmit to a single receiver R. R is equipped with a pre-trained neural-network-based authenticator D_R that uses raw IQ samples of the received signals to make a binary authentication decision at the physical layer, i.e., whether the signal under consideration is from an authorized transmitter. An adversarial transmitter T_A wants to communicate with R, and it attempts to do so by impersonating one of the |T| authorized transmitters.

In a wireless communication system, there are three main sources of non-linearities imparted on the intended transmitted signal: if x(t) is the signal at the beginning of the transmitter chain, the signal at the end of the receiver chain has the form y(t) = f_R(f_C(f_T(x(t)))), where f_R, f_C and f_T are the fingerprints introduced by the receiver hardware, the channel, and the transmitter hardware, respectively. Physical-layer wireless authentication systems in the literature are mostly designed to differentiate transmitters based on f_T (for example, in [1]), so we try to emulate a similar setting in our approach. We assume that we can place an adversarial receiver R_A close enough to R that the channel from T_A to R_A is similar to the channel from T to R, so R_A receives signals with an f_C similar to that of the signals R receives. Furthermore, as a simplification, we assume in this project that we can find an R_A device with an f_R similar to that of R, which is not unreasonable since high-quality wireless receivers of the same make/model exhibit a smaller variance in f_R [2]. This means that, effectively, D_R and the discriminator D that R_A builds (introduced below) will have learned to discriminate based on the same f_T.
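To make the fingerprint chain concrete, the following is a minimal numpy sketch of y(t) = f_R(f_C(f_T(x(t)))), assuming f_T is a Saleh-model power amplifier (the model adopted later in Sec. 2, here with commonly quoted illustrative parameter values), f_C is a flat AWGN channel, and f_R is ideal; all three are simplifications for illustration rather than the exact impairments of any real device.

import numpy as np

def f_T(x, alpha_a=2.1587, beta_a=1.1517, alpha_p=4.0033, beta_p=9.1040):
    # Saleh-model AM/AM and AM/PM distortion (illustrative parameter values)
    r = np.abs(x)
    amp_out = alpha_a * r / (1 + beta_a * r**2)          # AM/AM: output amplitude
    phase_shift = alpha_p * r**2 / (1 + beta_p * r**2)   # AM/PM: added phase (radians)
    return amp_out * np.exp(1j * (np.angle(x) + phase_shift))

def f_C(x, snr_db=20.0):
    # Flat channel with additive white Gaussian noise
    noise_power = np.mean(np.abs(x) ** 2) / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (np.random.randn(*x.shape) + 1j * np.random.randn(*x.shape))
    return x + noise

def f_R(x):
    # Receiver chain taken as ideal in this sketch
    return x

x = np.exp(1j * 2 * np.pi * np.random.rand(1024))        # unit-modulus test signal x(t)
y = f_R(f_C(f_T(x)))                                     # received IQ samples y(t)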

Now R_A builds a discriminator D that tries to distinguish between the signals it receives from T and the signals it receives from T_A. Note that R_A has access to the ground truth, since T_A includes a flag in its transmitted signals. For each received signal from T_A, R_A transmits its classification decision back to T_A as feedback. T_A tries to build a generator G whose purpose is to distort the complex IQ samples of the input discrete-time signal z(n) at T_A such that, after it is transmitted, it is classified as authorized at D. At convergence, T_A should be able to generate signals that are good enough to fool D_R (at R) into believing that they are from T.

Figure 1: Effect of changing ϵ on the fooling rate at different SNRs
Figure 2: Effect of changing β on the fooling rate at different ϵ (Note: x-axis is in a log10 scale)

1.3 Proposed Solution

The idea is to model D and G as neural networks. D takes the N_S most recent samples of the sampled signal y(n) and outputs, through a sigmoid, a scalar value that can be thresholded to obtain a binary decision on whether the signal is authorized. We also plan to experiment with |T|+1 classes, where |T| classes correspond to the |T| authorized transmitters and a single class corresponds to non-authorized transmitters. Since D is aware of the ground truth, it is straightforward to train D through gradient descent in a classical supervised-learning fashion. However, this supervised method is not available for G, as there is no ground truth, so we propose to train it using policy gradient methods.
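A minimal sketch of the binary variant of D, under assumed layer sizes (the exact architecture used in Sec. 2 may differ): it consumes the N_S most recent IQ samples as a 2 × N_S real tensor and outputs a sigmoid authorization probability, which can be trained with binary cross-entropy against the ground-truth flag.

import torch
import torch.nn as nn

class BinaryDiscriminator(nn.Module):
    def __init__(self, n_samples: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3),   # I and Q as two input channels
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * n_samples, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        # iq: (batch, 2, N_S) real-valued IQ window; returns P(authorized) in (0, 1)
        return torch.sigmoid(self.net(iq))

# Ground-truth labels (from the flag T_A embeds) allow standard supervised training:
D = BinaryDiscriminator(n_samples=128)
p_auth = D(torch.randn(4, 2, 128))                        # batch of 4 received windows
loss = nn.BCELoss()(p_auth, torch.ones(4, 1))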

We can model the neural network G as the policy π of a Markov Decision Process.

The action a_n ∈ ℝ² is the output of the neural network π(s_n), where s_n is the state at time n. The generator G can then be viewed as the source of the signal a(n) = a_n[0] + j·a_n[1]. The signal a(n) is transmitted to the receiver to obtain y(n) = f_R(f_C(f_T(a(n)))). The reward r_n at time n is the feedback from D based on {y(n), y(n-1), …, y(n-N_S)}. Depending on the main type of distortion that the impersonator tries to mimic, different definitions of the state s_n can be used, as listed below (a sketch of these constructions follows the list).

  1. The state is a vector s_n = [Re{z(n)}, Im{z(n)}] containing the real and imaginary parts of the most recent sample of the signal z(n). This is applicable when the distortion of each sample is independent of the other samples; for example, the distortion imparted by the power amplifier in the RF chain has this property [1].

  2. The state is a sequence of length N_P, s_n = {[Re{z(n)}, Im{z(n)}], a_{n-1}, …, a_{n-N_P-1}}, where N_P is a hyperparameter. This is applicable when the distortion of the most recent sample depends on both the current sample and previously transmitted samples.

  3. The state is s_n = {[Re{z(n)}, Im{z(n)}], H_{n-1}}, where H_{n-1} is the hidden state of G(s_{n-1}) when G is modeled as a recurrent neural network. This state can, in theory, apply to any type of non-linearity.
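As a concrete illustration of the three state definitions above, the following numpy sketch builds each candidate s_n; the array shapes, the hidden-state size, and the toy data are assumptions made only for this example.

import numpy as np

def state_memoryless(z, n):
    # Definition 1: only the current sample of z(n)
    return np.array([z[n].real, z[n].imag])

def state_with_history(z, actions, n, NP):
    # Definition 2: current sample plus the N_P most recent past actions (assumes n >= N_P)
    past = [actions[n - k] for k in range(1, NP + 1)]
    return np.concatenate([[z[n].real, z[n].imag], np.ravel(past)])

def state_recurrent(z, n, H_prev):
    # Definition 3: current sample plus the generator's previous hidden state
    return np.concatenate([[z[n].real, z[n].imag], np.ravel(H_prev)])

# Toy usage: a random complex signal, past actions, and a hidden-state vector
z = np.exp(1j * np.random.rand(32))
actions = [np.random.randn(2) for _ in range(32)]
s1 = state_memoryless(z, 5)
s2 = state_with_history(z, actions, 5, NP=3)
s3 = state_recurrent(z, 5, H_prev=np.zeros(8))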

Assume we have collected a trajectory τ, defined as a sequence of states, actions, and rewards, {s_0, a_0, r_0, s_1, a_1, r_1, …, s_T}. Now, the goal is to tune the parameters θ (weights and biases) of G:

$$\operatorname*{maximize}_{\theta}\;\; \mathbb{E}_{\tau}\!\left[\, r_n \mid \theta \,\right] \tag{1}$$

To solve this problem, we can use a policy gradient method: we repeatedly estimate the gradient of the policy’s performance with respect to its parameters and use that to update the parameters. To estimate the gradients, we will use a score-function gradient estimator. With the introduction of a baseline b(s) to reduce variance, an estimate ĝ for ∇_θ 𝔼_τ[r_n | θ] is [3]

$$\nabla_{\theta}\, \mathbb{E}_{\tau}\!\left[\, r_n \mid \theta \,\right] \approx \hat{g} = \sum_{n=0}^{T-1} \nabla_{\theta} \log \pi_{\theta}(a_n \mid s_n)\,\big(r_n - b(s_n)\big) \tag{2}$$

Now, the policy update can be performed with stochastic gradient ascent, θ ← θ + ϵ·ĝ. This process can be repeated for a number of iterations, with a trajectory collected in each iteration, in an algorithm termed Vanilla Policy Gradient, until G converges to a satisfactory state. This algorithm also allows b(s) to be trained along with θ [3].
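A minimal PyTorch sketch of one Vanilla Policy Gradient step implementing (2), assuming the policy returns a torch Distribution over actions and the baseline b(s) is a small value network trained jointly; ascent is realized by minimizing the negated objective. The toy policy, baseline, and data at the end exist only to make the snippet self-contained and are not the networks used in our experiments.

import torch
import torch.nn as nn

def vanilla_pg_step(policy, baseline, optimizer, states, actions, rewards):
    dist = policy(states)                        # assumed to return a torch Distribution
    logp = dist.log_prob(actions).sum(-1)        # log pi_theta(a_n | s_n) per step
    b = baseline(states).squeeze(-1)             # baseline b(s_n)
    advantage = rewards - b.detach()             # (r_n - b(s_n)); baseline treated as constant
    pg_loss = -(logp * advantage).sum()          # minimizing this performs ascent on (1)
    baseline_loss = ((b - rewards) ** 2).mean()  # fit b(s) to the observed rewards
    optimizer.zero_grad()
    (pg_loss + baseline_loss).backward()
    optimizer.step()

# Toy policy/baseline and data, only to make the snippet runnable
class GaussianPolicy(nn.Module):
    def __init__(self, state_dim=2, action_dim=2):
        super().__init__()
        self.mean = nn.Linear(state_dim, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))
    def forward(self, s):
        return torch.distributions.Normal(self.mean(s), self.log_std.exp())

policy, baseline = GaussianPolicy(), nn.Linear(2, 1)
opt = torch.optim.Adam(list(policy.parameters()) + list(baseline.parameters()), lr=1e-3)
s = torch.randn(10, 2)
a = policy(s).sample()
r = torch.rand(10)
vanilla_pg_step(policy, baseline, opt, s, a, r)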

In our case, N_S samples of the signal have to be processed (perturbed) by the generator before transmission, and hence a reward r_n is not immediately available for every state s_n. To estimate r_n, we propose to perform a Monte-Carlo search from s_n to the final state, using a roll-out policy G_β, which may or may not be equal to G [4].

Input: Generator policy G_θ, roll-out policy G_β, discriminator D_R, set of signals 𝒮_A from T_A
Output: G_θ
Initialize G_θ and G_β
Pretrain G_θ using MSE on 𝒮_A
β ← θ
for i = 1, …, K do
        for g = 1, …, g_steps do
                Generate a sequence {s_1, a_1, s_2, a_2, …, s_Γ, a_Γ} ∼ G_θ
                for t = 1, …, Γ do
                        Estimate r_t by a Monte-Carlo roll-out with G_β and the feedback from D_R (Sec. 1.3)
                end for
                Compute the gradient estimate ĝ using (2)
                Update G_θ by stochastic gradient ascent, θ ← θ + ϵ·ĝ
        end for
        β ← θ
end for
Algorithm 1: Generative adversarial attack with a cooperative R
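For reference, the following is a runnable skeleton of Algorithm 1's control flow only: the generator, discriminator score, and parameter update are deliberately simplistic stand-ins (not the networks described above), and what the sketch illustrates is the g-step loop, the per-step Monte-Carlo roll-out with G_β, and the periodic β ← θ synchronization.

import copy
import numpy as np

rng = np.random.default_rng(0)
SEQ_LEN, K, G_STEPS = 16, 5, 2

class ToyGenerator:
    def __init__(self):
        self.theta = rng.normal(size=2)              # stand-in for network weights
    def act(self, state):
        return state + self.theta + 0.01 * rng.normal(size=2)

def disc_score(signal):
    return 1.0 / (1.0 + np.exp(-signal.mean()))      # stand-in for D_R's P(authorized)

def update(gen, rewards):
    gen.theta += 0.01 * np.mean(rewards) * rng.normal(size=2)  # placeholder for the update in (2)

g_theta = ToyGenerator()                             # (MSE pretraining on S_A omitted in this sketch)
g_beta = copy.deepcopy(g_theta)                      # beta <- theta

for i in range(K):
    for _ in range(G_STEPS):
        z = rng.normal(size=(SEQ_LEN, 2))            # input samples as [Re, Im]
        actions = [g_theta.act(z[n]) for n in range(SEQ_LEN)]
        rewards = []
        for t in range(SEQ_LEN):
            # Monte-Carlo roll-out: G_beta completes the remaining samples, D scores the block
            completed = actions[: t + 1] + [g_beta.act(z[n]) for n in range(t + 1, SEQ_LEN)]
            rewards.append(disc_score(np.stack(completed)))
        update(g_theta, rewards)
    g_beta = copy.deepcopy(g_theta)                  # periodic roll-out policy sync, beta <- theta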

2 Experimental Evaluation

For a preliminary evaluation of our proposed approach, we made the following assumptions: 1. R_A is absent, and R itself provides a scalar value denoting the probability that the signal from T_A is authorized; this is used to construct the reward. 2. The channel between T, T_A, and R is perfect (there are no channel effects). 3. The transmitter fingerprint is due to the power-amplifier non-linearity, modeled by the Saleh model [1]. 4. We use state definition 3 in Sec. 1.3. 5. G_β is initialized to G and is periodically updated to match G.

To model the generator we used a simple LSTM recurrent neural network that outputs the mean and covariance of a two-dimensional Gaussian distribution, which we sample to obtain the action and also use to compute the action probability (when calculating gradients). We also explored three possible discriminator architectures: 1. a binary discriminator (BDisc) with a single sigmoid output; 2. a multi-class discriminator (MDisc) with |T|+1 classes, the last one representing an outlier; and 3. a one-vs-all discriminator (OvA), a model with |T| binary discriminator networks, each predicting whether the input belongs to a given transmitter. Although OvA should perform better in theory, the binary discriminator was the most stable and was used for this evaluation; we expect to fine-tune the architecture and hyper-parameters of OvA for future iterations. To train BDisc, we assumed there were 5 known unauthorized transmitters, whose signals were used as negative samples. It achieved an average testing accuracy of 80% (over both authorized and impersonator signals) and an impersonator rejection accuracy of 95%.
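A sketch of such a generator, with assumed layer sizes and a diagonal covariance (the exact configuration in our experiments may differ): at each step the LSTM consumes the current state and emits the mean and standard deviation of a two-dimensional Gaussian, from which the perturbed IQ sample is drawn and whose log-probability feeds the gradient estimate in (2).

import torch
import torch.nn as nn
from torch.distributions import Normal

class LSTMGenerator(nn.Module):
    def __init__(self, state_dim: int = 2, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.mean_head = nn.Linear(hidden, 2)        # mean of the [Re, Im] perturbed sample
        self.log_std_head = nn.Linear(hidden, 2)     # log std-dev (diagonal covariance)

    def forward(self, states, hidden=None):
        # states: (batch, time, state_dim); hidden carries H_{n-1} across calls
        out, hidden = self.lstm(states, hidden)
        dist = Normal(self.mean_head(out), self.log_std_head(out).exp())
        return dist, hidden

# One step: sample an action and keep its log-probability for the update in (2)
gen = LSTMGenerator()
dist, h = gen(torch.randn(1, 1, 2))                  # state [Re{z(n)}, Im{z(n)}]
action = dist.sample()                               # perturbed IQ sample as [Re, Im]
log_prob = dist.log_prob(action).sum(-1)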

Figure 3: An example of a training episode where the generator was able to converge

To begin with, the generator was initialized to act like an autoencoder, so that it adds no perturbation (this can be done by training it with an MSE loss). When training the generator using the proposed approach, the main obstacle was stability. As shown in Fig. 3, on some occasions it converged to a state where it fooled the discriminator 100% of the time (this is possible because of the perfect-channel assumption). However, on most occasions it failed to converge at all. We hope to address this issue in the following ways: 1. gradient clipping (clipping gradients of large magnitude); 2. limiting the action space of the generator, which is important because in practice each sample of a signal must be restricted to a certain region of the complex plane to remain decodable by the receiver. A sketch of both stabilizers is given below.
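Both stabilizers can be sketched in a few lines of PyTorch; the norm threshold and the per-sample bound MAX_OFFSET below are hypothetical design parameters, not values we have validated.

import torch
import torch.nn as nn

MAX_OFFSET = 0.25   # assumed per-sample limit on the perturbation

def clip_and_step(model: nn.Module, optimizer, loss: torch.Tensor):
    # Backpropagate, clip gradients by global norm, then apply the update
    optimizer.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()

def limit_action(raw_action: torch.Tensor, reference_iq: torch.Tensor):
    # Squash the action so the transmitted sample stays within MAX_OFFSET of the undistorted z(n)
    return reference_iq + MAX_OFFSET * torch.tanh(raw_action - reference_iq)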

The current code for this project can be found at https://github.com/samurdhilbk/siggan.

References

  • [1] S. S. Hanna and D. Cabric (2019) Deep learning based transmitter identification using power amplifier nonlinearity. In 2019 International Conference on Computing, Networking and Communications (ICNC), pp. 674–680.
  • [2] K. Sankhe, M. Belgiovine, F. Zhou, L. Angioloni, F. Restuccia, S. D’Oro, T. Melodia, S. Ioannidis, and K. Chowdhury (2020) No radio left behind: radio fingerprinting through deep learning of physical-layer hardware impairments. IEEE Transactions on Cognitive Communications and Networking 6 (1), pp. 165–178.
  • [3] J. Schulman (2016) Optimizing expectations: from deep reinforcement learning to stochastic computation graphs. Ph.D. Thesis, EECS Department, University of California, Berkeley.
  • [4] L. Yu, W. Zhang, J. Wang, and Y. Yu (2017) SeqGAN: sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence.