This is a summary of a paper presented at COSERA 2018; the arXiv link and full abstract are at the end. The authors are Thomas Feuillen, Chunlei Xu, Luc Vandendorpe, and Laurent Jacques.
Why 1-bit ADCs?
- There is a trade-off for ADCs between price, sampling rate, and resolution. Being able to reduce the resolution therefore allows for more flexibility when choosing ADCs.
- The amount of data to be processed is increasing everywhere, and radar is no exception. Reducing the bit-rate transmitted to the processing unit can be interesting in multi-sensor applications.
So quantizing to a lower resolution might be a way to tackle these issues.
Before showing our results, let's go through the system model, the quantization, and the reconstruction algorithm.
System Model:
This paper focuses on 2-D localization of targets according to their ranges \(R\) and angles of arrival \(\theta\). We consider an FMCW radar with two receiving antennas separated by a spacing \(d\).
The FMCW radar transmits a chirp that spans a bandwidth \(B\) at a center frequency \(f_0\).
The received signals at the two antennas follow the linear model
$$\boldsymbol{\Gamma} = [\boldsymbol{\Gamma}_0, \boldsymbol{\Gamma}_1] = \boldsymbol{\Phi}\, [\boldsymbol{x}, \boldsymbol{G} \boldsymbol{x}],$$
where \(\boldsymbol{\Gamma}_0\) and \(\boldsymbol{\Gamma}_1\) are the two full-resolution received signals coming from \(K\) targets and the measurement matrix \(\boldsymbol{\Phi}\) is a Fourier matrix: \({\boldsymbol{\Phi}= \{ e^{-\sf{i} \frac{4\pi}{c} f_m R_n }\}_{mn} \in \mathbb{C} ^{M\times N}}\).
This model shows that the supports of the two vectors are identical and that the angle-of-arrival information is encoded in the phase difference represented by \(\boldsymbol{G}=\text{diag}(e^{\sf{i} \frac{2 \pi}{c} f_0 d \sin \theta_1},\,\cdots, e^{\sf{i} \frac{2 \pi}{c} f_0 d \sin \theta_N})\), with \(\text{supp}(\boldsymbol{\theta})= \text{supp}(\boldsymbol{x})\).
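To make this model concrete, here is a minimal NumPy sketch of how the two received signals could be synthesized. The numerical values (carrier, bandwidth, grid sizes, half-wavelength spacing) and the variable names are illustrative assumptions of mine, not the paper's exact parameters.

```python
import numpy as np

c = 3e8                       # speed of light [m/s]
f0, B = 24e9, 250e6           # carrier frequency and bandwidth (illustrative values)
M, N = 512, 256               # number of measurements / range cells
d = c / (2 * f0)              # antenna spacing, assumed half a wavelength

f_m = f0 + np.linspace(0, B, M)              # frequency samples of the chirp
R_n = np.linspace(0, 153, N)                 # range grid [m]

# Fourier measurement matrix Phi_{mn} = exp(-i * 4*pi/c * f_m * R_n)
Phi = np.exp(-1j * 4 * np.pi / c * np.outer(f_m, R_n))

def g_diag(theta):
    """Diagonal of G: per-cell phase shift encoding the angle of arrival."""
    return np.exp(1j * 2 * np.pi / c * f0 * d * np.sin(theta))

# K = 2 targets: sparse amplitude vector x and a matching vector of angles
x = np.zeros(N, dtype=complex)
theta = np.zeros(N)
x[40], theta[40] = 1.0, np.deg2rad(20)       # strong target
x[120], theta[120] = 0.5, np.deg2rad(-35)    # weak target

Gamma0 = Phi @ x                             # full-resolution signal, antenna 0
Gamma1 = Phi @ (g_diag(theta) * x)           # antenna 1: same support, phase-shifted
```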
Quantization Model
The chosen quantizer is a uniform quantizer applied to both the real and imaginary parts of the signal:
$$\lambda \in \mathbb{R} \mapsto \mathcal{Q}(\lambda) = \delta \left\lfloor \frac{\lambda}{\delta}\right\rfloor + \frac{\delta}{2}$$
The parameter \(\delta\) is set so that the quantizer effectively behaves as a \(\text{sign}\) operator: $$\delta>2|\boldsymbol \lambda|_\infty$$
The end result is that only the quadrant in which the signal lies is recorded.
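A minimal sketch of this quantizer (the names are mine): with \(\delta\) larger than twice the peak amplitude, every sample lands in one of the two bins touching zero, so only the signs of the real and imaginary parts, i.e. the quadrant, survive.

```python
import numpy as np

def quantize(z, delta):
    """Uniform quantizer Q(l) = delta * floor(l / delta) + delta / 2,
    applied separately to the real and imaginary parts."""
    q = lambda l: delta * np.floor(l / delta) + delta / 2
    return q(np.real(z)) + 1j * q(np.imag(z))

rng = np.random.default_rng(0)
z = rng.standard_normal(8) + 1j * rng.standard_normal(8)

delta = 2 * np.max(np.abs(z)) + 1e-9         # delta > 2 * max|z|: sign-like behaviour
zq = quantize(z, delta)

# Every quantized sample equals (+-delta/2) + i*(+-delta/2): only the quadrant is kept.
print(np.all(np.sign(zq.real) == np.sign(z.real)))   # True
print(np.all(np.sign(zq.imag) == np.sign(z.imag)))   # True
```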
Does this quantization introduce unrecoverable losses?
When encountering harsh situations, such as low \(SNR\), one default solution is to take more measurements to compensate; another is to use a better reconstruction algorithm. Unfortunately, for 1-bit radar there are cases where these common remedies are not sufficient.
Ambiguities might arise in cases where the two different full-resolution signals coming from the receiving antennas are identical once quantized:
$$\mathcal{Q}( \boldsymbol{\Phi}\boldsymbol{x})=\mathcal{Q}(\boldsymbol{\Phi} \boldsymbol{G} \boldsymbol{x})$$
For small angles of arrival:
Considering one target, one obvious case is when the angle of arrival is so small that \(\boldsymbol G \rightarrow Id\).
For larger angles of arrival:
The signal model shows that, for \(K=1\), the full-resolution signal is simply a phasor rotating at a rate that depends on the range. Certain ranges induce phase shifts between measurements that limit the number of distinct measurement values. For example, in the picture on the right, the considered range is \(R=\frac{R_{max}}{2}\), giving a phase shift of \(\pi\) between measurements. The signal of the first receiving antenna then takes only two different values, even with an infinitely large \(M\). The second antenna receives the same signal with a phase shift that depends on the angle.
It is easy to see that, in both of these cases, once quantized the information encoded in \(\boldsymbol G\) has completely vanished, regardless of its actual value. This induces a floor on the reconstruction error that does not depend on \(M\).
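A quick numerical illustration of the \(R=\frac{R_{max}}{2}\) case (with arbitrary values of my own): the full-resolution signal only takes two values, and a whole interval of angles of arrival then produces exactly the same 1-bit measurements, no matter how large \(M\) is.

```python
import numpy as np

M = 512
m = np.arange(M)
phi0 = np.pi / 4                              # arbitrary common phase, inside a quadrant
gamma0 = np.exp(1j * (phi0 + np.pi * m))      # antenna 0: phase step of pi per measurement

def one_bit(z):
    """Sign quantizer: keeps only the quadrant of each complex sample."""
    return np.sign(z.real) + 1j * np.sign(z.imag)

for theta_deg in [0.0, 5.0, 10.0]:
    g = np.exp(1j * np.pi * np.sin(np.deg2rad(theta_deg)))  # G for half-wavelength spacing
    gamma1 = g * gamma0                       # antenna 1: global phase shift only
    print(theta_deg, np.array_equal(one_bit(gamma1), one_bit(gamma0)))
# Prints True for every angle: Q(Gamma_1) == Q(Gamma_0), as if theta were 0,
# so taking more measurements M cannot restore the lost angle information.
```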
Dithering
The previous section showcases the limitation of this quantization: the complete lack of measurement diversity that it can generate.
This lack of diversity comes from the fact that the thresholds used for the quantization are static, thus always producing the same measurements.
If the quantization boundaries instead change at each sample, the quantizer becomes:
$$\mathcal Q( \boldsymbol \Phi \boldsymbol x) \Rightarrow \text{A}(\cdot)=\mathcal{Q}(\boldsymbol \Phi \cdot + \boldsymbol \xi )$$
where \(\boldsymbol \xi\) is a dithering vector. As long as it satisfies \(\boldsymbol \xi \in \mathbb{C}^{M}\) with \(\xi_i=\xi_i^\mathbb{R}+\sf{i}\,\xi_i^\mathbb{I}\) and \(\xi_i^\mathbb{R}, \xi_i^\mathbb{I} \sim \mathcal{U}(-\frac{\delta}{2},\frac{\delta}{2})\), it helps retain more information from the signals after quantization. The idea behind it is that \(\text{E}[\mathcal{Q}(\boldsymbol \Phi \boldsymbol x+ \boldsymbol \xi )]=\boldsymbol \Phi \boldsymbol x\).
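A tiny sketch of that unbiasedness property (the numbers are arbitrary): averaged over the dither, the quantizer output recovers the unquantized value, whereas the static quantizer returns the same level no matter what.

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 4.0
lam = 0.73                                   # one real-valued sample, |lam| < delta/2

def Q(l):
    return delta * np.floor(l / delta) + delta / 2

# Static quantizer: always the same output, far from lam.
print(Q(lam))                                # 2.0 (= delta/2) for every trial

# Dithered quantizer: add xi ~ U(-delta/2, delta/2) before quantizing, then average.
xi = rng.uniform(-delta / 2, delta / 2, size=100_000)
print(np.mean(Q(lam + xi)))                  # ~0.73, i.e. E[Q(lam + xi)] = lam
```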
The acquisition schemes can be represented in hardware as:
Reconstruction Algorithm
The reconstruction algorithm used in this paper is Projected Back Projection (PBP):
$$ \hat{\boldsymbol X} = \mathcal H_K(\frac{1}{M} \boldsymbol \Phi^* \boldsymbol Z) $$
- \(\boldsymbol Z\) is the matrix of dithered quantized measurements coming from the two antennas.
- \(\mathcal H_K(\cdot)\) projects onto the set of \(K\)-sparse vectors sharing a common support.
- Angle Estimation: Using \(\hat{\boldsymbol X}\), \(\theta_n = \arcsin\big(\frac{c}{2\pi f_0 d} \angle(\hat x_1[n]^* \hat x_0[n])\big)\).
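Below is a minimal NumPy sketch of PBP and the angle estimation, reusing the matrices and quantizer sketched earlier. The support-selection rule written here (largest joint energy across the two antennas) is my reading of the operator \(\mathcal H_K\), and the conjugation order in the angle formula is chosen to match the sign convention of \(\boldsymbol G\) in the first sketch; both are assumptions rather than the paper's exact implementation.

```python
import numpy as np

def pbp(Z, Phi, K):
    """Projected Back Projection: back-project the quantized measurements, then
    keep the K range cells with the largest joint energy (common support)."""
    M = Phi.shape[0]
    X = (Phi.conj().T @ Z) / M               # back projection, one column per antenna
    energy = np.sum(np.abs(X) ** 2, axis=1)
    support = np.argsort(energy)[-K:]        # joint support across both antennas
    X_hat = np.zeros_like(X)
    X_hat[support] = X[support]
    return X_hat, support

def estimate_angles(X_hat, support, f0, d, c=3e8):
    """theta_n = arcsin( c / (2*pi*f0*d) * phase of antenna 1 relative to antenna 0 )."""
    phase = np.angle(np.conj(X_hat[support, 0]) * X_hat[support, 1])
    return np.arcsin(np.clip(c / (2 * np.pi * f0 * d) * phase, -1, 1))

# Example usage with the earlier sketches (Gamma0, Gamma1, quantize, delta, Phi):
# xi = rng.uniform(-delta/2, delta/2, (M, 2)) + 1j * rng.uniform(-delta/2, delta/2, (M, 2))
# Z = quantize(np.stack([Gamma0, Gamma1], axis=1) + xi, delta)
# X_hat, supp = pbp(Z, Phi, K=2)
# print(np.rad2deg(estimate_angles(X_hat, supp, f0, d)))
```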
This simple algorithm has several advantages. First, it has only one iteration; in fact, it looks like the first iteration of a more complex algorithm. Second, the dithering is not used explicitly in the reconstruction; the algorithm simply benefits from the new properties that the dithering gives to the quantized signals. Lastly, PBP used in conjunction with a uniform quantizer and a uniform dithering has been proven to have a reconstruction bound:
$$\|\hat{\boldsymbol X} - \boldsymbol X\|_F = O\big((1+\delta)\sqrt{{K}/{M}}\big)$$
that decreases with the number of measurements \(M\).
This is what, at the moment, sets this algorithm apart from other algorithms that may perform better in simulations but come with no theoretical bound.
Simulation results
Set-up
Monte Carlo runs were performed on a 2-D grid spanning \(-\frac{\pi}{2}\) to \(\frac{\pi}{2}\) in angle and \(0\) m to \(153\) m in range. Each run picks a random cell of this grid for each target and creates the corresponding signals. One target has an amplitude of 1 (the strong one) and the other \(K-1\) have amplitudes drawn uniformly between 0 and 1 (the weak ones). The metric (in m) used is the following:
$$\min_k |R e^{\sf{i} \theta}-\hat{R}_k e^{\sf{i} \hat{\theta}_k}|_2$$
This is simply the distance in meters between the considered target and the closest of the \(K\) estimated targets. The simulations are noiseless and the dimension of \(\boldsymbol{x}\) is \(N=256\).
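For concreteness, a short sketch of this metric (the helper name is mine): the target and the estimates are placed in the plane via their polar coordinates, and the error is the distance to the closest estimate.

```python
import numpy as np

def localization_error(R, theta, R_hat, theta_hat):
    """min_k | R*exp(i*theta) - R_hat[k]*exp(i*theta_hat[k]) |, in meters."""
    target = R * np.exp(1j * theta)
    estimates = np.asarray(R_hat) * np.exp(1j * np.asarray(theta_hat))
    return np.min(np.abs(target - estimates))

# Example: true target at (80 m, 10 deg), two estimated targets.
print(localization_error(80, np.deg2rad(10),
                         [79.4, 120.0], [np.deg2rad(11), np.deg2rad(-30)]))  # ~1.5 m
```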
For \(K=1\)
The number of measurements \(M\) is set to 512. This might seem odd at first, as compressive sensing usually steers the number of measurements towards \(M \ll N\). However, the resolution has now dropped to 1 bit, reducing the total amount of information recorded. The bit-rate for this case is \(BR= 1 \times 512\). Compared to full resolution (here considered as 32-bit, i.e., computer resolution) the bit-rate reduction is \(BRR=\frac{32 \times 256 - 1 \times 512}{32 \times 256 }=93.75\%\).
As expected, the scheme without any dithering exhibits artifacts in the range domain.
The dithered scheme shows good performance without any artifacts.
For \(K=2\)
The following graphs show the mean performance regardless of the power of the target.
The fourth graph is the benchmark performed with no quantization and full measurements (\(BRR=0\%\)). The first three graphs are constrained to the same bit-rate, giving \(M=512\) for the 1-bit schemes and \(M=16\) for the 32-bit scheme. It is quite clear that, at this low bit-rate, the trade-off between resolution and number of measurements highlights the gain of the proposed scheme compared to the full-resolution one.
There is a noticeable drop in the mean performance as the number of targets increases. Studying the performance for each target separately gives:
No Dithering:
The strongest target is estimated accurately and with significantly fewer artifacts than previously. In fact, the weak target acts as a dither, helping the reconstruction of the strong one, but its own localization is not achieved.
Dithering:
The dithered scheme achieves, for the strongest target, performance quite similar to the single-target case. The weaker target is now consistently estimated, albeit with lower accuracy. It is important to note that, since PBP does not use the dithering explicitly, it has no way to differentiate the signal from the added noise, hence the results for the weaker target.
Measurement results
The previous results were obtained with noiseless, ideal measurements. We therefore decided to confront our results with the non-idealities inherent to radar measurements.
To do so, we used a commercial 24 GHz FMCW radar placed in an anechoic chamber in our laboratories, in front of two corner reflectors at different ranges and angles. The non-idealities in this scene are the noise, the non-point-like shape of the targets, and of course the imperfections of the radar itself.
The dithering is scaled from no dithering to the full dynamic range (\(\alpha \in [0,\dots,1]\)). The graph above shows the mean localization error for various scalings of \(\boldsymbol{\xi}\). For \(\alpha \ge 0.2\), the dithering improves the performance. However, as \(\alpha\) increases above 0.6, the localization accuracy decreases because of the noise already present. This hints at a more intricate interaction between the dithering and the system noise.
These results translate into the correct estimation of only one target (in blue) without any dithering, and of both targets with the optimally scaled dithering (\(\alpha=0.45\)), albeit with higher variance in the estimated positions.
Conclusion & Future works
We saw that 2-D localization of targets using a 1-bit FMCW radar is achievable. The developed theory is in agreement with the simulations and the measurements, especially with regard to the limitations of not using any dithering. The measurements also cast light on the interaction between the dithering and the model noise, which will be investigated further in future works.
Abstract:
We present a novel scheme allowing for 2D target localization using highly quantized 1-bit measurements from a Frequency Modulated Continuous Wave (FMCW) radar with two receiving antennas. Quantization of radar signals introduces localization artifacts; we remove this limitation by inserting a dithering on the unquantized observations. We then adapt the projected back projection algorithm to estimate both the range and angle of targets from the dithered quantized radar observations, with provably decaying reconstruction error when the number of observations increases. Simulations are performed to highlight the accuracy of the dithered scheme in noiseless conditions when compared to the non-dithered and full 32-bit resolution under severe bit-rate reduction. Finally, measurements are performed using a radar sensor to demonstrate the effectiveness and performances of the proposed quantized dithered scheme in real conditions.