In this second blog post we present a work submitted to RadarConf2019, in which the range recovery of targets from highly quantized data is studied. Compared to the previous post (presented at COSERA2018), we study more deeply the trade-off between the number of measurements and their resolution for a fixed bit-rate, or bit budget, i.e., a comparison between quantity and quality.

\(\newcommand{\bs}{\boldsymbol} \)\(\newcommand{\bb}{\mathbb} \)\(\newcommand{\ts}{\textstyle}\)\(\newcommand{\cl}{\mathcal}\)\(\newcommand{\im}{{\sf i}\mkern1mu}\)In applications with a high number of sensors or high sampling rate, reducing the amount of data is often necessary in order to transfer or to process it. Instead of reducing the number of samples, we study how to reduce their resolution, lowering the number of bits necessary to represent them.

The publication is entitled Quantity over Quality: Dithered Quantization for Compressive Radar Systems, by Thomas Feuillen, Chunlei Xu, Jérôme Louveaux, Luc Vandendorpe, and Laurent Jacques. The abstract and the arXiv link are given at the end. The radar measurements exploited in this publication will soon be released in open access as well.

Radar System Model

In this publication we only consider static targets. We use a Frequency Modulated Continuous Wave (FMCW) radar with one transmitting and one receiving antenna. The transmitting antenna emits a signal whose frequency follows a saw-tooth pattern:

$$  s(t)=\sqrt{P_t}\exp\big(\im 2\pi (\int_{0}^t   f_c(\xi)\, \mathrm{d} \xi) +\im \phi_0\big) $$

where the frequency pattern is defined as \( f_c(t)=f_0 + B\,(\frac{t}{T} \bmod 1)\), with \(f_0 \) the central frequency, \(\bmod \) the modulo operator, \(T \) the duration of one ramp, and \(B\) the spanned bandwidth. Note that, in practical applications, \(B \) is not a design parameter but rather a constraint imposed by government regulations.

Considering at first a single target at range \(R>0\), the received signal is a time-delayed version of \(s(t)\), i.e.,

$$ r(t) = A s(t-\tau_0),$$

where \( \tau_0=\frac{2R}{\sf c}\), with \(\sf c\) the speed of light. Continuous wave radars use coherent demodulation between \(r(t)\) and \(s(t)\). The base-band received signal can therefore be expressed as:

$$ r(t)  =A \exp\big(-\im 2\pi \tau_0 f_c(t) \big)$$

This expression shows that the received signal is a complex exponential whose frequency is related to the range \(R\). The coherent demodulation allows us to express this time delay as a carrier frequency difference \(\frac{B}{T}\tau_0\) thanks to the chirp function \( f_c(t)\), represented hereafter.

We sample \(r(t)\) at the receiver with a sampling period \(T/N\) for some integer \(N\), i.e., at time instants \(t_m := m (T/N),\ m\in \mathbb{Z}\), which gives

$$ r[m]=A \exp\big(-\im 2\pi f_m \frac{2R}{\sf c} \big)$$

This sampling defines a grid with a step size \(\Delta_f=\frac{B}{N}\) on which the sampled carriers \(f_m\) lie. This, in turn, defines the range resolution \({\sf c}/(2B)\) and the maximum range \(R_{\rm max} := {\sf c} N/(2B)\) at which \(R\) can be estimated.
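
As a purely illustrative example, with a hypothetical bandwidth \(B = 250\,\rm MHz\) and \(N = 256\) samples per chirp, the range resolution is \({\sf c}/(2B) = 0.6\,\rm m\) and the maximum range is \(R_{\rm max} \approx 153.6\,\rm m\).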

Assuming a purely additive model, which means that all targets are in line of sight and there are no interactions between them, the received signal coming from \(K\) targets can be expressed as:

$$ r[m]=\sum_{k=1}^K A_k \exp\big(-\im 2\pi f_m \frac{2R_k}{\sf c} \big)$$

The received signal is simply a sum of complex exponentials whose frequencies depend on the respective target ranges.

Traditional Compressive Sensing solutions for radar systems often aim at subsampling \(M \ll N\) time samples among \(\{t_m : 1 \le m \le N\}\); in other words, they seek an under-sampled model with a number of measurements \(M\) lower than the dimension \(N\) of the signal of interest, while still offering a good estimation of the range profile. In Quantized Compressive Sensing, however, the goal is not to reduce the number of measurements but rather to reduce the total number of bits used to represent the whole measurement frame. This amounts to minimizing the bit-rate \(\cl B = b \times M\), where \(b\) is the resolution of each measurement in bits. In our setting, \(M\) can be larger than \(N\), which means that we record the signal over several chirps (as represented in the figure above).
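
For instance, a hypothetical bit budget of \(\cl B = 2^{10}\) bits can be spent either on \(M=1024\) one-bit measurements or on \(M=32\) measurements quantized with 32 bits each.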

In order to keep the acquisition time as low as possible, \(\lfloor \frac{M}{N} \rfloor \) full chirps are sampled, with the remaining \(M-N\lfloor \frac{M}{N} \rfloor \) measurements taken at random on the last chirp.

We can recast the previous signal model as a linear system:

$$ \bs r = \bs \Phi \bs a= \bs F_{\Omega}^* \bs a$$

In this model, \(\bs a = (a_1,\,\cdots, a_N)^\top\) is a sparse vector representing the range profile of \(K\) targets at ranges in \(\cl R := \{R_n := n ({\sf c}/{2 B}),\ {1 \leq n \leq N}\}\), with \(\|\bs a\|_0 := |\text{supp} (\bs{a})| \le K \). Simply put, the amplitude \(a_n \neq 0 \) if there is a target in the \(n^{\rm th} \) range bin \(R_n \). The entries of the measurement matrix \(\bs \Phi\) are defined as \(\Phi_{m,n}=e^{-\im 2 \pi f_m \frac{2 R_n}{\sf c}}\), consistently with the signal model above. Given this structure, the system can also be seen as measuring the Fourier transform of \(\bs a\), so that \(\bs \Phi= \bs F_{{\Omega}}^*\), with \(\bs F_{{\Omega}}^*\) a restriction of the Fourier matrix to the sampled frequencies (the index set \(\Omega\) possibly containing repetitions when \(M>N\)).
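
As an illustration, here is a minimal numpy sketch of this linear model; the values of \(N\), \(M\) and \(K\) are arbitrary placeholders, and the sign convention follows the sampled signal \(r[m]\) above.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256          # number of range bins (grid size)
M = 512          # number of measurements (here two full chirps, so M > N)
K = 3            # number of targets

# Sampled carrier indices: full chirps plus random samples on the last one
full_chirps = M // N
leftover = M - N * full_chirps
idx = np.concatenate([np.tile(np.arange(N), full_chirps),
                      np.sort(rng.choice(N, leftover, replace=False))])

# Measurement matrix Phi = F_Omega^*: each row is a sampled Fourier atom
n = np.arange(N)
Phi = np.exp(-2j * np.pi * np.outer(idx, n) / N)

# K-sparse range profile with random complex amplitudes
a = np.zeros(N, dtype=complex)
support = rng.choice(N, K, replace=False)
a[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)

r = Phi @ a      # full-resolution (noiseless) measurements
```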

Quantization: Model and Ambiguity

Now that the full resolution system has been introduced, let’s look at how we want to quantize the radar measurements.

Radar signals lie in the complex domain, which in hardware means that they are represented by two channels (I and Q). The quantization is performed on both channels:

$$ \mathcal{Q}_{b}^\mathbb{C}(\bs r)= \mathcal{Q}_{b}(\Re(\bs r))+\im \mathcal{Q}_{b}(\Im(\bs r))$$

Here, \(\mathcal{Q}_{b}(\lambda) \) is a rescaled floor operator defined as:

$$\ts \mathcal{Q}_{b}(\lambda) := \delta \lfloor\frac{\lambda}{\delta}\rfloor +\frac{\delta}{2},\quad \forall \lambda \in \bb R.$$

where \(\delta\) is the quantization bin width. An ADC has a dynamic range \([-\Delta,\Delta]\) (in volts) over which the signal is represented with \(b\) bits, so the bin width is \(\delta= 2^{1-b}\Delta\). We assume \(\Delta>\|\bs r\|_{\infty}\), so that no saturation occurs and the only distortion comes from the quantization.
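
A minimal Python sketch of this quantizer, assuming \(\Delta\) is chosen large enough to avoid saturation (the function names are ours):

```python
import numpy as np

def quantize(x, b, Delta):
    """Uniform mid-rise quantizer with 2**b levels over [-Delta, Delta]."""
    delta = 2.0 ** (1 - b) * Delta          # bin width: delta = 2^(1-b) * Delta
    return delta * np.floor(x / delta) + delta / 2

def quantize_complex(r, b, Delta):
    """Quantize the I and Q channels separately, as in Q_b^C above."""
    return quantize(r.real, b, Delta) + 1j * quantize(r.imag, b, Delta)
```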

Is this quantization lossless?

Usually, when facing poor performance in a classical full-resolution setting, for example due to low SNR, one solution is to increase the acquisition time (i.e., the number of measurements) or to use a better algorithm to tackle the noise. In the context of quantization, there exist cases where these remedies fail because the relevant information has been discarded by the quantization, regardless of \(M\) or of the reconstruction algorithm.

This happens when, for all values of \(M\), two different full-resolution signals \(\bs r_0\) and \(\bs r_1\) are mapped to the same bits once quantized, i.e.,

$$\mathcal{Q}_b^{\mathbb{C}}(\bs r_0)=\mathcal{Q}_b^{\mathbb{C}}(\bs r_1)$$

This renders the estimation process ambiguous.

To illustrate this, let us compare the projections through \(\bs F^*\) of a 1-sparse vector with a unit-amplitude target in the \(n_0^{\rm th}\) range bin, and of a 2-sparse vector whose entries are non-zero in \(n_0\) and \(n_1\) with amplitudes 1 and \(0<\gamma<1\), respectively, that is,

$$\begin{array}{rl}
r_{0}[m]&=e^{-\im\psi_{n_0}} e^{-\im 2\pi \frac{m n_0}{N}}\\
r_{1}[m]&=e^{-\im\psi_{n_0}}e^{-\im 2\pi \frac{m n_0}{N}}+\gamma e^{-\im\psi_{n_1}}e^{-\im 2\pi \frac{m n_1}{N}}
\end{array}$$

These two signals are represented in the graph below. Given that \(\gamma<1\), each sample \(r_1[m]\) lies in \(\mathbb{C}\) on a circle of radius \(\gamma\) centered on \(r_0[m]\).

For the 1-bit case, it is easy to see that the quantization will send \(\bs r_0\) and \(\bs r_1\) to the same bits if:

$$\min_{m}\, \min(|\Re(r_0[m])|,\, |\Im(r_0[m])|)\ >\ \gamma$$

In this case, the quantized observation \(\mathcal{Q}_b^{\mathbb{C}}(\bs r_1)\) of the 2-sparse vector is indistinguishable from that of the 1-sparse signal, i.e., \(\mathcal{Q}_b^{\mathbb{C}}(\bs r_0)\); the estimation cannot be performed correctly, regardless of the optimality of the algorithm, because the required information is simply gone. This shows that for certain configurations of \(\bs a\) the reconstruction quality is severely limited.

We propose to solve this by adding a complex dither before the quantization:

$$ \bs y = \mathcal{A}_b(\bs a) :=\mathcal{Q}_b^{\mathbb{C}}(\bs \Phi \bs a + \bs \xi)$$

where \(\bs \xi \in \mathbb{C}^{M}\), with \(\xi_i=\xi_i^\mathbb{R}+\im\,\xi_i^\mathbb{I}\) and \(\xi_i^\mathbb{R}, \xi_i^\mathbb{I} \sim \mathcal{U}(-\frac{\delta}{2},\frac{\delta}{2})\).

In essence, this random dithering allows us to recover more information from the quantized signal because the quantization thresholds now change at each sample, which prevents the ambiguities described above.
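
Continuing the sketch above (and reusing quantize_complex from the previous snippet), the dithered acquisition \(\mathcal{A}_b\) could look as follows; the dither is drawn once per acquisition and returned so that it can be stored when the algorithm needs it:

```python
import numpy as np

def dithered_measure(Phi, a, b, Delta, rng):
    """A_b(a) = Q_b^C(Phi @ a + xi), with a uniform dither over one quantization bin."""
    delta = 2.0 ** (1 - b) * Delta
    M = Phi.shape[0]
    xi = (rng.uniform(-delta / 2, delta / 2, M)
          + 1j * rng.uniform(-delta / 2, delta / 2, M))
    # quantize_complex is defined in the previous snippet
    return quantize_complex(Phi @ a + xi, b, Delta), xi
```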

More details on the benefits of this added dithering can be found in the previous post.

Reconstruction Algorithm

PBP

The first reconstruction algorithm is the Projected Back Projection method. This simple algorithm is defined as:

$$\ts \hat{\bs a} = \cl H_K \big(\frac{1}{M} \bs \Phi^* \bs y \big),$$

where \(\bs y\) is the quantized signal and \(\cl H_K\) is the hard-thresholding operator keeping the \(K\) entries of largest magnitude and setting the others to \(0\).
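
A minimal sketch of PBP built on the previous snippets; the \(1/M\) scaling follows the formula above, and only the estimated support matters for range recovery:

```python
import numpy as np

def hard_threshold(x, K):
    """H_K: keep the K largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-K:]
    out[keep] = x[keep]
    return out

def pbp(y, Phi, K):
    """Projected Back Projection: one back projection followed by H_K."""
    M = Phi.shape[0]
    return hard_threshold(Phi.conj().T @ y / M, K)
```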

This algorithm, when used in conjunction with a dither, was shown to have a reconstruction error bound that decays with \(M\):

$$ \|\bs a-\hat{\bs a}\| = \mathcal{O}(M^{-\frac{1}{2}}).$$

This means that, regardless of the sparse vector \(\bs a\), the added dither prevents the appearance of ambiguities and improves the estimation as \(M\) grows.

Note also that the dither is never used explicitly in this reconstruction. Indeed, PBP only benefits from the properties that dithering confers to the quantized signal, without actually knowing what happened at acquisition.

QIHT

To further improve on PBP, we also studied the quantized version of Iterative Hard Thresholding (IHT), i.e.,

$$\hat{\bs a}^{j+1} = \ts \mathcal{H}_K \big[ \hat{\bs a}^j+ \frac{\mu}{M} \bs \Phi^* \big(\bs y - \mathcal{A}_b(\hat{\bs a}^j ) \big)\big], \text{ with $\hat{\bs a}^0=0$}$$

Quantized Iterative Hard Thresholding (QIHT) enforces consistency between the measurements and the estimated \(K\)-sparse signal through several iterations, thus enhancing the estimate provided by PBP.
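
A sketch of the corresponding iteration, reusing quantize_complex and hard_threshold defined above; the step size \(\mu\) and the number of iterations are arbitrary illustrative choices:

```python
import numpy as np

def qiht(y, Phi, K, b, Delta, xi, mu=1.0, n_iter=50):
    """Quantized IHT: enforce consistency through the known-dither quantizer A_b."""
    M, N = Phi.shape
    a_hat = np.zeros(N, dtype=complex)
    for _ in range(n_iter):
        # Re-measure the current estimate through A_b (requires the recorded dither xi)
        y_hat = quantize_complex(Phi @ a_hat + xi, b, Delta)
        # Gradient-like step followed by hard thresholding
        a_hat = hard_threshold(a_hat + (mu / M) * (Phi.conj().T @ (y - y_hat)), K)
    return a_hat
```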

The possible gain of this enforced consistency comes at a price in the implementation. PBP can use a random noise source (such as a noise diode) that does not need to be known or recorded by the system, while QIHT requires explicit knowledge of the dither, which imposes new constraints on the hardware.

Numerical Results

We assessed the performance of the proposed scheme using Monte Carlo simulations.

For each run, we generate a \(K\)-sparse vector (\(K \in \{2,\dots,10\}\)) with random complex amplitudes; we then create multiple \(\bs \Phi\) matrices for different values of \(M\), with \(N=256\). The projected signal is then quantized at different resolutions, with and without dithering.

Recovering the ranges of the targets is tantamount to estimating the support of \(\bs a\). Accordingly, the metric used here is the True Positive Rate, defined as \({\rm TPR}=\frac{|\text{supp}(\bs a) \cap \text{supp} (\hat{\bs a})|}{K}\).
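
For completeness, this metric can be computed as follows:

```python
import numpy as np

def tpr(a_true, a_hat, K):
    """True Positive Rate: fraction of the K true target bins recovered in the estimate."""
    supp_true = set(np.flatnonzero(a_true))
    supp_hat = set(np.flatnonzero(a_hat))
    return len(supp_true & supp_hat) / K
```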

PBP with and without dithering

In the graphs below, we compare the dithered (solid), non-dithered (dashed), and non-quantized (dotted) schemes. The resolution is represented using colors: 1-bit in orange, 2-bit in green, and the non-quantized signal, assumed to be represented on 32 bits, in grey. The TPR is studied for different bit-rates, defined as \(\cl B=b \times M\) and represented on the graphs as \(\log_{2}(\cl B)\). The two vertical grey lines mark the bit-rates \(2^8\) and \(2^{13}\), for which the 1-bit (resp. 32-bit) quantization corresponds to \(M=256=N\) (resp. \(M=16\)) and \(M=8192\) (resp. \(M=256\)) measurements.

The following graphs are for \(K=2\) and \(K=10\). We can clearly see that the quantized schemes outperform the non-quantized one for bit-rates below \(2^{13}\) (\(M<256\) for the 32-bit scheme). We also see that the non-dithered 1-bit and 2-bit schemes saturate when the bit-rate reaches the regime where \(M\ge N\), whereas the dithered schemes have a TPR that keeps improving with \(M\). This is directly linked to the previously mentioned ambiguity: the harsh non-dithered quantization destroys too much information, effectively removing targets from the signal.

For \(K=10\) and for any given bit-rate, the performance of 1-bit PBP with dithering is significantly worse than that of the non-dithered scheme. As stated before, PBP has no way to separate the signal of interest (with possibly low-amplitude targets) from the added dither, which spans the same dynamic range. Seeing this as trying to recover a signal from its quantized projection at an SNR of \(0\,\rm dB\) under severe quantization, we clearly see the limit of PBP in this configuration.

PBP vs QIHT

We now study the gain provided by a known dither by comparing QIHT and PBP. The solid, dashed, and dotted lines correspond to the dithered, non-dithered, and non-quantized schemes, respectively. The red curves are for 1-bit QIHT and the purple ones for 1-bit PBP; again, the grey lines are the non-quantized 32-bit schemes (with triangle markers for IHT and round markers for PBP).

For \(K=2\), 1-bit dithered QIHT has a higher TPR than any other scheme regardless of the bit-rate. As for PBP, the non-dithered QIHT scheme also saturates for bit-rates where \(M\ge N\). Moreover, the limitation of dithered PBP at \(K=10\) is not present for QIHT: this low-resolution scheme is actually the best for a bit-rate as low as \(2^9\). This bit-rate corresponds to \(M=32\) for the non-quantized scheme and \(M=512\) for the highly quantized 1-bit case. In this case, quantity outweighs quality. Furthermore, this bit-rate represents a \(93.75\%\) data reduction compared to the classic high-resolution \(M=N\) acquisition process.
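
As a quick sanity check on this figure: \(2^9 = 512\) bits, against \(256 \times 32 = 8192\) bits for the full-resolution \(M=N\) acquisition, i.e., only \(6.25\%\) of the bits.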

Measurements in Laboratory

The previous simulations were performed in an ideal noiseless setting where the radar acquires a perfect Fourier transform of an environment consisting only of point-like targets. In reality, radar measurements always come with effects that deviate from this ideal linear model. We thus tested the resilience of the proposed method on actual radar measurements.

We used a 24 GHz FMCW radar from RfBeam and two target simulators from AMG microwave. This setup allows us to transmit a real FMCW radar waveform and to record the reflected signals coming from two makeshift targets.

Each simulator acts as a controllable delay line, simulating the delay of a target. The signal is captured by the simulator, then goes through a delay line before being radiated back to the radar. As the lengths of these lines are controlled, the setup is able to simulate a 2-sparse vector \(\bs a \) with a user-defined support. The received base-band signal is now affected not only by the simulated support but also by all the non-idealities coming from the radar hardware (such as IQ imbalance, RF non-linearities, …) that were not considered in the simulations.

196 runs with different generated \(\bs a\) were recorded. The performance in terms of TPR of 1-bit QIHT (in red) and PBP (in purple) is compared to the non-quantized schemes (in grey) for different bit-rates. We see exactly the same tendencies as those observed in the Monte Carlo simulations for 2-sparse signals. Both non-dithered schemes saturate, whereas the dithered ones keep improving as the bit-rate, and thus \(M\), increases. We also see that QIHT with dithering is the best scheme when low bit-rates are considered. Although this is only a first step among many, it shows that the proposed approach has potential in real-world applications.

Conclusion and Future Work

In this post, we saw that highly quantized radar signals can still be processed to recover the ranges of targets. We also showed that ambiguities may arise when no dithering is applied. We tested two greedy algorithms, Projected Back Projection and Quantized Iterative Hard Thresholding, on both synthetic data and real radar measurements. We also showed that there is a trade-off between the resolution and the number of measurements. At low bit-rates, this trade-off favors low-resolution schemes such as 1-bit dithered QIHT over classic full-resolution acquisition; in other words, quantity over quality.

Future work will consist of adding more complexity to the system model, such as noise or more antennas, and studying how this impacts an actual hardware implementation of a highly quantized and dithered FMCW radar.

Quantity over Quality: Dithered Quantization for Compressive Radar Systems

Thomas Feuillen, Chunlei Xu, Jérôme Louveaux, Luc Vandendorpe and Laurent Jacques

Arxiv Link
Abstract

In this paper, we investigate a trade-off between the number of radar observations (or measurements) and their resolution in the context of radar range estimation. To this end, we introduce a novel estimation scheme that can deal with strongly quantized received signals, going as low as 1-bit per signal sample. We leverage for this a dithered quantized compressive sensing framework that can be applied to classic radar processing and hardware. This allows us to remove ambiguous scenarios prohibiting correct range estimation from (undithered) quantized base band radar signal. Two range estimation algorithms are studied: Projected Back Projection (PBP) and Quantized Iterative Hard Thresholding (QIHT). The effectiveness of the reconstruction methods combined with the dithering strategy is shown through Monte Carlo simulations. Furthermore we show that: (i), in dithered quantization, the accuracy of target range estimation improves when the bit-rate (i.e., the total number of measured bits) increases, whereas the accuracy of other undithered schemes saturate in this case; and (ii), for fixed, low bit-rate scenarios, severely quantized dithered schemes exhibit better performances than their full resolution counterparts. These observations are confirmed using real measurements obtained in a controlled environment, demonstrating the feasibility of the method in real ranging applications.