diff --git a/biblio.bib b/biblio.bib
index 534b559..7390bc5 100644
--- a/biblio.bib
+++ b/biblio.bib
@@ -197,3 +197,35 @@ volume=67, number=5, month={May} }
+
+@inproceedings{carolina1,
+  title={Digital electronics based on red pitaya platform for coherent fiber links},
+  author={Olaya, Andrea Carolina C{\'a}rdenas and Micalizio, Salvatore and Ortolano, Massimo and Calosso, Claudio Eligio and Rubiola, Enrico and Friedt, Jean-Michel},
+  booktitle={2016 European Frequency and Time Forum (EFTF)},
+  pages={1--4},
+  year={2016},
+  organization={IEEE}
+}
+
+@article{carolina2,
+  title={Phase Noise and Frequency Stability of the Red-Pitaya Internal PLL},
+  author={Olaya, Andrea Carolina C{\'a}rdenas and Calosso, Claudio Eligio and Friedt, Jean-Michel and Micalizio, Salvatore and Rubiola, Enrico},
+  journal={IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control},
+  volume={66},
+  number={2},
+  pages={412--416},
+  year={2019},
+  publisher={IEEE}
+}
+
+@article{sherman,
+  title={Oscillator metrology with software defined radio},
+  author={Sherman, Jeff A and J{\"o}rdens, Robert},
+  journal={Review of Scientific Instruments},
+  volume={87},
+  number={5},
+  pages={054711},
+  year={2016},
+  publisher={AIP Publishing}
+}
+
diff --git a/ifcs2018_journal.tex b/ifcs2018_journal.tex
index c473f0f..7312321 100644
--- a/ifcs2018_journal.tex
+++ b/ifcs2018_journal.tex
@@ -8,7 +8,6 @@
% Gwen: can we build a real phase noise test bench with this FIR, i.e. add an ADC, NCO and mixer
% (Zedboard or Redpitaya)
-% add the "correct" pyramid
% label schema: check that "argue about the FIR cascade" has been done
\documentclass[a4paper,conference]{IEEEtran/IEEEtran}
@@ -53,7 +52,11 @@ of ultrastable clocks, stringent filtering requirements are defined by spurious
noise rejection needs. Since real time radiofrequency processing must be
performed in a Field Programmable Gate Array to meet timing constraints, we investigate
optimization strategies to design filters meeting rejection characteristics while limiting the hardware resources
-required and keeping timing constraints within the targeted measurement bandwidths.
+required and keeping timing constraints within the targeted measurement bandwidths. The
+presented technique is applicable to scheduling any sequence of processing blocks characterized
+by a throughput, resource occupation and performance tabulated as a function of configuration
+characteristics, as is the case for filters with their coefficients and resolution yielding
+rejection and number of multipliers.
\end{abstract}

\begin{IEEEkeywords}
@@ -99,7 +102,7 @@ data being processed.
\section{Finite impulse response filter}

-We select FIR filter for their unconditional stability and ease of design. A FIR filter is defined
+We select FIR filters for their unconditional stability and ease of design. A FIR filter is defined
by a set of weights $b_k$ applied to the inputs $x_k$ through a convolution to generate the
outputs $y_k$
\begin{align}
y_n=\sum_{k=0}^N b_k x_{n-k}
\end{align}

As opposed to an implementation on a general purpose processor in which word size is defined by the
-processor architecture, implementing such a filter on an FPGA offer more degrees of freedom since
+processor architecture, implementing such a filter on an FPGA offers more degrees of freedom since
not only the coefficient values and number of taps must be defined, but also the number of bits
defining the coefficients and the sample size.
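+To make the word growth issue concrete, the following Python sketch (our own
+illustration, not part of the published toolchain) applies the convolution above
+with integer taps and shows how many bits the accumulator needs as a function of
+the coefficient resolution:
+\begin{verbatim}
+import numpy as np
+
+def fir_fixed_point(x, b, shift):
+    # If x fits on pi_in bits and b on pi_c bits, each product needs
+    # about pi_in + pi_c bits: the output grows unless it is shifted.
+    y = np.convolve(x, b, mode="valid")  # full-precision accumulator
+    return y >> shift                    # drop shift low-order bits
+
+b = np.array([8, 32, 63, 32, 8])            # 7-bit coefficients
+x = np.random.randint(-2**15, 2**15, 1024)  # 16-bit input samples
+y = fir_fixed_point(x, b, shift=6)
+print(int(np.abs(y).max()).bit_length(), "bits at the output")
+\end{verbatim}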
For this reason, and because we consider pipeline processing (as opposed to
First-In, First-Out FIFO memory batch processing) of radiofrequency signals, High
Level Synthesis (HLS) languages \cite{kasbah2008multigrid} are not considered but
-the problem is tackled at the Very-high-speed-integrated-circuit Hardware Description Language (VHDL) level.
+the problem is tackled at the Very-high-speed-integrated-circuit Hardware Description Language
+(VHDL) level.
Since latency is not an issue in an open-loop phase noise characterization instrument,
the large number of taps in the FIR, as opposed to the shorter Infinite Impulse Response
(IIR) filter, is not considered as an issue as it would be in a closed-loop system.
@@ -151,27 +155,40 @@ current implementation: the decimation is assumed to be located after the FIR ca
moment.

\section{Methodology description}
-We want create a new methodology to develop any Digital Signal Processing (DSP) chain
-and for any hardware platform (Altera, Xilinx...). To do this we have defined an
-abstract model to represent some basic operations of DSP.
-
-For the moment, we are focused on only two operations: the filtering and the shifting of data.
-We have chosen this basic operation because the shifting and the filtering have already be studied in
-lot of works \cite{lim_1996, lim_1988, young_1992, smith_1998} hence it will be easier
-to check and validate our results.
-
-However having only two operations is insufficient to work with complex DSP but
-in this paper we only want demonstrate the relevance and the efficiency of our approach.
-In future work it will be possible to add more operations and we are able to
-model any DSP chain.
-
-We will apply our methodology on very simple DSP chain. We generate a digital signal
-thanks at generator of Pseudo-Random Number (PRN) or thanks at an Analog to Digital
-Converter (ADC). Once we have a digital signal, we filter it to decrease the noise level.
-Finally we stored some burst of filtered samples before post-processing it.
-In this particular case, we want optimize the filtering step to have the best noise
-rejection for constrain number of resource or to have the minimal resources
-consumption for a given rejection objective.
+
+Our objective is to develop a new methodology applicable to any Digital Signal Processing (DSP)
+chain obtained by assembling basic processing blocks, with hardware and manufacturer independence.
+Achieving such a target requires defining an abstract model to represent some basic properties
+of DSP blocks such as performance (i.e. rejection or ripples in the bandpass for filters) and
+resource occupation. These abstract properties, not necessarily related to the detailed hardware
+implementation of a given platform, will feed a scheduler solver aimed at assembling the optimum
+target, whether in terms of maximizing performance for a given arbitrary resource occupation, or
+minimizing resource occupation for a given performance. In our approach, the solution of the
+solver is then synthesized using the dedicated tool provided by each platform manufacturer
+to assess the validity of our abstract resource occupation indicator, and the result of running
+the DSP chain on the FPGA allows for assessing the performance of the scheduler. We emphasize
+that all solutions found by the solver are synthesized and executed on hardware at the end
+of the analysis.
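+As a minimal sketch of such an abstract model (the names and fields below are our
+own illustration, not the actual implementation), each block type reduces to a
+table of configurations mapping a parameter set to a performance figure and an
+abstract resource cost, which is all the solver needs:
+\begin{verbatim}
+from dataclasses import dataclass
+
+@dataclass(frozen=True)
+class BlockConfig:
+    # one tabulated configuration of a block, e.g. a FIR stage
+    params: tuple       # (taps C, coefficient bits pi_C)
+    performance: float  # e.g. stopband rejection in dB
+    area: int           # abstract resource occupation
+
+# illustrative values only: area follows C * (pi_C + pi_in)
+fir_configs = [
+    BlockConfig((21, 9), 40.0, 21 * (9 + 16)),
+    BlockConfig((37, 11), 60.0, 37 * (11 + 16)),
+]
+\end{verbatim}
+The scheduler then picks one configuration per stage under a global resource
+budget, independently of any manufacturer-specific hardware description.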
+
+In this demonstration, we focus on only two operations: filtering and shifting the number of
+bits needed to represent the data along the processing chain.
+We have chosen these basic operations because shifting and filtering have already been studied
+in the literature \cite{lim_1996, lim_1988, young_1992, smith_1998} providing a framework for
+assessing our results. Furthermore, filtering is a core step in any radiofrequency frontend
+requiring pipelined processing at full bandwidth for the earliest steps, including for
+time and frequency transfer or characterization \cite{carolina1,carolina2,sherman}.
+
+Addressing only two operations allows for demonstrating the methodology but should not be
+considered as a limitation of the framework which can be extended to assembling any number
+of skeleton blocks as long as performance and resource occupation can be determined. Hence,
+in this paper we will apply our methodology on simple DSP chains: a white noise input signal
+is generated using a Pseudo-Random Number (PRN) generator or by a radiofrequency-grade
+Analog to Digital Converter (ADC) loaded by a 50~$\Omega$ resistor. Once samples have been
+digitized at a rate of 125~MS/s, filtering is applied to qualify the processing block performance --
+practically meeting the radiofrequency frontend requirement of noise and bandwidth reduction
+by filtering and decimating. Finally, bursts of filtered samples are stored for post-processing,
+allowing us to assess either the filter rejection for a given resource usage, or to validate
+the rejection when implementing a solution minimizing resource occupation.

The first step of our approach is to model the DSP chain and since we only optimize
the filtering, we do not model the PRN generator or the ADC. The filtering can be
@@ -185,9 +202,10 @@ less resources. Hence in the case of cascaded filter, we define a stage as a fil
and a shifter (the shift could be omitted if we do not need to divide the filtered data).

\subsection{Model of a FIR filter}
-A cascade of filter are composed of $n$ stage. In stage $i$ ($1 \leq i \leq n$)
-the FIR has $C_i$ coefficients and each coefficients are integer values with $\pi^C_i$
-bits and the filtered data are shifted of $\pi^S_i$ bits. We define also $\pi^-_i$ as
+
+A cascade of filters is composed of $n$ FIR stages. In stage $i$ ($1 \leq i \leq n$)
+the FIR has $C_i$ coefficients and each coefficient is an integer value with $\pi^C_i$
+bits while the filtered data are shifted by $\pi^S_i$ bits. We also define $\pi^-_i$ as
the size of input data and $\pi^+_i$ as the size of output data. Figure~\ref{fig:fir_stage}
shows a filtering stage.
@@ -209,19 +227,23 @@ shows a filtering stage.
\label{fig:fir_stage}
\end{figure}

-FIR $i$ can reject $F(C_i, \pi_i^C)$ dB. $F$ is determined numerically.
-To measure this rejection, we use GNU Octave software to design FIR filter coefficients thanks to two
-algorithms (\texttt{firls} and \texttt{fir1}).
+FIR $i$ has been characterized through numerical simulation as able to reject $F(C_i, \pi_i^C)$ dB.
+This rejection has been computed using GNU Octave software FIR coefficient design functions
+(\texttt{firls} and \texttt{fir1}).
For each configuration $(C_i, \pi_i^C)$, we first create a FIR with floating point coefficients and a given
$C_i$ number of coefficients. Then, the floating point coefficients are discretized into integers.
In order to ensure that the coefficients are coded on $\pi_i^C$~bits effectively,
the coefficients are normalized by their absolute maximum before being scaled to
integer coefficients.
-At least one coefficient is coded on $\pi_i^C$~bits, and in practice only $b_{C_i/2}$ is coded on $\pi_i^C$~bits while the other are coded on very fewer bits.
+At least one coefficient is coded on $\pi_i^C$~bits, and in practice only $b_{C_i/2}$ is coded on $\pi_i^C$~bits while the others are coded on much fewer bits.

-With these coefficients, the \texttt{freqz} function is used to estimate the magnitude of the filter.
-Comparing the performance between FIRs requires however a unique criterion. As shown in figure~\ref{fig:fir_mag},
-the FIR magnitude exhibits two parts.
+With these coefficients, the \texttt{freqz} function is used to estimate the magnitude of the filter
+transfer function.
+Comparing the performance between FIRs requires however defining a unique criterion. As shown in figure~\ref{fig:fir_mag},
+the FIR magnitude exhibits two parts: we focus here on the transition width and the rejection rather than on the
+bandpass ripples as emphasized in \cite{lim_1988,lim_1996}.

\begin{figure}
+\begin{center}
+\scalebox{0.8}{
\centering
\begin{tikzpicture}[scale=0.3]
\draw[<->] (0,15) -- (0,0) -- (21,0) ;
@@ -249,38 +271,42 @@ the FIR magnitude exhibits two parts.
\draw[dashed] (12,8) -- (16,8) ;
\end{tikzpicture}
+}
+\end{center}
\caption{Shape of the filter transmitted power $P$ as a function of frequency $f$:
the passband is considered to occupy the initial 40\% of the Nyquist frequency range,
the stopband the last 40\%, allowing 20\% transition width.}
\label{fig:fir_mag}
\end{figure}

-In the transition band, the behavior of the filter is left free, we only care about the passband and the stopband.
-Our first criterion considers the mean value of the stopband rejection, as shown in figure~\ref{fig:mean_criterion}. This criterion does not work because we do not consider the shape of the passband.
-A second criterion considers the maximum rejection within the stopband minus the mean of the absolute value of passband rejection. With this criterion, the results are significantly improved as shown in figure~\ref{fig:custom_criterion}.
+In the transition band, the behavior of the filter is left free; we only care about the passband and the stopband characteristics.
+Our initial criterion considered the mean value of the stopband rejection, as shown in figure~\ref{fig:mean_criterion}. This criterion
+yields unacceptable results since notches overestimate the rejection capability of the filter. Furthermore, the losses within
+the passband are not considered and may become excessive for the overly wide transition widths introduced by filters with few coefficients.
+Such biases are compensated for by the second considered criterion, which computes the maximum rejection within the stopband minus the mean of the absolute value of the passband rejection. With this criterion, the results are significantly improved as shown in figure~\ref{fig:custom_criterion} and meet the expected rejection capability of low pass filters.
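+This computation can be transcribed compactly in Python, substituting
+scipy.signal for the Octave calls (band edges at 40\% and 60\% of Nyquist as
+above, and reading the maximum rejection as the worst-case, i.e. least
+attenuated, stopband magnitude -- a sketch of the procedure, not the exact code):
+\begin{verbatim}
+import numpy as np
+from scipy import signal
+
+def quantized_taps(n_taps, n_bits):
+    # low-pass design, then scale to n_bits-wide signed integers
+    b = signal.firls(n_taps, [0, 0.4, 0.6, 1], [1, 1, 0, 0])
+    b = b / np.abs(b).max()  # normalize by the absolute maximum
+    return np.round(b * (2**(n_bits - 1) - 1)).astype(int)
+
+def criterion(b_int):
+    # worst stopband magnitude minus mean absolute passband ripple
+    w, h = signal.freqz(b_int, worN=4096)
+    mag = 20 * np.log10(np.abs(h) / np.abs(h).max())
+    passband = mag[w < 0.4 * np.pi]
+    stopband = mag[w > 0.6 * np.pi]
+    return -(stopband.max() + np.abs(passband).mean())
+
+print(criterion(quantized_taps(37, 11)), "dB")
+\end{verbatim}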
\begin{figure}
\centering
\includegraphics[width=\linewidth]{images/colored_mean_criterion}
-\caption{Mean criterion comparison between monolithic filter and cascade filters}
+\caption{Mean stopband rejection criterion comparison between monolithic filter and cascaded filters}
\label{fig:mean_criterion}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{images/colored_custom_criterion}
-\caption{Custom criterion comparison between monolithic filter and cascade filters}
+\caption{Custom criterion (maximum rejection in the stopband minus the mean of the absolute value of the passband rejection)
+comparison between monolithic filter and cascaded filters}
\label{fig:custom_criterion}
\end{figure}

-Thanks to this criterion we are able to automatically generate lot of fir coefficients
-and estimate their rejection. The figure~\ref{fig:rejection_pyramid} exhibits the
-rejection in function of the number of coefficients and their number of bits.
-We can observe it looks like a pyramid so the edge represents the best
-coefficient set. Indeed if we choose a number of coefficients, increasing the number
-of bits over the edge will not improve the rejection. Conversely when we choose
-a number of bits, too much increase the number of coefficients will not improve
-the rejection. Hence the best coefficient set are on the edge of pyramid.
+Thanks to the latter criterion, which will be used in the remainder of this paper, we are able to automatically generate multiple FIR tap sets
+and estimate their rejection. Figure~\ref{fig:rejection_pyramid} exhibits the
+rejection as a function of the number of coefficients and the number of bits representing these coefficients.
+The surface, shaped as a pyramid, exhibits optimum configuration sets at the vertex where both edges meet.
+Indeed for a given number of coefficients, increasing the number of bits beyond the edge will not improve the rejection.
+Conversely, when setting a given number of bits, increasing the number of coefficients will not improve
+the rejection. Hence the best coefficient sets are on the vertex of the pyramid.

\begin{figure}
\centering
@@ -289,18 +315,19 @@ the rejection. Hence the best coefficient set are on the edge of pyramid.
\label{fig:rejection_pyramid}
\end{figure}

-Although we have a efficient criterion to estimate the rejection of one set of coefficient
-we have a problem when we sum two or more criterion. If the FIR filter coefficients are the same
-between the stage, we have:
+Although we have an efficient criterion to estimate the rejection of one set of coefficients (taps),
+we have a problem when we cascade filters and estimate the criterion as the sum of two or more individual criteria.
+If the FIR filter coefficients are the same between the stages, we have:
$$F_{total} = F_1 + F_2$$
-But when we choose two different set of coefficient, the previous equality are not
-true. The figure~\ref{fig:sum_rejection} illustrates the problem. The red and blue curves
-are two different filter coefficient and we can see that their maximum on the stopband
-are not at the same frequency. So when we sum the rejection criteria (the dotted yellow line)
-we do not meet the dashed yellow line. Define the rejection of cascaded filters
-is more difficult than just take the summation between all the rejection criteria of each filter.
-However this summation gives us an upper bound for rejection although in fact we obtain
-better rejection than expected.
+But selecting two different sets of coefficients will yield a more complex situation in which
+the previous relation is no longer valid as illustrated on figure~\ref{fig:sum_rejection}. The red and blue curves
+are two different filters with maxima and notches not located at the same frequency offsets.
+Hence when summing the transfer functions, the resulting rejection shown as the dashed yellow line is improved
+with respect to a basic sum of the rejection criteria shown as the dotted yellow line.
+Thus, estimating the rejection of filter cascades is more complex than taking the sum of all the rejection
+criteria of each filter. However, since this sum underestimates the rejection capability of the cascade,
+this lower bound is considered as a pessimistic yet acceptable criterion for deciding on the suitability
+of the filter cascade to meet design criteria.

\begin{figure}
\centering
@@ -309,12 +336,17 @@ better rejection than expected.
\label{fig:sum_rejection}
\end{figure}

-The first problem we address is to maximize the rejection under bounded silicon area
-and feasibility constraints. Variable $a_i$ is the area taken by filter~$i$
+Based on this analysis, we address the estimation of resource consumption (called
+silicon area -- meaning processing cells in the case of FPGAs) as a function of
+filter characteristics. As a reminder, we do not aim at matching actual hardware
+configuration but consider an arbitrary silicon area occupied by each processing function,
+and will assess after synthesis the adequacy of this arbitrary unit with respect to actual
+hardware resources provided by FPGA manufacturers. The sum of individual processing
+unit areas is constrained by a total silicon area representative of FPGA global resources.
+Formally, variable $a_i$ is the area taken by filter~$i$
(in arbitrary unit). Variable $r_i$ is the rejection of filter~$i$ (in dB).
Constant $\mathcal{A}$ is the total available area. We model our problem as follows:
-Finally we can describe our abstract model with following expressions :
\begin{align}
\text{Maximize } & \sum_{i=1}^n r_i \notag \\
\sum_{i=1}^n a_i & \leq \mathcal{A} & \label{eq:area} \\
@@ -328,12 +360,12 @@ r_i & = F(C_i, \pi_i^C), & \forall i \in [1, n] \label{eq:rejectiondef} \\

Equation~\ref{eq:area} states that the total area taken by the filters must be
less than the available area. Equation~\ref{eq:areadef} gives the definition of
-the area for a filter. More precisely, it is the area of the FIR as the Shifter
-does not need any circuitry. We consider that the FIR needs $C_i$ registers of size
+the area used by a filter, considered as the area of the FIR since the Shifter is
+assumed not to require significant resources. We consider that the FIR needs $C_i$ registers of size
$\pi_i^C + \pi_i^-$~bits to store the results of the multiplications of the
-input data and the coefficients. Equation~\ref{eq:rejectiondef} gives the
-definition of the rejection of the filter thanks to function~$F$ that we defined
-previously. The Shifter does not introduce negative rejection as we explain later,
+input data with the coefficients. Equation~\ref{eq:rejectiondef} gives the
+definition of the rejection of the filter thanks to the tabulated function~$F$ that we defined
+previously. The Shifter does not introduce negative rejection as we will explain later,
so the rejection only comes from the FIR. Equation~\ref{eq:bits} states the
relation between $\pi_i^+$ and $\pi_i^-$.
The multiplications in the FIR add $\pi_i^C$ bits as most coefficients are close to zero, and the Shifter removes
@@ -342,12 +374,12 @@ a filter is the same as the input number of bits of the next filter.
Equation~\ref{eq:maxshift} ensures that the Shifter does not introduce negative
rejection. Indeed, the results of the FIR can be right shifted without compromising
the quality of the rejection until a threshold. Each bit of the output data
-increases the maximum rejection level of 6~dB. We add one to take the sign bit
+increases the maximum rejection level by 6~dB. We add one to take the sign bit
into account. If equation~\ref{eq:maxshift} was not present, the Shifter could
shift too much and introduce some noise in the output data. Each supplementary
-shift bit would cause 6~dB of noise. A totally equivalent equation is:
-$\pi_i^S \leq \pi_i^- + \pi_i^C - 1 - \sum_{k=1}^{i} \left(1 + \frac{r_j}{6}\right) $.
-Finally, equation~\ref{eq:init} gives the global input's number of bits.
+shift bit would cause an additional 6~dB noise floor rise. A totally equivalent equation is:
+$\pi_i^S \leq \pi_i^- + \pi_i^C - 1 - \sum_{k=1}^{i} \left(1 + \frac{r_k}{6}\right)$.
+Finally, equation~\ref{eq:init} gives the number of bits of the global input.

This model is non-linear and even non-quadratic, as $F$ does not have a known
linear or quadratic expression. We introduce $p$ FIR configurations
@@ -371,15 +403,16 @@ This modified model is quadratic, and it can be linearised if necessary. The Gur
model, and since Gurobi is able to linearize, the model is left as is.
This model has $O(np)$ variables and $O(n)$ constraints.

-The section~\ref{sec:fixed_area} shows the results for the first version of quadratic program but the section~\ref{sec:fixed_rej}
-presents the results for the complementary problem. In this case we want
-minimize the occupied area for a targeted rejection level. Hence we have replace
-the objective function with:
+Two problems will be addressed using the workflow described in the next section: on the one
+hand maximizing the rejection capability of a set of cascaded filters occupying a fixed arbitrary
+silicon area (section~\ref{sec:fixed_area}) and on the other hand the dual problem of minimizing the silicon area
+for a fixed rejection criterion (section~\ref{sec:fixed_rej}). In the latter case, the
+objective function is replaced with:
\begin{align}
\text{Minimize } & \sum_{i=1}^n a_i \notag
\end{align}
-We adapt our constraints of quadratic program to replace the equation \ref{eq:area}
-by the equation \ref{eq:rejection_min} where $\mathcal{R}$ is the minimal
+We adapt the constraints of the quadratic program by replacing equation \ref{eq:area}
+with equation \ref{eq:rejection_min}, where $\mathcal{R}$ is the minimal
rejection required.

\begin{align}
@@ -389,8 +422,9 @@ rejection required.

\section{Design workflow}
\label{sec:workflow}

-In this section, we describe the workflow to compute all the results presented in section~\ref{sec:fixed_area}.
-Figure~\ref{fig:workflow} shows the global workflow and the different steps involved in the computations of the results.
+In this section, we describe the workflow to compute all the results presented in sections~\ref{sec:fixed_area}
+and \ref{sec:fixed_rej}. Figure~\ref{fig:workflow} shows the global workflow and the different steps involved
+in the computation of the results.
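+Before detailing each step, the following Python sketch shows how the
+binary-selection model above can be fed to Gurobi (the actual filter solver is
+a C++ program; the tiny configuration table, variable names and use of the
+gurobipy bindings are our own assumptions for illustration):
+\begin{verbatim}
+import gurobipy as gp
+from gurobipy import GRB
+
+# (C taps, pi_C bits, rejection dB): illustrative values only
+cfg = [(9, 5, 20.0), (17, 7, 35.0), (25, 9, 48.0), (37, 11, 62.0)]
+n, PI_I, AREA = 3, 16, 500
+
+m = gp.Model("fir_cascade")
+m.Params.NonConvex = 2  # let Gurobi handle binary-times-integer products
+d = m.addVars(n, len(cfg), vtype=GRB.BINARY)      # config choice
+s = m.addVars(n, lb=0, ub=32, vtype=GRB.INTEGER)  # shifts pi_S
+pin = m.addVars(n + 1, lb=1, ub=128, vtype=GRB.INTEGER)
+m.addConstr(pin[0] == PI_I)                       # eq. init
+for i in range(n):
+    m.addConstr(d.sum(i, "*") == 1)               # one config per stage
+    piC = gp.quicksum(d[i, j] * c[1] for j, c in enumerate(cfg))
+    m.addConstr(pin[i + 1] == pin[i] + piC - s[i])  # eq. bits
+    cum = gp.quicksum(1 + gp.quicksum(d[k, j] * c[2] / 6
+          for j, c in enumerate(cfg)) for k in range(i + 1))
+    m.addConstr(s[i] <= pin[i] + piC - 1 - cum)   # eq. maxshift
+area = gp.quicksum(d[i, j] * c[0] * (c[1] + pin[i])
+                   for i in range(n) for j, c in enumerate(cfg))
+m.addConstr(area <= AREA)                         # eq. area
+m.setObjective(gp.quicksum(d[i, j] * c[2] for i in range(n)
+               for j, c in enumerate(cfg)), GRB.MAXIMIZE)
+m.optimize()
+\end{verbatim}
+Swapping the objective and the area constraint for the rejection
+constraint~\ref{eq:rejection_min} yields the dual, area-minimizing variant.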
\begin{figure}
\centering
@@ -423,29 +457,33 @@ Figure~\ref{fig:workflow} shows the global workflow and the different steps invo

The filter solver is a C++ program that takes as input the maximum area
$\mathcal{A}$, the number of stages $n$, the size of the input signal $\Pi^I$,
the FIR configurations $(C_{ij}, \pi_{ij}^C)$ and the function $F$. It creates
-the quadratic programs and uses the Gurobi solver to get the optimal results.
+the quadratic programs and uses the Gurobi solver to estimate the optimal results.
Then it produces two scripts: a TCL script ((1a) on figure~\ref{fig:workflow})
and a deploy script ((1b) on figure~\ref{fig:workflow}).

The TCL script describes the whole digital processing chain from the beginning
-(the raw signal data) to the end (the filtered data).
-The raw input data generated from a Pseudo Random Number (PRN)
+(the raw signal data) to the end (the filtered data) in a language compatible
+with proprietary synthesis software, namely Vivado for Xilinx and Quartus for
+Intel/Altera. The raw input data are generated from a 20-bit Pseudo Random Number (PRN)
generator inside the FPGA and $\Pi^I$ is fixed at 16~bits.
Then the script builds each stage of the chain with a generic FIR task that
comes from a skeleton library. The generic FIR is highly configurable
with the number of coefficients and the size of the coefficients. The
coefficients themselves are not stored in the script.
-Whereas the signal is processed in real-time, the output signal is stored as
-consecutive bursts of data.
+As the signal is processed in real-time, the output signal is stored as
+consecutive bursts of data for post-processing, mainly assessing the consistency of the
+implemented FIR cascade with the design criteria and the expected
+transfer function.

The TCL script is used by Vivado to produce the FPGA bitstream ((2) on figure~\ref{fig:workflow}).
We use the 2018.2 version of Xilinx Vivado and we execute the synthesized
bitstream on a Redpitaya board fitted with a Xilinx Zynq-7010 series
-FPGA (xc7z010clg400-1) and two 125~MS/s ADC.
-The board works with a Buildroot Linux image. We have developed some tools and
-drivers to flash and communicate with the FPGA. They are used to automatize all
-the workflow inside the board: load the filter coefficients and retrieve the
-computed data.
+FPGA (xc7z010clg400-1) and two LTC2145 14-bit 125~MS/s ADCs, loaded with 50~$\Omega$ resistors to
+provide a broadband noise source.
+The board runs the Linux kernel and surrounding environment produced from the
+Buildroot framework available at \url{https://github.com/trabucayre/redpitaya/}: configuring
+the Zynq FPGA, feeding the FIR with the set of coefficients, executing the simulation and
+fetching the results are all automated.

The deploy script uploads the bitstream to the board ((3) on
figure~\ref{fig:workflow}), flashes the FPGA, loads the different drivers,
@@ -457,14 +495,11 @@ the output data ((5) on figure~\ref{fig:workflow}).
The results are normalized so that the Power Spectral Density (PSD) starts at zero
and the different configurations can be compared.

-The workflow used to compute the results in section~\ref{sec:fixed_rej}, we
-have just adapted the quadratic program but the rest of the workflow is unchanged.
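+The normalization step can be reproduced offline in a few lines of Python
+(a sketch assuming each burst is stored as a vector of filtered samples;
+scipy's Welch estimator stands in for whichever spectral estimator is used):
+\begin{verbatim}
+import numpy as np
+from scipy import signal
+
+def normalized_psd_db(samples, fs=125e6):
+    # PSD in dB, shifted so the first bin sits at 0 dB: curves from
+    # different configurations then share a common reference level
+    f, pxx = signal.welch(samples, fs=fs, nperseg=4096)
+    pxx_db = 10 * np.log10(pxx)
+    return f, pxx_db - pxx_db[0]
+\end{verbatim}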
-
-\section{Experiments with fixed area space}
+\section{Maximizing the rejection at fixed silicon area}
\label{sec:fixed_area}
This section presents the output of the filter solver {\em i.e.} the computed
configurations for each stage, the computed rejection and the computed silicon area.
-This is interesting to understand the choices made by the solver to compute its solutions.
+Such results allow for understanding the choices made by the solver to compute its solutions.

The experimental setup is composed of three cases. The raw input is generated
by a Pseudo Random Number (PRN) generator, which fixes the input data size $\Pi^I$.
@@ -542,13 +577,13 @@ Table~\ref{tbl:gurobi_max_1500} shows the results obtained by the filter solver

From these tables, we can first state that the more stages are used to define
the cascaded FIR filters, the better the rejection. It was an expected result
as it has been previously observed that many small filters are better than
-a single large filter \cite{lim_1988, lim_1996, young_1992}, despite such conclusion
+a single large filter \cite{lim_1988, lim_1996, young_1992}, despite such conclusions
being hardly used in practice due to the lack of tools for identifying individual
filter coefficients in the cascaded approach.

Second, the larger the silicon area, the better the rejection. This was also an
-expected result as more area means a filter of better quality (more coefficients
-or more bits per coefficient).
+expected result as more area means a filter of better quality with more coefficients
+or more bits per coefficient.

Then, we also observe that the first stage can have a larger shift than the other
stages. This is explained by the fact that the solver tries to use just enough
gives the relation between both values.

Finally, we note that the solver consumes all the given silicon area.

-The following graphs present the rejection for real data on the FPGA. In all following
+The following graphs present the rejection for real data on the FPGA. In all the following
figures, the solid line represents the actual rejection of the filtered
-data on the FPGA as measured experimentally and the dashed line are the noise level
+data on the FPGA as measured experimentally and the dashed lines are the noise levels
given by the quadratic solver. The configurations are those computed in the previous section.
Figure~\ref{fig:max_500_result} shows the rejection of the different configurations in the case of MAX/500.
@@ -603,10 +638,10 @@ architecture to another.

Table~\ref{tbl:resources_usage} shows the resource usage in the case of MAX/500, MAX/1000 and
MAX/1500 \emph{i.e.} when the maximum allowed silicon area is fixed to 500, 1000
and 1500 arbitrary units. We have taken care to extract solely the resources used by
-the FIR filters and remove additional processing blocks including FIFO and PL to
-PS communication.
+the FIR filters and remove additional processing blocks including FIFO and Programmable
+Logic (PL -- FPGA) to Processing System (PS -- general purpose processor) communication.

-\begin{table}
+\begin{table}[h!tb]
\caption{Resource occupation. The last column refers to available resources on a Zynq-7010 as found on the Redpitaya.}
\label{tbl:resources_usage}
\centering
@@ -632,30 +667,28 @@ PS communication.
\end{table}

In some cases, Vivado replaces the DSPs with Look Up Tables (LUTs).
We assume that,
-when the filters coefficients are small enough, or when the input size is small
-enough, Vivado optimized resource consumption by selecting multiplexers to
+when the filter coefficients are small enough, or when the input size is small
+enough, Vivado optimizes resource consumption by selecting multiplexers to
implement the multiplications instead of a DSP. In this case, it is quite
difficult to compare the whole silicon budget.

-However, a rough estimation can be made with a simple equivalence. Looking at
+However, a rough estimation can be made with a simple equivalence: looking at
the first column (MAX/500), where the number of LUTs is quite stable for $n \geq 2$,
we can deduce that a DSP is roughly equivalent to 100~LUTs in terms of silicon
-area use. With this equivalence, our 500 arbitraty units corresponds to 2500 LUTs,
-1000 arbitrary units corresponds to 5000 LUTs and 1500 arbitrary units corresponds
+area use. With this equivalence, our 500 arbitrary units correspond to 2500 LUTs,
+1000 arbitrary units correspond to 5000 LUTs and 1500 arbitrary units correspond
to 7300 LUTs. The conclusion is that the orders of magnitude of our arbitrary
-unit are quite good. The relatively small differences can probably be explained
+unit map well to actual hardware resources. The relatively small differences can probably be explained
by the optimizations done by Vivado based on the detailed map of available
processing resources.

-We present the computation time to solve the quadratic problem.
-For each case, the filter solver software are executed with a Intel(R) Xeon(R) CPU E5606
-cadenced at 2.13~GHz. The CPU has 8 cores that are used by Gurobi to solve
-the quadratic problem.
-
-Table~\ref{tbl:area_time} shows the time needed to solve the quadratic
+We now present the computation time needed to solve the quadratic problem.
+For each case, the filter solver software is executed on an Intel(R) Xeon(R) CPU E5606
+clocked at 2.13~GHz. The CPU has 8 cores that are used by Gurobi to solve
+the quadratic problem. Table~\ref{tbl:area_time} shows the time needed to solve the quadratic
problem when the maximal area is fixed to 500, 1000 and 1500 arbitrary units.

-\begin{table}
-\caption{Time to solve the quadratic program with Gurobi}
+\begin{table}[h!tb]
+\caption{Time needed to solve the quadratic program with Gurobi}
\label{tbl:area_time}
\centering
\begin{tabular}{|c|c|c|c|}\hline
$n$ & Time (MAX/500) & Time (MAX/1000) & Time (MAX/1500)
@@ -671,15 +704,16 @@ As expected, the computation time seems to rise exponentially with the number
of stages. % TODO: exponential?
When the area is limited, the design exploration space is more limited and the solver is able to find an optimal solution faster. On the contrary, in the case of MAX/1500 with
-5~stages, we were not able to obtain a result after 40~hours of computation so we decided to stop.
+5~stages, we were not able to obtain a result after 40~hours of computation, at which point the
+program was manually stopped.
+
+\subsection{Minimizing resource occupation at fixed rejection}\label{sec:fixed_rej}

-\section{Experiments with fixed rejection target}
-\label{sec:fixed_rej}
-This section presents the results of complementary quadratic program which we
-minimize the area occupation for a targeted noise level.
+This section presents the results of the complementary quadratic program aimed at
+minimizing the area occupation for a targeted rejection level.

The experimental setup is also composed of three cases.
The raw input is the same
-as previous section, a PRN generator, which fixes the input data size $\Pi^I$.
+as in the previous section, from a PRN generator, which fixes the input data size $\Pi^I$.
Then the targeted rejection $\mathcal{R}$ has been fixed to either 40, 60 or 80~dB.
Hence, the three cases have been named: MIN/40, MIN/60, MIN/80.
The number of configurations $p$ is the same as in the previous section.

Table~\ref{tbl:gurobi_min_40}, table~\ref{tbl:gurobi_min_60} and
Table~\ref{tbl:gurobi_min_80} shows the results obtained by the filter solver fo

\renewcommand{\arraystretch}{1.4}

@@ -690,7 +724,7 @@
-\begin{table}
+\begin{table}[h!tb]
\caption{Configurations $(C_i, \pi_i^C, \pi_i^S)$, rejections and areas (in arbitrary units) for MIN/40}
\label{tbl:gurobi_min_40}
\centering
@@ -708,7 +742,7 @@
}
\end{table}

-\begin{table}
+\begin{table}[h!tb]
\caption{Configurations $(C_i, \pi_i^C, \pi_i^S)$, rejections and areas (in arbitrary units) for MIN/60}
\label{tbl:gurobi_min_60}
\centering
@@ -727,7 +761,7 @@
}
\end{table}

-\begin{table}
+\begin{table}[h!tb]
\caption{Configurations $(C_i, \pi_i^C, \pi_i^S)$, rejections and areas (in arbitrary units) for MIN/80}
\label{tbl:gurobi_min_80}
\centering
@@ -747,24 +781,28 @@
\end{table}
\renewcommand{\arraystretch}{1}

-From these tables, we can first state that all configuration reach the target rejection
-level and more we have stages lesser is the area occupied in arbitrary unit.
-Futhermore, the area of the monolithic filter is twice bigger than the two cascaded.
-More generally, more there is filters lower is the occupied area.
-
-Like in previous section, the solver choose always a little filter as first
-filter stage and the second one is often the biggest filter. this choice can be explain
-as the previous section. The solver uses just enough bits to not degrade the input
-signal and in second filter it can choose a better filter to improve rejection without
-have too bits in the output data.
-
-For the specific case in MIN/40 for $n = 5$ the solver has determined that the optimal
-number of filter is 4 so it not chose any configuration in last filter. Hence this
+% JMF: I thought that in one case the monolithic filter simply could not reach the target: did you remove that case?
+From these tables, we can first state that all configurations reach the targeted rejection
+level or even better, thanks to our underestimate of the cascade rejection as the sum of the
+individual filter rejections.
+% we have stages lesser is the area occupied in arbitrary unit. JMF: I do not understand this sentence
+Furthermore, the area of the monolithic filter is twice as big as that of the two cascaded filters
+(1131 and 1760 arbitrary units vs. 547 and 903 arbitrary units for 60 and 80~dB rejection
+respectively). More generally, the more filters are cascaded, the lower the occupied area.
+
+As in the previous section, the solver always chooses a small filter as the first
+stage while the second one is often the biggest filter. This choice can be explained
+as in the previous section: the solver uses just enough bits not to degrade the input
+signal and selects in the second stage a better filter to improve the rejection without
+requiring too many bits in the output data.
+
+For the specific case of MIN/40 with $n = 5$, the solver has determined that the optimal
+number of filters is 4, so it did not choose any configuration for the last filter. Hence this
solution is equivalent to the result for $n = 4$.

-The following graphs present the rejection for real data on the FPGA. In all following
+The following graphs present the rejection for real data on the FPGA. In all the following
figures, the solid line represents the actual rejection of the filtered
-data on the FPGA as measured experimentally and the dashed line are the noise level
+data on the FPGA as measured experimentally and the dashed line is the noise level
given by the quadratic solver.

Figure~\ref{fig:min_40} shows the rejection of the different configurations in the case of MIN/40.
@@ -792,11 +830,11 @@ Figure~\ref{fig:min_80} shows the rejection of the different configurations in t
\label{fig:min_80}
\end{figure}

-We observe that all rejections given by the quadratic solver are close to the real
-rejection. All curves prove that the constraint to reach the target rejection is
-respected both monolithic filter or cascaded filters.
+We observe that all rejections given by the quadratic solver are close to the experimentally
+measured rejection. All curves prove that the constraint to reach the target rejection is
+respected with both monolithic and cascaded filters.

-Table~\ref{tbl:resources_usage} shows the resources usage in the case of MIN/40, MIN/60 and
+Table~\ref{tbl:resources_usage} shows the resource usage in the case of MIN/40, MIN/60 and
MIN/80 \emph{i.e.} when the target rejection is fixed to 40, 60 and 80~dB. We
have taken care to extract solely the resources used by
the FIR filters and remove additional processing blocks including FIFO and PL to
PS communication.
@@ -827,16 +865,17 @@ PS communication.
\end{tabular}
\end{table}

-If we keep the previous estimation of cost of one DSP in term of LUT (1 DSP $\approx$ 100 LUT)
-the real resource consumption decrease in function of number of stage filter according
+If we keep the previous estimation of the cost of one DSP in terms of LUTs (1 DSP $\approx$ 100 LUTs),
+the real resource consumption decreases as a function of the number of stages in the cascaded
+filter according
to the solution given by the quadratic solver. Indeed, we always observe
a decreasing consumption even if the difference between the monolithic and the two cascaded
-filters is lesser than expected.
+filters is less than expected.

-Finally, the table~\ref{tbl:area_time_comp} shows the computation time to solve
+Finally, table~\ref{tbl:area_time_comp} shows the computation time to solve
the quadratic program.

-\begin{table}
-\caption{Time to solve the quadratic program with Gurobi}
+\begin{table}[h!tb]
+\caption{Time needed to solve the quadratic program with Gurobi}
\label{tbl:area_time_comp}
\centering
\begin{tabular}{|c|c|c|c|}\hline
$n$ & Time (MIN/40) & Time (MIN/60) & Time (MIN/80) \\\
@@ -850,29 +889,31 @@
\end{tabular}
\end{table}

-The time needed to solve this configuration are substantially faster than time
-needed in the previous section. Indeed the worst time in this case is only 3~minutes
-in balance of 3~days on previous section. We are able to solve more easily this
-problem than the previous one.
+The time needed to solve this configuration is significantly shorter than the time
+needed in the previous section. Indeed the worst time in this case is only 3~minutes,
+compared to 3~days in the previous section: this problem is more easily solved than the
+previous one.
\section{Conclusion}

-In this paper, we have proposed a new approach to work with a cascade of FIR filter inside a FPGA.
-This method aims to be hardware independent and focus an high-level of abstraction.
-We have modeled the FIR filter operation and the data shift impact. With this model
-we have created a quadratic program to select the optimal FIR coefficient set to reject a
-maximum of noise. In our experiments we have chosen deliberately some common tools
-to design the filter coefficients but we can use any other method.
+We have proposed a new approach to schedule a set of signal processing blocks whose performance
+and resource consumption have been tabulated, and applied this methodology to the practical
+case of implementing cascaded FIR filters inside an FPGA.
+This method aims to be hardware independent and focuses on a high level of abstraction.
+We have modeled the FIR filter operation and the impact of data shift. Thanks to this model,
+we have created a quadratic program to select the optimal FIR taps to reach a targeted
+rejection. Individual filter taps have been identified using commonly available tools and the
+emphasis is on FIR assembly rather than individual FIR coefficient identification.

Our experimental results are very promising in providing a rational approach to
selecting the coefficients of each FIR filter in the context of a performance target for a chain of
-such filters. The FPGA design that is produced automatically by our
-workflow is able to filter an input signal as expected which validates our model and our approach.
-We can easily change the quadratic program to adapt it to an other problem.
+such filters. The FPGA design that is produced automatically by the proposed
+workflow is able to filter an input signal as expected, validating experimentally our model and our approach.
+The quadratic program can be adapted to other problems based on assembling skeleton blocks.

A perspective is to model and add the decimators to the processing chain to have a classical
-FIR filter and decimator. The impact of the decimator is not so trivial, especially in terms of silicon
-area for the subsequent stages since some hardware optimization can be applied in
+FIR filter and decimator. The impact of the decimator is not trivial, especially in terms of silicon
+area usage for subsequent stages since some hardware optimization can be applied in
this case.

The software used to demonstrate the concepts developed in this paper is based on the