\documentclass[a4paper,conference]{IEEEtran/IEEEtran}
\usepackage{graphicx,color,hyperref}
\usepackage{amsfonts}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{algorithm2e}
\usepackage{url,balance}
\usepackage[normalem]{ulem}
% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}
\textheight=26cm
\setlength{\footskip}{30pt}
\pagenumbering{gobble}
\begin{document}
\title{Filter optimization for real time digital processing of radiofrequency signals: application
to oscillator metrology}

\author{\IEEEauthorblockN{A. Hugeat\IEEEauthorrefmark{1}\IEEEauthorrefmark{2}, J. Bernard\IEEEauthorrefmark{2},
G. Goavec-M\'erou\IEEEauthorrefmark{1},
P.-Y. Bourgeois\IEEEauthorrefmark{1}, J.-M. Friedt\IEEEauthorrefmark{1}}
\IEEEauthorblockA{\IEEEauthorrefmark{1}FEMTO-ST, Time \& Frequency department, Besan\c con, France }
\IEEEauthorblockA{\IEEEauthorrefmark{2}FEMTO-ST, Computer Science department DISC, Besan\c con, France \\
Email: \{pyb2,jmfriedt\}@femto-st.fr}
}
\maketitle
\thispagestyle{plain}
\pagestyle{plain}
\newtheorem{definition}{Definition}

\begin{abstract}
Software Defined Radio (SDR) provides stability, flexibility and reconfigurability to
radiofrequency signal processing. Applied to oscillator characterization in the context
of ultrastable clocks, stringent filtering requirements are defined by spurious signal or
noise rejection needs. Since real time radiofrequency processing must be performed in a
Field Programmable Gate Array to meet timing constraints, we investigate optimization strategies
to design filters meeting rejection characteristics while limiting the hardware resources
required and keeping timing constraints within the targeted measurement bandwidths.
\end{abstract}

\begin{IEEEkeywords}
Software Defined Radio, Mixed-Integer Linear Programming, Finite Impulse Response filter
\end{IEEEkeywords}

\section{Digital signal processing of ultrastable clock signals}

Analog oscillator phase noise characterization is classically performed by downconverting
the radiofrequency signal with a saturated mixer to bring it to baseband,
followed by a Fourier analysis of the beat signal to analyze phase fluctuations close to the carrier. In
a fully digital approach, the radiofrequency signal is digitized and numerically downconverted by
multiplying the samples with a local numerically controlled oscillator (Fig. \ref{schema}) \cite{rsi}.

\begin{figure}[h!tb]
\begin{center}
\includegraphics[width=.8\linewidth]{images/schema}
\end{center}
\caption{Fully digital oscillator phase noise characterization: the Device Under Test
(DUT) signal is sampled by the radiofrequency grade Analog to Digital Converter (ADC) and
downconverted by mixing with a Numerically Controlled Oscillator (NCO). Unwanted signals
and noise aliases are rejected by a Low Pass Filter (LPF) implemented as a cascade of Finite
Impulse Response (FIR) filters. The signal is then decimated before a Fourier analysis displays
the spectral characteristics of the phase fluctuations.}
\label{schema}
\end{figure}

As with the analog mixer,
the non-linear behavior of the downconverter introduces noise or spurious signal aliasing as
well as the generation of the frequency sum signal in addition to the frequency difference.
These unwanted spectral characteristics must be rejected before decimating the data stream
for the phase noise spectral characterization \cite{andrich2018high}. The characteristics introduced between the
downconverter
and the decimation processing blocks are core characteristics of an oscillator characterization
system, and must reject out-of-band signals below the targeted phase noise -- typically in the
sub $-170$~dBc/Hz range for the ultrastable oscillators we aim at characterizing. The filter blocks will
use most of the resources of the Field Programmable Gate Array (FPGA) processing the radiofrequency
datastream: optimizing the performance of the filter while reducing the required resources is
hence tackled in a systematic approach using optimization techniques. Most significantly, we
tackle the issue by attempting to cascade multiple Finite Impulse Response (FIR) filters with
tunable number of coefficients and tunable number of bits representing the coefficients and the
data being processed.

\section{Finite impulse response filter}

We select FIR filters for their unconditional stability and ease of design. A FIR filter is defined
by a set of weights $b_k$ applied to the inputs $x_k$ through a convolution to generate the
outputs $y_n$
$$y_n=\sum_{k=0}^N b_k x_{n-k}$$

As opposed to an implementation on a general purpose processor in which the word size is defined by the
processor architecture, implementing such a filter on an FPGA offers more degrees of freedom since
not only the coefficient values and number of taps must be defined, but also the number of bits
defining the coefficients and the sample size. For this reason, and because we consider pipeline
processing (as opposed to First-In, First-Out FIFO memory batch processing) of radiofrequency
signals, High Level Synthesis (HLS) languages \cite{kasbah2008multigrid} are not considered, and
the problem is tackled at the Very-high-speed-integrated-circuit Hardware Description Language (VHDL) level.
Since latency is not an issue in an open-loop phase noise characterization instrument, the large
number of taps in the FIR, as opposed to the shorter Infinite Impulse Response (IIR) filter,
is not considered an issue as it would be in a closed-loop system.

The coefficients are classically expressed as floating point values. However, this binary
number representation is not efficient for fast arithmetic computation by an FPGA. Instead,
we choose to quantize these floating point values into integer values. This quantization
will result in some precision loss.
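
As an illustration of the arithmetic involved, the following Python sketch (using NumPy rather
than the actual VHDL implementation) evaluates the convolution defined above with quantized
coefficients and integer samples; the word sizes and tap count are arbitrary example values.
{\footnotesize
\begin{verbatim}
import numpy as np

D, C, N = 16, 8, 32  # data bits, coeff bits, taps (examples)
rng = np.random.default_rng(0)

# floating point low-pass prototype, then C-bit quantization
b = np.sinc(np.arange(N) - (N - 1) / 2) * np.hamming(N)
b_int = np.round(b * (2**(C - 1) - 1)).astype(np.int64)

# D-bit samples and the convolution y_n = sum_k b_k x_{n-k}
x_int = rng.integers(-2**(D - 1), 2**(D - 1), size=1000)
y_int = np.convolve(x_int, b_int)

bits = int(np.ceil(np.log2(np.abs(y_int).max()))) + 1
print("output fits on", bits, "bits")  # ~ D + C + log2(N)
\end{verbatim}
}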


\begin{figure}[h!tb]
\includegraphics[width=\linewidth]{images/demo_filtre}
\caption{Impact of the quantization resolution of the coefficients: the quantization is
set to 6~bits -- with the horizontal black lines indicating $\pm$1 least significant bit -- setting
the first 30 and last 30 of the initial 128~band-pass
filter coefficients to 0 (red dots).}
\label{float_vs_int}
\end{figure}

The tradeoff between quantization resolution and number of coefficients when considering
integer operations is not trivial. As an illustration of the issue related to the
relation between the number of filter taps and quantization, Fig. \ref{float_vs_int} exhibits
a 128-coefficient FIR bandpass filter designed using floating point numbers (blue). Upon
quantization on 6~bit integers, 60 of the 128~coefficients at the beginning and end of the
taps become null, making the large number of coefficients irrelevant and allowing processing
resources to be saved by shrinking the filter length. This tradeoff aimed at minimizing resources
to reach a given rejection level, or maximizing out of band rejection for a given computational
resource, will drive the investigation on cascading filters designed with varying tap resolution
and tap length, as will be shown in the next section. Indeed, our development strategy closely
follows the skeleton approach \cite{crookes1998environment, crookes2000design, benkrid2002towards}
in which basic blocks are defined and characterized before being assembled \cite{hide}
in a complete processing chain. In our case, assembling the filter blocks is a simpler block
combination process since we assume a single value to be processed and a single value to be
generated at each clock cycle. The FIR filters are not used to decimate in the
current implementation: the decimation is assumed, for the moment, to be located after the
FIR cascade.
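
The observation of Fig. \ref{float_vs_int} can be reproduced with a few lines of Python (a
sketch relying on {\tt scipy.signal.firwin}; the band edges are illustrative and may differ
from those used for the figure):
{\footnotesize
\begin{verbatim}
import numpy as np
from scipy import signal

N, C = 128, 6     # taps, coefficient resolution (bits)
b = signal.firwin(N, [0.2, 0.4], pass_zero=False)
b_q = np.round(b / np.abs(b).max() * (2**(C - 1) - 1))

nz = np.nonzero(b_q)[0]   # indices of the non-zero taps
print("zeroed taps:", nz[0], "leading,",
      N - 1 - nz[-1], "trailing")
\end{verbatim}
}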

\section{Filter optimization}

A basic approach for implementing the FIR filter is to compute the transfer function of
a monolithic filter: this single filter defines all coefficients with the same resolution
(number of bits) and processes data represented with their own resolution. Meeting the
filter shape requires a large number of coefficients, limited by the resources of the FPGA since
this filter must process the data stream at the radiofrequency sampling rate after the mixer.

An optimization problem \cite{leung2004handbook} aims at improving one or many
performance criteria within a constrained resource environment. Amongst the tools
developed to meet this aim, Mixed-Integer Linear Programming (MILP) provides the framework to
formally define the stated problem and search for an optimal use of available
resources \cite{yu2007design, kodek1980design}.

First we need to ensure that our problem is a real optimization problem. When
designing a processing function in the FPGA, we aim at meeting some requirements such as
the throughput, the computation time or the noise rejection. However, due to the limited
resources available to implement the process, such as BRAM (high performance RAM), DSP (Digital Signal Processor)
blocks or LUT (Look Up Table), a tradeoff must generally be sought between performance and available
computational resources: optimizing some criteria within finite, limited
resources indeed matches the definition of a classical optimization problem.

Specifically the degrees of freedom when addressing the problem of replacing the single monolithic
FIR with a cascade of optimized filters are the number of coefficients $N_i$ of each filter $i$,
the number of bits $C_i$ representing the coefficients and the number of bits $D_i$ representing
the data fed to the filter. Because each FIR in the chain is fed the output of the previous stage,
the optimization of the complete processing chain within a constrained resource environment is not
trivial. The resource occupation of a FIR filter is considered as $(D_i+C_i) \times N_i$ which is
the number of bits needed in a worst case condition to represent the output of the FIR. Such an
occupied area estimate assumes that the number of gates scales as the number of bits and the number
of coefficients, but does not account for the detailed implementation of the hardware. Indeed,
various FPGA implementations will provide different hardware functionalities, and we shall consider
at the end of the design a synthesis step using vendor software to assess the validity of the solution
found. As an example of the limitation linked to the lack of detailed hardware consideration, Block Random
Access Memory (BRAM) used to store filter coefficients are not shared amongst filters, and multiplications
are most efficiently implemented by using DSP blocks whose input word
size is finite. DSPs are a scarce resource to be saved in a practical implementation. Keeping a high
abstraction on the resource occupation is nevertheless selected in the following discussion in order
to leave enough degrees of freedom in the problem to try and find original solutions: too many
constraints in the initial statement of the problem leave little room for finding an optimal solution.

\begin{figure}[h!tb]
\begin{center}
\includegraphics[width=.5\linewidth]{schema2}
\caption{Shape of the filter transmitted power $P$ as a function of frequency:
the bandpass BP is considered to occupy the initial
40\% of the Nyquist frequency range, the stopband the last 40\%, allowing 20\% transition
width.}
\label{rejection-shape}
\end{center}
\end{figure}

Following these considerations, the model is expressed as:
\begin{align}
  \begin{cases}
    \mathcal{R}_i &= \mathcal{F}(N_i, C_i)\\
    \mathcal{A}_i &= N_i \times C_i + D_i\\
    \Delta_i &= \Delta_{i-1} + \mathcal{R}_i
  \end{cases}
  \label{model-FIR}
\end{align}
In system \ref{model-FIR}, $\mathcal{R}_i$ represents the rejection of filter $i$, which depends on $N_i$ and $C_i$, $\mathcal{A}_i$
is a theoretical area occupation of the processing block on the FPGA, and $\Delta_i$ is the total rejection achieved after stage $i$.
Since the function $\mathcal{F}$ cannot be explicitly expressed, we run simulations to determine the rejection depending
on $N_i$ and $C_i$. However, selecting the right filter requires a clear definition of the rejection criterion. Selecting an
incorrect criterion will lead the linear program solver to produce a solution which might not meet the user requirements.
Hence, amongst various criteria including the mean or median value of the FIR response in the stopband as will
be illustrated later (section \ref{median}), we have designed
a criterion aimed at avoiding ripples in the passband and considering the maximum of the FIR spectral response in the stopband
(Fig. \ref{rejection-shape}). The bandpass criterion is defined as the sum of the absolute values of the spectral response
in the bandpass, reminiscent of a standard deviation of the spectral response: this criterion must be minimized to avoid
ripples in the passband. The stopband transfer function maximum must also be minimized in order to improve the filter
rejection capability. Weighing these two criteria allows designing the linear program to be solved.
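
One possible reading of this weighted criterion is sketched below in Python, assuming a frequency
axis normalized to the Nyquist frequency, the 40\%/20\%/40\% template of Fig. \ref{rejection-shape}
and an arbitrary weight {\tt w} balancing the two terms:
{\footnotesize
\begin{verbatim}
import numpy as np
from scipy import signal

def cost(b, w=100.0):
    """Passband ripple term + w times stopband maximum."""
    freq, h = signal.freqz(b, worN=2048)
    f = freq / np.pi            # 1.0 = Nyquist frequency
    h_db = 20 * np.log10(np.abs(h) + 1e-12)
    pb = h_db[f < 0.4]          # passband: first 40 %
    sb = h_db[f > 0.6]          # stopband: last 40 %
    ripple = np.sum(np.abs(pb - pb.mean()))
    return ripple + w * sb.max()   # to be minimized

print(cost(signal.firwin(64, 0.4)))  # example low-pass
\end{verbatim}
}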

\begin{figure}[h!tb]
\includegraphics[width=\linewidth]{images/noise-rejection.pdf}
\caption{Rejection as a function of number of coefficients and number of bits}
\label{noise-rejection}
\end{figure}

The objective function maximizes the noise rejection ($\max(\Delta_{i_{\max}})$) while keeping resource occupation below
a user-defined threshold. The MILP solver is allowed to choose the number of successive
filters, within an upper bound. The last problem is to model the noise rejection. Since filter
noise rejection capability is not modeled with linear equations, a look-up-table is generated
for multiple filter configurations in which the $C_i$, $D_i$ and $N_i$ parameters are varied: for each
one of these conditions, the low-pass filter rejection defined as the mean power between
half the Nyquist frequency and the Nyquist frequency is stored as computed by the frequency response
of the digital filter (Fig. \ref{noise-rejection}). An intuitive analysis of this chart hints at an optimum
set of tap length and number of bits representing the coefficients, along the ridge of the
pyramid-shaped rejection capability function.
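
Such a look-up table can for instance be generated offline along the following lines (a Python
sketch: the low-pass prototype, the quantization scheme and the parameter ranges are illustrative
and do not reproduce the exact data of Fig. \ref{noise-rejection}):
{\footnotesize
\begin{verbatim}
import numpy as np
from scipy import signal

def rejection_dB(N, C):
    """Mean stopband power of an N-tap, C-bit low-pass."""
    b = signal.firwin(N, 0.5)        # cutoff: half Nyquist
    b = np.round(b / np.abs(b).max() * (2**(C - 1) - 1))
    freq, h = signal.freqz(b, worN=2048)
    h = np.abs(h) / np.abs(h).max()  # normalized magnitude
    stop = h[freq / np.pi > 0.5] ** 2
    return 10 * np.log10(stop.mean())

lut = {(N, C): rejection_dB(N, C)
       for N in range(8, 129, 8) for C in range(2, 17)}
\end{verbatim}
}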

The linear program formalism for solving the problem is well documented: an objective function is
defined which is linearly dependent on the parameters to be optimized. Constraints are expressed
as linear equations and solved using one of the available solvers, in our case GLPK \cite{glpk}.
With the notation introduced in system \ref{model-FIR}, our linear problem is defined as follows:
\paragraph{Variables}
\begin{align*}
x_{i,j} \in \lbrace 0,1 \rbrace & \text{ $i$ denotes a given filter} \\
& \text{ $j$ denotes the stage} \\
& \text{ $x_{i,j}=1$ if filter $i$ is selected at stage $j$} \\
\end{align*}
\paragraph{Constants}
\begin{align*}
\mathcal{F} = \lbrace F_1 ... F_p \rbrace & \text{ set of all possible filters}\\
& \text{ ($p$ is the number of different filters)} \\
C(i) & \text{ number of coefficients of filter $i$}\\
\pi_C(i) & \text{ number of bits of each coefficient of filter $i$}\\
\mathcal{A}_{\max} & \text{ total space available inside the FPGA}
\end{align*}
\paragraph{Constraints}
\begin{align}
1 \leq i \leq p & \nonumber\\
1 \leq j \leq q & \text{ $q$ is the maximum number of stages} \nonumber \\
\forall j, \sum_{i} x_{i,j} \leq 1 & \text{ at most one filter per stage} \nonumber\\
\mathcal{S}_0 = 0 & \text{ initial occupation} \nonumber\\
\forall j, \mathcal{S}_j = \mathcal{S}_{j-1} + \sum_i (x_{i,j} \times \mathcal{A}_i) \label{cstr_size} \\
\mathcal{S}_q \leq \mathcal{A}_{\max}\nonumber \\
\mathcal{N}_0 = 0 & \text{ initial rejection}\nonumber\\
\forall j, \mathcal{N}_j = \mathcal{N}_{j-1} + \sum_i (x_{i,j} \times \mathcal{R}_i) \label{cstr_rejection} \\
\mathcal{N}_q \geqslant 160 & \text{ a user-defined bound}\nonumber\\
& \text{ (e.g. 160~dB here)}\nonumber\\\nonumber
\end{align}
\paragraph{Goal}
\begin{align*}
\min \mathcal{S}_q
\end{align*}

Constraint \ref{cstr_size} states that the occupation at stage $j$ is the sum of the occupation
of the previous stages and the occupation of the filter selected for the current stage (possibly
none). Constraint \ref{cstr_rejection} states the same for the rejection: the rejection after
stage $j$ is the previous rejection plus the rejection of the selected filter.
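
As an illustration of this formulation, the following sketch models the same binary variables
and constraints with the PuLP Python library and its GLPK backend; the filter catalogue, areas
and rejections are made-up placeholders, and the actual implementation feeds GLPK directly.
{\footnotesize
\begin{verbatim}
from pulp import (LpProblem, LpMinimize, LpVariable,
                  LpBinary, lpSum, GLPK_CMD)

p, q = 4, 5               # candidate filters, max stages
A = [120, 250, 400, 800]  # area A_i (arbitrary units)
R = [20, 35, 50, 80]      # rejection R_i (dB)

prob = LpProblem("fir_cascade", LpMinimize)
x = LpVariable.dicts("x", (range(p), range(q)),
                     cat=LpBinary)

# goal: minimize the total occupied area S_q
prob += lpSum(A[i] * x[i][j]
              for i in range(p) for j in range(q))
# at most one filter per stage
for j in range(q):
    prob += lpSum(x[i][j] for i in range(p)) <= 1
# total rejection N_q must reach the bound (160 dB)
prob += lpSum(R[i] * x[i][j]
              for i in range(p) for j in range(q)) >= 160

prob.solve(GLPK_CMD(msg=False))
print([(j, i) for j in range(q) for i in range(p)
       if x[i][j].value() == 1])
\end{verbatim}
}
Since the rejections and areas simply add up along the cascade, the running sums $\mathcal{S}_j$
and $\mathcal{N}_j$ of the constraints collapse to plain totals in this simplified sketch.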

\subsection{Low bandpass ripple and maximum rejection criteria}

The MILP solver provides a solution to the problem by selecting, instead of a single monolithic
filter, a series of small FIRs with increasing numbers of bits representing data and coefficients
as well as increasing numbers of coefficients.

\begin{figure}[h!tb]
% \includegraphics[width=\linewidth]{images/compare-fir.pdf}
\includegraphics[width=\linewidth]{images/fir-mono-vs-fir-series-noise-fixe-jmf-light.pdf}
\caption{Comparison of the rejection capability between a series of FIR and a monolithic FIR
with a cutoff frequency set at half the Nyquist frequency.}
\label{compare-fir}
\end{figure}

Fig. \ref{compare-fir} exhibits the
performance comparison between one solution and a monolithic FIR when selecting a cutoff
frequency of half the Nyquist frequency: a series of 5 FIR and a series of 10 FIR with the
same space usage are provided as selected by the MILP solver. The FIR cascade provides better
rejection than the monolithic FIR at the expense of a lower cutoff frequency, which remains to
be tuned or compensated for.


The resource occupation when synthesizing such FIRs on a Xilinx FPGA is summarized in Tab. \ref{t1}.
We have considered a set of resources representative of the hardware platform we work on,
Avnet's Zedboard featuring a Xilinx XC7Z020-CLG484-1 Zynq System on Chip (SoC). The results on
Tab. \ref{t1} emphasize that implementing the single monolithic FIR is impossible due to
insufficient hardware resources (exhausted LUT resources), while the cascades of 5 or 10 FIR
filters fit in the available resources. However, in all cases the DSP resources are fully
used: while the design can be synthesized using Xilinx proprietary Vivado 2016.2 software,
implementing the design fails due to the excessive resource usage preventing routing the signals
on the FPGA. Such results emphasize on the one hand the improvement prospects of the optimization
procedure in finding non-trivial solutions matching resource constraints, but on the other
hand also illustrate the limitation of a model whose abstraction layer does not account
for the detailed architecture of the hardware.

\begin{table}[h!tb]
\caption{Resource occupation on a Xilinx Zynq-7000 series FPGA when synthesizing the FIR cascade
identified as optimal by the MILP solver within a finite resource criterion. The last line refers
to available resources on a Zynq-7020 as found on the Zedboard.}
\begin{center}
\begin{tabular}{|c|cccc|}\hline
FIR & BlockRAM & LookUpTables & DSP & rejection (dB)\\\hline\hline
1 (monolithic) & 1 & 76183 & 220 & -162 \\
5 & 5 & 18597 & 220 & -160 \\
10 & 8 & 24729 & 220 & -161 \\\hline\hline
\textbf{Zynq 7020} & \textbf{420} & \textbf{53200} & \textbf{220} &  \\\hline
%\begin{tabular}{|c|ccccc|}\hline
%FIR & BRAM36 & BRAM18 & LUT & DSP & rejection (dB)\\\hline\hline
%1 (monolithic) & 1 & 0 & {\color{Red}76183} & 220 & -162 \\
%5 & 0 & 5 & {\color{Green}18597} & 220 & -160 \\
%10 & 0 & 8 & {\color{Green}24729} & 220 & -161 \\\hline\hline
%\textbf{Zynq 7020} & \textbf{140} & \textbf{280} & \textbf{53200} & \textbf{220} &  \\\hline
\end{tabular}
\end{center}
%\vspace{-0.7cm}
\label{t1}
\end{table}

\subsection{Alternate criteria}\label{median}

Fig. \ref{compare-fir} provides FIR solutions matching well the targeted transfer
function, namely low ripple in the passband defined as the first 40\% of the frequency
range and maximum rejection of 160~dB in the last 40\% stopband. To demonstrate the need
to properly select the optimization criterion, we now illustrate two cases of poor
filter shapes obtained by selecting the mean or median value of the rejection,
with no consideration for the ripples in the passband. The results of the optimizations
in these cases are shown in Figs. \ref{compare-mean} and \ref{compare-median}.

\begin{figure}[h!tb]
\includegraphics[width=\linewidth]{images/fir-mono-vs-fir-series-noise-fixe-mean-light.pdf}
\caption{Comparison of the rejection capability between a series of FIR and a monolithic FIR
with a cutoff frequency set at half the Nyquist frequency, when optimizing the mean value of
the stopband rejection.}
\label{compare-mean}
\end{figure}

In the case of the mean value criterion (Fig. \ref{compare-mean}), the solution is not
acceptable since the notch at the end of the transition band compensates for some unacceptable
rise in the rejection close to the Nyquist frequency. Applying such a filter might yield excessive
high frequency spurious components to be aliased at low frequency when decimating the signal.
Similarly, the lack of a criterion on the passband shape induces poor flatness and
a slowly decaying transfer function which starts attenuating spectral components well before the
transition band. Such issues are partly alleviated by replacing the mean rejection value with
a median rejection value (Fig. \ref{compare-median}) but solutions remain unacceptable for
the reasons stated previously and much poorer than those found with the maximum rejection criterion
selected earlier (Fig. \ref{compare-fir}).
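
The difference between these statistics can be observed on a single quantized FIR response with
a short Python sketch (an illustrative 8-bit low-pass filter, not the data of the figures):
{\footnotesize
\begin{verbatim}
import numpy as np
from scipy import signal

b = np.round(signal.firwin(64, 0.4) * 127)  # 8-bit low-pass
freq, h = signal.freqz(b, worN=2048)
h_db = 20 * np.log10(np.abs(h) / np.abs(h).max() + 1e-12)
stop = h_db[freq / np.pi > 0.6]

print("mean  :", stop.mean())  # can hide a rise near Nyquist
print("median:", np.median(stop))
print("max   :", stop.max())   # worst-case criterion above
\end{verbatim}
}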

\begin{figure}[h!tb]
\includegraphics[width=\linewidth]{images/fir-mono-vs-fir-series-noise-fixe-median-light.pdf}
\caption{Comparison of the rejection capability between a series of FIR and a monolithic FIR
with a cutoff frequency set at half the Nyquist frequency, when optimizing the median value of
the stopband rejection.}
\label{compare-median}
\end{figure}

\section{Filter coefficient selection}

The coefficients of a single monolithic filter are computed as the impulse response
of the filter transfer function, and practically approximated by a multitude of methods,
including least-squares optimization (Matlab's {\tt firls} function) and Hamming or Kaiser
windowing (Matlab's {\tt fir1} function).

\begin{figure}[h!tb]
\includegraphics[width=\linewidth]{images/fir1-vs-firls}
\caption{Evolution of the rejection capability of least-square optimized filters and Hamming
FIR filters as a function of the number of coefficients, for floating point numbers and 8-bit
encoded integers.}
\label{2}
\end{figure}

Cascading filters opens a new optimization opportunity by
selecting various coefficient sets depending on the number of coefficients. Fig. \ref{2}
illustrates that for a number of coefficients ranging from 8 to 47, {\tt fir1} provides a better
rejection than {\tt firls}: since the linear solver increases the number of coefficients along
the processing chain, the type of selected filter also changes depending on the number of coefficients
and evolves along the processing chain.
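
The comparison of Fig. \ref{2} can be sketched with the SciPy counterparts of these Matlab
functions ({\tt firwin} with its default Hamming window standing for {\tt fir1}, {\tt firls}
for the least-squares design); the band edges and 8-bit quantization below are indicative:
{\footnotesize
\begin{verbatim}
import numpy as np
from scipy import signal

def max_stop_dB(b):
    freq, h = signal.freqz(b, worN=2048)
    h_db = 20 * np.log10(np.abs(h)
                         / np.abs(h).max() + 1e-12)
    return h_db[freq / np.pi > 0.6].max()

for n in range(9, 48, 8):  # firls needs an odd tap count
    b1 = signal.firwin(n, 0.4)   # Hamming window (fir1)
    b2 = signal.firls(n, [0, 0.4, 0.6, 1], [1, 1, 0, 0])
    print(n, max_stop_dB(np.round(b1 * 127)),
             max_stop_dB(np.round(b2 * 127)))
\end{verbatim}
}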

\section{Conclusion}

We address the optimization problem of designing a low-pass filter chain in a Field Programmable Gate
Array for improved noise rejection within constrained resource occupation, as needed for
real time processing of radiofrequency signals when characterizing the spectral phase noise
characteristics of stable oscillators. The flexibility of the digital approach makes the result
best suited for closing the loop and using the measurement output in a feedback loop for
controlling clocks, e.g. in a quartz-stabilized high performance clock whose long term behavior
is controlled by a non-piezoelectric resonator (sapphire resonator, microwave or optical
atomic transition).

\section*{Acknowledgement}

This work is supported by the ANR Programme d'Investissement d'Avenir in
progress at the Time and Frequency Departments of the FEMTO-ST Institute
(Oscillator IMP, First-TF and Refimeve+), and by R\'egion de Franche-Comt\'e.
The authors would like to thank E. Rubiola, F. Vernotte, G. Cabodevila for support and
fruitful discussions.

\bibliographystyle{IEEEtran}
\balance
\bibliography{references,biblio}
\end{document}

	\section{Scheduling context}
	In this part, we give definitions of terms related to the field of scheduling
	and we show that the problem at hand is very close to a scheduling problem. We can therefore
	go further than the works reviewed previously and attempt scheduling
	and optimization approaches.

	\subsection{Definition of the vocabulary}
	First of all, we must define what a scheduling problem is. Two important
	definitions need to be given. The first one is proposed by Legrand and Robert in their book \cite{def1-ordo}:
	\begin{definition}
		\label{def-ordo1}
		A schedule of a task system $G\ =\ (V,\ E,\ w)$ is a function $\sigma$:
		$V \rightarrow \mathbb{N}$ such that $\sigma(u) + w(u) \leq \sigma(v)$ for every edge $(u,\ v) \in E$.
	\end{definition}

	Put more simply, the set $V$ represents the tasks to execute, the set $E$ represents the dependencies
	between tasks and $w$ the execution times of the tasks. The function $\sigma$ thus gives the start time of
	each task. The definition states that if a task $v$ depends on a task $u$, then
	the start date of $v$ must be greater than or equal to the start of the execution of task $u$ plus its
	execution time.
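
	As a minimal illustration of Definition \ref{def-ordo1}, the following Python check verifies
	that a candidate schedule respects all precedence constraints on a small, made-up task graph:
{\footnotesize
\begin{verbatim}
def is_schedule(sigma, w, edges):
    """sigma(u) + w(u) <= sigma(v) for every edge (u, v)."""
    return all(sigma[u] + w[u] <= sigma[v]
               for u, v in edges)

w = {"a": 2, "b": 3, "c": 1}                  # durations w(u)
edges = [("a", "b"), ("a", "c"), ("b", "c")]  # dependencies
print(is_schedule({"a": 0, "b": 2, "c": 5}, w, edges))  # True
print(is_schedule({"a": 0, "b": 1, "c": 5}, w, edges))  # False
\end{verbatim}
}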

	Another important definition, proposed by Leung et al. \cite{def2-ordo}, is:
	\begin{definition}
		\label{def-ordo2}
		Scheduling deals with the allocation of scarce resources to activities with
		the objective of optimizing one or more performance criteria.
	\end{definition}

	This definition is more generic but it is of greater interest to us than Definition \ref{def-ordo1}.
	Indeed, the only part of this first definition that matters to us is the respect of task precedence;
	in practice, the actual start dates are not really of interest.

	Definition \ref{def-ordo2}, on the other hand, is at the heart of the project. To be convinced of this,
	we must first define which type of scheduling problem we are dealing with and which
	methods can be applied.

	Scheduling problems can be classified into different categories:
	\begin{itemize}
		\item Independent tasks: in this category of problems, the tasks are completely independent
		from one another. In our case, this is not the most suitable model.
		\item Task graphs: Definition \ref{def-ordo1} describes this category. Most of the time,
		the tasks are represented by a DAG. This category is very close to our case since we also have to execute
		tasks with a number of dependencies. One may even say that in some cases
		we have anti-trees, that is, a multitude of input tasks converging towards a single
		final task.
		\item Workflows: this category is a subcategory of task graphs in the sense that
		it consists of a task graph repeated many times. This is exactly the type of problem
		we deal with here.
	\end{itemize}

	Of course, this list is not exhaustive and many other classifications and sub-classifications
	of these problems exist. We have only mentioned the most common categories here.

	Another point to define is the optimization criterion. Here again, a large number of
	criteria are possible. We describe the main ones:
	\begin{itemize}
		\item Total completion time (or makespan): this criterion is one of the most common
		optimization criteria. It consists in minimizing the completion date of the last task of the set of
		tasks to execute. The goal of this optimization is thus to find the optimal schedule allowing
		the earliest possible completion.
		\item Sum of execution times (flowtime): it consists in summing the execution times of all tasks
		and optimizing this result.
		\item Throughput: this criterion aims at maximizing the data processing throughput.
	\end{itemize}

	In addition, several optimization criteria may be needed at once. This is then a
	multi-criteria optimization. Of course, this makes the problem all the more complex, since the most optimal
	solution for one criterion may be very poor for another criterion. In that case, the goal is to find a solution
	achieving the best compromise between all criteria.

	\subsection{Formalization of the problem}
	\label{formalisation}
	Now that we have given the vocabulary related to scheduling, we can try to characterize
	our problem formally. We take up the constraints stated in section \ref{def-contraintes}
	and try to formalize them as precisely as possible.

	As stated above, a task is a processing block. Each task $i$ has a set of parameters
	that we call $\mathcal{P}_{i}$. This set $\mathcal{P}_i$ is specific to each task and varies from one
	task to another. We will come back later to the parameters that may compose this set.

	Besides this set $\mathcal{P}_i$, each task has common parameters:
	\begin{itemize}
		\item Task duration: as stated before, in an FPGA time is counted in clock cycles.
		Moreover, the blocks are constantly active, and some are even able to read an input and return a result at every clock cycle.
		The duration of a task can therefore not be the lapse of time between the input of one datum and the output of another. We define the
		duration as the processing time of a datum, that is, the time difference between the output date of a datum
		and its input date. We call this duration $\delta_i$.
		\item Precision: the precision of a datum is its number of significant bits. Indeed, over the course of the processing,
		precisions may vary. We denote the input precision of a task $i$ by $\pi_i^-$ and the output precision by $\pi_i^+$.
		\item Input (or output) stream frequency: this frequency represents the frequency of the incoming (resp. outgoing) data.
		Depending on the task, the frequencies vary. Indeed, some blocks slow down the stream, which is why we distinguish the
		input stream frequency from the output frequency. We denote the input stream frequency by $f_i^-$ and the output frequency by $f_i^+$.
		\item Input (or output) data quantity: this is the quantity of data the block expects to process (resp.
		is able to produce). Tasks may have to process large volumes of data and output only part of them. Here
		again, input and output must be distinguished. We denote the incoming data quantity by $q_i^-$
		and the outgoing data quantity by $q_i^+$ for a task $i$.
		\item Input (or output) throughput: this parameter corresponds to the data throughput the task is able to process or that it
		delivers at its output. It is simply the combination of the two previous parameters. We define the input throughput of
		task $i$ as $d_i^-\ =\ q_i^-\ *\ f_i^-$ and the output throughput as $d_i^+\ =\ q_i^+\ *\ f_i^+$.
		\item Task size: since the area of an FPGA is limited, this parameter expresses the area the task occupies within the chip.
		We call this size $\mathcal{A}_i$.
		\item Predecessors and successors of a task: these let us know the tasks required before
		task $i$ can be processed as well as the tasks that depend on it. These sets are denoted $\Gamma _i ^-$ and $ \Gamma _i ^+$ \\
	\end{itemize}

	These common parameters are strongly linked to the elements of $\mathcal{P}_i$. Here are a few examples of relations
	we have identified:
	\begin{itemize}
		\item $ \delta _i \ = \ \mathcal{F}_{\delta}(\pi_i^-,\ \pi_i^+,\ d_i^-,\ d_i^+,\ \mathcal{P}_i) $ gives the execution time
		of the task as a function of the required precision, the throughput and the internal parameters.
		\item $ \pi _i ^+ \ = \ \mathcal{F}_{p}(\pi_i^-,\ \mathcal{P}_i) $, where the function $\mathcal{F}_p$ gives the output precision as a function of
		the input precision and the internal parameters of the task.
		\item $d_i^+\ =\ \mathcal{F}_d(d_i^-, \mathcal{P}_i)$, where the function $\mathcal{F}_d$ gives the output throughput of the task as a function of the
		input throughput and the internal variables of the task.
		\item $\mathcal{A}_i\ =\ \mathcal{F}_A(\pi_i^-,\ \pi_i^+,\ d_i^-,\ d_i^+, \mathcal{P}_i)$
	\end{itemize}
	For the moment, we are not able to give a general definition of these functions. However,
	on a few simple examples (cf. \ref{def-contraintes}), we manage to give an evaluation of these functions.

	Now that we have given all the useful notations, we state constraints relative to our problem. Given
	a DAG $G(V,\ E)$, the following inequalities hold for every edge $(i, j)\ \in\ E$:

	\paragraph{Precision constraint:}
	This inequality expresses the precision constraint from one task to the next:
	\begin{align*}
		\pi _i ^+ \geq \pi _j ^-
	\end{align*}

	\paragraph{Throughput constraint:}
	This equation expresses the throughput constraint from one task to the next:
	\begin{align*}
		d _i ^+ = q _j ^- * (f_i + (1 / s_j) ) & \text{ where } s_j \text{ is a positive buffering delay of the task}
	\end{align*}

	\paragraph{Synchronization constraint:}
	This constraint imposes that if, at some point in the processing, the DAG splits into several parallel branches
	which later merge again, the sum of the latencies along each branch must be the same.
	More formally, if there exist several disjoint paths from task $s$ to task $f$, then:
	\begin{align*}
		\forall \text{ path } \mathcal{C}_1(s, .., f),
			\forall \text{ path } \mathcal{C}_2(s, .., f)
				\text{ such that } \mathcal{C}_1 \neq \mathcal{C}_2
		\Rightarrow
			\sum _{i \in \mathcal{C}_1} \delta_i = \sum _{i \in \mathcal{C}_2} \delta_i
	\end{align*}

	\paragraph{Area constraint:}
	This inequality expresses the area constraint within the FPGA. The maximum size of the FPGA chip is denoted $\mathcal{A}_{FPGA}$:
	\begin{align*}
		\sum _{\text{task } i} \mathcal{A}_i \leq \mathcal{A}_{FPGA}
	\end{align*}

	\subsection{Modeling examples}
	\label{exemples-modeles}
	We now take a few simple processing blocks in order to illustrate our model.
	For all our examples, we take an input throughput of 200~MB/s with a 16~bit precision.

	Let us first take the example of a decimation block. The purpose of this block is to slow down the stream by keeping
	only some of the data at a regular interval. This interval is called the decimation factor, denoted $N$.

	According to our model:
	\begin{itemize}
		\item $N \in \mathcal{P}_i$
		\item $\delta _i = N$ clock cycles
		\item $\pi _i ^+ = \pi _i ^- = 16$~bits
		\item $f _i ^+ = f _i ^-$
		\item $q _i ^+ = q _i ^- / N$
		\item $d _i ^+ = (q _i ^- / N) * f _i ^-$
		\item $\Gamma _i ^+ = \Gamma _i ^- = 1$\\
	\end{itemize}

	Another interesting example is the case of splitters. This is also a very
	simple block, which duplicates a stream. One can thus give the number of outputs to create; we denote this parameter
	$X$. Our model gives:
	\begin{itemize}
		\item $X \in \mathcal{P}_i$
		\item $\delta _i = 1$ clock cycle
		\item $\pi _i ^+ = \pi _i ^- = 16$~bits
		\item $f _i ^+ = f _i ^-$
		\item $q _i ^+ = q _i ^-$
		\item $d _i ^+ = d _i ^-$
		\item $\Gamma _i ^- = 1$
		\item $\Gamma _i ^+ = X$\\
	\end{itemize}

	The next example deals with the shifter. This block aims at reducing the number of bits of the
	data in order to speed up the processing in the following blocks. One can thus give the number of bits to shift;
	we denote this parameter $S$. Our model gives:
	\begin{itemize}
		\item $S \in \mathcal{P}_i$
		\item $\delta _i = 1$ clock cycle
		\item $\pi _i ^+ = \pi _i ^- - S$
		\item $f _i ^+ = f _i ^-$
		\item $q _i ^+ = q _i ^-$
		\item $d _i ^+ = d _i ^-$
		\item $\Gamma _i ^+ = \Gamma _i ^- = 1$\\
	\end{itemize}

	Let us treat one last, slightly more complex example: a decimating filter (FIR). This block
	has many internal parameters. One can define a number of stages $E$, which represents the number
	of iterations to perform before stopping the processing. To perform its filtering, the block must be given a set
	of coefficients $C$, and these coefficients consequently have their own precision $\pi _C$. Finally, the last
	parameter to give is the decimation factor $N$. Applying our model yields:
	\begin{itemize}
		\item $E \in \mathcal{P}_i$
		\item $C \in \mathcal{P}_i$
		\item $\pi _C \in \mathcal{P}_i$
		\item $N \in \mathcal{P}_i$
		\item $\delta _i = E * |C| * q_i^-$ clock cycles
		\item $\pi _i ^+ = \pi _i ^- + \pi _C$
		\item $f _i ^+ = f _i ^-$
		\item $q _i ^+ = q _i ^- / N$
		\item $d _i ^+ = (q _i ^- / N) * f _i ^-$
		\item $\Gamma _i ^+ = \Gamma _i ^- = 1$\\
	\end{itemize}

	These examples are only provisional models; to assess their accuracy, they will have to be
	confronted with simulations.
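
	As a provisional sketch (plain Python rather than a formal model), the common parameters listed
	above can be encoded as follows and instantiated for the decimator and shifter examples; the
	200~MB/s, 16~bit input stream is the one assumed throughout this section:
{\footnotesize
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Task:
    delta: int      # latency (clock cycles)
    pi_in: int      # input precision (bits)
    pi_out: int     # output precision (bits)
    f_in: float     # input stream frequency
    f_out: float    # output stream frequency
    q_in: float     # input data quantity
    q_out: float    # output data quantity

    @property
    def d_in(self):      # d^- = q^- * f^-
        return self.q_in * self.f_in

    @property
    def d_out(self):     # d^+ = q^+ * f^+
        return self.q_out * self.f_out

def decimator(N, pi=16, f=1.0, q=200e6):
    return Task(N, pi, pi, f, f, q, q / N)

def shifter(S, pi=16, f=1.0, q=200e6):
    return Task(1, pi, pi - S, f, f, q, q)

print(decimator(4).d_out, shifter(3).pi_out)
\end{verbatim}
}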


Although the papers on skeletons, \cite{gwen-cogen}, \cite{skeleton} and \cite{hide}, gave us hints towards a possible
	model, they were still too focused on the spatial optimization of the blocks. We therefore took inspiration from these works
	to propose our model, while abstracting away the low-level optimizations.