In this subsection, we present an example to clarify how preprocessing is performed in our approach by PCIU. The example is based on the information for chunk #10 found in Table 1.
This example illustrates one of the main features of the PCIU algorithm, namely, that it can accommodate efficient incremental updates to the rule set.
Because preprocessing in PCIU is a fast and efficient task, fragmentation is not an issue: the packet classification engine can simply be reset and restarted.
The PCIU algorithm was evaluated and compared to state-of-the-art techniques such as RFC and HiCuts using several benchmarks in .
This paper seeks to explore the design space of translating the PCIU algorithm into hardware by utilizing several optimization techniques, ranging from fine-grain to coarse-grain and parallel coarse-grain approaches.
The Impulse-C CoDeveloper Application Manager Xilinx Edition Version 3.70.a.10 was used to implement the PCIU algorithm.
Figure 3 depicts the overall PCIU Impulse-C system organization.
Note that the proposed PCIU hardware accelerator can be mapped to other, less expensive FPGAs such as the Virtex-5 LX, provided that enough Block RAM is available to accommodate the bit vectors (stored in Mem2) that result from the preprocessing stage and are used in the classification phase.
In the next few sections, we describe in detail the steps taken to transform the pure software implementation of PCIU into hardware via the Impulse-C platform.
The original PCIU's preprocessing C-code was mapped to the CoDeveloper to generate the baseline implementation.
The pure software implementation of the PCIU algorithm was executed on a state-of-the-art x86 Family 15 Model 4 Intel Xeon processor operating at 3.4 GHz.
The incremental update capabilities of the PCIU running on an FPGA are still preserved.