DIGITAL FILTER STRUCTURES

Let us start the discussion of digital filter structures by considering the problem of analysis. Analysis is very simple: one writes the equations and eliminates the intermediate variables to obtain the overall transfer function Y(z)/X(z). The synthesis problem, on the other hand, is tougher. One characteristic of the synthesis problem is that if one solution exists, then an indefinite number of solutions exist.

One may ask: if one solution has been found, why not simply stick to it? The reason is that different structures have different properties, and there are two things we have to worry about. The first is overflow: if at any stage in the filter the signal saturates, the filter ceases to perform in the desired manner. The second is word length, which arises because of the finite number of bits available; of necessity, the coefficients as well as the signals have to be truncated.

For example, an 8-bit number multiplied by another 8-bit number gives a 16-bit number, and if the hardware is 8-bit, the result has to be truncated back to 8 bits, as illustrated below. Different structures behave differently under this quantization, and we would like the one with the lowest quantization error. This is why a multiplicity of structures has to be worked out before choosing an optimum or near-optimum structure.
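As a minimal sketch of the truncation just described, the Python snippet below multiplies two signed 8-bit fixed-point fractions and truncates the 16-bit product back to 8 bits. The operand values and the Q7 scaling convention are assumptions chosen purely for illustration.

```python
import numpy as np

# Two signed 8-bit operands, interpreted as Q7 fractions in [-1, 1).
a = np.int8(90)                    # represents 90/128  = 0.703125
b = np.int8(77)                    # represents 77/128  = 0.6015625

# The full product needs 16 bits (Q14)...
full = np.int16(a) * np.int16(b)   # 6930

# ...but 8-bit hardware must truncate it back to 8 bits (Q7),
# keeping only the most significant bits.
truncated = np.int8(full >> 7)     # 54

print(full / 2.0**14)              # exact product:    0.4229736328125
print(truncated / 2.0**7)          # truncated result: 0.421875
```

The small discrepancy between the two printed values is precisely the kind of quantization error whose size depends on the structure chosen.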

A structure can also be made canonic with respect to multipliers as well as delays. We shall elaborate on this and see how a general infinite impulse response transfer function can be realized by a canonic structure.

DIGITAL FILTER STRUCTURES: Canonic Structure

First, let us discuss transposition, which allows one structure to be converted into another. We shall explain the process with reference to the transfer function:

H(z) = \frac{p_0 + p_1 z^{-1}}{1 + d_1 z^{-1}}

Therefore the structure we get from the transfer function is shown below:

Canonic Structure

This is a canonic structure: it is canonic in multipliers, requiring only the three multipliers p0, p1, and d1, and canonic in delays, requiring only one delay since it is a first-order filter. In this structure we will make the following changes:

First, we interchange the input and output and reverse all the arrows. This is shown below:

Canonic Structure

We observe that the summation nodes now become pickup nodes, because they no longer perform summation; similarly, all the pickup nodes now become summation nodes. Therefore:

Transposition structure

Transposition of H(z)

We have now simply transposed the structure. Transposition involves three steps:

  • Interchange the input and output
  • Reverse all the arrows
  • Replace each summer by a pickup point or node and vice versa.

Conventionally, we draw these structures with the input at the left-hand side and the output at the right-hand side.

This is true in general: given a digital filter structure, one can always make a transposition to get an alternative structure whose overflow and word-length properties are different. If we had infinite-bit arithmetic (infinite storage capacity), the two structures would behave identically; that is, in theory they are the same, but in practical implementations they differ. A small sketch follows.
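The Python sketch below realizes the first-order H(z) above in its canonic direct form and in its transposed form, and confirms that in floating point the two produce the same output. The coefficient values p0, p1, d1 are arbitrary, assumed only for this example.

```python
import numpy as np

# First-order section H(z) = (p0 + p1 z^-1) / (1 + d1 z^-1).
# Coefficient values are arbitrary, chosen only for illustration.
p0, p1, d1 = 0.5, 0.3, -0.4

def canonic_first_order(x):
    """Canonic direct form: one delay w, three multipliers p0, p1, d1."""
    y = np.zeros_like(x)
    w = 0.0                       # the single delay element
    for n, xn in enumerate(x):
        v = xn - d1 * w           # input side of the delay
        y[n] = p0 * v + p1 * w    # output side of the delay
        w = v
    return y

def transposed_first_order(x):
    """Transposed form: input/output interchanged, arrows reversed,
    summers and pickup nodes swapped."""
    y = np.zeros_like(x)
    s = 0.0                       # the single delay element (state)
    for n, xn in enumerate(x):
        y[n] = p0 * xn + s
        s = p1 * xn - d1 * y[n]   # state updated for the next sample
    return y

x = np.random.randn(64)
print(np.allclose(canonic_first_order(x), transposed_first_order(x)))  # True
```

With finite word lengths the internal signals of the two forms are quantized at different points, which is where their practical behavior diverges.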

We shall give one or two illustrations of transposed structures. Well, before that let us study systematically how finite impulse response filters are realized.

Finite impulse response

A finite impulse response filter, as we know, is of the form

H(z) = \sum_{k=0}^{N} h[k] z^{-k}

so that the length is N + 1. Very easily, the difference equation follows as

y[n] = \sum_{k=0}^{N} h[k] x[n-k]

If we derive the structure that uses the coefficients h[k] (or hk for short), then for a 4th-order system, that is N = 4,

H(z) = \sum_{k=0}^{4} h[k] z^{-k}

we get a direct structure, because the multipliers are obtained directly from the transfer function coefficients and no manipulation is needed. If we transpose this structure we get:

Finite impulse response

This is the resulting transposed structure. The two structures are theoretically equivalent, but in practical implementation their properties differ. These direct forms are sometimes known as transversal structures; a sketch of both forms is given below. There are also indirect structures for finite impulse response filters.
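The following Python sketch implements both the direct (transversal) form and its transpose for a 4th-order FIR filter, checking the result against ordinary convolution. The tap values are arbitrary, assumed purely for illustration.

```python
import numpy as np

# 4th-order FIR example, N = 4, so length N + 1 = 5 taps.
# The tap values below are arbitrary, for illustration only.
h = np.array([0.1, 0.25, 0.3, 0.25, 0.1])

def fir_direct(h, x):
    """Direct (transversal) form: a tap-delay line on the input."""
    N = len(h) - 1
    delay = np.zeros(N)                           # holds x[n-1] ... x[n-N]
    y = np.zeros_like(x)
    for n, xn in enumerate(x):
        y[n] = h[0] * xn + np.dot(h[1:], delay)
        delay = np.concatenate(([xn], delay[:-1]))  # shift the delay line
    return y

def fir_transposed(h, x):
    """Transposed direct form: the delay line carries partial sums."""
    N = len(h) - 1
    s = np.zeros(N)                               # partial sums s1 ... sN
    y = np.zeros_like(x)
    for n, xn in enumerate(x):
        y[n] = h[0] * xn + s[0]
        s = np.concatenate((s[1:], [0.0])) + h[1:] * xn
    return y

x = np.random.randn(128)
print(np.allclose(fir_direct(h, x), fir_transposed(h, x)))        # True
print(np.allclose(fir_direct(h, x), np.convolve(h, x)[:len(x)]))  # True
```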

It is possible to write the transfer function as a product of first-order factors alone, but the disadvantage is that we then get complex coefficients, because it is not guaranteed that all zeros are real. If all the zeros are real, we can indeed write the transfer function as a product of first-order factors only. Otherwise, complex-conjugate zeros must be grouped together so that the coefficients are real, since we want to implement the filter in real time with real coefficients. Each of the resulting factors can be realized in the direct form or its transpose, and the realizations are then cascaded together. This is called a cascade realization; the grouping of conjugate zeros is sketched below.
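A sketch of the zero-grouping step follows. It assumes a hypothetical FIR coefficient vector h and uses numpy's root finder to pair complex-conjugate zeros into real-coefficient second-order factors.

```python
import numpy as np

# Hypothetical FIR coefficient vector, used only for illustration.
h = np.array([1.0, -0.5, 0.9, -0.3, 0.2])

zeros = np.roots(h)                        # zeros of H(z), generally complex

factors = []
used = np.zeros(len(zeros), dtype=bool)
for i, z in enumerate(zeros):
    if used[i]:
        continue
    if abs(z.imag) < 1e-8:                 # real zero -> first-order factor
        factors.append(np.array([1.0, -z.real]))
        used[i] = True
    else:                                  # complex zero -> pair with conjugate
        j = next(k for k in range(i + 1, len(zeros))
                 if not used[k] and np.isclose(zeros[k], z.conj()))
        factors.append(np.array([1.0, -2 * z.real, abs(z) ** 2]))
        used[i] = used[j] = True

# Multiplying the factors (and the overall gain h[0]) back together
# recovers the original polynomial.
recon = np.array([h[0]])
for f in factors:
    recon = np.convolve(recon, f)
print(np.allclose(recon, h))               # True
```

Each real-coefficient factor produced this way can then be realized in direct or transposed form and placed in the cascade.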

Each of these factors now has to be realized in the transversal form, or in whatever structure is chosen; as we said, if one solution exists then an indefinite number of solutions exist. The cascade implementation is therefore given as

H(z) = \prod_{i} H_i(z)

where each Hi(z) is no more complicated than a quadratic (second-order) expression.

In theory the order of cascading is not important: we can start the cascade with H1(z) followed by H2(z), or interchange them, because H_1(z)H_2(z)H_3(z) = H_2(z)H_3(z)H_1(z) (multiplication is commutative). In practice, however, the order of cascading is important because of word-length effects and overflow, as the sketch below illustrates.
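The sketch below uses scipy's second-order-section filter to cascade two hypothetical sections in both orders; in floating point the outputs agree to machine precision, which is the ideal commutativity just described. The section coefficients are arbitrary assumptions.

```python
import numpy as np
from scipy import signal

# Two hypothetical second-order sections [b0, b1, b2, a0, a1, a2];
# the coefficient values are arbitrary, for illustration only.
sec1 = [0.2, 0.1, 0.05, 1.0, -0.6, 0.25]
sec2 = [1.0, -0.4, 0.30, 1.0,  0.3, 0.10]

x = np.random.randn(256)

y12 = signal.sosfilt(np.array([sec1, sec2]), x)   # cascade H1 then H2
y21 = signal.sosfilt(np.array([sec2, sec1]), x)   # cascade H2 then H1

# In floating point the two orderings agree to roughly machine precision;
# in fixed point, overflow and quantization would make them differ.
print(np.max(np.abs(y12 - y21)))
```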

Cascade Realizations

So the job is not complete after finding a cascade realization: if we want to implement it, we must study the quantization behavior and the overflow behavior of the structure. What one normally does is place the section with the lowest gain at the beginning of the cascade, so that its output signal does not saturate the following blocks realizing the various sections. Even so, it may not be possible to avoid overflow, particularly in fixed-point arithmetic.

The dynamic range of fixed-point arithmetic is severely limited compared to floating point. If overflow does happen, and one is committed to fixed point because all the other peripheral structures are fixed-point implementations, then one cannot suddenly switch to floating point: it is a much more costly proposition than the alternative, which is scaling.

Suppose at the i-th stage the signal exceeds the dynamic range of the storage or accumulators. Then at the (i − 1)-th stage one inserts a scaling constant, usually of the form 2^{-1}, and the overall transfer function that is realized is no longer the original H(z) but H(z) modified by the scaling constant. So scaling is also an important consideration.

One may ask: why not scale at the input? Scaling at the input exposes the system to noise: if the signal is scaled down at the input, it becomes weak and noise may take over (noise corruption). Therefore scaling needs to be done judiciously, in a distributed manner, so that one maintains a balance between overflow, signal strength, and word-length effects (that is, quantization errors). The sketch below illustrates the idea, and with it the discussion of cascade realization ends.
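This is a minimal floating-point sketch of distributed scaling: a power-of-two constant placed between two hypothetical first-order sections reduces the intermediate signal level, while the overall result becomes a scaled version of the original cascade. The section coefficients and the scale factor 2^{-2} are assumptions for illustration only.

```python
import numpy as np

def first_order_section(p0, p1, d1, x):
    """Canonic first-order section, as in the structure discussed above."""
    y, w = np.zeros_like(x), 0.0
    for n, xn in enumerate(x):
        v = xn - d1 * w
        y[n] = p0 * v + p1 * w
        w = v
    return y

x = np.random.randn(512)

# Unscaled cascade: the first (high-gain) section may overflow a
# fixed-point accumulator at its output.
v = first_order_section(3.0, 1.5, -0.9, x)
y = first_order_section(0.5, 0.2, 0.4, v)

# Distributed scaling: a power-of-two constant 2^-2 inserted between the
# sections reduces the intermediate signal level. The realized transfer
# function is then (2^-2) * H(z) rather than H(z) itself.
v_scaled = v * 2.0**-2
y_scaled = first_order_section(0.5, 0.2, 0.4, v_scaled)

print(np.max(np.abs(v)), np.max(np.abs(v_scaled)))  # intermediate peak reduced 4x
print(np.allclose(y_scaled, y * 2.0**-2))           # True: output merely scaled
```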

Parallel Realizations

Is it possible to carry out a parallel implementation of finite impulse response transfer functions? Parallel realization is usually resorted to simply to speed up the process, since a parallel algorithm or implementation is faster than serial processing. For a finite impulse response transfer function, however, parallel processing does not help, because suppose we have

H(z) = \sum_{k=0}^{7} h[k] z^{-k}

Then 7 delays are needed in whatever way one chooses to decompose the filter, so the processing time cannot be reduced by a parallel implementation in the case of finite impulse response filters or transfer functions. This is not true for infinite impulse response transfer functions. However, parallel decomposition is useful in the context of multi-rate signal processing, that is, interpolation and decimation.

Parallel decomposition helps here, and it is done in a particular manner called polyphase decomposition. Polyphase decomposition has come to be an integral part of any programmable digital signal processor performing interpolation and decimation. The main reason parallel decomposition is resorted to here is that it reduces the hardware complexity of the structure; a small sketch of the polyphase split is given below.
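As a minimal sketch of polyphase decomposition, the Python snippet below splits a hypothetical FIR filter into M = 2 polyphase components and verifies that the summed branch outputs reproduce ordinary filtering. The coefficients are arbitrary assumptions.

```python
import numpy as np

# Polyphase split of an FIR filter into M = 2 branches:
# H(z) = E0(z^2) + z^-1 E1(z^2), with E0 taking the even-indexed taps
# and E1 the odd-indexed taps. Coefficients here are arbitrary.
h = np.array([0.05, 0.2, 0.5, 0.2, 0.05, 0.1])
M = 2
e = [h[m::M] for m in range(M)]            # polyphase components E0, E1

x = np.random.randn(200)

# Reference: ordinary convolution with the full filter.
y_ref = np.convolve(x, h)[:len(x)]

# Polyphase form: each branch filters with E_m(z^M) (taps spread by M)
# after a delay of m samples, and the branch outputs are summed.
y = np.zeros(len(x))
for m in range(M):
    em_upsampled = np.zeros(len(e[m]) * M)
    em_upsampled[::M] = e[m]               # E_m(z^M): insert M-1 zeros between taps
    branch = np.convolve(np.concatenate((np.zeros(m), x)), em_upsampled)
    y += branch[:len(x)]

print(np.allclose(y, y_ref))               # True
```

In a multi-rate system the branches would run at the lower rate, which is where the hardware saving comes from.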
