Viognier wrote:
> When resampling a sequence to an output sample rate equal to a rational
> number times the input sample rate (FSout = (P/Q) * FSin), polyphase
> filtering is the method of choice. But when the output sample rate is
> an irrational multiple of the input rate (FSout = r * FSin), one cannot
> use polyphase filtering. That's when spline methods may be helpful.

i kinda agree with Eric that polyphase interpolation (essentially designing an optimal LPF and keeping the impulse response in a table) is preferable to spline or polynomial interpolation for any bandlimited reconstruction, but where i might agree with Viognier is because of the cost. you can't have a polyphase table that has all of the infinitely many possible fractional delays. but you *can* evaluate a polynomial for any fractional delay expressible in the machine arithmetic. and if you can afford some memory for the polyphase table, you can use a polynomial interpolation between entries of that table.

r b-j
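As a concrete illustration of the hybrid r b-j describes, here is a minimal Python sketch: tabulate a polyphase filter at a finite number of phases, then linearly interpolate between adjacent table rows to get coefficients for an arbitrary fractional delay. The Hann-windowed sinc is just a stand-in for whatever optimized LPF design one would actually tabulate, and the function names and parameters are invented for the example.

```python
import numpy as np

def make_polyphase_table(num_phases=32, taps=8):
    """Tabulate an interpolation kernel at num_phases+1 fractional delays
    (the extra end row makes interpolation between rows simple).
    Row p holds the FIR coefficients for fractional delay p/num_phases.
    Kernel: Hann-windowed sinc, standing in for an optimized LPF design."""
    half = taps // 2
    table = np.empty((num_phases + 1, taps))
    for p in range(num_phases + 1):
        # tap positions relative to the desired output instant
        t = np.arange(-half + 1, half + 1) - p / num_phases
        w = 0.5 + 0.5 * np.cos(np.pi * t / half)  # Hann window over the support
        table[p] = np.sinc(t) * w
    return table

def coeffs_for(table, frac):
    """FIR coefficients for an arbitrary 0 <= frac < 1: linear
    interpolation between the two nearest tabulated phases."""
    num_phases = table.shape[0] - 1
    x = frac * num_phases
    p = int(x)
    mu = x - p
    return (1.0 - mu) * table[p] + mu * table[p + 1]
```

At a tabulated phase the lookup is exact; between phases the linear blend supplies the "infinite resolution" in the fractional delay at the cost of a small interpolation error that shrinks as the table grows.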

# interpolation accuracy, oversampling and fractional interpolation

Started by ●October 9, 2006

Reply by ●October 11, 2006

Reply by ●October 11, 2006

Martin Eisenberg wrote:
> robert bristow-johnson wrote:
> > you can compare the merits by recognizing that, even for
> > spline/polynomial interpolation, all methods can be represented
> > as convolving the string of ideal impulses (weighted by the
> > sample values) with the corresponding interpolation function,
> > which is an impulse response of a LPF. then you can judge which
> > LPF is better for the application.
> >
> > Mark wrote:
> > > It seems to me that in the case of the audio method.. if the
> > > low pass filter were "perfect" in rejection of the image
> > > frequencies, the insert sample/filter method provides "perfect"
> > > results. By comparison, the linear, cubic, spline methods are
> > > always imperfect.
> >
> > so is any FIR or causal IIR.
>
> What's more, don't the IRs of interpolating splines converge to the
> sinc function at infinite order?

not B-splines (as i understand the definition), but any polynomial that interpolates directly through the given (uniformly spaced) samples. since the IR corresponding to Lagrange or Hermite polynomials has to be 1 at t=0 and 0 at every other sample time, i would think that this function would become a sinc() function in the limit.

r b-j
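That limit claim can be checked numerically for the Lagrange case: as the order grows, the tap nearest the interpolation point approaches the corresponding sample of sinc(). A small pure-Python sketch (helper name invented for the example):

```python
import math

def lagrange_coeffs(order, d):
    """FIR coefficients of an order-N Lagrange fractional-delay
    interpolator: h[n] = prod_{k != n} (d - k) / (n - k), n = 0..N,
    where d is the (possibly fractional) delay in samples."""
    h = []
    for n in range(order + 1):
        c = 1.0
        for k in range(order + 1):
            if k != n:
                c *= (d - k) / (n - k)
        h.append(c)
    return h

# put the fractional delay at the center of the support, a quarter
# sample off the grid, and watch the center tap approach sinc(0.25)
for order in (6, 20, 60):
    d = order / 2 + 0.25
    h = lagrange_coeffs(order, d)
    center = h[order // 2]                                # tap nearest d
    target = math.sin(math.pi * 0.25) / (math.pi * 0.25)  # sinc(0.25)
    # |center - target| shrinks as the order grows
```

The coefficients are exactly 1 at the sample nearest an integer delay and 0 at the others, and the interpolator reproduces polynomials up to its order exactly, which is why the impulse response must head toward sinc() in the limit.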

Reply by ●October 11, 2006

Eric Jacobsen wrote:
> I'd still disagree, having used polyphase filters for many years for
> locking symbol clocks in digital receivers, with no measurable loss.

I can see using polyphase to compensate for receiver sample clock phase offset. Can it be used to compensate for receiver sample clock frequency offset too (as in the case when a fixed-crystal asynchronous ADC is being used)?

-V

Reply by ●October 11, 2006

On Oct 11, 2:46 pm, "robert bristow-johnson" <r...@audioimagination.com> wrote:
> Viognier wrote:
> > When resampling a sequence to an output sample rate equal to a rational
> > number times the input sample rate (FSout = (P/Q) * FSin), polyphase
> > filtering is the method of choice. But when the output sample rate is
> > an irrational multiple of the input rate (FSout = r * FSin), one cannot
> > use polyphase filtering.

Note that this is confusing the method of sample interpolation with the functional approximation of a filter kernel. You can also directly compute or approximate the coefficients for any fractional delay expressible in the machine arithmetic, at some computational cost for a given accuracy. Whether linear interpolation from a table lookup is the best method for the functional approximation is a completely separate subject. For a "one shot" interpolation within a small set of points, and using a simple interpolation kernel (von Hann windowed sinc, for instance), it can be faster to compute a few individual interpolation coefficients directly using a floating-point math library rather than pre-computing some big table, only to never use any table entries without interpolation. IMHO. YMMV.

-- rhn A.T nicholson d.0.t C-o-M
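A sketch of that "one shot" approach: the von Hann windowed-sinc coefficients are computed directly with the math library at the moment they are needed, with no precomputed table. The function name and half-width are illustrative choices:

```python
import math

def windowed_sinc_interp(x, t, half_width=8):
    """One-shot bandlimited interpolation of sequence x at real-valued
    time t: each tap's von Hann windowed-sinc weight is computed on the
    fly rather than looked up in a table."""
    n0 = math.floor(t)
    acc = 0.0
    for n in range(n0 - half_width + 1, n0 + half_width + 1):
        if 0 <= n < len(x):
            u = t - n                      # distance from tap to instant t
            if u == 0.0:
                acc += x[n]                # sinc(0) * w(0) = 1
            else:
                w = 0.5 + 0.5 * math.cos(math.pi * u / half_width)  # von Hann
                acc += x[n] * math.sin(math.pi * u) / (math.pi * u) * w
    return acc
```

For a single interpolated value this is 2*half_width calls into the math library, which for a small point set can indeed be cheaper than filling and then interpolating a large table.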

Reply by ●October 11, 2006

On Oct 9, 7:57 am, "Mark" <makol...@yahoo.com> wrote:
> I have always been confused about the relationship between this kind of
> interpolation (linear, cubic, spline etc) and the kind used in DSP
> audio work for up-sampling. In audio up-sampling we just insert zero
> value samples and then pass the result through a low pass filter to
> remove the image frequencies.

The DSP kind of interpolation usually assumes that the data is from a bandlimited signal (or "close enough"). Other polynomial types of interpolation usually have an error bounded by the existence, continuity, or peak magnitude of some number of derivatives of the function. How are these two different types of constraints related?

-- rhn A.T nicholson d.0.t C-o-M
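The audio-style method Mark describes (insert zeros, then low-pass filter away the images) can be sketched as follows; the windowed-sinc design, tap count, and test signal are arbitrary choices for the example:

```python
import numpy as np

def upsample_zero_stuff(x, L):
    """Insert L-1 zeros between samples (this creates spectral images)."""
    y = np.zeros(len(x) * L)
    y[::L] = x
    return y

def lowpass_fir(num_taps, cutoff, gain=1.0):
    """Hann-windowed-sinc lowpass; cutoff in cycles/sample at the
    upsampled rate. Normalized for an exact DC gain of `gain`."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * cutoff * n) * np.hanning(num_taps)
    return h * (gain / h.sum())

L = 4
n = np.arange(64)
x = np.sin(2 * np.pi * 0.03 * n)            # slow sine, well below Nyquist
# cutoff at the original Nyquist (0.5/L of the new rate), gain L to
# restore the amplitude lost to the inserted zeros
h = lowpass_fir(63, 0.5 / L, gain=L)
y = np.convolve(upsample_zero_stuff(x, L), h)   # group delay: 31 samples
```

Away from the edges, y (delayed by 31 samples) tracks the underlying sine at the 4x rate, which is exactly the bandlimited-interpolation view of upsampling.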

Reply by ●October 11, 2006

Ron N. wrote:
> Note that this is confusing the method of sample interpolation
> with the functional approximation of a filter kernel. You
> can also directly compute or approximate the coefficients for
> any fractional delay expressible in the machine arithmetic, at
> some computational cost for a given accuracy. Whether linear
> interpolation from a table lookup is the best method for the
> functional approximation is a completely separate subject.
> For a "one shot" interpolation within a small set of points,
> and using a simple interpolation kernel (von Hann windowed sinc,
> for instance), it can be faster to compute a few individual
> interpolation coefficients directly using a floating point
> math library rather than pre-computing some big table, only
> to never use any table entries without interpolation.

strictly speaking, i agree with you, Ron, because you said "for a given accuracy". but i would say that directly computing the coefficients for an optimally designed (say, using Parks-McClellan) LPF for bandlimited interpolation in real time on a DSP is not realistic. practically speaking, if you want your filter kernel to be what comes out of P-McC (often called "remez" by the MATLABers) or a similar optimizing program, then the method has to be some kind of table lookup, and if you need arbitrary fractional delays that do not correspond exactly to any table entry, you gotta interpolate using some function that has infinite resolution.

r b-j

Reply by ●October 12, 2006

For a high-performance interpolator, take a look at the thesis of Jussi Vesma. He has also published numerous papers in IEEE venues about polynomial-based interpolation. If someone tells you that you cannot get good performance with a polynomial interpolator, send them this link: www.cs.tut.fi/kurssit/83080/Pol_Interpol2004.pdf (see fig 25)

LM

"renaudin" <alsaeed86@gmail.com> wrote in message news:g-6dnVM345-viLfYnZ2dnUVZ_vidnZ2d@giganews.com...
> Hi all,
>
> Interpolation of a sampled signal x(n) to generate an up-sampled signal
> y(n) can be represented mathematically as:
>
> y(n) = x(n/L)  /* here L is the interpolation factor */
>
> Regardless of the type of interpolation, if we increase the value of L,
> will it increase the accuracy of the interpolation process?
>
> What about fractional interpolation/decimation factors? How does one
> deal with them?
>
> Thanks in advance for the discussion and comments.
>
> Renaudin

Reply by ●October 12, 2006

robert bristow-johnson wrote:
> Viognier wrote:
> > When resampling a sequence to an output sample rate equal to a rational
> > number times the input sample rate (FSout = (P/Q) * FSin), polyphase
> > filtering is the method of choice. But when the output sample rate is
> > an irrational multiple of the input rate (FSout = r * FSin), one cannot
> > use polyphase filtering. That's when spline methods may be helpful.
>
> i kinda agree with Eric that polyphase interpolation (essentially
> designing an optimal LPF and keeping the impulse response in a table)
> is preferable to spline or polynomial interpolation for any bandlimited
> reconstruction, but where i might agree with Viognier is because of the
> cost. you can't have a polyphase table that has all infinite possible
> fractional delays. but you *can* evaluate a polynomial for any
> fractional delay expressible in the machine arithmetic. but if you can
> afford some memory for the polyphase table, you can use a polynomial
> interpolation between entries of that table.
>
> r b-j

I'm just starting to look at this interpolation problem, but I'm confused by the terminology that you're using in the thread. When you refer to a low-pass polyphase interpolation filter, what are you actually talking about? The ideal brickwall bandlimiting filter, aka the sinc function in the time domain, truncated to the length of your FIR (windowed if you prefer)? I know that you can use any type of low-pass filter to perform the interpolation, but what parameters influence the design?

In addition, I have one comment regarding using polynomials or splines for the interpolation: it seems to me that the FIR polyphase method requires that a new set of filter coefficients be generated for each change in, say, the fractional delay, hence the lookup table that Eric talked about. However, when using polynomial interpolation via a Farrow-type FIR structure, only one parameter is used to change the interpolation and approximate the fractional delay. This is what LM posted a link to further down in this thread and what I had been looking at implementing. The latter method seems to me to be more flexible and efficient because there isn't any swapping in and out of filter coefficients. Perhaps I'm missing something important here?

col
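For what it's worth, the Farrow idea can be sketched minimally for the cubic Lagrange case: the branch FIRs are fixed, and the fractional delay mu enters only through a single Horner evaluation, so no coefficient sets are swapped in or out. The coefficient matrix below is the standard expansion of the four cubic Lagrange basis polynomials in powers of mu; the function name is invented for the example.

```python
# Farrow branch filters for 4-tap (cubic Lagrange) interpolation.
# Row r holds the tap weights for the mu^r term, acting on the four
# samples x[n-1], x[n], x[n+1], x[n+2].
C = [
    [ 0.0,      1.0,  0.0,      0.0    ],   # mu^0
    [-1.0 / 3, -0.5,  1.0,     -1.0 / 6],   # mu^1
    [ 0.5,     -1.0,  0.5,      0.0    ],   # mu^2
    [-1.0 / 6,  0.5, -0.5,      1.0 / 6],   # mu^3
]

def farrow_interp(x, n, mu):
    """Interpolate between x[n] and x[n+1] at fractional position
    0 <= mu < 1 using the four samples x[n-1..n+2]: four fixed FIRs,
    then one Horner evaluation in mu."""
    taps = x[n - 1:n + 3]
    v = [sum(c * t for c, t in zip(row, taps)) for row in C]
    return ((v[3] * mu + v[2]) * mu + v[1]) * mu + v[0]
```

Unwound, this produces exactly the cubic Lagrange coefficients for the requested mu; the trade against a polyphase table is four short FIRs and three multiply-adds per output instead of a table fetch (plus any between-row interpolation).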

Reply by ●October 12, 2006

On Oct 12, 6:08 am, c...@hotmail.com wrote:
> robert bristow-johnson wrote:
> > Viognier wrote:
> > > When resampling a sequence to an output sample rate equal to a rational
> > > number times the input sample rate (FSout = (P/Q) * FSin), polyphase
> > > filtering is the method of choice. But when the output sample rate is
> > > an irrational multiple of the input rate (FSout = r * FSin), one cannot
> > > use polyphase filtering. That's when spline methods may be helpful.
> >
> > i kinda agree with Eric that polyphase interpolation (essentially
> > designing an optimal LPF and keeping the impulse response in a table)
> > is preferable to spline or polynomial interpolation for any bandlimited
> > reconstruction, but where i might agree with Viognier is because of the
> > cost. you can't have a polyphase table that has all infinite possible
> > fractional delays. but you *can* evaluate a polynomial for any
> > fractional delay expressible in the machine arithmetic. but if you can
> > afford some memory for the polyphase table, you can use a polynomial
> > interpolation between entries of that table.
> >
> > r b-j
>
> I'm just starting to look at this interpolation problem, but I'm
> confused by the terminology that you're using in the thread. When you
> refer to a low-pass polyphase interpolation filter, what are you
> actually talking about? The ideal brickwall bandlimiting filter, aka
> the sinc function in the time domain, truncated to the length of your
> FIR (windowed if you prefer)? I know that you can use any type of
> low-pass filter to perform the interpolation, but what parameters
> influence the design?
>
> In addition, I have one comment regarding using polynomials or splines
> for the interpolation: it seems to me that the FIR polyphase method
> requires that a new set of filter coefficients be generated for each
> change in, say, the fractional delay, hence the lookup table that
> Eric talked about. However, when using polynomial interpolation via a
> Farrow-type FIR structure, only one parameter is used to change the
> interpolation and approximate the fractional delay. This is what LM
> posted a link to further down in this thread and what I had been
> looking at implementing. The latter method seems to me to be more
> flexible and efficient because there isn't any swapping in and out of
> filter coefficients.

Aren't the Farrow structure and the table lookup, after the algorithms are unwound, essentially just two different ways of generating each set of filter coefficients? It comes down to a trade-off between efficiency and quality of two different methods of function estimation of a filter kernel for a given delay: table lookup, polynomial approximation, a combination of the two, or some other method of functional approximation (e.g., calculate all the trig functions using CORDIC inside the system math libraries, etc.). IMHO. YMMV.

-- rhn A.T nicholson d.0.t C-o-M

Reply by ●October 12, 2006

On 11 Oct 2006 14:46:25 -0700, "robert bristow-johnson" <rbj@audioimagination.com> wrote:
> Viognier wrote:
> > When resampling a sequence to an output sample rate equal to a rational
> > number times the input sample rate (FSout = (P/Q) * FSin), polyphase
> > filtering is the method of choice. But when the output sample rate is
> > an irrational multiple of the input rate (FSout = r * FSin), one cannot
> > use polyphase filtering. That's when spline methods may be helpful.
>
> i kinda agree with Eric that polyphase interpolation (essentially
> designing an optimal LPF and keeping the impulse response in a table)
> is preferable to spline or polynomial interpolation for any bandlimited
> reconstruction, but where i might agree with Viognier is because of the
> cost. you can't have a polyphase table that has all infinite possible
> fractional delays. but you *can* evaluate a polynomial for any
> fractional delay expressible in the machine arithmetic. but if you can
> afford some memory for the polyphase table, you can use a polynomial
> interpolation between entries of that table.
>
> r b-j

As I mentioned elsewhere, in a practical system with fixed precision there is a number of phases beyond which the coefficients won't change as you try to increase the resolution. I looked at this years ago when designing a polyphase filter for a comm receiver, trying to answer the question of how many phases we really needed in order not to lose any performance.

In other words, for an example system that uses 8-bit precision for the coefficients, there will be some minimum phase change beyond which a change in phase produces coefficient changes that are below the LSB of the current coefficients. Surprisingly, I found that for our case this meant that with a very practical number of phases we had essentially infinite phase precision; or at least, within the precision we were using, one would never tell the difference.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
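Eric's observation is easy to reproduce numerically: quantize the coefficients of some interpolation kernel (here an 8-tap Hann-windowed sinc at 8-bit precision, an arbitrary stand-in for his receiver's design) on finer and finer phase grids and count how many quantized coefficient sets are actually distinct. The count saturates well below the grid size; beyond that, extra phases buy nothing. Helper names and the kernel are invented for the sketch.

```python
import math

def quantized_taps(frac, taps=8, bits=8):
    """Hann-windowed-sinc taps for fractional delay frac, rounded to
    signed fixed point with the given number of bits."""
    half = taps // 2
    scale = (1 << (bits - 1)) - 1          # e.g. 127 for 8-bit signed
    out = []
    for n in range(-half + 1, half + 1):
        u = n - frac
        s = 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
        w = 0.5 + 0.5 * math.cos(math.pi * u / half)
        out.append(round(s * w * scale))
    return tuple(out)

def distinct_phase_count(num_phases, bits=8):
    """How many of num_phases quantized coefficient sets are distinct."""
    return len({quantized_taps(p / num_phases, bits=bits)
                for p in range(num_phases)})
```

Once the grid is fine enough that adjacent phases round to identical coefficient sets, the table has effectively infinite phase precision at that coefficient word length.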