**It Was Necessary To Destroy The Precision In Order To Save It**

I read an article yesterday about writing early spaceflight software, where the processors were asthmatic and had no floating-point hardware. It seems they spent 30% of their time just managing precision, which is still much better than my ratio!

For the last couple of days I've been trying to implement a toy NCO (numerically controlled oscillator) that uses a fractional delay to align the sawtooth edge. The edge happens at accumulator rollover, and the value left in the accumulator at rollover, divided by the phase increment (i.e. normalized), gives the fractional delay. So we need the reciprocal of the phase increment, which calls for the dreaded integer division, where precision basically goes to die.
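A minimal sketch of that fractional-delay idea, assuming a 32-bit accumulator (Python here for clarity; the names are mine, not from the actual code):

```python
ACC_BITS = 32
ACC_MASK = (1 << ACC_BITS) - 1

def frac_delay_at_rollover(acc, inc):
    """Advance a 32-bit phase accumulator by one step.  If it rolls
    over, return (new_acc, frac), where frac is the fraction of a
    sample period since the edge; otherwise return (new_acc, None)."""
    new = (acc + inc) & ACC_MASK
    if new >= acc:
        return new, None            # no rollover this sample
    # The residue left after the wrap, normalized by the phase
    # increment (i.e. multiplied by its reciprocal), is the
    # fractional delay, in [0, 1).
    return new, new / inc
```

The float division here is exactly the step the post replaces with a precomputed integer reciprocal.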

Premature optimization, but I pared the Newton's method integer quotient-and-remainder subroutine down to give just the reciprocal, which is 22 cycles max. The precision issue raises its head when you feed it large integers, which give very small fractional results. Give it 32 bits and you get 0 bits; give it 0 bits and you get 32 bits. The happy medium seems to be 16 bits, but it really depends on the range of the input data.
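For reference, a Newton's-method reciprocal looks like this (a Python stand-in; the 22-cycle assembly routine isn't shown in the post, so this is only the general shape, with my own iteration count and initial guess):

```python
K = 32  # bits of reciprocal precision

def recip_newton(d, iters=6):
    """Approximate (1 << K) // d with the Newton iteration
    x <- (x * (2^(K+1) - d*x)) >> K.  Sketch only: a real routine
    would run a fixed iteration count in integer registers."""
    assert 0 < d < (1 << K)
    x = 1 << (K - d.bit_length())   # initial guess, relative error < 1/2
    for _ in range(iters):
        x = (x * ((2 << K) - d * x)) >> K
    return x
```

Feeding it the 32 Hz phase increment computed below (2863311) lands within a couple of LSBs of 2^32 / 2863311 ≈ 1500, i.e. only about 10.5 significant bits survive.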

Given a 32-bit accumulator, to generate 32 Hz at a 48 kHz sampling rate we need a phase increment of (32 / 48k) * 2^32 = 2863311, which is 21.5 bits of info; taking the reciprocal of this gives 10.5 bits of info, and we have to take the worse of the two (10.5 bits) for the precision (garbage in, garbage out). To generate 8 kHz (8192 Hz, strictly) the phase increment is (8192 / 48k) * 2^32 = 733007751, which is 29.5 bits of info, which means the reciprocal only has 2.5 bits of info! Shifting the phase increment right 10 bits obviously throws 10 bits of input info away, but it increases the minimum precision of the reciprocal. Over the 32 Hz to 8 kHz range this shift gives a precision of 15.5 bits over the middle of the range and 12 bits at the extremes, which should be sufficient for this application.
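The arithmetic above can be checked directly; truncating integer division reproduces the increments quoted in the post, and the bit counts fall out of the logs:

```python
import math

FS = 48_000
ACC_BITS = 32

def phase_inc(freq_hz):
    """Phase increment for a 32-bit accumulator, truncated toward zero."""
    return (freq_hz << ACC_BITS) // FS

def recip_bits(inc, shift=0):
    """Worst-case significant bits of 1/x: capped by both the input
    info remaining after the shift and the reciprocal's own magnitude."""
    in_bits = math.log2(inc >> shift)
    return min(in_bits, ACC_BITS - in_bits)

inc_lo = phase_inc(32)      # 2863311   (~21.5 bits of info)
inc_hi = phase_inc(8192)    # 733007751 (~29.5 bits of info)
# Unshifted: ~10.5 bits at 32 Hz, but only ~2.5 bits at the top.
# Shifted right 10: roughly 12 bits at both extremes.
```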

**[EDIT]** So I used the above to reduce aliasing and it does work. I can get a clean sounding sawtooth up to ~1.4kHz. Need to try it with 8x oversampling. One nice thing about that is the reciprocal is a constant over the oversampled period. Not sure where this is going as I really like the phase modulated sine wave approach, and I don't think this method of alias reduction adapts well to that. I'd like a generic process that is continuous, just feed it anything and have it kill aliasing without looking for edges, but I'm not aware of any process that can do that.

**[EDIT2]** Here's the sawtooth NCO:

The frequency (phase increment) comes in and gets scaled to C9 max. The upper path shifts it right 10 places to trade input bits for 1/x precision, then 1/x is called (unsigned). The middle/lower path accumulates the phase increment, producing old and new values, which are compared (signed) to detect the sawtooth edge. If an edge is found, 1/2 (2^31) is added to the new value to make it unsigned, whereupon it is shifted right 10 places to match 1/x; the two are then multiplied together (regular, not extended, multiplication, which is sign-agnostic). The resulting unsigned value is used to crossfade between the old and new NCO values, and the result is the output sawtooth waveform (signed). When there isn't an edge, the old and new values get averaged together, which gives us a filter zero at Nyquist.
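Here is my reading of that pipeline as a Python sketch. The structure (shift by 10, signed edge compare, +2^31, multiply by 1/x, crossfade vs. average) follows the description; the exact crossfade law is my own guess, since the post only says the unsigned value crossfades old against new:

```python
MASK = (1 << 32) - 1
SHIFT = 10

def to_signed(x):
    """Reinterpret a 32-bit unsigned value as signed."""
    return x - (1 << 32) if x & (1 << 31) else x

def saw_nco(inc, n):
    """n signed sawtooth samples from a 32-bit phase accumulator,
    crossfading across the edge roughly as described above."""
    recip = (1 << 32) // (inc >> SHIFT)   # upper path: shifted 1/x (unsigned)
    out, acc = [], 0
    old_s = to_signed(acc)
    for _ in range(n):
        new = (acc + inc) & MASK
        new_s = to_signed(new)
        if new_s < old_s:                 # signed compare: edge this sample
            # +2^31 makes the new value unsigned = distance past the edge;
            # >>10 matches the 1/x scaling, and the (sign-agnostic)
            # multiply yields a Q32 crossfade fraction
            xf = ((new_s + (1 << 31)) >> SHIFT) * recip
            sample = old_s + (((new_s - old_s) * xf) >> 32)
        else:
            sample = (old_s + new_s) >> 1  # average: filter zero at Nyquist
        out.append(sample)
        acc, old_s = new, new_s
    return out
```

With `frac = 1/2` the edge branch degenerates to the same average as the no-edge branch, so the output stays continuous across the two cases.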

The NCO accumulation value can be seen as signed or unsigned, but you have to pick one and be consistent or it won't work (ask me how I know). As with PLLs, I get easily confused when it comes to "error" vs. "correction" signals.

Lately I've been coding up NCO variations, commenting all but one out, and recording the audio of each variation into one audio file, then comparing the sound, waveforms, and spectra in Audition. The arrangement is working out well; otherwise it's hard to keep it all straight.