*"Did you try to play at 80cm on Etherwave?" - Buggins*

No, though one with the ESPE01 or YAEWSBM might be somewhat playable there; even then, the "playing" would most likely be limited to effects and such.

*"So, I completely disagree with part 1 of your calculations."*

I see what you are saying, and I don't entirely disagree with you disagreeing with me!

What's at the heart of this is how the "noise" is characterized. For theoretical calculations it is usually treated as zero-mean white noise, but here the "noise" is clearly correlated between samples. Whatever error a single period measurement has is made up for in the next, so after many consecutive period measurements the total error, or uncertainty, or noise is still just one sample clock period.

Let me use the previous analogy because it better reflects the final implementation (a Theremin axis); I'm also simplifying the numbers a bit:

- LC resonance is 1MHz, we sample this at 100MHz and then sub-sample that at 1kHz.

- Sampling error is correlated, so the 1kHz sub-sample will only carry one 100MHz clock period of "noise" or error.

- 100MHz / 1kHz = 100,000 = ~17 bits.
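The bit count in the list above can be sketched in a few lines of Python (using the simplified frequencies from the list, not any actual implementation values):

```python
import math

f_clk = 100e6    # 100MHz sample clock
f_sub = 1e3      # 1kHz sub-sample rate

# Because the period error is correlated (each output word carries only
# ~1 clock of uncertainty), the resolution is the full count ratio:
counts = f_clk / f_sub        # 100,000 counts per output word
bits = math.log2(counts)      # ~16.6, i.e. the "~17 bits"
```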

So I was wrong by a wide margin here because I was treating the noise as uncorrelated.

However, if there are only 4 bits of "changing" information out of a total of 8, then I believe we would have to reduce this to 17 - 4 = ~~11~~ 13 bits of "real" positional information. I mean, the 17 bits above have a dynamic range limited to only 4 of the 8 counter bits. Does that make sense?
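Spelling out that deduction numerically (taking the 4-of-8 changing-bits figure as given):

```python
import math

total_bits = math.log2(100e6 / 1e3)       # ~16.6, the "~17 bits" above
static_bits = 8 - 4                       # 4 of the 8 counter bits never change
effective_bits = total_bits - static_bits # ~12.6, i.e. the "13 bits" figure
```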

Because the noise is correlated, I don't believe further inspection and comparison of the data with different shifts will yield much additional benefit (i.e. a higher-order CIC or similar). Some perhaps, but not as much, because there isn't much uncorrelated noise to filter away. I mean, if you shift the circular buffer by one, pulling in one new sample and throwing away one old sample, the average isn't going to change much if you average it with the previous average. You'll get a smoother output, but I don't think it will net you as much of a resolution increase as the uncorrelated case can.
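A quick way to see the telescoping behavior is to simulate quantized edge timestamps (a toy sketch with made-up numbers, not the real hardware): each individual period measurement is off by up to one clock, but the errors cancel, so the sum over many periods is still off by less than one clock.

```python
import random

f_clk, f_lc = 100e6, 1e6 + 3.7   # clock and slightly detuned LC frequency
period = f_clk / f_lc            # true period in clock ticks (non-integer)
phase0 = random.random()         # random starting phase
n = 1000

# Timestamp each zero crossing with the 100MHz clock (int() quantizes):
edges = [int(phase0 + k * period) for k in range(n + 1)]

# Each individual period measurement errs by up to 1 clock...
single_errs = [abs((edges[k+1] - edges[k]) - period) for k in range(n)]
# ...but the telescoping sum over n periods still errs by < 1 clock:
total_err = abs((edges[n] - edges[0]) - n * period)
```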

Actually, by shifting and overlapping the measurements by 1mm, as in your second example, I believe you are introducing uncorrelated noise into each, and I'm not sure whether that would actually help or hurt the final average. There are real scenarios where more measurements actually hurt the SNR.

Of course you can take the output and filter it again down to 100Hz and increase the resolution by ~1.7 bits. Or you could go straight from 100MHz to 100Hz and gain ~3.3 bits due to the correlation. But then you might hear zippering if you don't do further filtering.
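Working out the two decimation gains explicitly (same simplified numbers as above):

```python
import math

# Uncorrelated case: filtering the 1kHz output down to 100Hz averages 10
# samples, improving SNR by sqrt(10), i.e. roughly 1.7 bits:
uncorr_gain = math.log2(math.sqrt(10))   # ~1.66 bits

# Correlated case: counting straight from 100MHz to 100Hz makes the count
# ratio 10x larger, which is a full log2(10) of extra resolution:
corr_gain = math.log2(10)                # ~3.32 bits
```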

**[EDIT]** Vadim, I hope you don't think I'm arguing with you, or disagreeing just to be disagreeable; you are really helping me straighten out my own thoughts regarding sampling. I appreciate it when you point out the flaws in my reasoning. As Feynman said, the objective is to prove yourself wrong as fast as possible! I'm also not playing "devil's advocate" here; I want to understand this as well as you do. I don't get a lot of opportunity to bounce this stuff off of people who are also in the process of thinking it through.

**[EDIT2]** The real trick would be to somehow keep the noise correlated through the entire process, but I think uncorrelated noise is ultimately introduced whenever the value is actually measured to be used. I mean, every time you discard samples from the average you are starting it at a new location, which introduces new uncertainty or noise. But if the sample discard is total after a single use, then the new start point is the old end point, and that would seem to minimize uncertainty. I suppose there is a reason integrate-and-dump is called a "matched filter".
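As a toy illustration of that last point (a hypothetical helper, not from any real design): in integrate-and-dump, the accumulator is dumped and restarted on the same step, so the new window's start point is exactly the old window's end point and no samples are skipped or reused between outputs.

```python
def integrate_and_dump(samples, n):
    """Accumulate n samples, emit the total, restart from zero.
    The new window begins exactly where the old one ended, so no
    fresh boundary uncertainty is introduced between outputs."""
    acc, out = 0, []
    for i, s in enumerate(samples, 1):
        acc += s
        if i % n == 0:
            out.append(acc)   # dump the total
            acc = 0           # new start point == old end point
    return out

print(integrate_and_dump(list(range(10)), 5))  # -> [10, 35]
```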