Let's Design and Build a (mostly) Digital Theremin!

Posted: 3/2/2016 2:40:22 PM
oldtemecula

From: 60 Miles North of San Diego, CA

Joined: 10/1/2014

Dewster said: "Tradition is something you often end up fighting and conceding to with guitars, whereas there is very little of that with Theremins."

And there is your downfall, as I believe the beauty of the theremin is completely in the on-stage classical tradition or presentation. Imagine Clara Rockmore playing metal plates. I think plates are a good approach, but using the word theremin once again misleads. Ask Paul Tanner where it got him; no wait, he is feeding the worms.

Christopher

Posted: 3/2/2016 3:07:43 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

"And there is your downfall as I believe the beauty of the theremin is completely in the on-stage classical tradition or presentation. Imagine Clara Rockmore playing metal plates."  - oldtemecula

Opinions differ, and audiences could certainly be more fickle than I'm imagining.  But the physical appearance of the modern Theremin is already all over the place, from the lectern type obscuring the player (where the player is "playing the top of the coil" as much as they are playing the exposed antenna), to the slim horizontal bar Subscope, to the baguette-from-space Theremini, etc.  It's hard for me to imagine that throwing a plate into the mix will instantly kill the appeal of the performance, but I could certainly be mistaken.

If things work out, players will come for the precise linearity of the pitch field and stay for the precise linearity and broad useful adjustments to the pitch field (and hopefully stay for the tuner and multi-axis volume hand as well).

============

Been pissing w/ Hive again, this time adding mixed signed and unsigned add, subtract, and multiply opcodes (trivial to do, though no current use for them), as well as some cosmetic reordering of some ops.  Every time I do this the code editing and testing set me back at least a week.  Also refining the inverse algorithm as I imagine it will get used a lot.  I don't believe Newton's method can be beat with a polynomial, and I'm seeing why the negative exponent range in the floating point standards is kept smaller than the positive (so that underflow can't happen when taking the inverse).

Posted: 3/5/2016 4:46:44 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Language / Console / Graphics Suggestions?

Trigger warning: software rant ahead.

My Hive simulator / code development tool is written in C++ and is compiled to be a Win32 console app.  It runs absolutely fine on my XP32 machine, but I've noticed that it runs quite sluggishly on the two Win7 machines I have access to (an old laptop with ALL the Aero and similar stuff turned off, and a newer AMD A10 with the default settings).  The Win7 console must have layers of scripting or similar sleaze going on in the background.  I downloaded a few console replacements for XP32 but they won't size to 132 x 66 and/or are even more sluggish.  This makes me think I should be targeting a Tk text window or similar, rather than the console, but graphical stuff has been deflecting me since the beginning of time it seems.  Not sure why but 99% of my dips into the software world are completely negative experiences, including languages and programming environments.  Dev-C++ is one notable exception, though the latest version is fairly buggy.

The above led me yet again to web searches for GTK and the like.  GTK seems to have dropped support for XP32, which is my fault I suppose. 

This led me yet again to web searches for alternate programming languages that better support GUI widgets and general programming.  Anything having anything to do with Java is out as the whole virtual machine thing is crazy, particularly the way Java implements it.  Go looks interesting; I particularly like the explicit assignment operator ":=" and the focus on leaving things out, but C pointers and the modern tilt towards iterators are kind of a turn off.  D has nice package support, something profoundly lacking in C (the whole C preprocessor directive thing ("#define") is super clunky and bug-prone).  I've looked at Python enough to know that I'm not fanboy material.  There is no dearth of testimonials on the web where individuals proclaim their search for the perfect programming language finally over now that they've found "x" (fill in the blank) - I dearly wish I could say the same (I find myself adding "-java -python" to most of my web searches these days to eliminate some of the chaff).

It's a total shame the Win console doesn't support ANSI escape characters for formatting and screen control, and that the Win7 console is a snail.  So much can be done in the console, neatly sidestepping the bottomless pit of full graphics support.  I know I say this a lot but the software world is pretty much utter chaos, and most SW people seem to oddly not notice this or are oddly OK with it.  Their world doesn't seem to be strongly anchored to anything in reality.  (My theory is: overly complex processors => overly complex compilers => "anything goes" languages => programmers blind to the hardware.  The problem starts at the bottom with the HW and snowballs from there.  And the puzzle solving types drawn to SW seem to positively revel in pointless complexity.)

Anyway, has anyone out there found a simple, direct method to crank out simple apps (text windows that experience a lot of updating) that aren't sluggish on the various platforms?  I'm thinking mainly XP32, Win7-64, Linux-64.  I'm loath to recode my C/C++ Hive source, but would consider it if the solution were long term and generally applicable to new projects.

Posted: 3/8/2016 5:54:09 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Division / Reciprocal

Found a couple of fascinating papers:

http://www.informatik.uni-trier.de/Reports/TR-08-2004/rnc6_12_markstein.pdf

http://research.microsoft.com/pubs/70645/tr-2008-141.pdf

Goldschmidt division is kinda spooky - it converges quadratically like Newton-Raphson, but does so by repeatedly squaring a fixed error term, and it's almost trivial for finding the reciprocal.  The second paper above details software integer division using iterative methods with practical notation and algorithms.  In hardware I can see why they pick table approximation for the initial guess followed by a couple of rounds of Goldschmidt or Newton.  I was thinking a polynomial might be used to provide a good guess, but the convergence is so fast it's hard to beat even with a fairly crude initial guess.

[EDIT] My main takeaway from the first paper (other than describing Goldschmidt pretty well) is the opportunity for hardware concurrency that Goldschmidt provides, which is most likely why it's used in processor hardware dividers.  But, because it doesn't use feedback directly, small truncation / rounding errors grow as it converges, making a pass or two through Newton (which does use feedback) at the end mandatory.

The second paper really drives home the practical use of tables for a certain level of starting precision as the initial guess (how we got the Pentium bug).  A sweet spot for the table output is ~9 bits because each iteration doubles the precision.

Posted: 3/13/2016 6:25:02 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Thanks to the above papers I've got unsigned integer division & modulo down to 30 cycles worst case, which is a mere 15% of the 201 cycles the bit by bit routine required.  And unsigned float reciprocal is down to 31 cycles worst case via unrolling the loop and employing the new immediate multiply opcode.  These two operations get used a lot in general computing, so extra time spent making them fast means more real time to do lots of higher level stuff.  It's tedious work, but these are the building blocks, so they can't be made of sand.

===============

Ran across a quote today by Orson Welles: "The enemy of art is the absence of limitations."  I think this is quite applicable to digital musical instruments in general.  You can build almost anything, so the sky's the limit: no constraints to help define what it is that you're doing or trying to do.  All that freedom can be paralyzing to the designer, and cause player / consumer expectations to be all over the place.

Posted: 3/17/2016 6:25:35 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Float Denorms

Reading about the history of IEEE 754, it seems the biggest bone of contention in the standards committee was over denorms, or numbers smaller than 2^-126.  Normalized floats have a significand between 1.000... and 1.999... but denorms hit the minimum exponent value and are allowed to violate this rule by having significands less than 1.  This fills the "hole" around zero with values that degrade in precision the closer they approach zero, which is generally considered a "good thing" by mathematicians.  It effectively extends the low end dynamic range by 2^23, which was probably felt necessary due to the limiting pressures of packing floats into a 32 bit space.

Examining source code for Freeverb I encountered some cryptic lines that I had to go to the web to decipher.  Turns out they exist to keep denorm values from creeping in.  Why?  To the web again, where I find everyone doing DSP does this, particularly with IIR filters and reverbs and other things where signals recursively decay to zero.  Why?  Because denorms are usually implemented in software via a hardware trap mechanism, where they can take ~100x longer to execute and crash your real-time program!  Idiot me thought this was all handled in hardware from day one.  What a mess.

Here is one example (from http://rcl-rs-vvg.blogspot.com/2012/02/denormal-floats-across-architectures.html?view=classic):

Seems most processors with floating point support barely even bother with denorms and just flush little numbers to zero, which is the reason for the deceptively good-looking 1x speed of Cell and ARM in the chart.

Anything having anything to do with processors is just ludicrously complex and arcane.

[EDIT] Like I noted above, denorms expand the dynamic range at the expense of precision within that expanded range.  If the dynamic range is really too small then take another bit from the significand and give it to the exponent.  I'm all for standards, but I think denorms are a mistake, particularly the way they are (or aren't as the case may be) implemented.

[EDIT2] My bad, it seems some of these processors actually do have hardware support for denorms, but hardware that slows down to a crawl when dealing with them.  Statistically denorms should be rather rare, but there are common DSP scenarios which generate piles of them.

Posted: 3/31/2016 1:19:47 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Zero = Denorm!

I've been working on floating point algorithms and have a polynomial version of square root pretty much finished.  I finally figured out how to simply adjust poly coefficients by hand when the poly degree is high without the whole thing going crazy: just apply the opposite adjustment to the next higher term as well - this very much localizes the adjustment.  I've probably spent weeks adjusting poly terms, wish I'd stumbled on this a lot earlier.

Anyway, I decided to go back and do the simple things like float multiplication, addition, subtraction, etc. and it turns out they aren't all that straightforward to do and require gobs of cycles.  Signed significands introduce complexity, but the surprising thing is zero, which, if you think about it, is a denorm!  So even if you plan to flush denorms to zero, you still have to give zero itself special treatment everywhere.  I probably should have started with float multiplication and worked my way up to the heavier algorithms, but what do you do?

(The reason zero is a float denorm is because the exponent and decimal aligned significand make floats semi-logarithmic.  And while logarithmic systems can get arbitrarily close to zero, they can never reach it.  LOG[0] = negative infinity.)

To deal with floats I added a OP_SGN opcode to Hive, which returns -1 for negative numbers and +1 for non-negative.  By doing say:

s3 := SGN[s0]
P0 *= P3

you get the absolute value of the number in s0.  And the sign itself can be kept for later use if desired.

It's kind of ironic that the same hardware guys who pushed the IEEE float committee hard for denorms then went on to implement them in super inefficient ways.

"...premature optimization is the root of all evil."  - Donald Knuth

Encountered the above quote and, while I agree with the spirit of it, often during a local optimization process you gain deeper insights that are difficult to pick up again later.  So you might as well put a certain level of effort into what you are doing before moving on, particularly if it is thorny and full of nuance.  I don't dare drop this algorithm stuff until I'm substantially past it as it's just too detailed to quickly and easily relearn and get back to the same spot.  When I was gainfully employed I remember one coder who spent most of the day coming up to speed on what he did the day before, and only then would he add to or change things, often in the late evening.  Heads can only hold so much.

Posted: 3/31/2016 1:40:33 PM
oldtemecula

From: 60 Miles North of San Diego, CA

Joined: 10/1/2014

dewster stated: "...premature optimization is the root of all evil."  - Donald Knuth

Yeah, my wife used to tell me that, gosh...  I miss those days.

dew, that handful of design points you shared I am taking seriously; they all make good sense. The Opto does something special so it must stay.

Christopher

Posted: 3/31/2016 2:16:35 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Ya, "premature optimization" can make you unpopular with the coder ladies! ;-)  Kind of like "cone droop" in speaker design circles.

 

Posted: 4/17/2016 7:13:31 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Algorithms = Mental Quicksand

If you're old like me, you probably remember back when TV went through a period where it seemed literally every actor on the tube was getting stuck in quicksand.  It could happen anywhere, even in the desert (!): they're minding their own business and *bam* they're mired and fighting for dear life.  The actor tasked with saving them invariably admonished them "don't try to get out, you'll only go under faster!"  Kinda screwed no matter what they did, and struggling only made things worse.

Feel a bit like that lately with algorithms.  Today I was innocently going to extend unsigned integer division / modulo to signed and immediately ran into the question of what to do with the maximum negative value divided by -1.  With no special checks it will return the maximum negative number, which seems dangerous.  But then again, C allows integers to roll over and under with addition and multiplication, so why should integer division be any different?  So I coded it up in C with int32_t's and it crashes!  No help there.  The asymmetry of 2's complement with respect to positive & negative numbers is simultaneously a natural wonder of the world and a rather harsh mistress, particularly at modulo limits.
