Let's Design and Build a (mostly) Digital Theremin!

Posted: 4/18/2016 10:17:18 PM
oldtemecula

From: 60 Miles North of San Diego, CA

Joined: 10/1/2014

Let's stop the madness over here too.

Posted: 4/23/2016 7:40:49 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Floating Point Addition

Floating point addition is fairly nuanced.  Because addition is essentially a little endian operation (carries propagate from the least significant end), the binary points of the inputs need to be aligned before the addition can take place, which in the float world is accomplished by right shifting the significand of the input with the smaller exponent a distance equal to the difference of the exponents. 
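
For example, adding 1.01 x 2^3 and 1.10 x 2^1: the second significand is right shifted by the exponent difference of 2 to give 0.0110 x 2^3, and the aligned significands 1.0100 and 0.0110 sum to 1.1010, i.e. 1.101 x 2^3 (10 + 3 = 13).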

A good place to start is with normalizing the inputs before denorming the one with the smaller exponent.  Zero is a denorm that can confound things here because it will test larger than a non-zero input with negative exponent, so we need to examine each input for zero significand and deal with it somehow.  One way is to replace the exponent with a very negative value, so zero will always lose the battle of the exponents. 
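
A minimal C sketch of that zero screening, assuming 32-bit two's complement significands and plain integer exponents (the field widths and the sentinel value are my choices here, not necessarily Hive's):

  #include <stdint.h>

  /* If an input significand is zero, replace its exponent with a sentinel so
     negative that the other input always wins the battle of the exponents.
     Half of INT32_MIN so a later exponent subtraction can't overflow. */
  static void zero_screen(int32_t sig, int32_t *exp)
  {
      if (sig == 0) {
          *exp = INT32_MIN / 2;
      }
  }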

Next we need to compare the input exponents by subtracting one from the other.  Because Hive shift distances are modulo, we need to somehow limit the shift range to the interval [-32:0].  One tricky way to do this is to zero out the significand if the shift distance is beyond -32.  Shifting the value zero any distance will always return zero, and subsequently adding this zero to the other significand won't change it, which is what would happen if it were actually shifted -32 or beyond.
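
In C the out-of-range shift has to be tested for explicitly anyway, so a sketch of the alignment with this trick (same assumed 32-bit significand format as above) looks like:

  #include <stdint.h>

  /* Right shift the significand with the smaller exponent by the exponent
     difference.  If the difference reaches 32, the smaller input contributes
     nothing above the precision of the result, so just zero it out -- adding
     zero then leaves the other significand unchanged. */
  static void align(int32_t *sig_lo, int32_t exp_lo, int32_t exp_hi)
  {
      int32_t dist = exp_hi - exp_lo;  /* always >= 0 here */
      if (dist >= 32) {
          *sig_lo = 0;                 /* beyond the word: nothing survives */
      } else {
          *sig_lo >>= dist;            /* arithmetic shift on typical targets */
      }
  }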

Then we do a signed addition on the significands with a check for over and underflow, followed by normalization of the result if necessary, and the usual bounds checks on the output exponent.  A zero result should return both zero significand and zero exponent.
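
A C sketch of this last step, continuing with the assumed format (value = sig * 2^exp, integer significand); the output exponent bounds checks are left out for brevity:

  #include <stdint.h>

  /* Add two aligned signed significands, then renormalize.  The add is done
     double wide so the carry out can't be lost. */
  static void fadd_core(int32_t sig_a, int32_t sig_b, int32_t exp,
                        int32_t *sig_o, int32_t *exp_o)
  {
      int64_t sum = (int64_t)sig_a + (int64_t)sig_b;  /* fits in 33 bits */
      if (sum == 0) {
          *sig_o = 0;  /* zero result: return zero significand AND exponent */
          *exp_o = 0;
          return;
      }
      int neg = (sum < 0);
      uint64_t mag = neg ? (uint64_t)-sum : (uint64_t)sum;
      int msb = 63;
      while (!(mag >> msb)) {
          msb--;  /* locate the highest set magnitude bit */
      }
      int dist = msb - 30;  /* put the MSB at bit 30 of a signed 32-bit word */
      *sig_o = (int32_t)(dist >= 0 ? mag >> dist : mag << -dist);
      if (neg) {
          *sig_o = -*sig_o;
      }
      *exp_o = exp + dist;  /* exponent moves by the normalization distance */
  }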

The output exponent is simply the larger of the two input exponents, given input and output normalization.

I'm reaching the point where a lot of this stuff is becoming boilerplate, which is good.

Posted: 5/1/2016 2:58:45 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Found this fantastic paper this morning via Hacker News and am in the process of digesting it:

  http://www.eecs.berkeley.edu/~waterman/papers/phd-thesis.pdf

Read it and weep!  Laundry lists of fail regarding every popular ISA (instruction set architecture) are laid bare in plain English.  Literals being given too much opcode space in almost every ISA is an issue I've really grappled with (and IMO effectively dealt with via a tiered method) in Hive.  Surprisingly, even ARMv8 (the ISA everyone is currently hitching their wagons to) substantially sucks.  The paper rightly tears the x86 ISA a new one; it's absolutely horrid.

Like some newer programming languages which pare things down by purposely omitting various features, I think it's super important to keep the processor as simple as humanly possible.  ISA design is really hard work, but it's amazingly poorly done given the vast impact computing has on the world.  And we really need open standards here.

[EDIT] I like the way they handle different width immediates, by breaking them up and reassembling the pieces in a way that doesn't require hardware muxing (though muxing here isn't really that onerous).  I'd love to use that idea but can't.  They also picked little endian, which tells me they aren't completely insane.
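
For reference, here's a C sketch of how the branch (B-type) immediate gets reassembled in RISC-V, the ISA the thesis is built around (illustration from the public spec, not Hive code); the scattered fields keep each instruction bit feeding the same immediate bit across formats, and the sign always comes from bit 31:

  #include <stdint.h>

  /* Reassemble a RISC-V branch (B-type) immediate from a 32-bit instruction.
     The pieces sit at fixed bit positions shared with the other formats. */
  static int32_t btype_imm(uint32_t inst)
  {
      int32_t imm = (inst & 0x80000000u) ? (int32_t)0xFFFFF000 : 0; /* sign & imm[12] */
      imm |= (int32_t)(((inst >> 7)  & 0x1u)  << 11);  /* imm[11]   = inst[7]     */
      imm |= (int32_t)(((inst >> 25) & 0x3Fu) << 5);   /* imm[10:5] = inst[30:25] */
      imm |= (int32_t)(((inst >> 8)  & 0xFu)  << 1);   /* imm[4:1]  = inst[11:8]  */
      return imm;                                      /* imm[0] is always zero   */
  }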

Posted: 5/6/2016 10:39:11 AM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

LOG2 Float

Logs are exponents - it's too bad the term exponential is taken.

Log base 2 is the exponent of 2 required to produce the input value.  So it's a work backwards thing:

  2^0 = 1  so  log2(1) = 0
  2^1 = 2  so  log2(2) = 1
  2^2 = 4  so  log2(4) = 2

Inputs less than one produce negative results:

  2^-1 = 1/2  so  log2(1/2) = -1
  2^-2 = 1/4  so  log2(1/4) = -2

It seems log2(0) equals negative infinity (negative overflow).

If the input is a float with significand S and exponent E:

  log2(S * 2^E) = log2(S) + log2(2^E) = log2(S) + E * log2(2) = log2(S) + E * 1 = log2(S) + E

so:

  log2(S * 2^E) = E + log2(S)
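
For example, 12.25 = 1.53125 * 2^3, so log2(12.25) = 3 + log2(1.53125) = 3 + 0.6147... = 3.6147... (the same value worked through below).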

The output significand is basically the input exponent, plus a positive fractional offset caused by the input significand.  The output exponent is simply the distance required to normalize the output significand.  This works for any positive input.

As in the EXP2 case, observation of the LOG2 graph reveals that we can scale a small portion to cover all input and output scenarios by compressing the input by appropriate powers of 2 (manipulation of the float exponent).  First we subtract 1 from the input to move the curve down to the origin of both axes, and multiply it by 2 to better fit the calculation space.  This also makes the polynomial approach fundamentally tractable.  The polynomial coefficients are adjusted so that they also fit in the calculation space, which gives a polynomial output that is 1/2 the expected value.
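
Here's a quick floating point C sketch of that bookkeeping, with the library log2 standing in for the fixed point polynomial (it mimics the 1/2 scale output so the final doubling matches the description above; illustration only, not the Hive code):

  #include <math.h>
  #include <stdio.h>

  /* Stand-in for the fixed point polynomial: input y = 2 * (s - 1) in [0, 2),
     output log2(s) / 2, i.e. half the expected value. */
  static double poly(double y)
  {
      return 0.5 * log2(1.0 + 0.5 * y);
  }

  int main(void)
  {
      double x = 12.25;
      int e;
      double s = frexp(x, &e);       /* x = s * 2^e, s in [0.5, 1) */
      s *= 2.0;  e -= 1;             /* renormalize to s in [1, 2) */
      double y = 2.0 * (s - 1.0);    /* CHVAR: curve to the origin, times 2 */
      double r = e + 2.0 * poly(y);  /* undo the half scale, add the exponent */
      printf("log2(%g) = %f\n", x, r);  /* prints 3.614710 */
      return 0;
  }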

What to do with negative inputs?  I rashly decided to take the absolute value of the input.  Zero input is the only scenario that can cause (negative) overflow; all other inputs should easily fit in the output space.

Here is an illustrative example of how to handle the input value 12.25 (where the significand width is 8 and the exponent width is 4):

               SIG               EXP
               ---               ---
 START         01100010          0101
 ABS & NORM    11000100          0100
 CHVAR         10001000          0011
 POLY_i        10001000
 POLY_o        01001011
 NORM (signed)                   01100000 (SHL 5)
 MOVE          01100000
 POLY          00010010 (SHL 5-7) (7 here instead of 8 to make up for poly output of 1/2 expected value)
 ADD           01110010
 EXP_o                           0011 (8-5)
 END           01110010          0011

So the result is (114 / 256) * (2^3) = 3.5625  (the actual precise result is 3.6147...)

The polynomial requires 11 terms for minimal error.
