Let's Design and Build a (mostly) Digital Theremin!

Posted: 7/15/2018 10:57:22 AM
tinkeringdude

From: Germany

Joined: 8/30/2014

'I watched MTV (and VH1) in the 80's so I missed that song '

(sorry for straying further off topic here, I feel I have to comment on this "song")

Well, as an adolescent, that "song" was kinda fun and all, but knowing the actual music behind it, I must say they kinda butchered it. Their playing the melody without getting the chords right every time makes me cringe a little in places, and then there's the silly overuse of the "orchestral hit" sound sample. Well, you had to experiment with those new ROMpler thingies back then, I guess.


The music is a simplified adaptation of the main theme from this movie soundtrack here, which I find enjoyable to listen to by itself. Maybe only because I saw the film, hard to tell.

Das Boot - Soundtrack, 1h14m
https://www.youtube.com/watch?v=gB2ePUPk83g


Composed by Klaus Doldinger, a renowned jazz/fusion performer and composer of several film scores. I like his style of using synthesizers. You can hear he knows what he's doing; it's different from more or less clueless fumblers trying to create pop music, or 90's+ dance music, with synths (not saying I don't enjoy some of that nonetheless). Yet he also doesn't sound purely like he's "doing classical using synths as an approximation of classical instruments" - he does some uniquely synth things with them, in addition to the composition.

That movie was rather gripping. I remember seeing a rerun of the "short" cinema version way back, and a year ago I actually watched the ~5h long Blu-ray version (the glued-together TV mini series that was made after the theatrical release - or rather, just not cut down as much as the cinema version). I don't think I've ever voluntarily watched a movie anywhere near that long, lol.
It's just a huge bummer that the English overdub is rather horrible. Not because it's spoken by the actual actors - hey, you'd expect (exaggerated) German accents in American movies there, too, hehe. But they heavily sanitized the lower ranks' roughneck crewman talk, which was dirty, sometimes in a bizarre/sexual way, for American cinema, completely distorting the dialogs' meaning, topics, everything, and making it totally goofy, instead of following the original's attempt to recreate the atmosphere in that smelly, crowded, underwater metal cigar. There was no talk about no damn fat Valkyries in the original, aaargh!

Posted: 7/15/2018 11:14:20 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Thanks for the pointer, tinkeringdude!  I'm giving the soundtrack a listen as I type this.  I'm more impressed with synth qua synth, rather than as a toy effect or orchestral fake (though those have their moments too).  I can get into William Orbit's "Pieces in a Modern Style" (the first one, not the second so much) but, try as I might, I could never understand the allure of "Switched-On Bach" (other than as some kind of quaint technological milestone), though I have tremendous respect for the early trail-blazers in all their forms.  Have you listened to the soundtrack to "To Live & Die in LA"?  (link)  I've just about worn that CD out.

I'm a huge movie fan (watch at least one per day) but Das Boot is one I haven't seen for some reason (I was 26 when it hit theaters).  I guess I'm not super attracted to war films, though that one looks like it was really well done (in all of its many forms!).  From what some IMDb reviewers say, the English subtitles were "sanitized" as well, a double shame as I usually prefer to watch foreign films in their native tongue with subtitles.  They seem to think we're a bunch of babies here, and maybe they're right ("F-bomb" etc. - don't get me started).  

American movies from the 1930's are among my favorites.  Without the chrominance blurring things, black and white can be remarkably hi-def.  The theater industry was firing on all cylinders, with fantastic actors, writers, and directors cranking out reliably entertaining fare at a breakneck clip.  The "pre-code" era, alas, didn't last nearly as long as it should have, with all of the interesting adult content (not nudity per se, but issues real adults deal with) squashed by the Hays code around 1934.  The code was finally cast off in the late 60s, another era I'm fond of for its experimentalism. 

Posted: 7/16/2018 6:07:42 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Velocity Envelope (part 100?)

After having practiced some songs for a while, I believe the pitch correction code has been significantly improved by swapping out the slew limiter for a 4th order filter as discussed earlier.  As I said, even with the correction set high enough to be in the quantization zone I can now introduce a little vibrato if I do so rather carefully, so as not to "pop" over to adjacent notes.  This lets me do stuff like horn voices with a bit of realistic and interesting sounding vibrato added to them.
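For concreteness, here's a minimal sketch of the kind of 4th order filter that could replace a slew limiter: four cascaded one-pole lowpass stages. The function names and the 0.05 coefficient are my own assumptions for illustration, not the actual code from this project.

```python
def make_4th_order_lpf(coeff):
    """Return a function that filters one sample through 4 cascaded
    one-pole lowpass stages (state is kept in the closure)."""
    state = [0.0, 0.0, 0.0, 0.0]
    def step(x):
        for i in range(4):
            # one-pole lowpass: y += coeff * (x - y)
            state[i] += coeff * (x - state[i])
            x = state[i]
        return x
    return step

lpf = make_4th_order_lpf(0.05)
# Feed a step input: the output rises smoothly toward 1.0, without the
# hard corners a linear slew limiter would introduce.
out = [lpf(1.0) for _ in range(400)]
```

A step input comes out S-shaped, so small hand movements (vibrato) ride through without abrupt slope changes.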

This has got me reviewing the velocity envelope code for what feels like the zillionth time.  But now I see what is wrong with it: I'm adding the velocity to the nominal volume signal before exponentiating it, and an add before an exponentiation is equivalent to a multiplication after.  This is why scaling the velocity with LOG2 before the add works so well, as it somewhat mitigates the effective multiplication.
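The observation is just the exponential identity, easy to check:

```python
import math

# Adding before exponentiation is multiplying after:
# 2**(a + b) == 2**a * 2**b
a, b = 3.25, 0.5   # arbitrary example values
lhs = 2 ** (a + b)
rhs = (2 ** a) * (2 ** b)
assert math.isclose(lhs, rhs)

# Which is why pre-scaling the added term with log2 undoes the
# effective multiplication: 2**(a + log2(v)) == v * 2**a
v = 1.5
assert math.isclose(2 ** (a + math.log2(v)), v * 2 ** a)
```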

The thing I can't seem to keep in my head is this: there is nothing linear in an analog Theremin - everything is exponential!  Heterodyning resolves to an exponentially spaced pitch, the volume control voltage is used to directly control a linear VCA, etc.

As an experiment I moved the velocity code after EXP2 and made it a high pass filter rather than a peak hold and linear slew limit.  But with this configuration the result of velocity hand movement is proportional to where my hand is in the field, weaker farther out, stronger closer in, which I don't like.  I also made it bi-directional, so negative movements cause abrupt cut-off, which is interesting.  But then the smoothness of the decay is super dependent on the steadiness of my hand after the velocity movement, which I also don't like.

[EDIT] So I think I can probably improve what I've got a bit by being more mindful of the linear / exponential regimes and their mixing.  I may remove the attack control as the function of softer attack can be accomplished with lower velocity gain / less steep volume knee.  Reverse envelopes and the like are kinda fun and all to mess with once or twice, but I don't see a lot of serious need for them.  And I still think it's best to have envelopes "trigger" at the knee point, where the knee gain accentuates the velocity, and at a fixed point in space.

Posted: 7/19/2018 4:58:31 AM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

2^10 = 1024 ~= 1000

An interesting coincidence that gives us a valuable binary/decimal equivalence point that's easy to remember; e.g. 10 bits of resolution give a 1000:1 ratio.

Another handy thing: one bit, which is a 2:1 ratio (an octave), is almost exactly 6dB; e.g. the 16 bit PCM audio dynamic range is 6dB * 16 bits = 96dB.

Also, a 10:1 ratio is exactly 20dB; e.g. the "20dB pad" switch on your mixing console or high-end stereo reduces the signal to 1/10.

1/f is 6dB per octave, or 20dB per decade, or a first order change.  A second order change is 1/f^2, 12dB per octave, 40dB per decade.  Third order is 1/f^3, 18dB, 60dB, etc.  I use these all the time when thinking about filters.

I've found that 1/2 the 16 bit PCM dynamic range, or -48dB, is right around the threshold of hearing for an "average" signal in an "average" setting (which includes headphones).  48dB = 20dB + 20dB + 8dB, or a ratio of roughly 10 * 10 * 2.5 = 250, or 8 bits.  You of course want your signals to go fully off (-96dB) when turned off.
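These rules of thumb all follow from the amplitude-dB definition, dB = 20 * log10(ratio); a quick check:

```python
import math

def ratio_to_db(r):
    # amplitude ratio to decibels
    return 20 * math.log10(r)

bit_db = ratio_to_db(2)     # one bit = one octave = a 2:1 ratio
print(round(bit_db, 2))     # ~6.02 dB: the "6dB per bit" rule
print(16 * bit_db)          # ~96.3 dB: 16 bit PCM dynamic range
print(ratio_to_db(10))      # exactly 20 dB: the "20dB pad"
print(ratio_to_db(1024))    # 2**10, ~60.2 dB, i.e. roughly 1000:1
```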

===========

"This has got me reviewing at the velocity envelope code for what feels like the zillionth time.  But now I see what is wrong with it: I'm adding the velocity to the nominal volume signal before exponentiating it, and an add before an exponentiation is equivalent to a multiplication after.  This is why scaling the velocity with LOG2 before the add works so well, as it somewhat mitigates the effective multiplication."  -- me, above

I think I was kind of wrong saying this, as I add a bunch of stuff linearly and then exponentiate it in other places and it works fine.  Then I thought it was an offset issue, as the 32 bit linear signals need a huge offset to even start producing exponential results, but velocity is generally riding on top of a large nominal volume position signal to begin with, so that's probably not it.  I'm thinking maybe the main problem with the way I'm handling velocity is the smallness of the signal and the small variability of velocity with hand movement.  The forearm and hand are a sort of sprung mass which is easy to get moving quickly but perhaps rather difficult to make move all that much faster?  And the hand doesn't move all that far in 1/48kHz, so the difference signal is small and rather noisy when sufficiently gained up (2^10 to 2^16 or so via the knob, this on top of the extra change the knee introduces).  Differentiating is inherently noisy, but a certain amount of integration of it (low pass filtering or averaging) could be done, as the bandwidth feeding it is <100Hz.  It's somewhat counter-intuitive to contemplate integrating a differentiation.
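A sketch of that last idea - differentiate, then integrate (lowpass) the noisy difference. The names and constants here are invented for illustration, not taken from the actual code:

```python
import random

def velocity(samples, coeff=0.02):
    """Raw sample-to-sample difference, then a one-pole lowpass to
    tame the noise the differentiation amplifies."""
    prev = samples[0]
    smoothed = 0.0
    out = []
    for x in samples:
        diff = x - prev                        # differentiation
        prev = x
        smoothed += coeff * (diff - smoothed)  # integration (lowpass)
        out.append(smoothed)
    return out

# A slow ramp buried in noise: the raw differences jump around wildly,
# but the smoothed velocity settles near the true slope (0.01/sample).
random.seed(0)
ramp = [0.01 * n + random.uniform(-0.05, 0.05) for n in range(5000)]
vel = velocity(ramp)
```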

Putzing around with other stuff, I added an auto-offset to the volume and pitch when doing an auto-calibrate (initiated via encoder pushbutton) - something I should have done ages ago, as I always have to add 25 or so to the pitch.  And I never null the volume, which means I've been using a somewhat less than ideally linear volume field for all my experimenting to date.  Also added a 50Hz option to the hum filter for those bathed in the "other" EMF.  Tried adding an extra HPF to conduct the vibrato signal when the pitch is highly quantized, but it didn't seem to produce positive results.  Often it's hard for me to tell if I just didn't give it the old college try or if it's a real dead-end, so a certain minimal level of effort has to be put into serious putzing to have any assurance of the result, one way or the other.

===========

Read a story on Hacker News where the writer was contemplating coding up a good touchpad interface for Linux.  Not to knock him personally, and not saying I know everything, and not saying people can't learn once they're out of college (indeed that's generally where the most in-depth learning happens) but I think coders often don't have sufficient engineering background - in this case signals and control theory - to do much more than hack on stuff unless they really dig into what they're actually trying to accomplish with the code.  I don't code to the prevailing "best practices" (though I don't intentionally not do that), but I enjoy coding and try to do it well - coding to me is an exact functional description of the problem solution in that medium, and a means to an engineering end.  I suppose I kind of don't believe in the idea of "coder" as a generic job as it rather misses the whole point behind the coding, though perhaps with that I'm making something of a straw man argument.  But I've seen some pretty bad code in my time, and feel surrounded by terrible examples in most modern household appliances (anything with a sensor & feedback & averaging going on is almost guaranteed to be obviously botched). (#notallcoders)

Posted: 7/19/2018 10:07:58 PM
tinkeringdude

From: Germany

Joined: 8/30/2014

"the writer was contemplating coding up a good touchpad interface for Linux.  Not to knock him personally, and not saying I know everything, and not saying people can't learn once they're out of college (indeed that's generally where the most in-depth learning happens) but I think coders often don't have sufficient engineering background - in this case signals and control theory - to do much more than hack on stuff unless they really dig into what they're actually trying to accomplish with the code."

OK, so there is one domain-specific thing that the guy maybe doesn't know about; other people don't know much about many other domains, and couldn't provide quality software solutions within those domains.
Then there are domains that he does know about, and there he can create good solutions?


In this instance, I guess "touchpad" sounds innocuous enough for someone who is not aware of the underlying engineering challenges of getting it "really right". You may call it botched, but is it really, in the case where it works - if "works" means it does what the user expects and nothing seems to be missing?

But even just the simple fact of not having thought deeply about / thoroughly investigated a subject makes one susceptible to falling into the trap of thinking it's easy, at first, I guess.
I have to say I have been somewhat shocked by your thread here, I had not imagined that making intuitive, properly working controls for such a musical instrument would be such a deep topic by itself and entail the amount of R&D which can be witnessed here.
Then again, your theremin also goes beyond currently existing designs in what it does, that may also be part of it.

I guess throwing simple stuff like averaging at something is an attempt to get it to work well enough, quickly enough, without having to think a lot. Can sometimes be the right thing to do? As for the particulars, well, I don't have this mathematical and signal theory background and only pick up a little here and there. To me, averaging makes random noise cancel out over time, which sometimes seems like exactly the thing needed.

"I don't code to the prevailing "best practices" (though I don't intentionally not do that), but I enjoy coding and try to do it well"

Best practices, or a subset of them, are somewhat contextual, I would say, though.
And it's also a bit a thing of "a school" and consistency within that, in which it all makes sense and does work - but then there is a great intersection of principles between those schools, which is just objectively right, or so it seems, as ignoring them consistently apparently inevitably produces disaster, depending on overall circumstances.
Those practices are heuristics, though. People who are extremely good probably know when to ditch them and by how much, but not everyone is or needs to be extremely good - those practices also help them, without the benefit of the deeper understanding that the superstars of the scene may have.

Do keep in mind that such best practices are usually conceived with development as a process in mind - not you, some single dude, thinking something up, implementing it, casting it in iron (or rather silicon), and that's it. What if someone else (which may also be future you) has to revisit the project, maybe change some aspect of its nature or at least extend it because new customer X wants it and boss can't say "no"? And many people working on a thing and having to be able to quickly understand the others' code... people leave or get sick, or are on leave.
It may exist, but I have not come across projects with a scope such as your theremin's and a dozen (let alone 2) electrical engineers working on them, with shifting assignments of who works on what part, later maintaining and sometimes extending it.

A bit a thing of perspective and particular needs within a given scenario / circumstances, I guess.

" - coding to me is an exact functional description of the problem solution in that medium, and a means to an engineering end."

You're assuming that you actually understand what you are building up front, or, if it's contract work, that even the customer understands what he wants and needs, while in reality he may begin to understand that 6 months down the line.
Now I do see that you have your R&D journey here and things are not carved in stone from the beginning. But with many collaborators, and annoying customers who actually want to see stuff and like to argue about why you are wrong about why certain things need to be this or that way even though they clearly don't understand... yet..., and who really like to change their minds often about requirements and scope, it's much more difficult than the luxury of assuming there is one thing, for which a clear-cut functional description exists, that you can "code", and that that's all you have to think about - "the" mere technical solution itself. Much of the effort is far from just being that, or what "that" is keeps changing, and not really through anyone's fault. There are always a lot of nasty dependencies of different kinds and levels (technical and social).
Best practices are not just random; they have been won through lots of pain and agony, and may help reduce that in future projects.
But I would guess that the "tinier" one aspect of a supposed "best practice" gets, the more likely it's actually silly bureaucrat-ish nonsense without real merit.

By the way, I have seen quite atrocious code made by EEs. Probably the mathematical solution of the problem is totally great and all, but the rest... Although those were also a few not extremely "coding happy" EEs who did it when they had to, so probably there are just many hours and insights about problems in certain regards missing...
(I've also seen horrible code from physicists, probably the smartest guys I know, but apparently that doesn't make an instant great software developer out of everyone.)
Now you may doubt my non-engineer judgment about that all you like, but stuff I did as a teen - and learned the hard way would come back and bite me in the butt - does not turn into a good idea just because it's done by a proper engineer.

"I suppose I kind of don't believe in the idea of "coder" as a generic job as it rather misses the whole point behind the coding, though perhaps with that I'm making something of a straw man argument."

I don't really like the term "coder", it sounds like something that probably should be automated, lol. A "mere" translation from one domain into another. Although real-world coding involves craftsmanship that's not to be sneezed at. I consider myself a "software developer", and some design of hopefully sound systems is involved. Solving problems. Not nearly everything that can and needs to be done in software requires applying signals and control theory. (I'm not saying it couldn't actually be advantageous to view certain things in that light even if it's not directly taught in that context; though let's be honest, not every engineer thinks out of the box like that either.)

"Computer science" and all its offshoots, or even just software development without lots of theory, has so many so deeply specialized sub areas these days it's not funny, I lost track.
People seem to be willing to pay them for some reason, maybe they feel they're actually providing solutions to their problems.
I wonder how many of the Linux kernel developers know the first thing of control theory. Okay, probably a few more than, say, people in enterprise business type software  But those also can't afford to be stupid. Different domains. There are so many, on mere mortal can be expert in them all. I'm not saying some aren't intellectually harder than others. I don't see how any point is being missed by anyone, though. (which is probably because I am missing one here  I'm sure I'll be lectured)

Posted: 7/20/2018 3:53:58 AM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

tinkeringdude, I was really afraid you might think I was trolling you!  Hence the #notallcoders hashtag...

"In this instance, I guess "touchpad" sounds innocuous enough for someone who is not aware of the underlying engineering challenges to get it "really right". Although you may call it botched, but is it really in the case where it works, if "works" means it does what the user expects and there doesn't seem to be missing anything?"

Sorry, I should have given more background.  The author was lamenting that Apple had seemingly perfected the trackpad device driver, which made switching to Linux a bad experience for him.  There are apparently 3 Linux drivers to choose from, but they all reportedly suck in comparison to Apple's, and the Windows driver is somewhere in between in quality.  The author thought he might take a few months and fix the situation, though he was quite vague about what that might entail, and quite honest about his vagueness.  I suppose I was wondering aloud what in his background, other than coding, equipped him for such an undertaking, though he wasn't being egotistical or anything.  (Personally I can't stand trackpads; a decent wireless mouse is inexpensive and worlds better, though I would prefer a built-in optical trackball for portability reasons, but those went out of vogue long ago.)  Making a driver that would please someone this finicky seems like it would take a lot of deep knowledge and finesse? 

"And also a bit a thing of "a school" and consistency within that, in which all makes sense and does work - but then there is a great intersection of principles between those, which is just objectively right, or so it seems, as ignoring them consistently apparently inevitably produces disaster, depending on overall circumstances.
Those practices are heuristics, though. People who are extremely good probably know when to ditch them and by how much, but not everyone is or needs to be extremely good - those practices also helps them, without the benefit of the deeper understanding that the superstars of the scene may have."


I agree, nicely put.

"What if someone else (which may also be future you) has to revisit the project, maybe change some aspect of its nature or at least extend it because new customer X wants it and boss can't say "no"? And many people working on a thing and having to be able to quickly understand the others' code... people leave or get sick, or are on leave."

The future me often encounters my old code and wonders WTF I was thinking!  For that me I above all try to make my code as direct and as free of "tricks" as possible.  Emergent behavior is great and all but it can be a bear to puzzle through and convince yourself that you've safely covered all scenarios and failure modes.  Code can never be made clear enough it seems, it's always a chore picking it back up.

"By the way, I have seen quite atrocious code made by EE's  Probably the mathematical solution of the problem is totally great and all, but the rest... Although those were also a few not extremely "coding happy" EEs, they did it when they had to, so probably, there are just many hours and insights about problems in certain regards missing...
(I've also seen horrible code from physicists, probably the smartest guys I know, but that doesn't make an instant great software developer out of everyone, apparently. )
Now you may doubt my non-engineer judgment about that all you like, but stuff I have done as a teen and learned the hard way it'll come back and bite me in the butt, does not turn into a good idea when done by a proper engineer"

I hear you, Hoss, and totally concur! :-)  EEs, at least the ones I've encountered, are often (usually?) sloppy coders; I don't know why.  I realize the human tendency is to believe "everyone else's code stinks", but wow, I've seen (and spent way too much time re-writing) some hair-raisingly awful HDL code in my time.  And not just code - I've seen fairly terrible HW designs, like so bad that entire teams are kept busy for years playing bop-the-gopher with all the bugs.  The worst by far was a terrible EE designing a couple of terrible ASICs at the core of an entire product line.  They were first to market and made money and everything, but OMFG, life's too short to spend it cleaning up after hacks.  After enough of that you don't want to work with anyone on anything the least bit creative (hence my permanent state of whinging).  I realize a lot of bad design is due to inexperience - they let me do a lot of stuff early on that I probably shouldn't have been near, and it still haunts me a bit - I suppose everyone has to start somewhere.  But some never stop starting! :-)

Once sat in a meeting with timing IC reps, and a tech manager who was directing a department of developers hoping to build a feature that distributed sub-ns timing over the network.  The tech manager clearly had zero knowledge of control theory and was asking pretty stupid questions.  The reps looked flabbergasted and I permanently lost some faith in engineering and management that day.  Good times!  :-)

Posted: 7/20/2018 2:35:30 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Programming Your Brain

This project has a lot of detail (some would say at least partially needlessly so due to the custom processor, and on a bad day I might agree) covering a lot of different areas.  The other day I was working on the integer library (because I was working on a filter because I wanted to try something in the pitch corrector because...) and wondered if adding mixed signed add and subtract could remove a cycle or two from the subroutine in question.  So I looked at the current opcode arrangement / decoding and ultimately decided against it, but an afternoon was gone in a puff of smoke, most of it spent reacquainting myself with gritty details I'd implemented and forgotten.  I end up doing this a lot, it sometimes takes me days or even more than a week to get fully back into a groove that I had known quite well at one time - reprogramming the brain (or at least my brain) for technical activity isn't very efficient, and it makes one quite aware of how much and how quickly the brain simply forgets detail.

If you ever find yourself designing a 2 operand machine, you definitely want the "A signed * B unsigned = AB signed" type extended (upper 32 bits) multiplication in there somewhere.  You'll use it everywhere for polynomial and filter coefficients, volume control, etc.  The reverse operation, "A unsigned * B signed = AB unsigned", doesn't seem to have nearly as many uses.  This was not obvious to me at all, even after having spent much time analyzing it up front, until I started programming the thing in earnest.  You are often treating the 32 bit value as a fraction, so the unsigned range is [0:1) and the signed range is [-0.5:+0.5).  Multiplying signed * unsigned doesn't change the resulting range of [-0.5:+0.5), but multiplying unsigned * signed reduces it to [0:0.5) because the negative portion gets lopped off.  And fractional signed values are just weird in general because the sign often cuts into the range, e.g. multiply two signed fractions and the range reduces to (-0.25:+0.25], so I find I don't do that very much in actual use.  That the resulting range includes +0.25 is actually problematic, because it means you can't simply multiply the result by 2 to restore the range without checking this one lone corner case.
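The range arithmetic above can be sanity-checked with plain floats standing in for the 32 bit fixed point fractions. This is a sketch of the ranges only, not of any actual opcode behavior:

```python
# signed fraction range [-0.5:+0.5), unsigned fraction range [0:1);
# 0.499999 and 0.999999 stand in for the largest representable values.

# signed * unsigned keeps the full signed range [-0.5:+0.5):
products = [s * u for s in (-0.5, 0.499999) for u in (0.0, 0.999999)]
assert min(products) >= -0.5
assert max(products) < 0.5

# signed * signed: the extreme is (-0.5) * (-0.5) = +0.25, so the
# result range is (-0.25:+0.25] and +0.25 IS reachable -- the corner
# case that makes a blind "multiply by 2 to restore the range" unsafe.
assert (-0.5) * (-0.5) == 0.25
```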

Posted: 7/20/2018 10:25:52 PM
tinkeringdude

From: Germany

Joined: 8/30/2014

Hehe, no, I wasn't thinking of trolling; there was enough truth in that post, I was just confused by some details or the angle of view.

"Making a driver that would please someone this finicky seems like it would take a lot of deep knowledge and finesse? "
Maybe it's also a matter of taste. I can't stand Apple stuff. Who knows in what weird way the stuff they claim is genius works!
My notebook luckily has a driver option to auto-deactivate the touchpad whenever a mouse is connected, also known as "always". My mouse has so far tracked excellently on every surface, if need be my pants on my thigh - still better than the annoying touchpad, which tends to be in the way...

Lenovo has this interesting red rubber thing on some of their notebooks. I don't know what's inside. You apply some force with a finger in X,Y direction, and barely move/deform it at all, but it reacts like some sort of analog joystick, with minute movements.
At first I never used it and dismissed it as a toy, but for me it now surely beats the touchpad when I have no mouse.

A relative uses a USB trackball by Logitech(?) for some hand problems which make use of a mouse very uncomfortable. So that still exists, if only as an external device.

"above all try to make my code as direct and as free of "tricks" as possible.  Emergent behavior is great and all"
Lol! Code like I imagine from reading this ("emergent...") warrants the emergence of a large fly swatter right on somebody's behind.
There is this one saying: "Debugging a piece of code is 10x harder than writing it. So if you're being as clever as can be writing it, you're by definition not smart enough to debug it." Seems accurate enough.

"puzzle through and convince yourself that you've safely covered all scenarios and failure modes."
If you are speaking of times when you need to change code not well understood anymore and the fear of breaking something without noticing:
I have no clue about HW dev equivalents or applicability, but in software there are unit tests for this (using test frameworks for different languages & environments, to avoid repeating all the work common to all tests, and to automate stuff).

There's a widespread notion that this isn't practical for embedded projects. The book "Test Driven Development for Embedded C" disagrees. I have not really worked through that book yet, though.
Of course one can't practically cover everything. But one can check whether a module basically does what it's supposed to, and how it reacts to edge cases.
If you then change something and the tests run and something changed the behavior, it blows up in your face and you know you need to fix it.
If it seems difficult or complicated to test edge cases of a module, that module is probably less of a module and more of a ball of wool *after* the cat played with it. Or worse, the whole system is like that. Dependencies make isolated testing hard / impossible - so just trying to do that will even warn you about maintainability problems in the code one may not have thought about before, because one was not forced to.

For parts of a system which are highly experimental in R&D, it is probably less practical, because a lot of things are expected to be moving a lot for a while, and it would be tiresome to change tests all the time.
Also, if only one person works on something and changes to (non-experimental) parts of the code are very unlikely at some point, the cost/benefit is perhaps less favorable than in other scenarios.
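To make the unit-test point concrete, here's a tiny example with an entirely invented "module" (a clamp function) - if a later change alters any of this behavior, the edge-case tests fail immediately:

```python
import unittest

# Hypothetical tiny module under test: clamp a control value into a
# legal range. Names are invented purely for illustration.
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

class ClampTests(unittest.TestCase):
    def test_passthrough(self):
        # a value inside the range comes through unchanged
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_edges(self):
        # edge cases: exactly at and beyond the limits
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(-1, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)
        self.assertEqual(clamp(11, 0, 10), 10)

# Running `python -m unittest` discovers and executes these tests.
```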

"Code can never be made clear enough it seems"
Indeed, and some of the "best practices" also help make it clearer. If e.g. code is composed in a way that there's never too much complexity in one place, it's easier to follow than a page of code that jumps between levels of abstraction all the time, forcing the reader to context-switch a lot, which is mentally exhausting and makes one error-prone when trying to grasp what's going on, for at some point your mind will just be in the wrong frame...
Which means: keep things simple, and reasonably split responsibility into smaller units of code (files, modules, classes, whatever).
That was a thing that really bugged me about the code at a former workplace, where the "lead" (more boss than lead) hadn't heard of separation of concerns. When the editor is showing a really tiny scroll bar, one might as well take the hint that the source file is too damn large. And then of course he had variables accessed from everywhere throughout the system and couldn't possibly be aware of all potential race conditions. Not to mention that it's smelly to begin with that accesses of the same variable happen from different logical layers of the program... as well as HW driver functions being called from different levels in the logical hierarchy of the program... who needs a well-defined flow structure, or resource access, or clearly separated responsibilities anyway!?

"I've seen (and spent way too much time re-writing) some hair-raising awful HDL code in my time."
I couldn't comment on hardware languages. I sometimes suspected, when some EE wrote some funny piece of software, that they were thinking in hardware design mode, which didn't translate too well. But as I have at best very sketchy ideas about what HW design is like, I don't give too much weight to that hypothesis. We'll see, if the day ever comes that I actually look into FPGAs, whether I'll do the reverse and my EE colleague laughs at me for it.

"But some never stop starting! :-)"
Sounds somewhat like:
https://daedtech.com/how-developers-stop-learning-rise-of-the-expert-beginner/


Posted: 7/21/2018 4:22:09 AM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

"Hehe, no thinking of trolling, there was enough truth in that post, just confused by some details or the angle of view."

That's a relief!  I was wielding a pretty broad brush and didn't mean to get any paint on you! :-)

"Maybe it's also a matter of taste. I can't stand Apple stuff. Who knows in what weird way that works they claim is genius!"

Well, some folks I like a lot like them a lot, so I try not to step on toes, but, as one wag stated, they seem to be more of a fashion company at this point.

"Lenovo has this interesting red rubber thing on some of their notebooks. I don't know what's inside. You apply some force with a finger in X,Y direction, and barely move/deform it at all, but it reacts like some sort of analog joystick, with minute movements."

I had one of those as corporate issue and could never get into the eraser thingie, though I can't say I tried all that hard.  There are comparisons to naughty bits that I shan't go into. :-)

"A relative uses a USB trackball by Logitech(?) for some hand problems which make use of a mouse very uncomfortable. So that still exists, if as an external device."

I've owned and used a lot of trackballs.  The early ones rode on rods that, if they weren't properly hardened and heat treated, actually wore away pretty quickly from the plastic ball rubbing against them - something that seems kind of impossible, but there it is.  Logitech makes an OK one, but it lacks a scroll wheel, which is a must IMO.  I used a Kensington Expert for years; it works well if you remove the magnet in the outer dial, but the whole thing is way too high in the sky, leading to wrist fatigue.  Good build but overpriced.  The default pointing device in many early laptops was a small trackball, and I have no earthly idea why that changed, as everything else seems quite inferior.

"Lol! Code like I imagine from reading this ("emergent...") warrants the emergence of a large fly swatter right on somebody's behind."

It seems quite common among HDL hackers.  A counter or two coupled to async statements; sometimes the flops are off somewhere else, grouped together for no obvious reason.  The worst is schematic capture of "standard logic" type boxes - no clue as to what's going on, just a pile of unnamed wires.  Best practice is to use state machines, but they don't interact with counters all that well - get two clocked things together and the timing becomes almost impossible to simulate in your head.  I've sort of come full circle, from counters + async, to state machines, and back again, though I still use state machines when they really make sense.  What I aim for now: clean, consistent code layout; comments for most sub-blocks; short but descriptive signal naming; "right sized" modules (not too complicated but not trivial either); and all of it with an eye towards ease of verification.
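The "two clocked things together" problem can be sketched outside of HDL. Here's a rough Python model (invented names and behavior, not real hardware code) of a state machine coupled to a counter, where the next state and next count are computed together on each clock edge - exactly the coupling that gets hard to simulate in your head:

```python
# Rough software model of a clocked FSM + counter (hypothetical example).
# On each "clock edge" (call to tick), next_state and next_count depend on
# the *current* values of both - the coupling dewster describes.
IDLE, RUN, DONE = "IDLE", "RUN", "DONE"

def tick(state, count, start, limit=3):
    """One clock edge: return (next_state, next_count)."""
    if state == IDLE:
        return (RUN, 0) if start else (IDLE, count)
    if state == RUN:
        return (DONE, count) if count + 1 >= limit else (RUN, count + 1)
    return (IDLE, 0)  # DONE is a one-cycle pulse, then back to IDLE

state, count = IDLE, 0
trace = []
for cycle in range(6):
    state, count = tick(state, count, start=(cycle == 0))
    trace.append(state)
print(trace)  # three RUN cycles, a DONE pulse, then back to IDLE
```

Even in this toy, off-by-one questions (does `count + 1 >= limit` fire one cycle early or late?) only become obvious once you trace it cycle by cycle - which is why simulating it mechanically beats doing it in your head.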

"I have no clue of HW dev equivalents or applicability, but in software, for this there are unit tests (using test frameworks for different languages & environments, to not repeat all the work common to all tests, and to automate stuff)"

HDL has this too; for medium to large projects there is often a sim guy who does nothing (!) but run scenarios against a gauntlet of theoretical and previous pass/fail cases.  It's good and bad - I think it can make HDL coders (even more) sloppy and lazy.  For me testing is an absolutely essential and enjoyable experience, but I have zero interest in catching anyone else's bugs.

"Of course one can't practically cover everything. But to check whether a module does basically what it's supposed to, how it reacts to edge cases.
If you then change something and the tests are run through and something changed the behavior, it explodes in your face and you know you need to fix it.  If it seems difficult or complicated to test edge cases of a module, that module is probably less of a module and more of a ball of wool *after* the cat played with it. Or worse, the whole system is like that. Dependencies make isolated testing hard or impossible - so just trying to do that will even warn you about maintainability problems in the code one may not have thought about before, because one was not forced to."

Exactly.  Preach it brother!

"Indeed, and some of the "best practices" also help make it clearer. If e.g. code is composed in a way that there's never too much complexity in one place, it's easier to follow than a page of code that jumps between levels of abstraction all the time, forcing the reader to context-switch a lot, which is mentally exhausting and makes one error-prone when trying to get what's going on, for at some point your mind will just be in the wrong frame...
Which means: keep things simple, and reasonably split responsibility into smaller units of code (files, modules, classes, whatever)."

Yes, KISS, modularity for ease of understanding and verification.

"When the editor is showing a really tiny scroll bar, one might as well take the hint that the source file is too damn large"

LOL!

"And then of course he had variables accessed from everywhere throughout the system and couldn't possibly be aware of all potential race conditions. Not to mention that it's smelly to begin with that accesses of the same variable from different logical layers of the program are happening... as well as HW driver functions being accessed from different levels in the logical hierarchy of the program... who needs a well defined flow structure or resource access, or clearly separated responsibilities anyway!?"

Was this Toyota mission-critical SW?  Probably not, but it sounds like it.  There was a SW type analyzing their code for a court case and it was horrible: unsafe globals, giant cryptic functions, processes dying for lack of real time - you name it and they were doing it wrong.  He could flip a single bit in the code and the car would take off like an uncontrollable rocket.  That happened a couple of times to my Dad in a used Toyota they bought: sitting at a stop, the thing revved up out of the blue and just about killed him; his quick thinking was to cram on the brakes and turn off the ignition.  Which is what got me looking into the issue.  Toyota lost the case, which is good because that code killed some people, but it seems they worked really hard to cover it up and blame the victims.  Maybe they all do it (?) but I'll never buy a Toyota.

"I couldn't comment on hardware languages. I sometimes suspected, when some EE wrote some funny piece of software, that they were thinking in hardware design mode, which didn't translate too well. But as I have at best very sketchy ideas about what HW design is like, I don't give too much weight to that hypothesis. We'll see, if the day ever comes that I actually look into FPGAs, whether I'll do the reverse and my EE colleague laughs at me for it."

You're giving them way too much credit! :-)  List-type programming and HDLs are more alike than different, and share many of the same techniques and approaches.  But managing anything beyond the simplest concurrency requires a few new crutches (i.e. sketches of the logic showing the domains bordered by flops, as well as any flop-crossing feedback/feedforward paths; and sketches of waveform tables) and of course being familiar with basic digital constructs and their code representations.  At some point higher-level architectural issues - and therefore interfaces and handshakes - become the focus, and that's when the real fun starts, though the devil as always remains below.
