I was running a WSPR beacon the other day and another ham emailed me a screenshot showing my badly overdriven signal cluttering up the band. I thanked him, pulled the plug, and started troubleshooting. The wall I'm hitting, I think, is the audio level going into the data port on the radio.
I bit the bullet and ordered a Tigertronics SignaLink USB for amateur digital modes. One of the shortcomings of the DIY cable I made is that there's no good way to control audio levels into the rig except the sound mixers in the operating system. They work fine, but the data input pin on the Yaesu is really, really sensitive and I need finer control over the voltage. I can keep the audio level very low and end up operating QRP (according to the meter on the radio), or I can move the slider a millimeter and blow the levels out again. There's no middle ground, even with CLI controls.
A divider circuit or pot would probably do the trick, but I'm not confident enough in my abilities to try either, and the point of the exercise is to eliminate as many unknowns as I can while I clean up my signal. The SignaLink seems to be the de facto standard digital modes gadget, which has to count for something.
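For what it's worth, the math behind the divider option is simple: the output is just the input scaled by the ratio of the bottom resistor to the total series resistance. Here's a quick sketch; the resistor values are purely illustrative, not a recommendation for any particular rig's data port:

```python
def divider_out(v_in, r_top, r_bottom):
    """Output voltage of a simple two-resistor series divider.

    v_in drops across r_top + r_bottom; the output is tapped
    across r_bottom, so v_out = v_in * r_bottom / (r_top + r_bottom).
    """
    return v_in * r_bottom / (r_top + r_bottom)

# Example (illustrative values): knocking a 1 V peak audio line
# down to roughly 10 mV for a very sensitive data input.
v_out = divider_out(1.0, 100_000, 1_000)  # ~0.0099 V
```

Swapping the bottom resistor for a pot gives you a continuously variable ratio, which is exactly the fine-grained level control the OS mixer slider can't deliver.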
The devices aren't particularly expensive, and I had a nice email exchange with a guy in customer service in which a couple of my questions were answered and some suggestions offered. The service was great, which bodes well for any future questions I might have.
Bookwise: I just finished Weapons of Math Destruction by Cathy O'Neil, which was featured on a recent 99% Invisible podcast. Very good stuff there. O'Neil is a former academic mathematician-turned-Wall Street quant who grew increasingly uneasy about the level of control that badly written algorithms are claiming over our lives. Black box approaches to difficult problems (such as determining which teachers are effective and which aren't) seem to offer an unbiased, scientific pathway. They sometimes fail spectacularly (to wit: when Google's Photos app identified African-Americans as gorillas back in 2015), but more often they fail subtly. The subtle failures are arguably worse.
First, despite any claimed objectivity, algorithms written by fallible human beings often encode the assumptions and biases of their creators. Second, in many of the most egregious cases O'Neil explores, there is no apparent feedback into the algorithm for correction and optimization. Finally, bad outcomes tend to reinforce the flawed assumptions that went into the algorithm's creation, so the cycle continues. Did we mention that the creation of these algorithms is Big Business, that they tend to be jealously guarded intellectual property, and that they are thus even further removed from any sort of public scrutiny?
Also still working my way through Pope St. John Paul II’s Theology of the Body audiences. The TOB has been called something of a theological time bomb, and it’s hard to disagree, even only partway through it. More later.
Finished up the 20th Aubrey/Maturin novel (The Hundred Days) on a recent trip to the Florida panhandle. This leaves one complete novel (Blue at the Mizzen). Then I guess I'll finish out all of the Richard Sharpe books and then go for an O'Brian re-read.