Websdr as a remote receiver with Max/MSP and Elecraft K4

A remote receiver project using WebSDR as an alternative to a local receiver.

Demonstration of a Max/MSP program that connects an amateur radio transceiver to WebSDR – transmitting locally from Maine (USA) while receiving remotely using a radio in the Netherlands. The Max program reads the frequency from an Elecraft K4 transceiver and uses it to control the WebSDR sites. It also loads the remote receivers and controls audio routing, mode, filter, and waterfall display settings. An iPad running TouchOSC acts as a control panel. Up to four remote receivers can operate at the same time. WebSDR is a remarkable system developed by PA3FWM at http://websdr.org/. It lets you control remote receivers worldwide from your Web browser.

Components:

  • Max/MSP
  • WebSDR
  • TouchOSC
  • Elecraft K4 transceiver with antenna system
  • Skookumlogger (logging software)

Max Patches:

websdrjweb7.maxpat : main control program. Contains [jweb] objects for launching WebSDR instances, plus code for injecting JavaScript to control parameters like frequency, filter, and volume. This patch acts as an intermediary between TouchOSC and WebSDR, and also allows external MIDI control as well as frequency input from CAT-controlled radios like the Elecraft K4.
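As a rough illustration (not the actual code in websdrjweb7.maxpat, and the page function names setfreq and set_mode are assumptions about what a WebSDR page exposes), the injected JavaScript might look something like:

    // sketch of JavaScript injected into a WebSDR page from [jweb]
    // setfreq / set_mode are assumed page functions, guarded in case they don't exist
    function tuneRemote(freqKHz, mode) {
      if (typeof setfreq === 'function') setfreq(freqKHz);   // tune the remote receiver
      if (typeof set_mode === 'function') set_mode(mode);    // e.g. 'lsb', 'cw', 'usb'
    }
    tuneRemote(14025.45, 'lsb');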

websdrCATaudio.maxpat : handles serial-port interaction with the K4. It also reads the audio stream from either the K4 receiver (via USB) or the WebSDR receiver (via BlackHole). I created an aggregate audio device called K4sdr so that Max can read both devices at the same time. Audio switching and levels are handled with a Korg nanoKONTROL2, for example to switch between the audio streams or listen to both.
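For reference, the K4 speaks a Kenwood-style ASCII CAT protocol over the serial port (for example, FA; requests the VFO A frequency). The patch does this inside Max; the node.js sketch below is only an illustration of the protocol, and the port path and baud rate are assumptions:

    // illustration only: query K4 VFO A frequency over CAT (Kenwood-style commands)
    // the port path and baud rate here are assumptions
    const { SerialPort } = require('serialport');
    const { ReadlineParser } = require('@serialport/parser-readline');

    const port = new SerialPort({ path: '/dev/tty.usbserial-K4', baudRate: 38400 });
    const parser = port.pipe(new ReadlineParser({ delimiter: ';' }));

    parser.on('data', (msg) => {
      if (msg.startsWith('FA')) {
        const hz = parseInt(msg.slice(2), 10);   // 'FA00014025000' -> 14025000
        console.log('VFO A:', hz / 1000, 'kHz');
      }
    });

    port.write('FA;');   // request VFO A frequency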

Optional: arduino-ptt-detect2.maxpat : reads serial data from an Arduino connected to the amplifier keying line, to determine whether the radio is in transmit mode, so the patch can switch back to the local audio stream and avoid the latency of hearing your own signal via WebSDR. See the subsequent post about this setup…

TouchOSC

websdrCW3.touchOSC : controls all four WebSDR channels, i.e., volume, mute, filter, CW offset, and filter shift. Also handles window management, loading the js code, zooming the WebSDR display in and out, and selecting channel waterfall views or Max code views.

CW Offset

WebSDR doesn’t have a control for CW pitch offset. To stay in sync with the K4 frequency, the WebSDR is run in LSB mode with a frequency offset equal to the CW pitch setting in the transceiver, e.g., 450 Hz. This works for most of the WebSDR sites, but unfortunately some of the SDRs are off frequency. You can usually compensate by adjusting the CWfreqOffset for that channel (in Max or TouchOSC).

Setting the offset also requires shifting the filter so it is centered over the actual signal.
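The arithmetic is simple; a quick sketch (the sign convention and the per-channel correction may need adjusting for a given site):

    // WebSDR tuning frequency for CW, run in LSB mode (sketch)
    // cwFreqOffsetHz is the per-channel correction for off-frequency SDRs
    function websdrTuneKHz(k4FreqKHz, cwPitchHz, cwFreqOffsetHz = 0) {
      return k4FreqKHz + (cwPitchHz + cwFreqOffsetHz) / 1000;
    }
    websdrTuneKHz(14025.0, 450);   // -> 14025.45 kHz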

Files

This is currently a work in progress and is not available on GitHub. Local files are in the max teaching examples folder.

Amateur radio contesting time-lapse map

Mapping geocoded contest log data using node.js and OpenLayers.

The goal was to make something that looks like the Reverse Beacon Network map, only for contest log files. I use RBN for testing antennas now. That map display gives you a pretty good idea of your actual antenna pattern.

Code is written in node.js (JavaScript) and HTML.

Part 1: Read a Cabrillo log file containing QSO: records. Look up each callsign, get latitude and longitude, and rewrite the file as JSON data tagged with geo coordinates. I originally tried getting the data from HamQTH, but it was not current, so I ended up using the qrz.com XML callsign lookup. For callsigns “not found” I used the qrz.com DXCC prefix lookup to get general coordinates for the country. There are still a few bad/missing data issues to resolve, like European stations with coordinates at the South Pole.
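A rough sketch of that first pass is below. The QSO: field positions and the output record shape are assumptions for illustration; the actual index.js may differ:

    // sketch: collect QSO: records from a Cabrillo file for geocoding
    const fs = require('fs');

    const lines = fs.readFileSync('testdata.cbr', 'utf8').split('\n');
    const qsos = lines
      .filter((l) => l.startsWith('QSO:'))
      .map((l) => {
        const f = l.trim().split(/\s+/);
        // QSO: freq mode date time sent-call rst exch rcvd-call rst exch
        return { freq: f[1], mode: f[2], date: f[3], time: f[4], call: f[8] };
      });

    // next step: look up lat/lon for each call (qrz.com XML lookup, falling back
    // to the DXCC prefix lookup), then write the tagged records to geocab.json
    // fs.writeFileSync('geocab.json', JSON.stringify(qsos, null, 2));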

Part 2: Tried various mapping frameworks, like Leaflet, ArcGIS, and OpenLayers. I wanted to use a great-circle projection (azimuthal equidistant) like the big ARRL world map, and may still figure this out, but working with map projections and coordinate transforms is way worse than doing a Smith chart. I ended up hacking a flight-tracking example from openlayers.org and basically replacing airplanes with QSOs, which is why the lines are animated from source to destination. I also added a layer for day/night and a QSO/time status display.
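The flight example generates its curved paths with the arc package; a stripped-down sketch of building one QSO path the same way as an OpenLayers feature (the names here are assumptions, not the project code):

    // sketch: one QSO path as a great-circle LineString feature (OpenLayers + arc)
    import Feature from 'ol/Feature';
    import LineString from 'ol/geom/LineString';
    import { fromLonLat } from 'ol/proj';
    import arc from 'arc';

    function qsoFeature(from, to) {          // from/to are [lon, lat]
      const gc = new arc.GreatCircle({ x: from[0], y: from[1] }, { x: to[0], y: to[1] });
      const coords = gc.Arc(100, { offset: 10 }).geometries[0].coords;
      return new Feature({ geometry: new LineString(coords.map((c) => fromLonLat(c))) });
    }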

It probably makes sense to get rid of the flight animation and just display the entire path in sync with the QSO data, with a color code for each band (K1KP), speed control on the time lapse, etc., so you can get a better sense of rate and propagation.

It would be cool to have a website where you could upload a log file and generate maps.

Note: this project is not yet available

Files

local files:

generating data:

internetsensors/cabrillomap

Put the Cabrillo data in testdata.cbr (use QSO: records only for now); it should be sorted chronologically.

run:  node index.js

the output file will be: geocab.json (which is used as input to the mapping program)

mapping

internetsensors/oltest

main.js = node source with ol mapping and data processing

index.html = web page for map

geocab.json = geocoded cabrillo json test data

to run, type: npm start

Then open: http://localhost:5173/ in a browser

Additional work / current issues

Some of the qrz.com callsign data has bad geo coordinates. In particular, some of the records show a latitude of -89 and a longitude of -179; these need to be detected and replaced with DXCC coordinates.
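A simple sanity check could catch those before they reach the map; something like the sketch below, where dxccCoords() is a hypothetical helper that returns the country-level coordinates:

    // sketch: replace obviously bogus qrz.com coordinates with DXCC fallbacks
    // dxccCoords() is a hypothetical helper; the thresholds are assumptions
    function fixCoords(rec, dxccCoords) {
      const bogus = rec.lat == null || rec.lon == null || rec.lat <= -89 || rec.lon <= -179;
      if (!bogus) return rec;
      const fallback = dxccCoords(rec.call);
      return { ...rec, lat: fallback.lat, lon: fallback.lon, coordSource: 'dxcc' };
    }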

There should be an argument on the node program to pass in the data file. Also, the program should clean up any non-QSO: records, like the file header info and any X-QSO: records.

The async/await code also needs cleanup; currently there are several different methods for handling state transitions.

mapping ideas:

As mentioned above, it’s probably a good idea to make a version of the code without the flight animation, with various controls to stop/start the data playback, look at individual QSOs, control the speed, etc.

Azimuthal equidistant projection: there are some links to examples in Leaflet and ArcGIS that handle complex projections. In Documents, look at: “map links for projection stuff.txt”

leaflet test version:

In the internetsensors/cabrillomap folder there’s a test file, cbworld1.html, that works over websockets when you run index.js to generate test data. It uses a Leaflet map, but the lines don’t adapt to great-circle polar paths.

arcgis

I believe the ArcGIS examples are in internetsensors/projected geometries

And: internetsensors/pe-gs-projection

The former is a very nice world projection with some point markers. The latter is an example that shows how to switch out various projections in real time.

Substituting components in parallel

Resistance and capacitance in an AM radio.

A first test to find out if it’s practical to ‘piggyback’ external controls onto an existing radio. The reason for doing this is to leave the original radio intact by clipping the remote components to the leads of the existing controls.

For example, a varactor would be connected in parallel with the variable capacitor already in the circuit. The existing capacitor would be set low. The capacitance of the varactor would then add to the total, since parallel capacitances simply sum.

For potentiometers it’s not as easy, because the parallel combination is always less than either resistance:


Formula:  Rtotal = R1×R2/(R1+R2)

For example, if R1 is 10K, R2 would need to be about 100K to get a total resistance of 9K. To keep 99% of the existing resistance, the piggyback resistor needs to be 100 times the value of the existing resistor: 1 megohm if matched with 10K.
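The same arithmetic as a quick sketch:

    // parallel resistance: Rtotal = R1*R2 / (R1 + R2)
    const parallel = (r1, r2) => (r1 * r2) / (r1 + r2);

    parallel(10e3, 100e3);   // ~9.09K  (10K existing control, 100K piggyback)
    parallel(10e3, 1e6);     // ~9.9K   (about 99% of the original 10K)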

Practical considerations

What happens when the radio is not being controlled remotely?

  • For capacitance, the remote capacitor (varactor) should be set to 0.
  • For resistance, the remote resistor should be set as high as possible.

Conversely, how should the physical controls on the radio be set when operating remotely?

  • Variable capacitors should be set as low as possible.
  • Potentiometers should be set as high as possible. For a volume control this actually means turning the volume all the way down.

Switches

SPST switches can be thought of as a potentiometer that is either zero or infinite resistance. A piggybacked switch will only work if the existing switch is in the ‘off’ position, and vice versa.

Double-Throw and Rotary switches present more difficulties as multiple states are maintained by the same device.

I don’t think multiple-throw switches can be piggybacked. Possible solutions:

  • mechanical connection to manual control (servo)
  • internal relays – requiring modification of the radio, so that the existing control and the remote control operate the same relays
  • Hybrid approach: Operate the switches manually while operating other controls remotely.

Testing

I piggybacked a tuning capacitor from an AM radio onto the tuning capacitor of a vintage Radio Shack Globe Patrol (regenerative receiver).

 

Synscape

A soundscape that responds to color.

By Helen Trevillion

The Max patch is not available. From the video it appears that many channels of sound are playing concurrently. Color values are assigned to faders for each channel.

Presets in Max for Live

How to use the Max preset object inside of M4L.


There is some confusion about how to use Max presets in an M4L device. The method described here lets you save and recall presets with a device inside a Live set, without additional files or dialog boxes. It uses pattrstorage and works automatically with the Live UI objects.

It also works with other Max UI objects by connecting them to pattr objects.

It’s based on an article by Gregory Taylor: https://cycling74.com/2011/05/19/max-for-live-tutorial-adding-pattr-presets-to-your-live-session/

Download

https://github.com/tkzic/max-for-live-projects

Folder: presets

Patch: aaa-preset3.amxd

How it works:

Instructions are included inside the patch. You will need to add objects and then set attributes for those objects in the inspector. For best results, set the inspector values after adding each object.

Write the patch in this order:

A1. Add UI objects.

For each UI object:

  1. check link-to-scripting name
  2. set long and short names to actual name of param


A2. (optional) Add non-Live (i.e., Max) UI objects

For each object, connect the middle outlet of a pattr object (with a parameter name as an argument) to the left inlet of the UI object. For example:


Then in the inspector for each UI object:

  1. check parameter-mode-enable
  2. check initial-enable


B. Add a pattrstorage object.


Give the object a name argument, for example: pattrstorage zoo. The name can be anything; it’s not important. Then in the inspector for pattrstorage:

  1. check parameter-mode enable
  2. check Auto-update-parameter Initial-value
  3. check initial-value
  4. change short-name to match long name


C. Add an autopattr object


D. Add a preset object


In the inspector for the preset object:

  1. assign the pattrstorage object name from step B (zoo) to the pattrstorage attribute


 Notes

The preset numbers go from 1 to n. They can be fed directly into the pattrstorage object, for example if you want to use an external controller.

You can name the presets (slotnames). See the pattrstorage help file.

You can interpolate between presets. See the pattrstorage help file.

Adding new UI objects after presets have been stored

If you add a new UI object to the patch after pattrstorage is set up, you will need to re-save the presets with the correct setting of the new UI object. Or you can edit the pattrstorage data.