The figure above is the waveform for the 38400 bps QAM signal. While I believe in math, my common sense refuses to believe that any information can be recovered from such a chaotic wave.
The "version 3" QAM modem written in Python is a real winner. Today I implemented some of the FIXMEs in the code, and the thing reached 38400 bps at 2400 baud, and 48000 bps at 3000 baud, both using 16 bits per symbol. Note that 3000 baud is almost twice the frequency of the 1800 Hz carrier!
It "wanted" to work at 18 bits/symbol, but showed too many errors. I am not sure whether 16-bit WAV quantization is the hard limit here, but it seems to be the case. Still, I was quite surprised to see that the transmitted message was still recognizable at 18 bits/symbol, albeit corrupted.
The changes in v3 which allowed this performance jump are:
1) Added code to handle adjacent symbols that are equal in amplitude and phase.
While all V3 constellation symbols cause a phase change, sometimes it is very small, especially if we are sending many bits per symbol.
The algorithm is simple: if a symbol change is not detected within the expected interval, the complex signal is sampled anyway. Basic as it seems, this allows RX to handle up to 16 bits per symbol, where it previously handled only 7.
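The fallback can be sketched in a few lines (the function name is mine, not the actual v3 code): once symbol timing is known, the receiver samples the complex baseband on a fixed clock, so a run of identical symbols still yields one sample per symbol interval even when no transition is seen.

```python
def sample_symbols(baseband, samples_per_symbol):
    """Take one complex sample per symbol period, at mid-symbol,
    regardless of whether a symbol transition was detected there."""
    return [baseband[i]
            for i in range(samples_per_symbol // 2, len(baseband),
                           samples_per_symbol)]
```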
2) Added a PLL-like carrier frequency compensation.
Any difference between the TX and RX carriers causes a "phase drift" in the decoded QAM signal: the phase shows a small but continuous rotation in absolutely every sample. This can be distinguished from the sudden, time-limited phase rotations of symbol changes, and compensated for.
I discovered that the phase drift is more stable (and hence easier to handle) if the RX carrier frequency is lower than the TX one. If the RX carrier is above the TX carrier, the phase drift oscillates in a way that is difficult to track. So now we make sure that the RX carrier is always "low". At 16 bits per symbol, RX can tolerate up to 15 Hz of carrier deviation, which seems very good to me.
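A minimal sketch of the drift-tracking idea, assuming for illustration that adjacent symbols carry no intentional phase change (as on a pilot or training sequence); a real decision-directed loop would first subtract the decided symbol's phase. All names are hypothetical, not the actual v3 code.

```python
import cmath

def derotate(symbols, alpha=0.05):
    """PLL-like first-order tracker: estimate the constant per-symbol
    phase step caused by a TX/RX carrier offset and rotate it out."""
    drift = 0.0    # estimated phase step per symbol, in radians
    phase = 0.0    # accumulated correction applied to each symbol
    out = []
    prev = None
    for s in symbols:
        corrected = s * cmath.exp(-1j * phase)
        if prev is not None:
            # Residual phase advance between consecutive corrected
            # symbols; for a pure carrier offset this is constant.
            err = cmath.phase(corrected * prev.conjugate())
            drift += alpha * err     # slowly adapt the drift estimate
        prev = corrected
        out.append(corrected)
        phase += drift
    return out
```

After convergence the frequency offset is removed and only a constant phase offset remains, which the demodulator resolves against the constellation anyway.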
3) Constellation uses Gray code now, so adjacent points are guaranteed to have only one different bit.
This "softens" errors (even if RX resolves to the wrong symbol, chances are only one bit will be corrupted), and helps whatever error-correction algorithm is in use (currently we use none, but I plan to implement convolutional codes in V4).
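The classic binary-reflected Gray code gives exactly this guarantee. A phase-only sketch (the actual constellation also varies amplitude, and these helper names are mine):

```python
def gray(n):
    """Binary-reflected Gray code: consecutive integers differ in one bit."""
    return n ^ (n >> 1)

def gray_phase_map(bits_per_symbol):
    """Assign bit patterns to the 2**k positions around the circle so
    that neighboring constellation points differ in exactly one bit,
    including the wrap-around from the last point back to the first."""
    return [gray(i) for i in range(1 << bits_per_symbol)]
```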
Actually, the v3 performance numbers are becoming meaningless; of course they are only possible because the medium is a noiseless 16-bit WAV file, whose Shannon capacity is almost 700 kbps. (The Shannon limit of a phone line is around 24 kbps.)
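The ~700 kbps figure is consistent with the Shannon–Hartley formula applied to a 44100 Hz, 16-bit WAV, i.e. roughly 22 kHz of bandwidth and ~96 dB of quantization SNR (my assumptions; the phone-line figure likewise depends on the bandwidth and SNR one assumes):

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    # Shannon-Hartley: C = B * log2(1 + SNR), with SNR as a power ratio
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

# 44100 Hz, 16-bit WAV: ~22050 Hz bandwidth, ~96 dB quantization SNR
print(shannon_capacity(22050, 96))   # roughly 700 kbps
```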
The next challenge is to make RX work well on noisy, band-limited signals. Rates like 14400 bps will be possible only by implementing convolutional codes, I guess.