Encryption won't work for this scenario as it reintroduces the all-or-nothing data problem. Degraded encrypted data can't be decrypted successfully (yet?)
A stream cipher like ChaCha20 would work. Most stream ciphers (including ChaCha20 and RC4) work by generating a pseudorandom bitstream from the key, which is then XORed with the cleartext to produce the ciphertext. Since neither the cleartext nor the ciphertext is used to generate the pseudorandom bits, any individual bit flip in the ciphertext will only flip that same bit in the cleartext, the same as if the data were not encrypted.
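You can see this property with a few lines of Python. The stdlib has no ChaCha20, so this sketch fakes a keystream with SHA-256 in counter mode — that stand-in is my own illustration, not real crypto; the point is only the XOR structure shared by all such ciphers:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Stand-in for a stream cipher's keystream generator: SHA-256 over
    # key||counter. Illustrative only -- use a real cipher in practice.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"example key"
plaintext = b"smooth video frame data"
ciphertext = xor_bytes(plaintext, keystream(key, len(plaintext)))

# Flip one bit in transit (bit 2 of byte 5)...
damaged = bytearray(ciphertext)
damaged[5] ^= 0x04
decrypted = xor_bytes(bytes(damaged), keystream(key, len(plaintext)))

# ...and exactly that one bit is flipped in the output; nothing else changes.
assert decrypted[5] == plaintext[5] ^ 0x04
assert decrypted[:5] == plaintext[:5] and decrypted[6:] == plaintext[6:]
```

Because the keystream depends only on the key (and position), corruption never propagates — which is exactly the graceful-degradation property being asked for.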
OFDM 802.11 rates already include a significant amount of forward error correction. 802.11g uses a convolutional encoder and Viterbi decoder [1], and 802.11 HT rates (n, ac, ad) can also use Low-Density Parity-Check (LDPC) codes [2]. The problem with forward error correction is that it can't deal with too many sequential errors, so many modems use something known as an interleaver. An interleaver reorders bits as they are sent over the air, so instead of sending LSB to MSB or vice versa you send bits in a scrambled but mutually known order. That way link quality problems such as a burst of interference don't corrupt a run of consecutive bits (to the benefit of the FEC decoder). The problem with an interleaver is that it drives latency up: if you interleave in 256-bit chunks across 2048 bits, you can't decode any of those blocks until you've received all 2048 bits. So the 288-bit interleaver that 802.11 uses won't cause many problems if you're streaming, but if you are interleaving data across multiple packets you will notice a spike in video latency.
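A minimal row/column block interleaver shows the idea (this is a generic textbook interleaver, not the exact permutation 802.11 specifies — the real one adds a bit-rotation step on top):

```python
def interleave(bits, rows, cols):
    # Write row-by-row, read column-by-column.
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    # Exact inverse: write column-by-column, read row-by-row.
    assert len(bits) == rows * cols
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out

data = list(range(48))           # 48 symbols, stand-ins for bits
sent = interleave(data, 3, 16)   # 3x16 block interleaver

# A burst of 3 consecutive errors on the air...
for i in (10, 11, 12):
    sent[i] = "X"

received = deinterleave(sent, 3, 16)
errors = [i for i, v in enumerate(received) if v == "X"]
# ...lands on widely separated positions (4, 19, 35) after deinterleaving,
# which is exactly the spread-out error pattern a Viterbi decoder handles well.
```

Note the latency cost is visible in the code: `deinterleave` needs all `rows * cols` symbols before it can emit the first one.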
Error correction (or redundant parity data as est mentioned in another reply) just kicks the can down the road.
Let's say you're using a 20/40 erasure encoding. You break a piece of data up into 20 pieces and create 20 extra parity pieces. Now you only need 20 out of the 40 to recreate the original data.
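The scheme above can be sketched as a toy systematic Reed–Solomon-style code: data symbols are evaluations of a polynomial, parity symbols are extra evaluations, and any k of the n shares pin the polynomial down. This uses 4-of-8 instead of 20-of-40 to keep it readable, a prime field instead of the GF(2^8) a real implementation would use, and is educational only:

```python
P = 2**31 - 1  # a prime modulus; toy field, far slower than real erasure codes

def eval_at(points, x0):
    # Lagrange interpolation: evaluate the unique polynomial through
    # `points` at position x0, working mod P.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    # Systematic encoding: shares 0..k-1 carry the data itself,
    # shares k..n-1 are extra evaluations of the same polynomial.
    k = len(data)
    pts = list(enumerate(data))
    return [(x, data[x] if x < k else eval_at(pts, x)) for x in range(n)]

def decode(shares, k):
    # ANY k surviving shares reconstruct all k data symbols.
    pts = shares[:k]
    return [eval_at(pts, x) for x in range(k)]

data = [42, 7, 1999, 314159]    # k = 4 data symbols
shares = encode(data, 8)        # n = 8: a "4/8 erasure encoding"
survivors = [shares[1], shares[4], shares[6], shares[7]]  # any 4 of the 8
assert decode(survivors, 4) == data
```

This also makes the threshold behaviour concrete: with 3 survivors the polynomial is underdetermined and you get nothing, with 4 you get everything — all-or-nothing per encoded chunk.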
Are we encoding the encrypted data? OK, well we need at least 20 good pieces just to decode the original data. This method doesn't allow for seamless degradation, but it does tolerate some data loss in transmission (while effectively doubling the amount we're trying to push in the first place).
Let's say we're breaking up the original data, creating parity pieces, and encrypting each little piece. Then the receiver could decrypt each piece it got and use it, and if it couldn't decrypt a piece, just throw it away. This could potentially work, but parity pieces are useless unless you are trying to recreate the entire original file, which neglects the ability to degrade quality. So redundancy is more important in this scenario than parity.
But, if we make the encrypted pieces small enough, say each packet body, then that could probably work but be resource intensive. Encode/decode every packet, if successful insert into feed, else throw the packet away. This would work a lot like the existing technology just requiring some middle step of decrypting each packet body.
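That verify-or-discard loop is easy to sketch. The stdlib has no AEAD cipher, so this uses a per-packet HMAC tag as a stand-in; a real system would seal each packet with AES-GCM or ChaCha20-Poly1305 so the body is also encrypted, but the decode-if-valid-else-drop flow is the same:

```python
import hmac, hashlib

KEY = b"session key"  # placeholder; a real system negotiates this

def seal(packet_body: bytes) -> bytes:
    # Append an 8-byte authentication tag. With a real AEAD cipher the
    # body would be encrypted too; HMAC alone just shows the flow.
    tag = hmac.new(KEY, packet_body, hashlib.sha256).digest()[:8]
    return packet_body + tag

def open_packet(sealed: bytes):
    body, tag = sealed[:-8], sealed[-8:]
    expected = hmac.new(KEY, body, hashlib.sha256).digest()[:8]
    # Constant-time compare; None tells the caller to drop the packet.
    return body if hmac.compare_digest(tag, expected) else None

packets = [seal(b"frame-%d" % i) for i in range(4)]
packets[2] = packets[2][:3] + b"\x00" + packets[2][4:]  # corrupt one in flight

# Decode every packet; if successful insert into the feed, else throw it away.
feed = [body for p in packets if (body := open_packet(p)) is not None]
# feed now holds frame-0, frame-1 and frame-3; the corrupted packet is gone.
```

The per-packet cost is one MAC (or AEAD) verification, which modern hardware does at line rate — so "resource intensive" is true in principle but modest in practice.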
> But, if we make the encrypted pieces small enough, say each packet body, then that could probably work
So in other words you can use what's basically the default mode of encryption, CBC. Each encrypted byte only depends on at most the adjacent 32 bytes, so you can let errors through and they affect a couple of pixels instead of a single pixel.
Let's assume CBC with AES. It encrypts in 16-byte blocks. If you slightly corrupt one ciphertext block, that block will decrypt to complete garbage, the corresponding bits of the following block will be flipped, and everything else will be fine.
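The stdlib has no AES, so this sketch wires CBC chaining around a deliberately trivial "block cipher" (XOR with a key — not secure, no diffusion) purely to make the propagation pattern observable. With real AES the corrupted block would be fully garbled rather than one bit off, but which blocks are affected is identical:

```python
BLOCK = 4
KEY = bytes([0x5A, 0xC3, 0x3C, 0xA5])

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy "block cipher": XOR with the key. Exists only so the CBC chaining
# structure is visible; a real cipher (AES) adds diffusion on top.
E = D = lambda block: xor(block, KEY)

def cbc_encrypt(plaintext, iv):
    prev, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        prev = E(xor(plaintext[i:i+BLOCK], prev))   # C_i = E(P_i ^ C_{i-1})
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(ciphertext, iv):
    prev, out = iv, []
    for i in range(0, len(ciphertext), BLOCK):
        c = ciphertext[i:i+BLOCK]
        out.append(xor(D(c), prev))                 # P_i = D(C_i) ^ C_{i-1}
        prev = c
    return b"".join(out)

iv = b"\x00" * BLOCK
plaintext = b"AAAABBBBCCCCDDDD"            # four blocks
ct = bytearray(cbc_encrypt(plaintext, iv))
ct[5] ^= 0x01                              # flip one bit in ciphertext block 1
pt = cbc_decrypt(bytes(ct), iv)

assert pt[0:4] == b"AAAA" and pt[12:16] == b"DDDD"  # blocks 0 and 3 untouched
assert pt[4:8] != b"BBBB"   # the corrupted block (fully garbled with real AES)
assert pt[8:12] != b"CCCC"  # next block gets the same bit position flipped
```

The `P_i = D(C_i) ^ C_{i-1}` line is the whole story: block i+1's plaintext XORs in the raw ciphertext of block i, so the error stops there.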
There are modes of encryption, such as PCBC, where corrupting one bit will corrupt all subsequent bits.
There are also modes like CTR (which is what GCM uses under the hood, if you skip its authentication check — with the check, GCM rejects the whole message) and stream ciphers like ChaCha20 where one corrupted bit will not corrupt any other bits at all.
In short: There are many options, and half of them are suitable for this.
How would you implement analog parity? Parity doesn't translate well as a concept into the analog space.
You can take an analog signal and "quantize" it into sixteen possible values so that you can apply a parity algorithm that returns sensible results and doesn't fail with expected noise, but you're digitizing the signal.
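A minimal version of that quantize-then-parity step, assuming samples normalized to [-1.0, 1.0] and one even-parity bit per 4-bit level (both assumptions mine, for illustration):

```python
def quantize(sample: float) -> int:
    # Map an "analog" sample in [-1.0, 1.0] onto sixteen levels (0..15).
    level = int((sample + 1.0) / 2.0 * 16)
    return min(max(level, 0), 15)

def with_parity(level: int) -> int:
    # One even-parity bit over the 4 data bits, carried in bit 4.
    parity = bin(level).count("1") % 2
    return level | (parity << 4)

def check(word: int):
    # Returns the level if parity holds, None if a bit got flipped.
    ok = bin(word).count("1") % 2 == 0
    return (word & 0x0F) if ok else None

samples = [0.0, 0.5, -0.99, 1.0]
words = [with_parity(quantize(s)) for s in samples]
words[1] ^= 0b0010                 # a single bit error in one transmitted word
decoded = [check(w) for w in words]
# decoded == [8, None, 0, 15]: the damaged word is detected and rejected
```

Which is the point of the comment: the moment parity becomes computable, the signal is already digital.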
I don't understand what analog has to do with it? The video is digitized first, then error-correction (parity?) information added before transmission, so all parity would be related to the digital bitstream -- unless I missed something?
Even then I think it would still be all-or-nothing. That may increase the chances of "all" over "nothing" but not allow graceful degradation of the video.
Hierarchical coding: send a low quality version of the video with high redundancy, and a higher quality refinement of the video with lesser redundancy.
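A toy version of that layering, with the top 4 bits of each pixel as the base layer and 3x repetition standing in for "high redundancy" (real systems would use proper FEC and a real scalable codec — this split is mine, for illustration):

```python
def encode_layers(frame_pixels):
    # Base layer: coarse version (top 4 bits per pixel), sent 3 times.
    # Enhancement layer: the remaining detail bits, sent once.
    base = [p >> 4 for p in frame_pixels]
    enhancement = [p & 0x0F for p in frame_pixels]
    return [base, base, base], enhancement

def decode(base_copies, enhancement):
    received = [b for b in base_copies if b is not None]
    if not received:
        return None                       # nothing usable at all
    base = received[0]
    if enhancement is None:
        return [b << 4 for b in base]     # degraded: coarse video only
    return [(b << 4) | e for b, e in zip(base, enhancement)]  # full quality

frame = [200, 17, 99, 255]
base_copies, enh = encode_layers(frame)

# Good link: everything arrives -> exact frame back.
assert decode(base_copies, enh) == frame
# Bad link: enhancement and two base copies lost -> coarse but watchable.
assert decode([None, base_copies[1], None], None) == [192, 16, 96, 240]
```

The asymmetric redundancy is what buys graceful degradation: the cheap-to-protect base layer almost always survives, and quality scales with whatever enhancement data happens to get through.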