Does this allow encryption of the video stream? Graceful signal degradation is a great feature, but I don't see how that would work when there is encryption.
A block cipher doesn't make errors all that big. Look at the two-layer error correction on a CD: that would play extremely well with a block cipher if you wanted it to.
Of course it is possible to encrypt an unreliable unidirectional bitstream. (Tons of systems do this, eg. satellite video links.)
Contrary to what other commenters suggest, a proper stream cipher like ChaCha20 isn't even needed. You could just use CTR mode, which turns any block cipher (like AES) into a stream cipher and keeps a ciphertext bit error from corrupting anything beyond that same plaintext bit. Also, transmit the counter every once in a while so that dropped packets don't prevent you from decrypting subsequent packets.
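A minimal sketch of the idea, with a SHA-256 hash standing in for the AES-CTR keystream (toy code for illustration, not real crypto) and, as a simplifying assumption, the counter carried in every packet rather than only occasionally:

```python
import hashlib

KEY = b"example-key"  # hypothetical pre-shared key

def keystream_block(key: bytes, counter: int) -> bytes:
    # Toy stand-in for one CTR keystream block: hash(key || counter).
    # A real system would run AES-CTR or ChaCha20 here.
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def encrypt_packet(counter: int, plaintext: bytes) -> tuple[int, bytes]:
    ks = keystream_block(KEY, counter)[: len(plaintext)]
    return counter, bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt_packet(counter: int, ciphertext: bytes) -> bytes:
    ks = keystream_block(KEY, counter)[: len(ciphertext)]
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

packets = [encrypt_packet(i, f"frame-{i}".encode()) for i in range(4)]
del packets[1]  # packet 1 is lost in transit
# Each surviving packet still decrypts, because it carries its own counter.
recovered = [decrypt_packet(ctr, ct) for ctr, ct in packets]
print(recovered)  # [b'frame-0', b'frame-2', b'frame-3']
```

Because the keystream depends only on the key and the counter, losing a packet costs you only that packet's data, nothing downstream.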
Encryption won't work for this scenario as it reintroduces the problem of all or no data. Degraded encrypted data can't be decrypted successfully (yet?)
A stream cipher like ChaCha20 would work. Most stream ciphers (including ChaCha20 and RC4) work by generating a pseudorandom bitstream based on the key, which is then XOR'd with the cleartext to produce the ciphertext. Since neither the cleartext nor the ciphertext is used to generate the pseudorandom bits, any individual bit flip in the ciphertext will only result in that bit being flipped in the cleartext, the same as if the data were not encrypted.
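That XOR property is easy to demonstrate with a toy keystream generator (a hash-based stand-in, not an actual ChaCha20 implementation):

```python
import hashlib

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    # Toy keystream standing in for ChaCha20/RC4: the keystream depends
    # only on the key and the position, never on the data itself.
    ks = b""
    counter = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, ks))

key = b"demo-key"
plain = b"hello, wifibroadcast"
sent = bytearray(toy_stream_cipher(key, plain))
sent[3] ^= 0x01                                 # one bit flipped in transit
received = toy_stream_cipher(key, bytes(sent))  # XOR is its own inverse
diff = [i for i in range(len(plain)) if plain[i] != received[i]]
print(diff)  # [3] -- only the flipped byte differs, by exactly one bit
```

The decrypted output differs from the original in exactly the bit that was flipped on the wire.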
OFDM 802.11 rates already have a significant amount of forward error correction. 802.11g uses a convolutional encoder and Viterbi decoder [1], and 802.11 HT rates (n, ac, ad) can also use Low-Density Parity-Check (LDPC) codes [2]. The problem with forward error correction is that it can't deal with too many sequential errors, so many modems use something known as an interleaver. An interleaver reorders bits as they are sent over the air: instead of sending LSB to MSB or vice versa, you send bits in a scrambled but mutually known order. That way a burst of interference doesn't corrupt a contiguous run of bits (to the benefit of the FEC decoder). The downside of an interleaver is that it increases latency: if you interleave across 2048 bits in 256-bit blocks, you can't decode a block of data until you've received all of its bits. So the 288-bit interleaver that 802.11 uses won't cause many problems if you're streaming, but if you interleave data across multiple packets you will notice a spike in video latency.
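The burst-spreading idea can be sketched with a simple row/column block interleaver over a 288-bit block (802.11's actual permutation is more elaborate, so treat this as an illustration only):

```python
def interleave(bits, rows=16, cols=18):
    # Write row-by-row, read column-by-column: a basic block interleaver
    # over a 288-bit block, matching 802.11's interleaver depth.
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows=16, cols=18):
    out = [0] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out

block = list(range(288))          # label each bit by its position
sent = interleave(block)
sent[10:14] = [-1, -1, -1, -1]    # a 4-bit burst error on the air
received = deinterleave(sent)
damaged = [i for i, b in enumerate(received) if b == -1]
print(damaged)  # [180, 198, 216, 234] -- the burst lands on widely spaced bits
```

After deinterleaving, the 4-bit burst has been spread 18 positions apart, which is exactly the pattern a convolutional or LDPC decoder handles well.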
Error correction (or redundant parity data as est mentioned in another reply) just kicks the can down the road.
Let's say you're using a 20/40 erasure encoding. You break a piece of data up into 20 pieces and create 20 extra parity pieces. Now you only need 20 out of the 40 to recreate the original data.
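A real 20/40 scheme would use something like Reed-Solomon; the smallest toy analog of the idea is k data pieces plus one XOR parity piece, which survives any single erasure (sketch only, hypothetical helper names):

```python
from functools import reduce

def split_with_parity(data: bytes, k: int = 3) -> list:
    # Pad, split into k equal data pieces, append one XOR parity piece.
    # (A real 20/40 Reed-Solomon code would survive 20 losses, not 1.)
    size = -(-len(data) // k)  # ceiling division
    pieces = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), pieces)
    return pieces + [parity]

def recover(pieces: list) -> list:
    # Any single missing piece is the XOR of all the others.
    missing = pieces.index(None)
    present = [p for p in pieces if p is not None]
    pieces[missing] = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), present)
    return pieces[:-1]  # drop the parity piece, keep the data pieces

shards = split_with_parity(b"graceful degradation")
shards[1] = None  # one piece lost in transit
data = b"".join(recover(shards)).rstrip(b"\0")
print(data)  # b'graceful degradation'
```

The trade-off the thread describes shows up here too: the parity piece is only useful for exact reconstruction; it contributes nothing if you want a degraded-but-usable partial stream.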
Are we encoding the encrypted data? Then we need at least 20 good pieces just to decode the original data. This method doesn't allow for seamless degradation, but it does tolerate some loss in transmission (while effectively doubling the amount of data we're trying to push in the first place).
Let's say we're breaking up the original data, creating parity pieces, and encrypting each little piece. Then the receiver could decrypt each piece it got and use it, and just throw away any piece it couldn't decrypt. This could potentially work, but parity pieces are only useful for recreating the original file exactly, which ignores the ability to degrade quality. So redundancy matters more than parity in this scenario.
But if we make the encrypted pieces small enough, say each packet body, then that could probably work, though it would be resource intensive: encrypt/decrypt every packet, and if decryption succeeds, insert it into the feed, else throw the packet away. This would work a lot like the existing technology, just with a middle step of decrypting each packet body.
> But, if we make the encrypted pieces small enough, say each packet body, then that could probably work
So in other words you can use what's basically the default mode of encryption, CBC. Each decrypted byte depends only on the adjacent 32 ciphertext bytes (its own block plus the previous one), so you can allow errors through and they affect a couple pixels instead of a single pixel.
Let's assume CBC with AES, which encrypts in 16-byte blocks. If you slightly corrupt one ciphertext block, that block decrypts to garbage and the corresponding bits in the next block get flipped, but everything else will be fine.
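This propagation behavior is easy to see with a toy cipher. The sketch below uses a hypothetical 4-round Feistel permutation (a stand-in for AES, not secure) under CBC and flips one ciphertext bit:

```python
import hashlib

BLOCK = 16

def _f(key: bytes, rnd: int, half: bytes) -> bytes:
    # Feistel round function: hash the key, round number, and half-block.
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:8]

def feistel_encrypt(key: bytes, block: bytes) -> bytes:
    # Toy 4-round Feistel permutation over 16-byte blocks (AES stand-in).
    l, r = block[:8], block[8:]
    for rnd in range(4):
        l, r = r, bytes(a ^ b for a, b in zip(l, _f(key, rnd, r)))
    return l + r

def feistel_decrypt(key: bytes, block: bytes) -> bytes:
    l, r = block[:8], block[8:]
    for rnd in reversed(range(4)):
        l, r = bytes(a ^ b for a, b in zip(r, _f(key, rnd, l))), l
    return l + r

def cbc_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(data), BLOCK):
        blk = bytes(a ^ b for a, b in zip(data[i:i + BLOCK], prev))
        prev = feistel_encrypt(key, blk)
        out += prev
    return out

def cbc_decrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(data), BLOCK):
        blk = data[i:i + BLOCK]
        out += bytes(a ^ b for a, b in zip(feistel_decrypt(key, blk), prev))
        prev = blk
    return out

key, iv = b"k", b"\0" * BLOCK
plain = bytes(64)  # four all-zero blocks make the damage easy to see
ct = bytearray(cbc_encrypt(key, iv, plain))
ct[16] ^= 0x01     # flip one bit in ciphertext block 1
out = cbc_decrypt(key, iv, bytes(ct))
# Block 1 is fully garbled, block 2 has exactly the mirrored one-bit flip,
# blocks 0 and 3 are untouched.
print([out[i:i + 16] != plain[i:i + 16] for i in range(0, 64, 16)])
```

The damage stays local: one 16-byte block of garbage, one flipped bit in the next block, and the rest of the stream decrypts normally.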
There are modes of encryption where losing one bit will corrupt all subsequent bits.
There are also modes like GCM (if you're willing to ignore the failed authentication tag) or stream ciphers like ChaCha20 where one corrupted bit will not corrupt any other bits at all.
In short: There are many options, and half of them are suitable for this.
How would you implement analog parity? Parity doesn't translate well as a concept into the analog space.
You can take an analog signal and "quantize" it into sixteen possible values so that you can apply a parity algorithm that returns sensible results and doesn't fail with expected noise, but you're digitizing the signal.
I don't understand what analog has to do with it? The video is digitized first, then error-correction (parity?) information added before transmission, so all parity would be related to the digital bitstream -- unless I missed something?
Even then I think it would still be all-or-nothing. That may increase the chances of "all" over "nothing" but not allow graceful degradation of the video.
Hierarchical coding: send a low quality version of the video with high redundancy, and a higher quality refinement of the video with lesser redundancy.
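A toy sketch of that layering, with a deterministic lossy channel and per-layer redundancy (hypothetical packet names, no real codec involved):

```python
def send(packets, drop_pattern):
    # Deterministic toy channel: packet i is lost when drop_pattern[i % n] is 1.
    n = len(drop_pattern)
    return [None if drop_pattern[i % n] else p for i, p in enumerate(packets)]

drop = [1, 0, 0]                            # channel losing every third packet
base = [f"base-{i}" for i in range(4)]      # low-quality layer
refine = [f"ref-{i}" for i in range(4)]     # higher-quality refinement layer

base_rx = send(base + base, drop)           # base layer sent twice (high redundancy)
refine_rx = send(refine, drop)              # refinement sent once

recovered_base = [base_rx[i] or base_rx[i + 4] for i in range(4)]
print(recovered_base)  # ['base-0', 'base-1', 'base-2', 'base-3']
print(refine_rx)       # [None, 'ref-1', 'ref-2', None]
```

The heavily protected base layer survives intact, so the receiver always has a watchable picture, while lost refinement pieces just mean lower quality for those frames.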
This is an awesome project. I'm currently building a fixed-wing drone with dual wifibroadcast links: downstream for (stereo) video and upstream for control.
Monitor mode is a MAC level concept, and doesn't alter any of the RF characteristics that would make operating this device illegal. (This is not legal advice. Consult with an attorney, etc etc)
Cool stuff. Good to see them recognize and use the advantages of analog. I could tell them that digital transmission of data has always been done with analog circuits, but analog's invisible ubiquity is beside the point. ;)
Hell yes! Or even hardware acceleration for starting in random parts of videos on my PC so they don't break up and delay as often. This is the digital era. It's supposed to work more reliably than old, analog tech. I used to be able to stop my rewind or fast-forward on VCR within a few seconds of the target moment with clean play. Still not reliable with digital streaming. (sighs)
Note: I do like how, even with errors, it still takes under 20 seconds for me to get to any random part of the vid. That's an improvement over the rewind/fast-forward speeds. :)
whereas analog media stored full resolution versions of every frame.
Seeking is much easier when you can pick any random location and have all the data right there ready to use, and you don't have to backtrack and try to recreate things from previous frames.
Isn't a lot of that related to how the video is encoded/decoded though? I have only a basic understanding of how it works through working with video (not developing or tweaking actual codecs) but when I'm doing VJ-type stuff in Resolume (for example) I use a setup where all clips are encoded with a keyframe every frame. It makes for some huge clips but I can scrub or jump around or manipulate clips with no delay at all.
On the opposite end you've got streaming video or even many common formats and settings for ripped/downloaded video. Those do a keyframe every (x) frames and then between keyframes the file only contains the data for changes to the keyframe. This gets you smaller files so your downloads are quicker and your streams look nicer while using less bandwidth.
I know there's a lot more to it but at least with locally stored files, I thought it mostly had to do with keyframes. With streaming I'd imagine it's related more to how the stream is managed to download in chunks and maximize quality versus bandwidth (rather than focusing on quick scrubbing or quick access to random points of the video).
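The keyframe effect on seeking is really just arithmetic: the decoder has to start at the previous keyframe and roll forward through the deltas. A small sketch (hypothetical function, idealized model ignoring B-frames and container details):

```python
def seek_cost(target_frame: int, keyframe_interval: int) -> int:
    # Frames that must be decoded to display target_frame: decoding starts
    # at the nearest preceding keyframe and works forward.
    keyframe = (target_frame // keyframe_interval) * keyframe_interval
    return target_frame - keyframe + 1

# All-intra (keyframe every frame, the VJ setup): always decode exactly 1 frame.
print([seek_cost(f, 1) for f in (0, 100, 250)])    # [1, 1, 1]
# Typical streaming GOP of 250 frames: a seek can cost hundreds of decodes.
print([seek_cost(f, 250) for f in (0, 100, 249)])  # [1, 101, 250]
```

That's the whole trade-off: all-intra files are huge but every frame is a valid entry point, while long-GOP files are small but make random access expensive.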
I'm interested in this stuff so if there's something I'm missing or flat-out wrong about, feel free to educate me.
Oh, I'm not a subject matter expert on video compression. I just know the difference between using a general-purpose CPU and the ASIC version of decoders is huge. It's why all these weak, low-power SoCs can do 720p/1080p etc. That's the hardware acceleration.
I doubt it's designed for the random access I'm describing, though. So, one designed for that might solve the problem. It might also need to be integrated with a good storage subsystem if that causes any difficulties. Cool, though, that yours lets people jump around at will. :)
> Note: I do like how, even with errors, it still takes under 20 seconds for me to get to any random part of the vid. That's an improvement over the rewind/fast-forward speeds. :)
Don't forget not having to rewind after watching/before watching. Glad that whole class of annoyance is gone, much more annoying than poor random access :)
"Note: I do like how, even with errors, it still takes under 20 seconds for me to get to any random part of the vid. That's an improvement over the rewind/fast-forward speeds. :)"
I know, man. It's a semi-joking, semi-serious post where I point out that people step from one mental model (reliable/p2p/digital) to another (lossy/continuous/analog) to come up with a good solution to a problem. In this case, kind of reinventing one, but with a medium that has plenty of cheap HW supporting it. It's technically digital, but in its attributes it's most like analog for the average person's experience.
Not electricity but analog circuits. Many people think they have one or the other when they actually have a mix of both, leaning heavily toward digital. So many misconceptions about the topic.