Hacker News

The Raspberry Pi doesn't have any "tensor cores" at all. According to Wikipedia, it does have a "Broadcom VideoCore IV" GPU, but I don't think that processor is ever used for deep learning. So if you did inference on the Pi, it would have to run on the CPU, and CPU inference is slower than GPU inference even on a meaty desktop, never mind the low-powered CPU on the Pi.

That is all academic, though, as the whole point of the article is that the processing isn't done on the Pi but on a remote server. In that case the difference (if there even is one; I don't see a frame rate mentioned in the article) comes down to the relative power of the respective GPUs, as you're alluding to, or to the fact that the article's setup has to stream the image frames over the network (it doesn't even seem to compress them), whereas the parent comment's idea processes them locally.
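To make the streaming cost concrete, here's a quick back-of-the-envelope sketch. The resolution, frame rate, and compression ratio are my assumptions for illustration, not figures from the article:

```python
# Rough cost of streaming raw, uncompressed frames to a remote server.
# All numbers below are assumptions, not taken from the article.

WIDTH, HEIGHT, CHANNELS = 640, 480, 3  # assumed camera resolution, 8-bit RGB
FPS = 30                               # assumed frame rate

bytes_per_frame = WIDTH * HEIGHT * CHANNELS        # 921,600 bytes per frame
raw_mbps = bytes_per_frame * FPS * 8 / 1_000_000   # megabits per second

print(f"raw frame: {bytes_per_frame} bytes")
print(f"raw stream at {FPS} fps: {raw_mbps:.1f} Mbit/s")

# JPEG typically shrinks a natural-image frame by roughly 10-20x
# (assumed ratio), which is why compressing before sending matters
# on a Pi's network link.
jpeg_mbps = raw_mbps / 15
print(f"with ~15x JPEG compression: {jpeg_mbps:.1f} Mbit/s")
```

At those assumed settings the raw stream is over 200 Mbit/s, which would saturate the older Pis' 100 Mbit Ethernet on its own; compression brings it down to something a home network handles easily.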



You know what, you're right and I'm wrong.

I went back and looked more carefully, and I must have read "on the edge", then "testing it locally", then "integrating tensorflow" and assumed they moved the processing onto the device. But it doesn't actually run on the edge at all. I think I need to learn to read.


As I said in another comment here, I and plenty of other commenters misread it the same way. I do find it funny that they took "on the edge" to mean "anywhere on my local network" rather than on the actual device capturing the data.




