“We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input.”
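The abstract describes a network mapping WiFi phase/amplitude to per-region UV coordinates. As a rough illustration of the tensor shapes involved (this is NOT the paper's architecture, just a toy stand-in with random weights and made-up antenna/subcarrier counts):

```python
import numpy as np

# Illustrative sketch only -- not the paper's model. A hypothetical CSI
# sample from 3 Tx x 3 Rx antenna pairs x 30 subcarriers, stored as
# complex numbers so it carries both amplitude and phase.
rng = np.random.default_rng(0)
csi = rng.standard_normal((3, 3, 30)) + 1j * rng.standard_normal((3, 3, 30))

# Split the complex signal into the two real-valued features the
# abstract mentions: amplitude and phase.
features = np.concatenate([np.abs(csi).ravel(), np.angle(csi).ravel()])  # (540,)

# Tiny stand-in "network": one random linear layer plus a sigmoid,
# emitting a (24, H, W, 2) tensor -- one UV coordinate pair per pixel
# for each of the 24 body regions, squashed into [0, 1].
H = W = 8  # toy resolution
weights = rng.standard_normal((features.size, 24 * H * W * 2)) * 0.01
uv = 1.0 / (1.0 + np.exp(-(features @ weights)))  # sigmoid keeps UV in [0, 1]
uv = uv.reshape(24, H, W, 2)

print(uv.shape)  # (24, 8, 8, 2)
```

The real system presumably uses a much deeper encoder-decoder trained against image-derived DensePose labels; the point here is just that the input is a modest-sized amplitude/phase feature vector and the output is a dense per-region UV map.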
I wonder if it’s real-time? If so, this would be great for VR.
I highly doubt it; here is something that might interest you on the topic.
Interesting! It could theoretically act like a Kinect if it were advanced enough, and you wouldn’t need any additional hardware.
This uses a bunch of additional hardware, much more complex than the Kinect camera.