Karlesi on Tue, 12 Jan 2016 21:32:25
Hey, I am currently working on my Master's thesis in Geoinformatics. The goal is to scan objects submerged underwater using both versions of Microsoft's Kinect sensor.
I conducted several experiments and found that the water surface is detected as a kind of solid boundary: once the Kinect gets closer than ~50 cm to the water, no points below the surface are returned at all (with both Kinect versions).
I also mounted the Kinects in an aquarium and submerged it a few cm below the water surface --> also no results!
Did you encounter this problem when you worked with the Kinect, or do you have any idea why I am having these issues? Is there perhaps a fix that could keep the Kinect from treating water as a solid border at close range?
It would be great if you could help me here!
Phil Noonan on Wed, 13 Jan 2016 07:45:49
I've been fairly rough with my Kinects in experiments and teardowns but I never thought of drowning them!
If you take a look at the IR video stream from the v1/v2, you may see the problem when scanning water with near-infrared light. The near-infrared wavelength the Kinect uses is absorbed by water far more strongly than visible light, and the water surface reflects a good deal of the signal straight back anyway.
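To get a feel for how badly NIR fares underwater, here is a rough Beer-Lambert sketch. The absorption coefficients are assumptions I've plugged in for illustration (water absorbs on the order of a few per metre near 850 nm, versus a tiny fraction of that for green light); the exact numbers depend on wavelength and water quality, so treat this as an order-of-magnitude argument, not measured data.

```python
import math

# Assumed absorption coefficients (1/m) -- illustrative values only:
MU_NIR = 4.0       # water near ~850 nm (Kinect's NIR band), rough figure
MU_VISIBLE = 0.05  # water for green visible light, rough figure

def surviving_fraction(mu_per_m: float, one_way_distance_m: float) -> float:
    """Beer-Lambert: fraction of light left after the round trip
    emitter -> target -> sensor through water."""
    path = 2.0 * one_way_distance_m
    return math.exp(-mu_per_m * path)

for d in (0.1, 0.5, 1.0):
    nir = surviving_fraction(MU_NIR, d)
    vis = surviving_fraction(MU_VISIBLE, d)
    print(f"target {d:.1f} m underwater: NIR {nir:.2%} vs visible {vis:.2%}")
```

Even at half a metre, only a couple of percent of the NIR round trip survives under these assumptions, while visible light loses almost nothing, which is why a passive optical stereo camera has a much better chance underwater.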
Do you see anything in the depth data from submerged objects at any distance?
I'm not sure a Kinect is the best way to scan small objects underwater. I would suggest a stereo optical camera like the ZED cam https://www.stereolabs.com/ or perhaps the Intel R200 http://click.intel.com/intel-realsense-developer-kit-r200.html (I think it has a passive (non-IR) stereo mode for depth in addition to its active IR-based structured-light mode, but don't quote me on that).