Question

wxy123 on Fri, 04 Oct 2013 03:08:58


I want to use my own camera pose instead of the one calculated by the Fusion toolkit. My problem is that I apparently cannot provide a transform that is compatible with the camera space. When m_paramsCurrent.m_bMirrorDepthFrame is equal to 0, the real-world camera translations do not add up correctly in volume space; apparently, the translations are integrated into the volume space as left-handed. When m_paramsCurrent.m_bMirrorDepthFrame is equal to 1, I can make the real-world translations integrate correctly in the volume space, but not the rotations. Can you provide more information about how to calculate your own camera pose so that it integrates correctly into the volume space?

 

m_worldToCameraTransform = m_MyOwnTransform;

hr = m_pVolume->IntegrateFrame(
    m_pDepthFloatImage,
    integrateColor ? m_pResampledColorImageDepthAligned : nullptr,
    m_cMaxIntegrationWeight,
    NUI_FUSION_DEFAULT_COLOR_INTEGRATION_OF_ALL_ANGLES,
    &m_worldToCameraTransform);
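
For context, this is roughly how I build m_MyOwnTransform from the measured pose. It is only a sketch of my own assumption about the layout of Matrix4 (rotation in the upper-left 3x3, translation in M41/M42/M43 in meters, as in the KinectFusionExplorer sample); PoseToMatrix4, R and t are my own names:

// Sketch: pack a measured rotation R (3x3) and translation t (meters) into
// the SDK Matrix4, assuming the sample's row-vector convention with the
// translation in the fourth row. Whether this should be the world-to-camera
// or the camera-to-world direction is exactly what I am unsure about.
Matrix4 PoseToMatrix4(const float R[3][3], const float t[3])
{
    Matrix4 m;
    SetIdentityMatrix(m);   // KinectFusionHelper utility

    m.M11 = R[0][0]; m.M12 = R[0][1]; m.M13 = R[0][2];
    m.M21 = R[1][0]; m.M22 = R[1][1]; m.M23 = R[1][2];
    m.M31 = R[2][0]; m.M32 = R[2][1]; m.M33 = R[2][2];

    m.M41 = t[0];
    m.M42 = t[1];
    m.M43 = t[2];

    return m;
}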






Replies

Carmine Si - MSFT on Fri, 04 Oct 2013 23:21:28


We don't discuss internals of the APIs as they can change without notice. How are you calculating your worldToCameraTransform? Which APIs are you calling to create it?

The Kinect Explorer code demonstrates the "Use Camera Pose Finder" logic for finding the candidate pose that costs the least alignment energy; see KinectFusionProcessor::FindCameraPoseAlignPointClouds. You will want to look at the INuiFusionMatchCandidates and INuiFusionCameraPoseFinder interfaces:

http://msdn.microsoft.com/en-us/library/microsoft.kinect.nuikinectfusioncameraposefinder.inuifusioncameraposefinder.aspx
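
For later readers, a minimal sketch of that flow, assuming an already-created INuiFusionCameraPoseFinder (m_pCameraPoseFinder, as in the sample) and the method names as they appear in the documentation linked above; verify them against the SDK headers:

INuiFusionMatchCandidates *pMatchCandidates = nullptr;

// Ask the pose finder for stored poses whose depth/color features resemble
// the current frame.
HRESULT hr = m_pCameraPoseFinder->FindCameraPose(
    m_pDepthFloatImage,
    m_pResampledColorImageDepthAligned,
    &pMatchCandidates);

if (SUCCEEDED(hr) && nullptr != pMatchCandidates)
{
    // Each candidate is a Matrix4 camera pose; the sample evaluates the
    // candidates with AlignPointClouds and keeps the one with the lowest
    // alignment energy before integrating the frame.
    unsigned int poseCount = pMatchCandidates->MatchPoseCount();
    const Matrix4 *pCandidatePoses = nullptr;
    hr = pMatchCandidates->GetMatchPoses(&pCandidatePoses);

    // ... evaluate candidates, pick the best, then call IntegrateFrame ...

    pMatchCandidates->Release();
}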


wxy123 on Sun, 06 Oct 2013 18:31:25


I want to use my own camera pose definition because the objects I am trying to reconstruct do not have enough depth features, so depth tracking fails.

I do not use an API to calculate worldToCameraTransform. I physically measure the current position and orientation of the camera and calculate the corresponding rotation-translation displacement matrix, which I feed to the IntegrateFrame function as m_MyOwnTransform instead of m_worldToCameraTransform.

Since Microsoft gives access to the IntegrateFrame function, I thought it might be possible to get information about how to define the m_worldToCameraTransform matrix parameter. Obviously, m_bMirrorDepthFrame has an impact on the type of coordinate system used in the volume space. If Microsoft doesn't want to share how the worldToCameraTransform matrix should be defined, I can make it work by fiddling with the parameters, but I would prefer to understand mathematically what I'm doing.
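
One experiment that may help isolate the handedness issue (entirely my own assumption, not documented SDK behaviour): conjugate the measured pose by a mirror about the X axis before integrating, i.e. M' = S M S with S = diag(-1, 1, 1, 1), and check whether translations and rotations then behave consistently for both values of m_bMirrorDepthFrame.

// Hypothetical handedness conversion: mirror the pose about the X axis by
// conjugating with S = diag(-1, 1, 1, 1). This flips exactly the entries
// that mix the X axis with the other axes.
Matrix4 MirrorAboutX(const Matrix4 &m)
{
    Matrix4 r = m;
    r.M12 = -m.M12; r.M13 = -m.M13; r.M14 = -m.M14;
    r.M21 = -m.M21;
    r.M31 = -m.M31;
    r.M41 = -m.M41;
    return r;
}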

Carmine Si - MSFT on Tue, 08 Oct 2013 18:26:16


The calculations are made from the information we get from the Kinect sensor. If you are not using the Kinect sensor to process depth/color data, then that is not supported. As stated above, we provide the necessary functions to get the orientation/matrix information for the frames you intend to integrate. If you are not using those and are doing your own calculations, you will have to determine this on your own. The re-localization logic will ignore the data if the delta change from the last frame is too big to consider. You may need to transpose some values if you think they are otherwise correct.
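
If you want to try the transpose suggestion, a minimal sketch (assuming the rotation sits in the upper-left 3x3 of Matrix4; note that fully inverting a rigid transform would also require adjusting the translation):

// Transpose only the 3x3 rotation block, leaving the translation terms alone.
// For a pure rotation the transpose is the inverse, so this is a quick way to
// check whether a pose was built camera-to-world instead of world-to-camera.
Matrix4 TransposeRotation(const Matrix4 &m)
{
    Matrix4 r = m;
    r.M12 = m.M21; r.M21 = m.M12;
    r.M13 = m.M31; r.M31 = m.M13;
    r.M23 = m.M32; r.M32 = m.M23;
    return r;
}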

MrElop on Mon, 14 Dec 2015 07:19:28


Did you get this working after all? I'm trying to use ResetReconstruction() with transformation matrices provided by the KinectFusionHelper functions. A simple thing like rotating the camera 90 degrees around an axis seems to be difficult.
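
For reference, a minimal sketch of that kind of reset, assuming ResetReconstruction takes an initial world-to-camera transform plus an optional world-to-volume transform (nullptr keeps the default), and using the same row-vector convention as the sketches above; the sign of the rotation may need flipping depending on the handedness questions discussed in this thread:

// Build a 90-degree rotation about the Y axis as the initial camera pose.
Matrix4 worldToCamera;
SetIdentityMatrix(worldToCamera);                             // KinectFusionHelper utility

worldToCamera.M11 =  0.0f;  worldToCamera.M13 = -1.0f;        // cos(90°), -sin(90°)
worldToCamera.M31 =  1.0f;  worldToCamera.M33 =  0.0f;        // sin(90°),  cos(90°)

// Reset the volume so that integration starts from this camera pose;
// nullptr keeps the current world-to-volume transform.
HRESULT hr = m_pVolume->ResetReconstruction(&worldToCamera, nullptr);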