# Mapping the cubemap for realtime reflections - samrg123/JniTeapot GitHub Wiki
This page explains how to write the camera texture to the cubemap faces each frame so that the cubemap can be used for real-time reflections. It uses the camera texture, the camera view matrix, and the cubemap object.
Our cubemap is written every frame in jniTeapot.cpp. It gets the camera image from the ARWrapper and calls a function in GlSkybox to update the cubemap:

```cpp
skybox.UpdateTextureEGL(ARWrapper::Get()->EglCameraTexture(), &glContext);
```
The cubemap itself is written to in GlSkybox.h. It is stored as 6 images, one per face, which can be accessed either by name or by index:

```cpp
union {
    struct {
        const char *posX,
                   *negX,
                   *posY,
                   *negY,
                   *posZ,
                   *negZ;
    };
    const char* images[6];
};
```
Then, each time we get a new image from the camera, we attempt to write the relevant part of the camera background to all 6 faces. This is done with vertex and fragment shaders in GlSkybox.
The vertex shader calculates which position of the camera image each face should sample. This gets a bit tricky, since a cubemap is 3D while each face of the cubemap (and the camera image) is 2D. Thus, we have to convert each position on each face of the cubemap into a "fakeWorldPos": the position of that cubemap vertex in 3D space. This conversion is different for each face. We used the cubemap face orientation diagram from the Khronos OpenGL wiki to convert these 2D positions to 3D positions: https://www.khronos.org/opengl/wiki/Cubemap_Texture
In our vertex shader, we pass in which face we are rendering to and do the conversion based on it:

```glsl
// X
if (i_face == 0) { fakeWorldPos = vec4( 1., -a_Position.y, -a_Position.x, 0.); }
if (i_face == 1) { fakeWorldPos = vec4(-1., -a_Position.y,  a_Position.x, 0.); }
// Y
if (i_face == 2) { fakeWorldPos = vec4(a_Position.x,  1.,  a_Position.y, 0.); }
if (i_face == 3) { fakeWorldPos = vec4(a_Position.x, -1., -a_Position.y, 0.); }
// Z
if (i_face == 4) { fakeWorldPos = vec4( a_Position.x, -a_Position.y,  1., 0.); }
if (i_face == 5) { fakeWorldPos = vec4(-a_Position.x, -a_Position.y, -1., 0.); }
```
We then have to flip the x axis to adjust for the difference between ARCore's and OpenGL's coordinate-system handedness:

```glsl
fakeWorldPos.x = -fakeWorldPos.x;
```
From there, we have our coordinates in 3D space. We then multiply by the view matrix, with w set to 0 so the vector is treated as a direction:

```glsl
viewPos = m_viewMat * vec4(fakeWorldPos.xyz, 0.);
```
This translates the world coordinates to camera coordinates. We use this in our fragment shader to write the camera texture into our cubemap face textures. To do this, we first check that viewPos is within the camera bounds, ensuring we only overwrite texels that the camera actually sees. If it is in bounds, we set fragColor to the color sampled from the camera texture.
```glsl
if (viewPos.x > -1. && viewPos.x < 1. && viewPos.y > -1. && viewPos.y < 1. && viewPos.z > 0.) {
    vec2 cameraTexCoord = vec2((viewPos.x + 1.)/2., 1. - (viewPos.y + 1.)/2.);
    fragColor = texture(sTexture, cameraTexCoord);
}
```
This happens for each face of the cubemap. The bounds check ensures that cubemap faces are only written to when the camera can "see" them.