JniTeapot Documentation - samrg123/JniTeapot GitHub Wiki

JniTeapot


Table of contents:

  1. JniTeapot Summary

  2. JniTeapot High Level Overview

  3. Programming Reference
    3.1 GlContext
    3.2 GlText
    3.3 GlObject
    3.4 GlTransform
    3.5 GlCamera
    3.6 GlSkybox
    3.7 GlUtil
    3.8 JNITeapot
    3.9 ArWrapper
    3.10 Log
    3.11 Panic
    3.12 Timer
    3.13 Memory
    3.14 FileManager

  4. Useful Resources
    4.1 Math
    4.2 Graphics
    4.3 OpenGl
    4.4 Hardware
    4.5 Android

  5. Future Goals



Section 1: JniTeapot Summary

screenshot

The purpose of our project was for the team to learn new skills in ARCore, OpenGL, and graphics programming. We wanted to recreate near cutting-edge technology in mobile AR graphics: real-time reflections for virtual objects. Our idea was to write to a cubemap in real time, based on the camera feed. As the user "scans" their room with our app, a cubemap is built up. With that cubemap, we can render a reflective sphere. This makes the reflective sphere look more realistic, since it actually reflects the contents of the user's room! With this technology, we can further blur the line between virtual and actual reality.




Section 2: JniTeapot High Level Overview

The goal of this project is to implement our approach to real-time reflections for augmented reality. This project uses ARCore for tracking and pose estimation of the mobile device, and uses OpenGL to render a reflective object.

We use cubemaps for representing reflections for virtual objects since computing the reflection at render time is highly optimized on modern GPUs. This breaks up our project into three parts:

Setting up ARCore

ARCore is Android's augmented reality framework that includes functionality such as pulling camera textures, tracking device pose, and even machine learning models to estimate depth of a scene from movement and context. This project was written in C++ using the Android NDK, so we used the ARCore NDK bindings that can be found here.

The main entry point to ARCore is the ArSession. However, since ARCore (included in Google Play Services for AR) isn't always installed, it's important to first check whether it is installed and, if not, request an install. The following is a simple example of initializing an ArSession, based on our code from ARWrapper.h, which implements all ARCore-related functionality in this project.

ArSession* arSession;

ArAvailability arCoreAvailability;
ArCoreApk_checkAvailability(jniEnv, jActivity, &arCoreAvailability);

if(arCoreAvailability != AR_AVAILABILITY_SUPPORTED_INSTALLED) {
    ArInstallStatus installStatus;
    ArCoreApk_requestInstall(jniEnv, jActivity, 1, &installStatus);
    // If the Play Store install prompt was shown, query the install status again
    if(installStatus == AR_INSTALL_STATUS_INSTALL_REQUESTED) {
        ArCoreApk_requestInstall(jniEnv, jActivity, 0, &installStatus);
    }
}

// Create the session before configuring it
ArSession_create(jniEnv, jActivity, &arSession);

ArConfig* arConfig;
ArConfig_create(arSession, &arConfig);
// Note: InstantPlacement and AugmentedImages have significant CPU overhead, so we disable them
ArConfig_setInstantPlacementMode(arSession, arConfig, AR_INSTANT_PLACEMENT_MODE_DISABLED);
ArConfig_setAugmentedImageDatabase(arSession, arConfig, nullptr);
ArSession_configure(arSession, arConfig);
ArConfig_destroy(arConfig);

Now that we have an initialized ArSession, we can call ArSession_resume(arSession) to tell ARCore that we are ready to start tracking. We can now start using it to query information about the camera pose. This involves creating an ArFrame, ArCamera, and ArPose. Example code based on our code from ARWrapper.h to get the camera pose is below:

ArFrame* arFrame;
ArPose* cameraPose;
ArCamera* arCamera;

ArFrame_create(arSession, &arFrame);
ArPose_create(arSession, nullptr, &cameraPose);
ArFrame_acquireCamera(arSession, arFrame, &arCamera);

ArSession_update(arSession, arFrame);
ArCamera_getDisplayOrientedPose(arSession, arCamera, cameraPose);
ArPose_getPoseRaw(arSession, cameraPose, rawPose.vals); // rawPose.vals now stores 7 values (4 quaternion, 3 xyz)

The other important information we need from ARCore is the projection matrix, which depends on physical properties of the phone's camera lens such as its field of view. We will discuss how this is used in the next section. This can be accessed using ArCamera_getProjectionMatrix(arSession, arCamera, nearPlane, farPlane, projMatrix.values);.

To access the camera's image as an OpenGL texture, create a GL_TEXTURE_EXTERNAL_OES texture and call ArSession_setCameraTextureName(arSession, eglTextureId) to tell ARCore to put the camera image in the texture with the given ID.

Rendering a reflective sphere

The first step in rendering any 3D object is having a good representation of that object, which primarily means the object's mesh. Our project supports loading OBJ files, which are essentially a list of vertices, faces, and optionally normals. If normals are not present in the file, we compute them ourselves by taking the cross product of the edge vectors between the 3 points of a face. Once we have our vertices, faces, and normals loaded, we copy them to the GPU. Since the mesh doesn't change after load time (in our use case), we only need to do this once: we create a VBO and bind it with glBindBuffer before uploading the data. At this point it is also common to load a texture for the object; however, since we focused on reflections for smooth metallic objects in this project, we skipped this step.

Once our object is fully loaded, we can start drawing it. To draw an object, we use a vertex shader and a fragment shader. The vertex shader can be thought of as running once for each vertex in our object. When rendering a 3D object, it is usually responsible for transforming the object's vertices, which live in object space, to view space and/or clip space. The transformation from object space to world space is done using the model matrix; from world space to view space, the view matrix; and from view space to clip space, the projection matrix (also called the perspective matrix when a perspective projection is used). For more information on these terms, check out this explanation. Essentially, this process outputs 2D vertices that represent the original vertices' corresponding positions on the screen.

In addition to transforming vertices, we also transform normals, light directions, and more in the vertex shader. All of this information is passed into the fragment shader, the next step in the standard render pipeline, which can be thought of as outputting a color for each pixel in the output image. Note how the fragment shader runs per pixel, while the vertex shader runs per vertex; the output attributes from the vertex shader must therefore be interpolated between pixels so we get varying values per pixel. This is usually done automatically by OpenGL when declaring an out from the vertex shader.

Our fragment shader implements the Phong lighting model with modifications for cubemapped reflections. Given a cubemap, we reflect the vector from the camera to the object across the object's normal at that position, and use a cubemap sampler to look up the correct texel.

A more in-depth overview of this can be found in our implementation of GlObject.

Generating a cubemap

Our real time reflection cubemap is generated using information from the camera.

Each time we get a new image from the camera, we attempt to write part of this camera background to all 6 faces. This is done with the vertex and fragment shaders in GlSkybox.

The vertex shader calculates which position of the camera image each face should write to. This gets a bit tricky, since a cubemap is 3D, while each face of the cubemap and the camera image are 2D. Thus, we have to convert each position in each face of the cubemap into a "fakeWorldPos": the position of that cubemap vertex in 3D space. This translation is different for each face. We used the following graphic, sourced from https://www.khronos.org/opengl/wiki/Cubemap_Texture, to convert these positions to 3D positions.

From there, we have our coordinates in 3D space, which we then multiply by the view matrix.

This translates the world coordinates to camera coordinates, which we use to write the camera texture to our cubemap face textures in the fragment shader. To do this, we first check that our viewPos is within the camera bounds, to ensure we only overwrite texels that the camera actually sees. Then, if it is in bounds, we simply set fragColor to the color we see in the camera texture at that position.

This happens for each face in the cubemap. Our bounds checking ensures that the cubemap faces are only written to when the camera can "see" them.

For a more in-depth look at our code and shaders, see Mapping the cubemap for realtime reflections and GlSkybox.




Section 3: JniTeapot Code Reference

Our project is made up of GlObjects, which are rendered with a GlSkybox for reflections. All AR functionality is handled by ARWrapper, including getting device pose and background camera texture.

GlContext.h:

GlContext handles the majority of the global OpenGL context, including helper functions to create shaders, VBOs, and cubemaps.

void Init(ANativeWindow* window);

Initializes the EGL surface and context based on the Android Native Window passed in.

inline void Shutdown()

Cleans up all OpenGL related resources, including the context, surface, and display.

static inline GLuint CreateShader(GLuint type, const char* source)

Uses glCreateShader, glShaderSource, and glCompileShader to compile a shader with the given type and source, and returns a GLuint referring to the newly created shader.

static GLuint CreateGlProgram(const char* vertexSource, const char* fragmentSource)

Calls CreateShader on both the vertex and fragment shaders, and then links them. Returns the combined program.

static inline void VertexAttribPointerArray(GLuint attribIndex, GLuint dataType, uint32 bytes, const void* pointer) 

Uploads an array of vertex attributes. Consider using a uniform block instead.

bool SwapBuffers()

SwapBuffers wraps eglSwapBuffers and recreates the context if necessary. Returns false if the context needed to be recreated.

static GLuint LoadCubemap(const char* (&images)[6], int &size)

Loads 6 PNG images (positive/negative X, Y, Z directions) into an OpenGL cubemap texture using FileManager and lodepng. Returns the ID of the newly created cubemap texture.


GlText.h:

GlText is a class designed to render and draw strings to the screen.


Public Types:

struct alignas(8) StringAttrib

A struct used to specify how a string is drawn on the screen.

| Field | Default Value | Description |
| --- | --- | --- |
| Vec2<float> scale | GlText::kDefaultStringAttribScale | The x,y scale used to draw text on the screen |
| uint32 rgba | GlText::kDefaultStringAttribRGBA | The red, green, blue, alpha color used to draw text on the screen |
| float depth | GlText::kDefaultStringAttribDepth | The z value used to order strings on the screen, in the range [-1,1]. Strings with larger depth values are rendered on top of strings with smaller depth values. If the depth value is outside the [-1,1] range, the string won't be drawn. |

struct RenderParams

A struct used to specify how a font will be rendered.

| Field | Default Value | Description |
| --- | --- | --- |
| StringAttrib renderStringAttrib | GlText::kDefaultRenderStringAttrib | The initial StringAttrib used to draw text to the screen. |
| StringAttrib extraStringAttrib | GlText::kDefaultExtraStringAttrib | The number of extra StringAttribs reserved for rendering strings on the screen. This value effectively determines the maximum number of different string styles that can be displayed on the screen in a single Draw call. |
| uint16 targetGlyphSize | GlText::kDefaultGlyphSize | The target font glyph height and width used to render the font atlas. |
| uchar startChar | GlText::kDefaultStartChar | The first character in the range [startChar, endChar] rendered to the font atlas. |
| uchar endChar | GlText::kDefaultEndChar | The last character in the range [startChar, endChar] rendered to the font atlas. |
| uchar unknownChar | GlText::kDefaultUnknownChar | The character used to display characters not in the [startChar, endChar] range. |

Constructor:

GlText(GlContext* context, const char* assetPath, int fontIndex = 0)

Constructs a GlText object

| Parameter | In/Out | Description |
| --- | --- | --- |
| context | In | The GlContext used to render the text |
| assetPath | In | The asset path to the font file used to render text (Ex. "myfont.ttf") |
| fontIndex | In | The index of the font to use within the font asset. For most font formats this should be left as 0 |

Returns: GlText instance


Public Methods:

void RenderTexture(const RenderParams& params)
| Parameter | In/Out | Description |
| --- | --- | --- |
| params | In | Reference to the GlText::RenderParams struct used to specify how the font is rendered. |

Remarks:
Before drawing text to the screen you must first call RenderTexture(). RenderTexture will try to render each glyph at a height of params.targetGlyphSize, but may prefer a slightly larger height if the font asset contains a prerendered bitmap of similar size. Each glyph in the [params.startChar, params.endChar] range is tightly packed into a font atlas texture that is used for all subsequent Draw() calls until RenderTexture is called again.


Returns: N/A


template<typename... ArgT>
void PushString(Vec2<float> position, const char* fmt, ArgT... args)

Queues up a string to be drawn in subsequent Draw() calls.

| Parameter | In/Out | Description |
| --- | --- | --- |
| position | In | The x,y screen position of the start of the string baseline. Screen coordinates place the upper left corner of the screen at (0,0) and the lower right corner at (sWidth, sHeight), where sWidth and sHeight are the current screen size set by UpdateScreenSize(). If no call to UpdateScreenSize has been made, the width and height of the GlContext used to construct the GlText object are used for sWidth and sHeight respectively. |
| fmt | In | A printf-style format string used to format the string drawn. |
| ... args | In | The corresponding arguments used for fmt. |

Remarks:
Any string pushed to GlText will be drawn in all subsequent Draw() calls until Clear() is invoked. If you only want to draw a string for a single frame, call Clear immediately after Draw. Ex:

glText.PushString(Vec2(100.f, 100.f), "Frame Number: %d", frameNumber);
glText.Draw();
glText.Clear();

Returns: N/A


void Draw()

Draws all queued strings on the screen.

| Parameter | In/Out | Description |
| --- | --- | --- |
| N/A | - | - |

Returns: N/A


void Clear()

Clears all queued strings.

| Parameter | In/Out | Description |
| --- | --- | --- |
| N/A | - | - |

Returns: N/A


void AllocateBuffer(uint32 numChars)

Preallocates a buffer large enough to fit numChars.

| Parameter | In/Out | Description |
| --- | --- | --- |
| numChars | In | The number of characters to allocate. |

Remarks:
GlText automatically allocates a buffer for each string, but this method can be used to preallocate one large buffer if you are pushing multiple strings. You can also call AllocateBuffer(0) to free the previously allocated buffer; a new buffer will be allocated on the next Draw call.


Returns: N/A


GlObject.h:

A class that represents a renderable object and all associated information. It contains the source for both the vertex and fragment shaders, as well as the bookkeeping to manage uploading VBOs and uniform blocks to the GPU.

Constructor:

GlObject(const char* objPath, GlCamera* camera, const GlSkybox* skybox, const GlTransform& transform = GlTransform())

Initializes a GlObject with a given path to a mesh stored in OBJ format, a reference to the scene's camera, a reference to a GlSkybox used for computing reflections, and an optional transform that serves as the model matrix in the model-view-projection paradigm. The constructor reads in the OBJ, parses it, and uploads its vertices into a VBO. It also sets up the uniform block for the shader that contains the camera information.

GlTransform GetTransform() const
void SetTransform(const GlTransform& t)

Helper getter/setter functions to access the object's transform, which refers to its transform from object space to world space.

void Draw(float mirrorConstant)

This is the main draw call that renders this object to the output buffer. If necessary, it updates the uniform block (object transform, camera transform, view matrix, view-projection matrix, etc.). It enables the associated shader program, binds the cubemap textures from the associated GlSkybox, and calls glDrawElements.

Future improvements to this function could include:

  • create generic VBO
  • create VBIs
  • create FBO pipeline
  • create composite pipeline
  • have text render unit quad vertices for each character (since GlText is a type of renderable object with a VBO)

GlCamera.h:

A GlCamera encapsulates a transform along with a projection matrix.

Constructor:

GlCamera(const Mat4<float>& projectionMatrix = Mat4<float>::identity, const GlTransform& transform = GlTransform())

Initializes a camera with the given projection matrix and transform.

inline GLuint EglTexture()        const

Returns the ID of the camera's EGL texture.

   
   
Returns the ID of the camera's EGL texture sampler.

inline GlTransform GetTransform()        const

Returns the camera's transform.

inline Mat4<float> GetProjectionMatrix() const 

Returns the camera's projection matrix.

inline Mat4<float> GetViewMatrix() 

Computes and returns the view matrix given by the inverse of the camera's transform.

inline void SetTransform(const GlTransform& t) 

Sets the camera's transform.

inline void SetProjectionMatrix(const Mat4<float>& pm) 

Sets the camera's projection matrix.

inline uint32 MatrixId()

Returns a monotonically increasing number that refers to the current Matrix's unique ID. This is used to avoid repetitious copies of an identical matrix.

inline Mat4<float> Matrix()

Uses the computed view matrix from the transform and the projection matrix to compute the view-projection matrix. Updates the matrix ID.


GlSkybox.h:

This is the class containing the functionality for our cubemap for real time reflections.

inline void UpdateMipMaps()

Updates the mipmaps of our cubemap textures to reduce aliasing.

inline void UpdateUniformBlock()

Updates the uniform block information for the GPU.

GlSkybox(const SkyboxParams& params)

Initializes our skybox cubemap.

void UpdateTexture(const GlContext* context)

Draws the camera texture to our cubemap depending on the rotation of the camera. For more information on the process, see the "Mapping the cubemap for realtime reflections" wiki page.

void Draw()

Draws the cubemap as a background of our AR program.

glUtil.h:

This header includes various macros to simplify OpenGL logging and assertions.

Macros:

void GlAssertNoError(const char* fmt, ArgsT... args)

Checks whether an OpenGL error has occurred and fails a runtime assertion with the provided error message if one has.

| Parameter | In/Out | Description |
| --- | --- | --- |
| fmt | In | The printf-style error message format string to print if an error occurs. |
| ... args | In | The corresponding arguments used for fmt. |

void GlAssert(bool condition, const char* fmt, ArgsT... args)

Checks whether an OpenGL condition is met and fails a runtime assertion with the provided error message if it isn't.

| Parameter | In/Out | Description |
| --- | --- | --- |
| condition | In | The OpenGL condition to check |
| fmt | In | The printf-style error message format string to print if the condition isn't met. |
| ... args | In | The corresponding arguments used for fmt. |

void GlAssertTrue(int val, const char* fmt, ArgsT... args)

Checks whether val == GL_TRUE and fails a runtime assertion with the provided error message if it doesn't.

| Parameter | In/Out | Description |
| --- | --- | --- |
| val | In | The value to compare to GL_TRUE |
| fmt | In | The printf-style error message format string to print if val != GL_TRUE. |
| ... args | In | The corresponding arguments used for fmt. |

Public Methods:

template<typename T> constexpr GLenum GlAttributeSize()

Returns the number of OpenGL attributes taken up by a given GL type.

| Parameter | In/Out | Description |
| --- | --- | --- |
| T | In | The OpenGL attribute type to get the size of. |

Remarks:
This function is evaluated at compile time. An example use case of this function is: GlAttributeSize<float>() which returns 1


Returns: The number of attributes taken up by the OpenGL type corresponding to T


template<typename T> constexpr GLenum GlAttributeType()

Maps a C type to an OpenGL type.

| Parameter | In/Out | Description |
| --- | --- | --- |
| T | In | The C type to map to OpenGL |

Remarks:
This function is evaluated at compile time. An example use case of this function is: GlAttributeType<float>() which returns GL_FLOAT


Returns: The OpenGL type corresponding with the C type T


jniTeapot.cpp:

jniTeapot.cpp is the main portion of our code; it contains the main loop that renders our AR objects.

void InitGlesState() {

Initializes the state of the screen to a black debug color, as well as clearing the depth buffers.

Vec2<float> DrawMemoryStats(GlText* glText, Vec2<float> textBaseline, Vec2<float> lineAdvance)

Returns an updated text baseline position. The text contains memory stats that can be helpful for debugging, including memory bytes, blocks, and reserve blocks of memory. This text gets saved in glText, which can then be drawn to the screen.

Vec2<float> DrawFPS(GlText* glText, float renderTime, float frameTime,
             Vec2<float> textBaseline, Vec2<float> lineAdvance) {

Returns an updated text baseline position. The text contains the current frames per second computed from renderTime and frameTime. This text gets saved in glText, which can then be drawn to the screen.

void DrawStrings(GlText* glText, float renderTime, float frameTime,
                 Vec2<float> textBaseline, Vec2<float> lineAdvance) {

Calls DrawMemoryStats and DrawFPS with renderTime and frameTime to draw the memory stats and FPS to the screen.

void* activityLoop(void* params_)

This function is run in its own thread that is started when the JNI NativeStartApp ABI call is called. It initializes GlContext, ARWrapper, and sets up a basic scene with a GlObject and GlSkybox. It uses a Timer to run a time-constrained render loop. In this loop, it updates ARWrapper and updates the camera position based on the pose estimation from ARCore. It then updates the skybox using GlSkybox::Draw. Then, it rotates the main GlObject in the scene as a demo application. Lastly, it draws information to the screen using DrawStrings.

void JFunc(App, NativeStartApp)(JNIEnv* jniEnv, jclass clazz, jobject surface, jobject jAssetManager, jobject jActivity)

This function is the main entrypoint to the C++ native part of this project. It is called by the Java JniTeapotActivity in the surfaceCreated callback. This function performs initialization of the FileManager, ARWrapper, and runs a thread with activityLoop, passing it a pointer to the Android ANativeWindow that is used for determining canvas parameters such as width and height.

void JFunc(App, NativeSurfaceRedraw)(JNIEnv* env, jclass clazz,
int rotation, int width, int height) {

This function was supposed to handle rotation of the phone. However, it is unfinished and unused; physically rotating the phone currently breaks some of the on-screen rendering.

ARWrapper.h:

This is a wrapper class for ARCore.

bool IsDepthSupported()

Returns whether depth estimation is supported on the current device. See https://developers.google.com/ar/discover/supported-devices for a list of devices that support depth.

void ConfigureSession() 

Initializes and configures the ArSession. There should only be one ArSession running at a time.

inline Mat4<float> ProjectionMatrix(float nearPlane, float farPlane) const

Takes in the distance to the near Z-clipping plane and the distance to the far Z-clipping plane. Returns a 4x4 projection matrix.

inline void SetEglCameraTexture(GLuint eglTexture)

Tells ARCore which EGL texture to write the camera image to.

void UpdateScreenSize(int width_, int height_, int rotation = 1)

Takes in a width, height, and rotation, and updates the render screen size accordingly. A rotation of 0 is upright; 1 is rotated 90 degrees.

void InitializeARWrapper(void* jniEnv, jobject jActivity)

Initializes the ARWrapper class, including the arFrame, cameraPose, and arCamera.

inline GlTransform UpdateFrame()

Updates the camera position each frame by querying ARCore for the raw pose (position and rotation) of the phone camera. Returns the resulting transform.


log.h:

The Log class helps log messages such as errors, warnings, and info to aid debugging.

#define Log(fmt, ...)	LogEx(LogType(LOG_LEVEL_MSG), 	fmt, ##__VA_ARGS__)

Logs the given message as an info type log.

#define Warn(fmt, ...)	LogEx(LogType(LOG_LEVEL_WARN), 	fmt, ##__VA_ARGS__)

Logs the given message as a warning.

#define Error(fmt, ...) LogEx(LogType(LOG_LEVEL_ERROR), fmt, ##__VA_ARGS__)

Logs the given message as an error.


panic.h:

Calling Panic will print a log message and shut the program down. It is usually called when an error makes it impossible to continue running the program.

void Panic_(const char* const kErrorFuncStr, const ArgsT&... args) {

Logs the given error string, then calls exit(1), exiting the program.


Timer.h:

This is a timer class that provides timers for your code. Timers can be compared, as well as added or subtracted. Available operations include =, <, >, +=, -=, +, and -.

inline Timer(bool startTimer = false)

Initializes your timer. If startTimer is true, the timer is started immediately.

inline void Start() {

Starts the timer.

inline float ElapsedSec()   const { 

Returns the time, in seconds, since the timer was started. ElapsedMs() returns the time in milliseconds, ElapsedUs() in microseconds, and ElapsedNs() in nanoseconds.

inline float LapSec()   const { 

Returns the time, in seconds, since the timer was started, then restarts the timer. LapMs() returns the time in milliseconds, LapUs() in microseconds, and LapNs() in nanoseconds.

inline void SleepLapSec(float sec) {

Sleeps the thread for sec seconds. SleepLapMs(float ms) sleeps for ms milliseconds, SleepLapUs(float us) for us microseconds, and SleepLapNs(uint64 ns) for ns nanoseconds.

Memory.h:

Memory is an arena-based memory management class used for dynamic memory allocation.


Public Types:

class Memory::Arena

A class that represents an arrangement of allocated memory.


struct Memory::Region

A class that represents the current position inside of an Arena.


Public Variables:

static Memory::Arena Memory::temporaryArena

A memory arena allocated at the start of the application that can be used for temporary memory. This arena should not be used for long term storage.


static uint32 Memory::memoryBytes

The number of bytes currently being used by the whole program. This includes reserve and pad bytes.


static uint32 Memory::memoryPadBytes

The number of bytes currently being used by the whole program to pad align memory.


static uint32 Memory::memoryUnusedBytes

The number of bytes in the whole program currently allocated but not being used. This includes the number of bytes reserved and the number of bytes needed for alignment.


static uint32 Memory::memoryBlockReservedBytes

The number of bytes in the whole program currently being reserved for future allocations.


static uint32 Memory::memoryBlockCount

The number of allocated blocks in the whole program. This includes blocks reserved for future allocations.


static uint32 Memory::memoryBlockReserveCount

The number of allocated blocks in the whole program currently being reserved for future use.


static uint32 Memory::Arena::arenaBytes

The number of bytes being used by the whole arena. This includes reserve and pad bytes.


static uint32 Memory::Arena::arenaPadBytes

The number of bytes being used by the whole arena to align memory.


static uint32 Memory::Arena::arenaUnusedBytes

The number of bytes being used to align memory and reserved for future use in the whole arena.


static uint32 Memory::Arena::arenaBlockCount

The number of allocated blocks used in the whole arena. This includes reserved blocks.


Constructors:

Memory::Arena(uint32 prealocatedBytes = 0)

Creates a new memory arena to use for dynamic allocations.

| Parameter | In/Out | Description |
| --- | --- | --- |
| prealocatedBytes | In | The initial number of bytes allocated by the arena. |

Returns: An instance of Memory::Arena


Memory::Region()

Creates a new memory region pointing to start of all memory arenas.

| Parameter | In/Out | Description |
| --- | --- | --- |
| N/A | - | - |

Returns: An instance of Memory::Region pointing to the start of all memory arenas


Public Methods:

void* Arena::PushBytes(size_t bytes, bool zeroMemory = false, uint8 alignment = 1)

Pushes a buffer of size bytes onto the arena. If zeroMemory is true, the buffer will be zero initialized. The start address of the returned buffer will be aligned to alignment.

| Parameter | In/Out | Description |
| --- | --- | --- |
| bytes | In | The number of bytes to push onto the arena. |
| zeroMemory | In | If true, the new memory will be initialized to zero. If false, the initial values stored in the returned buffer are undefined. |
| alignment | In | The multiple to which the first byte of the returned buffer is aligned. |

Returns: A pointer to newly allocated buffer of size bytes aligned with alignment.


template<class T> T* Arena::PushStruct(bool zeroMemory = false, uint8 alignment = 1)

Pushes a new structure onto the arena. If zeroMemory is true the structure will be zero initialized. The start address of the structure returned will be aligned with alignment.

| Parameter | In/Out | Description |
| --- | --- | --- |
| zeroMemory | In | If true, the structure will be initialized to zero. If false, the initial values stored in the returned structure are undefined. |
| alignment | In | The multiple to which the first byte of the structure is aligned. |

Returns: A pointer to the newly allocated structure of type T aligned with alignment.


inline void Arena::Reserve(uint32 bytes)

Reserves a minimum of bytes bytes on the arena for future allocations.

| Parameter | In/Out | Description |
| --- | --- | --- |
| bytes | In | The number of bytes to reserve. |

Returns: N/A


inline bool Arena::IsEmptyRegion(Region r) const

Returns true if no allocations have been made on the arena since region r, or false otherwise.

| Parameter | In/Out | Description |
| --- | --- | --- |
| r | In | The region to check. |

Remarks:

If region r does not belong to the arena, IsEmptyRegion will return false.


Returns: true if r is an empty region, false otherwise


inline Region Arena::CreateRegion() const 

Creates a new region pointing to the current position in the arena.

| Parameter | In/Out | Description |
| --- | --- | --- |
| N/A | - | - |

Returns: An instance of a new region pointing to the current position in the arena.


inline void Arena::FreeBaseRegion(const Region& region)  

Frees all bytes on the arena that were allocated after region.

| Parameter | In/Out | Description |
| --- | --- | --- |
| region | In | The base region to free all bytes after. |

Returns: N/A


inline void Arena::Pack()  

Frees any reserved blocks from the arena if present.

| Parameter | In/Out | Description |
| --- | --- | --- |
| N/A | - | - |

Returns: N/A


template <typename ArenaT> 
inline void Arena::CopyToBuffer(uint32 numElements, void* buffer, 
                                uint32 regionStride=sizeof(ArenaT), 
                                uint32 bufferStride=sizeof(ArenaT)) const   

Copies the whole arena of type ArenaT to a buffer of type ArenaT.

| Parameter | In/Out | Description |
| --- | --- | --- |
| numElements | In | The number of elements in the arena |
| buffer | In | The buffer to copy the arena to |
| regionStride | In | The stride between elements in the arena. |
| bufferStride | In | The stride between elements in the buffer. |

Remarks:
buffer must be large enough to fit the whole arena with stride bufferStride.


Returns: N/A


template <typename ArenaT, typename TranslatorFuncT>
inline void Arena::TranslateToBuffer(uint32 numElements, void* buffer, 
                                     const TranslatorFuncT& translator,
                                     uint32 regionStride=sizeof(ArenaT), 
                                     uint32 bufferStride=sizeof(ArenaT))   

Translates the whole arena of type ArenaT to a buffer of type ArenaT using translator to translate each element in the arena.

Parameter In/Out Description
numElements In The number of elements in the arena
buffer In The buffer to translate the arena to
regionStride In The stride between elements in the arena.
bufferStride In The stride between elements in the buffer.
translator In A function that takes in the parameters (void* regionElement, void* bufferElement) and will get invoked for every region-buffer element pair. Note this function is responsible for copying/translating each regionElement to bufferElement

Remarks:
buffer must be large enough to fit the whole translated arena with stride bufferStride.


Returns: N/A
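The translator callback contract described above can be illustrated with a flat-buffer version of the same pattern. The real method walks the arena's internal chunks; this standalone sketch only shows how the (regionElement, bufferElement) callback and the two strides interact.

```cpp
#include <cassert>
#include <cstdint>

// Standalone sketch of the TranslateToBuffer callback contract: visit
// numElements elements with independent source/destination strides and let
// the caller-supplied translator copy or convert each pair.
template<typename TranslatorFuncT>
void TranslateToBuffer(uint32_t numElements, void* src, void* dst,
                       uint32_t srcStride, uint32_t dstStride,
                       const TranslatorFuncT& translator) {
    uint8_t* s = static_cast<uint8_t*>(src);
    uint8_t* d = static_cast<uint8_t*>(dst);
    for (uint32_t i = 0; i < numElements; ++i) {
        // The translator is responsible for copying/converting each element
        translator(s + i * srcStride, d + i * dstStride);
    }
}
```

For example, a translator lambda can widen an int arena into a float vertex buffer element by element.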


template<typename FuncT>
static inline void ForEachRegion(const Region& startRegion, const Region& stopRegion, const FuncT& func)

Executes func(void* chunk, uint32 chunkBytes) for each chunk in the arena spanning the range [startRegion, stopRegion].

Parameter In/Out Description
startRegion In The start memory region to iterate over
stopRegion In The stop memory region to iterate over
func In The function to invoke (void* chunk, uint32 chunkBytes) on for each memory chunk in [startRegion, stopRegion]

Remarks:
startRegion and stopRegion must belong to the same arena.
ForEachRegion iterates over the arena's chunks in reverse order.


Returns: N/A
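The chunk-callback contract, including the reverse iteration order noted in the Remarks, can be sketched with a plain list of chunks standing in for the arena's internal chunk chain (the Chunk record and function name here are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative chunk record; the real arena tracks its chunks internally.
struct Chunk {
    void* data;
    uint32_t bytes;
};

// Invoke func(chunk, chunkBytes) for each chunk, newest first, mirroring
// the reverse iteration order documented for ForEachRegion.
template<typename FuncT>
void ForEachChunkReversed(const std::vector<Chunk>& chunks, const FuncT& func) {
    for (auto it = chunks.rbegin(); it != chunks.rend(); ++it) {
        func(it->data, it->bytes);
    }
}
```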


template <typename RegionT>
static inline void CopyRegionsToBuffer(const Region& startRegion, const Region& stopRegion, 
                                       uint32 numElements, void* buffer,
                                       uint32 regionStride=sizeof(RegionT), 
                                       uint32 bufferStride=sizeof(RegionT))

Copies each element in the range [startRegion, stopRegion] of an arena to buffer.

Parameter In/Out Description
startRegion In The start memory region to iterate over
stopRegion In The stop memory region to iterate over
numElements In The number of elements in [startRegion, stopRegion]
buffer In The buffer to copy each element in [startRegion, stopRegion] to
regionStride In The stride between elements in [startRegion, stopRegion].
bufferStride In The stride between elements in the buffer.

Remarks:
startRegion and stopRegion must belong to the same arena.
buffer must be large enough to fit each element in [startRegion, stopRegion] with stride bufferStride.


Returns: N/A


template<typename TranslatorFuncT>
static inline void TranslateRegionsToBuffer(const Region& startRegion, const Region& stopRegion, 
                                            uint32 numElements, void* buffer,
                                            uint32 regionStride, uint32 bufferStride, 
                                            const TranslatorFuncT& translator)

Translates each element in the range [startRegion, stopRegion] of an arena to buffer using translator.

Parameter In/Out Description
startRegion In The start memory region to iterate over
stopRegion In The stop memory region to iterate over
numElements In The number of elements in [startRegion, stopRegion]
buffer In The buffer to copy each element in [startRegion, stopRegion] to
regionStride In The stride between elements in [startRegion, stopRegion].
bufferStride In The stride between elements in the buffer.
translator In A function that takes in the parameters (void* regionElement, void* bufferElement) and will get invoked for every region-buffer element pair. Note this function is responsible for copying/translating each regionElement to bufferElement

Remarks:
startRegion and stopRegion must belong to the same arena.
buffer must be large enough to fit each translated element in [startRegion, stopRegion] with stride bufferStride.


Returns: N/A


GlTransform.h:

This class contains all of the transform information for an object (position, rotation, and scale), along with functions to manipulate it.

inline Quaternion<float> GetRotation() const

Returns the quaternion representing the rotation.

inline void SetRotation(const Quaternion<float>& r)

Sets the rotation to the quaternion given.

inline Mat4<float> Matrix() const

Returns the gl transform matrix. Multiplying a point by this matrix applies the transform's scale, rotation, and translation to that point.

inline Mat4<float> InverseMatrix() const

Returns the inverse gl transform matrix. Multiplying the inverse matrix by the transform matrix yields the identity matrix.

inline Mat4<float> NormalMatrix() const

Returns the gl transform matrix for normals, used to transform normal vectors.

inline GlTransform& Scale(const Vec3<float>& s)

Modifies the scale by the given float vector; negative components decrease the scale in that dimension. Currently, this function is unused.

inline GlTransform& Rotate(const Vec3<float> theta)

Rotates the object by the given vector of rotation angles.

inline GlTransform& Translate(const Vec3<float>& delta)

Modifies the position by the given float vector. Currently, this function is unused.
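The relationship between Matrix() and InverseMatrix() described above can be checked numerically. The sketch below builds a translation + scale transform (rotation omitted for brevity) using a hypothetical stand-in for the project's Mat4<float>, and the composed product of the matrix with its inverse is the identity:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical stand-in for the project's Mat4<float>, just enough for the demo.
struct Mat4 {
    float m[4][4] = {};

    static Mat4 Identity() {
        Mat4 r;
        for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
        return r;
    }

    Mat4 operator*(const Mat4& b) const {
        Mat4 r;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j) {
                float sum = 0.0f;
                for (int k = 0; k < 4; ++k) sum += m[i][k] * b.m[k][j];
                r.m[i][j] = sum;
            }
        return r;
    }
};

// Analogous to Matrix(): scale then translate (column-vector convention assumed)
Mat4 TransformMatrix(float sx, float sy, float sz, float tx, float ty, float tz) {
    Mat4 r = Mat4::Identity();
    r.m[0][0] = sx; r.m[1][1] = sy; r.m[2][2] = sz;
    r.m[0][3] = tx; r.m[1][3] = ty; r.m[2][3] = tz;
    return r;
}

// Analogous to InverseMatrix(): undo the translation, then undo the scale
Mat4 InverseTransformMatrix(float sx, float sy, float sz, float tx, float ty, float tz) {
    Mat4 r = Mat4::Identity();
    r.m[0][0] = 1.0f / sx; r.m[1][1] = 1.0f / sy; r.m[2][2] = 1.0f / sz;
    r.m[0][3] = -tx / sx;  r.m[1][3] = -ty / sy;  r.m[2][3] = -tz / sz;
    return r;
}
```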

FileManager.h:

The FileManager class allows you to easily read and save files.

static void Init(JNIEnv* env, jobject jAssetManager)

Initializes the file manager with the given asset manager and JNI environment.

static AssetBuffer* OpenAsset(const char* assetPath, Memory::Arena* arena)

Loads the file given by assetPath into RAM using the given memory arena and returns a buffer containing the file's contents.

static void SaveAsFile(AssetBuffer* buffer, const char* filePath)

Saves the data in buffer to a local file on the device at the location specified by filePath. Currently, this function is unused and untested.
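The real FileManager goes through the NDK's AAssetManager, which needs a running Android app. The desktop sketch below mimics the OpenAsset/SaveAsFile round trip with plain C stdio; the AssetBuffer layout and the bool return values are assumptions for illustration.

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>
#include <vector>

// Illustrative buffer type; the project's AssetBuffer layout may differ.
struct AssetBuffer { std::vector<unsigned char> data; };

// Mimics SaveAsFile: write the buffer's bytes to filePath.
bool SaveAsFile(const AssetBuffer& buffer, const char* filePath) {
    FILE* f = std::fopen(filePath, "wb");
    if (!f) return false;
    size_t written = std::fwrite(buffer.data.data(), 1, buffer.data.size(), f);
    std::fclose(f);
    return written == buffer.data.size();
}

// Mimics OpenAsset: read the whole file at path into the buffer.
bool OpenAsset(const char* path, AssetBuffer* out) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    out->data.resize(static_cast<size_t>(size));
    size_t read = std::fread(out->data.data(), 1, out->data.size(), f);
    std::fclose(f);
    return read == out->data.size();
}
```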


Section 4: Useful Resources

Below is a collection of resources we found useful during the development of the project.

4.1 Math

This is a collection of math resources that were used to create the Quaternion library:

  1. A Fast Compact Approximation of the Exponential Function - A paper showcasing a technique for approximating the exponential function and how to minimize its error. Used in mathUtil.h for FastPow10

  2. Quaternion - A paper explaining the mathematical properties of Quaternions

  3. Quaternion Interpolation - A paper explaining the pros and cons of different interpolation methods.

  4. Rotating Objects Using Quaternions - A blog showing how to implement useful Quaternion methods in C.

  5. Bithacks - A website chock-full of fast algorithms for graphics programming. Used while programming mathUtil.h

  6. The Bit Twiddler - A website that goes more into depth about how some of the Bithacks algorithms work and benchmarks them against other algorithms.

4.2 Graphics

This is a collection of graphics resources used to gain an understanding of shading and lighting techniques:

  1. Reflection Models - A presentation outlining how several different reflection models work, with visual examples of them.

  2. Triangle Strips - A paper describing how triangle strips work and showing an efficient algorithm for making a cube using 14 vertices.

  3. Phong Reflection Model Wiki - The wiki page for how the Phong shading and reflection model works. Used to program Phong reflections in GlObject.h

4.3 OpenGl

This is a collection of OpenGl resources used to write OpenGl code:

  1. EGL 1.5 Specification - Full Specification for EGL 1.5. Used in GlContext.h

  2. GLES 3.1 Full Specification - Full Specification for GLES 3.1

  3. GLES Shading Language Specification - Full Specification for the GLES Shading language 1.0. Used to help write shaders throughout the project.

  4. GLSL ES Specification 3.20 - Full Specification for the GLES Shading language 3.2. Used to help write shaders throughout the project.
    Warning: This project uses GLES 3.1 because GLES 3.2 is not currently supported by the Android emulator. Because of this, GLES 3.2-specific features listed in the specification, such as geometry shaders, are not supported by the project.

4.4 Hardware

This is a collection of resources used to get a loose understanding of how mobile CPU and GPU architectures work and differ from PCs:

  1. Adreno Developer Guide - A guide full of developer tips for the Adreno mobile GPUs.

  2. Armv8-A Instruction Set Architecture - A reference to the mobile ARMv8 instruction set.

  3. CUDA-ptx_isa_1.4 - A reference to the CUDA PTX instruction set for NVIDIA GPUs. Used to compare desktop GPUs to Mobile GPUs.

  4. GT200 instruction Benchmark - A paper benchmarking PTX instructions for the GT200 GPU. Used to see a relative comparison of instruction throughput on GPUs. Specifically used to compare trig instructions to square root instructions.

4.5 Android

This is a collection of Android resources used to figure out how the Android and ARCore API works:

  1. Android Source Code Repo - The Android open source repo. Useful for searching and reading the Android source code when the documentation is unclear.

  2. Android NDK Reference - The Android Native Development Kit reference. Used to find documentation on Native Android programming.

  3. ArCore NDK Reference - The Android NDK ArCore Reference. Used to understand how to interface with ArCore.



Section 5: Future Goals

Within one semester, we were able to implement a base project that renders real-time AR reflections. We have many ideas on how to make it better and more realistic.

  1. Utilizing the depth buffer - Our code already has a function that determines whether a device supports depth scanning. If it does, we want to use that depth information, via ARCore's Depth API, to build better and more accurate cubemaps. We could then translate the depth information from the camera into each object's cubemap. This makes the reflection more accurate: real-life objects closer to the reflective AR sphere should appear larger in it, regardless of how far the object was from the camera when it was scanned.

  2. Reflection of other virtual objects - Currently, when there are two objects in the scene, they share the same cubemap and do not reflect each other. Adding depth lets objects at different points have different-looking cubemaps, but those cubemaps still will not contain the reflection of the other object. To make our scene more realistic, virtual objects should be reflected like real ones. We could achieve this by rendering virtual cubemaps containing only our virtual objects and overlaying them on the existing cubemaps.

  3. Box projected cubemap - One way to build a more accurate cubemap is with a box-projected cubemap: a room-scale cubemap that does not assume each face is infinitely far away. This seems optimal for our application, which mostly targets indoor settings. We can roughly assume that what the camera sees is the walls of a room, so reflections should appear a "wall" distance away. The challenge is finding the correct dimensions for this boxed cubemap. If we can get a room-scale cubemap, perhaps we only need one cubemap per object and can translate each object's reflection based on its position in the room.
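The core of the box-projection idea above is a ray-box intersection: instead of sampling the cubemap with the raw reflection direction, intersect the reflection ray with the room's bounding box and sample toward the hit point. A minimal sketch follows; Vec3 and the box extents are illustrative, and in practice this math would run in the fragment shader.

```cpp
#include <cassert>
#include <cmath>

// Illustrative vector type; the project uses its own Vec3<float>.
struct Vec3 { float x, y, z; };

// Given a surface position and reflection direction (both in box space),
// return the parallax-corrected cubemap lookup direction: the vector from
// the cubemap's capture point to where the reflection ray exits the box.
Vec3 BoxProjectedDirection(Vec3 pos, Vec3 dir, Vec3 boxMin, Vec3 boxMax, Vec3 cubemapCenter) {
    // Distance along dir until the ray exits the slab on one axis
    auto slab = [](float p, float d, float lo, float hi) {
        if (d > 0.0f) return (hi - p) / d;
        if (d < 0.0f) return (lo - p) / d;
        return 1e30f; // parallel to the slab: never exits through it
    };
    float tx = slab(pos.x, dir.x, boxMin.x, boxMax.x);
    float ty = slab(pos.y, dir.y, boxMin.y, boxMax.y);
    float tz = slab(pos.z, dir.z, boxMin.z, boxMax.z);
    float t = std::fmin(tx, std::fmin(ty, tz)); // first wall the ray hits

    Vec3 hit = { pos.x + t * dir.x, pos.y + t * dir.y, pos.z + t * dir.z };
    return { hit.x - cubemapCenter.x, hit.y - cubemapCenter.y, hit.z - cubemapCenter.z };
}
```

With an infinite cubemap the lookup direction is position-independent; with box projection, moving the object inside the room visibly shifts the reflection, which is the effect this future goal is after.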

