Semester 1, Week 13 Development - 62firelight/manimRT-490 GitHub Wiki
- Added a method to colour in pixels
- Reduced the size of the "projection point" in the default camera mobject
- Reduced the thickness of the lines in the default camera mobject
- Re-added the ability to generate spheres that are guaranteed to be positioned along a ray
- From the TODO list in code comments:
  - Ensured that exceptions are thrown at the appropriate points in the methods for lighting-related vectors
  - Moved the clamp method from Ray3D.py to Utility.py
  - Cleaned up the formatting of printed ray equations
This was mainly accomplished by using the Square mobject (which the RTPlane class is based on).
The main challenge here was getting the scale of the Square just right.
An interesting observation from this image is that the bottom of the cone in the ray does not seem to have a solid surface.
Code (click to reveal)

```python
from manim import *
from manim_rt import *
from manim_rt.RTCamera import RTCamera
from manim_rt.Ray3D import Ray3D


class CameraTest(ThreeDScene):
    def construct(self):
        self.set_camera_orientation(phi=65*DEGREES, theta=-95*DEGREES, zoom=8)

        axes = ThreeDAxes()
        labels = axes.get_axis_labels()

        camera = RTCamera([0, 0, 1])

        red_ray = camera.draw_ray(8, 5, color=RED, distance=1.5, thickness=0.01)
        green_pixel = camera.colour_pixel(8, 5, color=GREEN)

        self.add(axes, labels, camera, red_ray, green_pixel)
```
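Getting the Square's scale right essentially amounts to dividing the camera plane's width by the number of pixels per row, then compensating for Manim's default Square size. A minimal sketch of that arithmetic (the helper name is mine, not part of ManimRT, and it assumes Manim's default Square side length of 2):

```python
def pixel_square_scale(plane_width, pixels_per_row, default_square_side=2.0):
    # Side length one coloured pixel should have on the camera plane.
    pixel_side = plane_width / pixels_per_row
    # Manim's Square defaults to a side length of 2, so the factor
    # passed to .scale() is relative to that default.
    return pixel_side / default_square_side
```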
The use case I had in mind for this feature was creating quick animations so you wouldn't have to think too much about where to place a sphere.
A future addition for this feature could be a way to offset the position of the sphere so that it looks more natural.
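Guaranteeing that a sphere sits along a ray reduces to placing its centre some distance along the ray's unit direction, and the offset idea above would then be a nudge kept smaller than the sphere's radius so the hit is preserved. A sketch under those assumptions (hypothetical helper, not the actual RTSphere.generate_sphere signature):

```python
import numpy as np

def sphere_centre_on_ray(ray_start, ray_direction, distance, offset=None):
    # Normalise the direction so `distance` is measured in scene units.
    direction = np.asarray(ray_direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    centre = np.asarray(ray_start, dtype=float) + distance * direction
    # A small offset (kept shorter than the sphere's radius) still
    # guarantees an intersection but makes the placement look natural.
    if offset is not None:
        centre = centre + np.asarray(offset, dtype=float)
    return centre
```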
Code (click to reveal)

```python
from manim import *
from manim_rt.RTCamera import RTCamera
from manim_rt.RTPlane import RTPlane
from manim_rt.RTPointLightSource import RTPointLightSource
from manim_rt.RTSphere import RTSphere
from manim_rt.Ray3D import Ray3D


class BasicRayTracingAlgorithm(ThreeDScene):
    def construct(self):
        self.set_camera_orientation(phi=65*DEGREES, theta=0*DEGREES, zoom=1.5, frame_center=[0, 0, 1.5])

        axes = ThreeDAxes()
        labels = axes.get_axis_labels()

        red_sphere_location = [-1, 2, 2]

        # Camera
        ray_start = [2, -2.5, 3]
        camera = RTCamera(ray_start, image_width=3, image_height=3).rotate(90 * DEGREES, RIGHT).rotate(35 * DEGREES, OUT)

        partial_ray = camera.draw_ray(2, 2, 2)
        ray = camera.draw_ray(2, 2, 7.5)
        # ray1 = camera.draw_ray(1, 1, 7.5)
        # ray2 = camera.draw_ray(2, 1, 7.5)
        # ray3 = camera.draw_ray(3, 1, 7.5)
        # ray4 = camera.draw_ray(1, 2, 7.5)
        # ray5 = camera.draw_ray(3, 2, 7.5)
        # ray6 = camera.draw_ray(1, 3, 7.5)
        # ray7 = camera.draw_ray(2, 3, 7.5)
        # ray8 = camera.draw_ray(3, 3, 7.5)

        pixel = camera.colour_pixel(2, 2, RED).rotate(90 * DEGREES, RIGHT).rotate(35 * DEGREES, OUT)

        # Scene
        plane = RTPlane(x_scale=1.5, y_scale=4)
        transparent_sphere = RTSphere(translation=[0, 0, 1], opacity=0.5)
        red_sphere = RTSphere(translation=red_sphere_location, color=RED)
        light_source = RTPointLightSource([0, -0.5, 3]).scale(0.25)
        sphere = RTSphere.generate_sphere(ray, 2.5, color=RED)

        # Image
        self.add(axes)
        self.add(labels)
        self.add(camera)
        # self.add(partial_ray)
        self.add(ray)
        self.add(pixel)
        self.add(sphere)
        # self.add(plane)
        # self.add(transparent_sphere)
        # self.add(red_sphere)
        # self.add(light_source)
        # self.add(ray1)
        # self.add(ray2)
        # self.add(ray3)
        # self.add(ray4)
        # self.add(ray5)
        # self.add(ray6)
        # self.add(ray7)
        # self.add(ray8)

        # Animations
        # self.begin_ambient_camera_rotation(0)
        # self.play(GrowFromCenter(camera))
        # self.play(GrowFromCenter(plane))
        # self.play(GrowFromCenter(red_sphere))
        # self.play(GrowFromCenter(transparent_sphere))
        # self.play(GrowFromCenter(light_source))
        # self.play(Create(partial_ray))
        # self.play(FadeOut(partial_ray))
        # self.play(Create(ray))
        # self.play(Create(ray1))
        # self.play(Create(ray2))
        # self.play(Create(ray3))
        # self.play(Create(ray4))
        # self.play(Create(ray5))
        # self.play(Create(ray6))
        # self.play(Create(ray7))
        # self.play(Create(ray8))
```
For ray equations, floating point numbers are now rounded (to 1 decimal place by default) and integers no longer have a trailing dot.
Before
After
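The rounding rule above can be sketched as a small formatting helper (the function name is hypothetical; the real logic lives in the ray-equation printing code):

```python
def format_component(value, decimal_places=1):
    # Round floats to the requested precision (1 decimal place by default).
    rounded = round(value, decimal_places)
    # Print integral values without a trailing dot: 2.0 -> "2", not "2.0".
    if rounded == int(rounded):
        return str(int(rounded))
    return str(rounded)
```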
The camera is now drawn inside a blue box, so it no longer lacks boundary lines on two of its sides.
It might be better to make these boundary lines white in future versions, but I think blue is fine for now.
Before
After
The camera doesn't keep track of its transformations like the RT objects do, so the coloured pixels won't be rotated (and, presumably, scaled) correctly by default (as seen above).
A quick fix to this is to just apply the same transformations to the coloured pixel (resulting in the image below), but this is not an ideal solution.
A more ideal solution may be to allow the camera to keep track of its own transformations (similar to RTSphere and RTPlane), so that any method calls using the same camera object will account for those transformations.
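One way to sketch that solution is to have the camera compose every rotation into a single accumulated matrix that later method calls can reuse. All names below are hypothetical, not the RTCamera implementation:

```python
import numpy as np

def rotation_matrix(angle, axis):
    # Rodrigues' formula: rotate by `angle` radians about unit `axis`.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

class TrackedCamera:
    def __init__(self):
        # Identity until the camera is transformed.
        self.transform = np.eye(3)

    def rotate(self, angle, axis):
        # Compose the new rotation with everything applied so far.
        self.transform = rotation_matrix(angle, axis) @ self.transform
        return self

    def apply_to_point(self, point):
        # Methods like pixel colouring could reuse the accumulated
        # transform instead of the caller re-applying each rotation.
        return self.transform @ np.asarray(point, dtype=float)
```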
- The get_intersection method must be called after the ray has been constructed. Maybe it would be better to tell ManimRT to calculate the intersections when a Ray3D object is initialized?
- Rays can only store intersections with one object at a time. For example, if I call `object2.get_intersection(ray)` after `object1.get_intersection(ray)`, then the ray will only store its intersections with object2 instead of combining the intersections from object1 and object2. This isn't an actual ray tracer, so I don't know how big of a problem this is: you can do whatever you need with object1's intersections before calculating the intersections for object2, and it should function just fine as long as you remember to add the necessary objects or animations to the scene before doing more intersection calculations. **If I wanted to illustrate Constructive Solid Geometry (CSG), however, this could be a major problem, as we would need to know all of a ray's intersections combined.**
- Transformations applied to an already constructed RT object are not tracked. If I make a plane that is 2 units long on the y-axis, I can't do anything else with it (at least not in a ManimRT-friendly way). I can still call the stretch, rotate, or shift methods to transform the object, but those further transformations won't be tracked. **I think this would be a simple fix, but I would need to know how much of a priority it is.**
- Which limitations are acceptable and which of them aren't? (see the text highlighted in bold from the section above)
- Should I use "we" or "I" in the report? (e.g. "We present ManiMRT...")
- How are meetings going to work between semesters?
- Any thoughts about what theme to use for my presentation?
- How do people organize their 4th year project work between semesters?
- What would an outline of a potential SoME video look like? (maybe Introduction to Ray Tracing, Ray-Object Intersections, Lighting, Shadows + Reflections + Refractions)
- What's the importance of initializing a camera with just a width value and aspect ratio? Python doesn't support constructor overloading so implementing this may be a bit more awkward than anticipated.
- Where does documentation (e.g., inheritance graph, class descriptions, example gallery with code) fit into everything?
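On the constructor-overloading question above: the usual Python idiom is an alternate constructor written as a classmethod, which sidesteps overloading entirely. A sketch with hypothetical names (not the real RTCamera interface):

```python
class CameraGrid:
    def __init__(self, image_width, image_height):
        self.image_width = image_width
        self.image_height = image_height

    @classmethod
    def from_aspect_ratio(cls, image_width, aspect_ratio):
        # Derive the height from the width and the desired aspect
        # ratio, then delegate to the ordinary constructor.
        return cls(image_width, round(image_width / aspect_ratio))
```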
From highest to lowest priority:
- Ensure code is consistent in terms of formatting, parameters, etc.
- Start on interim report/presentation
- Develop full animation video for illustrating ray-object intersections
- Add docstrings to created classes and their methods
- Add option to initialize camera grid with x value and aspect ratio