Experiments - ImmersiveStorytelling/DocumentationMasterclass GitHub Wiki
Experiments
Here is a list of the experiments we made:
- Moving in VR
- Teleport Laser
- Multiple floors
- Fireplace
- Expanding room
- Rotating shot
- Controller on hand accuracy
Moving in VR
What:
Here we wanted to see whether it is possible for a user to start moving in a 360° video and whether the feeling would be natural. The issue is that the user can't choose his/her own direction, and the video has to be continuous, which limits a certain flow in the story.
How:
We first recorded one static shot, then recorded a moving shot from that position to another place, and finally another static shot at the new position. This way the movement can be triggered whenever we want by giving input (controller, keyboard), rather than being baked into one continuous video. It is also possible to switch immediately to another shot that continues the story once the user stops moving.
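The chaining of shots described above can be sketched as a small state machine. This is an illustrative sketch in Python rather than the actual Unity code; the clip names and the `ShotPlayer` class are hypothetical.

```python
# Illustrative sketch (not Unity code) of chaining shots:
# static A -> moving A->B (on input) -> static B (when movement ends).
# Clip names are made up for this example.

class ShotPlayer:
    def __init__(self):
        # current static clip -> (moving clip to play, static clip after movement)
        self.transitions = {
            "static_A": ("move_A_to_B", "static_B"),
        }
        self.current = "static_A"
        self.moving = False

    def press_move(self):
        """User gives input (controller, keyboard): start the moving clip."""
        if not self.moving and self.current in self.transitions:
            self.current = self.transitions[self.current][0]
            self.moving = True

    def movement_finished(self):
        """Moving clip ended: continue with the static shot at the new spot."""
        if self.moving:
            for start, (move, end) in self.transitions.items():
                if move == self.current:
                    self.current = end
            self.moving = False

player = ShotPlayer()
player.press_move()
print(player.current)        # move_A_to_B
player.movement_finished()
print(player.current)        # static_B
```

Adding more entries to the transition table is how multiple destinations (one per recorded moving shot) would be supported.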
Results:
The movement can be a bit disorienting when not filmed carefully. Although the user cannot choose which direction to walk in, this made it possible to walk to different locations, which could be interesting for telling a story (for example, running around in an apartment). By recording multiple shots to different locations and using the controller to point in a direction, the user could choose a direction, limited by how many shots were made for each direction. The biggest issue is the placement and movement of the camera; this needs to be researched further.
Teleport Laser
What:
We wanted to experiment with another way of moving through space, to determine which is more comfortable. In the virtual world, the user stands in one place, points at a certain position, and teleports there on activation. The biggest issue is that, since we use 360° videos, movement within the same shot distorts the video; another video therefore has to be recorded at the place the laser points to. This also limits the possible teleport destinations to the positions where shots were made.
How:
In a hallway we marked the floor in several places. Then, wearing the headset and using the laser that can be attached to the controller in the virtual world, we determined the position of the markings within the video (which must be done for each video, because switching shots changes only the video, and thus the marking positions, but not the user's physical position). We then programmed it so that firing the laser at such a position switches to the video that belongs to that spot.
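The per-video lookup described above can be sketched roughly as follows. This is a hedged Python illustration, not the actual implementation: the video filenames, the normalized (u, v) marking coordinates, and the tolerance are all invented for the example.

```python
# Sketch of the teleport lookup: each video has its own calibrated marking
# positions, because switching shots moves the markings on screen.
# All names and coordinates below are hypothetical.
TELEPORT_TARGETS = {
    "hallway_pos1.mp4": {(0.62, 0.31): "hallway_pos2.mp4"},
    "hallway_pos2.mp4": {(0.38, 0.29): "hallway_pos1.mp4"},
}

def fire_laser(current_video, hit_uv, tolerance=0.05):
    """Return the video to switch to if the laser hit lands near a marking."""
    for (u, v), target in TELEPORT_TARGETS.get(current_video, {}).items():
        if abs(hit_uv[0] - u) <= tolerance and abs(hit_uv[1] - v) <= tolerance:
            return target
    return current_video  # no marking hit: stay in the current shot
```

Because the table is keyed by the current video, recalibrating a shot only means updating that video's entry.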
Result:
The result was quite good and clear. The only issue that needs more work is the switch between the two shots: it can be abrupt or disorienting when one shot immediately swaps to the next without any transition between them.
Multiple floors
What:
Here we wanted to see the effect of instantly switching between three different rooms that are constructed in the same way. It also let us encounter the errors that arise from not being able to position the camera accurately enough.
How:
In a building with three floors containing the same room (but with different interiors), we set up the camera in exactly the same place on each floor and filmed a few seconds. At the click of a button, the three rooms follow one another.
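The button-driven switching amounts to cycling through a list of identically framed clips. A minimal sketch (illustrative Python, hypothetical filenames, not the Unity code):

```python
# Minimal sketch of the floor-switching logic: one button cycles through
# the three identically framed room videos. Filenames are made up.
FLOOR_CLIPS = ["floor1.mp4", "floor2.mp4", "floor3.mp4"]

class FloorSwitcher:
    def __init__(self, clips):
        self.clips = clips
        self.index = 0  # start on the first floor

    def on_button(self):
        """Swap immediately to the next floor's video, wrapping around."""
        self.index = (self.index + 1) % len(self.clips)
        return self.clips[self.index]
```

Because the camera framing is identical across clips, only the interior changes on each swap.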
Result:
The effect was interesting in that it is not disorienting: the layout of the room stays the same and only the interior changes. It made for an interesting experience, which was later followed up by the Coffee Ritual project of the master class "Show and Tell".
Fireplace
What:
We wanted to see if it was possible to anchor a sound at a certain position, so that when the user turns, the sound changes and appears to stay in one place. The user would always hear the sound coming from the correct direction, wherever he/she is looking.
How:
In a video, using an AudioSource in Unity set to 3D mode, we placed the sound of a crackling fireplace at a certain position in the video (the electric heater visible in the video).
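Conceptually, what the 3D AudioSource does for us is pan the sound left or right depending on the angle between the viewing direction and the sound's position. The following is a simplified Python sketch of that idea (not Unity's actual spatialization), with angles in degrees and the function name invented for the example:

```python
import math

# Conceptual sketch of left/right panning for a fixed sound position:
# 0 degrees = the direction of the heater in the video.

def stereo_pan(source_yaw_deg, head_yaw_deg):
    """Return pan in [-1, 1]: -1 = fully left, 0 = centre, +1 = fully right."""
    # Angle of the source relative to the viewing direction, wrapped to [-180, 180).
    rel = (source_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    # A source 90 degrees to the right pans fully right, 90 degrees left fully left.
    return math.sin(math.radians(rel))
```

Looking straight at the source gives a pan of 0; turning the head left moves the sound to the right ear, matching what we observed with the heater.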
Result:
Wherever you looked, the sound stayed at the heater's position in the video, and you could hear it move from left to right depending on the user's direction of sight.
Expanding room (stitching)
What:
We wanted to see whether we could play with the videos by stitching one half of a video to half of another video. The idea was to make a small room look much bigger by placing the camera at the front and at the back of the room and joining the two halves along the room's longest side.
How:
First we set up the camera on one side of the room and recorded a video, then set it up on the other side and recorded another. Because the two half-rooms share the same walls at the seam, the videos could flow into each other as though the walls were extended.
Result:
The room looked much bigger (proportionally more than just double its size). The only issue we ran into was getting a decent stitch: when two half-videos are placed against each other, the touching edges must match exactly. If they don't, you see artifacts (crooked-looking walls, floors that don't line up). For this we needed a better stitching program than we had (a license would have had to be bought).
Rotating shot
What:
We wanted to see the effect on the user of rotating the virtual world around them while they are in it.
How:
We grabbed the camera with our hands and, while recording, turned it upside down and back up.
Result:
Very disorienting.
Controller on hand accuracy
What:
We wanted to see, on request, how accurately the controller could be taped to the user's hand, so that the user would be able to feel and grab certain things that appear in the virtual world.
How:
By taping the controller upside down onto the hand, and placing a bottle of Coke in the same place it appeared in the video.
Result:
Pretty accurate. Having the controller on the hand is unhandy (ideally something would detect the hands without a controller). It is also only useful for objects that cannot move, since the object in the video does not actually exist; it could be useful for feeling a certain surface that shows in the video. The haptic feeling does not really match what you see, though, since it does not look like you can touch anything in the video (the video is projected onto a sphere, i.e. a deformed 2D movie made to look 3D).