# Exercise 6 (Optional): Automating programming (VapaaLassi/TAMKUnity2024 GitHub Wiki)
This is an experimental exercise where you can start doing new audio programming tasks with the help of AI chatbots. I tested a few of them, and for basic audio logic all of them performed well enough. ChatGPT seemed to produce the shortest and least strange code.
You can create a new scene or a new project for this, or keep expanding the one from lesson 5. If you start a new one, here is the Unity package that contains a basic player controller.
Remember what you learned from programming on your own, and try to make things happen that weren't taught in the course. This is preparation for making your own projects. From this point on, you should start learning whatever you want to learn or whatever interests you.
Examples to try:
- Play a sound after a delay
- Play a sound every 5 seconds
- Play a sound right after another one finishes
- Crossfade from one piece of music to another
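For reference, all four tasks above can be handled with coroutines and a couple of `AudioSource` calls. The sketch below is one possible shape of a correct answer, not the only one a chatbot might give; the class and field names (`AudioExamples`, `clipA`, `clipB`, `musicA`, `musicB`) are made up for illustration, and you would assign the sources and clips in the Inspector:

```csharp
using System.Collections;
using UnityEngine;

public class AudioExamples : MonoBehaviour
{
    public AudioSource source;   // plays clipA/clipB
    public AudioClip clipA;
    public AudioClip clipB;

    public AudioSource musicA;   // currently playing music
    public AudioSource musicB;   // music to crossfade to

    void Start()
    {
        // 1) Play a sound after a delay (here: 2 seconds)
        source.clip = clipA;
        source.PlayDelayed(2f);

        // 2) Play a sound every 5 seconds
        StartCoroutine(PlayEvery(5f));
    }

    IEnumerator PlayEvery(float interval)
    {
        while (true)
        {
            yield return new WaitForSeconds(interval);
            source.PlayOneShot(clipA);
        }
    }

    // 3) Play clipB right after clipA finishes
    IEnumerator PlayInSequence()
    {
        source.clip = clipA;
        source.Play();
        yield return new WaitForSeconds(clipA.length);
        source.clip = clipB;
        source.Play();
    }

    // 4) Crossfade from musicA to musicB over 'duration' seconds
    IEnumerator Crossfade(float duration)
    {
        musicB.volume = 0f;
        musicB.Play();
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            float k = t / duration;
            musicA.volume = 1f - k;
            musicB.volume = k;
            yield return null;   // wait one frame
        }
        musicA.Stop();
        musicB.volume = 1f;
    }
}
```

If the chatbot's answer looks very different from this (for example, using `Invoke` or `InvokeRepeating` instead of coroutines), that is fine too; compare the approaches and pick the one you understand best.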
Try some of these and see if you can get the code to work as you expect. Create all the necessary objects in the scene. Note that chatbots often rely on OnTriggerEnter/OnCollisionEnter to play the sounds, so make sure you have set up the scene in a way that matches those scripts.
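As a concrete example of what that setup means: if a generated script uses `OnTriggerEnter`, the object carrying the script needs a Collider with "Is Trigger" enabled, and the moving object (usually the player) needs a Rigidbody. Many generated scripts also check for the tag `Player`, which you must set on the player in the Inspector. A typical shape (details vary from answer to answer):

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class TriggerSound : MonoBehaviour
{
    AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    void OnTriggerEnter(Collider other)
    {
        // Only react to the player; requires the player GameObject
        // to be tagged "Player" in the Inspector.
        if (other.CompareTag("Player"))
        {
            source.Play();
        }
    }
}
```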
If you get errors, try to fix them on your own first. If you get stuck, give your code and the error message back to the AI system and see if it can figure out what the error is. Or ask a human for help.