AI_LabReport_Week4 - TheEvergreenStateCollege/upper-division-cs-23-24 GitHub Wiki

AI Lab 4: Cloning Richard Weiss

Lab Directions found here


Q1 : Has listening to your chosen voice produced an emotional reaction for you? Say as much as you're comfortable.

Mostly disappointment - I was hoping that with the long recording of Richard's voice available, the result would sound more like him.

Q2 : Has it changed your ethical considerations that you wrote about at the beginning of the lab?

Only a little bit - Richard seemed a little hesitant to let me use his voice, but the result couldn't be mistaken for him. Of course, this doesn't mean it isn't possible for an AI-generated audio file to accurately resemble someone. I thought Dante's example, in which he used himself as a dataset, had a more recognizable output, but it still sounded robotic. As Dominic pointed out in a previous discussion, the AI can't reproduce natural cadence in its speech, so the result doesn't feel authentic enough to count as a dangerous impersonation.

Q3 : What would you better like to understand about the voice cloning process?

How would I clean it up? What is the effect of having more than one voice in a file? Given a good result, how much "bad data" would it take to make a recognizable impression unrecognizable? How much do the grammar and spelling of the input text affect the output?

📚 return to diary homepage... 📖
