Griffin AI HW 04
Backpropagation
- we want to reduce the cost function. For a single training example, the cost is the squared difference between the expected and the actual activation of each output neuron, summed over all the outputs.
Backpropagation is an algorithm for computing the gradient of the cost function; stepping in the negative-gradient direction moves us toward a minimum.
Example:
- notice that the cost is calculated for each output neuron, and the total cost for the example is the sum of those per-output costs (see the sketch below).
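As a concrete (hypothetical) illustration of that per-output cost, here is a minimal Python sketch of the quadratic cost for one training example; the activation and target values are made up, not from the lesson:

```python
import numpy as np

def quadratic_cost(output_activations, expected):
    """Cost for a single training example: square the difference between
    each output activation and its expected value, then sum over outputs."""
    return np.sum((output_activations - expected) ** 2)

# Hypothetical 10-way digit classifier whose target for this example is "2".
a_out = np.array([0.1, 0.05, 0.7, 0.02, 0.03, 0.1, 0.0, 0.0, 0.0, 0.0])
y = np.zeros(10)
y[2] = 1.0  # one-hot target for the digit 2

print(quadratic_cost(a_out, y))  # total cost = sum of the per-output costs
```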
Updating Output Activations: we can't update output activations directly, but we can update the weights and biases. Additionally, if an output neuron is way off, we want to make greater adjustments than if it is only slightly off.
- note that the lesson also lists changing the activations of the previous layer as an avenue; however, those activations can only be changed indirectly, through changes to the weights and biases that feed them.
Changing Bias:
for the example where the target is the digit 2, we would want to increase the bias of the output neuron for 2 and decrease the biases of all the other output neurons. - I'm having a hard time understanding why this works, but one way I visualize it is to go through the output neurons one at a time: if a neuron corresponds to 2, increase its bias; if it does not, decrease its bias. This is done for each training sample (see the sketch below).
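A minimal sketch of that intuition, assuming sigmoid output neurons and the quadratic cost above (the pre-activation values are made up): the bias gradient of each output neuron works out to 2(a - y)·σ'(z), so gradient descent pushes the bias of the target neuron up and the biases of all the other output neurons down.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# Hypothetical pre-activations for the 10 output neurons of a digit classifier.
z_out = np.array([-1.0, -2.0, 0.5, -1.5, -2.0, -1.0, -3.0, -2.5, -2.0, -1.5])
a_out = sigmoid(z_out)

y = np.zeros(10)
y[2] = 1.0  # this training example is a "2"

# Bias gradient for each output neuron: dC/db = 2 * (a - y) * sigma'(z).
grad_b = 2 * (a_out - y) * sigmoid_prime(z_out)

# Gradient descent moves each bias opposite its gradient, so the "2" neuron's
# bias goes up (+1) and every other output neuron's bias goes down (-1).
print(np.sign(-grad_b))
```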
Changing Weights:
weights have differing levels of influence based on their input neurons - importantly, increasing a weight connected to a strongly activated input neuron has a larger influence on the cost function than increasing a weight connected to a weakly activated one (see the sketch below).
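A minimal sketch of that point, under the same sigmoid/quadratic-cost assumptions and with made-up numbers: the gradient for each weight is proportional to the activation of the input neuron it comes from, so a weight from a strongly firing neuron has more leverage on the cost.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# One output neuron with two incoming weights (hypothetical values):
# one input neuron firing strongly, one barely firing.
a_prev = np.array([0.95, 0.05])
w = np.array([0.4, 0.4])
b = 0.0
y = 1.0  # this output neuron "should" be fully active

z = w @ a_prev + b
a = sigmoid(z)

# dC/dw_i = a_prev_i * sigma'(z) * 2 * (a - y):
# each weight's gradient scales with its input neuron's activation.
grad_w = a_prev * sigmoid_prime(z) * 2 * (a - y)
print(grad_w)  # the weight from the strongly firing neuron gets the bigger update
```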
Changing Activations: we can't set the previous layer's activations directly either; the only way to change them is through that layer's own weights and biases, which is what makes backpropagation recursive.
Gradient Descent is a way to reduce the cost function (find a minimum) by repeatedly nudging the weights and biases in the direction of the negative gradient. Remember that the cost function measures the difference between the expected output and the actual output (see the sketch below).
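A minimal sketch of a single gradient descent step, assuming the gradients have already been computed (e.g. by backpropagation) and using a hypothetical learning rate eta:

```python
def gradient_descent_step(weights, biases, grad_w, grad_b, eta=0.1):
    """Nudge the parameters a small step against the gradient, which is the
    direction that (locally) reduces the cost the fastest."""
    return weights - eta * grad_w, biases - eta * grad_b
```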
Stochastic Gradient Descent vs. (batch) Gradient Descent:
in stochastic gradient descent, the weights and biases are updated after each training example (or after each small, randomly chosen mini-batch),
whereas in (batch) gradient descent, the weights and biases are updated only after going through the entire set of training data (see the sketch below).
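A minimal sketch of the difference, assuming hypothetical helper functions compute_grad(params, x, y) (gradient for one example) and update_params(params, grad) (one descent step); neither loop is the course's code:

```python
def batch_gradient_descent(params, data, compute_grad, update_params):
    # (Batch) gradient descent: average the gradient over the WHOLE training
    # set, then update the weights and biases once per pass.
    grads = [compute_grad(params, x, y) for x, y in data]
    avg_grad = sum(grads) / len(grads)
    return update_params(params, avg_grad)

def stochastic_gradient_descent(params, data, compute_grad, update_params):
    # Stochastic gradient descent: update the weights and biases after EVERY
    # training example (in practice, after each small random mini-batch).
    for x, y in data:
        params = update_params(params, compute_grad(params, x, y))
    return params
```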
Backpropagation summed up:
we compute a cost based on the actual outputs vs. the expected outputs. From that we know how we want to adjust each output neuron, and for each neuron we can adjust the weights and biases associated with it. The same process is then applied recursively, layer by layer, moving backward through the network (a minimal end-to-end sketch is below).
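To tie the pieces together, here is a minimal end-to-end sketch (not the course's implementation) of backpropagation for a tiny one-hidden-layer sigmoid network with the quadratic cost, computing the gradients for a single made-up training example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def backprop_single_example(x, y, w1, b1, w2, b2):
    """Gradients of the quadratic cost for one example, for a network
    with one hidden layer of sigmoid neurons."""
    # Forward pass: remember each layer's pre-activation z and activation a.
    z1 = w1 @ x + b1
    a1 = sigmoid(z1)
    z2 = w2 @ a1 + b2
    a2 = sigmoid(z2)

    # Output layer: how the cost changes with the output pre-activations.
    delta2 = 2 * (a2 - y) * sigmoid_prime(z2)
    grad_w2 = np.outer(delta2, a1)  # weight gradients scale with input activations
    grad_b2 = delta2                # bias gradients

    # Propagate the "wish list" back through the weights to the hidden layer.
    delta1 = (w2.T @ delta2) * sigmoid_prime(z1)
    grad_w1 = np.outer(delta1, x)
    grad_b1 = delta1

    return grad_w1, grad_b1, grad_w2, grad_b2

# Hypothetical tiny network: 3 inputs, 4 hidden neurons, 2 outputs.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
w2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
x, y = np.array([0.5, 0.1, 0.9]), np.array([1.0, 0.0])

grads = backprop_single_example(x, y, w1, b1, w2, b2)
```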
HW 04:
Question 0: A. iii B. iv C. i D. ii
Question 1: 3.4185
Question 2:
Question 3: neurons that fire together are related and will be grouped together over time.
Question 4: It shows the changes to the weights of all the neurons "requested" by each training example. Additionally, we can adjust the biases.
Question 5: "A drunk stumbling quickly down a hill" and "making a small change to one law at a time, chosen by random groups of n people, until everyone in the country has been asked at least once".
Question 6: 12
Backpropagation Calculus
https://www.3blue1brown.com/lessons/backpropagation-calculus
I'm going to come back to this last.
Human Writing:
- How does this compare to the "unjust enrichment" lawsuit against the estate of the actor Peter Cushing above? The case with Peter Cushing seems pretty tame compared to the many nefarious things that people can do with this technology. I'm actually excited about the possibility of bringing back dead voices to create new things. I can see myself using this technology to create Skyrim mods: I obviously don't have the money to hire the original voice actors, so this technology would let me create additional voice lines using the original voices.
- What concerns does it bring up for you, if any? I'm concerned about how technology like this could be used to imitate world leaders to create propaganda. I've heard stories that the president of Ukraine had his likeness used to spread lies in Russia.
- You can try recording your voice for a few minutes and do zero-shot voice cloning in the lab. What thoughts or feelings do you have right now, before hearing your synthesized voice? It feels weird putting my voice out there. Not bad, but it's strange to think other people might hear my voice, and might even use it too.
- Should you have the right to synthesize your own voice? What are possible advantages or misuses of it? Yeah totally, it's your voice.
- Photographs, video and audio recordings, and now possibly holograms are changing the way we remember loved ones who have died. If generative AI allows us another way to interact with the personality of a person from the past, how does this compare with historical re-enactors, or movies depicting fictional events with real people? Either way, it's impossible to know if they actually said something. However, I'm sure people will be tricked because they aren't aware of this technology. TV and interviews are often already edited in ways that manipulate the intent of the speakers.