wshine ai hw3 - TheEvergreenStateCollege/upper-division-cs-23-24 GitHub Wiki

AI Homework 3

Technical Reading Notes

Deep Learning Book

  1. What is connectionism and the distributed representation approach? How does it relate to the MNIST classification of learning the idea of a circle, regardless of whether it is the top of the digit 9 or the top / bottom of the digit 8?

Connectionism is the idea that intelligent behaviour can emerge from a large number of simple computational units networked together. In neural networks I believe this is most easily seen in the distributed representation approach, where each input is represented by many features and each feature participates in representing many inputs. That is how a network can learn the idea of a circle once and reuse it whether the circle is the top of the digit 9 or the top or bottom of the digit 8.
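The shared-feature idea can be sketched in a toy two-layer network. All the numbers below are made up for illustration: one hidden unit plays the role of a "circle detector", and the output units for both 8 and 9 reuse its activation.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Hidden layer: row 0 is the hypothetical "circle" feature,
# row 1 is some other unrelated feature (weights are arbitrary).
W_hidden = np.array([[1.0, 1.0],
                     [1.0, -1.0]])
x = np.array([1.0, 1.0])      # a fake input where a circle is present
h = relu(W_hidden @ x)        # h[0] fires when a circle is present

# Output layer: the score for digit 8 and the score for digit 9
# both put large weight on the same shared circle feature h[0].
W_out = np.array([[2.0, 0.5],    # digit 8
                  [2.0, -0.5]])  # digit 9
scores = W_out @ h
```

Because both output rows weight `h[0]` heavily, improving the circle detector improves recognition of 8s and 9s at the same time, which is the distributed-representation point.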

  2. What are some factors that have led to recent progress in the ability of deep learning to mimic intelligent human tasks?

The ability to efficiently train deep neural networks, thanks to the strategy developed in 2006 called greedy layer-wise pretraining, along with the resources and ability to train a network on much, much larger datasets.
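A rough sketch of the greedy layer-wise idea: each layer is first trained on its own (here as a tiny linear autoencoder on the representation produced by the layers below it) before any end-to-end fine-tuning. The dataset, layer sizes, learning rate, and step counts are all arbitrary toy choices, not the book's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))   # toy dataset: 100 samples, 8 features
layer_sizes = [8, 6, 4]         # input -> hidden1 -> hidden2

def pretrain_layer(data, n_hidden, lr=0.01, steps=500):
    """Train one linear autoencoder on `data` by gradient descent on
    reconstruction error; return its encoder weights."""
    n_in = data.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))   # encoder
    V = rng.normal(scale=0.1, size=(n_hidden, n_in))   # decoder
    for _ in range(steps):
        H = data @ W            # encode
        R = H @ V               # reconstruct
        err = R - data
        V -= lr * H.T @ err / len(data)
        W -= lr * data.T @ (err @ V.T) / len(data)
    return W

# Greedy part: train each layer on the output of the previous ones.
reps = X
weights = []
for size in layer_sizes[1:]:
    W = pretrain_layer(reps, size)
    weights.append(W)
    reps = reps @ W             # representation fed to the next layer
# Afterwards the whole stack would be fine-tuned jointly with backprop.
```

The key point is the loop: each layer only ever sees the representation already learned below it, which is what made deep stacks trainable before better initializations and hardware arrived.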

  3. How many neurons are in the average human brain, versus the number of simulated neurons in the biggest AI supercomputer described in the book chapter? Now in the year 2024, how many neurons can the biggest supercomputer simulate? (You may use a search engine or an AI chat itself to speculate)

I did not see this example in the reading, but it does say that, at the rate networks are increasing, artificial neural networks won't have as many neurons as the human brain until 2050. The best source I could find said 530 billion; I have no idea how accurate that is, and the article was also published in 2019.

3Blue1Brown

  1. Why does the neural network, before you've trained it on the first input, output "trash", or something that is very far from the corresponding label?

     Before training, the weights and biases are just arbitrary values. The network needs to put its output through a cost function and then repeat this process for a massive number of inputs in order to optimize the weights and biases before it will start getting things correct.

  2. (1, 3, 4)

  3. weights = [(7, 13), (2, 13)] biases = [(1, 13), (1, 2)]

  4. D > C > A > B

  5. The rate at which the biases/weights are changed will differ: the learning rate determines how big a step to take along the gradient on each update, i.e. how much to weight each individual run through the network when adjusting the weights. With a higher learning rate you could reach the minimum faster, but you could also end up overshooting it.

  6. It is called stochastic because of the elements of randomness incorporated into it: each step uses a randomly sampled mini-batch rather than the whole dataset, so the path to the minimum is not a straight line and the outcome will likely vary between training runs.
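The descent described in answers 5 and 6 can be sketched on a 1-D toy loss. Here Gaussian noise stands in for the randomness of sampling mini-batches; the loss, learning rate, and step count are all illustrative choices, not anything from the video.

```python
import random

def grad(w):
    return 2 * (w - 3)    # exact gradient of the toy loss L(w) = (w - 3)^2

random.seed(0)
w = 0.0
learning_rate = 0.1       # a larger value converges faster but can overshoot

for step in range(200):
    noisy_grad = grad(w) + random.gauss(0, 0.5)  # "stochastic" gradient estimate
    w -= learning_rate * noisy_grad              # step down the noisy slope

# w ends up near the minimum at w = 3, but the path there wanders
# rather than following a straight line.
```

Raising `learning_rate` toward 1.0 in this sketch makes the updates jump back and forth across the minimum, which is the overshooting behaviour mentioned in answer 5.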

Human Writing

GPT-4 didn't seem to be much of an improvement over 3.5; it still made mistakes and presented false information, but it stuck to its initial statements more strongly rather than quickly backtracking from something it had just stated. If I were to use ChatGPT, I think it would mainly be for convenience and not so much for my development as a writer.

Framing is attempting to highlight something, or move focus away from something else, through the way messages are constructed. In journalism this could mean using certain word choices to give a negative or positive connotation to a particular idea or act.

System prompts seem to be a way of steering the behaviour of ChatGPT in a specific direction, which sounds a lot like framing. In the case of ChatGPT this sounds like a very useful concept for personal work or for help with development and writing, but I can also see how it could be used in a negative way. For example, does ChatGPT have system prompts by default? What are they?
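Mechanically, a system prompt is typically just a specially-marked message prepended to the conversation that a chat API sends to the model. A hypothetical sketch of that payload shape (the function name and model name here are placeholders, not a real API):

```python
def build_chat_payload(system_prompt, user_message, history=None):
    """Prepend the system prompt so it frames every turn of the chat."""
    messages = [{"role": "system", "content": system_prompt}]
    messages += history or []
    messages.append({"role": "user", "content": user_message})
    return {"model": "some-chat-model", "messages": messages}

payload = build_chat_payload(
    "You are a concise technical writing assistant.",
    "Summarize what framing means in journalism.",
)
# payload["messages"][0] is the system prompt; the model sees it
# before the user's message on every turn, which is what lets it
# quietly steer (or "frame") all of the model's responses.
```

Because the system message rides along invisibly with every request, the end user of a chat product generally never sees it, which is what makes the framing comparison apt.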

I think asking ChatGPT very technical questions could be useful for getting a better understanding of complex topics, but the user also needs to cross-reference what it tells them. I also think asking simple or overly broad questions like "how do I create {some x} to solve {problem y}" is probably not going to be very beneficial, though a more specific technical question might be appropriate. I have not tried this, but I think the other potential use case for ChatGPT is generating boilerplate code. This could help experienced developers but likely takes away from the learning of newer developers.
