Tyrese's Dev Diaries - TheEvergreenStateCollege/upper-division-cs-23-24 GitHub Wiki

In the legal saga dissected in the blog post, divergent viewpoints emerge over the interpretation of performers' rights in the context of Peter Cushing's estate. Advocates for Cushing's estate argue for an expansive interpretation of these rights, asserting that posthumous CGI recreations of his likeness in films like Rogue One: A Star Wars Story infringe upon the estate's control over his image and its entitlement to fair compensation. Conversely, opposing arguments, likely championed by the film production company or legal experts, might emphasize the precedence of contractual agreements signed during Cushing's lifetime, which potentially granted rights for such usage. They might stress the importance of upholding these agreements and the broader implications for the film industry's creative processes.

This scenario shares similarities with the debate surrounding the CGI resurrection of James Dean for an upcoming film. Both cases stir discussions about the ethical implications of digitally resurrecting deceased performers and the legal complexities surrounding the use of their likenesses without explicit consent. However, subtle differences may exist, such as the specifics of contractual arrangements or the degree of public interest in preserving the legacies of these iconic figures, shaping the legal arguments and public discourse in distinct ways.


Cybernetics: Studies system structure, function, and regulation, focusing on feedback, self-regulation, and adaptation in biological and artificial systems.

Relationship to Artificial Intelligence (AI): Provides foundational principles for feedback and self-regulation used in AI. AI uses these principles to create intelligent agents capable of learning and decision-making.

Ethics: Personal understanding: Principles of right and wrong, fairness, justice, and responsibility. Essay's description: Evaluates moral implications of actions in technology, emphasizing consequences, welfare, and responsibility.

Cybernetics of Cybernetics: Examines feedback mechanisms, control processes, and communication within the field to refine and improve theories and practices.

Non-Verbal Experiences with Large Language Models (LLMs): Current: Visual data representations, interactive simulations, emotional tone through language. Future: Potential integration with virtual and augmented reality, advanced haptic feedback, improved multimodal capabilities.

Impact of Cybernetics on AI: Enhances AI through robust feedback mechanisms, adaptive systems, and better communication and control.

Impact of AI on Cybernetics: Machine learning and neural networks give cybernetics new tools for modeling, simulating, and implementing feedback and adaptive control in complex systems.


Cybernetics, AI, and Ethical Data Sourcing

This exploration discusses the relationship between cybernetics and artificial intelligence (AI), emphasizing ethical data sourcing for training AI models. A proposed philosophy advocates for ethical considerations, data diversity, and relevance, ensuring privacy, consent, and reduced bias in data collection. The importance of diverse datasets is highlighted to prevent biased AI predictions, drawing from past experiences with inadequate datasets.

David Deutsch’s philosophy on AI focuses on creativity and problem-solving, suggesting AI should foster innovation. While AI can produce content that seems creative, true creativity involves generating novel ideas independently of existing data, a capability current AI lacks.

Creativity is deemed essential for AI in solving complex problems and driving innovation. However, unchecked creative freedom can lead to harmful technologies and misinformation, necessitating a balance with ethical constraints.

In summary, ethical, diverse data sourcing and fostering creativity in AI, guided by principles of responsibility and societal impact, are crucial for the positive advancement of AI technology.


GPT (Generative Pre-trained Transformer) is a specific type of large language model (LLM). While GPT refers to a particular architecture developed by OpenAI, LLM is a broader term encompassing any large-scale model trained on extensive language data. The labeled training pairs in "InstructGPT" are most similar to Quora posts and their replies, where structured question-answer pairs help train the model to generate relevant responses. On platforms like Stack Overflow, Reddit, and Quora, the quality of multiple answers to a single question can be ranked by community votes or expert validation.
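
For illustration only, such instruction-response pairs and ranked answers might be sketched like this in JavaScript; the field names are invented for this sketch, not OpenAI's actual data schema:

```javascript
// Hypothetical instruction-tuning examples, loosely modeled on Q&A sites.
// Field names are made up for illustration, not OpenAI's actual format.
const labeledPairs = [
  {
    prompt: "Explain what a pull request is.",
    response: "A pull request proposes changes from one branch so others can review and merge them.",
  },
  {
    prompt: "What does HTTP 404 mean?",
    response: "The server could not find the requested resource.",
  },
];

// Like community votes on Stack Overflow or Quora, several candidate answers
// to the same question can be ranked from best to worst.
const rankedAnswers = {
  prompt: "What does HTTP 404 mean?",
  candidates: [
    { text: "The server could not find the requested resource.", rank: 1 },
    { text: "Your internet is broken.", rank: 2 },
  ],
};

console.log(labeledPairs.length, rankedAnswers.candidates[0].text);
```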

The transformer architecture that GPT builds on was originally designed for machine translation, aiming to improve text translation between languages using attention mechanisms. In Raschka's text, deep learning typically involves neural networks with more than three layers, which qualifies the MNIST classifier as a deep learning neural network.
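
As a rough sketch of the "more than three layers" rule of thumb, here is a small fully connected MNIST-style classifier in TensorFlow.js; using TensorFlow.js is an assumption for illustration, since Raschka's text builds its models in PyTorch:

```javascript
// Minimal sketch with TensorFlow.js (npm i @tensorflow/tfjs). This only
// illustrates the layer-count idea, not the code from Raschka's text.
const tf = require("@tensorflow/tfjs");

const model = tf.sequential();
// 784 inputs = a 28x28 MNIST image flattened into one vector.
model.add(tf.layers.dense({ inputShape: [784], units: 128, activation: "relu" }));
model.add(tf.layers.dense({ units: 64, activation: "relu" }));
model.add(tf.layers.dense({ units: 32, activation: "relu" }));
// 10 outputs, one per digit class.
model.add(tf.layers.dense({ units: 10, activation: "softmax" }));

model.compile({ optimizer: "adam", loss: "categoricalCrossentropy", metrics: ["accuracy"] });
model.summary(); // four weight layers -> "deep" by the more-than-three-layers rule of thumb
```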

Pre-training is more expensive and time-consuming than fine-tuning. Pre-training uses large amounts of unlabeled data, while fine-tuning is done on meticulously labeled data. Fine-tuning can be performed by different individuals than those who conducted the pre-training. Pre-training creates a general-purpose model, and fine-tuning specializes it for specific tasks, usually using less data. Pre-training creates a model from scratch, while fine-tuning modifies an existing model.

GPTs predict the next word in a sequence using the existing words in the sentence, the user prompt, the system prompt, and the trained model's weights. Tasks like predicting the next word, classifying items, and answering questions share foundational elements, with next-word prediction being the general-purpose task that supports the other functions.
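
To make the idea concrete, here is a toy next-word predictor built from bigram counts; it only illustrates "predict the next word from the words so far" and has nothing to do with how a real transformer computes its predictions:

```javascript
// Toy next-word predictor: count which word follows which in a tiny corpus,
// then predict the most frequent follower of the prompt's last word.
const corpus = "the cat sat on the mat the cat sat by the door the cat ate the fish";
const words = corpus.split(" ");

const nextCounts = {};
for (let i = 0; i < words.length - 1; i++) {
  const cur = words[i];
  const next = words[i + 1];
  nextCounts[cur] = nextCounts[cur] || {};
  nextCounts[cur][next] = (nextCounts[cur][next] || 0) + 1;
}

function predictNext(prompt) {
  const last = prompt.trim().split(" ").pop();
  const followers = nextCounts[last];
  if (!followers) return null;
  return Object.entries(followers).sort((a, b) => b[1] - a[1])[0][0];
}

console.log(predictNext("the cat")); // "sat" (follows "cat" twice, "ate" only once)
```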

The encoder in the GPT architecture converts words into high-dimensional vectors, similar to feature extraction in neural networks, while the decoder translates feature vectors back into words. Zero-shot learning involves using a model for tasks it hasn't been explicitly trained on, such as using ChatGPT to answer new questions without prior examples. It differs from few-shot learning, which provides a small number of examples, and many-shot learning, which uses a large number of examples for robust learning.
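
The zero-shot versus few-shot difference can be shown side by side; the prompts below are invented examples for illustration, not taken from the reading:

```javascript
// The same sentiment task phrased zero-shot vs. few-shot.
const zeroShotPrompt = `Classify the sentiment of this review as positive or negative:
"The soundtrack was beautiful and the story kept me hooked."`;

const fewShotPrompt = `Classify the sentiment of each review as positive or negative.

Review: "Terrible pacing, I fell asleep." -> negative
Review: "Loved every minute of it!" -> positive
Review: "The soundtrack was beautiful and the story kept me hooked." ->`;

// Zero-shot gives no solved examples; few-shot gives a handful; many-shot
// would extend the list to hundreds or thousands of labeled examples.
console.log(zeroShotPrompt.length, fewShotPrompt.length);
```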

OpenAI's GPT-3 model has 175 billion parameters. LLMs do not operate directly on words but convert them into numerical embeddings for processing. Embeddings are numerical representations of text, with the number of dimensions reflecting how much semantic detail they can capture. Smaller GPT-3 variants use embedding dimensions such as 1,024 or 2,048, while the largest 175-billion-parameter model uses 12,288. The steps to prepare a dataset for LLM training are breaking the text up into tokens, assigning unique token IDs, converting token IDs to embeddings, and adding position embeddings to the token embeddings.
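
A minimal sketch of those four preparation steps at toy scale; real pipelines use byte-pair encoding and learned embedding matrices, so the whitespace split and random vectors here are placeholders:

```javascript
// Toy version of the four steps: tokenize, assign token IDs, look up token
// embeddings, add position embeddings.
const text = "the quick brown fox jumps over the lazy dog";

// 1. Break the text up into tokens (whitespace split instead of BPE).
const tokens = text.split(" ");

// 2. Assign each unique token an ID.
const vocab = {};
tokens.forEach((t) => {
  if (!(t in vocab)) vocab[t] = Object.keys(vocab).length;
});
const tokenIds = tokens.map((t) => vocab[t]);

// 3. Convert token IDs to embedding vectors (random stand-ins, dimension 4).
const dim = 4;
const embeddingTable = Object.keys(vocab).map(() =>
  Array.from({ length: dim }, () => Math.random())
);
const tokenEmbeddings = tokenIds.map((id) => embeddingTable[id]);

// 4. Add a position embedding to each token embedding.
const positionTable = tokens.map(() =>
  Array.from({ length: dim }, () => Math.random())
);
const inputEmbeddings = tokenEmbeddings.map((vec, pos) =>
  vec.map((v, i) => v + positionTable[pos][i])
);

console.log(tokenIds);           // e.g. [0, 1, 2, 3, 4, 5, 0, 6, 7]
console.log(inputEmbeddings[0]); // a 4-dimensional vector fed to the model
```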


Screenshot 2024-06-12 235028

Screenshot 2024-06-12 235034

Screenshot 2024-06-12 235044

![Screenshot 2024-06-13 000754](https://github.com/TheEvergreenStateCollege/upper-division-cs/assets/23390998/4d434223-46dd-4ae2-9c2b-8e693143b0bb)

Screenshot 2024-06-13 000806


Music Project

Dev log day 1: The bugs I ran into seem to be a combination of the editor and the syntax used in the video. The directory with the src links doesn't seem to be recognized within CodePen. The fix turned out to be that the script tag loading Ace wasn't closed properly, so the resources weren't showing up on the live server. I finally got the editor working by having it call the Ace library. Screenshot 2024-06-17 135500
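
For reference, a minimal sketch of that fix, assuming a CDN build of Ace and an element with id="editor" (both assumptions, not necessarily what the project uses):

```javascript
// Sketch of the fix. In the HTML, the tag that loads Ace must be closed with
// a full </script> (not self-closed) before this code runs, e.g.:
//
//   <script src="https://cdnjs.cloudflare.com/ajax/libs/ace/1.32.6/ace.js"></script>
//   <div id="editor">// type code here</div>
//
// Then the editor can be initialized:
const editor = ace.edit("editor");             // attach Ace to the div
editor.session.setMode("ace/mode/javascript"); // syntax highlighting
editor.setTheme("ace/theme/monokai");          // optional theme
```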

Dev log day 2: I restarted the code, so now the play button is at the bottom but the editor is still there, and I added an onclick handler to the element with id="go" to make it become active. I also had trouble getting the sound to work: the method shown in the video for creating the Tone.js synth wasn't being called correctly, and .toDestination() is a better method.
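
A minimal sketch of that wiring, assuming Tone.js is already loaded and the play button has id="go" (the handler body is my guess at the intent):

```javascript
// Wire the #go button to a Tone.js synth routed straight to the speakers
// with .toDestination().
const synth = new Tone.Synth().toDestination();

document.getElementById("go").addEventListener("click", async () => {
  await Tone.start();                     // browsers require a user gesture before audio starts
  synth.triggerAttackRelease("C4", "8n"); // play middle C for an eighth note
});
```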

Dev log day 3: It seems like the sampler won't work. My theory is that either one of the calls like start() isn't working, the connection from the sampler through the volume to the destination isn't established, or the links don't point to the correct library. Screenshot 2024-06-17 144009

Dev log day 4: The bugs I ran into today were with passing the URL strings into the p1 function. I looked online and it seems that insecure http links won't work, plus the sample library has to be one that works specifically with Tone.js. I tried the updated code for the sampler, which was a quick fix, but some links still weren't loading consistently. My next attempt was to create each individual variable with var and connect them that way, but it still won't resolve.
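
For comparison, here is a sampler sketch using the https-hosted piano samples from the Tone.js documentation as example URLs; the note choices are arbitrary and this isn't the project's actual sample set:

```javascript
// Sampler sketch: samples must be served over https and be audio files the
// browser can decode (mp3/ogg/wav), not links to a library page.
const sampler = new Tone.Sampler({
  urls: {
    C4: "C4.mp3",
    "D#4": "Ds4.mp3",
    A4: "A4.mp3",
  },
  baseUrl: "https://tonejs.github.io/audio/salamander/",
}).toDestination();

// Wait until all samples are fetched before triggering notes.
Tone.loaded().then(() => {
  sampler.triggerAttackRelease(["C4", "E4", "G4"], "1n");
});
```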

Screenshot 2024-06-18 211859