Notes on Random Topics - doraithodla/notes GitHub Wiki
Teaching
Teaching Python with Turtle - 6 sessions
Reading List
"Lightrail is an open-source AI command bar that seeks to simplify software development. It is designed to be a general-purpose, extensible platform for integrating LLM-based tooling into engineering/development workflows. It does this by focusing on three components of working with LLMs: providing sources of context, constructing effective prompts, and interfacing with external services. Lightrail accomplishes these goals through an extension framework called Tracks. Tracks can provide Tokens, which are sources of dynamically generated context for a prompt, as well as Actions, which are functions that can modify a prompt, send it to an LLM, and use the LLM's response to execute functionality. All Lightrail functionality is delivered via the Tracks system, so a plain install of the Lightrail Core is essentially nonfunctional. Therefore, Lightrail's default installation includes a few commonly used tracks (Chat, VSCode, Chrome). More tracks are in development and will be installable through the Lightrail application."
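The Track/Token/Action split described above can be sketched as a plugin registry. This is a hypothetical illustration of the concept only; the class names, method signatures, and the `{track.token}` placeholder syntax are all invented for this sketch and are not Lightrail's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of a Track-style extension framework: a Track bundles
# Tokens (dynamic context generators) and Actions (prompt -> result functions).
# Names and signatures are illustrative, not Lightrail's real API.

@dataclass
class Track:
    name: str
    tokens: dict[str, Callable[[], str]] = field(default_factory=dict)
    actions: dict[str, Callable[[str], str]] = field(default_factory=dict)

class Core:
    """A bare core is essentially nonfunctional until Tracks are installed."""

    def __init__(self) -> None:
        self.tracks: dict[str, Track] = {}

    def install(self, track: Track) -> None:
        self.tracks[track.name] = track

    def expand(self, prompt: str) -> str:
        # Replace {track.token} placeholders with dynamically generated context.
        for track in self.tracks.values():
            for name, gen in track.tokens.items():
                prompt = prompt.replace("{" + f"{track.name}.{name}" + "}", gen())
        return prompt

    def run(self, track_name: str, action: str, prompt: str) -> str:
        track = self.tracks[track_name]
        return track.actions[action](self.expand(prompt))

# Usage: a toy editor-style track providing one token and one action.
core = Core()
core.install(Track(
    "editor",
    tokens={"selection": lambda: "def add(a, b): return a + b"},
    actions={"explain": lambda p: f"[LLM would receive]: {p}"},
))
out = core.run("editor", "explain", "Explain this code: {editor.selection}")
print(out)
```

The point of the sketch is the inversion: the core only expands context and dispatches, while all actual functionality lives in installed Tracks, which matches the note that a plain Core install does nothing by itself.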
Currently reading: https://axleos.com/writing-about-writing-about-programming/
Notes:
Writing about writing about programming
- First-order productivity: programming itself, i.e. creating and interacting with a system. This is great, but it isn't discoverable for others: there's generally a high bar to entry for comprehending another person's work when it's expressed solely as a structured program.
- Second-order productivity: writing about programming, or about systems. This kind of productivity is generally more accessible and distributable, and forms most of the content of the author's blog.
- Third-order productivity: writing software to help produce the writing about programming.
Notes from Cambrian explosion of generative models
- Hugging Face /models/trending
- No moat for foundation models
- Multimodal workflows become common
From Twitterverse: There is a hierarchy of training paradigms:
- Architectural: uses general properties of the data to direct the architecture of the learning system.
- Self-supervised: can use lots of (raw) data to pre-train a large system to represent the data in a task-independent way.
- Supervised: requires labeled data and provides less information per sample, hence appropriate for smaller networks or large pre-trained networks.
- Reinforcement: so data-inefficient that it can only be used for fine-tuning.

Customarily, some of those steps are fully automatic, while others involve human intervention, but all of them are types of search and optimization.
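The pre-train-then-fine-tune pattern in the hierarchy can be made concrete with a toy sketch. Below, a PCA projection learned from raw, unlabeled data stands in for the self-supervised step (a deliberate simplification; real self-supervised pre-training would use a learned model), and a simple threshold classifier fit on a handful of labels stands in for the supervised step. All data, sizes, and thresholds here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 400 points in two 2-D Gaussian clusters; only 8 carry labels.
X = np.concatenate([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])
labeled = np.arange(0, 400, 50)  # 8 evenly spaced indices, 4 per class

# "Self-supervised" stand-in: learn a task-independent 1-D feature from
# raw data alone (top principal direction via SVD; no labels used).
Xc = X - X.mean(axis=0)
w = np.linalg.svd(Xc, full_matrices=False)[2][0]
z = Xc @ w

# Supervised step: fit a midpoint-threshold classifier on the few labels.
mu0 = z[labeled][y[labeled] == 0].mean()
mu1 = z[labeled][y[labeled] == 1].mean()
thresh = (mu0 + mu1) / 2
pred = (z > thresh).astype(float) if mu1 > mu0 else (z < thresh).astype(float)

accuracy = (pred == y).mean()
print(f"accuracy with 8 labels: {accuracy:.2f}")
```

The sketch illustrates the information-per-sample point in the notes: the unlabeled bulk of the data does most of the representational work, so only a small labeled set is needed for the supervised head.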