Read Log - doraithodla/notes GitHub Wiki
- Reading List
Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, Basic Books, ISBN 978-0786723317
-
Finished reading, using Copilot, a link from Search Assistant
-
Another topic, prompted by the Career Assistant project, is Z-BERT-A: a zero-shot pipeline for unknown intent detection, and Question Answering over Docs
-
Today's topic is common sense knowledge and reasoning. I was listening to the podcast NLP Highlights, Episode 95
Here are a bunch of things to research
- Common sense reasoning #CSR
- Knowledge representation
- Long tail problems
- The impedance mismatch between humans and machines
- CSR and CSK occur at all levels of NLP. Some are easier to solve than others
- Datasets - Serial IQA, IQA, NLI, Abductive reasoning
- Has the system learned enough patterns (subject comprehension)
- declarative and observed knowledge, knowledge integration
- explicit and implicit knowledge
- Taxonomy of knowledge
- compositional phrases
- atomic knowledge
- What are the research questions in CSR?
- Are there theories of common sense knowledge
- When do children obtain what type of knowledge? Object permanence, developmental psychology
- Task specific knowledge
5/16
-
One of the problems the career assistant will face is the skill hierarchy. Not all job descriptions use the same terms for skills: some use high-level terms (AI or DS) and some use lower-level terms (for example, ML Engineer). One way to solve the problem is to have a skill map or, better still, a skill graph. A skill graph contains skill terms interconnected with one another.
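A minimal sketch of the skill-graph idea: an adjacency map from high-level terms to lower-level ones, plus a traversal that expands a term into everything reachable below it. The skill names and edges here are illustrative assumptions, not a real taxonomy.

```python
# Sketch of a skill graph as an adjacency map; edges point from
# higher-level skill terms to lower-level ones. Made-up example data.
from collections import deque

SKILL_GRAPH = {
    "AI": ["ML", "NLP"],
    "DS": ["ML", "Statistics"],
    "ML": ["ML Engineer", "Deep Learning"],
    "NLP": ["Intent Detection"],
}

def expand_skill(skill, graph=SKILL_GRAPH):
    """Return the skill plus every lower-level skill reachable from it."""
    seen, queue = {skill}, deque([skill])
    while queue:
        for child in graph.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen
```

With something like this, a job description that says only "AI" can still be matched against candidates tagged with the lower-level terms it expands to.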
-
We may also use topic modeling (or hierarchical topic modeling) to interconnect these terms and organize them. They are not strictly hierarchies, and they are not synonyms. A term that may be more appropriate for them is neighbors, or nearest neighbors.
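The "nearest neighbors" framing can be sketched with cosine similarity over term embeddings. The 3-d vectors below are made-up stand-ins for real embeddings, purely to show the mechanics.

```python
# Toy sketch: rank skill terms by cosine similarity to find "neighbors".
# The vectors are invented placeholders, not real embeddings.
import math

EMBEDDINGS = {
    "AI":          [0.9, 0.1, 0.2],
    "ML Engineer": [0.8, 0.2, 0.3],
    "Marketing":   [0.1, 0.9, 0.1],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest(term, k=1):
    """Return the k terms most similar to `term`, best first."""
    others = [(t, cosine(EMBEDDINGS[term], v))
              for t, v in EMBEDDINGS.items() if t != term]
    return [t for t, _ in sorted(others, key=lambda p: -p[1])][:k]
```

The same idea scales up by swapping the toy dict for vectors from a real embedding model.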
-
Came upon a DeepPavlov article on intent selection. Well explained, but the article was old. So I started looking at DeepPavlov.
-
I have been thinking about a GitHubBot on and off. It looks as if I may need to use one; perhaps it is time to start. While noting it down in Obsidian, I also jotted down a series of explorers: FileExplorer, GDriveExplorer, EmailExplorer.
5/15
Professor of care ethics Els van Wijngaarden and colleagues in the Netherlands listened to a group of older people who were not seriously ill, yet felt a yearning to end their lives. The key issues they identified in such people were: aching loneliness, pain associated with not mattering, struggles with self-expression, existential tiredness, and fear of being reduced to a completely dependent state.
5/13
5/11
5/3
5/2
- Generating structured JSON from language models
- tools that let you chat with your documents @chatdoc_ai @ChatPDF @HumataAI
5/1
AI Companies
-
playgroundai.com, happening.ai
-
#TIL Theory of mind tests (from an AI Breakfast tweet about GPT-4)
Theory-of-mind (ToM) tasks, which require understanding agents' beliefs, goals, and mental states, are essential for common-sense reasoning involving humans, making it crucial to enhance LLM performance in this area. This study measures the ToM performance of GPT-4 and three GPT-3.5 variants (Davinci-2, Davinci-3, GPT-3.5-Turbo), and investigates the effectiveness of in-context learning in improving their ToM comprehension https://arxiv.org/ftp/arxiv/papers/2304/2304.11490.pdf
News Pinecone has raised $100M in Series B funding at a $750M valuation. "We're on a mission to provide long-term memory for AI." (via a Pinecone tweet)
VCs in AI
- @a16z, with participation from
- @ICONIQGrowth
- @MenloVentures, and
- @Wing_VC.
Funds (ETFs): four of the largest AI-focused ETFs:
- Global X Artificial Intelligence & Technology ETF (AIQ),
- Global X Robotics & Artificial Intelligence ETF (BOTZ),
- First Trust Nasdaq Artificial Intelligence and Robotics ETF (ROBT) and
- Ark Autonomous Technology & Robotics ETF (ARKQ).
Watchlist
4/24
- Start of NLP course
- AIML News
4/23
- Started reading the NLTK Cookbook
4/22
- Trip to Coimbatore
4/17
- Reading the book Natural Language Processing in Action: Understanding, Analyzing, and Generating Text with Python
4/16
- Meta-Prompt, by Noah Goodman, for building self-improving agents. It is written in LangChain.
Meta-Prompt is a simple self-improving language agent that reflects on interactions with a user and modifies its own instructions based on its reflections. The only thing that is fixed about the agent is the meta-prompt, which is an instruction for how to improve its own instructions.
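That loop can be sketched in a few lines. The `run_agent` and `reflect` functions below are illustrative stubs (a real version would call an LLM), not the actual Meta-Prompt code; only the overall shape — fixed meta-prompt, mutable instructions — follows the description above.

```python
# Sketch of the Meta-Prompt idea: after each interaction the agent
# rewrites its own instructions; only the meta-prompt stays fixed.
META_PROMPT = "Critique the last interaction and improve the instructions."

def run_agent(instructions, user_input):
    # Stub: a real implementation would call an LLM with `instructions`.
    return f"[answer to {user_input!r} under: {instructions}]"

def reflect(instructions, transcript):
    # Stub: a real implementation would send META_PROMPT plus the
    # transcript to an LLM and get back revised instructions.
    return instructions + " Be more concise."

instructions = "You are a helpful assistant."
for user_input in ["What is NLP?", "Summarize this doc."]:
    transcript = run_agent(instructions, user_input)
    instructions = reflect(instructions, transcript)
```

After each turn the instructions have been edited by the reflection step, while the meta-prompt itself never changes.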
For a description of Meta-Prompt, see Noah's blog post.
4/15
- Natural Language Processing: Python and NLTK
- Modern Python Cookbook by Packt
- Natural Language Processing in Action: Understanding, Analyzing, and Generating Text with Python, Cole Howard and Hobson Lane
4/12
Ecosystem Graphs What are the latest foundation models? Who builds them and where are they used downstream? What are the general trends over time? We hope that ecosystem graphs will be a useful resource for researchers, application developers, policymakers, and the public to better understand the foundation models ecosystem.
To explore the ecosystem, check out the website or read the paper.
- https://openai.com/blog/new-and-improved-embedding-model/
- Chain of Thought Prompting Elicits Reasoning in Large Language Models
- Entailment as Robust Self-Learner https://arxiv.org/pdf/2305.17197.pdf?fname=cm&font=TypeI
- An important metric to evaluate large language models.
- LLMs.md https://gist.github.com/yoavg/59d174608e92e845c8994ac2e234c8a9 - Some remarks on large language models, by Yoav Goldberg, January 2023
- NLP in 5 minute- Explainer video - https://www.youtube.com/watch?v=CMrHM8a3hqw
- Digital Marketing from Simply Learn https://www.youtube.com/watch?v=bixR-KIJKYM
NLP with Deep Learning - Manning
- Do a better job of building systems that communicate better
- GPT Tokenizers - https://simonwillison.net/2023/Jun/8/gpt-tokenizers/
What should you build? "You can tell they really need it because they are willing to use your crappy first version of the product."
- This also applies when you decide which features to implement. What's the best way to figure out if someone needs your product? Get it in the hands of users as quickly as possible. Ask someone to use it. Try to sell whatever you have to a company. If nobody wants what you built, you will find out pretty quickly.
In the end, the origin of every great startup idea is unique; some combination of insight, luck, timing and serendipity.
From https://daedtech.com/how-to-keep-your-best-programmers/
three motivations for departure:
- Frustration with the inversion of meritocracy (“organization stupidities”)
- Diminishing returns in mutual value of the work between programmer and organization
- Simple boredom
To this list I’m going to add a few more things that were either implied in the articles above or that I’ve experienced myself or heard from coworkers:
- Perception that current project is futile/destined for failure accompanied by organizational powerlessness to stop it
- Lack of a mentor or anyone from whom much learning was possible
- Promotions a matter of time rather than merit
- No obvious path to advancement
- Fear of being pigeon-holed into unmarketable technology
- Red-tape organizational bureaucracy mutes positive impact that anyone can have
- Lack of creative freedom and creative control (aka “micromanaging”)
- Basic philosophical differences with majority of coworkers