Proposal review
ChatGPT as a Reviewer – By prompting an LLM to “act as an NSF review panelist” or “act as a critical reviewer”, users can get surprisingly detailed feedback on a draft proposal. The AI can highlight unclear assumptions, missing citations, or sections that need more detail, emulating the perspective of a peer reviewer. Researchers have found this useful as a form of automated peer review that is available on-demand.
- For instance, one ResearchGate tool called “Review My Paper” was built on GPT-4 to provide feedback on manuscripts across various parameters (Review My Paper - An AI tool - ResearchGate). Similar prompts can be used for proposals, essentially giving you a ruthless but fast reviewer. While not infallible, this approach can catch many issues in logic or presentation.
- Example prompt (used in the sketch below): “Identify weaknesses or unclear points in the above text and suggest improvements.”
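As a minimal sketch of automating this, assuming the OpenAI Python SDK and an illustrative model name, a reviewer-persona prompt can be wrapped in a small helper:

```python
# Minimal sketch: prompting an LLM to act as a critical grant reviewer.
# Assumes the OpenAI Python SDK; the model name and rubric wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_draft(draft_text: str) -> str:
    """Return reviewer-style feedback on a proposal draft."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[
            {"role": "system",
             "content": ("You are a critical NSF review panelist. Point out unclear "
                         "assumptions, missing citations, and sections that need more detail.")},
            {"role": "user",
             "content": ("Identify weaknesses or unclear points in the following proposal "
                         "draft and suggest improvements:\n\n" + draft_text)},
        ],
    )
    return response.choices[0].message.content

# Example: print(review_draft(open("draft.txt").read()))
```

The system message carries the reviewer persona; swapping it out (e.g., for a methodology-focused reviewer) changes the lens of the feedback.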
Enago's AI Reviewer Tools – Enago (a scientific editing company) has developed AI tools to assist journal peer reviewers, which are also applicable to proposals. These tools can automatically screen for common problems: e.g., missing sections, lack of clarity in the aims, grammatical issues, even ethical issues or possible plagiarism (6 Assisted AI Tools for Peer Reviewers - Enago). A reviewer pressed for time might use such a tool to get a quick initial assessment. For proposal writers, running a draft through similar checks can help ensure no key requirement has been overlooked and no text has been inadvertently copied.
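A rough, illustrative pre-check in this spirit (not Enago's actual tool) might simply scan a draft for required sections before any deeper review; the section names below are assumptions:

```python
# Illustrative rule-based pre-check (not Enago's tool): flag required sections
# that never appear in the draft. The section names are assumptions.
import re

REQUIRED_SECTIONS = ["abstract", "specific aims", "methodology",
                     "broader impacts", "budget justification", "references"]

def screen_proposal(text: str) -> list[str]:
    """Return required sections that appear to be missing from the draft."""
    lowered = text.lower()
    return [s for s in REQUIRED_SECTIONS
            if not re.search(r"\b" + re.escape(s) + r"\b", lowered)]

# Example:
# missing = screen_proposal(open("draft.txt").read())
# print("Possibly missing sections:", missing or "none")
```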
Classifying Proposals and Selecting Reviewers
Funding agencies have also started applying AI to route submissions. In China, for example, AI has been used to classify grant proposals and select suitable reviewers for them (Artificial intelligence is selecting grant reviewers in China), aiming to reduce the workload of assigning proposals to the right experts. While this is about routing rather than scoring, it is a step toward automating parts of the review process. By analyzing the text, AI can predict which discipline or program a proposal fits best, or flag whether it should also undergo an ethics review.
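A minimal routing sketch, assuming a handful of past proposals with known program assignments and using a plain TF-IDF classifier (a real system would rely on richer features or embeddings):

```python
# Minimal routing sketch: train on past proposals with known program assignments,
# then predict the best-fitting program for a new draft. Texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_texts = ["... text of past proposal 1 ...", "... text of past proposal 2 ..."]
past_programs = ["Computer Systems", "Environmental Biology"]  # assigned programs

router = make_pipeline(TfidfVectorizer(stop_words="english"),
                       LogisticRegression(max_iter=1000))
router.fit(past_texts, past_programs)

new_proposal = "... draft text of the new proposal ..."
print("Suggested program:", router.predict([new_proposal])[0])
```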
Automated Score Predictors
Researchers have experimented with systems that attempt to score or evaluate proposal quality using machine learning. For example, studies have tried to predict grant success from text features or bibliometric indicators (Machine learning in scientific grant review: algorithmically predicting ...). One approach trained models on past proposals with known outcomes to see whether AI can learn what a successful proposal looks like (Machine learning in scientific grant review: algorithmically predicting ...). Results so far indicate this is a hard problem – success depends on many intangible factors – and models trained only on past text have difficulty outperforming trivial baselines ([2106.10700] On predicting research grants productivity - arXiv). However, simpler proxy metrics can be automated: readability scores, checking a proposal's alignment with the funder's stated criteria, coverage of required sections, and so on. Some tools (like Grantable, mentioned above) incorporate a checklist that the AI uses to flag any criterion that isn't adequately addressed.
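As an illustration of such proxy metrics, a short script can compute readability scores and flag funder criteria a draft never mentions; this assumes the `textstat` package, and the criteria list stands in for a real solicitation's review criteria:

```python
# Proxy-metric sketch: readability scores plus a crude check that the funder's
# stated criteria are at least mentioned. Requires the `textstat` package;
# the criteria list is an assumption standing in for a real solicitation.
import textstat

FUNDER_CRITERIA = ["intellectual merit", "broader impacts",
                   "evaluation plan", "data management"]

def proxy_metrics(text: str) -> dict:
    lowered = text.lower()
    return {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "criteria_not_mentioned": [c for c in FUNDER_CRITERIA if c not in lowered],
    }

# Example: print(proxy_metrics(open("draft.txt").read()))
```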
Mock Panel Simulation
An intriguing concept is using multiple AI agents to simulate a review panel discussion of a proposal. Each agent can be prompted to take on a persona (e.g., Reviewer A who loves theory, Reviewer B who is a stickler for methodology), critique the proposal, and even debate the other reviewers' points. While largely experimental, such simulations could surface different perspectives on the proposal's strengths and weaknesses, akin to a real panel. For now this remains a research idea rather than a commercial tool.
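A purely experimental sketch of such a simulation, again assuming the OpenAI Python SDK, with persona descriptions and the model name invented for illustration:

```python
# Experimental sketch of a mock review panel: several persona prompts critique the
# same draft, then a "panel chair" prompt summarizes agreements and disagreements.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "Reviewer A": "You value theoretical novelty and are skeptical of incremental work.",
    "Reviewer B": "You are a stickler for methodology, statistics, and feasibility.",
    "Reviewer C": "You focus on broader impacts, dissemination, and realistic budgets.",
}

def ask(system_prompt: str, user_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_prompt}],
    )
    return response.choices[0].message.content

def mock_panel(draft_text: str) -> str:
    critiques = {name: ask(persona, "Critique this proposal draft:\n\n" + draft_text)
                 for name, persona in PERSONAS.items()}
    transcript = "\n\n".join(f"{name}:\n{text}" for name, text in critiques.items())
    return ask("You chair a grant review panel.",
               "Summarize where these reviews agree and disagree, and list the "
               "most important revisions:\n\n" + transcript)
```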
Plagiarism and Similarity Checkers
Ensuring the proposal is original is paramount. AI-based plagiarism detectors (e.g., Turnitin) can be run on drafts to catch unintentional overlap with existing text. Interestingly, as more proposals are drafted with AI assistance, there is concern that many will contain similar phrasing. Funding agencies such as the ERC note they have processes “able to detect text similarities” across submissions (European Research Council issues warning on AI’s use in grant applications | EURAXESS). Using an AI to review one's own draft for such issues (and then rephrasing) could therefore become a standard step.
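As a rough self-check (not a substitute for Turnitin or agency-side screening), one can compute similarity between the draft and one's earlier documents to spot passages worth rewording; the file names below are placeholders:

```python
# Rough self-check: TF-IDF cosine similarity between the draft and earlier
# documents, to spot reused phrasing worth rewording. File names are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prior_files = ["old_proposal.txt", "published_paper.txt"]
draft = open("draft.txt").read()
prior_docs = [open(path).read() for path in prior_files]

# Word n-grams of length 3-5 make the score sensitive to reused phrases,
# not just shared vocabulary.
tfidf = TfidfVectorizer(analyzer="word", ngram_range=(3, 5))
matrix = tfidf.fit_transform([draft] + prior_docs)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for path, score in zip(prior_files, scores):
    print(f"{path}: similarity {score:.2f}")
```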