RE2+CoT: The Prompting Technique That Improves AI Accuracy (+ a demo you can try)
Today I found out about a new prompting technique: asking an LLM to re-read the question boosts the quality of its answers by ~5% on top of the Chain-of-Thought technique. I even made a CustomGPT to demo this technique for you!
AI use disclosure: ChatGPT was used to help structure and format my thoughts and materials
The Research Behind the Technique
A recent research paper introduced this approach. The idea is straightforward: prompting the AI to re-read the question improves its understanding and reasoning. Here’s the prompt structure they used:
💡 Prompt structure RE2+CoT:
Q: {Input Query}
Q: Read the question again: {Input Query}
Q: Thought-eliciting prompt (“Let’s think step by step”)
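To make this concrete, here is a minimal sketch of how you could build the RE2+CoT prompt and send it to a model. It assumes the OpenAI Python SDK and an API key in your environment; the model name and the example question are my own placeholders, not from the paper.

```python
# Minimal RE2+CoT sketch (illustrative; model choice and question are assumptions).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def build_re2_cot_prompt(query: str) -> str:
    """Assemble the RE2+CoT structure: question, re-read line, thought-eliciting cue."""
    return (
        f"Q: {query}\n"
        f"Q: Read the question again: {query}\n"
        "Q: Let's think step by step."
    )

question = "A farmer has 17 sheep and all but 9 run away. How many are left?"
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works here
    messages=[{"role": "user", "content": build_re2_cot_prompt(question)}],
)
print(response.choices[0].message.content)
```

The only change from a plain Chain-of-Thought prompt is the second line, which repeats the question verbatim after the re-read instruction.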
This method was tested on several language models, including davinci-003, ChatGPT, and LLaMA-2 (both 13B and 70B parameters). The experiments spanned 14 different datasets covering: