
RE2+CoT: The Prompting Technique That Improves AI Accuracy (+ demo you can try)

3 min read · Sep 20, 2024

Today I found out about a new prompting technique: asking an LLM to re-read the question boosts the quality of its answers by ~5% on top of the Chain-of-Thought technique. I even made a CustomGPT to demo this technique for you!

AI use disclosure: ChatGPT was used to help structure and format my thoughts and materials

The Research Behind the Technique

Figure from original paper


A recent research paper introduced this approach. The idea is straightforward: prompting the AI to re-read the question enhances its understanding and reasoning. Here’s the prompt structure they used:

💡 Prompt structure RE2+CoT:
Q: {Input Query}
Q: Read the question again: {Input Query}
Q: Thought-eliciting prompt (“Let’s think step by step”)
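The structure above can be sketched as a small helper function. This is a minimal sketch, not code from the paper: the function name and the exact line breaks are my own assumptions, following the three-part template shown above.

```python
def build_re2_cot_prompt(query: str) -> str:
    """Build an RE2 + Chain-of-Thought prompt from a user query.

    Sketch of the structure described in the paper: state the question,
    ask the model to read it again, then append the thought-eliciting
    cue "Let's think step by step". (Function name and formatting are
    illustrative assumptions, not the paper's exact code.)
    """
    return (
        f"Q: {query}\n"
        f"Read the question again: {query}\n"
        "A: Let's think step by step."
    )


# Example usage: pass the resulting string as the user message to any LLM.
print(build_re2_cot_prompt("If I have 3 apples and eat 1, how many remain?"))
```

The question appears twice on purpose: the repetition is the whole trick, and the trailing cue triggers the familiar Chain-of-Thought behavior.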

This method was tested on several language models, including davinci-003, ChatGPT, and LLaMA-2 (both 13B and 70B parameters). The experiments spanned 14 different datasets.



Written by Eduard Ruzga

We make our world significant by the courage of our questions and by the depth of our answers — Carl Sagan
