Overview
Chain-of-Thought (CoT) prompting significantly improves the performance of LLMs on complex reasoning, math, and logic tasks. By generating intermediate reasoning steps before the final answer ('thinking out loud'), the model is less likely to make simple errors.
How to Use
- Few-shot CoT: Provide worked examples in the prompt that demonstrate the step-by-step reasoning before each answer.
- Zero-shot CoT: Simply append a phrase like 'Let's think step by step' to the end of the prompt, with no examples.
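The two variants above are just different ways of constructing the prompt string. A minimal sketch (the helper names and the example Q&A content are illustrative, not from any particular library):

```python
def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append a reasoning trigger to the question."""
    return f"{question}\nLet's think step by step."

def few_shot_cot(examples: list[tuple[str, str]], question: str) -> str:
    """Few-shot CoT: prepend worked examples whose answers show
    the intermediate reasoning, then ask the new question."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

# Hypothetical worked example: the answer text spells out each step.
examples = [(
    "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?",
    "Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.",
)]

prompt = few_shot_cot(examples, "A baker has 4 trays of 6 rolls. How many rolls?")
```

The resulting string is sent to the model as-is; the model then imitates the step-by-step format of the examples (few-shot) or follows the trigger phrase (zero-shot).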
Why it Works
It allows the model to spend more compute (output tokens) on the reasoning process and gives it an explicit logical path to follow, reducing the chance of jumping straight to a wrong conclusion.