Chain of Thought (CoT) prompting is a technique for improving reasoning by encouraging models to break down their thought process into intermediate steps. This method helps the model articulate its reasoning, leading to more accurate and transparent answers.

An important reason to use chain of thought prompting is that LLMs are good at finding patterns in massive amounts of data, but they often struggle with complex reasoning tasks. They might give the right answer by chance, but not necessarily because they understand the underlying logic.

CoT prompting addresses this by guiding the LLM through the reasoning process step by step, as a chain. This chain typically consists of the intermediate steps, justifications, or evidence used to arrive at the final output. One way to do this is to provide examples that show how to break a problem down and solve it logically, as in the sketch below.
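For readers who build prompts in code, "providing examples" simply means placing at least one worked problem, with its reasoning written out, ahead of the question you actually want answered. The Python sketch below is a minimal illustration of assembling such a few-shot prompt; the worked example and the function name are invented for illustration and are not part of any particular tool or library.

```python
# Minimal sketch of a few-shot chain-of-thought prompt: one worked example
# (with its reasoning spelled out) followed by the new problem to be solved.
# The worked example below is invented purely for illustration.

WORKED_EXAMPLE = """Problem: A notebook costs $3 and a pen costs $2. How much do 2 notebooks and 1 pen cost?
Reasoning: Two notebooks cost 2 x 3 = 6 dollars. Adding one pen at 2 dollars gives 6 + 2 = 8 dollars.
Answer: $8"""


def few_shot_cot_prompt(new_problem: str) -> str:
    """Combine the worked example with the new problem so the model imitates its reasoning style."""
    return f"{WORKED_EXAMPLE}\n\nProblem: {new_problem}\nReasoning:"


print(few_shot_cot_prompt("What is the value of 5+7+9-12?"))
```

The prompt ends at "Reasoning:", so the model is nudged to continue in the style of the worked example before stating its answer.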

Even without worked examples, the user can explicitly prompt the model to generate a step-by-step explanation or reasoning process before arriving at a final answer.

Try this example:  Solve this problem step by step. 

Problem: What is the value of 5+7+9-12? 

Response:

To solve the problem 5+7+9−12, we follow the order of operations from left to right:

  1. Add 5 and 7:  5+7=12
  2. Add the result (12) and 9:  12+9=21
  3. Subtract 12 from the result (21):  21−12=9

Therefore, the value of 5+7+9−12 is 9.

The instruction ‘solve this problem step by step’ is the most important factor in this process.
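If you send prompts to a model from code rather than typing them into a chat window, that trigger phrase can be prepended automatically. The sketch below is a minimal, hypothetical example; `call_llm` is a placeholder for whatever model interface you actually use, not a real library function.

```python
# Minimal sketch: automatically prepending the "step by step" trigger phrase.
# `call_llm` is a hypothetical placeholder for whatever model interface you use;
# only the prompt text itself matters here.

def build_cot_prompt(problem: str) -> str:
    """Wrap a question in the instruction that triggers step-by-step reasoning."""
    return f"Solve this problem step by step.\n\nProblem: {problem}"


def call_llm(prompt: str) -> str:
    """Placeholder only; replace with a real call to your model of choice."""
    raise NotImplementedError


prompt = build_cot_prompt("What is the value of 5+7+9-12?")
print(prompt)  # the exact text that would be sent to the model
# answer = call_llm(prompt)
```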

You will find this style of prompt creation particularly effective for conducting research. CoT prompting builds on the principle that complex problems are often best tackled by breaking them into smaller, more manageable pieces. That is true in any kind of research, so why not have the computer help you do it?

You have to define what you are looking for, so some careful thought must go into the prompt.

Example: Provide three subtopics for a paper on women soldiers’ direct combat contributions in the Revolutionary War.

First, I tested a basic zero-shot prompt, “women in war.” Consensus provided two pages of topics, any one of which could support several good research papers. That was a lot to sort through, and AI is supposed to reduce your work.

Next, I tried a CoT prompt with clearly defined and detailed parameters, and the response came back as “Three Subtopics for a Paper on Women Soldiers’ Direct Combat Contributions in the Revolutionary War”:

  • Women Disguised as Soldiers and Direct Battlefield Participation
    • Explore cases like Deborah Sampson, who disguised herself as a man to fight, and other women who directly engaged in combat or took up arms when needed, such as Molly Hays at the Battle of Monmouth.
  • Overlapping Roles:  From Camp Followers to Combatants
    • Analyze how women’s roles as camp followers, nurses, and support staff often overlapped with direct combat activities, blurring the lines between support and soldiering in the heat of battle.
  • Recognition, Memory, and Historical Marginalization
    • Examine why women’s direct combat contributions have been underrepresented in historical narratives, and how modern scholarship and movements have worked to recover and recognize these stories.

This response provides plenty of examples to begin your research with, as well as three legitimate sources for each topic.