This guide offers strategies and techniques for improving the performance of large language models (LLMs). You can experiment with these methods individually or in combination to achieve better results for your specific needs. The strategies include:

1. Write Clear Instructions

Ensure that your instructions are concise and clear. If the outputs are too lengthy, request brief responses. If you need expert-level writing, specify that. Minimizing guesswork for the LLM increases the likelihood of receiving the desired output. Consider the following:

  • Include specific details in your query for more relevant answers.
  • Instruct the model to adopt a specific persona.
  • Use delimiters to indicate distinct parts of the input.
  • Specify the steps required to complete a task.
  • Provide examples.
  • Specify the desired length of the output.
  • Refer to the Description of available LLMs.
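The tactics above can be combined in a single prompt. Below is a minimal sketch of assembling such a prompt: it adopts a persona, spells out the steps, sets a length limit, and uses triple-quote delimiters to mark the input text. The helper name, persona, and delimiter convention are illustrative choices, not requirements of any particular API.

```python
def build_prompt(task: str, text: str,
                 persona: str = "a senior technical editor",
                 max_words: int = 100) -> str:
    """Combine a persona, explicit steps, delimiters, and a length limit."""
    return (
        f"You are {persona}.\n"
        "Follow these steps:\n"
        "Step 1: Read the text between triple quotes.\n"
        f"Step 2: {task}\n"
        f"Keep the answer under {max_words} words.\n"
        f'"""{text}"""'
    )

prompt = build_prompt(
    "Summarize the text in one sentence.",
    "LLMs perform better with specific, unambiguous instructions.",
)
print(prompt)
```

The delimiters make it unambiguous which part of the prompt is data and which part is instruction, which also helps guard against instructions hidden inside the input text.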

2. Provide Reference Text

To reduce fabricated answers (hallucinations), particularly on obscure topics, or to obtain citations and URLs, provide reference text that the LLM can draw on. Here’s what you can do:

  • Instruct the model to answer using reference text.
  • Instruct the model to answer with citations from reference text.
  • Refer to Offline Data Loaders.
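As a sketch of this tactic, the hypothetical helper below builds a prompt that numbers the supplied reference passages, asks the model to answer only from them, and requests `[n]`-style citations. The citation format and fallback phrase are illustrative conventions.

```python
def grounded_prompt(question: str, passages: list[str]) -> str:
    """Build a prompt that grounds the answer in numbered reference text."""
    refs = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the numbered reference passages "
        "below, and cite each claim as [n]. If the answer is not in the "
        'references, reply "I could not find an answer."\n\n'
        f"References:\n{refs}\n\n"
        f"Question: {question}"
    )

p = grounded_prompt(
    "When was the API released?",
    ["The API launched in March 2021.", "Pricing changed in 2023."],
)
print(p)
```

Because the references are numbered in the prompt itself, the model's citations can later be checked mechanically against the passages it was given.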

3. Break Down Complex Tasks into Simpler Subtasks

Complex tasks tend to have higher error rates. To enhance performance, break down complex tasks into simpler subtasks. You can:

  • Use intent classification to identify the most relevant instructions for a user query.
  • For dialogue applications with long conversations, summarize or filter previous dialogue.
  • Summarize long documents piecewise and construct a full summary recursively.
  • Refer to the Description of available LLMs.
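The piecewise-summarization tactic can be sketched as a small recursive loop: split the document into chunks, summarize each, join the summaries, and repeat until the result fits a target length. Here `summarize` is a stand-in that merely truncates, so the control flow is runnable; a real implementation would call an LLM at that point.

```python
def summarize(text: str, limit: int = 80) -> str:
    """Placeholder for an LLM summarization call; truncates for demo purposes."""
    return text[:limit]

def chunk(text: str, size: int) -> list[str]:
    """Split text into fixed-size pieces."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def recursive_summary(text: str, chunk_size: int = 200, target: int = 200) -> str:
    """Summarize chunks, join, and repeat until the text fits the target."""
    while len(text) > target:
        parts = [summarize(part) for part in chunk(text, chunk_size)]
        text = " ".join(parts)
    return text

result = recursive_summary("a" * 1000)
```

In practice you would split on natural boundaries (sections, paragraphs) rather than fixed character counts, so each chunk is coherent on its own.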

4. Allow LLMs Time to “Think”

LLMs may make more reasoning errors when rushed. Asking for a chain of reasoning before a response can help them reason their way to correct answers. Consider:

  • Instructing the model to work out its solution before rushing to a conclusion.
  • Using an inner monologue or a sequence of queries to hide the model’s reasoning process from the user.
  • Asking the model if it missed anything on previous passes.
  • Refer to the Description of available LLMs.
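The inner-monologue tactic above can be sketched as a simple post-processing step: instruct the model to reason freely and then place its conclusion after an agreed marker, and show the user only what follows the marker. The marker string here is an arbitrary convention, not a feature of any API.

```python
MARKER = "FINAL ANSWER:"

def extract_final_answer(model_output: str) -> str:
    """Return only the text after the marker, hiding the reasoning steps."""
    _, found, answer = model_output.partition(MARKER)
    # If the marker is missing, fall back to the full output.
    return answer.strip() if found else model_output.strip()

raw = (
    "Reasoning: 17 * 3 = 51, and 51 + 9 = 60.\n"
    f"{MARKER} 60"
)
print(extract_final_answer(raw))  # -> 60
```

This lets the model take the time to reason in full while keeping the user-facing response short, and the hidden reasoning can still be logged for debugging.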

5. Utilize External Tools

Compensate for LLM weaknesses by feeding in the outputs of other tools. Text retrieval systems and code execution engines can be especially helpful: if a task can be done more reliably or efficiently by a tool, offload it to that tool.

  • Use embeddings-based search for efficient knowledge retrieval.
  • Use code execution for more accurate calculations or call external APIs.
  • Refer to Offline Data Loaders.
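Embeddings-based retrieval ranks documents by vector similarity to the query. The toy sketch below uses hand-made three-dimensional vectors so the ranking logic runs as-is; a real system would obtain the vectors from an embedding model and store them in a vector index.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative document "embeddings" (stand-ins for model output).
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api reference": [0.0, 0.2, 0.9],
}
query_vec = [0.85, 0.15, 0.05]  # stand-in for the embedded user query

best = max(docs, key=lambda name: cosine(query_vec, docs[name]))
print(best)  # -> refund policy
```

The retrieved document can then be inserted into the prompt as reference text (strategy 2), combining the two tactics.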

6. Test Changes Systematically

Measuring the impact of changes is essential for improvement. Define a comprehensive test suite (an "eval") to verify that modifications yield a net improvement in performance:

  • Evaluate model outputs with gold-standard answers.
  • Refer to Evaluation.
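A minimal eval can be as simple as comparing model outputs against gold-standard answers and reporting accuracy, as sketched below. The normalization step (lowercasing, collapsing whitespace) is an illustrative choice; real evals often need fuzzier matching or model-graded scoring.

```python
def normalize(s: str) -> str:
    """Lowercase and collapse whitespace so trivial differences don't count."""
    return " ".join(s.lower().split())

def accuracy(outputs: list[str], gold: list[str]) -> float:
    """Fraction of outputs that exactly match the gold answers after normalization."""
    correct = sum(normalize(o) == normalize(g) for o, g in zip(outputs, gold))
    return correct / len(gold)

outputs = ["Paris", "  42 ", "blue"]
gold = ["paris", "42", "green"]
score = accuracy(outputs, gold)
print(score)
```

Running the same eval before and after each prompt change turns "it seems better" into a measurable comparison.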

Each of the strategies listed above can be implemented with specific tactics. These tactics provide ideas for experimentation and improvement. Feel free to explore creative ideas beyond what’s listed here.