
How to Get Accurate GPT-4 Answers: A Prompt Engineering Guide

Prompt engineering is an essential technique for maximizing the effectiveness of language models such as GPT-4. In this article, we will explore strategies and tactics that will help you get more accurate and relevant responses from these models. This guide is useful for beginners as well as advanced users who want to improve their interactions with artificial intelligence.

Architecture of a prompt

When structuring a prompt, it is necessary to be clear about the steps needed to achieve a good result.

The following image identifies, by color, the elements you should take into account in your prompt to achieve the best output.

[Image: The perfect ChatGPT prompt]
Source: therundown.ai
Using Delimiters to Create Prompts

When our prompt has a certain complexity, it is necessary to structure it correctly. Delimiters can be used for this purpose.

It is not strictly necessary to use delimiters in every prompt, but they can be extremely useful in certain contexts. They help structure and clarify the input, which can significantly improve the accuracy and relevance of the model's responses. Here are some situations where delimiters are particularly useful:

[Image: ChatGPT delimiters]

However, we can structure our queries in different ways. In my case, I use brackets a lot to structure my prompts. The important thing is to organize the structure of the prompt so that it achieves a good result.
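As a minimal sketch, a delimiter can be any marker the model is unlikely to confuse with the content itself. Here a `###` fence separates the instruction from the text to be processed; the delimiter choice and wording are illustrative conventions, not a required syntax:

```python
# Build a prompt that separates instructions from content with ### delimiters.
article = ("Prompt engineering is an essential technique for getting "
           "accurate answers from language models.")

prompt = f"""Summarize the text delimited by ### in one sentence.

###
{article}
###"""

print(prompt)
```

The same pattern works with triple quotes, XML-style tags, or brackets; what matters is that the model can tell instructions apart from data.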

Let’s now look at some tricks to improve our prompts.

1. Write Clear Instructions

Language models cannot read your mind. It is crucial to provide clear and detailed instructions to avoid misunderstandings. If you need short answers, ask for them explicitly. If you want an expert level of writing, please specify. The more information you provide, the better the result.

Tactics:

  • Include details in your query to get more relevant answers.
  • Ask the model to adopt a specific persona.
  • Use delimiters to indicate different parts of the input.
  • Specify the steps required to complete a task.
  • Provide clear examples.
  • Specify the desired length of the output.

Practical Example:

  • Worse: “I need help improving my website.”
  • Better: “I am looking to improve the loading speed of my e-commerce website. Can you recommend techniques to optimize images, reduce CSS and JavaScript file sizes, and use browser caching?”
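These tactics can be encoded directly in a chat request. The sketch below uses the message format of the OpenAI Chat Completions API; the model name and wording are illustrative, and the actual network call is left commented out:

```python
# Persona goes in the system message; constraints (output length) and the
# detailed task go in the user message.
messages = [
    {"role": "system",
     "content": ("You are a senior web performance engineer. "
                 "Answer in at most five bullet points.")},
    {"role": "user",
     "content": ("Recommend techniques to optimize images, reduce CSS and "
                 "JavaScript file sizes, and use browser caching on an "
                 "e-commerce website.")},
]

# from openai import OpenAI
# response = OpenAI().chat.completions.create(model="gpt-4", messages=messages)
```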
2. Provide Reference Text

Sometimes, models can make up false answers, especially on complex topics. Providing reliable reference text can improve the accuracy of responses.

Tactics:

  • Instruct the model to respond using a reference text.
  • Ask the model to quote specific passages from the reference text.

Practical Example:

  • Worse: “What are your thoughts on the impact of climate change?”
  • Better: “Use the information in the following article to answer: ‘Climate Change: Impact on Ecosystems and Sustainable Solutions’. What specific measures are proposed to mitigate climate change?”
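A minimal sketch of this tactic: embed the reference text in the prompt and instruct the model to answer only from it and to quote its source. The reference text and wording here are illustrative:

```python
# Hypothetical reference passage; in practice this would come from a
# retrieval step or a document the user supplies.
reference = ("The article proposes reforestation programs and a carbon tax "
             "as the main measures to mitigate climate change.")

prompt = (
    "Answer using only the reference text below, and quote the passage that "
    "supports your answer. If the answer is not in the text, say so.\n\n"
    f'Reference: """{reference}"""\n\n'
    "Question: What specific measures are proposed to mitigate climate change?"
)
```

Adding the escape hatch ("If the answer is not in the text, say so") discourages the model from making up an answer when the reference is insufficient.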
3. Break Complex Tasks into Simple Subtasks

Complex tasks tend to have a higher error rate. Breaking a task into more manageable subtasks can reduce errors and improve accuracy.

Tactics:

  • Use intent classification to identify the most relevant instructions.
  • For dialogue applications, summarize or filter previous dialogue.
  • Summarize long documents in parts and build a complete summary recursively.

Practical Example:

  • Worse: “Write a report on the history of artificial intelligence.”
  • Better: “First, summarize the advances in artificial intelligence from 1950 to 2000. Then provide a summary of progress from 2000 to the present. Finally, conclude with future prospects in the field of artificial intelligence.”
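The recursive-summary tactic can be sketched as follows. Here `summarize` is a hypothetical stand-in for a model call (stubbed to truncate text); a real implementation would send each chunk to the model:

```python
def summarize(text: str) -> str:
    """Stand-in for a model call; a real version would query the model."""
    return text[:80]  # stub: truncate instead of actually summarizing

def summarize_long_document(chunks: list[str]) -> str:
    """Summarize each part, then summarize the joined partial summaries."""
    partial_summaries = [summarize(chunk) for chunk in chunks]
    return summarize(" ".join(partial_summaries))

# Each chunk is a manageable subtask; the final call merges them.
periods = ["AI from 1950 to 2000: ...", "AI from 2000 to today: ..."]
overview = summarize_long_document(periods)
```

For very long documents the same merge step can be applied recursively until the combined summaries fit in a single context window.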
4. Give the Model Time to “Think”

Like humans, models can make reasoning errors when they respond in haste. Asking for a “thought process” before an answer can improve accuracy.

Tactics:

  • Instruct the model to work out its own solution before reaching a conclusion.
  • Use an inner monologue or a sequence of queries to hide the model’s reasoning process.
  • Ask the model whether it missed anything on previous passes.

Practical Example:

  • Worse: “How much is 17 times 28?”
  • Better: “Solve the following problem step by step: How much is 17 times 28? First, write out the complete multiplication and then give the final result.”
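To make this tactic easy to use programmatically, you can also ask for the final result on a marked line and parse it out of the response. The `Answer:` marker and the parsing helper below are illustrative assumptions, not a fixed API:

```python
prompt = (
    "Solve the following problem step by step. First write out the complete "
    "multiplication, then give the final result on its own line, prefixed "
    "with 'Answer:'.\n\nHow much is 17 times 28?"
)

def extract_answer(response: str) -> int:
    """Pull the integer from the 'Answer:' line of a model response."""
    for line in response.splitlines():
        if line.startswith("Answer:"):
            return int(line.split(":", 1)[1].strip())
    raise ValueError("no Answer line found")

# A well-formed response would end with "Answer: 476" (17 * 28 == 476).
sample_response = "17 * 28 = 17 * 30 - 17 * 2 = 510 - 34 = 476\nAnswer: 476"
assert extract_answer(sample_response) == 476
```

This separates the reasoning (which you can log or discard) from the answer your application actually consumes.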
5. Use External Tools

Compensate for the model's weaknesses by feeding it the outputs of other tools. For example, a text retrieval system can provide relevant documents, and a code execution engine can help with mathematical calculations.

Tactics:

  • Use embeddings-based search to implement efficient knowledge retrieval.
  • Use code execution to perform more accurate calculations or call external APIs.
  • Give the model access to specific functions.

Practical Example:

  • Worse: “Calculate the square root of 256.”
  • Better: “Use the Python function math.sqrt to calculate the square root of 256. Provide the necessary code and the result.”
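The division of labor here is simple: the calculation runs in Python, and only the verified result is handed back to the model as context. A minimal sketch:

```python
import math

# Run the arithmetic locally instead of trusting the model with it.
result = math.sqrt(256)

# This string would be fed back into the conversation as tool output.
tool_output = f"The square root of 256 is {result:g}."
```

The same pattern scales to API calls or database queries: the model asks for a computation, your code performs it, and the model only has to incorporate the result.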
6. Test Changes in a Systematic Manner

It is easier to improve performance if you can measure it. Define a set of tests (evals) representative of actual usage to evaluate modifications and ensure that the changes are positive.

To test changes in language models systematically, define representative test sets, such as standard verified answers, tasks of different complexities, varied formats (lists, tables, paragraphs), multiple languages, questions requiring prior context, and recent events. This ensures that the model handles diverse situations and improves its performance significantly.

Tactics: 

  • Evaluate model outputs against gold-standard responses.
  • Perform model-based assessments for questions with multiple correct answers.

Practical Example:

  • Worse: “Improve this AI model.”
  • Better: “Evaluate the performance of the AI model using the standard test suite. Compare the results to the correct answers and provide an analysis of areas for improvement.”
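A minimal sketch of an eval against gold-standard answers, using exact-match scoring. The questions and answers are illustrative; real evals often need model-based grading for open-ended answers:

```python
def exact_match_score(outputs: dict[str, str], gold: dict[str, str]) -> float:
    """Fraction of questions where the model output matches the gold answer."""
    correct = sum(outputs.get(q, "").strip().lower() == answer.strip().lower()
                  for q, answer in gold.items())
    return correct / len(gold)

# Toy test set: question -> gold answer.
gold = {"What is 2 + 2?": "4", "Capital of France?": "Paris"}
outputs = {"What is 2 + 2?": "4", "Capital of France?": "paris "}
score = exact_match_score(outputs, gold)
```

Running the same scored test set before and after each prompt change tells you whether the change actually helped.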
[Image: Prompt generator]

Some tools, such as HX.AI, Promptperfect, Taskade, or Webutility, and even some GPTs that you can find in the ChatGPT GPT library, allow you to generate prompts that get the best results.

To improve your prompts, consider these aspects:

  • Ask the model to ask you questions. For example, you can include the following text in your prompt: “Ask me any questions you need to clarify my request so that you can give me the best answer.”
  • Ask the model to provide the sources from which it is extracting the information.
  • Ask for the specific output format you expect, e.g. “structure the information in a table”.
  • Use correct spelling and grammar in your prompts.

Do you have any questions about prompt engineering, or want to share your experience? Leave me a comment and join the conversation!

You can also suggest topics related to marketing, technology and artificial intelligence that you would like me to address in future articles.

Have a good week!

Did you like this content?

If you liked this content and want access to exclusive subscriber content, subscribe now. Thank you in advance for your trust.
