

Saturday, December 30, 2023

ChatGPT: The official guide to good prompting is here

"For the first time, Open AI has now described in detail in a document how to actually use the artificial intelligence (AI) ChatGPT. That's what it says.

 

The document "Prompt engineering" recommends six strategies, in English, for achieving good results with the instructions given to the machine (prompts). So far, many have chatted with ChatGPT in natural language, often achieving impressive results, but also getting hallucinations or simply incorrect answers. Experts (including us) have been exploring the mechanisms of these language models for more than a year and continue to give tips and tricks for good prompts.

 

The strategies now presented are:

 

Write clear instructions.

 

Include a reference text.

 

Break complex tasks into simpler subtasks.

 

Give the model time to “think”.

 

Two other strategies recommend linking the AI to your own databases and systematically testing further developments. For home use we explain the first four points here:

 

Clear instructions

 

The more precise the question, the clearer the answer. OpenAI cites the task "Summarize the meeting notes" as one of several examples. A more detailed prompt would be better. OpenAI recommends:

 

"Condense the meeting notes into a single paragraph. Then write a Markdown list of the speakers and each of their key points. Finally, list the next steps or action points suggested by the speakers, if any."

 

The instructions match our earlier finding that it is best to treat the digital AI like a 14-year-old student intern: tell it exactly what to do, how, and in what order. Using AI in everyday professional life is, at its core, a lesson in good communication with colleagues: say exactly what you want, and the answers will follow.

 

The instructions reveal something else, too: jargon and nerd confusion have not gone away even a year after ChatGPT was introduced. Or would you know off the top of your head what a "Markdown list" of speakers is? "Markdown" refers to a special text format: in this so-called markup language, **bold text** is marked with two asterisks, and lists - of speakers, for example - are created with an asterisk (*). Headings are preceded by one or, depending on the level, several hash marks (#). Structure is important to the machines; otherwise they wander off like a lazy examinee caught unprepared.
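To make that Markdown syntax concrete, here is a small, self-contained Python sketch that assembles a heading, bold names and a speaker list exactly as described; the speaker names and their points are invented for the example.

```python
# Illustration of the Markdown syntax mentioned above: # for headings,
# ** for bold, * for list items. Speaker names are invented for the example.
speakers = {
    "Speaker A": "wants to bring the product launch forward",
    "Speaker B": "warns about the testing backlog",
}

lines = ["# Meeting summary", ""]           # a level-1 heading uses one hash mark
for name, point in speakers.items():
    lines.append(f"* **{name}**: {point}")  # * starts a list item, ** marks bold

print("\n".join(lines))
```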

 

Clarity in the prompt can and should mean even more. This includes:

 

Provide a persona. "Put yourself in the role of a comedian. Give me ten examples of a funny remark to lighten the mood at the beginning of negotiations with our long-standing supplier."
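In API terms, such a persona typically goes into the system message. A minimal sketch, with the wording taken from the article's example and the message structure being the standard chat format:

```python
# Sketch: giving the model a persona via the system message.
# Pass `messages` to client.chat.completions.create(...) as in the sketch above.
messages = [
    {"role": "system", "content": "Put yourself in the role of a comedian."},
    {
        "role": "user",
        "content": (
            "Give me ten examples of a funny remark to lighten the mood at the "
            "beginning of negotiations with our long-standing supplier."
        ),
    },
]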

 

It is also helpful to clearly separate the different sources. "You receive two articles about the same topic. Combine both articles. Then make a comparison: which of the articles has the better arguments?"

 

OpenAI recommends marking the two articles at the beginning and at the end. We tried this out with two commentaries on FAZ.NET, by Sarah Huemer and Daniel Mohr. Instead of the awkward notation with square brackets, we simply used "Text 1" and "Text 2" to delimit them. It works. Whether the machine draws the right conclusion in attesting that the first piece has the better arguments is something a wise mind may judge for itself.
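A sketch of how such a prompt could be assembled in Python, with plain "Text 1" / "Text 2" labels as delimiters; the two commentaries themselves are placeholders here, not reproduced from FAZ.NET.

```python
# Sketch: delimiting two sources with simple "Text 1" / "Text 2" labels.
text_1 = "..."  # first commentary
text_2 = "..."  # second commentary

prompt = (
    "You receive two articles about the same topic. Combine both articles. "
    "Then make a comparison: which of the articles has the better arguments?\n\n"
    f"Text 1:\n{text_1}\n\n"
    f"Text 2:\n{text_2}"
)
```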

 

It also helps to clarify the request by specifying the expected length: 50 words, two paragraphs or three bullet points?

 

And finally, OpenAI recommends breaking prompts down into steps: "Use the following step-by-step guide for your answers. Step 1: The user gives you text enclosed in triple quotation marks. Summarize this text in one sentence and place the word 'Summary' in front of it. Step 2: Translate the summary from the first step into Spanish and place the word 'Translation' in front of it."
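A minimal sketch of that step-by-step setup, assuming the user text is supplied as a Python string and wrapped in triple quotation marks before it is sent:

```python
# Sketch: step-by-step prompt with the user text wrapped in triple quotation marks.
user_text = "..."  # the text to be summarized and translated

system_instructions = (
    "Use the following step-by-step guide for your answers. "
    "Step 1: The user gives you text enclosed in triple quotation marks. "
    "Summarize this text in one sentence and place the word 'Summary' in front of it. "
    "Step 2: Translate the summary from the first step into Spanish and place "
    "the word 'Translation' in front of it."
)

messages = [
    {"role": "system", "content": system_instructions},
    {"role": "user", "content": f'"""{user_text}"""'},
]
```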

 

Provide reference text

 

It helps to give the AI a framework. This could be a reference text, for example. We tried this out with a guest article by dpa news director Froben Homburger on FAZ.NET and then asked two questions. The prompt to the AI was:

 

You will be provided with a document and a question, both delimited by triple quotation marks. Your task is to answer the question using only the provided document and to cite the passages of the document that were used to answer the question. If the document does not contain the information needed to answer the question, simply write: 'insufficient information.' If an answer to the question is given, it must be backed up with a quotation from the document. Use the following format to cite relevant passages ({"Quote": ...}).

 

""""""

 

Question: Why is Hamas a terrorist organization according to the dpa?

 

The answer delivers relevant quotes from the text. The setup also allows questions about other organizations - including the FARC in Colombia, which is not mentioned in the text. There the machine correctly answers: "insufficient information".
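A sketch of this reference-text setup as an API call; the guest article, the model name and the exact delimiting of document and question are placeholders or assumptions, while the instructions follow the prompt quoted above.

```python
# Sketch: document-grounded question answering with a fixed fallback answer.
from openai import OpenAI

client = OpenAI()

document = "..."  # text of the guest article
question = "Why is Hamas a terrorist organization according to the dpa?"

system_prompt = (
    "You will be provided with a document and a question delimited by triple "
    "quotation marks. Answer the question using only the provided document and "
    "cite the passages of the document used to answer it. If the document does "
    "not contain the information needed, simply write: 'insufficient information.' "
    'Cite relevant passages in the format ({"Quote": ...}).'
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f'"""{document}"""\n\n"""Question: {question}"""'},
    ],
)
print(response.choices[0].message.content)
```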

 

Break complex tasks into simpler subtasks

 

OpenAI illustrates this with an example from customer service. The AI is instructed to first categorize customer inquiries: is it about billing issues, technical support or other general categories? Second step: when it comes to technical support, does the customer want help with troubleshooting or do they have a question about compatibility with other devices? These are the subcategories.

 

Sorting the customer request in this way helps the AI go deeper. Once the main category and subcategory are clear, both the AI and the support staff know more quickly where to look and what the correct answer boils down to. With corresponding additional documents on file, the AI gets a branching structure that is easier to navigate for individual recurring questions.
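One way to sketch this two-step categorization in Python: the categories come from the article, while the helper function, its wording and the sample inquiry are illustrative assumptions.

```python
# Sketch: first classify the main category, then a subcategory for technical support.
from openai import OpenAI

client = OpenAI()

def classify(inquiry: str, options: list[str]) -> str:
    """Ask the model to pick exactly one of the given categories."""
    prompt = (
        "Categorize the following customer inquiry. "
        f"Answer with exactly one of: {', '.join(options)}.\n\n{inquiry}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

inquiry = "My router stopped working after the last firmware update."
main_category = classify(inquiry, ["billing", "technical support", "general"])
if main_category == "technical support":
    sub_category = classify(inquiry, ["troubleshooting", "device compatibility"])
    print(main_category, "->", sub_category)
```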

 

The same applies to long texts: for prompt engineering, working with summaries is recommended. If you want to understand an entire book, you either read it in full or, as OpenAI suggests, have individual chapters summarized. This lets you work around the context limit in a long chat: after the tenth chapter, ChatGPT no longer knows what was in the first, unless you tell the machine, after for example the first five individually summarized chapters, to create a new summary of those five summaries. OpenAI illustrates this using the example of "Alice in Wonderland".
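A minimal sketch of this "summary of summaries" idea, assuming the book is already split into a list of chapter texts; the summarize() helper and the model name are illustrative.

```python
# Sketch: summarize a long book chapter by chapter, then summarize the summaries,
# so the full text never has to fit into one context window.
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"Summarize the following text:\n\n{text}"}],
    )
    return response.choices[0].message.content

chapters = ["...", "...", "..."]  # chapter texts of the book (placeholders)
chapter_summaries = [summarize(chapter) for chapter in chapters]

# After a batch of chapters, condense the partial summaries into one summary.
book_summary = summarize("\n\n".join(chapter_summaries))
print(book_summary)
```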

 

Give the model time to “think”

 

Using the example of a complicated math problem, OpenAI shows how you can help the AI when it has to judge a student's solution as correct or incorrect. The trick is to instruct the machine to first work out its own solution to the problem. Only then should the AI compare its own solution with the student's.
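A sketch of how such a "work it out first" instruction could be phrased in the chat format; the exact wording, the problem and the student's answer are placeholders, not OpenAI's original example.

```python
# Sketch: tell the model to solve the problem itself before judging the student.
system_prompt = (
    "First work out your own solution to the problem. "
    "Then compare your solution to the student's solution and only afterwards "
    "decide whether the student's solution is correct. "
    "Do not decide before you have solved the problem yourself."
)

problem = "..."           # the math problem
student_solution = "..."  # the student's attempt

# Pass `messages` to the chat completion call as in the earlier sketches.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": f"Problem: {problem}\n\nStudent's solution: {student_solution}"},
]
```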

 

Basically, this is once again about breaking a larger task down into smaller ones. It helps to ask the machine itself: 'How can I divide the following detailed task into many smaller ones so that you, as an AI, can handle it?'" [1]

 

1. Marcus Schwarze: ChatGPT: Die offizielle Anleitung für gutes Prompten ist da. Frankfurter Allgemeine Zeitung (online), Frankfurter Allgemeine Zeitung GmbH, Dec 19, 2023.
