Common errors when writing AI prompts

Let's be honest: we've all written awful prompts at some point. You ask ChatGPT, Gemini, or another AI a question, and the answer is nothing like what you expected. And you think: “AI is useless!”

Spoiler alert: it's almost never the AI's fault. Usually, the problem lies in how we are asking.

Writing prompts is like giving instructions in a lab: if they aren't clear, the result will be… unpredictable. Let's look at the most common mistakes and, most importantly, how to correct them.

Common errors when formulating a prompt for ChatGPT, Gemini, or other AIs

1. Asking in a vague or ambiguous way

This is the classic one. The most common. The most human.

Example of a problematic prompt: “Explain this topic to me.” What topic? At what level? With what objective?

When the prompt is vague, the AI fills in the gaps as best it can. Sometimes it does this well… and other times not so much. How can it be improved? Be a little more specific; don't be afraid.

“"Explain this concept to me clearly, with simple examples and a scientific approach."”

You don't need to write a thesis, just give clear clues.

2. Not defining context or objective

The AI doesn't know if you're writing an article, preparing a class, or just trying to understand something out of curiosity. If you don't tell it why you want the information, it will give you a generic answer.

Typical example:

“"Summarize this article." For whom? For what purpose? Very short or detailed?

How can it be improved? Add context and purpose.

“Summarize this article highlighting the hypothesis, methodology, and main results, intended for a literature review.”

You'll see how the answer changes completely.

3. Expecting AI to “guess” what you need

This mistake is very common, especially when you've been working with AI for a while and start thinking: “It should understand me by now.” But no. AI has neither infinite contextual memory nor human intuition.

Example:

“Do better.”

Better how? Shorter? More technical? Clearer?

How can it be improved? Tell it exactly what you want to improve.

“Make the text clearer, eliminate redundancies, and maintain an academic tone.”

AI works best with specific instructions, not with assumptions.

4. Asking for too much in a single prompt

Here we tend to err on the side of enthusiasm.

Chaotic example:

“Summarize the article, translate it, draw conclusions, identify methodological errors, and propose future lines of research.”

Can it do it? Sometimes, yes. Will it do it well? Not always.

When you ask for too many things at once, the AI tends to respond superficially or leave parts incomplete.

How can it be improved? Divide and conquer. First summarize. Then analyze. Then propose improvements.

Use several chained prompts, just like you would do yourself.
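The "divide and conquer" idea can be sketched in code: instead of one overloaded request, a chain of focused prompts where each step feeds the next. This is a minimal illustration, not any particular vendor's API; the `ask` function is a hypothetical placeholder for whatever AI client you actually use (ChatGPT, Gemini, etc.).

```python
def ask(prompt: str) -> str:
    """Hypothetical stand-in for a real call to an AI model."""
    return f"[answer to: {prompt}]"

def chained_analysis(article_text: str) -> dict:
    """Run three focused prompts in sequence instead of one chaotic prompt."""
    # Step 1: first summarize.
    summary = ask(
        "Summarize this article, highlighting the hypothesis, methodology, "
        f"and main results:\n{article_text}"
    )
    # Step 2: then analyze, using the previous answer as input.
    analysis = ask(f"Identify possible methodological weaknesses in this summary:\n{summary}")
    # Step 3: finally, propose improvements based on the analysis.
    proposals = ask(f"Based on this analysis, propose future lines of research:\n{analysis}")
    return {"summary": summary, "analysis": analysis, "proposals": proposals}

result = chained_analysis("…article text…")
```

Each link in the chain gets one clear job, which is exactly why the answers stay focused instead of superficial.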

How to fix a poorly designed prompt

Do you have a prompt that isn't working? Don't throw it away. Refine it.

Here's a short, practical method:

  1. Detect what's wrong
    Is it too generic? Is there a lack of context? Is the answer unhelpful?
  2. Add intention
    What exactly do you want to achieve with the answer?
  3. Define format and tone
    List, summary, long text, technical level, target audience.
  4. Reformulate without starting from scratch
    Sometimes all it takes is one extra line.
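The four repair steps above can be sketched as simple string composition: keep the original prompt and layer in the missing intention, format, and tone rather than rewriting from scratch. The function and parameter names here are illustrative, not from any library.

```python
def refine_prompt(base: str, intention: str = "", fmt: str = "", tone: str = "") -> str:
    """Reformulate a weak prompt by appending the pieces it was missing."""
    parts = [base.rstrip(".")]  # step 1: start from the prompt that isn't working
    if intention:
        parts.append(f"The goal is to {intention}")  # step 2: add intention
    if fmt:
        parts.append(f"Format: {fmt}")  # step 3: define format
    if tone:
        parts.append(f"Tone: {tone}")  # step 3 (continued): define tone/audience
    return ". ".join(parts) + "."  # step 4: reformulate, not restart

vague = "Explain this to me."
better = refine_prompt(
    vague,
    intention="understand the study's objective, methodology, and main results",
    fmt="a clear summary with simple examples",
    tone="academic, aimed at researchers",
)
```

The point of the sketch is that the original prompt survives intact; each step only adds one clarifying line.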

Example of evolution:

Initial prompt:

“Explain this to me.”

Corrected prompt:

“Explain this study to me clearly, highlighting the objective, methodology, and main results, aimed at an audience of researchers.”

The difference is enormous.

With a few tweaks, you'll go from mediocre answers to truly useful results. And remember, if AI is a key resource for you, you might be interested in this course on prompt engineering.
