---
title: Prompt Engineering
created_date: 2025-12-01
updated_date: 2025-12-01
aliases: []
tags: []
---

# Prompt Engineering

## Tricks

### Prompt

It's a call to action --> it is a program, not a question. LLMs are prediction engines.

### Personas

The persona should go into the system prompt if possible, but the user prompt also works.
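A minimal sketch of putting the persona into the system prompt, using the common chat-completion `messages` structure (an assumed format, not tied to any specific provider; the persona and task strings are made up):

```python
# Sketch: the persona lives in the system message; the actual task stays
# in the user message. "messages" mirrors the common chat-completion format.

def build_messages(persona: str, task: str) -> list[dict]:
    """Return a chat payload with the persona in the system prompt."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    persona="a senior security engineer reviewing code for vulnerabilities",
    task="Review this login handler for injection risks.",
)
```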

> [!NOTE] Persona
> Who is answering this? What's the perspective? What's the source of information?

### Context

Context takes the guesswork out of prompting. Context is what tells the LLM what it should respond with and how. Without context it will just invent everything. This means we need to provide ALL the context EVERY TIME. We cannot leave anything to chance.

ABC: Always be Contexting!

Context Snippets I always want to include:

  • If my question cannot be answered reliably with the information available, or if important data is missing, say "The answer is uncertain because…". Provide your best inference but clearly separate facts from assumptions.

> [!NOTE] Context
> Context provides: What are the facts? What do we know?
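The ABC rule above can be sketched as a small helper that bundles the facts and the standing uncertainty snippet into every prompt (a sketch; the example context and question are invented):

```python
# Sketch: Always Be Contexting - every prompt carries the full context
# plus the standing uncertainty instruction from the snippet list above.
UNCERTAINTY_SNIPPET = (
    "If my question cannot be answered reliably with the information available, "
    "or if important data is missing, say 'The answer is uncertain because...'. "
    "Provide your best inference but clearly separate facts from assumptions."
)

def with_context(context: str, question: str) -> str:
    """Bundle facts + standing rules + the question into one prompt."""
    return (
        f"Context:\n{context}\n\n"
        f"Rules:\n{UNCERTAINTY_SNIPPET}\n\n"
        f"Question:\n{question}"
    )

prompt = with_context(
    context="Server: nginx 1.24 on Ubuntu 22.04. Errors started after the last deploy.",
    question="Why might requests return 502?",
)
```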

### Format

Output requirements:

2. Format: Use a clear bullet list
3. Length: Keep it under 200 words
4. Tone: Professional, technical, radically transparent; no corporate fluff

### Few Shot Prompting

  • Show a few examples of what you want to achieve (e.g. old emails you've written).
  • Include only examples of what you want to achieve.

> [!NOTE] Format
> Format provides: How would we do it? How would we describe the process to someone else? Few-shot shows: this is what good looks like, so repeat it!
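A minimal few-shot sketch: show input/output example pairs (like the old-emails idea above), then leave the last output blank for the model to fill in. The email examples here are invented:

```python
# Sketch: few-shot prompting - show worked examples of the target output,
# then present the new input with an empty "Output:" slot.
def few_shot_prompt(examples: list[tuple[str, str]], new_input: str) -> str:
    parts = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    examples=[
        ("Meeting moved to 3pm",
         "Hi team, quick heads-up: the meeting is now at 3pm. See you there!"),
        ("Server maintenance Friday",
         "Hi team, note that the servers will be down for maintenance on Friday."),
    ],
    new_input="Office closed Monday",
)
```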

## Advanced Techniques

### Chain of Thought (CoT)

Chain of Thought: always use thinking models! It increases accuracy and trust.

> [!NOTE] CoT
> CoT provides: How will the logic flow?
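When a dedicated thinking model is not available, CoT can be approximated by appending an explicit step-by-step instruction (a sketch; the suffix wording is one common variant, not the only one):

```python
# Sketch: fall back to an explicit chain-of-thought cue when the model
# does not reason by default.
COT_SUFFIX = "Think through this step by step, then state your final answer."

def with_cot(task: str) -> str:
    """Append the chain-of-thought instruction to a task prompt."""
    return f"{task}\n\n{COT_SUFFIX}"

prompt = with_cot("A train leaves at 9:40 and arrives at 12:05. How long is the trip?")
```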

### Tree of Thoughts (ToT)

The idea is that the model explores ideas and answers in different branches, compares them, and picks the best one.

For example:

  1. Brainstorm different strategies
  2. Evaluate each branch (clear pros and cons, with detailed goals)
  3. Synthesize the best elements from all branches to create an optimal Golden Path strategy that is balanced
  4. Execute the actual task with this Golden Path strategy
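The four phases above can be sketched as one structured prompt (the helper name, branch count, and example task are illustrative assumptions):

```python
# Sketch: the brainstorm -> evaluate -> synthesize -> execute phases
# of Tree of Thoughts rendered as a single structured prompt.
def tot_prompt(task: str, n_branches: int = 3) -> str:
    return "\n".join([
        f"Task: {task}",
        f"1. Brainstorm {n_branches} different strategies.",
        "2. Evaluate each branch: clear pros and cons against the stated goals.",
        "3. Synthesize the best elements of all branches into one balanced "
        "'Golden Path' strategy.",
        "4. Execute the task using the Golden Path strategy.",
    ])

prompt = tot_prompt("Plan a migration from a monolith to microservices")
```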

### Playoff Method

Also known as: adversarial validation. Let different personas execute the tasks, then have the other personas review those drafts. The reason: LLMs are usually better at correcting and improving than at original writing.
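A sketch of the playoff round structure: one drafting prompt per persona, then every persona critiques every other persona's draft (the personas, task, and round layout are invented for illustration):

```python
# Sketch: playoff / adversarial validation - each persona drafts,
# then cross-reviews the other personas' drafts.
def playoff_rounds(personas: list[str], task: str) -> list[str]:
    """Return drafting prompts followed by pairwise cross-review prompts."""
    prompts = [f"As {p}, draft: {task}" for p in personas]
    for author in personas:
        for reviewer in personas:
            if reviewer != author:
                prompts.append(
                    f"As {reviewer}, critique and improve the draft "
                    f"written by {author}."
                )
    return prompts

rounds = playoff_rounds(
    ["a skeptical lawyer", "an upbeat marketer"],
    "a product launch email",
)
```

With two personas this yields two drafts plus two cross-reviews; the synthesis of the reviewed drafts is left to a final prompt.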

## Meta-Skill

Before asking the LLM to do anything, YOU need to know exactly what you want and what you expect. This means you need to be able to explain it clearly yourself, else you cannot prompt it.

Thinking is messy --> the prompt will be messy --> the output will be messy.

Think first, Prompt second!

> [!NOTE] Skill Issue
> If you're not happy with the output, treat it as your fault: your prompt is not good enough! You didn't provide enough context.

Finally: use prompt-enhancing prompts to improve your prompts, and keep a prompt library!

## Google Framework

TCREI: Task, Context, Reference, Evaluate, Iterate
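TCREI can be used as a fill-in template: Task, Context, and Reference form the prompt, while Evaluate and Iterate happen after the response. A sketch (the helper name and example values are invented):

```python
# Sketch: TCREI as a template. Evaluate and Iterate are post-response
# steps, so they appear as a reminder rather than prompt sections.
def tcrei_prompt(task: str, context: str, references: list[str]) -> str:
    refs = "\n".join(f"- {r}" for r in references)
    return f"Task: {task}\n\nContext: {context}\n\nReferences:\n{refs}"

prompt = tcrei_prompt(
    task="Summarize the incident report",
    context="Outage on 2025-12-01, affected the EU region only.",
    references=["Postmortem template", "Previous incident summary"],
)
# After the response: Evaluate the output, then Iterate on the prompt.
```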

## Fabric

  1. What are the Challenges that I have in real life that I want to solve?
  2. Components

## Sources

  1. You SUCK at Prompting AI (Here's the secret) - YouTube
  2. Introducing Fabric — A Human AI Augmentation Framework - YouTube