vault backup: 2025-12-01 19:16:35

Affected files:
.obsidian/workspace.json
0 Journal/0 Daily/2025-12-01.md
2 Personal/1 Skills/AI/Prompt Engineering.md
Temporary/Untitled.md
This commit is contained in:
2025-12-01 19:16:35 +01:00
parent ec2819c910
commit 70774e2211
4 changed files with 161 additions and 11 deletions

View File

@@ -64,12 +64,12 @@
"state": {
"type": "markdown",
"state": {
"file": "0 Journal/1 Weekly/2025-W37.md",
"file": "0 Journal/0 Daily/2025-12-01.md",
"mode": "source",
"source": false
},
"icon": "lucide-file",
"title": "2025-W37"
"title": "2025-12-01"
}
}
]
@@ -125,8 +125,7 @@
"title": "Bookmarks"
}
}
],
"currentTab": 1
]
}
],
"direction": "horizontal",
@@ -205,13 +204,13 @@
"state": {
"type": "outline",
"state": {
"file": "0 Journal/0 Daily/2025-11-11.md",
"file": "2 Personal/1 Skills/AI/Prompt Engineering.md",
"followCursor": false,
"showSearch": false,
"searchQuery": ""
},
"icon": "lucide-list",
"title": "Outline of 2025-11-11"
"title": "Outline of Prompt Engineering"
}
},
{
@@ -358,12 +357,49 @@
"height": 932,
"maximize": false,
"zoom": 0
},
{
"id": "6645ab23fe9160db",
"type": "window",
"children": [
{
"id": "50ed15b644bed129",
"type": "tabs",
"children": [
{
"id": "5325a7fc5b3fe55f",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "2 Personal/1 Skills/AI/Prompt Engineering.md",
"mode": "source",
"source": false
},
"icon": "lucide-file",
"title": "Prompt Engineering"
}
}
]
}
],
"direction": "vertical",
"x": 388,
"y": 129,
"width": 1024,
"height": 800,
"maximize": false,
"zoom": 0
}
]
},
"active": "be6b5d9342950966",
"active": "5325a7fc5b3fe55f",
"lastOpenFiles": [
"0 Journal/0 Daily/2025-12-01.md",
"0 Journal/1 Weekly/2025-W37.md",
"2 Personal/1 Skills/AI/Prompt Engineering.md",
"2 Personal/1 Skills/AI",
"Temporary/Untitled.md",
"0 Journal/0 Daily/2025-11-11.md",
"0 Journal/0 Daily/2025-10-09.md",
"0 Journal/0 Daily/2025-10-08.md",
@@ -387,10 +423,7 @@
"2 Personal/Home Lab/VPS/Paperless.md",
"2 Personal/Home Lab/Baerhalten",
"2 Personal/Home Lab/NAS/Zerotier Installation.md",
"Temporary/DS218+ - Drive Upgrade.md",
"Attachments/Pasted image 20251113204535.png",
"Temporary/Drone Regulation Overview.md",
"Temporary/n8n.md",
"Temporary/Untitled.base",
"Attachments/Pasted image 20251104121740.png",
"2 Personal/Home Lab/Devices",
@@ -408,7 +441,6 @@
"Attachments/Pasted image 20250721140942.png",
"Attachments/Pasted image 20250721140924.png",
"2 Personal/Projects/Robotics/Untitled",
"2 Personal/Projects/Robotics",
"Dashboard Canvas.canvas",
"99 Work/0 OneSec/OneSecNotes/30 Engineering Skills/Computer Science/Untitled.canvas",
"8 Work/OneSecNotes/Temporary/Untitled.canvas"

View File

@@ -0,0 +1,28 @@
---
aliases:
Tags:
- daily
day_grade:
Dehnen:
Sport:
Ernährung:
---
# 2025-12-01
[[2025-11-30]] <--> [[2025-12-02]]
Error generating daily quote
---
## Planning
___
## Reflection
Stop doing tech for the sake of tech. Think about what you want to achieve in real life and about the outputs you want to see in the world. If that requires tech, then do tech for that purpose and that purpose only.
___
## Notes
-

View File

@@ -0,0 +1,90 @@
---
title: Prompt Engineering
created_date: 2025-12-01
updated_date: 2025-12-01
aliases:
tags:
---
# Prompt Engineering
## Tricks
### Prompt
It's a call to action --> a prompt is a program, not a question.
LLMs are prediction engines.
### Personas
The persona should go into the system prompt if possible, but the user prompt also works.
> [!NOTE] Persona
> Who is answering this?
> What's the perspective, and what's the source of information?
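As a sketch, pinning the persona to the system role can look like this, assuming an OpenAI-style chat message list; the helper name and persona text are illustrative, not a real API:

```python
# Sketch: pin the persona to the system role so it frames every answer.
# Assumes an OpenAI-style chat message list; build_messages and the
# persona text are made-up examples.
def build_messages(persona: str, question: str) -> list[dict]:
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": question},
    ]

messages = build_messages(
    "a senior homelab engineer answering from first-hand experience",
    "How should I segment my home network?",
)
```

The same persona string would work as a prefix in the user prompt, but the system role keeps it in force across follow-up turns.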
### Context
Context takes the guesswork out of prompting: it is what lets the LLM know what to respond and how. Without context it will just invent everything. This means we need to provide **ALL** the context **EVERY TIME**. We cannot leave anything to chance.
ABC: Always be Contexting!
#### Context Snippets I always want to include:
- If my question cannot be answered reliably with the information available, or if important data is missing, say "The answer is uncertain because …". Provide your best inference but clearly separate facts from assumptions.
> [!NOTE] Context provides:
> What are the facts?
> What do we know?
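A sketch of "always be contexting": a hypothetical helper that prepends every known fact plus the standing uncertainty snippet to each prompt, so nothing is left to chance:

```python
# Hypothetical helper: every prompt carries ALL known facts plus the
# standing uncertainty snippet from above.
UNCERTAINTY_SNIPPET = (
    "If my question cannot be answered reliably with the information "
    "available, or if important data is missing, say 'The answer is "
    "uncertain because ...'. Provide your best inference but clearly "
    "separate facts from assumptions."
)

def with_context(facts: list[str], task: str) -> str:
    context = "\n".join(f"- {fact}" for fact in facts)
    return f"Context:\n{context}\n\n{UNCERTAINTY_SNIPPET}\n\nTask: {task}"
```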
### Format
Output requirements:
1. Format: use a clear bullet list
2. Length: keep it under 200 words
3. Tone: professional, technical, radically transparent; no corporate fluff
#### Few Shot Prompting
- You show a few examples of what you want to achieve (e.g. old emails you've written).
- Include examples of *only* what you want to achieve.
> [!NOTE] Format provides:
> How would we do it?
> How would we describe the process to someone else?
> Few shot shows: this is what good looks like: repeat it!
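A minimal few-shot sketch: assemble complete input/output pairs of exactly what good looks like, then pose the real task with an open `Output:` for the model (the helper name is illustrative):

```python
# Few-shot sketch: show complete input -> output pairs of what good looks
# like, then leave the final Output: open for the model to complete.
def few_shot(examples: list[tuple[str, str]], task: str) -> str:
    shots = "\n\n".join(f"Input: {inp}\nOutput: {out}" for inp, out in examples)
    return f"{shots}\n\nInput: {task}\nOutput:"
```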
### Advanced Technique
#### Chain of Thought (CoT)
Chain of Thought: always use thinking models!
It increases accuracy and trust.
> [!NOTE] CoT provides
> How will the logic flow?
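As a sketch, the CoT nudge for non-thinking models is just an appended instruction; the exact phrasing below is an assumption, not a fixed spell:

```python
# Sketch: append an explicit reasoning request so the model lays out its
# chain of thought before committing to a final answer.
def with_cot(prompt: str) -> str:
    return (
        f"{prompt}\n\n"
        "Think step by step and show your reasoning before the final answer."
    )
```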
#### Trees of Thought (ToT)
The idea is that the model explores ideas and answers in different branches, compares them, and picks the best one.
E.g:
1. Brainstorm different strategies
2. Evaluate each branch (clear pros and cons, with detailed goals)
3. Synthesize the best elements from all branches to create an optimal *Golden Path* strategy that is balanced
4. Execute the actual task with this Golden Path strategy
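The four steps above can be sketched as one reusable template; names and wording are illustrative:

```python
# Tree-of-Thought sketch: one template that walks the model through
# branching, evaluation, synthesis, and execution.
TOT_TEMPLATE = """\
Task: {task}
1. Brainstorm three different strategies.
2. Evaluate each branch: clear pros and cons against these goals: {goals}.
3. Synthesize the best elements of all branches into a balanced 'Golden Path' strategy.
4. Execute the task with the Golden Path strategy."""

def tree_of_thought(task: str, goals: str) -> str:
    return TOT_TEMPLATE.format(task=task, goals=goals)
```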
#### Playoff Method
Also known as: Adversarial Validation
Let different personas each execute the task, then have the other personas review those drafts. The reason: LLMs are usually better at correcting and improving than at original writing.
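A sketch of a playoff round as prompt generation: each persona drafts, then every other persona reviews each draft. The helper is hypothetical; real use would feed each string to the model in turn:

```python
# Playoff / adversarial-validation sketch: each persona drafts, then every
# other persona critiques that draft (reviewing beats original writing).
def playoff_prompts(personas: list[str], task: str):
    drafts = [f"As {p}, write a draft for: {task}" for p in personas]
    reviews = [
        f"As {reviewer}, critique and improve the draft by {author}."
        for author in personas
        for reviewer in personas
        if reviewer != author
    ]
    return drafts, reviews
```

With n personas this yields n drafts and n*(n-1) cross-reviews.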
### Meta-Skill
Before asking the LLM to do anything, **YOU** need to know exactly what you want and what you expect. This means you need to be able to explain it clearly yourself; otherwise you cannot prompt for it.
Thinking is messy --> prompt will be messy --> output is messy
Think first, Prompt second!
> [!NOTE] Skill Issue
> If you're not happy with the output, treat it as your fault: your prompt is not good enough!
> You didn't provide enough context.
Finally: use prompt-enhancing prompts to improve your prompts, and keep a prompt library!
## Google Framework
TCREI: Task, Context, References, Evaluate, Iterate
## Fabric
1. What challenges do I have in real life that I want to solve?
2. Components
## Sources
1. [You SUCK at Prompting AI (Here's the secret) - YouTube](https://www.youtube.com/watch?v=pwWBcsxEoLk)
2. [Introducing Fabric — A Human AI Augmentation Framework - YouTube](https://www.youtube.com/watch?v=wPEyyigh10g)

Temporary/Untitled.md Normal file
View File