Session 2 • February 11, 2026

Prompting Fundamentals

Core prompting principles, context management strategies, and why every new chat starts with a blank slate.

Key Takeaways

  • Write your own prompts — you're responsible for understanding and validating what they do
  • Every new chat is a blank slate; think of it as meeting a stranger who knows nothing about you
  • Create and maintain context files (saved text) rather than depending on AI memory features
  • Text files are processed much better than PDFs — convert PDFs to text for reliable AI understanding
  • If a conversation goes wrong, start fresh with better instructions rather than trying to fix it

What We Covered

Prompting Best Practices

The foundation of effective AI use starts with how you write prompts:

  • Write your own prompts – While it's fine to use AI-generated prompts for inspiration, you must understand what every instruction does
  • Prompts are instructions, not facts – A prompt can't "hallucinate" in the traditional sense, but the outputs it generates can
  • You're the validator – Always read the output word by word and verify factual claims

Context Management

Understanding how AI "remembers" (and forgets) is critical to getting consistent results:

  • Blank slate – Every new chat starts fresh. The AI knows nothing about you until you tell it
  • Context window – AI has a limited memory that progressively forgets earlier parts of long conversations
  • Context files – Save reusable text files with your background, role, and preferences to paste into new chats
  • Don't trust memory features – Built-in AI memory is unreliable. Maintain your own context files instead
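The context-file habit above can be sketched in a few lines of Python. The file name and its contents are invented for illustration — the point is that you maintain the text yourself and prepend it to every new chat:

```python
from pathlib import Path

# Hypothetical context file -- you maintain this text by hand and reuse it.
context_file = Path("my_context.txt")
context_file.write_text(
    "Role: marketing lead at a 10-person startup.\n"
    "Audience: small-business owners.\n"
    "Tone: plain language, no jargon.\n"
)

def build_prompt(task: str) -> str:
    """Prepend the saved context so a brand-new chat starts fully briefed."""
    return f"Context about me:\n{context_file.read_text()}\nTask: {task}"

prompt = build_prompt("Draft a subject line for the spring newsletter.")
print(prompt)
```

Because the context lives in a plain file you control, it works the same in any chat tool — paste it in, then add the task.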

File Handling

Not all file formats are equal when it comes to AI:

  • Text files are best – AI processes plain text much more reliably than any other format
  • PDFs are problematic – They require extra processing, and AI might only read a fraction of the content
  • Best practice – Convert PDFs to text first, save as context files for reuse across conversations
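One way to do the conversion is with the third-party pypdf library (`pip install pypdf`). This is a minimal sketch, not the session's prescribed tooling, and "report.pdf" is a placeholder file name:

```python
from pathlib import Path

def pdf_to_text(pdf_path: str, txt_path: str) -> str:
    """Extract text from a PDF and save it as a plain-text context file."""
    from pypdf import PdfReader  # third-party; install with: pip install pypdf
    reader = PdfReader(pdf_path)
    # extract_text() can return None for image-only pages, hence the "or ''".
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    Path(txt_path).write_text(text)
    return text

# Guarded so the script is a no-op if the placeholder file isn't present.
if Path("report.pdf").exists():
    pdf_to_text("report.pdf", "report.txt")
```

Note that scanned, image-only PDFs yield little or no text this way and need OCR first — another reason to keep the extracted text and check it before reusing it.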

When Prompts Go Wrong

What to do when an AI conversation starts producing bad results:

  • Start fresh – Open a new chat with improved instructions instead of trying to course-correct
  • Learn from failures – Use failed attempts to identify what was confusing and write better initial prompts
  • Be specific early – Define desired outcomes from the beginning rather than adjusting mid-conversation

"You cannot expect that the AI is going to be 100% accurate... you need to read the prompt word by word."

"Hallucination applies to factual information that can be labeled as true or false. But if it writes a prompt for you, that's not information that can be labeled true or false. It is just a prompt... an instruction."

Use Cases

Topics and challenges participants brought to the session.

  • Katie – Custody calendar visualization
  • Patrick – Marketing knowledge base
  • Meli – HR & legal context for AI

Questions Asked

Q Is there any library or subscription platform for prompts that are consistently updated and triple-checked?

Pre-made prompt libraries exist, but they become outdated quickly as AI models evolve. It's better to learn prompting principles and write your own prompts — that way your skills stay current regardless of which tools or models you use.

Q Is there a library for context — like PDFs or JSON files of books about marketing — that I could download and use?

Not yet as a mainstream product. The best approach is to create your own context files from materials you already trust and use in your work. This ensures the context is relevant to your specific situation and you understand what's in it.

Q Does the AI recognize when you say "hello" at the beginning?

AI processes greetings but they don't meaningfully affect output quality. Your context window has a limited capacity — it's better to focus your tokens on context and clear instructions rather than pleasantries.
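A common rule of thumb — not an exact tokenizer, and the ratio varies by model — is roughly four characters per token in English. A toy sketch of how greetings eat into that budget:

```python
def rough_token_count(text: str) -> int:
    """Very rough heuristic: English averages about 4 characters per token.
    Real tokenizers differ by model; this only ballparks a prompt budget."""
    return max(1, len(text) // 4)

# Invented example strings: a pleasantry vs. an instruction of similar length.
greeting = "Hello! Hope you're doing well today."
briefing = "Role: HR manager. Task: summarize the attached policy in 5 bullets."

print(rough_token_count(greeting), rough_token_count(briefing))
```

Both strings cost tokens, but only the second one changes the output — which is the point of spending the window on context rather than small talk.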

Q How do you know the outcome is true when the AI might have hallucinated in the meta-prompt?

Important distinction: prompts are instructions, not facts — they can't "hallucinate." But the outputs generated from prompts absolutely can contain false information. Always validate any factual claims in the AI's response against trusted sources.

Q Is it a question of the AI forgetting, or does the chance of picking the right information decrease?

Both. AI has a fixed context window that progressively deprioritizes earlier content as conversations grow longer. Think of it as talking to someone who gradually forgets the beginning of your conversation — which is exactly why starting fresh with a well-structured prompt often works better than trying to fix a long thread.

Homework

  • Create a context file about yourself or your role that you can paste into new chats
  • Try converting a PDF you use frequently into a text file and compare AI results
  • Start a new chat when things go wrong instead of trying to fix a broken conversation