Teaching Your AI Assistant to Remember: Session Checkpoints for Multi-Day Work

If you’ve used an AI coding assistant for anything beyond a quick one-off question, you’ve hit the wall: the conversation context runs out. You start a new session, and the assistant has no idea what you were doing yesterday. You spend the first fifteen minutes re-explaining your environment, your conventions, your naming standards, and where you left off.

It’s like onboarding a brand-new contractor every single morning.

I’ve been working with GitHub Copilot CLI on a database seeding project — PostgreSQL scripts, PowerShell runners, multiple environments — and the sessions routinely span multiple days. The work is cumulative. Yesterday’s decisions about idempotent INSERT patterns directly affect today’s work on a different script. Losing that context isn’t just annoying; it means the assistant makes worse decisions.

So I built a checkpoint system. And it changed everything about how I work with AI across sessions.


The Problem: Context Evaporation

Here’s what a typical multi-day workflow looks like without checkpoints:

Day 1: You rewrite an auth user seeding script from a destructive DELETE/INSERT pattern to an idempotent temp-table approach with per-row conflict handling. You and the assistant discuss PostgreSQL version constraints (no MERGE support), debate ON CONFLICT syntax, get a DBA review pass, and settle on a final pattern. You also prepare deployment commands for two different environments with different parameters.
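If you haven’t seen that pattern, here’s a minimal sketch of it; the table, column, and constraint names are illustrative, not the project’s actual schema:

```sql
/* Minimal sketch of the idempotent seeding pattern. Table, column, and
   constraint names are illustrative, not the project's actual schema. */
CREATE TABLE IF NOT EXISTS auth_user (
      user_name    text NOT NULL
    , display_name text
    , CONSTRAINT authuser_pkey PRIMARY KEY (user_name)
);

/* Stage the seed rows in a temp table... */
CREATE TEMP TABLE auth_user_stage (user_name text, display_name text);

INSERT INTO auth_user_stage (user_name, display_name)
VALUES
      ('svc_reporting', 'Reporting Service')
    , ('svc_etl',       'ETL Service');

/* ...then insert with per-row conflict handling instead of a destructive
   DELETE/INSERT. ON CONFLICT works on PostgreSQL versions without MERGE. */
INSERT INTO auth_user (user_name, display_name)
SELECT user_name, display_name
FROM auth_user_stage
ON CONFLICT ON CONSTRAINT authuser_pkey DO NOTHING;
```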

Day 2: You open a new session. The assistant doesn’t know any of that. It doesn’t know your PostgreSQL version constraints. It doesn’t know your PL/pgSQL style preferences. It doesn’t know which branch you’re on, which environments you’ve already seeded, or that you had a DBA review the syntax yesterday.

You’re starting from zero. Again.

The Solution: Structured Checkpoints

The checkpoint is a markdown file that captures everything a future session needs to resume seamlessly. Not a conversation log — a structured state snapshot. Think of it like a database backup, but for your working context.

Here’s the structure I settled on, sketched below with illustrative entries from this project (the real file carries the full commands and paths):
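```markdown
# Checkpoint: 2026-04-24T1047
Branch: fix/idempotent-seed-scripts (7 commits pushed, PR open)

## What Was Accomplished
- Rewrote auth user seed script from destructive DELETE/INSERT to a
  temp-table load with per-row ON CONFLICT handling
- DBA review passed; PK constraint name confirmed as authuser_pkey

## Deployment Commands
- [x] Environment 47: seeded (exact PowerShell command, all parameters)
- [x] Environment 49: seeded (exact PowerShell command, all parameters)

## Key Decisions & Context
- ON CONFLICT instead of MERGE: this PostgreSQL version has no MERGE support
- Style: leading commas, /* */ comments

## Open Items
- PR not merged yet

## Referenced Resources
- Full paths to DDL files, connection details, API endpoints
```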

The key sections and why each matters:

What Was Accomplished — So the assistant doesn’t redo work or suggest approaches you’ve already rejected.

Deployment Commands — Exact, copy-pasteable commands with all parameters. Marked with whether they’ve been run and when. This is critical for database work where you’re seeding multiple environments over multiple days.

Key Decisions & Context — This is the highest-value section. It captures the why behind choices. “We use ON CONFLICT instead of MERGE because the PostgreSQL version doesn’t support it” prevents the assistant from suggesting MERGE in the next session. Style preferences go here too — no more re-explaining that you want leading commas and /* */ comments.

Referenced Resources — Full paths to DDL files, connection strings, API endpoints. Everything the assistant would need to look up if it were starting fresh.

How It Works in Practice

I added a simple instruction to my Copilot configuration: when I say “save a checkpoint,” the assistant writes a timestamped markdown file to a known location organized by repository name. At the start of every new session, it checks that location for existing checkpoints and reads the most recent one.

The restore flow at session start:

  • Read the latest checkpoint
  • Summarize what was in progress
  • Verify the current branch and repo state match what the checkpoint recorded
  • Ask whether to continue or start fresh
Here’s what that actually looks like in practice. I open a new session the next morning, and within seconds the assistant says:

“Found checkpoint from yesterday. You were on branch fix/idempotent-seed-scripts with 7 commits pushed. Environments 47 and 49 both seeded successfully. The PR hasn’t been merged yet. Want to continue from here?”

Compare that to: “How can I help you today?”

What Makes a Good Checkpoint vs. a Bad One

Bad checkpoint: A brain dump of everything that happened. Too long, too noisy, buries the important context in a wall of text.

Good checkpoint: Structured, scannable, focused on what the next session needs. Think of it as writing a handoff note for a colleague who’s picking up your work — not a diary entry.

Some lessons learned:

Include exact commands, not descriptions of commands. “We seeded environment 47” is less useful than the actual PowerShell command with all twelve parameters. When you’re deploying to multiple environments over multiple days, you don’t want to reconstruct parameters from memory.

Capture decisions, not just actions. “Changed the INSERT pattern” tells the next session what happened. “Changed from DELETE/INSERT-all to ON CONFLICT DO NOTHING because PostgreSQL version doesn’t support MERGE, and the DBA review confirmed the PK constraint name is authuser_pkey” tells the next session why, which prevents it from suggesting the approach you already ruled out.

Record what’s NOT done. Open items, pending confirmations, blocked work. “Waiting for Cristi to confirm the correct service account for environment 49” prevents the assistant from assuming it’s been resolved.

Update, don’t append. Within a session, reuse the same checkpoint file. Update sections as things change rather than creating a growing append-only log. The checkpoint should always reflect current state, not history.
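Concretely: earlier in the day, before environment 49 had run, the Deployment Commands section of that same file might have read (entries illustrative):

```markdown
## Deployment Commands
- [x] Environment 47: seeded (exact PowerShell command, all parameters)
- [ ] Environment 49: blocked, waiting on service account confirmation
```

When environment 49 succeeds, you flip its checkbox and remove the blocker note. No “Day 2 update” heading, no changelog; the file always reads as the current state.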

The Compound Effect

The real payoff isn’t any single session — it’s the compound effect over days and weeks. Each checkpoint builds on the last. Your assistant accumulates institutional knowledge:

  • Environment mappings and connection parameters
  • Your coding style preferences
  • Which approaches work and which you’ve rejected
  • Deployment history (what’s been run, what succeeded, what’s pending)
  • File locations, DDL paths, API endpoints
By day three of a project, the assistant has context that would take a human colleague hours of onboarding to absorb. It knows your PostgreSQL doesn’t support MERGE. It knows you prefer leading commas. It knows the exact command line for each environment. It knows which PR is still open.

That’s not artificial intelligence. It’s artificial memory. And for multi-day database work, memory is often more valuable than intelligence.

Setting It Up

The implementation is straightforward. You need three things:

  • A storage convention. I use ~/.copilot/session-state/{repo-name}/ with timestamped files like checkpoint-2026-04-24T1047.md. Organizing by repo name (not session ID) means checkpoints survive across sessions — you can always find them by project.
  • A save trigger. I say “save a checkpoint” when I’m at a natural stopping point. The assistant writes the file using the structured template.
  • An auto-restore instruction. Tell the assistant to check for checkpoints at session start and offer to restore. This is the part that makes it seamless — you don’t have to remember to say “load my checkpoint.”
You can put these instructions in your global Copilot configuration (e.g., ~/.github/copilot-instructions.md) or in a per-repo .github/copilot-instructions.md. Global means every project gets checkpoints. Per-repo lets you customize the template for different types of work.
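For concreteness, here’s the shape such an instruction block can take. The wording below is illustrative rather than a required schema; any phrasing that pins down the trigger, the storage location, and the restore behavior will work:

```markdown
## Session checkpoints

- When I say "save a checkpoint", write a structured checkpoint to
  ~/.copilot/session-state/{repo-name}/checkpoint-{timestamp}.md with
  sections: What Was Accomplished, Deployment Commands, Key Decisions
  & Context, Open Items, Referenced Resources.
- At the start of each session, look for the newest checkpoint for this
  repo. If one exists, summarize it, verify that the current branch and
  repo state match what it recorded, and ask whether to continue or
  start fresh.
- Within a session, update the existing checkpoint file in place rather
  than appending new entries.
```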

Beyond Checkpoints: What Else Gets Stored

The checkpoint concept naturally extends to other kinds of persistent context:

  • Style preferences that apply across projects (your PL/pgSQL conventions, your commit message format, your Git workflow preferences)
  • Environment mappings that you’d otherwise re-explain every session
  • Out-of-repo change logs for server configurations, database settings, and other changes that git diff won’t capture
These go in the global configuration rather than per-session checkpoints, but the principle is the same: make the implicit explicit, so the assistant doesn’t have to rediscover it.
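A global section along these lines does the job; every entry here is a placeholder to swap for your own conventions:

```markdown
## Persistent preferences (all projects)

- PL/pgSQL style: leading commas, /* */ comments
- Commit messages: {your format}
- Git workflow: {your branching and PR conventions}
- Environment mappings: {environment id, purpose, connection parameters}
- Out-of-repo changes: log server and database configuration changes to
  a dedicated markdown file alongside the checkpoints
```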

The Bigger Picture

We spend a lot of time talking about what AI assistants can do — generate code, analyze query plans, write deployment scripts. But the bottleneck in multi-day work isn’t capability. It’s continuity.

A brilliant assistant with no memory of yesterday is less useful than a mediocre one that remembers your entire project context. The checkpoint system bridges that gap.

It’s not glamorous. It’s a markdown file in a dot-directory. But it transforms an AI assistant from a stateless tool you use into a stateful collaborator you work with.

And honestly? The discipline of writing structured checkpoints has made me more organized too. Turns out, clearly documenting your decisions, your environment, and your open items is useful whether an AI is reading it or not.

Have you built similar persistence patterns into your AI workflows? I’d love to hear what’s working for you — especially for database deployment work where context across environments and sessions is critical. Find me on Bluesky, LinkedIn, or drop a comment below.