From ChatGPT to Claude Code: How I Finally Understood AI Agents
I recently attended a presentation by Ayham Boucher from Cornell’s AI team. He covered a lot of ground, but one moment stuck with me. In the middle of everything else he was discussing, he mentioned that his favorite AI tool was Claude Code, the command line interface from Anthropic.
I had been using AI tools regularly, but my workflows never quite held together. Context kept breaking, conversations reset, and progress felt harder to carry forward than it should. What finally made the difference was not a new model or feature, but a better understanding of how context works and what people really mean when they talk about AI Agents.
Claude Code has two powerful features that stood out to me. It runs in your terminal, so work happens in a real environment, and it is built around files and folders, so context can live alongside the work itself. That second part is where AI Agents finally clicked for me, and it is what this post focuses on.
I want to explain the moment when AI Agents finally made sense to me, and then walk through how I set this up in practice so it actually works day to day.
Part 1: The Context Problem
Why Context Never Stuck
I have been using ChatGPT since the first month it launched. Like most people, I was impressed. I used it for brainstorming, general problem solving, and project planning, among other things.
But something always felt off.
The promise of “projects” in ChatGPT was supposed to solve the context problem. You could create a project, upload files, and theoretically the AI would remember context across different chats. In practice, it rarely worked. I would start a new conversation inside the same project and still have to re-explain everything. What we decided last time. How things were structured. Why something existed.
The conversations were there. The continuity was not. Claude Code solved this problem in a way I did not expect.
What Claude Code Taught Me About AI Agents
Claude Code runs in your terminal. It has direct access to your file system and can execute commands. But what makes it powerful for my workflow is how naturally it handles context.
When you start Claude Code, it automatically reads a file called CLAUDE.md in your working directory. This file can contain anything: context about who you are, what you are working on, preferences, instructions. You write it once, and Claude knows it every time you start a session.
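To make that concrete, here is the kind of CLAUDE.md I mean. The headings and details below are illustrative, not a required format; the file is free-form markdown that Claude reads at the start of every session.

```shell
# Create a CLAUDE.md in the workspace root.
# Everything inside the file is illustrative; write your own context.
cat > CLAUDE.md <<'EOF'
# About this workspace

Work projects live under Cornell/, personal projects under Mine/.

## Preferences
- Keep answers concise and practical.
- Ask before deleting or overwriting files.

## Conventions
- Meeting notes go in Cornell/Meeting-Notes/, one file per meeting.
EOF
```

Anything you find yourself re-explaining at the start of conversations belongs in this file.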
But this is also where something else finally made sense to me.
We have all been hearing about “AI Agents.” They come up constantly in AI conversations, often described as if they are autonomous systems or special features you need a framework to build.
In practice, at least in this workflow, AI Agents are much simpler. An agent can be as simple as written context stored in a file in your workspace.
Specifically, it is a markdown file with instructions and background that you can explicitly call into any conversation. You give it a name, write down what it should know or how it should behave, and invoke it when you want that perspective applied.
Because the agent is a file, not a chat, it remembers everything you put in it every time you invoke it. Nothing has to be re-explained. The context lives with the work itself, which means you can pull that agent into any conversation anywhere in your workspace rather than being trapped in a single thread.
That was the moment the idea of AI Agents finally made sense to me.
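Claude Code also gives these files a conventional home: a project-level .claude/agents/ directory, one markdown file per agent, with a short frontmatter block that names it. The agent below is a sketch, and its instructions are illustrative; a plain markdown file anywhere in the workspace carries the same idea.

```shell
# One way to store an agent: a named markdown file under .claude/agents/.
# The frontmatter fields and instructions here are illustrative.
mkdir -p .claude/agents
cat > .claude/agents/writing-critic.md <<'EOF'
---
name: writing-critic
description: Reviews prose for repetition, unclear transitions, and sections that need tightening.
---

You are a writing critic. Do not rewrite content wholesale.
Point out repetition, unclear transitions, and paragraphs that
should be tightened, with a one-line reason for each.
EOF
```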
Part 2: How I Actually Set This Up
At that point, I wanted to see if this could hold up outside of a demo or a single use case. So I built a real workflow around it.
This is how I got started.
Claude Code and the “Terminal Access” Moment
Claude Code requires a paid Anthropic subscription. At the time of writing, the entry-level plan that includes it is $20 per month.
Once you install it, you run it from your terminal.
Giving an AI access to your terminal sounds scary. And honestly, it should give you pause. You should not point a tool like this at a production system or a machine with sensitive data and hope for the best.
The good news is you have options.
Safe Ways to Run Claude Code
You do not need to give Claude Code access to your primary work computer to benefit from it.
- Containerized environment: I run Claude Code inside a Docker container on a home Ubuntu server. The container only sees a single mounted directory, my workspace. No access to the rest of the system, the NAS, or any network shares.
- Test virtual machine: A Windows 11 test VM gives you a safe place to experiment without risking a production machine.
- Non-production computer: If you have a machine without sensitive data, you can run Claude Code directly and keep your working directory scoped to your project folder.
The key point is that Claude Code’s value comes from proximity to files, not unrestricted system access.
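A minimal sketch of the containerized option looks like this. The image tag, paths, and install-on-launch step are assumptions about a typical setup, not an exact recipe; the point is that only the workspace directory is mounted.

```shell
# Write a small launcher script. The container sees /workspace and
# nothing else on the host; image and paths are illustrative.
cat > run-claude-container.sh <<'EOF'
#!/usr/bin/env sh
exec docker run -it --rm \
  -v "$HOME/ai-cli-workspace":/workspace \
  -w /workspace \
  node:20 sh -c 'npm install -g @anthropic-ai/claude-code && exec claude'
EOF
chmod +x run-claude-container.sh
```

Installing the CLI on every launch keeps the container disposable; baking it into a custom image would avoid the repeated download.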
Syncing the Workspace
Everything in my Claude workspace syncs to a private GitHub repository. I wanted to make sure I could access the same Claude workspace no matter which device I was working on.
That gives me:
- One source of truth
- Versioned changes
- Easy access from multiple machines
- Context files that evolve with the work
The workflow stays intentionally simple.
When I sit down to work, I pull the latest version of my workspace so everything is current. I spend my time thinking, writing, and organizing, not managing tools. When I am done, I save those changes so they are available the next time I open the project, no matter which machine I am on.
The important part is not the mechanics, but the habit. My notes, scripts, and context files all live in one place and evolve together. Claude is always looking at the same source of truth I am, which is why the conversations stop resetting.
Claude understands version control behind the scenes. It can see what changed, help keep things organized, and move work forward without getting in the way. It feels like it belongs in this environment.
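Under the hood this habit is ordinary git. A small helper script captures it; the script name and commit message are arbitrary, and it assumes the workspace is already a clone of the private repo.

```shell
# Helper for the pull-before, push-after habit. Names are arbitrary.
cat > sync.sh <<'EOF'
#!/usr/bin/env sh
set -e
case "$1" in
  down)  # run when sitting down to work
    git pull --rebase ;;
  up)    # run when finishing a session
    git add -A
    git commit -m "Update workspace" || echo "Nothing new to commit."
    git push ;;
  *)
    echo "usage: $0 down|up" >&2
    exit 1 ;;
esac
EOF
chmod +x sync.sh
```

Run ./sync.sh down when you sit down and ./sync.sh up when you finish, on whichever machine you happen to be using.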
Organizing the Workspace
When you first open Claude Code, you are staring at a cursor. So now what? I start by creating folders that mirror how I already think.
A work folder. A personal folder. From there, structure emerges naturally.
The surprising part is that I did not need to remember terminal commands. I could just describe what I wanted in plain language and let Claude handle the mechanics.
Over time, my workspace settled into something like this:
ai-cli-workspace/
├── CLAUDE.md
├── CLAUDE-CONTEXT.md
├── Cornell/
│   ├── Projects/
│   │   ├── project-a/
│   │   └── project-b/
│   ├── Meeting-Notes/
│   │   └── weekly-1on1s/
│   ├── Documentation/
│   └── Scripts/
│       ├── automation/
│       └── utilities/
└── Mine/
    ├── Biking/
    ├── Homelab/
    └── Video-Projects/

This is where AI Agents stop being theoretical.
Up to this point, AI Agents might still sound abstract. So it helps to see what one looks like when it is tied to a real problem and a real environment.
Agent Example: My Homelab Sysadmin
I run a small home lab. A few servers, Docker containers, a NAS, and the usual collection of ports, firewall rules, and services that you forget the details of until something breaks.
Before Claude Code, troubleshooting meant digging through notes or re-explaining my entire setup to ChatGPT every time I needed help.
So I created an agent called Homelab Sysadmin.
It is just a markdown file, but it contains hardware configurations, server names, container details, port mappings, firewall rules, and common commands. When something is not working, I call that agent and describe the problem. Claude already knows the environment.
No re-explaining. No warm-up.
A simple example makes the difference clear. I can ask, “What kind of RAM should I purchase for my Synology NAS?” and get a complete answer without any back-and-forth. The agent already knows the exact NAS model, how many memory slots it has, what is currently installed, and what workloads it is handling. That context lets it give a specific, correct recommendation instead of a generic list pulled from documentation.
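The file itself is nothing exotic. Here is its general shape, with every model number, port, and path replaced by a placeholder rather than my real environment.

```shell
# Sketch of the Homelab Sysadmin agent. All names, models, and port
# numbers below are placeholders, not an actual environment.
mkdir -p .claude/agents
cat > .claude/agents/homelab-sysadmin.md <<'EOF'
---
name: homelab-sysadmin
description: Knows the homelab hardware, containers, and network layout.
---

## Hardware
- NAS: Synology DS920+ (example model), 2 RAM slots, 4 GB installed
- Server: Ubuntu 22.04, runs all Docker containers

## Services and ports
- 8080 -> reverse proxy
- 9000 -> container dashboard

## Conventions
- Compose files live in /opt/stacks, one folder per service.
EOF
```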
Where This Started Showing Up in Everyday Work
The homelab agent was the first place this really proved itself, but it did not stop there.
Once I had folders in place, I started leaning on them for everything that normally ends up scattered across tools and conversations. Individual project folders. Meeting notes. Documentation. Scripts. Even half‑formed ideas that usually get lost.
Instead of treating those as disposable conversations, I started treating them as files organized by purpose.
Each project lives in its own folder. Meeting notes live in a predictable place and follow a consistent structure. Documentation and scripts sit alongside the work they describe. When I open Claude Code inside one of those folders, it already knows the context because the context is right there.
Before a meeting, I can open the meeting notes folder and ask Claude to summarize what I worked on last week, what is still open, and what I should bring up. When I am working on documentation or scripts, Claude can read what already exists and help extend it without losing intent or tone.
This is also where I started creating more focused AI Agents for everyday tasks.
One example is a Code Critic agent. This agent is not there to write code for me. It has specific instructions to review what I have written, check it against our standards, and point out problems. It knows things like required script headers, logging conventions, and where log files should be written. When I ask it to review a script, it can flag missing pieces, suggest improvements, or insert a standard header without changing the underlying logic.
I also created a Writing Critic AI Agent. This one helps with clarity and structure rather than content. It is tuned to look for repetition, unclear transitions, and sections that need tightening. Because it is defined in a file, it applies the same rules every time, whether I am working on documentation, notes, or longer writing like this post.
These agents are small, specific, and predictable. They do one thing well, and they do it the same way every time.
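As a sketch, the Code Critic agent file can be as short as this; the specific standards listed are examples of the kind of rules such a file encodes, not a canonical set.

```shell
# Sketch of the Code Critic agent. The standards are illustrative.
mkdir -p .claude/agents
cat > .claude/agents/code-critic.md <<'EOF'
---
name: code-critic
description: Reviews scripts against house standards without rewriting logic.
---

Review the script you are given. Do not change its logic.

Check for:
- A standard header: synopsis, author, date, revision history
- Logging to the agreed location (e.g. ./logs/<script-name>.log)
- Error handling around anything that touches the network or disk

Report each miss as a one-line finding with a suggested fix.
EOF
```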
None of this felt dramatic in the moment. It just quietly removed friction.
Part 3: When Work Stopped Resetting
The change was not that I suddenly worked faster or produced more.
It was that work stopped feeling fragile.
Projects no longer depended on what I remembered to restate in the next conversation. Notes did not disappear into chat history. Decisions lived next to the work that came out of them.
I can open a project folder and ask, “Where are we at on this?” and get an answer grounded in meeting notes, documentation, and scripts that already exist.
That was when the shift became clear to me.
After years of using ChatGPT, Claude Code feels different. Not because the AI is smarter, but because the context actually persists.
An agent is not a mystical AI concept. It is written context that you can call into any conversation when you need it. You name it. You define it. And it behaves the same way every time.
I am still early in this. I signed up after Ayham’s presentation and I am learning new patterns as I go. But for the first time, I feel like I have an AI assistant that actually knows my work and can pick up where we left off.
That is the promise AI tools have been making for years.
Claude Code is the first one that delivered it for me.
Quick Start Checklist
If you want to try this yourself:
- Subscribe to a Claude plan that includes Claude Code ($20/month at the time of writing)
- Choose a safe environment and install Claude Code
  - Docker container, test VM, or non-production machine
- Create a dedicated workspace folder
- Add a CLAUDE.md file with basic context
- Sync the workspace to a private Git repository
- Create one agent
  - A sysadmin role, a coding standard, or a reviewer role
- Write down what you keep re-explaining
- Call that agent explicitly when you need it
You do not need to build everything at once. One agent is enough to feel the difference.