Agentic Coding Tips 2
This is a translation of https://kdy1.dev/2026-1-31-ai-coding-tips-kr
I recently gave a short presentation about how I use AI. The first part of the slides overlaps with a previous blog post. In this article, I’ll focus on topics that weren’t covered in that post.
Error Messages and Logging
Using Concrete Types and Schemas
The `any` type is dangerous even for humans, but it's even more dangerous for AI.
The same applies to things like:

- Unconstrained parsing such as `JSON.parse`
- Loose interfaces
- Implicit or undocumented data structures

These patterns force AI to make too many assumptions.
The real problem is that once an assumption is wrong, all subsequent reasoning can spiral out of control.
So I try to follow these principles as much as possible:

- Use the most concrete types possible instead of `any`
- Prefer schema-based parsers over simple parsing (e.g., Zod or Yup instead of `JSON.parse`)
With this approach, even if the AI makes assumptions:

- The probability of those assumptions being wrong is lower
- When they are wrong, the system fails fast
In other words, this prevents AI from carrying incorrect reasoning all the way to the end.
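A minimal sketch of the second principle: the hand-rolled validator below plays the role a schema library like Zod or Yup would play, turning `JSON.parse` output into a concrete type and failing fast on malformed input. The `Config` shape is a hypothetical example, not from the original post.

```typescript
// A concrete type instead of `any`.
interface Config {
  name: string;
  retries: number;
}

// Schema-style parsing: validate the shape and fail fast, the way a
// Zod/Yup schema would, instead of trusting JSON.parse blindly.
function parseConfig(raw: string): Config {
  const data: unknown = JSON.parse(raw);
  if (
    typeof data !== "object" || data === null ||
    typeof (data as any).name !== "string" ||
    typeof (data as any).retries !== "number"
  ) {
    throw new Error("invalid Config: " + raw);
  }
  return data as Config;
}

console.log(parseConfig('{"name":"delidev","retries":3}').retries); // 3
```

Because bad input throws immediately at the parsing boundary, a wrong assumption can't propagate through the rest of the AI's reasoning.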
Leveraging GitHub Actions
If you look closely, GitHub Actions has some surprisingly strong properties:

- Completely isolated environments
- Easy to configure
- A large collection of well-prepared examples
Instead of using it only for CI, I started treating it as a development virtual machine.
The same idea applies to AI.
> "Whatever a developer can do locally, AI should be able to do in exactly the same way."
Setting Up a Development Environment for Claude Code
- Actual code: https://github.com/delinoio/delidev/blob/abed0d02fd30524bfb2f77f7227bf9560092e949/.github/workflows/claude.yml
```yaml
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
      actions: read # Required for Claude to read CI results on PRs
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Setup pnpm
        uses: pnpm/action-setup@v4.2.0

      - name: Install Tauri dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y \
            libwebkit2gtk-4.1-dev \
            libappindicator3-dev \
            librsvg2-dev \
            patchelf

      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          # This is an optional setting that allows Claude to read CI results on PRs
          additional_permissions: |
            actions: read
          claude_args: |
            --allowed-tools Bash,WebFetch,WebSearch,Skill
            --model opus
```
With GitHub Actions, you can set things up so that:

- Package managers are installed
- Build commands are executed (`pnpm build`, `cargo build`, etc.)
- Tests are run
Once configured this way, Claude Code effectively operates in an environment that’s almost identical to a local development setup.
When the environment is this complete, the quality of AI output improves dramatically:

- It stops guessing
- It reasons based on actual execution results
- It produces "code that actually runs," not just "code that should work in theory"
Managing Clean Commit Messages with AI

Preventing PR Description Spam
```yaml
name: Claude Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  claude-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Dismiss old Claude bot comments
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          REPO="${{ github.repository }}"
          PR_NUMBER="${{ github.event.pull_request.number }}"
          gh api "repos/$REPO/issues/$PR_NUMBER/comments" --jq '.[] | select(.user.login == "claude[bot]") | .node_id' | while read -r comment_node_id; do
            if [ -n "$comment_node_id" ]; then
              gh api graphql -f query='
                mutation($id: ID!) {
                  minimizeComment(input: {subjectId: $id, classifier: OUTDATED}) {
                    minimizedComment {
                      isMinimized
                    }
                  }
                }' -f id="$comment_node_id"
            fi
          done

      - name: Run Claude Code Review
        id: claude-review
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          allowed_bots: '*'
          prompt: |
            REPO: ${{ github.repository }}
            PR NUMBER: ${{ github.event.pull_request.number }}

            Please review this pull request and provide feedback on:
            - Code quality and best practices
            - Potential bugs or issues
            - Performance considerations
            - Security concerns
            - Test coverage

            Use the repository's CLAUDE.md for guidance on style and conventions. Be constructive and helpful in your feedback.

            Use `gh pr comment` with your Bash tool to leave your review as a comment on the PR.
          claude_args: '--allowed-tools "Bash(gh issue view:*),Bash(gh search:*),Bash(gh issue list:*),Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*),Bash(gh pr list:*)"'
```
There's one bug in the Claude Code GitHub review action: it leaves too many review comments, which can easily flood a PR with AI-generated noise. The "Dismiss old Claude bot comments" step above works around this by minimizing the bot's previous comments before a new review is posted.
How to Use AI Reviews Effectively
Core Assumptions
Let's be explicit about the premises:

- Human reviews are slow
- Human reviews are expensive
Therefore, the goal is to minimize human involvement.
The strategy I chose is:

- Run AI reviews and CI first
- Humans do not intervene until the AI gives an OK
- Only when tests and automated reviews pass does a human perform the final review
In short:
Humans act only as the “final approver.”
1. Using a GitHub App
By integrating AI reviews as a GitHub App, reviews start automatically as soon as a PR is opened.
At this stage, AI filters out:

- Code style issues
- Obvious bugs
- Structural problems
2. Applying Changes via GitHub Actions

Using tools like the Claude Code GitHub Action makes parallel processing much easier. Checking out code locally should be reserved for situations where human, local testing is truly required.
3. Using CI as a Gatekeeper

The key is to treat CI not just as a testing tool, but as a barrier between AI and humans.
If AI-generated changes can’t pass CI, they never reach human reviewers. That alone significantly reduces review costs.
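The gatekeeper rule can be stated as a tiny predicate. This is a hypothetical sketch: `ciPassed` and `aiReviewOk` are made-up inputs that, in a real setup, would come from something like the PR's check results, not values the original post defines.

```typescript
// Hypothetical gate: a PR reaches human reviewers only when both
// automated signals are green.
type PrStatus = { ciPassed: boolean; aiReviewOk: boolean };

function readyForHumanReview(pr: PrStatus): boolean {
  // CI is the barrier between AI and humans: nothing that fails
  // automated checks is allowed to consume reviewer time.
  return pr.ciPassed && pr.aiReviewOk;
}

console.log(readyForHumanReview({ ciPassed: true, aiReviewOk: false })); // false
```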
Q&A
Why MCP Is Unnecessary for This Use Case
Imagine there is a CLI that provides the exact same capabilities as a specific MCP server. Anything you can do through MCP could also be done through that CLI. This is why Vercel chose to improve its CLI instead of building an MCP server.
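To illustrate the CLI-over-MCP point: an agent that can run shell commands reaches the same capability by spawning the CLI as a subprocess. The sketch below is hypothetical; `echo` stands in for a real CLI such as `vercel`, and `runCli` is a made-up helper, not part of any tool mentioned above.

```typescript
import { spawnSync } from "node:child_process";

// Run a CLI as a subprocess and return its stdout, the way an agent's
// Bash tool would. Any capability an MCP server exposes can typically
// be reached this way, as long as an equivalent CLI exists.
function runCli(cmd: string, args: string[]): string {
  const result = spawnSync(cmd, args, { encoding: "utf8" });
  if (result.status !== 0) {
    throw new Error(`${cmd} failed: ${result.stderr}`);
  }
  return result.stdout.trim();
}

console.log(runCli("echo", ["deploy", "--prod"])); // deploy --prod
```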