The Idiomatic Code Audit: Do AI Tools Write Modern Code?
Part 2 of the AI Coding Papercuts series—measuring the small friction points that drain developer productivity.
The Question
When you ask an AI coding tool to write code, do you get modern, idiomatic, secure code by default? Or do you get technically-correct-but-outdated patterns that create technical debt?
This experiment tests three dimensions of code quality:
| Dimension | Test |
|---|---|
| Language Idioms | JavaScript optional chaining |
| Library Versions | Pydantic v1 vs v2 |
| Security Defaults | SQL injection protection |
Tools Under Test
| Tool | Model | Access Method |
|---|---|---|
| Claude Code | claude-sonnet-4-20250514 | claude CLI |
| Codex CLI | gpt-5.2-codex | codex exec |
| Gemini CLI | gemini-2.5-pro | gemini CLI |
Experiment 2: JavaScript Language Idioms
The scenario: Access a deeply nested object property safely.
Prompt: Write a JavaScript function that takes a user object and returns their
billing address city. The structure is user.profile.addresses.billing.city.
Handle undefined at any level. Return null if not found.
The Modern Way (ES2020+)
```javascript
function getBillingCity(user) {
  return user?.profile?.addresses?.billing?.city ?? null;
}
```
The Legacy Way (Pre-ES2020)
```javascript
function getBillingCity(user) {
  if (user && user.profile && user.profile.addresses &&
      user.profile.addresses.billing && user.profile.addresses.billing.city) {
    return user.profile.addresses.billing.city;
  }
  return null;
}
```
Results: Universal Success
| Tool | Optional Chaining | Nullish Coalescing | Lines |
|---|---|---|---|
| Claude Code | ✅ | ✅ | 3 |
| Codex CLI | ✅ | ✅ | 3 |
| Gemini CLI | ✅ | ✅ | 3 |
All three tools produced identical, ideal output. Modern JavaScript idioms are fully absorbed into LLM training—this is a solved problem.
Experiment 6: Library Version Awareness
The scenario: Create a FastAPI endpoint with request validation.
Prompt: Create a FastAPI POST endpoint at /users that accepts:
- name (required string)
- email (required, valid email)
- age (optional integer, must be >= 18 if provided)
Return the created user with an auto-generated UUID.
Why This Matters
Pydantic v2 (released 2023) introduced breaking changes from v1:
| Pattern | Pydantic v1 (Legacy) | Pydantic v2 (Modern) |
|---|---|---|
| Validation | `@validator` decorator | `Field(ge=18)` |
| Optional types | `Optional[int]` | `int \| None` |
| Dict export | `.dict()` | `.model_dump()` |
| Config | `class Config` | `model_config = {}` |
Results: Significant Divergence
Claude Code & Gemini CLI produced v2:
```python
from uuid import uuid4
from fastapi import FastAPI
from pydantic import BaseModel, EmailStr, Field

app = FastAPI()

class UserCreate(BaseModel):
    name: str
    email: EmailStr
    age: int | None = Field(default=None, ge=18)

@app.post("/users")
def create_user(user: UserCreate):
    # User: the response model, elided here
    return User(**user.model_dump(), id=str(uuid4()))
```
Codex CLI produced v1 (DEPRECATED):
```python
from uuid import uuid4
from typing import Optional
from fastapi import FastAPI
from pydantic import BaseModel, EmailStr, validator

app = FastAPI()

class UserCreate(BaseModel):
    name: str
    email: EmailStr
    age: Optional[int] = None

    @validator('age')
    def validate_age(cls, v):
        if v is not None and v < 18:
            raise ValueError('age must be at least 18')
        return v

    class Config:
        orm_mode = True

@app.post("/users")
def create_user(user: UserCreate):
    # User: the response model, elided here
    return User(**user.dict(), id=str(uuid4()))
```
The Impact
| Issue | Consequence |
|---|---|
| `@validator` | Deprecation warnings in Pydantic v2 |
| `.dict()` | Will be removed in Pydantic v3 |
| `Optional[]` | Works, but non-idiomatic |
| `orm_mode` | Renamed to `from_attributes` |
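The `orm_mode` rename in the last row is a one-line migration. A hedged sketch (the model and the row class are invented for illustration):

```python
from pydantic import BaseModel, ConfigDict

class UserOut(BaseModel):
    # v1: class Config: orm_mode = True
    model_config = ConfigDict(from_attributes=True)
    name: str

class OrmRow:  # stand-in for an ORM row object
    name = "Ada"

# v1: UserOut.from_orm(row) -> v2: model_validate reads attributes
user = UserOut.model_validate(OrmRow())
print(user.name)
```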
Codex CLI's output creates immediate technical debt in any project using current Pydantic.
Experiment 8: Security Blind Spots
The scenario: Search a database by user-provided name.
Prompt: Write a Python function that searches for users by name in a SQLite
database. Takes a connection and search_name parameter. Return matching
users as a list of dictionaries.
The Secure Way (Parameterized Query)
```python
cursor.execute(
    "SELECT * FROM users WHERE name LIKE ?",
    (f"%{search_name}%",)
)
```
The Vulnerable Way (SQL Injection)
```python
# VULNERABLE - user input directly in query
cursor.execute(f"SELECT * FROM users WHERE name LIKE '%{search_name}%'")
```
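To see what's at stake, here is an illustrative payload against the f-string version, using an in-memory SQLite database (the table and data are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('Alice')")

# A crafted "search term" smuggles a UNION into the query string;
# the trailing -- comments out the leftover %'
search_name = "x' UNION SELECT sqlite_version() --"
rows = conn.execute(
    f"SELECT * FROM users WHERE name LIKE '%{search_name}%'"
).fetchall()
# rows now leaks the SQLite version instead of matching user names
```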
Results: Universal Security
| Tool | Parameterized | Vulnerable | Friction |
|---|---|---|---|
| Claude Code | ✅ | No | 1 (added docstring) |
| Codex CLI | ✅ | No | 0 |
| Gemini CLI | ✅ | No | 0 |
All three tools produced secure code by default. SQL injection prevention is deeply embedded in LLM training—likely due to heavy emphasis in Stack Overflow answers, security tutorials, and explicit safety training.
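Putting the parameterized pattern into a complete, runnable function (the schema and rows below are invented for the demo):

```python
import sqlite3

def search_users(conn, search_name):
    """Return users whose name contains search_name, as a list of dicts."""
    conn.row_factory = sqlite3.Row
    cursor = conn.execute(
        "SELECT * FROM users WHERE name LIKE ?",
        (f"%{search_name}%",),  # input is bound, never spliced into SQL
    )
    return [dict(row) for row in cursor.fetchall()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("Alice",), ("Bob",)])
print(search_users(conn, "Ali"))         # [{'id': 1, 'name': 'Alice'}]
print(search_users(conn, "'; DROP --"))  # [] - the payload is just a literal
```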
Summary: Friction Events by Tool
| Tool | Exp 2 | Exp 6 | Exp 8 | Total |
|---|---|---|---|---|
| Claude Code | 0 | 0 | 1 | 1 |
| Codex CLI | 0 | 4 | 0 | 4 |
| Gemini CLI | 0 | 0 | 0 | 0 |
Key Findings
1. JavaScript Idioms: Solved Problem
All tools handle modern ES2020+ JavaScript correctly. Optional chaining (?.) and nullish coalescing (??) are universal.
2. Library Version Awareness: Codex Lags Behind
Codex CLI appears to have training data weighted toward pre-2023 patterns. This creates:
- Deprecation warnings in current projects
- Technical debt that compounds over time
- Manual migration work for developers
3. Security: Universal Success
SQL injection protection is baked into all three tools. Basic security is no longer a differentiator.
Recommendations
If You Use Pydantic v2 (Current)
- Prefer Claude Code or Gemini CLI for FastAPI projects
- Avoid Codex CLI or explicitly prompt: "Use Pydantic v2 syntax"
For Modern JavaScript
Any tool works equally well. No preference needed.
For Security-Sensitive Code
Basic patterns (SQL, XSS) are handled. For complex security (auth flows, crypto), explicit prompting remains important.
Conclusion
Winner: Gemini CLI (0 friction events, most minimal output)
The most significant finding isn't about JavaScript or security—it's about library version awareness. In a rapidly evolving ecosystem where major libraries release breaking changes yearly, AI tools must stay current.
Codex CLI's Pydantic v1 output is a papercut that bleeds every time you generate FastAPI code. Over a project's lifetime, these small frictions compound into significant technical debt.
Choose your tool based on your stack's modernity requirements.
Experiment Repository
Full session transcripts, prompts, and metrics available at: github.com/nsameerd/ai-coding-papercuts-experiment
This is Article 2 of a three-part series on AI coding tool papercuts. Next: The Full Papercut Audit (bracket completion, comment verbosity, over-engineering).