ChatGPT for Professional Work

Using ChatGPT for Coding: Generation, Review, and Debugging

ChatGPT is a capable coding partner when prompted precisely. This guide covers the prompt patterns that produce correct, reviewable code and the habits that prevent the most common failures in AI-generated code.

What ChatGPT Can and Cannot Do With Code

ChatGPT generates code by predicting plausible continuations of your input. This means:

It's reliable for:

  • Boilerplate and scaffolding (component structure, CRUD services, test setup)
  • Pattern application (converting a code pattern to a new use case)
  • Refactoring with clear constraints
  • Explaining what existing code does
  • Finding likely causes of errors given an error message and code

It's unreliable for:

  • Correctness in complex business logic without verification
  • Security-sensitive code (auth, permissions, data handling) — always review
  • Code that requires runtime knowledge (live APIs, your exact database schema)
  • Very long outputs — quality degrades past ~200 lines

Code Generation Prompts

The Specification Prompt

text
Write a TypeScript function with the following specification:

Name: validateEmail
Input: email: string
Output: { valid: boolean; normalizedEmail?: string; error?: string }

Behavior:
- Trim whitespace from input before validation
- Validate format against RFC 5322 simplified pattern
- Normalize to lowercase if valid
- Return error message if invalid: "Invalid email format"
- Return normalizedEmail if valid

Constraints:
- No external libraries
- Use TypeScript strict mode compatible types
- No regex more complex than necessary
- Include JSDoc comment
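A specification this precise tends to produce something close to the following sketch. This is one plausible output, not the only correct implementation; in particular, the "simplified RFC 5322" regex here is a common pragmatic choice, not a full RFC implementation:

```typescript
/**
 * Validates and normalizes an email address.
 * Returns the lowercased address when valid, or an error message when not.
 */
function validateEmail(email: string): {
  valid: boolean;
  normalizedEmail?: string;
  error?: string;
} {
  // Trim whitespace from input before validation
  const trimmed = email.trim();

  // Simplified pattern: non-empty local part, "@", domain with at least one dot
  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

  if (!pattern.test(trimmed)) {
    return { valid: false, error: "Invalid email format" };
  }

  // Normalize to lowercase if valid
  return { valid: true, normalizedEmail: trimmed.toLowerCase() };
}
```

Note how every behavior bullet in the prompt maps to one visible step in the function; that mapping is what makes the output easy to review.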

The Pattern Application Prompt

text
Here is an existing service function in our codebase:
[paste getUser function]

Following the exact same pattern — same error handling, same return type
structure, same logging approach — write a getProject function.

The projects table has these columns: id, name, owner_id, created_at,
settings (jsonb). The function should accept projectId: string and
return the project or null if not found.

Pattern application prompts consistently produce better-integrated code than from-scratch generation prompts because ChatGPT matches the style it sees.
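As an illustration, here is a hypothetical getUser function (your real one is what you would paste) and the kind of pattern-matched getProject a prompt like this produces. The `db` client is a minimal stand-in invented for this sketch:

```typescript
// Minimal stand-in for a database client (illustration only).
const db = {
  async query(_sql: string, _params: unknown[]): Promise<any[]> {
    return [];
  },
};

// Hypothetical existing function — the "pattern" pasted into the prompt.
interface User { id: string; email: string; created_at: string }

async function getUser(userId: string): Promise<User | null> {
  try {
    const rows = await db.query("SELECT * FROM users WHERE id = $1", [userId]);
    return rows[0] ?? null;
  } catch (err) {
    console.error("getUser failed", { userId, err });
    throw err;
  }
}

// A good pattern-application response: same error handling, same return
// shape, same logging — only the table, types, and names change.
interface Project {
  id: string;
  name: string;
  owner_id: string;
  created_at: string;
  settings: Record<string, unknown>;
}

async function getProject(projectId: string): Promise<Project | null> {
  try {
    const rows = await db.query("SELECT * FROM projects WHERE id = $1", [projectId]);
    return rows[0] ?? null;
  } catch (err) {
    console.error("getProject failed", { projectId, err });
    throw err;
  }
}
```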

The Test-Driven Generation Prompt

text
I need to implement the parseDateRange function.
Here are the tests it must pass:

[paste test file]

Implement parseDateRange in TypeScript to pass all of these tests.
Do not modify the tests. If a test seems incorrect, flag it but
still write implementation that passes it.
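A minimal sketch of the workflow, using hypothetical tests (your real test file defines the actual contract, including the format parseDateRange accepts):

```typescript
// Hypothetical tests you might paste — assumed format "YYYY-MM-DD..YYYY-MM-DD":
//   expect(parseDateRange("2024-01-01..2024-01-31"))
//     .toEqual({ start: "2024-01-01", end: "2024-01-31" })
//   expect(() => parseDateRange("2024-01-31..2024-01-01"))
//     .toThrow("start must not be after end")

interface DateRange { start: string; end: string }

/** Parses "YYYY-MM-DD..YYYY-MM-DD" into a DateRange; throws on bad input. */
function parseDateRange(input: string): DateRange {
  const match = /^(\d{4}-\d{2}-\d{2})\.\.(\d{4}-\d{2}-\d{2})$/.exec(input.trim());
  if (!match) throw new Error("Invalid date range format");
  const [, start, end] = match;
  // ISO dates compare correctly as strings
  if (start > end) throw new Error("start must not be after end");
  return { start, end };
}
```

The tests pin down exactly the behaviors the implementation must have, which is why this prompt shape produces fewer silent deviations than a prose description.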

Code Review Prompts

Security Review

text
Review this code for security vulnerabilities.

Focus on:
1. SQL injection or query injection risks
2. Authentication and authorization gaps
3. Insecure handling of user input
4. Exposed secrets or credentials
5. Unsafe operations on user-supplied data

Format: [CRITICAL/HIGH/MEDIUM/LOW] Line number: Description of vulnerability.
For CRITICAL and HIGH: suggest the specific fix.

[paste code]
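To show the kind of finding and fix this prompt is asking for, here is an invented before/after pair for the most common CRITICAL finding, string-built SQL:

```typescript
// Finding in the prompt's format (illustrative):
// [CRITICAL] Line 7: User input interpolated into SQL string — injection risk.
// Fix: use a parameterized query.

interface DbClient {
  query(sql: string, params?: unknown[]): unknown;
}

// Vulnerable: query built by string concatenation — injectable
function findUserUnsafe(db: DbClient, name: string) {
  return db.query(`SELECT * FROM users WHERE name = '${name}'`);
}

// Fixed: a parameterized query keeps user input out of the SQL text
function findUserSafe(db: DbClient, name: string) {
  return db.query("SELECT * FROM users WHERE name = $1", [name]);
}
```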

Type Safety Review

text
Review this TypeScript code for type safety issues.
The project uses strict mode: no any, no unknown without narrowing,
no non-null assertions without justification.

List all type issues as: [File:Line] Current type: [type] — Issue: [description]
Suggest the correct type for each.

[paste code]
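An example of the kind of issue and fix this review surfaces, using an invented parseScore function. The "before" is commented out; the "after" narrows `unknown` explicitly instead of reaching for `any`:

```typescript
// Before — flagged as:
// [api.ts:2] Current type: any — Issue: payload shape never checked before use
// function parseScore(raw: any) { return raw.score * 2; }

// After: accept unknown and narrow it before touching any property
function parseScore(raw: unknown): number {
  if (
    typeof raw === "object" &&
    raw !== null &&
    "score" in raw &&
    typeof (raw as { score: unknown }).score === "number"
  ) {
    return (raw as { score: number }).score * 2;
  }
  throw new Error("Unexpected payload shape");
}
```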

Architecture Review

text
You are reviewing a pull request for architectural coherence.
Our conventions: [describe your conventions briefly]

The PR adds: [describe what it does]

Review for:
- Consistency with existing patterns
- Separation of concerns
- Any coupling that will create maintenance problems
- Missing abstractions that will be needed immediately

Do not flag style issues. Flag only structural concerns.

[paste changed files]

Debugging Prompts

Error Diagnosis

text
This code throws an error. Identify the root cause and the minimal fix.

Error:
[exact error message and stack trace]

Relevant code:
[paste the relevant functions/files]

Context:
- This error occurs when [describe when it happens]
- It does not occur when [describe when it doesn't]

Do not refactor unrelated code. Minimal change only.

Logic Bug

text
This function produces incorrect output. Find the bug.

Function: [paste function]
Input: [paste example input]
Expected output: [describe what it should return]
Actual output: [paste what it actually returns]

Explain the root cause in one sentence. Show the fixed function.
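A filled-in example of this prompt, using an invented lastN helper. Input: `[1, 2, 3, 4]` with `n = 2`; expected output `[3, 4]`; actual output `[2, 3]`:

```typescript
// Buggy version (commented out) — both slice bounds are shifted left by one,
// so lastN([1, 2, 3, 4], 2) returned [2, 3] instead of [3, 4]:
// function lastN<T>(items: T[], n: number): T[] {
//   return items.slice(items.length - n - 1, items.length - 1);
// }

// Root cause in one sentence: slice's end index is exclusive, so no -1
// adjustment is needed on either bound.
function lastN<T>(items: T[], n: number): T[] {
  return n <= 0 ? [] : items.slice(Math.max(0, items.length - n));
}
```

Supplying the input/expected/actual triple, as the prompt requires, is what lets ChatGPT localize a bug like this instead of guessing.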

Handling ChatGPT Code Failures

ChatGPT-generated code can be confidently wrong. The habits that prevent shipping bad AI code:

Always run it. ChatGPT cannot execute code, so syntax errors, broken imports, and logic errors that a single test run would catch immediately are invisible to it.

Read security-sensitive code. Never trust AI-generated auth, authorization, or data handling code without reading every line.

Ask ChatGPT to review its own output:

text
Review the code you just wrote. Check for:
1. Any TypeScript strict mode violations
2. Edge cases you didn't handle (null inputs, empty arrays, invalid types)
3. Any potential runtime errors

List issues if found. If none, say "No issues found."

Ask for tests with the code:

text
Write the validateEmail function AND a test file covering:
- Valid emails
- Invalid format emails
- Emails with leading/trailing whitespace
- Disposable domain emails
- Edge cases: empty string, null-like inputs

Code ChatGPT writes alongside tests is higher quality because it has to satisfy verifiable constraints.

Key Takeaways

  • Specification prompts (name, inputs, outputs, behavior, constraints) produce more correct code than description prompts
  • Pattern application — showing existing code and asking for a new case in the same pattern — produces better-integrated output
  • Always run generated code; ChatGPT cannot
  • Read security-sensitive generated code yourself — tests don't catch all logic errors
  • Ask ChatGPT to generate tests alongside implementation to force higher quality output

---

Try It Yourself: Write a code generation prompt for a utility function you actually need. Use the specification format: name, inputs, outputs, behavior bullets, constraints. Compare the output to what you'd get from a vague description. Then ask ChatGPT to review its own output for edge cases it missed.