Getting Started with ChatGPT

Prompt Engineering Fundamentals for ChatGPT

The structure of your prompt largely determines the quality of ChatGPT's output. Learn the patterns that consistently produce precise, useful responses, and avoid the patterns that generate generic ones.

Why Prompting Is a Skill

ChatGPT is not a search engine where you type keywords and get ranked results back. It's a text completion system that extends your input. The shape of your input — its structure, specificity, and framing — directly determines what gets generated.

Vague input produces vague output. Precise, structured input produces targeted, useful output.

The Four-Part Prompt Structure

An effective ChatGPT prompt is built from four components. Each is optional on its own, but each adds value:

1. Role (Who ChatGPT Should Be)

You are a senior TypeScript engineer reviewing a pull request.
Your job is to identify type safety issues, logic errors,
and violations of the project's existing patterns.

A defined role constrains the response to a specific perspective and expertise level. Without it, ChatGPT defaults to a general-purpose response that satisfies no particular use case well.

2. Context (What the Situation Is)

I'm building a multi-tenant SaaS application using React,
TypeScript, and Supabase. Each tenant has their own data
isolated via row-level security. The current codebase uses
React Query for server state management.

Context is the background information ChatGPT needs to give a relevant answer rather than a generic one. The more relevant context you provide, the more targeted the output.

3. Task (What You Want)

Review this TypeScript interface and the functions that use it.
Identify any places where the types are incorrect or could
cause runtime errors. List findings as: [Severity] Location: Issue.

The task is the specific action. State it as a verb — review, write, explain, compare, summarize, convert. Specify the output format explicitly.

4. Constraints (How to Do It)

Keep explanations under 2 sentences per issue.
Do not suggest refactoring the overall structure —
only flag the type issues.
Respond with a numbered list.

Constraints prevent ChatGPT from doing more or less than you want. Without them, responses drift toward verbose explanations of obvious things and miss the specific issue you care about.

Complete Example

ROLE: You are a senior TypeScript engineer reviewing code for
correctness and type safety.

CONTEXT: This is part of a multi-tenant React app. The User type
comes from Supabase Auth. We use React Query for data fetching.
Types are strict — no any, no unknown.

TASK: Review the following code. Find all type errors, unsafe
operations, and places where the code could fail at runtime.
Format: [HIGH/MEDIUM/LOW] filename:line — Description

CONSTRAINTS: Flag only genuine issues. Do not suggest style
improvements or refactors. Maximum one sentence per issue.

[paste code]
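
If you send many prompts like this, the assembly can be mechanized. Here is a minimal TypeScript sketch of a four-part prompt builder — the `buildPrompt` helper and its field names are hypothetical, not part of any library:

```typescript
// Hypothetical helper: assembles the four-part structure into one prompt string.
interface PromptParts {
  role: string;        // who ChatGPT should be
  context: string;     // what the situation is
  task: string;        // what you want, including output format
  constraints: string; // how to do it (and what not to do)
}

function buildPrompt(parts: PromptParts, code?: string): string {
  const sections = [
    `ROLE: ${parts.role}`,
    `CONTEXT: ${parts.context}`,
    `TASK: ${parts.task}`,
    `CONSTRAINTS: ${parts.constraints}`,
  ];
  // Paste the material to review at the end, after the instructions.
  if (code) sections.push(code);
  return sections.join("\n\n");
}
```

Keeping the four parts as separate fields makes it obvious when one is missing — an empty `context` or `constraints` is usually where a prompt goes generic.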

Iterative Prompting

A single prompt rarely produces exactly what you want on the first try. Effective ChatGPT use is iterative:

Refine scope:

That's too broad. Focus only on the authentication logic
in lines 45–78. Ignore the rest.

Increase specificity:

The error handling section needs more detail.
Expand that part only — keep the rest as is.

Change format:

Convert your last response to a numbered checklist
I can paste into a GitHub PR comment.

Request verification:

Check your previous answer. Is the TypeScript type
in step 3 actually valid for strict mode?
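Each refinement above is a new turn in the same conversation, and the model sees the full history. The shape below matches the message format of the OpenAI Chat Completions API; the content strings are illustrative placeholders, not real transcripts:

```typescript
// Sketch: iterative refinement as a growing chat transcript.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const conversation: ChatMessage[] = [
  { role: "user", content: "Review this function for type safety issues. [code]" },
  { role: "assistant", content: "[first, too-broad review]" },
  // Refine scope: a follow-up turn, not a brand-new prompt.
  { role: "user", content: "Too broad. Focus only on lines 45-78. Ignore the rest." },
  { role: "assistant", content: "[narrowed review]" },
  // Change format: the model still has the narrowed review in context.
  { role: "user", content: "Convert that to a numbered checklist for a GitHub PR comment." },
];
```

This is why iterating in one thread usually beats re-sending a longer prompt from scratch: the earlier turns are context you don't have to restate.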

Prompt Patterns for Common Tasks

Code Generation

Write a TypeScript function that [exact behavior].
Input: [type and description]
Output: [type and description]
Constraints: [library restrictions, no third-party deps, etc.]
Include JSDoc. Do not include usage examples.

Debugging

This code throws: [exact error message]
Here is the code: [paste]
Here is the relevant stack trace: [paste]
What is the root cause? What is the minimal fix?
Do not refactor unrelated code.

Explanation

Explain [concept/code] as if I'm a developer who knows
[X] but hasn't encountered [Y] before.
Use a concrete example with [my stack/language].
Maximum 200 words.

Comparison

Compare [Option A] vs [Option B] for [specific use case].
My constraints: [list your actual constraints].
Output: a table with columns [Criteria 1, Criteria 2, Criteria 3],
then a one-paragraph recommendation.
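
Patterns like these are templates with [bracketed] slots, so they can be reused programmatically. A small hypothetical TypeScript filler (the function name and slot keys are my own, not from any library):

```typescript
// Hypothetical template filler: replaces [bracketed] slots in a prompt pattern.
// Unknown slots are left visible so missing values are easy to spot.
function fillTemplate(template: string, slots: Record<string, string>): string {
  return template.replace(/\[([^\]]+)\]/g, (match: string, key: string) =>
    key in slots ? slots[key] : match
  );
}
```

Usage: `fillTemplate("Explain [concept] as if I know [X].", { concept: "generics", X: "JavaScript" })` fills both slots, while a missing slot stays as `[X]` in the output rather than silently disappearing.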

What to Avoid

Avoid: "Can you help me with my code?"

Use: "Review this TypeScript function for type safety issues."

Avoid: "Make this better."

Use: "Improve the error handling in this function. It currently silently swallows errors. Throw typed custom errors instead. Don't change any other logic."

Avoid: Multiple unrelated questions in one prompt.

Use: One question per prompt. Start a new conversation or continue the thread for the next question.

Key Takeaways

  • Role + Context + Task + Constraints is the structure that produces precise, targeted responses
  • Be specific about output format — numbered list, table, code block, paragraph, bullet points
  • Iterate: refine scope, increase specificity, change format, request self-verification
  • One concern per prompt — bundled requests produce unfocused outputs
  • The most common failure is too little context, not too much

---

Try It Yourself: Take something you recently asked ChatGPT and got a mediocre answer on. Rewrite the prompt using the four-part structure: Role → Context → Task → Constraints. Compare the quality of the two responses. Note specifically what changed.