Modern development is harder than it should be. You’re constantly juggling:
Context switches that kill productivity. You’re in the zone, building a feature. Then for whatever reason, you need to:
Each context switch costs you 15-20 minutes to regain focus. Over a day, that’s hours lost.
The learning curve is steep. Whether you’re:
You spend too much time playing detective, figuring out “how things work here”, instead of building features and assessing performance.
A debugging process that takes forever. A single bug can consume days:
Boilerplate and repetitive code slow you down. You know what you want to build, but you spend more time typing setup code, scaffolding, and skeletons than solving actual problems.
Documentation that is always out of date. Code changes, but docs don’t. You’re left guessing how things actually work, again.
Team consistency being hard to maintain. Everyone writes code slightly differently. Code reviews catch style issues, but by then it’s too late—the code is already written.
This is where Cursor changes the game. By bringing AI assistance directly into your editor, Cursor understands your codebase context, writes code that fits your existing patterns, helps you debug faster, generates documentation that stays in sync, and maintains consistency across your team—all without leaving your workflow.
This tutorial shows you how to use Cursor effectively—not as a crutch, but as a force multiplier that lets you focus on solving problems instead of fighting with tools.
Before starting, you should have (in this order):
A .edu email address and proof that you are currently attending classes.

What you don’t need:
Skip the deep dive? Want to start using Cursor right now? Follow these steps to get productive in 5 minutes.
Press Ctrl+L (or Cmd+L on Mac) to open Chat, then ask:

Explain what this file does: @src/main.ts
Or:
How does user authentication work in this project?
Press Ctrl+I (or Cmd+I on Mac) to open Composer, then describe what you want:

Add a function that validates email addresses
Press Ctrl+K (or Cmd+K on Mac) for Inline Edit, then type:

Add error handling to this function
That’s it! You’ve used Cursor’s three main features. Now you can:
Pro tip: The more context you give Cursor (by referencing files with @filename), the better its suggestions will be.
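As a concrete example, the Composer request above (“Add a function that validates email addresses”) might produce something along these lines. This is a sketch of typical output, not what Cursor will generate verbatim:

```typescript
// Hypothetical output for "Add a function that validates email addresses".
// The regex is a pragmatic format check, not full RFC 5322 validation.
export function isValidEmail(email: string): boolean {
  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return pattern.test(email.trim());
}
```

A format check like this catches typos; real-world validation usually still ends with a confirmation email.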
Open Settings (Ctrl+, or Cmd+,). Cursor offers multiple AI models; try them all. Open separate chats, give each model the same task, then grade their performance. The differences are usually obvious, but they are all generally strong at writing and coding:
Why this matters: Different models excel at different tasks. Use each of them at least once to get a feel for their capabilities and how best to prompt them.
Open a folder with Ctrl+K Ctrl+O. Pro tip: Cursor works best with projects that have a clear structure. If you’re starting fresh, keep your files organized as you create them. You can also ask the AI to generate a project template from scratch.
The feature: Press Ctrl+L (or Cmd+L on Mac) to open the Chat panel. You can also use the Chat button in the topbar. Note: Your shortcut might be different if you’ve customized it—check what Cursor shows when you hover over the Chat button. Shortcuts can be customized in Settings → Keyboard Shortcuts.
Why it’s powerful: Traditional AI tools see only the current file. Cursor’s chat sees your entire project, dependencies, and even your .cursorrules file (more on that later).
Example: Understanding a complex codebase
// You're new to a project and ask:
// "How does user authentication work in this codebase?"
// Cursor analyzes:
// - auth.ts files
// - middleware patterns
// - session management
// - API routes
// Returns: A clear explanation with file references
Real use case: When building nextdish.app, I needed to understand how the entire authentication and recipe flow worked across dozens of files. Instead of spending hours tracing code manually, I asked Cursor: “Show me how user authentication and recipe creation flow works from signup to posting a recipe.” It traced the entire flow from frontend forms → API routes → Supabase → database in 30 seconds, showing me exactly which files handled each step.
Another example - Complex debugging session:
When debugging email sending in nextdish.app, I wrote a comprehensive prompt specific to this purpose:
I have a Next.js 15 app deployed on Cloudflare Pages with a waitlist feature
that should send confirmation emails via a Cloudflare Worker, but emails are
not being sent. Here's the current setup:
**Architecture:**
- Next.js 15 app on Cloudflare Pages (Edge Runtime)
- Cloudflare Worker at `https://email.nextdish.app` for email sending
- Resend API for actual email delivery
- Supabase for database
**Current Code Flow:**
1. User submits email at `/waitlist` → `POST /api/waitlist`
2. Creates waitlist entry in Supabase with confirmation token
3. Calls `sendWaitlistConfirmationEmail()` → `sendEmailEdge()`
4. `sendEmailEdge()` calls Cloudflare Worker at `https://email.nextdish.app`
5. Worker forwards request to Resend API
**What's Working:**
- ✅ Cloudflare Worker is accessible and returns 200 with messageId
- ✅ Resend API is working (tested directly)
- ✅ Database operations work
- ✅ Waitlist signup works
**What's Not Working:**
- ❌ Emails are not being sent to users
- ❌ No error messages in logs
**Key Files:**
- `src/app/api/waitlist/route.ts` - Main waitlist endpoint
- `src/lib/email-edge.ts` - Email sending logic
- `workers/email-worker.js` - Cloudflare Worker
- `workers/wrangler.toml` - Worker configuration
**Environment Variables:**
- `RESEND_API_KEY` - Set in Cloudflare Worker
- `EMAIL_FROM` - Set to `noreply@nextdish.app`
- `EMAIL_WORKER_URL` - Should be `https://email.nextdish.app`
**Recent Changes Made:**
- Hardcoded worker URL as fallback
- Hardcoded EMAIL_FROM as fallback
- Added comprehensive debugging logs
- Worker is deployed and accessible
**Debugging Needed:**
1. Check if the Pages function is actually calling the worker
2. Verify the worker is receiving requests from Pages
3. Check if there are any CORS or network issues
4. Verify the email payload is correct
5. Check if emails are going to spam or being blocked
**Questions:**
- How can I see Cloudflare Pages function logs?
- How can I debug the connection between Pages and Worker?
- Are there any Cloudflare-specific issues I should check?
- How can I verify the email payload is reaching the worker correctly?
Please help me debug this step by step to identify where the email sending is failing.
This prompt, made with Cursor in a previous chat, gave the AI everything it needed: architecture, what works, what doesn’t, file locations, environment variables, recent changes, and specific questions. Cursor responded with comprehensive debugging steps and code fixes that ultimately solved the issue.
The feature: Press Ctrl+I to enter Composer mode. Describe what you want to build, and Cursor generates code across multiple files.
Why it’s powerful: Most AI tools generate one file at a time. Composer understands relationships between files and creates complete features.
Example: Building a user profile feature
You type in Composer:
"Create a user profile page with:
- Display user name, email, and avatar
- Edit profile form with validation
- API endpoint to update user data
- Database migration for new fields"
Cursor generates:
1. frontend/components/UserProfile.tsx
2. frontend/components/EditProfileForm.tsx
3. backend/routes/users.ts (new endpoint)
4. backend/migrations/add-profile-fields.sql
5. Updates to existing files (imports, types, etc.)
Why this works: Cursor reads your existing patterns (do you use TypeScript? React? Express?) and matches your style automatically.
The feature: Select code and press Ctrl+K to get AI suggestions inline.
Why it’s powerful: No context switching. See suggestions right where you’re working.
Example: Refactoring duplicate code
// Before: You have this pattern repeated 5 times
function getUserById(id: string) {
return fetch(`/api/users/${id}`)
.then(res => res.json())
.catch(err => console.error(err));
}
// Select it, press Ctrl+K, type: "Extract to utility function with error handling"
// After: Cursor generates
// utils/api.ts
export async function apiRequest<T>(endpoint: string): Promise<T> {
try {
const response = await fetch(endpoint);
if (!response.ok) throw new Error(`HTTP ${response.status}`);
return await response.json();
} catch (error) {
console.error(`API request failed: ${endpoint}`, error);
throw error;
}
}
// And updates all 5 call sites automatically
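With a helper like apiRequest in place, each duplicated fetcher collapses to a one-liner. A sketch of the result (the User type is illustrative):

```typescript
// The extracted helper, repeated here so the snippet is self-contained.
async function apiRequest<T>(endpoint: string): Promise<T> {
  const response = await fetch(endpoint);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return (await response.json()) as T;
}

// Illustrative type; match your actual API response shape.
interface User {
  id: string;
  name: string;
}

// Each of the five duplicated call sites becomes a thin, typed wrapper.
function getUserById(id: string): Promise<User> {
  return apiRequest<User>(`/api/users/${id}`);
}
```

Note the wrappers now propagate errors instead of swallowing them in a local .catch, so callers can decide how to react.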
The feature: Create a .cursorrules file in your project root. Cursor reads this to understand your coding style, patterns, and preferences.
Why it’s powerful: Every team has conventions. .cursorrules makes Cursor generate code that matches your standards automatically.
Example .cursorrules file:
# Project Coding Standards
## Language & Framework
- TypeScript strict mode enabled
- React with functional components and hooks
- Use Tailwind CSS for styling
## Code Style
- Prefer named exports over default exports
- Use async/await over promises
- Error handling: Always use try/catch with proper logging
- Comments: Explain "why" not "what"
## Testing
- Write tests for all new features
- Use Vitest for unit tests
- Test file naming: `*.test.ts`
## Architecture
- API routes in `/api` directory
- Components in `/components` directory
- Utilities in `/utils` directory
- Types in `/types` directory
## AI Usage Transparency
- Mark AI-generated code with: // AI-assisted: [description]
- Always review and test AI-generated code
- Document complex AI-generated logic
Result: When you ask Cursor to generate code, it automatically follows these rules. No more “use TypeScript” reminders—it just knows.
Scenario: You need to add a “favorite recipes” feature to your app. (This is based on actual work from nextdish.app.)
Step-by-step:
Start in Chat (Ctrl+L):
"I want to add a favorite recipes feature. What files do I need to create or modify?"
Cursor analyzes your codebase and lists:
Then use Composer (Ctrl+I):
“Create the favorite recipes feature based on our existing recipe patterns.”

Use Inline Edit (Ctrl+K) to adjust anything that doesn’t match your style, then ask:

"Generate unit tests for the favorites API endpoints"
Time saved: What would take 2-3 hours manually becomes 30-45 minutes with review and testing.
Scenario: Users report that recipe search is slow on mobile devices. (Real issue from nextdish.app deployment.)
Step-by-step:
Ask in Chat (Ctrl+L):
"Show me the recipe search implementation. What could cause performance issues on mobile?"
Cursor shows you the search code and identifies potential bottlenecks.
"What database queries run during recipe search? Are there any N+1 query problems?"
Cursor traces through your code and database calls.
Apply an Inline Edit (Ctrl+K on the problematic code):
"Optimize this search function for mobile performance. Add pagination and debouncing."
"Does this fix maintain backward compatibility with existing API clients?"
Why this works: Cursor sees the entire codebase, so it can identify issues that span multiple files—something that’s hard to do manually.
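For the debouncing half of that prompt, the generated helper usually looks something like this (a sketch; the 300 ms delay and the names are illustrative):

```typescript
// Debounce: delay invoking `fn` until `delayMs` ms pass without a new call.
// Cuts redundant search requests while the user is still typing on mobile.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Usage: the search only fires once the user pauses typing.
const search = debounce((query: string) => {
  // e.g. fetch(`/api/recipes/search?q=${encodeURIComponent(query)}`)
}, 300);
```

Pagination would then bound each request (e.g. a `?page=1&limit=20` query), keeping mobile payloads small.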
Scenario: You inherited a codebase with inconsistent patterns. You want to standardize error handling.
Step-by-step:
Ask in Chat (Ctrl+L):
"Find all places where we handle API errors. Show me the different patterns used."
Define the standard (in .cursorrules or a new utility):
"Create a standard error handling utility that matches our logging system"
"Refactor all API error handling to use the new standardErrorHandler utility.
Maintain the same error messages for users."
"Show me a diff of what changed. Are there any breaking changes?"
Result: Consistent codebase without manual find-and-replace across dozens of files.
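The standardErrorHandler name comes from the prompt above; its actual shape depends on your API contract. A minimal sketch of such a utility:

```typescript
// One place to turn thrown errors into a consistent API response shape.
// The { error, status } shape is illustrative - match your own contract.
interface ApiError {
  error: string;
  status: number;
}

function standardErrorHandler(err: unknown): ApiError {
  if (err instanceof Error) {
    console.error("API error:", err.message); // centralized logging
    return { error: err.message, status: 500 };
  }
  console.error("API error (non-Error value):", err);
  return { error: "Unknown error", status: 500 };
}
```

Every route then calls this one function in its catch block, which is exactly the consistency the refactoring prompt is after.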
Scenario: You need to document your API for other developers.
Step-by-step:
Ask in Chat (Ctrl+L):
"Analyze all API endpoints in this project and generate OpenAPI/Swagger documentation"
"Generate code examples for each API endpoint in Python, JavaScript, and cURL"
Why this is powerful: Documentation stays in sync with code because Cursor reads the actual implementation.
Bad:
"Fix the bug"
Good:
"The login form submits successfully but doesn't redirect.
The response includes a JWT token in the header.
Check the auth middleware and frontend routing logic."
Why: More context = better solutions. Cursor can see your code, but it doesn’t know your intentions unless you tell it.
Rule: Never commit AI-generated code without review.
Why:
Process:
Create a shared .cursorrules file that everyone on your team uses. This ensures:
Example team .cursorrules:
# Team Standards
## Code Review Checklist
- [ ] All functions have JSDoc comments
- [ ] Error handling includes user-friendly messages
- [ ] API endpoints have rate limiting
- [ ] Database queries use parameterized statements
## Security
- Never log sensitive data (passwords, tokens, PII)
- Always validate user input
- Use environment variables for secrets
What AI is good at:
What you’re good at:
The sweet spot: Use AI for the repetitive work, you handle the strategic decisions.
Be transparent about when and how you use AI:
// AI-assisted: Initial structure generated by Cursor
// Manually reviewed and tested - handles edge cases for user permissions
export async function updateUserProfile(userId: string, data: ProfileData) {
// ... implementation
}
Why:
Essential shortcuts:
| Shortcut | Action |
|---|---|
| Ctrl+L / Cmd+L | Open Chat |
| Ctrl+I / Cmd+I | Open Composer (multi-file edit) |
| Ctrl+K / Cmd+K | Inline edit (select code first) |
| Ctrl+Shift+L | Chat with selected code |
| Tab | Accept inline suggestion |
Why: Speed matters. Keyboard shortcuts keep you in flow state.
This is the most important skill you’ll develop with Cursor. The quality of your prompts directly determines the quality of your output. Here’s how to master it.
This might sound counterintuitive—isn’t brevity better? Not with AI coding assistants. More context = better results. Your goal isn’t to write the shortest prompt; it’s to write the most complete prompt that captures everything relevant.
Important caveat: Sometimes writing detailed prompts can get annoying or exhausting. That’s when you need to take a break. Don’t force yourself to write the perfect prompt when you’re frustrated. Step away, come back fresh, and you’ll write better prompts. The goal is effective communication, not perfectionism at the cost of your mental energy.
Why this works: Cursor needs to understand:
A short prompt like “fix the bug” leaves Cursor guessing. A detailed prompt gives it everything it needs to solve your problem correctly the first time.
Think of prompts as conversations, not commands. Here’s the structure that works:
1. Where I came from / How I got here
2. What I need / The goal
3. Constraints and requirements
4. Examples and additional context
Reference files with @file.ts. Here’s an actual prompt I used when building the social features for nextdish.app (a recipe sharing platform):
I'm working on nextdish.app, a social media platform for recipes. I'm building the social features
and this is the hardest part of the project. I need to set up the foundation properly.
Context:
- I'm using Next.js with TypeScript
- Supabase for database and auth
- Cloudflare Pages for deployment
- I already have user authentication working
- Recipe CRUD operations are complete
What I need:
I want users to be able to:
1. Follow other users
2. See a feed of recipes from users they follow
3. Like recipes
4. Comment on recipes
5. Get notifications when someone interacts with their content
Constraints:
- Must use Supabase real-time subscriptions for the feed
- Follow the existing auth pattern (middleware in `lib/auth.ts`)
- Use the existing database schema style (see `supabase/migrations/001_initial.sql`)
- All API routes should be in `app/api/` following Next.js App Router conventions
- Frontend components should use shadcn/ui components (already installed)
What I'm unsure about:
- Should follows be a separate table or part of a user_profiles table?
- How should I handle pagination for the feed?
- What's the best way to handle real-time updates without overwhelming the client?
Please:
1. Design the database schema for follows, likes, comments, and notifications
2. Create the API routes following my existing patterns
3. Create the frontend components
4. Explain the architecture decisions you made
5. Point out any potential performance issues I should watch for
Reference files:
- @lib/auth.ts (for auth patterns)
- @supabase/migrations/001_initial.sql (for schema style)
- @app/api/recipes/route.ts (for API route patterns)
Why this prompt works:
Result: Cursor generated a complete, working implementation that matched my existing codebase style. The social features took 7-9 months total, but most of that was learning and iteration—the actual code generation was fast.
Example: Discussing feature complexity before implementation
When working on meridian-whisper (voice control app), I wanted to add VAD (Voice Activity Detection) silence detection. Instead of jumping in, I asked Cursor to explain the complexities upfront:
talk to me about the VAD silence detection. there were some complexities with
the transcription display and i want any complexities or drudgery for this next
feature to be upfront BEFORE we start development
Cursor responded with a detailed impact report covering:
After seeing this, I realized it required streaming and a state machine—more complexity than I wanted to tackle. I asked:
so we will need to implement streaming for this? that was part of the issue with
the transcription display. and i dont want to implement any kind of state machine
complexity, that sounds like a right heckin slog and im not down for it right now.
based on this new information and my gripes, is the VAD something for post-mvp or
can we reasonably and reliably do it now?
Cursor confirmed it should be deferred. This saved me hours of work on a feature that wasn’t feasible for MVP. The upfront complexity discussion prevented wasted effort.
When debugging, always include the error message. Sometimes that’s enough:
[Paste entire error message]
Fix this please.
But often you need more context:
I'm getting this error when trying to deploy to Cloudflare Pages:
[Error message]
Context:
- This is a Next.js app
- I'm using Supabase for the database
- The error happens during the build process
- It works fine locally
- I just added the social features (follows, likes, comments)
The error mentions something about environment variables, but I have them set in Cloudflare Pages settings.
File where error occurs: @app/api/feed/route.ts
Please:
1. Explain what's causing this error
2. Show me how to fix it
3. Tell me how to verify the fix works
Pro tip: Include images or screenshots when possible. Visual context helps Cursor understand UI issues, layout problems, or error states.
Use long, detailed prompts when:
Use short, focused prompts when:
Example of short prompts in a workflow:
You: "Let's plan the notification system for nextdish"
Cursor: [Provides plan]
You: "Implement phase 1"
Cursor: [Generates code]
You: "Continue with phase 2"
Cursor: [Generates more code]
You: "Fix this bug and don't mess anything else up" [with selected code]
Cursor: [Fixes the issue]
Example: Simple troubleshooting prompt
When working on meridian-whisper, I hit a bug with the Realtek HD microphone. Instead of a long prompt, I started simple:
i had an error when i tried to use the realtek hd microphone, my laptop mic may
be broken but i just want to make sure its not the code. do troubleshooting on
this issue and find an accurate and complete fix without breaking anything else
Cursor fixed the AttributeError. Then I followed up with another short prompt:
will any other selection crash the program? i just want an error message if
something goes wrong, i dont want it to break
Cursor added error handling. Sometimes the simplest prompts work best when you’re debugging a specific, isolated issue.
Example: Asking for impact reports
Before implementing the HTTP API endpoint feature for meridian-whisper, I asked Cursor for an impact report:
talk to me about this HTTP API endpoint feature, give me an impact report and
tell me about what it entails and how the effort will compare to the other
features we implemented
Cursor provided:
This helped me make an informed decision about whether to implement it now or later. Getting complexity upfront saves time and prevents scope creep.
Always include:
- The relevant files (reference them with @file.ts or name them, e.g. “in src/api/users.ts”)
- The specific component involved (e.g. “components/Button.tsx”)

Sometimes include:
Avoid:
Prompt engineering is an art that requires practice. You’ll learn:
My process:
Starting a new feature:
I'm working on [project name], a [brief description].
Context:
- Tech stack: [list]
- What's already done: [list]
- Relevant files: @file1.ts @file2.ts
I need to build [feature description]:
- [Requirement 1]
- [Requirement 2]
- [Requirement 3]
Constraints:
- [Constraint 1]
- [Constraint 2]
Questions/Uncertainties:
- [Question 1]
- [Question 2]
Please [specific request with deliverables].
Debugging:
I'm getting this error:
[Full error message]
Context:
- Project: [description]
- What I was doing: [description]
- It works when: [condition]
- It fails when: [condition]
Files involved:
- @file1.ts (where error occurs)
- @file2.ts (related code)
I've tried:
- [Attempt 1]
- [Attempt 2]
Please:
1. Explain the root cause
2. Show the fix
3. Explain how to prevent this
Refactoring:
I want to refactor [what] to [goal].
Current state:
- [How it works now]
- Files: @file1.ts @file2.ts
Target state:
- [How it should work]
- [New patterns to follow]
Constraints:
- Must maintain [requirement]
- Don't break [feature]
- Follow pattern in @example.ts
Please:
1. Show the refactored code
2. Explain the changes
3. List any breaking changes
.cursorrules file: Teaches Cursor your project’s patterns permanently. Use this for:
System prompts in Cursor settings: Personal preferences that apply across all projects. Use this for:
In-prompt references: For one-off context. Use this for:
Example: I have a .cursorrules file that says “always use TypeScript strict mode.” I don’t need to mention that in every prompt—Cursor just knows. But if I’m working on a specific feature, I might say “follow the error handling pattern in utils/api.ts” to reference a specific file.
Remember: The time you spend writing a detailed prompt is time saved debugging incorrect code later.
Cursor is powerful, but it’s not the only tool in your toolkit. Here are developer behaviors and systems that amplify your productivity, whether you’re using Cursor or not.
The problem: You have 10+ projects, each in their own repository. When updating your resume, portfolio, or blog, you need information from multiple projects. Opening entire project folders in Cursor adds too much useless context—you just need the career-relevant summaries.
The solution: Symbolic links (symlinks).
Symlinks let you create “shortcuts” that point to files in other locations. For documentation, this means you can have a central shared-docs/ folder that contains career summaries from all your projects, then symlink those into your portfolio/resume workspace.
How it works:
Your Projects (10+ repos)
├── nextdish-app/
│ └── docs/
│ └── summary-for-career.md
├── iq-trainer/
│ └── docs/
│ └── summary-for-career.md
└── voice-control/
└── docs/
└── summary-for-career.md
│
│ (symlinked)
▼
shared-docs/ (central hub)
├── nextdish-app/
│ ├── summary-for-career.md → (symlink)
│ ├── retrospective.md → (symlink)
│ └── key-metrics.md → (symlink)
├── iq-trainer/
│ └── summary-for-career.md → (symlink)
└── voice-control/
└── summary-for-career.md → (symlink)
│
│ (symlinked into portfolio)
▼
portfolio-site/
└── docs/ → (symlink to shared-docs/)
Why this is powerful:
Setting it up:
cd D:\projects
mkdir shared-docs
Generate a career-focused project summary for this repository.
Include: problem solved, tech stack, my contributions, challenges,
impact, and resume-ready bullet points.
Save as: docs/summary-for-career.md
cd shared-docs
mkdir nextdish-app
mkdir iq-trainer
# ... etc
cd shared-docs\nextdish-app
mklink summary-for-career.md ..\..\nextdish-app\docs\summary-for-career.md
Or symlink the entire shared-docs folder into your portfolio:
cd portfolio-site
mklink /D docs ..\..\shared-docs
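The mklink commands above are Windows-specific. On macOS or Linux the equivalent tool is ln -s; note that the argument order differs from mklink. A sketch with illustrative paths:

```shell
# Demo layout (paths are illustrative)
mkdir -p nextdish-app/docs shared-docs/nextdish-app portfolio-site
echo "# NextDish career summary" > nextdish-app/docs/summary-for-career.md

# macOS/Linux equivalent of the Windows `mklink` commands:
# ln -s TARGET LINK_NAME  (argument order is reversed vs mklink)
ln -s ../../nextdish-app/docs/summary-for-career.md shared-docs/nextdish-app/summary-for-career.md

# Or symlink the whole shared-docs folder into your portfolio:
ln -s ../shared-docs portfolio-site/docs
```

Relative targets like these keep working as long as the folders move together; use absolute paths if the repos live on different drives.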
Files to symlink:
- summary-for-career.md - Main career summary (required)
- retrospective.md - Post-mortem notes (optional)
- key-metrics.md - Specific KPIs and numbers (optional)

What NOT to symlink:
Using with Cursor:
When working on your resume or portfolio, open that project in Cursor. The symlinked docs/ folder gives Cursor access to all your project summaries without the overhead of entire codebases.
Example prompt:
Generate resume bullet points for all projects where I used Docker.
Reference the summaries in @docs/ for context.
Cursor can now see all your project summaries via the symlink and generate targeted content.
Real-world impact: I went from manually copying information between 10+ projects to having a system where Cursor generates summaries, I symlink them once, and they’re available everywhere. Updating my resume went from a 4-hour task to 30 minutes.
Git isn’t just for code—it’s a documentation tool. Your commit history tells a story:
Using Git with Cursor:
Cursor can read your Git history to understand:
Example prompt:
Show me how the authentication system evolved. Look at the git history
for auth-related files and explain the changes.
Best practices:
- Use conventional commit prefixes (feat:, fix:, refactor:)
- Keep a CHANGELOG.md that Cursor can reference

Folder structure matters. Cursor works better with organized projects:
project-name/
├── docs/ # Documentation
│ ├── README.md
│ ├── ARCHITECTURE.md
│ └── summary-for-career.md
├── src/ # Source code
│ ├── components/
│ ├── utils/
│ └── api/
├── tests/ # Tests
└── .cursorrules # Cursor configuration
Why this helps:
Using with Cursor:
Create a new feature following our project structure:
- Components in src/components/
- API routes in src/api/
- Tests in tests/
- Update docs/ARCHITECTURE.md
What to automate:
Example: I have a script that:
Using Cursor to create automation:
Create a script that:
1. Generates summary-for-career.md using the career summary prompt
2. Validates it has all required sections
3. Symlinks it to shared-docs/[project-name]/
4. Commits with message "docs: update career summary"
Use Node.js and reference our existing scripts in scripts/ for patterns.
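A sketch of what that prompt might produce for steps 2 and 3 (validation and symlinking). The required section names and paths are assumptions; the summary generation and the git commit are left to Cursor and your CI:

```typescript
// Sketch of the career-summary automation described above.
// Section names and directory layout are illustrative assumptions.
import * as fs from "fs";
import * as path from "path";

const REQUIRED_SECTIONS = ["## Problem", "## Tech Stack", "## Impact"];

// Returns the list of required sections missing from the summary file.
export function validateSummary(filePath: string): string[] {
  const text = fs.readFileSync(filePath, "utf8");
  return REQUIRED_SECTIONS.filter(section => !text.includes(section));
}

// Symlinks the summary into shared-docs/<projectName>/summary-for-career.md.
export function linkIntoSharedDocs(
  summaryPath: string,
  sharedDocsDir: string,
  projectName: string
): string {
  const destDir = path.join(sharedDocsDir, projectName);
  fs.mkdirSync(destDir, { recursive: true });
  const dest = path.join(destDir, "summary-for-career.md");
  if (fs.existsSync(dest)) fs.unlinkSync(dest);
  // Absolute target keeps the link valid regardless of where it is read from.
  fs.symlinkSync(path.resolve(summaryPath), dest);
  return dest;
}
```

Wiring these into a CLI entry point and a `docs: update career summary` commit is a few more lines that Cursor can fill in against your existing scripts.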
Treat documentation like code:
Using Cursor for documentation:
Based on the codebase, generate comprehensive documentation:
- README with installation and usage
- API documentation for all endpoints
- Architecture overview
- Contributing guidelines
Follow the style in @docs/README-template.md
These behaviors work with any editor, but they’re especially powerful with Cursor because Cursor can read and understand your entire system.
Cursor has powerful features beyond Chat, Composer, and Inline Edit. Here’s how to use them.
What it is: Agent mode lets Cursor make changes across multiple files autonomously, with your approval at each step.
When to use it:
How to use it:
Example: When updating nextdish.app to use a new error handling pattern, Agent mode helped refactor 40+ API endpoints systematically. Time saved: 4-6 hours of manual refactoring.
What it is: Tab provides inline code suggestions as you type, similar to GitHub Copilot but with Cursor’s codebase awareness.
When to use it:
Pro tip: Tab works best when your codebase has clear patterns. The more consistent your code style, the better Tab’s suggestions.
What it is: Background Agents can work on tasks while you focus on other things.
When to use it:
Example use case: Set Background Agents to review all new code for security issues, suggest test cases, and generate documentation. Review their work when you have time, rather than interrupting your flow.
Scenario: You want to migrate from JavaScript to TypeScript across the entire codebase.
Approach:
1. Ask in Chat (Ctrl+L): “Create a migration plan: convert this JS project to TypeScript”
2. Use Composer (Ctrl+I): “Convert the src/utils/ directory to TypeScript with proper types”

Why phased: Large changes are risky. Break them into testable chunks.
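A typical first milestone in such a migration is a tsconfig.json that accepts the existing JavaScript while you convert one directory at a time. A sketch (the flags shown are illustrative; tighten strict as the migration progresses):

```json
{
  "compilerOptions": {
    "allowJs": true,      // keep compiling existing .js during migration
    "checkJs": false,     // opt .js files into type checking later, file by file
    "strict": false,      // enable once enough files are converted
    "target": "ES2020",
    "module": "commonjs",
    "outDir": "dist"
  },
  "include": ["src/**/*"]
}
```

tsconfig.json is parsed as JSONC, so the inline comments are allowed.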
Scenario: You’re new to Next.js and need to build a feature.
Approach:
Why this works: Cursor explains concepts while showing practical examples in your codebase context.
Before submitting a PR:
"Review this code for:
- Security vulnerabilities
- Performance issues
- Code style consistency
- Missing error handling
- Test coverage gaps"
Result: You catch issues before reviewers do, making the review process smoother.
Why this matters: Modern projects often use multiple languages and monorepo structures. Cursor can work across languages and help you navigate complex project structures.
Polyglot project navigation:
This project uses multiple languages:
- Frontend: TypeScript/React
- Backend: Python/FastAPI
- Scripts: Bash
- Infrastructure: Terraform (HCL)
Help me understand:
- How these components interact
- Where to add a new feature (which language/layer)
- How to maintain consistency across languages
- Shared types/interfaces between languages
Project structure: @.
Language-specific .cursorrules:
Create language-specific .cursorrules files:
- .cursorrules.ts for TypeScript files
- .cursorrules.py for Python files
- .cursorrules.tf for Terraform files
Each should include:
- Language-specific best practices
- Framework conventions (React, FastAPI, etc.)
- Testing patterns for that language
- Documentation standards
Base rules: @.cursorrules
Cross-language type sharing:
Generate TypeScript types from this Python Pydantic model:
[Paste Python model]
Also generate:
- Python types from TypeScript interfaces
- Validation schemas that work in both languages
- API contract documentation
Ensure types stay in sync between languages.
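What the generated mirror tends to look like, sketched for a hypothetical Pydantic model (field names and the validation helper are illustrative):

```typescript
// Hypothetical Pydantic model this mirrors (Python side):
//   class User(BaseModel):
//       id: int
//       email: str
//       display_name: str | None = None
//
// TypeScript mirror - field names and optionality match 1:1.
export interface User {
  id: number;
  email: string;
  display_name?: string | null;
}

// Lightweight runtime check for API boundaries (stdlib only; a schema
// library like Zod would normally generate this from the same contract).
export function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.id === "number" && typeof v.email === "string";
}
```

Keeping both sides generated from one source (the Pydantic model here) is what prevents the two languages from drifting apart.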
Monorepo structure understanding:
Analyze this monorepo structure:
- Identify workspace boundaries
- Understand dependency relationships
- Find shared packages/libraries
- Locate build configurations
- Understand deployment strategy per package
Monorepo root: @.
Tool: [Nx, Turborepo, Lerna, pnpm workspaces, etc.]
Monorepo-specific prompts:
Add a new feature to this monorepo:
- Create new package in packages/feature-name
- Set up workspace dependencies
- Configure build pipeline
- Add to root package.json scripts
- Update CI/CD for new package
Feature: User authentication service
Dependencies: @packages/shared-types, @packages/database
Monorepo tool: [Nx/Turborepo/etc.]
Cross-package refactoring:
Refactor this feature across multiple packages:
- Update shared types in @packages/types
- Update API in @packages/api
- Update frontend in @apps/web
- Update tests in all packages
- Ensure no breaking changes
Feature: User profile structure
Packages affected: [list packages]
Monorepo dependency management:
Analyze and optimize dependencies in this monorepo:
- Find duplicate dependencies across packages
- Identify unused dependencies
- Suggest dependency consolidation
- Check for version conflicts
- Optimize build order
Root: @.
Packages: @packages/
TypeScript/JavaScript:
Generate TypeScript types and interfaces for this API:
- Request/response types
- Error types
- Utility types
- Zod schemas for validation
API spec: @docs/api-spec.md
Follow existing patterns: @src/types/
Python:
Create Python data classes and Pydantic models:
- Type hints throughout
- Validation rules
- Serialization methods
- Documentation strings
Reference: @models/user.py
Follow: PEP 8, type hints, Pydantic patterns
Go:
Generate Go structs and interfaces:
- Proper error handling
- Context usage
- Interface definitions
- Test files
Follow: Go conventions, error wrapping, context patterns
Reference: @pkg/models/
Rust:
Create Rust structs and implementations:
- Proper ownership patterns
- Error handling with Result types
- Trait implementations
- Documentation comments
Follow: Rust conventions, ownership rules, error handling
Reference: @src/models/
Nx workspace:
Set up a new Nx library in this monorepo:
- Generate library structure
- Configure build targets
- Set up testing
- Add to dependency graph
- Create example usage
Library name: feature-name
Type: [library, application, etc.]
Turborepo:
Optimize Turborepo pipeline:
- Analyze task dependencies
- Configure caching
- Set up parallel execution
- Configure remote caching
Pipeline config: @turbo.json
Packages: @packages/
Here are real examples from projects I’ve built with Cursor, showing how it works in practice.
The project: A full-stack social media platform for sharing recipes, built in 7-9 months as a brand new developer.
The challenge: I had no idea what best practices were or how to manage a project this size. The social features felt like the hardest part, and they were—but only because of bugs, not complexity.
How Cursor helped:
1. Foundation and Planning:
I'm building nextdish.app, a social media platform for recipes. I'm starting
with the social features and this feels like the hardest part. I need to think
about this from the ground up: what's my foundation? What needs to be created
or changed to accomplish the next milestone?
I'm using:
- Next.js with TypeScript
- Supabase for database and auth
- Cloudflare Pages for deployment
I've already got:
- User authentication working
- Recipe CRUD operations complete
What I need:
- Users can follow other users
- Feed of recipes from followed users
- Like and comment on recipes
- Notifications for interactions
Help me design the foundation and create a plan.
Cursor acted as my mentor, guiding me through:
2. Deployment Debugging (Cloudflare Pages + Supabase): The biggest time sink was deployment issues. Here’s a typical debugging session:
I'm trying to deploy to Cloudflare Pages and getting this error:
[Full error message with stack trace]
Context:
- Next.js app with Supabase
- Works perfectly locally
- Error happens during build
- Just added social features
The error mentions environment variables, but I have them configured
in Cloudflare Pages settings. I think it might be related to how
Supabase client is initialized during build vs runtime.
Files:
- @lib/supabase/client.ts
- @app/api/feed/route.ts
Please:
1. Explain why this is happening
2. Show me the fix
3. Explain how to verify it works
Time saved: What would have taken days of debugging (my first projects took literal days per bug) became hours. With Cursor’s new Debug feature, it’s even faster now.
3. The Mental/Emotional Benefit: Having Cursor as a mentor was empowering. When I needed to do something fast that I wasn’t sure how to do manually, I’d:
Result: The actual building was fast. Most of the 7-9 months was learning and iteration. Cursor effectively offloaded the “grunt work” in coding and freed me to focus on design, strategy, and planning—which translated to more imaginative projects and the freedom to try anything.
The project: A cross-platform voice control application using C# that transcribes audio to text input.
The challenge: I thought the hardest part would be C# and cross-platform compatibility. I was wrong.
The reality: Cursor handled all the C# code, cross-platform setup, everything. Smooth sailing. Then I hit my first bug:
Bug 1: Audio Not Transcribing
The audio isn't being transcribed. The app runs, but no text appears.
No error message - it just silently fails.
Context:
- C# cross-platform app
- Using [speech API]
- Audio input via microphone
- Works on Windows, Mac, Linux (same issue everywhere)
I thought it might be a cross-platform issue or C# configuration problem,
but the fact that it fails the same way everywhere suggests it's something else.
Files:
- @AudioTranscriber.cs
- @MainWindow.xaml.cs
Please add incredibly granular logging to figure out where it's failing.
The discovery: Cursor added the granular logging, and it revealed the real culprit: my computer’s microphone was broken. Not a code issue at all. This forced me to integrate Bluetooth audio support.
Bug 2: Audio Cutting Off Early
Now with Bluetooth working, the audio is cutting off early and not
transcribing correctly. The audio stream seems to stop before the user
finishes speaking.
Please fix the audio buffering and transcription logic.
A couple of prompts later, it was working. The entire project—code, debugging, Bluetooth integration—was handled through prompting.
The realization: This was when I realized I can truly make anything I want with Cursor. It handled all the code and debugging, and when I got the core function working (transcribe audio to text input), I used it to “yap my way” to a basically finished version.
Example: Phased implementation with clear communication
When implementing user stories for meridian-whisper, I started with a simple prompt:
lets start on the user stories implementation
Cursor read the memory bank, understood the context, and proposed a 3-phase approach:
## 🎯 **Implementation Plan - User Stories (3 Phases)**
### **Phase 1: Audio Feedback System** (~15 minutes)
**The Perfect Starting Point** - Zero risk, immediate UX improvement
- Create `sounds/` directory with 4 short audio files
- Add `play_sound()` function to `ui_hotkey.py`
- Integrate sound calls at 3 key points:
- Wake word detected → activation chime
- Recording stopped → click sound
- Command completed → success/error sound
### **Phase 2: Read Text Back Feature** (~30 minutes)
**Major Functionality Addition** - TTS integration for accessibility
- Create `tts_manager.py` with Windows SAPI integration
- Add 3 new intent types: `read_clipboard`, `read_selection`, `read_text`
- Voice commands: "read clipboard", "read selection", "read that"
- Uses Windows built-in TTS (zero cost, minimal overhead)
### **Phase 3: Enhanced Custom Voice Commands** (~20 minutes)
**Natural Language Improvement** - Extends existing mappings.json
- Enhanced mappings: application aliases, action synonyms, phrase replacements
- Support commands like: "open my editor" → "open cursor"
- Zero risk (builds on existing secure mapping system)
I approved Phase 1, and Cursor implemented it. Then I said “continue to phase 2” and it continued. This iterative, conversational approach made complex features manageable.
The project: An IQ training application with Raven’s Progressive Matrices, Wonderlic, and Henmon-Nelson tests.
The challenge: The tests needed to match real IQ test patterns and difficulty.
How Cursor helped:
Initial approach: I drew the UI on paper/whiteboard, then took it to Cursor (and v0) to generate a skeleton. Then:
I need Raven's Progressive Matrices tests implemented. The tests should:
- Match the same types of patterns as real Raven's tests
- Have similar difficulty progression
- Include visual pattern recognition (shapes, sequences, matrices)
I've included some example images of Raven's tests for reference.
Please create:
1. Test generation logic
2. Scoring algorithm
3. UI components for displaying tests
4. Progress tracking
The key: Using example images helped Cursor capture my intention. This is common for me—using images to help AI understand visual requirements.
Current work: Adding Wonderlic and Henmon-Nelson tests (the actual tests Mensa uses). The process is the same: provide examples, explain requirements, let Cursor generate, then iterate.
The project: An interactive world map displaying CIA factbook data, pulling from a GitHub repository.
The challenge: Translating the mess that was TopoJSON and aligning it with world-atlas information.
How Cursor helped: I just prompted my way to victory. Cursor handled:
The debugging story: This project had some grueling debugging at the end for absolutely no reason. It’s probably a good example of the cons of AI coding—sometimes you hit edge cases that require deep understanding of the data formats, and AI can struggle with those. But overall, Cursor got me 90% of the way there, while I was free to handle the final 10%. Take that Pareto!
The project: Started as a 3D open-world mech game idea, pivoted to writing a book to serve as lore/backstory.
The challenge: Organizing world-building, character development, and narrative structure.
How Cursor helped:
Initial organization:
I have a bunch of world-building docs and ideas for a sci-fi novel.
I started with a game idea but pivoted to a book. I have:
- MVP outline from the game
- Random world-building docs
- Ideas for the universe/timeline
- Lots of questions
Please review all my docs and create a series of files to organize
what I already have. Then help me structure the world-building process.
Cursor created:
The editing process:
Here’s an actual prompt I used (from wnw-book-editsPrompts.md):
I'm struggling a bit rewriting the beginning. When I first did it, the goal
was to write every day and so I planned on going section by section until I
was done and then going back. But after getting Act 1.Stage 2 to a really
good point, I wanted to go back and clarify some things in Act1.Stage 1.
So that's what I've been doing but I think I had Lisan better described in
my brain than I did the Commander/New King, so I have a few things to ask you now.
1. Please help me apply the @book-when-neptune-wakes/frameworks/character-skeleton-template.md
to the Commander/New King, Captain Egrier Balfour, Lieutenant Voss, Trident
Intelligence Chief Lisan Ghavadi, and the other Lieutenant in the command center,
that way I can start moving my notes about them to their official files.
2. Please help me implement and rewrite the major sections in a1.s1 that I've pointed out.
3. Please help me address the minor issues, merge the new sections with the writing
that's already pretty good, and improve the overall flow of a1.s1.
But I don't want you to remove my comments, I want to use them to grade your work.
Use whatever you need to for answers and context.
@book-when-neptune-wakes/frameworks/philosophical-core.md @memory-bank
Why this worked: The detailed prompt gave Cursor everything it needed:
Result: Cursor generated character skeletons, rewrote sections, addressed issues, and improved flow—all while preserving my comments for review.
Let’s walk through building a complete feature using Cursor. This example shows the full workflow.
Requirements:
You: "I need to build a notification system. What's the best architecture?"
Cursor: Analyzes your codebase, sees you use:
- PostgreSQL database
- Express.js backend
- React frontend
- Socket.io for real-time features (already installed)
Suggests:
1. Database: notifications table with user_id, message, read status, timestamp
2. Backend: API endpoints + Socket.io events
3. Frontend: NotificationDropdown component + Socket.io client
4. Integration: Add notification triggers to comment creation
You (Ctrl+I): "Create a database migration for a notifications table:
- id (UUID, primary key)
- user_id (foreign key to users)
- message (text)
- type (enum: comment, like, follow)
- read (boolean, default false)
- created_at (timestamp)
- Include indexes for user_id and read status"
Generated migration:
-- migrations/create_notifications.sql
CREATE TYPE notification_type AS ENUM ('comment', 'like', 'follow');
CREATE TABLE notifications (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
message TEXT NOT NULL,
type notification_type NOT NULL,
read BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT NOW()
);
CREATE INDEX idx_notifications_user_id ON notifications(user_id);
CREATE INDEX idx_notifications_read ON notifications(read) WHERE read = FALSE;
You (Ctrl+I): "Create Express.js routes for notifications:
- GET /api/notifications - Get user's notifications (paginated)
- PUT /api/notifications/:id/read - Mark notification as read
- PUT /api/notifications/read-all - Mark all as read
- Use our existing auth middleware
- Include Socket.io event emission for new notifications"
Generated code:
// routes/notifications.ts
import express from 'express';
import { authenticate } from '../middleware/auth';
import { getNotifications, markAsRead, markAllAsRead } from '../services/notifications';
import { io } from '../server'; // Socket.io instance
const router = express.Router();
router.get('/', authenticate, async (req, res) => {
const userId = req.user.id;
const page = parseInt(req.query.page as string) || 1;
const limit = parseInt(req.query.limit as string) || 20;
const notifications = await getNotifications(userId, page, limit);
res.json(notifications);
});
router.put('/:id/read', authenticate, async (req, res) => {
const notificationId = req.params.id;
const userId = req.user.id;
await markAsRead(notificationId, userId);
res.json({ success: true });
});
router.put('/read-all', authenticate, async (req, res) => {
const userId = req.user.id;
await markAllAsRead(userId);
res.json({ success: true });
});
// Helper function to emit notification (used by other routes)
export function emitNotification(userId: string, message: string, type: string) {
io.to(`user-${userId}`).emit('new-notification', {
message,
type,
timestamp: new Date()
});
}
You (Ctrl+K on the route): "Generate the service functions that the routes need:
getNotifications, markAsRead, markAllAsRead. Use our existing database connection pattern."
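One plausible shape for those service functions, sketched with the database call injected as a parameter so the sketch stays driver-agnostic. In the tutorial's codebase they would instead use the existing connection pool; `QueryFn` is an assumption, not the project's real pattern:

```typescript
// Minimal query abstraction; swap in pg.Pool.query or similar in practice.
type QueryFn = (sql: string, params: unknown[]) => Promise<{ rows: unknown[] }>;

export async function getNotifications(
  query: QueryFn,
  userId: string,
  page: number,
  limit: number
) {
  const offset = (page - 1) * limit; // page is 1-indexed
  const result = await query(
    `SELECT * FROM notifications
     WHERE user_id = $1
     ORDER BY created_at DESC
     LIMIT $2 OFFSET $3`,
    [userId, limit, offset]
  );
  return result.rows;
}

export async function markAsRead(query: QueryFn, notificationId: string, userId: string) {
  // Scope by user_id so users can only modify their own notifications.
  await query('UPDATE notifications SET read = TRUE WHERE id = $1 AND user_id = $2', [
    notificationId,
    userId,
  ]);
}

export async function markAllAsRead(query: QueryFn, userId: string) {
  await query('UPDATE notifications SET read = TRUE WHERE user_id = $1 AND read = FALSE', [
    userId,
  ]);
}
```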
You (Ctrl+I): "Create a React NotificationDropdown component:
- Shows unread count badge
- Dropdown list of notifications
- Click to mark as read
- Real-time updates via Socket.io
- Use our existing UI component library (Tailwind + shadcn/ui)"
You (Chat - `Ctrl+L`): "Where in the codebase do we create comments? Add notification creation there."
Cursor shows you the comment creation route, and you add:
// In the comment creation handler
import { createNotification } from '../services/notifications';
import { emitNotification } from '../routes/notifications';
// After creating the comment, build the message once so the stored
// notification and the Socket.io event share the same text
const message = `${req.user.name} commented on your post`;
await createNotification(comment.postAuthorId, { message, type: 'comment' });
emitNotification(comment.postAuthorId, message, 'comment');
You (Chat - `Ctrl+L`): "Generate unit tests for the notification API endpoints using our existing test framework (Jest)."
Without Cursor: 6-8 hours. Time saved: 4-6 hours.
Cursor adapts to different roles. Here’s how to use it effectively for technical writing, DevRel, backend engineering, content engineering, and software engineering.
Your superpower: Cursor excels at generating documentation from code. Use it to create API docs, tutorials, and user guides that stay in sync with your codebase.
Key workflows:
1. API Documentation Generation
Analyze all API endpoints in this project and generate OpenAPI/Swagger documentation.
Include:
- Request/response schemas
- Authentication requirements
- Error codes and messages
- Code examples in Python, JavaScript, and cURL
- Rate limiting information
Format: OpenAPI 3.0 specification
Real example: When documenting nextdish.app (40+ API endpoints), I used Cursor to generate comprehensive API docs. Instead of manually documenting each endpoint, Cursor analyzed the route handlers, extracted schemas from TypeScript types, and generated OpenAPI specs. Time saved: 8+ hours of manual documentation work.
2. Tutorial Structure and Content
Based on this codebase, create a tutorial structure for "Building a Social Recipe Feature."
Include:
- Prerequisites and setup
- Step-by-step implementation guide
- Code examples with explanations
- Common pitfalls and solutions
- Next steps for extending the feature
Reference the actual implementation in @app/api/recipes/route.ts and @components/RecipeCard.tsx
3. Release Notes and Changelogs
Generate release notes for version 0.2.0 based on git commits since v0.1.0.
Format:
- Breaking changes (if any)
- New features
- Bug fixes
- Performance improvements
- Documentation updates
Use conventional commit messages to categorize changes.
4. Documentation Testing and Quality Assurance
Test this documentation for:
- Broken internal links
- Broken external links
- Outdated code examples
- Missing prerequisites
- Inconsistent formatting
- Dead code in examples
- Version mismatches (API versions, library versions)
Documentation directory: @docs/
Generate report with:
- File and line number for each issue
- Severity (critical, warning, info)
- Suggested fixes
Documentation link checking:
Check all links in this documentation:
- Internal markdown links
- External URLs
- Code references
- Image paths
Report:
- Broken links (404, 403, etc.)
- Redirects (301, 302)
- Slow-loading resources
- Missing anchor targets
Docs: @docs/
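The link-extraction half of such a checker is simple enough to sketch. The classification rules and the `FoundLink` shape here are illustrative; the actual HTTP status checks would happen in a follow-up fetch pass:

```typescript
// Pull markdown links out of a document and classify them by target kind.
interface FoundLink {
  target: string;
  kind: 'external' | 'anchor' | 'internal';
  line: number;
}

function extractLinks(markdown: string): FoundLink[] {
  const links: FoundLink[] = [];
  const pattern = /\[[^\]]*\]\(([^)]+)\)/g; // [text](target)
  markdown.split('\n').forEach((text, index) => {
    for (const match of text.matchAll(pattern)) {
      const target = match[1];
      const kind = target.startsWith('http')
        ? 'external'
        : target.startsWith('#')
          ? 'anchor'
          : 'internal';
      links.push({ target, kind, line: index + 1 });
    }
  });
  return links;
}
```

Internal targets can then be resolved against the docs tree, anchors against heading slugs, and external URLs fetched for status codes.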
Code example validation:
Validate all code examples in this documentation:
- Check syntax errors
- Verify imports are correct
- Ensure examples match current API
- Test that examples actually run
- Check for deprecated patterns
Documentation: @docs/
Codebase: @src/
5. Style Guide Enforcement
Review this documentation against our style guide:
- Tone and voice consistency
- Terminology usage (check glossary)
- Formatting standards (headings, lists, code blocks)
- Link formatting
- Image alt text
- Code comment style
- Example structure
Style guide: @docs/style-guide.md
Documentation to review: @docs/
Automated style checking:
Create a script that validates documentation style:
- Checks heading hierarchy (no skipped levels)
- Verifies code blocks have language tags
- Ensures all images have alt text
- Validates frontmatter structure
- Checks for consistent date formats
- Verifies link formatting
Style rules: @docs/style-guide.md
Output: Report with violations and line numbers
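Two of those checks, heading hierarchy and fence language tags, fit in a short sketch. Rule names and the violation shape are illustrative:

```typescript
// Report skipped heading levels and code fences without a language tag.
interface Violation {
  line: number;
  rule: string;
}

function checkMarkdownStyle(markdown: string): Violation[] {
  const violations: Violation[] = [];
  let lastHeadingLevel = 0;
  let inFence = false;
  markdown.split('\n').forEach((text, index) => {
    const line = index + 1;
    const fence = text.match(/^`{3}(.*)$/); // fence line, with optional info string
    if (fence) {
      // Opening fences need a language tag; closing fences are bare.
      if (!inFence && fence[1].trim() === '') {
        violations.push({ line, rule: 'code-fence-missing-language' });
      }
      inFence = !inFence;
      return;
    }
    if (inFence) return; // ignore headings inside code blocks
    const heading = text.match(/^(#{1,6})\s/);
    if (heading) {
      const level = heading[1].length;
      if (lastHeadingLevel > 0 && level > lastHeadingLevel + 1) {
        violations.push({ line, rule: 'heading-level-skipped' });
      }
      lastHeadingLevel = level;
    }
  });
  return violations;
}
```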
Metrics that matter for technical writers:
Example from nextdish.app:
Your superpower: Cursor helps you create educational content, demos, and community resources that showcase your product.
Key workflows:
1. Creating Demo Code and Examples
Create a complete, runnable demo showing how to integrate our API.
Requirements:
- Working code example (can be copied and run)
- Step-by-step setup instructions
- Expected output
- Troubleshooting section
- Link to full documentation
Use our actual API endpoint: POST /api/recipes
Include authentication flow and error handling.
2. Blog Post Generation
Write a blog post titled "How I Built a Full-Stack App in 7 Months as a New Developer."
Structure:
- Hook: The problem I was solving
- Journey: Challenges and how Cursor helped
- Technical deep-dive: Architecture decisions
- Lessons learned: What worked, what didn't
- Call to action: Try Cursor for your project
Include real metrics: 41,652+ lines of code, 40+ API endpoints, 131+ components
Tone: Honest, educational, inspiring
Example from meridian-whisper:
Your superpower: Cursor excels at database design, API architecture, and system design patterns.
Key workflows:
1. Database Schema Design
Design a database schema for a social recipe platform.
Requirements:
- Users can create recipes
- Users can follow other users
- Users can like/comment on recipes
- Users can save recipes to collections
- Need to track analytics (views, likes over time)
Consider:
- Normalization vs. denormalization trade-offs
- Indexing strategy
- Row Level Security policies
- Performance at scale (10,000+ users)
Generate: Prisma schema, migration file, RLS policies
Real example: For nextdish.app, I designed 20+ database tables with 50+ RLS policies. Cursor helped me:
Result: Database handles complex queries with sub-200ms response times, even with 50+ security policies.
2. Performance Optimization
Analyze this database query performance issue:
[Paste slow query]
Context:
- Using PostgreSQL with Supabase
- 50+ RLS policies
- Getting "auth_rls_initplan" warnings
- Query takes 2+ seconds for simple SELECT
Optimize:
- RLS policy structure
- Index usage
- Query patterns
- Caching strategy
Real example: I optimized 50+ RLS policies in nextdish.app by replacing auth.uid() with cached (SELECT auth.uid()) patterns. Cursor helped identify all instances and refactor them systematically. Result: Eliminated all performance warnings, ~40% query improvement (estimated).
Metrics that matter for backend engineers:
Example metrics from nextdish.app:
3. Database Migration Strategies (Zero-Downtime)
Create a zero-downtime database migration:
- Wrap migration in transaction (BEGIN/COMMIT)
- Use conditional logic for optional tables
- Include verification queries
- Create rollback plan
- Document deployment strategy
Migration goal: [describe what you're migrating]
Current schema: @prisma/schema.prisma
Target schema: [describe changes]
Real example from nextdish.app:
Migration deployment strategy:
Plan a database migration deployment:
1. Test in staging environment
2. Create backup before production
3. Schedule during low-traffic window
4. Execute migration with monitoring
5. Verify data integrity
6. Have rollback plan ready
Migration file: @prisma/migrations/[migration-name]/migration.sql
Deployment plan: @docs/migration-plan.md
4. Background Job Processing (BullMQ)
Set up background job processing with BullMQ:
- Create job queues (image processing, analytics, etc.)
- Implement job producers
- Create job workers with retry logic
- Add job progress tracking
- Set up job scheduling
Job types needed:
- Image processing (priority: high, retries: 3)
- Analytics updates (priority: low, retries: 1)
- Email sending (priority: medium, retries: 2)
Use Redis for queue persistence.
Real example from nextdish.app:
- Queues: `imageProcessingQueue`, `analyticsQueue`
- Job options: `removeOnComplete: true`
Code example:
// services/tasks/index.ts
import { Queue, Worker } from 'bullmq'
import { redis } from '../../lib/redis'
export const imageProcessingQueue = new Queue('image-processing', {
connection: redis,
defaultJobOptions: { removeOnComplete: true }
})
export const analyticsQueue = new Queue('analytics', {
connection: redis,
defaultJobOptions: { removeOnComplete: true }
})
// Worker setup
new Worker('image-processing', async (job) => {
await processImage(job.data)
}, { connection: redis })
5. Retry Logic and Error Handling
Implement retry logic with exponential backoff:
- Configurable max retries
- Exponential backoff delay
- Type-safe generic implementation
- Error logging and tracking
Use cases:
- External API calls
- Database operations
- File processing
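A minimal version of that helper might look like this. The delay math is pulled into a pure function, and names like `withRetry` are illustrative rather than the project's actual API:

```typescript
// Exponential backoff: attempt 0 waits baseMs, attempt 1 waits 2x, and so on.
function backoffDelayMs(attempt: number, baseMs: number): number {
  return baseMs * 2 ** attempt;
}

// Generic retry wrapper: rethrows the last error once retries are exhausted.
async function withRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3,
  baseMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < maxRetries) {
        await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt, baseMs)));
      }
    }
  }
  throw lastError;
}
```

Error logging and jitter would slot into the catch block.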
Real example from nextdish.app:
- `src/services/core/ai/recipe-analysis.ts`
- `src/services/core/task-queue/index.ts`
6. Rate Limiting Implementation
Implement rate limiting for API endpoints:
- Multiple storage backends (Redis, KV, Durable Objects)
- Lock mechanism after failed attempts
- Retry-After headers in responses
- Per-identifier rate limiting
Endpoints to protect:
- Authentication endpoints
- API endpoints
- File upload endpoints
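As a baseline, the core counting logic can be sketched in memory. The real project used Redis/KV/Durable Objects backends, and the lockout and `Retry-After` behavior from the prompt would layer on top of these counters:

```typescript
// Fixed-window rate limiter: allow `limit` requests per identifier per window.
class FixedWindowRateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  check(identifier: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(identifier);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(identifier, { windowStart: now, count: 1 });
      return true; // first request of a fresh window
    }
    if (entry.count < this.limit) {
      entry.count += 1;
      return true;
    }
    return false; // over the limit until the window rolls over
  }
}
```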
Real example from nextdish.app:
Your superpower: Cursor helps you manage content pipelines, CMS integrations, and content strategy.
Key workflows:
1. Content Pipeline Automation
Create a content pipeline that:
- Ingests markdown files from a directory
- Validates frontmatter (title, date, tags, author)
- Generates SEO metadata
- Creates optimized images
- Publishes to CMS (Contentful/Sanity)
- Updates sitemap
Use Node.js and reference existing scripts in @scripts/
2. Content Strategy and Planning
Analyze this blog's content and suggest:
- 10 new article ideas based on popular topics
- Content gaps to fill
- SEO optimization opportunities
- Content series ideas
- Repurposing opportunities (video, podcast, social)
Reference existing posts in @content/blog/
3. Copywriting and Long-Form Content
Help me create compelling copy for this project:
- Product descriptions
- Marketing copy
- User guides
- Educational content
- Email campaigns
Tone: [professional/casual/technical/etc.]
Target audience: [describe]
Content type: [blog post/guide/email/etc.]
Reference existing content: @content/
14. Headless CMS Integration
Integrate headless CMS for content management:
- Evaluate CMS options (Sanity, Contentful, Notion, etc.)
- Design dual-purpose architecture (database + CMS)
- Implement API access for programmatic operations
- Enable human-friendly UI for content review
- Set up webhook triggers for workflow automation
CMS requirements:
- API access for workflows
- Human-friendly editing interface
- Search and filtering capabilities
- Relational data support
Real example from savant-insights:
Architecture pattern:
15. Content Migration Between Systems
Migrate content between systems:
- Plan migration strategy (schema mapping, data transformation)
- Execute data migration with validation
- Update workflows and integrations
- Test migrated content and functionality
- Document trade-offs and decisions
Migration considerations:
- Schema differences
- Data format conversions
- Workflow updates
- Performance implications
- Feature parity
Real example from savant-insights:
Migration process:
4. A/B Testing Content
Set up A/B testing for content:
- Create test configurations with variants
- Implement variant assignment logic
- Track conversions per variant
- Integrate with analytics
- Session-based persistence
Test example:
- Hero headline variants
- CTA button variants
- Social proof format variants
Target metric: waitlist_signup
Real example from nextdish.app:
- `src/lib/ab-testing/`
Implementation:
// Using A/B test in component
const headlineTest = useABTest({
testId: 'hero-headline-test',
variants: [
{ id: 'control', name: 'AI Recipe Generation That Works Offline' },
{ id: 'personal', name: 'Your AI Chef Works Without Internet' },
],
defaultVariant: 'control'
});
// Track conversion
headlineTest.trackConversion('waitlist_signup');
Real example from savant-insights:
Schema implementation:
// savant-insights/src/content/config.ts
const socialCollection = defineCollection({
schema: z.object({
title: z.string(),
platform: z.enum(['X', 'LinkedIn', 'Instagram', 'Facebook', 'YouTube']),
contentType: z.enum(['Post', 'Story', 'Carousel', 'Reel', 'Thread', 'Video']),
// A/B testing fields
abVariants: z.array(
z.object({
variantId: z.string(),
variantTitle: z.string(),
})
).optional(),
analyticsTracking: z.record(z.any()).optional(),
}),
});
Automated A/B testing workflow:
5. Content Personalization Workflows
Implement content-based personalization:
- Extract features from content (ingredients, tags, dietary info)
- Build user preference vectors from engagement
- Calculate similarity scores
- Provide personalized recommendations
- Handle cold start (new users)
Personalization features:
- Ingredient-based matching (weight: 1.0)
- Tag-based matching (weight: 0.8)
- Dietary preference matching (weight: 0.9)
- Cooking time preferences
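The weighted matching above can be sketched as Jaccard overlap per feature set combined with the listed weights. The scoring formula itself is illustrative, not the project's actual implementation:

```typescript
// Jaccard similarity: shared items over total distinct items.
function jaccard(a: Set<string>, b: Set<string>): number {
  if (a.size === 0 && b.size === 0) return 0;
  let shared = 0;
  for (const item of a) if (b.has(item)) shared += 1;
  return shared / (a.size + b.size - shared);
}

interface RecipeFeatures {
  ingredients: Set<string>;
  tags: Set<string>;
  dietary: Set<string>;
}

// Combine per-feature similarity with the weights from the prompt,
// normalized so a perfect match scores 1.
function similarityScore(user: RecipeFeatures, recipe: RecipeFeatures): number {
  const weighted =
    1.0 * jaccard(user.ingredients, recipe.ingredients) +
    0.8 * jaccard(user.tags, recipe.tags) +
    0.9 * jaccard(user.dietary, recipe.dietary);
  return weighted / (1.0 + 0.8 + 0.9);
}
```

Cold-start users with empty preference sets score 0 everywhere, which is where fallback strategies (popularity, editorial picks) come in.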
Real example from nextdish.app:
- `src/services/content_based.py`
Real example from savant-insights:
Persona-based content strategy:
Content Matrix Framework:
- Personas: Alex the Analyst, Chloe the Cautious Planner, Ethan the Experienced Explorer
- Journey Stages: Why Invest? → How to Start? → Building Foundations → Intermediate Strategy → Advanced Techniques → Risk Management → Portfolio Review → Long-term Planning
Example mapping:
- Alex the Analyst + "How to Start?" = Step-by-step guide with API integration options
- Chloe the Cautious Planner + "Building Foundations" = Simple guide with screenshots and visual explanations
Personalization engine features:
6. Multi-Channel Content Distribution
Create multi-channel content distribution strategy:
- Staggered rollout across platforms
- Platform-specific content adaptation
- Cross-platform traffic driving
- Content coordination (hub vs spoke accounts)
- Platform-specific hashtag strategies
Platforms:
- TikTok (Day 1, 3 PM - 7 PM EST)
- Instagram Reels (Day 2, 11 AM EST)
- X/Twitter (Day 2, 1 PM EST)
- YouTube (Day 3, 4 PM EST)
- LinkedIn (24 hours after launch)
Real example from nextdish.app:
- `docs/marketing/first_content_video_scripts.md`
Real example from savant-insights:
Scheduling optimization:
recommendedTimes:
x:
weekdays: ["8:00-9:00 AM", "12:00-1:00 PM", "5:00-6:00 PM"]
weekends: ["9:00-11:00 AM", "1:00-3:00 PM"]
instagram:
weekdays: ["11:00 AM-1:00 PM", "7:00-9:00 PM"]
weekends: ["10:00 AM-12:00 PM", "2:00-4:00 PM"]
linkedin:
weekdays: ["7:30-8:30 AM", "12:00-1:00 PM", "5:30-6:30 PM"]
weekends: ["Not recommended"]
contentCadence:
x: "Daily"
instagram: "3x weekly"
linkedin: "2x weekly"
youtube: "1x weekly"
newsletter: "Weekly"
Content mapping workflow:
7. Content Analytics and Reporting
Set up content analytics:
- Track user engagement metrics
- Monitor content performance
- Track conversions and funnels
- Generate analytics dashboards
- Integrate with multiple analytics providers
Metrics to track:
- User metrics (DAU, growth trends)
- Content metrics (recipes per day, per user)
- Interaction metrics (likes, comments, shares)
- AI metrics (operations, processing time)
- Performance metrics (response time, latency)
Real example from nextdish.app:
- `backend/services/analytics.py`
Analytics hooks:
// Track events
const { trackEvent, trackConversion } = useAnalytics();
trackEvent('recipe_view', { recipeId: '123' });
trackConversion('waitlist', 'signup', { source: 'hero' });
Real example from savant-insights:
Content performance metrics:
Reporting cadence:
8. Content Versioning and Rollback
Implement content versioning:
- Track content revision history
- Enable content rollback to previous versions
- Create content approval workflows
- Manage draft/published states
- Maintain content change audit logs
Content type: [recipes/articles/etc.]
Version tracking: [database table/version control/etc.]
Real example from savant-insights:
Schema design:
Research Versions Database:
- Version ID (Title): Unique identifier (e.g., "v2", "draft-20240801")
- Parent Paper (Relation): Links to main research paper
- Content (Text): Version-specific content
- Agent Name (Text): Which AI agent generated this version
- Status (Select): "Draft", "Needs Review", "Approved"
- Revision Notes (Text): Changes description
- Timestamp (Date): Creation/save time
- Feedback Entries (Relation): Related feedback items
Use cases:
9. Workflow-Driven Content Automation
Create automated content generation workflows:
- Data collection from multiple sources
- Content standardization and processing
- AI-powered content generation
- Multi-format content creation
- Automated publishing pipelines
Workflow tool: [n8n/Airflow/etc.]
Data sources: [list sources]
Content formats: [text, visual, audio, checklist]
Real example from savant-insights:
10. Multi-Agent Content Generation
Design multi-agent system for content creation:
- Research Agent: Information gathering and fact-checking
- Content Agent: Article generation and writing
- Formatting Agent: Multi-format adaptation (text, visual, audio)
- Tool abstraction layer: Natural language tool invocation
- Agent memory persistence: Context continuity across sessions
Agent collaboration: [workflow-based/handoff patterns]
Memory storage: [database/Notion/etc.]
Real example from savant-insights:
11. Platform-Adaptive Content Generation
Create platform-adaptive content system:
- Dynamic prompt injection per platform
- Platform-specific constraints (character limits, formats)
- Single content brief → multiple platform outputs
- Extensible architecture for new platforms
- Standardized output schemas
Platforms: [X, LinkedIn, Instagram, TikTok, YouTube]
Content brief: @content/brief.md
Real example from savant-insights:
12. Multi-Format Content Delivery
Implement multi-format content generation:
- Text format (standard articles)
- Visual format (infographics, charts)
- Audio format (text-to-speech, podcasts)
- Checklist format (actionable summaries)
Content source: [single source or multiple]
Format generation: [AI-powered/manual/etc.]
Accessibility: [WCAG compliance requirements]
Real example from savant-insights:
13. Content Pipeline Orchestration
Design content pipeline architecture:
- Data ingestion from multiple sources
- Content standardization and validation
- AI processing and generation
- Human-in-the-loop review workflows
- Automated publishing and distribution
Pipeline stages: [collection → processing → review → publishing]
Review triggers: [webhook/database changes/etc.]
Real example from savant-insights:
Your superpower: Cursor helps you build full-stack applications, create reusable components, manage state, and optimize performance across the entire development lifecycle.
Key workflows:
1. Component-Driven Development
Create a reusable React component following our design system:
- Use Shadcn UI base components
- Follow TypeScript patterns with proper interfaces
- Include accessibility attributes (ARIA labels, keyboard navigation)
- Support dark mode
- Add loading and error states
- Make it responsive (mobile-first)
Component type: [Button/Card/Form/Modal/etc.]
Reference existing components: @components/common/
Design system: @components.json
Real example from nextdish.app:
Example component structure:
// AppShellLayout component demonstrates:
- TypeScript interfaces for prop validation
- Custom hooks for theme management
- Responsive design with state management
- Swipe gestures for mobile interaction
- Conditional rendering based on device type
2. State Management with Zustand
Create a Zustand store for [feature]:
- Define state shape with TypeScript
- Add actions (setters, updaters, async operations)
- Include persistence (localStorage) if needed
- Add optimistic UI updates
- Handle loading and error states
Feature: [user preferences/recipe state/social features/etc.]
Reference: @stores/ or @features/[feature]/[feature]Store.ts
Real example from nextdish.app:
Example store pattern:
// Recipe store demonstrates:
- Immutable state updates
- Optimistic UI updates (toggleLike)
- Normalized data (comments by recipeId)
- Derived state management
- Action-based mutations
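The store pattern described above can be sketched without the library. This is a minimal, illustrative hand-rolled version (the real app uses Zustand's `create`; the `toggleLike` action and `Recipe` shape here are assumptions based on the list above):

```typescript
// Minimal sketch of the store pattern above: immutable updates, an
// optimistic toggleLike action, and subscribers notified on change.
type Recipe = { id: string; liked: boolean; likeCount: number };

type RecipeState = {
  recipes: Record<string, Recipe>;
  commentsByRecipeId: Record<string, string[]>; // normalized by recipeId
};

function createRecipeStore(initial: RecipeState) {
  let state = initial;
  const listeners = new Set<(s: RecipeState) => void>();

  return {
    getState: () => state,
    subscribe(fn: (s: RecipeState) => void) {
      listeners.add(fn);
      return () => listeners.delete(fn); // unsubscribe handle
    },
    // Optimistic update: flip the like locally first; a real store would
    // roll back this change if the server call later fails.
    toggleLike(id: string) {
      const r = state.recipes[id];
      if (!r) return;
      state = {
        ...state, // immutable update: build a new state object
        recipes: {
          ...state.recipes,
          [id]: {
            ...r,
            liked: !r.liked,
            likeCount: r.likeCount + (r.liked ? -1 : 1),
          },
        },
      };
      listeners.forEach((fn) => fn(state));
    },
  };
}
```

Zustand gives you the same shape (state plus action-based mutations) with React bindings, persistence middleware, and devtools on top.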
3. Full-Stack Feature Development
Build a complete feature from frontend to backend:
- Database schema design
- API endpoints (Next.js App Router)
- React components
- State management integration
- Real-time updates (if needed)
- Error handling and validation
Feature: [notifications/social features/user profiles/etc.]
Reference: @app/api/ and @components/
Real example from nextdish.app:
Example: Notification System (from tutorial)
4. Vanilla JavaScript Architecture (No Framework)
Build a modular JavaScript application without frameworks:
- ES6+ modules with named exports
- Pub/sub state management pattern
- Component-based architecture (class-based or functional)
- Progressive loading strategies
- Code splitting with Webpack
- CSS organization (component-based)
Project type: [data visualization/dashboard/interactive map/etc.]
Reference: @js/ and @css-styles/
Real example from gip (CIA World Data Map):
State management pattern:
// Centralized state with pub/sub demonstrates:
- Centralized state object
- Action creators for mutations
- Observer pattern for reactivity
- Path-based subscriptions
- Error handling in observers
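The pub/sub pattern listed above can be sketched in a few lines. The `"map.zoom"` path name is illustrative, not from the gip codebase:

```typescript
// Centralized state with pub/sub (no framework): observers register on a
// path and only fire when that path changes; a throwing observer is caught
// so it can't break the others.
type Observer = (value: unknown) => void;

function createState(initial: Record<string, unknown>) {
  const state = { ...initial };
  const observers = new Map<string, Set<Observer>>();

  return {
    get: (path: string) => state[path],
    subscribe(path: string, fn: Observer) {
      if (!observers.has(path)) observers.set(path, new Set());
      observers.get(path)!.add(fn);
    },
    // All mutations go through set() -- the action-creator discipline
    set(path: string, value: unknown) {
      state[path] = value;
      for (const fn of observers.get(path) ?? []) {
        try {
          fn(value);
        } catch (err) {
          // Error handling in observers: log and keep notifying the rest
          console.error(`observer for "${path}" failed`, err);
        }
      }
    },
  };
}
```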
5. Mobile Development Patterns
Optimize this component for mobile:
- Responsive breakpoints (mobile-first)
- Touch target sizes (minimum 44x44px)
- Swipe gesture support
- Safe area insets (iOS notches)
- Performance optimizations (lazy loading, code splitting)
- Mobile-specific UI patterns
Component: @components/[component].tsx
Current issues: [list any mobile-specific problems]
Real examples:
From nextdish.app:
From gip:
6. Performance Optimization
Optimize this application for performance:
- Code splitting and lazy loading
- Image optimization (Next.js Image component)
- Bundle size reduction (tree-shaking, compression)
- React performance patterns (memo, useMemo, useCallback)
- Service Worker caching strategies
- Database query optimization
- API response caching
Current bundle size: [X KB]
Performance issues: [list specific problems]
Real examples from nextdish.app:
Real examples from gip:
7. TypeScript Patterns & Type Safety
Add TypeScript types to this codebase:
- Define interfaces for all data structures
- Create type-safe API clients
- Add generic types for reusable utilities
- Implement type guards for runtime validation
- Use Prisma-generated types for database models
- Create discriminated unions for state management
Current state: [describe current typing situation]
Files to update: @src/
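Two of the patterns in that prompt, discriminated unions and type guards, look like this in practice (the `FetchState` and `User` shapes are illustrative):

```typescript
// A discriminated union for async state: the "status" field is the
// discriminant, so TypeScript narrows the type inside each case.
type FetchState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; message: string };

type User = { id: string; email: string };

// Type guard: validates unknown JSON at runtime and narrows the type.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" && value !== null &&
    typeof (value as User).id === "string" &&
    typeof (value as User).email === "string"
  );
}

function describe(state: FetchState<User>): string {
  switch (state.status) {
    case "idle": return "not started";
    case "loading": return "loading";
    case "success": return `loaded ${state.data.email}`; // data only exists here
    case "error": return `failed: ${state.message}`;
  }
}
```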
Real example from nextdish.app:
8. Form Handling & Validation
Create a form with validation:
- React Hook Form integration
- Zod schema validation
- Form state management
- Error message display
- Multi-step form support (if needed)
- File upload handling (if needed)
- Accessibility (ARIA labels, error announcements)
Form type: [recipe creation/user registration/etc.]
Reference: @components/forms/
Real example from nextdish.app:
9. API Design & Integration
Create API routes following our patterns:
- RESTful API design principles
- Request/response validation (Zod)
- Error handling with appropriate status codes
- Pagination support
- Rate limiting (if needed)
- Authentication middleware
- Edge runtime compatibility (if using Cloudflare)
API endpoint: [GET/POST/PUT/DELETE /api/resource]
Reference: @app/api/
Real example from nextdish.app:
Example API route:
// Demonstrates:
- Edge runtime compatibility
- Environment variable handling
- Query parameter validation
- Database query building
- Error handling with HTTP status codes
- Pagination and filtering
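A framework-agnostic sketch of the validation and pagination logic such a route performs. In the real project this would be Zod inside a Next.js App Router handler; here it's hand-rolled to stay self-contained, and the parameter names and limits are assumptions:

```typescript
type Page<T> = { items: T[]; page: number; pageSize: number; total: number };

// Query parameter validation: a bad value becomes a 400 in the real route.
function parsePagination(params: URLSearchParams): { page: number; pageSize: number } {
  const page = Number(params.get("page") ?? "1");
  const pageSize = Number(params.get("pageSize") ?? "20");
  if (!Number.isInteger(page) || page < 1) throw new Error("400: invalid page");
  if (!Number.isInteger(pageSize) || pageSize < 1 || pageSize > 100)
    throw new Error("400: invalid pageSize");
  return { page, pageSize };
}

// Pagination over an already-fetched list; a real route would push the
// offset/limit into the database query instead.
function paginate<T>(all: T[], page: number, pageSize: number): Page<T> {
  const start = (page - 1) * pageSize;
  return { items: all.slice(start, start + pageSize), page, pageSize, total: all.length };
}
```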
10. Build & Deployment Workflows
Set up build and deployment:
- Build configuration (Next.js, TypeScript, Webpack)
- Environment-specific configurations
- CI/CD pipeline setup (GitHub Actions)
- Cloudflare Pages deployment (or other platform)
- Edge runtime considerations
- Build optimization (code splitting, minification)
- Pre-deployment checks
Platform: [Cloudflare Pages/Vercel/etc.]
Reference: @.github/workflows/
Real examples:
From nextdish.app:
From gip:
11. Error Handling & Debugging
Implement comprehensive error handling:
- Error boundary patterns for React components
- Try-catch blocks for async operations
- Error logging and monitoring
- User-friendly error messages
- Error recovery strategies
- Validation error handling
- Network error handling
Current error handling: [describe current state]
Error monitoring: [Sentry/Cloudflare Analytics/etc.]
Real example from nextdish.app:
12. Code Organization & Architecture
Organize this codebase following best practices:
- Feature-based folder structure
- Separation of concerns (components, utils, services)
- Module boundaries and barrel exports
- Shared vs. feature-specific code
- Configuration management
- Environment variable handling
Current structure: [describe current organization]
Reference: @src/
Real examples:
From nextdish.app:
features/recipes/, features/auth/
components.json, tailwind.config.js
From gip:
js/components/, js/utils/, js/panels/
css-styles/core/, css-styles/components/, css-styles/layout/
js/panels/index.js for clean imports
13. Testing Patterns & Strategies
Set up testing for this codebase:
- Unit testing with Jest/Vitest
- Component testing with React Testing Library
- Integration testing patterns
- E2E testing with Cypress/Playwright
- Test organization and structure
- Mocking strategies (API calls, external services)
- Test coverage and quality metrics
Testing framework: [Jest/Cypress/etc.]
Reference: @tests/ or @__tests__/
Real example from nextdish.app:
14. Accessibility (a11y) Patterns
Make this application accessible:
- ARIA attributes and roles
- Keyboard navigation support
- Screen reader compatibility
- Focus management
- Color contrast requirements (WCAG AA/AAA)
- Semantic HTML usage
- Form accessibility
- Modal and dialog accessibility
Current accessibility: [describe current state]
WCAG level: [AA/AAA target]
Real example from nextdish.app:
15. Styling & Theming Systems
Implement a theming system:
- Tailwind CSS patterns (or CSS-in-JS)
- Theme system implementation
- Dark mode support
- Responsive design utilities
- Custom utility classes
- Design token management
- Component styling patterns
Styling approach: [Tailwind/CSS-in-JS/etc.]
Reference: @tailwind.config.js or @styles/
Real example from nextdish.app:
Metrics that matter for software engineers:
Example from nextdish.app:
Example from gip:
Cursor becomes even more powerful when your team uses it consistently.
The challenge: Everyone on your team should follow the same coding standards, but manually enforcing this is tedious.
The solution: Create a shared .cursorrules file in your repository. When anyone on your team uses Cursor, it automatically follows these standards. No more “use TypeScript” reminders—Cursor just knows.
Traditional pair programming: Two developers, one keyboard, constant discussion.
With Cursor: One developer drives, Cursor acts as the second pair, suggesting improvements and catching issues.
Example: When building the social features for nextdish.app, I'd describe what I wanted, Cursor would generate code, and I'd review it, ask questions, and refine it. This is pair programming, just with AI instead of another human.
Before submitting a PR:
Review this code for:
- Security vulnerabilities
- Performance issues
- Code style consistency (check against .cursorrules)
- Missing error handling
- Test coverage gaps
- Documentation needs
Files changed: [list files]
Result: You catch issues before reviewers do, making the review process smoother and faster.
The challenge: Getting a team to adopt Cursor requires addressing skepticism, training, and establishing best practices.
Phase 1: Pilot Program (Weeks 1-2)
Set up a Cursor pilot program:
1. Select 2-3 enthusiastic early adopters
2. Provide Cursor licenses and training
3. Have them document their workflows
4. Collect feedback and metrics
5. Identify champions who can help others
Goal: Prove value with small group
Success metrics: Time savings, code quality, developer satisfaction
Phase 2: Team Training (Weeks 3-4)
Roll out Cursor to the full team:
1. Host training sessions (prompt engineering, best practices)
2. Create team .cursorrules file
3. Set up shared documentation
4. Establish code review guidelines for AI-generated code
5. Create internal knowledge base
Training topics:
- Prompt engineering basics
- When to use Chat vs. Composer
- Code review best practices
- Security considerations
Phase 3: Integration (Weeks 5-6)
Integrate Cursor into team workflows:
1. Add Cursor to onboarding checklist
2. Update code review process
3. Document team-specific patterns
4. Create prompt templates for common tasks
5. Set up metrics tracking
Success criteria:
- 80%+ of team using Cursor regularly
- Positive feedback from team members
- Measurable productivity improvements
Common concerns and responses:
“AI will write bad code”
“It’s a crutch, developers won’t learn”
“It’s too expensive”
“It’s a security risk”
Establishing team standards:
Create team Cursor guidelines:
1. Code review requirements (all AI code must be reviewed)
2. Security rules (no secrets, no proprietary algorithms)
3. Documentation standards (document AI usage)
4. Prompt templates for common tasks
5. Escalation process (when Cursor isn't working)
Save as: @docs/cursor-guidelines.md
Include in onboarding materials
Knowledge sharing:
Set up team knowledge sharing for Cursor:
1. Create shared prompt library
2. Document successful workflows
3. Share .cursorrules improvements
4. Regular "Cursor tips" meetings
5. Internal blog/wiki for patterns
Goal: Compound team knowledge
Format: Weekly 15-minute "Cursor tip" in team meeting
Why this matters: Contributing to open source is a great way to learn, build your reputation, and give back. Cursor helps you understand large codebases quickly and create contributions that get merged.
Codebase exploration:
I want to contribute to this open source project. Help me understand:
- Project structure and architecture
- Key components and their responsibilities
- How to set up the development environment
- Testing strategy and how to run tests
- Code style and contribution guidelines
- Recent changes and active areas
Project: [repository URL or @local-path]
Focus area: [feature area you want to contribute to]
Finding contribution opportunities:
Analyze this repository to find good first issues:
- Look for "good first issue" labels
- Identify areas with TODO comments
- Find outdated documentation
- Locate missing tests
- Identify performance improvements
Repository: @.
Filter by: Language, complexity, area
Feature implementation:
I want to implement this feature for this open source project:
[Feature description from issue]
Steps:
1. Understand existing code patterns
2. Create feature branch
3. Implement feature following project conventions
4. Add tests (unit and integration)
5. Update documentation
6. Ensure all tests pass
7. Create PR with clear description
Reference:
- Existing similar features: @src/features/[similar-feature]
- Test patterns: @tests/
- Documentation style: @docs/
Bug fix workflow:
Fix this bug reported in issue #123:
[Bug description]
Steps:
1. Reproduce the bug locally
2. Identify root cause
3. Write failing test that reproduces bug
4. Fix the bug
5. Verify test passes
6. Check for regressions
7. Update CHANGELOG
Bug location: [file/area]
Related code: @src/path/to/bug
Documentation improvements:
Improve documentation for this project:
- Update outdated README sections
- Add missing API documentation
- Create examples for common use cases
- Fix broken links
- Add diagrams for complex flows
Current docs: @docs/
Code to document: @src/
PR description template:
Create a PR description following best practices:
- Clear title summarizing change
- Description of what and why
- Link to related issue
- Testing instructions
- Screenshots (if UI changes)
- Checklist of completed items
Changes: [describe changes]
Issue: #123
Breaking changes: [yes/no and details]
Code review preparation:
Review my PR before submitting:
- Check code style matches project
- Verify all tests pass
- Ensure documentation is updated
- Check for security issues
- Verify no breaking changes (or document them)
- Ensure commit messages follow conventions
PR changes: [list files changed]
Project conventions: @CONTRIBUTING.md
These workflows show you how to use Cursor for specific, recurring tasks. Even if you haven’t done these yet, these patterns give you a template to follow.
Why testing matters: Tests catch bugs before production, document expected behavior, and enable confident refactoring. Cursor excels at generating tests once you show it your patterns.
The TDD cycle: Red → Green → Refactor
1. Write a failing test first:
I'm building a user authentication function. Write a test that:
- Tests successful login with valid credentials
- Tests failure with invalid password
- Tests failure with non-existent user
- Tests rate limiting after 5 failed attempts
Use Jest and follow the pattern in @tests/auth.test.ts
2. Generate minimal code to pass:
Now implement the login function to make these tests pass.
Reference @lib/auth.ts for existing patterns.
3. Refactor with confidence:
Refactor this login function for better error handling and security.
All tests must still pass.
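The red-to-green half of that loop, compressed into one runnable sketch. The real project uses Jest; plain throwing assertions keep this self-contained, and the user data, error codes, and five-attempt threshold are illustrative (the threshold comes from the prompt above):

```typescript
// In-memory "users table" and failure counter -- stand-ins for the database.
const users = new Map([["ana@example.com", "s3cret"]]);
const failedAttempts = new Map<string, number>();

type LoginResult = { ok: boolean; error?: string };

// Minimal implementation written to make the four test cases pass:
// valid login, bad password, unknown user, rate limit after 5 failures.
function login(email: string, password: string): LoginResult {
  if ((failedAttempts.get(email) ?? 0) >= 5) {
    return { ok: false, error: "rate_limited" };
  }
  const stored = users.get(email);
  if (stored === undefined || stored !== password) {
    failedAttempts.set(email, (failedAttempts.get(email) ?? 0) + 1);
    return { ok: false, error: stored === undefined ? "no_such_user" : "bad_password" };
  }
  failedAttempts.delete(email); // success resets the counter
  return { ok: true };
}
```

With these tests green, the refactor step (better error handling, hashed passwords, persistent counters) can proceed without fear of breaking behavior.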
Real example from nextdish.app:
Unit test generation:
Generate unit tests for this function:
@utils/api.ts - fetchUserData function
Test cases:
- Successful API call returns user data
- Network error returns appropriate error
- Invalid response format is handled
- Timeout scenarios
Use Jest with mocking. Follow patterns in @tests/utils/api.test.ts
Integration test generation:
Generate integration tests for the recipe creation API endpoint.
Test:
- POST /api/recipes with valid data
- POST /api/recipes with missing required fields
- POST /api/recipes with invalid authentication
- POST /api/recipes with rate limiting
Mock the database using @tests/helpers/db-mock.ts
E2E test generation:
Generate Cypress E2E tests for the recipe creation flow:
1. User logs in
2. User navigates to create recipe page
3. User fills out recipe form
4. User submits recipe
5. User sees success message and recipe appears in feed
Use existing page objects from @tests/e2e/page-objects/
Real example from meridian-whisper:
Set up testing infrastructure for this Next.js project:
- Jest for unit/integration tests
- Cypress for E2E tests
- Mocking utilities for:
- Edge runtime environment
- Supabase authentication
- Database queries
- API responses
Create test helpers and utilities following patterns in @tests/helpers/
Why CI/CD matters: Automated testing and deployment catch issues early, reduce manual work, and enable faster iteration. Cursor can help you set up and maintain CI/CD pipelines.
Basic CI workflow:
Create a GitHub Actions workflow that:
- Runs on push to main and pull requests
- Installs dependencies (npm install)
- Runs linter (ESLint)
- Runs type checking (TypeScript)
- Runs tests (Jest)
- Builds the project
- Fails if any step fails
Save as .github/workflows/ci.yml
Deployment workflow:
Create a GitHub Actions workflow for deploying to Cloudflare Pages:
- Triggers on push to main branch
- Builds Next.js application
- Deploys to Cloudflare Pages using Wrangler
- Uses secrets for API keys
- Sends notification on success/failure
Reference existing workflow in @.github/workflows/deploy.yml
Real example from nextdish.app:
Cloudflare Pages deployment:
Automate Cloudflare Pages deployment:
1. Build Next.js app with production optimizations
2. Upload to Cloudflare Pages
3. Set environment variables from GitHub secrets
4. Verify deployment health
5. Rollback on failure
Use Wrangler CLI and GitHub Actions
Database migration automation:
Create a deployment script that:
1. Runs database migrations before deployment
2. Verifies migration success
3. Only proceeds if migrations succeed
4. Rolls back on failure
Use Prisma migrations and Supabase CLI
Optimize this CI/CD pipeline for speed:
- Cache dependencies between runs
- Run tests in parallel
- Skip unnecessary steps on documentation-only changes
- Use matrix builds for multiple Node versions
Current pipeline takes 38 seconds. Target: <20 seconds.
Real result: Optimized CI time from 38s to 20s (47% improvement) through dependency caching and parallel test execution.
Why IaC matters: Infrastructure as Code lets you version control, test, and reproduce your infrastructure. Cursor helps you generate and maintain Terraform, CloudFormation, and Kubernetes configs.
Generate Terraform configuration:
Create Terraform configuration for this infrastructure:
- AWS S3 bucket for static assets
- CloudFront distribution for CDN
- RDS PostgreSQL database
- ECS cluster for containerized services
- VPC with public/private subnets
- Security groups with least-privilege access
Requirements:
- Use Terraform modules for reusability
- Include variables for environment (dev/staging/prod)
- Add outputs for important resources
- Include state management configuration
Reference existing infrastructure: @docs/infrastructure.md
Terraform module creation:
Create a reusable Terraform module for a standard web application:
- Application Load Balancer
- Auto Scaling Group
- RDS database
- CloudWatch logging
- IAM roles and policies
Module should accept:
- Environment name
- Instance types
- Database size
- VPC configuration
Output:
- Module structure
- Variables file
- Outputs file
- README with usage examples
Generate Kubernetes manifests:
Create Kubernetes deployment configuration for this application:
- Deployment with 3 replicas
- Service for internal communication
- Ingress for external access
- ConfigMap for environment variables
- Secret for sensitive data
- HorizontalPodAutoscaler for scaling
Application:
- Next.js frontend
- Node.js API backend
- PostgreSQL database (external)
Requirements:
- Health checks (liveness and readiness probes)
- Resource limits and requests
- Rolling update strategy
- Pod disruption budget
Reference: @docker/Dockerfile
Helm chart generation:
Create a Helm chart for this application:
- Chart structure (Chart.yaml, values.yaml)
- Templates for deployment, service, ingress
- Configurable values (replicas, image tags, resources)
- Documentation in README.md
Application: Multi-service architecture
Services: frontend, api, worker
AWS CDK stack generation:
Create an AWS CDK stack in TypeScript for:
- API Gateway with Lambda functions
- DynamoDB tables with proper indexes
- S3 bucket with versioning
- CloudWatch alarms
- IAM roles with least privilege
Use CDK best practices:
- Constructs for reusable components
- Environment-specific configurations
- Proper resource tagging
Why observability matters: You can’t fix what you can’t see. Observability (logging, metrics, tracing) helps you understand system behavior, debug issues, and optimize performance.
Structured logging implementation:
Implement structured logging for this application:
- Use a logging library (Winston, Pino, or similar)
- Log levels: error, warn, info, debug
- Include context: request ID, user ID, timestamp
- Format: JSON for production, readable for development
- Log to: console (dev), file (staging), cloud service (prod)
Requirements:
- Request/response logging middleware
- Error logging with stack traces
- Performance logging (slow queries, long operations)
- Security event logging (failed auth, suspicious activity)
Reference existing code: @middleware/logging.ts
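A minimal structured logger showing the shape those requirements produce. Real projects would reach for Pino or Winston; this hand-rolled sketch just illustrates what one JSON log line carries, and returns the line instead of writing it so the shape is easy to inspect:

```typescript
type Level = "error" | "warn" | "info" | "debug";

// Context (request ID, user ID) is bound once and travels with every line.
function createLogger(context: { requestId: string; userId?: string }) {
  const emit = (level: Level, message: string, extra: Record<string, unknown> = {}) =>
    JSON.stringify({
      level,
      message,
      timestamp: new Date().toISOString(),
      ...context,
      ...extra,
    });

  return {
    info: (msg: string, extra?: Record<string, unknown>) => emit("info", msg, extra),
    // Errors always carry the stack trace
    error: (msg: string, err: Error) => emit("error", msg, { stack: err.stack }),
  };
}
```

In production you would write these lines to stdout or a transport, JSON-formatted so the aggregation layer described next can parse and index them.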
Log aggregation setup:
Set up log aggregation for this distributed system:
- Centralized logging (ELK stack, Datadog, or CloudWatch)
- Log parsing and indexing
- Search and filtering capabilities
- Alerting on error patterns
- Log retention policies
Services to monitor:
- API servers (Node.js)
- Background workers (Python)
- Database queries (PostgreSQL)
Metrics collection:
Implement metrics collection for this application:
- Application metrics (request count, latency, error rate)
- Business metrics (user signups, orders, revenue)
- System metrics (CPU, memory, disk usage)
- Custom metrics (cache hit rate, queue depth)
Use: Prometheus for metrics, Grafana for visualization
Export metrics at: /metrics endpoint
Create dashboards for:
- System health overview
- API performance
- Error tracking
- Business KPIs
APM (Application Performance Monitoring) setup:
Set up APM for this application:
- Instrument code for tracing
- Track request flows across services
- Identify slow database queries
- Monitor external API calls
- Track error rates and types
Tools: New Relic, Datadog APM, or OpenTelemetry
Focus on: p50, p95, p99 latencies
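p50, p95, and p99 are just percentiles over observed latencies. A sketch of the nearest-rank method gives the targets concrete meaning (the sample latencies below are made up):

```typescript
// Nearest-rank percentile: the smallest sample such that p% of all
// samples are less than or equal to it.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Ten request latencies in ms: one slow outlier dominates the tail.
const latencies = [120, 80, 95, 300, 110, 90, 100, 105, 85, 1500];
const p50 = percentile(latencies, 50); // typical request
const p99 = percentile(latencies, 99); // worst-case tail
```

This is why APM dashboards show all three: the p50 can look healthy while the p99 reveals the outliers your unhappiest users actually experience.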
OpenTelemetry implementation:
Implement distributed tracing using OpenTelemetry:
- Instrument HTTP requests
- Trace database queries
- Track async operations (queues, background jobs)
- Propagate trace context across services
- Export traces to: Jaeger or Zipkin
Services to trace:
- Frontend (Next.js)
- API Gateway
- Backend services (Node.js, Python)
- Database queries
- External API calls
Trace analysis:
Analyze this trace data to identify:
- Slowest operations in request flow
- Services causing bottlenecks
- Database queries taking too long
- Unnecessary external API calls
- Opportunities for caching
Trace data: [paste trace JSON or link]
Alert configuration:
Set up alerting for this system:
- Critical: Service down, error rate > 5%
- Warning: High latency (p95 > 1s), disk usage > 80%
- Info: Deployment events, scaling events
Alert channels:
- PagerDuty for critical alerts
- Slack for warnings
- Email for info
Create runbooks for common alerts:
- High error rate → Check logs, verify dependencies
- High latency → Check database, review slow queries
- Service down → Check health endpoints, verify deployments
Why security matters: Security vulnerabilities can compromise user data, damage reputation, and create legal liability. Cursor helps you implement security best practices systematically.
Comprehensive security review:
Perform a security audit of this codebase:
1. Check for common vulnerabilities (OWASP Top 10)
2. Review authentication and authorization
3. Check for SQL injection risks
4. Check for XSS vulnerabilities
5. Review API rate limiting
6. Check for exposed secrets or API keys
7. Review error handling (information disclosure)
8. Check CORS configuration
Generate a security report with:
- Vulnerability severity (Critical, High, Medium, Low)
- Affected files and line numbers
- Recommended fixes
- Priority order for addressing issues
Real example from nextdish.app:
Dependency vulnerability check:
Scan this project's dependencies for known vulnerabilities:
- Check package.json and package-lock.json
- Use npm audit or similar
- Identify vulnerable packages
- Suggest secure alternatives or updates
- Check for packages with security advisories
Generate a report with:
- Package name and version
- Vulnerability CVE numbers
- Severity level
- Recommended action (update, replace, or accept risk)
Code-level vulnerability scanning:
Scan this codebase for security vulnerabilities:
- Command injection risks (shell execution, subprocess calls)
- Path traversal vulnerabilities
- Insecure random number generation
- Hardcoded secrets or credentials
- Insecure cryptographic operations
- Missing input validation
Focus on:
- API routes (@app/api/)
- Utility functions (@lib/)
- Authentication code (@lib/auth.ts)
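One of the checks such a scan looks for is a path-traversal guard. A minimal sketch (the uploads directory is illustrative, and this is one pattern, not a complete defense):

```typescript
import * as path from "node:path";

const UPLOAD_ROOT = path.resolve("/var/app/uploads");

// Resolve the requested name against the root, then verify the result is
// still inside the root -- "../" tricks fail this check.
function safeUploadPath(requested: string): string {
  const resolved = path.resolve(UPLOAD_ROOT, requested);
  if (resolved !== UPLOAD_ROOT && !resolved.startsWith(UPLOAD_ROOT + path.sep)) {
    throw new Error("path traversal rejected");
  }
  return resolved;
}
```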
Real example from meridian-whisper:
shell=False for all subprocess calls
Row Level Security (RLS) setup:
Design and implement Row Level Security policies for this database schema.
Requirements:
- Users can only access their own data
- Public data is readable by all authenticated users
- Admin users have full access
- Optimize policies to avoid performance warnings
Use cached auth pattern: (SELECT auth.uid()) instead of auth.uid()
Reference existing policies in @supabase/migrations/
Real example: Optimized 50+ RLS policies in nextdish.app by replacing auth.uid() with cached (SELECT auth.uid()) patterns, eliminating all performance warnings and improving query performance by ~40%.
Content Security Policy (CSP) implementation:
Implement strict Content Security Policy for this Next.js app:
- Allow only same-origin scripts
- Use nonce-based script execution
- Block inline styles (except nonced)
- Configure CSP violation reporting endpoint
- Test CSP doesn't break existing functionality
Add CSP headers in @app/layout.tsx
Important note: Only optimize when there is a demonstrated need. Premature optimization can create bugs, add complexity, and waste time. Optimize when you have:
The optimization workflow:
Database query optimization:
Analyze and optimize slow database queries:
1. Identify queries taking >200ms (use EXPLAIN ANALYZE)
2. Check for missing indexes
3. Look for N+1 query problems
4. Check for inefficient RLS policies
5. Suggest index additions or query rewrites
Current slow query:
[Paste query and execution plan]
Target: <100ms for this query
Real example from nextdish.app:
Bundle size optimization:
Optimize this Next.js application bundle size:
1. Analyze current bundle (use @next/bundle-analyzer)
2. Identify large dependencies
3. Suggest code splitting opportunities
4. Recommend tree-shaking improvements
5. Optimize images and assets
Current bundle: [size]
Target: <300KB (gzipped)
Real example:
API response time optimization:
Optimize API endpoint response times:
1. Identify slow endpoints (>200ms p95)
2. Check for unnecessary database queries
3. Look for missing caching opportunities
4. Review serialization overhead
5. Check for blocking operations
Endpoint: @app/api/recipes/route.ts
Current: 350ms p95
Target: <200ms p95
Real example:
Setting up performance monitoring:
Set up performance monitoring for this application:
- Track API response times (p50, p95, p99)
- Monitor database query performance
- Track bundle size over time
- Monitor cache hit rates
- Set up alerts for performance regressions
Use tools: Cloudflare Analytics, PostHog, or custom metrics
Measuring productivity gains:
Create a system to measure Cursor's impact on productivity:
- Track time to implement features (before/after)
- Measure bug fix time
- Count lines of code generated vs. written
- Track test coverage improvements
- Monitor deployment frequency
Baseline: [your current metrics]
Track weekly and compare
Why this matters: Technical debt slows development and increases bug risk. Cursor helps you identify, prioritize, and reduce technical debt systematically.
Code smell detection:
Analyze this codebase for code smells:
- Long methods (> 50 lines)
- Large classes (> 500 lines)
- Duplicate code
- Complex conditionals
- Magic numbers/strings
- Missing error handling
- Inconsistent naming
Generate report with:
- File and line numbers
- Severity (high, medium, low)
- Suggested refactoring
- Estimated effort
Codebase: @src/
Focus: [specific area or all]
Technical debt audit:
Perform a technical debt audit:
- Identify deprecated dependencies
- Find TODO/FIXME comments
- Locate commented-out code
- Identify unused code
- Find inconsistent patterns
- Check for missing tests
Prioritize by:
- Impact on development velocity
- Risk of bugs
- Cost to fix
- Frequency of code touched
Generate prioritized list with effort estimates.
Safe refactoring:
Refactor this code safely:
- Extract method/function
- Rename variable/function
- Simplify conditional logic
- Remove duplicate code
- Improve error handling
Requirements:
- Maintain existing functionality
- Update all references
- Update tests if needed
- Ensure no breaking changes
Code to refactor: @src/path/to/file
Tests: @tests/path/to/file
Large-scale refactoring:
Plan a large-scale refactoring:
- Break into small, safe steps
- Create feature flag for gradual rollout
- Write tests before refactoring
- Refactor one area at a time
- Verify after each step
Refactoring goal: [what you want to achieve]
Current state: [describe current code]
Target state: [describe desired code]
Dependency updates:
Update dependencies safely:
- Check for security vulnerabilities
- Review changelogs for breaking changes
- Update one dependency at a time
- Run tests after each update
- Update lock files
Dependencies to update: [list or all]
Strategy: Major vs minor vs patch updates
Dependency audit:
Audit dependencies for:
- Security vulnerabilities
- Outdated packages
- Unused dependencies
- Duplicate dependencies
- License compatibility
Generate report with:
- Package name and version
- Vulnerability details (if any)
- Recommended action
- Breaking changes (if upgrading)
Coverage analysis:
Analyze test coverage and identify gaps:
- Files with low coverage (< 80%)
- Critical paths without tests
- Edge cases not covered
- Integration tests missing
Generate test plan:
- Priority order for adding tests
- Test types needed (unit, integration, E2E)
- Estimated effort per area
Current coverage: [percentage]
Target coverage: 80%+
Test generation:
Generate tests for this code:
- Unit tests for all functions
- Edge case tests
- Error handling tests
- Integration tests for API endpoints
Code to test: @src/path/to/file
Test framework: [Jest, Mocha, etc.]
Existing test patterns: @tests/
When to modernize: Not all legacy code needs updating. Modernize when:
Framework migration pattern:
Migrate this legacy codebase from [old framework] to [new framework]:
1. Analyze current codebase structure
2. Identify core patterns and dependencies
3. Create migration plan (incremental vs. big bang)
4. Set up new framework alongside old
5. Migrate one module at a time
6. Test thoroughly after each migration
7. Remove old code only after verification
Start with: [specific module/file]
Reference: @docs/migration-plan.md
Database migration pattern:
Modernize this legacy database schema:
1. Document current schema and relationships
2. Identify problematic patterns (missing indexes, no constraints)
3. Create new optimized schema
4. Write migration scripts
5. Test migrations on staging
6. Plan rollback strategy
7. Execute migration with monitoring
Current schema: @database/old-schema.sql
Target: Modern schema with proper indexes and constraints
Real example from nextdish.app:
auth.uid() with cached (SELECT auth.uid()) across all policies
API modernization pattern:
Modernize this legacy API:
1. Document current API endpoints
2. Identify deprecated patterns
3. Design new API structure (RESTful, GraphQL, etc.)
4. Implement new endpoints alongside old
5. Add versioning (v1, v2)
6. Migrate clients gradually
7. Deprecate old endpoints after migration
Current API: @api/v1/
New API: @api/v2/
Maintain backward compatibility during transition
Code pattern modernization:
Refactor this legacy code to use modern patterns:
- Replace callbacks with async/await
- Replace var/let with const where possible
- Add TypeScript types
- Replace class components with functional components
- Update to modern React hooks
- Improve error handling
File: @components/LegacyComponent.tsx
Maintain same functionality, improve code quality
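The callback-to-async/await step above is the most mechanical of these modernizations. Here is a minimal sketch of the pattern; the function names and user shape are illustrative, not from any real codebase:

```typescript
// Hypothetical legacy callback-style API (names are illustrative)
function fetchUserLegacy(
  id: number,
  callback: (err: Error | null, user?: { id: number; name: string }) => void
): void {
  setTimeout(() => {
    if (id <= 0) callback(new Error("invalid id"));
    else callback(null, { id, name: `user-${id}` });
  }, 10);
}

// Modernized: wrap the callback in a Promise so callers can use async/await
function fetchUser(id: number): Promise<{ id: number; name: string }> {
  return new Promise((resolve, reject) => {
    fetchUserLegacy(id, (err, user) => {
      if (err || !user) reject(err ?? new Error("no user"));
      else resolve(user);
    });
  });
}

// Callers now read top-to-bottom instead of nesting callbacks
async function showUser(): Promise<void> {
  const user = await fetchUser(42);
  console.log(user.name);
}
```

In Node.js, `util.promisify` does this wrapping automatically for callbacks that follow the `(err, result)` convention; the manual version above shows what it does under the hood.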
Why these patterns matter: Modern backend systems need async processing, fast data access, and proven scalability. These patterns are essential for production-ready applications.
RabbitMQ setup:
Set up RabbitMQ message queue for async processing:
- Create exchange and queues
- Implement producer (publish messages)
- Implement consumer (process messages)
- Add error handling and retries
- Implement dead letter queue for failed messages
Use cases:
- Email sending (async)
- Image processing (background jobs)
- Notification delivery (fire-and-forget)
Requirements:
- Connection pooling
- Message acknowledgment
- Idempotent message processing
- Monitoring queue depth
Redis Queue (Bull) for Node.js:
Set up Bull queue with Redis for job processing:
- Create job queues (email, image-processing, analytics)
- Implement job producers
- Implement job processors with retry logic
- Add job progress tracking
- Set up job scheduling (cron-like)
Job types:
- Send email (priority: high, retries: 3)
- Generate report (priority: low, retries: 1)
- Process image (priority: medium, retries: 2)
Include: Job status dashboard, failed job handling
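The retry logic that queues like Bull and RabbitMQ consumers provide can be sketched in a few lines. This is a simplified in-memory illustration of retries with exponential backoff, not Bull's actual API; the `Job` shape and `processWithRetry` name are assumptions:

```typescript
// Minimal sketch of queue-style retry behavior (illustrative, not Bull's API)
interface Job<T> {
  data: T;
  retries: number;   // max additional attempts after the first failure
  backoffMs: number; // base delay, doubled on each retry
}

async function processWithRetry<T>(
  job: Job<T>,
  handler: (data: T) => Promise<void>
): Promise<"completed" | "failed"> {
  for (let attempt = 0; attempt <= job.retries; attempt++) {
    try {
      await handler(job.data);
      return "completed";
    } catch {
      // Out of retries: in a real queue this job would move to the dead letter queue
      if (attempt === job.retries) return "failed";
      // Exponential backoff: backoffMs, 2*backoffMs, 4*backoffMs, ...
      await new Promise((r) => setTimeout(r, job.backoffMs * 2 ** attempt));
    }
  }
  return "failed";
}
```

Because the handler may run more than once for the same job, it must be idempotent, which is exactly the "idempotent message processing" requirement listed above.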
Kafka event streaming:
Set up Kafka for event streaming:
- Create topics for events (user.created, order.placed, payment.processed)
- Implement event producers
- Implement event consumers (multiple consumer groups)
- Add schema registry for event validation
- Implement event replay capability
Event schema:
- Event type
- Timestamp
- User ID
- Event data (JSON)
- Version
Consumer groups:
- analytics-service (all events)
- notification-service (user events)
- reporting-service (business events)
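The event schema above maps naturally to a typed envelope. This sketch shows one plausible shape plus a lightweight validity check standing in for schema-registry validation; the field names are assumptions based on the list above:

```typescript
// Event envelope matching the schema fields listed above (names are assumptions)
interface EventEnvelope<T> {
  type: string;      // e.g. "user.created", "order.placed"
  timestamp: string; // ISO-8601
  userId: string;
  data: T;           // JSON-serializable payload
  version: number;   // schema version, bumped on breaking changes
}

// A stand-in for schema-registry validation: checks shape before consuming
function isValidEnvelope(e: unknown): e is EventEnvelope<unknown> {
  if (typeof e !== "object" || e === null) return false;
  const env = e as Partial<EventEnvelope<unknown>>;
  return (
    typeof env.type === "string" &&
    typeof env.timestamp === "string" &&
    !Number.isNaN(Date.parse(env.timestamp)) &&
    typeof env.userId === "string" &&
    typeof env.version === "number" &&
    env.data !== undefined
  );
}
```

In a real Kafka deployment, a schema registry (e.g. with Avro or JSON Schema) enforces this at the broker boundary; a consumer-side guard like this is still useful defense in depth.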
Redis caching implementation:
Implement Redis caching for this application:
- Cache frequently accessed data (user profiles, product info)
- Cache expensive computations (aggregations, reports)
- Cache API responses (with TTL)
- Implement cache invalidation strategy
- Add cache warming for critical data
Caching patterns:
- Cache-aside (lazy loading)
- Write-through (write to cache and DB)
- Write-behind (write to cache, async to DB)
Cache keys: Use namespaced keys (user:123, product:456)
TTL: 5 minutes for user data, 1 hour for product data
CDN and edge caching:
Set up edge caching for static and dynamic content:
- Static assets (images, CSS, JS) → Long TTL
- API responses → Short TTL with cache headers
- HTML pages → Cache with revalidation
- Implement cache purging for updates
Headers to set:
- Cache-Control: public, max-age=3600
- ETag for conditional requests
- Vary header for user-specific content
Database query caching:
Implement query result caching:
- Cache slow queries (aggregations, joins)
- Cache user-specific queries (with user ID in key)
- Invalidate cache on data updates
- Use Redis for distributed cache
Queries to cache:
- User feed (TTL: 5 minutes)
- Popular recipes (TTL: 1 hour)
- Analytics aggregations (TTL: 15 minutes)
Load testing setup:
Create load testing suite using k6 or Artillery:
- Test API endpoints under load
- Simulate realistic user behavior
- Test concurrent users (100, 500, 1000, 5000)
- Measure: response time, error rate, throughput
- Identify breaking points
Scenarios:
- Normal load: 100 users, 5 minutes
- Peak load: 500 users, 10 minutes
- Stress test: 1000 users, until failure
Endpoints to test:
- POST /api/recipes (write-heavy)
- GET /api/feed (read-heavy)
- GET /api/users/:id (cacheable)
Performance benchmarking:
Create performance benchmarks for critical operations:
- Database query performance (before/after optimization)
- API endpoint latency (p50, p95, p99)
- Cache hit rates
- Queue processing throughput
Benchmark scenarios:
- Baseline (current implementation)
- With caching
- With database optimization
- With connection pooling
Target metrics:
- API latency: p95 < 200ms
- Database queries: < 100ms
- Cache hit rate: > 80%
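The p50/p95/p99 figures above are percentiles over recorded latencies. Here is one common way to compute them (the nearest-rank method; monitoring tools sometimes interpolate slightly differently):

```typescript
// Nearest-rank percentile over recorded latency samples
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based nearest rank
  return sorted[Math.min(rank, sorted.length) - 1];
}

// Example: latencies in ms for 100 requests (1..100)
const latencies = Array.from({ length: 100 }, (_, i) => i + 1);
const p95 = percentile(latencies, 95);
const meetsTarget = p95 < 200; // compare against the "p95 < 200ms" target
```

Percentiles matter more than averages here: a mean of 80ms can hide a p99 of several seconds, and it is the tail that users notice.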
Capacity planning:
Analyze system capacity and plan for scale:
- Current load: [metrics]
- Projected growth: 10x users in 6 months
- Identify bottlenecks (database, API, queue)
- Calculate resource requirements
- Plan scaling strategy (horizontal vs vertical)
Consider:
- Database read replicas
- API server auto-scaling
- Queue worker scaling
- CDN usage
- Cost implications
When to use microservices: when independent teams, independent scaling, or independent deployment of parts of the system justify the added operational complexity. A well-structured monolith is usually the better starting point.
Microservices pattern with Cursor:
Design a microservices architecture for this application:
1. Identify service boundaries (domain-driven design)
2. Define service interfaces (APIs, events)
3. Set up service communication (REST, gRPC, message queues)
4. Implement service discovery
5. Set up shared infrastructure (logging, monitoring, auth)
6. Create deployment strategy per service
Current monolith: @app/
Services to extract: [list services]
Communication: REST APIs + message queue for async
Service extraction workflow:
Extract this feature into a separate microservice:
1. Identify feature boundaries
2. Create new service repository
3. Move feature code to new service
4. Replace direct calls with API calls
5. Add service-to-service authentication
6. Update deployment pipeline
7. Monitor service independently
Feature: User authentication
New service: auth-service
Communication: REST API with JWT tokens
When to use event-driven: when services need to react to changes without tight coupling, or when real-time updates and audit trails are core requirements.
Event-driven pattern:
Implement event-driven architecture for real-time features:
1. Define event schema (what events exist)
2. Set up event bus/message queue
3. Create event producers (services that emit events)
4. Create event consumers (services that react to events)
5. Implement event persistence (if needed)
6. Add event replay capability (for debugging)
Events needed:
- user.created
- recipe.published
- comment.added
- like.toggled
Use: Server-Sent Events, WebSockets, or message queue
Real example from nextdish.app:
Event sourcing pattern:
Implement event sourcing for this domain:
1. Store all changes as events (not just current state)
2. Rebuild state by replaying events
3. Create event store
4. Implement event handlers
5. Add snapshot support for performance
6. Enable event replay for debugging
Domain: Recipe management
Events: RecipeCreated, RecipeUpdated, RecipeDeleted
Store events in: PostgreSQL or event store database
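Step 2 of the pattern, rebuilding state by replaying events, is the heart of event sourcing. This sketch uses the recipe events named above; the event and state shapes are assumptions for illustration:

```typescript
// Replay events to rebuild current state (shapes are illustrative)
type RecipeEvent =
  | { type: "RecipeCreated"; id: string; title: string }
  | { type: "RecipeUpdated"; id: string; title: string }
  | { type: "RecipeDeleted"; id: string };

type RecipeState = Map<string, { id: string; title: string }>;

function replay(events: RecipeEvent[]): RecipeState {
  const state: RecipeState = new Map();
  for (const e of events) {
    switch (e.type) {
      case "RecipeCreated":
      case "RecipeUpdated":
        state.set(e.id, { id: e.id, title: e.title });
        break;
      case "RecipeDeleted":
        state.delete(e.id);
        break;
    }
  }
  return state;
}
```

Because the full history is kept, you can replay up to any point in time for debugging, and the snapshot support in step 5 is just a cached `RecipeState` so long histories don't have to be replayed from the beginning every time.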
Solution: Create a `.cursorrules` file with your style preferences, reference existing code in prompts ("Follow the patterns in utils/api.ts"), and use Inline Edit (Ctrl+K) to adjust generated code.

Solution: Maintain an ARCHITECTURE.md file that Cursor can read, and reference specific files in prompts ("In src/api/users.ts, add…").

Solution: Keep `.cursorrules` updated with current best practices, and point Cursor at canonical examples ("Match the style of components/Button.tsx").

The unlimited ROI insight: ROI with Cursor is technically unlimited because anyone on the team can now create tools/software to be used by the whole team. This creates a compounding productivity effect.
Real examples of tool creation:
meridian-whisper: Built to eliminate typing entirely, removing the barrier to “yap until you can’t type anymore” prompting. Result: 60-70% typing reduction, 5-10 hours/week saved.
memvid: Created to manage context better in Cursor, improving AI understanding of codebases. Result: 540x memory efficiency (0.37MB vs 200MB target).
Perplexica MCP: Built to enable detailed research from Cursor for free, eliminating “3 searches a day” limitations. Result: Unlimited research capability directly in your IDE.
Automated documentation system: Symlink-based system for managing multi-project documentation. Result: Resume updates went from 4 hours to 30 minutes.
Combined effect: When team members know how to use Cursor and AI IDEs effectively, they can create custom tools that benefit everyone. One person’s workflow improvement becomes the team’s productivity multiplier.
Measuring time savings:
Calculate ROI from Cursor usage:
1. Track time spent on tasks before Cursor (baseline)
2. Track time spent with Cursor assistance
3. Calculate time saved per task
4. Multiply by task frequency
5. Convert to dollar value (hourly rate × hours saved)
Example:
- Feature implementation: 8 hours → 2 hours (6 hours saved)
- Weekly frequency: 2 features
- Weekly savings: 12 hours
- Monthly savings: 48 hours
- Annual savings: 576 hours
- Value (at $50/hour): $28,800/year
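The arithmetic above can be captured as a small helper so you can plug in your own numbers. Note that the example treats a month as 4 weeks and a year as 48 working weeks:

```typescript
// ROI calculation matching the worked example above (4-week months, 48-week years)
function cursorRoi(opts: {
  hoursSavedPerTask: number;
  tasksPerWeek: number;
  hourlyRate: number;
}) {
  const weekly = opts.hoursSavedPerTask * opts.tasksPerWeek;
  const monthly = weekly * 4;
  const annualHours = weekly * 48;
  return { weekly, monthly, annualHours, annualValue: annualHours * opts.hourlyRate };
}

// The example from the text: 6 hours saved per feature, 2 features/week, $50/hour
const roi = cursorRoi({ hoursSavedPerTask: 6, tasksPerWeek: 2, hourlyRate: 50 });
// roi.weekly = 12, roi.monthly = 48, roi.annualHours = 576, roi.annualValue = 28800
```

Running it with different assumptions (e.g. 2 hours saved, 5 tasks/week) is a quick way to sanity-check whether a workflow change is worth the setup cost.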
Real metrics from projects:
meridian-whisper:
savant-insights:
nextdish.app:
What to measure:
How to measure:
Set up productivity tracking:
1. Use git commits to track feature completion
2. Use issue tracking for bug fix time
3. Use test coverage tools for quality metrics
4. Use CI/CD logs for deployment frequency
5. Track onboarding tasks completion time
Create a simple dashboard or spreadsheet
Update weekly
Baseline establishment:
Before implementing Cursor workflows, establish baselines:
- Average feature implementation time
- Average bug fix time
- Test coverage percentage
- Deployment frequency
- Code review time
Track for 2-4 weeks to get accurate baseline
Then compare after Cursor adoption
Cursor is powerful, but it’s not magic. Understanding its limitations helps you use it effectively.
1. Highly domain-specific business logic
2. Very large codebases without structure
3. Real-time debugging of complex systems
1. Security-sensitive code
2. Proprietary algorithms or trade secrets
3. Critical production fixes under time pressure
Use Cursor for (80% of work):
Do manually (20% of work):
Remember: Cursor is a tool, not a replacement for understanding. Use it to amplify your skills, not replace them.
Real-world challenges: These scenarios come up in professional environments. Here’s how to handle them with Cursor.
The challenge: Cursor requires internet for AI features, but you might work in environments with limited connectivity.
Solutions:
1. Use local models (when available):
Configure Cursor to use local AI models:
- Download models for offline use
- Configure fallback to local models
- Cache responses for common queries
Note: Check Cursor documentation for current local model support.
2. Batch your work:
Plan your Cursor usage:
- Queue up prompts when you have internet
- Review and refine when offline
- Use Cursor for code generation when connected
- Use traditional tools for review/editing offline
3. Use Cursor’s codebase indexing:
Index your codebase when online:
- Let Cursor build context about your project
- This helps even with limited connectivity
- Tab completions work better with indexed codebases
4. Hybrid approach:
Combine Cursor with offline tools:
- Use Cursor for initial generation (when online)
- Use traditional IDE features for editing (offline)
- Use Git for version control (works offline)
- Use local linters/formatters (offline)
The challenge: Healthcare, finance, and other regulated industries have strict data handling requirements.
Best practices:
1. Data handling:
When working with sensitive data:
- Never paste PHI, PII, or financial data into Cursor
- Use placeholders: "user_id", "patient_record", etc.
- Redact sensitive information before prompting
- Use environment variables for secrets (never in prompts)
Example:
❌ "Fix this function that processes patient SSN: 123-45-6789"
✅ "Fix this function that processes patient identifiers"
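A simple pre-prompt redaction pass can enforce the placeholder rule mechanically. This is a naive sketch: the two patterns below (US SSN, email) are examples only, and real compliance tooling needs far more thorough detection:

```typescript
// Naive redaction before pasting text into an AI prompt.
// Patterns are illustrative; do NOT rely on this alone for compliance.
function redactForPrompt(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]")
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[REDACTED_EMAIL]");
}
```

Even with a helper like this, the safest habit is the one stated above: describe sensitive fields abstractly ("patient identifiers") rather than pasting real values at all.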
2. Code review for compliance:
Review AI-generated code for compliance:
- Check for hardcoded sensitive data
- Verify encryption is used correctly
- Ensure logging doesn't expose sensitive info
- Validate access controls are in place
- Check for GDPR data handling (right to deletion, etc.)
Compliance requirements: [HIPAA/SOC2/GDPR/etc.]
Code to review: @src/
3. Documentation for audits:
Document AI usage for compliance audits:
- Track which code was AI-assisted
- Document review process for AI code
- Maintain audit trail of changes
- Document security reviews
Template: @docs/ai-usage-log.md
4. Enterprise Cursor features:
Use enterprise Cursor features:
- On-premise deployment (if available)
- Data residency controls
- Enhanced security settings
- Audit logging
Check: Cursor enterprise documentation for current offerings.
The challenge: You can’t share proprietary algorithms or trade secrets with AI tools.
Solutions:
1. Use placeholders and abstractions:
When asking Cursor about proprietary code:
- Replace proprietary logic with placeholders
- Describe the interface, not the implementation
- Use abstract descriptions: "proprietary algorithm" not actual code
- Focus on structure, not secrets
Example:
❌ "Optimize this proprietary ML model: [actual code]"
✅ "Optimize this function that uses a proprietary algorithm.
The algorithm takes inputs X, Y and returns Z.
Focus on the wrapper code, not the algorithm itself."
2. Work at the interface level:
Ask Cursor to work with interfaces:
- Define clear interfaces/contracts
- Let Cursor generate boilerplate around interfaces
- Implement proprietary logic yourself
- Use Cursor for everything else
Proprietary: @src/proprietary/ (don't reference in prompts)
Public interface: @src/public-api/
3. Code review workflow:
Review AI-generated code for proprietary concerns:
- Ensure no proprietary logic leaked
- Check that placeholders weren't accidentally filled
- Verify sensitive comments removed
- Validate that only public APIs are documented
The challenge: Teams need consistent, effective prompts that can be shared and improved.
Solution: Create a prompt library:
1. Organize prompts by category:
Create a prompts/ directory:
- prompts/backend/
  - database-design.md
  - api-creation.md
  - performance-optimization.md
- prompts/frontend/
  - component-creation.md
  - state-management.md
- prompts/testing/
  - unit-tests.md
  - integration-tests.md
- prompts/documentation/
  - api-docs.md
  - tutorials.md
Structure: Category → Task → Prompt template
2. Create prompt templates:
Create reusable prompt templates:
- Variables: [PROJECT_NAME], [LANGUAGE], [FRAMEWORK]
- Sections: Context, Requirements, Constraints, Examples
- Version control: Track improvements over time
Template example:
I’m working on [PROJECT_NAME], a [DESCRIPTION]. I need to [TASK].
Context:
Requirements:
Expected output: [DESCRIPTION]
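Filling the `[VARIABLE]` placeholders can be automated with a few lines, which makes templates easy to reuse in scripts or a team CLI. The function name here is illustrative:

```typescript
// Fill [VARIABLE]-style placeholders in a prompt template
function fillTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\[([A-Z_]+)\]/g, (match, name: string) =>
    name in vars ? vars[name] : match // leave unknown placeholders visible
  );
}

const prompt = fillTemplate(
  "I'm working on [PROJECT_NAME], a [DESCRIPTION]. I need to [TASK].",
  { PROJECT_NAME: "nextdish.app", DESCRIPTION: "recipe platform", TASK: "add caching" }
);
// → "I'm working on nextdish.app, a recipe platform. I need to add caching."
```

Leaving unknown placeholders untouched (rather than silently dropping them) makes it obvious when a template is used with missing variables.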
3. Version and improve prompts:
Maintain prompt library:
- Version prompts (v1, v2, etc.)
- Document what works and what doesn't
- Share successful prompts with team
- Create prompt guidelines
Library: @prompts/
Changelog: @prompts/CHANGELOG.md
4. Team prompt sharing:
Share prompts across team:
- Store in shared repository
- Use in .cursorrules references
- Create prompt search/index
- Regular prompt review sessions
Repository: @team-prompts/
Integration: Reference in .cursorrules
The challenge: Two developers, one Cursor instance—how to collaborate effectively.
Strategies:
1. Driver/navigator with Cursor:
Traditional pair programming roles:
- Driver: Types code, uses Cursor
- Navigator: Reviews, suggests, asks questions
- Cursor: Acts as third "pair" providing suggestions
Workflow:
1. Navigator describes what to build
2. Driver uses Cursor to generate code
3. Both review Cursor's output
4. Navigator suggests improvements
5. Driver refines with Cursor
6. Both test together
2. Screen sharing workflow:
Remote pair programming:
- Share screen with Cursor visible
- Both can see Cursor's suggestions
- Navigator can guide prompts
- Driver executes
- Both discuss Cursor's output
Tools: VS Code Live Share, Tuple, etc.
3. Alternating roles:
Switch roles frequently:
- Person A: Uses Cursor for 15 minutes
- Person B: Reviews, suggests, asks questions
- Switch roles
- Both learn from each other's Cursor usage
4. Cursor as teaching tool:
Use Cursor to teach:
- Experienced dev: Uses Cursor, explains why
- Junior dev: Learns patterns, asks questions
- Cursor: Provides examples and explanations
- Both: Discuss trade-offs and decisions
The challenge: Cursor is great for building, but how do you use it to learn?
Learning strategies:
1. Explain code you don’t understand:
Ask Cursor to teach you:
- "Explain this function line by line"
- "Why was this pattern chosen?"
- "What are the trade-offs of this approach?"
- "How does this compare to [alternative]?"
Code: @src/complex-feature.ts
2. Learn by doing:
Build something to learn:
- "I want to learn React hooks. Build a simple counter component and explain each hook."
- "Teach me about database indexing by creating an example."
- "Show me how to implement authentication step by step."
Learning goal: [what you want to learn]
Complexity: Beginner/Intermediate/Advanced
3. Compare approaches:
Learn different approaches:
- "Show me 3 ways to handle state in React and explain when to use each"
- "Compare REST vs GraphQL for this use case"
- "Explain the difference between these two patterns"
Approaches to compare: [list]
Use case: [describe]
4. Code review as learning:
Review code to learn:
- "Review this code and explain what's good and what could be better"
- "What design patterns are used here?"
- "How would you refactor this and why?"
Code: @src/example.ts
Focus: [specific aspect to learn]
5. Build learning projects:
Create projects to learn:
- "I want to learn microservices. Help me build a simple example with 3 services."
- "Teach me about event-driven architecture by building a small project."
- "I want to understand Docker. Create a project that demonstrates containers."
Topic: [what to learn]
Scope: Small/Medium/Large project
The challenge: Legacy codebases (COBOL, mainframes, old frameworks) that Cursor might not know well.
Strategies:
1. Provide extensive context:
When working with legacy code:
- Provide full context about the system
- Explain the technology stack
- Share documentation if available
- Describe patterns used in the codebase
- Give examples of similar code
Legacy system: [COBOL/mainframe/etc.]
Codebase: @legacy/
Documentation: @docs/legacy-system.md
2. Modernize incrementally:
Plan modernization with Cursor:
- Identify what can be modernized
- Create migration plan
- Use Cursor for new/modern code
- Keep legacy code working during transition
- Test thoroughly at each step
Legacy code: @legacy/
Target: [modern stack]
Migration plan: @docs/migration.md
3. Create documentation:
Document legacy systems:
- "Analyze this legacy codebase and create documentation"
- "Explain how this old system works"
- "Create a migration guide from legacy to modern"
Legacy code: @legacy/
Output: @docs/legacy-documentation.md
4. Bridge patterns:
Create bridges between legacy and modern:
- "Create an API wrapper around this legacy system"
- "Build a modern frontend that talks to legacy backend"
- "Create adapters to integrate legacy with new systems"
Legacy: @legacy/
Modern: @modern/
Integration: @integration/
The challenge: Your product has multiple versions, and docs need to stay accurate for each.
Solutions:
1. Version-specific documentation:
Organize docs by version:
- docs/v1/
- docs/v2/
- docs/v3/
- docs/latest/ (symlink to current)
Use Cursor to:
- Generate version-specific docs
- Update docs when version changes
- Maintain changelogs per version
2. Automated version updates:
Create scripts to update docs for new versions:
- Copy previous version's docs
- Update version numbers
- Highlight breaking changes
- Update code examples for new API
- Generate migration guides
Current version: v2.0
New version: v3.0
Docs: @docs/
3. Version-aware code examples:
Ensure code examples match versions:
- Tag examples with version numbers
- Update examples when APIs change
- Show migration between versions
- Test examples for each version
Examples: @docs/examples/
Versions: v1, v2, v3
4. Changelog generation:
Generate version changelogs:
- Compare code between versions
- Identify new features
- List breaking changes
- Document deprecations
- Create migration guides
From version: v2.0
To version: v3.0
Codebase: @.
5. Documentation deprecation:
Handle deprecated features:
- Mark deprecated sections clearly
- Show migration paths
- Set removal dates
- Archive old version docs
Deprecated features: [list]
Current version: v3.0
Removal version: v4.0
Why accessibility matters: Accessible applications work for everyone, including users with disabilities. It’s also often required by law and improves SEO.
Accessibility audit:
Perform an accessibility audit of this React application:
1. Check for semantic HTML (use proper elements)
2. Verify ARIA labels and roles
3. Test keyboard navigation
4. Check color contrast ratios (WCAG AA minimum)
5. Verify screen reader compatibility
6. Test with accessibility tools (axe, WAVE)
Generate report with:
- Issues found (with severity)
- Files and line numbers
- Recommended fixes
- WCAG compliance level
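Step 4's contrast check is fully mechanical: WCAG 2.1 defines relative luminance and a contrast ratio formula, and AA requires at least 4.5:1 for normal text. Here is that formula as code:

```typescript
// WCAG 2.1 relative luminance of an sRGB color (channels 0-255)
function relativeLuminance([r, g, b]: [number, number, number]): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio (lighter + 0.05) / (darker + 0.05); ranges from 1 to 21
function contrastRatio(
  a: [number, number, number],
  b: [number, number, number]
): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

const blackOnWhite = contrastRatio([0, 0, 0], [255, 255, 255]); // maximum: 21
const passesAA = blackOnWhite >= 4.5; // WCAG AA threshold for normal text
```

Tools like axe and WAVE run this same check across a whole page; the function is handy for validating a design system's palette in a unit test.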
Accessibility implementation:
Make this component accessible:
- Add proper ARIA labels
- Ensure keyboard navigation works
- Add focus indicators
- Check color contrast
- Test with screen reader
- Add skip links if needed
Component: @components/RecipeCard.tsx
Follow WCAG 2.1 AA standards
Real example from nextdish.app:
Form accessibility:
Make this form accessible:
- Associate labels with inputs
- Add error messages with ARIA live regions
- Ensure error messages are announced
- Add required field indicators
- Test with keyboard only
- Test with screen reader
Form: @components/RecipeForm.tsx
Use React Hook Form with proper ARIA attributes
Your next steps:
- Open Chat (Ctrl+L) with a simple question about your codebase
- Try Composer (Ctrl+I) on a small feature
- Create a .cursorrules file for your project
- Use Inline Edit (Ctrl+K) for quick changes
- Share .cursorrules with your team
- Refine .cursorrules based on what works

Cursor isn't about replacing developers; it's about eliminating the friction that slows us down. When used thoughtfully, it lets you focus on solving problems instead of fighting with tools.
The key is strategic use: Let AI handle the repetitive work, you handle the creative and critical thinking.
Start today: Open Cursor, ask it one question about your codebase, and see how it changes your workflow.
About the Author: I'm a developer who started my tech journey in May 2024 and have since built multiple projects, including nextdish.app (a full-stack social recipe platform), meridian-whisper (a cross-platform voice control application), an IQ training app, a CIA World Data visualization map, and a half-finished sci-fi novel written with Cursor's help. What started as a tool to help me learn has become an essential part of my workflow, turning days of debugging into hours and enabling ambitious projects I wouldn't have attempted otherwise. Along the way, I've also created productivity tools like memvid MCP and Perplexica MCP to extend Cursor's capabilities, and built an automated, symlink-based documentation system that saves hours on resume, blog post, and documentation updates. This tutorial distills everything I've learned about using Cursor across different project types, languages, frameworks, and use cases.
Last Updated: January 25th, 2026
Feedback & Contact: Found this helpful? Have suggestions? I’d love to hear from you!