Interview Prep
Interview Questions for Manual Testing — Bug Lifecycles, Test Cases, and What QA Rounds Actually Cover
Manual testing is the most common entry point into IT for freshers in India. Service companies hire thousands of testers every quarter. The technical round is where most fail — not because the questions are hard, but because candidates recite definitions instead of showing practical understanding.

Manual testing roles are the largest hiring category for freshers at Indian IT service companies — TCS, Infosys, Wipro, Cognizant, and HCL combined hire over 50,000 testers annually.
The Manual Testing Interview in India
Manual testing is the entry point for most QA careers in India. Service companies — TCS, Infosys, Wipro, Cognizant, HCL — hire freshers in bulk for testing roles. The pay starts at 3-4 LPA, but the career path is clear: manual testing → automation testing → SDET → test architect. The first step is clearing the interview.
The interview typically has 3 rounds: written aptitude (logical reasoning, basic coding), technical QA (testing concepts, test case writing, bug reporting), and HR. The technical round is where most candidates fail. Not because the questions are difficult — they are predictable. Candidates fail because they give textbook answers without practical examples. When asked "What is regression testing?" they recite a definition. When asked "Write test cases for a login page," they list 3-4 obvious ones instead of the 15+ that interviewers expect.
This guide covers the actual questions asked in manual testing interviews at Indian companies — with the depth and practical examples that interviewers want to hear. Every answer includes what to say and what most candidates get wrong.
The interviewer does not want to hear "Regression testing is re-testing after code changes." They want to hear: "On my last project, we had 1,200 test cases. After a payment module fix, I identified 85 regression cases by tracing the impact through the order and invoice modules using the traceability matrix."
Testing Fundamentals
These four questions appear in every manual testing interview. They test whether you understand the core concepts or just memorized definitions. The difference between a selected and rejected candidate is almost always in how they explain these basics.
Q1: What is the difference between verification and validation?
Why they ask: This is the opening question in 80% of QA interviews. Most candidates mix them up or give vague answers. The interviewer wants a crisp distinction with examples — not a textbook paragraph.
// VERIFICATION — "Are we building the product RIGHT?"
// Static testing — no code execution
// Done BEFORE development or during development
// Reviews documents, designs, code (without running it)
// Examples of Verification:
// - Reviewing the SRS document for completeness
// - Code review / peer review (reading code, not running it)
// - Walkthrough of design documents with stakeholders
// - Inspecting wireframes against requirements
// VALIDATION — "Are we building the RIGHT product?"
// Dynamic testing — requires code execution
// Done AFTER development (or during, with builds)
// Actually running the software and checking behavior
// Examples of Validation:
// - Executing test cases on a login page
// - Running smoke tests on a new build
// - UAT (User Acceptance Testing) by the client
// - Performance testing under load
// Key difference:
// Verification = checking documents/artifacts (no execution)
// Validation = checking the actual software (execution required)
// Interview tip: Give ONE example for each
// "Reviewing the requirements doc is verification.
// Testing the login page against those requirements is validation."
// That one sentence shows you understand the difference.
Q2: What are the different types of testing?
Why they ask: This question checks if you understand the testing pyramid and when each type is used. Listing types is not enough — you need to explain the sequence and purpose of each. Interviewers at service companies specifically want to hear about smoke and sanity testing because those are done daily on projects.
// Testing Types — in the order they happen:
// 1. UNIT TESTING (Developer does this)
// Tests individual functions/methods in isolation
// Example: Testing a calculateDiscount() function with different inputs
// Tool: JUnit, NUnit, pytest
// 2. INTEGRATION TESTING
// Tests how modules work together
// Example: Login module passes user data to Dashboard module
// — does the dashboard load the correct user profile?
// Types: Big Bang, Top-Down, Bottom-Up, Sandwich
// 3. SYSTEM TESTING (QA team does this)
// Tests the complete application end-to-end
// Example: User registers → logs in → adds to cart → pays → gets confirmation
// This is where manual testers spend most of their time
// 4. ACCEPTANCE TESTING (Client/Business does this)
// Alpha testing: Done at developer's site
// Beta testing: Done at client's site
// UAT: Client verifies the software meets business requirements
// 5. REGRESSION TESTING
// Re-running existing test cases after code changes
// When: After every bug fix, new feature, or code merge
// Example: Payment bug fixed → re-test payment + order + invoice modules
// 6. SMOKE TESTING ("Build Verification Testing")
// Quick check — does the build even work?
// 10-15 critical test cases run on every new build
// Example: Can the app launch? Can user log in? Does the main page load?
// If smoke fails → reject the build, send back to dev
// 7. SANITY TESTING
// Focused check after a specific bug fix or minor change
// Example: Login button was broken → fixed → sanity test ONLY login
// Narrower than regression, done quickly before full testing
Q3: What is the difference between black box and white box testing?
Why they ask: This separates candidates who understand testing techniques from those who just run test cases. The interviewer expects you to name specific techniques under each type — not just the definition.
// BLACK BOX TESTING — No knowledge of internal code
// Tester only sees inputs and outputs
// Manual testers primarily do black box testing
// Black Box Techniques:
// 1. Equivalence Partitioning
//    Divide inputs into valid and invalid groups
//    Test ONE value from each group (saves time)
//    Example: Age field accepts 18-60
//    Valid partition: 18-60 (test with 30)
//    Invalid partitions: <18 (test with 10), >60 (test with 70)
// 2. Boundary Value Analysis (BVA)
//    Test at the edges of input ranges
//    Example: Age field 18-60
//    Test: 17, 18, 19, 59, 60, 61
//    Bugs cluster at boundaries — this catches them
// 3. Decision Table Testing
//    For complex business rules with multiple conditions
//    Example: Loan approval based on age + income + credit score
//    Create a table with all condition combinations
// 4. State Transition Testing
//    For features with different states
//    Example: Order status: Placed → Confirmed → Shipped → Delivered
// WHITE BOX TESTING — Full knowledge of internal code
// Tester can see the source code and test internal paths
// White Box Techniques:
// 1. Statement Coverage — every line of code executed at least once
// 2. Branch Coverage — every if/else branch executed
// 3. Path Coverage — every possible execution path tested
// 4. Condition Coverage — every boolean sub-expression tested
// Interview tip: Manual testers do black box testing
// But knowing white box concepts shows depth
// Say: "I primarily do black box, but I understand white box
// concepts which helps me write better test cases."
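Decision table testing from the list above can be sketched in code: enumerate every combination of the conditions and record the outcome for each row. The loan rule below (`approve_loan` and its thresholds) is invented purely for illustration, not taken from any real system.

```python
from itertools import product

# Hypothetical loan-approval rule, invented for illustration:
# approve only when age is 21-60 AND income >= 30000 AND score >= 700.
def approve_loan(age, income, score):
    return 21 <= age <= 60 and income >= 30000 and score >= 700

# Decision table: one representative value for each condition being
# TRUE or FALSE, then every combination of the three conditions.
age_vals = [25, 70]          # in range / out of range
income_vals = [50000, 10000]
score_vals = [750, 600]

table = [
    ((a, i, s), approve_loan(a, i, s))
    for a, i, s in product(age_vals, income_vals, score_vals)
]

# Only the row where all three conditions hold should be approved.
approved = [inputs for inputs, result in table if result]
print(len(table))   # 8 rows (2^3 combinations)
print(approved)     # [(25, 50000, 750)]
```

The table makes it obvious which combination each test case covers, which is exactly what interviewers want you to articulate.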
Q4: What is regression testing and when do you perform it?
Why they ask: Regression testing is what manual testers do most often on real projects. The interviewer wants to know if you understand when to trigger it, how to select test cases, and how it fits into the sprint cycle.
// REGRESSION TESTING — Re-running tests after changes
// When to perform regression testing:
// 1. After a bug fix (the fix might break something else)
// 2. After a new feature is added (new code interacts with existing)
// 3. After a code merge (integration of multiple developer branches)
// 4. After environment changes (OS update, DB migration, server change)
// 5. After configuration changes (API endpoint change, third-party update)
// How to select regression test cases:
// 1. Use the Traceability Matrix
//    Map requirements → test cases → modules
//    When Module A changes, find all test cases linked to Module A
//    ALSO find test cases for modules that depend on Module A
// 2. Risk-based selection
//    High-risk modules (payment, authentication) → always regress
//    Low-risk modules (about page, static content) → skip if time is short
// 3. Priority-based selection
//    P1 (critical): Always include — login, payment, core workflows
//    P2 (high): Include if time permits — reports, notifications
//    P3 (medium): Include in full regression cycles only
//    P4 (low): Skip unless directly affected
// Example answer:
// "After a payment module fix, I would regress:
// - All payment test cases (directly affected)
// - Order module test cases (payment triggers order confirmation)
// - Invoice module test cases (payment generates invoice)
// - Email notification test cases (payment sends receipt email)
// I identify these using the traceability matrix and module dependency map."
// This answer shows practical understanding, not textbook knowledge.
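The traceability-based selection described above amounts to a small graph walk: start at the changed module and collect test cases for it plus everything downstream. The dependency map and test case IDs below are illustrative assumptions, not data from a real project.

```python
# Hypothetical module dependency map: module -> modules that depend on it.
DEPENDENTS = {
    "payment": ["order", "invoice", "email"],
    "order": ["invoice"],
}

# Hypothetical traceability data: module -> linked test case IDs.
TESTS = {
    "payment": ["TC049", "TC050"],
    "order": ["TC030"],
    "invoice": ["TC060"],
    "email": ["TC070"],
}

def regression_set(changed_module):
    """Collect test cases for the changed module and all downstream modules."""
    affected, queue = set(), [changed_module]
    while queue:
        mod = queue.pop()
        if mod in affected:
            continue
        affected.add(mod)
        queue.extend(DEPENDENTS.get(mod, []))
    return sorted(tc for mod in affected for tc in TESTS.get(mod, []))

print(regression_set("payment"))
# ['TC030', 'TC049', 'TC050', 'TC060', 'TC070']
```

This mirrors the example answer: a payment fix pulls in order, invoice, and email test cases automatically because they sit downstream of payment.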
Test Cases & Techniques
Writing test cases is the core skill of a manual tester. Every interview includes a "write test cases for X" question. The login page is the most common — interviewers expect 10-15 test cases minimum, not the 3-4 that most candidates write.
Q1: Write test cases for a login page
Why they ask: This is the single most asked question in manual testing interviews. It tests your ability to think systematically about all scenarios — positive, negative, boundary, security, and usability. Writing only "valid login" and "invalid login" will get you rejected.
TC_ID | Description | Steps | Expected Result | Priority
------|------------------------------------------|------------------------------------------------|----------------------------------------|--------
TC001 | Valid login with correct credentials | Enter valid email + valid password, click Login | Redirect to dashboard, welcome message | P1
TC002 | Invalid password with valid email | Enter valid email + wrong password, click Login | Error: "Invalid credentials" | P1
TC003 | Invalid email with valid password | Enter unregistered email + password, click Login| Error: "Account not found" | P1
TC004 | Empty email field | Leave email blank, enter password, click Login | Validation: "Email is required" | P1
TC005 | Empty password field | Enter email, leave password blank, click Login | Validation: "Password is required" | P1
TC006 | Both fields empty | Click Login without entering anything | Validation messages for both fields | P1
TC007 | SQL injection in email field | Enter ' OR 1=1 -- in email, click Login | Error message, no SQL execution | P1
TC008 | XSS attack in email field | Enter <script>alert('xss')</script> in email | Input sanitized, no script execution | P1
TC009 | Password field masking | Type password in password field | Characters displayed as dots/asterisks | P2
TC010 | Remember me checkbox | Login with Remember Me checked, close browser | Session persists on reopening | P2
TC011 | Forgot password link | Click "Forgot Password" link | Redirect to password reset page | P2
TC012 | Account lockout after failed attempts | Enter wrong password 5 times consecutively | Account locked, lockout message shown | P1
TC013 | Email format validation | Enter "abc" (no @), click Login | Validation: "Enter valid email" | P2
TC014 | Max length for email field | Enter 256+ character email | Input truncated or validation error | P3
TC015 | Max length for password field | Enter 100+ character password | Input truncated or validation error | P3
TC016 | Copy-paste in password field | Try to copy text from password field | Copy disabled for security | P3
TC017 | Login with Caps Lock on | Enter password with Caps Lock, click Login | Warning: "Caps Lock is on" or failure | P2
Interview tip: Start with positive cases, then negative, then security, then usability. This shows structured thinking. Most candidates stop at TC003 — going to TC010+ shows thoroughness that interviewers remember.
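A few of the validation cases above (TC004, TC005, TC013) can be sketched against a stand-in validator. Note that `validate_login_inputs` and its email regex are assumptions made for illustration; the real validation rules live in the application under test, and a manual tester would verify them through the UI.

```python
import re

# Minimal stand-in for the login form's client-side validation.
# The regex is a deliberate simplification, not a full RFC 5322 check.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_login_inputs(email, password):
    """Return the validation error a tester would expect, or None if valid."""
    if not email:
        return "Email is required"      # TC004
    if not password:
        return "Password is required"   # TC005
    if not EMAIL_RE.match(email):
        return "Enter valid email"      # TC013
    return None

cases = [
    ("", "secret123", "Email is required"),
    ("user@mail.com", "", "Password is required"),
    ("abc", "secret123", "Enter valid email"),
    ("user@mail.com", "secret123", None),
]
for email, password, expected in cases:
    assert validate_login_inputs(email, password) == expected
print("all validation cases behave as expected")
```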
Q2: What is boundary value analysis? Give an example.
Why they ask: BVA is the most effective black box technique. Bugs cluster at boundaries — the edges of valid input ranges. Interviewers want to see that you can identify boundary values for any given field, not just recite the definition.
// BOUNDARY VALUE ANALYSIS (BVA)
// Test at the edges of input ranges where bugs are most likely
// Example: Age field that accepts 18 to 60
// Boundary values to test:
// 17 → just below minimum (INVALID — should be rejected)
// 18 → minimum boundary (VALID — should be accepted)
// 19 → just above minimum (VALID — should be accepted)
// 59 → just below maximum (VALID — should be accepted)
// 60 → maximum boundary (VALID — should be accepted)
// 61 → just above maximum (INVALID — should be rejected)
// Why BVA works:
// Developers often write: if (age >= 18 && age <= 60)
// Common bugs:
// if (age > 18) → rejects 18 (off-by-one error)
// if (age < 60) → rejects 60 (off-by-one error)
// if (age >= 18 && age < 60) → rejects 60
// BVA catches these off-by-one errors that equivalence
// partitioning would miss (testing with age=30 would pass all)
// Another example: Password length 8-20 characters
// Test: 7 chars, 8 chars, 9 chars, 19 chars, 20 chars, 21 chars
// Interview tip: Always mention WHY boundaries matter
// "Bugs cluster at boundaries due to off-by-one errors
// in conditional statements. BVA specifically targets these."
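The off-by-one argument above can be demonstrated directly. Both validators below are hypothetical: one implements the 18-60 rule correctly, the other has the `>` instead of `>=` bug the answer describes. Running the six BVA values through each shows which values expose the bug.

```python
# Hypothetical correct validator for an age field accepting 18-60.
def age_valid(age):
    return 18 <= age <= 60

# Hypothetical buggy variant: `>` instead of `>=` rejects exactly 18.
def age_valid_buggy(age):
    return 18 < age <= 60

BOUNDARY_VALUES = [17, 18, 19, 59, 60, 61]
EXPECTED = [False, True, True, True, True, False]

correct_results = [age_valid(v) for v in BOUNDARY_VALUES]
buggy_results = [age_valid_buggy(v) for v in BOUNDARY_VALUES]

print(correct_results == EXPECTED)  # True: correct code passes all six
print(buggy_results == EXPECTED)    # False: BVA flags the off-by-one at 18

# A mid-partition value like 30 passes BOTH versions, which is why
# equivalence partitioning alone would miss this bug.
print(age_valid(30) == age_valid_buggy(30))  # True
```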
Q3: What is equivalence partitioning? How is it different from BVA?
Why they ask: Equivalence partitioning and BVA are complementary techniques. Interviewers want to see that you understand both and know when to use each. The key insight: EP reduces the number of test cases, BVA focuses on where bugs hide.
// EQUIVALENCE PARTITIONING (EP)
// Divide input domain into classes where behavior is the same
// Test ONE value from each class (any value represents the whole class)
// Example: Age field accepts 18-60
// Partition 1: age < 18 (invalid) → test with 10
// Partition 2: 18 <= age <= 60 (valid) → test with 35
// Partition 3: age > 60 (invalid) → test with 75
// Only 3 test cases needed instead of testing every age value
// Assumption: if 35 works, 25 and 45 will also work (same partition)
// EP vs BVA — Key Differences:
// EP: Tests ANY value from each partition (middle values)
// BVA: Tests values AT the boundaries (edge values)
// EP: Reduces total test cases (efficiency)
// BVA: Finds boundary bugs (effectiveness)
// In practice, use BOTH together:
// EP gives you the partitions: <18, 18-60, >60
// BVA gives you the specific values: 17, 18, 19, 59, 60, 61
// Combined test set for age field:
// From EP: 10 (invalid low), 35 (valid), 75 (invalid high)
// From BVA: 17, 18, 19, 59, 60, 61
// Total: 9 test cases that cover all scenarios efficiently
// Interview tip: Always mention using them together
// "I use EP to identify partitions and reduce test count,
// then BVA to test boundaries within those partitions.
// Together they give maximum coverage with minimum cases."
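The combined test set described above can be generated mechanically: take the EP representatives, derive the six BVA values from the range boundaries, and deduplicate. This is a small sketch of the idea using the same 18-60 age field.

```python
# Build a combined EP + BVA test set for a numeric field with range
# [low, high]. ep_values are the chosen partition representatives.
def combined_test_values(ep_values, low, high):
    bva = [low - 1, low, low + 1, high - 1, high, high + 1]
    return sorted(set(ep_values + bva))

# Age field 18-60, EP representatives 10 / 35 / 75 as in the text.
values = combined_test_values([10, 35, 75], 18, 60)
print(values)       # [10, 17, 18, 19, 35, 59, 60, 61, 75]
print(len(values))  # 9 test values, matching the worked example
```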
Practice These Questions in a Mock Interview
Reading answers is not the same as explaining them under pressure. Practice with timed mock interviews that simulate the actual QA technical round — test case writing, bug lifecycle explanations, and scenario-based questions with follow-ups.
Bug lifecycle management and JIRA workflows are tested in every QA interview — knowing the states and transitions is non-negotiable.
Bug Lifecycle & JIRA
Bug reporting is the other half of a manual tester's job. Writing test cases finds bugs — writing good bug reports gets them fixed. Every interviewer asks about the bug lifecycle and expects you to know JIRA workflows.
Q1: Explain the bug lifecycle
Why they ask: The bug lifecycle shows you understand the workflow between QA and development. Interviewers want to see that you know all states — including the less obvious ones like Rejected, Deferred, and Reopened. Drawing this on a whiteboard is a common ask.
// BUG LIFECYCLE — Complete Flow
// Main Path:
// New → Assigned → Open → Fixed → Retest → Verified → Closed
// Step-by-step:
// 1. NEW: Tester logs the bug in JIRA
// 2. ASSIGNED: Test lead/manager assigns it to a developer
// 3. OPEN: Developer acknowledges and starts working on it
// 4. FIXED: Developer fixes the bug, moves to Fixed
// 5. RETEST: Tester retests the fix on the new build
// 6. VERIFIED: Fix confirmed working, tester verifies
// 7. CLOSED: Bug is resolved, no further action needed
// Alternative Paths:
// REJECTED: Developer says "not a bug" or "works as designed"
// → Tester reviews → if valid concern, escalate to lead
// → if developer is right, close with "Not a Bug" resolution
// DEFERRED: Bug is valid but will be fixed in a later release
// → Low priority bugs or bugs in features being redesigned
// → Stays in backlog, tracked for future sprints
// REOPENED: Tester retests and the bug still exists
// → Fix did not work, or fix caused a new issue
// → Goes back to OPEN state, developer re-investigates
// DUPLICATE: Same bug already reported by another tester
// → Link to original bug, close as Duplicate
// NOT REPRODUCIBLE: Bug cannot be reproduced consistently
// → Add environment details, try different conditions
// → If still not reproducible after 3 attempts, close with notes
// Interview tip: Draw this as a flowchart on the whiteboard
// Show the main path AND the alternative paths
// Mention REOPENED specifically — it shows real project experience
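The lifecycle above is a state machine, and sketching it as one is a good way to internalize the transitions. The state names below follow the flow described in the answer; real JIRA workflows are configurable, so the exact names and transitions on a project may differ.

```python
# Allowed transitions, taken from the lifecycle described above.
# A real JIRA workflow is configurable; this is an illustrative sketch.
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Rejected", "Deferred", "Duplicate", "Not Reproducible"},
    "Fixed": {"Retest"},
    "Retest": {"Verified", "Reopened"},
    "Verified": {"Closed"},
    "Reopened": {"Open"},   # failed fix sends the bug back to the developer
}

def can_move(current, target):
    """Check whether a bug may move from one state to another."""
    return target in TRANSITIONS.get(current, set())

print(can_move("Retest", "Reopened"))  # True: the fix did not work
print(can_move("New", "Closed"))       # False: cannot skip the workflow
```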
Q2: What is the difference between severity and priority?
Why they ask: This is a trick question because most candidates think high severity always means high priority. The interviewer specifically wants examples where they differ — that shows you understand real project decision-making.
// SEVERITY — Technical impact of the bug (set by TESTER)
// How badly does the bug affect the system?
// PRIORITY — Business urgency to fix (set by MANAGER/PO)
// How soon does this need to be fixed?
// They are NOT the same. Examples:
// HIGH SEVERITY + HIGH PRIORITY
// Payment gateway crashes when user clicks "Pay Now"
// Impact: Users cannot complete purchases (revenue loss)
// Fix: Immediately — this blocks the core business flow
// HIGH SEVERITY + LOW PRIORITY
// Application crashes when user uploads a 500MB file
// Impact: Severe (crash), but 99% of users upload <10MB files
// Fix: Next sprint — affects very few users in practice
// LOW SEVERITY + HIGH PRIORITY
// Company logo on homepage is the old version
// Impact: Minor (cosmetic), but CEO noticed and wants it fixed today
// Fix: Immediately — business stakeholder demands it
// LOW SEVERITY + LOW PRIORITY
// Tooltip text has a minor grammatical error on settings page
// Impact: Cosmetic, rarely seen by users
// Fix: Whenever convenient — backlog item
// Severity Levels:
// S1 - Critical: System crash, data loss, security breach
// S2 - Major: Major feature broken, no workaround
// S3 - Minor: Feature works but with issues, workaround exists
// S4 - Cosmetic: UI issues, typos, alignment problems
// Priority Levels:
// P1 - Urgent: Fix immediately (hotfix)
// P2 - High: Fix in current sprint
// P3 - Medium: Fix in next sprint
// P4 - Low: Fix when time permits (backlog)
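The point that priority, not severity, drives fix order can be shown with a tiny triage sketch: sort a bug queue by priority first, with severity only as a tie-breaker. The bug data below is illustrative, reusing the examples from the answer.

```python
# Illustrative bug queue, reusing the examples above.
bugs = [
    {"id": "BUG-1", "severity": "S1", "priority": "P2"},  # crash on 500MB upload
    {"id": "BUG-2", "severity": "S4", "priority": "P1"},  # outdated logo, escalated
    {"id": "BUG-3", "severity": "S1", "priority": "P1"},  # payment gateway crash
]

# Triage order: priority first, severity as tie-breaker.
# "P1" < "P2" and "S1" < "S4" lexicographically, so plain sort works here.
queue = sorted(bugs, key=lambda b: (b["priority"], b["severity"]))
print([b["id"] for b in queue])  # ['BUG-3', 'BUG-2', 'BUG-1']
```

Note that the cosmetic logo bug (S4/P1) gets fixed before the severe upload crash (S1/P2), which is exactly the counter-intuitive case interviewers probe for.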
Q3: How do you write a good bug report?
Why they ask: A poorly written bug report wastes everyone's time. The developer cannot reproduce it, the manager cannot prioritize it, and the tester has to explain it in a meeting. Interviewers want to see that you can write a bug report that stands on its own.
// BUG REPORT TEMPLATE — What a good bug report looks like:
Bug ID: BUG-1042
Title: Payment fails with "Network Error" when using UPI on Android
Severity: S1 - Critical
Priority: P1 - Urgent
Module: Payment Gateway
Environment: Android 13, Chrome 120, Production server
Build: v2.4.1 (Build #892)
Reported By: [Tester Name]
Assigned To: [Developer Name]
Date: [Date]
// STEPS TO REPRODUCE (most important section):
1. Login with valid credentials
2. Add any product to cart
3. Proceed to checkout
4. Select "UPI" as payment method
5. Enter valid UPI ID (e.g., name@paytm)
6. Click "Pay Now"
// EXPECTED RESULT:
UPI payment request sent to user's UPI app for approval
// ACTUAL RESULT:
Error message: "Network Error — Please try again"
Payment does not process. No request sent to UPI app.
Console shows: 500 Internal Server Error on /api/payment/upi
// ADDITIONAL INFO:
- Works fine with Credit Card and Net Banking
- Reproducible 100% of the time with UPI
- Tested on 3 different UPI IDs — same error
- Was working in Build #890 (broke in #891 or #892)
- Screenshot attached: error_screenshot.png
- Network log attached: payment_network_log.har
// WHAT MAKES THIS A GOOD BUG REPORT:
// 1. Clear title (what + where + when)
// 2. Exact steps to reproduce (anyone can follow)
// 3. Expected vs Actual result (clear gap)
// 4. Environment details (OS, browser, build)
// 5. Additional context (when it broke, what still works)
// 6. Attachments (screenshots, logs)
SDLC & STLC
SDLC and STLC questions test whether you understand where testing fits in the overall software development process. Service companies ask these in every interview because their projects follow structured lifecycles — Waterfall for legacy clients, Agile for newer ones.
Q1: Explain the STLC phases
Why they ask: STLC (Software Testing Life Cycle) defines the tester's workflow from start to finish. Interviewers want to see that you understand each phase and what deliverables come out of it — not just the phase names.
// STLC — Software Testing Life Cycle (6 Phases)
// Phase 1: REQUIREMENT ANALYSIS
// Input: SRS document, business requirements
// Activity: Review requirements for testability
// - Which requirements can be tested?
// - Which need clarification from the BA/client?
// - Identify automation candidates early
// Output: Requirement Traceability Matrix (RTM) — initial draft
// Who: Test Lead + Senior Testers
// Phase 2: TEST PLANNING
// Input: Requirements, project plan, resource availability
// Activity: Create the test plan document
// - Scope: what to test and what NOT to test
// - Test strategy: manual vs automation, tools to use
// - Resource allocation: who tests which module
// - Schedule: timelines for each testing phase
// - Risk assessment: what could go wrong
// Output: Test Plan document
// Who: Test Lead / Test Manager
// Phase 3: TEST CASE DEVELOPMENT
// Input: Requirements, test plan
// Activity: Write test cases and test scripts
// - Write detailed test cases with steps and expected results
// - Create test data (valid, invalid, boundary values)
// - Review test cases with peers (test case review)
// Output: Test cases, test data, test scripts
// Who: Test Engineers
// Phase 4: ENVIRONMENT SETUP
// Input: Environment requirements from test plan
// Activity: Set up test environment
// - Install required software, configure servers
// - Set up test databases with test data
// - Verify environment with smoke tests
// Output: Ready test environment, smoke test results
// Who: DevOps / Environment team + Testers
// Phase 5: TEST EXECUTION
// Input: Test cases, test environment, build
// Activity: Execute test cases and log results
// - Run test cases, mark Pass/Fail
// - Log bugs for failed test cases in JIRA
// - Retest fixed bugs
// - Run regression tests after fixes
// Output: Test execution report, bug reports, updated RTM
// Who: Test Engineers
// Phase 6: TEST CLOSURE
// Input: Test execution results, project completion criteria
// Activity: Wrap up testing activities
// - Evaluate exit criteria (all P1/P2 bugs fixed?)
// - Create test summary report
// - Lessons learned meeting
// - Archive test artifacts
// Output: Test closure report, metrics, lessons learned
// Who: Test Lead + entire QA team
Q2: What is a test plan vs test strategy?
Why they ask: Most candidates use these terms interchangeably — they are not the same. The test strategy is organization-level (applies to all projects), the test plan is project-specific. Getting this distinction right shows you have worked on real projects.
// TEST STRATEGY — Organization-level document
// Created by: Test Manager / QA Head
// Scope: Applies to ALL projects in the organization
// Changes: Rarely updated (once a year or less)
// Test Strategy covers:
// - Testing standards and processes for the company
// - Tools approved for use (JIRA, Selenium, etc.)
// - Defect management process
// - Test environment standards
// - Metrics and reporting standards
// - Automation framework standards
// TEST PLAN — Project-level document
// Created by: Test Lead for each project
// Scope: Applies to ONE specific project
// Changes: Updated as project evolves
// Test Plan covers:
// - Scope: Features to test and features NOT to test
// - Test approach: Which types of testing for this project
// - Resources: Team members and their responsibilities
// - Schedule: Start date, end date, milestones
// - Entry criteria: When to start testing
//   (e.g., build deployed, smoke test passed)
// - Exit criteria: When to stop testing
//   (e.g., all P1 bugs fixed, 95% test cases executed)
// - Risks: What could delay testing
// - Deliverables: What documents QA will produce
// Key difference:
// Strategy = HOW the organization does testing (general)
// Plan = HOW this specific project will be tested (specific)
// Example: Strategy says "Use JIRA for all defect tracking"
// Plan says "For Project X, bugs will be logged in JIRA
// project key PROJX, assigned to dev lead within 4 hours"
Q3: What is a traceability matrix and why is it important?
Why they ask: The Requirements Traceability Matrix (RTM) is how you prove that every requirement has been tested. Service companies use it to show clients that testing is complete. Interviewers want to see that you understand its structure and purpose.
// REQUIREMENTS TRACEABILITY MATRIX (RTM)
// Purpose: Map every requirement to its test cases
// Ensures: No requirement is untested (complete coverage)
// Used by: Test lead to track coverage, shown to client
// RTM Structure:
Req_ID | Requirement Description   | TC_IDs      | Status      | Defects
-------|---------------------------|-------------|-------------|--------
REQ001 | User can login with email | TC001-TC017 | Tested      | BUG-042
REQ002 | User can reset password   | TC018-TC025 | Tested      | None
REQ003 | User can update profile   | TC026-TC035 | In Progress | BUG-051
REQ004 | Admin can manage users    | TC036-TC048 | Not Started | —
REQ005 | Payment via UPI           | TC049-TC062 | Tested      | BUG-078
// Types of Traceability:
// Forward: Requirements → Test Cases (are all requirements covered?)
// Backward: Test Cases → Requirements (is every test case linked to a requirement?)
// Bi-directional: Both ways (complete traceability)
// Why RTM is important:
// 1. Coverage proof: Shows client that every requirement is tested
// 2. Impact analysis: When REQ003 changes, you know TC026-TC035 need updating
// 3. Regression selection: When Module A changes, RTM shows which TCs to re-run
// 4. Gap identification: If a requirement has 0 test cases, it is untested
// 5. Status tracking: Quick view of testing progress per requirement
// Interview tip: Mention that you UPDATE the RTM throughout the project
// It is not a one-time document — it evolves as requirements change
// "I update the RTM after every sprint to reflect new test cases
// and changed requirements. It is the single source of truth
// for test coverage on the project."
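Two of the RTM uses listed above, gap identification and impact analysis, fall out of the data structure almost for free. The minimal RTM below is a dict keyed by requirement ID; the requirement and test case IDs are illustrative.

```python
# A minimal RTM as a dict: requirement ID -> linked test case IDs.
# Data is illustrative, not from a real project.
rtm = {
    "REQ001": ["TC001", "TC002", "TC003"],
    "REQ002": ["TC018", "TC019"],
    "REQ003": [],   # no test cases yet: a coverage gap
}

# Gap identification: requirements with zero linked test cases.
gaps = [req for req, tcs in rtm.items() if not tcs]
print(gaps)  # ['REQ003']

# Impact analysis: when REQ002 changes, these test cases need review.
print(rtm["REQ002"])  # ['TC018', 'TC019']
```

Even on projects where the RTM lives in a spreadsheet, the same two lookups (empty rows, and rows for a changed requirement) are what the test lead actually runs.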
Scenario Questions
Scenario questions are the hardest part of the QA interview because there is no single correct answer. The interviewer is testing your judgment, communication skills, and ability to handle real project situations. These questions separate experienced testers from freshers who only know theory.
Q1: You found a critical bug on the day before release. What do you do?
Why they ask: This tests your understanding of release processes and stakeholder communication. The wrong answer is "I would stop the release." You do not have that authority. The right answer shows process awareness and escalation skills.
// SCENARIO: Critical bug found 1 day before release
// Step 1: VERIFY the bug thoroughly
// - Reproduce it 3 times to confirm it is consistent
// - Test on multiple browsers/devices
// - Check if it exists in the previous build (is it new or old?)
// - Document exact steps, screenshots, and logs
// Step 2: LOG the bug immediately in JIRA
// - Severity: S1 (Critical)
// - Priority: P1 (Urgent)
// - Include all reproduction steps and evidence
// - Tag the build number and environment
// Step 3: ESCALATE to Test Lead and Project Manager
// - Do NOT just log it and wait — actively notify
// - Send a message/email: "Critical bug found in [module],
//   blocks [functionality], details in BUG-XXXX"
// - Include impact assessment: "Users cannot complete payment"
// Step 4: ATTEND the triage meeting (or request one)
// - Present the bug with evidence
// - The decision to release or delay is NOT yours
// - It is made by: PM + Dev Lead + Test Lead + Product Owner
// Possible outcomes:
// A) Fix and release: Dev fixes it, QA retests + quick regression
// B) Release with known issue: Bug documented, workaround provided
// C) Delay release: If the bug is a showstopper with no workaround
// D) Partial release: Release other features, hold the affected module
// What NOT to say in the interview:
// "I would stop the release" — you don't have that authority
// "I would tell the developer to fix it" — not your call
// "I would ignore it if it's minor" — S1 bugs are never minor
// What TO say:
// "I would verify, document, escalate, and let the stakeholders
// make an informed decision with complete information from QA."
Q2: The developer says "it works on my machine." How do you handle this?
Why they ask: This is the most common QA-developer conflict. The interviewer wants to see that you can handle disagreements professionally — with evidence, not arguments. This question tests your soft skills as much as your technical skills.
// SCENARIO: Developer says "Works on my machine"
// This happens on EVERY project. The key: respond with data, not emotion.
// Step 1: COMPARE environments
// Your machine vs developer's machine:
// - OS version (Windows 10 vs Windows 11?)
// - Browser version (Chrome 120 vs Chrome 118?)
// - Database version (MySQL 8.0 vs 8.1?)
// - API endpoint (staging vs local?)
// - Test data (different user roles? different data sets?)
// Step 2: REPRODUCE on a neutral environment
// - Test on the QA/staging server (not your local, not dev's local)
// - If it fails on staging → it is a real bug
// - If it passes on staging → it might be environment-specific
// Step 3: PROVIDE evidence
// - Share screenshots with timestamps
// - Share browser console logs (F12 → Console)
// - Share network logs (F12 → Network tab → export HAR file)
// - Share exact steps with exact test data used
// - Record a screen video of the bug occurring
// Step 4: COLLABORATE, don't argue
// - "Can we look at this together on the staging server?"
// - "Here are my environment details — can we compare?"
// - "I have a screen recording — can I walk you through it?"
// Step 5: ESCALATE if needed (professionally)
// - If developer still refuses, involve the Test Lead
// - The Test Lead and Dev Lead can review together
// - Never make it personal — keep it about the bug
// What NOT to say:
// "You're wrong, it's definitely a bug"
// "Just fix it, I've already logged it"
// What TO say:
// "I can reproduce it consistently on staging. Here's the
// screen recording and network logs. Can we debug together?"
Q3: You have 500 test cases but only 2 days. How do you prioritize?
Why they ask: Time pressure is reality on every project. The interviewer wants to see a systematic prioritization approach — not "I would work overtime." This tests your ability to make risk-based decisions.
// SCENARIO: 500 test cases, 2 days, cannot run all

// Step 1: CATEGORIZE by priority
// P1 (Critical): ~50 test cases — MUST run all
//    Login, payment, core business workflows, security
// P2 (High): ~100 test cases — run as many as possible
//    Major features, integrations, data processing
// P3 (Medium): ~200 test cases — run if time permits
//    Secondary features, edge cases, UI validations
// P4 (Low): ~150 test cases — skip this cycle
//    Cosmetic, tooltips, rarely used features

// Step 2: APPLY risk-based testing
// High risk + high usage = test first
//    Payment module (high risk, used by every customer)
// High risk + low usage = test second
//    Admin bulk delete (high risk, used rarely)
// Low risk + high usage = test third
//    Search functionality (low risk, used frequently)
// Low risk + low usage = skip
//    Settings page color theme (low risk, rarely changed)

// Step 3: COMMUNICATE the plan
// Email to Test Lead and PM:
// "We have 500 TCs and 2 days. My plan:
//  Day 1: All 50 P1 cases + 80 P2 cases (risk-based selection)
//  Day 2: Remaining 20 P2 cases + top 50 P3 cases
//  Skipping: 150 P3 + 150 P4 cases
//  Risk: [list untested areas and their impact]
//  Need approval to proceed with this reduced scope."

// Step 4: DOCUMENT what was not tested
// The test execution report must clearly state:
// - What was tested (with results)
// - What was NOT tested (with reason: time constraint)
// - Risk assessment for untested areas
// This protects you and the team if a bug is found later

// What NOT to say: "I would work overtime to finish all 500"
// What TO say: "I would prioritize based on risk and business
//  impact, communicate the plan, and document the gaps."
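The risk × usage matrix in Step 2 is just a scoring rule, and saying so in the interview shows you understand it rather than memorized it. A small Python sketch — the test cases and weights are made up for illustration:

```python
# Risk-based prioritization sketch: score = risk * usage, highest first.
# Test cases and weights are illustrative, not from a real project.

RISK = {"high": 3, "medium": 2, "low": 1}
USAGE = {"high": 3, "medium": 2, "low": 1}

test_cases = [
    {"name": "Payment checkout",     "risk": "high", "usage": "high"},
    {"name": "Admin bulk delete",    "risk": "high", "usage": "low"},
    {"name": "Product search",       "risk": "low",  "usage": "high"},
    {"name": "Settings color theme", "risk": "low",  "usage": "low"},
]

def score(tc):
    return RISK[tc["risk"]] * USAGE[tc["usage"]]

# Tie-break on risk so high-risk/low-usage ranks above
# low-risk/high-usage, matching the matrix above.
ordered = sorted(test_cases,
                 key=lambda tc: (score(tc), RISK[tc["risk"]]),
                 reverse=True)
for tc in ordered:
    print(score(tc), tc["name"])
```

Run it and "Payment checkout" (score 9) comes first, "Settings color theme" (score 1) last — the same order the matrix dictates.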
How to Prepare — By Company Type
The depth of QA knowledge tested varies dramatically by company type. A TCS interview and a startup interview ask fundamentally different questions. Here is what each expects and how long to prepare:
Service Companies — TCS, Infosys, Wipro, Cognizant, HCL
Preparation time: 1 week. These companies hire in bulk and ask standardized questions. Focus on SDLC/STLC phases, testing types (know all 7), test case writing (practice login page, ATM, and e-commerce cart), bug lifecycle (draw the flowchart from memory), and the difference between severity and priority. They rarely ask scenario questions for freshers — it is mostly definition-based with a "write test cases for X" practical question.
Most asked: STLC phases, verification vs validation, write test cases for a login page, explain the bug lifecycle, what is regression testing. Nail these five and you clear the technical round at any service company.
Product Companies — Flipkart, Swiggy, Razorpay, Freshworks
Preparation time: 2 weeks. Product companies go deeper. Beyond fundamentals, they test scenario-based questions (critical bug before release, developer disagreements, prioritization under time pressure), API testing basics (what is a REST API, status codes, how to test an API without UI), SQL queries (SELECT, JOIN, WHERE — testers need to verify data in databases), and exploratory testing approaches.
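When product companies say "basic SQL for data verification," they mean queries like the one below: check that the number the UI shows matches what is actually in the database. A self-contained sketch using an in-memory SQLite database with made-up tables — on a real project you would run the same kind of query against the staging database:

```python
# Sketch of the SQL a tester runs to verify data behind the UI.
# Tables and rows are invented for illustration; a real project
# queries the application's staging database instead.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER,
                         amount REAL, status TEXT);
    INSERT INTO users  VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (101, 1, 499.0, 'PAID'),
                              (102, 1, 250.0, 'FAILED'),
                              (103, 2, 120.0, 'PAID');
""")

# Verify the UI's "paid orders per user" report against the database:
rows = conn.execute("""
    SELECT u.name, COUNT(o.id)
    FROM users u
    JOIN orders o ON o.user_id = u.id
    WHERE o.status = 'PAID'
    GROUP BY u.name
    ORDER BY u.name
""").fetchall()
print(rows)  # -> [('Asha', 1), ('Ravi', 1)]
```

If the UI report shows Asha with two paid orders but this query returns one, you have a concrete, evidence-backed bug — which is exactly the kind of answer product-company interviewers are listening for.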
Most asked: Scenario questions, test case design for complex features (not just login — think payment flows, search with filters, multi-step forms), API testing concepts, basic SQL for data verification. They want problem-solvers, not definition-reciters.
Startups — Early-stage to Series B
Preparation time: 2 weeks. Startups want testers who can do everything. Add automation awareness (Selenium basics — you do not need to be an expert, but know what it does), Agile/Scrum terminology (sprints, standups, retrospectives, user stories, acceptance criteria), exploratory testing (how to test without detailed requirements), and basic performance testing concepts (what is load testing, when do you need it).
Most asked: How do you test a feature with no documentation? What is your approach to exploratory testing? How do you decide what to automate vs test manually? Describe your experience with Agile. Startups value adaptability and breadth over deep specialization in one area.
The candidate who writes 15 test cases for a login page, draws the bug lifecycle on the whiteboard without hesitation, and answers scenario questions with "on my project, I handled it by..." gets the offer. The one who recites ISTQB definitions does not.
Manual testing interviews in India are predictable — the same 15-20 questions appear across companies. The difference between selected and rejected candidates is not knowledge, it is depth. Anyone can say "regression testing is re-testing after changes." The candidate who explains how they selected 85 regression cases from 1,200 using the traceability matrix and risk-based prioritization — that candidate gets hired. Practice writing test cases until you can produce 15+ for any feature in 10 minutes. Practice explaining the bug lifecycle until you can draw it from memory. Practice scenario answers until they sound like real project experiences, not rehearsed scripts.
Prepare for Your QA Interview
Practice with AI-powered mock interviews that simulate the actual QA technical round, get your resume ATS-ready, and walk into your next testing interview with confidence.
Free · AI-powered · Instant feedback
Related Reading
Interview Prep
Interview Questions for DevOps
CI/CD, Docker, Kubernetes, and what DevOps rounds actually cover
14 min read
Interview Prep
Interview Questions for Freshers
HR rounds, aptitude, and what first-time job seekers get asked
10 min read
Resume Guide
Software Developer Resume — India
Build a resume that highlights your technical and QA skills
11 min read