🕐 20 min read | 📂 QA Career | ✍️ Testionix Team | 📅 Updated April 2026
Whether you are a fresher preparing for your first QA role or an experienced tester going for a senior position — these are the exact questions hiring managers ask today, with real answers that will actually impress them.
- Basic Software Testing Questions (Freshers)
- Manual Testing Interview Questions
- Automation Testing Questions (Selenium & Playwright)
- API Testing Interview Questions
- Performance Testing Questions (JMeter)
- Agile & SDLC Testing Questions
- Tricky Scenario-Based Questions
- How to Answer QA Interview Questions Like a Pro
- Final Checklist Before Your Interview
Part 1: Basic Software Testing Questions (Freshers)
Q1. What is software testing and why is it important?
Model Answer: Software testing is the process of evaluating a software application to verify that it behaves as expected and to identify any defects before it reaches end users. It is important because bugs found in production cost significantly more to fix than bugs caught during development — studies show the cost multiplies 6 to 100 times. Testing also protects the company’s reputation, ensures user satisfaction, and prevents data loss or security breaches.
Q2. What is the difference between QA, QC, and Testing?
Model Answer: These three are often confused but are distinct. QA (Quality Assurance) is a proactive process focused on preventing defects by improving the development process itself — it’s about standards and processes. QC (Quality Control) is reactive — it involves examining the product to identify defects after it is built. Testing is a specific activity within QC where the software is actually executed to find defects. In simple terms: QA prevents bugs, QC detects bugs, testing executes to find bugs.
Q3. What is STLC (Software Testing Life Cycle)?
Model Answer: STLC is a sequence of activities performed during the testing process. The phases are: Requirement Analysis (understand what needs to be tested), Test Planning (define scope, tools, timeline, resources), Test Case Design (write test cases and test data), Test Environment Setup (prepare staging servers, devices), Test Execution (run tests, log defects), and Test Closure (prepare test reports, lessons learned). Each phase has defined entry and exit criteria.
Q4. What is the difference between Smoke Testing and Sanity Testing?
| Aspect | Smoke Testing | Sanity Testing |
|---|---|---|
| Purpose | Check if build is stable enough to test | Check specific functionality after a fix |
| Scope | Broad — covers critical features | Narrow — focused on the changed area |
| When run | After every new build | After a bug fix or small change |
| Documentation | Usually scripted | Often unscripted / exploratory |
| Done by | Dev or QA team | QA team |
Q5. What is Regression Testing and when do you perform it?
Model Answer: Regression testing verifies that new code changes have not broken existing functionality. You perform it after: any bug fix, addition of a new feature, code refactoring, or environment changes. It is one of the most critical testing types in Agile development because new code is merged continuously. Regression suites are ideal candidates for automation because they run repeatedly on the same features.
Q6. What is the difference between Severity and Priority?
Model Answer: Severity describes the technical impact of a bug on the system — how badly it breaks things. Priority describes the business urgency of fixing it. A classic example of high priority but low severity: a spelling mistake in the company name on the homepage. It doesn’t break any functionality (low severity) but it’s embarrassing and must be fixed immediately (high priority). The reverse — high severity, low priority — could be a crash in an admin panel rarely used by anyone.
Q7. What should a good test case include?
Model Answer: A well-written test case contains: Test Case ID, Module name, Test description (one sentence), Pre-conditions (what must be true before testing), Test steps (numbered and specific), Test data (exact values to use), Expected result (what should happen), Actual result (filled during execution), Status (Pass/Fail/Blocked), and Priority. The most common mistake is writing vague expected results like “it should work” — always write the exact expected outcome.
🧪 Looking to hire a QA engineer with all these skills?
Testionix provides expert QA testing services — manual, automation, API, and performance testing. Starting at $10/hr with flexible contracts.
Get a Free Quote →
Part 2: Manual Testing Interview Questions
Q8. What is Exploratory Testing?
Model Answer: Exploratory testing is a simultaneous process of learning, test design, and test execution. Unlike scripted testing, the tester has no pre-written test cases — they explore the application using intuition, experience, and creativity. It is most effective for finding bugs that scripted tests miss because real users don’t follow scripts. It’s particularly valuable for new features, usability issues, and edge cases. Good exploratory testers use “charters” — a time-boxed goal like “explore the checkout flow for 45 minutes and focus on payment failure scenarios.”
Q9. What is Boundary Value Analysis (BVA)?
Model Answer: BVA is a test case design technique that focuses on values at the boundaries of input ranges, where bugs most commonly occur. If a field accepts values from 1 to 100, the boundary values to test are: 0 (just below minimum), 1 (minimum), 2 (just above minimum), 99 (just below maximum), 100 (maximum), and 101 (just above maximum). You don’t need to test every value in between — bugs cluster at boundaries. This is one of the most efficient techniques for reducing the number of test cases while maintaining high coverage.
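The six BVA inputs for any inclusive range can be generated mechanically. As a quick sketch (a hypothetical helper, not part of any framework):

```javascript
// Generate the six classic boundary-value test inputs for an inclusive range:
// just below min, min, just above min, just below max, max, just above max.
function boundaryValues(min, max) {
  return [min - 1, min, min + 1, max - 1, max, max + 1];
}

// For a field accepting 1–100, these are the inputs worth testing first.
console.log(boundaryValues(1, 100)); // → [0, 1, 2, 99, 100, 101]
```

In an interview, walking through why each of the six values matters (off-by-one errors in `<` vs `<=` comparisons) shows you understand the technique rather than just the definition.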
Q10. What is Equivalence Partitioning?
Model Answer: Equivalence Partitioning divides input data into partitions where all values in a partition are expected to behave the same way. Instead of testing every possible input, you test one representative value from each partition. For example, if an age field accepts 18–60: one test with age 30 (valid partition) is sufficient to represent all valid ages. You also test one invalid value below 18 and one above 60. This dramatically reduces the number of test cases needed.
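The age-field example can be shown concretely. Here is a minimal sketch with a hypothetical validator standing in for the feature under test:

```javascript
// Hypothetical validator for an age field that accepts 18–60 inclusive.
function isValidAge(age) {
  return Number.isInteger(age) && age >= 18 && age <= 60;
}

// One representative value per partition is enough:
const partitions = [
  { value: 10, expected: false }, // invalid partition: below 18
  { value: 30, expected: true },  // valid partition: 18–60
  { value: 75, expected: false }, // invalid partition: above 60
];

for (const { value, expected } of partitions) {
  console.log(`age ${value}: ${isValidAge(value) === expected ? 'PASS' : 'FAIL'}`);
}
```

Three test cases cover what would otherwise take dozens — and combining this with BVA (testing 17, 18, 60, 61 as well) gives strong coverage with minimal cases.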
Q11. How do you write a bug report that developers actually appreciate?
Model Answer: A developer-friendly bug report has five elements: (1) Specific title that answers what broke, where, and under what condition. (2) Numbered reproduction steps starting from a clean state — precise enough that anyone can reproduce it. (3) Expected vs Actual — two clear lines with no ambiguity. (4) Environment details — browser, OS, device, app version, and test environment URL. (5) Evidence — screenshot or screen recording attached. I always record a short video for complex bugs. A good bug report gets fixed faster because the developer doesn’t need to ask follow-up questions.
Q12. What is the defect life cycle?
Model Answer: The defect life cycle tracks a bug from discovery to closure: New → tester logs the bug. Assigned → bug manager assigns to a developer. Open → developer starts investigating. Fixed → developer resolves and marks as fixed. Retest → tester verifies the fix. Closed → fix confirmed. Reopened → if fix didn’t work. Other possible states include Deferred (fixing postponed), Rejected (not a bug), and Duplicate (already reported).
Q13. What is the difference between Black Box, White Box, and Grey Box Testing?
Model Answer: Black Box Testing — tester has no knowledge of internal code. Tests are based on requirements and expected outputs. Most manual QA testing is black box. White Box Testing — tester has full visibility into the source code and tests internal logic, code paths, and branches. Typically done by developers (unit tests). Grey Box Testing — tester has partial knowledge of internal structure, enough to design better tests. API testing is a good example — you know the data structures but not the full implementation.
Part 3: Automation Testing Questions (Selenium & Playwright)
Q14. When should you automate a test and when should you keep it manual?
Model Answer: Automate when: the test runs frequently (regression suite), the test data is predictable and stable, the test involves repetitive steps across many data sets, or the test is for high-risk functionality where you need fast feedback. Keep manual when: the test is being run for the first time, the UI changes frequently making automation brittle, the test requires human judgment (usability, visual appeal), or it’s exploratory testing. A good rule of thumb — if you’ll run it more than 5 times, it’s worth automating.
Q15. What is Playwright and why is it better than Selenium in 2025?
Model Answer: Playwright is Microsoft’s open-source automation framework for end-to-end testing. In 2025, it has overtaken Selenium in popularity for several reasons: it has built-in auto-waiting (no manual waits needed), drives Chromium, Firefox, and WebKit (the engine behind Safari) with one script, runs significantly faster, has built-in video recording and tracing on failure, and has much simpler setup. Selenium still has advantages in language support breadth and legacy project compatibility, but for new projects, Playwright is the recommended choice.
Q16. What is the Page Object Model (POM)?
Model Answer: POM is a design pattern that creates a separate class (object) for each page of the application. Each class contains the locators and methods for that page. Test scripts then use these page objects rather than directly interacting with locators. The benefit: when a locator changes, you update it in one place (the page object) rather than hunting through dozens of test files. It makes tests more readable, maintainable, and reusable. It’s the most widely used pattern in professional automation frameworks.
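A minimal sketch of the pattern (the URL and locators are hypothetical, following the login example used elsewhere in this article; the class wraps whatever `page` object your framework provides, shown here in Playwright's style):

```javascript
// Page object for a hypothetical login page: locators live here, not in tests.
class LoginPage {
  constructor(page) {
    this.page = page;
  }

  async goto() {
    await this.page.goto('https://yourapp.com/login');
  }

  async login(email, password) {
    await this.page.getByLabel('Email').fill(email);
    await this.page.getByLabel('Password').fill(password);
    await this.page.getByRole('button', { name: 'Log In' }).click();
  }
}

// Tests then read cleanly and never touch locators directly:
//   const loginPage = new LoginPage(page);
//   await loginPage.goto();
//   await loginPage.login('testuser@example.com', 'SecurePass123!');
```

If the "Log In" button's markup changes, only `LoginPage` needs updating — every test that uses it keeps working.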
Q17. Write a simple Playwright test for a login page.
const { test, expect } = require('@playwright/test');
test('Successful login with valid credentials', async ({ page }) => {
// Navigate to login page
await page.goto('https://yourapp.com/login');
// Fill credentials using best-practice locators
await page.getByLabel('Email').fill('testuser@example.com');
await page.getByLabel('Password').fill('SecurePass123!');
// Click login button
await page.getByRole('button', { name: 'Log In' }).click();
// Assert successful login
await expect(page).toHaveURL(/.*dashboard/);
await expect(page.getByRole('heading', { name: 'Welcome' })).toBeVisible();
});
Key points to mention in the interview: Use getByLabel() and getByRole() locators — they’re more stable than CSS selectors. Always include assertions. Playwright’s auto-waiting means you don’t need explicit waits between steps.
Q18. What is a flaky test and how do you fix it?
Model Answer: A flaky test is one that passes and fails intermittently without any code changes. It’s one of the biggest problems in automation because it erodes trust in the test suite. Common causes and fixes: (1) Timing issues — replace hardcoded sleep() calls with proper auto-waiting. (2) Fragile locators — switch from CSS classes to data-testid attributes or role-based locators. (3) Test data conflicts — each test should create its own test data and clean up after itself. (4) Environment instability — run tests against a dedicated, stable staging environment.
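The timing fix in point (1) can be illustrated outside any framework. The principle is to poll for a condition instead of sleeping for a fixed interval — a fixed sleep is either too short (flaky) or too long (slow). This is a hypothetical helper; Playwright's `expect()` assertions do this retrying for you:

```javascript
// Poll a condition until it's true or the timeout expires — the core idea
// behind auto-waiting. A hardcoded sleep(3000) either wastes 3 seconds
// or fails when the app takes 3.1 seconds.
async function waitFor(condition, { timeout = 5000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (await condition()) return true;
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
  throw new Error(`Condition not met within ${timeout} ms`);
}

// Example: wait until a (simulated) async operation flips a flag.
let ready = false;
setTimeout(() => { ready = true; }, 250);
waitFor(() => ready).then(() => console.log('condition met'));
```

Mentioning that you understand *why* auto-waiting eliminates flakiness — rather than just that "Playwright handles it" — is a strong interview signal.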
Q19. What is CI/CD and how does testing fit into it?
Model Answer: CI/CD stands for Continuous Integration and Continuous Delivery. In CI, developers merge code to a shared repository frequently, triggering automated builds and tests. In CD, that code is automatically deployed to staging or production after tests pass. Testing fits in at multiple stages: unit tests run on every commit, integration tests run on every pull request, E2E tests (like Playwright) run before merging to main, and smoke tests run after every deployment. The goal is to catch bugs as early and automatically as possible — before any human has to click deploy.
🤖 Need Playwright automation for your project?
We build complete Playwright automation suites with CI/CD integration. Available from $15/hr — used by startups and enterprise teams alike.
See Automation Services →
Part 4: API Testing Interview Questions
Q20. What is API testing and why is it important?
Model Answer: API testing validates the application programming interface directly — sending HTTP requests and verifying the responses — without going through the UI. It’s important because APIs are the backbone of modern applications. Bugs found at the API layer are faster to reproduce, easier to isolate, and cheaper to fix than UI bugs. API tests are also faster and more stable than UI tests, making them ideal for regression testing. Tools commonly used include Postman, REST Assured, and Playwright’s built-in API testing support.
Q21. What HTTP status codes must every API tester know?
| Status Code | Meaning | When to expect it |
|---|---|---|
| 200 OK | Success | GET request returns data |
| 201 Created | Resource created | Successful POST (new user, new order) |
| 204 No Content | Success, no body | Successful DELETE |
| 400 Bad Request | Invalid request data | Missing required field, wrong format |
| 401 Unauthorized | Not authenticated | Missing or invalid token |
| 403 Forbidden | Authenticated but no permission | Regular user accessing admin endpoint |
| 404 Not Found | Resource doesn’t exist | Wrong ID or deleted record |
| 409 Conflict | State conflict | Duplicate email on registration |
| 422 Unprocessable Entity | Validation failed | Valid JSON but invalid values |
| 500 Internal Error | Server-side crash | Unhandled exception on the server |
Q22. What do you test in a REST API endpoint?
Model Answer: For every REST endpoint, I test: (1) Happy path — valid request returns correct data and correct status code. (2) Required field validation — remove each required field and verify 400 error. (3) Data type validation — send wrong data types (string where integer expected). (4) Authentication — test without token (401), with expired token (401), and with insufficient permissions (403). (5) Boundary values — string length limits, numeric ranges. (6) Response schema — all expected fields are present with correct data types. (7) Error messages — verify error responses are clear and don’t leak internal details.
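Check (6), response schema validation, can be sketched as a plain helper (the field names and schema here are hypothetical):

```javascript
// Verify a response body contains every expected field with the right type.
// Returns a list of problems; an empty list means the schema check passed.
function checkSchema(body, schema) {
  const problems = [];
  for (const [field, expectedType] of Object.entries(schema)) {
    if (!(field in body)) {
      problems.push(`missing field: ${field}`);
    } else if (typeof body[field] !== expectedType) {
      problems.push(`${field}: expected ${expectedType}, got ${typeof body[field]}`);
    }
  }
  return problems;
}

// Hypothetical /users/:id response schema:
const userSchema = { id: 'number', email: 'string', isActive: 'boolean' };

console.log(checkSchema({ id: 7, email: 'a@b.com', isActive: true }, userSchema)); // → []
console.log(checkSchema({ id: '7', email: 'a@b.com' }, userSchema));
// → ['id: expected number, got string', 'missing field: isActive']
```

In practice you would use a schema library (e.g. JSON Schema validation in Postman or a similar tool), but being able to explain the check at this level shows you understand what the tooling is doing.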
Q23. What is the difference between PUT and PATCH?
Model Answer: PUT replaces the entire resource. If you send a PUT request with only a name field, all other fields get overwritten with nulls or defaults. PATCH partially updates a resource — only the fields you send get updated, everything else stays unchanged. In testing: for PUT, always send the complete resource body. For PATCH, test that partial updates work correctly and that untouched fields retain their original values.
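The difference is easy to demonstrate with a plain-object simulation (a conceptual sketch of the semantics, not a real HTTP client):

```javascript
// PUT semantics: the request body becomes the entire new resource.
function applyPut(_resource, body) {
  return { ...body };
}

// PATCH semantics: only the supplied fields are updated; the rest survive.
function applyPatch(resource, body) {
  return { ...resource, ...body };
}

const user = { id: 1, name: 'Asha', email: 'asha@example.com', role: 'admin' };

console.log(applyPut(user, { name: 'Asha K' }));
// → { name: 'Asha K' } — email and role are gone
console.log(applyPatch(user, { name: 'Asha K' }));
// → { id: 1, name: 'Asha K', email: 'asha@example.com', role: 'admin' }
```

A good test case flowing from this: send a PATCH with one field, then GET the resource and assert every other field still holds its original value.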
Part 5: Performance Testing Questions (JMeter)
Q24. What is the difference between Load Testing, Stress Testing, and Spike Testing?
Model Answer: Load Testing — tests system behaviour under expected normal and peak load. Goal: verify the system meets performance requirements under anticipated traffic. Stress Testing — pushes the system beyond its limits to find the breaking point. Goal: identify what fails first and how gracefully the system degrades. Spike Testing — suddenly increases load from normal to extreme and back. Goal: test how the system responds to sudden traffic surges, like a flash sale or viral event. All three use tools like JMeter to simulate concurrent virtual users.
Q25. What key metrics do you analyse in a performance test report?
- Response Time — average, 90th percentile (P90), and 95th percentile (P95). P90 means 90% of requests completed within this time; P95 better reflects what your slowest users actually experience.
- Throughput — requests per second the system handles. Higher is better.
- Error Rate — percentage of requests that failed. Anything above 1% under normal load is a red flag.
- Concurrent Users — how many virtual users were active during the test.
- CPU & Memory Usage — server resource consumption during the load. Helps identify memory leaks.
- Apdex Score — a standard metric measuring user satisfaction based on response times. Above 0.85 is generally acceptable.
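P90 and P95 can be computed from raw response times in a few lines. This is a simple nearest-rank sketch with made-up sample data — JMeter and most reporting tools calculate this for you:

```javascript
// Nearest-rank percentile: the value below which p% of samples fall.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// 20 response times in ms from a hypothetical load test (note one 900 ms outlier):
const times = [120, 130, 125, 140, 135, 150, 145, 160, 155, 170,
               165, 180, 175, 190, 185, 200, 240, 210, 900, 220];

console.log(`P90: ${percentile(times, 90)} ms`); // → P90: 220 ms
console.log(`P95: ${percentile(times, 95)} ms`); // → P95: 240 ms
```

Notice that the 900 ms outlier only surfaces at P100 (the maximum) — which is exactly why reports show percentiles rather than just the average or the max.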
Q26. What is JMeter and what have you used it for?
Model Answer: Apache JMeter is an open-source performance testing tool used to simulate multiple concurrent users hitting an application. I have used JMeter to: create Thread Groups simulating hundreds of concurrent users, record HTTP requests to test web and API endpoints, set up assertions to verify correct responses under load, and generate HTML reports showing response times, throughput, and error rates. JMeter results helped me identify a database bottleneck in a travel booking platform that was causing the checkout page to time out when more than 200 concurrent users were active.
Part 6: Agile & SDLC Testing Questions
Q27. How does testing work in an Agile environment?
Model Answer: In Agile, testing is continuous rather than a final phase. QA is involved from the first day of each sprint — reviewing user stories during sprint planning to identify ambiguities and testability issues. Test cases are written during development, not after. Testing happens in parallel with development within the same sprint. Regression suites run automatically on every build. At the end of each sprint, the tested increment is demonstrated to stakeholders. The key mindset shift: in Agile, quality is everyone’s responsibility — developers write unit tests, QA writes integration and E2E tests, and the team does not move to the next sprint leaving known bugs unfixed.
Q28. What is shift-left testing?
Model Answer: Shift-left means involving QA earlier in the software development lifecycle — moving testing to the left on the project timeline. Instead of testing only at the end, QA reviews requirements at the start, identifies missing acceptance criteria, participates in design discussions, and starts writing test cases before development completes. The benefit is significant cost reduction — a requirement ambiguity caught before coding is 100x cheaper to fix than a bug found in production. Shift-left is a mindset, not just a process.
Q29. What is a Test Plan and what should it contain?
Model Answer: A test plan is a document that defines the overall approach to testing for a project. It contains: Test objectives, Scope (in scope and out of scope), Testing types to be performed, Test environment requirements, Tools and resources needed, Timeline and milestones, Entry criteria (what must be true before testing starts), Exit criteria (what must be true before testing ends), Risk analysis and mitigation, and Deliverables. In Agile, test plans are often lighter and evolve sprint by sprint, but the key elements remain the same.
Part 7: Tricky Scenario-Based Questions
Q30. You find a critical bug one hour before release. What do you do?
Model Answer: First, I verify the bug is genuinely reproducible and understand its full impact — does it affect all users or a subset? Is there a workaround? Then I immediately escalate to the project manager and lead developer with a clear bug report, not just a verbal mention. I provide my recommendation (hold or release) with a risk assessment: what is the probability of a user hitting this bug? What is the impact if they do? The decision to release is a business decision, not mine alone — but my job is to give decision-makers accurate, complete information quickly so they can decide with full context. I never hide bugs because of release pressure.
Q31. How would you test a login page? (Classic interview question)
Model Answer: I approach this systematically across multiple categories. Functional: Valid credentials → successful login; invalid password → error message; invalid email format → validation error; empty fields → required field error. Security: SQL injection in both fields; XSS in the input fields; check the password is masked; verify HTTPS is used; check the session token is invalidated on logout. Boundary: Maximum length email; maximum length password; minimum length password. UX: Does “Remember me” work? Does “Forgot password” work? Does pressing Enter submit the form? Lockout: Do five failed attempts trigger a lockout? Is a lockout message shown? Does it unlock after the stated time?
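The functional cases above translate naturally into a data-driven table. The validator below is a stand-in simulating the behaviour under test (credentials and messages are hypothetical); in a real suite, each row would drive one Playwright test:

```javascript
// Stand-in login validator, simulating the behaviour under test.
function attemptLogin(email, password) {
  if (!email || !password) return 'Required field missing';
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) return 'Invalid email format';
  if (email === 'testuser@example.com' && password === 'SecurePass123!') return 'OK';
  return 'Invalid credentials';
}

// One row per functional scenario from the answer above.
const cases = [
  { email: 'testuser@example.com', password: 'SecurePass123!', expected: 'OK' },
  { email: 'testuser@example.com', password: 'wrongpass',      expected: 'Invalid credentials' },
  { email: 'not-an-email',         password: 'SecurePass123!', expected: 'Invalid email format' },
  { email: '',                     password: '',               expected: 'Required field missing' },
];

for (const c of cases) {
  const result = attemptLogin(c.email, c.password);
  console.log(`${c.email || '(empty)'} → ${result === c.expected ? 'PASS' : 'FAIL'}`);
}
```

Presenting your answer as a structured table of scenarios, rather than a stream of ideas, is itself something interviewers score.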
Q32. A developer says “that’s not a bug, it’s working as designed.” How do you respond?
Model Answer: I go back to the requirements documentation and find the specification for that behaviour. If the documentation supports my expectation and disagrees with the developer’s implementation, I present that evidence calmly — it’s a documentation discussion, not a personal one. If the requirement is genuinely ambiguous, I escalate to the product owner or business analyst to get a definitive answer. I never argue from opinion — I argue from documented requirements. If the product owner confirms the developer is right, I update my test case and document the accepted behaviour. The goal is correctness, not winning arguments.
Q33. How do you decide which test cases to automate first?
Model Answer: I prioritise using three criteria: Frequency — how often will this test run? Regression tests that run every sprint are top priority. Risk — what is the business impact if this feature breaks? Checkout, login, and payment flows are always automated first. Stability — is the feature stable enough that the UI won’t change every sprint? Automating a UI that’s still under active design will waste time on maintenance. I start with the “golden path” — the happy-path flows that must work for the business to function — and expand coverage from there.
Q34. What is your process when you receive a new feature to test?
Model Answer: My process: (1) Read the requirements and user stories thoroughly. (2) Ask clarifying questions for anything ambiguous — before writing a single test case. (3) Identify the acceptance criteria and confirm them with the product owner. (4) Map out the test scenarios covering positive, negative, boundary, and edge cases. (5) Write test cases in our test management tool. (6) Get test cases reviewed if possible. (7) Set up test data. (8) Execute tests and log all defects with full documentation. (9) Retest fixed bugs. (10) Sign off with a test summary report. The most impactful step is #2 — catching ambiguities before development starts saves everyone time.
Part 8: How to Answer QA Interview Questions Like a Pro
Knowing the right answers is only half the battle. How you deliver them determines whether you get the offer. Here are the techniques that separate candidates who get hired from those who don’t:
- Always use the STAR method for behavioural questions — Situation, Task, Action, Result. “Tell me about a difficult bug you found” should have a specific story, not a generic answer.
- Ground every answer in a real example — “In my previous role, when testing a WooCommerce platform, I used BVA to test the discount field and found a rounding error that was causing incorrect totals.” Real examples are memorable and credible.
- Don’t say “I don’t know” — say “I haven’t used that tool but I’m familiar with the concept and have used a similar one.” Show learning agility, not a knowledge gap.
- Ask good questions — at the end, ask “What does the QA workflow look like in your team?” and “What are the biggest quality challenges you’re facing right now?” These show genuine interest and give you valuable information.
- Be honest about automation skills — don’t claim to be a Playwright expert if you’ve only used it a few times. Interviewers test this and honesty builds trust.
- Show ownership — the best QA engineers talk about quality as their responsibility, not just a list of tasks. “I pushed back on the timeline because the checkout flow had an untested payment failure scenario” is far more impressive than “I executed the assigned test cases.”
The best QA engineers don’t just know testing theory — they think like detectives, communicate like consultants, and care about quality like it personally belongs to them. That combination of skills and mindset is what interviewers are really looking for.
— Testionix Team, Ahmedabad
Final Checklist Before Your Interview
- ✅ Know STLC, SDLC, and the defect lifecycle cold — these are asked in almost every interview
- ✅ Be ready to write a test case live — have a simple template memorised
- ✅ Know the difference between Smoke, Sanity, Regression, and UAT testing
- ✅ Prepare a real example of a critical bug you found and how you reported it
- ✅ Know at least one automation tool (Playwright or Selenium) with hands-on experience
- ✅ Know basic HTTP status codes for API testing (200, 201, 400, 401, 403, 404, 500)
- ✅ Be ready to answer “How do you test a login page?” with a structured, thorough answer
- ✅ Research the company before the interview — understand their product and tech stack
- ✅ Prepare 2–3 questions to ask the interviewer about their QA process
- ✅ Be honest about what you know and don’t know — integrity matters more than perfection
🏆 Hire QA engineers who already know all of this
Testionix provides experienced QA engineers for web and mobile application testing — manual, Playwright automation, API, and performance testing. No onboarding headaches, no long-term commitments. Flexible contracts from $10/hr.