QA/SDET Theory & Behavioral Interview Questions¶
A comprehensive collection of 60 theory and behavioral interview questions for QA Engineers and Software Development Engineers in Test (SDET).
Table of Contents¶
- Part 1: Software Development & Testing Lifecycles (Q1-8)
- Part 2: Defect Management (Q9-16)
- Part 3: Types of Testing (Q17-28)
- Part 4: Test Design & Documentation (Q29-36)
- Part 5: API Testing Concepts (Q37-44)
- Part 6: Testing Principles & Best Practices (Q45-50)
- Part 7: Behavioral & Situational Questions (Q51-60)
- Interview Tips
Part 1: Software Development & Testing Lifecycles¶
Q1. What is SDLC? Explain its phases.¶
Answer:
SDLC (Software Development Life Cycle) is a systematic process for planning, creating, testing, and deploying software applications. It provides a structured approach to software development.
Phases of SDLC:
flowchart LR
A[1. Requirement<br>Gathering] --> B[2. Planning/<br>Analysis]
B --> C[3. Design]
C --> D[4. Development]
D --> E[5. Testing]
E --> F[6. Deployment]
F --> G[7. Maintenance]
G -.-> A
| Phase | Description | Key Activities |
|---|---|---|
| 1. Requirement Gathering | Collect and document stakeholder needs | Interviews, surveys, BRS/SRS creation |
| 2. Planning/Analysis | Analyze feasibility and plan resources | Cost estimation, risk assessment |
| 3. Design | Create system architecture | HLD, LLD, database design, UI mockups |
| 4. Development | Write and build the code | Coding, code reviews, version control |
| 5. Testing | Verify and validate the software | Test execution, defect logging, fixes |
| 6. Deployment | Release to production | Installation, user training |
| 7. Maintenance | Ongoing support and updates | Bug fixes, enhancements, monitoring |
Q2. What is STLC? Explain its phases.¶
Answer:
STLC (Software Testing Life Cycle) is a sequence of specific activities conducted during the testing process to ensure software quality goals are met. It defines what testing activities to perform and when.
Phases of STLC:
flowchart LR
A[1. Requirement<br>Analysis] --> B[2. Test<br>Planning]
B --> C[3. Test Case<br>Design]
C --> D[4. Environment<br>Setup]
D --> E[5. Test<br>Execution]
E --> F[6. Test Cycle<br>Closure]
| Phase | Entry Criteria | Activities | Exit Criteria | Deliverables |
|---|---|---|---|---|
| 1. Requirement Analysis | Requirements documents available | Identify testable requirements, analyze feasibility | RTM created, questions clarified | RTM, Automation feasibility report |
| 2. Test Planning | Requirements signed off | Define scope, estimate effort, identify resources | Test plan approved | Test Plan, Effort estimation |
| 3. Test Case Design | Requirements and test plan ready | Write test cases, create test data, review | Test cases reviewed and approved | Test cases, Test data, Test scripts |
| 4. Environment Setup | Test plan and design ready | Setup hardware/software, configure test environment | Environment ready with smoke test | Environment ready, Smoke test results |
| 5. Test Execution | Test cases ready, environment setup | Execute tests, log defects, retest | All tests executed, defects tracked | Test results, Defect reports |
| 6. Test Cycle Closure | Testing complete, defects closed | Generate reports, document learnings | Sign-off from stakeholders | Test closure report, Metrics |
Q3. What is the difference between SDLC and STLC?¶
Answer:
| Aspect | SDLC | STLC |
|---|---|---|
| Full Form | Software Development Life Cycle | Software Testing Life Cycle |
| Focus | Entire software development process | Testing activities specifically |
| Scope | Broader - covers planning to maintenance | Narrower - covers test planning to closure |
| Goal | Deliver quality software product | Ensure software meets quality standards |
| Phases | 6-7 phases including development | 6 phases focused on testing |
| Participants | Developers, analysts, testers, managers | Primarily testers and QA team |
| Output | Working software application | Test artifacts, defect reports, quality metrics |
| Relationship | Superset; contains STLC within its testing phase | Subset of SDLC; executed as part of the testing phase |
Key Point
STLC is executed within the testing phase of SDLC, but modern practices integrate testing throughout all SDLC phases.
Q4. What is Agile methodology?¶
Answer:
Agile is an iterative and incremental approach to software development that emphasizes flexibility, collaboration, customer feedback, and rapid delivery of working software.
Core Values (Agile Manifesto):
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
Key Principles:
- Deliver working software frequently (weeks, not months)
- Welcome changing requirements, even late in development
- Business and developers work together daily
- Build projects around motivated individuals
- Face-to-face conversation is the most effective communication
- Working software is the primary measure of progress
- Sustainable development pace
- Continuous attention to technical excellence
- Simplicity - maximize work not done
- Self-organizing teams
- Regular reflection and adaptation
Common Agile Frameworks:
- Scrum
- Kanban
- XP (Extreme Programming)
- SAFe (Scaled Agile Framework)
- Lean
Q5. Explain Scrum framework and its ceremonies.¶
Answer:
Scrum is an Agile framework for developing, delivering, and sustaining complex products through iterative cycles called Sprints.
Scrum Roles:
| Role | Responsibilities |
|---|---|
| Product Owner | Defines features, prioritizes backlog, represents stakeholders |
| Scrum Master | Facilitates ceremonies, removes impediments, coaches team |
| Development Team | Cross-functional team that builds the product (5-9 members) |
Scrum Artifacts:
- Product Backlog: Prioritized list of all desired features
- Sprint Backlog: Items selected for the current sprint
- Increment: Potentially shippable product at sprint end
Scrum Ceremonies (Events):
| Ceremony | Duration | Purpose | Participants |
|---|---|---|---|
| Sprint Planning | 2-4 hours | Select items for sprint, define sprint goal | Entire Scrum team |
| Daily Standup | 15 minutes | Sync progress, identify blockers | Development team |
| Sprint Review | 1-2 hours | Demo completed work to stakeholders | Team + stakeholders |
| Sprint Retrospective | 1-2 hours | Reflect on process, identify improvements | Scrum team |
| Backlog Refinement | Ongoing | Clarify and estimate upcoming items | PO + team |
Sprint Cycle:
flowchart LR
subgraph Sprint["Sprint (2-4 weeks)"]
A[Sprint<br>Planning] --> B[Daily Work +<br>Standups]
B --> C[Sprint Review +<br>Retrospective]
end
C -.->|Next Sprint| A
Q6. What is the difference between Waterfall and Agile?¶
Answer:
| Aspect | Waterfall | Agile |
|---|---|---|
| Approach | Sequential, linear | Iterative, incremental |
| Phases | Complete one phase before next | Phases overlap in iterations |
| Requirements | Fixed at the beginning | Evolve throughout project |
| Customer Involvement | Beginning and end only | Continuous throughout |
| Delivery | Single delivery at end | Frequent incremental deliveries |
| Testing | After development phase | Continuous, integrated |
| Documentation | Extensive upfront | Minimal, just enough |
| Change Handling | Difficult and costly | Expected and welcomed |
| Team Structure | Specialized roles, silos | Cross-functional, collaborative |
| Risk | High - issues found late | Lower - early feedback |
| Best For | Stable requirements, regulated industries | Evolving requirements, fast-paced projects |
When to use Waterfall:
- Requirements are well-understood and stable
- Project has strict regulatory requirements
- Fixed budget and timeline constraints
When to use Agile:
- Requirements are unclear or likely to change
- Quick time-to-market is important
- Customer feedback is valuable
Q7. What is DevOps and CI/CD?¶
Answer: DevOps:
DevOps is a set of practices that combines software development (Dev) and IT operations (Ops) to shorten the development lifecycle and deliver high-quality software continuously.
Key DevOps Practices:
- Continuous Integration (CI)
- Continuous Delivery/Deployment (CD)
- Infrastructure as Code (IaC)
- Monitoring and Logging
- Collaboration and Communication
CI/CD Pipeline:
flowchart LR
subgraph CI["Continuous Integration"]
A[Code Commit] --> B[Build]
A1[Version Control<br>Git] -.-> A
B1[Compile<br>Package<br>Artifacts] -.-> B
end
subgraph CD["Continuous Delivery"]
B --> C[Test]
C --> D[Deploy]
C1[Unit Tests<br>Integration<br>E2E Tests] -.-> C
D1[Staging<br>Production] -.-> D
end
| Term | Definition |
|---|---|
| Continuous Integration (CI) | Automatically build and test code changes frequently (multiple times/day) |
| Continuous Delivery (CD) | Automatically prepare code for release to production (manual approval) |
| Continuous Deployment | Automatically deploy every change that passes tests to production |
Popular CI/CD Tools:
- Jenkins, GitHub Actions, GitLab CI
- CircleCI, Travis CI, Azure DevOps
- AWS CodePipeline, Google Cloud Build
Q8. What is the role of QA in Agile teams?¶
Answer: In Agile teams, QA is integrated throughout the development process rather than being a separate phase at the end.
Key Responsibilities:
| Activity | Description |
|---|---|
| Sprint Planning | Help estimate testing effort, identify testable acceptance criteria |
| Story Grooming | Clarify requirements, ask questions, define edge cases |
| Test Design | Write test cases during development, not after |
| Continuous Testing | Test features as they're developed, provide quick feedback |
| Automation | Build and maintain automated test suites |
| Defect Management | Log, track, and verify bug fixes within the sprint |
| Collaboration | Work closely with developers, participate in code reviews |
| Demo Participation | Help demonstrate completed features in sprint review |
| Retrospectives | Contribute to process improvements |
Shift-Left Approach:
flowchart LR
subgraph Traditional
T1[Requirements] --> T2[Design] --> T3[Development] --> T4[Testing] --> T5[Deployment]
T4 -.- QA1[QA here]
end
flowchart LR
subgraph Agile
A1[Requirements] --> A2[Design] --> A3[Development] --> A4[Testing] --> A5[Deployment]
QA2[QA involved throughout] -.- A1 & A2 & A3 & A4
end
QA in Agile vs Traditional:
| Traditional QA | Agile QA |
|---|---|
| Testing phase at end | Testing throughout sprint |
| Separate QA team | Embedded in dev team |
| Extensive test documentation | Just enough documentation |
| Focus on finding defects | Focus on preventing defects |
| Gatekeeper role | Collaborative partner |
| Manual testing heavy | Automation-focused |
Part 2: Defect Management¶
Q9. What is the Defect Life Cycle?¶
Answer:
The Defect Life Cycle (Bug Life Cycle) is the journey of a defect from its discovery to its closure. It defines the various states a bug goes through during its lifetime.
Defect Life Cycle Diagram:
flowchart TD
NEW[NEW] --> REJECTED[REJECTED]
NEW --> DUPLICATE[DUPLICATE]
NEW --> ASSIGNED[ASSIGNED]
ASSIGNED --> OPEN[OPEN]
OPEN --> DEFERRED[DEFERRED]
OPEN --> FIXED[FIXED]
OPEN --> NOTABUG[NOT A BUG]
FIXED --> RETEST[PENDING RETEST]
RETEST --> REOPEN[REOPEN]
RETEST --> VERIFIED[VERIFIED]
REOPEN --> OPEN
VERIFIED --> CLOSED[CLOSED]
Defect States Explained:
| State | Description | Who Changes |
|---|---|---|
| New | Defect logged for the first time | Tester |
| Assigned | Bug assigned to developer for fixing | Test Lead/Manager |
| Open | Developer starts analyzing/working on it | Developer |
| Fixed | Developer has fixed the defect | Developer |
| Pending Retest | Ready for QA verification | Developer |
| Retest | Tester retests the fix | Tester |
| Verified | Fix confirmed working | Tester |
| Closed | Bug is resolved and closed | Tester/Lead |
| Reopen | Fix didn't work, bug still exists | Tester |
| Rejected | Not a valid defect | Developer/Lead |
| Duplicate | Same as an existing bug | Developer/Lead |
| Deferred | Fix postponed to future release | Manager/PO |
| Not a Bug | Works as designed | Developer/Lead |
Q10. What is the difference between Defect Severity and Priority?¶
Answer:
| Aspect | Severity | Priority |
|---|---|---|
| Definition | Impact of defect on system functionality | Urgency of fixing the defect |
| Set By | QA/Tester | Product Owner/Manager |
| Based On | Technical impact | Business impact |
| Question | "How bad is the bug?" | "How soon to fix it?" |
Severity Levels:
| Level | Description | Example |
|---|---|---|
| Critical | System crash, data loss, no workaround | Application won't start, payment processing fails |
| Major | Major feature broken, workaround exists | Cannot add items to cart (can call to order) |
| Minor | Minor feature affected, easy workaround | Sort by date doesn't work (can sort manually) |
| Trivial | Cosmetic issues, no functional impact | Typo in text, slight color mismatch |
Priority Levels:
| Level | Description | Timeline |
|---|---|---|
| P1 - Critical | Must fix immediately | Within hours |
| P2 - High | Must fix before release | Within days |
| P3 - Medium | Should fix, can wait | Current release if time permits |
| P4 - Low | Nice to fix | Future release |
Severity vs Priority Matrix:
| Scenario | Severity | Priority | Example |
|---|---|---|---|
| High Severity, High Priority | Critical | P1 | Login broken on production |
| High Severity, Low Priority | Critical | P3 | Crash in rarely used feature |
| Low Severity, High Priority | Minor | P1 | CEO's name misspelled on homepage |
| Low Severity, Low Priority | Trivial | P4 | Typo in admin settings page |
Key Point
A bug can have high severity but low priority (or vice versa). They are independent attributes.
Interview Follow-up
Who sets severity vs priority? → Severity is set by QA (technical impact), Priority is set by Product Owner/Manager (business impact).
Q11. What are the components of a Defect Report?¶
Answer: A good defect report contains all information needed to understand, reproduce, and fix the bug.
Essential Components:
| Component | Description | Example |
|---|---|---|
| Defect ID | Unique identifier | BUG-1234 |
| Title/Summary | Brief description (what + where) | "Login fails with valid credentials on Chrome" |
| Description | Detailed explanation of the issue | Full description of observed behavior |
| Steps to Reproduce | Exact steps to recreate the bug | 1. Go to login page 2. Enter... |
| Expected Result | What should happen | User should be logged in successfully |
| Actual Result | What actually happens | Error message "Invalid credentials" appears |
| Environment | Where bug was found | Chrome 120, Windows 11, Production |
| Severity | Impact level | Critical/Major/Minor/Trivial |
| Priority | Fix urgency | P1/P2/P3/P4 |
| Status | Current state | New/Open/Fixed/Closed |
| Assigned To | Developer responsible | John Doe |
| Reported By | Who found the bug | Jane Smith |
| Reported Date | When bug was logged | 2024-01-15 |
| Attachments | Supporting evidence | Screenshots, videos, logs |
| Build/Version | Software version | v2.3.1, Build 456 |
Optional Components:
- Module/Feature area
- Test case reference
- Root cause (after analysis)
- Resolution notes
- Related defects
Q12. What is Defect Triage?¶
Answer: Defect Triage is a process where defects are reviewed, prioritized, and assigned to ensure the most critical issues are addressed first.
Triage Meeting Participants:
- QA Lead/Manager
- Development Lead
- Product Owner
- Project Manager
Triage Process:
flowchart TD
A[1. Review New Defects] --> B{2. Valid Bug?}
B -->|NO| C[Reject/Close]
B -->|YES| D{3. Duplicate?}
D -->|YES| E[Mark as Duplicate]
D -->|NO| F[4. Assess Severity & Priority]
F --> G[5. Assign to Developer]
G --> H[6. Schedule for Fix]
Decisions Made During Triage:
- Is it a valid defect?
- What is the correct severity/priority?
- Who should fix it?
- When should it be fixed?
- Should it be deferred?
Q13. What is Root Cause Analysis (RCA)?¶
Answer: Root Cause Analysis is a systematic process for identifying the underlying cause of a defect rather than just addressing symptoms.
Common RCA Techniques:
1. 5 Whys Technique:
%%{init: {'theme': 'base'}}%%
flowchart TB
P[🔴 Problem: App crashed during checkout]
P --> W1
W1["❓ Why 1: Why did it crash?"]
A1["💥 Null pointer exception in payment module"]
W1 --> A1
A1 --> W2
W2["❓ Why 2: Why was there a null pointer?"]
A2["📭 Credit card object was null"]
W2 --> A2
A2 --> W3
W3["❓ Why 3: Why was credit card null?"]
A3["📡 API returned empty response"]
W3 --> A3
A3 --> W4
W4["❓ Why 4: Why did API return empty?"]
A4["⏱️ Timeout due to slow response"]
W4 --> A4
A4 --> W5
W5["❓ Why 5: Why was response slow?"]
A5["🗄️ Database query had no index"]
W5 --> A5
A5 --> RC[🎯 Root Cause: Missing database index]
style P fill:#ffcdd2,stroke:#c62828
style RC fill:#c8e6c9,stroke:#2e7d32
2. Fishbone Diagram (Ishikawa):
flowchart LR
A[People] --> D[DEFECT]
B[Process] --> D
C[Technology] --> D
E[Environment] --> D
3. Pareto Analysis: Focus on the 20% of causes creating 80% of defects.
Q14. What is the difference between Defect Prevention and Defect Detection?¶
Answer:
| Aspect | Defect Prevention | Defect Detection |
|---|---|---|
| Timing | Before defects occur | After defects exist |
| Focus | Process improvement | Finding existing bugs |
| Goal | Stop defects from entering | Find defects before users |
| Activities | Reviews, standards, training | Testing, inspection |
| Cost | Lower (early investment) | Higher (rework needed) |
Defect Prevention Activities:
- Code reviews and pair programming
- Requirements reviews
- Design reviews
- Coding standards enforcement
- Static code analysis
- Training and knowledge sharing
- Root cause analysis of past defects
- Process improvements
Defect Detection Activities:
- Unit testing
- Integration testing
- System testing
- Regression testing
- Exploratory testing
- User acceptance testing
Cost of Defects by Phase:
xychart-beta
title "Cost to Fix Defects by Phase"
x-axis [Requirements, Design, Development, Testing, Production]
y-axis "Cost ($)" 0 --> 10000
bar [10, 100, 500, 1000, 10000]
Q15. What are common Defect Metrics?¶
Answer:
| Metric | Formula | Purpose |
|---|---|---|
| Defect Density | Total Defects / Size (KLOC or FP) | Measure code quality |
| Defect Removal Efficiency (DRE) | (Defects found before release / Total defects) × 100 | Measure testing effectiveness |
| Defect Detection Efficiency (DDE) | (Defects found in phase / Total defects in phase) × 100 | Measure phase effectiveness |
| Defect Leakage | (Defects found in production / Total defects) × 100 | Measure escaped defects |
| Defect Rejection Rate | (Rejected defects / Total defects) × 100 | Measure defect report quality |
| Defect Age | Close date - Open date | Measure resolution time |
| Defect Reopen Rate | (Reopened defects / Closed defects) × 100 | Measure fix quality |
Example Calculations:
Defect Density:
- 50 defects found in 10,000 lines of code
- Defect Density = 50/10 = 5 defects per KLOC
DRE:
- 95 defects found during testing
- 5 defects found in production
- DRE = 95/(95+5) × 100 = 95%
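A quick sketch of the same calculations in code (values mirror the example above):
// Defect Density: defects per thousand lines of code (KLOC)
const defectDensity = (defects, linesOfCode) => defects / (linesOfCode / 1000);
console.log(defectDensity(50, 10000)); // 5 defects per KLOC
// Defect Removal Efficiency: share of defects caught before release
const dre = (foundBeforeRelease, foundInProduction) =>
  (foundBeforeRelease / (foundBeforeRelease + foundInProduction)) * 100;
console.log(dre(95, 5)); // 95%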
Q16. How do you write a good Bug Report?¶
Answer: Characteristics of a Good Bug Report:
- Accurate - Correctly describes the issue
- Complete - Contains all necessary information
- Concise - No unnecessary details
- Reproducible - Steps clearly recreate the issue
- Objective - Factual, no opinions or blame
Best Practices:
1. Write Clear Titles (what + where):
Bad: "Login doesn't work"
Good: "Login fails with valid credentials on Chrome (error: 'Invalid credentials')"
2. Provide Detailed Steps:
Bad:
1. Go to site
2. Try to login
3. Doesn't work
Good:
1. Navigate to https://example.com/login
2. Enter username: testuser@email.com
3. Enter password: Test123!
4. Click "Sign In" button
5. Observe error message
3. Include Evidence:
- Screenshots with annotations
- Screen recordings
- Console logs
- Network request/response
- Error messages (exact text)
4. Specify Environment:
Browser: Chrome 120.0.6099.130
OS: macOS Sonoma 14.2
Device: MacBook Pro M1
Environment: Staging (https://staging.example.com)
Build: v2.3.1-beta
5. One Bug Per Report: Don't combine multiple issues.
Part 3: Types of Testing¶
Q17. What is the difference between Functional and Non-Functional Testing?¶
Answer:
| Aspect | Functional Testing | Non-Functional Testing |
|---|---|---|
| Focus | What the system does | How the system performs |
| Validates | Business requirements | Quality attributes |
| Question | "Does it work correctly?" | "Does it work well?" |
| Requirements | Functional requirements | Non-functional requirements (NFRs) |
| Execution | Based on user actions | Based on system behavior |
Functional Testing Types:
| Type | Description |
|---|---|
| Unit Testing | Test individual components/functions |
| Integration Testing | Test interaction between modules |
| System Testing | Test complete system end-to-end |
| Regression Testing | Test after changes to ensure no new bugs |
| Smoke Testing | Basic tests to check build stability |
| Sanity Testing | Focused tests on specific functionality |
| UAT | User validates against business needs |
Non-Functional Testing Types:
| Type | What It Tests |
|---|---|
| Performance Testing | Speed, response time, throughput |
| Load Testing | Behavior under expected load |
| Stress Testing | Behavior beyond capacity |
| Security Testing | Vulnerabilities, authentication |
| Usability Testing | User-friendliness, ease of use |
| Compatibility Testing | Works across browsers, devices, OS |
| Reliability Testing | Consistency, failure recovery |
| Scalability Testing | Ability to handle growth |
Q18. What is the difference between Verification and Validation?¶
Answer:
| Aspect | Verification | Validation |
|---|---|---|
| Definition | Are we building the product right? | Are we building the right product? |
| Focus | Process/specifications | End result/user needs |
| Question | Does it meet specifications? | Does it meet user needs? |
| Performed | During development | After development |
| Type | Static testing | Dynamic testing |
| Methods | Reviews, inspections, walkthroughs | Testing with execution |
Verification Activities:
- Requirements review
- Design review
- Code review
- Walkthrough
- Inspection
- Static analysis
Validation Activities:
- Unit testing
- Integration testing
- System testing
- User acceptance testing
- Beta testing
Example:
Building a Calculator App:
Verification: "Did we implement addition correctly per the spec?"
- Code review confirms + operator follows specification
- Design review confirms UI matches mockups
Validation: "Can users actually add numbers easily?"
- User testing shows users can perform addition
- UAT confirms it meets business requirements
Memory Aid
- Verification = Checking documentation, specifications (no code execution)
- Validation = Checking actual product (with code execution)
Q19. What is the difference between Black-box, White-box, and Gray-box Testing?¶
Answer:
| Aspect | Black-box | White-box | Gray-box |
|---|---|---|---|
| Knowledge | No internal knowledge | Full internal knowledge | Partial knowledge |
| Tester Focus | Inputs and outputs | Code structure | Both |
| Also Called | Behavioral testing | Structural testing | Translucent testing |
| Performed By | Testers | Developers | Either |
| Code Access | No | Yes | Limited |
Visual Representation: Black-box Testing: Tester knows Input/Output only
flowchart LR
Input --> System["System<br>[Internal Hidden]<br>? ? ?"]
System --> Output
White-box Testing: Tester knows all code paths
flowchart LR
Input --> A --> B --> Output
A --> C --> D
B --> D
Gray-box Testing: Tester knows some internals
flowchart LR
Input --> A["A<br>[Known]"] --> Hidden["?<br>[Hidden]"] --> Output
Black-box Techniques:
- Equivalence Partitioning
- Boundary Value Analysis
- Decision Table Testing
- State Transition Testing
- Use Case Testing
White-box Techniques:
- Statement Coverage
- Branch/Decision Coverage
- Path Coverage
- Condition Coverage
Q20. Explain Unit, Integration, System, and Acceptance Testing.¶
Answer: Testing Levels Pyramid:
block-beta
columns 1
block:top
UAT["UAT (User Acceptance Testing)"]
end
block:system
SYS["System Testing"]
end
block:integration
INT["Integration Testing"]
end
block:unit
UNIT["Unit Testing"]
end
style UAT fill:#e1f5fe
style SYS fill:#b3e5fc
style INT fill:#81d4fa
style UNIT fill:#4fc3f7
| Level | Scope | Who Tests | Focus |
|---|---|---|---|
| Unit | Individual functions/methods | Developers | Code logic works correctly |
| Integration | Multiple units together | Developers/Testers | Components interact correctly |
| System | Complete application | QA Team | End-to-end functionality |
| Acceptance | Business requirements | Users/Stakeholders | Meets business needs |
Unit Testing:
// Testing a single function
function add(a, b) {
return a + b;
}
// Unit test
test('add function', () => {
expect(add(2, 3)).toBe(5);
expect(add(-1, 1)).toBe(0);
});
Integration Testing:
- Big Bang: Integrate all modules at once
- Incremental: Integrate one by one
- Top-down: Start from main module
- Bottom-up: Start from lowest modules
- Sandwich: Both approaches combined
System Testing:
- Tests complete integrated system
- Validates functional and non-functional requirements
- Performed in environment similar to production
Acceptance Testing:
- Alpha Testing: Done internally at developer's site
- Beta Testing: Done by real users at customer's site
- UAT: User Acceptance Testing against business requirements
Q21. What is the difference between Smoke Testing and Sanity Testing?¶
Answer:
| Aspect | Smoke Testing | Sanity Testing |
|---|---|---|
| Also Called | Build Verification Test (BVT) | Subset of Regression Test |
| Purpose | Check if build is stable for testing | Check if specific bug fix or feature works |
| Coverage | Wide - tests critical paths | Narrow - tests specific area |
| When | Every new build | After minor changes or bug fixes |
| Depth | Shallow | Moderately deep |
| Documentation | Usually scripted | Often unscripted |
| Decision | Accept or reject build | Continue or stop testing |
Smoke Testing Example:
%%{init: {'theme': 'base'}}%%
flowchart LR
A[🚀 New Build<br/>Deployed] --> B{Run Smoke Tests}
B --> C[✅ App launches]
B --> D[✅ Login works]
B --> E[✅ Navigation works]
B --> F[✅ Key feature works]
B --> G[✅ DB connection]
C & D & E & F & G --> H{All Pass?}
H -->|Yes| I[✅ Proceed with<br/>Detailed Testing]
H -->|No| J[❌ Reject Build<br/>Report to Dev]
style I fill:#c8e6c9,stroke:#2e7d32
style J fill:#ffcdd2,stroke:#c62828
Sanity Testing Example:
%%{init: {'theme': 'base'}}%%
flowchart TB
A[🐛 Bug Fix Deployed<br/>Password reset email not sending] --> B[Sanity Test Steps]
B --> C[1️⃣ Click Forgot Password]
C --> D[2️⃣ Enter Email]
D --> E[3️⃣ Submit]
E --> F[4️⃣ Check Email Received]
F --> G[5️⃣ Click Reset Link]
G --> H[6️⃣ Reset Password]
H --> I{Pass?}
I -->|Yes| J[✅ Continue<br/>Regression Testing]
I -->|No| K[❌ Return to<br/>Development]
style A fill:#fff3e0,stroke:#e65100
style J fill:#c8e6c9,stroke:#2e7d32
style K fill:#ffcdd2,stroke:#c62828
Key Difference
- Smoke: "Is the build healthy enough to test?"
- Sanity: "Does this specific change work as expected?"
Q22. What is the difference between Regression Testing and Retesting?¶
Answer:
| Aspect | Regression Testing | Retesting |
|---|---|---|
| Purpose | Ensure changes haven't broken existing features | Verify specific bug fix works |
| Scope | Entire application or affected areas | Only the defect that was fixed |
| When | After any code change | After a defect is fixed |
| Test Cases | Existing test suite | Failed test cases only |
| Automation | Highly recommended | Usually manual |
| Priority | Based on impact analysis | Based on defect priority |
Visual Comparison: Regression Testing: Test multiple features to ensure no side effects
flowchart LR
subgraph All Features
A["Feature A<br>[Test]"]
B["Feature B<br>[Test]"]
C["Feature C<br>[Test]"]
D["Feature D<br>[Test]"]
E["Feature E<br>[Test]"]
F["Feature F<br>[Test]"]
end
Retesting: Test only the specific bug that was fixed
flowchart LR
subgraph Features
A[Feature A]
B[Feature B]
C[Feature C]
D[Feature D]
E["BUG FIX<br>[Test]"]
F[Feature F]
end
style E fill:#90EE90
Regression Testing Strategies:
- Complete Regression: Run all tests (time-consuming)
- Selective Regression: Run tests for affected areas only
- Priority-based: Run critical tests first
Q23. What is the difference between Alpha and Beta Testing?¶
Answer:
| Aspect | Alpha Testing | Beta Testing |
|---|---|---|
| Location | Developer's site | Customer's/User's site |
| Performed By | Internal testers, employees | Real end users, customers |
| Environment | Controlled test environment | Real production-like environment |
| Stage | Before beta testing | After alpha testing |
| Feedback | Internal feedback loop | External user feedback |
| Goal | Find bugs before external release | Validate in real-world conditions |
| Access | Limited to organization | Public or limited external group |
Testing Flow:
flowchart LR
A[Development] --> B[Alpha Testing]
B --> C[Beta Testing]
C --> D[Production Release]
B -.- B1[Internal testers]
C -.- C1[External users]
Alpha Testing:
- First phase of user testing
- White-box and black-box techniques
- Both functional and reliability testing
- Bugs can be addressed immediately
Beta Testing:
- Also called "field testing"
- Real-world usage scenarios
- Wider hardware/software combinations
- Feedback through surveys, bug reports
- May be open (anyone) or closed (selected users)
Q24. What is the difference between Static and Dynamic Testing?¶
Answer:
| Aspect | Static Testing | Dynamic Testing |
|---|---|---|
| Code Execution | No | Yes |
| When | Early in SDLC | After code is written |
| Focus | Prevention | Detection |
| Cost | Lower | Higher |
| Techniques | Reviews, walkthroughs | Testing with inputs |
Static Testing Techniques:
- Code reviews
- Peer reviews
- Walkthroughs
- Inspections
- Static code analysis tools (SonarQube, ESLint)
- Checklist reviews
Dynamic Testing Techniques:
- Unit testing
- Integration testing
- System testing
- Performance testing
- All testing that runs code
Static Analysis Tools:
flowchart LR
A[Code] --> B[Static Analyzer]
B --> C[Report]
C -.- D["• Code smells<br>• Security flaws<br>• Style issues<br>• Complexity<br>• Dead code"]
Benefits of Static Testing:
- Find defects early (cheaper to fix)
- No test environment needed
- Can review non-executable artifacts
- Improves code quality and maintainability
Q25. What is the difference between Exploratory Testing and Scripted Testing?¶
Answer:
| Aspect | Exploratory Testing | Scripted Testing |
|---|---|---|
| Planning | Minimal upfront | Extensive upfront |
| Test Design | During execution | Before execution |
| Documentation | Light, session-based | Detailed test cases |
| Approach | Learning + testing simultaneously | Follow predetermined steps |
| Creativity | Highly encouraged | Limited by scripts |
| Coverage | Unknown areas, edge cases | Known requirements |
Exploratory Testing Approach:
flowchart TD
subgraph Session["Exploratory Testing Session (60 min)"]
Charter["Charter: Explore login functionality<br>focusing on security and edge cases"]
Charter --> A[Learn the feature]
A --> B[Design tests on-the-fly]
B --> C[Execute tests]
C --> D[Document findings]
D --> E[Adapt based on discoveries]
E -.-> B
end
When to Use Each:
| Use Exploratory When | Use Scripted When |
|---|---|
| Learning new features | Compliance/audit requirements |
| Finding creative bugs | Regression testing |
| Time is limited | Need reproducibility |
| Requirements unclear | Coverage tracking needed |
| Complement automation | Critical paths |
Session-Based Test Management (SBTM):
- Charter: Mission for the session
- Time-box: Fixed duration (60-120 min)
- Session notes: Document findings
- Debrief: Review what was learned
Q26. What are the types of Performance Testing?¶
Answer:
flowchart TB
subgraph Performance["Performance Testing Types"]
A[Load Testing]
B[Stress Testing]
C[Spike Testing]
D[Endurance Testing]
E[Volume Testing]
F[Scalability Testing]
end
| Type | Purpose | What It Tests |
|---|---|---|
| Load Testing | Test under expected load | Response time, throughput at normal load |
| Stress Testing | Test beyond capacity | Breaking point, recovery behavior |
| Spike Testing | Test sudden load increase | Behavior during traffic spikes |
| Endurance Testing | Test over extended time | Memory leaks, degradation over time |
| Volume Testing | Test with large data | Performance with high data volume |
| Scalability Testing | Test scaling capabilities | Performance as resources scale up/down |
Load Patterns:
| Test Type | Pattern Description |
|---|---|
| Load Testing | Gradual ramp up, steady load, gradual ramp down |
| Stress Testing | Increase until breaking point |
| Spike Testing | Sudden sharp increase then decrease |
| Endurance Testing | Constant load over long duration (hours/days) |
Key Metrics:
- Response time
- Throughput (requests/second)
- Error rate
- CPU/Memory usage
- Concurrent users supported
Tools: JMeter, Gatling, LoadRunner, k6, Locust
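A minimal k6 load test sketch; the target URL, user count, and threshold below are illustrative assumptions, not a prescribed setup:
import http from 'k6/http';
import { check, sleep } from 'k6';
export const options = {
  vus: 50,                 // 50 concurrent virtual users
  duration: '2m',          // hold the load for 2 minutes
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must finish under 500 ms
  },
};
export default function () {
  const res = http.get('https://staging.example.com/api/products');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // simulated think time between iterations
}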
Q27. What is Security Testing?¶
Answer: Security Testing identifies vulnerabilities, threats, and risks in a software application to prevent malicious attacks and data breaches.
OWASP Top 10 Vulnerabilities (2021):
| # | Vulnerability | Description |
|---|---|---|
| 1 | Broken Access Control | Users accessing unauthorized data/functions |
| 2 | Cryptographic Failures | Weak encryption, exposed sensitive data |
| 3 | Injection | SQL, NoSQL, OS command injection |
| 4 | Insecure Design | Flaws in architecture and design |
| 5 | Security Misconfiguration | Default/weak settings |
| 6 | Vulnerable Components | Using outdated/vulnerable libraries |
| 7 | Authentication Failures | Weak passwords, session issues |
| 8 | Data Integrity Failures | Untrusted deserialization, CI/CD flaws |
| 9 | Logging & Monitoring Failures | Insufficient logging |
| 10 | SSRF | Server-Side Request Forgery |
Types of Security Testing:
| Type | Description |
|---|---|
| Vulnerability Scanning | Automated scan for known vulnerabilities |
| Penetration Testing | Simulated attack by ethical hackers |
| Security Audit | Manual review of code and infrastructure |
| Risk Assessment | Identify and prioritize security risks |
| Ethical Hacking | Attempt to breach the system |
Common Security Test Cases:
- SQL Injection attempts
- Cross-Site Scripting (XSS)
- Authentication bypass
- Session management flaws
- File upload vulnerabilities
- Authorization checks
- Sensitive data exposure
Tools: OWASP ZAP, Burp Suite, Nessus, Metasploit
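A minimal sketch of automating one of these checks (Jest with Node 18+ global fetch; the endpoint is an assumption):
// Verify the login API rejects a classic SQL injection payload instead of authenticating
test('login rejects SQL injection in the email field', async () => {
  const res = await fetch('https://staging.example.com/api/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email: "' OR '1'='1", password: 'anything' }),
  });
  // Any rejection status is acceptable; a 200 here would indicate an injection/authentication flaw
  expect([400, 401, 422]).toContain(res.status);
});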
Q28. What is Usability and Accessibility Testing?¶
Answer: Usability Testing:
Tests how easy and user-friendly the application is for end users.
| Aspect | Description |
|---|---|
| Goal | Ensure users can complete tasks easily |
| Focus | User experience, intuitiveness |
| Participants | Real users representing target audience |
| Methods | Task completion, think-aloud, surveys |
Usability Factors:
- Learnability: How easy for first-time users?
- Efficiency: How quickly can tasks be completed?
- Memorability: How easy to remember how to use?
- Errors: How many errors do users make?
- Satisfaction: How pleasant is the experience?
Accessibility Testing:
Tests if the application is usable by people with disabilities.
WCAG Guidelines (Web Content Accessibility Guidelines):
| Principle | Description | Examples |
|---|---|---|
| Perceivable | Content must be presentable | Alt text for images, captions for video |
| Operable | Interface must be usable | Keyboard navigation, skip links |
| Understandable | Content must be clear | Clear labels, error messages |
| Robust | Works with assistive tech | Screen reader compatible |
Accessibility Checks:
- Screen reader compatibility
- Keyboard-only navigation
- Color contrast ratios
- Text resizing
- Alt text for images
- Form labels
- Focus indicators
Tools:
- Usability: UserTesting, Hotjar, Maze
- Accessibility: WAVE, axe, Lighthouse, NVDA
Part 4: Test Design & Documentation¶
Q29. What is the difference between Test Case and Test Scenario?¶
Answer:
| Aspect | Test Case | Test Scenario |
|---|---|---|
| Definition | Step-by-step instructions to test | High-level description of what to test |
| Detail Level | Detailed, specific | Brief, conceptual |
| Derived From | Test scenarios | Requirements |
| Focus | How to test | What to test |
| Includes | Steps, data, expected results | Just the testing idea |
Example - Login Feature:
Test Scenario: "Verify the login functionality of the application."
Test Cases derived from the scenario:
| TC ID | Test Case | Steps | Expected Result |
|---|---|---|---|
| TC001 | Valid login | 1. Enter valid email 2. Enter valid password 3. Click Login | User logged in successfully |
| TC002 | Invalid password | 1. Enter valid email 2. Enter wrong password 3. Click Login | Error message displayed |
| TC003 | Empty email | 1. Leave email blank 2. Enter password 3. Click Login | Validation error shown |
| TC004 | Empty password | 1. Enter email 2. Leave password blank 3. Click Login | Validation error shown |
| TC005 | SQL injection | 1. Enter "' OR '1'='1" as email 2. Click Login | Input rejected |
Relationship:
flowchart TD
A[Requirements] --> B["Test Scenarios<br>(What to test)"]
B --> C["Test Cases<br>(How to test)"]
C --> D["Test Scripts<br>(Automated tests)"]
Q30. What are Test Case Design Techniques?¶
Answer:
1. Boundary Value Analysis (BVA):
Tests at the edges of input ranges. Example: an age field that accepts values from 18 to 60.
| Position | Value | Type |
|---|---|---|
| Min-1 | 17 | Invalid |
| Min | 18 | Valid |
| Min+1 | 19 | Valid |
| Max-1 | 59 | Valid |
| Max | 60 | Valid |
| Max+1 | 61 | Invalid |
Test values: 17, 18, 19, 59, 60, 61
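A sketch of driving these boundary values through a parameterized unit test, assuming a hypothetical isEligibleAge function that accepts ages 18-60:
// Hypothetical function under test: valid age range is 18-60 inclusive
const isEligibleAge = (age) => age >= 18 && age <= 60;
test.each([
  [17, false], // Min - 1
  [18, true],  // Min
  [19, true],  // Min + 1
  [59, true],  // Max - 1
  [60, true],  // Max
  [61, false], // Max + 1
])('isEligibleAge(%i) returns %s', (age, expected) => {
  expect(isEligibleAge(age)).toBe(expected);
});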
2. Equivalence Class Partitioning (ECP):
Divides inputs into groups expected to behave similarly.
| Partition | Range | Test Value |
|---|---|---|
| Invalid (low) | < 18 | 10 |
| Valid | 18-60 | 35 |
| Invalid (high) | > 60 | 70 |
3. Decision Table Testing:
Tests combinations of conditions and actions.
| Conditions | Rule 1 | Rule 2 | Rule 3 | Rule 4 |
|---|---|---|---|---|
| Valid Username | T | T | F | F |
| Valid Password | T | F | T | F |
| Actions | | | | |
| Login Success | ✓ | | | |
| Show Error | | ✓ | ✓ | ✓ |
4. State Transition Testing:
Tests system states and transitions.
stateDiagram-v2
[*] --> Locked
Locked --> Unlocked : Enter correct PIN
Unlocked --> Timeout : Idle 30s
Timeout --> Unlocked : User activity
5. Error Guessing:
Based on tester's experience and intuition about likely errors.
Q31. What is Requirement Traceability Matrix (RTM)?¶
Answer: RTM is a document that maps requirements to test cases, ensuring all requirements are covered by tests.
RTM Structure:
| Req ID | Requirement | TC ID | Test Case | Status | Defect |
|---|---|---|---|---|---|
| REQ001 | User can login with email/password | TC001 | Valid login test | Pass | - |
| REQ001 | User can login with email/password | TC002 | Invalid password test | Pass | - |
| REQ002 | User can reset password | TC010 | Password reset flow | Fail | BUG-123 |
| REQ003 | User can update profile | TC015 | Profile update test | Pass | - |
Types of Traceability:
Forward Traceability:
Requirements → Test Cases
(Ensures all requirements are tested)
Backward Traceability:
Test Cases → Requirements
(Ensures all tests map to requirements)
Bi-directional Traceability:
Requirements ↔ Test Cases
(Both directions)
Benefits:
- Ensures 100% requirement coverage
- Identifies gaps in testing
- Shows impact of requirement changes
- Helps in test prioritization
- Facilitates audit compliance
Q32. What are the components of a Test Plan?¶
Answer: A Test Plan is a document describing the scope, approach, resources, and schedule of testing activities.
IEEE 829 Test Plan Structure:
| Section | Description |
|---|---|
| 1. Test Plan Identifier | Unique ID for the document |
| 2. Introduction | Purpose, overview, scope |
| 3. Test Items | What will be tested (features, modules) |
| 4. Features to be Tested | Specific features in scope |
| 5. Features Not to be Tested | Explicitly out of scope |
| 6. Approach | Testing strategy, types of testing |
| 7. Pass/Fail Criteria | What defines success |
| 8. Suspension/Resumption | Conditions to stop/restart |
| 9. Test Deliverables | Documents, reports produced |
| 10. Testing Tasks | Activities and dependencies |
| 11. Environment Needs | Hardware, software, tools |
| 12. Responsibilities | Who does what |
| 13. Staffing & Training | Team members, skills needed |
| 14. Schedule | Milestones, dates |
| 15. Risks & Contingencies | Risk assessment, mitigation |
| 16. Approvals | Sign-off requirements |
Q33. What is the difference between Test Strategy and Test Plan?¶
Answer:
| Aspect | Test Strategy | Test Plan |
|---|---|---|
| Scope | Organization or program level | Project level |
| Created By | Test Manager/QA Lead | Test Lead |
| Timeframe | Long-term | Project duration |
| Content | General approach, standards | Specific activities, schedules |
| Changes | Rarely changes | May change during project |
| Level | High-level guidelines | Detailed execution plan |
Test Strategy Includes:
- Testing objectives and scope
- Testing types to be used
- Test environment requirements
- Testing tools
- Risk analysis approach
- Defect management process
- Test metrics
- Roles and responsibilities
Test Plan Includes:
- Features to test
- Specific test cases
- Resource allocation
- Detailed schedule
- Entry/exit criteria
- Risk mitigation for this project
Relationship: One organization-level Test Strategy typically guides multiple project-level Test Plans.
Q34. What are Entry and Exit Criteria?¶
Answer: Entry Criteria: Conditions that must be met before testing can begin.
| Entry Criteria | Description |
|---|---|
| Requirements signed off | Requirements document approved |
| Test plan approved | Test plan reviewed and signed |
| Test environment ready | Hardware/software configured |
| Test data available | Data prepared for testing |
| Build deployed | Code deployed to test environment |
| Smoke test passed | Basic functionality verified |
| Tools configured | Test tools set up |
Exit Criteria: Conditions that must be met before testing can end.
| Exit Criteria | Description |
|---|---|
| All test cases executed | 100% test execution |
| Defect targets met | E.g., No P1/P2 open defects |
| Pass rate achieved | E.g., 95% pass rate |
| Coverage met | E.g., 80% code coverage |
| No critical bugs open | All critical defects resolved |
| Sign-off obtained | Stakeholder approval received |
| Documentation complete | Test reports finalized |
Example:
Entry Criteria for System Testing:
✓ Integration testing complete
✓ Build 2.3.0 deployed to staging
✓ Test environment verified
✓ Test data loaded
✓ All P1 defects from integration fixed
Exit Criteria for System Testing:
✓ All test cases executed
✓ Pass rate ≥ 95%
✓ No open P1 or P2 defects
✓ All P3 defects documented
✓ Test summary report approved
Q35. What is Risk-Based Testing?¶
Answer: Risk-Based Testing prioritizes testing activities based on the probability and impact of failures.
Risk Assessment:
Risk = Probability × Impact
Risk Matrix:
| Probability / Impact | Low | Medium | High |
|---|---|---|---|
| High | Medium | High | Critical |
| Medium | Low | Medium | High |
| Low | Low | Low | Medium |
Risk Categories:
| Risk Type | Examples |
|---|---|
| Product Risk | Complex features, new technology, third-party integrations |
| Project Risk | Tight deadlines, resource constraints, unclear requirements |
| Technical Risk | Performance issues, security vulnerabilities |
| Business Risk | Regulatory compliance, financial impact |
Risk-Based Testing Process:
1. Identify Risks: List potential problem areas
2. Assess Risks: Rate probability and impact
3. Prioritize: Focus on high-risk areas
4. Mitigate: Allocate more testing to risky areas
5. Monitor: Track risks throughout the project
Benefits:
- Optimize limited testing time
- Focus on what matters most
- Better resource allocation
- Informed go/no-go decisions
Q36. What are Test Estimation Techniques?¶
Answer:
| Technique | Description | When to Use |
|---|---|---|
| Work Breakdown Structure (WBS) | Divide into smaller tasks, estimate each | Detailed planning |
| Expert Judgment | Based on experience | Quick estimates |
| Analogy/Historical | Compare to similar past projects | Similar projects exist |
| Function Point Analysis | Based on functionality size | Large projects |
| Use Case Points | Based on use case complexity | Use case-driven development |
| Three-Point Estimation | (Optimistic + 4×Most Likely + Pessimistic) / 6 | Uncertain estimates |
Three-Point Estimation Example:
Task: Write test cases for login feature
Optimistic (O): 2 days (everything goes smoothly)
Most Likely (M): 4 days (normal conditions)
Pessimistic (P): 8 days (complications arise)
Estimate = (O + 4M + P) / 6
= (2 + 16 + 8) / 6
= 26 / 6
= 4.3 days
Factors Affecting Estimates
- Application complexity
- Number of test cases
- Team experience
- Test environment stability
- Tool availability
- Defect density expectations
- Automation level
Part 5: API Testing Concepts¶
Q37. What is API? Explain REST vs SOAP.¶
Answer: API (Application Programming Interface):
A set of rules and protocols that allows different software applications to communicate with each other.
flowchart LR
A["Client<br>(Frontend)"] <-->|"API<br>Request/Response"| B["Server<br>(Backend)"]
REST vs SOAP Comparison:
| Aspect | REST | SOAP |
|---|---|---|
| Full Form | Representational State Transfer | Simple Object Access Protocol |
| Type | Architectural style | Protocol |
| Data Format | JSON, XML, HTML, plain text | XML only |
| Transport | HTTP only | HTTP, SMTP, TCP, etc. |
| Operations | HTTP methods (GET, POST, etc.) | Defined in WSDL |
| Performance | Faster, lightweight | Slower, more overhead |
| Caching | Supported | Not supported |
| Security | HTTPS, OAuth | WS-Security, SSL |
| State | Stateless | Can be stateful |
| Learning Curve | Easier | Steeper |
| Use Cases | Web/mobile apps, public APIs | Enterprise, banking, transactions |
REST Example:
GET /api/users/123 HTTP/1.1
Host: api.example.com
Response:
{
"id": 123,
"name": "John Doe",
"email": "john@example.com"
}
SOAP Example:
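A minimal illustrative SOAP request for the same "get user" operation (the operation and element names are assumptions):
POST /soap/UserService HTTP/1.1
Host: api.example.com
Content-Type: text/xml; charset=utf-8

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetUserRequest>
      <UserId>123</UserId>
    </GetUserRequest>
  </soap:Body>
</soap:Envelope>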
Q38. What are HTTP Methods?¶
Answer:
| Method | Purpose | Idempotent | Safe | Request Body |
|---|---|---|---|---|
| GET | Retrieve data | Yes | Yes | No |
| POST | Create new resource | No | No | Yes |
| PUT | Update/replace resource | Yes | No | Yes |
| PATCH | Partial update | No | No | Yes |
| DELETE | Remove resource | Yes | No | Optional |
| HEAD | Get headers only | Yes | Yes | No |
| OPTIONS | Get supported methods | Yes | Yes | No |
Idempotent: Same request produces same result when repeated. Safe: Doesn't modify server state.
CRUD Operations Mapping:
| CRUD | HTTP Method | Example |
|---|---|---|
| Create | POST | POST /users - Create user |
| Read | GET | GET /users/123 - Get user |
| Update | PUT/PATCH | PUT /users/123 - Update user |
| Delete | DELETE | DELETE /users/123 - Delete user |
PUT vs PATCH:
PUT /users/123
{
"name": "John",
"email": "john@email.com",
"phone": "1234567890"
}
// Replaces entire resource
PATCH /users/123
{
"phone": "9876543210"
}
// Updates only specified field
Q39. What are HTTP Status Codes?¶
Answer: Status Code Categories:
| Range | Category | Description |
|---|---|---|
| 1xx | Informational | Request received, continuing |
| 2xx | Success | Request successful |
| 3xx | Redirection | Further action needed |
| 4xx | Client Error | Client made an error |
| 5xx | Server Error | Server failed |
Common Status Codes:
| Code | Name | Description | When Used |
|---|---|---|---|
| 200 | OK | Request successful | GET success, PUT/PATCH success |
| 201 | Created | Resource created | POST success |
| 204 | No Content | Success, no body returned | DELETE success |
| 301 | Moved Permanently | Resource permanently moved | URL changed |
| 302 | Found | Temporary redirect | Temporary URL change |
| 304 | Not Modified | Use cached version | Caching |
| 400 | Bad Request | Invalid syntax | Malformed request |
| 401 | Unauthorized | Authentication required | Missing/invalid credentials |
| 403 | Forbidden | Access denied | Insufficient permissions |
| 404 | Not Found | Resource doesn't exist | Wrong URL |
| 405 | Method Not Allowed | HTTP method not supported | Wrong method |
| 409 | Conflict | Resource conflict | Duplicate entry |
| 422 | Unprocessable Entity | Validation failed | Invalid data |
| 429 | Too Many Requests | Rate limit exceeded | Throttling |
| 500 | Internal Server Error | Server error | Unhandled exception |
| 502 | Bad Gateway | Invalid upstream response | Proxy/gateway issue |
| 503 | Service Unavailable | Server temporarily down | Maintenance/overload |
| 504 | Gateway Timeout | Upstream timeout | Slow backend |
Q40. What is the structure of HTTP Request and Response?¶
Answer: HTTP Request Structure:
| Component | Example |
|---|---|
| Request Line | POST /api/users HTTP/1.1 |
| Headers | Host: api.example.com<br>Content-Type: application/json<br>Authorization: Bearer token123<br>Accept: application/json |
| Body | {"name": "John Doe", "email": "john@example.com"} |
HTTP Response Structure:
| Component | Example |
|---|---|
| Status Line | HTTP/1.1 201 Created |
| Headers | Content-Type: application/json<br>Location: /api/users/456<br>X-Request-Id: abc123 |
| Body | {"id": 456, "name": "John Doe", "email": "john@example.com", "createdAt": "2024-01-15T10:30:00Z"} |
Common Headers:
| Header | Purpose | Example |
|---|---|---|
| Content-Type | Body format | application/json |
| Accept | Expected response format | application/json |
| Authorization | Authentication | Bearer token123 |
| Cache-Control | Caching rules | no-cache |
| User-Agent | Client information | Mozilla/5.0... |
| Cookie | Send cookies | session_id=abc |
| Set-Cookie | Set cookies (response) | session_id=xyz |
Q41. What are API Authentication Methods?¶
Answer:
| Method | Description | Use Case |
|---|---|---|
| Basic Auth | Username:password base64 encoded | Simple, internal APIs |
| API Key | Unique key in header/query | Public APIs |
| Bearer Token | Token in Authorization header | Mobile/web apps |
| OAuth 2.0 | Token-based authorization flow | Third-party access |
| JWT | Self-contained signed token | Stateless authentication |
1. Basic Authentication:
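Example request header (credentials are illustrative; the value is base64 of "user:password"):
Authorization: Basic dXNlcjpwYXNzd29yZA==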
2. API Key:
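Example (key name and value are illustrative; the key may be sent as a header or a query parameter):
X-API-Key: abc123
GET /api/data?api_key=abc123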
3. Bearer Token:
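Example request header (token value is illustrative):
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...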
4. OAuth 2.0 Flow:
sequenceDiagram
participant User
participant AuthServer as Auth Server
participant Resource as Resource Server
User->>AuthServer: 1. Request access
AuthServer->>User: 2. User grants permission
User->>AuthServer: 3. Exchange code for token
AuthServer->>User: 4. Access token returned
User->>Resource: 5. Use token to access API
5. JWT (JSON Web Token):
Header.Payload.Signature
eyJhbGciOiJIUzI1NiIs... // Header (algorithm)
.eyJzdWIiOiIxMjM0NTY3... // Payload (claims)
.SflKxwRJSMeKKF2QT4f... // Signature (verification)
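A small Node.js sketch for decoding (not verifying) the payload segment while debugging API tests; the token variable is assumed to hold a JWT:
// Decode the middle (payload) segment of a JWT without verifying the signature
function decodeJwtPayload(token) {
  const [, payload] = token.split('.');
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}
// Example usage while debugging: inspect the subject and expiry claims
// const claims = decodeJwtPayload(accessToken);
// console.log(claims.sub, new Date(claims.exp * 1000));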
Q42. What are the types of API Testing?¶
Answer:
| Type | Description | Focus |
|---|---|---|
| Functional Testing | Verify API works as expected | Correct responses, status codes |
| Validation Testing | Verify API meets requirements | Business logic validation |
| Load Testing | Test under expected load | Response time at scale |
| Security Testing | Test for vulnerabilities | Authentication, injection |
| Reliability Testing | Test consistent behavior | Repeated requests |
| Negative Testing | Test with invalid inputs | Error handling |
| Integration Testing | Test API interactions | End-to-end workflows |
Common API Test Scenarios:
| Category | Test Scenarios |
|---|---|
| Functional Tests | Valid input returns expected output, Correct status codes, Response schema matches spec, Business logic works |
| Negative Tests | Missing required fields, Invalid data types, Boundary values exceeded, Empty/null values, Invalid authentication |
| Security Tests | SQL injection attempts, Invalid tokens rejected, Unauthorized access blocked, Rate limiting works, Sensitive data encrypted |
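A sketch of one negative test from the table above (Jest with Node 18+ global fetch; the endpoint and error shape are assumptions):
// Missing required field: creating a user without an email should fail validation
test('POST /users without email returns 400', async () => {
  const res = await fetch('https://api.example.com/users', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'John Doe' }), // email intentionally omitted
  });
  expect(res.status).toBe(400);
  const body = await res.json();
  expect(body.error).toBeDefined(); // the response should explain what is missing
});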
Q43. What should you validate in API Testing?¶
Answer: Validation Checklist:
| Category | What to Validate |
|---|---|
| Status Code | Correct HTTP status returned |
| Response Body | Data correctness and completeness |
| Response Schema | Structure matches specification |
| Headers | Required headers present |
| Response Time | Within acceptable limits |
| Error Messages | Clear, appropriate error responses |
| Data Types | Correct types (string, number, etc.) |
| Authentication | Proper auth handling |
| Pagination | Correct page data returned |
| Sorting/Filtering | Results correctly ordered/filtered |
Example Validations:
// Status code validation
expect(response.status).toBe(200);
// Response body validation
expect(response.body.user.name).toBe("John");
expect(response.body.user.email).toContain("@");
// Schema validation
expect(response.body).toMatchSchema(userSchema);
// Response time validation
expect(response.time).toBeLessThan(2000);
// Header validation
expect(response.headers['content-type']).toContain('application/json');
// Array validation
expect(response.body.users).toHaveLength(10);
expect(response.body.users[0]).toHaveProperty('id');
Q44. What are common API Testing Tools?¶
Answer:
| Tool | Type | Best For |
|---|---|---|
| Postman | GUI-based | Manual testing, collaboration |
| REST Assured | Java library | Java-based automation |
| Cypress | JavaScript framework | Frontend + API testing |
| Playwright | Multi-language | Cross-browser + API |
| Jest/SuperTest | Node.js | JavaScript API testing |
| pytest + requests | Python | Python API testing |
| SoapUI | GUI-based | SOAP and REST testing |
| Insomnia | GUI-based | REST client |
| curl | Command line | Quick testing, scripting |
| k6 | Performance | Load testing APIs |
Postman Collection Structure:
%%{init: {'theme': 'base'}}%%
flowchart TB
C[📁 Collection] --> A[📂 Auth]
C --> U[📂 Users]
C --> P[📂 Products]
A --> A1[🔑 Login]
A --> A2[🚪 Logout]
U --> U1[👤 Get User]
U --> U2[➕ Create User]
U --> U3[🗑️ Delete User]
P --> P1[📋 List Products]
P --> P2[🔍 Get Product]
style C fill:#e3f2fd,stroke:#1565c0
style A fill:#fff3e0,stroke:#e65100
style U fill:#e8f5e9,stroke:#2e7d32
style P fill:#fce4ec,stroke:#c2185b
REST Assured Example (Java):
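// Assumes imports: static io.restassured.RestAssured.given, static org.hamcrest.Matchers.*, io.restassured.http.ContentType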
given()
.header("Authorization", "Bearer " + token)
.contentType(ContentType.JSON)
.when()
.get("/api/users/123")
.then()
.statusCode(200)
.body("name", equalTo("John Doe"))
.body("email", containsString("@"));
Python requests Example:
import requests
response = requests.get(
"https://api.example.com/users/123",
headers={"Authorization": "Bearer token"}
)
assert response.status_code == 200
assert response.json()["name"] == "John Doe"
Part 6: Testing Principles & Best Practices¶
Q45. What are the Seven Principles of Software Testing?¶
Answer:
| # | Principle | Description |
|---|---|---|
| 1 | Testing shows presence of defects | Testing can show bugs exist, not that they don't |
| 2 | Exhaustive testing is impossible | Can't test all combinations; use risk-based approach |
| 3 | Early testing | Start testing activities early to find bugs sooner |
| 4 | Defect clustering | Most defects found in small number of modules |
| 5 | Pesticide paradox | Same tests become ineffective over time; update tests |
| 6 | Testing is context dependent | Testing varies based on domain and application type |
| 7 | Absence-of-errors fallacy | Bug-free software can still fail to meet user needs |
Detailed Explanations:
1. Testing Shows Presence of Defects:
Testing can demonstrate that defects exist, but it can never prove the software is completely defect-free; finding zero bugs does not mean there are zero bugs.
2. Exhaustive Testing is Impossible:
Login form with:
- Email (infinite possibilities)
- Password (infinite possibilities)
- Browser (100+ options)
- OS (50+ options)
Total combinations = Infinite!
3. Early Testing (Shift Left):
Start testing activities at the requirements stage; defects found early are far cheaper to fix than those found later or in production.
4. Defect Clustering (Pareto Principle):
Roughly 80% of defects tend to concentrate in about 20% of modules, so focus regression and exploratory effort on those hotspots.
5. Pesticide Paradox:
Running the same test suite repeatedly eventually stops finding new defects; regularly review, update, and add test cases to keep testing effective.
Q46. What is the Testing Pyramid?¶
Answer: The Testing Pyramid is a framework that suggests how to balance different types of tests.
block-beta
columns 5
space:2 ui["UI/E2E Tests<br/>10% | Slow | Expensive"]:1 space:2
space:1 api["API/Integration Tests<br/>20% | Medium Speed"]:3 space:1
unit["Unit Tests<br/>70% | Fast | Cheap"]:5
style ui fill:#e8eaf6,stroke:#9fa8da,color:#3949ab
style api fill:#e3f2fd,stroke:#90caf9,color:#1565c0
style unit fill:#e0f2f1,stroke:#80cbc4,color:#00796b
| Level | Quantity | Speed | Cost | Stability |
|---|---|---|---|---|
| Unit | Many (70%) | Fast (ms) | Low | High |
| Integration/API | Some (20%) | Medium (s) | Medium | Medium |
| UI/E2E | Few (10%) | Slow (min) | High | Low |
Why This Shape: Unit Tests (Base):
- Test individual functions/methods
- Fast feedback (milliseconds)
- Easy to maintain
- Run frequently
- High coverage
Integration/API Tests (Middle):
- Test component interactions
- Moderate speed (seconds)
- Verify contracts between services
- Good balance of coverage and speed
UI/E2E Tests (Top):
- Test complete user workflows
- Slow (minutes)
- Brittle, prone to flakiness
- High maintenance cost
- Reserve for critical paths
Anti-Pattern - Ice Cream Cone
block-beta
columns 5
ui["UI Tests - Many | Slow | Expensive"]:5
space:1 api["API Tests - Few"]:3 space:1
space:2 unit["Unit - Almost None"]:1 space:2
style ui fill:#ffebee,stroke:#ef9a9a,color:#c62828
style api fill:#fff8e1,stroke:#ffe082,color:#f57f17
style unit fill:#e8f5e9,stroke:#a5d6a7,color:#2e7d32
Inverted pyramid = expensive, slow, hard to maintain!
Q47. What is Shift-Left Testing?¶
Answer: Shift-Left Testing means moving testing activities earlier in the development lifecycle.
%%{init: {'theme': 'base'}}%%
flowchart LR
subgraph traditional["Traditional Approach"]
direction LR
R1[Requirements] --> D1[Design] --> Dev1[Development] --> T1[Testing] --> Dep1[Deployment]
end
subgraph shiftleft["Shift-Left Approach"]
direction LR
R2[Requirements] --> D2[Design] --> Dev2[Development] --> T2[Testing] --> Dep2[Deployment]
QA1([🧪]) -.-> R2
QA2([🧪]) -.-> D2
QA3([🧪]) -.-> Dev2
QA4([🧪]) -.-> T2
end
style T1 fill:#c8e6c9,stroke:#2e7d32
style R2 fill:#e3f2fd,stroke:#1565c0
style D2 fill:#e3f2fd,stroke:#1565c0
style Dev2 fill:#e3f2fd,stroke:#1565c0
style T2 fill:#e3f2fd,stroke:#1565c0
Shift-Left Practices:
| Practice | Phase | Description |
|---|---|---|
| Requirements Review | Requirements | QA reviews for testability |
| Design Review | Design | QA identifies testing needs |
| TDD/BDD | Development | Write tests before code (see the sketch after this table) |
| Unit Testing | Development | Developers test their code |
| Code Reviews | Development | Catch issues before merge |
| Static Analysis | Development | Automated code scanning |
| Continuous Testing | CI/CD | Automated tests on every commit |
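A minimal sketch of the TDD row above: the tests are written first against a function that does not exist yet, then just enough code is added to make them pass. The password rule and names are illustrative, not a prescribed standard.
# Step 1 (red): write failing tests before any implementation exists
def test_rejects_short_passwords():
    assert is_strong_password("abc") is False

def test_accepts_long_mixed_passwords():
    assert is_strong_password("Str0ngPassw0rd") is True

# Step 2 (green): implement just enough to make the tests pass
def is_strong_password(password: str) -> bool:
    return (
        len(password) >= 8
        and any(c.isdigit() for c in password)
        and any(c.isalpha() for c in password)
    )

# Step 3 (refactor): clean up while keeping the tests green
pytest resolves is_strong_password at call time, so the tests can sit above the implementation in the same file; in practice they would usually live in a separate test module.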
Benefits
- Find defects earlier (cheaper to fix)
- Faster feedback loop
- Reduced rework
- Higher quality code
- Better collaboration
Shift-Left vs Shift-Right:
%%{init: {'theme': 'base'}}%%
flowchart LR
subgraph left["⬅️ Shift-Left (Earlier Testing)"]
L1[Requirements Reviews]
L2[Design Reviews]
L3[TDD/BDD]
L4[Unit Tests]
L5[Static Analysis]
end
subgraph center[" "]
DEV[Development<br/>Lifecycle]
end
subgraph right["Shift-Right ➡️ (Production Testing)"]
R1[Feature Flags]
R2[A/B Testing]
R3[Canary Deployments]
R4[Chaos Engineering]
R5[Production Monitoring]
end
left --> center --> right
style left fill:#e3f2fd,stroke:#1565c0
style right fill:#fff3e0,stroke:#e65100
style center fill:#f5f5f5,stroke:#757575
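A tiny sketch of the shift-right ideas of feature flags and canary rollouts shown in the diagram above. The environment variable name and the percentage-based rule are hypothetical.
import os

def new_checkout_enabled(user_id: int) -> bool:
    # Canary-style rollout: enable the feature for a small slice of users,
    # controlled at runtime by a (hypothetical) environment variable
    rollout_percent = int(os.getenv("NEW_CHECKOUT_ROLLOUT", "5"))
    return (user_id % 100) < rollout_percent

# Usage: gate the new code path and monitor it in production
if new_checkout_enabled(user_id=1234):
    print("serve new checkout flow")
else:
    print("serve existing checkout flow")
The point is that testing continues after deployment: the flag limits blast radius while production monitoring provides the verdict.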
Q48. What is the difference between Test Coverage and Code Coverage?¶
Answer:
| Aspect | Test Coverage | Code Coverage |
|---|---|---|
| Definition | % of requirements tested | % of code executed by tests |
| Focus | Business requirements | Source code |
| Measure | Test cases vs requirements | Statements/branches executed |
| Who Uses | QA, Business | Developers |
| Tools | RTM, test management | Istanbul, JaCoCo, Cobertura |
Code Coverage Types:
| Type | Description | Example |
|---|---|---|
| Statement Coverage | % of statements executed | All lines run at least once |
| Branch Coverage | % of branches executed | Both if/else paths tested |
| Function Coverage | % of functions called | All functions invoked |
| Line Coverage | % of lines executed | Similar to statement |
| Condition Coverage | % of boolean conditions | True/false for each condition |
Example:
function calculate(a, b, operation) { // Line 1
if (operation === 'add') { // Line 2 (branch)
return a + b; // Line 3
} else if (operation === 'subtract') {// Line 4 (branch)
return a - b; // Line 5
} // Line 6
return 0; // Line 7
}
// Test: calculate(2, 3, 'add')
// Statement coverage: 3/7 ≈ 43% (only lines 1-3 execute)
// Branch coverage: 1/3 ≈ 33% (only the 'add' branch is taken)
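A hedged Python re-expression of the same function, with one parametrized test per branch; the port to Python and the extra "multiply" input are illustrative. Under a branch-aware coverage tool this set would exercise every branch.
import pytest

def calculate(a, b, operation):
    if operation == "add":
        return a + b
    elif operation == "subtract":
        return a - b
    return 0  # default branch for unknown operations

# One case per branch: add, subtract, and the fall-through default
@pytest.mark.parametrize("a, b, operation, expected", [
    (2, 3, "add", 5),
    (5, 3, "subtract", 2),
    (2, 3, "multiply", 0),  # unknown operation exercises the default branch
])
def test_calculate_all_branches(a, b, operation, expected):
    assert calculate(a, b, operation) == expected
With coverage.py, running "coverage run --branch -m pytest" followed by "coverage report" shows whether both directions of each decision were taken, which is stricter than line coverage alone.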
Limitations of Code Coverage
- High coverage ≠ good tests
- Doesn't measure test quality
- Can miss edge cases
- Can be gamed (tests without assertions)
Q49. When should you stop testing?¶
Answer: Testing could continue indefinitely, so teams need explicit criteria to decide when to stop:
Exit Criteria Based:
- All planned test cases executed
- Required pass rate achieved
- No critical/high defects open
- Code coverage targets met
- Stakeholder sign-off obtained
Risk-Based:
- Remaining risk acceptable to business
- High-risk areas thoroughly tested
- Diminishing returns on bug discovery
Resource-Based:
- Budget exhausted
- Time deadline reached
- Resources unavailable
Practical Indicators: When the defect discovery rate flattens (finding fewer new bugs over time), consider stopping testing.
Decision Matrix:
| Factor | Stop Testing | Continue Testing |
|---|---|---|
| Critical bugs | None open | Open critical bugs |
| Test execution | 100% complete | Tests remaining |
| Pass rate | ≥95% | <95% |
| Defect rate | Declining | Still finding many bugs |
| Risk | Acceptable | High risk areas untested |
| Deadline | Reached | Time available |
Q50. What is the difference between Quality Assurance and Quality Control?¶
Answer:
| Aspect | Quality Assurance (QA) | Quality Control (QC) |
|---|---|---|
| Focus | Process | Product |
| Nature | Preventive | Detective |
| When | Throughout SDLC | After product is built |
| Goal | Prevent defects | Find defects |
| Approach | Process-oriented | Product-oriented |
| Activities | Process definition, audits, training | Testing, inspection, reviews |
| Responsibility | Entire team | QC/Testing team |
QA Activities:
- Define testing processes
- Create standards and guidelines
- Conduct process audits
- Training and mentoring
- Process improvement
- Metrics and reporting
QC Activities:
- Test case design
- Test execution
- Defect logging
- Code reviews
- Inspections
- Product verification
Relationship:
flowchart TB
subgraph QM["Quality Management"]
direction TB
subgraph QA["Quality Assurance (QA)"]
direction TB
subgraph QC["Quality Control (QC)"]
QCact["Testing, Inspection, Verification"]
end
QAact["Processes, Standards, Audits, Training"]
end
QMact["Strategy, Planning, Governance"]
end
Analogy
- QA: Building the car manufacturing process correctly
- QC: Inspecting each car that comes off the assembly line
Part 7: Behavioral & Situational Questions¶
Q51. Tell me about yourself (for QA/SDET role).¶
Answer Framework: Structure your introduction around:
- Current role and experience
- Key skills and achievements
- Why QA/SDET
- Why this company/role
Sample Answer
"I'm a QA Engineer with 5 years of experience in software testing. Currently at [Company], I lead testing efforts for a microservices-based e-commerce platform.
My expertise includes:
- Automation using Selenium and Playwright
- API testing with Postman and REST Assured
- CI/CD integration with Jenkins and GitHub Actions
- Agile methodologies and cross-functional collaboration
Key achievement: I built an automated regression suite that reduced testing time from 3 days to 4 hours, catching 30% more bugs before release.
I'm passionate about quality engineering and believe testing should be proactive, not reactive. I'm excited about this role because [specific reason related to company/team/technology]."
Tips
- Keep it 1-2 minutes
- Be specific with numbers/metrics
- Connect your experience to the role
- Show enthusiasm
Q52. Describe a challenging bug you found.¶
Answer Framework (STAR Method):
- Situation: Context and background
- Task: Your responsibility
- Action: What you did
- Result: Outcome and impact
Sample STAR Answer
Situation: "During system testing of our payment module, transactions were randomly failing with no clear pattern. The issue occurred in only 5% of cases."
Task: "I was responsible for identifying the root cause and ensuring it was fixed before the release deadline."
Action: "I started by analyzing the failed transactions and noticed they all happened within milliseconds of each other. I hypothesized a race condition in the concurrent payment processing.
I created test scenarios that simulated multiple simultaneous payments for the same user. I also added detailed logging to track the exact sequence of operations.
The logs revealed that when two payments processed simultaneously, both read the same balance, leading to incorrect calculations. I documented the issue with reproduction steps and worked with the developer to implement database-level locking."
Result: "The fix was implemented and verified within 2 days. We added automated concurrency tests to our regression suite. This prevented potential financial discrepancies that could have affected thousands of users and saved the company from regulatory issues."
Q53. How do you handle tight deadlines?¶
Answer: Key Points to Cover:
- Prioritization approach
- Communication strategy
- Risk management
- Quality balance
Sample Response
"When facing tight deadlines, I follow a structured approach:
1. Prioritize ruthlessly:
- Focus on high-risk, high-impact areas first
- Use risk-based testing to identify critical paths
- Ensure smoke and sanity tests are always run
2. Communicate proactively:
- Inform stakeholders early about testing constraints
- Provide clear status updates with data
- Flag risks and potential quality trade-offs
3. Optimize efficiency:
- Leverage automation for regression tests
- Parallelize independent testing activities
- Eliminate redundant test cases
4. Make informed trade-offs:
- Document what's being deferred
- Ensure critical functionality is thoroughly tested
- Create a follow-up plan for deferred testing
Recently, we had a 2-week sprint compressed to 1 week. I created a priority matrix with the product owner, automated 40% of our critical path tests, and coordinated with developers for early builds. We delivered on time with all P1 features tested and documented P2 items for immediate post-release testing."
Q54. How do you prioritize test cases?¶
Answer: Prioritization Factors:
| Factor | Weight |
|---|---|
| Business criticality | High |
| Risk of failure | High |
| Frequency of use | Medium |
| Visibility to users | Medium |
| Complexity | Medium |
| Historical defects | Medium |
| Regulatory requirements | High |
Prioritization Approach
Priority 1 (Must Test):
- Critical business functions (login, checkout)
- Security-related features
- Regulatory compliance
- Features with high defect history
Priority 2 (Should Test):
- Frequently used features
- New functionality
- Integration points
- Customer-facing features
Priority 3 (Could Test):
- Edge cases
- Cosmetic validations
- Administrative features
- Low-risk changes
Priority 4 (Test If Time):
- Nice-to-have scenarios
- Exploratory testing
- Performance edge cases
Sample Response
"I use a risk-based prioritization matrix:
1. Assess each test case on:
- Business impact (High/Medium/Low)
- Likelihood of failure (High/Medium/Low)
- User exposure (Many/Some/Few users)
2. Create priority buckets:
- P1: High business impact + High failure likelihood
- P2: High impact + Low likelihood OR Low impact + High likelihood
- P3: Low impact + Low likelihood
3. Consider context:
- What changed in this release?
- What broke in the past?
- What's the release risk tolerance?
This approach ensures we always test what matters most, even under time constraints."
Q55. Describe a conflict with a developer and how you resolved it.¶
Answer Framework (STAR Method):
Sample Response
Situation: "A developer rejected a bug I reported, claiming it was 'working as designed.' The issue was that users couldn't recover from a specific error state without clearing their browser cache—poor UX in my view."
Task: "I needed to advocate for the user experience while maintaining a good working relationship with the developer."
Action: "Instead of escalating immediately, I:
- Listened to understand: I asked the developer to walk me through the technical reasoning. I learned there were complex state management constraints.
- Gathered evidence: I collected user complaints from support tickets and recorded a video showing the confusing user experience.
- Focused on the problem, not positions: I said, 'I understand the technical constraints. How might we solve this for the user?'
- Proposed solutions: I suggested three options with different implementation costs and we discussed trade-offs.
- Involved the product owner: We jointly presented the issue and options, letting business priorities guide the decision."
Result: "The product owner agreed it was a UX issue worth fixing. The developer implemented a lightweight solution, and we strengthened our working relationship. We now have a collaborative 'bug triage' session to prevent similar conflicts."
Key Takeaways
- Stay objective and professional
- Focus on data and user impact
- Collaborate, don't confront
- Propose solutions, not just problems
Q56. How do you stay updated with testing trends?¶
Answer: Resources and Methods:
| Category | Examples |
|---|---|
| Blogs/Websites | Ministry of Testing, TestProject, Software Testing Help |
| Podcasts | Test Guild, Testing Peers |
| Communities | r/QualityAssurance, LinkedIn groups |
| Conferences | SeleniumConf, STAREAST, Agile Testing Days |
| Certifications | ISTQB, CSTE, AWS Certified |
| Hands-on | Side projects, open source contributions |
| Books | "Agile Testing" by Crispin, "Explore It!" by Hendrickson |
Sample Response
"I believe continuous learning is essential in our field. Here's how I stay current:
Weekly:
- Follow industry blogs (Ministry of Testing, Automation Panda)
- Participate in testing communities on LinkedIn and Reddit
- Listen to testing podcasts during commute
Monthly:
- Attend local QA meetups
- Experiment with new tools in personal projects
- Read articles on emerging trends (AI in testing, shift-left)
Quarterly:
- Take online courses (I recently completed API automation in Python)
- Contribute to open-source test frameworks
- Present learnings to my team
Recently, I learned about Playwright's new features and proposed it for our UI automation. After a POC showing 40% faster execution, we migrated our framework. This kept our testing approach modern and efficient."
Q57. Why do you want to be an SDET?¶
Answer:
Key Points to Cover
- Passion for both development and testing
- Technical skills application
- Impact on quality at scale
- Career growth perspective
Sample Response
"I want to be an SDET because I'm passionate about both coding and quality. Here's why this role excites me:
1. Technical Impact: I love writing code that prevents bugs at scale. Building test frameworks that hundreds of developers use daily is deeply satisfying.
2. Full-Stack Perspective: SDETs understand the entire system—frontend, backend, APIs, infrastructure. This holistic view helps catch issues others miss.
3. Problem-Solving: I enjoy the detective work of finding bugs and the engineering challenge of building reliable automation. Both aspects keep me engaged.
4. Quality Advocacy: I want to be embedded in the development process, shifting quality left rather than being a gatekeeper at the end.
5. Career Growth: The SDET path combines software engineering skills with quality expertise—a rare and valuable combination as companies invest more in automation.
My background in [relevant experience] has prepared me well. I've built [specific project/framework] and reduced regression time by [X%]. I'm excited to bring this passion to your team."
Q58. Describe a time you improved a process.¶
Answer Framework (STAR Method):
Sample Response
Situation: "Our team was running all 500 regression tests manually before each release, taking 3 full days. This created a bottleneck and delayed releases."
Task: "I proposed automating the regression suite and was given the opportunity to lead the initiative."
Action: "I approached this systematically:
- Analysis:
  - Categorized tests by priority and automation feasibility
  - Identified 300 tests (60%) suitable for automation
  - Selected Selenium with Java based on team skills
- Planning:
  - Started with 50 high-value, stable tests
  - Created a reusable framework with Page Object Model
  - Set up CI/CD integration with Jenkins
- Execution:
  - Automated in sprints, adding 50 tests per sprint
  - Trained team members to contribute
  - Established coding standards and code reviews
- Monitoring:
  - Tracked flaky tests and fixed root causes
  - Measured time savings and bug escape rate"
Result: "Within 3 months:
- Automated 300 tests running in 45 minutes
- Reduced regression testing from 3 days to 4 hours
- Bug escape rate decreased by 35%
- Team could run regression on every build, not just releases
This freed up time for exploratory testing and caught integration issues earlier. The framework is still in use today."
Q59. How do you handle ambiguous requirements?¶
Answer:
Approach
1. Identify Ambiguities
- Read requirements critically
- Note unclear terms, missing details
- Compare with similar features
2. Research First
- Check existing documentation
- Review similar features
- Look at competitor implementations
3. Ask Targeted Questions
- Prepare specific questions, not "what should this do?"
- Propose options: "Should it work like A or B?"
- Document answers for future reference
4. Document Assumptions
- If can't get answers, document assumptions
- Share assumptions before testing
- Update when clarified
5. Test the Boundaries
- Use exploratory testing for unknown areas
- Test edge cases to expose gaps
- Report findings as questions, not just bugs
Sample Response
"Ambiguous requirements are common, and I've developed a structured approach:
Example: A requirement said 'users should be able to filter products.' No details on filter criteria, behavior, or UI.
My approach:
- First, I researched similar e-commerce sites and documented common filter patterns.
- I prepared specific questions:
  - Which attributes? (price, category, brand)
  - Multiple selections? (price AND category)
  - URL-based filters for sharing?
  - Performance with 10K products?
- I created a quick mockup of filter behaviors and reviewed it with the product owner.
- For items without answers, I documented assumptions and shared them before testing.
- During testing, I used exploratory techniques to find edge cases and proposed them as enhancements.
This proactive approach reduced clarification cycles and gave developers clearer acceptance criteria."
Q60. What's your approach to learning a new tool?¶
Answer:
Learning Framework
Phase 1: Quick Start (Day 1-2)
- Official getting started guide
- Install and run "Hello World"
- Understand core concepts
- Watch intro tutorials
Phase 2: Deep Dive (Week 1)
- Official documentation
- Follow structured course
- Build small practice project
- Join community forums
Phase 3: Apply (Week 2-4)
- Use in real work (POC)
- Solve actual problems
- Learn advanced features
- Share knowledge with team
Phase 4: Master (Ongoing)
- Contribute to community
- Create best practices
- Mentor others
- Stay updated with releases
Sample Response
"I follow a structured yet hands-on approach:
Recent example: Learning Playwright for UI automation
Week 1 - Foundations:
- Read official documentation (excellent for Playwright)
- Followed quick start to automate a simple flow
- Watched 2-3 YouTube tutorials for different perspectives
- Understood key concepts: selectors, assertions, fixtures
Week 2 - Practice:
- Set up a practice project with our tech stack
- Implemented page object model
- Explored advanced features: API mocking, visual testing
- Compared with our existing Selenium framework
Week 3 - Real Application:
- Automated 10 critical test cases from our regression suite
- Documented patterns and best practices
- Created a POC presentation for the team
Week 4 - Knowledge Sharing:
- Led a lunch-and-learn session
- Created a starter template for the team
- Wrote migration guide from Selenium
Within a month, I was proficient enough to lead our framework migration. I believe the best way to learn is by doing—with structured guidance initially, then diving into real problems."
Interview Tips¶
General Tips¶
- Use STAR Method:
  - Situation: Set the context
  - Task: Describe your responsibility
  - Action: Explain what you did (focus here)
  - Result: Share the outcome with metrics
- Be Specific:
  - Use numbers and metrics
  - Name specific tools and technologies
  - Give concrete examples
- Know Your Resume:
  - Be ready to discuss any item listed
  - Have examples for each skill claimed
  - Prepare for deep dives on projects
- Ask Clarifying Questions:
  - For technical questions, clarify assumptions
  - For behavioral questions, confirm the scenario
- Think Aloud:
  - Share your reasoning process
  - Interviewers want to see how you think
Technical Preparation¶
| Area | Topics to Review |
|---|---|
| Testing Fundamentals | STLC, SDLC, testing types, test design |
| Automation | Framework design, tool-specific questions |
| API Testing | REST, HTTP methods, status codes |
| Performance | Load, stress, spike testing concepts |
| CI/CD | Pipeline understanding, integration |
| Agile | Scrum ceremonies, QA role |
Questions to Ask the Interviewer¶
- "What does a typical day look like for a QA/SDET on this team?"
- "What's the test automation strategy and tech stack?"
- "How does QA collaborate with development and product?"
- "What are the biggest quality challenges the team faces?"
- "What does career growth look like for this role?"
- "How do you measure testing success?"
Red Flags to Avoid¶
Avoid These Mistakes
- Blaming others (developers, previous employers)
- Being vague or generic in answers
- Not knowing your own resume
- Showing no curiosity or questions
- Negative attitude about testing ("I want to move to development")
- Inability to admit mistakes or gaps
Final Checklist¶
Before Your Interview
- Review your resume and projects
- Prepare 5-7 STAR stories
- Research the company and role
- Review testing fundamentals
- Prepare questions to ask
- Practice explaining technical concepts simply
- Test your video/audio for virtual interviews
Good luck with your interview!