TestDino

Playwright Interview Questions 2026 for QAs and SDETs

Playwright is the most popular end-to-end testing tool for new web projects.

79 questions with practical answers and code, organized into 10 sections, from fundamentals to the AI ecosystem.


Microsoft maintains it as open source; it runs the same tests on Chromium, Firefox, and WebKit through a single API, and the bundled test runner removes the third-party glue (Mocha, Jest, allure-reporter) that older Selenium projects used to need.

In 2026, "Playwright experience" sits in nearly every SDET and QA automation listing. The interview bar has moved with it. Recruiters want architecture, trade-offs, and a real debugging story, not just "I have written tests."

This guide includes Playwright interview questions, grouped by topic, so you can jump to where you want to focus: locators, waits, fixtures, debugging, CI, the AI ecosystem, and the rest.

Useful whether you are a fresher prepping your first SDET interview, a Selenium or Cypress engineer making the switch, or a senior SDET going for a lead role.

Playwright Fundamentals & Architecture Interview Questions

Every Playwright interview opens with these. They probe whether you understand what Playwright actually is, why it exists, and how it talks to the browser. Get fluent here, because every later question builds on these answers.

Q1. What is Playwright?

Playwright is an open-source browser automation library from Microsoft. It lets you write scripts that drive Chromium, Firefox, and WebKit through a single API, which makes it the standard choice for cross-browser end-to-end testing in 2026.

It is not just a library though. It ships with its own test runner (@playwright/test) that gives you parallel execution, retries, fixtures, HTML reports, and Trace Viewer out of the box. That is why most teams adopt it as a complete testing framework rather than a drop-in replacement for Selenium.

Q2. Why would you choose Playwright over Selenium?

The short answer in an interview: speed, stability, and a better developer experience.

The version with concrete reasons:

  • Playwright drives the browser over a single bidirectional WebSocket per browser, using browser-specific debugging protocols. Selenium sends each command as a separate HTTP request through the W3C WebDriver protocol. The single-connection model is faster and produces fewer race conditions.
  • Auto-wait is built into every action. You almost never need explicit waits or Thread.sleep style delays.
  • Locators like getByRole, getByText, and getByLabel map to how users find things, so tests break less often when the UI changes.
  • The bundled test runner gives you parallelism, retries, fixtures, and reporting without bolting on a third-party library.
  • Trace Viewer captures DOM snapshots, network calls, and console logs for every step, which makes debugging CI failures dramatically easier.
  • Real WebKit support means you can test Safari engine behaviour without owning a Mac for every developer.

Q3. What programming languages does Playwright support?

Playwright officially supports JavaScript, TypeScript, Python, Java, and .NET (C#). All language bindings share the same underlying automation engine, so the API surface and behaviour are nearly identical.

In the 2026 job market, the JavaScript and TypeScript binding is by far the most adopted. It gets new features first, has the largest community, and dominates Playwright job postings. Unless your team has a strong Java or Python preference, TypeScript is the safe default.

Q4. Which browsers does Playwright support?

Playwright supports 3 browser engines: Chromium (which covers Chrome and Edge), Firefox, and WebKit (the engine behind Safari). Tests can run on Windows, macOS, and Linux.

By default, Playwright downloads its own pinned builds of each engine. If you need to test against the production version of Chrome or Edge that your users actually run, you can switch to the stable channel through the channel option:

typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'chromium', use: { browserName: 'chromium' } },
    { name: 'chrome', use: { channel: 'chrome' } },
    { name: 'edge', use: { channel: 'msedge' } },
  ],
});

Q5. What types of testing can you do with Playwright?

End-to-end UI testing is the primary use case, but Playwright also covers API testing through APIRequestContext, component testing for React, Vue, and Svelte, visual regression with toHaveScreenshot, and behaviour-driven testing through the playwright-bdd package when you need Gherkin scenarios.

What it does *not* do: native mobile app testing. Playwright drives mobile *web* through device emulation and can connect to real devices through cloud providers, but for native iOS or Android apps you still need Appium, Maestro, or Detox. Be ready for this question because interviewers love testing whether candidates know Playwright's actual scope.
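The API-testing piece is worth being able to sketch in an interview. A minimal example using the built-in request fixture; the endpoint, payload, and response shape here are assumptions, not a real service:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical endpoint: any JSON API follows the same pattern.
test('create a user through the API', async ({ request }) => {
  const response = await request.post('https://api.example.com/users', {
    data: { name: 'Test User', email: 'user@example.com' },
  });

  expect(response.ok()).toBeTruthy();

  const body = await response.json();
  expect(body.email).toBe('user@example.com');
});
```

The request fixture is an APIRequestContext, so the same test file can mix UI steps and direct API calls.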

Q6. What is async/await and why does Playwright use it?

Almost every Playwright method returns a Promise because browser actions happen asynchronously. You await each action, and the surrounding function must be marked async for that to work.

typescript
test('fill the form', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByRole('button', { name: 'Sign in' }).click();
});

The Playwright-specific trap is the missing await. Without it, the action still kicks off, but the test moves on before it finishes:

typescript
// BUG: no await on click
test('flaky', async ({ page }) => {
  await page.goto('/login');
  page.getByRole('button', { name: 'Sign in' }).click();   // fires and forgets
  await expect(page).toHaveURL(/dashboard/);               // races with the click
});

This is one of the most common sources of intermittent failure in Playwright suites. eslint-plugin-playwright catches missing awaits at lint time and is worth adding to every project on day one.

Q7. Explain Playwright's architecture.

Playwright uses a single bidirectional WebSocket between the test process and a browser-side driver. Selenium, by contrast, sends each command as a separate HTTP request through the W3C WebDriver protocol.

The single connection is what enables Playwright's speed and stability. There is no per-command handshake. The driver inside the browser receives a stream of commands and pushes back events, screenshots, and DOM snapshots in near real time.

Concretely, a test run looks like this:

  1. The test runner launches a browser (Chromium, Firefox, or WebKit).
  2. Playwright opens a single WebSocket to the browser's debugging interface (Chrome DevTools Protocol for Chromium, custom protocols patched into Firefox and WebKit).
  3. Every action and assertion serializes as a message on that socket.
  4. The browser executes the action in-process and replies with the result.
  5. Trace data, network events, and console logs stream back over the same connection.

This is also why Playwright can implement features Selenium struggles with: web-first assertions, deterministic auto-wait, network interception, accessibility tree access for getByRole, and the AI ecosystem features (MCP, browser bind, screencast) introduced in Playwright 1.59.
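You can observe the single-socket model directly with launchServer and connect, which is also the mechanism behind remote execution. A sketch, assuming a local run:

```typescript
import { chromium } from '@playwright/test';

async function main() {
  // Start a browser that exposes its driver on a WebSocket endpoint
  const server = await chromium.launchServer();
  console.log(server.wsEndpoint()); // ws://127.0.0.1:<port>/<guid>

  // Every command from this client travels over that one socket
  const browser = await chromium.connect(server.wsEndpoint());
  const page = await browser.newPage();
  await page.goto('https://example.com');

  await browser.close();
  await server.close();
}

main();
```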

Setup, Installation & Configuration Interview Questions

Practical questions an interviewer uses to confirm you have actually run Playwright on something. Most are quick, single-fact questions, but they catch out candidates who only watched tutorials.

Q8. How do you install Playwright in a new project?

The official initializer does everything in one command. It creates the folder structure, installs @playwright/test, downloads the 3 browser engines, and writes a working sample test.

bash
npm init playwright@latest

The wizard asks whether you want TypeScript or JavaScript, where the tests folder should live, and whether to add a GitHub Actions workflow. Answering yes to all the defaults gives you a runnable project in about a minute.

If you are adding Playwright to an existing project, npm install -D @playwright/test followed by npx playwright install does the same job in 2 steps. Quick refresher in case it comes up: npm installs packages, npx runs the binaries those packages ship. npx playwright test works because Playwright drops a playwright binary into node_modules/.bin.

Q9. What does the project structure look like after init?

bash
my-project/
├── tests/                  # your test files (*.spec.ts)
├── tests-examples/         # demo tests, safe to delete
├── playwright.config.ts    # central configuration
├── package.json
└── .github/workflows/      # CI config (if you opted in)

The 2 files you will touch the most are playwright.config.ts and whatever you create inside tests/. The tests-examples/ folder is just a demo and most teams delete it on day one. The playwright-report/ folder appears after your first run and holds the HTML report.

Q10. What is the difference between headless and headed mode?

Headless mode runs the browser without a visible window. It is faster, uses less memory, and is the default mode for CI pipelines.

Headed mode opens a real browser window so you can watch the test execute. It is slower but invaluable when you are debugging locally or showing a stakeholder what a test does.

bash
# Run all tests in headed mode
npx playwright test --headed

# Run a specific test file in headed mode
npx playwright test login.spec.ts --headed

You can also force headed mode in config with use: { headless: false }, but most teams keep headless as the CI default and toggle headed locally with the flag.

Q11. What is in playwright.config.ts?

playwright.config.ts is the single source of truth for how your suite runs. The fields you will edit most often:

typescript
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  fullyParallel: true,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 4 : undefined,
  reporter: [['html'], ['list']],
  use: {
    baseURL: 'https://example.com',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});

projects is what enables cross-browser runs, use defines defaults that apply to every test, and reporter controls what kind of output you get.

Browser, Context & Authentication Interview Questions

Browser isolation and authentication strategy is where Playwright separates from Selenium-era frameworks. Senior interviews test whether you can reason about contexts, sessions, and multi-role flows without slowing the suite down.

Q12. What is the difference between Browser, BrowserContext, and Page?

These 3 objects form a hierarchy, and isolating them properly is what keeps Playwright tests stable.

  • Browser is the actual browser engine instance. Launching it is expensive, so Playwright reuses one browser across many tests.
  • BrowserContext is an isolated session inside that browser. Think of it as a fresh incognito window with its own cookies, localStorage, and cache. Each test gets its own context by default, which is why one test cannot pollute another.
  • Page is a single tab inside a context. A context can hold multiple pages, which is how you handle popups and multi-tab flows.

Once you internalize this hierarchy, multi-user tests, storage state, and parallel isolation all start to make sense.
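The hierarchy in code, using the library API directly rather than the test runner:

```typescript
import { chromium } from '@playwright/test';

async function main() {
  // Browser: one expensive engine process, launched once
  const browser = await chromium.launch();

  // BrowserContext: cheap, fully isolated sessions inside that process
  const contextA = await browser.newContext();
  const contextB = await browser.newContext();

  // Page: tabs inside a context; contextA's cookies never leak to contextB
  const pageA = await contextA.newPage();
  const pageB = await contextB.newPage();

  await pageA.goto('https://example.com');
  await pageB.goto('https://example.com');

  // Closing the browser tears down every context and page inside it
  await browser.close();
}

main();
```

In @playwright/test you rarely write this by hand: the page fixture hands each test a page inside a fresh context automatically.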

Q13.newPage versus newContext: when do you use which?

browser.newPage() opens a tab in a new, default context. browser.newContext() creates a fully isolated context, and you call context.newPage() to get a tab inside it.

The difference matters for state. A new context starts with no cookies, no localStorage, no cache. 2 contexts cannot see each other's session. 2 pages in the same context share everything.

typescript
// 2 isolated users at the same time
const userContext = await browser.newContext();
const adminContext = await browser.newContext({ storageState: 'admin-auth.json' });

const userPage = await userContext.newPage();
const adminPage = await adminContext.newPage();

// userPage and adminPage are fully independent: cookies, storage, network state

Use newPage() for quick scripts. Use newContext() whenever you need isolation, multiple users, different storage states, or different device emulations.

Q14.How do you reuse login state with storageState?

Logging in through the UI in every test is slow and brittle. The storageState pattern logs in once, saves cookies and localStorage to a JSON file, then injects that file into every test that needs an authenticated session.

typescript
// global-setup.ts
import { chromium, type FullConfig } from '@playwright/test';

export default async function globalSetup(config: FullConfig) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('hunter2');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await page.waitForURL(/dashboard/);
  await page.context().storageState({ path: 'auth.json' });
  await browser.close();
}

// playwright.config.ts
export default defineConfig({
  globalSetup: './global-setup',
  use: { storageState: 'auth.json' },
});

Now every test starts authenticated without running the login flow. For multi-role test suites, save one JSON per role and switch with test.use({ storageState: 'admin.json' }).

Q15. How do you set up multi-user or multi-role tests?

2 roles, 2 contexts, 2 pages, 1 test. This is how you test things like a customer placing an order while an admin approves it.

typescript
test('admin approves customer order', async ({ browser }) => {
  const customerContext = await browser.newContext({ storageState: 'customer-auth.json' });
  const adminContext = await browser.newContext({ storageState: 'admin-auth.json' });

  const customer = await customerContext.newPage();
  const admin = await adminContext.newPage();

  await customer.goto('/orders/new');
  await customer.getByRole('button', { name: 'Place order' }).click();
  await expect(customer.getByText('Order placed')).toBeVisible();

  await admin.goto('/admin/orders/pending');
  await admin.getByRole('button', { name: 'Approve' }).first().click();
  await expect(customer.getByText('Order approved')).toBeVisible();

  await customerContext.close();
  await adminContext.close();
});

The 2 contexts are real isolated sessions. They cannot see each other's cookies, but they can both observe the application changing in real time.

Q16.How do you handle multiple tabs or popup windows?

Use the one-shot wait pattern: set up waitForEvent, fire the action, await the resulting page.

typescript
test('opens terms in a new tab', async ({ context, page }) => {
  await page.goto('/signup');

  const [popup] = await Promise.all([
    context.waitForEvent('page'),
    page.getByRole('link', { name: 'Terms of service' }).click(),
  ]);

  await popup.waitForLoadState();
  await expect(popup).toHaveURL(/terms/);
  await expect(popup.getByRole('heading', { name: 'Terms' })).toBeVisible();
});

context.waitForEvent('page') resolves the next time a new page is created in the context, whether it was opened via target="_blank", window.open, or any other mechanism. After this point, the original page and the popup are independent locator scopes. If you expect multiple popups in a single test, call waitForEvent('page') again or switch to context.on('page', handler) for a persistent listener.
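For flows that open several windows, the persistent listener reads more naturally than repeated one-shot waits. A sketch; the URL and button name are assumptions:

```typescript
import { test, expect, type Page } from '@playwright/test';

test('tracks every window the flow opens', async ({ context, page }) => {
  const openedPages: Page[] = [];

  // Fires once per new page in this context, for the life of the test
  context.on('page', popup => openedPages.push(popup));

  await page.goto('/reports');
  await page.getByRole('button', { name: 'Open all reports' }).click();

  // Poll until the app has finished spawning windows
  await expect.poll(() => openedPages.length).toBeGreaterThanOrEqual(2);
});
```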

Q17. How do you scale storageState across multiple roles?

A single auth.json works for 1 user. Real apps have multiple roles (admin, editor, viewer, customer, partner). The pattern is one globalSetup that logs in once per role and saves a separate state file.

typescript
// global-setup.ts
import { chromium } from '@playwright/test';

const ROLES = [
  { name: 'admin',    email: 'admin@example.com',    password: process.env.ADMIN_PW! },
  { name: 'editor',   email: 'editor@example.com',   password: process.env.EDITOR_PW! },
  { name: 'customer', email: 'customer@example.com', password: process.env.CUSTOMER_PW! },
];

export default async function globalSetup() {
  const browser = await chromium.launch();
  for (const role of ROLES) {
    const context = await browser.newContext();
    const page = await context.newPage();
    await page.goto('https://example.com/login');
    await page.getByLabel('Email').fill(role.email);
    await page.getByLabel('Password').fill(role.password);
    await page.getByRole('button', { name: 'Sign in' }).click();
    await page.waitForURL(/dashboard/);
    await context.storageState({ path: `auth/${role.name}.json` });
    await context.close();
  }
  await browser.close();
}

typescript
// In specs, switch role per file or describe block
test.use({ storageState: 'auth/admin.json' });
test('admin can delete users', async ({ page }) => { /* ... */ });

For very large suites, a worker-scoped fixture that logs in once per worker beats globalSetup because it parallelizes the auth step too.
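A sketch of that worker-scoped fixture, following the shape of Playwright's documented auth pattern. The login URL, selectors, and file paths are assumptions:

```typescript
import { test as base } from '@playwright/test';

export const test = base.extend<{}, { workerStorageState: string }>({
  // Runs once per worker, not once per test
  workerStorageState: [async ({ browser }, use, workerInfo) => {
    const fileName = `auth/worker-${workerInfo.workerIndex}.json`;

    const page = await browser.newPage();
    await page.goto('https://example.com/login');
    await page.getByLabel('Email').fill('user@example.com');
    await page.getByLabel('Password').fill(process.env.TEST_PW!);
    await page.getByRole('button', { name: 'Sign in' }).click();
    await page.waitForURL(/dashboard/);

    await page.context().storageState({ path: fileName });
    await page.close();

    await use(fileName);
  }, { scope: 'worker' }],

  // Every test in this worker starts from the saved state
  storageState: ({ workerStorageState }, use) => use(workerStorageState),
});
```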

Q18. How do you set up token-based authentication via API for tests?

UI login is slow. For modern apps that issue tokens, hit the auth endpoint directly and inject the token into the browser context.

typescript
import { test as base, request, type Page } from '@playwright/test';

type Fixtures = { authedPage: Page };

export const test = base.extend<Fixtures>({
  authedPage: async ({ browser }, use) => {
    // 1. Get a token via API
    const apiContext = await request.newContext({ baseURL: 'https://api.example.com' });
    const tokenResponse = await apiContext.post('/auth/login', {
      data: { email: 'user@example.com', password: process.env.TEST_PW },
    });
    const { token } = await tokenResponse.json();

    // 2. Inject the token into a fresh browser context
    const context = await browser.newContext({
      extraHTTPHeaders: { Authorization: `Bearer ${token}` },
    });
    const page = await context.newPage();

    // 3. If the app stores the token in localStorage instead of a header
    await page.addInitScript(t => localStorage.setItem('auth_token', t), token);

    await use(page);
    await context.close();
  },
});

This pattern shaves 5 to 10 seconds off every test that needs an authenticated session. On a 500-test suite, that adds up to over an hour saved per CI run.

Locators, Selectors & Strict Mode Interview Questions

Locators are the foundation of every Playwright test, and the place most flaky tests are born. Expect questions on which `getBy*` API to reach for first, how strict mode catches ambiguous matches, and how to write selectors that survive UI refactors.

Q19. What is a locator in Playwright?

A locator is a lazy, reusable reference to an element on the page. It does not perform a query at creation time. Instead, it re-runs the query every time you act on it, which is what makes Playwright tests resilient to elements being re-rendered.

typescript
const loginButton = page.getByRole('button', { name: 'Sign in' });
await loginButton.click();
await expect(loginButton).toBeHidden();

The same locator works for both the action and the assertion. Locators also auto-wait for the element to be ready before any action, which removes the need for manual waits in most flows.

Q20. How do you use getByRole?

getByRole finds elements by their ARIA role and accessible name. It is the most recommended locator strategy in Playwright because it matches how a real user (or assistive technology) finds the element.

typescript
// A button with the visible text "Sign in"
await page.getByRole('button', { name: 'Sign in' }).click();

// A link with the visible text "Forgot password"
await page.getByRole('link', { name: 'Forgot password' }).click();

// A textbox with an associated label "Email"
await page.getByRole('textbox', { name: 'Email' }).fill('user@example.com');

If your app is even mildly accessible, getByRole will work for most interactive elements. When it does not, that often signals an accessibility bug worth raising.

Q21. How do you use getByText?

getByText finds elements by their visible text content. It is the natural fit for things like flash messages, link text, or any element that does not have a clear ARIA role.

typescript
// Substring match (default)
await expect(page.getByText('Welcome back')).toBeVisible();

// Exact match
await page.getByText('Sign in', { exact: true }).click();

// Regex match
await expect(page.getByText(/Order #\d{4}/)).toBeVisible();

The substring match is convenient but can be too greedy on text-heavy pages. Use { exact: true } or a regex when you need precision.

Q22. How do you use getByLabel?

getByLabel finds form controls by their associated <label>. It is the cleanest way to target inputs because it works the same way a sighted user fills out a form: read the label, type into the field next to it.

typescript
await page.getByLabel('Email').fill('user@example.com');
await page.getByLabel('Password').fill('hunter2');
await page.getByLabel('Remember me').check();

Because it relies on the <label for="..."> association or aria-labelledby, getByLabel only works on properly-labelled inputs. If a developer skipped the label, this is another locator that will surface real accessibility gaps in your app.

Q23. How do you use getByTestId?

getByTestId finds elements by a dedicated test attribute, by default data-testid. It is the most resilient locator because it is decoupled from visual changes, copy edits, and CSS class renames.

html
<!-- Markup: <button data-testid="checkout-button">Checkout</button> -->
typescript
await page.getByTestId('checkout-button').click();

You can rename the attribute through testIdAttribute in the config if your team uses something like data-qa instead. The trade-off: test IDs add noise to production HTML, and they only work if the dev team buys into adding them. Most strong frameworks combine getByRole first, with getByTestId as a fallback for elements that have no clear semantic role.
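The rename is a one-line config change. data-qa here is just an example attribute name:

```typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // getByTestId now resolves data-qa instead of data-testid
    testIdAttribute: 'data-qa',
  },
});
```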

Q24. What is the difference between page.locator() and the deprecated $ and $$ selectors?

Older Playwright code used page.$('selector') and page.$$('selector'), which returned ElementHandle objects. Those handles are bound to a specific moment in the DOM. If the element is re-rendered, the handle goes stale and the next action throws.

page.locator('selector') returns a Locator instead. Locators are lazy: they re-query the DOM on every action and auto-retry until the element is actionable. This is what makes them stable on modern single-page apps where React, Vue, or Svelte constantly re-renders the tree.

The $ and $$ APIs still work for backward compatibility but the official guidance is to use locator(), getByRole(), and friends in all new code.
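Side by side, with a hypothetical .save-button selector and URL:

```typescript
import { test } from '@playwright/test';

test('handle versus locator', async ({ page }) => {
  await page.goto('/settings');

  // Old style: ElementHandle pins one specific DOM node
  const handle = await page.$('.save-button');
  await handle?.click(); // goes stale if the framework re-renders the node

  // Modern style: Locator re-queries the live DOM on every action
  const saveButton = page.locator('.save-button');
  await saveButton.click(); // survives re-renders, auto-waits for actionability
});
```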

Q25. How do you handle dynamic elements that change frequently?

Dynamic elements (auto-generated IDs, randomized class names, content-driven attributes) break traditional CSS and XPath selectors. The fix is to lock onto something stable.

In order of preference:

  1. getByRole with the accessible name. Roles and labels are far more stable than class names.
  2. getByText with a regex if the visible text is reliable.
  3. getByTestId, with the dev team adding data-testid to the elements you actually test.
  4. CSS combinators that anchor to a stable parent and walk down: page.locator('[data-section="cart"] button:has-text("Remove")').
typescript
// Brittle
await page.locator('#btn_3f9a2c1').click();

// Stable
await page.getByRole('button', { name: 'Add to cart' }).click();

If none of those work, the element is probably an accessibility bug worth raising with the dev team.

Q26. What is a strict mode violation and how do you fix it?

Playwright runs locators in strict mode by default. If a locator matches more than one element when you call click(), fill(), or any single-element action, it throws a strict mode violation instead of silently picking the first match.

typescript
// Throws strict mode violation if there are multiple "Submit" buttons on the page
await page.getByRole('button', { name: 'Submit' }).click();

3 ways to resolve it:

typescript
// 1. Narrow with filter
await page.getByRole('button', { name: 'Submit' })
  .filter({ has: page.getByText('Order') })
  .click();

// 2. Pick a specific match
await page.getByRole('button', { name: 'Submit' }).first().click();

// 3. Anchor to a parent
await page.getByRole('dialog').getByRole('button', { name: 'Submit' }).click();

The right answer is almost always option 3. Strict mode is a feature, not a bug. It catches selectors that would have been flaky later.

Q27. How do you find and act on multiple elements?

Locators support .all(), .first(), .last(), and .nth(index) for working with collections. Strict mode does not apply to these helpers because you are explicitly asking for a list or a position.

typescript
const items = page.getByRole('listitem');

// Count
const count = await items.count();

// Iterate
for (let i = 0; i < count; i++) {
  const text = await items.nth(i).textContent();
  console.log(text);
}

// Iterate with .all() (returns Locator[], not strings)
for (const item of await items.all()) {
  await expect(item).toBeVisible();
}

// Specific positions
await items.first().click();
await items.last().click();

The classic trap: calling .click() directly on a multi-match locator. That triggers a strict mode violation. Use .first(), .nth(), or .filter() to disambiguate.

Q28. How do you chain locator filters?

filter(), and(), and or() let you compose locators precisely without writing complicated CSS.

typescript
Copy
// Filter: a row that contains specific text
const targetRow = page.getByRole('row').filter({ hasText: 'Invoice #1024' });
await targetRow.getByRole('button', { name: 'Pay' }).click();

// and(): match both criteria at once
const primaryButton = page.getByRole('button').and(page.locator('.btn-primary'));

// or(): handle either of 2 states
await page.getByRole('heading', { name: 'Welcome' })
  .or(page.getByRole('heading', { name: 'Sign in' }))
  .waitFor();

Chained filters are how strong frameworks keep tests readable on pages with repeating UI patterns like tables, lists, and grids.

Q29. How does locator resolution actually work?

A locator is lazy. Creating one with page.getByRole('button', { name: 'Save' }) does not query the DOM. Nothing happens until you call an action or assertion on it.

When you do, Playwright runs the resolution loop:

  1. Re-query the DOM for elements matching the selector.
  2. Apply any chained filters (.filter, .first, .nth).
  3. If exactly 1 element matches, run actionability checks on it.
  4. If more than 1 matches, throw a strict mode violation.
  5. If no element matches, wait and retry until the action timeout expires.
typescript
const button = page.getByRole('button', { name: 'Save' });
// At this point, no DOM query has happened.

await button.click();
// NOW Playwright queries, re-queries, and waits for actionability.

This is why locators are stable across re-renders. The same locator object queries the *current* DOM every time, so a React re-render between steps does not invalidate it. Compare this to the old ElementHandle model, which captured a node reference at one point in time and went stale the moment that node was replaced.

Q30. How do you build a stable locator strategy across UI changes?

Stable locators come from a hierarchy, not a single rule. The hierarchy in 2026:

  1. `getByRole` with the accessible name. Survives style changes, copy edits, and DOM restructures.
  2. `getByLabel` for form inputs. Same robustness, scoped to labelled fields.
  3. `getByText` for unique copy. Fine for things like "Welcome back" headings.
  4. `getByTestId` for elements with no semantic role. Decoupled from visuals but requires dev cooperation.
  5. CSS or XPath as a last resort, anchored to a stable parent. Avoid raw IDs that look auto-generated.
typescript
// Brittle locator hierarchy (worst to best)
page.locator('#btn_a3f9c1');                       // generated ID, will break
page.locator('button.primary-btn-large');          // CSS class, breaks on redesign
page.locator('xpath=//div[2]/form/button[1]');     // structural XPath, breaks on DOM change
page.getByText('Save');                            // breaks on copy edit
page.getByTestId('save-button');                   // stable if devs maintain it
page.getByRole('button', { name: 'Save' });        // most stable

Pair this with a code review rule: any new locator must use a getBy* API unless there is a documented reason. A ten-minute review costs far less than debugging a flaky test caused by a brittle selector 6 months later.

Auto-Waiting, Actions & Assertions Interview Questions

Auto-wait is Playwright's headline differentiator from Selenium. Expect questions on what it checks, when it falls short, and which assertion or wait pattern fits each scenario.

Q31. What is auto-wait?

Auto-wait is Playwright's built-in synchronization. Before performing an action like click or fill, Playwright automatically waits for the target element to pass a set of actionability checks: it must be attached to the DOM, visible, stable (not animating), enabled, and able to receive events.

If any of those checks fail, Playwright keeps polling until the global action timeout (default 30 seconds) runs out. That is why you rarely need to write waitForSelector or sleep calls. The same logic applies to web-first assertions like toBeVisible or toHaveText, which retry until the condition is met or the timeout fires.
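A short sketch of what that removes from test code; the URL and copy are assumptions. Note there is no explicit wait anywhere:

```typescript
import { test, expect } from '@playwright/test';

test('auto-wait does the synchronization', async ({ page }) => {
  await page.goto('/editor');

  // click() waits for: attached, visible, stable, enabled, receives events
  await page.getByRole('button', { name: 'Save' }).click();

  // Web-first assertion: retries until the toast appears or the timeout fires
  await expect(page.getByText('Saved successfully')).toBeVisible();
});
```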

Q32. What are the most common actions on an element?

Most Playwright tests are built from these actions, all auto-waited.

typescript
// Click
await page.getByRole('button', { name: 'Submit' }).click();

// Fill (clears the field then sets the value in one shot)
await page.getByLabel('Search').fill('playwright tutorials');

// Press a key
await page.getByLabel('Search').press('Enter');

// Check / uncheck
await page.getByLabel('Subscribe to newsletter').check();
await page.getByLabel('Subscribe to newsletter').uncheck();

// Select from a dropdown
await page.getByLabel('Country').selectOption('IN');

// Hover (useful for menus and tooltips)
await page.getByRole('menuitem', { name: 'Account' }).hover();

// Drag and drop
await page.getByRole('listitem', { name: 'Card 1' })
  .dragTo(page.getByRole('region', { name: 'In progress' }));

For text inputs, fill is the default. Reach for pressSequentially (covered in Q38) when the application logic listens to per-keystroke events like autocomplete dropdowns.

Q33. What are basic assertions in Playwright?

Assertions come from the expect import in @playwright/test. The most common ones are *web-first* assertions, which auto-retry until the condition passes or the timeout expires.

typescript
import { test, expect } from '@playwright/test';

test('login succeeds', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('hunter2');
  await page.getByRole('button', { name: 'Sign in' }).click();

  await expect(page).toHaveURL(/dashboard/);
  await expect(page.getByRole('heading', { name: 'Welcome' })).toBeVisible();
  await expect(page.getByTestId('order-list')).toHaveCount(3);
});

toBeVisible, toHaveText, toHaveURL, toHaveCount, toContainText, and toBeEnabled cover roughly 90 percent of what you need.

Q34. How is toBeVisible different from manually checking visibility?

toBeVisible is a web-first assertion. It re-checks the condition on a polling loop until either the element becomes visible or the assertion timeout (default 5 seconds) runs out.

A manual check using something like await element.isVisible() returns immediately. If the element is not visible *at that exact moment*, the check returns false and your test fails, even though the element would have appeared 50 milliseconds later.

typescript
// Reliable: retries until visible or timeout
await expect(page.getByText('Saved successfully')).toBeVisible();

// Fragile: snapshot in time, often false on async UIs
const visible = await page.getByText('Saved successfully').isVisible();
expect(visible).toBe(true);

This is the single most important pattern to internalize for stable tests.

Q35.Why should you avoid waitForTimeout?

page.waitForTimeout(2000) pauses for a fixed duration. It is the test-automation equivalent of Thread.sleep, and it causes flaky tests for 2 reasons:

  1. On a slow CI runner, 2 seconds is not enough, and the test fails.
  2. On a fast machine, 2 seconds is wasted time that adds up across hundreds of tests.

The right approach is to wait for an actual condition: a network response, an element appearing, an assertion passing. Playwright's auto-wait and web-first assertions do this for you in most cases. The narrow exception is for animations or third-party widgets you cannot hook into, where a small fixed wait is sometimes the only option. Even then, document why.
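A before-and-after sketch of the swap (button and status text are hypothetical):

```typescript
// Flaky: guesses at timing, fails on slow CI and wastes time on fast machines
await page.getByRole('button', { name: 'Save' }).click();
await page.waitForTimeout(2000);

// Stable: waits exactly as long as the condition takes, and no longer
await page.getByRole('button', { name: 'Save' }).click();
await expect(page.getByText('Saved')).toBeVisible();
```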

Q36.What types of waits does Playwright support?

Most candidates list three. The complete picture is closer to five.

  1. Auto-wait. Built into every action and web-first assertion. The default and the one you should use 95 percent of the time.
  2. `locator.waitFor({ state })`. Explicitly wait for an element to be visible, hidden, attached, or detached.
  3. `page.waitForURL(pattern)`. Wait for a specific URL or pattern after navigation.
  4. `page.waitForResponse(pattern)`. Wait for a network response that matches a URL or predicate.
  5. `page.waitForFunction(fn)`. Wait for an arbitrary condition to become true in the page context.
typescript
await page.getByRole('link', { name: 'Open invoice' }).click();
await page.waitForURL(/invoices\/\d+/);
await expect(page.getByRole('heading', { name: 'Invoice details' })).toBeVisible();

Note what is missing from the list: waitForTimeout. It exists, but using it as a synchronization tool is the leading cause of flaky Playwright suites.

Q37.What is the difference between waitForLoadState, waitForURL, and waitForResponse?

These 3 look similar but solve different problems.

  • page.waitForLoadState('load' | 'domcontentloaded' | 'networkidle') waits for the page lifecycle event. networkidle means there have been no network requests for at least 500 ms. Useful for the initial page load on traditional sites.
  • page.waitForURL(pattern) waits for navigation to a specific URL. Use after clicking a link or submitting a form that redirects.
  • page.waitForResponse(predicate) waits for an HTTP response that matches your predicate. Use when an action triggers an XHR or fetch that you need to assert against.
typescript
// Page lifecycle
await page.goto('/dashboard');
await page.waitForLoadState('networkidle');

// URL change
await page.getByRole('button', { name: 'Save' }).click();
await page.waitForURL(/\/dashboard\/saved/);

// API response
const responsePromise = page.waitForResponse(r =>
  r.url().includes('/api/orders') && r.status() === 200);
await page.getByRole('button', { name: 'Place order' }).click();
const response = await responsePromise;
const body = await response.json();
expect(body.orderId).toBeTruthy();

The waitForResponse pattern (start the wait *before* the click that triggers the request) is one of the cleanest ways to stabilize tests that depend on network calls.

Q38.What is the difference between fill, type, and pressSequentially?

fill clears the input and sets the value in a single operation. It does not fire individual keydown or keypress events for each character.

pressSequentially (and the older type) types the value one character at a time, firing keyboard events for each one. This is what you need for inputs with autocomplete dropdowns or onkeyup logic.

typescript
// Fast, no per-key events
await page.getByLabel('Email').fill('[email protected]');

// Slow, fires per-key events (use for autocomplete)
await page.getByLabel('Search').pressSequentially('playwright', { delay: 80 });

Default to fill for ordinary form inputs. Reach for pressSequentially only when the application logic depends on per-keystroke events.

Q39.How do you wait for a network-dependent flow?

3 patterns, each suited to a different shape of flow.

typescript
// 1. Wait for a known response after triggering it
const responsePromise = page.waitForResponse('**/api/checkout');
await page.getByRole('button', { name: 'Pay now' }).click();
await responsePromise;

// 2. Wait for an element that only appears after the response
await page.getByRole('button', { name: 'Pay now' }).click();
await expect(page.getByText('Payment confirmed')).toBeVisible();

// 3. Wait for a status change in the UI (most readable)
await page.getByRole('button', { name: 'Pay now' }).click();
await expect(page.getByRole('status')).toHaveText('Paid');

Pattern 3 is usually the strongest because it asserts the user-visible outcome rather than the implementation detail. Pattern 1 is essential when the UI does not change but you need to validate the API contract.

Q40.What conditions does auto-wait check before any action?

Before performing an action like click, fill, check, or selectOption, Playwright runs a fixed sequence of actionability checks. The action is delayed until every check passes or the timeout fires.

  • Attached: the element is in the DOM.
  • Visible: has a non-empty bounding box and visibility is not hidden.
  • Stable: has not moved or animated for 2 consecutive animation frames.
  • Receives events: a hit-test at the element's centre returns the element itself (nothing covering it).
  • Enabled: not disabled (only checked for elements that can be).
  • Editable: not read-only (only checked for inputs being filled).
typescript
// All 6 checks run automatically before this click:
await page.getByRole('button', { name: 'Submit' }).click();

When a test fails with a "timeout exceeded" message, the trace will show which check never passed. That is your first clue: was the button hidden, covered by a modal, animating, or disabled?

Q41.How do you fix async wait issues?

3 patterns cover almost every case.

typescript
// 1. Replace static delays with web-first assertions
// Bad
await page.click('#save');
await page.waitForTimeout(2000);
expect(await page.textContent('#status')).toBe('Saved');

// Good
await page.click('#save');
await expect(page.locator('#status')).toHaveText('Saved');

// 2. Wait for the network call before asserting on its effects
const responsePromise = page.waitForResponse(r => r.url().includes('/api/save'));
await page.click('#save');
await responsePromise;
await expect(page.locator('#status')).toHaveText('Saved');

// 3. Catch missing awaits at the linting stage
// .eslintrc: extends ['plugin:playwright/recommended']

The third pattern is the one most teams skip. eslint-plugin-playwright flags missing awaits, deprecated APIs, and bad locator patterns *before* they become flaky tests. Adding it to CI catches an entire class of bug at the PR stage.

Interview tip: if asked to fix a flaky test live, narrate the trace investigation first, then propose a fix. Candidates who jump straight to a fix without diagnosing look like they are guessing.
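The lint setup from pattern 3 is a one-file change. A sketch using eslint-plugin-playwright's flat config (assumes ESLint 9+; the files glob is illustrative):

```typescript
// eslint.config.mjs
import playwright from 'eslint-plugin-playwright';

export default [
  {
    ...playwright.configs['flat/recommended'],
    files: ['tests/**'],   // lint only the Playwright specs
  },
];
```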

Fixtures, POM & Framework Design Interview Questions

Architectural questions for anyone with 2+ years of Playwright experience. Walk-through-your-framework lives here, and the gap between candidates who memorized POM and candidates who shipped one is obvious in 30 seconds.

Q42.What is a fixture in Playwright?

A fixture is a reusable piece of setup and teardown that the test runner injects into your test function. You request the fixtures you need by destructuring them from the function argument.

typescript
import { test, expect } from '@playwright/test';

test('open the homepage', async ({ page }) => {
  await page.goto('https://example.com');
  await expect(page).toHaveTitle(/Example/);
});

Built-in fixtures include page, context, browser, and request. You can also write custom fixtures to inject things like a logged-in user, an API client, or pre-seeded test data. Custom fixtures are the cleanest way to remove duplicate setup code from your specs.

Q43.How do you implement the Page Object Model in Playwright?

Page Object Model (POM) is a design pattern where each page or major component of the app gets its own class. The class holds the locators and exposes methods that describe what a user can do on that page. Tests then call those methods instead of touching locators directly.

typescript
// pages/login.page.ts
import { type Page, type Locator } from '@playwright/test';

export class LoginPage {
  readonly page: Page;
  readonly emailInput: Locator;
  readonly passwordInput: Locator;
  readonly signInButton: Locator;

  constructor(page: Page) {
    this.page = page;
    this.emailInput = page.getByLabel('Email');
    this.passwordInput = page.getByLabel('Password');
    this.signInButton = page.getByRole('button', { name: 'Sign in' });
  }

  async goto() {
    await this.page.goto('/login');
  }

  async signIn(email: string, password: string) {
    await this.emailInput.fill(email);
    await this.passwordInput.fill(password);
    await this.signInButton.click();
  }
}

// tests/login.spec.ts
test('successful sign in', async ({ page }) => {
  const login = new LoginPage(page);
  await login.goto();
  await login.signIn('[email protected]', 'hunter2');
  await expect(page).toHaveURL(/dashboard/);
});

POM keeps locators in one place, so when the UI changes you fix 1 file instead of 50.

Q44.POM versus fixtures: when do you use which?

This question separates candidates who memorized POM from candidates who actually built a framework.

POM is for *what an action looks like*: where the locators live, how the user clicks through a page. Fixtures are for *what state a test needs*: a logged-in user, a seeded database, an API client.

typescript
// Fixtures provide the state, page objects provide the actions
import { test as base } from '@playwright/test';
import { LoginPage } from './pages/login.page';

type MyFixtures = {
  loginPage: LoginPage;
};

export const test = base.extend<MyFixtures>({
  loginPage: async ({ page }, use) => {
    const loginPage = new LoginPage(page);
    await loginPage.goto();
    await use(loginPage);
  },
});

// In specs
test('reset password link works', async ({ loginPage, page }) => {
  await loginPage.clickForgotPassword();
  await expect(page).toHaveURL(/reset/);
});

The 2 patterns compose. Fixtures inject the page objects you need, page objects expose the actions, and tests stay short and intention-revealing.

Q45.Walk me through your Playwright framework structure.

This is the highest-asked open-ended question in 2026 senior interviews. A clean answer covers folders, configuration, and the layers that hold the project together.

bash
playwright-framework/
├── tests/                  # Spec files, grouped by feature
│   ├── auth/
│   ├── checkout/
│   └── settings/
├── pages/                  # Page objects (1 per page or component)
├── fixtures/               # Custom fixtures (logged-in user, API client)
├── data/                   # Test data files (.json)
├── utils/                  # Helpers (date formatters, mock builders)
├── .env                    # Local environment variables
├── .env.staging            # Staging overrides
├── playwright.config.ts    # Base config (projects, retries, reporters)
└── package.json

Talking points for the walk-through:

  • tests/ is grouped by feature, not by page, so related specs live together.
  • Page objects hold locators and actions, never assertions.
  • Fixtures provide pre-conditions like an authenticated context or seeded data.
  • Environment-specific values come from .env files, not hard-coded strings.
  • The config drives projects (browsers and devices), retries, and reporters.

If your team uses a centralized reporting platform like TestDino, mention it here too. Interviewers like hearing how you turn the raw Playwright HTML report into something the whole team uses.
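The config layer described above can be sketched like this (the dotenv dependency, ENV_FILE variable, and BASE_URL name are assumptions for illustration, not part of the structure above):

```typescript
// playwright.config.ts
import { defineConfig, devices } from '@playwright/test';
import * as dotenv from 'dotenv';

// Load .env by default, or an override like .env.staging via ENV_FILE
dotenv.config({ path: process.env.ENV_FILE ?? '.env' });

export default defineConfig({
  testDir: './tests',
  retries: process.env.CI ? 2 : 0,
  reporter: [['html'], ['list']],
  use: { baseURL: process.env.BASE_URL },
  projects: [{ name: 'chromium', use: { ...devices['Desktop Chrome'] } }],
});
```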

Q46.What are built-in fixtures in Playwright?

The runner provides a fixed set of fixtures you can pull into any test by destructuring them from the test argument.

  • page (test-scoped): a fresh page in an isolated context.
  • context (test-scoped): the browser context backing that page.
  • browser (worker-scoped): the shared browser instance for the worker.
  • request (test-scoped): an APIRequestContext for direct HTTP calls.
  • browserName (test-scoped): a string: chromium, firefox, or webkit.
  • playwright (worker-scoped): the Playwright API itself, for low-level operations.
typescript
test('use multiple built-in fixtures', async ({ page, context, request, browserName }) => {
  console.log(`Running on ${browserName}`);

  const apiResponse = await request.get('/api/health');
  expect(apiResponse.status()).toBe(200);

  await context.addCookies([{ name: 'session', value: 'abc', domain: 'example.com', path: '/' }]);
  await page.goto('/');
});

You almost never instantiate browsers or contexts manually. The fixtures handle setup and teardown automatically.

Q47.How do you create a custom fixture?

test.extend is the API. You declare a fixture name, write a setup function that takes (args, use), do the setup, call use(value) to hand it to the test, and add teardown after use returns.

typescript
// fixtures/auth.fixture.ts
import { test as base, type Page } from '@playwright/test';

type AuthFixtures = {
  authedPage: Page;
};

export const test = base.extend<AuthFixtures>({
  authedPage: async ({ page }, use) => {
    // Setup: log in via API and inject the cookie
    const response = await page.request.post('/api/login', {
      data: { email: '[email protected]', password: 'hunter2' },
    });
    const { token } = await response.json();
    await page.context().addCookies([{
      name: 'auth', value: token, domain: 'example.com', path: '/',
    }]);

    // Hand the prepared page to the test
    await use(page);

    // Teardown (optional): clean up server state
  },
});

// In specs
import { test, expect } from '../fixtures/auth.fixture';

test('settings page loads when authed', async ({ authedPage }) => {
  await authedPage.goto('/settings');
  await expect(authedPage.getByRole('heading', { name: 'Settings' })).toBeVisible();
});

This pattern eliminates duplicated login code and makes specs read like business scenarios.

Q48.Worker-scoped versus test-scoped fixtures: what is the difference?

Test-scoped fixtures (the default) run once per test. Worker-scoped fixtures run once per worker process and are shared across all tests in that worker. Worker scope is how you avoid expensive setup running hundreds of times.

typescript
import { test as base, type APIRequestContext } from '@playwright/test';

type Fixtures = {
  // Test-scoped: a fresh request context per test
  apiClient: APIRequestContext;
};

type WorkerFixtures = {
  // Worker-scoped: one shared seeded database per worker
  seededDb: { dbName: string };
};

export const test = base.extend<Fixtures, WorkerFixtures>({
  seededDb: [
    async ({}, use) => {
      const dbName = `test_db_${Date.now()}`;
      // ... create and seed the database ...
      await use({ dbName });
      // ... drop the database after all tests in this worker finish ...
    },
    { scope: 'worker' },
  ],
});

Use worker scope for things that are expensive to set up and safe to share: API authentication tokens, seeded databases, browser launches. Keep test scope for anything that should not bleed between tests.


Network Mocking, API Testing & Special Elements Interview Questions

Modern apps fail in ways selectors cannot catch: third-party iframes, file downloads, slow APIs, missing error states. This section is where you prove you have used Playwright on a real app, not just a todo demo.

Q49.How do you handle iframes in Playwright?

Playwright gives you 3 approaches, each suited to a different scenario.

typescript
// 1. frameLocator: the modern, recommended way
const stripe = page.frameLocator('iframe[name="stripe-card"]');
await stripe.getByLabel('Card number').fill('4242 4242 4242 4242');

// 2. By name attribute
const frame = page.frame({ name: 'payment' });
await frame!.getByRole('button', { name: 'Submit' }).click();

// 3. Through the element's content frame (locator.contentFrame() returns a FrameLocator)
const chatFrame = page.locator('iframe#chat-widget').contentFrame();
await chatFrame.getByRole('button', { name: 'Send' }).click();

frameLocator is the right choice in nearly every modern flow. It returns a locator-style API, supports auto-wait, and handles late-loading frames cleanly. The other two exist for specific cases where you already have a reference to the frame element or its name.

Q50.How do you handle alerts and dialogs?

Playwright handles native dialogs through an event listener pattern. Register the handler *before* triggering the action that opens the dialog. By default, Playwright auto-dismisses any unhandled dialog so the test does not block.

typescript
test('confirm delete dialog', async ({ page }) => {
  page.on('dialog', async dialog => {
    expect(dialog.type()).toBe('confirm');
    expect(dialog.message()).toContain('delete this account');
    await dialog.accept();   // or dialog.dismiss()
  });

  await page.goto('/account');
  await page.getByRole('button', { name: 'Delete account' }).click();
  await expect(page.getByText('Account deleted')).toBeVisible();
});

// For prompt dialogs, pass the input value
page.on('dialog', dialog => dialog.accept('My text input'));

Note this is *only* for native browser dialogs (alert, confirm, prompt, beforeunload). Custom in-page modals are just regular DOM elements and you click them with normal locators.

Q51.How do you upload a file?

setInputFiles sets the value on a file input. Pass the path or an array of paths.

typescript
// Single file
await page.getByLabel('Profile photo').setInputFiles('fixtures/avatar.png');

// Multiple files
await page.getByLabel('Attachments').setInputFiles([
  'fixtures/doc-a.pdf',
  'fixtures/doc-b.pdf',
]);

// Clear the input
await page.getByLabel('Profile photo').setInputFiles([]);

// Upload an in-memory file (no disk write)
await page.getByLabel('Notes').setInputFiles({
  name: 'notes.txt',
  mimeType: 'text/plain',
  buffer: Buffer.from('Hello world'),
});

If the upload UI hides the real <input type="file"> behind a styled button, you may need to anchor the locator to the input directly with page.locator('input[type="file"]').
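When the page builds the chooser dynamically and no file input exists at all, the filechooser event is the fallback (a sketch; the button label and file path are hypothetical):

```typescript
// Start waiting before the click that opens the native chooser
const chooserPromise = page.waitForEvent('filechooser');
await page.getByRole('button', { name: 'Upload' }).click();
const chooser = await chooserPromise;
await chooser.setFiles('fixtures/avatar.png');
```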

Q52.How do you handle a file download?

Use page.waitForEvent('download') set up *before* clicking the link that triggers the download. The call returns a Promise that resolves when the download starts. This is a one-shot wait, not a persistent listener like page.on('dialog', ...).

typescript
test('export CSV', async ({ page }) => {
  await page.goto('/reports');

  const downloadPromise = page.waitForEvent('download');
  await page.getByRole('button', { name: 'Export CSV' }).click();
  const download = await downloadPromise;

  // Save it where you want
  await download.saveAs(`./downloads/${download.suggestedFilename()}`);

  // Or read its contents directly
  const stream = await download.createReadStream();
  // ... process stream ...
});

The waitForEvent call has to be set up before the click. Otherwise Playwright might miss the event entirely.

Q53.How do you do API testing with Playwright?

APIRequestContext is a built-in HTTP client. You can use it without launching a browser at all.

typescript
import { test, expect, request } from '@playwright/test';

test('users API contract', async () => {
  const api = await request.newContext({
    baseURL: 'https://api.example.com',
    extraHTTPHeaders: { Authorization: `Bearer ${process.env.API_TOKEN}` },
  });

  const response = await api.get('/users/42');
  expect(response.status()).toBe(200);

  const body = await response.json();
  expect(body).toMatchObject({ id: 42, role: 'admin' });

  await api.dispose();
});

This is enough to replace Postman or Supertest in most cases. You get the same retries, fixtures, and reporters as your UI suite, and you can intermix UI and API specs in the same project.

Q54.How do you combine UI and API in a single test?

The most common pattern: use the API to set up state quickly, then use the UI to verify what the user sees. This avoids slow UI setup steps that have nothing to do with what you are testing.

typescript
test('user sees order in their history', async ({ page, request }) => {
  // Set up state through the API
  const apiResponse = await request.post('/api/orders', {
    data: { userId: 42, items: [{ sku: 'ABC', qty: 1 }] },
  });
  const order = await apiResponse.json();

  // Verify through the UI
  await page.goto('/account/orders');
  await expect(page.getByRole('row', { name: order.id })).toBeVisible();
  await expect(page.getByRole('row', { name: order.id })).toContainText('ABC');
});

2 benefits to call out in interviews. First, this is much faster than placing an order through the UI. Second, if the order-placement UI breaks, this test still catches the order-history UI regression. Both concerns are independent.

Q55.How do you intercept and mock network calls with page.route?

page.route registers a handler for any matching request. The handler can fulfill, modify, or abort the request.

typescript
// Mock a clean response
await page.route('**/api/users', route => route.fulfill({
  status: 200,
  contentType: 'application/json',
  body: JSON.stringify([{ id: 1, name: 'Alice' }]),
}));

// Pass through but modify the response
await page.route('**/api/orders', async route => {
  const response = await route.fetch();
  const json = await response.json();
  json.items.push({ sku: 'TEST', qty: 99 });
  await route.fulfill({ response, json });
});

// Simulate a server error
await page.route('**/api/checkout', route => route.fulfill({
  status: 500,
  body: JSON.stringify({ error: 'Internal server error' }),
}));

// Block third-party scripts to keep tests fast
await page.route('**/*.{png,jpg,jpeg,gif,woff2}', route => route.abort());

Use mocking for deterministic tests, edge-case responses (errors, slow responses, empty arrays), and to break dependencies on flaky third-party services. Do *not* mock so much that you stop testing the real integration. A hybrid is usually right: mock at the boundary, but keep one happy-path test against the real backend.

Q56.How do you modify or abort requests in flight?

Same page.route API, different actions on the route object.

typescript
// Modify the request before it goes out (rewrite headers, body, URL)
await page.route('**/api/checkout', route => {
  const headers = { ...route.request().headers(), 'x-test-mode': 'true' };
  route.continue({ headers });
});

// Abort matching requests entirely
await page.route(/google-analytics|tracking-pixel/, route => route.abort());

// Slow down a specific endpoint to test loading states
await page.route('**/api/slow-search', async route => {
  await new Promise(r => setTimeout(r, 2000));
  await route.continue();
});

// Conditionally fulfill or pass through based on payload
await page.route('**/api/login', async route => {
  const data = route.request().postDataJSON();
  if (data.email === '[email protected]') {
    await route.fulfill({ status: 401, body: '{"error":"locked"}' });
  } else {
    await route.continue();
  }
});

This is how you test loading spinners (slow the response), error states (abort or 500), edge cases the backend cannot easily reproduce, and security flows like locked accounts. Network control is one of Playwright's biggest advantages over Selenium.

Reporting, Debugging & Trace Viewer Interview Questions

When a test fails, how do you find out why? These questions test your debugging muscle: HTML reports, Trace Viewer, Inspector, UI Mode, and the workflow you use when CI is red and you cannot reproduce locally.

Q57.How do you generate and view the HTML report?

The HTML reporter is enabled by default in npm init playwright@latest projects. After a test run, the report lives in playwright-report/ and can be opened from the CLI.

bash
# Run tests; HTML report opens automatically if any test fails
npx playwright test

# Open the last report manually
npx playwright show-report

In playwright.config.ts, you can stack multiple reporters and control when the HTML viewer auto-opens.

typescript
export default defineConfig({
  reporter: [
    ['html', { open: 'on-failure' }],   // 'always' | 'never' | 'on-failure'
    ['list'],                            // friendly console output
    ['junit', { outputFile: 'results.xml' }], // for CI dashboards
  ],
});

The HTML report works locally. It does not work across runs: each CI run produces a fresh report, so spotting a trend (the test that has been flaking all week) means downloading artifacts one at a time. For that, teams move to a test intelligence platform. TestDino is one example: it ingests the same Playwright output and links every trace back to the run and PR it came from.

Q58.How do you configure screenshots, videos, and traces?

All 3 artifacts (screenshots, videos, traces) live under use in the config. Each one supports modes that balance debugging value against artifact size.

typescript
export default defineConfig({
  use: {
    screenshot: 'only-on-failure', // 'on' | 'off' | 'only-on-failure'
    video: 'retain-on-failure',    // 'on' | 'off' | 'retain-on-failure' | 'on-first-retry'
    trace: 'on-first-retry',       // 'on' | 'off' | 'retain-on-failure' | 'on-first-retry'
  },
});

The combination above is the most common starting point. It keeps CI artifacts small for green runs but captures full debugging context the moment something breaks. 'on-first-retry' for traces is the setting most teams converge on after a few iterations: it generates traces only when a test needs a retry, which keeps storage costs low while still capturing every flaky failure.

With TestDino, a Playwright test intelligence platform made to debug, manage, and ship Playwright tests in CI, these artifacts become searchable across every run, branch, and PR.

Q59.What is Trace Viewer and when do you use it?

Trace Viewer is Playwright's built-in time-travel debugger. A trace captures, for every action: a screenshot, a DOM snapshot, the network activity, console logs, and source code references.

bash
# Open a trace from the CLI
npx playwright show-trace test-results/login-spec/trace.zip

# Or open the trace directly from the HTML report by clicking the trace icon

Once it is open, you can scrub the timeline, inspect any element at any point in time, see exactly which selector resolved to which DOM node, and replay the failure step by step. It is the difference between "I think the click happened too early" and "the button was hidden behind a modal at this exact moment".

For senior interviews, expect a follow-up: "How would you debug a test that fails only in CI?" Trace Viewer is the answer.

Q60.How do you use Trace Viewer for root cause analysis?

Trace Viewer turns a CI failure into a recorded session you can scrub through. Open the trace, find the failing action on the timeline, then work backwards:

  1. Inspect the DOM snapshot at the moment of failure. Was the element actually there? Was it covered by something? Was the wrong element matched?
  2. Read the network panel for the same time window. Did an API call fail? Did one never complete? Did one return an empty array?
  3. Check the console panel for errors. Front-end exceptions often surface as broken UI states a few seconds later.
  4. Look at the longest action on the timeline. The longest bar usually marks the symptom: an auto-wait that timed out because something upstream did not happen.
typescript
// Force tracing on for a spec file you cannot reproduce locally
test.use({ trace: 'on' });

test('cart total updates after coupon', async ({ page }) => {
  // ... test body ...
});

For teams running large suites, the bottleneck is not the trace itself but managing thousands of trace files across runs and PRs. A test intelligence and observability platform like TestDino maps every trace to its run, branch, and PR automatically, so you never have to dig through CI artifacts to find the right one.

Q61.What is the difference between Inspector, UI Mode, and --debug?

3 debugging tools, 3 different shapes of debugging session.

  • npx playwright test --debug runs your tests under the Playwright Inspector. The Inspector shows the test code, lets you step through actions, and includes a "pick locator" tool you can hover over elements with. Best for *fixing one specific test*.
  • npx playwright test --ui opens UI Mode. UI Mode is a visual test explorer with a time-travel panel: you can run any test, watch it execute, scrub through the trace, and edit and re-run without leaving the window. Best for *exploratory debugging and authoring new tests*.
  • page.pause() inside a test stops execution at that line and opens the Inspector. Best for *catching a transient failure mid-flight* when you are not sure which action causes it.
typescript
test('debug a transient failure', async ({ page }) => {
  await page.goto('/orders');
  await page.pause();   // execution stops here, Inspector opens
  await page.getByRole('button', { name: 'Refresh' }).click();
});

In 2026 interviews, knowing UI Mode is table stakes. It replaced most of the older debugging workflows.

Q62.A test fails in CI but passes locally. How do you debug it?

This is the single most common debugging question at senior level. A strong answer follows a structured process.

  1. Stop guessing. Get the trace. Configure trace: 'on-first-retry' (or trace: 'retain-on-failure' for one-off investigations) and re-run in CI. Open the trace from the artifact.
  2. Compare environments deliberately. Same Node version? Same Playwright version? Same browser channel? Same locale and timezone? Mismatches here are the most common cause.
  3. Suspect the runner first. CI runners are slower than developer laptops. A test that passes at 200 ms locally may need 800 ms in CI. The fix is not a longer timeout. The fix is finding the missing wait condition.
  4. Look for race conditions. Click before navigation completed, assertion before the API responded, action before the animation finished. These show up clearly in the trace timeline.
  5. Check shared state. If parallel tests touch the same database row or the same user account, only one wins. Isolate test data per worker.
  6. Watch for time-of-day issues. Tests that depend on Date.now() or business-hours logic break overnight when local CI timezone differs.
typescript
// playwright.config.ts: rich diagnostics in CI only
use: {
  trace: process.env.CI ? 'on-first-retry' : 'off',
  video: process.env.CI ? 'retain-on-failure' : 'off',
  screenshot: 'only-on-failure',
}

Interview tip: candidates who say "I would just add a retry" lose points. Candidates who say "I would open the trace, find the action that timed out, and check whether the network call completed" win.
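For step 5, a common isolation tactic is keying test data off the built-in worker index (a sketch; the email scheme is hypothetical):

```typescript
test('creates a draft', async ({ page }, testInfo) => {
  // testInfo.workerIndex is unique per worker, so parallel runs never collide
  const user = `user-${testInfo.workerIndex}@example.com`;
  // ... seed and exercise the app as `user` ...
});
```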

CI/CD, Parallel Execution & Flaky Tests Interview Questions

Senior questions about scaling a suite from 50 tests to 5,000. Parallelism, sharding, flaky test root causes, and the discipline to keep retries from masking real bugs.

Q63.How does parallel execution work?

Playwright runs tests in parallel by default. Each *worker* is a separate Node.js process running its own browser. The runner distributes test files across workers.

typescript
// playwright.config.ts
export default defineConfig({
  fullyParallel: true,                       // run tests within a file in parallel too
  workers: process.env.CI ? 4 : undefined,   // 4 in CI, half your CPU cores locally
});
bash
# Override at the command line
npx playwright test --workers=8

Default behaviour: test *files* run in parallel; tests *within* a file run sequentially. Setting fullyParallel: true parallelizes tests inside the file too, which is faster but requires every test to be self-contained. If you depend on order between 2 tests, group them with test.describe.configure({ mode: 'serial' }).
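The serial escape hatch looks like this in a spec file; test names and bodies are illustrative:

```typescript
import { test } from '@playwright/test';

// All tests in this file run in order, in a single worker.
// If one fails, the remaining tests in the group are skipped.
test.describe.configure({ mode: 'serial' });

test('creates the record', async ({ page }) => {
  // ... setup that the next test depends on
});

test('edits the record created above', async ({ page }) => {
  // ... relies on state from the previous test
});
```

Use this sparingly: serial groups cannot be split across workers, so they cap the speedup parallelism can give you.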

Q64.What are projects and how do you run tests across multiple browsers?

A project is a named test configuration. The most common use is one project per browser, so a single command runs the suite against Chromium, Firefox, and WebKit.

typescript
// playwright.config.ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'chromium',     use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox',      use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit',       use: { ...devices['Desktop Safari'] } },
    { name: 'mobile-chrome', use: { ...devices['Pixel 7'] } },
    { name: 'mobile-safari', use: { ...devices['iPhone 15'] } },
  ],
});
bash
# Run all projects
npx playwright test

# Run a single project
npx playwright test --project=firefox

# Run 2 projects
npx playwright test --project=chromium --project=webkit

Projects are also how you split smoke from regression, run different test directories, or apply different storageState for different roles.
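A role-split sketch might look like this; the project names and storage-state paths are hypothetical, not a standard layout:

```typescript
import { defineConfig, devices } from '@playwright/test';

// Hypothetical role-based projects: each role reuses its own saved auth state.
export default defineConfig({
  projects: [
    {
      name: 'admin',
      use: { ...devices['Desktop Chrome'], storageState: '.auth/admin.json' },
    },
    {
      name: 'viewer',
      use: { ...devices['Desktop Chrome'], storageState: '.auth/viewer.json' },
    },
  ],
});
```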

Q65.How do you run Playwright tests in CI/CD?

The standard setup uses GitHub Actions (or Jenkins, GitLab CI, CircleCI). The official Microsoft action template handles browser caching automatically.

yaml
# .github/workflows/playwright.yml
name: Playwright Tests
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 14

Key points an interviewer wants to hear: install --with-deps so system libraries are present, set retries: 2 in CI only, upload the report as an artifact, and treat the artifact as evidence rather than logs.

The artifact answers what failed in one run. It does not answer which test has been flaky for the past month. For that question, the report goes into a centralized platform like TestDino that indexes runs across branches and PRs.

Q66.What are the most common root causes of flaky tests?

Flaky tests do not have a single cause. Field experience across large Playwright suites tends to land on 6 categories, in roughly this order of frequency.

Category             | Frequency          | Common signals
Async / wait issues  | Most common        | Click before element is stable, assertion before API responds, missing await
Resource constraints | Very common in CI  | Test passes locally, fails on slow CI runner, fails when parallelism is too high
Selector instability | Occasional         | Test breaks after a UI refactor, locator matches wrong element
Order dependency     | Occasional         | Test passes alone, fails with neighbours, depends on shared DB state
Environmental drift  | Rare               | Browser version mismatch, locale or timezone change, third-party API hiccup
Test code bugs       | Rare               | Off-by-one indices, race in test setup itself

The 90/10 rule applies: fixing the timing and resource categories solves nearly all of them. Flaky-detection platforms like TestDino classify tests by stability score across history, so you fix the worst offenders first instead of chasing yesterday's failure. A deeper Playwright flaky test guide walks through each root cause with the trace patterns that identify it.
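As a toy illustration of what a stability score could be (not TestDino's actual metric), a pass rate over a recent window of runs is enough to surface the 1-in-50 offenders:

```typescript
// Toy stability score: fraction of recent runs that passed.
function stabilityScore(history: boolean[], windowSize = 50): number {
  const recent = history.slice(-windowSize);
  if (recent.length === 0) return 1;
  return recent.filter(Boolean).length / recent.length;
}

// Thresholds are illustrative, not a standard.
function classify(score: number): 'stable' | 'flaky' | 'broken' {
  if (score === 1) return 'stable';
  return score >= 0.5 ? 'flaky' : 'broken';
}
```

A test with 49 passes and 1 failure in its last 50 runs scores 0.98: green on almost every PR, yet flagged as flaky the moment history is taken into account.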

Q67.How do you stop flaky tests from blocking CI?

The answer is *not* to retry until they pass. That hides the problem and erodes trust in the suite.

A 2026-grade approach has 4 layers.

typescript
// 1. Modest retries to absorb genuine transient failures
export default defineConfig({
  retries: process.env.CI ? 2 : 0,
});
bash
# 2. Treat tests that needed a retry as a hard failure (block the PR)
npx playwright test --fail-on-flaky-tests --retries=2
typescript
// 3. Quarantine known-flaky tests so they do not block merges while you fix them
test.describe('checkout flow', () => {
  test.fixme(true, 'Tracked in JIRA-1234');
  test('reports tax incorrectly under certain conditions', async ({ page }) => {
    // ...
  });
});
  • Track flakiness across runs. A test that fails 1 in 50 runs looks fine on any single PR but is a long-term reliability problem. Flaky-detection tools that look at history (TestDino is one) classify tests by stability score, so you fix the worst offenders first instead of chasing yesterday's failure.

The combination is what separates teams that "have flaky tests" from teams that *manage* flaky tests. Retries alone are a coping strategy, not a solution.

Q68.How does sharding work and when do you need it?

Sharding splits the test suite across multiple parallel CI jobs. Each shard runs a subset of the tests, and the results are merged at the end. It is the standard answer to "our suite takes 90 minutes, how do we make it 9 minutes?".

yaml
# .github/workflows/playwright.yml
jobs:
  test:
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test --shard=${{ matrix.shard }}/4
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: shard-${{ matrix.shard }}-report
          path: blob-report/
          retention-days: 14

2 things to call out in interviews. First, sharding only makes sense after fullyParallel: true is already saturating one machine. Don't shard before you parallelize. Second, the merged report is a separate step using the merge-reports command. Without it, you get 4 disconnected reports instead of one unified view.
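The `--shard=k/n` split can be pictured as a deterministic partition of the file list. This is an illustration of the idea only; Playwright's real distribution balances by test groups rather than a simple modulo:

```typescript
// Illustrative round-robin partition: shard k of n takes every n-th file.
function shardFiles(files: string[], shard: number, total: number): string[] {
  return files.filter((_, i) => i % total === shard - 1);
}
```

Because every shard computes the same partition from the same file list, no coordination between CI jobs is needed; each job only needs its own `k` and the shared `n`.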

Q69.How do you isolate slow tests?

Slow tests poison the suite. They expand global timeouts, hide real performance regressions, and dominate the longest critical path on every CI run. Isolate them deliberately rather than hiding them.

typescript
// Mark a single test as slow (triples the test timeout)
test('full report export', async ({ page }) => {
  test.slow();   // give this test 3x the default timeout
  // ...
});

// Override the timeout for a whole describe block
test.describe('legacy import flow', () => {
  test.setTimeout(120_000);
  // tests here run with a 2-minute timeout instead of 30 seconds
});

// Tag slow tests so they can be excluded from the fast suite
test('@slow large dataset rendering', async ({ page }) => {
  // ...
});
bash
# Run only the fast suite locally
npx playwright test --grep-invert @slow

Test tagging plus targeted execution is the cleanest pattern. Fast tests run on every PR. Slow tests run nightly or on demand. Sharding solves the runtime; tagging solves the *signal-to-noise* problem.

Q70.When do retries hide bugs and when do they help?

Retries are a sharp tool. Used well, they absorb genuine transient failures (network hiccups, rate-limited third-party APIs, the rare browser crash). Used poorly, they mask flaky tests that should be fixed.

The rule: retries should make a flaky test visible, not invisible.

typescript
// playwright.config.ts: retry, but flag and report retries
export default defineConfig({
  retries: process.env.CI ? 2 : 0,
  use: {
    trace: 'on-first-retry',     // capture trace for the failing run only
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },
});
bash
# CI gate: any test that needed a retry is treated as a hard failure on PRs
npx playwright test --fail-on-flaky-tests

The combination of trace: 'on-first-retry' (so you have evidence) and --fail-on-flaky-tests on PRs (so you cannot ignore the evidence) is what keeps retries useful. Tracking historical flakiness across runs is the third leg: it tells you which tests have been flaky for weeks, not just which one failed today.

AI, MCP & Playwright Ecosystem Interview Questions

The 2026 questions every AI-forward company asks. MCP, codegen, AI agents, and the boundary of what Playwright actually does. Worth knowing even if you are not interviewing at an AI startup.

Q71.What is Playwright MCP and why does it matter?

Playwright MCP (Model Context Protocol) is the standardized bridge between large language models and a live Playwright browser session. Originally specified by Anthropic as an open protocol, the Playwright implementation exposes structured browser-control tools (browser_navigate, browser_click, browser_type, browser_snapshot, plus dozens more covering tabs, network, screenshots, and code execution) that any MCP-compatible AI model can call.

The 2026 reason it matters: structured AI test generation became practical. Earlier screenshot-based approaches asked the LLM to guess at coordinates and selectors from pixels. MCP gives the model the accessibility tree as structured YAML, so it sees roles, labels, and states the same way getByRole does. Tests generated this way actually run on real apps.

bash
# Install and connect from any MCP client (Claude Code, Cursor, VS Code)
npx @playwright/mcp@latest

# Or expose over HTTP for remote agents
npx @playwright/mcp@latest --port 3000

For interviews at AI-forward companies in 2026, expect questions about: how MCP differs from screenshot-based tools (it doesn't guess), what the snapshot vs vision modes do, and how you would integrate it into an existing test suite. The Playwright AI ecosystem guide on TestDino covers the full stack: protocol, agents, authoring layer, tooling layer.

Q72.What is Playwright CLI?

Playwright CLI is the command-line interface that ships with every Playwright installation. You have been using it already: npx playwright test, npx playwright codegen, npx playwright show-report. These are all CLI commands.

What changed in 2026 is how the CLI fits into AI workflows. When you run npx playwright commands from a coding agent like Claude Code, Cursor, or GitHub Copilot, the agent issues shell commands directly instead of streaming browser state into its context window. Each command is a discrete, stateless operation: navigate, snapshot, click, or generate a test file.

bash
# Core CLI commands every SDET should know
npx playwright test                     # run the full suite
npx playwright test login.spec.ts       # run a single file
npx playwright codegen https://app.com  # record a flow
npx playwright show-report              # open the HTML report
npx playwright install                  # download browser binaries
npx playwright test --ui                # open the interactive UI mode

The CLI saves artifacts (YAML accessibility snapshots, screenshots, trace files) to disk rather than loading them into an LLM's context window. That is the architectural decision that makes it token-efficient for AI agents: the agent reads only what it needs from the filesystem, not everything at once.

In interviews, demonstrate that you understand the CLI is not just a test runner. It is the full entry point for browser management, test generation, debugging, and report viewing.

Q73.What is the difference between Playwright MCP and CLI?

Both give AI agents access to a Playwright browser. The difference is *where the browser state lives* and how much context it costs.

Playwright MCP streams the browser's accessibility tree, network events, and page state directly into the AI model's context window over the Model Context Protocol. The agent sees everything inline. Rich, immediate, but heavy on tokens.

Playwright CLI saves that same browser state to disk as files: YAML snapshots, screenshots, log files. The agent issues shell commands and reads only the specific files it needs. Lighter, cheaper, and more stable over long sessions.

Dimension                  | Playwright MCP                                                | Playwright CLI
How state reaches the AI   | Streamed inline into context window                           | Saved to disk; agent reads files on demand
Token cost per interaction | Higher (~114K tokens for a typical session)                   | Lower (~27K tokens, roughly 4x cheaper)
Session stability          | Degrades after ~15 interactions                               | Stable for 50+ steps
Best for                   | Deep exploration, sandboxed agents, complex multi-step flows  | Coding agents with filesystem access, batch test generation
Requires                   | MCP-compatible client                                         | Shell access (terminal)

In interviews, the follow-up is usually: "Can you use both?" Yes. Many teams wire CLI for their CI pipeline smoke checks and MCP for the full Planner, Generator, and Healer agent loop. The CLI vs MCP comparison on TestDino breaks down the architectural trade-offs in detail.

Q74.How do you generate tests using Playwright CLI?

Two paths: Recording and AI-assisted generation. Both start from the CLI.

Recording with codegen is the fastest way to get a working test. You point codegen at a URL, interact with the app in the browser that opens, and Playwright generates test code from your actions in real time.

bash
# Basic recording: opens a browser, generates TypeScript
npx playwright codegen https://example.com

# Save directly to a file
npx playwright codegen --target=typescript -o tests/login.spec.ts https://example.com

# Record with device emulation
npx playwright codegen --device="iPhone 14" https://example.com

# Record with saved authentication state
npx playwright codegen --load-storage=auth.json https://example.com/dashboard

The output is usable but raw. It captures every click and navigation literally, without assertions or page object structure. Treat it as a first draft, not a finished test.

AI-assisted generation is the 2026 approach. You give a coding agent (Claude Code, Cursor, GitHub Copilot) a natural-language description of what to test, and it uses Playwright CLI commands under the hood to navigate your app, capture accessibility snapshots, and produce test code that follows your project's patterns.

The agent workflow (what happens behind the scenes):

  1. Agent navigates to the page: npx playwright navigate https://example.com/
  2. Agent captures a YAML accessibility snapshot: npx playwright snapshot
  3. Agent reads the snapshot, understands the page structure, and generates test code using getByRole() locators

You can also use the built-in agent system introduced in Playwright v1.56. Run npx playwright init-agents to scaffold the Planner, Generator, and Healer agents. The Planner explores your app and writes a Markdown test plan. The Generator reads that plan and produces .spec.ts files with real, runnable test code.

Interview tip: the strongest answer connects both approaches. "I use codegen for quick exploratory recording, then hand the output to an AI agent that refines it into structured, POM-aligned test code." That shows you understand the record-then-refine workflow that most production teams follow.

Q75.When do you use codegen versus an AI agent?

2 tools, 2 different jobs.

npx playwright codegen is a recorder. You drive the browser manually, it generates Playwright code from your actions. It is great for the first pass on a flow you already know how to test. It does not understand intent, just actions.

bash
# Record a flow into a file
npx playwright codegen --target=javascript -o login.spec.js https://example.com

An AI agent (Claude Code, Cursor with Playwright MCP, OpenAI Codex) takes a natural-language description and produces a test. It can read existing test patterns in your repo, follow your locator conventions, and write something that fits the project style. It is great when you have a feature spec but no test yet.

Use codegen when                              | Use an AI agent when
You already know the flow and just want code  | You want the model to plan the test structure
You need a quick first draft to refactor      | You want the test to match existing patterns in the repo
You are exploring an unfamiliar UI            | You are batching many similar tests at once
The flow is short and linear                  | The test needs branching logic or setup steps

In senior interviews, expect a follow-up: "How do you keep AI-generated tests from going stale?" The honest answer is that you treat them like any other test code: code review, locator discipline, and the same flaky-test management as the rest of the suite.

Q76.What is a Playwright Skill and when do you use one?

A Playwright Skill is a structured Markdown file (or set of files) that teaches an AI coding agent how to write, debug, and maintain Playwright tests according to your team's specific standards.

The problem it solves is straightforward. AI models trained on generic internet data produce generic test code: brittle CSS selectors, no page object structure, inconsistent naming, missing await calls. A Skill file tells the agent your rules before it writes a single line of code.

markdown
# Example: what a Playwright Skill file might contain
## Locator Strategy
- Always use getByRole(), getByLabel(), getByTestId() or getByText()
- Never use CSS selectors or XPath unless no semantic alternative exists
- Prefer data-testid as a last resort, never class names

## Test Structure
- Follow the Page Object Model (POM)
- One describe block per feature, one test per user scenario
- Use fixtures for authentication and shared setup

## Assertions
- Use web-first assertions (toBeVisible, toHaveText, toHaveURL)
- Never use page.waitForTimeout() or any hard-coded delays
- Always await every assertion

## Naming
- Test files: feature-name.spec.ts
- Page objects: FeatureNamePage.ts
- Describe blocks: human-readable feature names

Skills work with any AI coding agent that accepts context files: Claude Code reads CLAUDE.md, Cursor reads .cursorrules, GitHub Copilot reads instruction files in .github/. The concept is the same across tools. You feed the agent structured instructions, and it produces code that matches your architecture.

Since Playwright v1.56, the CLI itself supports Skills natively through the agent system. When you run npx playwright init-agents, it scaffolds agent definitions that include Skill-like instructions for the Planner, Generator, and Healer.

The interview-ready distinction: Skills are not test code. They are the knowledge layer that makes AI-generated test code production-ready. Without a Skill, an AI agent writes generic tests. With a Skill, it writes tests that match your project's locator strategy, page object pattern, naming conventions, and assertion style.

For deeper context on how Skills, CLI, and MCP fit together in the full AI testing stack, the Playwright AI ecosystem guide on TestDino covers the complete four-layer architecture.

Q77.When should you NOT use Playwright?

A senior-level question. The strong answer names the limits explicitly.

  • Native mobile apps. Playwright drives mobile *web* through device emulation and can connect to real device clouds, but it does not interact with native iOS or Android UIs. For native automation, use Appium, Maestro, Detox, or XCUITest.
  • Pure unit testing. Playwright is built for browser interactions. For testing isolated functions, use Jest, Vitest, or your runtime's native testing tools. Spinning up a browser to test a date-formatting helper is overkill.
  • Internet Explorer 11 or older browsers. Playwright supports Chromium, Firefox, and WebKit only. If you need to test on IE11 or another niche engine, Selenium remains the realistic option.
  • Test suites with deep WebDriver-specific tooling. If your team has years of investment in Selenium Grid, hub-and-node setups, or a specific BiDi-based pipeline, the migration cost may outweigh the gains. Run a small Playwright pilot on one feature first.
  • Very small projects with no cross-browser need. If you are testing a single Chromium-only intranet tool with 5 tests, plain JavaScript and a headless browser library may be enough. Reach for Playwright when you need cross-browser coverage, parallelism, or a real test runner.
Interview tip: candidates who claim Playwright is right for everything lose credibility. Knowing the boundary is what separates an engineer who picks the right tool from one who has only used one tool.

Q78.What are the security and data privacy risks of using AI testing agents, and how do you mitigate them in Playwright?

A senior-level architecture question.

When you give an AI agent control of a Playwright session or feed page content into an LLM for validation, you risk leaking Personally Identifiable Information (PII), session tokens, or proprietary business logic to the model provider.

To mitigate this in an enterprise environment:

  1. Zero-Retention Policies: Only use Enterprise LLM endpoints that guarantee zero data retention for training.
  2. DOM Redaction: Intercept the DOM payload before it goes to the MCP server. Use Playwright route interception to strip out <script> tags, replace real network payloads, and mask sensitive input values.
  3. Strict Sandboxing: AI agents should only ever execute inside isolated BrowserContexts using synthetic test data, never in production environments.
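The redaction step can be sketched as a pure masking helper. Everything here is hypothetical (the sensitive field names, the email pattern, the placeholder text); in practice it would run inside a `page.route()` handler, rewriting the response body before `route.fulfill()` hands it on:

```typescript
// Hypothetical redaction pass over a JSON API payload before it
// reaches an LLM or MCP server. Patterns are illustrative only.
function redactPayload(json: string): string {
  const SENSITIVE_KEY = /token|password|secret|ssn/i; // assumed field names
  const EMAIL = /[^@\s"]+@[^@\s"]+\.[^@\s"]+/g;       // rough email pattern

  const walk = (node: unknown): unknown => {
    if (typeof node === 'string') return node.replace(EMAIL, '[REDACTED]');
    if (Array.isArray(node)) return node.map(walk);
    if (node && typeof node === 'object') {
      return Object.fromEntries(
        Object.entries(node as Record<string, unknown>).map(([key, value]) =>
          SENSITIVE_KEY.test(key) ? [key, '[REDACTED]'] : [key, walk(value)],
        ),
      );
    }
    return node; // numbers, booleans, null pass through untouched
  };

  return JSON.stringify(walk(JSON.parse(json)));
}
```

Wired into route interception, this becomes something like `route.fulfill({ response, body: redactPayload(await response.text()) })`, applied only on the contexts the agent drives.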

Q79.In an AI-driven testing pipeline, what is the difference between an "Autonomous" workflow and an "Approval-Required" checkpoint?

In a traditional Playwright suite, everything runs autonomously from start to finish. However, when introducing AI agents that can author or alter tests on the fly, you must separate tasks by risk level:

  • Autonomous Workflows: Used for low-risk, read-only tasks. For example, an agent analyzing a search results page (SERP), mapping out accessibility trees, scanning for broken links, or pulling metrics. The pipeline runs straight through and delivers the final diagnostic report.
  • Approval-Required Checkpoints: Used for generative or state-mutating actions. For example, if the AI agent detects a missing test scenario and drafts a new Playwright .spec.ts file, or if it plans to generate synthetic data in the staging database. The pipeline pauses (Phase 1), presents a "Strategic Brief" or code preview to a human SDET, and waits for explicit approval before proceeding to Phase 2.

Conclusion: Acing Your Playwright Interview in 2026

Playwright has evolved from a simple browser automation library into a comprehensive ecosystem equipped with its own runner, AI agents, and advanced debugging tools.

In 2026, interviewers aren't just looking for candidates who can write a `page.click()` command. They want SDETs who understand browser contexts, can design scalable fixtures, mitigate flaky tests in CI/CD, and leverage the new Model Context Protocol (MCP) to integrate AI into their workflows.

By mastering the concepts in this guide, you will be fully prepared to tackle any Playwright technical interview. Remember, the best way to prove your expertise is to write clean, deterministic code and demonstrate a strong grasp of debugging strategies.

If you're looking to take your Playwright automation to the next level, start using tools that give you deep visibility into your test execution.

FAQs

TestDino is an AI-powered test intelligence platform built specifically for Playwright. While Playwright handles the execution of your tests, TestDino sits on top to centralize CI logs, group errors, and track test coverage. For SDETs and QAs, it acts as the single source of truth for test health, replacing the need to build and maintain custom HTML reporters.