# Web Tests (formerly known as "Layout Tests" or "LayoutTests")

Web tests are used by Blink to test many components, including but not
limited to layout and rendering. In general, web tests involve loading pages
in a test renderer (`content_shell`) and comparing the rendered output or
JavaScript output against an expected output file.

This document covers running and debugging existing web tests. See the
[Writing Web Tests documentation](./writing_web_tests.md) if you find
yourself writing web tests.

Note that the term "layout tests" was replaced by "web tests". The two terms
mean the same thing; you may also see the names "WebKit tests" and
"WebKit layout tests" used.

["Web platform tests"](./web_platform_tests.md) (WPT) are the preferred form of
web tests and are located at
[web_tests/external/wpt](/third_party/blink/web_tests/external/wpt).
Tests that should work across browsers go there. Other directories are for
Chrome-specific tests only.

[TOC]

## Running Web Tests

### Supported Platforms

* Linux
* MacOS
* Windows
* Fuchsia

Android is [not supported](https://crbug.com/567947).

### Initial Setup

Before you can run the web tests, you need to build the `blink_tests` target
to get `content_shell` and all of the other needed binaries.

```bash
autoninja -C out/Default blink_tests
```

On **Mac**, you probably want to strip the content_shell binary before starting
the tests. If you don't, you'll have 5-10 running concurrently, all stuck being
examined by the OS crash reporter. This may cause other failures like timeouts
where they normally don't occur.

```bash
strip ./out/Default/content_shell.app/Contents/MacOS/content_shell
```

### Running the Tests

The test runner script is in `third_party/blink/tools/run_web_tests.py`.

To specify which build directory to use (e.g. `out/Default`), pass the `-t`
or `--target` parameter. For example, to use the build in `out/Default`, use:

```bash
third_party/blink/tools/run_web_tests.py -t Default
```

*** promo
* Windows users need to use `third_party\blink\tools\run_web_tests.bat` instead.
* Linux users should not use `testing/xvfb.py`; `run_web_tests.py` manages Xvfb
  itself.
***

Tests marked as `[ Skip ]` in
[TestExpectations](../../third_party/blink/web_tests/TestExpectations)
won't be run by default, generally because they cause some intractable tool
error. To force one of them to run, either rename that file or specify the
skipped test on the command line (see below) or in a file specified with
`--test-list` (note, however, that `--skip=always` makes tests marked as
`[ Skip ]` always skipped).
Read the [Web Test Expectations documentation](./web_test_expectations.md) to
learn more about TestExpectations and related files.

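For example, naming a skipped test directly on the command line forces it to
run (the path here is illustrative; substitute the skipped test you care
about):

```bash
third_party/blink/tools/run_web_tests.py fast/forms/001.html
```
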
*** promo
Currently only the tests listed in
[SmokeTests](../../third_party/blink/web_tests/SmokeTests) are run on the
Fuchsia bots, since running all web tests takes too long on Fuchsia. Most
developers focus their Blink testing on Linux. We rely on the fact that the
Linux and Fuchsia behavior is nearly identical for scenarios outside those
covered by the smoke tests.
***

To run only some of the tests, specify their directories or filenames as
arguments to `run_web_tests.py` relative to the web test directory
(`src/third_party/blink/web_tests`). For example, to run the fast form tests,
use:

```bash
third_party/blink/tools/run_web_tests.py fast/forms
```

Or you could use the following shorthand:

```bash
third_party/blink/tools/run_web_tests.py fast/fo\*
```

*** promo
Example: To run the web tests with a debug build of `content_shell`, but only
test the SVG tests and run pixel tests, you would run:

```bash
third_party/blink/tools/run_web_tests.py -t Debug svg
```
***

As a final quick-but-less-robust alternative, you can also just use the
content_shell executable to run specific tests by using (example on Windows):

```bash
out\Default\content_shell.exe --run-web-tests <url>|<full_test_source_path>|<relative_test_path>
```

as in:

```bash
out\Default\content_shell.exe --run-web-tests \
    c:\chrome\src\third_party\blink\web_tests\fast\forms\001.html
```
or

```bash
out\Default\content_shell.exe --run-web-tests fast\forms\001.html
```

but this requires a manual diff against expected results, because the shell
doesn't do it for you. It also dumps only the text result (the pixel and audio
dumps are binary data and not human readable).
See [Running Web Tests Using the Content Shell](./web_tests_in_content_shell.md)
for more details on running `content_shell`.

To see a complete list of arguments supported, run:

```bash
third_party/blink/tools/run_web_tests.py --help
```

*** note
**Linux Note:** We try to match the Windows render tree output exactly by
matching font metrics and widget metrics. If there's a difference in the render
tree output, we should see if we can avoid rebaselining by improving our font
metrics. For additional information on Linux web tests, please see
[docs/web_tests_linux.md](./web_tests_linux.md).
***

*** note
**Mac Note:** While the tests are running, a bunch of Appearance settings are
overridden for you so the right type of scroll bars, colors, etc. are used.
Your main display's "Color Profile" is also changed to make sure color
correction by ColorSync matches what is expected in the pixel tests. The change
is noticeable; how much depends on the normal level of correction for your
display. The tests do their best to restore your setting when done, but if
you're left in the wrong state, you can manually reset it by going to
System Preferences → Displays → Color and selecting the "right" value.
***

### Test Harness Options

This script has a lot of command line flags. You can pass `--help` to the script
to see a full list of options. A few of the most useful options are below:

| Option | Meaning |
|:----------------------------|:--------------------------------------------------|
| `--debug` | Run the debug build of the test shell (default is release). Equivalent to `-t Debug` |
| `--nocheck-sys-deps` | Don't check system dependencies; this allows faster iteration. |
| `--verbose` | Produce more verbose output, including a list of tests that pass. |
| `--reset-results` | Overwrite the current baselines (`-expected.{png\|txt\|wav}` files) with actual results, or create new baselines if there are no existing baselines. |
| `--renderer-startup-dialog` | Bring up a modal dialog before running the test, useful for attaching a debugger. |
| `--fully-parallel` | Run tests in parallel using as many child processes as the system has cores. |
| `--driver-logging` | Print C++ logs (LOG(WARNING), etc). |

## Success and Failure

A test succeeds when its output matches the pre-defined expected results. If any
tests fail, the test script will place the actual generated results, along with
a diff of the actual and expected results, into
`src/out/Default/layout_test_results/`, and by default launch a browser with a
summary and link to the results/diffs.

The expected results for tests are in
`src/third_party/blink/web_tests/platform` or alongside their respective
tests.

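As a sketch (hypothetical paths, assuming a text baseline), the lookup for a
test can resolve to either location:

```bash
web_tests/fast/forms/001.html                        # the test itself
web_tests/fast/forms/001-expected.txt                # baseline alongside the test
web_tests/platform/mac/fast/forms/001-expected.txt   # platform-specific baseline
```
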
*** note
Tests which use [testharness.js](https://github.com/w3c/testharness.js/)
do not have expected result files if all test cases pass.
***

A test that runs but produces the wrong output is marked as "failed", one that
causes the test shell to crash is marked as "crashed", and one that takes longer
than a certain amount of time to complete is aborted and marked as "timed out".
A row of dots in the script's output indicates one or more tests that passed.

## Test expectations

The
[TestExpectations](../../third_party/blink/web_tests/TestExpectations) file (and
related files) contains the list of all known web test failures. See the
[Web Test Expectations documentation](./web_test_expectations.md) for more
on this.

## Testing Runtime Flags

There are two ways to run web tests with additional command-line arguments:

### --flag-specific or --additional-driver-flag

```bash
# --flag-specific is preferred in some cases. See below for details.
third_party/blink/tools/run_web_tests.py --additional-driver-flag=--blocking-repaint
```

This tells the test harness to pass `--blocking-repaint` to the
content_shell binary.

It will also look for flag-specific expectations in
`web_tests/FlagExpectations/blocking-repaint`, if this file exists. The
suppressions in this file override the main TestExpectations files.
However, `[ Slow ]` in either flag-specific expectations or base expectations
is always merged into the used expectations.

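A flag-specific expectations file uses the regular TestExpectations syntax; a
minimal sketch (the bug number and test path are hypothetical):

```
# web_tests/FlagExpectations/blocking-repaint
crbug.com/123456 fast/repaint/example.html [ Failure ]
```
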
It will also look for baselines in `web_tests/flag-specific/blocking-repaint`.
The baselines in this directory override the fallback baselines.

By default, the name of the expectations file under
`web_tests/FlagExpectations` and the name of the baseline directory under
`web_tests/flag-specific` are derived from the first flag passed to
`--additional-driver-flag`, with leading '-'s stripped.

You can also customize the name in `web_tests/FlagSpecificConfig` when
the name is too long or when we need to match multiple additional args:

```json
{
  "name": "short-name",
  "args": ["--blocking-repaint", "--another-flag"]
}
```

`web_tests/FlagSpecificConfig` is preferred when you need multiple flags,
or the flag is long.

With the config, you can use `--flag-specific=short-name` as a shortcut
for `--additional-driver-flag=--blocking-repaint --additional-driver-flag=--another-flag`.

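For example, given the `short-name` config above, these two invocations are
equivalent:

```bash
third_party/blink/tools/run_web_tests.py --flag-specific=short-name
third_party/blink/tools/run_web_tests.py \
    --additional-driver-flag=--blocking-repaint --additional-driver-flag=--another-flag
```
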
`--additional-driver-flag` still works with `web_tests/FlagSpecificConfig`.
For example, when at least `--additional-driver-flag=--blocking-repaint` and
`--additional-driver-flag=--another-flag` are specified, `short-name` will
be used as the name of the flag-specific expectations file and the baseline
directory.

*** note
[BUILD.gn](../../BUILD.gn) assumes flag-specific builders always run on Linux
bots, so flag-specific test expectations and baselines are only downloaded to
Linux bots. If you need to run flag-specific builders on other platforms,
please update BUILD.gn to download the flag-specific data to that platform.
***

### Virtual test suites

A *virtual test suite* can be defined in
[web_tests/VirtualTestSuites](../../third_party/blink/web_tests/VirtualTestSuites),
to run a subset of web tests with additional flags, with
`virtual/<prefix>/...` in their paths. The tests can be virtual tests that
map to real base tests (directories or files) whose paths match any of the
specified bases, or any real tests under the `web_tests/virtual/<prefix>/`
directory. For example, you could test a (hypothetical) new mode for
repainting using the following virtual test suite:

```json
{
  "prefix": "blocking_repaint",
  "bases": ["compositing", "fast/repaint"],
  "args": ["--blocking-repaint"]
}
```

This will create new "virtual" tests of the form
`virtual/blocking_repaint/compositing/...` and
`virtual/blocking_repaint/fast/repaint/...` which correspond to the files
under `web_tests/compositing` and `web_tests/fast/repaint`, respectively,
and pass `--blocking-repaint` to `content_shell` when they are run.

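Virtual tests are run like any other web test; for example (reusing the
hypothetical suite above):

```bash
third_party/blink/tools/run_web_tests.py virtual/blocking_repaint/fast/repaint
```
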
These virtual tests exist in addition to the original `compositing/...` and
`fast/repaint/...` tests. They can have their own expectations in
`web_tests/TestExpectations`, and their own baselines. The test harness will
use the non-virtual expectations and baselines as a fallback. If a virtual
test has its own expectations, they will override all non-virtual
expectations; otherwise the non-virtual expectations will be used. However,
`[ Slow ]` in either virtual or non-virtual expectations is always merged
into the used expectations. If a virtual test is expected to pass while the
non-virtual test is expected to fail, you need to add an explicit `[ Pass ]`
entry for the virtual test.

This will also let any real tests under the `web_tests/virtual/blocking_repaint`
directory run with the `--blocking-repaint` flag.

The "prefix" value should be unique. Multiple directories with the same flags
should be listed in the same "bases" list. The "bases" list can be empty,
in case we just want to run the real tests under `virtual/<prefix>/`
with the flags without creating any virtual tests.

### Choosing between flag-specific and virtual test suite

For flags whose implementation is still in progress, flag-specific expectations
and virtual test suites represent two alternative strategies for testing both
the enabled and the not-enabled code paths. They are preferred to only
setting a [runtime enabled feature](../../third_party/blink/renderer/platform/RuntimeEnabledFeatures.md)
to `status: "test"` if the feature has a substantially different code path from
production, because the latter would cause loss of test coverage of the
production code path.

Consider the following when choosing between virtual test suites and
flag-specific expectations:

* The
  [waterfall builders](https://dev.chromium.org/developers/testing/chromium-build-infrastructure/tour-of-the-chromium-buildbot)
  and [try bots](https://dev.chromium.org/developers/testing/try-server-usage)
  will run all virtual test suites in addition to the non-virtual tests.
  Conversely, a flag-specific expectations file won't automatically cause the
  bots to test your flag - if you want bot coverage without virtual test suites,
  you will need to set up a dedicated bot ([example](https://chromium-review.googlesource.com/c/chromium/src/+/1850255))
  for your flag.

* Due to the above, virtual test suites incur a performance penalty for the
  commit queue and the continuous build infrastructure. This is exacerbated by
  the need to restart `content_shell` whenever flags change, which limits
  parallelism. Therefore, you should avoid adding large numbers of virtual test
  suites. They are well suited to running a subset of tests that are directly
  related to the feature, but they don't scale to flags that make deep
  architectural changes that potentially impact all of the tests.

* Note that using wildcards in virtual test path names (e.g.
  `virtual/blocking_repaint/fast/repaint/*`) is not supported on the
  `run_web_tests.py` command line, but you can still use
  `virtual/blocking_repaint` to run all real and virtual tests
  in the suite or `virtual/blocking_repaint/fast/repaint/dir` to run real
  or virtual tests in the suite under a specific directory.

*** note
We can run a virtual test with additional flags. Both the virtual args and the
additional flags will be applied. The fallback order of baselines and
expectations will be: 1) flag-specific virtual, 2) non-flag-specific virtual,
3) flag-specific base, 4) non-flag-specific base.
***

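A sketch of that combination, reusing the hypothetical names from the examples
above:

```bash
# Applies both the virtual args and the flag-specific args.
third_party/blink/tools/run_web_tests.py --flag-specific=short-name \
    virtual/blocking_repaint/fast/repaint
```
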
## Tracking Test Failures

All bugs associated with web test failures must have the
[Test-Layout](https://crbug.com/?q=label:Test-Layout) label. Depending on how
much you know about the bug, assign the status accordingly:

* **Unconfirmed** -- You aren't sure if this is a simple rebaseline, a possible
  duplicate of an existing bug, or a real failure.
* **Untriaged** -- Confirmed but unsure of priority or root cause.
* **Available** -- You know the root cause of the issue.
* **Assigned** or **Started** -- You will fix this issue.

When creating a new web test bug, please set the following properties:

* Components: a sub-component of Blink
* OS: **All** (or whichever OS the failure is on)
* Priority: 2 (1 if it's a crash)
* Type: **Bug**
* Labels: **Test-Layout**

You can also use the _Layout Test Failure_ template, which pre-sets these
labels for you.

## Debugging Web Tests

After the web tests run, you should get a summary of tests that pass or
fail. If something fails unexpectedly (a new regression), you will get a
`content_shell` window with a summary of the unexpected failures. Or you might
have a failing test in mind to investigate. In any case, here are some steps and
tips for finding the problem.

* Take a look at the result. Sometimes tests just need to be rebaselined (see
  below) to account for changes introduced in your patch.
  * Load the test into a trunk Chrome or content_shell build and look at its
    result. (For tests in the http/ directory, start the http server first.
    See above. Navigate to `http://localhost:8000/` and proceed from there.)
    The best tests describe what they're looking for, but not all do, and
    sometimes things they're not explicitly testing are still broken. Compare
    it to Safari, Firefox, and IE if necessary to see if it's correct. If
    you're still not sure, find the person who knows the most about it and
    ask.
  * Some tests only work properly in content_shell, not Chrome, because they
    rely on extra APIs exposed there.
  * Some tests only work properly when they're run in the web-test
    framework, not when they're loaded into content_shell directly. The test
    should mention that in its visible text, but not all do. So try that too.
    See "Running the tests", above.
* If you think the test is correct, confirm your suspicion by looking at the
  diffs between the expected result and the actual one.
  * Make sure that the diffs reported aren't important. Small differences in
    spacing or box sizes are often unimportant, especially around fonts and
    form controls. Differences in wording of JS error messages are also
    usually acceptable.
  * `third_party/blink/tools/run_web_tests.py path/to/your/test.html` produces
    a page listing all test results. Those which fail their expectations will
    include links to the expected result, actual result, and diff. These
    results are saved to `$root_build_dir/layout-test-results`.
  * Alternatively the `--results-directory=path/for/output/` option allows
    you to specify an alternative directory for the output to be saved to.
  * If you're still sure it's correct, rebaseline the test (see below).
    Otherwise...
* If you're lucky, your test is one that runs properly when you navigate to it
  in content_shell normally. In that case, build the Debug content_shell
  project, fire it up in your favorite debugger, and load the test file from a
  `file:` URL.
  * You'll probably be starting and stopping the content_shell a lot. In VS,
    to save navigating to the test every time, you can set the URL to your
    test (`file:` or `http:`) as the command argument in the Debugging section of
    the content_shell project Properties.
  * If your test contains a JS call, DOM manipulation, or other distinctive
    piece of code that you think is failing, search for that in the Chrome
    solution. That's a good place to put a starting breakpoint to start
    tracking down the issue.
  * Otherwise, you're running in a standard message loop just like in Chrome.
    If you have no other information, set a breakpoint on page load.
* If your test only works in full web-test mode, or if you find it simpler to
  debug without all the overhead of an interactive session, start the
  content_shell with the command-line flag `--run-web-tests`, followed by the
  URL (`file:` or `http:`) to your test. More information about running web tests
  in content_shell can be found [here](./web_tests_in_content_shell.md).
  * In VS, you can do this in the Debugging section of the content_shell
    project Properties.
  * Now you're running with exactly the same API, theme, and other setup that
    the web tests use.
  * Again, if your test contains a JS call, DOM manipulation, or other
    distinctive piece of code that you think is failing, search for that in
    the Chrome solution. That's a good place to put a starting breakpoint to
    start tracking down the issue.
  * If you can't find any better place to set a breakpoint, start at the
    `TestShell::RunFileTest()` call in `content_shell_main.cc`, or at
    `shell->LoadURL()` within `RunFileTest()` in `content_shell_win.cc`.
* Debug as usual. Once you've gotten this far, the failing web test is just a
  (hopefully) reduced test case that exposes a problem.

### Debugging HTTP Tests

To run the server manually to reproduce/debug a failure:

```bash
third_party/blink/tools/run_blink_httpd.py
```

The web tests are served from `http://127.0.0.1:8000/`. For example, to
run the test
`web_tests/http/tests/serviceworker/chromium/service-worker-allowed.html`,
navigate to
`http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some
tests behave differently if you go to `127.0.0.1` vs. `localhost`, so use
`127.0.0.1`.

To kill the server, hit any key on the terminal where `run_blink_httpd.py` is
running, use `taskkill` or the Task Manager on Windows, or `killall` or
Activity Monitor on macOS.

The test server sets up an alias to the `web_tests/resources` directory. For
example, in HTTP tests, you can access the testing framework using
`src="/js-test-resources/js-test.js"`.

### Tips

Check https://test-results.appspot.com/ to see how a test did in the most recent
~100 builds on each builder (as long as the page is being updated regularly).

A timeout will often also be a text mismatch, since the wrapper script kills the
content_shell before it has a chance to finish. The exception is if the test
finishes loading properly, but somehow hangs before it outputs the bit of text
that tells the wrapper it's done.

Why might a test fail (or crash, or timeout) on buildbot, but pass on your local
machine?

* If the test finishes locally but is slow, more than 10 seconds or so, that
  would be why it's called a timeout on the bot.
* Otherwise, try running it as part of a set of tests; it's possible that a test
  one or two (or ten) before this one is corrupting something that makes this
  one fail.
* If it consistently works locally, make sure your environment looks like the
  one on the bot (look at the top of the stdio for the webkit_tests step to see
  all the environment variables and so on).
* If none of that helps, and you have access to the bot itself, you may have to
  log in there and see if you can reproduce the problem manually.

### Debugging DevTools Tests

* Do one of the following:
  * Option A) Run from the `chromium/src` folder:
    `third_party/blink/tools/run_web_tests.py --additional-driver-flag='--remote-debugging-port=9222' --additional-driver-flag='--debug-devtools' --time-out-ms=6000000`
  * Option B) If you need to debug an http/tests/inspector test, start httpd
    as described above. Then, run content_shell:
    `out/Default/content_shell --remote-debugging-port=9222 --additional-driver-flag='--debug-devtools' --run-web-tests http://127.0.0.1:8000/path/to/test.html`
* Open `http://localhost:9222` in a stable/beta/canary Chrome, click the single
  link to open the devtools with the test loaded.
* In the loaded devtools, set any required breakpoints and execute `test()` in
  the console to actually start the test.

NOTE: If the test is an html file, this means it's a legacy test, so you need
to add `window.debugTest = true;` to your test code as follows:

```javascript
window.debugTest = true;
function test() {
  /* TEST CODE */
}
```

## Bisecting Regressions

You can use [`git bisect`](https://git-scm.com/docs/git-bisect) to find which
commit broke (or fixed!) a web test in a fully automated way. Unlike
[bisect-builds.py](http://dev.chromium.org/developers/bisect-builds-py), which
downloads pre-built Chromium binaries, `git bisect` operates on your local
checkout, so it can run tests with `content_shell`.

Bisecting can take several hours, but since it is fully automated you can leave
it running overnight and view the results the next day.

To set up an automated bisect of a web test regression, create a script like
this:

```bash
#!/bin/bash

# Exit code 125 tells git bisect to skip the revision.
gclient sync || exit 125
autoninja -C out/Debug -j100 blink_tests || exit 125

third_party/blink/tools/run_web_tests.py -t Debug \
  --no-show-results --no-retry-failures \
  path/to/web/test.html
```

Modify the `out` directory, ninja args, and test name as appropriate, and save
the script in `~/checkrev.sh`. Then run:

```bash
chmod u+x ~/checkrev.sh  # mark script as executable
git bisect start <bad> <good>
git bisect run ~/checkrev.sh
git bisect reset  # quit the bisect session
```

## Rebaselining Web Tests

See [How to rebaseline](./web_test_expectations.md#How-to-rebaseline).

## Known Issues

See
[bugs with the component Blink>Infra](https://bugs.chromium.org/p/chromium/issues/list?can=2&q=component%3ABlink%3EInfra)
for issues related to Blink tools, including the web test runner.

* If QuickTime is not installed, the plugin tests
  `fast/dom/object-embed-plugin-scripting.html` and
  `plugins/embed-attributes-setting.html` are expected to fail.