| | | |
|---|---|---|
| author | bors-servo <lbergstrom+bors@mozilla.com> | 2019-09-30 05:23:19 -0400 |
| committer | GitHub <noreply@github.com> | 2019-09-30 05:23:19 -0400 |
| commit | 402db83b2b19f33240b0db4cf07e0c9d056b1786 (patch) | |
| tree | 9f469fe3aa154519944c07f7d01f755e663a00f5 /components/canvas/webgl_thread.rs | |
| parent | 086e06b28b7722b3e268b846e6c507f1060a2931 (diff) | |
| parent | d2c299a6c79386fe91f3930914d1d3e7162112a3 (diff) | |
Auto merge of #24303 - servo:script-codegen, r=nox
WebIDL codegen: Replace cmake with a single Python script
When [playing around with Cargo’s new timing visualization](https://internals.rust-lang.org/t/exploring-crate-graph-build-times-with-cargo-build-ztimings/10975/21), I was surprised to see the `script` crate’s build script take 76 seconds. I did not expect WebIDL bindings generation to be *that* computationally intensive.
It turns out almost all of this time is overhead. The build script uses CMake to generate bindings for each WebIDL file in parallel, but that causes a lot of work to be repeated 366 times (see the sketch after this list):
* Starting up a Python VM
* Importing (parts of) the Python standard library
* Importing ~16k lines of our Python code
* Recompiling that Python code to bytecode on every run, since we used `python -B` to disable writing `.pyc` files
* Deserializing with `cPickle` and recreating in memory the results of parsing all WebIDL files
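
For a sense of scale, here is a rough, hypothetical way to measure that fixed per-process cost (interpreter startup plus imports). This is not the project's build harness, and the imported modules are stand-ins for the real codegen imports; the real cost also includes unpickling the parse results, so this is a lower bound:

```python
import subprocess
import time

# Time a few bare `python -B` invocations that only import some stdlib
# modules, as a stand-in for starting the interpreter and importing the
# codegen machinery.
runs = 10
start = time.time()
for _ in range(runs):
    subprocess.check_call(["python", "-B", "-c", "import pickle, re, json"])
per_run = (time.time() - start) / runs
print("fixed overhead per invocation: %.3fs" % per_run)
print("estimated total for 366 invocations: %.1fs" % (per_run * 366))
```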
----
This commit removes the use of CMake and cPickle for the `script` crate. Instead, all WebIDL bindings generation is done sequentially in a single Python process. This takes 2 to 3 seconds.
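
A minimal sketch of the shape of the single-process approach, assuming hypothetical `parse_webidl_files` and `emit_bindings` stand-ins for Servo's actual parser and codegen entry points in `components/script`:

```python
import os

def parse_webidl_files(paths):
    """Hypothetical stand-in: parse every .webidl file once, in memory."""
    results = []
    for path in paths:
        with open(path) as f:
            results.append((os.path.basename(path), f.read()))
    return results

def emit_bindings(name, parsed):
    """Hypothetical stand-in: generate Rust source for one interface."""
    return "// bindings for %s (%d bytes of WebIDL)\n" % (name, len(parsed))

def main(webidl_dir, out_dir):
    paths = sorted(
        os.path.join(webidl_dir, f)
        for f in os.listdir(webidl_dir)
        if f.endswith(".webidl")
    )
    # One interpreter, one import of the codegen, one parse of all files;
    # every output then reuses the shared in-memory parse results instead
    # of round-tripping them through cPickle in a fresh process.
    parsed = parse_webidl_files(paths)
    for name, result in parsed:
        out = os.path.join(out_dir, name.replace(".webidl", ".rs"))
        with open(out, "w") as f:
            f.write(emit_bindings(name, result))
```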
Diffstat (limited to 'components/canvas/webgl_thread.rs')
0 files changed, 0 insertions, 0 deletions