Here’s another update for the js web framework benchmark. This time the benchmark has seen lots of contributions:
- Dominic Gannaway updated and optimized inferno
- Boris Kaul added the kivi framework
- Chris Reeves contributed the edge version of ractive
- Michel Weststrate updated react-mobX
- Gianluca Guarini updated the riot benchmark
- Gyandeep Singh added mithril 1.0-alpha
- Leon Sorokin contributed domvm
- Boris Letocha added bobril
- Jeremy Danyow rewrote the aurelia benchmark
- Filatov Dmitry updated the vidom benchmark
- Dan Abramov, Mark Struck, Baptiste Augrain and many more…
Thanks to all of you for contributing!
These contributions suddenly turned the project from something where I wrote the benchmarks with only a healthy smattering of knowledge into a quite well-reviewed and tuned comparison. It was especially interesting to see that some framework authors took notice of the benchmarks and improved the benchmark implementations (and sometimes even the frameworks themselves) or added development versions to the comparison. Especially for some of the slower frameworks this shows nice improvements.
Here’s the updated result table (on my MacBook Pro, Chrome 53):
(Disclaimer: The benchmark computes the geometric mean of the ratios between the duration of each benchmark for a framework and the duration of the fastest framework for that benchmark. A factor X indicates that a framework takes on average X times as long as the fastest version (which is usually vanillajs). In the discussion below I’ll talk about the percentage that a framework is slower than vanillajs. Angular 2 has a factor of 1.82 and thus I’ll say it’s 82% behind or slower than vanillajs.)
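The scoring described above can be sketched in a few lines. This is an illustrative version, not the benchmark’s actual source; the framework durations below are made-up numbers chosen to reproduce the Angular 2 factor from the disclaimer.

```javascript
// Geometric mean of per-benchmark slowdown ratios (illustrative sketch).
// durations[i] is the framework's time for benchmark i,
// fastest[i] is the fastest framework's time for benchmark i.
function slowdownFactor(durations, fastest) {
  const ratios = durations.map((d, i) => d / fastest[i]);
  // Geometric mean computed via logs to avoid overflow on long lists.
  const logSum = ratios.reduce((sum, r) => sum + Math.log(r), 0);
  return Math.exp(logSum / ratios.length);
}

// Hypothetical numbers: a framework that takes 1.82x as long on average
// is reported as 82% slower than vanillajs.
const factor = slowdownFactor([182, 364], [100, 200]);
console.log(factor.toFixed(2)); // → 1.82
```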
- Inferno is crazy fast. It’s only 7% slower, and I had to add some tricks to vanillajs to match inferno. Especially the remove row benchmark needed a modification: simply removing the row from the DOM causes a long style recalculation. This can be avoided by shifting the data of all rows below the one to be deleted up by one and then deleting the last row.
Inferno also shines by showing virtually no worst case scenario.
- Kivi is second fastest, only 32% behind vanillajs.
- vue 2-beta1 is amazing. It’s only 37% slower than vanillajs. This is a large improvement from vue 1, which is 116% behind vanillajs. The best thing about this improvement is that the application code stayed unchanged (take that angular!).
- bobril and domvm are quite quick, at only 40% and 63% behind vanillajs, respectively.
- The aurelia benchmark got a rewrite and it’s pretty good now – though I doubt I’ll like its way of (not) dirty checking. It’s now only 66% behind, a bit faster than both react and angular 2. Last time it was 120% behind vanillajs.
- Riot is new in this round, but it doesn’t perform well in this benchmark. On average it takes three times as long as vanillajs.
- React v15.2 was 140% behind. It improved to 82% due to an optimization in react that makes the clear rows benchmark much faster.
- Ractive is currently one of the slowest frameworks, at 260% behind. The edge version shows an impressive improvement to 87%.
- Same goes for mithril which improves from being 190% slower to 83%.
- Cycle.js 7, plastiq and vidom are pretty much unchanged still performing very well.
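The remove-row trick mentioned for vanillajs above can be sketched as follows. This is an illustrative data-only version (the row objects and field names are made up); in the real benchmark the same shifting is applied to the table rows’ text content so that only the last <tr> is ever removed from the DOM.

```javascript
// Instead of removing a row from the middle of a large table (which
// triggers a long style recalculation), copy each row's data one slot
// up and drop the last row.
function removeRow(rows, index) {
  // Shift the data of all rows below `index` one position up
  // (in the DOM: rewrite each row's text content in place).
  for (let i = index; i < rows.length - 1; i++) {
    rows[i].label = rows[i + 1].label;
  }
  // Then delete the last row (in the DOM: remove the last <tr>).
  rows.pop();
  return rows;
}
```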
It’s very impressive to see how well those frameworks can perform and how much improvement is possible.
The test driver was rewritten in TypeScript as a replacement for the Java version. One goal was to reduce the duration of a benchmark run, which meant removing all Thread.sleep workarounds. Running the benchmarks e.g. for angular 2 now takes 3:40 minutes instead of 10:48. The new test runner also supports running selected frameworks (e.g. npm run selenium -- args --framework angular-v2) and/or benchmarks (npm run selenium -- args --benchmark 01_).
The test runner measures the duration for all frameworks according to the Chrome timeline, including repaint. If you want to try a simplified metric in the browser: all frameworks try to log the duration on the console from event start until repainting finishes. This appears to be close to the figures extracted from the timeline; if there’s a difference, you’d better trust the timeline.
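The in-browser console metric can be approximated with a double requestAnimationFrame: the first callback fires before the next paint, the second one after it. This is only a sketch of that common heuristic, not the benchmark’s actual instrumentation, and the helper name is made up.

```javascript
// Measure roughly from "work starts" until "repaint finished" and log it.
// Double rAF is a heuristic: the second callback runs after the frame
// containing the DOM mutation has been painted.
function measureUntilPaint(label, work) {
  const start = performance.now();
  work(); // synchronous DOM mutation, e.g. rendering 1000 table rows
  return new Promise((resolve) => {
    requestAnimationFrame(() => {
      requestAnimationFrame(() => {
        const duration = performance.now() - start;
        console.log(label + ': ' + duration.toFixed(1) + ' ms');
        resolve(duration);
      });
    });
  });
}
```

Note that this misses work the browser does after the paint (which is one reason the timeline-based figures are the ones to trust).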