Frequently Asked Questions

What is jsPerf? #
jsPerf aims to provide an easy way to create and share test cases, comparing the performance of different JavaScript snippets by running benchmarks. But even if you don’t add tests yourself, you can still use it as a JavaScript performance knowledge base.
Which benchmarking engine is being used? #
jsPerf is proudly powered by Benchmark.js, a robust JavaScript benchmarking library that works on nearly all JavaScript platforms, supports high-resolution timers, and returns statistically significant results. Kudos to John-David Dalton for his awesome work on this project!
I’m getting script warnings when running a test in Internet Explorer. What’s up with that? #
Rather than limiting a script by time like all other browsers do, IE (up to version 8) limits a script to 5 million instructions. With modern hardware, a CPU-intensive script can trigger this in less than half a second. If you have a reasonably fast system, you may run into these dialogs in IE, in which case the best solution is to modify your Windows Registry to raise the instruction limit (I have mine set to 80,000,000).
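For reference, here’s a minimal sketch of that tweak, based on Microsoft’s documented MaxScriptStatements value (this assumes you want the same 80,000,000 limit; as always, back up your Registry before editing it):

```
Windows Registry Editor Version 5.00

; Raises IE's script instruction limit to 80,000,000 (0x04C4B400)
[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Styles]
"MaxScriptStatements"=dword:04c4b400
```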
I’m getting a warning message telling me to disable Firebug. What’s up with that? #
Enabling Firebug disables all of Firefox’s JITs, meaning you’ll be running the tests in the interpreter, which is very slow. For a fair comparison between browsers, always close Firebug before running any tests.
What happened to the ‘calibration’ feature? #
We decided to remove it because, after factoring in the adjusted margin of error, calibrated results turned out to be indistinguishable from non-calibrated ones.
Why does jsPerf use a Java applet on test pages? Do I have to enable Java to use jsPerf? #
The applet you’re talking about is just a clever trick we’re using to expose Java’s nanosecond timer to JavaScript, so we can use it to get more accurate test results. jsPerf will still work fine if you disable Java; it might just take a little longer to run tests. If you want, you can prevent jsPerf from inserting the Java applet into the document by appending #nojava to any test case URL, e.g. https://jsperf.com/document-getelementbyid#nojava.
I cannot seem to access jsPerf using IE9. What gives? #
You may get an error message saying “A problem with this webpage caused Internet Explorer to close and reopen the tab”, but really the problem lies with the combination of IE9 and Java Version 6 Update 22 or 23. Luckily, there’s an easy fix: just download and install the latest version of Java.
jsPerf is broken in older Firefox versions on Mac OS X Lion! #
That’s actually an issue with an incompatible Java plugin. When testing in Firefox 3.x under Lion, make sure to disable the Java Embedding Plugin via Firefox → Preferences → General → Manage Add-ons → Plugins. You’ll still be able to use jsPerf, although it won’t use the fancy nanotimer.
I heard somewhere that Chrome has built-in benchmarking extensions. Can I use these for jsPerf? #
Yes, you can run Chrome or Chromium with the --enable-benchmarking flag to improve the accuracy of test results in these browsers.
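For example, from the command line (the binary name varies per platform; this assumes a typical Linux install):

```
chromium --enable-benchmarking
```

With the flag enabled, Benchmark.js can take advantage of the higher-resolution chrome.Interval timer that Chrome then exposes.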
Can I re-run a single test? #
You can (re-)run specific tests by clicking on their description in the overview table. If you quickly click several test descriptions, the first test starts running while the others are queued as pending.
The Browserscope results look different from the ones I’m getting. Why? #
Browserscope returns the highest known result for each test. Because each test has a margin of error, we submit the results minus the margin of error (the lower limit of the confidence interval, i.e. the lowest suspected value) to Browserscope. For example, a result of 10,000 ops/sec with a ±2% margin of error gets submitted as 9,800 ops/sec.
I don’t like clicking buttons. Can I make the tests run automatically after opening a page? #
Sure, just append #run to the URL of the test case, e.g. https://jsperf.com/document-getelementbyid#run.
Can I predefine a specific chart type when linking to a test case? #
Sure you can. For example, if you want the data table to be shown by default, you could append #chart=table to the test case’s URL. The other chart types are bar (the default), column, line, and pie.
Can I predefine a specific Browserscope result filter when linking to a test case? #
Yes. For example, if you want to only show results for mobile browsers by default, you could append #filterBy=mobile to the test case’s URL. The other result filters are popular (the default), all, desktop, major, minor, and prerelease.
Is it possible to execute code before and after each clocked test loop, outside of the timed code region? #
That’s what Benchmark#setup and Benchmark#teardown are for. You can use the Setup and Teardown fields to apply the same function(s) to all tests. To target specific tests, you can use setTimeout(function () { ui.benchmarks[0].setup = function () { … }; }, 1); (and/or teardown) in the Preparation Code field, where 0 is the zero-indexed test ID.
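As a sketch, here’s what the Preparation Code field might look like for a test case whose first test needs its own setup and teardown (the function bodies are placeholders; the 1 ms timeout defers the assignment until the ui.benchmarks array is available):

```js
setTimeout(function () {
  // Only the first test (zero-indexed ID 0) gets this setup/teardown.
  ui.benchmarks[0].setup = function () {
    // e.g. create the DOM nodes or data this test operates on
  };
  ui.benchmarks[0].teardown = function () {
    // e.g. clean up whatever `setup` created
  };
}, 1);
```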
How can I run asynchronous tests? #
Just tick the “async” checkbox for each asynchronous test. You will then have access to a deferred object; in your test code, call deferred.resolve() whenever your test is finished. Here’s an example: https://jsperf.com/smallest-timeout
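A minimal sketch of what such a test’s code could look like, with the timeout standing in for whatever asynchronous operation you’re actually measuring:

```js
setTimeout(function () {
  // ... the asynchronous work being measured finishes here ...
  deferred.resolve(); // signal that this iteration is done
}, 10);
```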
Can I add tests to existing test cases, or edit them? #
Sure, just append /edit to the URL of the test case. If you’re the original author of the test case you’re editing and it hasn’t been more than 8 hours since you last visited jsPerf, any changes you make will simply overwrite what you entered before. If those conditions don’t apply, every edit you save will create a new revision, i.e. https://jsperf.com/foo/2, https://jsperf.com/foo/3, and so on.
Can I remove a snippet from my test case? #
Absolutely. Just edit the test case, clear the ‘title’ and ‘code’ fields for the snippet you want to remove and save your changes.
How can I follow up on a specific test case? I’d like to get notified when there’s a new revision. #
Every test case has its own Atom feed which you can subscribe to using your favorite feed reader. Just append .atom to the main test case’s URL to get it, e.g. https://jsperf.com/foo.atom.
How can I keep track of new or updated test cases made by a specific user? #
Every author has their own Atom feed, located at https://jsperf.com/browse/author-name.atom. Omit the .atom suffix to get a clickable list instead. If you’re identified on jsPerf (i.e. if you’ve commented on, created, or edited a test case), a “My tests” link will appear in the navigation.
Can I get the Browserscope results of my test case in JSON format so I can do something cool with them? #
Absolutely. On any jsPerf test case, just click the Browserscope logo to get to the results page. Its URL will look something like this: https://www.browserscope.org/user/tests/table/YOUR-TEST-ID. That last part is the corresponding Browserscope test ID. To get its JSON output, just append ?o=json&callback=foo, e.g. https://www.browserscope.org/user/tests/table/YOUR-TEST-ID?o=json&callback=w00t. For more details, see the Browserscope API documentation.
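As an illustration, here’s a minimal JSONP sketch (w00t is just the hypothetical callback name from the example URL above; replace YOUR-TEST-ID with your actual Browserscope test ID):

```js
// The function name must match the `callback` parameter in the URL.
function w00t(results) {
  console.log(results); // do something cool with the data
}
var script = document.createElement('script');
script.src = 'https://www.browserscope.org/user/tests/table/' +
             'YOUR-TEST-ID?o=json&callback=w00t';
document.body.appendChild(script);
```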
How long will my tests be available on jsPerf? #
Every test case and/or revision that’s added to jsPerf will remain here forever. You can safely link to any jsPerf document; in general, all URLs are supposed to be permalinks.
However, this rule does not apply to invalid/spammy tests, because those will likely get removed.
Will you implement [feature XYZ]? #
Why not? I’m open to suggestions, so please let me know if you have an idea that could make jsPerf more awesome.