CSS :has()

The :has() pseudo-class has been in and out of CSS draft specifications in various forms since 1998. There is a clear and repeatedly expressed desire for the ability to style a subject based on qualities of its contents. Performance and complexity concerns have been the most oft-cited reasons for not prioritizing work in this area. However, it is difficult for the working group to make important decisions on what is and isn't possible for many use cases without practical implementation experimentation. To this end, Igalia, with funding from eyeo, has been giving this work priority.

Considerably more detailed information on designs and implementation work is available, and updated as things develop, in an explainers repository. This page attempts to hit the highlights and to offer or link to informative tests. Note that these tests require Igalia's current :has()-supporting build.

A note on limits and rationale:

Based on a long history of discussions and the number of challenges at hand, our implementation begins with very strict limits that are not in the selectors draft, but which are reflective of previous spec attempts and numerous discussions over the years as to what was potentially most implementable. These are not final limits on the proposal but rather a starting point intended to focus the discussions. The limits we place on :has() initially are:

In fact, we believe (and have some supporting data to suggest) that these limits are not ultimately desirable and can be loosened significantly. Supporting many classes and pseudos in the arguments, for example, we believe is fairly trivial. Some, however, like the logical combination pseudo-classes, introduce a lot of complexity and questions. These initial limits are intended to focus the discussion and set a practical bar that we can overcome or not: if we cannot solve the problem acceptably even within these limits, that is itself informative.

Measuring Performance

There are a large number of ways to look at performance. None of them is perfect, but each offers different insights.

CSS selector performance specifically is affected by two separate "phases": style invalidation and style (re)calculation. As the DOM tree changes, the invalidation phase attempts to quickly determine which parts of the tree have styles which could be affected by a change. Afterward, the style (re)calculation phase determines what the new styles actually are. These phases are not directly exposed to authors for measurement; however, one way to isolate them is to perform modifications that cause invalidation but not recalculation.
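
As a concrete illustration, here is a minimal sketch (not the harness behind the linked tests; the class names, rule, and iteration counts are arbitrary) comparing a class toggle that can never complete a match, and therefore exercises mostly invalidation, against one that does complete a match and forces styles to be recalculated:

    // Minimal sketch: comparing a toggle that only invalidates with one
    // that also forces recalculation. Class names here are illustrative.
    const style = document.createElement('style');
    style.textContent = '.c .d { color: green; }';
    document.head.append(style);

    const parent = document.createElement('div');
    const child = document.createElement('span');
    parent.append(child);
    document.body.append(parent);

    function timeToggles(label, makeChildMatch) {
      child.className = makeChildMatch ? 'd' : '';   // with no .d child, ".c .d" can never match
      const start = performance.now();
      for (let i = 0; i < 10000; i++) {
        parent.classList.toggle('c');                // schedules invalidation work
        getComputedStyle(child).color;               // forces any pending style work to flush
      }
      console.log(label, (performance.now() - start).toFixed(2), 'ms');
    }

    timeToggles('invalidation only (no match possible)', false);
    timeToggles('invalidation + recalculation (match toggles)', true);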

Basic, comparative tests

It can be useful to compare how :has() measures up in both of these phases against the highly optimized selectors that browsers support today.

Invalidation Only

This test does just that. It isolates invalidation with two rules, one for .c .d and one for .b:has(.a). We compare the effects of using addClass and removeClass on trees of similar size and depth, where a .c or an .a class which wouldn't match the whole selector is toggled, thus causing invalidation but not recalculation. Sample results are shown below; in this case, :has() invalidation is faster than the common invalidations.

Time in microseconds through a series of runs, graphed (lower is better). :has() invalidation takes about a quarter of the time.
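
For reference, a rough sketch of this sort of comparison (not the linked test itself; the tree shape, class names, and iteration counts are simplified here) might look like the following:

    // Sketch of comparing invalidation-only costs for ".c .d" vs ".b:has(.a)".
    // In both cases the toggled class never completes a full match, so the
    // work measured is dominated by invalidation rather than recalculation.
    const style = document.createElement('style');
    style.textContent = '.c .d { color: green; } .b:has(.a) { color: red; }';
    document.head.append(style);

    const root = document.createElement('div');      // never gets .b, never contains .d
    const leaf = document.createElement('span');
    root.append(leaf);
    document.body.append(root);

    function time(label, el, cls) {
      const start = performance.now();
      for (let i = 0; i < 10000; i++) {
        el.classList.toggle(cls);
        getComputedStyle(el).color;                   // flush pending style work
      }
      console.log(label, (performance.now() - start).toFixed(2), 'ms');
    }

    time('descendant rule, toggling .c', root, 'c');  // invalidates ".c .d"; no .d descendant exists
    time(':has() rule, toggling .a', leaf, 'a');      // invalidates ".b:has(.a)"; no .b ancestor exists
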
Scaling

A number of things could affect this measure in real use. Below are some of these variables and more information about how they scale.

Tree depth

The invalidation time for :has() in our implementation scales pretty well with depth.

This test simply adds 10x depth to the potential invalidation set. The results, shown below, are hardly affected.

Time in microseconds through a series of runs with two tree depths, graphed (lower is better). :has() invalidation time remains fairly stable despite a 10x deeper tree.

Multi-subject invalidations to a common :has()

In real use, several :has() rules may hinge upon the same change. That is, given .one:has(.b){...} and .two:has(.b){...}, a change to .b invalidates two subjects. This test involves toggling classes inside of :has() which potentially invalidate N subjects. The results increase linearly with N.

Time in microseconds through a series of runs (lower is better). :has() invalidation time increases linearly with the number of subjects affected.
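
A sketch of the shape of rules and tree this test exercises (the class names, the number of subject rules, and the nesting are illustrative, not the actual test fixture):

    // Several subject rules that all hinge on the same argument class, .b.
    const style = document.createElement('style');
    style.textContent = `
      .one:has(.b)   { outline: 1px solid; }
      .two:has(.b)   { outline: 1px solid; }
      .three:has(.b) { outline: 1px solid; }
    `;
    document.head.append(style);

    // Nested ancestors so a single .b toggle can affect several subjects:
    const one = document.createElement('div');   one.className = 'one';
    const two = document.createElement('div');   two.className = 'two';
    const three = document.createElement('div'); three.className = 'three';
    const leaf = document.createElement('span');
    three.append(leaf); two.append(three); one.append(two);
    document.body.append(one);

    // One change, three subjects to reconsider, which is why the cost
    // above grows linearly with the number of affected subjects.
    leaf.classList.toggle('b');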

Compound subject invalidations to a common :has()

The subject of a :has() rule may be compound. That is, a rule could match .one.two.three:has(.b){...}. This test involves toggling classes inside of :has() which potentially invalidate N elements matching compound subjects. The results show no discernible difference as the number of simple selectors in the compound subject grows.

Time in microseconds through a series of runs (lower is better). :has() invalidation time is not noticeably affected by the number of simple selectors involved in a compound subject.
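
A brief sketch of a compound-subject rule of the kind this test exercises (class names are illustrative):

    // A compound subject: the element must carry .one, .two and .three
    // *and* contain a .b descendant for the rule to match.
    const style = document.createElement('style');
    style.textContent = '.one.two.three:has(.b) { outline: 1px solid; }';
    document.head.append(style);

    const subject = document.createElement('div');
    subject.className = 'one two three';
    const leaf = document.createElement('span');
    subject.append(leaf);
    document.body.append(subject);

    // The toggle inside :has() is the same as before; only the subject side
    // of the rule has grown, which the results above suggest adds no
    // measurable invalidation cost.
    leaf.classList.toggle('b');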

Compound subject and argument invalidations to a common :has()

The arguments to a :has() pseudo-class may also be compound. That is, in addition to the previous test, a rule could match .one:has(.b.c.d.e){...}. This test involves toggling classes inside of :has() which potentially invalidate N elements that match a (potentially compound) subject. The results show a comparative (but slight in real terms) decrease in performance as the number of simple selectors inside and outside the :has() increases.

Time in microseconds through a series of runs (lower is better). :has() invalidation time is noticeably affected by the number of simple selectors provided as an argument to :has(); however, the difference between a :has() involving a single simple selector subject with a single simple selector argument and one with eight of each is, in real terms, a modest ~2 microseconds.
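
And correspondingly, a sketch of a compound :has() argument (again, illustrative class names rather than the actual test fixture):

    // A compound argument: the descendant must carry .b, .c, .d and .e at
    // once for the rule to match. Each class toggled on it potentially
    // invalidates the subject, which is where the slight cost increase
    // reported above comes from.
    const style = document.createElement('style');
    style.textContent = '.one:has(.b.c.d.e) { outline: 1px solid; }';
    document.head.append(style);

    const subject = document.createElement('div');
    subject.className = 'one';
    const leaf = document.createElement('span');
    leaf.className = 'b c d';          // one class short of a match
    subject.append(leaf);
    document.body.append(subject);

    leaf.classList.toggle('e');        // completes / breaks the compound argument match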

Number of distinct invalidations inside a :has()

The argument to a :has() pseudo-class may also be a complex selector with many potential matches. That is, in addition to the previous test, a rule could match .one:has(.b .c .d .e){...} and classes could toggle any of .b, .c, .d, or .e. This test involves toggling classes inside of :has() which invalidate non-subject elements. The results show performance scaling linearly as the number of simple selectors inside the argument which are invalidated increases.

Time in microseconds through a series of runs (lower is better). :has() invalidation time is linearly affected by the number of non-subject selectors invalidated within the :has() argument.

One can compare this with a similar test of native selectors via this test; the results look indistinguishable.

Time in microseconds through a series of runs (lower is better). A similar rule without :has() shows that time is likewise linearly affected by the number of non-subject selectors invalidated.
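
A sketch of the shape of rule and tree these last two measurements exercise (class names and nesting are illustrative; the comparable non-:has() test presumably follows the same shape with a plain descendant selector):

    // A complex :has() argument where any of several nested simple
    // selectors may be the one invalidated.
    const style = document.createElement('style');
    style.textContent = '.one:has(.b .c .d .e) { outline: 1px solid; }';
    document.head.append(style);

    // A nested chain so any of the argument's simple selectors can be toggled:
    const subject = document.createElement('div');
    subject.className = 'one';
    let current = subject;
    const chain = {};
    for (const cls of ['b', 'c', 'd', 'e']) {
      const el = document.createElement('div');
      el.className = cls;
      current.append(el);
      chain[cls] = el;
      current = el;
    }
    document.body.append(subject);

    // Toggling a class partway down the chain invalidates a non-subject
    // element within the argument; the measured cost grows linearly with
    // how many of these distinct toggles occur.
    chain.c.classList.toggle('c');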

Recalculation

This test does something similar to the invalidation-only test above, but here the toggles complete matches and therefore involve recalculation. Sample results are shown below; in this case, :has() recalculation is slower than that of the common selectors.

Time in microseconds through a series of runs, graphed (lower is better). :has() recalculation takes about twice the time.
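
A sketch of the kind of toggle involved here, where the change does complete (or break) the :has() match so the subject's style actually has to be recalculated (names and counts are illustrative):

    // Unlike the invalidation-only test, the subject here really is a .b
    // ancestor, so adding/removing .a flips whether ".b:has(.a)" matches
    // and the subject's style must be recalculated each time.
    const style = document.createElement('style');
    style.textContent = '.b:has(.a) { color: red; }';
    document.head.append(style);

    const subject = document.createElement('div');
    subject.className = 'b';
    const leaf = document.createElement('span');
    subject.append(leaf);
    document.body.append(subject);

    const start = performance.now();
    for (let i = 0; i < 10000; i++) {
      leaf.classList.toggle('a');                 // match toggles on and off
      getComputedStyle(subject).color;            // forces recalculation to run
    }
    console.log('recalc loop:', (performance.now() - start).toFixed(2), 'ms');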

"Whole performance /real world tests"

Each of the above tests provides interesting, discrete data, but ultimately leaves a lot to the imagination regarding what real-world impact these might have as authors employ them. For this, we provide a few tests, with commentary, that play out some common sorts of challenges against a significantly large tree or bigger.

Stylesheet invalidation

Entire stylesheets are frequently "inserted" into or "removed" from active duty (this could be through actual DOM insertion of new stylesheets, through enabling or disabling existing ones, or, more commonly, through media queries). Changing the stylesheets and rules in play comes with its own performance challenges, so we'd like to know that such a change is not disruptive in practice. The following tests attempt to give a realistic "whole picture" comparative cost of invalidating, recalculating, and actually painting both an average-size tree (average tree test) and a significantly large tree (very large tree test) by enabling or disabling stylesheets which contain :has()-based rules and stylesheets which do not.

The output below shows the impact. On average-size trees there is no noticeable impact (differences are within the margin of normal variance). Very large trees do see around a 2x change; however, these times are still fairly small and seem reasonably comfortable/unnoticeable for a user.

    # Average tree (~500 elements)
    standard average took 16.68520000006538ms
    has average took 16.771699999953853ms
    standard average took 17.39385000000766ms
    has average took 16.75085000002582ms
    standard average took 17.199800000016694ms
    has average took 16.956050000007963ms
    standard average took 17.265899999983958ms
    has average took 16.863149999990128ms
    standard average took 16.560699999972712ms
    has average took 17.036649999936344ms
    standard average took 17.321200000005774ms
    has average took 17.06545000000915ms

    # Very large tree (>2000 elements)
    standard average took 24.620250000007218ms
    has average took 62.29079999997339ms
    standard average took 27.483900000006543ms
    has average took 63.24885000001814ms
    standard average took 27.46419999995851ms
    has average took 62.57850000001781ms
    standard average took 27.73399999998219ms
    has average took 64.94164999996428ms
Whole-cost times for toggling stylesheets containing :has() rules and "standard" ones without.
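
The mechanics of toggling a whole stylesheet in this kind of test can be as simple as flipping its disabled flag. A minimal sketch follows (the rules, selectors, and iteration counts are illustrative, not the linked tests themselves):

    // Sketch: measuring the whole cost of enabling/disabling a stylesheet
    // that contains a :has()-based rule versus one that does not.
    function makeSheet(cssText) {
      const el = document.createElement('style');
      el.textContent = cssText;
      document.head.append(el);
      return el;
    }

    const hasSheet = makeSheet('section:has(img) { border: 1px solid; }');
    const standardSheet = makeSheet('section img { border: 1px solid; }');

    function timeToggle(label, sheetEl) {
      const start = performance.now();
      for (let i = 0; i < 100; i++) {
        sheetEl.disabled = !sheetEl.disabled;      // add/remove the whole sheet from active duty
        getComputedStyle(document.body).border;    // force the resulting style work to flush
      }
      console.log(label, 'average took', ((performance.now() - start) / 100).toFixed(2), 'ms');
    }

    timeToggle('has', hasSheet);
    timeToggle('standard', standardSheet);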

Extremely large document and hard cases

Just about the most stressful test we could imagine, and one frequently cited in the CSS Working Group when this feature has been discussed historically, is a static version of the HTML5 living standard (single-page edition) which incorporates some :has() rules: some which would affect rendering as the page is loading and a user is scrolling, and one rule that affects the entire body only when the very last element is received. This test sets up just that. Overall, the actual user experience is not noticeably different from the baseline version of the same document without :has() rules at all. Presumably this has to do with the streaming nature of loading and the fact that rendering repeatedly yields to the parser.
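
The rule which only resolves once the very last element arrives can be sketched roughly as follows (the id is a hypothetical stand-in, not necessarily the one used in the actual test page):

    // Hypothetical sketch: a rule that cannot match until the final element
    // of the (very long) document has streamed in and been appended.
    const style = document.createElement('style');
    style.textContent = 'body:has(#the-very-last-element) { background: lightyellow; }';
    document.head.append(style);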

Note: All of the tests here involve a "significantly large tree" with some depth: over 2k elements, which, based on HTTPArchive data and previous studies, is more elements than nearly all common pages contain, making it a nice number to choose. Outliers beyond this vary wildly in just how big they get. We did include one, the HTML5 living standard single-page edition, which includes orders of magnitude more elements, for stretched perspective.

Our "whole" tests base "a signficantly large tree" and "an average" tree on these relatively stable for as long as we have been tracking them numbers.


What we'd like from this

  1. We'd like some feedback from engines on whether we are close to achieving a result that would allow us to consider moving forward with this, perhaps taking this information to the CSSWG, or whether there are still real concerns.
  2. If you do have remaining concerns, can you articulate them? Can you describe (or, even better, provide) a test that captures a goal we haven't met?

Aside: A few random straggler/early tests

These are earlier tests; they overlap a lot with the tests in this document but focus on very large trees. We're currently keeping them around for later.