This repository has been archived by the owner on Apr 21, 2023. It is now read-only.

Design Doc: Securing Critical CSS Beacons

Jeff Kaufman edited this page Jan 9, 2017 · 3 revisions


Jud Porter, 2013-04-12

Brief CSS Beacon Background

The prioritize_critical_css filter in mod_pagespeed uses client-side instrumentation to collect the set of critical CSS selectors on a page. We currently instrument a page with beaconing if we have no critical selector info for it yet, or if the last beacon response is older than the default pcache expiration time (24 hours). These client-side beacons are sent as POST requests back to the server. When the server handles a beacon, it stores the list of selectors in the property cache. We keep the 10 most recent critical selector sets and use the union of those sets to determine the overall critical CSS.
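The aggregation step described above can be sketched as follows. This is an illustrative model, not the actual mod_pagespeed implementation; the class name and methods are hypothetical, and a real server would persist the sets in the property cache rather than in memory.

```python
from collections import deque

class CriticalSelectorStore:
    """Illustrative sketch: keep the N most recent beacon selector sets
    and expose their union as the page's critical selectors."""

    MAX_SETS = 10  # mod_pagespeed stores the 10 most recent sets

    def __init__(self):
        # deque(maxlen=...) evicts the oldest set once the limit is hit.
        self.recent_sets = deque(maxlen=self.MAX_SETS)

    def record_beacon(self, selectors):
        # Each beacon contributes one selector set.
        self.recent_sets.append(frozenset(selectors))

    def critical_selectors(self):
        # The overall critical set is the union of the stored sets.
        if not self.recent_sets:
            return frozenset()
        return frozenset().union(*self.recent_sets)

store = CriticalSelectorStore()
store.record_beacon([".header", ".nav"])
store.record_beacon([".nav", "#hero"])
print(sorted(store.critical_selectors()))  # → ['#hero', '.header', '.nav']
```

The bounded deque also models the flushing behavior discussed below: once 10 newer beacons arrive, an older (legitimate or bogus) set no longer contributes to the union.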

Potential attacks

The beacon is susceptible to an attacker sending bogus beacons to the server, causing poor performance or a degraded user experience on the page. Bogus beacons could affect the page in the following ways.

  • Sending multiple beacons with empty critical selector sets. Because we use the union of the last 10 beacons as the critical selector set, an attacker must send enough empty beacons to flush all legitimate beacon responses out of the pcache; if they succeed, no styling will be applied to the page until the complete CSS is lazy-loaded at onload.

  • Sending a single beacon with all or most selectors marked as critical. This could hurt page performance, since all of the CSS would be duplicated on the page: first in the critical CSS section, and again when we lazy-load the original CSS.

  • Sending large beacon responses full of bogus selectors that don't appear on the page. This grows the property cache and slows the critical CSS filter, which must now scan and reject a larger set of critical selectors. Note that this growth is bounded for an individual page, because we reject beacon POST bodies larger than 128K and store only 10 entries in the property cache, but an attacker could send beacons for random pages or domains that the server doesn't normally serve.

  • Sending beacons with carefully crafted selectors to modify the appearance of the page before onload: for example, selectors that cause an element to be shown that is normally hidden by CSS, or vice versa.
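The per-page size bound mentioned in the third attack above can be sketched as a simple gate applied before the beacon is parsed. The function name is illustrative; only the 128K limit comes from the document.

```python
MAX_BEACON_BYTES = 128 * 1024  # POST bodies above this bound are rejected

def accept_beacon(post_body: bytes) -> bool:
    """Illustrative size gate: bounds pcache growth per page by
    refusing oversized beacon payloads before parsing them."""
    return len(post_body) <= MAX_BEACON_BYTES

print(accept_beacon(b"cs=.header,.nav"))             # → True
print(accept_beacon(b"x" * (MAX_BEACON_BYTES + 1)))  # → False
```

As the document notes, this only bounds growth per page; it does nothing about beacons sent for pages the server doesn't normally serve.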

Proposed Mitigations

Below is a list of proposed modifications to the current beacon handling, intended to make the attacks described above harder. Ultimately, however, we have to consume potentially untrustworthy client data, so closing every attack vector is likely impossible.

  • Include a nonce in beacon responses. When we decide to reinstrument a page, generate a new nonce, store it in the property cache, and include it with the instrumentation JS injected into the client page. When a beacon response is received, compare the expected nonce from the property cache with the value in the beacon, and reject the beacon if they don't match. When a valid beacon response is received, clear the nonce from the pcache so it can't be reused. Only the single most recent nonce is valid at any time, so an attacker can't collect multiple nonces over time. This prevents an attacker from sending a beacon without first fetching an instrumented version of the page.

  • Time-vary when we reinstrument a page. Adding time variation to reinstrumentation makes it more difficult for an attacker to fetch an instrumented page and send back a beacon with the correct nonce before a legitimate beacon is sent.

  • Instrument more often. Currently we reinstrument a page once every 24 hours, the default expiration time for a value in the property cache. We should instrument more often, perhaps scaling the reinstrumentation interval with the QPS of the site. This will flush out any malicious beacon responses we may have received more quickly.

  • Reject empty beacons. This prevents an attacker from causing the initial page to lack all styling until onload. The attack is also mitigated by our taking the union of all critical selector beacons as the final critical selector set: a single legitimate beacon response nullifies a run of empty ones.

  • Reject beacons containing selectors not found on the page the beacon came from.

  • Don’t insert critical CSS if the number of critical selectors exceeds some large percentage (90%?) of the total selectors. This prevents an attacker from causing all CSS on the page to be loaded twice, and also skips the filter when it wouldn’t speed up the page anyway.

  • Use page data to compute an initial set of critical selectors. If we traverse the page and unconditionally add selectors to the pcache when they match the HTML, it becomes harder for an attacker to fool us into treating critical CSS as non-critical in a way that visibly changes the page content.
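The single-use nonce handshake from the first mitigation above could look roughly like the sketch below. All names here are hypothetical, and a real implementation would store the pending nonce in the property cache rather than an in-memory dict.

```python
import secrets

class NonceChecker:
    """Sketch of the single-use nonce handshake: one valid nonce per
    page, minted at instrumentation time and cleared on first use."""

    def __init__(self):
        # page URL -> expected nonce; stands in for the property cache.
        self.pending = {}

    def instrument(self, page_url):
        # Called when we decide to reinstrument: mint a fresh nonce,
        # remember it, and embed it in the beaconing JS sent to the
        # client. Storing it overwrites any older nonce, so only the
        # most recent one is ever valid.
        nonce = secrets.token_hex(16)
        self.pending[page_url] = nonce
        return nonce

    def validate_beacon(self, page_url, nonce):
        # Accept only the expected nonce, and clear it on use so a
        # captured nonce cannot be replayed.
        expected = self.pending.get(page_url)
        if expected is None or nonce != expected:
            return False
        del self.pending[page_url]
        return True

checker = NonceChecker()
n = checker.instrument("/index.html")
print(checker.validate_beacon("/index.html", n))        # → True
print(checker.validate_beacon("/index.html", n))        # → False (single use)
print(checker.validate_beacon("/index.html", "bogus"))  # → False
```

Combined with time-varied reinstrumentation, this forces an attacker to fetch a freshly instrumented page and race the legitimate client for each bogus beacon they want accepted.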
