
Lighthouse performance scoring
How Lighthouse calculates your overall Performance score
Sep 19, 2019 — Updated Jun 4, 2021
In general, only metrics contribute to your Lighthouse Performance score, not the results of Opportunities or Diagnostics. That said, improving the opportunities and diagnostics will likely improve the metric values, so there is an indirect relationship.

Below, we've outlined why the score can fluctuate, how it's composed, and how Lighthouse scores each individual metric.
Why your score fluctuates #
A lot of the variability in your overall Performance score and metric values is not due to Lighthouse. When your Performance score fluctuates, it's usually because of changes in underlying conditions. Common problems include:
- A/B tests or changes in ads being served
- Internet traffic routing changes
- Testing on different devices, such as a high-performance desktop and a low-performance laptop
- Browser extensions that inject JavaScript and add/modify network requests
- Antivirus software
Lighthouse's documentation on Variability covers this in more depth. Furthermore, even though Lighthouse can provide you a single overall Performance score, it might be more useful to think of your site performance as a distribution of scores, rather than a single number. See the introduction of User-Centric Performance Metrics to understand why.
How the Performance score is weighted #
The Performance score is a weighted average of the metric scores. Naturally, more heavily weighted metrics have a bigger effect on your overall Performance score. The metric scores are not visible in the report, but are calculated under the hood.
The weightings are chosen to provide a balanced representation of the user’s perception of performance. The weightings have changed over time because the Lighthouse team is regularly doing research and gathering feedback to understand what has the biggest impact on user-perceived performance.
Explore scoring with the Lighthouse scoring calculator.
Lighthouse 8 #
- First Contentful Paint: 10%
- Speed Index: 10%
- Largest Contentful Paint: 25%
- Time to Interactive: 10%
- Total Blocking Time: 30%
- Cumulative Layout Shift: 15%
Lighthouse 6 #
- First Contentful Paint: 15%
- Speed Index: 15%
- Largest Contentful Paint: 25%
- Time to Interactive: 15%
- Total Blocking Time: 25%
- Cumulative Layout Shift: 5%
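To make the weighted average concrete, here is a minimal sketch in TypeScript. The weights are the Lighthouse 8 values from the table above; the audit IDs follow Lighthouse's naming, and the metric scores are hypothetical inputs, since the real ones come from Lighthouse's own audits.

```ts
// Minimal sketch: combine per-metric scores (0-1, as Lighthouse computes
// them internally) into an overall Performance score using the
// Lighthouse 8 weightings. Metric scores here are hypothetical.
const weights: Record<string, number> = {
  'first-contentful-paint': 0.10,
  'speed-index': 0.10,
  'largest-contentful-paint': 0.25,
  'interactive': 0.10,
  'total-blocking-time': 0.30,
  'cumulative-layout-shift': 0.15,
};

// Hypothetical per-metric scores on the 0-1 scale.
const metricScores: Record<string, number> = {
  'first-contentful-paint': 0.98,
  'speed-index': 0.95,
  'largest-contentful-paint': 0.92,
  'interactive': 0.99,
  'total-blocking-time': 0.85,
  'cumulative-layout-shift': 1.0,
};

function overallScore(scores: Record<string, number>): number {
  let total = 0;
  for (const [id, weight] of Object.entries(weights)) {
    total += weight * scores[id];
  }
  return Math.round(total * 100); // reported on the 0-100 scale
}

console.log(overallScore(metricScores)); // 93
```

Because Total Blocking Time carries the largest weight in Lighthouse 8, improving it moves the overall score the most.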
How metric scores are determined #
Once Lighthouse is done gathering the performance metrics (mostly reported in milliseconds), it converts each raw metric value into a metric score from 0 to 100 by looking at where the metric value falls on its Lighthouse scoring distribution. The scoring distribution is a log-normal distribution derived from the performance metrics of real website performance data on HTTP Archive. For example, Largest Contentful Paint (LCP) measures when a user perceives that the largest content of a page is visible. The metric value for LCP represents the time duration between the user initiating the page load and the page rendering its primary content. Based on real website data, top-performing sites render LCP in about 1,220ms, so that metric value is mapped to a score of 99.
Going a bit deeper, the Lighthouse scoring curve model uses HTTP Archive data to determine two control points that then set the shape of a log-normal curve. The 25th percentile of HTTP Archive data becomes a score of 50 (the median control point), and the 8th percentile becomes a score of 90 (the good/green control point). While exploring the scoring curve plot below, note that between 0.50 and 0.92, there's a near-linear relationship between metric value and score. Around a score of 0.96 is the "point of diminishing returns": above it, the curve pulls away, requiring increasingly more metric improvement to improve an already high score. Explore the score curve for TTI.
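To make the curve concrete, here is a minimal TypeScript sketch of log-normal scoring built from two control points: the raw value mapped to a score of 0.5 (the median control point) and the raw value mapped to 0.9 (the good/green control point). Lighthouse's statistics module works along these lines, but the erf approximation and the LCP control values below are assumptions for illustration, not Lighthouse's internal constants.

```ts
// Abramowitz-Stegun approximation of the error function (max error ~1.5e-7).
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  const a1 = 0.254829592, a2 = -0.284496736, a3 = 1.421413741;
  const a4 = -1.453152027, a5 = 1.061405429, p = 0.3275911;
  const t = 1 / (1 + p * x);
  const y =
    1 - ((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t * Math.exp(-x * x);
  return sign * y;
}

// Standard normal CDF.
function normalCdf(z: number): number {
  return 0.5 * (1 + erf(z / Math.SQRT2));
}

// Score = 1 - CDF of a log-normal distribution fitted so that
// score(median) = 0.5 and score(p10) = 0.9.
function logNormalScore(median: number, p10: number, value: number): number {
  const mu = Math.log(median);
  // Phi^-1(0.1) is about -1.28155, so sigma = ln(median / p10) / 1.28155.
  const sigma = (mu - Math.log(p10)) / 1.28155;
  return 1 - normalCdf((Math.log(value) - mu) / sigma);
}

// Illustrative LCP control points (ms): median 4000, good point 2500.
console.log(logNormalScore(4000, 2500, 2000).toFixed(2)); // "0.97"
```

By construction, a raw value equal to the median scores 0.5 and a value at the good/green control point scores 0.9; faster values asymptotically approach 1.0, which is the "point of diminishing returns" the plot shows.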
How desktop vs mobile is handled #
As mentioned above, the score curves are determined from real performance data. Prior to Lighthouse v6, all score curves were based on mobile performance data, even though a desktop Lighthouse run would use them. In practice, this led to artificially inflated desktop scores. Lighthouse v6 fixed this bug by using specific desktop scoring. While you can certainly expect overall changes in your perf score from 5 to 6, any scores for desktop will be significantly different.
How scores are color-coded #
The metric scores and the perf score are colored according to these ranges:
- 0 to 49 (red): Poor
- 50 to 89 (orange): Needs Improvement
- 90 to 100 (green): Good
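Expressed as code, the mapping is a simple threshold check; the `rating` function and `Rating` type below are hypothetical names for illustration, not part of Lighthouse's API.

```ts
type Rating = 'poor' | 'needs-improvement' | 'good';

// Map a 0-100 Performance score onto Lighthouse's color-coded ranges.
function rating(score: number): Rating {
  if (score < 50) return 'poor'; // red
  if (score < 90) return 'needs-improvement'; // orange
  return 'good'; // green
}

console.log(rating(92)); // 'good'
```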
To provide a good user experience, sites should strive to have a good score (90-100). A "perfect" score of 100 is extremely challenging to achieve and not expected. For example, taking a score from 99 to 100 needs about the same amount of metric improvement that would take a 90 to a 94.
What can developers do to improve their performance score? #
First, use the Lighthouse scoring calculator to help understand what thresholds you should be aiming for to achieve a certain Lighthouse Performance score. In the Lighthouse report, the Opportunities section has detailed suggestions and documentation on how to implement them. Additionally, the Diagnostics section lists additional guidance that developers can explore to further improve their performance.