[HN Gopher] Interaction to Next Paint (INP)
___________________________________________________________________
 
Interaction to Next Paint (INP)
 
Author : 42droids
Score  : 121 points
Date   : 2023-07-11 15:35 UTC (7 hours ago)
 
web link (web.dev)
w3m dump (web.dev)
 
| jgrahamc wrote:
| See also: https://blog.cloudflare.com/inp-get-ready-for-the-new-
| core-w...
 
| troupo wrote:
| Meanwhile "Engineering Leader" at Chrome argues that 2.4s to
| First Contentful Paint is fast:
| https://twitter.com/addyosmani/status/1678117107597471745?s=...
| 
| One of Google's (many) heads has no idea what another (of many)
| heads says or does.
 
  | iamakulov wrote:
  | Isn't that tweet talking about 2.4s for _Largest_ Contentful
  | Paint? It mentions 0.9 for FCP being fast, which I agree is
  | pretty reasonable.
 
| benmccann wrote:
| INP feels like a pretty problematic way to compare sites because
| INP is going to be way lower on a site that doesn't do client-
| side rendering, even though client-side rendering makes
| interaction with a site faster!
 
  | bob1029 wrote:
  | > client-side rendering makes interaction with a site faster!
  | 
  | I am going to have to disagree. Final HTML from the server is
  | just that: it's final. The client displays it and it's done. No
  | XHR, no web sockets, no JS eval. It's done. You can immediately
  | use the webpage and the webserver doesn't care who you are
  | anymore. With SPA, this is the best case. You maybe even start
  | with SSR from the server and try to incrementally move from
  | there. Regardless, the added complexity of SSR->SPA and other
  | various hybrid schemes can quickly eat into your technical
  | bullshit budget and before you know it that ancient forms app
  | feels like lightning compared to the mess you proposed.
  | 
  | Reaching for SPA because you think this will make the site
  | "faster" is pretty hilarious to me. I've never once seen a non-
  | trivial (i.e. requires server-side state throughout) SPA that
  | felt better to me than SSR.
 
    | diroussel wrote:
    | > I've never once seen a non-trivial (i.e. requires server-
    | side state throughout) SPA that felt better to me than SSR.
    | 
    | What about gmail? That has all the state server side. How
    | impressive would it be if all rendering was done server side?
 
  | withinboredom wrote:
  | I completely disagree. Client side has the potential to be very
  | fast, even faster. However, most people are more interested in
  | writing a complex, Turing complete, type system under their
  | client than making fast, easy to use applications.
 
| agilob wrote:
| Will Firefox support it?
 
| doctorpangloss wrote:
| The real metric: INP with ad blocking enabled.
| 
| Example: NYTimes.com on Mobile Safari with AdGuard. 18 seconds.
| 
| Google is being really disingenuous with its so-called metrics. A
| stroke of the pen could make INP 200ms across the top 500 sites.
 
  | taosx wrote:
  | NYTimes feels like an SSG when browsing (Chrome) from Europe
  | after the initial payload, but that's as an unauthenticated user
  | with ublock. The sad part is that I can't read most of the
  | articles due to the paywall.
 
  | WorldMaker wrote:
  | Google is an ad company. Why would they make metrics that
  | penalize ads?
 
    | mdhb wrote:
    | And yet... this is a real thing they do.
    | https://www.searchenginejournal.com/google-on-how-it-
    | handles...
 
  | speedgoose wrote:
  | The NY times website on mobile Safari with AdGuard feels
  | perfectly normal on my iPhone 13.
  | 
  | Do you observe the same behaviour in private mode? Something
  | must be going wrong on your device.
 
  | whateverman23 wrote:
  | > Example: NYTimes.com on Mobile Safari with AdGuard. 18
  | seconds.
  | 
  | Dear lord, I can't imagine that's the fault of NYTimes.
  | Something is off with your setup.
  | 
  | NYTimes.com is super quick and responsive on my devices.
 
    | coldtea wrote:
    | What "setup"? Parent said Mobile Safari. The setup comes from
    | the factory as a given.
 
      | whateverman23 wrote:
      | Maybe AdGuard? Maybe they have 2G internet?
      | 
      | Something isn't right, and I have a hard time believing
      | it's NYTimes given my experiences with their website.
 
        | withinboredom wrote:
        | Don't even get me started with 2G (or even flaky wifi
        | networks) internet and JS-heavy pages.
 
        | [deleted]
 
  | we_never_see_it wrote:
  | That's all Google cares about: how to invade our privacy and
  | force us to see more ads.
 
| haburka wrote:
| This may be controversial but I think this has the potential to
| be a brilliant metric because it measures some part of web UX
| that's often neglected. It's time consuming to make every single
| interaction display some sort of loading message but it really
| helps make the site feel responsive.
| 
| As long as they avoid the pattern of adding a global loading
| spinner that covers the whole screen. That's just the worst
| possible loading screen. I suppose it would still pass this
| metric.
| 
| Also I'm not sure if I totally understand the metric - I think
| it's simply when the next frame is rendered post interaction,
| which should easily be under 200ms unless you're
| 
| 1. doing some insane amount of client side computation
| 
| 2. talking over the network far away from your service or your
| API call is slow / massive
| 
| and both of these are mitigated by having any loading indication
| so I don't understand how this metric will be difficult to fix.
 
  | modeless wrote:
  | > when the next frame is rendered post interaction, which
  | should easily be under 200ms unless
  | 
  | Have you used doordash.com? I don't know how they do it, but
  | they manage to exceed 200ms on every single click, easily. And
  | they're not alone.
 
  | wielebny wrote:
  | > This may be controversial but I think this has the potential
  | to be a brilliant metric because it measures some part of web
  | UX that's often neglected.
  | 
  | It also seems to be a metric that is very easily gamed.
  | 
  | If all that matters is instant feedback, then just draw that
  | loader as soon as the user clicks "add to cart"; do not wait
  | for the request to start. It does not matter that it will
  | take X or Y seconds.
 
    | minorninth wrote:
    | Honestly, displaying a loader as soon as I click add to cart
    | would be an improvement on many sites. I'd welcome it.
    | 
    | A site that genuinely responds quickly is best. But for a
    | slow site, I'd always prefer one that at least gives me
    | instant feedback that I clicked something over one that
    | doesn't.
 
    | crazygringo wrote:
    | But what you're describing as "gaming" is precisely the
    | behavior this is supposed to incentivize.
    | 
    | Of course you should be setting a visual "in progress" state
    | before you send out a request. And yes that's supposed to be
    | instantaneous, not measured in "X or Y seconds". That's the
    | entire point, to acknowledge that the user did something so
    | they know they clicked in the right place, that another app
    | hadn't stolen keyboard focus, etc.
 
    | iamakulov wrote:
    | > It also seems to be a metric that is very easily gamed.
    | 
    | Fun fact: the current JS-specific metric (which is being
    | phased out) is First Input Delay, and it was explicitly
    | designed to avoid this gaming:
    | 
    | > FID only measures the "delay" in event processing. It does
    | not measure the event processing time itself nor the time it
    | takes the browser to update the UI after running event
    | handlers. While this time does affect the user experience,
    | including it as part of FID would incentivize developers to
    | respond to events asynchronously--which would improve the
    | metric but likely make the experience worse. > -
    | https://web.dev/fid/
    | 
    | I wonder why they decided to reconsider this trade-off when
    | designing INP.
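The trade-off the quote describes can be made concrete with a toy model. The timestamps below are illustrative milliseconds, not values from the real browser APIs, and the two helper functions are simplifications of what the metrics actually report:

```javascript
// Toy model contrasting FID and INP for a single interaction.

// FID: only the delay between the input event and the moment the
// event handler starts running.
function firstInputDelay(t) {
  return t.handlerStart - t.inputTime;
}

// INP-style latency: delay + handler processing + time until the
// next frame is presented to the user.
function interactionLatency(t) {
  return t.nextPaint - t.inputTime;
}

// A handler that starts almost immediately (good FID) but blocks the
// main thread for a long time before anything is painted (bad INP).
const interaction = {
  inputTime: 0,      // user clicks
  handlerStart: 5,   // handler scheduled quickly
  handlerEnd: 250,   // ...but it runs for 245 ms
  nextPaint: 260,    // first frame the user actually sees
};

console.log(firstInputDelay(interaction));    // 5   -> "good" under FID
console.log(interactionLatency(interaction)); // 260 -> "poor" under INP
```

This is why a page can score well on FID while still feeling sluggish: FID stops measuring as soon as the handler begins, while INP keeps the clock running until the user sees a new frame.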
 
    | mdhb wrote:
    | You will be pleased to know that isn't actually the case and
    | it's instead an example of a single metric taken from a
    | larger suite of metrics known collectively as core web vitals
    | which is what is actually used https://web.dev/vitals/
 
    | [deleted]
 
    | comex wrote:
    | You'd need some pretty inefficient code for there to be a
    | delay between the user clicking a button and even _starting_
    | a request...
    | 
    | But even in that case, instant feedback is probably better
    | for the user. It lets them know the website isn't broken and
    | they don't need to click again, and it also makes the
    | experience feel snappier.
 
  | [deleted]
 
| wfhBrian wrote:
| Starts strong:
| 
| > Chrome usage data shows that 90% of a user's time on a page is
| spent after it loads
| 
| Clearly impressive, breakthrough research going on at Google.
 
  | Spivak wrote:
  | Less obvious than you think. How long do you spend on the HN
  | front page? When you open, say, GitLab, how often do you stay
  | there and not immediately click through to something else?
 
    | chrsig wrote:
    | > How long do you spend on the HN front page?
    | 
    | ...more time than it takes to load. much, much more time.
 
    | christophilus wrote:
    | If it takes 300ms to load and takes me a few seconds to find
    | a link, I've spent 90% of my time on the page after it loads.
 
  | troupo wrote:
  | This just shows that they don't even understand what they are
  | measuring.
  | 
  | With their engineering leader [1] arguing that 2.4s to display
  | text and images is fast, no wonder they present "people still
  | spend time on websites after they have spent an eternity
  | loading" as a surprising find.
  | 
  | [1]
  | https://twitter.com/addyosmani/status/1678117107597471745?s=...
 
  | quazar wrote:
  | It's much less than I expected.
 
  | fevangelou wrote:
  | Truly epic comment.
 
| marcosdumay wrote:
| So... Chrome is going to officially spy on their users and report
| that data to Google?
 
  | Xarodon wrote:
  | Just like it's already been doing for decades
 
| CSSer wrote:
| I've generally had no gripes about this or web vitals in general
| except for one thing: group population[0]. It's unfair to create
| a blast radius on a small or medium-sized business's website
| simply because there isn't enough data to determine the true
| extent of the user-experience impact.
| 
| The most recent example I've observed this on was a website with
| a heavy interactive location finder experience that lived on a
| single page. Fine, penalize that page. There's a chance users
| won't initially navigate there anyway. However, because a (very
| minimal, practically irrelevant amount of) similar content on the
| rest of the page was present on 18 other pages, the impact was
| huge.
| 
| The reality of the web today makes this pretty dire in my mind.
| Many businesses choose to run websites that are generally fast,
| but they have to engage with third-party services because they
| don't have the means to build their own map, event scheduler, or
| form experience. The punishment doesn't fit the crime.
| 
| [0]: https://www.searchenginejournal.com/grouped-core-web-
| vitals-...
 
| danielvaughn wrote:
| At first glance, 200ms INP is a pretty high latency for a "good"
| rating. As a comparison, I believe 200ms is an average https
| roundtrip. I'd expect most interactions to be much lower than
| that.
 
  | photonerd wrote:
  | That was my first impression too but then I thought about what
  | it's actually measuring: page responsiveness, not animation
  | jank.
  | 
  | I'm not going to expect a 16ms response or anything for every
  | animation but much slower & you see jank.
  | 
  | For page interactivity though? 0.2s is pretty damn fast. Human
  | response time is 0.15-0.25s.
  | 
  | So it's pretty reasonable.
 
    | 8n4vidtmkvmk wrote:
    | .2s is slow for a redraw. Just because it might take you
    | 200ms to click after something happens doesn't mean you can't
    | see/notice when things take that long.
 
      | photonerd wrote:
      | I'm not saying it's fast. I'm saying that _based on the
      | goal of what it is measuring_ (user input responsiveness)
      | it's _fast enough_. For the purposes of the metric anyhow.
      | 
      | Plus, speaking from far too long of a career dealing with
      | user testing, respond too _fast_ and users think you didn't
      | actually do anything.
      | 
      | So you're kind of boned either way. This is just measuring
      | programmatic delay.
 
  | klysm wrote:
  | I guess it depends on how your interactions are implemented. If
  | it's an SPA then 200ms is absurdly slow. But if it's a more
  | traditional form submit or something then it would take a lot
  | longer for your next set of pixels to come through.
 
    | danielvaughn wrote:
    | Based on the fact that they're hooking into events like
    | onclick, I'd say that they are not looking at traditional
    | form submits, because then the metric would just effectively
    | be First Contentful Paint or something. My interpretation is
    | that they are indeed looking at first paint after an event
    | handler has been fired on the same page.
 
      | iamakulov wrote:
      | This is correct (source: working in web perf for 5 years).
      | 
      | INP is the time between you click/press a key/etc and the
      | moment the next paint happens. It's only measured for on-
      | page interactions, not for navigations.
      | 
      | It's basically like http://danluu.com/input-lag/ but as a
      | web metric.
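For a page with multiple interactions, the reported INP is roughly the worst single interaction latency; per web.dev's description, one high outlier is ignored for every 50 interactions, which makes it approximately a high percentile rather than a strict maximum. A small sketch of that aggregation (the rule is paraphrased here, not a spec-accurate implementation):

```javascript
// Estimate a page's INP from its individual interaction latencies
// (in ms): take the worst latency, but ignore one outlier per 50
// interactions, per web.dev's description of the metric.
function estimateINP(latencies) {
  const sorted = [...latencies].sort((a, b) => a - b);
  const outliersToIgnore = Math.floor(sorted.length / 50);
  return sorted[sorted.length - 1 - outliersToIgnore];
}

// Few interactions: the single worst one is reported.
console.log(estimateINP([40, 80, 120])); // 120
```

So a single slow click on an otherwise fast page dominates the score unless the page sees enough interactions for the outlier rule to kick in.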
 
        | danielvaughn wrote:
        | Thanks for confirming, yeah that makes sense. Side note,
        | that input-lag thing is a very cool resource.
 
| romanovcode wrote:
| Absolutely love how one company dictates how you should build
| your websites. Love it!
 
  | mertd wrote:
  | You can build your website any way you like.
 
| barbazoo wrote:
| I'm lacking lots of context obviously but: What good is a
| sophisticated metric when the pages they index are mostly
| blogspam SO clones etc? I'm not interested in the "most
| responsive" SO clone. Seems out of touch with what Google search
| is struggling with these days.
 
  | mdhb wrote:
  | I don't know what made you think this was somehow the only
  | factor in their ranking algorithm or even a particularly
  | heavily weighted one.
 
    | barbazoo wrote:
    | > I don't know what made you think this was somehow the only
    | factor in their ranking algorithm or even a particularly
    | heavily weighted one
    | 
    | I don't think I implied this at all actually.
 
| DueDilligence wrote:
| [dead]
 
| benatkin wrote:
| To me it sounds like this will help the pattern of showing a
| skeleton screen when loading data.
| https://www.smashingmagazine.com/2020/04/skeleton-screens-re...
 
| axlee wrote:
| We started seeing reports about it in GSC early July, when over a
| single day all our scores turned to crap with no explanation.
| 
| We are in the yellow, but the biggest culprits for blocking time
| are...Google Tag Manager, GAds (and Ganalytics where we still
| have it). So yeah, thanks Google, can't wait to lose on SEO due
| to your own products. And also, thanks for releasing this without
| the proper analysis tooling. (https://web.dev/debug-performance-
| in-the-field/#inp : this is not tooling, this is undue burden on
| developers. Making people bundle an extra "light" library
| (https://www.npmjs.com/package/web-vitals) with their clients,
| forcing them to build their own analytics servers to understand
| what web-vitals complains about...or is often wrong about.)
 
  | falcor84 wrote:
  | I for one commend Google's efforts to improve the web's chances
  | of long-term survival by eliminating themselves and hopefully
  | tracking in general.
 
    | kevin_thibedeau wrote:
    | It's hilarious how AMP pages have become a cesspool of JS
    | dark patterns trying to bombard you with as many ads as
    | possible and keep you from escaping their site.
 
  | deafpiano wrote:
  | Easy solution, remove all that adware crap.
 
  | devmor wrote:
  | >We are in the yellow, but the biggest culprits for blocking
  | time are...Google Tag Manager, GAds and GAnalytics.
  | 
  | This has been the case for over a decade with Google's
  | "Lighthouse" analysis tool as well. I used to use it as part of
  | a site analysis suite for my clients - a good portion of the
  | time, my smaller clients would end up deciding to replace
  | Google Analytics entirely with a different product because of
  | it.
 
    | askura wrote:
    | With SEO it's entirely the case of Google being Google and
    | just doing whatever they fancy. Core changes aren't often to
    | the benefit of the scene these days, and the little space that
    | isn't paid ads is often useless.
    | 
    | I don't think the generative AI results they're going to do
    | will be much better either.
 
    | boredumb wrote:
    | Yes a lot of analytics use cases for smaller clients boil
    | down to a server report of page requests. Obviously with
    | google ads you're using it to monetize so that's a different
    | story, but the client side analytics that google provide are
    | bloated and usually overkill for most sites.
 
    | magicalist wrote:
    | > > _We are in the yellow, but the biggest culprits for
    | blocking time are...Google Tag Manager, GAds and GAnalytics._
    | 
    | > _...a good portion of the time, my smaller clients would
    | end up deciding to replace Google Analytics entirely with a
    | different product because of it._
    | 
    | This seems like a good outcome, then? Market pressure may be
    | the only way to get Google analytics to finally cut their
    | footprint.
 
    | greatNespresso wrote:
    | If they can't give up Google Analytics or Google Ads, but
    | still want the perf, give Cloudflare Zaraz a try. I am a
    | Product Specialist there; if you need an intro in person,
    | happy to do it. Just reach out to me on LinkedIn / Twitter.
 
___________________________________________________________________
(page generated 2023-07-11 23:00 UTC)