Google Seeks to Pacify Consumers with Faster Mobile Pages (bloomberg.com)
30 points by corneliusjac on Feb 24, 2016 | hide | past | favorite | 47 comments


There's a lot of Google negativity here (perhaps not without reason), but when you view the AMP spec and what it tries to solve for, it feels like a smart move to help broaden the adoption of sensible but poorly understood optimizations: https://www.ampproject.org/docs/get_started/technical_overvi...

Those optimizations are worth looking over even if you don't plan on adopting AMP.

I do think that rather than requiring direct use of AMP for specialized search placement, Google should simply measure site performance and ensure your site meets the same benchmark, no matter how you choose to optimize.

Google may not be a perfect ally of the open web, but I like this approach much more than what I've seen from Facebook.


Google has repeatedly said that they will not give preferential treatment for AMP pages. Almost every publication [1] and comment I've seen gets this wrong.

According to official comment, the ranking algorithm will prioritize page speed, and AMP is a good way to achieve that.

BloombergBusiness uses some questionable sentence structure in order to imply the opposite, but that's on them:

  AMP isn’t “a signal we use in ranking” pages, Besbris said. 
  He declined to comment about whether AMP pages would rank
  higher, though he said some of the signals Google uses for 
  search include whether a page is mobile friendly and how 
  rapidly it loads -- two problems that AMP seeks to tackle.
[1] e.g. http://adage.com/article/digital/google-amp-launch-looms-sea... See their retraction at the end of the article.


Some of those guidelines (avoid style recalculation and use only "GPU-accelerated" animations) are specific to Chrome (and to other browsers of today). Style recalculation is needlessly slow today due to the lack of parallelism and typed CSSOM. And there's no such thing as a "GPU-accelerated" animation, either in the spec or technically: all animations can be run on the GPU with minimal state changes. It's just that browsers found it easiest to only optimize a small subset of cases (in the case of CSS animations, really small--transform and opacity), so as to keep around their legacy, originally-CPU-based, painting infrastructure. It's good that they did that, because it let us get to some degree of graphics acceleration quicker, but we can do so much better.

I'd prefer to just improve the browsers instead of putting the burden on Web authors.


I think the browser specs are broken such that there will always be pathological cases, and always cases where you're 'on the performance rails'. End users don't care.

This is no different on any other platform. If you program games directly to OpenGL, for example, you run into all kinds of different bottlenecks, both CPU and GPU. Depending on the pipeline of the various GPUs, and what kinds of hazards they have, there's stuff you have to avoid, or branching paths you have to handle to get good performance. If you use SQL databases, not all databases can handle all queries optimally.

I think it is asking too much of browsers to give them such a huge surface area spec as HTML and CSS, and then ask them to optimize everything so there are no slow paths.

The practical reality is, there's always a subset of specs known to be fast and well supported and developers just need to know what that is.


> I think the browser specs are broken such that there will always be pathological cases, and always cases where you're 'on the performance rails'. End users don't care.

In the case of GPU acceleration, the specs are not broken. There is no fundamental reason why animations on arbitrary properties cannot be GPU-accelerated, and some can be. Ask anyone who's worked with Scaleform whether rapidly changing vector graphics (which is all these animations are) have to be slow on mobile.

For style recalculation, maybe, maybe not; it's not clear to me that browsers are worse than native app frameworks here, except that we're missing Typed CSSOM, which is important spec work.

> This is no different on any other platform. If you program games directly to OpenGL, for example, you run into all kinds of different bottlenecks, both CPU and GPU. Depending on the pipeline of the various GPUS, and what kinds of hazards they have, there's stuff you have to avoid, or branching paths you have to handle to get good performance.

And game engine authors are fed up with it, which has led to Metal, DX12, and Vulkan. It's not a good way to treat developers; we should try to give them APIs in which, as much as possible, everything is fast. I think the Web APIs can be that, if implemented optimally.

> I think it is asking too much of browsers to give them such a huge surface area spec as HTML and CSS, and then ask them to optimize everything so there are no slow paths.

I'm not saying that we have to pretend everything in HTML and CSS can be made equally fast. My point is that, in many areas, browsers have done a bad job of even trying to optimize broadly. Take animations, for example: the only things browsers optimize are two properties out of hundreds, transform and opacity. This is exceptionally problematic, given that there's no technical reason why this has to be the case. I just gave a talk about this a few days ago :)


I think you have a different idea of what GPU acceleration means here. When we say that animating transform and opacity is fast, it means that we can animate them without rasterizing content over and over again, i.e., using composition only. This is independent of how rasterization takes place. Rasterization itself can be GPU-accelerated; Chrome has started doing this on mobile[1].

Animating other CSS properties, such as font-size, requires us to rasterize new content on every frame of the animation. That is the inherent reason why it's slow. Again, we can use the GPU for rasterization, but we're still going to do it every frame instead of caching the result in a texture that's then composited over and over again.

[1]: https://www.chromium.org/developers/design-documents/chromiu...
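The distinction between compositing a cached texture and re-rasterizing every frame can be put in a toy per-frame cost model (all costs here are made-up, illustrative numbers, not measurements of any real browser):

```python
# Toy model: total cost of a 60-frame animation, depending on whether each
# frame can reuse a cached texture (composite-only) or must rasterize new
# content first. Millisecond costs are assumed for illustration only.
RASTERIZE_MS = 8.0   # repaint content into a texture (assumed cost)
COMPOSITE_MS = 1.0   # draw the cached texture via the GPU (assumed cost)

def animation_cost_ms(frames, rasterize_every_frame):
    total = RASTERIZE_MS  # the first frame always has to rasterize
    for _ in range(frames - 1):
        if rasterize_every_frame:       # e.g. animating font-size
            total += RASTERIZE_MS + COMPOSITE_MS
        else:                           # e.g. animating transform/opacity
            total += COMPOSITE_MS
    return total

transform_anim = animation_cost_ms(60, rasterize_every_frame=False)  # 67.0 ms
font_size_anim = animation_cost_ms(60, rasterize_every_frame=True)   # 539.0 ms
```

Under these assumed costs, the composite-only animation is roughly 8x cheaper over the full 60 frames, which is why today's browsers only promise smoothness for the transform/opacity subset.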


I know how it works. I wrote much of the layout and graphics code for Servo and did the initial bringup of the compositor on Firefox Mobile. You shouldn't have to rasterize new content on every frame when you animate font-size (except for individual glyphs, but that's still quick, just a few ms, if you parallelize it—and even with techniques like SDFs and/or GPU vector graphics tricks like [1] you may not even need to do anything on the CPU). If you animate, say, margin-left, you should in most cases be doing essentially no work on the CPU, and you should be able to submit one draw call to repaint the page since all of the resources (images, glyphs, alpha masks, SVGs) should be cached in an atlas. It should be 1-2ms. Even animating something like border-top-width should require no resource changes and one draw call, exactly as fast as your compositor could do it.

Optimally implemented, I believe there should be no difference between "rasterization" and "compositing". You're painting the same number of pixels, paying the same fragment shading and ROP cost, either way. Caching fully rendered Web content in textures, on the assumption that rasterizing that content is expensive, isn't working very well, and I don't think it's worth it.

[1]: http://wdobbie.com/post/gpu-text-rendering-with-vector-textu...


Does using a font atlas really deliver the same quality as fully hinted TrueType fonts? Also, this demo doesn't really have any state changes in it. Wouldn't a more realistic example be a UI that contains mainly different font weights and sizes, to better gauge the effects of all of the pipeline state changes?

I agree that the browser should be more like a game engine, and less like an X11 buffered window system, in terms of just rasterizing fast enough to render at 60fps. But it seems to me that the CSS spec contains a number of features (rounded corners, soft shadows, etc.) that by themselves can be rendered fast on a GPU, but cobbled together in a complex scene graph might result in stalls.


> Does using a font-atlas really deliver the same quality as fully hinted true-type fonts?

Hinting really isn't important on mobile or HiDPI. In fact, no version of OS X or iOS does it. (I don't know about Android, but I wouldn't be surprised if it doesn't on HiDPI screens.)

What's more interesting is antialiasing quality, especially subpixel AA. That's something I think needs more investigation. But I'm cautiously optimistic; there's been some really exciting work in GPU AA and vector graphics lately :)

> Wouldn't a more realistic example be a UI that contains mainly different font weights and sizes, to better gauge the effects of all of the pipeline state changes?

Don't change state then. :)

State changes really shouldn't be necessary. If you set things up properly, you should be able to render arbitrarily many glyphs with one draw call, bound only by your texture atlas size.

> But it seems to me that the CSS spec contains a number of features (rounded corners, soft shadows, etc.) that by themselves can be rendered fast on a GPU, but cobbled together in a complex scene graph might result in stalls.

They typically don't. The key is to render critical resources like that to a texture atlas and to batch like resources together as you do so. (For example, batch all box shadow pieces together into one draw call, batch all rounded corners into another, etc.) This effectively places a small fixed upper bound on the number of draw calls you issue. Then you can rerender the page for each animation frame with a very small number of draw calls (frequently just one) and a simple blitting shader. If you get your draw call/state change overhead down, essentially nothing on the Web comes close to taxing a modern GPU; GPUs are incredibly fast at rendering batches.
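The batching strategy described above can be sketched as grouping display-list items by resource type, so the number of draw calls is bounded by the number of types rather than the number of items (a toy sketch, not real browser code; the item names are invented for illustration):

```python
from collections import defaultdict

# Toy display list: (resource_type, payload) pairs. In a real renderer the
# payload would be geometry referencing an entry in a texture atlas.
display_list = [
    ("glyph", "H"), ("glyph", "i"),
    ("box_shadow", "header"), ("rounded_corner", "button"),
    ("glyph", "!"), ("box_shadow", "footer"),
    ("rounded_corner", "card"), ("image", "logo"),
]

def batch(items):
    """Group like items together so each resource type becomes one draw call."""
    batches = defaultdict(list)
    for kind, payload in items:
        batches[kind].append(payload)
    return batches

batches = batch(display_list)
draw_calls = len(batches)  # bounded by the number of resource types: 4 here
```

However long the page, the draw-call count stays pinned at the (small, fixed) number of distinct resource types, which is the fixed upper bound the comment describes.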

My talk goes into more details, if you're curious: https://air.mozilla.org/bay-area-rust-meetup-february-2016/


Thanks for the link, I'll check it out.

What's the status of Servo BTW? I mean in terms of HTML/CSS spec completeness.

Edit: just saw the demo in the video, awesome job!


Lots of progress made, but still lots to do before we get to rendering most Web sites. And thanks :)


You know, I think it would be great if web browsers fixed things. AMP is fixing a very real problem that users have today.

If and when browsers change (including those used by a lot of people that only update once a year), AMP can just change its guidelines. These are things people have to do today anyway to get fast sites; AMP just packages them up in an easy-to-consume form.


Sure, I'd love browser improvements too, but at the end of the day, you can't automatically optimize everything from the browser. The browser can't magically fix a 1MB JS file in <head>, or serving 2000px wide images to iPhones.

>I'd prefer to just improve the browsers instead of putting the burden on Web authors.

A cynical answer is that Google released this because web authors haven't done a good enough job making fast pages.


> Sure, I'd love browser improvements too, but at the end of the day, you can't automatically optimize everything from the browser. The browser can't magically fix a 1MB JS file in <head>, or serving 2000px wide images to iPhones.

Sure, I agree that many of the optimizations suggested are good practice.

(Note, though, that serving 2000px wide images to iPhones is fixable in the browser, with downscale on decode.) :)

> A cynical answer is that Google released this because web authors haven't done a good enough job making fast pages.

It's not cynical—that's the exact motivation of AMP. My point is that it cuts both ways: we need to keep improving the browser, as there are still massive opportunities for performance improvements.


> (Note, though, that serving 2000px wide images to iPhones is fixable in the browser, with downscale on decode.) :)

From a rendering perspective, yes. From a "make this page load faster" perspective, no, because the network download time is 10x-100x larger than the processing time.
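A back-of-the-envelope calculation makes the point (all figures here are assumed for illustration, not measured): an oversized 2000px-wide JPEG might be ~800 KB versus ~100 KB for a right-sized one, and on a ~1 Mbit/s cellular link the download dominates any decode-time savings:

```python
# Illustrative, assumed figures: image sizes and link speed are guesses.
LINK_BYTES_PER_S = 1_000_000 / 8        # ~1 Mbit/s cellular link
OVERSIZED_BYTES = 800_000               # 2000px-wide JPEG (assumed)
RIGHT_SIZED_BYTES = 100_000             # right-sized JPEG (assumed)
DECODE_MS = 50                          # decode cost, roughly either way (assumed)

def load_ms(size_bytes):
    """Total time to fetch and decode an image: download plus decode."""
    download_ms = size_bytes / LINK_BYTES_PER_S * 1000
    return download_ms + DECODE_MS

oversized_ms = load_ms(OVERSIZED_BYTES)      # 6400 ms download + 50 ms decode
right_sized_ms = load_ms(RIGHT_SIZED_BYTES)  # 800 ms download + 50 ms decode
```

With these assumptions the oversized image's download alone (6400 ms) is over 100x the decode cost, so fixing decode in the browser barely moves total load time, while serving the right-sized image does.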


> All CSS must be inline and size-bound

Does this mean that only inline styling is supported (<div style="...">), or just that styling should be included on the page in a <style> tag?


I've already gone to great lengths to optimize my mobile site. I am fully confident that AMP cannot possibly make my site any faster. Every page on my site loads via a single HTTP request that transfers 10k of gzipped JS and HTML all at once. Loading an additional script with an extra request for another JS file is completely against my mobile design principles.

Should people like me start using AMP anyway to stay relevant in Google search rankings?


:) AMP isn't for you then. Honestly, your site is probably better than AMP could ever make it, simply because your site doesn't need a library enforcing rules, but that's not the point of AMP. AMP is about forcing news sites to do things similar to what I assume you have already done.

Imagine some web developer at Huge Mega News Corp (HMNC). He comes to his manager about the fact that they have 5 analytics suites running and that the ads bounce the screen around when they load. His manager reviews it and determines it's not important because 1) people are still coming to the site and 2) it isn't stopping people from coming to the site. Now the dev at HMNC can come to his manager and say: what you want doesn't conform to AMP, so we can't do that.

I have heard multiple times that AMP will not increase your ranking. Load times don't inherently increase your ranking over other content. But if your site has similar merits to another site, then load times, among other things, will affect who is ranked higher.


  He declined to comment about whether AMP pages would rank
  higher, though he said some of the signals Google uses for
  search include whether a page is mobile friendly and how
  rapidly it loads

Showing only results that use AMP would be a very coarse cudgel to enforce page speed. It is entirely within Google's ability to simply directly measure page speed, and to my eyes the more sensible course of action given the capability.


You have to imagine someone there is already measuring this, if for no other reason than to evaluate the effectiveness of their own approaches.


AMP does more than just minimize network requests or JS loading size, it structures the page so that browsers can render it faster as well. (that is, controlling for network download and JS parse time)


You are inlining the script on the same HTML page?

Hope you are using chunked transfers so it will render sooner; pre-gzipped streams end up not being chunked in nginx.

Google has already said AMP is not used as a ranking signal (but mobile friendliness is).


I am confused. Are you referring to 'Transfer-Encoding: chunked'? That's obviously very useful for dynamically generated content. But what advantage does it have over a simple Content-Length header?
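For reference, chunked transfer encoding frames each piece of the body with its size in hex, so the server can start sending bytes before the total length is known. A minimal sketch of the framing (per RFC 7230 section 4.1):

```python
def to_chunked(pieces):
    """Frame byte strings as an HTTP/1.1 chunked body (RFC 7230, section 4.1)."""
    out = b""
    for piece in pieces:
        if piece:  # a zero-length chunk is reserved for the terminator
            out += b"%x\r\n" % len(piece) + piece + b"\r\n"
    return out + b"0\r\n\r\n"  # zero-size chunk terminates the body

# A page streamed in two pieces, e.g. head flushed before the body is ready:
body = to_chunked([b"<html><head>", b"</head><body>hello</body></html>"])
```

The advantage over a Content-Length header is that the server must buffer (or pre-compute the size of) the entire response before sending the first byte with Content-Length, whereas chunked framing lets bytes hit the wire as they are generated, so the browser can begin parsing sooner. For a fully pre-generated static response, Content-Length is indeed just as good.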


Shameless plug, but if you care about web performance, my startup, https://rasterize.io, provides easy frontend performance monitoring. It tells you how fast users load your site, broken down by desktop vs. mobile and by geography. You can view most of the Chrome DevTools waterfall information, for every (modern) visitor.


What's the page load overhead in adding this for each user?


It's an 8kB async-loaded JS script, and then it posts a small (1kB) amount of data back to my servers. Overhead should be tiny, and it's async so it won't block page rendering.


Looking at your source, you are using the Timing API to get RUM data. How is your JS different/better than just using Boomerang to get those metrics?


I still think this is pointless nonsense. Anybody who is going to bother implementing AMP will be equally capable of implementing well-optimised, standards-compliant web pages, with the bonus of not having to use this silly restricted format.


Ha, for a second I thought they were going to do something about the ad saturation. uBlock and NoScript on Firefox for Android pacify this consumer just fine.


Content alone loads faster than loading content plus ads.

Making one DNS request is faster than making several, including ones for domains related to advertising.

There are the speed gains that Google can offer and then there are ones that are under the user's control, which Google will never offer, unless they change their business model.

Not requesting ads makes my user experience faster.

Experiment: Disable Javascript for Google searches and what happens? Do the ads still load?


I haven't seen much on the ad data side of this.

Please correct me if I'm wrong, but if a publisher switches to AMP, then all of their ad serving data (and by extension impression data for all of their visitors that Google is able to identify) becomes Google's data essentially. Whereas before, if a publisher wasn't running DFP, AdSense, or GA, then Google was essentially blind to their ad data.

So does this or does this not give Google that level of visibility into the ad layer of a site? If so, that is a MAJOR strategic competitive data advantage once lots of sites switch over, which they will be encouraged to do due to the higher mobile ranking that results from getting their sites sped up.


Okay, but what publisher isn't using DFP, AdSense, or GA? Have you seen the media sites being targeted here? Take any one and watch the dozen or more 3rd-party domains it hits. Their data goes everywhere already, including to Google.


I don't disagree about the ubiquity of DFP, AdSense, and GA. But unpacking that a bit further: DFP gives way richer data than can be had with AdSense or GA if a publisher is running non-Google ads on their site. So Google has limited visibility into anyone not using DFP. I don't have market data on the adoption of DFP, so while I think it is a safe bet it is still the major player, I'm not sure what % of sites do NOT use it.

And for anything not run through DFP and not on Google inventory, this would now give them visibility into those ads, potentially bid data from the headers, potentially audience data for anything passed in plaintext (surprisingly common), etc. Or am I missing something in terms of the data they would see?


Over the past year, Plenty of Fish has been emerging as a higher ROI alternative to Google ad offerings.


Make up your mind, Bloomberg:

Google said [...] it will put websites built with its Accelerated Mobile Pages [...] in the Top Stories section of a search results page

VERSUS

AMP isn’t “a signal we use in ranking” pages, Besbris said. He declined to comment about whether AMP pages would rank higher


There is a difference between being the number one in the standard results list and having your page featured a la Google+/Google MyBusiness pages.

I don't know the data on CTR for these featured pages but I think it's probably significant.


When an article about less web bloat has a self-playing video clip...


Not kicking me to a full screen ad for their gmail app every time I log in to gmail would be a nice start.


You know what already makes pages load lightning fast?

Ad blocking.


One of AMP's goals seems to be ensuring that websites aren't noticeably slower just because they have ads on them.


Cool, if they also remove ads from AMP served content, we will be at feature parity!


There are two articles about AMP on the front page right now, this one and https://news.ycombinator.com/item?id=11167428. Which is better?


They both cover different aspects of AMP and there doesn't seem to be much overlap so I think both are ok.


Ok, we won't merge them.


To use AMP you need to load JavaScript from the AMP CDN, which is operated by Google. From https://github.com/ampproject/amphtml/blob/master/spec/amp-h...:

> The AMP runtime is loaded via the mandatory <script src="https://cdn.ampproject.org/v0.js"></script> tag in the AMP document <head>.

So you are handing over the security of your site and your users' privacy to Google.


You can also host it yourself; see the GitHub downloads.


You could also use a subresource integrity check if you are worried about the security but still want to use Google's hosted version.



