I share the distaste for terribly suboptimal solutions, but I'd also like to add another thing: I don't understand this complete lack of concern for wasting power. Bits aren't free; if you make a million computers redo the same computation that could be done once on a single computer, you're literally making humanity burn a million times more coal than needed. I know this feels irrelevant for any single page, but when scaled globally, this kind of thinking scares and saddens me.


The amount of energy we spend on rendering web pages each day pales in comparison to the amount we spend each day on transportation or even Bitcoin mining. A million times a relatively tiny amount is still relatively tiny.

It's an understandable argument, but I don't think it applies to client-side rendering.


Let me point out that HTML, CSS, PNGs and JPEGs are not bitmap data that can be blitted directly to the screen.

It's not obvious that a few simple JS transforms are going to be much (measurably) more resource-intensive than laying out a rich HTML page, even if it is "static" ("just" CSS 1/2/3 -- some of which may involve animations).

Hypertext is hypermedia; hypermedia implies an object-oriented system, and an object-oriented system implies (some form of) programmability. I don't like the general trend towards "single page apps" for things that are just hypertext applications (such systems can be built with HTML and CSS, plus some JavaScript for enhancement) -- but this is sort of going the other way: if you just have "text" content, send text -- and then fix the broken user agent with some JS so that said text presents nicely. If your user agent already handles text nicely (hello, w3m) -- do nothing.

This solution has a certain elegance: with some careful (system) design, it allows for graceful degradation/progressive enhancement. It reads nicely in w3m, and it's quite amenable to scraping (one could argue that semantic HTML is better, and it probably is in isolation, but, as Google seems to have concluded, semantic HTML simply doesn't scale when a large number of sites get it wrong (or just "different")).

[edit: Let me take some of that back -- *this* system doesn't render well in w3m, but the rest of my points stand: the approach should be viable -- but maybe there needs to be something along the lines of:

    <!-- at a URL along the lines of: example.com/some/post/ -->
    <html>
      <body>
        <noscript>To read without javascript, go directly
        to: <a href="post.rst">example.com/some/post/post.rst</a></noscript>
        <script src="magick.js"></script>
      </body>
    </html>
Where magick.js "knows" where to find post.rst, parses it, and displays it... ]
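
For concreteness, here's a minimal sketch of what such a magick.js could do, assuming the raw source is served as post.rst next to the page (both names being the hypothetical ones from above). Actually parsing the reStructuredText is left out; this just fetches the raw text, presents it readably, and leaves the document untouched on any failure:

    // magick.js -- hypothetical fetch-and-render shim for the page above.
    // Assumes the source lives at <current URL>/post.rst (the name is the
    // example from the comment, not a real convention). A real version
    // would parse the reStructuredText into proper markup instead of
    // dumping it into a <pre>.
    document.addEventListener('DOMContentLoaded', () => {
      const src = new URL('post.rst', window.location.href);
      fetch(src)
        .then((resp) => {
          if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
          return resp.text();
        })
        .then((text) => {
          // Minimal "presentation": a soft-wrapped monospace column.
          const pre = document.createElement('pre');
          pre.style.maxWidth = '40em';
          pre.style.margin = '2em auto';
          pre.style.whiteSpace = 'pre-wrap';
          pre.textContent = text;
          document.body.replaceChildren(pre);
        })
        .catch(() => {
          /* On error, keep the delivered HTML as-is. */
        });
    });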



