Server-side tracking is what everyone started off with. There is a reason client-side analytics won in the marketplace: it simply has a better balance of advantages to disadvantages.
The advantage is that you can measure anything that happens in the browser. As a product owner, what you really care about is the experience you give your visitor/customer, and that happens in the browser, not at the server. This advantage has become stronger over time as sites have used more JavaScript.
One disadvantage is that if your visitor/customer has JavaScript turned off, you get no data. This was a concern in the early days of client-side analytics, but not really anymore.
A more modern disadvantage is that ad blockers might prevent your analytics script from running. However, this is only a problem for client-side analytics packages that are hosted by ad companies, like Google Analytics. It's not a problem with the concept of client-side analytics in general.
EDIT to add:
Another advantage is that only measuring things in the browser makes it a lot easier to exclude non-browser traffic like bots and spiders from your reports.
That's also a disadvantage because you can miss server-only events like "hot-linked" images or PDF downloads straight from Google. On balance, though, we care a lot less today about hot-linked files than we care about excluding automated traffic.
And in my experience, culturally, client-side packages were a huge help in getting management off of pointless vanity metrics like "hit counts" and caring more about human metrics like visits and time.
You missed the biggest disadvantage of client-side tracking: it’s slow. Especially compared with server-side tracking, which typically has zero marginal cost because the logging is already happening.
Counting the size of the executed JavaScript, because that’s what matters more than the compressed transfer size:
ga.js is 45KB, matomo.js is something like 50KB. The “new breed” of trackers are currently commonly 2–5KB (though generally if they were written more carefully they’d be well under 1KB), but they’re sure to continue growing because such is the nature of code.
By using client-side tracking on a different host name, you’re making the browser establish a new HTTPS connection—which can happen in the background so long as you do it properly, so it’s not of itself a serious performance issue—and parse and execute probably 50KB of JavaScript, which blocks the main thread while executing. On a substantial fraction of the devices your site will be running on, that’ll block for more than a hundred milliseconds (to say nothing of the couple of hundred more of CPU time that parsing took, which wasn’t blocking, but was taking away from other things it could be spent on), and on older and slower devices it’ll be adding several hundred milliseconds.
Seriously, parsing and executing JavaScript is slower than you realise. If your site uses JavaScript of its own, adding even one client-side analytics package is probably slowing useful page load down by 0.1–0.5s.
> Try analyzing a TB of logs per day when all you really want is aggregated statistics.
You’re making a completely unfair comparison here. Client-side analytics performs aggregation as it goes; server-side analytics can do just the same, and serious packages in that space do exactly that. As an example that readily springs to mind, you can feed server logs into Matomo and use it just as effectively as when you feed client events into it via matomo.js.
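To make the aggregate-as-you-go point concrete, here is a minimal sketch of streaming aggregation over server logs: only counters are kept, so a day's worth of logs never needs to sit in memory or be stored raw. The log layout and field positions are assumptions for illustration, not any particular package's format.

```python
from collections import Counter

def aggregate(log_lines):
    """Stream over combined-log-style lines, keeping only counters,
    so aggregation happens as the logs are read."""
    pageviews = Counter()
    status_codes = Counter()
    for line in log_lines:
        # Assumed layout: ip - - [timestamp] "METHOD /path HTTP/1.1" status size
        parts = line.split('"')
        if len(parts) < 3:
            continue  # skip malformed lines
        request = parts[1].split()
        status = parts[2].split()[0]
        if len(request) >= 2:
            pageviews[request[1]] += 1
        status_codes[status] += 1
    return pageviews, status_codes

lines = [
    '1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] "GET /home HTTP/1.1" 200 512',
    '1.2.3.4 - - [10/Oct/2023:13:56:01 +0000] "GET /about HTTP/1.1" 200 256',
    '5.6.7.8 - - [10/Oct/2023:13:57:12 +0000] "GET /home HTTP/1.1" 404 64',
]
views, codes = aggregate(lines)
```

The same shape scales from a loop over a file to a proper pipeline; the point is only that "a TB of logs" reduces to a few counters if you aggregate on the way through.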
As for click paths, they’re largely just an artefact of aggregation, so long as you can track individual users (which you admittedly won’t get out of the box from server logs, and perhaps that’s what you were referring to, but you can definitely make it happen). Multiple tabs thwart doing the “navigated from page A to page B” form of path tracking correctly purely server-side, but that’s not a realistic form anyway and isn’t what you’re likely to use; rather you use “page A was loaded, then page B was loaded” and just guess paths to be adjacent loads, which is perfectly compatible with server-side analysis.
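The "adjacent loads" approximation can be sketched in a few lines, assuming you already have (user, timestamp, page) tuples extracted from the logs (the extraction itself and the user-identification scheme are left out here):

```python
from collections import defaultdict

def adjacent_paths(events):
    """Group page loads by user, sort by time, and treat consecutive
    loads as path edges -- the 'page A was loaded, then page B was
    loaded' form of path tracking."""
    by_user = defaultdict(list)
    for user, ts, page in events:
        by_user[user].append((ts, page))
    edges = defaultdict(int)
    for loads in by_user.values():
        loads.sort()  # order each user's loads by timestamp
        for (_, a), (_, b) in zip(loads, loads[1:]):
            edges[(a, b)] += 1
    return dict(edges)

events = [
    ("u1", 100, "/home"), ("u1", 160, "/pricing"), ("u1", 300, "/signup"),
    ("u2", 120, "/home"), ("u2", 200, "/pricing"),
]
paths = adjacent_paths(events)
```

A client-side tracker is doing essentially the same reconstruction, just with its own event stream instead of the access log.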
Time on page, yeah, that’s one you can’t judge at all well server-side. Not that client-side tracking is particularly excellent at it either. All up, though, I suspect that so long as your bounce rate isn’t too high you’ll get decent enough figures from reckoning the time until the next page load and eliminating statistical outliers. If bounces are too high the figures could be misleading, because bounces are likely to behave differently. But people often put far too much effort into trying to track everything, when it’s commonly quite sufficient to take a smaller sample and extrapolate to the whole.
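A minimal sketch of that estimate, assuming one user's page-load timestamps in seconds (the 30-minute outlier cap is an arbitrary choice for illustration):

```python
import statistics

def time_on_page(load_times, cap_seconds=1800):
    """Estimate time on page as the gap to the next load, discarding
    gaps over a cap (likely abandoned tabs) as statistical outliers.
    The final load contributes nothing: there is no next load, which
    is exactly the bounce problem described above."""
    gaps = [b - a for a, b in zip(load_times, load_times[1:])]
    kept = [g for g in gaps if g <= cap_seconds]
    return statistics.median(kept) if kept else None

# Loads at t = 0s, 45s, 95s, then one the next day (outlier, dropped)
est = time_on_page([0, 45, 95, 90000])
```

The median rather than the mean is one cheap way of "eliminating statistical outliers" on top of the hard cap; either way the figure is a reckoning, not a measurement.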
Most websites aren't Google or Facebook, so they're not going to generate terabytes of server logs. A million DAU will generate at most gigabytes of log activity in a day. It's pretty easy to tune logs to be more compact, and they're highly compressible. Most sites don't have anywhere close to a million DAU. If you can't handle processing a gigabyte of data, it seems like there are fundamental problems with your development team.
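A back-of-envelope check of that claim; the per-user and per-line figures below are assumptions for illustration, not measurements:

```python
# Rough daily log volume for a large site. All inputs are assumptions.
dau = 1_000_000          # daily active users
pageviews_per_user = 20  # assumed average loads per user per day
bytes_per_log_line = 200 # a fairly compact access-log line, assumed

daily_bytes = dau * pageviews_per_user * bytes_per_log_line
daily_gb = daily_bytes / 1e9  # single-digit gigabytes, pre-compression
```

Even before compression that lands in single-digit gigabytes per day, and access logs typically compress very well, which is consistent with "at most gigabytes" for a million DAU.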
I don't know how to help you if you can't look at log time stamps and figure out click paths for users. It was a solved problem twenty years ago.
It really doesn't matter how long it took someone to read an article on a blog. They either saw your ads or clicked affiliate links or they didn't. It doesn't really matter if they finished an article or bounced right off the page. It doesn't matter if they took ten minutes or twenty minutes to read an article. You don't know if their kid interrupted them halfway through or if they're just a slow reader.
That kind of shit is just meaningless, made-up metrics. It's the Gish gallop of web advertising: throw a wall of bullshit metrics at site owners or advertisers to justify them paying for the privilege of listening to that bullshit.