The reason I want to stay out of that discussion is that it often just amounts to mud-slinging from one side at the other. "Intentionally misleading" is quite an accusation, and I don't think Heroku would indulge in that.
Personally, I think it is incredibly naive to build an application on a framework with no built-in concurrency. The main reason is that the queue that builds up in front of it is outside your control, so you have to absorb it.
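To make the point concrete, here is a minimal sketch of why that queue hurts. It is not Heroku's actual router; the dyno count, arrival rate, and the `simulate` helper are invented for illustration. It compares random routing (each request sent to a blindly chosen single-request dyno, where it queues) against an idealised shared queue that feeds whichever dyno frees up first:

```python
import random

def simulate(num_dynos, arrivals, service_time, routing):
    """Return the mean wait time for a stream of requests.

    routing == "random": each request queues at a uniformly random dyno.
    routing == "global": one shared queue feeds the first free dyno.
    """
    free_at = [0.0] * num_dynos          # when each dyno next frees up
    waits = []
    for t in arrivals:
        if routing == "random":
            d = random.randrange(num_dynos)              # router picks blindly
        else:
            d = min(range(num_dynos), key=free_at.__getitem__)  # first free dyno
        start = max(t, free_at[d])       # request waits until its dyno is free
        waits.append(start - t)
        free_at[d] = start + service_time
    return sum(waits) / len(waits)

random.seed(1)
# ~70% utilisation: 7 requests/sec into 10 dynos that each take 1s per request
t, arrivals = 0.0, []
for _ in range(20000):
    t += random.expovariate(7.0)
    arrivals.append(t)

rand_wait = simulate(10, arrivals, 1.0, "random")
glob_wait = simulate(10, arrivals, 1.0, "global")
```

At identical load, the randomly routed system waits far longer, because some dynos sit idle while others accumulate a queue the application can neither see nor drain. With in-process concurrency the backend can at least work that queue down itself.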
It is also naive to think that your cooked-up statistical model resembles reality in any way. Routing is a hard problem, so it is entirely plausible that your model does not hold up in practice. Besides, the time it takes to construct those R models is a missed opportunity to improve the backend you have. And the R models don't say a lot, sorry. At best they just stir up a storm --- and boy did they succeed.
I agree it is unfortunate that Heroku's documentation isn't better and that New Relic doesn't provide accurate latency statistics. But to claim that this is entirely Heroku's fault is, frankly, naive as well.
It is quite an accusation, but I'm far from a bystander in this issue. I have extensive, documented communication with Heroku engineers over the course of 1.5 years (Feb 2011 - June 2012).
I'm not discussing any sort of cooked-up statistical model. I'm discussing the real-world experience I had scaling an application on Heroku.
"mud-slinging from one side upon the other"
You imply that I was uninvolved before Rap Genius's exposé. I assure you that is not the case. I had chosen a side in this argument well before Rap Genius went public.
The "you" was addressed to RG, not to your experience. I completely agree that scaling is one of the harder problems in computer science. Most people just run a stateless system and hope for the best, but this is a delicate matter and a hard problem.
The hard part, whether you are a customer or Heroku, is that the problem might be on the other side of the fence. And how do you communicate that in a diplomatic way?
Personally, my opinion is something along the lines of "If you use Ruby + Rails in that configuration, then you deserve the problem".