Irrational numbers are also "fundamentally inaccessible" in the exact same (infinite-limit) sense that statistical convergence is. It's not a useless concept at all; it's the fundamental concept.
What your real-world example describes is something else entirely. That's just pragmatically choosing the wrong model for reasons of computational or human tractability. That's not fundamental to statistics; that's just a cheat you made because it's good enough.
Statistics is about acknowledging that cheat and quantifying how much it hurts you; fundamentally such a thing is not possible if truth doesn't exist.
Irrational numbers can be represented exactly; integrals let us perform exact calculations with them. In what comparable way does truth (e.g. the true process underlying gene expression) play a role in statistics? We neither measure the truth nor model it; it is absent.
Integrals themselves are, except in very special cases (piecewise functions with rational values), only definable as limits, specifically as the limit of Riemann sums. (You can also use measure theory, but measures themselves are only definable on sigma-algebras, which in the non-finite case are also not explicitly constructible.)
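To make that concrete, here's a quick sketch I just wrote (a toy illustration, not anything from the discussion above): the integral of 4/(1+x^2) over [0, 1] equals pi, but every finite Riemann sum only approximates it; the exact value exists only as the limit of refinement.

    # Midpoint Riemann sums for the integral of 4/(1+x^2) on [0, 1].
    # The exact value is pi; every finite partition only approximates it,
    # and pi itself is reached only in the limit.
    import math

    def riemann_midpoint(f, a, b, n):
        """Midpoint Riemann sum of f on [a, b] with n equal subintervals."""
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    f = lambda x: 4.0 / (1.0 + x * x)
    for n in (10, 100, 1000, 10000):
        s = riemann_midpoint(f, 0.0, 1.0, n)
        print(f"n={n:>5}: sum={s:.10f}  |pi - sum|={abs(math.pi - s):.2e}")

The error shrinks as the partition is refined, in exactly the same way a statistical estimate converges as the sample grows; in neither case do you ever hold the limit itself.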
I don't quite understand. You are arguing that statistics doesn't care about truth simply because some biologists are using a model they know to be wrong? That doesn't even make sense.
In applied math in general (which includes, but is not limited to, statistics), the error of interest is:
error = |true model - actual approximated model|
We can use the triangle inequality to show:
error <= |true model - best model in class X| + |best model in class X - actual approximated model|
Presumably you all have decided, via scientific investigation, that |true - best| is adequately small. Or maybe not; maybe your workplace just doesn't care. I don't really know.
Various mathematical techniques, or in a statistical setting simply increasing the sample size, can be used to reduce |best - actual|. By the triangle inequality, this tightens the bound on your distance from the truth.
Statistics is also concerned with expanding class X in such a way as to more easily reduce the model error.
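Here's a toy simulation of that decomposition (my own made-up example: the "true" model is a quadratic I picked for illustration). Take the truth to be f(x) = x^2 on [0, 1] and the class X to be linear functions; the best least-squares linear approximation works out to x - 1/6, so |true - best| is a fixed floor, while |best - actual| shrinks as the sample size grows.

    # Error decomposition: truth f(x) = x^2 on [0, 1]; class X = linear models.
    # |true - best| is fixed by the choice of X; |best - actual| shrinks with n.
    import numpy as np

    rng = np.random.default_rng(0)
    grid = np.linspace(0.0, 1.0, 1001)

    def dist(g, h):
        """RMS distance between two functions sampled on the grid."""
        return np.sqrt(np.mean((g - h) ** 2))

    true = grid ** 2
    best = grid - 1.0 / 6.0   # best least-squares linear fit to x^2 on [0, 1]

    for n in (10, 100, 10000):
        x = rng.uniform(0.0, 1.0, n)
        y = x ** 2 + rng.normal(0.0, 0.1, n)    # noisy observations of the truth
        slope, intercept = np.polyfit(x, y, 1)  # the "actual" fitted model
        actual = slope * grid + intercept
        print(f"n={n:>6}: |true-best|={dist(true, best):.4f}  "
              f"|best-actual|={dist(best, actual):.4f}  "
              f"error={dist(true, actual):.4f}")

Note that the total error stays floored near |true - best|: growing the sample attacks only the second term, and only expanding the class X can lower the first.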
I really feel like I'm missing something, because I truly can't comprehend what you are trying to argue.
What I'm arguing is that we have no idea what the "true model" is; what we have is a "presumed model" and "observations". In the example I gave, we can never know the true source of the data we have observed (the biology); we can only test our observations against some constructed model. Biologists are using a model they know to be wrong because that is what all models are: we know them to be wrong, we just can't do otherwise, because the truth is not available to us.
I feel like I've made this same point about four times already, so if you aren't getting it, let's just stop here.