This is good insofar as it forces you to make local development possible. In my experience: it's a big red flag if your systems are so complex or interdependent that it's impossible to run or test any of them locally.
That leads to people only testing in staging envs, causing staging to constantly break and discouraging automated tests that prevent regression bugs. It also leads to increasing complexity and interconnectedness over time, since people are never encouraged to get code running in isolation.
Ehh... once your systems use more than a few pieces of cloud infrastructure / SaaS / PaaS / external dependencies / etc, purely local development of the system is just not possible.
There are some (limited) simulators / emulators available for some services, but running a full platform with cloud dependencies on a local machine often just isn't feasible.
The answer (IMHO) is to not use services that make it impossible to develop locally, unless you can trivially mock them; the benefits of such services aren't worth it if they result in a system that is inherently untestable with an environment that's inherently unreproducible.
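One way to keep those services trivially mockable is to code against a small interface you own rather than the vendor SDK directly. A minimal sketch (the `ObjectStore` interface and class names are hypothetical, not from any real SDK):

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Tiny interface the application codes against, so the cloud-backed
    implementation can be swapped for an in-memory fake locally."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryObjectStore(ObjectStore):
    """Trivial mock used for local development and unit tests."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

# In production you'd inject an S3-backed implementation instead; the
# calling code never knows the difference.
store = InMemoryObjectStore()
store.put("report.csv", b"a,b\n1,2\n")
assert store.get("report.csv") == b"a,b\n1,2\n"
```

If a service can't be hidden behind an interface like this, that's the red flag the parent is talking about.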
(I can go on a rant about AWS Lambda, and how if they'd used a standardized interface like FastCGI it would make local testing trivial, but they won't do that because they need vendor lock-in...)
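The mitigation is to keep the handler itself a plain function with no AWS imports, so the Lambda runtime is the only thing that knows it's a Lambda. A hedged sketch (this `handler` is made up for illustration):

```python
# A Lambda-style handler kept as a plain function. The only coupling to
# Lambda is the (event, context) signature, so the same function can be
# invoked directly in a local test -- no emulator, no deploy.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello {name}"}

# Local invocation is just a function call.
response = handler({"name": "dev"}, None)
assert response == {"statusCode": 200, "body": "hello dev"}
```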
Awesome. You just cost your company $500K in salaries for people to maintain databases, networks, storage, servers and a bunch of other stuff Google/AWS already do much better than you.
How lucky you are that management pays you to pursue your hobbies!
Agreed. And stay away from proprietary cloud services that lock you into a specific cloud provider. Otherwise, you'll end up like one of those companies that still does everything on MS SQL Server and various Oracle byproducts despite rising costs because of decisions made many years ago.
Forcing developers to deal with mocks right from the beginning is critical in my opinion. Unit testing as part of your CI/CD flow needs to be a first priority rather than something that gets thought of later on. Testing locally should be synonymous with running your unit test suite.
Integration testing against a deployed non-production cloud environment is still necessary, but it should never be a prerequisite for developing locally.
> In my experience: it's a big red flag if your systems are so complex or interdependent that it's impossible to run or test any of them locally
At one time this was a huge blocker for our productivity. Access to a reliable test environment was only possible by way of a specific customer's production environment. The vendor does maintain a shared 3rd party integration test system, but it's so far away from a realistic customer configuration that any result from that environment is more distracting than helpful.
In order to get this sort of thing out of the way, we wrote a simulator for the vendor's system which approximates behavior across 3-4 of our customers' live configurations. It's a totally fake piece of shit, but it's a consistent one. Our simulated environment testing will get us about 90% of the way there now. There are still things we simply have to test in customer prod though.
This is what we do as well. We just stub out the 3rd party integration and inject a dynamic configuration to generate whatever type of response we need.
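That "dynamic configuration" approach can be as simple as a stub whose canned responses are selected per test, so each test dials up whichever behavior it needs. A sketch with an invented `StubVendorAPI` (the scenario names and shapes are illustrative, not any real vendor's API):

```python
# Stub of a third-party API: the response is driven by an injected
# scenario, so a test can exercise success, throttling, or garbage
# payloads without touching the real service.
class StubVendorAPI:
    SCENARIOS = {
        "success":   {"code": 200, "body": {"result": "ok"}},
        "throttled": {"code": 429, "body": {"error": "rate limited"}},
        "malformed": {"code": 200, "body": "not json at all"},
    }

    def __init__(self, scenario: str) -> None:
        self._scenario = scenario

    def fetch_order(self, order_id: str) -> dict:
        # A real client would do an HTTP round trip here; the stub
        # just replays the configured canned response.
        return dict(self.SCENARIOS[self._scenario], order_id=order_id)

api = StubVendorAPI("throttled")
resp = api.fetch_order("A-123")
assert resp["code"] == 429
```

The payoff is that failure modes the real sandbox rarely produces (rate limits, malformed bodies) become one-line test setup.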