Regarding being cloud-provider agnostic: it's not always for fault tolerance; there can be a couple of different reasons.
1) it gives your company a stronger bargaining position with the cloud provider.
Granted, my companies tend to have extremely high spend, but being able to shave a dozen or so percent off your bill is enough to hire another 50 engineers in my org.
2) you may end up hitting some kind of unarguable problem.
These could be business-driven (my CEO doesn't like yours!), technical (GCP is not supported by $vendor), or political (you need to make a China version of your product; no GCP in China!)
Everything is trade-offs. AWS never worked for us because the technical implementation of their hypervisor did not pin VMs to the CPU cores of the machine, meaning you often compete with other VMs for memory bandwidth. But AWS works in China (kinda). So my solutions support GCP, with AWS as a slightly less supported backup.
I'd add another reason: Devs need to be able to run stuff locally sometimes.
It's neat having a serverless single-page app hosted in S3, served through CloudFront, with Lambdas that post messages to SQS queues that are read by god knows what else, but what happens when there's a bug? How do you test it? You can throw more cloud at it and give each dev a way to build their own copy of the stack, but that's even more work to manage. Maybe LocalStack behaves the same, but can you integrate it with your test framework?
I never took a hard "we must never use AWS-only services" approach, but having the ability to run something locally was a huge plus. Postgres RDS? Totally fine; you don't need Amazon to run Postgres. Redshift? Worth the lock-in given the performance. Lambda? Eh, probably not, given that we already have a streamlined way to host a webapp.
People don't often think of their local development environment as a "platform" that their stuff needs to work in, but it really is. In that sense, unless you're hosting off of your laptop (please don't!), every app is multi-platform.
Every startup I've worked at (and I've been at this for 15+ years) has moved hosting providers, but I still wouldn't put it high on the list of reasons to avoid vendor lock-in. If you make sure someone(s) know how the app actually runs, and you try to pick stuff you can run locally, the vendor lock-in stuff won't be your biggest challenge in the move.
> unless you're hosting off of your laptop (please don't!)
I hate to tell you this, but there was a thread on HN about this exact topic not too long ago. While it's becoming less common with AWS, Heroku, Cloud Run, etc., there are still companies, large and small, that get it working on one machine and then just run it off that machine until it breaks.
In fact, one of my favorite stories is a guy I know who does ML work: he gets a crazy multi-CPU, multi-GPU workstation for each project, and when the project is running on his machine, they slap ReadyRails on it and ship it to the data center to run in prod.
I agree 100% that you need to structure the project so there's a way to develop locally on your dev machine, without a network connection, and run integration tests against local versions of services.
Looks like Google Spanner has plugged that workflow gap since you evaluated it for your project:
> The Cloud SDK provides a local, in-memory emulator, which you can use to develop and test your applications for free without creating a GCP Project or a billing account. As the emulator stores data only in memory, all state, including data, schema, and configs, is lost on restart. The emulator offers the same APIs as the Cloud Spanner production service and is intended for local development and testing, not for production deployments.
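In practice the quoted workflow is a setup sketch like the following (assumes the gcloud SDK is installed; port and flags per the emulator's current docs):

```shell
# Start the in-memory Spanner emulator (all state is lost on restart)
gcloud emulators spanner start &

# Point the client libraries at the emulator instead of the real service
export SPANNER_EMULATOR_HOST=localhost:9010
```

With `SPANNER_EMULATOR_HOST` set, the official client libraries talk to the local emulator, so no GCP project or billing account is involved.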
At my $WORK we run all of our backend on AWS but anything I've touched has to also run locally.
We use the Serverless Framework, and there are plugins for running the Lambdas locally as well as for DynamoDB, SQS, SES, and EventBridge.
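For anyone unfamiliar with that setup, here is a minimal sketch of what it can look like; the service name and handler are hypothetical, and the exact plugin list depends on which services you need to emulate:

```yaml
service: example-api        # hypothetical service name

plugins:
  - serverless-offline      # runs HTTP-triggered Lambdas on localhost

functions:
  hello:
    handler: handler.hello  # hypothetical module/function
    events:
      - http:
          path: hello
          method: get
```

Running `serverless offline` then serves the function locally, so you can hit it with curl or your test framework without deploying anything.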
I think it's a case of choosing your dependencies carefully, though. I would be wary of integrating against an AWS service that does not have an API-compatible offline or provided-elsewhere alternative.
A case where we fail at this is Cognito. Even our 'offline'/local stack has to connect to our dev environment for that one.
>you may end up hitting some kind of unarguable problem.
Another example: there are multiple countries (for example, here in Russia) where personal data must be stored in data centers located within the country's borders, and not every country has an AWS datacenter on its soil.