> Using their initial foothold of OAuth user tokens for GitHub.com, the actor was able to exfiltrate a set of private npm repositories, some of which included secrets such as AWS access keys.
Keep those keys out of source control, folks. There are a lot of options for secrets management these days, and making it harder for attackers to totally own you if they only manage to crack one piece of your infrastructure is key to limiting damage from this sort of attack.
I agree fully with you, but I don't think secrets management is as easy/cheap as some people pretend. On AWS, for example, each secret you store costs $0.40/month, plus $0.05 per 10,000 API calls, and that can add up if you only put one API key/password/etc. per "Secret" (for an individual at least; I hate bleeding off $10+/mo to store tiny bits of strings). Then, once you have the secret stored, you need to set up roles/policies to be able to retrieve them.
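One way to soften the per-secret cost is to pack several small credentials into a single Secrets Manager Secret as a JSON string, paying the monthly base price once. A minimal sketch of the unpacking side (the fetch itself would use `GetSecretValue` from `@aws-sdk/client-secrets-manager`; all names and values here are illustrative):

```javascript
// Sketch: several small credentials bundled into ONE Secrets Manager
// "Secret", stored as a JSON string, so the $0.40/mo base price is paid
// once rather than per key. Only the unpacking step is shown; the fetch
// would come from GetSecretValue's SecretString field.
function unpackSecretBundle(secretString) {
  const bundle = JSON.parse(secretString);
  if (typeof bundle !== "object" || bundle === null || Array.isArray(bundle)) {
    throw new Error("secret payload is not a JSON object");
  }
  return bundle;
}

// e.g. one Secret holding three credentials (placeholder values):
const raw = JSON.stringify({
  GITHUB_TOKEN: "ghp_example",
  STRIPE_KEY: "sk_test_example",
  DB_PASSWORD: "example-password",
});
const creds = unpackSecretBundle(raw);
console.log(Object.keys(creds).length); // → 3
```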
I have this set up pretty well in my code now, but getting there wasn't simple or easy from my perspective, and keeping the list of secrets your IAM user can access up to date can be a pain as well.
I'm working in a Lambda environment, so my options might be more limited, but I'm interested to see how other people are solving this issue (maybe specifically for small/side projects). As it stands, my Lambdas all get a role applied to them that gives them access to the secrets, but something not AWS-specific would need a "bootstrap secret" to be injected before the code could call out to the third party to get the other secrets. For Lambdas I suppose I could inject that "bootstrap secret" via environment variables, but now I've got a new issue to deal with. Injecting at build time via something like GitHub Actions Secrets is an option, I guess.
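For the bootstrap-secret shape specifically, one common pattern is to let the environment carry only the single bootstrap credential and lazily fetch (and cache across warm invocations) everything else. A hypothetical sketch; `fetchFromStore` and the `BOOTSTRAP_SECRET` variable name stand in for whatever third-party store and naming you'd actually use:

```javascript
// Hypothetical Lambda-style pattern: only a single "bootstrap" credential
// arrives via an environment variable; the remaining secrets are fetched
// once and cached for the lifetime of the warm container.
let cachedSecrets = null;

// Stand-in for a call to a third-party secret store using the bootstrap
// credential; a real implementation would make a network request.
async function fetchFromStore(bootstrapToken) {
  return { dbPassword: "fetched-with-" + bootstrapToken };
}

async function getSecrets() {
  if (cachedSecrets) return cachedSecrets; // reuse across warm invocations
  const bootstrap = process.env.BOOTSTRAP_SECRET; // name is illustrative
  if (!bootstrap) throw new Error("BOOTSTRAP_SECRET is not set");
  cachedSecrets = await fetchFromStore(bootstrap);
  return cachedSecrets;
}
```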
All that to say, while I agree secrets should never be in source, in practice it's not super easy (I'd love to be proven wrong, maybe I'm not doing it right).
> I agree fully with you, but I don't think secrets management is as easy/cheap as some people pretend. On AWS, for example, each secret you store costs $0.40/month, plus $0.05 per 10,000 API calls, and that can add up if you only put one API key/password/etc. per "Secret" (for an individual at least; I hate bleeding off $10+/mo to store tiny bits of strings). Then, once you have the secret stored, you need to set up roles/policies to be able to retrieve them.
There is a middle step between "let's have API tokens committed in SCM" and "let's deploy a full authentication system/use this costly solution", and that is using environment variables. In your code, do `process.env.MY_SECRET_KEY` instead of `myGitHubPersonalToken`, and then when you run the program, run it with `MY_SECRET_KEY=myGitHubPersonalToken npm start`. Magically, you can commit your code without exposing any secrets, and share the secret where you need it out-of-band.
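In code, the approach boils down to something like this (a minimal sketch; the variable name is whatever you choose):

```javascript
// Read a required secret from the process environment, failing fast with a
// clear message if it's missing, so the credential never lives in source.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(name + " is not set; refusing to start");
  }
  return value;
}

// Usage: start the process with `MY_SECRET_KEY=... npm start`, then:
// const token = requireEnv("MY_SECRET_KEY");
```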
Zero cost, actually easier to configure your software when you need to, and as a bonus, attackers won't get access to your infrastructure if someone gets hold of your source code.
That npm Inc. wasn't using environment variables for secrets (or failed to uphold that standard in its code) is embarrassing.
> when you run the program, run it with `MY_SECRET_KEY=myGitHubPersonalToken npm start`
But where does this live? Or do you literally mean that Jane The Sys Admin is supposed to type this into her terminal every time the service restarts in the middle of the night?
What if I need to replace a node? Or scale a service? How do these secrets get there?
> But where does this live? Or do you literally mean that Jane The Sys Admin is supposed to type this into her terminal every time the service restarts in the middle of the night?
Depends on how the service is deployed. If you're just running it on a Digital Ocean instance by manually SSHing into the instance and running systemd services, define it in the .service file (it supports defining environment variables).
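For the systemd route, the unit file's `Environment=` directive (or `EnvironmentFile=`, pointing at a root-readable file kept outside the repo) does the job. A sketch, with unit and variable names as placeholders:

```ini
# /etc/systemd/system/myapp.service (illustrative names)
[Unit]
Description=My app

[Service]
ExecStart=/usr/bin/npm start
WorkingDirectory=/srv/myapp
# Inline definition...
Environment=MY_SECRET_KEY=myGitHubPersonalToken
# ...or better, a file that never enters source control:
EnvironmentFile=/etc/myapp/secrets.env

[Install]
WantedBy=multi-user.target
```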
If you're creating instances via automation (like Terraform), most tools (including Terraform) support loading values from environment variables. Terraform picks up any variable named `TF_VAR_<name>`, so you run `TF_VAR_my_secret_key=myGitHubPersonalToken terraform apply` when you create the instance, and reference `var.my_secret_key` in your HCL definitions.
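Concretely, the HCL side only needs a variable declaration; the value then stays out of your `.tf` files entirely (variable name is illustrative):

```hcl
# Run as: TF_VAR_my_secret_key=... terraform apply
variable "my_secret_key" {
  type      = string
  sensitive = true # keeps the value out of plan/apply output
}

# Reference it like any other variable in your definitions,
# e.g. var.my_secret_key wherever a credential is needed.
```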
We use Doppler and some other custom built tooling, and it works really well. The Doppler pricing is really fair IMO, and their tooling adds a lot of value for us so I'm OK with paying for it.
Software security is rarely free (even with an OSS tool you've got infra and management costs), but the cost is almost always cheaper than a major breach that could stem from something like this incident, which fortunately was pretty contained.
Our goal is to simplify this as much as possible. We use client-side end-to-end encryption so you don't have to trust a third party, and pricing on our paid plans is based on number of users, not number of secrets.
Having helped build the production secret management bits at Airbnb [0], I would encourage most people to buy, not build. It was a large task and I'm 95% sure we didn't add more value than the cost of all the people working on it for as long as it took, compared to buying a solution.
Just as the rest of Airbnb started before VPCs were GA (and thus required a large engineering investment to move everything to VPCs), we started on the secret management stuff before there were many good alternatives available (though arguably HashiCorp Vault was around and mature enough at the time, and would have been the best alternative). I haven't looked at EnvKey for production use, but I've definitely considered it for home use since it's just so deliciously simple.
I was looking at this with great interest, until I looked at the pricing and found out the lowest hosted pricing tier starts at $150/month.
I know there's a free "community" hosted version, but I'm not sure what the differences are outside of the limits and support, and I'd prefer to see the pricing scale up a bit more gently than $0 -> $150 as soon as I reach the limits of the free offering.
What would need to change for you to feel that's not the case?
The open source version is fully functional and can definitely scale beyond toy projects. You don't get high availability, multi-instance clustering, auto-scaling, multi-region failover etc. built in, but if you put it on a beefy host it can easily handle a large number of users and a very high request rate.
The way I think of it is we give you the fully functional server, but charge for advanced infrastructure and a few advanced features (SSO/Teams).
It's comparable to the open source version of a tool like Vault where you get the server, but need to implement advanced stuff like HA, auto-scaling, networking, etc. yourself, or else use a paid version.
Even though I would personally choose to use the hosted version, I still consider the availability of an open-source solution to be a great hedge against vendor lock-in/failure.
This is only true if I can be confident that I can replicate the hosted setup with the open-source version if I invested the necessary resources. Otherwise, the existence of an open-source option adds little value. In fact it can turn me off from a product since it'd seem like they're using open source as a marketing hook with no real intention of empowering users to be able to actually move off their hosted platforms.
We could potentially enable clustering for the Open Source version. One of the reasons it isn't already enabled is that our clustering implementation is currently AWS-specific: it relies on the AWS metadata endpoint to look up a host's internal IP, as well as on networking rules that allow hosts to talk to each other.
This is why Vault requires another piece like Consul (plus a whole lot of tricky infra/networking work) to achieve HA.
That said, we could allow users of the Open Source version to specify a URL via an environment variable to look up a host's internal IP so that clustering would work.
Auto-scaling is provider-specific though, so I don't see how that could be baked in. Same with secure networking.
I'll also just say that while we do want the open source version to be fully functional (if a bit more DIY), another motivation for us that I see as equally important for a security product is transparency.
While it's inarguably crucial for any clients implementing end-to-end encryption to be open source, I think there's a lot of value in open sourcing the server as well (regardless of how practical it is to actually run) so that users can know what's happening on the server-side, see that the code is high quality and tested, and so on.
Perhaps you'll see this if you get notified of replies: per your suggestion, we have now introduced a lower tier in between the free tier and the ~$150/month tier.
Thanks for the feedback. We're considering some changes here: specifically, reducing the limits of the free tier a bit and then adding another tier in the $50/mo range between the free tier and the $150/mo tier. Would that be a better fit for you?
Security software that puts SSO support behind a high pricing tier (and as far as I can tell has no other way to get e.g. any kind of 2FA) is a very bad look.
> EnvKey Business Self-Hosted runs in an AWS account you control. You can use it with any host or cloud provider.
This is a bit confusing; that section should be clarified. Does it mean that the systems using EnvKey can run somewhere else? That other variants of EnvKey can be run anywhere? ...
On price-gating SSO, I'm sympathetic to your argument. I'll consider adding it to the lower-priced tier.
2FA is already effectively built into EnvKey through device-based authorization. A user can only sign in to EnvKey from an authorized device, so an email account compromise won't be enough for an attacker to gain access; they would also need access to an authorized device.
A passphrase can optionally be supplied on top of this for an additional factor (though it's unnecessary if you're already using OS-level disk encryption).
It's basically the same model as SSH. And imo it's superior to SMS or app-based 2FA (perhaps not token-based, which I'm open to adding). It handles the main threat models (phishing/email account compromise) with far better UX and convenience.
> Is a bit confusing, that section should be clarified a bit. Does it mean that you can run the systems using it somewhere else? Does it mean other variants of EnvKey can be run everywhere? ...
I agree this could be less confusing.
The EnvKey host server runs in your AWS account, but that doesn't mean that apps you integrate EnvKey with are in any way limited to AWS. You could have your apps running in Heroku, GCP, Azure, or whatever, and integrate with your self-hosted EnvKey installation for configuration and secrets management with no problem.
What if the key is in a git-crypted file? I get what you are saying about an open file, but surely best practice is to use encrypted files to store secrets that are needed, e.g. for deployment.
An encrypted file stored along with the git repo (without the decryption key) has a different attack surface. My original comment was more targeted towards users storing their keys in plaintext.