> Each sample of the malware contains a hardcoded name of the victim organization.
> Apart from encrypting the files and leaving ransom notes, the sample has none of the additional functionality that other threat actors tend to use in their Trojans: no C&C communication, no termination of running processes, no anti-analysis tricks, etc.
> Curiously, the ELF binary contains some debug information, including names of functions, global variables and source code files used by the malware developers.
It is for manually targeted attacks. Once it is deployed, the damage is done and the victim is notified. They don't need C&C. The hardcoded victim name is probably just a big FU.
You can have excellent perimeter security but this organisation might just bribe an employee to gain access.
It is far more scary than some automated bot scanning for ports.
I'm not denying its effectiveness, just remarking on its technical merit as a topic of discussion. Once the system is already compromised it becomes less about the payload and more about the attack vector involved. If the payload in question was using novel techniques then it would be a different story but the analysis shows the program to be relatively rudimentary.
Well, no point in over-engineering a solution, right?
To put it another way, sounds like they moved fast (and maybe broke a few things?), put together an MVP that meets their needs, rolled it out, and are now likely learning and gathering feedback for their next iteration... sounds like they fit right in around here!
(This thread reminded me of something a cow-orker used to say: "If it's stupid but it works, it's not stupid".)
Rule #2 -- Never download and execute binaries from the Internet if you can't trace them to a reputable source. Linux will not protect you if you execute it yourself.
Rule #3 -- If you can't trace it but need to run it anyway, jail the heck out of it. Create a VM and run it inside, also disabling its ability to use the network for anything other than reaching the Internet. Make sure it can't reach any other machine on your network, or ports on your PC through loopback.
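A sketch of that host-side network fence, assuming a libvirt-style setup where the VM's interface shows up on the host as `vnet0` (the interface name and address ranges are assumptions; adjust to your network):

```shell
# Drop traffic from the VM toward private (RFC 1918) ranges, so it can
# reach the Internet but not the LAN. Interface name vnet0 is assumed.
iptables -I FORWARD -i vnet0 -d 10.0.0.0/8     -j DROP
iptables -I FORWARD -i vnet0 -d 172.16.0.0/12  -j DROP
iptables -I FORWARD -i vnet0 -d 192.168.0.0/16 -j DROP
# And keep it away from the host's own ports (including loopback-bound
# services exposed on this interface):
iptables -I INPUT -i vnet0 -j DROP
```

Using `-I` (insert) rather than `-A` (append) puts these rules ahead of any accept rules your VM tooling may have installed in the same chains.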
Mbed TLS is a small library with a configurable build and a modest code footprint. I am not surprised they're using it: it embeds easily into applications or firmware and has a decent API, which cannot be said of OpenSSL.
> RansomEXX is what security researchers call a "big-game hunter" or "human-operated ransomware." [...]
> These groups buy access or breach networks themselves, expand access to as many systems as possible, and then manually deploy their ransomware binary as a final payload to cripple as much of the target's infrastructure as possible.
As somebody who switched to Linux-only (desktop + servers) years ago, seeing how ransomware gets "ported" to Linux has me rethinking my backup system.
Does anybody have experience with immutable backups? This is so important, because many ransomware strains attack backups first.
I think offline backups on write-once media are the safest. A DVD-R is the only thing I can think of, but that's quite a hassle and doesn't seem contemporary in 2020. Am I missing something?
Not accumulating backups forever is the tricky part of this setup. My basic plan is to manually switch to another bucket and verify the newly backed-up data before deleting the old bucket.
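That verify-before-delete step can be sketched without any cloud specifics. A minimal local sketch (the bucket/API calls are omitted and the names are made up); it just shows the checksum comparison that gates deleting the old copy:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def safe_to_delete_old(source: Path, new_backup: Path) -> bool:
    """Return True only if every file under `source` has an identical
    copy under `new_backup`; otherwise keep the old backup around."""
    for src in source.rglob("*"):
        if src.is_file():
            copy = new_backup / src.relative_to(source)
            if not copy.is_file() or checksum(copy) != checksum(src):
                return False  # mismatch or missing file: don't delete
    return True
```

In the real workflow the two paths would be local mounts or staging copies of the two buckets; the point is that the old generation only goes away after the new one verifies clean.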
You can also enable time-based expiration of hidden files on B2, so you don't have to actively do anything, but in theory, if the malware bides its time, it could still delete/overwrite everything without you noticing.
If you want to self-host, restic's rest-server also has an --append-only flag with a similar effect, but if you use that you'll have to make sure the malware can't hop onto your backup machine via SSH.
Make backup, then disconnect, then use a fresh drive next time, keeping at least 1 extra drive offline on a shelf or in a safe deposit box if you're extra paranoid. Keeping the drive off-site also helps to insure against fire, natural disasters or theft. Rotate drives regularly to keep the off-site backup fresh and to spread the (minimal) wear across several drives. When used in this manner and stored correctly, a set of say 3 to 5 hard drives can be expected to last a _very_ long time.
* This is important, because solid-state drives are not suited to long-term offline storage; they can lose data surprisingly quickly if you leave them sitting on a shelf.
I thought about this exact problem a month ago when I got paranoid (I'm on Windows), and my solution involved setting up a separate cheap Linux node in my home and attaching my backup drive to it.
The server is locally SSH-able, but it only authenticates via a password that I have to type in during each backup; key authentication is disabled. I use Borg backup, so I don't even have to give shell access to this particular account (there are hardened Borg configs available online).
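For the key-based variant, the hardened configs being alluded to typically pin the client's key to a forced `borg serve` command in `authorized_keys` on the backup host; the repository path and key material below are placeholders:

```shell
# ~/.ssh/authorized_keys on the backup account (one line in practice).
# Path and key are placeholders; "restrict" needs OpenSSH 7.2+.
command="borg serve --append-only --restrict-to-path /srv/backups/repo",restrict ssh-ed25519 AAAA...placeholder backup@client
```

With `--append-only`, a compromised client can still issue deletes, but they don't take effect until a compaction is run from the trusted side, giving you a window to notice and roll back.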
If you're more paranoid about security, you can enable 2FA over SSH, or make sure the backup server itself creates a periodic offline backup of the backup repository, without the SSH account having permissions to it, of course.
Honestly though, as long as you're not doing something stupid like mounting an NFS share on your vulnerable device to make backups, you should be mostly fine.
You can do lots of other things; it just depends on what's important and what you want to achieve. In almost any online storage you can make backups close to immutable by having two accounts: one for upload, another for long-term storage, with the second one only ever written to via an isolated, non-interactive, simple process (think AWS Lambda).
Or use access policies which allow file creation and nothing else. Or many other ways.
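As an illustration of the creation-only idea, here is a sketch in the style of an AWS S3 bucket policy (the account ID, user, and bucket name are made up, and this is not an audited production policy), plus a tiny helper to inspect it:

```python
# Hypothetical create-only policy: the uploader may add objects but has
# no Delete* or lifecycle-changing permissions.
append_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "UploadOnly",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/backup-uploader"},
        "Action": ["s3:PutObject"],          # create objects...
        "Resource": "arn:aws:s3:::my-backups/*",
        # ...but no s3:DeleteObject, s3:PutLifecycleConfiguration, etc.
    }],
}

def allowed_actions(policy: dict) -> set:
    """Collect every action the policy explicitly allows."""
    return {action
            for stmt in policy["Statement"] if stmt["Effect"] == "Allow"
            for action in stmt["Action"]}
```

One caveat: `s3:PutObject` alone can still overwrite an existing key, so in practice you'd pair a policy like this with bucket versioning or Object Lock.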
rsync... is not a backup strategy; it's exactly as vulnerable to ransomware as your local files. Unless you pair it with something else, but then that's not exactly 'rsync for backup'.
Of course, if a user or malicious program runs it, it will infect the host operating system. It doesn't matter whether it's a physical machine, a VM, or an EC2 instance.
The common mantra of the cloud is still at play here: it is just other people's computers. Yes, you can attach persistent storage (EBS, or Elastic Block Store) to an instance. And if you are worried about the boot drive: your golden image could be infected, and every instance you start from that image will be bad.
When asking about the cloud, there are two main things to worry about:
1) Can my deployments on the cloud be targeted and infected? Definitely. Just like when managing your own servers, you need to manage your stack. This means keeping on top of security patches, proper infosec, least privilege for all tools and people, etc. You control your security; the cloud does not fix that, and in fact it offers new ways to screw up. AWS and all the other platforms offer a lot of identity and security options, tons of networking options, etc. Like anything in security, the more secure you make it, the more hoops and red tape you put in the way of effective engineers who just want to get shit done. One thing that is easier in the cloud is letting any developer spin up a new instance to test something out. This is useful as it allows them to be productive without waiting for three levels of approval, but that instance could be running some untrusted binary with malware, or could just be really shitty and cost 20K over a weekend.
2) The second thing to worry about: can the cloud platform itself be attacked? Absolutely yes. This was the worry with attacks on virtualization and on the CPUs themselves. I don't recall the specifics of Spectre and Meltdown (and there was another Intel-specific one), but in general, they broke out of isolation. Amazon is very tight-lipped about how they do virtualization[1], but if one of these attacks worked, it would be bad news: just being on the platform could expose you to someone else launching their own EC2 instance. I will bet you dollars to donuts that AWS puts a ton of money and effort into protecting against this.
[1] By this point, AWS probably runs their own internal virtualization stack, based on something open source from a decade ago. This is so core to their business that they would want full control. They want to squeeze every cycle they can out of a CPU and pack instances in as tightly as they can, while also measuring usage so they can charge customers correctly. I've worked on measuring usage to charge customers; it is hard, and I am nowhere near their scale. Additionally, they probably want more hooks for debugging, monitoring, alerting, and, as we have been discussing, security.
> Apart from encrypting the files and leaving ransom notes, the sample has none of the additional functionality that other threat actors tend to use in their Trojans: no C&C communication, no termination of running processes, no anti-analysis tricks, etc.
> Curiously, the ELF binary contains some debug information, including names of functions, global variables and source code files used by the malware developers.
Seems pretty amateurish...