This looks cool, but if the author is here, I wish they would actually explain the security rather than just citing AES-GCM, which doesn't really explain the security design.
How is the key material established, exactly? How is it rotated? How is it protected when stored? The answers to these questions are a lot more relevant to understanding the security of this application than citing which encryption mode is being used.
>I wish they would actually explain the security rather than just citing AES-GCM
Yeah, AFAIK there isn't really much difference between AES-GCM and any other AES + HMAC algorithm (e.g. AES-SHA1) other than that AES-GCM is more performant. In this sort of use case, that's not really relevant, so it's strange to advertise it like it's some sort of feature.
GCM is both an authentication method and an encryption mode. The big benefit is that it's all specified as one primitive, because combining primitives is where people often screw up.
HMAC-SHA1 is an authentication mechanism. There are various ways you can combine it with something like AES-CTR or AES-CBC to make something secure, but also plenty of ways to screw it up. Saying something like AES-SHA1 is meaningless because there's no standard way to combine those two, so you don't know what the person is doing, and you still don't know what the mode is.
I don't believe they are confused about the nomenclature, as everything the parent said was correct. In terms of there not being standard ways to add an HMAC to an encrypted string, there are plenty of standard ways to do that. How do you think people implemented secure channels before Cisco released AES-GCM to speed up their VPNs? Just let attackers modify payloads? SSL/TLS, SSH, IPsec and many other widely used secure channel protocols were over a decade old before anyone standardized an authenticated encryption mode.
But none of these widely used and well-standardized protocols require an authenticated encryption mode even though they all have both authentication and encryption as security properties. Authenticated encryption doesn't give you anything other than performance over the standard encrypt + tag approach (which is still widely used).
And AES-GCM has some drawbacks that encrypt + tag do not (although the performance gains usually trump the drawbacks).
AES+HMAC is harder to screw up than GCM, which is notoriously brittle. In particular: AES-CBC + HMAC doesn't fail catastrophically if randomness fails somehow, and GCM does.
Is there a well-reviewed paranoid alternative to libnacl/libsodium, given that ChaCha20-Poly1305 is just as brittle to nonce reuse as AES-GCM?
I trust libsodium/libnacl's randombytes_buf to give me unique nonces as long as I rekey often enough. However, for those who are still rolling their own crypto because they're paranoid about RNG issues, is there a paranoid alternative I can point to in order to tell them they still shouldn't be rolling their own?
I don't think it is _just_ as brittle. There are two mitigating factors:
1. XChaCha20-Poly1305 (I think you _should_ be using the extended-nonce version if you're going for a random nonce) has a 192-bit nonce, twice as many bits as AES-GCM's 96. That alone makes birthday collisions many orders of magnitude less likely.
2. I might be wrong here, but I'm not sure the Poly1305 MAC itself is as brittle as the GHASH MAC in GCM.
That being said, if you're really paranoid about your RNG, even the greater safety margins won't help. If you use the Sony PS3 RNG which always returns 4, it doesn't matter how many bits of random nonce you've got.
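To put rough numbers on the nonce-size point: the birthday bound says that after n random nonces of b bits, the collision probability is roughly n^2 / 2^(b+1). A quick back-of-the-envelope check (illustrative only):

```python
def collision_probability(messages: int, nonce_bits: int) -> float:
    # Birthday-bound approximation: p ~= n^2 / 2^(b+1)
    return messages ** 2 / 2 ** (nonce_bits + 1)

# After 2**32 messages under a single key:
p_gcm = collision_probability(2 ** 32, 96)      # AES-GCM's 96-bit nonce
p_xchacha = collision_probability(2 ** 32, 192)  # XChaCha20's 192-bit nonce
```

With 2**32 messages, the 96-bit nonce already gives about a 2**-33 chance of a repeat, while the 192-bit nonce is around 2**-129, effectively zero; doubling the nonce bits squares the safety margin rather than merely doubling it.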
You do have nonce-misuse-resistant constructions such as AES-GCM-SIV. I'm not a cryptographer, so I can't comment on how well reviewed they are, but they only claim to protect you against accidental nonce reuse, not intentionally fixed nonces, so if you have the legendary PS3 RNG, I guess it's no dice (sorry for the pun).
Point taken about different partitions of the IV into counter and nonce.
As far as GCM's GHASH vs Poly1305 goes, they use the same math. Just one is arithmetic modulo the prime 2**130 - 5 and the other is in a 128-bit binary Galois field reduced by a fixed polynomial (the lexicographically smallest 128-bit irreducible binary polynomial, IIRC).
Poly1305 does drop two bits of the 130-bit result field element when outputting the tag, so I guess that does provide a bit more work for an attacker, but they're both simple polynomial evaluations in finite fields of similar size. Breaking them is just finite field algebra if the r and s values are reused.
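To make the "finite field algebra" concrete, here is an idealized toy (the real attack has to account for the final mod-2**128 truncation, which is left out here so the arithmetic stays exact; all values are made up for illustration). With (r, s) reused across two single-block messages, subtracting the tags cancels s, and dividing by the message difference recovers r:

```python
p = (1 << 130) - 5  # the Poly1305 prime

def toy_tag(m: int, r: int, s: int) -> int:
    # Idealized one-block Poly1305-style tag, without the final
    # mod-2**128 truncation, so the algebra below is exact.
    return (m * r + s) % p

r, s = 0x1234567890ABCDEF, 0xFEDCBA0987654321
t1 = toy_tag(111, r, s)
t2 = toy_tag(222, r, s)

# t1 - t2 == r * (111 - 222) mod p, so divide in the field:
recovered_r = ((t1 - t2) * pow(111 - 222, -1, p)) % p
recovered_s = (t1 - 111 * recovered_r) % p
```

Once r and s fall out like this, the attacker can forge tags for arbitrary messages, which is why one-time MAC keys (and hence nonces) must never repeat.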
Poly1305 is a MAC; the tag is calculated roughly like this (key clamping omitted; the message is taken as 16-byte chunks read as little-endian integers):

    def tag(r, s, message):
        v = 0  # no associated data case
        for chunk in message:
            v += chunk + 2**128
            v = (v * r) % (2**130 - 5)
        return (v + s) % 2**128
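For anyone who wants to poke at this, here is that sketch filled out into a self-contained Poly1305, adding the key clamping and little-endian chunk handling the pseudocode glosses over. It can be checked against the test vector in RFC 8439:

```python
def poly1305(key: bytes, msg: bytes) -> bytes:
    """One-time Poly1305 MAC per RFC 8439 (key is 32 bytes: r || s)."""
    p = (1 << 130) - 5
    r = int.from_bytes(key[:16], "little")
    r &= 0x0FFFFFFC0FFFFFFC0FFFFFFC0FFFFFFF  # "clamp" r as the RFC requires
    s = int.from_bytes(key[16:32], "little")
    acc = 0
    for i in range(0, len(msg), 16):
        # Each chunk gets a 0x01 stop byte appended, then is read
        # little-endian; for a full 16-byte chunk that is the +2**128 term.
        n = int.from_bytes(msg[i:i + 16] + b"\x01", "little")
        acc = ((acc + n) * r) % p
    # Add s and keep the low 128 bits (carries are deliberately dropped).
    return ((acc + s) & ((1 << 128) - 1)).to_bytes(16, "little")

# RFC 8439 section 2.5.2 test vector:
key = bytes.fromhex(
    "85d6be7857556d337f4452fe42d506a8"
    "0103808afb0db2fd4abff6af4149f51b")
tag = poly1305(key, b"Cryptographic Forum Research Group")
# tag.hex() == "a8061dc1305136c6c22b8baf0c0127a9"
```

This is a toy for studying the math, not something to deploy: it isn't constant-time.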
The GCM GHASH tag is (the final length block is omitted for brevity):

    # multiply() is multiplication in GF(2**128)
    # add() is addition in GF(2**128) (a.k.a. 128-bit XOR)
    def tag(key, iv, message):
        r = AES_encrypt(key, 0)   # the hash key
        s = AES_encrypt(key, iv)  # the tag mask
        v = 0  # no associated data case
        for chunk in message:
            v = add(v, chunk)
            v = multiply(v, r)
        return add(v, s)
(I think the extra 2**128 addition inside the Poly1305 loop is to ensure the function doesn't have a fixed point at zero, but I'm not sure.)
Ahh... I've misunderstood that extra bit; I misread the padding code, and it's not always at the 2**128 position (only for full 16-byte chunks). A stop bit is appended to disambiguate zero padding from messages that end in zeroes, similar to most hash function message padding. How silly of me.
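The multiply() in the GHASH sketch above is multiplication in GF(2**128) with GCM's reflected bit convention, which is part of why efficient software implementations want precomputed tables. A slow but straightforward bit-at-a-time version, following Algorithm 1 of NIST SP 800-38D (blocks parsed as big-endian integers; illustrative, not constant-time):

```python
# GCM's reduction constant: the polynomial x^128 + x^7 + x^2 + x + 1,
# written in GCM's bit ordering, where bit 0 of a block is the MSB.
R = 0xE1 << 120

def gf128_mul(x: int, y: int) -> int:
    """Multiply two 128-bit blocks in GCM's Galois field."""
    z, v = 0, y
    for i in range(128):
        if (x >> (127 - i)) & 1:  # walk x's bits MSB-first (GCM's bit 0 first)
            z ^= v
        v = (v >> 1) ^ R if v & 1 else v >> 1  # shift and reduce
    return z

# In GCM's bit ordering, the field identity "1" is the block 0x80...00:
ONE = 1 << 127
```

Multiplying by ONE returns the other operand unchanged, which makes a cheap sanity check, and the operation is commutative as a field multiply must be.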
There are several issues with GCM; the most obvious is the susceptibility to repeated nonces (notice that I compared GCM to CBC+HMAC, the most common generic composition), but there's also GCM's weakness with shortened tags, the fact that GCM relies on a multiplication operation that wants tables for efficient software implementations, and the short nonce, which makes it dicey to use random nonces and also limits how much you can encrypt.
The original chapoly constructions aren't as brittle, but share the biggest problem (the nonce uniqueness requirement). But later chapoly constructions allow for wide nonces, which makes it reasonable to use fully random nonces.
I don't care if people use GCM or not; I'd comfortably use it if it was the only AEAD available in my standard library. But I agree with the root comment on this thread that suggests it's a weird thing to brag about.
That's just an API to generate a key and then wrap/unwrap. So OK, you generate the key on the server, and then have to get that over to someone who is using a web browser. How does that work?

Then you have to store the key somewhere in the browser and on the server. Is it just in memory, in which case you have to get the key to the browsing user with each connection or whenever the browser is reloaded? Or is the key stored as a plaintext string on disk somewhere? Is it stored as a cookie in the browser? In gnome-keyring? And how is the key rotated?

In other words, key management is the tricky part of all these protocols. Is the author rolling their own authentication and key establishment protocol or using an existing one, and if the latter, which one and how are they using it? That's how you figure out the security of this cryptographic system.

And remember that any secrets in the DOM loaded from origin X are exposed to code served from that origin, so what isolation guarantees are really being made when you want to make your terminal opaque to the webserver?
From a quick skim it looks like the key is base64 encoded into the URL in terminal_id param, so presumably you just share the URL and the collaborator stays on the URL with the key? If the key is ephemeral/regenerated for each session it seems to eliminate most of your concerns.
Yes, this is starting to fill out the details. So is the plan:
1. ssh into your server
2. run the program to get a URL
3. close the ssh connection
4. Open the URL in a browser?
In which case,
1. hmm, what is the benefit
2. how do you prevent the hosting webserver from seeing the URL parameter? This assumes you want the hosting system to be unable to interfere in the session; if you are OK with such interference, you can use a much simpler key distribution scheme than this.
Or is the idea:
1. Setup your own webserver
2. authenticated to the webserver with some other protocol
3. the webserver grabs the url (it's hosted on the same server as the terminal server?) and then redirects to a page with the URL that you can use for the session
In which case, err, why encourage people to use this third-party server if that server can read the URL? Also, there are again more secure ways of handling this that don't expose your terminal key as a URL parameter.
Remember that URL parameters get leaked in Referer headers, and it's generally not recommended to store secrets in them if you can avoid it, which I believe you can in this use case, again assuming I have the workflows right.
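One small mitigation along these lines is to put the secret in the URL fragment rather than a query parameter: the fragment stays in the browser and, as far as I know, is never included in the HTTP request (though it is still readable by any script on the page). An illustrative sketch with Python's URL parser, using a made-up session URL:

```python
from urllib.parse import urlsplit

# Hypothetical session URL: the query parameter travels to the server,
# the fragment stays client-side.
url = "https://example.com/session?terminal_id=abc123#key=SECRETKEY"
parts = urlsplit(url)

sent_to_server = parts.path + "?" + parts.query  # part of the HTTP request
kept_in_browser = parts.fragment                 # not sent in the request
```

Of course this only changes where the secret leaks, not the fundamental trust problem: any JavaScript served by the origin can still read `location.hash`.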
What this needs is a simple diagram showing the terminal server, the webserver, and the user's browser, defining the trust relationships between these, and explaining the flow of information among these components during key establishment and rotation.
That hash is still accessible to any JavaScript loaded from the server, right? (Though IIRC the fragment, unlike query parameters, isn't actually sent in Referer headers; it's been a while since I looked at it.)
The server doesn't need to be compromised: from the user's point of view it is an independent security zone and loads whatever JS it wants. Perhaps you could install a browser extension that would compute a hash of the JS and only allow the load to proceed if it was verified, but apart from that your browser is going to execute whatever JS the server serves.
And that would be fine if the threat model was such that the server was trusted with the keys to the channel (because, for example, you control the server), but in that case what is the point of the end-to-end encryption, which is supposed to protect you from the server? If the server was trusted with the channel key, you can roll a much simpler system without end to end encryption.
Again, this is why I was asking for an actual description of the security design. If you are trusting the server to handle the key, then you can really simplify this system. If you are not trusting the server to handle the key (which would be the case if it was not under your control), then this design doesn't appear adequate.
I really wish these types of projects disclosed these types of design assumptions and operational details so that people could review the security efficiently instead of peering through code and trying to guess what the developer's intentions and assumptions are. It would also help the developers. If they are forced to write down: we don't trust the server to access the key but we must trust the server to not try to access the key, then such an exercise would hopefully trigger a moment of clarity that would lead to the creation of more secure systems. For example, they may want to include a browser extension under the control of the user as a part of the overall system, and have the browser extension touch the key in an origin not controlled by the server or have a key establishment protocol run between the browser extension and the ssh server with the webserver being just a transport layer.
It could be made more resistant to birthday attacks by using the session message count as the IV, but I guess it wouldn't matter unless someone kept one of these terminals open for a really long time.
I think it's important to clarify the statement above, for anyone who is not familiar with the issue and might be misusing AES-GCM.
AES-GCM has a relatively short IV (= nonce): only 96 bits (12 bytes). To make things worse, if an IV ever gets repeated, GCM fails catastrophically[1]. The design document[2] explicitly points out that FIPS 140-2 compliant GCM hardware[3] must take every possible precaution to ensure an IV never gets repeated, even if the device suffers a critical power loss.
The safe way to use AES-GCM in software (for instance, the way it is used in TLS) is to just use a running counter and replace the key before the counter overflows and starts repeating itself. Random IVs are riskier: the more messages you encrypt under the same key, the higher the chance that some message repeats an IV. The statistical phenomenon behind this is called the "Birthday Problem"[4], so exploiting this kind of weakness is often referred to as a "Birthday Attack"[5].
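A minimal sketch of the counter approach (the fixed-prefix-plus-counter layout is similar to what TLS 1.2 GCM ciphersuites do per RFC 5288; the class name and limits here are illustrative, not a vetted implementation):

```python
import os

class GcmNonceSequence:
    """96-bit GCM nonces: 32-bit random prefix + 64-bit big-endian counter.

    The counter guarantees uniqueness under one key; the key must be
    rotated before the counter wraps.
    """
    MAX = 2 ** 64 - 1

    def __init__(self) -> None:
        self.prefix = os.urandom(4)
        self.counter = 0

    def next_nonce(self) -> bytes:
        if self.counter >= self.MAX:
            raise RuntimeError("nonce counter exhausted: rotate the key")
        nonce = self.prefix + self.counter.to_bytes(8, "big")
        self.counter += 1
        return nonce
```

In practice you would also want to rekey well before hitting GCM's data-volume limits, not just before the counter wraps.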
tl;dr: If you want an easy life, just never use any form of AES at all. Most AES modes can be completely safe if used and implemented with care, but it is generally quite hard to do that. AES has too many knobs, and there is too much bad advice on places like Stack Overflow and various blogs.
Generally, for encrypting multiple messages with the same key, the solid advice is to use NaCl secretbox (XSalsa20-Poly1305) or libsodium's aead_xchacha20poly1305_ietf[6].
But you probably need more than that when encrypting potentially long-lived bidirectional streams of data.
---
[1]: It fails even worse than other authenticated counter mode ciphers due to the design of the GHASH function:
[3] Keep in mind that like most 2000s vintage cipher specs, GCM was designed as a hardware cipher. Software was more or less an afterthought, if anything at all.
What does this get you over simply attaching to an existing tmux session? You can already SSH into a machine and join any existing tmux session, or better yet, create a new session from an existing session, and get independent viewports and parallel input, or shared, depending on which window you're looking at.
You can quickly provide a shared session to an external third party, so they can access computers that are not publicly accessible, and that they normally would not have access to, without having to do any credential management.
Great link, I have been searching for something like that for a while. Sharing terminals over Google Meet does not work very well. The video compression is particularly lossy with red on black. Pretty annoying with syntax highlighting or colored shell output.
At a quick glance this does not seem to be end-to-end encrypted, and tmux is running on their server? That's not something I could ever use for work, as we are in a regulated domain. But you can run your own server; I need to check that out.
We've been toying with duckly (née gitduck) - and it provides shared web browsing, shared editing, and shared terminal. There might be some rough edges with the in-browser "window management" - but overall it works pretty well IMHO:
You can self-host tmate, but imho tmate doesn't add that much value over "grant the user ssh access and use plain shared tmux/screen".
In that case, you might (for a workstation/laptop) have co-workers on a VPN via WireGuard/Tailscale, bind sshd to the VPN interface, and allow access via ssh keys/certificates.
>the video compression is particularly lossy with red on black.
The irony is that 20 years ago video conferencing was unthinkable, but I used VNC over a dial-up modem. Lossless and much better for terminal sharing than what is generally available today.
Modern computing feels like the dark Middle Ages after the downfall of the Roman empire in some aspects...
I don't know about this project, but I have used tmate.io (self-hosted ssh server) for almost seven years for remote pairing, using vim inside of tmate.
Compared to vscode: when sharing the terminal, you don't need to worry about following the other person's cursor or them following yours, as there is only one cursor.
You don't need to worry about telling them what files you are opening, as there is only one editor.
You also don't need to worry about registering for an account, as (at least with tmate) it's simply an ssh connection for the remote person.
It is the only way I've found that allows for effective remote pairing. There are a number of downsides, but it is a wonderful tool.