This is a very common complaint, and it's not valid. You cannot have privacy without some form of trust or identity. Network cryptography doesn't work that way. We have to assume the adversary controls the network, and so can manipulate any cryptographic handshake they see.
The "identity" in TLS exists principally in order to prevent network adversaries from substituting their own keys for those of your intended peer.
That's 100% accurate but the parent post is clearly about EV certs attempting to provide added trust/identity beyond what's needed for the privacy provided by TLS, and it's skeptical that that's happening.
A more SSH-like approach would be for browsers to identify PSL-level domains with simple site names, which could be done by hand for the Alexa top 500 if nothing else; so "https://docs.google.com/whatever" maps to a green "Google docs> whatever" in your URL bar; click to reveal the full URL. The idea is that a local store for this mapping is distributed across all of the browsers out there, so by default you get a yellow arrow on `-> https://idontknowthisdomain.example/` and, as plain HTTP falls further out of favor, perhaps a red one on `-> http://thisdomainneither.example/`. But you can right-click that yellow arrow and it asks "Do you trust the following domain?" with a "Name: ____" box to submit your own name. After that, this domain has been whitelisted for you under the given name, just as in HTTPS. Possibly the page itself could even suggest the name via a meta tag. The important point is that the user has to say they trust this domain to have that common name before they see it pop up in green.
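A minimal sketch of what that per-user short-name store might look like (everything here is invented for illustration: the seed list, the class, and the color states; a real browser would also need proper PSL handling):

```python
# Hypothetical per-user short-name store: a PSL-level domain maps to a
# user-approved (or seed-list) friendly name. Named domains render green;
# unknown HTTPS domains yellow; plain HTTP red.

SEED_NAMES = {"google.com": "Google", "paypal.com": "PayPal"}  # shipped defaults

class ShortNameStore:
    def __init__(self, seed=SEED_NAMES):
        self.names = dict(seed)  # domain -> approved friendly name

    def indicator(self, scheme, psl_domain):
        """Return (color, label) for the URL bar."""
        if psl_domain in self.names:
            return ("green", self.names[psl_domain])
        if scheme == "https":
            return ("yellow", f"-> https://{psl_domain}/")
        return ("red", f"-> http://{psl_domain}/")

    def approve(self, psl_domain, name):
        # Only an explicit user action ("Do you trust the following
        # domain?") turns a domain green with the given name.
        self.names[psl_domain] = name
```

The key property is that `approve` is the only way into the green state, so the trust decision always passes through the user (or the shipped seed list).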
From there, the browsers can do what they already do best: phone home. If you get millions of different users suggesting the same official name for a web site, it's more reasonable to automatically add them so that more than just the instantaneous Alexa Top 500 is covered.
You still rely on the core certs to provide privacy, but people who visit a phishing site now see by default either a yellow or red arrow where they are accustomed to seeing a green one that they either set up themselves or someone else set up for them.
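The phone-home aggregation could be as simple as a counting threshold; a toy sketch (the threshold value and function are invented, and a real deployment would need Sybil protection on top):

```python
from collections import Counter

def consensus_name(suggestions, threshold=1_000_000):
    """Auto-adopt a short name for a domain once enough distinct users
    have suggested the same one. `suggestions` is an iterable of the
    names users submitted for a single domain."""
    counts = Counter(suggestions)
    if not counts:
        return None
    name, n = counts.most_common(1)[0]
    return name if n >= threshold else None
```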
> From there, the browsers can do what they already do best: phone home. If you get millions of different users suggesting the same official name for a web site, it's more reasonable to automatically add them so that more than just the instantaneous Alexa Top 500 is covered.
I've thought about it a bit, and I must say I'm not really sure what to make of it.
I took some of my CS courses from a man named Kevin Walsh, who I think was a Ph.D. candidate at the time; I believe he now teaches at Holy Cross. If my memory serves, his group solved (at least for the time being) the Sybil attack problem on the distributed filesharing networks that were active back then, but it never really grew in adoption because the stakes were so low.
Their basic idea (again, if memory serves) was to let people self-classify into clusters: your esteem for someone else's ranking of websites would be based on your own ranking of those websites and how well the two match. In that way you get a fully distributed web-of-trust system which nevertheless can't be Sybil-poisoned: to inject their own phishing sites, an attacker first needs to contribute trustworthy reviews for a bunch of other unclassified sites, which violates the Sybil assumption that a reviewer identity is disposable or forgeable, and demands that the attacker contribute more to the network than they take away.
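The core of that idea can be sketched in a few lines (this is my reconstruction of the mechanism, not Walsh's actual algorithm; the names and the 0.8 cutoff are invented):

```python
def agreement(mine, theirs):
    """Fraction of commonly-rated sites on which two reviewers agree.
    Each argument is a dict of site -> rating."""
    common = mine.keys() & theirs.keys()
    if not common:
        return 0.0
    return sum(mine[s] == theirs[s] for s in common) / len(common)

def trusted_verdict(my_ratings, votes, min_agreement=0.8):
    """Weighted vote on whether a site is legit. `votes` is a list of
    (reviewer_ratings, reviewer_says_legit) pairs. A fresh Sybil
    identity has no overlapping rating history, so agreement() is 0
    and its vote carries no weight."""
    score = 0.0
    for their_ratings, says_legit in votes:
        w = agreement(my_ratings, their_ratings)
        if w >= min_agreement:
            score += w if says_legit else -w
    return score > 0
```

The Sybil resistance falls out of `agreement`: an attacker has to first build up a history of ratings that match yours before their vote counts for anything.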
I'm not sure whether this could be quasi-centralized for browser reviews of proposed short names for domains, since it assumes a big decentralized peer-to-peer system with lots of people reviewing lots of things, and lets them sort themselves into their own groups; it gets dodgy if you're retroactively like "okay, so here's the main group now." There is a hint of what recentralizing it would look like, saying "this is the consensus for that domain name's short name," in how blockchain networks operate: there you have a similar story of "if you want to break our consensus that your site is a phishing site, you first need to join our network and outperform the rest of it." But yeah, I think sorting out the data requires some complicated thinking. I'm more dreaming that sorting out the interface, and getting approximate correctness for 90% of traffic so that people get accustomed to that little green box being there, is not especially hard.
On the flip side, browsers already face this problem: they have to detect phishing somehow, and there are Sybil attacks on that too, based on marking legitimate sites as phishing to decrease people's confidence in the phishing indicators. So even though there is a technical problem here, I'm not sure I see it as novel to this particular approach.
> the parent post is clearly about EV certs attempting to provide added trust/identity beyond what's needed for the privacy provided by TLS, and it's skeptical that that's happening.
The (great-grand)parent is probably not talking about EV certs, because then they'd be stupid. They say:
> We could've had a mostly encrypted Internet a long time ago if encryption and privacy were not hitched to a commercial identity certificate with crappy maintenance tools.
...which wouldn't make sense if it didn't include the basic DV certificates, because then those DV certs would be exactly the sort of quick&easy encryption-without-identity they're looking for.
They're complaining about encryption requiring a process that used to cost enough to discourage many people, even for DV certs. The situation has only really changed in the last two years or so with Let's Encrypt, and you can quite clearly see the impact of free certificates and a slightly better process on the rate of encryption.
I'm talking about the overall post (i.e. JoshTriplett's submission) there, not the parentmost comment by payne92. I read payne92's comment as dreaming about an alternate past where something like Let's Encrypt was available from the start because that was the original model of trust in TLS, not authorities saying "we will certify that you are who you say you are" but merely authorities saying "we will certify that someone who proved their control of that domain name recently, said that this was a valid public key for it."
Yeah, I think we're basically in agreement then. Although I'd note that Let's Encrypt's model is no different than the paid DV certificates that came before; they're just giving those away for free. You were always able to get such a certificate without proof of identity, for something around $100/y.
Key continuity gets you pretty far, though. Knowing I'm talking to the same paypal as I was yesterday is about as comforting as having somebody else assert "this is paypal", if not more so.
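Key continuity is essentially SSH's known_hosts model: pin the key on first sight, and compare it on every later connection. A toy sketch (the class and return values are invented; real clients typically pin a certificate or SPKI hash):

```python
import hashlib

class KeyContinuity:
    """Trust-on-first-use pinning: remember each host's key fingerprint
    and flag any later change. Note that "first-contact" verifies
    nothing -- which is exactly the gap the replies below point out."""

    def __init__(self):
        self.pins = {}  # hostname -> sha256 fingerprint of its public key

    def check(self, hostname, public_key_der: bytes):
        fp = hashlib.sha256(public_key_der).hexdigest()
        if hostname not in self.pins:
            self.pins[hostname] = fp  # first contact: nothing to compare against
            return "first-contact"
        return "same-key" if self.pins[hostname] == fp else "KEY-CHANGED"
```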
But that implies that some time in the past, you had a "first contact" with the real Paypal. You then need something somewhere that helps validate that initial trust.
And let's say a computer already made first contact with paypal.com and bankofamerica.com: what happens when the user needs to format the hard drive or gets a brand new smartphone? Do they export their previous successful handshakes to a flash card and import them into their new device to maintain key continuity? Or is it easier to use an entity (CAs or something similar) to help establish identity+trust?
You've already established an implicit trust of DNS at that point just to make that first contact with paypal.com or bankofamerica.com. The various DNS-pinned certificate proposals aren't necessarily the right answer either, but their proponents are correct that your "first contact" isn't with the server at all, but with DNS name resolution. If you trust DV certs at all, it's largely because you trust DNS, and the DNS-pinning advocates are at least correct that if we're all relying on DNS as our identity+trust database, then the DNS operators already are our most important CAs in identity+trust discussions.
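For reference, the DNS-pinning proposals reduce certificate validation to roughly this check (a deliberate oversimplification: real TLSA records per RFC 6698 carry usage/selector/matching-type fields, and the DNS answer must itself be DNSSEC-protected for the pin to mean anything):

```python
import hashlib

def cert_matches_dns_pin(cert_der: bytes, tlsa_sha256_hex: str) -> bool:
    """Compare the certificate the server presented against the hash
    the domain owner published in DNS. If they match, the same party
    that controls the DNS zone vouched for this key."""
    return hashlib.sha256(cert_der).hexdigest() == tlsa_sha256_hex
```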
It would be easy enough to ship a few "trusted" and "pinned" domains (or even certs) in the browser... those pinned sites could then be used to do the cert-lookup validation, which would then run against the SOA for the domain...
There are ways to do third-party validation of a DNS entry... though only to the extent of confirming it's the correct entry, which is really all automated DV gives you anyway. And this is really only needed on first contact.
Right, but it seemed like tedunangst's continuity suggestion was an answer to tptacek's "You cannot have privacy without some form of trust or identity."
Since continuity doesn't work for first contact, it still doesn't solve the fundamental trust issue. Given the thread's context about the value (or worthlessness) of CAs: if you still have to use CAs for even 1% of key handshakes, "key continuity" seems like a tangent to the topic.
I think we're veering into pedantic silliness, but "the person I talked to on Monday" is an identity. (Created on the fly when I speak with them on Monday.) This isn't always a useful identity, but perhaps a relevant point in a discussion of identity vs trust.
True. I first encountered this idea many years ago, with the Kong program [1].
"Unlike most digital signature programs, this one has no concept of "true names". It makes no attempt to determine that the Bob you are talking to is the "real" Bob. It merely ensures that it is the same Bob."
> This is a very common complaint, and it's not valid. You cannot have privacy without some form of trust or identity.
There's no reason that "Only $KEY can read this" and "$KEY belongs to $READ_WORLD_ENTITY" have to be tightly coupled.
Just because you need [the network operator to think you have] some form of the latter, does not in any way mean that the former is "not valid" unless it has one particular form of it baked in.
Tight coupling is evil (ref. dependency injection), and complaining about it is not in any way not valid.
Why do we need to assume the adversary controls the network? The real attacks we've seen on networks have used methods like cable splicing, passive wifi listening, etc.
Active attacks are an alternative, but they require a greater investment by the attacker, and in some cases aren't practically feasible without being detected by the legitimate network operator.
Any given state actor will have full control over some networks. It's fair to say that any given network is fully controlled by at least one state actor, possibly more. It also stands to reason that each state actor has many more networks they can passively listen to. [EDIT: clarity]
Ultimately, if your threat model is to protect against the actions of a state actor who likely does not have active control over your network, but might be able to passively listen, then ubiquitous encryption helps a lot with defense in depth.
A more mundane case is free wifi at an airport. Someone can set up a hotspot with the same SSID and act as a MitM, but it's not undetectable. Here, encrypting application traffic is just one solution, and not necessarily the best one, but you shouldn't be relying on only one layer of protection.
I think that's only true if your trust system recognizes the attacking key as valid for the destination. DV seems to prevent that, absent typo squatting.
I'm not arguing that we have the optimal trust system now. I'm saying that the argument that we should have started with "privacy" for everyone and made trust optional is invalid. A different key infrastructure may very well have been better than the X.509 PKI! But not having any identity in the system is an unrealistic goal.
Whenever dealing with financial websites, I am always extremely suspicious of one that does not have an EV certificate, because of the added level of scrutiny applied to such certs. Between mis-issued DV certs and typo-squatted domains, it would be relatively easy to enter banking credentials into a well-executed phishing site. It is going to be a lot harder to get an EV cert issued to paypa1.com with “PayPal, Inc. [US]” as the organization.
The only websites that I regularly have anxiety about entering my credentials on are Google properties. Since email can be used to reset pretty much anything, Google credentials are among the most important to keep out of the wrong hands. Unfortunately, they don’t seem to care about EV certs.
Additionally, app-based OAuth screens with web login prompts regularly give me pause. I don’t like not being able to see the URL and certificate information.
It's funny, but this is where having a password manager really saves me... if I don't have a login saved for a site I use often, I'll really scrutinize it... sometimes login URLs change, but not very often.