Austria, Switzerland and Germany have a very long rock climbing history (unsurprisingly), and their respective climbing associations are obsessed about safety.
But YubiKeys are built on a hardened hardware security module instead of a general purpose phone operating system with full network connectivity and a huge attack surface.
Having a UI does not increase security in a meaningful way. The attacker is just going to wait until the victim connects to an interesting target server and then hijack that connection. The ControlMaster feature makes that trivially easy, but it's not hard to do real injection [1].
If the workstation is compromised, it's over.
At that point, all you can do is to prevent an attacker from copying the key or using it without user interaction. A YubiKey does both - you can optionally set it to a mode where you have to approve each signature.
With a bank transaction, the whole transaction is part of the approval process and can be verified out-of-band. With an SSH login, this is not possible, since you're still going to trust the workstation as soon as the session is established.
I'm not saying this project is useless - IF your phone is actually more secure than your workstation - which may or may not be the case - AND you've previously been storing your keys on your workstation, then it's definitely a step up. But really, at that point, just buy a YubiKey (and properly secure your workstation!).
Otherwise, you now have TWO single points of failure instead of one. If either your phone or your laptop is compromised, it's over.
If you want login approvals that show the server name, do it as a second factor and use something like Duo Security with push approvals. This actually increases your security - instead of adding a second single point of failure, an attacker would now have to compromise both of your devices.
The iPhone has a secure enclave that does elliptic curve key generation and signing[0]. I'd be surprised if they do not implement that soon; it's not terribly difficult[1].
For anyone interested in Kubernetes: Red Hat's OpenShift is worth taking a look at.
It's upstream Kubernetes + a PaaS framework built on top of it.
It takes care of role-based access control, has a secured Docker registry (prevents applications from pulling each other's source code), Jenkins integration and can automatically build, push and deploy your applications.
Our team started using it and it's great. The documentation is top-notch (it's probably the best docs I've ever seen in an open source project).
I've seen many teams re-invent the wheel over and over again, when OpenShift already does most of what they need.
I just tried to get started with minishift and it doesn't seem to work.
`minishift` seems to be similar to `minikube`. On my mac, running `minikube start` successfully starts a minikube instance in Virtualbox.
Unfortunately `minishift start` seems to sit there and fail after 120 seconds (with xhyve and vbox) because "the docker-machine didn't report an IP address", and it seems that the docker-machine is not even created.
This is a shame, I'd very much like to try out openshift. If anyone else has the same issue here please let me know!
Edit: Someone replied but deleted their comment. I should have run `oc cluster up --create-machine`!
I've been using it on my current project and I love it. This is probably my favorite RH product. For anyone who's interested here's a quick way to get a microservices architecture up and running in OpenShift: https://jhipster.github.io/openshift/.
As someone looking to move their production environment to Docker with Kubernetes, I'm wondering how OpenShift compares to Rancher? I've been looking at it a bit and it seems to provide everything with a nice UI as well.
I once asked myself the same question. Apparently, OpenShift has stronger security, focused on multitenant environments, in addition to an out-of-the-box PaaS. But I would really like to see a good comparison of the two.
Well, their hosted version is "US East (Ohio)" only, and spinning up your own cluster is sometimes overkill.
You can start a website with just a managed database and 2 or 3 instances where you install your applications. Yes, it's cool to have click-to-deploy, but everything comes with a cost.
Yes, I'm aware; sorry, I was in transit so I wrote a short reply. What I was referring to was open "cloud" platforms and the presence of Red Hat and their support/contributions.
I'm running btrfs in production with a very heavy workload with millions of files and all sorts of different access patterns. Regular deduplication runs, too. We're probably one of the largest btrfs users.
Had a LOT of unplanned downtime due to various issues with older kernel versions, but 4.10+ has been solid so far. You definitely need operational tooling (monitoring, maintenance like balance) and a good understanding of the internals (what happens when you run out of metadata space etc.).
Happy to answer questions!
On a related note: Never ever use the ext4 to btrfs conversion tool! It's horribly broken and causes issues weeks later.
> On a related note: Never ever use the ext4 to btrfs conversion tool! It's horribly broken and causes issues weeks later.
Care to give some details about this and other failures? Part of what makes a FS reputation is not just people telling "it works" but also stories about how the thing crashed and how they recovered from it. IOW, it always works, until it doesn't, and then it still "works" because I can dig myself out of the hole this or that way.
Inspired by the way you can convert a Debian VM to Arch Linux on Digital Ocean, I happen to have been toying with it recently to auto-convert a blank Debian 8.x VM from ext4 to btrfs. Looks like things are fine, but only because the kernel is <4.x and the VM has very little data on it since it's blank.
WARNING: This is a toy. Do not use for production.
It resulted in random, hard to reproduce ENOSPC errors down the line without either data or metadata being anywhere close to full. Neither us nor the btrfs developers that took a look at it were able to figure out what exactly went wrong, but it was something about new blocks not fitting anywhere despite lots of free space.
Someone on #btrfs said that the filesystem layout is a lot different when using the conversion tool and all of the regression testing happens with regular filesystems, not converted ones.
We reinstalled all machines from scratch. Never happened again.
This one I'm fine with since WebAssembly is a worthy replacement, but I'm still annoyed at Google discontinuing Chrome Apps.
Some examples of specialized apps I use all the time that would require a native app otherwise:
- Signal Desktop
- TeamViewer
- Postman
- SSH client
- Cleanflight drone configuration tool
It was one of the best things that happened to Linux desktops in a long time and removing it hurts users and makes them less secure.
Now everyone is moving to Electron, and instead of one Chrome instance, I'm now running five, each using more than one GB of RAM. It's much less secure, too, since each has its own auto-updater or repository, and instead of being sandboxed by Chrome's sandbox, they're all running with full permissions.
It also means I can no longer use Signal Desktop on my work device, since installing native apps is forbidden for good reasons, while Chrome Apps are okay.
It also hurts Chrome OS users, since Chrome Apps are being abandoned in favor of Electron, and it makes creating Chrome Apps less attractive for developers, since the market is much smaller.
Since Chrome Apps continue to be available on Chrome OS, I'm considering separating that functionality into a stand-alone runtime or making a custom build for Linux. Anyone want to help with that?
Exactly. I doubt most of the Chrome Apps that exist today were made for Chrome OS. They were made because it was an easy and straightforward way of making a webapp on the desktop. Now, as OP mentions, everyone is moving to Electron and NWJS, neither of which works directly on Chrome OS.
The only upside I can see is that CrOS is soon going to support running Android Apps which may save it, but even then... Maybe they'll figure out a way to run Electron/NWJS apps on ChromeOS?
OP isn't calling out how it hurts CrOS, they're describing how it's the place that Chrome Apps are still available, and thus could have that functionality copied out of CrOS and rejiggered to (continue to) work in Linux.
It may take a bit longer before it is disabled everywhere, but I feel like the writing is pretty much on the wall for NaCl at this point; if you develop or depend on apps that leverage it (in any context), this should probably be a warning sign to start thinking about how to sever that dependency (even if it's not urgent).
At a minimum, Postman does not belong on that list. It has been the case for quite some time now that Postman's desktop app is a) better than the Chrome App, and b) recommended by the developer.
Postman aside, I don't understand why anyone would want any of the apps you listed to be installed as a Chrome App. Why would someone want an SSH or TeamViewer client to be tied to their browser? You're installing an application either way; Chrome internalizing the process as an attempt to offer convenience is a strange idea. Chrome is a browser, not an operating system - let the OS do what it does best. Chrome Apps were an unnecessary and proprietary mess - a failed experiment I am happy to see dismissed.
Ad blockers primarily look at domains, so blocking will continue to be possible at the request level. They aren't interpreting or parsing JS to begin with.
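Request-level blocking of the kind described here is essentially hostname suffix matching, independent of whether the page runs JS or wasm. A minimal sketch in Python (the domain list is made up for illustration):

```python
# Request-level (domain) blocking, as done by network-layer blockers.
# The blocklist entries here are illustrative, not a real filter list.
from urllib.parse import urlparse

BLOCKED = {"ads.example.com", "tracker.example.net"}

def is_blocked(url: str) -> bool:
    """Block a request if its host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED)

print(is_blocked("https://ads.example.com/banner.js"))      # True
print(is_blocked("https://cdn.ads.example.com/banner.js"))  # True (subdomain)
print(is_blocked("https://example.com/article"))            # False
```

Since this never inspects the page's code at all, compiling ad scripts to wasm doesn't affect it; only moving ads to first-party hosts would.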
> - stronger DRM
If sites were going to ship Web Assembly-based DRM, they would already be shipping Web Assembly along with the Emterpreter. Remember that wasm has a polyfill already. I haven't seen that happening, so I see no reason to believe it'll happen in the future.
> - Bitcoin mining that regular user can't detect
A regular user certainly would notice the 100% CPU consumption. And anyway, bitcoin mining in a WebGL shader would be more profitable than anything wasm-based.
Moreover, though, surreptitious bitcoin mining on consumer PCs would be ludicrously unprofitable no matter what. Here's a Stack Overflow answer from last year that calculates how much a site with 2M daily visitors would make if they could all somehow run the fastest C implementation [1]. It was less than 50 cents a day back then, and in the meantime the hash rate has grown by nearly an order of magnitude [2]. Good luck.
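The arithmetic is easy to redo with round numbers. All inputs below are my own illustrative assumptions (not the Stack Overflow answer's exact figures), but the conclusion is robust to an order of magnitude either way:

```python
# Back-of-the-envelope estimate of in-browser mining revenue.
# All inputs are rough, illustrative mid-2017 assumptions.
visitors_per_day = 2_000_000
seconds_on_site  = 600        # assume 10 minutes per visit
visitor_hashrate = 1e6        # ~1 MH/s for a CPU SHA-256 miner
network_hashrate = 5e18       # ~5 EH/s total network hash rate
blocks_per_day   = 144
block_reward_btc = 12.5
btc_price_usd    = 2500

site_hashes    = visitors_per_day * seconds_on_site * visitor_hashrate
network_hashes = network_hashrate * 86400
share          = site_hashes / network_hashes
revenue_usd    = share * blocks_per_day * block_reward_btc * btc_price_usd
print(f"${revenue_usd:.4f} per day")  # about a cent a day
```

Even a site the size described above would earn on the order of a penny per day, before paying anyone to build the miner.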
You are right, but replace Bitcoin with the latest Fadcoin and it might get lucrative again.
One cool idea that is also gaining traction is in-browser proof of work to prevent DDOS attacks. Basically you have to perform a lengthy computation to get past the (fast, ultra-high bandwidth) firewall. Doesn't slow down the individual user much, but makes an attack much more difficult. I could imagine people using malicious JS to get these proof-of-work tokens.
But yeah, computing power in the cloud has become fairly cheap, it's really hard to see how criminals could benefit from secretly serving WebAssembly to people.
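The proof-of-work idea mentioned above is essentially hashcash. A minimal sketch in Python (the challenge format and difficulty are made up; a real deployment would also bind the token to the client and expire it):

```python
# Hashcash-style proof of work: find a nonce so that
# sha256(challenge + nonce) falls below a difficulty target.
import hashlib
from itertools import count

def solve(challenge: bytes, bits: int) -> int:
    """Client side: ~2**bits hash evaluations on average."""
    target = 1 << (256 - bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: bytes, nonce: int, bits: int) -> bool:
    """Firewall side: a single hash, so checking is essentially free."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

nonce = solve(b"server-challenge", 16)
assert verify(b"server-challenge", nonce, 16)
```

The asymmetry is the whole point: a legitimate visitor pays the cost once per session, while an attacker has to pay it for every connection attempt.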
I doubt a regular user would notice 100% CPU usage, and I'm even less certain that they would know what to do about it or what was causing it. Most OSes keep Chrome responsive just fine even when another process is demanding 100% CPU.
You must have been working with quite young people or something. My experience goes the other way: I've seen people who bought a new computer because their fans were running at max, and the reason for that was that they were full of dust and dog hair.
Right, they might notice it, but why would they care? They don't know that they should care, it is just a computer being a computer and it probably spins up the fans for other tasks too. They would be right to not care unless told otherwise, sys ops isn't their job.
I think it would be smart to educate people as part of a regular security briefing for non-technical staff, though. But if it's that much of a concern to your company, maybe an automated CPU usage monitor could alert the team to anomalies.
[1] I was running a test today when the program being tested ran into a very tight loop and the fan really kicked in. I was curious whether it would finish and let it run for several minutes until all went quiet and the screen went dark. It had shut down to prevent heat damage.
Regular users do notice things like that, they just can't articulate it. "I think my computer's getting old" and "I think I have a virus" are common ways of trying to explain things like this.
I'm not sure where you're getting any of this from.
First, I don't see why you think WASM-based ads will be any less blockable than JS already is.
Stronger DRM on the web [is already a thing][1], but that has nothing to do with WASM.
Bitcoin mining on the web is already possible with JS and so far that hasn't been an issue. If it does start becoming a widespread problem though, browsers will start taking steps to combat it, like they [already have][2] for pages that needlessly consume CPU in the background.
That's actually a great idea for funding content creation online. If a site is open source, then it would even be possible to prove that only a reasonable share of the client's resources are being utilized. I'd take that funding model over the advertising-driven model that exists now.
WASM is aiming to be fast enough to actually be capable of running a rendering engine. No more DOM for adblockers to look at, the pages may finally become canvases.
Right, but you can already do that. Serve the whole page as an SVG, or JPEG, or PDF. You can already embed ads inline. Or just plain serve them from your own server, calling them article.jpg instead of http://ads.adcompany.com/advertisement.jpg.
The reason people don't do this is that ad companies want control. They want to know exactly how often the ads are served, and they don't trust you to serve their ad all the time and to all customers. Also, they want to rotate ads quickly. Finally, they want to track people. So the solution is to use remote JavaScript. 99% of ads work this way; even major news sites don't sell their own ads anymore.
In this scheme, you'd probably still have ads served from a third party server. And even if they obfuscated the domain name, you could still probably identify their blob and block it.
So I'm not too worried about unblockable ads. But you're right, they will try, and I am worried about sites becoming unusable because they will emulate browsers using canvas, poorly.
If you run your own rendering engine you lose accessibility. If you add aria tags to stuff in order to make it accessible, adblockers will be able to read them as well.
I don't think advertisers care about accessibility. Not the same companies that want auto-starting video clips in your browser.
As for the site owners - unless it's someone large enough to care, I think the usual choice between "letting a few disabled persons access the site" and "getting a few more bucks from advertising" would be quite obvious. If that question would even arise.
Don't most ad blockers just rely on CSS selector rules matched against the DOM? I imagine there are a lot of ways to circumvent those techniques when you are rendering raw pixels with wasm.
Those pixels still have to go somewhere in the DOM, and clicks on those pixels still need to be handled. Plus, there's nothing wasm can render that you can't render with uglified JS already.
Put it this way: once you factored in the price of the bus fare to get to the public library to borrow a terminal to check your giant botnet's bitcoin earnings, your criminal enterprise would be operating at a loss.
CPU mining hasn't been viable since 2011 when bitcoin difficulty was below 1,000,000, and today it's near 600,000,000,000.
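Those difficulty numbers translate directly into an expected time to find a block. A rough calculation (the CPU hash rate is an assumed, optimistic figure):

```python
# Expected time for a lone CPU to find a bitcoin block.
# Difficulty D means roughly D * 2**32 hashes per block on average.
difficulty   = 600_000_000_000   # ~6e11, as quoted above
cpu_hashrate = 10e6              # assume an optimistic 10 MH/s CPU miner

hashes_needed = difficulty * 2**32
seconds       = hashes_needed / cpu_hashrate
years         = seconds / (365 * 86400)
print(f"{years:,.0f} years on average")  # ~8 million years
```

In other words, a single CPU would take millions of years per block, which is why pooled CPU botnets stopped making sense long before wasm arrived.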
Turn your website into an interactive canvas - basically an HTML5 Canvas game that functions like a news site - and I think it would be much more difficult to block ads. One objection is that you can already do this and we don't see it, but a counterpoint is that it's a hassle at the moment; once tooling advances (a C#-to-WASM compiler or something along those lines), we'll see frameworks, and then it will be pretty simple and possibly compelling... If it gives publishers more control over the experience, I don't see how they could pass it up.
Agreed. WebAssembly is not a replacement; the point was not to allow writing extensions in a native language, the point was to have a mechanism for escaping the browser sandbox in a controlled way.
Google could have allowed Chrome to be used as a cross-platform GUI library (THE cross platform GUI library), but left it to Electron (lagging behind and requiring distribution); think XUL runner. I don't see the sense in that. I'd absolutely love a modern XUL runner.
Electron is really just Chrome with a separate JS engine that can run native code if the developer chooses to do so.
Personally I'd argue that the "nativeness" of an application depends on how much the developer actually uses that ability to run native code. If you just throw a bunch of standard webapp CSS, JS and HTML in a folder and wrap Chrome around it, it's no more "native" than any other webapp. If on the other hand you have a whole bunch of native code doing, for example, media editing and the HTML and such is just the frontend UI, I'd say sure, that's a native app.
I'm extremely hesitant about installing random non-sandboxed applications. A Chrome app comes sandboxed with well-defined permissions. It's rare for native apps outside of mobile to come sandboxed or be easy to sandbox.
I always wonder what it is that these apps need that cannot be done as regular web apps, as the web platform has provided more and more controlled ways to break out of the browser sandbox. (This is not rhetorical, by the way.)
E.g., I'd like to build an expense tracker with HTML/CSS/JS/SQLite, but I want it to work offline, and the user can choose to save their DB file in their Dropbox/GDrive folder.
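The storage side of such a tracker is the easy part. Here's a sketch of the data layer using Python's stdlib `sqlite3` (the commenter's version would be JS in the browser; the schema and file name are invented for illustration). Keeping everything in one database file is what makes the "drop it in a synced folder" idea work:

```python
# Sketch of an expense tracker's data layer: one SQLite file that
# the user can keep in a Dropbox/GDrive-synced folder.
# Schema and default path are illustrative assumptions.
import sqlite3

def open_db(path: str = "expenses.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS expenses (
                        id INTEGER PRIMARY KEY,
                        date TEXT NOT NULL,
                        category TEXT NOT NULL,
                        amount_cents INTEGER NOT NULL)""")
    return conn

def add_expense(conn, date, category, amount_cents):
    # commit atomically so the synced file is always consistent
    with conn:
        conn.execute(
            "INSERT INTO expenses (date, category, amount_cents) VALUES (?, ?, ?)",
            (date, category, amount_cents))

def total_cents(conn) -> int:
    return conn.execute(
        "SELECT COALESCE(SUM(amount_cents), 0) FROM expenses").fetchone()[0]

conn = open_db(":memory:")  # use a real path for the synced-folder setup
add_expense(conn, "2017-06-01", "food", 1250)
add_expense(conn, "2017-06-02", "transport", 300)
print(total_cents(conn))  # 1550
```

The caveat (true in any language) is concurrent syncing: SQLite assumes exclusive file access, so the sync client should only ever see the file between transactions.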
> It was one of the best things that happened to Linux desktops in a long time and removing it hurts users and makes them less secure.
It would be more accurate to say it was the best thing to happen to your use of Linux in a long time, and it looks like even that is only because you're trying to use a bunch of closed source, non-cross-platform stuff.
I also disagree that it makes users less secure. The teams working on Debian, Ubuntu, Arch, etc. have much better security track records than some random web developers who've made an "app". There's no way I would trust a web based SSH client, for example.
And sometimes, you have no choice - there's no FOSS alternative to TeamViewer, and thanks to it running inside Chrome, I no longer have to run a Windows VM.
The web based SSH client is published by Google themselves and they use it internally.
> The teams working on Debian, Ubuntu, Arch, etc. have much better security track records than some random web developers who've made an "app".
The way things are, right now, Chrome is much better at protecting apps from each other than my Linux desktop is. If, for example, the Cleanflight or TeamViewer apps were regular apps, a bug in them would fully compromise my account.
---
Off topic remark about Linux distro security: I really like Arch, but security isn't their strongest suit. For example, they still haven't enabled full-system ASLR, citing unfounded performance concerns, when other distributions did so years ago. Even Windows with all their third party apps has a higher percentage of ASLR binaries than the average Arch system.
They also have no central build system and instead rely on volunteers who build the packages on their personal systems and sign them using their personal GPG keys.
I really want ASLR in Arch so I'll keep complaining about it publicly until it finally happens :-)
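A binary only benefits from full ASLR if it was built as PIE, which is visible in the ELF header. A minimal sketch of that check in Python (`is_pie` is a hypothetical helper; real tools like checksec inspect more fields, e.g. RELRO and stack canaries):

```python
# Check whether an ELF binary was built as PIE (e_type == ET_DYN),
# which is what lets the kernel randomize the executable's base address.
# Note: shared libraries are also ET_DYN, so apply this to executables.
import struct

ET_EXEC, ET_DYN = 2, 3

def is_pie(path: str) -> bool:
    with open(path, "rb") as f:
        header = f.read(18)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # e_ident occupies the first 16 bytes; byte 5 encodes endianness,
    # and the 16-bit e_type field follows at offset 16.
    endian = "<" if header[5] == 1 else ">"
    (e_type,) = struct.unpack(endian + "H", header[16:18])
    return e_type == ET_DYN
```

On distributions that enabled full-system ASLR years ago, running this over `/usr/bin` reports mostly PIE binaries; on an Arch system of that era, many were still ET_EXEC.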
> The way things are, right now, Chrome is much better at protecting apps from each other than my Linux desktop is.
I have a hard time believing that. With a ton of stuff all running inside of Chrome, it's much easier for them to access each other's data than if they were standalone apps. Further, since Chrome is such a huge attack surface, I would expect it to be less secure than a smaller, more specific application.
On that note, I can go look at my Linux distro's security and bug tracking systems and see all of the known security issues and bugs affecting almost all of the software on my system. Does anything like that even exist for Chrome Apps?
> If, for example, the Cleanflight or TeamViewer apps were regular apps, a bug in them would fully compromise my account.
Isn't that the case whether it's a Chrome App or not? Chrome has a huge attack surface, so it seems there's an even bigger chance of hitting a bug or being affected by an exploit.
The bigger problem seems to be that you're running apps that you don't trust, while I can trust my Linux distro to have safe software in their repositories. Barring bugs, I generally don't have to worry about installing malicious applications.
I'm not sure Google does any kind of vetting for Chrome Apps, but I'm not sure I'd trust them even if they did. They are the largest ad tracking company in the world after all.
> I have a hard time believing that. With a ton of stuff all running inside of Chrome, it's much easier for them to access each other's data than if they were standalone apps.
Ah, the argument from incredulity.
If you're using X11, every command with access to the display server (which is usually everything you run) can read all keyboard and pointer input and screen output and inject arbitrary input.
And? That doesn't change by running inside of Chrome.
The only reason that's even a concern is because you can't trust Chrome Apps to not be malware.
On the other hand, when I "apt-get install <some app>" I know it's not listening to all X keystrokes unless that's a legitimate part of its functionality, because I trust the Debian team to only add trustworthy software to their repos.
>I have a hard time believing that. With a ton of stuff all running inside of Chrome, it's much easier for them to access each other's data than if they were standalone apps.
Chrome apps are subject to sandboxing, and regular native desktop apps (besides apps installed through OS X's app store) generally don't have any sandboxing enforced on them at all.
Your pique is noted, but as more anecdata I use Chrome Apps for Soundcloud and Mixcloud, where it's nice to have a Chrome window that won't collect tabs and have a recognizable icon. I have dozens of tabs open in each of several Chrome windows and it can be a pain to find the one I want. Insert complaint about not being able to switch to a tab from Chrome Task Manager.
I was specifically disputing the claims that Chrome Apps are "one of the best things that happened to Linux desktops in a long time" and that they're noticeably more secure.
Taking Soundcloud as an example, Clementine (and probably most media players) can stream it just fine. Having a Chrome App is nice, but it isn't providing anything that isn't already available. I'd even say the Chrome App is a step backwards, because with Clementine I don't need a separate app for every music service.
The idea being that I should install Clementine to listen to Soundcloud, instead of Chrome, which is already installed? This is the only thing I would use Clementine for, since it doesn't support multiple genres per track so it's out as a music library.
> instead of one Chrome instance, I'm now running five which use more than one GB of RAM each
Is that true? Executables and shared object files are supposed to share (code) memory.
So what big data structures does Chrome use that it can share between tabs (which are processes) and that it can't share between different instances of Chrome?
> It was one of the best things that happened to Linux desktops in a long time and removing it hurts users and makes them less secure.
I disagree: you can install most of these from your distribution's official repository, without the use of Electron. They are also very secure if you run them as an unprivileged user.
Suppose your JS Chrome App is getting the plug yanked on it, what are your alternatives?
1.) Port it to Electron and keep nearly the same code base
2.) Rewrite the whole thing as a native app in a language such as C++, without the use of Electron
You can't possibly tell me that most developers won't choose #1 instead of #2 in a heartbeat (the switching costs are orders of magnitude more for #2, for one thing). Which is not a Good Thing.
And it's also very obvious that #2 isn't nearly as secure as #1, which runs in a sandbox and so does not have direct unchecked access to users' files like #2 does.
Yes, but the existence of SSH for Chrome makes it much, much easier to teach a Windows user how to try out the Linux command line. PuTTY is annoying as hell to help a new person get working, and they might not have enough space for Vagrant+VirtualBox.
PuTTY's UI is an ass-backward mess (and that's putting it mildly), and the fact that you're used to putting up with it, or that there exist five Knuth's-arrows worth of guides, doesn't change that. In fact, the latter is probably a testament to it. I've used it for years, know it by heart, still hate it, and am pretty well served by and versed in the unix terminal universe, TY. PuTTY has been useful for sure, but that was by scarcity, as there was basically no alternative on Windows for a decade or more.
"Ubuntu on Windows" as they're calling it now is free, enabled via the control panel, and provides a near perfect bash experience. I use it on the desktop I built for VR to SSH into my digital ocean droplets all the time. Super easy to use, just open powershell, run `bash`, then you can run `ssh` like normal. You have access to your windows files with /mnt/c and etc. for additional drives. The only issue is that Powershell doesn't support the full gamut of colors that Bash does, but that support is coming in the fall creators update and frankly it works fine with every Vim and zsh color scheme I've tested.
10/10 developing for the web on windows is finally tolerable
I don't know enough about ssh for Chrome and bash for Windows to get your point. Isn't bash for Windows (as part of WSL?) free? What is it that costs more than $5 using bash for windows but is free with ssh for Chrome? Genuinely interested, I'm on OSX mostly but I'm WSL curious.
I interpreted that as them not having access to a Windows machine and wanting to try out the experience for themselves before attempting to teach someone else.
I love PuTTY, but my best PuTTY is actually KiTTY, since it saves profiles in a local config file instead of the ominous Windows Registry. Much easier to move around :-)
If you run them as a different user from yourself, maybe, but who does that?
The idea that software is secure if it only runs on your own user account is stupid IMO. I'd rather that software had access to everything on my computer EXCEPT my personal files.
It's about restricting access:
One is protecting others;
the second is protection within your own realm.
Both are needed. (Unix's model was just: at least don't touch the data / system that other users have.)
It's useful outside of Chrome OS if you have a security perimeter based on TLS with ACLs and auditing already in place and you want to use it for SSH as well:
I've got this crazy idea. Since "Linux Desktops" are generally running GNU under the hood for providing user land services, why don't we call those systems... I don't know... "GNU/Linux"? That way we can distinguish them from systems that use the Linux kernel, but have a completely different user land infrastructure.
How many of them actually intimately use the GNU userland as opposed to Xorg and whatever libc's installed? GNU's an increasingly irrelevant portion of unix and unixlike systems -- most of the actually important userland portions are python, ruby, the aforementioned Xorg, etc.
I actually don't think you are incorrect. GNU is not nearly as big a piece of the puzzle as it used to be. It's just that when most people say "Linux Desktop", the part where they say "Linux" usually means the part that GNU makes up. As far as I know, GNU libc is still by far and away the most popular libc installed on those kinds of systems.
So it was just kind of a snarky joke because the parent said that to be a "Linux Desktop" you had to be able to get ssh running (presumably they meant openssh). And while that's not GNU, GNU is what the vast majority of "Linux Desktops" will use to get you there -- so the implication really was that "Linux Desktop" == "GNU/Linux Desktop".
I thought it was funny, but probably I was being too obscure. Also, I should know better than to dive into politics for no good reason.
Yes, I know what it means and includes. Android, which is one of the biggest unixes right now, doesn't use GNU. iOS, which is another one of the biggest unixes right now, doesn't use GNU. Most embedded linuxes don't use GNU. So yes, for the parts of unix which are visible to most people, the gnu parts are not very relevant at all.
While this is true, there is still typically a unix-like userland, at least in the form of busybox or some such.
I think there is some value in distinguishing 'linux the kernel' from 'linux the unix-like system', especially in the face of those systems which mainly use 'linux the kernel' in a non-unix-like way, such as here.
E.g., the 'gnu parts' (i.e. the unix-style userland) are hugely important for me on a workstation - I could not work in a system that doesn't provide the 'gnu(unix)-like' user interface. On a phone/consumer/browser device, this is not so much the case.
A device using the linux kernel (or Mach kernel in iOS) doesn't make it a "Unix" or "Unix-like" system, despite that same kernel being used in other truly Unix-like systems. The user land (aka GNU in most Linux distros) is what makes it a Unix-like system. That doesn't mean GNU isn't relevant, it means what you considered a "Unix-like" system was overly broad.
This reply is a bit overly pedantic and I apologize, but you kept pushing so I wanted to clarify.
You can choose what kind of laptop you want. ChromeOS is one of the options, and security (+ trivial exchange-ability) is one of the selling points for using a Chromebook.
I tried it for a while, but I'm too used to the Mac to make the switch easily, so I moved back. But I know quite a few folks who use and love them. Opinions, as I'm sure you can guess, vary widely. It was surprisingly not-bad, even for a diehard Mac user, and that was on a model from two years ago.
Google is a very large engineering organization, and (my opinion here, but one shared by others) recognizes that there's a lot of diversity in what engineers like for their workflow. There's obviously a set of standards for what you can choose from as far as laptops (since the company is buying them), but it's pretty broad.
Niels Provos himself is a Chromebook user (not sure if he needs to access production these days...) and he talks about locking down privileged access to Chromebooks with security keys:
Is there any good way to configure it to handle many subdomains with one instance, or do you still have to pick between using one primary proxy.tld vs. running lots of instances of the proxy?