Hacker News
Why Desktop Linux (Still) Sucks. And What We Can Do To Fix It. (blip.tv)
73 points by r11t on April 27, 2010 | hide | past | favorite | 96 comments


The audio situation is a total debacle. n-1 of the sound systems need to be killed so we can focus on making the remaining one work really well.

It's really funny that this is his first point, because I ran into it just two days ago... the microphone jack on my laptop, which worked fine in Ubuntu Hardy, Intrepid, and Jaunty, no longer works in Karmic thanks to the upgrade to PulseAudio (a bug has existed in Launchpad since November). It's insane that something so simple (an analog input jack) on such common hardware (Intel 82801I) could be non-functional in 2010. Of all the weird driver problems I've had in 12+ years of Linux use, this one is probably the most shocking.


It is hard to believe that sound can be so broken, after all these years (and looking at the huge strides other parts of the desktop have made).

If Ubuntu wants to be the uber-friendly, runs-anywhere (just works!) distro, they need to avoid these types of regressions from one version to the next. I know they're not in charge of all the upstream work, but they are the ones who decide which version of PulseAudio (or X, or whatever) to use. Instead it seems that Ubuntu always tries to stay on the bleeding edge, which is inevitably going to be broken for some people.


But it's not that simple: often the bleeding edge fixes other annoying bugs, and it's really hard to estimate the affected population. Maybe they should add an extra step to their installer that would let users anonymously send data about their hardware configuration to Canonical?


I think you're both right. What I think would make a real difference is making some cruel-but-kind decisions about deprecation.

State officially that the other mechanisms are broken, then drop everything that doesn't use PulseAudio from the repositories.


I think Fedora has led on the bleeding edge of pulseaudio. Ubuntu apparently had a borked setup in one version of Ubuntu, but the same code was running well on Fedora.

Having said that, pulseaudio really seems to bring out the cranks and probably most of the issue is that it's become something that it's okay to complain about, rather than something that people get behind and fix.

It is, however, clearly the future, and like many things in Linux, ripping out mature half-solutions to replace them with newer full solutions is going to cause regressions for some. As long as Linux is making progress on average, it's all part of the game.



The major advantage of "Desktop Linux" (that I've been using happily for quite a few years now) is that it's utterly unlike Mac OS X or Windows. We use Linux not because it's free, but because it's more comfortable, the UI is better and friendlier, and developing programs for it is a whole hell of a lot easier.

Everything GNOME, Ubuntu, and projects like them do works against these advantages. We use Linux because of its differences; these differences from other systems need to be accentuated (zsh, the suckless project, etc.), not fought against (Ubuntu, NetworkManager, HAL, etc.).

With every step these people take, Linux gets more difficult to develop for and less comfortable to use. This needs to stop.


> We use Linux because of its differences

I don't know who you speak for, but it's not me... I use Linux because it's cheap and I'm in control.

Other than that, I love having aptitude and a good repository, but every once in a while I really wish there were a click-to-install standard, and I also really wish I wouldn't burn my weekend over wireless issues.

> developing programs for it is a whole hell of a lot easier

You're not speaking about desktop applications / games. That's hellish compared to the alternatives, which is partly why companies like Adobe aren't investing in it... I've worked there and I know the arguments that fly back and forth.

> By every step these people take, Linux gets more difficult to develop for and less comfortable to use. This needs to stop.

People should and will work on whatever they want, and all "wasted effort" arguments are bullshit. If you think there are better paradigms that should be explored, then jump in and show the world how right you are. Talk is cheap.


> You're not speaking about desktop applications / games

Those apps aren't hard to develop because of libraries or the environment or whatever; they're hard to develop because their developers want them to be closed, so recompilation and inclusion in the real package management system is impossible. Starting with such a handicap makes things complicated :)

> If you think there are better paradigms that should be explored, then jump in and show the world how right you are

The better paradigms have been known and used for 20 years (such as the Unix paradigm of connecting multiple programs, the CLI, and so on). That it doesn't appeal to some end users is not our problem.

(also username post combo, but just kidding :)


> Those apps aren't hard to develop because libraries or the environment or whatever

They are hard to develop because choosing libraries, the environment, whatever, is difficult, and your choice today may be deprecated in six months. Of course, with the proper abstractions you can painlessly rewrite your app to target the newest toys.

But there are always costs involved... sure, you could rewrite it in a couple of weeks, but quality assurance (if you're a professional who doesn't release pieces of shit) takes as long as it did for the original target.

> they're hard to develop because their developers want them to be closed

Yeah, well, it's their choice, and accommodating applications that aren't open source should be a requirement of any OS because, you know, the majority of desktop apps in production are closed, and that ain't changing, because it's a valid business model.

> Starting with such a handicap makes things complicated :)

It's only a handicap on Linux. The other platforms, including Solaris, have been doing just fine. Which makes me wonder about which part is handicapped.


And getting new versions of your software out by trying to get X distributors to include your newest version in their repositories (which are completely outside of your control, each with its own set of rules) is less complicated than just putting new stuff up on your own website? I don't think so. There is certainly one advantage for users: it's easier to find versions of software that are officially sanctioned by their distributor, even if they may be months out of date. That often doesn't matter much for server software, but for desktop software it's something users don't really accept.

I think we can discuss whether this central-control-by-distros model of software distribution has more advantages or disadvantages, but it's not all shiny throughout. As an example of when things go really wrong with that model: maybe you heard of the troubles with Debian, J. Schilling, and the cdrtools. I didn't, until I noticed I could no longer burn CDs(!) with k3b and had to spend a few hours figuring out how to fix it (you have to get the Schilling versions of the tools; the official Debian version simply does not work in some cases, and that has been known for a long, long time). So there is a well-working combination of k3b+cdrtools which fails to pass the distributor rules (which is certainly fine), and now the authors can't easily get the working combination (of 2 open source packages!) out to the users.


There absolutely are reasons why GNU/Linux development is harder/more constrained than Windows at times. On Windows I can go with DirectX or OpenGL depending on my needs; with GNU/Linux I only have OpenGL. Say what you want about DirectX, it is ahead of OpenGL in some respects.


Not me. I use Linux (among other OSes) because it's Open and Free, and I think that it's important to have an alternative to proprietary systems.

I don't consider it to have a better UI or to be easier to develop for and I feel more comfortable with Windows.


Then you need to try the following things:

- vim or emacs
- tmux or screen
- tiling window managers
- zsh

Each of these results in a massive usability increase.


Lowercase f free.

He's saying most people don't use Linux because it's $0.00; that claim leaves room for your reason, that it's liberated.


It's funny you're saying that after all the years of work that went into making traditional window managers more Windows/Mac OS X-like.


The easiest problem to tackle is the .RPM vs .DEB debate, which should have ended about 3 years ago. If the LSB had said .DEB it might have happened, but since the "official" standard and the actual majority are diametrically opposed, it doesn't want to happen. And of course the filesystems don't perfectly match across distros.


The .rpm vs .deb debate is a fallacy. The real issue is that when packaging for, say, Debian and RHEL you have to care about a different glibc, different compiler versions with potentially different ABIs, different filesystem conventions, different scripts for post-install, etc... The format in which the files are packaged is the least of the issues compared to that. For example, packaging for openSUSE once you have a package for RHEL is as much of a PITA as packaging for Debian, really.

Once you standardize on what makes packaging difficult across distros, you basically end up with the same system. I think systems like the build service from SUSE, etc... are much more useful than wishful thinking about packaging formats.


Yes, the SUSE build service is a very nice tool and much more interesting than a single package format, which wouldn't help much. And if the dream is to provide just a single binary package for any distro in the world, it won't work anyway. Well, maybe in theory, but if you mean it seriously, you need to compile and link against exactly the same binaries that the target system uses. (Or static linking. Or just believe it will work. But a serious commercial vendor who wants to provide real user support cannot.) Which would mean that every Linux system would have to use the same binaries, which is, hm, nonsense.


Meh, from a source point of view, once I have a debian/ directory with a properly set up rules file, building becomes trivial. Then it is a matter of generating a package per distro/version with the right environment. This could very well be done in an automated manner with a few VMs.
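For context, with modern debhelper (version 7 and later) the rules file really can be trivial; this is a minimal sketch, and the surrounding metadata files (debian/control, debian/changelog) are assumed to exist:

```shell
#!/usr/bin/make -f
# debian/rules -- debhelper 7+ style. The catch-all target hands every
# build step (configure, build, install, package) to the dh sequencer,
# which picks sensible defaults from the source tree's build system.
%:
	dh $@
```

With that in place, something like `debuild -us -uc` (or pbuilder chroots for per-distro builds) produces the .deb, which is exactly the "right environment per distro/version" part that the automated VMs would handle.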


Which is exactly what the build service gives you...


My bad, I was under the impression that it was an RPM only thing.


I hear what you are saying, but we can reduce the differences between build systems while still using different targets.

I do a lot of work with OpenEmbedded, every distro image you create is for a different hardware platform, using different libraries, different tools, etc. But it is a unified build system. So I can say use this version of glibc and spit out the files in that folder, and with very minor tweaks I can use a completely different version of glibc and a different file structure for another image. If we had a unified dependency/build system across distros, we could have completely different contents while having relatively straightforward customization because if you knew one system, you'd know them all.


From what I hear, .DEB is a bitch to do anything outside the norm with, whereas .RPM is a much easier and less complex package format. That said, I much prefer the apt system on Debian and Ubuntu. What really needs to happen is to take the best of both worlds and merge it into something of a complete system.


Well, like Arch Linux's Pacman and AUR? :)

I found Pacman to be easy and simple, on the same level as APT, and when the binary package doesn't do what you want, building your own package is really easy.

http://wiki.archlinux.org/index.php/Pacman

http://wiki.archlinux.org/index.php/Arch_Build_System
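To give a sense of how simple the format is, here is a minimal PKGBUILD sketch; the package name and URL are made up, and a real one would carry a proper checksum and license field:

```shell
# PKGBUILD -- built and installed with `makepkg -si`.
pkgname=hello-example                 # hypothetical package
pkgver=1.0
pkgrel=1
arch=('x86_64')
source=("https://example.org/$pkgname-$pkgver.tar.gz")
md5sums=('SKIP')                      # checksum omitted for the sketch

build() {
  # Standard autotools dance inside the unpacked source tree.
  cd "$pkgname-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  # Install into the staging directory that makepkg turns into the package.
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

The whole package description is a shell script of variable assignments and two functions, which is why the thread calls the format "ultra simple."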


Upvoted for pacman. I have actually only tried Arch at a friend's place, because for some anal reason Arch does not provide live CDs, only a command shell from which you must build your system up, which is quite a pain.

But I have only heard good things about pacman (and its ultra-simple package format). It enables Arch's rolling release model, which means that you are always up to date without having to go from Berserking Baboon to Chittering Chimpanzee.

It is a much saner model for a stable, usable desktop system. If you can manage to combine this with a kernel update technology like Ksplice, you have a super stable, updated Linux desktop without ever having to reboot (which makes the whole splash screen effort quite pointless, actually).


As someone who has run Arch on their main PC for over a year, let me tell you it's not all roses. pacman may be simple, but it lacks many features you have come to expect; don't be surprised when everything on your system breaks when you pull in an updated library, or when pacman pulls in a new kernel without you noticing and your machine fails to boot because it has nuked all of your old kernel modules. While I agree that the rolling release model is attractive, it requires a much more robust package management system to be practical for people who do not live on the bleeding edge 24/7.


You should really try Arch. I was initially hesitant because of the whole 'Build your own Linux!' thing, but after doing it once I learned LOTS and realized that it was nowhere near as difficult as it sounds.

There is excellent documentation in the Arch wiki, and the forums are helpful. And I have to say, installing Arch Linux taught me more about Linux than anything else I have ever done.


Apt has little to do with the underlying package format; there is an APT port that works with RPM (apt-rpm).


I wish talks like these had transcripts. I listened to the first half, but don't have the time for the whole thing.


tldw? Anyone?

Sigh. It's so annoying when you have to sit through an hour-long video for what would otherwise be a 5-10 minute read.


We might get something from the Mozilla Drumbeat project in a few years; maybe Google's speech-recognition subtitles will be able to export a transcript when the feature is properly live, too...


I listened to the first half, but didn't listen to the second half, since by that time him constantly cutting out on the mic had driven me insane.


I think what Linux really needs is to embrace some way of getting software outside apt/yum. Having to go through the package managers forces you to be an administrator to install software, and neither the developer nor the user has any say in which version of the application gets installed.

I think Linux distributions should be platforms that developers can write applications for and distribute however they wish. Then users can choose which applications and which versions to install, and do not have to go through an App Store-like package management system.

I also think applications should be stand alone. If they need anything outside the platform, these dependencies should be delivered as part of the application.


Every binary distro I've ever used (SUSE, Debian, Ubuntu, RHEL, and Fedora) came with a little GUI that lets you install packages without going through a repository. Just download the .deb/.rpm and double-click on it. This is how a lot of projects deliver their latest builds. If you're talking about a unified package format (it doesn't seem like it), then that's a completely different issue.

As for 'stand-alone' applications, that doesn't make much sense for open source stuff, but there's nothing stopping you from distributing your software that way. A lot of commercial software for Linux is delivered that way.


But everything packaged that way still demands to be installed as root, linked against their depgraph of ancient fucked-with libraries, and split apart into pointlessly separate packages.


It asks for your own password (à la Mac; that said, it could simply ask you to confirm, à la Windows).

End users don't do linking or worry about dependencies.


Unfortunately this isn't used enough. It is not used by Firefox and Thunderbird. Instead they provide .tar.bz2 files.

I was pleasantly surprised by skype and chrome, who actually provide rpms and debs. Nice!

OpenOffice gave me a tarball with a myriad of debs. Not the nicest kind of packaging.

Unfortunately the distributions perform their own packaging of a lot of these applications, which may give some people the idea that they should use the version packaged by the distribution. The distributions should really stop doing this and refer people to the official packages instead. Otherwise users will be stuck on old versions.

I also don't know how well this kind of delivery mechanism works with automatic updates. Do the external packages get updated in the same way as the packages delivered by the distribution or does anything like Sparkle exist for Linux?


I really don't understand what you're trying to get at, are you concerned with out of date packages? The whole point of package repositories is to avoid that problem, although QA testing and policies can make that non-obvious to users who aren't running some sort of 'unstable' option.

Some projects (Chrome I think?) release packages that actually add their own repository to the user's system so that the user can receive updates through their standard package manager. Other projects (Firefox) have a built in "check for updates", but then you end up back in the decentralized Windows/OSX update hell. That option is generally disabled by distro packagers for obvious reasons.
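The Chrome-style approach amounts to the package's post-install script dropping a repository definition into the system, so future updates arrive through the normal package manager. A sketch of what that looks like on Debian/Ubuntu (the file path follows the standard convention; treat the exact repo details as illustrative):

```shell
# /etc/apt/sources.list.d/google-chrome.list -- installed by the .deb
# itself, so `apt-get upgrade` picks up new Chrome releases from the
# vendor's repository rather than the distro's.
deb http://dl.google.com/linux/chrome/deb/ stable main

# The vendor's signing key must also be imported once, e.g.:
#   wget -qO- https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
```

This keeps updates centralized in the user's package manager while leaving release timing in the developer's hands, which is exactly the middle ground the thread is arguing about.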

I've seen a lot of people citing the lack of package repositories and package managers as a weakness of Windows/OS X but this is definitely the first time I've seen it the other way around.


Yes of course I'm concerned about out of date packages. For example, my Fedora machine at work only has packages for Firefox 3.5. I tried using the tarball from firefox's home page, but for whatever reason it was really unstable.

I don't think it is unreasonable to expect to be able to install the latest version of such a major open source product as Firefox without hassle.

I don't know what you mean by the Windows/OSX update hell. Perhaps you mean the lack of a centralized place to look for updates. Here I agree that the situation is less than ideal, but I wouldn't call it hell. I'm sure the situation can be solved without relying on a centralized repository of all software.

The way Chrome does it may be the best option. Too bad it's not used by more projects.

The main thing I want is a clear separation between the core platform and the applications. Then the platform vendor can focus on the platform and the application developers can focus on their applications.


I think it boils down to what he was talking about at the end: Linux users need to be willing to buy software, and Linux software developers need to be more business savvy.


The open question is why anyone would buy software for Linux. The key differentiating factor for _most_ people who use it is that it's free (as in beer). There are two other operating systems that for most people work better than Linux. It's hard to see how you can ask people to pay for software now in the hope that it will eventually be good. I'm sure some people will buy software because it's the Right Thing To Do, but considering the population that selects Linux and their reasons, it's hard to picture a big enough market there to draw really good consumer software developers.

An interesting business question.


I completely disagree. I use Linux because of how well it works and would gladly pay money for it, and I think most long-term Linux users stick with it because of how well it works. In fact, most Linux users I know purchased a computer with Windows already on it, and completely wiped it off to put Linux on there. So they've already 'paid' for Windows, but use a different OS anyway.

I'm also not saying people should pay for bad software in hopes that it will become good. I'm saying that we need to make it easier to pay for software, so that the good software that already exists can be purchased.

Android is an example of how Linux and payments can work. An open source operating system with many free pieces of software, but an App store where products can still be purchased.


> would gladly pay money for it

So, have you? If so, how much? I've spent $300 on software for my Windows computer in the last year, and another $100 on software for my Mac.

> So they've already 'paid' for Windows, but use a different OS anyway.

Granted, but I think far fewer people would if Linux wasn't completely free. If Linux distros were able to enforce a charge of $5 per download, I suspect consumer adoption would halve or worse.

> Android is an example

I'm not sure that's the best example considering the trouble many developers are having making money on Android. But even if we go with that, there are probably already far more people carrying Android handsets than are using Linux on the desktop.

More than that, there's a question about what type of people and for what motivation people buy Android phones. Notice how I said "buy Android phones," not "install Android". People buy Android phones explicitly because you can easily buy desirable apps on them, unlike so many other phone OSes. But the operating system is not free like beer in the sense that it is easily downloaded and installed on a device people already have on hand. You still in most cases have to go out and buy the device. So it does not suffer (as much) from the selection effect I described earlier. (The evidence I've seen so far does suggest that its demographics with respect to paying for software deviate significantly from those of the iPhone.)


Yeah. I've paid for virtualization tools, games, proprietary libraries that made my life easier (OSS back in the day).

I'd happily pay Dag Wieers for RHEL/Fedora repo access if he asked for cash too.


The differentiating factor for me is that it's just what I've used for 12 years and what I'm most productive in.

Serious Linux users use it because it's the only thing that fits the way they work. Linux has all the interface and programs I'm used to, so of course I'd be more likely to buy software for it than any other OS. What holds me back is that there isn't any software I need that I don't have...


Same for me I guess.

One problem that I do have is that if I pay Canonical, for example, to make the whole OS better, it would mostly still crap out because I made the mistake of buying an AMD/ATI graphics card. Not sure who to blame at that point. The Ubuntu people say it's ATI/AMD's fault, and ATI/AMD say that there's not enough money in the market to justify better work on the driver (that, or technical problems with the differences between distros). So what then? It would still have the same result.

But, in general, I do vote with my dollars.


I've bought some games for Linux


> The open question is why anyone would buy software for Linux.

Well, I'd be interested in one of the following apps: a desktop wiki, an email client, a contact management app, a feed reader, a music player, a scalable image editor, a layout app, a recording app, a guitar effect app, a GIT client, and probably some more.

Of course, for each of these, there's some free app somewhere. But each of these free ones have something I don't like.

I admit, my willingness to pay is quite low. That's partially due to some free apps being available. The bigger reason, however, is that installing proprietary apps sucks under Linux. I also fear that regular distro updates will break apps I've paid for.

In other words: if there were a way to conveniently download an app from a store or homepage, and to get security and "distro upgrade" updates (maybe by registering the app automatically with APT during installation), my willingness to pay would be quite a bit higher!

> The key differentiating factor for _most_ people who use it is that it's free (as in beer).

How do you know? Any empirical studies I'm not aware of? Or just the Biased Sample fallacy?

> It's hard to see how you can ask people to pay for software now in the hope that it will eventually be good.

Dunno. There are a lot of applications that were not 'good' when they were first released. People still bought them.


The key differentiating factor for most people is free _as in freedom_, whether they came to Linux because their hard drive crashed and they don't have permission to reinstall the OS they paid for that came with their machine, or because they are geeks who enjoy the power that comes from the freedom to tinker and benefit from others' tinkering.

No one is aware of the cost of their operating system anyway.


> no one is aware of the cost of their operating system anyway.

Basic macroeconomics suggests this is unlikely.


They may be aware, but not care. The last legal Windows CD I have seen was Windows 98. All others that I ever used were preinstalled.


upvoted.

We buy our Linux-based office software (we do NOT use OpenOffice). There should be more paid-for apps that are of much better quality than what's available.

I believe that Linux (or some distro) should incorporate licensing/DRM into the OS itself - which you have the freedom to turn on or off. More or less, it means an app store, but indie developers will not have to worry about building licensing into their software.


What office software do you use, btw? I've been looking for a Linux-based PowerPoint replacement (Impress doesn't cut it).


SoftMaker. Mind you, we need it mainly for the spreadsheet, but I think the presentation software is also quite decent. Note that they don't have very good compatibility with ODF (sigh...).

The license is extremely liberal.


What? Windows and OS X have rich libraries of paid applications, and they don't require any licensing or DRM embedded in the OS. Why on earth would an operating system that hinges on openness take a more draconian step than any other major OS distributor has (mobile OSes aside)?


Because you need to jumpstart the paid-apps ecosystem. I am not talking about approval processes, but something more along the lines of the Android app store than the iPhone one.

What you get from that is:

1. Signed software
2. Small-time developers don't have to worry about licensing
3. A distribution channel
4. A payment gateway

The Windows and Mac systems were never built on the basis of a package management system, which Linux inherently has. Don't we _already_ have an app store on Linux?

Only, there is no way for a developer to make money off Apt or Yum.


Lindows/Linspire actually had an app store back in the day, where developers could package and distribute commercial software. It used apt as its backend.

Since that didn't work out, it looks like the next best alternative is something like Steam. It seems highly possible that Steam will be released for Linux (a port already exists). Valve could be persuaded by enough software developers to add non-game software to their system.

Neither system required/requires DRM to be integrated into the OS.


No, it doesn't, but it is DRM all the same.

If you were an app maker, why would you prefer closed-source DRM to an open-source security model in the OS?

I just don't get it: everyone likes Steam, but they don't like DRM in the OS. Is it so distasteful that a computer gets tainted by built-in DRM? And then you will find someone who gets pissed off with Steam, goes looking for alternatives... and the consumer ends up with 10 different kinds of app-store software (on Windows, I have Direct2Drive, Steam, and StarForce for different games).

I don't know how it can be managed with built-in DRM, but I suspect something like a PPA (Personal Package Archive) would enable everyone to self-publish their software, yet still be part of a web-of-trust model.


I think that many of us who don't want DRM in the OS believe in a concept of layers of separation. The core OS should be as widely applicable and as open as possible, while any restrictive technologies or licenses should be in as high a layer as possible, so that they affect only those people who want/need them. The benefit to doing things that way is that all layers become easier to maintain and less prone to failure (DRM being used to blacklist hardware drivers would be considered a failure in this case), because each component in each layer only contains those things necessary for itself and the layers above it.

DRM especially should be in as high a layer as possible, or ideally not present at all, so that its casualties are minimal if the authentication servers disappear or the company holding the keys becomes malevolent.


No, thanks. IMHO, the Linux project sold its soul, and that is the current problem. In particular, almost every major developer of Linux is on the payroll of some software giant whose only interest is running Linux servers. Nothing wrong with that, but it doesn't fit what I'm looking for running locally.


If the camera person for that video reads this, I suggest placing the speaker in the bottom-left/bottom-right corner of the video frame and keeping the slide visible at all times. You can pan around when people ask questions, but keep as much of the slide visible as possible and move back when the question is done.


The blog post from the speaker has a link to the slides: http://lunduke.com/?p=1075


So much of Linux is designed and modified by committee, so change is slow and painful. Apple managed to get a seriously hot UI onto FreeBSD in a (relatively) short time. Linux has everyone and their dog bartering to make changes, so the UI goes nowhere.

Stop copying Windows, already. Start menu sucks. Dump X.


Did you listen to the talk? :) X.org is quite an advanced graphics system that mostly does what you want it to do. Throwing it out and trying to make something new (for what gain?) means wasting so many resources that it's not even funny. (And how is X related to some start menu?)


You say advanced as if that's a good thing. I think a lighter system that considers hardware acceleration and multiple monitors from the ground up may be a good investment in the long run. Not to mention audio, and vector graphics and so on.

Multiple X sessions (one for each monitor + one for each VNC session + virtual X sessions for whatever reason) have to be combined with Xinerama, with graphics drivers, and with a KDE/GNOME desktop environment... and all the parts have to fit together perfectly for the system to work. I just don't see that happening anytime soon.

The whole concept of X clients and X servers... it's just too much. X was created back in the '80s, and it still doesn't get the basic use cases right. The requirements for desktop computers have shifted a lot in the last 30 years, so this kind of thing is to be expected.

Starting over every 30 years? I think that's reasonable, especially when it's holding Linux back. We've learned a lot since, and X isn't going to last until 2030... so why not throw it out now?


X.org does consider hardware acceleration, with XRender, for example. All too often, buggy and incomplete graphics drivers and other problems make it even worse than no acceleration at all. I don't really see these problems as X.org-specific; is there any evidence that they are? I think it's more likely caused by the very limited resources available, incomplete or proprietary drivers (ATI and NVIDIA tend to lag months behind new X server releases; would they ever catch up with a completely new system?), and other problems, and throwing out X means throwing out all the effort spent solving these problems.

But that doesn't mean it is not a good idea to explore other possibilities and experiment with other systems in parallel, like Wayland, for example. Of course it is. Just don't hurry with burying X11; it's the best you've got, and likely to remain so for a long time.


You kind of have to bury X, otherwise you're going to end up with two competing projects and then you're even worse off. See: Linux audio.

If X were to disappear from the face of the earth, open source programmers would scramble to get something new into place, and that would in all likelihood be better at dealing with the realities of graphics today.

Just because it's the best we got doesn't mean it's good enough.

(I'm not advocating we actually should bury X... just making the general point that sometimes it's worth taking a step back now to take two steps forward later. I realize it's often not politically feasible.)


Well, I was hoping for many more than just two. I don't really see any other way than many competing projects, one eventually killing all others.

But it shouldn't be that painful. As you've said, it's mostly about toolkits, and it's easy to simply run an X server atop the new system (Mac OS X does that, Linux does that in some ways, Windows can do that, so anybody can). I'd guess the terrible mess of Linux audio can be avoided (after all, you don't have X11 applications locking the whole graphics system for just themselves, as is often the case in audio).


Depends. Most apps don't link against X directly, but against the toolkits. If the toolkits can be modified to support a cleaner network protocol (something primarily vector based rather than bitmap based), most apps would come along for free.

I don't think the parent poster is saying that X is related to GNOME and KDE's start menu clones, other than they both need changing.


How much cleaner? X11 is very extensible; if you wanted some vector graphics methods, you could just write an extension. The whole of font drawing got replaced this way: nobody really uses the old X core fonts anymore, everybody now uses the client-based Xft system. (Well, yes, you could make a new version without this old stuff. But backwards compatibility and so on.) In fact, this is what happens if you draw with OpenGL: you use X11 just to create a window to draw in, set some other stuff, and handle input, but the drawing itself completely sidesteps X11 and the X server.


Are extensions negotiated over the network? So two hosts can decide they support a random high level primitive and just send that instead?


It's always asymmetric client/server communication. The server supports QueryExtension and ListExtensions requests. You can see the extensions supported by your X server with the xdpyinfo command.

(see www.x.org/releases/X11R7.5/doc/x11proto/proto.html specification for further details)
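To make that concrete, here's a sketch of what the extension list looks like. The sample output below is hypothetical and trimmed; on a live desktop you'd just run `xdpyinfo` against your `$DISPLAY`:

```shell
# On a real X session you would run something like:
#   xdpyinfo | sed -n '/number of extensions/,/^$/p'
# Each listed name was registered with the server and can be probed
# individually with a QueryExtension request. Simulated sample reply:
sample="number of extensions: 3
    BIG-REQUESTS
    Composite
    RENDER"

# Drop the count line and the indentation, leaving just the names:
echo "$sample" | tail -n +2 | sed 's/^ *//'
```

The important part for the "negotiated over the network" question is that this is purely a client asking and a server answering; two hosts can only use a high-level primitive if the server side actually implements the extension.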


The problem might be that the parent doesn't know the difference between X.org and XFree86.


Or, maybe neither and maybe I'm referring to X11.


X.org is just another implementation of X11. Re-implementing it isn't what I'm talking about. No, I don't expect it to be replaced, which I consider a problem. http://en.wikipedia.org/wiki/X_Window_System#Limitations_and... Here's the issue: why did Apple not just use X? Important question.

There isn't a relationship between X and the start menu in my statement.

Yes, listened to the talk. Audio frameworks aren't why the universe isn't using Linux.


Apple didn't use X because OSX is based on NeXT and NeXT didn't use X. As to why NeXT didn't go with X, I don't know, but I assume it had something to do with the state of X in 1988, and those arguments probably aren't valid any more.

For me personally as a Linux user the audio framework is the one point that is causing me the most pain at the moment, and the one thing I really wish they would fix. If I could get audio working perfectly, I really cannot think of any other major complaint (other than a couple of pieces of Windows software I kind of wish I had) in my day to day Linux usage.


There was some X11 in OpenStep: http://en.wikipedia.org/wiki/OpenStep

The link I put up earlier has some references to Apple's reasoning. In the end, they decided to replace it rather than just add the missing bits. That is really significant if X brings so much. I think it's just easier to get somewhere useful without the huge network of interested parties involved. I've watched Linux gyrate (and used it) since about '95, each year or so with a prediction that it'll take over. My hope is that Google's OS will be a useful makeover for the mainstream.


If you want to try something like throwing out lots of the current design and going to a new one, it'd probably be best to go around, drum up support, and bill it as something new. Sounds cool, but someone has to get coding.


Here's one suggestion: adopt the drag & drop software installation model of OSX.


There's no reason this couldn't be done now; someone just needs to implement it in PackageKit and it would work fine across many distros and package managers. The backend and everything is all there: just handle a drag-and-drop event instead of a click on an "Install" button.

That said, installation of software on Linux has long been one of its shining points. Package management is awesome, and neither OS X nor Windows offers it.
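For what it's worth, PackageKit already exposes a distro-neutral install call; a file manager's drop handler would just invoke the same D-Bus API that the `pkcon` command-line front-end uses. A minimal sketch of such a handler (the `install_dropped` function and the `gimp` package name are illustrative, not real PackageKit API):

```shell
# Hypothetical drop handler: map the dropped item to a package name,
# then hand off to PackageKit. In a real handler the last step would
# be `pkcon install "$pkg"` (pkcon is PackageKit's actual CLI);
# here we only echo the command so the sketch runs anywhere.
install_dropped() {
  pkg="$1"
  echo "would run: pkcon install $pkg"
}

install_dropped gimp
```

The point is that the resolution, dependency handling, and repository lookup all stay in the existing backend; only the triggering gesture changes.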


Some package management (apt) would be better if it were easier for third-party vendors to get into sources.list. For now you either have to get users to edit that file by hand (for which they might not even have the rights), or you have to use another installation mechanism that bypasses the package manager they are used to, losing the advantages of things like upgrades. Maybe there should be a per-user sources.list that allows normal users to install software too, and distros could be set up to handle a certain file type so that clicking it on the web brings up a default installer that guides you all the way through the installation.

Right now the situation is that if you want anything that's not in the "official" package repositories (probably the majority of software...), you are in a world where installation is harder than on any other OS. And that is as true for open-source applications (do you want the newest Blender, or one that is months behind?) as for closed-source applications (which simply don't get into official repositories anyway).

I also tried using one of the binary installers for a while, but when it stopped working with newer distros (my application itself still worked), I concluded that tar.gz and zip are, for the moment, simply the best I can really offer, and that's still what I'm using now.


On Ubuntu there is a /etc/apt/sources.list.d/ directory and files included there are automatically loaded by apt.

Google Chrome uses this mechanism and it seems to work well. You install the .deb package manually the first time and then the package manager will update it automatically.

I thought that this solution was really great when I saw it.
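Concretely, the mechanism is just a one-line file dropped into that directory on first install. The snippet below is roughly what Google's Chrome package creates (the exact line may have drifted since; treat it as an example of the convention, not a verbatim copy):

```
# /etc/apt/sources.list.d/google-chrome.list
deb http://dl.google.com/linux/chrome/deb/ stable main
```

From then on, `apt-get update` picks the repository up alongside the official ones, and Chrome upgrades through the normal package manager like everything else.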


I don't agree. Installation of distribution-supported software on Linux is already very easy. Drag and drop will just cause a lot of issues with software that actually requires a more complicated installation, as well as with libraries, etc.


That is pretty much the problem: the only software that installs easily (in most cases) is the "distribution supported software", and only in the version(s) currently shipped by the distribution.


I guess one of the questions to answer about ease of installation is what makes Linux software different from the majority of OS X software.

[note] OS X does have a package format for more complicated stuff


Yeah, desktop Linux sucks at installing apps.


If history is any evidence, nothing. :P


Actually, as someone who has used Linux as desktop software since 1996, I would say the exact opposite: it's making rapid progress compared to where it was once. Perhaps it still hasn't quite caught up, but I'm relatively certain that soonish, it will be 'good enough' for an ever-larger segment of the population. It's already 'good enough' for a lot of people.


it will always "soon" be ready for large-scale adoption.

But it never will.


Until it is. In 2006 we had Ubuntu on all computers at the company I worked for, because most of the people either 1) were developers and used it anyway, or 2) didn't really need more than a browser and OpenOffice, and were thus perfectly content with it.


Sell Ubuntu (and other apps like OOo, GIMP, etc.) for a nominal $9.95.

Give two options in the download area of your software: one green button for a $9.95 download and one blue button for a free download, both pointing to the same package.

If you want to contribute to open source, click the green one.

Also sell it at Best Buy and Walmart for $9.95 to reach the non-tech-savvy audience.

Even if they drop it in the trash can when they get home, at least some money goes into the pockets of the open sourcerers, which is better than none.


Desktop Linux offers users too many options for installing and configuring apps.


Does somebody know who is the speaker? Didn't see it on the blip.tv page.


Bryan Lunduke. Sorry, I see it in the video now.


I gave up on Linux. Not fodder; ^ if you agree.

