My God, it's full of yaks (ronjeffries.com)
184 points by ingve on June 24, 2016 | hide | past | favorite | 113 comments


The problem here is MacOS. The amount of working around and hacking I've had to do on a Mac far exceeds anything on Linux and, more surprisingly, Windows. Let that sink in: it's easier to get things to work on Windows than on a Mac.


Not my experience at all.

Aside from some expected path issues (builds too stupid to actually look rather than assuming a disk layout, etc) and some BSD <-> Linux issues (builds that assume all the world runs linux), I've had very little problem getting most things working on OS X.

It does help to learn what Apple has, uh, improved. Which is related to why I have such an awful time getting anything to work on Windows - I hate it and don't want to learn about it.

At least in the article, there are hints that the author hasn't taken the time to learn about the platform they want to run code on. Which is fine - I feel that way about Windows[1]. But it is also a good idea to understand why you're having the problems you're having.

[1] Which is why I don't touch Win at work, and the VM I run at home is a hacky mess that works for what it does, but only after a few aircraft carriers' worth of cursing and slash-and-burn hacking that I know can't be the 'right' Windows-land way to do things. (The abomination is completely firewalled off from the outside world - I try not to be a disease vector.)


Aren't you just expressing the same opinion about Windows as the parent is about Mac? You are unfamiliar with it, and so it takes more effort than you would like. Isn't this a tautology?

Having built applications for all three platforms I find they all require a certain amount of patience but once you understand the motions, it's relatively painless.


> Not my experience at all.

Totally agree. We had developers recently join our team, which primarily uses OS X and Linux. They were excited about using Windows since they had previously been developing in .Net. They now appreciate that development in Windows is not as easy as Microsoft wants you to believe.

The problem that won't go away anytime soon is that Microsoft spent years making it hard for developers to work in anything outside the typical Visual Studio toolchain. They will not overcome that overnight. When a development team has focused on Windows, all is fine, but don't expect things to work without problems otherwise.


It's really not that hard to set up a Windows workstation...


I can vouch that it was, at least as of Windows 7. But I hear the Windows folks now have a proper shell and some scriptable package management!


I gave up on MacOS back in 2012. Linux + package management for the win.

Maybe if Apple maintained their own ports tree, but they don't. There's no UNIX under there. There is a BSD kernel, a bastardized UNIX-ish base on top of an ancient file system (yes HFS+ has finally been replaced, but just this year) with a proprietary display system and yaks fucking turtles all the way down.

MacOS is a terrible developer OS. 10.6 had Exposé and workspaces. 10.7 took this wonderful thing and replaced it with Mission Control, which only allowed one row of workspaces, no columns, and grouped windows together so you couldn't see shit. No way to revert to the old behavior either.

Then Apple destroyed Final Cut. Goodbye video editing.

I switched back to Gentoo + i3 and never looked back.

I know there are now tiling window managers for MacOS. That's cool. I might try them one day. If I'm forced to.


As I wrote elsewhere, why would you set up your desktop machine as a dev environment?

Use vagrant or Docker or similar on the Mac -- this way you get reproducibility, snapshots, etc.

I'd do that even if I used Linux (in fact I used to use Linux and did just that).

Why would I want to set up my desktop Linux box as my deployment/development environment and have to change it and mess with it any time I work on a new project / another deployment target?

>Then Apple destroyed Final Cut. Goodbye video editing.

As a professional video editor, I have to disagree with this too. They cut some features from FCP7 for the first FCPX release, but they brought them back, with more features and better implementations in subsequent releases. And all that with a codebase that's not some 2000's relic, but built for today's challenges.

Besides, I don't see what's much better to use for video editing on Linux.


I agree that isolation is good, but spinning up a VM while developing each project is massively overkill.

A modern package manager, such as Nix or even most language-specific package managers, will allow you to set up some sort of sandbox for the project without having to take the performance hit of full-blown virtualization while developing. Of course, you'll probably want to spin up a VM while testing, just to be sure. And Nix can handle that for you too!

All while staying completely reproducible from your config files, all without having to bother with your virtualization program's snapshot system.
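With Nix, for example, a per-project environment can be declared in a `shell.nix` and entered with `nix-shell`. A minimal sketch (the package choices are just an example, not a recommendation):

```nix
# shell.nix -- running `nix-shell` in this directory drops you into a
# shell with exactly these tools available, reproducible from this file.
with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "project-env";
  buildInputs = [ python nodejs postgresql ];
}
```

Check the file into the project repo and everyone gets the same sandbox.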


>I agree that isolation is good, but spinning up a VM while developing each project is massively overkill.

It is a little, but I've found that running headless (and no reason not to) drops this overhead considerably. And with 4 cores, 16GB memory and SSD, I hardly notice it.

Also, modern containers like Docker can drop the overhead to almost zero -- though I haven't really tried them to see how comfortable they are to work with.

Besides, if you natively run e.g. node, postgres, some work queue, etc, you're still gonna get the overhead on your host from those processes.

Nix sounds nice, but as you say, you still want to spin up a VM to know it all works in the final deployment setup. Not sure about your last suggestion, "Nix can handle that for you too" - I'll have to check Nix out more.


I'm currently working on 3 different projects, and have several more in maintenance mode that I return to now and then to fix bugs.

Docker makes this workflow a lot easier than multiple VMs. I'm running ubuntu with vagrant on macOS, and each project has a docker-compose file that spins up all the services. Stopping everything for one project and getting another one up is just one command, and takes a few seconds at most. With docker's layered fs, there is almost no space overhead.

Even if you don't use docker for deployment, you can just use a container as a pseudo-VM to use this workflow.
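A sketch of what one of those per-project compose files can look like (service names, images and ports here are illustrative):

```yaml
# docker-compose.yml -- `docker-compose up -d` starts everything for this
# project; `docker-compose down` frees it all up for the next one.
version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:9.5
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

With the named volume, the database state survives stopping and restarting the project.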


> Then Apple destroyed Final Cut. Goodbye video editing.

> I switched back to Gentoo + i3 and never looked back.

What do you use for video editing on Linux (that's in the same calibre as Final Cut or Adobe Premiere)?


I was with you until you mentioned video editing.

The video-editing software and support is vastly better under OS X than under Linux.


Haven't had that ever. And I've run from Java and DB2 to Rails, Django, PHP, Python, Postgres, Go and Node. Plus all kinds of unix userland via Brew (and, once upon a time, Fink -- I've also compiled some stuff from scratch, again not difficult).

Nowadays, though, I run them on Vagrant (again on the mac) -- mainly to have snapshots and easy copying/reproduction of any target system. And even if I run Linux, I'd still run Vagrant on top of it for development too.


Motions re: Vagrant and an earlier post on nix absolutely seconded. If you practice good "environmental hygiene", you can run dev setups on "bare metal" OS X just fine. This can be a bit of a hassle to make 100% repeatable, but it can be done.

In my experience, it's far preferable to work with an isolated and repeatable process whenever possible. I really love Vagrant for this, since it scales from simple inline shell scripts to set up a VM, up to integration with your favorite flavor of provisioning software (Chef, Puppet, Ansible, etc.). I've set up environments that ran common provisioning toolchains that scaled from individual dev setups straight to production clusters. Similar motives drive a number of other approaches, esp. container-based systems like Docker.

Once you're fluent in these kinds of tools, you never really want to go back...
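A Vagrantfile scales exactly that way; a minimal sketch (the box name and playbook path are placeholders):

```ruby
# Vagrantfile -- from a one-line inline script up to real provisioning
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # Simple case: a throwaway inline shell provisioner
  config.vm.provision "shell",
    inline: "apt-get update && apt-get install -y git build-essential"

  # Or swap in your favorite provisioning software later:
  # config.vm.provision "ansible" do |ansible|
  #   ansible.playbook = "provision/playbook.yml"
  # end
end
```

`vagrant up` builds the VM from scratch; `vagrant destroy && vagrant up` proves the setup is actually repeatable.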


For anyone else wondering, I'm guessing coldtea is referring to this: http://www.finkproject.org/. At first I was confused and thought they were talking about Frink, another very interesting but entirely unrelated tool: https://futureboy.us/frinkdocs/


Windows is a nightmare when it comes to nodejs and pip. I ended up abandoning Windows when, playing on my Mac, I basically got things working instantly, after struggling on Windows to find the right incantations / pile of dev tools to do what I wanted.

Ubuntu saves me so much time. (Also, for some reason, Windows hates my graphics card and produces terrible sound in my BT headphones. Somehow a multi-billion-dollar company can't figure out BT while an open-source project can.)


Next month Windows 10 will ship Ubuntu as part of what they're calling "Windows Subsystem for Linux" -- bash, apt-get, etc. https://msdn.microsoft.com/en-us/commandline/wsl/faq

It's actually kind of revolutionary.


I still probably will stick with Ubuntu as my main system.

I have Windows 10 installed on a separate drive for when I want to play games but even then, I have to cater to Windows. Make sure to save constantly because sometimes Windows decides it is too much and causes my graphics card to flip. And I also have to plug my headphones in or else I get static-y mono sound.

Admittedly I could probably go buy "For Windows" headphones and figure out what is going wrong with the graphics card... but I shouldn't have to. We are talking about the latest operating system whose ancestors have been dominant across the world for decades, developed by one of the largest companies in the world. How are they getting it wrong?


> We are talking about the latest operating system whose ancestors have been dominant across the world for decades, developed by one of the largest companies in the world. How are they getting it wrong?

You're also talking about them being able to perfectly support every modern piece of USB/PCI-e/etc connected hardware, which is no small feat even for the market leader. For comparison, I've never had these problems you describe since installing Windows 10. How can you be sure your success with Ubuntu isn't just good luck?


It's available today if you are willing to upgrade to a preview build of Windows. I use it often and I must say, it works pretty much exactly the way you'd expect a native Ubuntu system to work. Some notable issues are a lack of pty support and no X server (although it's possible to use a third-party one.)


There's nothing revolutionary about luring people onto your proprietary ecosystem while at the same time exploiting open source. A clever marketing strategy? Definitely.


> The problem here is MacOS

I've been doing sysadmin, programming, devops, etc professionally since 1995 and I've had to shave SunOS yaks, HP/UX yaks, Solaris yaks, Windows yaks, MacOS yaks, MacOS X yaks, Python yaks, Ruby yaks, Java yaks, Make yaks, C++ yaks, git yaks, cvs yaks, rcs yaks, ext2,3,4 yaks, Netapp yaks, Fibrechannel yaks, Cisco yaks, F5 yaks, nfs yaks, smb yaks, frame relay yaks (!), tcp yaks, rpm yaks, deb yaks, homebrew yaks, npm yaks, pip yaks, yum yaks, apache yaks, http yaks, xml yaks, tia568a no wait b yaks, serial crossover cable yaks, who the hell thought they could use 75 ohm as thicknet cable yaks, ... you get the idea.

We deal with a complex ecosystem with even more complex interactions that is often poorly documented at best. We're lucky when we can treat it as a black box and it all works. But often something breaks and suddenly we're hoisted by our own petard and there's yak hair friggen everywhere. I consider it a modern miracle that I don't have to pin ethernet duplex anymore! Glad we finally figured that one out.

Contrary to your assertion, I switched to OS X back in the 10.2 days because I got tired of shaving the hairball that was the Linux desktop at the time. Oh, you want to print something? Here's an lpd yak for you... have fun! Oh, you have a new monitor? Here's some scanline settings to put in your xconfig that may work. Mac OS X is amazingly smooth as a desktop OS. [1]

But since you mention OS X, I'm reminded of my favorite Apple technical note ever, which began: ”As if it were a swarm of bees, you should stay away from the SyncServices folder in Mac OS X.” And that should apply to our whole industry, really... a swarm of bees.

Anywho, shaving yaks, story of my life, at least it pays well.

Pro-tip: when shaving yaks, take notes. I guarantee you'll shave this yak again, and it's handy to have your own reference to refer back to... even in these days of Google, Stack Overflow, and quickly moving technologies. At worst, maybe it will be inspiration for a fun blog post some day. :-)

[1] So here's a fun story. Back in 2003 or so, I wrote a perl script which would forward emails to my phone via SMS. It did so by POSTing to a web form on a Verizon server. Worked fine when I was developing it on my OS X desktop, but failed to work on my Linux server. Yak shaving time. Eventually I'd broken out tcpdump after diagnosing that the Linux server, sitting on the network right next to the Mac, couldn't even connect to the Verizon server. Turned out the Linux server had ECN enabled, which being new at the time, Verizon's firewall was rejecting. Only had to peel about 5 layers of the onion on that one.
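For anyone hitting something similar today, the check is a one-liner on Linux (these are real Linux sysctl paths; the write needs root):

```shell
# Is ECN negotiation on for outgoing TCP connections?
# 0 = disabled, 1 = enabled, 2 = enabled only when a peer requests it.
cat /proc/sys/net/ipv4/tcp_ecn

# To turn it off (root required):
#   sysctl -w net.ipv4.tcp_ecn=0
```

Toggling it and retrying the connection would have collapsed that five-layer onion to one.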


> Oh you have a new monitor, here's some scanline settings to put in your xconfig that may work.

I suggest you try linux again, find new yaks and update your arguments. You haven't had to mess with scanlines in over a decade. I've never done it. Heck, you don't even need to create an xorg.conf any more. And some time Real Soon™ even X11 will be gone.


Oh my yes. If you are a geek with zero proprietary software dependencies (as I am), then the Free *nixen offer the qualitatively best experience these days. I was a very early adopter of OS X when it was released (starting with the betas), and happily spent nearly a decade there. At some point the friction of Linux & BSDs decreased relative to the friction of OS X.

I remember editing XF86Config, .fvwmrc, and PPP init scripts back in the long ago, but I'm damned if I could do so from memory today.


Maybe I'm showing my age (or rather, lack of), but what on earth is a 'yak' and why would you shave one?



The problem with a mac as a developer machine is native libraries. The stars have to align for your F/OSS to work properly. Even brew punches itself in the throat regularly. On linux it just works. That said I use my mac more often than Linux, mostly because I spent so much money for it, go figure.


Why would you set up your desktop machine as a dev environment?

Use vagrant or Docker on the Mac -- you get reproducibility, snapshots, an environment fully compliant with any server you want to deploy to, etc, and you still get to code for it (editing source etc) from the Mac.


I've had some things that were easier on Linux than a Mac, but I've never had (or even heard of someone having) anywhere near the amount of trouble Windows gives you if you want to do any coding outside of Microsoft's tools. What did you set up that was easier to get running on Windows?


it's interesting to me now when I see people saying these types of things about X operating system.

growing up with a dad from DEC I actually learned how to use systems I've never used since :D but my dad ran Windows and hated macs (he worked at Xerox before DEC). so I grew up with dos and Windows for many years, then I switched to Linux when I was a teen.

I hated on macs and Windows (mostly macs) back then; it was a shit show to my eyes and Linux was the ultimate (even though, looking back from my current POV, Linux was the actual shit show - things broke and had to be fixed relentlessly)

I worked in more corporate type jobs for a while and used my Linux and Windows skills when needed, and then started in the OSS startup world and saw that everyone had a Mac, writing free software on a Mac, my head nearly exploded!

but I was given a shiny new MacBook Pro and so I had to give it a go. I was determined to hate it but once I found tools that let me use the command line I was fine with it. it was pretty slick, minus all the crap I had to do to get it working.

in the end I found they all have individual uses that the others aren't as good at and I use all 3 operating systems daily. the Windows boxes are good for some applications that just suck in wine, the Mac is good for applications that are written exclusively for macs but are very useful and I love that it turns on and is usable right when I open the lid (probably the only reason I bring it out with me when I leave my command center)

technology has gotten to an amazing point, especially if you're proficient enough to use and understand it (which admittedly isn't easy to do, but worth it if you can)

some of this stuff is just amazing, I run X11 on everything and I have a few very beefy servers that I use to just run my software on and display it on my main workstation.

using some newer tools I am able to use the video cards on my headless machines, I can run wine server on a dedicated wine box and run rdm or xencenter and when I decide to go mobile I just switch those sessions over to my Mac and use them there.

I have an IDE server that I run IntelliJ and all my other coding projects off of through X11; it's got super redundancy and is more securely protected than my other boxes.

using automation tools like Jenkins, Tasker, Pythonista, AutoInput, AutoRemote, and PagerDuty I can get notified of any incident and automatically have the servers, connected through dbus and yakuake, send output to my phone with stalled processes etc..

it's all good IMO


As much as I'd like to hate on MacOS (take a perfectly good Unix implementation and break it ;-) ), in this case I've shaved almost the exact same yaks on Linux. At least a while ago (and it seems now) setting up google-cloud-engine was black magic. If you happened to do it exactly right it will take you about 15 seconds. One misstep and you fall into the volcano. Now that it is working for me I try not to touch it...


Exactly my experience. OS X sucks donkey balls for developers. Windows is SO MUCH better... but then nothing beats Linux... a proper one that is... which is called Ubuntu LTS.


This is why I love nix so much.

Getting a dev environment set up with nix is the usual amount of hassle. The difference is that nothing ever works by accident. Therefore, once it does work, I know I've captured all the dependencies and configuration, and I can reproduce the environment wherever I want.


I'm trying to read the docs and understand how nix works. It seems really fantastic, but at the same time so radically different that it seems like it would collide with a lot of built-in assumptions of Linux development.

For example, does nix play nice with CMake or config files? ie. I can just point them at the packages in /nix/ and it'll work?

Or, for example, I often find myself wanting to have a few different versions of OpenCV around (with different build flags) - but all version 3.0. Nix seems to make a new hash/folder when the dependencies change. Can you make it create a new build/folder when the dependencies stay the same? (Or specify NOT to make a new folder and to overwrite the previously compiled version - e.g. when you fix a bug.)

I have a ton of overlapping dependencies between contracts and projects and this would be a cool tool to keep it all organized (right now in my workflow each project/contract would have its own copy of OpenCV.. which is a bit of a mess)

(I'm totally cheering for this - hope it becomes mainstream, just wondering how "prime-time" is it)


Yeah, there's a learning curve. No doubt about that.

It does sound like nix is a good fit for your use case. You could certainly create separate environments each with a slightly different build of OpenCV. They'd share dependencies where possible, but not step on each other where they differ.

Nix provides a tool called the standard environment (stdenv), which makes compiling classical unix software pretty easy. It assumes the typical download, configure, compile, install sequence and provides the right environment to make that work. It's really flexible, with lots of hooks that let you override the default behaviour and customize the build. The Nix manual has a pretty good overview of it. https://nixos.org/nixpkgs/manual/#chap-stdenv
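The multiple-builds-of-one-library case above can be sketched with stdenv directly. Everything here is illustrative ("mylib" and its flags are hypothetical, not a real nixpkgs package):

```nix
# Two builds of the same source differing only in flags; Nix hashes the
# inputs, so each build lands in its own store path and they coexist.
with import <nixpkgs> {};

let
  mkMyLib = flags: stdenv.mkDerivation {
    name = "mylib-3.0";
    src = ./mylib-3.0.tar.gz;        # hypothetical source tarball
    configureFlags = flags;
  };
in {
  mylibPlain = mkMyLib [ ];
  mylibDebug = mkMyLib [ "--enable-debug" ];
}
```

Each project's environment then just references whichever variant it needs.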


oh, very interesting! I'm definitely going to look into this further. Thanks for the pointers


Nix supports CMake fine. In my experience adding new packages is not too hard as long as they're not GnuCash 2.6 ;)

Any change will lead to a new output folder, there is no option of reusing them.

Why not just give it a try in VirtualBox to see for yourself how far along it is?


I guess I understand why it has to make a new folder - it just seems that would mean you have to redirect (and relink) anything that uses a recompiled library. Okay, I'll definitely start to dive into this. I was asking around to see if it fits my workflow. Thanks!


Yes, that's exactly true. There are some workarounds, but they're a bit hacky and don't normally get used...

Luckily there's a NixOS build cluster, so you don't normally need to recompile glibc yourself. :)


> nothing ever works by accident

This is such an important property.


Then you update and your Nvidia drivers stop working and can't get into X. Lose a day figuring out what happened. Try to take a screenshot and that no longer works either etc. etc. I love Linux as a dev environment but it has its share of yak shaving and sometimes it breaks as well.


Yeah. I actually develop on a Mac and deploy on NixOS. Nix works well on both, and my builds are parameterized by system. Cross-platform dev has its issues too, but they seem easier than dealing with desktop Linux.


While this isn't a solution for many, I usually do development on a Linux VM for these exact reasons. Something not working? Boot a new VM from scratch, install necessary packages, and run it.

Usually faster than a couple days of internet searching.


The developer's response to infrastructure as code. :) I use Vagrant for just about any development task these days, just because I can take all those arcane bits of knowledge, shove them into an ansible playbook (or whatever provisioning tool you like), and hopefully never have to fuck around with them again.

Shave the yak once.

Write a script that shaves the yak for you next time.


Even with Vagrant, don't you want to run your own native IDE? And if you're doing that, the IDE probably wants to look at your dependencies to figure out what your code is doing. Which means you need to set up the whole thing locally anyways. At which point it appears you've defeated the purpose of setting everything up inside Vagrant.


You can generally solve this by just sharing folders between the host & the vagrant box for the code and the dependencies - point your IDE at those.


That involves compromises on battery life. Why not just run Linux instead?


This involves compromises on battery life.


Well played :)


I do all of my development on a 2015 Macbook Air and run a Linux virtual machine almost every day. Still plenty of battery to get me through more than a full day of work and casual browsing.

Way better than Linux on the desktop. Way easier than configuring a second operating system to run the stack. I'm not going to production on MacOS, why would I develop on it?


Given the tendency to use containers these days for various reasons, there's presumably an argument that you could just make that the expected practice. It's still more principled than just downloading and running a single binary unikernel in the container, though exactly what you would gain is a mystery to me (Gentoo points?)


I've gotten to the point where I scoff at projects that expect me to install hours of crap and dependencies at specific versions and can't bother to encode it as a Dockerfile. Like, come on, you had to figure out how to get this installed, eventually you're going to have to do it again, why not document it with code (Dockerfile) and never do it again?
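Even a minimal Dockerfile captures that knowledge once and for all. A sketch (base image and package names are placeholders for whatever the project actually needs):

```dockerfile
FROM ubuntu:16.04

# The exact dependency set you fought to discover, written down once.
# (Names below are illustrative; pin versions with pkg=1.2.3 where it matters.)
RUN apt-get update && apt-get install -y \
        build-essential \
        python-dev \
    && rm -rf /var/lib/apt/lists/*

COPY . /app
WORKDIR /app
RUN make
```

`docker build .` then reproduces the hours of setup in one command, on anyone's machine.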

Ironically, I just watched someone whine about open source projects using Docker for dev on the 19th and then praise Docker for basically the same thing on the 22nd. Docker isn't magic, no one says it is, and people yelling against that strawman are just making themselves look ignorant.


Ron is right: it's a mess. But I have a tried-and-true recipe for App Engine, both Python and Java. First, download the Linux Cloud SDK (works fine on a Mac). Then download the Python Google App Engine SDK. Put the App Engine SDK inside the platform directory of the Cloud SDK. Add this to your Python path.
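The layout step of that recipe can be sketched in shell (the directory names are illustrative, not canonical install paths):

```shell
# Assumes both SDKs are already downloaded and unpacked somewhere.
CLOUD_SDK="$HOME/google-cloud-sdk"       # unpacked Cloud SDK
GAE_SDK="$HOME/google_appengine"         # unpacked App Engine SDK

mkdir -p "$CLOUD_SDK/platform"           # normally created by the Cloud SDK itself
cp -r "$GAE_SDK" "$CLOUD_SDK/platform/" 2>/dev/null || true

# Make the bundled App Engine libraries importable:
export PYTHONPATH="$CLOUD_SDK/platform/google_appengine:$PYTHONPATH"
```

Put the export in your shell profile so test runners can find the SDK.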

I use this framework for Python: https://github.com/agostodev/substrate

I use this Clojure template for Java: https://github.com/nickbauman/cljgae-template


I think the real problem here is the rise of huge libraries/frameworks/SDKs along with all the dependencies they bring, and the dependencies those dependencies need, etc. The problem isn't in code you wrote and should easily be able to debug, it's in the orders of magnitude more code you didn't write but depend on. There's too much complexity, too many moving parts, and it creates a fragile system in total.

But I think it doesn't really need to be that complex and involve so much yak-shaving. Around 10 years ago, when I wanted a personal web app of the similar CRUD type, I opened a text editor, browser, and the PHP documentation, and it was done in a few hours. Uploading a single file to the server was all that I needed to get it working and test it.

I thought I’d write this article about my experience, to try to draw some learning from it. I created the folder and the file, typed in the YAML and the first paragraph, and started Jekyll to build the html as usual.

Would it have been easier and faster to just write the HTML directly and upload it to the server instead of requiring an elaborate (despite hidden behind one command) "build" process that again depends on a large infrastructure of software?

Me, I just want to write programs, mostly small, that do things that may be worth writing about. And I want to write articles like this one, convert them to HTML, add them to a few indexes, and put them on my site

My advice is to avoid all the complexity and big systems that claim to do just about everything while requiring plenty of configuration and dependencies, if all you need is something that could be much simpler. Otherwise you are "making more yaks to shave".


One lesson i had beaten into me at an old job is that software should come with a self-contained reproducible build with an absolute minimum of external dependencies (which might be none at all). If GAE provided a zip file with everything in it, all set up and ready to go, you could download that and get working.

But making a self-contained build is hard, so nobody ever does it.

Java has got usefully close in recent years. You can download and build a Gradle project with only a JDK and a shell installed, plus HTTP out to the internet; the project can (should!) contain a standard wrapper script which can download the right version of the build tool, and the build tool can then download the right versions of the dependencies and do the build. The downloads are cached locally so you don't download the internet on every build. The wrapper script only does straightforward stuff, so it doesn't require a particular shell or version, and the JVM changes slowly and safely enough that you can get away with being pretty sloppy about versions (ie any Java 8, 2014 - 2017+, will do).
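The version pinning the wrapper does lives in a small properties file checked into the repo; a sketch (the version number is illustrative for the era):

```properties
# gradle/wrapper/gradle-wrapper.properties -- pins the build tool itself,
# so every clone builds with the same Gradle regardless of what's installed.
distributionUrl=https\://services.gradle.org/distributions/gradle-2.14-bin.zip
```

From a fresh clone, `./gradlew build` reads this file, fetches that exact Gradle, and proceeds.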

There's no dependency on having some SDK installed, or some version of the runtime, or some build tool, or some package manager, or any other bits and bobs. Just a shell and a JDK. You don't have the problems listed in the article, and you don't have to invest time in avoiding them.

The next step would be for the wrapper to download the JDK itself, but I think that's unlikely to happen, due to bandwidth and licensing.

Once you're in the build tool, there are strong conventions about what's where. The 'build' task should do everything; 'check' should run every kind of validation possible; 'assemble' should build every artifact. Ideally, none of them should require any setup, although that requires some discipline on the developer's part.

I don't know of any good way to set up things like MySQL or RabbitMQ if you need those for integration tests, which is definitely a problem. Perhaps we need a Docker plugin, so in my build I can say "to run the integration tests, download and start the following images, and inject the container IP addresses into the tests like so". The usual way to get around this is to use pure Java in-memory options for integration tests (e.g. H2 for a database, maybe something like HornetQ for a message queue), and you can get those like any other dependency, with no manual setup needed.

Sorry for the ramble. tl;dr yaks are bad, but there is hope for canned pre-shaved yaks.


Haskell's stack tool is aiming for something like this. All you need is the stack binary. stack will download GHC for you at a given version, and fetch your dependencies for you. The dependencies are locked to a fixed snapshot of versions. There are curated snapshots, with an LTS model that guarantees all packages in the snapshot work together, so you're insulated from package authors randomly breaking API compatibility on you, and you can upgrade to newer LTS snapshots when you're good and ready.

http://docs.haskellstack.org/en/stable/README/
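The snapshot pinning lives in the project's stack.yaml; a minimal sketch (the resolver version is illustrative):

```yaml
# stack.yaml -- pins the GHC version and the whole package set;
# `stack build` downloads the right compiler and deps from this alone.
resolver: lts-6.5
packages:
  - '.'
```

Upgrading the toolchain is then a one-line diff to the resolver field.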


I'm starting to use stack for the one thing I do in Haskell. Is it really true that all you need is the stack binary? No shared libraries or anything?

When I set it up starting from Ubuntu packages, the process was still convoluted and involved bootstrapping from an older version of ghc.


It seems true on OS X and Linux. I'm not sure if it's 100% statically linked, or if it depends on a particular version of glibc on Linux. Edit: I would check, but I'm not close to a linux box.

I haven't tried it on Windows.

The download instructions say it depends on libffi, libgmp and zlib, which I didn't realise, plus some other tools (gcc, etc.) which I assume are for building packages that wrap C libraries.

http://docs.haskellstack.org/en/stable/install_and_upgrade/#...


Sounds good!

What about getting the stack tool, though? Is it in the package managers at a useful version? It doesn't seem to be in MacPorts. Does it change quickly enough that you need to ensure you have the right version? Ah, I see that the installation process is:

curl -sSL https://get.haskellstack.org/ | sh

Perhaps each project should include a shell script which does this!


When did "curl | sh" become an acceptable install process, again? Let's Encrypt uses it, too, but it makes my elderly sysadmin heart spasm with terror.


It gives me the willies too. But it's not really any different to running the build script in a project you just downloaded.

curl | sudo sh is much more of a problem.


This is why I have always been scared of GAE. Seems like the only thing that works precisely like GAE is GAE :)

True unit tests, though, ought to be runnable outside GAE, since your code should not be coupled to GAE if it is unit testable. You still need to test that your code correctly integrates with Google App Engine.

I would take a look at tox and py.test for testing in python, these are what I use in my normal linux environment.


Here's a link to the python stubs:

https://cloud.google.com/appengine/docs/python/tools/localun...

Here's the Java stubs:

https://cloud.google.com/appengine/docs/java/tools/localunit...

I wrap the hell out of the java stuff on App Engine because Java is so pedantic. It ends up looking like this:

https://github.com/nickbauman/cljgae-template/blob/master/re...


GAE has some rough edges, to be sure. But once you know about them, the really hard stuff is dead easy to test. The official test stubs are awesome to work with. I live or die by them: they've never let me down. If it works with the stubs, it has always worked with the real thing.


That is good to hear! I will check out GAE next time.


I have things to do that don't include wasting days of my life because Bozo the Developer updated something and expected downstream to just cope with the issues. Certain communities live on that behavior; it's a waste of my time to keep churning.

I would like to point to the Rust developers as ones fostering a more deliberate culture with tools and practices that manage change without exploding downstream.


Java is quite nice this way as well if you can get away with it.


I don't want to derail a splendid rant on yaks, but for GAE users running into this little problem of how to test and run:

We use OS X for development, for the most part, but Linux works well too.

We use nose2 with the following plugin to run unit tests: https://github.com/udacity/nose2-gae . It assumes your GAE SDK is installed at /usr/local/google_appengine.

For API tests, we generally create a webapp2.WSGIApplication object during setup, configured with the routes we want to test, and we send webapp2.Request objects to that application using request.get_response(wsgi_app).

Hope that helps!
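The in-process pattern the parent describes can be sketched with plain WSGI from the stdlib (standing in for `webapp2.Request` / `get_response`, which assume the GAE SDK is on the path; the app and helper here are illustrative):

```python
# Drive a WSGI app directly in a test -- no server, no SDK -- and capture
# the status line and body, the same shape as webapp2's get_response().
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

def get_response(wsgi_app, path='/'):
    """Send one request to a WSGI app and return (status, body)."""
    environ = {'PATH_INFO': path}
    setup_testing_defaults(environ)   # fill in the rest of a valid environ
    captured = {}
    def start_response(status, headers, exc_info=None):
        captured['status'] = status
    body = b''.join(wsgi_app(environ, start_response))
    return captured['status'], body

status, body = get_response(app)
print(status, body)
```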


One of my jobs as the person usually responsible for setting up and writing the core bits of new projects at my company is to make a build that just works. Almost all of our core (e.g. serious) projects consist of the following steps:

* git clone

* setup.{bat,sh}

* Open IDE of choice.

* Click build button. (or run make)

The only dependencies are git, a default PC OS install (Linux, Windows, Mac), and your IDE of choice. Any project or system that can't provide that easily is far too complicated; even our non-technical CEO and sales staff can build and run our code without any help.
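A hypothetical sketch of such a setup.sh: verify the few hard prerequisites, pull in any vendored dependencies, and stop there, since everything past that point is "click build in your IDE" (or run make):

```shell
#!/bin/sh
# One-shot project setup: fail loudly on missing prerequisites, then
# fetch dependencies. Deliberately does nothing clever.
for tool in git; do
    if ! command -v "$tool" >/dev/null 2>&1; then
        echo "missing prerequisite: $tool" >&2
        exit 1
    fi
done
# Fetch vendored dependencies, if the project has any (harmless otherwise).
git submodule update --init --recursive 2>/dev/null || true
SETUP_DONE=1
echo "setup complete: open your IDE and build"
```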


I tried to reproduce to file some bugs, but the docs Ron linked to and the datastore_test.py[1] they in turn link to don't do "import webtest", so I guess the docs have been updated recently.

A quick search suggests webtest is part of Pylons[2]. But Pylons isn't listed as a built-in third-party library[3]. So I'm not sure if there's anything to fix.

1. https://github.com/GoogleCloudPlatform/python-docs-samples/b... 2. http://docs.pylonsproject.org/projects/webtest/en/latest/ 3. https://cloud.google.com/appengine/docs/python/tools/built-i...


The Pylons Project is a larger organisation that maintains a lot of open source software, including webtest. Pylons is ALSO a web framework maintained there, but it is unrelated to webtest.

In the author's case, webtest should have been installed separately, rather than used from within the cherrypy source code, which seems to vendor webtest.


The Pylons maintainers maintain WebTest, but it's an independent library.


Yep. There's a reason after 20+ years that I bought this T-Shirt:

https://teespring.com/maksesoftwarebetter2


Great find. Okay, so if 5 other people who feel this aptly depicts their daily lives could click reserve on the hoodie version, I would be stoked! :)


I signed up for notifications, it's available now, for 3 more days.


Before you start trying to draw conclusions about the general state of development, consider that Mr. Jeffries...

"Some further very simple tweak, which I no longer remember, and the test ran … and did nothing. That’s not too surprising, I guess, because there’s no

    if __name__ == '__main__':
"which, I believe, means that no one calls the testrunner.

"You’re supposed to know that!"

...is also the fellow who had some trouble with Sudoku[1].

I say this not as an attack on Mr. Jeffries, but as a way of pointing out that shaving yaks is easier if you know to start with shears and not a hatchet.

[1] http://ronjeffries.com/xprog/articles/oksudoku/
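For readers who hit the same wall: the missing boilerplate is what actually invokes a runner. Without the `__main__` guard, executing the file defines the TestCase classes and then silently exits. A minimal illustrative example:

```python
# Without the __main__ block at the bottom, running this file does nothing
# visible: the tests are defined but no runner ever calls them.
import unittest

class SmokeTest(unittest.TestCase):
    def test_truth(self):
        self.assertTrue(True)

if __name__ == '__main__':
    # exit=False keeps unittest.main() from calling sys.exit(), which is
    # handy when the file is driven from another script.
    unittest.main(exit=False)
```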


If I read that right... that Sudoku sllver only got as far as becoming a Sudoku board API, aka a object wrapping an array?

And the article spends more bytes talking about whether some code is YAGNI, than the amount of bytes in the code itself?


As far as I know, yes, that's where he gave up.


I don't want to be harsh on the guy, but some of those dark corridors were in fact marked as not where you wanted to go.

>To eliminate this warning, please install libyaml and reinstall your ruby.

This line should make you think of using brew or apt-get or whatever to grab libyaml first. A few clues: the lib prefix is much more common for native libraries than interpreted ones, and the request to re-install Ruby means this is a dependency required at compile time for Ruby, which rules out anything you'd install as a gem.
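Concretely, these are the commands I'd reach for (printed rather than run here, since the right package manager depends on the machine; `rvm reinstall` recompiles the interpreter so it links against the freshly installed library):

```shell
# Pick the package manager and show the install-then-rebuild sequence.
if command -v brew >/dev/null 2>&1; then
    PKG_CMD="brew install libyaml"              # OS X / Homebrew
else
    PKG_CMD="sudo apt-get install libyaml-dev"  # Debian/Ubuntu
fi
echo "$PKG_CMD"
echo "rvm reinstall ruby"
```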

But maybe I have just shaved too many yaks and know what's what already.


I wouldn't expect anyone to know those details, but I would expect someone to follow the directions if they want it to work.


I can echo GAE being hard to work with at times.

I did an app for a school project and was using Flask. At the time App Engine only had webapp and web2py available as frameworks (maybe it's different now) so a bit of shoehorning was necessary to get Flask working, and Flask was relatively young at the time so there weren't a lot of examples out there of the two technologies being used together. I recall having a lot of trouble working with GAE's file upload API.


As the current maintainer of WebOb it pains me to see Google App Engine shipping 1.1.1 by default, and 1.2.3 as one that the user can select.


I'm glad Docker solves this for me, at least at the company I just joined. One of their microservices has tons of dependencies, including npm modules, databases, testing frameworks, and so on. Most people would scoff at using Docker as a glorified VM, but it honestly does the job, with much less overhead vs VMs.


Yeah, it's a mess. The modern developer has to know and understand the language semantics and ecosystems of Python, Ruby, Java, C, Lua, and JavaScript, and sometimes these aren't even the main languages of her work. Add to that things like Clojure, Erlang, C++, C#, F#, PureScript, TypeScript, Elm, Matlab... you get the idea.


You Python developers have it easy :) Developing in Java on AppEngine adds an additional herd of yaks to the meadow.


The Go variation seems pretty good, though. (Or maybe I've just gotten used to it?)


I just recently installed a piece of software written in Go, from source, on Ubuntu 14.04. It wasn't so nice for a non-Go-programmer.

The instructions wanted me to download a binary tarball of Go and untar it directly into /usr/local. As if I would never need to uninstall it. Apparently even though the version of Go has changed several times since Ubuntu packaged it, they think this will be the last version ever, or something?

I ended up hunting for an Ubuntu PPA that had it instead. Of course all the accepted answers on Stack Overflow point to PPAs that don't exist anymore. A comment sitting at some tragically low score saying "hey, try this one instead?" pointed me to something that worked.

I think you dealt with it the first time and you've gotten used to it. (And maybe there's some nice way to get a newer version of Go when you already have Go.)


OOI, is godeb[0] the something that worked? It "transforms upstream tarballs for the Go language in deb packages and installs them". It's always worked well for me.

[0] https://github.com/niemeyer/godeb


I'm not sure I understand the problem. Renaming the old directory to go.old and installing a new one seems pretty easy to me, but apparently not?

BTW, you can put the Go SDK anywhere you like, as long as you set GOROOT. See "custom location" on https://golang.org/doc/install
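Per those "custom location" docs, a non-root install is a handful of lines: unpack the SDK somewhere under $HOME instead of /usr/local and point GOROOT at it. A sketch (the version is illustrative and the tar step is commented out, so this is a dry run):

```shell
# Install the Go SDK in $HOME rather than /usr/local; nothing system-wide.
GO_TARBALL="go1.6.2.linux-amd64.tar.gz"        # illustrative version
mkdir -p "$HOME/sdk"
# tar -C "$HOME/sdk" -xzf "$GO_TARBALL"        # after downloading the tarball
export GOROOT="$HOME/sdk/go"
export PATH="$GOROOT/bin:$PATH"
echo "GOROOT=$GOROOT"
```

Uninstalling is then `rm -rf "$HOME/sdk/go"` plus removing the two export lines.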


For reference, I'm referring to the directions you get at the top of the search results when you type "installing go on ubuntu" into Google.

The directions link to Stack Overflow, though I don't know how the search result picked out these steps, as Stack Overflow has a different accepted answer that doesn't work. Intervention by Google, possibly?

I didn't realize it would only make one directory. That is reassuring. I would still prefer packages as a way to install, upgrade, and remove Go, instead of remembering months from now that Go is installed differently from everything else and there's this directory I need to mess with.

Now, that all would have been more intuitive as something to do in my home directory. If it only needs one directory and it can go anywhere, why would these Google-blessed directions tell me to mess with /usr/local? I believe you, I just assumed that people would be averse to messing with system files without a package if there were any other option.

(I think of Python, where the one thing you're supposed to do with your system Python is use virtualenv to install a non-system Python. Which is another example of something that's only clear to veterans of the language.)

Now... this is a problem with a whole lot of programming languages. Programming languages change. Their libraries change way too fast for an OS-approved package manager to keep up, so you need a package manager for your language. And that changes too, and nobody designs their packaging system to smoothly update to a better packaging system designed by someone else.

Maybe one day every programming language and OS will unite under one purely functional packaging system and never replace it... haha, no.


Seems to me the problem is layers of "automagical" systems for locating dependencies, which in turn end up clobbering each other in various ways.


This is nothing. Ramp-up time at any software shop with more than 2 engineers is months. He had a really easy time from where I stand.


You need to get into better software shops!


They don't exist. Software is an awful business.



MacOS does not seem to be very developer friendly.


I believe part of the problem for Ron there is a.) the speed of the community & lack of communication, and b.) lack of consistent tool isolation. You notice he's using RVM (while most of the Ruby developers I know have moved to rbenv due to better support) and that it was out of date. Also, he had the foresight to isolate his Ruby tooling by using rvm (good on 'ya, Ron!) against bizarre system dependencies, but did not choose a similar Python environment manager strategy such as pyenv or pythonbrew (and so suffered the same damn problems). I sincerely empathize with him, but some of this can be argued as the cost of doing development n! generations past the point where we recognized the impact of 'internet time'.


I'm curious what the failure of MacOS was in this instance. This comment at face value seems like you're trolling.


The intermixing of macos python and brew installed python has been painful for me in the past.

Specifically, I spent hours and hours trying to fix

   import matplotlib.pyplot
producing

   ImportError: cannot import name _thread
but only in ipython, not the interpreter. (HAHAHAHA fuck you developers).

A ton of googling and swearing produced the fact that a package named six that does god knows what was being read out of

   /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six.pyc
instead of

   /Library/Python/2.7/site-packages
rm -rf'ing the former path fixed things, but fuck only knows what else I may have broken. Just because I want to plot things from ipython.

ps -- 6 months ago the above used to work. Why did it stop? I have no bloody idea. But this consumed multiple hours of my life.
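A diagnostic that might have shortened that hunt: ask Python which file a module actually resolves to, which exposes sys.path shadowing directly. (Shown with a stdlib module so the snippet runs anywhere; for the problem above you'd pass `"six"`.)

```python
# Print the path a module is loaded from, to see which copy wins on sys.path.
import importlib

def module_origin(name):
    """Return the file a module resolves to, or '<built-in>' if it has none."""
    mod = importlib.import_module(name)
    return getattr(mod, "__file__", "<built-in>")

print(module_origin("json"))
```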


These issues also crop up with installing python libraries via pip in a system managed by apt/yum/whatever, from what I remember. Dealing with Python versions / libraries that are not part of the OS's package manager can be painful. It's why people use things like virtualenv / pyenv / Docker even in controlled server environments.


Seems like this would have been straightforward with a virtualenv?


Yup. Really, no OS is developer-friendly if you're going to mess with the system Python interpreter, because the system Python interpreter exists to serve system software written in Python. You can easily get yourself into very similar problems with Debian or Fedora or whatever else.

Install a virtualenv, and on OS X, maybe install Python from python.org, and then everything works amazingly.
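The virtualenv route is short; sketched here with the stdlib equivalent, `python3 -m venv` (the pip installs are commented out to keep the sketch offline, but that's where numpy/pandas/etc. would go):

```shell
# Create an isolated environment so nothing touches the system Python.
python3 -m venv /tmp/demo-venv
. /tmp/demo-venv/bin/activate
# pip install -U pip            # new enough pip to prefer binary wheels
# pip install numpy pandas
VENV_PYTHON=$(command -v python)
echo "python now resolves to: $VENV_PYTHON"
```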


After I learned more that was probably the right thing to do, but I'm not a python dev. I just want to use sklearn, pandas, and associated tools. I'm not sure what the smooth path to doing that is. At various times people recommend macports, brew, anaconda, and so on... Note that compiling numpy from source sucks, and I'd much rather someone else do so.


There are now wheels (binary distributions of Python modules) of numpy, scipy, and scikit-learn for both Linux and Mac OS X, so it should just be a matter of `virtualenv /tmp/v` (or wherever), `source /tmp/v/bin/activate`, `pip install -U pip` to make sure your pip is new enough to handle wheels, and `pip install numpy scipy scikit-learn`. Nothing gets built from source. Unfortunately the same can't (yet) be said for pandas; `pip install pandas` at this point will try to compile it.

Actually acquiring virtualenv on OS X seems not to be super straightforward, especially if you want to use the system Python interpreter. The best option seems to be to `brew install python`, and then `pip install virtualenv`. Or download Python from python.org, and maybe get Python 3 while you're at it.


One way is to get a standalone Python and git clone virtualenv, then use the standalone Python to make a virtualenv for each project. This leaves out anything to do with the system Python.


Thank you! I'll give that a whirl.


Anaconda. It's the best game in town right now.


I have found Anaconda to be substantially slower running on OS X than running on the same hardware in a docker container running on boot2docker.


TBH, it really is not dev friendly due to odd/old versions, having to install brew etc.

Thus, I use my Mac as a frontend with business apps, but all dev is done on Linux VMs. I like the separation and avoid all that hair pulling.

Now, dealing with proprietary HW is a whole other can of worms. I spent 3hrs last night trying to load an LE cert into a Polycom phone but had ZERO success getting the phone to use it instead of the factory cert. ARGH.


I have yet to have a problem with it. macOS seems to have the right balance between convenience and hackability.


Which components are hackable? The closed-source kernel? The closed-source userspace libraries? The Internet service infrastructure (iCloud &c.)? The outdated versions of open-source software?

You have root, and you can build your own enclave of open-source code doing as you desire, but this is true of pretty much any system--certainly it's true on Windows, at least.


You answered your own question. I didn't say macOS was more or less hackable than other OSes. Also, I like many of the convenience features.

Linux is many (good) things, but up until this point in time, "convenience" is not a word I would associate with it. macOS and Windows own that world.



