What is the purpose of the .note section? If it is left out, it appears from ktrace that the kernel assumes the binary uses some other ABI (e.g., Linux, via compat_linux) and the /emul shadow directory gets searched.
From the document: "OpenBSD native executables usually contain a .note.openbsd.ident section to identify themselves, for the kernel to bypass any compatibility ELF binary emulation tests when loading the file"
I have several personal hacks for this that I've written over the years, but I've yet to find anyone else who has tried to automate it.
For example, one task is to determine which functions in a library are not actually used by your program and exclude them from linking, instead of blindly linking whole libraries full of unused functions (which sometimes cause name conflicts).
"You're browsing without Javascript! If you have no idea what that means, you should ask your technical friend about it. Otherwise - kudos."
No, kudos to you. I could count on one hand the number of times I have been congratulated by one of these capability detection messages for browsing without Javascript.
For me, not being baited into buying stuff is a nice side effect.
I block ads because it's fun to see how badly Google, Apple, Microsoft, apps and websites want data about computer users.
It's a game (I would guess that's how they see it); one that forces the user to be vigilant about networking.
Many times I see commenters on HN making statements to the effect of "users cannot run their own servers" and spurring a "debate" in the context of someone trying to innovate away from the current asymmetric, client-server, "calf-cow" internet.
I saw one such comment earlier today.
Thought experiment:
What about exploits like this one, among so many others over the years, in Microsoft Windows?
In many cases it sure looks like the user is "running a server".
There is a port open and listening, waiting for connections. And some remote client can connect and issue commands.
> Many times I see commenters on HN making statements to the effect of "users cannot run their own servers" and spurring a "debate" in the context of someone trying to innovate away from the current asymmetric, client-server, "calf-cow" internet.
What that is about is that most consumer-level internet connections do not have a fixed IP address. Thus you can't (easily) point a DNS record at them, etc.
Understood. However try to reconcile this with the thought experiment I gave above. You would be saying that these Windows exploits would not work because users have IP addresses that are changing too frequently. Is it possible that _in practice_ many "dynamic" IP addresses are actually quite static (i.e., remaining the same for months or longer)? In _theory_ they could change by the day or week.
Well, most attacks just use such a "server" for the initial attack; afterwards they set up something that makes outbound connections to a "command center" or similar.
In technical terms any computer can be a server. Just look at the BBSs that were run off C64s and similar back in the day.
But a server that can't be reliably reached is a useless server.
And the BBSs worked back in the day because dialing the same number, even days, weeks, or months in between, would lead you to the same BBS if the computer was still running.
A domestic internet connection is simply not reliable enough for that. Yes, if nothing happens electrically at either the customer or ISP end, the IP will remain for some time. But have a power failure and it is likely that the IP will be reassigned. And that randomness (sometimes you can retain the address for months, other times it changes within hours) does not help.
I agree firewalls and NAT are a nuisance, and today's internet is not one iota as cool as the BBS days. The nuisances introduced by "ISPs" have hindered, but in the long run have not stopped, reliable peer-to-peer internet. I will not name the commonly known examples lest they divert the conversation.
There are a variety of workarounds for dealing with firewalls and NAT, and after years of using them "experimentally", I can attest that they work reliably, at least for me. Some of them are well-known, some of them are commonly used, others are not.
If IP addresses assigned to so-called "reliably reached" servers were as static as you imply in practice, there would be little need for a mechanism like DNS. (And I'm not saying there is, just pointing out that there are a lot of folks who believe IP addresses must be able to change without notice.)
In my experience, domestic internet connections with "dynamic" IP addresses are "reliable enough" to do some "useful" things besides simply partaking in the "calf-cow" web.
"Google makes deployment much easier by using static linking..."
Unless you are referring only to the Go compiler, can you provide a citation for this statement?
I also use static linking as much as I can. Sometimes it requires quite a bit of work due to how open source developers structure their build systems. I wonder if some folks at Google have also spent significant amounts of time unravelling idiosyncratic build systems found in open source projects to make static, portable binaries.
I often see commenters on HN and elsewhere making comments against static linking. It would be useful to be able to cite Google's internal practices; these commenters also seem to attribute superior know-how to Google staff.
They actually don't statically link absolutely everything: there is a small set of base libraries (glibc and I forget what else) that stay dynamic, and generally all compiled dependencies other than those from the base runtime are linked into the binary.
Google released a bunch of their main build system recently, so you can see various hints about their internal practices there, starting with the documentation: http://bazel.io/docs/be/c-cpp.html#cc_binary.linkstatic . The OSS Bazel defaults to linkstatic=0, but I think internally they use linkstatic=1 there: "mostly static" mode.
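For the curious, a rule using that attribute looks something like this (a non-executable BUILD-file sketch; the target and file names are hypothetical, and only the linkstatic attribute comes from the docs above):

```python
cc_binary(
    name = "server",
    srcs = ["server.cc"],
    deps = ["//third_party/foo"],
    # "mostly static" mode: compiled deps are linked into the binary,
    # while base system libraries like glibc stay dynamically linked.
    linkstatic = 1,
)
```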
That's just C/C++ of course, but a similar approach applies to Python, where there are zipped single-file executables that contain all of a python app's dependencies except for python itself and things like libc6. There's a primitive form of that in bazel at https://github.com/bazelbuild/bazel/blob/master/tools/build_.... Twitter's "Pants" build system is ideologically descended from Bazel, and along with it came the Pex python executable format: https://github.com/pantsbuild/pex
Just the fact that Go does mostly-static linking by default is a decent hint in this direction, considering where Go originated.
In the end, neither approach is ideal for all scenarios. General purpose linux distributions have various very good reasons to prefer dynamic linking wherever possible. Cluster operators and anybody distributing binaries for multiple environments have lots of reasons to prefer static linking. It's not a decision to be made in a vacuum.
There's a simpler reason to link nearly everything statically. Besides libc, almost everything is built from source, and not just from source, but from source as it exists at the current revision. So compatible re-use of prebuilt *.so's would be a nightmare and you have two realistic choices:
1. Build them all at the current revision and package them as a subtree. This is utterly pointless since, due to containerization, there will be no library sharing anyway. The only real reason you might want to do this is to save time on static linking.
2. Link statically and ship a single (often enormous) binary with just the stuff you need, saving a bit of disk space, and a bit of time at runtime. Fewer moving parts, what's not to like?