Hacker News | ahmacleod's comments

This is pretty cool. I usually had to add a little more context to get the correct result though. It would be nice to see the top ten results, maybe with a relevance score.

It would also be great if I could pipe in the search query and bypass confirmation, e.g.:

> echo hide your love away | instantmusic -f


This feature was recently added: https://github.com/yask123/Instant-Music-Downloader/blob/mas... You can install from source if you wish to use it now.

I haven't updated the package as yet.


I found this odd as well, especially given how competing services (e.g. SoftLayer) so often cite the performance of dedicated servers as their competitive advantage over AWS.


I'm actually pretty amazed at the diversity, and how few of these frameworks I'm familiar with.


> It is not expected that the performance of compiled applications will ever rival 'v8'. JavaScript is an awful language for static compilation - it almost seems designed to foil any attempts at optimization, and so a JIT will always have a significant performance advantage.

For the compiler-uninitiated such as myself, is there a simplified explanation of why static compilation is inferior to JIT for languages like JavaScript?


V8 is fast because it tracks how the code behaves and dynamically recompiles functions based on what it sees.

E.g.: "Every time this function has been called so far, the first argument was a small int. Since it's a hot function, I'll recompile it and optimize for that assumption, and wrap the result in a type check. If the type check ever fails I'll fall back to the unoptimized function, and update my assumptions about the function's types."

In other words, a general compiler has to assume all JS types are dynamic. V8 does the same at first, but if it sees code behaving like it was statically typed, it can take advantage of that.
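A minimal sketch of that behaviour (not real V8 internals; `add` is a made-up function used only for illustration):

```javascript
// While add() only ever sees numbers, a JIT can specialize it to a
// fast integer-add with a guard. The later string call fails the
// guard, forcing a fall back to the generic version.
function add(a, b) {
  return a + b;
}

for (var i = 0; i < 100000; i++) {
  add(i, 1); // monomorphic so far: always number + number
}

add("a", "b"); // guard fails here: deoptimize and update assumptions
```

A static compiler has to emit the generic version up front, because it can't observe that the hot loop only ever passes numbers.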


For one, "this" could refer to anything at any time, depending on who invoked the enclosing function.

    function myFunc() {
        console.log(this);
    }
    
    var myObj = {};
    
    // this = window
    myFunc();
    // this = window
    myFunc.apply(null, []);
    // this = myObj
    myFunc.apply(myObj, []);
    
    myObj.myFunc = myFunc;
    // this = myObj
    myObj.myFunc();
    
    var tmpFunc = myObj.myFunc;
    // this = window
    tmpFunc();
Another issue is closures, where the variables and properties available to nested functions depend on the enclosing functions. Combined with the floating "this", this leads to highly dynamic behaviour that can only be assessed at runtime.
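A small sketch of the closure problem (hypothetical `outer`/`inner` names): the variable captured by the inner function changes type after the inner function is created, so no single static type can be assigned to it.

```javascript
function outer() {
  var x = 1;
  function inner() {
    return x;         // resolved through the closure at call time
  }
  x = "now a string"; // same variable, new type, before inner ever runs
  return inner;
}

var f = outer();
// f() returns "now a string", not 1
```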


I'd guess the names of properties being possibly dynamically computed also makes things difficult to optimise.


A clever compiler might be able to infer "structural typing", which is basically "duck typing" done at compile time. So, if you add a property "bar" to an object "foo", and you have some other code that calls "baz.bar", you can infer that "foo" is of type "HasABar", and "baz" is of type "HasABar", therefore it might be safe to use "foo" in place of "baz".

The tricky part would be consolidating every structural subtype into cohesive classes.
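A runtime illustration of the duck typing such a compiler would be formalizing (the names `callBar`, `foo`, and `baz` are hypothetical, following the parent comment's example):

```javascript
// Any object with a callable `bar` works here. A structural-typing
// compiler could infer that both foo and baz satisfy a "HasABar"
// type and compile a single specialized call site for both.
function callBar(obj) {
  return obj.bar();
}

var foo = { bar: function () { return "foo"; } };
var baz = { bar: function () { return "baz"; } };

callBar(foo); // "foo"
callBar(baz); // "baz"
```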


I suspect that a lot of it comes down to dynamic typing -- the fact that the type of a variable is determined on-the-fly at runtime. Also the fact that Google pulled out all the stops when they created v8. It works very hard at runtime to undo the inherently dynamic nature of Javascript and make Javascript code behave more like equivalent static code.


I'd also like to know the exact reasons, but I imagine it has something to do with types.

With JIT you're only interested in the type of a variable when you're reading it, whereas ahead of time you'd have to account for many cases - I think this would be a big deal with JavaScript's inheritance model.


Speaking from experience with my long-standing (and slow-moving) work on a Ruby ahead-of-time compiler [1], since Ruby and JS share most of the same issues:

The first and most obvious problem in terms of performance is that for sufficiently dynamic languages it is exceedingly hard to know what types will look like at runtime. In both JS and Ruby you can define types at runtime: in Ruby by mutating classes, and in JS by mutating prototypes.

This means it's much harder to, for example, lay out objects so that attributes can be accessed cheaply through indexed reads. It also makes inlining very hard, as it may be hard or impossible to even determine the types of the objects you'll be passed. There are lots of workarounds, but many of them bring you progressively closer to including a full JIT anyway.

For a JIT like V8, you can infer/guess at types because you have the entire infrastructure to correct your guess. E.g. if you come across a "Point" class that appears to have the attributes/instance variables "x" and "y", you can allocate those objects accordingly, and if a "z" is introduced later, you can either create a hidden subclass dynamically or re-allocate all existing Point instances with an extra field, and re-compile the code that accesses them. In an ahead-of-time compiler this is harder without either embedding a JIT anyway, embedding specialized code that carries out specialized code-generation tasks, or trying to infer a single unified type (but this is problematic: code that validly executes for the Point class while the instances don't have "z" could very well fail or behave entirely differently once the class is mutated to add it).
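The Point example above can be sketched in plain JS (the shape split is what a hidden-class mechanism has to handle; `Point`, `p`, and `q` are hypothetical names):

```javascript
// p and q start with the same shape {x, y}; adding z to q splits the
// shape. Code compiled under the assumption "every Point is {x, y}"
// must then be re-specialized or fall back to a generic path.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

var p = new Point(1, 2);
var q = new Point(3, 4);
q.z = 5; // q now has a different layout than p

// p.z is undefined, q.z is 5: the two "Points" are no longer uniform
```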

Ultimately I believe that a lot of dynamic code is de facto static, but even for code that is in effect 100% static at runtime, you sometimes need extra hints to guarantee that. E.g. any code that loads a library could potentially be intended to be static, but for dynamic languages you may need hints beyond the semantics of the language to know whether the library is meant to always be static, or is intended as a plugin mechanism. (A typical Ruby idiom is to iterate over a directory listing and "require" every .rb file found in it; that may be to avoid having to explicitly list all of them, or it may be to let people drop in plugins.)

Once you know where that delineation is, there's often nothing in particular preventing type inference across the board, but it can be a *lot* of effort. E.g. in both JavaScript and Ruby, you face the situation that an analysis of the types that is valid in one instance can be invalid in the next, so you must, at least in theory, be prepared to run type inference several times and potentially be forced to generate different code based on when in the program flow a method is called.

E.g. even in my compiler implementation, I'm likely, for ease of bootstrapping, to end up replacing methods with more capable versions during the bootstrap of the runtime library, and that may very well alter which types are safe inputs to a given method.

Dynamic languages also often make it tricky to determine conclusively what should be compile errors. E.g. a lot of constructs in Ruby will trigger exceptions that seem like they should be compile-time errors. For example, there's a LoadError if "require" fails to find a file. But most of them can also be caught, and then an AOT compiler faces the challenge of deciding whether to process that at compile time (e.g. conditionally including an optional dependency can be done this way) or to generate code that attempts the load at runtime (thus losing much of the ability to statically infer precise types at compile time, since for all you know the runtime version of that file may redefine everything).
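The JS analog of rescuing a LoadError is wrapping require in a try/catch to load an optional dependency ("left-pad-extra" is a made-up package name for illustration); an AOT compiler can't know at compile time which branch will run:

```javascript
// Optional-dependency pattern: if the module isn't installed,
// require() throws MODULE_NOT_FOUND and we degrade gracefully.
var optional;
try {
  optional = require("left-pad-extra"); // hypothetical package
} catch (e) {
  optional = null; // not installed: fall back to built-in behaviour
}
```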

For my part, I intend to resolve a lot of this with pragmas and compiler flags that let you signal whether to favour precise compliance with interpreted Ruby, or making it as static as possible (if I ever get that far...)

[1] http://hokstad.com/compiler


Just to clarify: I'm the OP, not the Author.


Yes, by OP I meant the original poster of the content. You're the submitter.

So all my comments are directed to the author of that post, obviously. :)


This. The author pulled off a serious coup here, translating an aborted business into a personal profile piece in a national paper.


I'm not sure it's even aborted, possibly just on pause.


The article is interesting and relevant, even if its premise is flawed. It certainly merits discussion, critical as it may be.


Title is from the article; I am not the author.


Nice work. I've built roughly half of this before, but your API is cleaner and the docs really tie it together.


These are all beautiful illustrations, but they don't feel iconic to me. Granted, there is a spectrum when it comes to detail, but isn't the purpose of an icon to distill an idea to its visual essence?


Glyphs and symbols are more likely what you're talking about. They are often referred to as icons in the tech field so I get the confusion with nomenclature.

Icons mostly serve as visually interesting, identifying artifacts for your product. That's why we call app icons _icons_.

The "icons" within the interface of that app, things that might tell you a menu is present or that a group of users is online, are symbols. Glyphs often conform to content. A good example of a glyph is an emoji or dingbats.


There are two sides to that. First there are trends, which have favoured flat-shaded minimalism for a few years now[0], but before that we had a range of photorealistic and cartoonish icon styles.

On the other hand, I do believe that most of these illustrations are way too detailed to properly function as icons. Styles and trends can change, but unless the author is working with a different meaning of the word "icon", they are always intended to be displayed at medium-small to tiny scales, staying instantly recognizable between a row of different icons.

They are beautiful illustrations absolutely, but I'd not use them as icons. I would use illustrations like this maybe for the front cover of a manual or brochure, perhaps a nice wall-painting(/print) at my office's reception, that kind of thing.

Now I haven't watched the videos (who here has?), so it's possible that some of these artists actually made less detailed and smaller versions of their designs, and it's just the largest one that's showing off in the display frame (it's actually not a very informative article on the whole, IMHO :) ).

BTW, I've seen the term "skeuomorph" thrown around this thread a few times, but it's not a synonym for "photorealistic". Icons by themselves don't generally have skeuomorphs in normal apps or OSes; they are, however, more common in videogames. And when they do, it usually pertains not to what they're a picture of, but to their function as icons, e.g. a paper tag/label, road/wall/door sign, icon (in the religious sense), charm/bead, pill design, etc. In which case I'd argue the skeuomorph is still more a part of the UI than of the icon design itself.

[0] counting "flat design" and "material design" icons as pretty much the same things here, when you compare them to the much more photo-realistic icons we had before.


I think detailed, illustrative icons can be iconic - human eyes are good at parsing 3D things with complex shading, and the lighting and colour can convey more information than 2D shapes alone. I think the purpose of an icon is to be instantly and universally recognizable, as opposed to being a platonic ideal of the thing being conveyed.

It's just that skeuomorphism is out of fashion at the moment, so this style looks a bit dated.

