Hacker News

There are a sizable number of programmers who believe a language has to be unsafe to be fast. This is because, for several decades, most compilers with subscript checking were really dumb about it. This started with Berkeley Pascal for BSD, which made a subroutine call for every subscript check and gave Pascal under UNIX a bad name. Good Pascal compilers were hoisting subscript checks out of loops in the 1970s, but not on UNIX.

Then there was the C++ approach, trying to do it in templates. That leads to dumb subscript checking, because the compiler has no idea that the "if" involved in subscript checking is a subscript check and could potentially be hoisted out of the loop and done once at loop entry. For most math code, where this really matters, you can do one check at the top of the loop. Maybe zero checks, if the upper bound is coming from some "range" or "len" construct.
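A minimal Rust sketch of the "one check at loop entry" idea (function and variable names are mine, not from the comment): a single assert! before the loop establishes n <= v.len() once, a fact from which an optimizer can, in principle, prove every per-iteration check redundant.

```rust
// Sketch: hoisting the subscript check to loop entry.
// The assert! gives the optimizer the fact n <= v.len(); each v[i]
// inside the loop is then provably in bounds, so its implicit check
// can be elided rather than executed on every iteration.
fn sum_first_n(v: &[f64], n: usize) -> f64 {
    assert!(n <= v.len()); // one check, done once at loop entry
    let mut total = 0.0;
    for i in 0..n {
        total += v[i]; // provably in bounds under the assert above
    }
    total
}

fn main() {
    let data = [1.0, 2.0, 3.0, 4.0];
    println!("{}", sum_first_n(&data, 3)); // prints 6
}
```

Whether the checks are actually elided depends on the optimizer, but this is the shape of the transformation a good Pascal compiler was doing by hand-analysis in the 1970s.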

At last, smart subscript checking is coming back. This is partly because languages now usually have a "do this to all that stuff" construct ("for i in foo ..." or similar) and it's a no-brainer that you don't need to check subscripts on every iteration. The Go compiler at least has that.

How smart is the Rust compiler about this? It potentially could be very smart.

This is important. Otherwise, people will want to turn subscript checks off for "performance".



It's idiomatic in Rust to use iterators rather than explicit index loops. Iterators over a vector or array, for example, do no per-element bounds checking. I don't know whether that is implemented by explicitly opting out of bounds checking inside the iterator, or by some other means. Either way, bounds checking disappears when the "looping" is encapsulated in an iterator, but not for arbitrary indexing (which is a good reason to avoid explicit indexing when you don't have to, of course).
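A small Rust sketch contrasting the two styles described above (function names are illustrative): the indexed form performs a checked v[i] on each element, which the compiler must prove redundant, while the iterator form has no index to check at all because the iterator itself tracks the end.

```rust
// Indexed form: each a[i] / b[i] is a checked subscript that the
// compiler must prove in bounds (here it usually can, since i < n
// and n is derived from the slice lengths).
fn dot_indexed(a: &[f64], b: &[f64]) -> f64 {
    let n = a.len().min(b.len());
    let mut sum = 0.0;
    for i in 0..n {
        sum += a[i] * b[i];
    }
    sum
}

// Iterator form: zip() walks both slices and stops at the shorter
// one, so no subscript exists and no bounds check is needed.
fn dot_iter(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

fn main() {
    let a = [1.0, 2.0, 3.0];
    let b = [4.0, 5.0, 6.0];
    println!("{} {}", dot_indexed(&a, &b), dot_iter(&a, &b)); // 32 32
}
```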



