Hacker News | hardy263's comments

Can you explain what rule this is? I'm highly curious as to what rule can solve all derivatives, and it might be useful for me.


Sure: the line tangent to any point on a smooth curve approximates a sufficiently small bit of the curve surrounding the point arbitrarily well.

Newton's contribution wasn't this, however, but the extension of Descartes' algebraic tangent-finding methods to curves represented by "infinite polynomials", which he neither uses nor explains in the Principia. If you're looking to learn Newton's flavor of calculus "from the master", here it is:

http://archive.org/details/methodoffluxions00newt


f'(x) = lim h->0 (f(x+h)-f(x))/h


If that's really the rule ekm2 is referencing, the rest of his comment falls apart. This rule appears in the first section on derivatives in any college or high school calculus textbook. Textbooks lead you on for a good many pages, implying that calculus problems are actually solved in practice by reference to this equation, then grudgingly admit (after forcing you to use it many times) that the power rule, among others, exists.
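For concreteness, here is the limit definition worked through for f(x) = x^2, showing how the power rule drops out:

```latex
f'(x) = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h}
      = \lim_{h \to 0} \frac{2xh + h^2}{h}
      = \lim_{h \to 0} (2x + h)
      = 2x
```

which is exactly what the power rule d/dx x^n = n x^(n-1) gives for n = 2.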


I assume he's referring to this [1], although Newton invented his calculus because he couldn't use this approach to solve practical problems.

[1] http://en.wikisource.org/wiki/The_Mathematical_Principles_of...


> Can you imagine, then, diving into a website's back-end to see @ all over? It turns out, the previous developer realized all those nasty notices and errors stopped happening if he slapped a @ on everything.

You've turned something that's a person's fault into something that's the language's fault.

Other languages have some form of warning suppression as well, like Java's @SuppressWarnings or C#'s #pragma warning disable. Although those won't suppress all errors the way PHP's @ does, they can still bite you if you don't fix the underlying warnings.

It IS snobbery. You're taking a look at other people's code, and judging the language from it. I've written PHP for about 3 years and I've never once used the @ to suppress errors.


> You've turned something that's a person's fault into something that's the language's fault.

> You're taking a look at other people's code, and judging the language from it.

At some point, you have to start blaming the language for fostering an environment where that code is acceptable.


Meanwhile, people still manage to write memory leaks in garbage collected languages specifically designed to not leak memory.


GC is not about preventing memory leaks; it's there to make managing memory easier, not automatic. One of its biggest advantages is the ability to deal with memory fragmentation, which is ridiculously hard to do well in C++-style languages.


You can write bad code in any language. It's the programmer's responsibility to make sure the stuff they're writing is good, and that only comes from experience with the language.


Of course you can. But is it always an equal share of bad code for each language? If not, then you have to admit that the language itself will encourage or discourage bad code or bad coders.


It's easier to write bad code in certain languages, as well as vice versa.


The article tries to be a comprehensive list of problems with PHP and @ is a notorious one, even if you personally are disciplined enough to avoid it.

Not to mention that I have no idea why the original commenter picked on this. It was listed as one of the 7 or so things that can go wrong with that one single, not unusual line of code. It's not like he had a whole paragraph about why @ is bad.


> The article tries to be a comprehensive list of problems with PHP and @ is a notorious one

Nonsense. The @ error suppression operator is a tool, just like any other. It should be used sparingly, but it does have its uses. I have been writing PHP for over a decade and I have used it exactly once. And yes, it irritates me when I see it all over the place in others' code ... which is why I refactor all external PHP code before I place it inside of mine.

The amazing thing about PHP is that it has a plethora of tools available and the language doesn't force you to write code in some constrained manner according to what some snob perceives as the right way. The only right way is the way that works and works well.

You don't like a feature of PHP? Don't use that feature. Simple as that.


Ok, let me fix that: "The article tries to be a comprehensive list of problems with PHP and @ is an error suppression tool that is notorious for being misused throughout the community".

> I refactor all external PHP code before I place it inside of mine

> You don't like a feature of PHP? Don't use that feature. Simple as that.

I don't know anything about you, but judging from that attitude you haven't worked in many teams. Of course it's not as simple as that. If I had a penny for every time I had to fix someone not checking that strpos() === FALSE, well, I could fund my own startup. You may have the luxury of refactoring all over the place, but the reality out there is that horrible code like this is left to fester until it causes real business damage.


> I don't know anything about you, but judging from that attitude you haven't worked in many teams.

I haven't worked on any teams. I've always written software solo. What of it?

> You may have the luxury of refactoring all over the place, but the reality out there is that horrible code like this is left to fester until it causes real business damage.

Refactoring is not a luxury but a necessity. Not only do I refactor other people's code, but I refactor my own. That's the only way to get to a quality code base.


> I haven't worked on any teams. I've always written software solo. What of it?

Well, that means you have zero experience with the majority of concerns expressed in this thread, and so are not qualified to opine on them. You work in a happy bubble and I envy you for it, but in the real world you very very rarely get the go-ahead to refactor old code. So in the real world, you very very rarely get to see a quality code base. In any language, really, but PHP compounds this problem with its idiosyncrasies. But you wouldn't know about that.


Isn't the fact that you have to refactor so much of other people's code an indication that something might be wrong? I'm not familiar with the PHP ecosystem, but I don't know of many people having to refactor Ruby gems.


> Other languages have some sort of warning suppression as well

But there is a huge difference here. PHP directly encourages it, making it so easy to do "just prefix it with @" and dedicating a part of the core language syntax to it (thus spending such a nice character for such a triviality). I believe (correct me if I'm wrong) that in Java, it is just another annotation, and in C# just another preprocessor directive.

And I think that's what the article is about - so many things are wrong in the very foundation of the language.


In Java annotations can be used to ignore compiler warnings, not runtime errors.

To get the same effect as @ you'd have to use try {} catch {} blocks all over the place and leave the catch blocks empty, as with any other language with runtime exceptions. Sadly, this is done more often than one would think...


But that was exactly my point. When you do it in Java, you are obviously doing something wrong (or at least something not intended to be done). It just looks wrong at first glance, with the empty blocks and all. In PHP, it's just a single-character prefix, no bother at all to add, and it looks just like any other sigil. Its usage is definitely not discouraged by design, quite the contrary.


> You've turned something that's a person's fault into something that's the language's fault.

PHP programmers have only one single thing in common. PHP.


> I've written PHP for about 3 years and I've never once used the @ to suppress errors.

Is it so wrong to use @? I've always used (@$_REQUEST['foo'] === 'bar') as a shorter way of writing (isset($_REQUEST['foo']) && $_REQUEST['foo'] === 'bar') - I'm curious if there's a problem with that approach.


Actually, yes, because the error gets generated anyway (and that is slow) and then it's just suppressed at the last moment.

You'd probably be better off creating a function to do what you want succinctly rather than using the suppression operator.


You should be using array_key_exists instead.


"You've turned something that's a person's fault into something that's the language's fault."

Out of curiosity, where do you side on the C-is-an-unsafe-evil-language-because-of-pointers debate?


"7 bits" versus "half of 8 bits" are two slightly different things. One has a padding, the other does not. So the file size for a 7 bit encoding would be slightly smaller than an 8 bit one.


Can you use Photoshop filters to do this, or are there more professional programs to do so?


As a curiosity, how many bits were gained (rather than lost) when the second and third characters to obtain a Death Note (female and male, respectively) were introduced?

Do the bits of entropy add up, or does it not matter at all?


Depends on the assumptions you make, I think. If we make the assumption that #2 and #3 obtained Death Notes at a known time, then any observations before then still pin down #1 - if there are kills at morning Japan-time, that serves to pin him down to Japan, etc.

But once #2 and #3 become equally active, now any evidence like that serves to narrow down the propositions 'any of #1, #2, #3 are in Japan', so if someone asked you what's the odds that #2 lives in Japan, you would only have 1/3 the evidence you did before - because any kill linked to Japan would only have 1/3 chance of having been #2. If a kill is made using information from a rural Iowan newspaper with circulation of 100, well, all you know is that any of 1/2/3 had access to it (and maybe all 3!). And so on.

(In bit terms, if there were 4 Death Note users, then any observation has 1/4 the power it did, or 1/2^2; similarly, if there were 16 users, or 32 users... Once you established someone was a Death Note user you would still need to do that many more bits of work to figure out which Death Note user you want.)
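Putting the same point in symbols (a sketch of my own, not from the thread): if a kill is equally attributable to any of N equally active users, the evidence it provides about one particular user is diluted by a factor of N, which costs log2 N extra bits:

```latex
P(\text{kill by user } i \mid \text{observation}) = \frac{1}{N},
\qquad
\text{extra bits needed} = \log_2 N
```

So with N = 4 users you need log2 4 = 2 more bits to single one out; with N = 16 you need 4, and with N = 32 you need 5.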


I had the same thing happen to me. I realized it was because I stopped reading regularly. Before, I used to read at least 1-2 novels (100-200 pages per book) a day. During that period I had an awesome vocabulary and I could basically spell anything without having to look it up. But now I get mixed up on whether certain words have double letters or not. Since I stopped reading regularly due to time constraints, my writing and spelling ability has basically died.


It's impossible to delete your account with GoDaddy; that's probably what he's referring to.


I play violin, so although I'm not illiterate in reading music, I find that when I look at a piece, I cannot immediately understand what the melody sounds like until I actually play it. I have trouble switching between different keys from piece to piece, so I cannot "visualize" the right sounds in my head. But having relative string lengths shows me patterns and gives me more insight into how the piece may sound. It may even be more informative about where I can place my finger on the string.


It'd be nice if it were possible to search for an exact song based on its melody. Sometimes I remember a melody but I can't remember what song it came from, and it drives me nuts trying to find it.


So does this mean using BigInteger for small numbers could require almost 30x more memory?


A serious bignum library wouldn't use base 10. I understood that he intended that you use base 10^15 or something like that. And even that was only by way of illustration.

A more standard approach when using doubles would be to use base 2^32. This is important if you want to implement a Schoenhage-Strassen FFT, which certainly makes use of the fact that the base is a power of 2.


It's actually base 10^7 :) The reason for that is that at the lowest level, the result of any operation on a single digit needs to fit in a JavaScript Number. Since you sometimes have to multiply two digits and 64-bit floats have about 15 digits of precision, the largest power of 10 that will never lose precision is 10^7 (because 10^7 * 10^7 = 10^14).

Of course, it would be more efficient to use a power of two, but using a power of 10 makes converting back and forth between strings (in decimal) quite a bit faster. In my original use case that was a large factor, so it's what I went with.
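A quick sketch of the precision argument (the constants here are mine, not from the library): the product of two maximal base-10^7 digits is just under 10^14, comfortably below Number.MAX_SAFE_INTEGER (2^53 - 1, about 9.0 x 10^15), so the multiplication is exact:

```javascript
// Two maximal "digits" in a base-10^7 representation.
const BASE = 1e7;
const a = BASE - 1; // 9999999
const b = BASE - 1; // 9999999

// Largest possible single-digit product: just under 1e14,
// well inside the 2^53 - 1 range where doubles represent integers exactly.
const product = a * b;

console.log(Number.isSafeInteger(product)); // true
console.log(product); // 99999980000001
```

Base 10^8 would already break this guarantee, since (10^8 - 1)^2 is roughly 10^16, which exceeds 2^53 - 1.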


I understand.

Of course you can also multiply 32 x 32 -> 64 bits in JavaScript, though you need more than one operation to do it.
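One way to sketch that (my own illustration, not anyone's library code): split each 32-bit operand into 16-bit halves so every partial product stays exact in a double, then recombine with explicit carries:

```javascript
// Multiply two unsigned 32-bit integers, returning the 64-bit result
// as { hi, lo } 32-bit halves. Each partial product stays below 2^33,
// well inside a double's 53 bits of exact integer precision.
function mul32(a, b) {
  const aLo = a & 0xffff, aHi = a >>> 16;
  const bLo = b & 0xffff, bHi = b >>> 16;

  const lolo = aLo * bLo;             // low x low, < 2^32
  const mid  = aLo * bHi + aHi * bLo; // cross terms, shifted by 16 bits
  const hihi = aHi * bHi;             // high x high, shifted by 32 bits

  // Accumulate with explicit carries so nothing exceeds 2^53.
  const lo = lolo + (mid % 0x10000) * 0x10000;
  const hi = hihi + Math.floor(mid / 0x10000) + Math.floor(lo / 0x100000000);
  return { hi: hi >>> 0, lo: (lo % 0x100000000) >>> 0 };
}
```

For example, mul32(0xffffffff, 0xffffffff) yields { hi: 0xfffffffe, lo: 1 }, the correct 64-bit square of 2^32 - 1.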


It's hard to say exactly how much more memory it takes, but yes, there is quite a bit of overhead, especially for small numbers. But that's the trade-off you have to make in order to support the large numbers. Obviously you wouldn't use BigIntegers for everything.


Seems like using base 10 is a mistake: with a higher base, your bignums will be faster and use less memory.


The article says that the author uses base 10000.


Well, yeah, of course. But only an idiot would use a BIGInt for small numbers.

