Hacker News | viergroupie's comments

>Is the putative student "bad at math" or "bad at math the way it's taught in high school, perhaps even by a certain teacher"?

It could also be a mismatch between someone's personality and the sort of math kids get taught. I did poorly in most of my math classes in high school and college. After failing a few classes, I decided I was hopelessly "bad at math" and left the topic alone. A few years later, I took an abstract algebra class on a lark and really enjoyed it. In hindsight, I did poorly in my earlier math classes because I was undisciplined and really disliked memorizing disconnected tricks and formulas. Abstract algebra, on the other hand, was elegant, conceptual, and fun to learn. Doing well in one class punctured my myth of inability and gave me the confidence to properly learn linear algebra (and take some other interesting classes).


>The remaining intelligences have nothing to do with intelligence or cognitive skills per se, but rather represent personal interests (for example, musical represents an affinity for music; naturalistic, an affinity for biology or geology) or personality traits (interpersonal or intrapersonal skills, which correspond best to the related concept of emotional intelligence).

Hidden in the article is a sneaky mismatch of definitions. Isn't it a tautology to argue that intelligence is singular by using a narrower meaning?


Oh God, the horror. Almost every neural network library I've seen manages to obscure a very simple idea in piles of useless objects.

"Neural networks" are just the composition of several nonlinear regressions. There's nothing particularly "neural" about them.

Here's a typical 3-layer network:

f(x, Wh, Wo) = tanh(Wo * tanh(Wh * x))

Wh and Wo are the hidden and output weight matrices, respectively. Fix some loss function (e.g., L(x, Wh, Wo, y) = ||f(x, Wh, Wo) - y||^2), get the gradient of this function, and take a step down the gradient. There's your learning rule.
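A minimal sketch of that learning rule in Python with NumPy (the network sizes, data point, and learning rate below are made-up illustrations; the gradient is just the chain rule worked out by hand):

```python
import numpy as np

def f(x, Wh, Wo):
    """Forward pass: tanh(Wo * tanh(Wh * x))."""
    return np.tanh(Wo @ np.tanh(Wh @ x))

def loss(x, Wh, Wo, y):
    """Squared-error loss ||f(x, Wh, Wo) - y||^2."""
    return np.sum((f(x, Wh, Wo) - y) ** 2)

def grads(x, Wh, Wo, y):
    """Gradients of the loss w.r.t. Wh and Wo, via the chain rule."""
    h = np.tanh(Wh @ x)                # hidden activations
    o = np.tanh(Wo @ h)                # output activations
    d_o = 2 * (o - y) * (1 - o ** 2)   # dL/d(output pre-activation)
    d_h = (Wo.T @ d_o) * (1 - h ** 2)  # dL/d(hidden pre-activation)
    return np.outer(d_h, x), np.outer(d_o, h)

rng = np.random.default_rng(0)
Wh = rng.normal(scale=0.5, size=(3, 2))  # 2 inputs -> 3 hidden units
Wo = rng.normal(scale=0.5, size=(1, 3))  # 3 hidden -> 1 output
x, y = np.array([0.5, -0.3]), np.array([0.8])

initial = loss(x, Wh, Wo, y)
lr = 0.1
for _ in range(500):                     # plain gradient descent
    gWh, gWo = grads(x, Wh, Wo, y)
    Wh -= lr * gWh
    Wo -= lr * gWo
```

That's the whole algorithm; everything a typical library adds on top of this is bookkeeping.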

Now, I understand the desire for flexibility/modularity, but (1) what's the sense of trying to house supervised and unsupervised methods in the same hierarchy? and (2) what could possibly justify Connection and Weight objects?


Well, this thing supports GUI/visualization as well, so many of these objects make sense.


>What about evidence of design patterns? Does it look like the person who wrote the code doesn't know about things like Observer, Visitor, and Decorator patterns?

This is his criterion for a research position? Understanding of the unsolved problems in a domain? "Irrelevant."

Mathematical sophistication needed to abstractly model complicated-seeming scenarios? "A dime a dozen."

Know the decorator pattern? "omg, you're hired!"


No matter which language you choose for productivity, I would start with the exercises in the Little Schemer or Little MLer. They're disarmingly simple while still challenging your brain to shift into a recursive/applicative mindset.
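Those exercises drill one habit: ask whether the list is empty, handle that case, and recur on the rest. A sketch of the pattern in Python (function names are my own, not from the books):

```python
def my_length(lst):
    """Length by recursion: empty list -> 0, else 1 + length of the rest."""
    if not lst:
        return 0
    return 1 + my_length(lst[1:])

def my_map(f, lst):
    """Map by recursion: apply f to the head, recur on the tail."""
    if not lst:
        return []
    return [f(lst[0])] + my_map(f, lst[1:])
```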

As for what functional language you want to end up using...well, it depends on your goals.

I'm personally partial to the family of languages which use ML-style type systems, which include OCaml, F#, and Haskell.

The cores of F# and OCaml are very similar. F# has better libraries, whereas OCaml has both a more powerful type system and a more powerful module system.

Haskell is less straightforward than OCaml and F#, and it has many more esoteric features you need to learn before being considered an expert. Nonetheless, many have found learning it to be a mind-expanding experience. Also, the Haskell community is much larger and more vibrant than what you'll find with any other statically typed functional language.


Thanks. Besides being great for my mind, can I compile an assembly in F# and include it in my .NET application? And if so, do you have any real world examples of this being useful?


I think you're misunderstanding the role of "neural networks" in academia. NIPS has a lot of value to the machine learning and AI communities but, beyond vague inspiration, it has almost no connection to neuroscience. There is a substantial body of work in computational modeling of neuronal behavior, but this work is much messier (PDEs with biologically determined constants) and more limited in scope than the papers that appear at NIPS.

edit: Relevant conferences in computational neuroscience -

* http://cosyne.org/c/index.php?title=Cosyne_09

* http://www.cnsorg.org/2009/

* http://icms.org.uk/workshops/mathneuro2009


Sorry, but the CS approaches have much to share with neuroscience even in these very early (and messy) days.

Machine learning is now used to analyze neuroimaging data and predict behavioral responses.

http://polyn.com/struct/NormEtal06_TICS.pdf

I also know of one group merging their intelligent tutors with fMRI data to predict types of confusion and offer better suggestions.


I don't really know much about neuroscience proper. I tend to look at the simplified artificial models and synthesize what might be possible, rather than look at the biology at all.


Is that a map of population density I see?


>Adam Leibsohn, a 27-year-old communications strategist who makes roughly $60,000 a year and pays $1,650 a month for his own apartment in the East Village...“It’s kind of a spartan lifestyle,” he says.

We admire your frugality.


The machine learning lectures here are of fantastic quality. I hope this is the future of academia.


Other questions for PG:

- Why does God allow evil?

- Does P = NP?

- How does it feel to be so gosh darn awesome?

