
> Self-driving cars are new. You've missed my point. The OP is from an older age. There wasn't a lot back then, so when choosing what to focus on, there weren't a lot of choices.

RALPH, running on a 1990 Pontiac Trans Sport minivan, steered itself across the US for 98% of the trip. In 1995.

Much of the control theory, robotics, and AI work that enabled the current self-driving gold rush was invented decades ago.

The Dartmouth workshop was in 1956. Dearth of choices? Please! Those days provided an enormous surplus of choices, almost all of which were good ones! People from that older age invented reinforcement learning, image classification, natural language processing, OOP, hell, even the notion of a pointer! And the list goes on.

Today there are far fewer choices than there were back then because so much has already been done.

That is, of course, assuming you're in the business of "doing things no one else has done before" as opposed to the business of "following a well-trodden path".

So, I guess if your view of the world is confined to "using things other people already invented and explained to me", you might consider the 1950s and 1960s a bleak period when no one knew how to do anything. As opposed to the cusp of a century-long period of continuous innovation...

Again, as the article says, I guess it boils down to what you want to get out of a lifetime of work.

> Funny, you mention graphics, what do graphics usually involve?

shapes, views, raytracing, texture mapping, lighting, color, filtering, scaling, reconstructing, visual surfaces, grids and voxels, octrees, kd-trees, partition trees, polygonal rendering, ...

And that's just the undergrad stuff, not the cutting edge.

Oh yeah, knowing C/CUDA/OpenCL is nice. But when compared to deep expertise, it's a rather trivial time investment and is completely orthogonal: an implementation detail, not the fundamental content.

> Operating systems?

Kernels, scheduling, device drivers, caching, distributed systems, energy models, timing attacks, and the list goes on.

Of course, knowing C is essential, but that's the easy part compared to wrapping your head around a modern OS, or even a tiny piece of one.

> Robotics?

SLAM, sensor fusion, filters, actuation for various types of novel actuators, PDEs and ODEs, optimal control, stability and robustness, system identification, model-predictive control, motors, servos, simulation, etc. And that's just the software side.

Oh yeah, knowing C is nice. But when compared to deep expertise in robotics, it's a rather trivial time investment and is completely orthogonal: an implementation detail, not the fundamental content. Many of the fundamental ideas and techniques in robotics pre-date C by decades.

> On the other hand, how often do you hear about a Java expert? Someone who knows the intricacies of the GC? All about the JVM? Not that often. They exist. But it's not hip. They chose "poorly".

I know a few true, honest-to-god Java experts. They all make insane amounts of money (even by SFBA SE standards) and love their work. Turns out Google has quite a bit of Java code and a metric shitload of money.

You think graphics, robotics, and OSes are just "C and POSIX". That's not true. C and POSIX aren't even table stakes. They're the thing you pick up in a few weeks or maybe a semester so that you can spend several years obtaining the table stakes -- see the list above. Then you need to build true expertise on top of that.

The path from "I know C" to "robotics expert" or "graphics expert" is at the very least a multi-year path. And that's assuming you're bright and have your full work day (and then some) to dedicate to following advances and building expertise of your own.


