
I wouldn't make overly restrictive assumptions about the form a future AGI will take, though. I find theoretical projections from a more axiomatic level quite important. It's like making rules for nukes before they were invented: assuming an abstract apocalypse-capable weapon without any knowledge of missiles or nuclear fission.

That said, the vast majority of AI safety theory seems to fall into the same pothole as philosophy: modelling the world through language-logic rather than probabilities. The example you quoted fits this category - it's far too specific and thus unlikely to be useful in any way, even though its wording may deceive its author into believing it's an inescapable outcome.


