Not to be overly snarky, but I'd argue that your comment here is a failure of creativity.
If the AGI mechanism can learn from the real world, it can learn from a simulated one that it can similarly operate within and act upon -- and in fact simulation can cut the training time from the years or decades humans need by many orders of magnitude.
We already see things like this in robotics; it's a matter of simulation fidelity. Even without perfect fidelity, if the learning mechanism is correct, you'd get an intelligence with incomplete ideas, not a completely different kind of thing.
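To make the "orders of magnitude" point concrete, here's a toy sketch (purely illustrative, not any real robotics stack): a trivial simulated environment whose `step` costs microseconds, versus a physical robot whose actuation loop costs milliseconds to seconds per step. The environment and its interface are invented for this example.

```python
import time

class SimEnv:
    """Toy simulated environment: walk along a line toward a goal at x=10."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        """action is -1 or +1; returns (state, reward, done)."""
        self.state += action
        done = self.state == 10
        reward = 1.0 if done else 0.0
        if done:
            self.state = 0  # reset episode
        return self.state, reward, done

env = SimEnv()
start = time.time()
steps = 0
while time.time() - start < 1.0:  # one wall-clock second of "experience"
    env.step(1)
    steps += 1

# A simulated step here takes on the order of microseconds; a physical
# robot step takes milliseconds to seconds. That ratio -- plus the ability
# to run many simulators in parallel -- is where the speedup comes from.
print(f"simulated environment steps in 1 wall-clock second: {steps}")
```

A real robot running a 10 Hz control loop collects ~36,000 steps per hour; even this naive single-threaded toy collects millions per hour, and production simulators run thousands of copies in parallel.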