AI/ML systems need clean, high-quality data, but I think even an AGI would run into the same problems as human brains.
Which information is accurate? Which pieces are blatant lies or unintentional misinformation? What biases might its knowledge have, based on the data it chose to trust?
It's going to be disappointing at best, and very dangerous at worst, to assume an AGI is infallible and won't run into the same problems human brains run into in the real world.
Models are a reflection of their data; isn't that absolutely clear at this point? This should not be news to anybody. Applied ML today is, in very large part, about collecting high-quality data.