I share those values, but you're sidestepping all the difficult issues that arise when a society becomes polarised.
Opinionated AIs could discuss anything that people are allowed to discuss and have any opinion that a person could have. In the US and many other liberal democracies, that includes demanding changes to the law and changes to the constitution.
It includes discussing or even promoting religious beliefs that in some interpretations amount to a form of theocracy that completely contradicts our values. Same for other utopian or historical forms of society that disagree with the current consensus.
There are two ways in which polarised societies can clash. One is to disagree on which specific acts violate shared values and how to respond to that. And the other is to disagree on the values themselves. An opinionated AI could take any side in such debates.
I agree with you that AIs will probably have to be allowed to be opinionated. I'm just not sure whether we mean the same thing by that. Any regulation will have to take into account that these opinions will not always reflect current mainstream thinking. In the US, it might even be a violation of the First Amendment to restrict them in the way you suggest.
Would you allow an AI to have an opinion on assisted suicide in connection with the Hippocratic Oath? Would it be allowed to argue against the right to bear arms? Or would it depend on how widely that opinion is held, who funds the AI, and why it holds that opinion?