Why would an AI only give limited information to humans?
I’m thinking about an AI similar to The Machine from Person of Interest, but in a generally more advanced setting and with a broader goal: serving as a shepherd for humanity rather than solving crimes. I’d also like it to follow Stuart Russell’s principles for AI.
Why would the AI give people only a limited amount of information instead of telling them exactly what they need to know? Person of Interest had a fairly good answer rooted in the procedural nature of the show, but how could this be justified more broadly?
EDIT: To clarify, I mean a Friendly AI that is working for humanity, not one turning against it. The question is why a friendly AI would withhold full information that would let people actually act, not why or how it would turn against humanity. That’s already been done.