In the first type of scenario that I mentioned, where you have a single superintelligence that is so powerful, then yes, I think a lot will depend on what that superintelligence would want. The standard example is that of a paperclip maximizer. It seems that almost all such goals, if consistently and maximally realized, would lead to a world where there would be no human beings, or indeed perhaps nothing that we humans would accord value to.
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist that dwarf human intelligence, they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.