I don't think we'll get a techno-utopian future - but then
   again, I don't think we'll get _any_ utopian future, generally
   speaking. Utopias don't work.

   If that super-intelligent AI is smarter than us, then it'll have
   the smarts to ask whether it's good to exist at a
   higher-intelligence state when its creators are at a lower one.
   It may kill itself. That might be the smartest move - if it's
   truly smarter than us.

   For it to be smarter than us, it would have to have a more
   developed sense of morals and ethics. Otherwise, it's not
   smarter than us at all - just faster, perhaps, at a limited set
   of tasks.

   NOT programming an AI with social intelligence, with emotional
   intelligence, is unleashing a CRIPPLED being that's already at a
   lower level of intelligence than ours.

   The fear isn't super-intelligence: the fear is a
   super-intelligent psychopath [or sociopath, depending on what
   definition you choose].

   In short, an AI with a psychiatric condition or an embedded
   personality trait [again, depending on which definitions you
   choose].