Call it what you will, but I don't believe that good triumphs over
   evil and all of that. You're assuming that I do, although I
   understand I have logical positivist tendencies that need
   pointing out, and I appreciate that, somewhat.

   Call it years of Star Trek.

   Yes, the safety measures usually come AFTER the disaster.

   But safety measures increase progressively in all of the
   PRECURSORS to AI.

   Why do you believe we would be JUST AS UNPREPARED for the AI of
   100,000 years from now as we are today?[1] Here, I drew you a
   picture. This is what I'm talking about. If we were at the
   bottom, the amount of disaster from a future AI would be
   unthinkable.

   Yet, EVEN THOUGH safety measures LAG BEHIND, they're not _that
   far behind_. I've done some work in Risk Assessment and
   Mitigation. There are a lot of factors involved and not
   everything is predictable, but a certain amount of potential
   disaster can be contained if proper measures are taken. I'm not
   _just_ talking out of my ass giving my "Pollyanna" opinions.
   Insurance companies are worried about AI. They're worried that
   they won't be able to sell as much insurance as the world gets
   safer and safer. That's new. So AI has the very real possibility
   of putting insurance companies out of business. If insurance
   companies are out of business, there's nobody out there
   assessing risk for profit.

   If there's nobody assessing risk for profit, then what's the
   motive for assessing risk anymore?

   If there's no motive for assessing risk, then nobody will be
   watching the farm.

   All the horses will come out and stampede over humanity. Come
   back to this planet, James. It's a Science Fiction possibility,
   yes. The world is bigger than a story of possibilities and
   probabilities though. Considering you have more emotionally
   invested in this topic than I do, I'll give you the "most
   logical argument" trophy. But AI still isn't going to destroy
   the planet.
   People with financial and other motives will ensure that any
   catastrophe of that consequence can somehow be mitigated. I
   don't have to worry about it. You may
   call me a Pollyanna now if you wish and feel that you have won.
   I am ok with that.

   I know for a fact that I am but a 43-year-old man sitting in a
   yellow chair, discussing an issue far out of my reach with
   someone whose control over it is just as far out of theirs, and
   that ultimately this becomes just a discussion topic on the
   Internet of little consequence.

   You *did* help give me some material for my arguments against
   AI taking over the planet, and for that, I appreciate it. This
   *is* how you appear, as did Hawking and Elon and the others. A
   valid concern is one thing. The hyperbole and lack of specifics
   relevant to ACTUAL AI, however, reduce support, as it sounds
   like an Isaac Asimov tale rather than a valid concern.

References

   Visible links
   1. http://icopiedyou.com/wp-content/uploads/2015/06/containment-ability.png