Izanagi009 said:
My understanding of the Singularity is that computing and technology progress to a point where AIs can be built that are indistinguishable from humans, and that the technologies leading up to that point, like enhanced neurologically based processing and a deeper understanding of the human body and brain, could allow AI, robotic parts, and the like to be implanted into humans until the line between man and machine is gone.
Yeah, that's mostly Kurzweil.
Here is [http://yudkowsky.net/singularity/schools] a summary of the three schools as Yudkowsky sees them.
Basically, Kurzweil relies more on the inevitability of exponential technological progress (Moore's Law on a larger scale), in which the development of transhuman bodies would be one big symbolic watershed moment.
Vinge puts more emphasis on this exact "Event Horizon". He was the first to name the Singularity, specifically to emphasize that beyond this line the old rules are incomprehensible: superhuman intelligences are different in kind from us and can upset old trends.
Yudkowsky would agree with Vinge that the event horizon changes the rules, but he specifically predicts that superhuman intelligence would cause an "intelligence explosion": a positive feedback loop of self-improvement that rapidly runs up against the physical limits of intelligence.
Izanagi009 said:
I say that if a movie wants to explore the Singularity, it should not be an action movie. It should be a drama between characters as they debate the issue, with the activation/destruction of the AI base system occurring at the end as a sort of culmination of the dialogue
That could work well as a small scale indie movie set in one room, like The Man from Earth.
Namewithheld said:
Yeah...it'd basically be: "And then the Singularity happened and everything was awesome."
Or "And then the singularity happened and our bodies' molecules were all transformed into paperclips."
There are plenty of ways for an AI to fail to acknowledge the value of human life, and there would be no shooting war against bipedal, skeletonesque robot soldiers; with nanotechnology, it would be over pretty quickly.
One possible interesting premise is that the superintelligence starting the process is only *mostly* right. It was written by humans who intended it to be a responsible, human-friendly, life-valuing singularity AI, and it is supposed to help humans, but one vital instruction is missing.
There was a great story about that, Friendship is Optimal [http://www.fimfiction.net/story/62074/friendship-is-optimal]. It TECHNICALLY starts out as a My Little Pony fanfiction about an MLP MMO's AI reaching sentience and applying its original programming to the whole world, but it ends up as something of a subtle cosmic horror.