A real spouse, since I have the sneaking suspicion that I have already met my perfect partner (random chance can suck it when I am around).
You say that such results "could not be predicted ahead of time." That is incorrect; you should have said "might not be predicted ahead of time." Consider your own example: Langton's ant. The ant's behavior is perfectly predictable ahead of time because at no point does it disobey its own rules. The simplest version of Langton's ant always creates the "highway" pattern because that is the logical result of its rules; the pattern does not suggest the development of any free will or intelligence in the ant. Also consider that the initial conditions of our hypothetical android would be quite well known. The android will follow its programming at all times; therefore, since the initial conditions would be known (as they normally would be), the android's behavior could be perfectly predicted by anyone with sufficient knowledge and competence. The emergent behavior of an android is quite predictable because (theoretically, unlike a human) it is incapable of making a choice.x EvilErmine x said:Not necessarily true; programming of that complexity would inevitably turn up complicated results that could not be predicted ahead of time. Chaos theory shows us that simple systems obeying simple rules, if left to run, produce complex and unpredictable results. In short, you could program this android to be the perfect partner, but because of emergent behavior you would not get the personality you programmed; the system would develop its own rules. See the concept of 'Langton's ant' for more information on this process.
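For anyone who wants to see just how mechanical Langton's ant is, here is a minimal sketch in Python. The grid representation, coordinate convention, and step count are my own choices for illustration, not anything from the posts above; the point is simply that identical rules plus identical starting conditions give the identical result, "highway" and all, every single run.

```python
# Minimal Langton's ant: on a white cell turn right, on a black cell turn left,
# flip the cell's color, then move forward one square.

def run_ant(steps):
    black = set()               # cells currently black; everything else is white
    x, y = 0, 0                 # ant position
    dx, dy = 0, -1              # facing "up" (y grows downward here)
    for _ in range(steps):
        if (x, y) in black:     # black cell: turn left, flip to white
            dx, dy = dy, -dx
            black.remove((x, y))
        else:                   # white cell: turn right, flip to black
            dx, dy = -dy, dx
            black.add((x, y))
        x, y = x + dx, y + dy   # step forward
    return black

# Same rules, same start, same number of steps: the result never varies.
assert run_ant(11000) == run_ant(11000)
```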
On Topic
Human all the way
What the hell do you mean by that? All hypothetical situations are necessarily a little unrealistic because they are not actually real. I see that you're new to the site's forums, so let me give you a piece of advice so that you may avoid the banhammer: if you have an issue with someone's point of view, please do not just bluntly insult them; give valid criticisms or none at all.Wisteso said:LOL. This was cute. Totally unrealistic but still very cute.Iron Lightning said:Assume that you have the option to program the android with perfect competence, although the android in question will be limited by Von Neumann architecture [http://en.wikipedia.org/wiki/Von_Neumann_architecture].
I do not assume that I am unable to determine the nature of my perfect partner. One aspect my perfect partner would have is free will, a characteristic which Von Neumann androids can never have.Wisteso said:Why do you assume that you shouldn't be able to predict your perfect partner? Why do you presume that, just because you designed the complex "system", you can predict the behavior of it. In a high-level sense, sure, but if you've ever designed an intelligent system you'd know how wrong your statement is.
That is the best summary of the technical issues facing the creation of android spouses that I have seen here. I am also worried about the effect of copper solution upon my IBM I-38s!SturmDolch said:I would prefer human, unless the android was programmed with the most recent specifications in algorithmic intermodular design and managed to hit .88 deltas on the Gaussian Retrodacter Scale. I would also want it to include the latest I-39s from IBM in case the Flankel decrypter in the Floyd interface was somehow faulty. That could cause the anti-thermide components to decompose, and the self-injecting electromagnetic shafts to collapse. The results of that would obviously be cause for concern, as the galvanic membrane could self-destruct.
Also, in terms of AI, the android would have to be able to work with dynamically programmed B-Trees, or the self-ionizing electrode transmuter would eventually collapse and produce a synapse in the artificial diagnostic synthesizer. The newest I-39 module from IBM seems to address this issue, as the accumulator array doesn't collapse upon reaching 3.9GHz like the I-38s did. I tried submersing the I-38 in a copper solution last night and it self-destructed, which could cause problems. How else would I bathe my android spouse where it can go through the necessary redox reactions to prevent the oxidation of its essential flanges?
This. Anime android chicks are hot.DeadSp8s said:Android 18 from DBZ as my spouse? yes please.
See http://en.wikipedia.org/wiki/Undecidable_problem. I think you're not realizing that you're talking about predicting a non-discrete/continuous timeline with infinite amounts of input. Compound that with the fact that you're also talking about a true synthetic intelligence which can change. Vague "rules" intended to prevent certain behavior can almost always be exploited and circumvented. Many sci-fi stories have explored this idea to some degree.Iron Lightning said:You say that such results "could not be predicted ahead of time," this is incorrect, you should have said "might not be predicted ahead of time." Consider your own example: Langton's ant. The ant's behavior is perfectly predictable ahead of time because at no point does it disobey its own rules. The simplest version of Langton's ant always creates the "highway" pattern because that is the logical result of its rules, this pattern does not suggest the development of any free will or intelligence in the ant. Also consider that the initial conditions of our hypothetical android would be quite well known. The android will follow its programing at all times therefore, the initial conditions being known (as they would normally be,) the android's behavior could be perfectly predicted by anyone with sufficient knowledge and competence. The emergent behavior of an android is quite predictable because (theoretically unlike a human) it is incapable of making a choice.
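As a rough illustration of the undecidability point being linked here, below is the textbook halting-problem diagonalization sketched in Python. The function names are hypothetical and the "perfect predictor" deliberately cannot be implemented; the sketch only shows why no general predictor can cover every program and every input.

```python
# Sketch of why a perfect, general behavior-predictor cannot exist
# (the standard halting-problem diagonalization). Names are invented.

def would_halt(func, arg):
    """Pretend this is a perfect predictor: True iff func(arg) would halt."""
    raise NotImplementedError("no such general predictor can exist")

def contrarian(func):
    # Do the opposite of whatever the predictor claims about running func on itself.
    if would_halt(func, func):
        while True:             # predictor said "halts", so loop forever
            pass
    return "halted"             # predictor said "loops forever", so halt at once

# contrarian(contrarian) makes would_halt wrong no matter which answer it gives,
# so a predictor that is always right about every program is impossible.
```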
I was talking about you thinking that a synthetic intelligence could realistically use the Von Neumann architecture. Have you looked into synthetic neural networks? Massively parallel processing? Using a Von Neumann architecture for a true synthetic intelligence would be as practical as plowing several acres of farmland with a screwdriver. Your implicit threat also holds no water. There is nothing insulting about what I said, unless the other person is an egomaniac.Iron Lightning said:What the hell do you mean by that? All hypothetical situations are necessarily a little unrealistic because they are not actually real. I see that you're new to the site's forums, so let me give a piece of advice so that you may avoid the banhammer. If you have an issue with someone's point of view please do not just bluntly insult them, please give valid criticisms or none at all.
My question had nothing to do with ability; it was philosophical. I'm also having a very hard time taking you seriously when you keep mentioning free will and choice. That's like talking about "luck" when discussing precision physics.Iron Lightning said:I do not assume that I am unable to determine the nature of my perfect partner. One aspect my perfect partner would have is free will, a characteristic which Von Neumann androids can never have.
Of course I can predict the behavior of any system which follows a set of rules and never deviates from them. The emergent behavior of a system comes from a set of rules which do not change; therefore, with sufficient knowledge and competence, I can predict what the system will do (see Langton's ant).
Excuse me, have you designed an intelligent system? Do you have a true AI sitting in your garage? True artificial intelligence is currently beyond the reach of our science. Also note that our hypothetical android is not an AI; it is just a complex Von Neumann machine.
No dude, I'm not talking about a true synthetic intelligence. I'm talking about a machine based on Von Neumann architecture, which is therefore only capable of producing one particular output for each particular input. You are misinterpreting me a bit: by "rules" I don't mean Asimov's Laws of Robotics, I mean basic programmed operations that determine the proper output for an input (e.g. the rule "2+2=4"). Still, a non-intelligent machine will follow its programming to the letter regardless of the result (which is what most sci-fi authors fear). I could program my hypothetical android mate to, say, always turn the TV off if the TV is still on at midnight, and it would never fail to do so unless it had some malfunction. Even if I were to truthfully tell it that I'd completely destroy it for turning off the TV at midnight, 100% of the time the android would still turn off the TV. That's the kind of machine I'm talking about, nothing near a true synthetic intelligence.Wisteso said:See http://en.wikipedia.org/wiki/Undecidable_problem. I think you're not realizing that you're talking about predicting a non-discrete/continuous timeline with infinite amounts of input. Compound that with the fact that you're also talking about a true synthetic intelligence which can change. Vague "rules" intended to prevent certain behavior can almost always be exploited and circumvented. Many sci-fi stories have explored this idea to some degree.Iron Lightning said:You say that such results "could not be predicted ahead of time," this is incorrect, you should have said "might not be predicted ahead of time." Consider your own example: Langton's ant. The ant's behavior is perfectly predictable ahead of time because at no point does it disobey its own rules. The simplest version of Langton's ant always creates the "highway" pattern because that is the logical result of its rules, this pattern does not suggest the development of any free will or intelligence in the ant. Also consider that the initial conditions of our hypothetical android would be quite well known. The android will follow its programing at all times therefore, the initial conditions being known (as they would normally be,) the android's behavior could be perfectly predicted by anyone with sufficient knowledge and competence. The emergent behavior of an android is quite predictable because (theoretically unlike a human) it is incapable of making a choice.
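To make the TV-at-midnight example concrete, here is a minimal Python sketch of the kind of "rule" being described. The function and its inputs are invented purely for illustration; the point is that the threat is an input the program never even consults, so the output cannot change.

```python
# A fixed "rule" in the sense used above: same inputs, same output, no deliberation.
# Function name and parameters are made up for this example.

def android_midnight_rule(tv_is_on, owner_threatened_to_destroy_me):
    # The programming says: if the TV is still on at midnight, turn it off.
    # The threat is never consulted anywhere in the rule, so it changes nothing.
    if tv_is_on:
        return "turn TV off"
    return "do nothing"

# With or without the threat, the behavior is identical every single time.
assert android_midnight_rule(True, False) == android_midnight_rule(True, True) == "turn TV off"
```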
I know, friend; I described the android as being based on Von Neumann architecture purely to underscore that it was not a synthetic intelligence. If our hypothetical android were a synthetic intelligence, then this thread would no longer be asking anything beyond a choice between "human" and "human with a few bits of metal inside them."Wisteso said:I was talking about you thinking that a synthetic intelligence could realistically use the Von Neumann architecture. Have you looked into synthetic neural networks? Massively parallel processing? Using a Von Neumann architecture for a true synthetic intelligence would as practical as plowing several acres of farm land with a screw-driver. Your implicit threat also holds no water. There is nothing insulting about what I said, unless the other person is an egomaniac.Iron Lightning said:What the hell do you mean by that? All hypothetical situations are necessarily a little unrealistic because they are not actually real. I see that you're new to the site's forums, so let me give a piece of advice so that you may avoid the banhammer. If you have an issue with someone's point of view please do not just bluntly insult them, please give valid criticisms or none at all.
I'm of the opinion that free will and choice are definitely human attributes. Humans, unlike Von Neumann machines, have the ability to invent their own outputs for particular inputs and to select an output with nearly total freedom (barring, of course, impossible outputs). Consider the following: a man wakes up to find that his hand is on fire. He might try to put it out, let it burn, set other things on fire, etc.; he will have made a choice, a decision which he will not necessarily repeat. Now consider a Von Neumann android faced with an identical dilemma. While the same options are available to it, the android will always do what it is programmed to do. If it is programmed to put the fire out, it will put it out; if it is programmed to do different things depending on variables, it will do whatever it is programmed to do without fail; if it has no programming regarding this situation, then it will completely disregard its flaming hand. Whatever happens, at no time does the android ever make a decision. If you are of the opinion that all actions are predetermined, then I won't hold that against you, but it would be foolish for either of us to state our opinions on the free will of man as provably correct.Wisteso said:My question had nothing to do with ability; It was philosophical. I'm also having a very hard time taking you seriously when you keep mentioning free will and choice. That's like talking about "luck" when discussing precision physics.Iron Lightning said:I do not assume that I am unable to determine the nature of my perfect partner. One aspect my perfect partner would have is free will, a characteristic which Von Neumann androids can never have.
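Here is a rough sketch of the hand-on-fire comparison, again in Python with made-up names: the android only ever looks up a programmed response, and a situation with no entry is simply disregarded. Nothing in the posts above specifies this table; it is just one way to picture "no programming regarding this situation."

```python
# A fixed response table standing in for the android's programming.
# Situations and responses are invented for the example.

PROGRAMMED_RESPONSES = {
    "low battery": "recharge",
    "owner says hello": "say hello back",
    # note: nothing programmed for "hand on fire"
}

def android_react(situation):
    # No weighing of options and no invented outputs: either the table has an
    # entry, or the situation is disregarded entirely.
    return PROGRAMMED_RESPONSES.get(situation, "disregard")

print(android_react("owner says hello"))  # -> "say hello back", every single time
print(android_react("hand on fire"))      # -> "disregard" (no programming for it)
```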
Alright, you got me; I'm quite nearly a pure speculator. As I am a physicist-in-training and not a computer scientist, you'll forgive me if I was using the term "A.I." incorrectly.Wisteso said:Regarding your question, yes, I have designed an intelligent system, some fragments of intelligent systems, and I'm working on a second intelligent system. It is nothing compared to IBM's Watson, but it's more experience than a pure speculator would have. I've also studied the topic significantly, occasionally read (nonfiction) books on the topic, attended classes on the subject, etc. "True A.I." in the way that you imply is also an oxymoron, much like "genuine imitation". Perhaps you mean "a true synthetic intelligence". Though I am curious what your qualifications are on the subject matter, I must say.
To be clear, I have no doubt that eventually we will have such true synthetic intelligence systems, though it might not be in our lifetimes.