This raises an interesting idea about "parenting" AIs. I see nothing wrong with creating a machine to fulfil tasks on our behalf - in other words, to act as a mechanical servant. But raising a child to act as your servant is abhorrent. So should we think of AIs as children or tools? The world of Frankenstein is becoming less and less speculative.
I used to think the threshold criterion between tool and companion species would be consciousness -- though admittedly that was always going to be difficult to identify. But now I think that even when the AI is undeniably intelligent, purposeful, self-reflective, and thus (presumably) conscious, humans will continue to say, "It's just a machine."
Frankenstein is an excellent place to start, but if you ask most people about the lesson of that story they'll say that it's about the danger of creating artificial life. Hardly anybody seems to realize that Mary Shelley was actually telling us that if you create life you are responsible for raising it.
Yup, it's that old adage: knowledge is knowing that Frankenstein is not the monster, and wisdom is realizing that he is.
My gut feeling, though I can't be precise about it, is that the dividing line is when an entity has independent desires and a wish to be happy, which is not quite the same as having goals and satisfying an algorithm. It's almost as if one of the key parts of this is the inability to articulate a rationale: I like things because I like them and because they interest me, and I can't explain why, and that's part of what makes me me. Data's attachment to his cat and love of film noir have no rational basis, but they show that he's conscious and sentient.
I think I need to re-read Little Fuzzy for the umpteenth time. That has a great take on defining sentience.
Your definition sounds pretty good to me -- though even that allows for malicious workarounds. If a slave/AI is brainwashed so that their main desire is to serve me and they say that pleasing me is what makes them happy, there will be people (I'm envisaging billionaire techbros now) who claim that justifies their servitude even if they are conscious.
Didn't Ava in Ex Machina conceive an interest in traffic flow? When she escapes her Bluebeard-like creator, she goes to a busy intersection in a city and watches the cars. On the whole I can relate to Data's interests better (cats and film noir make a lot of sense to me) but I knew some smart guys at school whose obsessions were more like Ava's.
There's always a malicious workaround. Humans (and, it seems AIs) are experts at finding loopholes.