Zonie63 -> RE: A little early for the debate, but considering recent developments (12/9/2012 7:57:18 AM)
quote:
ORIGINAL: jlf1961

Recently a computer beat the champions at Jeopardy (BBC Story), which is one step closer to passing the Turing test. So, should we actually build a true artificial intelligence, there are a couple of questions that are already at the center of debate. 1) Would such a system be considered alive, since the unit would have to be self-aware? 2) Would such a system have rights under the law?

Well, as the article you linked indicated, we're still a long way from any true form of artificial intelligence being created. However, I think there have been some interesting developments toward organically-based processors rather than silicon:

http://www.sciencedaily.com/releases/2010/04/100425151146.htm
http://www.gizmag.com/organic-molecular-computer/15041/

Perhaps, someday, harvested brain tissue from animals or cloned brain tissue from humans might be used as organic processing units for computers which might be capable of becoming self-aware. It's an interesting idea, but I still find it hard to fathom at times, even though I'm a fan of Star Trek as well as The Terminator and The Matrix series, where AI is the prominent theme. However, when watching those movies, I was always waiting for the other shoe to drop, for some evil (but very human) programmer to be revealed behind the whole thing.

That would probably be the first hurdle in determining self-awareness, since it would be hard to tell whether it was the work of a very brilliant programmer and computer designer, or whether the device in question really is self-aware and capable of independent thought. Are we dealing with something that can truly form independent thoughts and make the same choices as a rational, functioning adult? Or is it simply doing what its programmer told it to do, in which case it wouldn't really have the ability to make its own choices?

Whether it's "alive" or not is even trickier. Would AI be considered a human invention, human offspring, or some sort of technological "animal," like a watch dog or a work horse? Would it be property? Or would it be given the right to refuse? That was one of the main questions asked in the TNG episode "Measure of a Man" mentioned above. Was Data the property of Star Fleet? Even if Data had been declared non-sentient, there was still the question of ownership, since Data was merely found by Star Fleet. Star Fleet did not construct Data, although his creator was presumed dead at the time (even though he showed up in a later episode). Data himself ostensibly made the choice to apply to Star Fleet Academy, to which he was accepted; he graduated with honors, was given a commission, and was treated as a valued member of the crew. It wasn't as if he was suddenly drafted into Star Fleet, so the presumption that Data was the property of Star Fleet seemed ill-conceived on that basis as well.

Another line from that same episode, "Data is a toaster," also seemed flawed, since Data was one of a kind, created by a scientist presumed dead, and no one knew how to build any others. So, even though he was human-created, his existence might arguably be a work of art which should not be dismantled and possibly destroyed.

A similar question was raised in TNG (and delved into more extensively in VOY) regarding the apparent sentience of holographic characters, such as Moriarty in Data's and LaForge's recreation of Sherlock Holmes, as well as the character of The Doctor in Voyager.
The holographic programs in question were so sophisticated that they became self-aware and developed human-like qualities and emotions even more rapidly and seamlessly than Data. There was a similar episode in VOY in which The Doctor asserted that holograms have rights as sentient lifeforms in the Federation; it ended by showing many of his "brothers" working in a mine like slaves.

Another contrast was that Data was actually designed to be sentient, as that was the intention of his creator, whereas the sentience of the holographic lifeforms happened as an unintended consequence. They weren't intended to be sentient by their programmers; they just became that way. Similarly, in The Terminator and The Matrix series, the self-awareness "just happened," apparently against the intentions of the programmers.

But that would be the biggest obstacle, at least as far as determining their self-awareness, their ability to make independent choices, and the granting of rights. Is there some intention to technologically replicate human intelligence and create an actual artificial lifeform, or is it something that's expected to "just happen" as an unintended consequence of more sophisticated computer technology? Are we just making a better tool, or are we intentionally trying to create a separate lifeform with intelligence and sentience?

I think we're just trying to make a better tool, and that's how it will always be used. Of course, just like any tool (or weapon), we'd have to guard against it being used for malicious purposes. That's probably the bigger issue at stake. If and when computer technology becomes that sophisticated and so much a part of everyone's daily lives (even more so than now), I think the issue of who has their fingers on all the buttons is more important than whether or not the AI has any rights. That may be one reason why AI wouldn't be given any rights: it would probably be viewed as being under someone else's control and not really operating independently, no matter how sophisticated the technology and programming might be.