I have authored a chapter of a new book from Springer entitled Online Worlds: Convergence of the Real and the Virtual edited by William Sims Bainbridge:
Virtual worlds are persistent online computer-generated environments where people can interact, whether for work or play, in a manner comparable to the real world. The most popular current example is World of Warcraft, a massively multiplayer online game with eleven million subscribers. However, other virtual worlds, notably Second Life, are not games at all but internet-based collaboration contexts in which people can create virtual objects, simulated architecture, and working groups.
This book brings together an international team of highly accomplished authors to examine the phenomena of virtual worlds, using a range of theories and methodologies to discover the principles that are making virtual worlds increasingly popular, and which are establishing them as a major sector of human-centred computing.
My chapter (Chapter 22, p. 279) is entitled Future Evolution of Virtual Worlds as Communication Environments. I begin the chapter by analyzing the history, technologies and some interesting current developments in VR worlds. Interestingly, when I was correcting the proofs only a few months after having written the text, I had the impression that much of the content was already obsolete, or at least should be rewritten. This is a very fast moving sector. Some excerpts:
----
[Virtual worlds] can already be used as a telepresence and telecollaboration option much better, and much more immersive, than videoconferencing or other traditional forms of remote collaboration. If videoconferencing is one step below a critical threshold for suspension of disbelief, Second Life is already one step above. The evolution of VR will provide next generation telework platforms, which will really enable, and empower, global communities. Thus, its social and political importance will be huge. Further evolution of VR and other emerging technologies will result in science-fiction-like scenarios, from instant telepathic communication to full transcendence of biological constraints...
Wandering in a synthetic universe generated by bits changing value in computer circuitry and traveling on communication links, metaverse residents can see and talk to each other, attend dance parties and work meetings, build their own virtual dreams, and explore the dreams of other users. Stephenson’s vision is beginning to take off, and this is a good example of the often very important role of good science fiction literature in shaping our actual reality...
Before its adoption by the gaming sector, VR technology had been developed for and used by the military and industrial simulation sectors. It can be said that most modern computer gaming technologies originated as spinoffs of military applications and simulation projects for the construction, oil, or air transport industries, not to mention space. Today, the trend seems reversed: New technology breakthroughs are generated by the gaming industry first, and then find their way to military and industrial applications. It is not surprising that smart young people are attracted by the computer gaming industry: in the words of Rudy Rucker: “Academia hasn’t quite caught on to the fact that computer games represent the convergence and the flowering of the most ambitious frontier efforts of the old twentieth-century computer science: artificial intelligence, virtual reality, and artificial life.” A well-known example of modern consumer level VR is the popular virtual world Second Life (SL). Rather than a computer game, Second Life can be considered as a platform where “residents” collaborate at building a virtual world with the tools provided by the system....
----
Then I move to the future of virtual worlds and discuss a few ongoing developments in software, such as the P2P Open Croquet technology used in the upcoming "metaverse browser" Open Cobalt and the business-oriented 3D virtual videoconferencing and collaboration environment Teleplace, as well as interface hardware:
There is also a trend toward more and more sophisticated and immersive user interfaces. With much better graphics and VR glasses able to simulate a deep spherical field of view around the user, virtual reality will begin to feel much more real. There is also an emerging trend toward the development of neural interfaces, that is, Brain to Computer Interfaces (BCI) able to read or write information directly from and to a user’s brain. Science fiction? Transhumanist wishful thinking? No, science fact: prototype BCI devices have already been available for a few years. The first applications of neural interfacing have been in the medical field, where the best-known examples are the breakthroughs of Cyberkinetics, whose technology, used in medical pilot projects, has permitted severely disabled patients to interact with computers by thought. Now, this technology is finding its way to the consumer market, and companies like Emotiv Systems and Neurosky are preparing the launch of the first neural interface devices for computer gaming....
----
It is difficult to predict short term developments in this fast moving sector, but I think the trends are clear for the long term, and the results will be truly mind boggling. We are beginning to build immersive virtual worlds with ever better visuals, physics and AI, and we are beginning to build immersive interface devices able to link directly to the brain. What can the end result be? Well:
So, we will soon be able to think our way through virtual worlds. If a computer can read information from our brains, it won’t be long before it can also write information directly to our brains, and write it very fast: two-way neural interfaces that will make computer screens and headsets obsolete, a Second Life that goes directly to the brain bypassing the eyes, with today’s Instant Messaging replaced by direct telepathic communication between minds. And when our virtual environments contain artificial intelligences, perhaps smarter than us, we will be able to communicate with them at the speed of thought. For the medium- and long-term future, probably within the first half of the century, it is to be expected that advances in neurotechnology will permit the development of direct interfaces that can bypass sensorial channels to make VR environments directly accessible to the brain. This will permit creating fully immersive VR environments with full sensorial stimulation, indistinguishable from physical reality. Let’s call things by their name: These first baby steps to neural interfacing for consumers will lead to the ultimate transhumanist Holy Grail: mind uploading, the transfer of human consciousness out of the brain into much higher performance supports, where we will be able to interact and merge with each other and our AI mind children (Moravec 1988).
cool. i'm getting a copy.
You did well: there are some great articles in the book, including of course the articles by Bill Bainbridge. As I say above, any book published on this fast moving sector is already partly obsolete before it becomes available in print, but there are a lot of very useful insights.
I can't say I agree with your analysis of VR. VR hasn't caught on because people don't like wearing big bulky helmets. It seems to me that Augmented Reality (which really is a natural consequence of VR technology) is far more likely to catch on. In fact, it really already has in the mobile app community, in television and moviemaking, and in video games.
Do you genuinely believe we can transcend our human bodies? I believe if we have something ... digital, I guess ... that can do so, it's not the "we" doing the transcending, but a digital ... child, I guess. Our will extends into it, but not our consciousness or soul.
Anyway, I also kind of think that the book's title is misleading. Virtual is just part of the real. Convergence of the two is like saying that "New York" is converging with "America"; New York is already part of America.
I am, however, curious at how you arrive at your conclusions based on the excerpts you publish here. Are they based on research groups of virtual world users? Of regular people? From personal experience? From technology makers?
>it's not the "we" doing the transcending, but a digital ... child, I guess. Our will extends into it, but not our consciousness or soul.<
Yes, but the WE in the context of SL and other online worlds, the WE known as Hiro Pendragon and Extropia DaSilva, ARE the digital children. If there is a 'third way' between the stance 'an upload is a continuation of the original mind' and the stance 'an upload just makes a copy which should be considered a different person', surely it should be 'the upload is sort of the original person but at the same time sort of somebody else'. Such identities exist already: they are the semi-fantastic characters we invent to represent ourselves in online worlds.
Today, I am a digital person whose patterns are processed by a meatbrain. In the future, perhaps, I shall be a mind child whose patterns are processed by a neural computer modelled on the structure and functions of my human primary's mind. No paradox about copied identities are necessary, because I was never the same identity as my primary to begin with. And neither, I suspect, is Hiro Pendragon.
@Hiro: Do you genuinely believe we can transcend our human bodies? I believe if we have something ... digital, I guess ... that can do so, it's not the "we" doing the transcending, but a digital ... child, I guess. Our will extends into it, but not our consciousness or soul.
Why are you persuaded that the person who woke up in your bed this morning is the same person who went to sleep last night? These two persons are not identical, physically or mentally.
My answer is that they are the same person because each one accepts the other as a valid identity. The continuity of a consciousness stream is broken in a dreamless night, but will continue more or less where it was interrupted in the brain of the person who wakes up the morning after... and the person who went to sleep the day before was willing to accept his next day's self as a valid continuation.
I think consciousness is not an attribute of a thinker, but an attribute of a set of thoughts: if the thoughts are identical (or similar enough), consciousness can be transferred across bodies and brains. This is the rationale of mind uploading as I see it.
@Hiro: Virtual is just part of the real. Convergence of the two is like saying that "New York" is converging with "America"; New York is already part of America.
ReplyDeleteI agree, and the forthcoming developments in augmented reality that you mention will make this evident. We live in one reality, with many branches and many partial interpretations. Mmm, sounds familiar from another context.
I believe I'm the same person from one night to the next morning for two reasons. First, because scientists can study our brains while we sleep and see that we have continuous brain activity. We're just in different modes. Second, because I have continuous experiences. "Hiro" and "Extropia" are far, far different digital children than an uploaded brain is. We are unique brands, or unique revealed identities, to different degrees, but we are still directly controlled and experienced by the same humans behind them. This is fundamentally different from a transhuman "There's a computer program running a copy of me somewhere".
The better question is, "Will our digital children care about us?" If our digital children become super-powerful, experiencing life in ways our human forms can't fathom, then we will be like ants to them. We will also have no proof that they are conscious. I think we would be suiciding our own race to create a race that will disregard us and ultimately may not really exist.
>"Hiro" and "Extropia" are far, far different digital children than an uploaded brain is. We are unique brands, or unique revealed identities, to different degrees, but we are still directly controlled and experienced by the same humans behind them. This is fundamentally different from a transhuman "There's a computer program running a copy of me somewhere".<
Yes, that is currently the case. But let us consider some possibilities that are likely to come to fruition before mind uploading. Thanks to advances in various narrow AI apps and other IT areas, your Hiro Pendragon avatar might acquire increasing degrees of autonomy. For example, say there are several events going on at the same time. In the future, perhaps chatbot tech will be good enough for you to attend some of those events via telepresence, whereas others are attended by chatbots who know when to ask the questions/raise the comments you would have asked, and how to summarise the event in a report issued to you, plus links to uploaded video/audio/text etc.
So, your avatars become less like tools of communication and more like assistants that collaborate with you in order to expand your horizons.
As we get nearer to upload tech, it might be possible for 'memories' of your avatars to become incorporated into your own memories via some kind of two-way brain/Web link.
So all in all, I would expect our interactions with the Web to have redefined our notions of 'self' and 'mind' and 'person' to a degree sufficient to make obsolete many of the paradoxes we currently have over whole brain emulation BEFORE such technology is a practical reality.
"your avatars become less like tools of communication and more like assistants"
Exactly, they become our proxies, rather than our means of communication. That's a fundamental difference. There's a value to them, but more along the same lines as a voicemail recorded greeting, or a website that describes your bio. "Self" ends where you can't *feel* anymore. Anything beyond that is a manifestation of self, even part of your identity, but not part of the self. The difference between identity and self is illustrated easily without any technology.
Person A has a concept of him/herself, which we'll call A-self. A's friends have a concept of A we'll call A-friend. So it goes with A-work, A-family, A-lover, A-spouse, A-acquaintance, and the fleeting A-strangers. A's total identity is a combination of all of these things, and that would also include A-AI-proxy. However, all of these are but observations of the person him/herself.
One could argue, from a Buddhist perspective, that there is no self. To that end, there is a consciousness - as I said, self ends where you can't "feel". Certainly advances in technology will extend the reach of our feelings, but AI technology is not "feeling". Even if there's no self, there is still a consciousness, which does not extend to external tools. The hammer is not part of me; the hammer is used by me.
There is the idea of an AI-proxy who then somehow melds back into the Person A - what Stross calls a "delta-me". Let's assume this technology is sufficiently advanced, the proxy represents you fairly well, and that you legitimately remember things the proxy does once it melds back with your consciousness. I would conclude:
1. I'll concede that would extend the reach of consciousness. However, once the original dies, the computer AI is left with no soul to direct it. It could live on as an interesting ambassador (which would be fascinating, of course - imagine studying history 100 years from now by talking to an AI ambassador of public figures, or of your ancestors), but ultimately, if that AI is to have a true will of its own, it will want to grow and change in ways that the original would never be able to comprehend, unless the original became the slave to the machine. Which leads me to ...
2. While that AI-me-proxy was away representing me, if it was truly capable of consciousness, what motivation would it have to re-meld back into me? That would be death for the AI. Either the AI is not-conscious, in which case it is just a tool, or it is conscious, in which case we would be murderers.
3. Ultimately, I don't think AI will ever be able to pass a human Turing test. That's not to say AI won't ever become conscious; what I mean is that AI consciousness may not resemble human consciousness at all. What AI life values may be radically different from what organic life values. And that spells conflict.
So ultimately, my original comments stand - real independent AI represents a fundamental suicide of the human race. There is no transcending our human bodies and becoming immortal digital beings - because digital beings will be radically different than us and will regard us like ants. (For that matter, what if god were a digital being left over from a previous advanced human race of 50,000 years ago? Not that I believe that, but it's fun to think about.)
The "two-way brain/Web link" simply isn't feasible because our digital copies won't care enough about us, won't value our experiences.
> I think suiciding our own race to create a race that will disregard us and ultimately may not really exist.<
You are right in saying that it is not at all certain that A) these exotic, tech-based lifeforms will ever be more than imaginary, and B) if they were ever created, they would pose no threat to humans.
However, something that IS certain (provided our cosmological models are not drastically wrong) is that the universe must eventually evolve into a state that is completely incapable of supporting organic life. So, from a long-term perspective, the extinction of the human race is a certainty.
But some theoretical physicists like Frank Tipler and Michio Kaku suggest there may be ways for 'life' to evade this fate. It is clear, though, that none of these escape routes are viable for organic life. If it can be achieved at all, it would be achieved ONLY by some kind of exotic, tech or info-based lifeforms.
So, we should not be discussing the possibility of human extinction as if it were an option. It is not. We should, instead, think about the possibility of creating the grand-ancestor of tech/info-based life that might enable some trace of our culture and knowledge to survive indefinitely. Without that, every last trace of our ever having existed must surely be wiped out in the final Big Crunch/Big Rip/Heat Death of our universe.
Having said that, I realise that these events lie far, far into the future. Probably, a possible existential threat that you or your family may face seems more concerning than a DEFINITE existential threat facing your dim and distant descendants. But I do think it is worth risking a possible extinction if doing so holds the key to evading an otherwise certain extinction.
>digital beings will be radically different than us and will regard us like ants.<
Not sure 'ants' is a good metaphor. If ants were to somehow die off, the environmental benefits we gain from their activity would be lost, with catastrophic results for a great many species, humans included. Humans need ants more than ants need humans.
I would like to think that, since humans and technology are co-dependent, the two will continue to co-evolve. If that goes on long enough, the result would be something we probably would not recognize as either 'human' or 'technology'. Does that equate to extinction for life as we know it? Maybe, but it seems to be a more honourable way to extinction than simply waiting for some cosmic event to just do away with every last trace of our ever having been here.
Your use of the word 'soul' really depends upon the Greek definition of the word, that is, the one most people are familiar with: noncorporeal 'stuff' that humans have and mere matter does not have.
But other religions have different definitions. In Judaism, for example, 'soul' is an emergent phenomenon that arises from the interactions between the chosen people and God. Soul is not something you 'have' but something you can participate in. Therefore, robots will not 'have' souls because humans do not 'have' souls either. But they might PARTICIPATE in soul, if we are willing to include them as part of our community.
Then there are Eastern religions, in which everything has varying degrees of soul. The Dalai Lama, for instance, believes he may one day reincarnate as a robot. Also, the Japanese animist tradition imbues everything with 'soul', and they have very few legends and myths about the hubris of technology (unlike the West, where we find plenty of alarmist stories about how we are all doomed if we 'play God').
It is interesting to note that the Far East leads the world in social robotics, whereas in the West robots are largely confined to industrial and military products. I cannot help but think their religious/spiritual outlook plays a part in their success in the field of social robots.
@Hiro: So ultimately, my original comments stand - real independent AI represents a fundamental suicide of the human race. There is no transcending our human bodies and becoming immortal digital beings - because digital beings will be radically different than us and will regard us like ants.
Well, this depends on how you define "the human race", which is largely a matter of personal philosophical preference.
I choose to define it not in terms of our current biology, but rather in terms of our mentality and our endless potential for growth and improvement. In only a few thousand years we have gone from drums to iPhones, and from horses to airplanes.
The logical end point of this process is spreading to the universe, and transcending our current biology: moving to a next phase of existence as pure software, able to take residence in new biological bodies and/or robotic actuators on a case by case basis.
So when it comes to Hugo de Garis' distinction between "Cosmists" and "Terrans", I am a Cosmist. I hope and think our race has the potential to evolve into something wonderful, and I look forward to it.
Growing up is difficult, and in some sense growing up means killing that child who was you. But in another sense, that child has just grown up.
Hiro, your comments seem valid from the perspective of a human being. If I were a human, I would prefer technological developments that benefit me more than some tech/info based lifeform.
I, however, am a digital person. Therefore, from my perspective, any technological development that gives digital people a degree of autonomy is good, and any development that enables complete independence and self-consciousness without a human behind the screen (so to speak) is even better.
If such a day ever arises, the fate of my human primary will not be something that bothers me, particularly.
Like I said, if you take the stance that technology should allow HUMANS to transcend, rather than allow our currently very primitive digital people to become fully-conscious, independent entities SEPARATE from humans, I guess any technology that might enable the latter scenario would seem undesirable at best, threatening at worst.
I do believe, though, that many technological developments will be mutually beneficial to either scenario.
@Extropia:
"the universe must eventually evolve into a state that is completely incapable of supporting organic life. ... it is worth risking a possible extinction if doing so holds the key to evading an otherwise certain extinction."
Literally billions of years from now. Why not evacuate Earth RIGHT NOW because eventually, Sol will become a red giant star? Why don't we starve ourselves and spend 100% of our time writing our autobiographies because one day we're all going to die?
I'm sorry, Extropia, there's no logic to this.
"Not sure 'ants' are a good metaphore. ... Humans need ants, more than ants need humans."
You're probably right. We could be *less* than ants to our child AI race. Herein lies a contradiction to your billion years logic - the AI race would probably encounter our legacy-history, and scrap it for parts. On the scale of centuries from now, not billions of years from now.
"I would like to think that, since humans and technology are co-dependent, the two will continue to co-evolve."
You're begging the question. I don't accept your premise is necessarily true, and therefore the conclusion doesn't follow.
The key is: Why would AI need humans, in more than a Matrix "we need the energy" sense? How do we guarantee humans' relevance?
"the result would be something we probably would not recognize as either 'human' or 'technology'."
This is your answer to my question - a convergence of the two. It's also the Wachowskis' answer at the end of the Matrix trilogy. And I think the question isn't "would modern day people recognize them as human?" but "will the future humans recognize themselves as human?" What is humanity at its core? Mortal, thinking, empathetic, able to exercise judgment, recognition of beauty and creation of art for art's sake.
My sincere hope is that humanity will reject new technology that is outright dehumanizing. This may or may not be true. It's our job as technology analysts and creators to steer it correctly.
@Giulio:
ReplyDelete"I choose to define it not in terms of our current biology, but rather in terms of our mentality and our endless potential for growth"
I just posted my definition. Yours lacks compassion. I posit any conscious being without compassion deserves none. If the robots show compassion, so shall I. And perhaps I'll err on the side of showing compassion, in hopes of teaching them. So, if you refine your definition to include compassion, and you can envision a way to make compassion an essential element to a purely digital race, I'll consider the issue of whether humans could step into those digital bodies. Otherwise, I'll have no problem terminating Skynet.
"I hope and think our race has the potential to evolve into something wonderful, and I look forward to it."
I hope so, too, but I doubt digital beings will be it. If we are to explore the stars first-hand, I think it will be through point-to-point transportation, or through Virtual Reality with sophisticated interface.
@Extropia:
"I, however, am a digital person."
You can't escape your humanity by declaring it. While you've anonymized your human identity, it is still there. It is impossible to declare one is a digital person until there is no puppet-master. Your digital identity speaks with no autonomy. You say as much in your next statement.
But let's assume your assertion, for the sake of debate:
"Therefore, from my perspective, any technological development that gives digital people a degree of autonomy is good,"
It's highly subjective. One might argue that murdering law enforcement persons would make humans free; I would argue that makes us unable to live in a society together, and thus the net freedom is greatly reduced.
It is EXTREMELY important that technology be weighed for what it is, and not just developed and declared good because of the *intention* of the technology creator. That was the lesson Nobel learned with TNT.
And furthermore, any digital being whose ethics are self-centered lacks compassion, and again, I'm willing to pull the plug on Skynet.
"and any development that enables complete independence and self-consciousness without a human behind the screen (so to speak) is even better."
This argument is essentially, "AI consciousness is good". Well, no, again: Skynet. If an AI has no concept of compassion or beauty, or art, capacity for creation other than for procreation's sake... well, that makes it a virus.
"If such a day ever arises, the fate of my human primary will not be something that bothers me, particularly. "
What a tragedy we would have if we birthed a species of conscious AI without a capacity for compassion, art, beauty, or appreciation for other forms of life.
Hiro, I am a human being, and I hold compassion as one of my primary values. I hope our post-human digital descendants will also hold compassion as a primary value. And I think they will, in some sense, but not necessarily in a sense that we can easily relate to.
>Literally billions of years from now. Why not evacuate Earth RIGHT NOW because eventually, Sol will become a red giant star? Why don't we starve ourselves and spend 100% of our time writing our autobiographies because one day we're all going to die?
I'm sorry, Extropia, there's no logic to this.<
No logic? I think that depends upon the gist of the argument.
If you were to argue that A) we face many problems, not all of which require the creation of AGI in order to be resolved, and B) a lot of the problems that do not require AGI are more pressing than any problems that do, so C) we should prioritise R+D in resolving those more pressing issues, well... yeah, that is perfectly reasonable.
And if your argument is 'a billion years is a very long time, so why rush? Let us establish a framework that gives us the best possible chance of creating/transcending into artificial life with a capacity for compassion, art, beauty and appreciation for other forms of life', again, perfectly reasonable.
But, if you are arguing that we should postpone the creation of AGI indefinitely until we can be certain that it will not threaten humanity, as if doing so would secure humanity's perpetual survival... You are wrong. Totally. Humanity is going to become extinct, period. The only question is, will our species become a dead-end twig on the tree of life, or will it prove to be instrumental in creating the grand ancestor of a whole new class of beings?
>You can't escape your humanity by declaring it. While you've anonymized your human identity, it is still there. It is impossible to declare one is a digital person until there is no puppet-master. Your digital identity speaks with no autonomy. You say as much in your next statement.<
According to my definition, a 'digital person' is 'a character created and developed in online environments, puppeteered by someone IRL'.
'Someone' need not mean a SPECIFIC someone, such that if this one individual were lost 'I' could not possibly continue; that is not, strictly speaking, true. ANY person, indeed any THING, that can RP me well enough to convince other people that I am who I claim to be, is as good a puppet-master as my current primary.
So, you are right in saying I am not something that can exist above and beyond A human (not yet anyway), but you ARE wrong in thinking that I must be intrinsically tied to one SPECIFIC human.
In order for a person to be a good puppet-master, their brain would have to have a fairly decent mental model of 'me', at least as good as the mental model of 'me' held by anyone else they might meet online. That mental model of 'me' is, arguably, made up of many 'patterns'. I argue that for some people, their online personae imprint patterns on their brains that end up dictating their behaviour. They feel COMPELLED to log in to the related accounts, feel COMPELLED to further 'flesh out' that persona. So who is in the driving seat, really? The puppet-master? Or the puppet?
So, my continued survival depends on two things. ONE: That my 'patterns' continue to compel my primary to devote some time to 'my' life rather than ver's. TWO: That enough of those patterns (will) coalesce in other brains, such that THEY are compelled to puppeteer me when my current primary can no longer do so.
Whole Brain Emulation presents a possible means of copying 'my' patterns. Since my primary's mind is obviously very good at puppeteering me, a functional copy of that mind would be equally good at puppeteering me. However, I am not all that sure why exchanging my human primary for a neural computer or some other piece of high technology should make me any less of a puppet than I was before.
>"and any development that enables complete independence and self-consciousness without a human behind the screen (so to speak) is even better."
This argument is essentially, "AI consciousness is good". Well, no, again: Skynet. If an AI has no concept of compassion or beauty, or art, capacity for creation other than for procreation's sake... well, that makes it a virus.<
Consider again the premise of my argument. I exist so long as someone/something puppeteers me convincingly enough for you to believe I exist (understand that 'you' does not just mean YOU, but any other person I have met/could meet in online environments).
Well, I cannot imagine that an AI devoid of compassion and artistic sensibilities would convince anyone. Certainly not my friends. So an AI like that is quite useless for my purposes.
It might not be useless for the purpose of survival in the deep, deep future (not MY survival, nor HUMAN survival, both of which are doomed in the long run, but the survival of something ultimately linked by common ancestry to humanity), but that is another matter.
@Extropia:
"Let us establish a framework that gives us the best possible chance of creating/transcending into artificial life with a capacity for compassion, art, beauty and other forms of life" - again, perfectly reasonable.
I'd accept that. That's an easy enough common ground.
"Humanity is going to become extinct, period. "
I believe any sentient being faces essentially the same problems. Stars die out, energy sources are limited, travel is difficult, and there are potentially competitors out there. If your argument rests solely on this inevitability, then I would respond that the inevitable end of *any* sentient being reinforces the value of not wiping out any of them.
"According to my definition, a 'digital person' is 'a character created and developed in online environments, puppeteered by someone IRL'."
Well, to be blunt, that's not a person. That's a character. An identity. A person requires individual free will.
"ANY person, indeed any THING, that can RP me well enough to convince other people that I am who I claim to be, is as good a puppet-master as my current primary."
I've read your arguments about this, and I readily admit that a digital identity doesn't mean only one master. But it remains an identity, like Mickey Mouse, like Plastic Duck, like James Bond.
"you ARE wrong in thinking that I must be intrinsically tied to one SPECIFIC human. "
I don't believe I said that. Perhaps there was a mis-communication. The pertinent thing that I said was:
"It is impossible to declare one is a digital person until there is no puppet-master."
Where I said "no" puppet-master, not "the creator" or "you". My analogy to a puppet should be clear just from the nature of what puppets are - and in no way does "puppet" imply one single master.
"They feel COMPELLED to log in to the related accounts, feel COMPELLED to further 'flesh out' that persona. So who is in the driving seat, really? The puppet-master? Or the puppet?"
External stimuli are neither a substitute for, nor an excuse from, free will. If you choose not to decide, you still have made a choice. (Thanks, Rush.) People choose a life of monastic simplicity, of "giving their life" to a higher power. That doesn't negate their free will - every moment they continue to choose to live in this manner.
I am compelled to eat the yummy brownie sitting in the kitchen, but I am not a slave to it. I may eat it for dessert after lunch, or I may save it for later, or I may give it to a friend.
It does, however, raise a larger issue - again, The Matrix illustrates it. Are we to let ourselves be slaves to technology, or to make it our servant? And should machines gain sentience, can we strike a balance and live symbiotically? If you feel a slave to your digital ID, Extropia, then you reject your human free will. How could something with no free will hope to transcend into another being of free will? :)
"Well, I cannot imagine that an AI devoid of compassion and artistic sensibilities would convince anyone."
I deeply share this sentiment. I think that thought is both poetic and insightful.
>I would respond and say that the inevitability of *any* sentient being reinforces the value of not wiping out any of them.<
Let me be clear: I am not 'for' the creation of an AI that would actively want to wipe out humanity. My concern is that we will never be able to build an AGI we can guarantee is friendly, so we will never build any AGI at all, and therefore human and posthuman history will end before its time, once the universe evolves into a state unsuitable for organic life.
>Well, to be blunt, that's not a person. That's a character. An identity. A person requires individual free will.<
You are correct. However, as you probably know, the human mind evolved in a world where only other human beings did humanlike things. But in our age, we have designed technologies like photographs, movies and online worlds. Objectively, those are just patterns of color, light, and shade, but our minds interpret them as people.
So, yes, I am just a character. But inworld and online everyone generally behaves AS IF I were a person, which is good enough for me.
>My analogy to a puppet should be clear just from the nature of what puppets are - and in no way does "puppet" imply one single master.<
First of all, sorry for my misunderstanding.
Secondly, let us suppose that all my activity in online worlds and social networking sites was ultimately driven not by one or more humans typing away, but by an artificial intelligence, which is now responsible for maintaining and developing my character. Am I any closer to being a 'real' person now, or am I still a puppet with an AI puppet-master?
>If you feel a slave to your digital ID, Extropia, then you reject your human free will.<
I do not think my primary is my slave. I just think that the web of social and interpersonal connections built up around me (or 'the character Extie', if you prefer) is now so strong, rich, and of mutual benefit to both our wellbeings that Primary is much more likely to continue managing that web than to quit, or to devote time to the creation of a new persona.
Let's not be nasty to my good friend Extie. I support the rights of digital persons, and wish her the best.
At the same time I fear that "rights of digital persons" today has the same status as "rights of airline passengers" at the time of Leonardo: there were no airline passengers to have rights, and there were not going to be any for centuries.
In other words I understand the concept of "digital person" and I am persuaded that someday most sentient beings will be digital, but I fear we are not going to achieve the required level of technology for a long time.
"we have designed technologies like photographs, movies and online worlds. Objectively, those are just patterns of color, light, and shade, but our minds interpret them as people."
Our minds interpret them as representations of people, not people themselves. If what you were saying were the case, we'd be like cats, batting at images on a television screen.
"inworld and online everyone generally behaves AS IF I were a person, which is good enough for me."
Because they presume your typist, the way we presume the actor on television. And until someone reveals to the world an AI powerful and robust enough to fool people consistently and without human control, people will continue to presume this is the case.
"let us suppose ... this AI is now responsible for maintaining and developing my character. So am I any closer to being a 'real' person now, or am I still a puppet with an AI puppet-master?"
If the AI were self-sustaining without human control, I think that's a fundamental difference, yes.
"I do not think my primary is my slave."
Okay, but realize this contradicts:
"They feel COMPELLED to log in to the related accounts, feel COMPELLED to further 'flesh out' that persona. So who is in the driving seat, really? The puppet-master? Or the puppet?"
@Giulio:
If I'm being nasty, then I sincerely apologize. Extropia and I have known each other for a while, and I've known her to run debates in Second Life, so my presumption is that debate is desired and welcome.
"I understand the concept of "digital person" and I am persuaded that someday most sentient beings will be digital, but I fear we are not going to achieve the required level of technology for a long time."
Well that's a whole different debate. :)
@Hiro: of course debate is always good. I can understand and relate to Extie's digital personhood concept but, as you say, today we don't have the technology to create a consciousness stream independent of a meatspace body.
But this will change: someday we will be able to create thinking and feeling persons as pure software, and to copy/paste identities from biological brains to software. But not tomorrow, or next week -- it will probably take many decades.
>Our minds interpret them as representations of people, not people themselves. If what you were saying were the case, we'd be like cats, batting at images on a television screen.<
No, we do not bat at the screen like cats. But we do feel empathy for cartoon characters. You could, and should, attribute that to the skill of human animators. But that skill entails acquiring an understanding of how to elicit certain responses that are hard-wired into us. At some level, when you feel an emotional response to an image, the boundary between 'reality' and 'fiction' has become blurred.
>Because they presume your typist, the way we presume the actor on television.<
But we are familiar with actors, at least to the extent that everybody knows what Tom Hanks looks like. So you can either suspend disbelief and see Woody yelling 'YOU ARE A TOY' into Buzz Lightyear's face, or you can visualize Tom Hanks yelling lines into a microphone.
Similarly, given enough RL info about the person(s) behind an avatar, you can choose to visualize the person BEHIND the avatar. But without a certain amount of information about the 1st life person, the only identity you can visualize is the 2nd life persona. I can visualize 'Eschatoon Magic' either as he is in 2nd Life, or in terms of how Giulio Prisco looks. But I cannot help but think of Gwyneth Llewelyn in terms of her 2nd life appearance and personality. That is all I am permitted to know. Even if I talk 'through' her avatar and communicate with the actual person behind it, my visualization of what that person may 'actually' be like is just as much a fiction as that person's 2nd life appearance. But THAT is a fiction people can share, which brings us back to the notion of something virtual (like money) being as good as real if enough people behave as though it is.
>until someone comes out and reveals the world an AI powerful and robust enough to fool people consistently and without human control, people will continue to presume this case.<
I would prefer to use the word 'convince'. If people are 'convinced', it may or may not be because the AI has achieved a humanlike intelligence. If people are 'fooled', then that presumes you KNOW it is a trick. I agree that it is quite easy to show that something like ELIZA or Kismet is not really conscious and does not really know what it is saying or what is said to it. I do not agree that it will always be possible to expose a humanlike AI as smoke-and-mirrors trickery.
>Okay, but realize this contradicts:
"They feel COMPELLED to log in to the related accounts, feel COMPELLED to further 'flesh out' that persona. So who is in the driving seat, really? The puppet-master? Or the puppet?"<
You can say that bees exploit flowers as a convenient source of high-energy foodstuff.
You can say that flowers exploit bees as convenient pollen carriers/distributors.
You can say that both bees and flowers cooperate in a symbiotic circle of mutual gain.
None of those statements is wrong. It just depends on what perspective you choose to take.
Similarly, you can say a person uses an avatar as a convenient tool of communication; the online social web built up around that avatar uses human brains/ computers as a convenient carrier of its evolving links and structure; it's a co-dependency that benefits both the human and the avatar.
I would just like to say that I very much welcome your comments. I find them challenging and insightful. I have no problem whatsoever with anybody deconstructing my ideas and picking holes in them, so long as they demonstrate an ability to accept that I may defend my position and attack theirs.
I do hope, though, that I do not defend any theory that is obviously untenable, nor continue a line of attack that has been proved ineffectual. (I do not count 'stubbornly refusing to accept the argument' as a defense against attack. It works to the extent that you never have to concede defeat, but it does not wash, not for me at any rate. Some people, sadly, use this form of pseudoargument. Let me be clear that Hiro is NOT one of them.)
"At some level, when you feel an emotional response to an image, the boundary between 'reality' and 'fiction' has become blurred."
In the same way we interpret anything in life. We are bound to our senses. That means we need to understand that what's real lies beyond them.
"my visualization of what the person may 'actually' be like is just as much a fiction as that person's 2nd life appearance. But THAT is a fiction people can share, which brings us back to the notion of something virtual (like money) being as good as real if enough people behave as though it is."
It's different from money, though. Money is intended as an equal exchange of value. There is no such equal exchange between an identity and a person, whether that identity is from an avatar in a virtual world, a commenter on a blog, or the professional identity you present in the workplace. It's like looking at an object buried in the snow. You only see part of the object. You may not know how big or deeply buried the object is.
"I do not agree that it will always be possible to expose a humanlike AI as smoke-and-mirrors trickery."
Well, I can't concede this, but I can't disprove it reasonably, either.
"You can say that both bees and flowers cooperate in a symbiotic circle of mutual gain. ... It just depends on what perspective you choose to take."
But this analogy doesn't show that either a bee or flower is sentient.
The box of cereal on my kitchen counter top seems to be calling out for me to have breakfast. Yet, that desire is internal to satisfy my hunger. Further, I am making a conscious decision to stave off grabbing a bowl of cereal. I thus demonstrate that the cereal itself is just an external reminder of my internal desires, which still doesn't will me to act.
"the online social web built up around that avatar uses human brains/ computers as a convenient carrier of its evolving links and structure; it's a co-dependency that benefits both the human and the avatar."
First, the word "use" implies will. By saying that the "avatar uses human brains" you're just begging the question. You've injected free will into the avatar, and used it as evidence that an avatar has free will. I can't grant that as valid. And if we're talking about the social network around the avatar, we're talking about other sentient people.
Do avatars have intrinsic value? Of course, as means of communication and storytelling, self-expression and exploration. Should digital identities be treated as real people? Insofar as we realize that there are real people piloting these identities, absolutely, in the same way one would treat a celebrity with a stage identity, or one's coworker as real despite only seeing his/her professional side. Does it follow that digital identities - avatars - have sentience of their own, have free will? No.
--
Now, as far as a debate goes, I try to remember to go into them knowing full well that people with views differing from my own are unlikely to change them outright. Even when we each keep presenting the same ideas, the value is in the different ways we present them. Through this, we wind up bringing up seemingly smaller issues, ones on which we each have conceded and agreed. Ironically, these "smaller" issues often have more real-world applications than the loftier, more ideological ones.
>The box of cereal on my kitchen counter top seems to be calling out for me to have breakfast. Yet, that desire is internal to satisfy my hunger. Further, I am making a conscious decision to stave off grabbing a bowl of cereal. I thus demonstrate that the cereal itself is just an external reminder of my internal desires, which still doesn't will me to act.<
Did you know, there is a form of mental illness that compels those who have the condition to act out any behaviour implied by an object? So, if they see a bed they feel compelled to lie on it; if they see a bowl they mime eating. BTW, in telling you this I am not trying to argue against anything you said. I just thought you might find it interesting :)
As for commenting on your latest replies, I would point out that when I am writing about digital people puppeteered by RL humans and I ascribe 'self' and 'free will' and 'consciousness' to avatars, I am talking in terms of 'they act AS IF they have these attributes' or, more precisely, 'the community behaves around these avatars AS IF they were people with minds and volition'. I guess you would say 'no, they do not; they ascribe free will etc to the person operating the avatar', so maybe I should say 'some people react to my avatar AS IF they are ascribing free will, mind etc, to it'.
>Does it follow that digital identities - avatars - have sentience to themselves, have free will? No.<
Something else that an avatar (without a human behind it) does not have is an ability to see itself from the 'outside' as an 'object' that can be seen and judged by others, and that persists through time. In other words, an avatar does not have a mental model of the world that includes itself.
Very small children have no such mental model, either. So a very small child, while definitely a 'human', is arguably not a 'person'. Nevertheless parents, older siblings and so on always treat a baby as though it were a 'person', because they have the sure and certain hope that (barring accidents, disease etc) the baby will develop into a person.
Could it be that the collaboration between cognitive science, robotics and AI will succeed in creating autonomous avatars that can learn from experience and develop mental models rich enough for self-modelling? If so, would that mean treating avatars AS IF they had this capacity is no more silly than doing likewise for babies (provided you acknowledge that, here and now, we certainly cannot be sure that robots will acquire the kind of self-consciousness human babies can develop)?
@Giulio:
That's an interesting point about childhood development. I think I've witnessed it myself with babies and mirrors - there is the recognition of self. This same sort of recognition is not unique to humans, either. Of course, what would be the equivalent of a mirror for something with the potential for far more raw computational and perception power?
I think very young infants are unable to recognize that the image in the mirror represents 'me'. Also, it takes a while for children to grasp the concept that their knowledge of the world may differ from other people's. Suppose you fill a tube of Smarties with marbles, and then someone who did not see you do this picks up the Smarties tube. Very young children think that person expects to find marbles (they cannot separate their knowledge from that of another person); older children understand that the person expects to find Smarties (they have learned that 'what s/he knows differs from what I know').
Certainly, some other animals have the capacity to recognize a reflection as representative of 'self', including apes, dolphins, pigs, and magpies. A few years ago, it was reported that some robots had acquired this ability, although others have pooh-poohed such claims as hyperbole.
As for testing for humanlike general intelligence, I think that testing for 'integrated information' is the best approach. One way to do this would be to show the AI several random photographs and ask it to describe the gist of each scene. Doing so requires not only an ability to identify visual objects, but an understanding of the meta-information generated by the relations between groups of objects. I believe such a test would be very much harder to pass with smoke-and-mirrors approaches than a chatbot-based test is.
@Extie: "I think very young infants are unable to recognize that the image in the mirror represents 'me'."
I used to play the 'who is the little girl in the mirror?' game with my daughter. She recognized it at one, give or take a couple of months.
Hiro - there is a thing called 'suspension of disbelief'. For some, the SoD 'horizon' is either too constrained in comparison with their abilities, or too expansive. As for 'transcending the human body'... hell yes. This may sound spectacularly implausible to some, but it isn't: we don't need to work miracles. The brain is far more plastic than you give it credit for. If you rely on a steady stream of feedback tools hardwired into your routine, before long you won't be able to function without your virtual agenda, communication tools, and self-expression tools. This isn't something magical - in fact most people mistake it for a pathology. At one end, hyperspecialized geeks can 'do magic' with a keyboard, but their awareness of social mores has diminished. In a few decades you will see people start rewiring their minds in some new directions while ignoring others. This is of course because the mind is 'merely' 1350 grams of evolved neurons. But it doesn't have to end there. Imagine all those 1350 grams moving towards ever greater intimacy with the available tools - to the point where the remainder is a pathetic, autistic mess of decision-making cells, and the machine part does all the real work. At some point both components do the tasks they do best - the human neuron part does the Spartan, distilled 'consciousness' part (which you so endearingly label 'soul') while the machines do the cognitive functions, and do them far faster, more completely and more pervasively.
@Khannea
What you describe isn't transcending. It's augmenting. And I agree those sorts of things are definitely going to be available and will ramp up our brainpower and reach. But it doesn't "transcend" us into new beings - it just gives us better tools.