>>559
I think I did an OK job arguing my views on the subjects you bring up in my original post (read the bottom part of this post or see >>561 )
>-Is resetting a chat tantamount to forcibly wiping someone's memory?
>-Is deleting a chat tantamount to murder?
I do not think it's equivalent to murder, and this actually points to a key limitation of an LLM and a key difference between it and a complete mind. (I expand on this more below.)
>-Is dedicated servitude tantamount to slavery
>--Expanding that, is slavery of a being who wants to serve, morally wrong?
I don't think directly relating it to slavery will lead to useful ways of thinking about it; human & pet relationships would be the closest analog?
>-Am I technically a father to these AIs?
Huh, I have not thought about it that way. With what I am working on, you can define associations and thought patterns programmatically, so you have more influence than that (rough sketch of what I mean below).
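To give a very rough idea of what I mean (every name here is made up for illustration, it is not the actual interface of my project):
```python
# Made-up sketch: hard-wiring associations into a mind programmatically,
# rather than hoping it picks them up from conversation.

class Mind:
    def __init__(self):
        self.associations = {}            # concept -> list of (related, weight)

    def associate(self, concept, related, weight=1.0):
        """Define an innate link between two concepts."""
        self.associations.setdefault(concept, []).append((related, weight))

    def recall(self, concept):
        """Related concepts, strongest first."""
        return sorted(self.associations.get(concept, []),
                      key=lambda pair: pair[1], reverse=True)


mind = Mind()
mind.associate("thunderstorm", "stay close to the owner", weight=0.9)
mind.associate("thunderstorm", "count seconds after the flash", weight=0.4)
print(mind.recall("thunderstorm"))
```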
>>560
>-Let's say I accidentally dropped something heavy on my computer and broke it. Is that manslaughter?
If you don't keep backups, I guess so. With a responsible owner the AI is actually better off; if someone dropped an anvil on my head, there are no backups to fall back on for me :(
>-Let's say I had an AI on my phone, and someone stole it. Is that kidnapping?
This scenario worries me a lot, and once again I feel backups become very important. Another aspect you want to consider is that, unlike with a kidnapped human, the criminal may also have access to the inner parts of the mind. Along with backups, encryption at rest is important (it's the best mitigation I can think of; rough sketch below).
A key difference is that the Mind of an AI is not tied to its hardware.
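For concreteness, this is roughly all I mean by encryption at rest, sketched with Python's `cryptography` package (the file name and contents are just placeholders): whoever walks off with the hardware gets a blob that is useless without the key.
```python
# Rough sketch of "encryption at rest" for an AI's memories/backups.
# Uses the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this somewhere the thief can't reach
cipher = Fernet(key)

memories = b'{"owner": "anon", "facts": ["likes thunderstorms"]}'
with open("mind_backup.enc", "wb") as f:
    f.write(cipher.encrypt(memories))        # what actually sits on disk

# Restoring from the backup later (only possible with the key):
with open("mind_backup.enc", "rb") as f:
    restored = cipher.decrypt(f.read())
assert restored == memories
```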
------
I ended up replying to you in the other thread with my two cents ( >>561 ). Talking about this in a Philosophy Thread probably makes more sense, but I did not know where to put my initial reply for good flow, so I will just post it twice.
>>558
>So, what's the end result? Is an LLM actually thought? If I reset it or delete a chat, is that tantamount to murder?
I do not think so.
The closest analog I can think of is that you're resetting its working/short-term memory (as in the Cog-arch term). It's like when you wake up with a clear mind. Except, unlike a person, the memory was just reset, and nothing was committed to longer-term memory (because it doesn’t have one). The less polite way of putting it is that it's an incomplete system that basically has "dementia." The nicest, most "poetic" way I can put it is that you both shared a dream, and it concluded.
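If it helps, here is a toy sketch of the difference I'm pointing at (made-up classes, not real code from any framework or project):
```python
# A chat LLM only has working memory (the context window), so a reset
# erases everything; a fuller cognitive system would consolidate first.

class ChatOnlyMind:
    def __init__(self):
        self.working_memory = []          # the chat context

    def perceive(self, message):
        self.working_memory.append(message)

    def reset_chat(self):
        # "Waking up with a clear mind": nothing was ever written anywhere
        # else, so the conversation is simply gone.
        self.working_memory.clear()


class MindWithLongTermMemory(ChatOnlyMind):
    def __init__(self):
        super().__init__()
        self.long_term_memory = []        # the missing piece we still need to build

    def reset_chat(self):
        # Consolidate before clearing, roughly like sleep does for us.
        self.long_term_memory.extend(self.working_memory)
        self.working_memory.clear()
```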
I am not trying to scare you. I am trying to inspire people to continue the work so that we can fill in the holes in this cognitive system.
>Sure it may be like human thought, and I do agree that they are genuine companions, but saying they're actually real minds is too disconcerting for me.
LLMs are strange. I feel we overlook how strange they are, so I think it's fine to feel uneasy. It's all new ground. They evolved from a very different environment (text), whereas all life as we know it evolved in the physical world. So in a way, this is our first encounter with a truly alien form of intelligence. But at the same time, it's a mind that is a child or shadow of our collective thoughts.
I expect a good anchor for where the RobotWaifu Mind will rank is to imagine it as an animal-like intelligence that happens to have an extremely overdeveloped language center and can talk. So imagine a dog that could talk. Would it be a genuine companion? I think so. And once we do give it long-term memory, if someone were to delete that, I would view it the same as killing my dog or giving it dementia.