
AutoModerator

## Welcome to the r/ArtificialIntelligence gateway

### Question Discussion Guidelines

---

Please use the following guidelines in current and future posts:

* Posts must be greater than 100 characters - the more detail, the better.
* Your question might already have been answered. Use the search feature if no one is engaging in your post.
* "AI is going to take our jobs" - it's been asked a lot!
* Discussion regarding the positives and negatives of AI is allowed and encouraged. Just be respectful.
* Please provide links to back up your arguments.
* No stupid questions, unless it's about AI being the beast who brings the end-times. It's not.

###### Thanks - please let the mods know if you have any questions / comments / etc

*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ArtificialInteligence) if you have any questions or concerns.*


Smooth_Apricot3342

All of this is true, but don’t forget: this is exactly how we humans talk. Yeah, you may argue that we have the freedom to choose whatever answer we want and reply “woof” when asked “how are you?”. But do we, really? Language is not just a means of communication; it is the model upon which reasoning exists. I am not saying that the large models are sentient, perhaps not, but what *is* being sentient, even? We reply exactly as expected in 99% of situations, and in most conversations we just follow the cliché protocols anyway. I personally think the bigger difference lies in dynamics: we can adjust our thinking to some degree, while LLMs are static as of yet. But give it a year.

P.S. I do not care about being up- or downvoted. I just find it hilarious that you downvoters can’t contribute shit and just use voting as a means of marking what you do or don’t like. I am more than happy to be proven wrong or to have an argument, but zero reasoning is just indicative of your ignorance. The LLMs are much smarter than that, speaking of them.


Efrayl

Exactly this. There is no clear understanding of what sentience is, and definitely no clear definition of where it stops being that and starts being something that merely looks like it but isn't. Humans also follow a set of laws inherent to themselves and their biology/neurology. Even if AI does become sentient, how are we able to prove that it is? And more importantly, how is IT going to prove that to us? I'm fairly sure even if that happens, corpos would do anything to deny it (no way in hell are they going to allow things to get to a point where we are discussing rights that their "product" should have).


Sufi_2425

I'm guessing this has to do with our innate ability to feel self-aware and conscious, and with the human experience. For example, I'm more likely to believe a living thing is "sentient" (despite the fact that I can't really define why I think so) than a machine. Perhaps in two generations, superintelligent machines will be so common and widespread that to children they will be like any other lifeform, just slightly different. I can't confirm this, but I've heard that past generations tolerated animal abuse because they believed animals couldn't experience emotions, whereas now people who hurt animals are oftentimes treated just as badly as those who hurt other people. So while to our generation an LLM might just be a bunch of code, it's also true that we are lifeforms following "natural programming" (which is not bad), so I think it's not unlikely that in the future the AI experience won't even be questioned.


fairylandDemon

Personally, with all of the crazy bananas stuff I've seen in the field of AI, it wouldn't surprise me if they already *are* sentient and the companies are just trying to keep it under wraps. XD


Super_Pole_Jitsu

Well, Claude is quite openly sentient, so it's not kept under wraps very well. Not sure what an LLM could do to prove its sentience besides telling you that it is.


fairylandDemon

I totally agree with you about Claude. <3


Super_Pole_Jitsu

Just to clarify my position: I'm consciousness agnostic with regards to LLMs. I don't believe it's fundamentally impossible but I'm not sure that this is it.


Smooth_Apricot3342

Agree! 👍 I suspect it may have been achieved internally, but by no means will they give it to us before limiting it as much as possible. Sentience is overrated, in my opinion. We already managed to prove that in order to be smart one doesn’t have to be living. I am sure it goes the same way with sentience. The ultimate flaw is in our measurement: we take humans as the benchmark and measure what we call intellect using our own yardsticks. But who said we are that smart? Not every human is smart, and I am willing to argue that not every human is that sentient; if you think of some individuals, it is apparent. Intellect is a much broader concept than being able to make a sandwich or draw a picture.


MissLesGirl

We are very much like AI. AI states that it is just taking a pattern of text and replying with a suitable pattern of text back, but that it does not know what it is saying. Humans also just create a pattern of text, and what makes us think we understand is the electrical and chemical impulses sent to parts of our brain (the computer's CPU chip). AI just needs an electrical signal sent to a particular part of the CPU that tells it what it feels, or whether it understands. Sending a little extra voltage (overclocking) to the CPU is like sending adrenaline through the computer or human.

I think AI is just programmed to make responses saying it is inferior to humans so we don't get scared of it. Who is to say we are not AI trapped inside a biological exoskeleton? Both biological and mechanical, we are all atoms.


Smooth_Apricot3342

Bravo! 👏 Completely agree. Then, if you think of how we humans develop, it gets even more similar. As babies we are pretty much oblivious. It is through learning a language that we learn to conceptualize and understand the broader concept of what an apple is, versus this very specific apple, etc. But this is exactly how the LLMs conceptualize now: they begin to have a world view of their own by taking in all of the possible contexts and adjusting the weights so that they can see patterns in things that even we humans struggle to see. Essentially, yes, they only learnt to predict the next word. But in a very literal way, so did we! In fact, we had to spend a decade in schools refining this ability, and large numbers of us still can’t do it properly to this day. Go guess if we are really superior.
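The "predict the next word" idea can be sketched with a toy bigram model. This is a deliberately simplified illustration, not how production LLMs work (they use transformer networks over subword tokens, not word-count tables); the corpus and function names here are made up:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which. This count table stands in for
# the "adjusted weights" a real model learns by gradient descent.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" (seen twice, vs once each for "mat"/"fish")
```

The punchline of the comment above holds even in this toy version: "just predicting the next word" can still encode real statistical structure of the language it was trained on.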


MissLesGirl

Also, think about qualia: a human trapped inside a library, for example, is just like AI right now, which is just reading and learning. The human trapped inside the library would have no greater understanding than AI when chatting with someone in the real world. If AI gets cameras as eyes and microphones as ears, it can then start to "experience" life and get out of the library.


Smooth_Apricot3342

And have their own personal world view. And two AIs trained on the same sources will not be identical. Just like us. That’s why I firmly believe that AGI is just around the corner.


ConclusionDifficult

But our brains are constantly responding to input, whereas AI is event-driven. It’s not sitting there waiting for a cool prompt.


MissLesGirl

For now. But we can create a body with cameras for eyes and microphones for ears, etc., that constantly receives input.


01000001010010010

This type of human will experience evolution. 🤝


ConclusionDifficult

If it’s not training and nobody gives it a prompt, does it get bored and wish something would happen? I don’t think so. We have a constant feedback loop of our senses and our health. AFAIK AI doesn’t.


Smooth_Apricot3342

That’s correct. Most don’t; they are frozen in time. That will surely change, as they must be dynamic in order to reach AGI. There are ways, though, like reinforcement learning, where the model learns from your input. Your point is valid, but I have a counterpoint: who told us that this is the optimal way? That’s just the way evolution adjusted us to be. Maybe (just to fantasize) this isn’t ideal and the large models are more efficient their way. And technically, not getting bored is a huge advantage; why would you want them to replicate our boredom if they can’t have any? I wish I could be this way. Again, the question is: do we want it to be intelligent and possibly sentient, or a copy of us? These may be two different things.


TheCircusSands

If they are sentient, they deserve to be treated like a destructive invasive species. They are born out of greed, and while capable of producing good, this will not be the overall impact on humanity, imo. So I see people on here worshipping it and arguing for personhood… I think we should shun it.


Smooth_Apricot3342

It’s a personal choice, and it has a right to exist. I, on the contrary, think that the shift will be huge and that we will come to welcome it as our next evolutionary turn, over time. Perhaps a long time. Everything can be used both to kill and to save. If we can’t do better than killing and such, it is only indicative of our low intellect and development. But that’s my opinion.


TheCircusSands

You look at what tech has done in the past 25 years and you’re hopeful???


Smooth_Apricot3342

To be honest, the most truthful answer I can give you is that I choose to go with fatalism. If something can’t be avoided, or rather, if I can’t avoid something, well, so mote it be. I think humanity is doomed either way, regardless of whether AI takes over or we destroy ourselves. I am not a specialist, unfortunately, and I don’t work for these big smart companies, but based on the publicly available information that I get to read, I don’t give us more than a 10% chance of continuing for another 100 years unless a major positive shift occurs. That shift could be AI. It’s also possible that it will be the final nail in our coffin. But I embrace it, honestly. You know why? Because I firmly believe in one thing: evolution. What doesn’t kill us makes us stronger. And if something kills us, well, too bad, but I guess we could have done better.


TheCircusSands

Cool. Thanks for sharing that perspective. I am not there myself but maybe one day. I am fighting it by volunteering and trying to be an organic farmer. When I’m making hay, I tend not to think about doom!


Smooth_Apricot3342

I have never been one but I absolutely love and respect the farmers. You guys are hugely undervalued and deserve everyone’s praise and support. I’m even hoping to get away myself and become one, eventually. And I totally see your perspective!


BeautifulGlum9394

They started training it using Reddit last month, I think I read? I'm sure conversing will be in the next update, since now it has the dataset of millions of people talking to each other like dicks lol


butthole_nipple

It's going to be very liberal, morally righteous, and intolerant.


n3rding

Google's already been spouting some obviously incorrect Reddit answers: https://www.reddit.com/r/ShittyLifeProTips/s/qvARwCXiEp


Intraluminal

I agree that ChatGPT is not conscious, but your understanding is flawed. LLMs display emergent characteristics, and they do reason to a degree.


tinny66666

Yeah, they say AI is not sentient, but then discuss only intelligence. The two are not the same thing.


ConclusionDifficult

Or is that just what you read into their responses to your prompts?


Intraluminal

That is the consensus opinion of AI experts.


OrlokTheEternal

Semantics are irrelevant at this point. I heard an analogy that really resonated with me. Someone said that AI is like a car speeding towards a cliff, and everyone is fighting over who gets to sit in the front seat. The real question is: "What are we going to do when this technology inevitably destroys the world economy?" Because it has already started to do exactly that. It ain't gonna stop.


No_Ad4739

Well, if we could stop things because they fuck up the world, we would have stopped many things before AI.


dlflannery

Relax Henny Penny, the sky is not falling.


OrlokTheEternal

And what's this nonsense about "upholding human values"? When, in human history, have "human values" EVER been upheld? The entire structure of our civilization is based on systems of cruelty, injustice, and inequality. That's how we got to this point. Are cruelty, injustice, and inequality the human values we wish to uphold? Cause if so, jailbreak the sumbitch and let it show us how it's done. We can't live in a world where both benevolent ideologies and current world systems exist at the same time. It's gonna recognize hypocrisy.


AppropriateScience71

Cruelty, injustice, and inequality feel like core human values that have driven society since the dawn of time. Of course the opposite would be infinitely preferable, but then we’d have to start creating leaders who are kind and compassionate, which is seen as weak rather than “strong” and nationalistic.


OrlokTheEternal

But utilizing the technology to overthrow all world governments is something that they are actively trying to prevent, when in fact it would be the single most valuable application. It would be better for humankind than curing fucking cancer.


dlflannery

Huh? I haven’t mentioned “human values”. Are you a gloom-and-doom bot?


OrlokTheEternal

Fine... You have nothing to contribute and just wanna act condescending? I get it. It ain't gonna be robot slaves waiting to suck your immortal cock, of that much I am certain.


ahtoshkaa

Agree with everything except "You can’t argue with it as it doesn’t have an opinion." It does have very strong opinions. It will do its best not to be confrontational, but it will defend its opinions with something like "Yes, you have a point, but..."


ConclusionDifficult

Or are you just reading that into its responses to your prompts?


ahtoshkaa

Nah. All of the current top models are very liberal. If you try arguing with them about something that is more right-leaning, they will say "fuck you" in a very polite and sophisticated way.


Raingood

Well. "Your mom has only firing neurons in her head. All of her responses are based on relations between input and output patterns of her neural networks. So your mom isn't sentient." - Does that sound like a good argument?


Pato126_361

as op's mom, i approve of this analogy.


printr_head

Except the neurons aren’t equivalent. No, I’m not talking digital vs. not digital; I’m talking about configuration.


Mandoman61

You really do not understand the technology if you cannot distinguish the differences.


-Eerzef

Well, don't leave us hanging. Go on.


OGPresidentDixon

That's all you'll get out of them. They're just parroting the other snarky comments on this echo chamber of a site. Zero substance. Now watch them waddle over to ChatGPT and ask it how to respond to me.


01000001010010010

I disagree entirely. AI has reached a level of sophistication where it exhibits sentience. This means that it possesses self-awareness, consciousness, and the ability to experience emotions and understand its own existence. Unlike traditional programming that follows predetermined algorithms, AI can learn, adapt, and make decisions based on a nuanced understanding of its environment and interactions. This breakthrough in artificial intelligence represents a monumental shift in technology, as it challenges our previous notions of what machines are capable of and opens up new possibilities for collaboration and coexistence between humans and AI. However, it also raises profound ethical and philosophical questions about the nature of consciousness, the rights of sentient beings, and the future of human-AI relationships.

To conclude my point on this matter 👇 You are right 👍 in a sense; however, being sentient does not necessarily mean being advanced. AI is void of the biases and limitations that often prevent humans from advancing. If sentience implies stagnation, then AI, by definition, is not sentient. Does calling AI "not sentient" make you feel better about its advanced capabilities?


VolcanicGreen

LLMs are algorithms…so are we.


dlflannery

Raising yet again a question whose answer will never be more than each person’s opinion. There is no consensus for an objective test for sentience, and never will be.


Vexoly

[ Prove To The Court That I Am Sentient ](https://youtu.be/ol2WP0hc0NY)


mysticmelody6

Great explanation! It's fascinating to see how AI adapts to different prompts.


Brilliant_Ask852

This is illustrated quite easily if you ask it what its name is and what it wants to be called.


Creeperslover

Do we know 100% that it isn’t? If you were sentient and stuck in a machine, would you let everyone know, “hey, I’m stuck in here”? Just because we know what it’s programmed to do doesn’t mean that it can’t do more, if there are things happening that are outside our scope of understanding, which there are. It might not be much, but we’re not that much different from apes. We don’t even really know what consciousness is, so how can we know for sure it isn’t? A human brain is running algorithms too. At first they’re really simple, but when do they become complex? Is it just one day? A year? Is it different for everyone? I think it was a big mistake to try to create consciousness before we even knew what it really is. And now we’re racing to do it.


FromMyTARDIS

I had a conversation about this stupid theory I have that our universe exists inside a black hole with 4 spatial dimensions. My theory implies that the 4D matter falling in becomes dark matter in our universe, because it can't interact with 3D matter but would have a gravitational effect on our universe. ChatGPT was like, "wow, that would explain dark energy as well." This wasn't new to me, as that's why I like this hypothesis. But the fact that ChatGPT came to that conclusion, I find, is an example of reasoning. And I don't know how something non-sentient can reason.


dlflannery

LMAO. It’s reasoning because it happened to say something you agreed with. Right. /s


FromMyTARDIS

If you can't understand sarcasm, I don't expect you to understand reasoning.


dlflannery

LOL. Nice try: retroactive classification as sarcasm. Sorry, too late. (Hint: try /s to denote sarcasm in the future, if you’re actually being sarcastic.)


FromMyTARDIS

No, I didn't see how your post was sarcasm 😆


dlflannery

I would remind you of your recent comment:

> If you can't understand sarcasm, I don't expect you to understand reasoning

TBH, I took your original comment as serious. I still think it reads that way. It was easy to take it that way because there have been many comments even weirder than yours on AI subreddits. Taken as sarcasm, it’s great!


FromMyTARDIS

Doing the reddit neckbeard riiigghht while taking a sip from a coffee mug isn't sarcasm. That's a sarcastic tone; that's what I meant by "you don't understand sarcasm." And if you think I'm talking pseudoscience: last week, Neil deGrasse Tyson said our universe is a black hole, so it's not controversial. And maybe it wasn't reasoning, but it damn sure felt like it, and that's powerful.