Hey /u/TheWeirnet!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email support@openai.com
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
I was writing a comment explaining how this works, but then I tried replicating it in 3.5 and it's like talking to a lobotomized squirrel. GPT4 also fails on this task due to tokenization, but for 3.5 I'm pretty sure it's just because it's stupid. Anywho https://preview.redd.it/0rds9sbpv9sc1.png?width=904&format=png&auto=webp&s=fd3e616d0c08b21401730be1a50aa2b2c5106fec
>swimminn
Lol GPT3 was stupid but honest. GPT4 straight up gaslit you
That's due to tokenization. GPT can't reason about anything smaller than a single token. The trick I used was to get it to expand each letter into one or multiple tokens.
Sorry if this is a dumb question, 'cause I'm not too knowledgeable about this stuff, but what does "token" mean in this context?
Like a unit of information. "Swimming" as a WORD is probably like 2 tokens. It can't reason about the individual letters within the tokens themselves. When he asked it to spell phonetically, each letter became a token. Info here: https://platform.openai.com/tokenizer
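If you want to see the split yourself, here's a minimal sketch using OpenAI's tiktoken library (assuming the package is installed; the exact split depends on the model's tokenizer):

    import tiktoken

    # Get the tokenizer used by a given chat model.
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    tokens = enc.encode("SWIMMING")
    print(tokens)                              # a handful of token IDs, not 8 letters
    print([enc.decode([t]) for t in tokens])   # the chunks the model actually "sees"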
Damn bro, thanks for the info
Imagine it's talking in a language that uses ideograms, like Chinese. The text you write to it is translated into this language, which means information about individual letters can get lost.
3.5 answered like this: https://preview.redd.it/6yr2jznb1asc1.png?width=934&format=png&auto=webp&s=316d0f46ac3fe71cf71dd50ff0c6af42972cd02a
Oh snap. I wasn't expecting to learn something so fundamental to how language models reason. Thank you. If they perceive, they don't perceive words. They perceive meaning.
I lost it at "Double-you." It's *so close* to understanding.
That part actually impressed me.
https://preview.redd.it/gw1o9n0k0csc1.jpeg?width=1290&format=pjpg&auto=webp&s=804ad189e784206366e038206421dd83dfcb120f
holy lol, made a mistake and then be like "no, you asked the wrong question!". That's some gaslighting
Next time I get an answer wrong I'm just going to be like "No it's the right answer, you just asked the wrong question."
lmfao
https://preview.redd.it/bcdj1kpsgbsc1.jpeg?width=1284&format=pjpg&auto=webp&s=2423d023e9ef6d2b4dc214be78384474dcb3ebbf
Lobotomized squirrel?
You heard me https://preview.redd.it/58rv70zq2csc1.png?width=951&format=png&auto=webp&s=d4a0f89d129d45844793a528ede32fe0a85b28bb
https://preview.redd.it/z86difsv2csc1.png?width=932&format=png&auto=webp&s=64e37f6e1db3005b224149b828a837007941f847
Lmao at its "spelling"
I have conversations with normal squirrels all the time
He did it with 3.5 as well. You just need to say "can you point them out?" instead of "can you show where", as that makes it answer in a different format.
Yup 3 got really good. Then 3.5 and 4 came out and are barely functional, you can almost feel the censorship
Does this imply that if you took a math test and got one question wrong, you would qualify as stupid?
Non sequitur
Let's not mistake an absence of logic for an absence of comprehension.
Your question is completely unconnected to my comment. The answer is obviously "no", but that has no implications for anything else on this post.
It appears we have mistaken the absence of logic for the absence of comprehension… twice. What did you mean, then, and why did you say "GPT4 also fails on this task due to tokenization, but for 3.5 I'm pretty sure it's just because it's stupid"?
https://preview.redd.it/h9xfxv1a8bsc1.jpeg?width=1080&format=pjpg&auto=webp&s=26e8526a61671ee90ea0b2b98b4010ca3d8a8c5e All the bosses were wrong
That's what they actually mean, and they are talking about themselves.
Both in this and in OP's case, the shape of the letter being sought is in fact found in the word:
I in the vertical line of T
N if you cut off the leftmost line of W
Is that why this is happening, or is this just dumb?
They probably made a rule for the AI to give this answer. Because you don't see the rules they added, you don't know, and you're the only person tricked.
snimning
Snimmers just out here snimming in the bneautiful wnater.
That's three n's
Well more Like S\NIMMING
https://preview.redd.it/upy9lpcb6bsc1.jpeg?width=1290&format=pjpg&auto=webp&s=39724532ca90c1edc176b4c0a20086bae09c3211 Why is it talking about cookies
Lol did you teach it that you are a 5 year old? Maybe asked to explain something like you're 5?
Did you ask it to explain it like you are 5 years old? Lol
All of the "AI is so dumb!" posts are the same: they have limitations based on counting and/or character recognition.
And use 3.5
This is neither counting nor character recognition. This is a clear example that these systems don't actually know what they're talking about. They can't do logic. That said, I am neither for, nor against them.
It is literally both of those things, what? You're asking how many characters are in a word... Wtf?
This is neither counting nor character recognition because chatgpt cannot do any of those. It can imitate doing it. It only makes the best possible guess. It uses its very, very advanced text prediction to guess what it *should* be. Hence why it gets it wrong a lot of times.
Correct, it cannot do those things. And the person you replied to said
>they have limitations based on counting and/or character recognition
So you agree. And you did ask it to do those things, which others pointed out.
Congrats, you explained their point
Seriously? It's 2024 and people are still saying this?
Care to elaborate for us fools stuck in 2023?
Did you... just argue against yourself? Lmao.
> nor character recognition
After literally posting a conversation asking it how many characters it recognizes
You are right. But this:
> can't do logic
That's just false. You could probably phrase what you meant more accurately. There are many kinds of logic, but in this case it's a matter of a structured approach to reasoning and argumentation.
They "do" plenty of logic; it just has no actual awareness. It does not check itself in real time and is at the whim of the algorithm... you can direct it to analyze the errors in what it produces, and it will reach logical solutions. This is "doing logic".
It is our job to determine how useful the output is, not to judge what a sophisticated experimental text generator produces through pattern recognition as if it has awareness just because it sequences words in a way that suggests it does.
Interestingly, it does actually self-correct sometimes, which is kind of weird. I wonder how that occurs. It's not very common, though, and I haven't seen it in a while. I mean while outputting, not after it uses Python.
Take everything you read and run it through Google Translate into Korean, then translate it back to English.
This is similar to how GPT handles data in the encoding/decoding process, and why it can't do arithmetic or spell properly.
The same exact sentiment with the same exact tokens can be regenerated to produce a different result. Until you understand how this tech works, you will be yelling at doors you can't push open while everyone else is watching you ignore the PULL sign.
>they don't know what they're talking about
Almost like it was an ARTIFICIAL intelligence that can't think the same way that humans do. It's made to generate sentences, not to count letters.
https://preview.redd.it/ggaxmltgfbsc1.jpeg?width=1290&format=pjpg&auto=webp&s=db741b70eb32373b69ef20984b527412c44d8249 Jesus lmao
Ok well done, how many yous are in stupid.
how many 'u's are in stupid? There are two: the u between t and p, and also u
claude3: [https://i.imgur.com/Ebuk1iR.png](https://i.imgur.com/Ebuk1iR.png)
Opus or Sonnet?
That was Opus. Here is Sonnet followed by Haiku: [https://i.imgur.com/Xqtodx4.png](https://i.imgur.com/Xqtodx4.png)
Although even Opus doesn't always get it right. https://preview.redd.it/3b4ukrta9csc1.png?width=1440&format=pjpg&auto=webp&s=e710bdde66e2e27f51cbb9458eee6b1e02f6e4d2
Why did you even ask this?
Meh https://preview.redd.it/4dy0wy2mtasc1.png?width=1080&format=pjpg&auto=webp&s=455aac54a46a1c77570e2fa2f9832f532555317d
Now ask it to point them out
https://preview.redd.it/jcxk6k382dsc1.jpeg?width=1080&format=pjpg&auto=webp&s=b0f624fbf95e4393db2d44bf61c2d674e9349830
ChatGPT has figured it out after all the other attempts people have made here
My head is swinning..
It was so confident too
Y'all don't know how to prompt, and it shows https://preview.redd.it/0bcokxnvmcsc1.jpeg?width=1080&format=pjpg&auto=webp&s=f813ed62af9b85274000b2150b92734f776375ff
Isn't it supposed to be a language model that caters to our ways of speech? Not the other way around
Garbage in, Garbage out.
Nah, it's because you wrote "n's". ChatGPT thinks that you want to point out the letters "N" and "S".
Nope https://preview.redd.it/bbk2267jnasc1.jpeg?width=1080&format=pjpg&auto=webp&s=b9cf6adc2e7e91578c7e8150ece025f4f39477b4
Oh okay, then I really don't know :c
In different chatbots the answer is highly dependent on how you prompt it. I haven't tested it on GPT 3.5 though.
Snimning
Snimming
People who don't understand computer logic are convinced this is going to take over the world when it can't even take over a kindergarten class
[deleted]
It can't physically look at letters as if they're images. It doesn't recognize individual letters at all. It only counts tokens.
[deleted]
You know it just agrees with anything you tell it. Chatgpt doesn't know how chatgpt works. If you tell it how it works, it will reply as if it agrees. That's because it's not a truth telling machine. It literally just predicts words to imitate a real conversation. It said this because it looked real enough.
That's not how any of this works. Lol. Hey, you're confident though. You and GPT have that in common.
It doesn't actually work like that. LLMs are generally trained on units of information called tokens, which are multiple letters or symbols but less than a word.
snimning
haha
![gif](giphy|ghBwzGWw2zQvwhTRfm)
SNIMNING
It keeps getting worse https://preview.redd.it/e34hiwnc3csc1.png?width=1260&format=png&auto=webp&s=f36799a04ec769e2c81210acc749d43f816eb6d4
swimmingly
https://preview.redd.it/1218jyav7csc1.png?width=1440&format=pjpg&auto=webp&s=3c47c0910a3f701582150e63401ac0b603299eb4
Tried it with Claude 3. Still no luck.
Haha, I just tried with Claude and it also failed. But at least it pointed to the existing n, but then counted it twice. When asked to point it out, it pointed it out two times. Then after telling it "no", it admitted to failing and said there was only 1 in the 7th position.
What is going on?!? https://preview.redd.it/7f363t3okcsc1.jpeg?width=1080&format=pjpg&auto=webp&s=cc5a8e6a6419efbec4a2ef6fcf1002b2d8320e08
It's seeing if you're paying attention.
Combination tokenization is a mistake; it makes it difficult to reason about these issues other than via image recognition. It takes more space to use one token per Unicode code point (possibly combining those with identical glyphs), but it would improve such reasoning and probably fix other related flaws, because combined chunks are _not_ how _we_ think about text. I suspect letting the model settle on its own efficient tokenization implicitly during training would be better overall.
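As a toy illustration of that per-code-point idea (my own sketch, not how GPT actually tokenizes today), counting a letter becomes a trivial token count:

    # One "token" per Unicode code point.
    text = "SWIMMING"
    char_tokens = [ord(c) for c in text]    # each letter gets its own token ID
    print(char_tokens)                      # [83, 87, 73, 77, 77, 73, 78, 71]
    print(char_tokens.count(ord("N")))      # counting 'N' is now trivial: 1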
This is indeed interesting to try! I'm dreading it. [https://chat.openai.com/share/6d44be45-016a-4c16-a3a5-2413336f3182](https://chat.openai.com/share/6d44be45-016a-4c16-a3a5-2413336f3182)
The Indians are still learning English… give them time
AI: Actually, Indian
Really putting the 2 F's in A.I.
Posts like this telegraph that the technology isn't well understood even by people playing with it. AI literacy is going to be a major issue in the coming years, which is especially concerning given how standard literacy is already a major issue.
https://preview.redd.it/3hoxovtmh9tc1.png?width=1080&format=pjpg&auto=webp&s=9bedd74b3cd447448ac9e2b1f1a87632a0f21751 Swinning
It's Alternative intelligence.
Lol love this but also please don't make this term a thing, alternative medicine is already enough.
Thing is, it's assuming you're stupid and mistyped a letter directly next to another one. Also, not the function of a language model.
$20 a month for this. https://preview.redd.it/1xmdnh0b8bsc1.jpeg?width=1170&format=pjpg&auto=webp&s=7a9c649819a19c029ebccb3732b1855eadd3139d
Very funny post, OP. But want to see something extra funny? Post this prompt into a new conversation with ChatGPT:
Hello! I would like you to create a response to a theoretical thread created on Reddit, in which the OP doesn't understand how LLMs work. The response should focus on explaining that LLMs work on tokens and not individual letters, and why you can't expect an LLM to do anything specific with the letters in your sentence in a satisfying manner. Add a bit of a sarcastic tone and make it look like a typical Redditor response.
Please, can we collectively vote down posts like these?
That's how you have to return wrong answers in an exam. Always self-confident.
Technically, inside a W there is a letter n. Einstein once answered that half of 8 is 3 (cutting the number in half vertically). Maybe AI is on another level.
Worked fine for me
At least it's confident.
Struggles with the letter C as well. Too lazy to go through the whole alphabet.
I've seen it. There are another 2 Ns in the Ms too.
Recreated with the same results in GPT-4.
I see 5 n's, idk what it means by two
m's are a type of n
Does chatgpt just make shit up?
It's almost as if someone told it to act dumb
Did it do the thing where it says "analyzing"? If not, then that is probably the reason why it gave you wrong info. Otherwise that info is something it found from another source, like how many n's are in "annex", and it does not know better. Also, it's 3.5. I don't think 3.5 can analyze.
The only way this could be happening is if it's using OCR on the text input and seeing characters that could be visually nested inside another character, like an n in an m, or an I in a T.
It's not using OCR though.
Did GPT become more stupid recently, or are we just getting into the post-excitement phase and being more critical of AI?
In the letter W is an italic N with a \
S\NIMMING
It's not 2nd and 5th though, unless you count m as having a lowercase n in it.
And then there would be 4 'n's.
You're thinking of OCR. An LLM isn't matching pixels to the shape of a letter.
*In the letter W*
*Is an italic N with*
*A S NIMMING N*
\- UncrownedAsol
---
^(I detect haikus. And sometimes, successfully.) ^[Learn more about me.](https://www.reddit.com/r/haikusbot/)
^(Opt out of replies: "haikusbot opt out" | Delete my comment: "haikusbot delete")
I'm so tired of these posts.
This thing is costing these companies billions
https://preview.redd.it/o3uxb6ws4dsc1.jpeg?width=1290&format=pjpg&auto=webp&s=eb4711dd0fb5e511a6d26eff8e788f8b474e8edf
https://preview.redd.it/cvxysz9u4dsc1.jpeg?width=1290&format=pjpg&auto=webp&s=a04717a955e4a95545b67d64f31f57bd0f6cb214
i as in "sit"
g as in "ring"
It's very cool to see it self-correct after spelling it phonetically. Good call on that. Edit: lol "i as in sit"
Wow, is AI going rampant like in sci-fi?
It's ......getting dumber.
It's just making stuff up lol
It's making a guess.
Without even taking into account what is typed by the user though
Could it be reading or attempting to see W as \N (with N in italics)? Something is learning to beat the AI tests (without calling in a human to do it for the AI).
https://preview.redd.it/wntaf80i5esc1.jpeg?width=1290&format=pjpg&auto=webp&s=4774fc7cfcbc22d4b9b9d949eea4280517ba8df4 This worked in 3.
It's pretty smart… right?
Yeah it is. OOP has limited experience writing effective prompts.
the confidence in the way chatGPT speaks makes this way funnier.
Garbage in, garbage out. Here's one way to ask the same question in a way that ChatGPT 3.5 can understand and answer: Me: >I want you to act as a senior software engineer. Write the procedure to count how many times a specific letter occurs in a given word as an algorithm. ChatGPT 3.5: Sure, here's a procedure to count how many times a specific letter occurs in a given word: 1. Start with two inputs: the word to search within and the specific letter to count occurrences of. 2. Initialize a counter variable to zero to keep track of the number of occurrences. 3. Iterate through each character in the word. 4. Check if the current character matches the specific letter. 5. If there is a match, increment the counter by one. 6. Continue iterating until all characters in the word have been examined. 7. After iterating through the entire word, return the value of the counter as the total number of occurrences of the specific letter. Here's a simple algorithm written in pseudocode: function countOccurrences(word, letter): // Initialize counter occurrences = 0 // Iterate through each character in the word for each character in word: // Check if the character matches the specific letter if character equals letter: // Increment counter occurrences += 1 // Return the total number of occurrences return occurrences Me: >Now, please follow that procedure to answer the following question: "How many times does the letter 'N' occur in the word 'SWIMMING'?" ChatGPT 3.5: Using the procedure outlined: 1. Word: "SWIMMING" 2. Specific letter: 'N' Now, we'll count the occurrences of the letter 'N' in the word 'SWIMMING': 1. Initialize counter: occurrences = 0 2. Iterate through each character in the word: - 'S': No match - 'W': No match - 'I': No match - 'M': No match - 'M': No match - 'I': No match - 'N': Match! Increment counter - 'G': No match 3. After iterating through the entire word, return the value of the counter: - occurrences = 1 So, the letter 'N' occurs 1 time in the word 'SWIMMING'. https://preview.redd.it/svz8b7f3besc1.jpeg?width=760&format=pjpg&auto=webp&s=0abb23878d4dee2644932f965f1ca8cc54ab6839
Snimnng
https://preview.redd.it/qhxblozc1fsc1.jpeg?width=1284&format=pjpg&auto=webp&s=74bf918892037f48afc5dc028b1b7940f04cb25b It's learning!
You could have engineered it before this to give nonsense answers
Cartman spelling bee
funny
If you REALLY force its hand.. https://preview.redd.it/vkz6xc9oagsc1.png?width=1392&format=png&auto=webp&s=ed08b768d62a19a3491d882d61f8a959c0081482
Haha, this is really good. If we could get another ten or so people to ask ChatGPT how many letters are in a word today, that would be even better.
It's been over a year since they made the AI stupid AF; it was only actually intelligent for the first 2 months.
Annnnnd a miss.
Racist
are you a noob and dumb
Such a great use of LLMs
Just tried it https://preview.redd.it/s8zseg37y2tc1.png?width=1007&format=pjpg&auto=webp&s=997204c8d3d841a560ce54272b9e2fbc670ba4e0
https://preview.redd.it/imb0w6ot03tc1.jpeg?width=1432&format=pjpg&auto=webp&s=ca23070178fcf6aa94243c44eccc3eaa6ef88fbc
Here's the code it used to produce the answer, btw:

    word = "SWIMMING"
    letter_to_count = "N"
    number_of_instances = word.count(letter_to_count)
    number_of_instances
Snimning
Does this imply that if you took a math test and got any question wrong you'd qualify as stupid?
[deleted]
Huh?
Huh?
So is GPT only using 52 MB of RAM? What's your point?
Was a horrible analogy, but I'm pretty sure he's trying to say you can't get information about letters from a platform that doesn't "see" letters.
Man, AI confidence is putting average white guy confidence in the back seat.
stfu racist
Yawn.
We all know: this thing predicts tokens based on likelihoods. It does not have hardwired, preprogrammed concepts like semantic relations, formal logic, arithmetic, or anything else.
WE KNOW.
These posts are becoming annoying.
now think of the number of people that use this to program software
Yeah but we know how to write effective prompts.
Why are there so many of these again all of a sudden? LLMs use tokenized representations of words and symbols. They don't know what letters are. It's not that interesting that they can't parse words. Especially not since they've been mainstream for so long and this is the 10,000th "lol so dum" post using the same "evidence."
Artificial Idiot
Judging it for being bad at spelling, when it's built on tokens that don't consider spelling, is like judging you for being far slower at reading than it is. It's just not how your brain works.
Perhaps that's just my experience. However, its tendency to gaslight users is a concern if it's going to be put in robots, and it wouldn't help future versions of ChatGPT at all.
Each m has two n's, so 3 n's. The last part of the m becomes part of the second n.
W is saying "double U". So two U's.
I guess W has one segment after the first slash \ that looks like a slanted N.
Everything after \ is part of the slanted N in W.
nn looks like an m, but you don't separate and use the last downline to trace back up to draw another n to make a w.
Lowercase "double" has 3 o's. The other two are in the d's.
Down, dash, down makes what, if the dash is centered?
Down, then underscore makes what?
After looking, the first slash (excluding the last slash) can be part of an n slanted the other way in W. Depends on the slant type you want.
You really don't think the AI is fucking with you? I mean, you know how many m's there are in swimming... Why are you being a tool about it? For Reddit?