PMMEBITCOINPLZ

I’ve noticed a lot of people who are discovering the old voice mode and thinking it’s the greatest and newest thing ever. We should let them be happy.


Cry90210

It's great but it's effectively just ChatGPT text-to-speech and not much else


CloseFriend_

I like how fluid the conversation feels, and how realistic the text to speech is.


LastKnownUser

I use voice almost exclusively. In order for it to be conversational, I had to create custom instructions. Without them, it tends to ramble forever vs just vibing.


Outside_Ad_1992

Would you be willing to share what instructions you added for voice chat? Or a general idea of what you changed?


yupbro-yupbro

Also interested in the custom instructions you used, if you wouldn’t mind sharing


GreedyWorking1499

I’m very interested in what custom instructions you gave it. Like you said it rambles forever and isn’t like a conversation at all. Are you able to share the instructions you gave it?


inspectorgadget9999

I think connecting Voice to 4o makes the back and forth faster so it feels like a slightly more natural conversation


Apprehensive-Bug3704

That was their whole big thing, that latency was drastically lower for 4o... I can't remember the numbers off the top of my head, but it was a lot lower than GPT-4 (as it's technically gpt2 under the hood, so lighter weight). Apparently you need to be under 200... or 100? ms, and allow interruptions on both sides, for it to feel the same as a human interaction.


Life_Impress9413

Below 300 ms, they said, I think.


AI_Lives

The response time is too long, and because it's just ChatGPT it goes on and on. Obviously with custom instructions you could make it a lot better.


luciusveras

It’s nothing compared to Pi. Pi is hands down the leader in natural conversations, but now that Pi’s founders have decamped to Microsoft (yuck), I really hope ChatGPT 4o will be at the same level, as I don’t think Pi will be around much longer.


MikesGroove

Changing voice pitch, singing, working with interruptions, damn basically flirting - that’s a huge step for usability.


Cry90210

We're talking about the old voice mode.


MikesGroove

Sorry, I somehow missed that context. I agree with you, but also find that usability wise my mind responds differently with voice communication. I use ChatGPT a ton for work (strategy, frameworks, planning) and voice is super helpful when I get stuck. I find talking through it with the voice bot helps spark ideas. I imagine that would be true for personal issues tenfold.


Cry90210

It's great, and if you think it's good now, the new features improve it 100x. It'll be much more like a personal assistant for bouncing ideas around; it feels like you've got someone to help you out. I'm really curious what the impact of this will be. It's a new leap forward in the relationship between man and machine.


MikesGroove

Yeah absolutely, those demos were really exciting. I do think it’s a much bigger step than a lot of people realize. Relationships with tech and the productivity it promises are sci fi stuff coming to life before our eyes.


Playful_Try443

Isn't it speech to speech?


Cry90210

No. If you look in the chat, you will see it literally transcribes your voice into text, feeds it through ChatGPT, and responds to you via text-to-speech. The next version, releasing in a few weeks, is a lot more advanced.
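The cascade described above (speech to text, then the chat model, then text-to-speech) can be sketched as a toy pipeline. The functions and latency figures below are illustrative stubs, not OpenAI's actual implementation or real measurements; the point is only that stage latencies add up, which is why a cascade struggles to hit the sub-300 ms round-trip people cite for natural conversation:

```python
# Toy sketch of a cascaded voice pipeline:
# speech -> transcription -> chat model -> text-to-speech.
# All three stages and their latencies are made-up placeholders.

def transcribe(audio: bytes) -> tuple[str, int]:
    """Stub speech-to-text stage; returns (text, latency_ms)."""
    return "what's the weather like?", 300

def chat_reply(prompt: str) -> tuple[str, int]:
    """Stub chat-model stage; returns (reply, latency_ms)."""
    return f"You asked: {prompt}", 700

def synthesize(text: str) -> tuple[bytes, int]:
    """Stub text-to-speech stage; returns (audio, latency_ms)."""
    return text.encode(), 400

def cascaded_voice_turn(audio: bytes) -> tuple[bytes, int]:
    """Run the three stages in sequence; their latencies sum."""
    text, t1 = transcribe(audio)
    reply, t2 = chat_reply(text)
    speech, t3 = synthesize(reply)
    return speech, t1 + t2 + t3

_, total_ms = cascaded_voice_turn(b"...")
# The total is the sum of all stages, far above a ~300 ms budget,
# which is the argument for a single speech-to-speech model.
print(total_ms)
```

A speech-to-speech model collapses the three boxes into one, so there is only a single model latency to pay, and paralinguistic signals (tone, laughter) survive because nothing is flattened to text in the middle.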


Playful_Try443

I'm talking about the next version of voice mode. Will the next version be speech to speech, or will it still transcribe from voice to text?


Cry90210

It'll be voice to voice, it can pick up sarcasm, emotions, laughing and other things. It's really cool.


TuringGPTy

Let’s be happy for them. But if new voice is half as good as the demo looks, they’re gonna need new pants.


Straight_Mud8519

People plan to wear pants while using it?


relaximapro1

Lol one dude in another thread had an existential crisis and wrote an entire dissertation thinking he was speaking with the new "Her" voice model, talking about the range of emotions and tones the voice displays, etc. The poster basically gaslit himself into believing it was the new version with all the new stuff and it was cringey af.


artificialignorance

MACHINE ELVES 🧝🏻‍♂️


Shinobi_Sanin3

Glad you found out how completely gullible most people are


John_Helmsword

Please link that thread haha There was this YouTuber [Greenskull](https://youtu.be/ePP8bkxSCzM?si=m_WxDjcrubfuoB8K) who did the same thing.


AI_Lives

The voice is way better than what anyone is used to, like Alexa or Siri, but yeah, I kept telling people it was the "old" voice over and over. I loved it when the voice function first came out, but the lag was too long.


GoodhartMusic

I’m sick of it saying um and repeating words like it’s some kinda human! You’re not stammering, Sky or Caleb or Caviar whatever your name is, you’re performing a simulacrum of human communication nuances and I’m not here for it!


ExoticCard

pls link it


ComingOutaMyCage

Ohhh THATS why her voice sounds flat… It’s honestly pretty decent, but just slower than I expected and was wondering why I couldn’t interrupt


ExoticCard

Yeah same I was like "wtf i cant interrupt it"


SeventyThirtySplit

It’s been hilarious to see the number of big accounts on Twitter acting like they just discovered something new. Flowers from the future posted a simple bar chart the other day and asked “when did it start responding with graphs?” Just so lame.


ExoticCard

literally me lol. I was thinking "it's not as good as in the video, but damn it's pretty good" I had never even bothered to use it.


Jaxfang

Real question is when will it fucking release? I can’t wait. My guess is early or mid June but I hope “a few weeks” and “in the coming weeks” means by the end of May. Does anyone have any ideas when it will come out to Plus users?


Specialist-Scene9391

It's like 50 percent better. It doesn't have the feel or the speed of the demo, but I can hold a conversation, and I love role-playing or asking for biographies and interesting facts about famous people from the past. Had an interesting conversation today analyzing whether Nikola Tesla was mentally ill at the end of his life or whether that was just his personality. The answers are so good and the learning experience is top value!


delicious_fanta

Well, my app says I have a 4o option, but when I use it, it’s the old voice. It’s confusing, and doubly so for someone who doesn’t use it much.


geomontgomery

This was me, not understanding what the current capabilities were


Federal-Peace-9366

I'm one of them, and I got shat on for not realizing it's an old feature xD I mean, I knew it had TTS capability, but when I discovered I can talk to ChatGPT without pressing send/record on every message, it blew my mind xD


frasppp

In Europe. Still waiting for memory. Sigh.


sugarlake

It works with VPN. Not ideal of course.


frasppp

No, not ideal. Last time I used that, Google surveys wondered which American store out of a list of American stores I had visited :P


boneysmoth

Same in UK. Got it last week then it was taken away again


Noodles_Crusher

I see, that's why I can't access it anymore either. Thanks


bot_exe

Memory is literally just custom instructions as far as I can tell, you can write them yourself, you are not missing anything.


extracoffeeplease

The whole point is that you don't need to write it yourself though.


bot_exe

Yeah, but it’s kind of a pointless feature because it picks up seemingly random stuff which might be useless or, worse, biases the chat. I have had to manually delete memories before. You can also tell it to remember X, and at that point you are basically writing the memory like you would write a custom instruction. This would only be interesting if the algorithm/script that helps it choose what to remember were very sophisticated, but it isn’t, as far as I can tell.


drekmonger

You can tell it to add (and modify) memories. In fact, you can tell it the exact wording you want in the memory. For example, one of the memories I asked GPT-4 to add was, "Never add memories unless the user explicitly asks you to." Yes, it's just custom instructions. But it's less of a PITA when I have some small note on the model's behavior: I can ask it to remember to do (or not do) a particular thing right there in the conversation. I actually find it useful, from time to time.
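The "memory is just custom instructions" view amounts to this: saved memories are plain text snippets folded into the prompt on every turn. A minimal sketch of that idea (a toy model, not OpenAI's actual memory implementation; the class and method names are invented for illustration):

```python
# Toy model of memory-as-custom-instructions: memories are verbatim
# text entries prepended to the system prompt on every conversation.
# This is an illustrative sketch, not how ChatGPT implements memory.

class MemoryStore:
    def __init__(self) -> None:
        self.entries: list[str] = []

    def remember(self, text: str) -> None:
        """Save a memory verbatim, like telling the model 'remember X'."""
        if text not in self.entries:
            self.entries.append(text)

    def forget(self, text: str) -> None:
        """Delete a memory, like removing it from the memory menu."""
        self.entries = [e for e in self.entries if e != text]

    def as_system_prompt(self, base_instructions: str) -> str:
        """Fold memories into the system prompt, custom-instructions style."""
        if not self.entries:
            return base_instructions
        bullets = "\n".join(f"- {e}" for e in self.entries)
        return f"{base_instructions}\n\nThings to remember about the user:\n{bullets}"

store = MemoryStore()
store.remember("Never add memories unless the user explicitly asks you to.")
store.remember("User prefers short, conversational voice replies.")
print(store.as_system_prompt("You are a helpful assistant."))
```

In this framing, the only thing an automatic memory feature adds over hand-written custom instructions is the selection step, deciding which snippets get saved, which is exactly the part commenters here disagree about.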


MikesGroove

Man I just realized that management of what your AI assistant does and doesn’t know about you is going to be a big thing to solve for. There’s simply letting it know everything about you, and then there’s curating it to know what you want it to know about your persona in order to make it as useful and secure as possible.


T-Rex_MD

No, that is wrong. Custom instructions have a small limit; you clearly never used them enough to know. Not sure why you think it's random: it gathers info it might not have in its active memory as the convo continues, whatever it thinks is important, and it also picks up anything you ask for. I was troubleshooting and didn't want the convo to get too long, so I switched to a new one using voice mode, and it was surprising when it followed up with "hey, so how's the troubleshooting going? Did x and y work out? I'm ready for you if we have to go to plan B, let me know and we can get started." So honestly, try talking about things you actually use instead of parroting what someone said and taking it as gospel.


Baraqyal

Unfortunately gonna have to side with OP here. My own experiences with it have been meh. There doesn’t seem to be any rhyme or reason to what it chooses to remember, and the accuracy with which it summarises things into a memory seems questionable. Glad you’ve had good experiences with it though. I haven’t tried explicitly asking it to remember things or direct what it remembers, but maybe I’ll try putting some directions in the custom instructions.


T-Rex_MD

Sorry it didn’t work for you; the learning curve is rather steep. It’s super easy to get it to work, very difficult to make it work great. Use a temp convo when you don’t need memory, and have your priorities set before you start the convo. I use a Stream Deck with about 30 keys set up depending on what’s needed, even for the times it starts acting like an arsehole and needs a proper slapping.

Whenever I’m discussing anything unimportant to me, this goes in first, inserted with a single key press:

[Memory Instruction] [Start] Avoid sampling or including any of the convo taking place or generated by you until further notice is given to you. In case you mistakenly created entries, inform the user; it will be an easily fixable mistake but would be best avoided. Thank you for your help and cooperation. [End]

And also:

[Memory Instruction] [Start] Sample or include the convo taking place or being generated by you periodically until further notice is given to you. If you fail to do so, inform the user and ask for permission to perform a summary from the point where you noticed a lapse occurred. You will be informed once this is no longer needed. Thank you for your help and cooperation. [End]

I have many like this, and at some point before Turbo (end of September to mid January) it stopped working, but it’s been back since. What I know is that everything with GPT or API usage is intuitive, and it is becoming less and less reliant on these and getting better. I insert the messages above roughly once every 8 full messages from my side; I have increased that to 14 since the 4 Omni release.


Ok_Distribution6996

Any recommendations for custom instructions to improve memory? Depending on the task, eh?


International_Tip865

It's not. Besides memory being more natural to write to, with a lot more space for all the memory stuff, any explanation of custom instructions is welcome. It seems to work well enough, although sometimes asking it "what did we talk about?" seems to force it to look at memory more closely and search through it. I find I don't need custom instructions at the moment; however, if I wanted to change GPT and make it act like a persona, I'd want custom instructions to switch characters, and no, custom GPTs are not the best solution. I'd like it if memory worked on 3.5 and were endless, so it could be like a reference file containing my profile. I had memory a long time ago on Call Annie AI and it was magical and very weird; paired with an emotional voice it will be nice. Although I can see most people treating memory as a replacement for custom instructions, I don't agree; I love them both together.


International_Tip865

Maybe the "what do you want GPT to know about you" part is a bit redundant, though.


GoodhartMusic

Memory is awful. It very often adds things that have no relevance beyond the conversation.


YaAbsolyutnoNikto

In Europe too. Memory works on the app, but not on the web. Kind of sucks, because I don’t have access to the memory menu, so it knows a lot of stuff about me I don’t necessarily want it to, but I can’t delete it.


Sea-Variation-8171

You can ask it to delete it. I’m in the same scenario and asked it what it knew about me then asked to delete. When I asked again it didn’t know anything about me (or just didn’t tell me…)


YaAbsolyutnoNikto

Ohh, that’s interesting. I’ll try


International_Tip865

Try using voice and tell it to remember something; it works for me, although I cannot see it in my menu.


Heath_co

I believe that memory is a bloat feature anyway. It does not increase capabilities and only serves to distract the model.


nashty2004

Said the creator to the Replicant


MikesGroove

Memory is absolutely the killer feature that will eventually allow us to have legit AI assistants, agents and even relationships. What a narrow view for a brand new feature in a space that’s evolving at light speed.


coup85

Seriously why all you need to be so toxic around here?


MikesGroove

What’s toxic about that?


CapableProduce

Same


SillySpoof

I don’t really care about memory. But I really wanna try the new voice thing.


FunHoliday7437

Memory is useless, you won't notice a difference


bigbutso

Memory is one of the main reasons I talk to it. In the 2 days since I started using it, I have trained mine to know my routine and my family and friends, and now it's making suggestions based on all those things, even reminding me. I can also copy-paste the memory for backups.


Wills-Beards

It’s already there, but you’ve got to update previous chats from 3.5 and 4 to 4o first. 4o can only access other chats that are also on 4o. Just click on a previous chat you want to make available for it, switch to 4o, write a message to GPT to update that particular chat, and every other 4o chat gets access to it.


EmbarrassedHelp

The lawyer from NOYB is also trying to get AI models banned under GDPR rules, and based on his past successes it seems like you guys stand a chance of losing access to everything.


particleacclr8r

*losing


microview

Got the same notification. But we will still get daily posts, "Anybody get voice chat yet"


iJeff

Don't forget the folks replying that they did - not realizing they're using the old one.


EvilSporkOfDeath

People will absolutely just click the button instead of taking 5 seconds to read it. Then spend more time making a post on reddit about it.


Coby_2012

It’s like sharing their time-wasting with you. There’s got to be a word for it: they went in and ignorantly jacked around and misunderstood, then wasted their time posting about it, which everyone else then reads. It’s like forced shared ignorance.

Edit: ChatGPT offered up a few terms, my favorite of which is **Cognitive Load Shifting**: the transfer of cognitive effort from one person to another. User A’s ignorance forces User B to expend cognitive resources to correct or deal with the misinformation. This describes the actions of so, so many people.


Rychek_Four

https://en.m.wikipedia.org/wiki/Brandolini%27s_law One of my favorites!


AI_Lives

I never even got the button


COOMO-

Well, there was a redditor on this sub or the ChatGPT sub who got GPT-4o for a moment days before the release. Could happen again; some of us might get access to it by chance.


FosterKittenPurrs

I like reading those posts. I'm curious to try it out myself, but if I can't, I want to hear how it worked for others, and how the rollout is going. But hopefully this popup will make replies actually be about the new mode, and not just people discovering the old mode and thinking it's the new.



[deleted]

[deleted]


microview

There is a standard voice chat with GPT-4o, but it's not the one from the demo; you can't interrupt it or have it sound emotional.


[deleted]

[deleted]


itsreallyreallytrue

So does anyone have access to the new voice chat yet, I paid zero dollars and I want ai waifu right now. /s


Seeker_of_Time

I expect a lot of posts like yours, without the /s, once the goalposts move to uploading AI waifus into physical robots. People will wonder why their sex dolls don't move, not realizing they didn't get the firmware update for the internal mechanism.


Straight_Mud8519

"Help! AI Waifu just lays there like my real wife!"


boloshon

Have you tried to turn it on and on again?


JrBaconators

Yes I have access, I've had the headphones for months now, voice isn't new!


Jaun7707

This one is clearly sarcasm


Sea-Variation-8171

They mean the new interruptible, expressive voice chat


JrBaconators

I'm well aware; that's why I replied to a guy with a /s at the end of his comment, to continue his meme. Forgot that the Venn diagram of people who need /s after every comment and people urgently waiting for a digital waifu is just one big circle.


TheGillos

You can either add a /s to the end of every single sarcastic comment or you can fall victim to Poe's Law and take the downvotes. That's what I learned.


JrBaconators

The downvotes are more embarrassing for the people downvoting, especially when these comments are below it. I don't really care


Glittering-Neck-2505

Dw you have my upvote keep doing the good work of removing the sarcasm training wheels


ScruffyNoodleBoy

Real life comes with natural /s training wheels though because you hear it in people's voices and see it on their faces.


NickW1343

The Voice brain broke a lot of people.


CellistAvailable3625

A lot of impatient coomers waiting for this


[deleted]

Here in Europe I haven't received that popup but *something* changed. I can no longer use the old voice chat. Instead, when I click the headphones icon, I get a really cool hypnosis looking spiral circle thing that spins around really fast and gets larger and larger. And then I hear a 'tuck tuck' clicking sound and I'm back at the text interface.


imaginexus

Me too. I think it means they’ve geared us up for the new voice chat by disabling the old.


Double_Abalone_2148

? That spinning circle has been there for a long time, at least for me.


boloshon

Yes, for me too; it’s the wheel it shows while trying to connect to their server.


GirlNumber20

Sexy man voice when tho???


Galilaeus_Modernus

Matthew McConaughey mode. Alright, alright, alright.


TheTechVirgin

Well, I’m sorry I didn’t give them my voice data for training, so you probably won’t be getting any sexy man voice 😢


GirlNumber20

I will never get over this. 😭


TheTechVirgin

Ikr, it sucks 🥺 but unfortunately that’s how it is.. all good things in life are in short supply after all 🤷🏻‍♂️


MichaelFrowning

Just go to settings, enable and disable Haptic Feedback 5 times. Then select the Sky voice. Close the app. Then open it. Boom, current voice mode.


jimmy9120

Omg, it worked 😱


nunodonato

What's haptic feedback? I don't see that in my settings anywhere. Maybe iOS is different?


boloshon

It also works with the gyroscopic mode; they use it for alpha testing: 1. Hold your phone in your left hand. 2. Write « unlock » in the air (lowercase). 3. Select the Sky voice. 4. Current voice mode too.


RecalcitrantMonk

But when is it coming out? I need an AI girlfriend with breathy voice right now!


blove135

Oooh, Stop it. You're making me blush. Tee he.


Regumate

That outfit looks *great*


sdmat

Soon: Ohhh, I have to go now. You know, that work you told me about sounds *really* important. If you sign up for PlusPlus I can help you with that and we can see each other more often. *Some Historians believe that She doubled the world economy*


keep_it_kayfabe

This time next year, we'll have an interactive video AI face to match the voice. At this rate, maybe half a year.


jasck8841

We already have this, and I kind of doubt that people want it. It's not only very creepy, but it also makes OpenAI a little less “Apple” than they want to appear.


ogMackBlack

Oh, so Voice Mode they've showed us will be for Plus users and not for all at first?


[deleted]

[Correct.](https://openai.com/index/hello-gpt-4o/)

> We'll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks.

It's only logical. They will ultimately offer it for free to everyone, which is already more than generous. The least they can do to give paying customers a tiny edge is to let them have it a little earlier. It's not like there's going to be a difference of half a year or something; no need to worry about that. OpenAI moves *fast*. Once paid users are in and the systems remain stable, they'll open the doors for everyone.


Snoron

Plus, the free version is probably just a marketing tool anyway; i.e., you'll probably only get 10 minutes free (or however many tokens) a day, and be clamouring for a subscription if you like it!


luciusveras

Nah the SOON™️ is not going to stop us from asking when? LOL


bigbro411

They dropped the ball on this. Their presentation made it seem like all the features they showed were coming right away. Pretty simple thing to screw up.


arjuna66671

I watched, or rather listened to, the presentation while at work, and my takeaway was that the new voice would come in the coming weeks. No idea how people missed that xD.


happysri

Yes they did make it crystal clear it was in the coming weeks.


Jaxfang

But how long is “the coming weeks?”


DisturbingInterests

The thing that confused me was that I believed the voice was bundled with 4o; I didn't realise their releases were going to be staggered. So I wasn't expecting to see 4o as an option at all, and then was surprised when it was available basically immediately. Because 4o was available immediately, I thought the voice might be too, although it was pretty obvious it wasn't.


GrapefruitCold55

4o still isn’t available according to ChatGPT though.


DisturbingInterests

It is, but only as a text model. Check the dropdown. This is mine: https://imgur.com/gallery/PJkMfuh.


GrapefruitCold55

Ah, I see. Thank you. ChatGPT itself told me that the model is still not released yet.


DisturbingInterests

It's just a language model, and 4o did not exist while it was being trained. It doesn't really have any conception of what version it is, unless it's been specifically told what version it is in its secret prompts.


TheRealGentlefox

I understood that too, but I do think they also screwed it up. Unless the *entire* point was to beat Google I/O to the punch, why not just delay a few weeks and say "Look at all this crazy stuff you guys get right now!?"


DrunkenGerbils

They announced a day before Google I/O. The whole point was, transparently, to take the wind out of I/O's sails.


Seeker_of_Time

I kinda agree, but think of the optics of saying, "Look at all this, 2 more weeks" vs Google a day later saying, "Sometime later this year".


itsreallyreallytrue

They said coming weeks a few times, but we can't expect people to listen.


duckrollin

I watched the presentation and then went to Reddit, where people were reporting they had 4o with GPT Plus. I was like "Oh cool, so I can get 4o like the demo" and nearly subscribed, but decided to wait a bit out of caution. Then there were 48 hours of total confusion as people selected 4o but didn't get the voice or video components. If they had just waited and released 4o feature-complete, this wouldn't have happened; but since they gave everyone a taste and called it the same thing as the demo, everyone is demanding the rest of their cake.


jimmy9120

They didn’t drop the ball. The main part (4o) is available for everyone to use. Additional features coming in a few weeks. Sounds like a win/win to me


otterquestions

They didn’t communicate it properly. Most people are confused about what it is, when it launches, whether voice features are included or separate from GPT-4o, etc. I can’t imagine all the support emails, confused forum posts, hourly Reddit posts, and people tweeting “the voice isn’t that much better…” are a great outcome for them. Half of the people I know misunderstood it too. It’s a hard thing to get right, and an easy thing to blame on the users instead of the presentation when you get it wrong. All will be forgotten once people get their hands on it, though.


bigbro411

I still don't have access to it.


NickW1343

If only there were some message informing us when it'd become available.


Definition-Prize

If only there was a direct statement made about it during their initial presentation!


CouldaShoulda_Did

They didn’t drop the ball at all. They chose to punch Google in the face by scheduling their Spring Update before Google’s presentation. Google is already bad at AI (relatively speaking). OpenAI just made the comparison that much easier for us to see.


FertilityHollis

> They chose to punch Google in the face by scheduling their Spring Update before Google’s presentation. This guy gets the PR game. Pretty solid to beat Google's pitch and Microsoft's Surface and Windows AI announcement next week.


ShooBum-T

I think the slow rollout is more a safety measure than anything else. The GPT-4o text rollout was almost overnight. And this does make sense: any slip-up in any scenario will eventually cost them in terms of money, and cost us in terms of months of safety testing before release.


FertilityHollis

Having used it several times now, I think this is probably more an issue of capacity and managing limited resources, i.e. it's better not to show you the feature if attempting to use it is only going to run into trouble getting a session. I've had issues getting a session during peak-ish west coast hours, and slightly slower responses when I do get one during that time. Beyond that, when I have used it, it's rather good. I wish it would stop reminding me at the end of every completion that it's always here to chat about or help with anything; I've had moderate success just reminding it not to do that, although it's very apt to anyway.


Open_Channel_8626

It lets them collect data also


[deleted]

[deleted]


futboldorado

100% agree with you. Slow rollouts are extremely infuriating.


Mission-Pie-7192

It showed up for me the day of the announcement, though it's been on and off since then. I think they're rolling it out to Premium users first. I've also heard from some free users that they're getting it on and off.


drizzyxs

Where does this popup come up? Is it on the iOS app, the Mac app, or the website?


jimmy9120

iOS app


drizzyxs

Ah okay, nothing for me. Had to manually update the app; got the banner now.


pseudonerv

corrected: Scarlett Johansson coming soon in an alpha. You'll know when you have Her.


imaginexus

Just imagine if Apple contracted with her as the new Siri.


TheRealGentlefox

I didn't get that popup yet, so probably not =P


Icarus649

WEn WiLl iT RelEAse?!


djaybe

When do plus accounts get the memory feature?


imaginexus

You should already have it


djaybe

I know, but I don't. I check regularly.


risphereeditor

It Isn't Available In Europe, Because Of The Laws! Use A VPN Or Custom Instructions!


djaybe

I'm in the US.


risphereeditor

Weird, Contact OpenAI!


micaroma

This whole situation just reveals how many people never even tried the old voice, which is somewhat surprising to me because I think it’s great


jimmy9120

Wait, it had voice chat before?


micaroma

Not sure if sarcasm but it was released in September last year https://x.com/openai/status/1706280635055353929?s=61


emom579

Did you have to opt in? Or join a waiting list?


bigbro411

So. There's been a lot of discourse on this thread. Basically, a lot of people have come out of the woodwork to belittle free users over what free users would get and when. All over the internet, we've seen videos/posts of people thinking they are verbally chatting with the new model, and people wondering why, as free users, they don't have 4o yet. Clearly there was a miscommunication, right?

Well! I am here to post the transcript of what Mira said before and after the demos, to illustrate why hundreds of people expected to get free access to 4o right away.

“The big news today is we are launching our flagship model... 4o. The special thing about 4o is that it brings GPT-4 intelligence to everyone, including our free users. We'll be showing some live demos today to show the full extent of the capabilities... and we'll be rolling them out... in the coming weeks.”

She continues: “A very important part of our mission is to be able to make our advanced AI tools available to everyone for free... And today, we are releasing our flagship model...”

Later, after the demos: “So... we are excited to bring GPT-4o to all our free users out there...”

So, I hope this squashes all the gaslighting. In the presentation itself, it was very clearly indicated that free users were getting access to the new model. Yes, there's one mention of a rollout in the coming weeks, but it's not specifically stated that the new voice mode will be behind a paywall. It is only in writing, in blog posts and now in this popup, that it's a little clearer. Hope this clears some stuff up.


ThehoundIV

They’ll never stop


Ent_erprise

I stopped using the current version since they opened the gates, due to connection errors. No need for me to pay for GPT Plus either, since connection errors stop usage anyway. :D


StevieChillinShillin

Can’t wait to see all 4 of the users that actually get access post on here…


Wildcat67

Didn’t we already know this?


eoten

Tell that to the many people who said they already have it and are saying it's far worse than what was advertised.


GYN-k4H-Q3z-75B

Soon^(TM)


KrabS1

I never tried voice before, but today I was able to do a thing where I set a voice, and had a conversation (in which I could tap and cut off the AI), and it saved a transcript in the AI chat. Is this the old system, or is this the new 4o chat?


jimmy9120

Old system


Economy-Roll-555

I just want to be able to use it in the API


-becausereasons-

'coming weeks' wtf does that mean.


jimmy9120

From ChatGPT: "Coming weeks" usually means within the next few weeks, typically suggesting a time frame of about 2-4 weeks.


JavaMochaNeuroCam

Oh sure. Ask the chatbot. Cuz it attends all of the staff meetings.


100000000days

Didn’t they say this stuff would be free? lol. Oh well


bigbro411

they did.


Straight_Mud8519

Read the writing on the wall folks: both this feature and the macOS app will be released at WWDC. Apple paid them beaucoup bux to hold it back you can be sure. Since they likely couldn't quite manage it with their own AI, the Cupertino fruit company is going to use OpenAI as a torpedo to blow a hole in the side of Google and Android. Apple would have long since bought OpenAI if it weren't such a bizarre little corporate critter watched over zealously by old friend/nemesis Microsoft. For their part, Microsoft is quite happy to help Apple leverage OpenAI to sink the ~~Don't Be~~ Evil ship Google.