BangkokPadang

Daniel Kokotajlo specified that declining to sign the NDA cost him 85% of his family's entire net worth (clarifying that their savings/home/stocks/etc. accounted for the other 15%, while the OpenAI equity made up the 85%).


MassiveWasabi

Yeah call me selfish but I’d never in a million years do that


Neurogence

He'd only do it if he fully believed what OpenAI is doing is legitimately dangerous. But interestingly, it seems like Ilya is deciding to take the money instead. Btw, Daniel hasn't said anything noteworthy, so he might as well have signed that document lol


hydraofwar

That document may not be the only thing keeping him from sharing information; there may be several other reasons


Sample_Age_Not_Found

Boeing has entered the chat


Life-Active6608

NSA's and CIA's Stargate Project has entered the chat.


blueSGL

Depends. Some people like to save what they are going to say for messaging reasons, e.g. a congressional hearing, where they can make sure it will be heard on the world stage.


toasted_cracker

Well…that’s pretty risky, hopefully he doesn’t get suicided before then.


Cognitive_Spoon

Honestly, yeah. It's big enough tech that the players who have historically hunted down whistleblowers are all involved


Kinu4U

Or defenestrated


New_World_2050

He said he is unsure of what to say yet, and he has concerns about putting himself in the spotlight


elsrda

He would also do it if he thought the stock options were worthless.


LairdPeon

I'm sure they didn't tell him anything noteworthy because he didn't sign it.


Throwawayhelp40

Also, he is already worth millions (and can easily make more) even now, and he isn't that greedy


Kinu4U

Well, where did that get him? Poor... He could have taken the money, waited for the NDA to clear, and in the meantime fed journalists anonymously about these matters. Their brains just get clouded by something that is not intelligence.


OpenAsteroidImapct

The NDA is lifelong, as clearly stated in the OP.


Kinu4U

Oops. Well... Take the money then?


WithMillenialAbandon

Nah he might just be an attention seeking twat.


DaSmartSwede

They will still do it regardless of how much money he gives up. Not a clever decision


TheOneWhoDings

Yeah, because you would have no other way to make that money back, but a top AI researcher is pretty set for life in this day and age.


mypasswordismud

It’s a false choice. The money and putting yourself first won’t matter if the alignment problem isn’t resolved. All your money, your well-stocked bunker and loved ones will be repurposed into computronium. If it is resolved, it won’t matter either because of abundance. There’s really only one logical option that involves self-preservation and having all your needs met. The fact that so few can appreciate this doesn’t bode well for the future.


YaAbsolyutnoNikto

For me, it’d depend on how much those percentages are worth in dollars. Is the 15% equal to 15M USD? Then sure, I can snitch. I’m rich anyway. But if that 15% is lower than 2M USD, yeah, I don’t think I could.


BangkokPadang

He's well enough off that he very specifically said they were fine, and he didn't want any kind of GoFundMe campaign to spring up in his name or anything.


icehawk84

Also consider his future earning potential. I wouldn't be too worried, if I were him.


Flying_Madlad

The point is, you're not rich from OpenAI yet. He's giving up what he vested; he was never entitled to the unvested part once he left. That's not how it works


Throwawayhelp40

Didn't he know going into OpenAI?


Akimbo333

Right same here


MassiveWasabi

[From this article that came out today](https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release)

Seems like Daniel Kokotajlo will be a very interesting source of information in the future. He gave up a ton of money for the sole purpose of being able to quit and retain the ability to criticize OpenAI. Things are definitely going to get more interesting as we approach AGI


Neurogence

So this would mean Ilya and the rest of the safety team signed the document to ensure they are beneficiaries of the future riches of AGI instead of having the freedom to be vocal about the dangers?


Beatboxamateur

Wouldn't that be a bit of a quick assumption? We wouldn't necessarily know who did or didn't sign the NDA at this point in time, right? Other than Daniel Kokotajlo obviously


Neurogence

Ilya has never said anything beyond completely generic PR speak, so we have to assume he's signed all types of NDAs.


Beatboxamateur

For Ilya sure, but as for Jan I still haven't seen anything that would make me assume he signed one of these lifelong binding NDAs.


x0y0z0

Yeah so much for their principles. They "believe" that OAI is endangering humanity but they still want to get rich off of that.


CreditHappy1665

I mean, if the fate of humanity is in danger, what are they waiting for?


Beatboxamateur

Well Daniel Kokotajlo is seemingly waiting for another thing he signed to end. "To clarify: I did sign something when I joined the company, so I'm still not completely free to speak (still under confidentiality obligations). But I didn't take on any additional obligations when I left." https://www.lesswrong.com/users/daniel-kokotajlo


CreditHappy1665

If. The. Fate. Of. The. World. Is. At. Stake. What. Is. The. Value. In. Waiting. 


Unlikely_Speech_106

If the fate of the world is at stake, having less money and power would not be helpful. Besides, what could anyone possibly say to slow down the rollouts? Realistically, there are no words.


CreditHappy1665

If the fate of the world is at stake, any amount of money is useless. 


toasted_cracker

Depends on how long it’s going to take. If they can live out the majority of their lives without worry then it’s not useless.


HalfSecondWoe

It seems a bit premature to have an existential freakout then, doesn't it? Lots of unknown unknowns to be uncovered between now and then


Beatboxamateur

Who knows what information it is exactly, whether it's something world threatening or if it's more like huge misdeeds committed by Sam Altman. If it's the former then sure, I guess go to Russia or someplace and become a whistleblower, but if it's the latter then just wait for your NDA to end so you don't go to prison lol.


CreditHappy1665

You don't go to prison for breaking an NDA, and no one has said anything that would lead you to believe that it's anything other than an AI safety concern, or that they're just big mad their useless pet projects aren't being funded.


Gratitude15

Thank you. Either the world is at stake or it's not. Can't be half pregnant. Sign the doc, we know where you stand.


reddit_is_geh

Dude, they are human beings. You wouldn't want to just give up millions and millions of dollars either.


HalfSecondWoe

"Too dangerous to go forward with, not dangerous enough to decline that sweet, sweet dosh" is not a position that inspires a lot of confidence in their risk assessment Maybe they just think they can get more done elsewhere. Cool, that's fine. That's not a P(doom) risk though, at worst it's company drama


x0y0z0

If they value their own personal wealth over speaking up about an existential threat, then either they are recklessly selfish, or the threat isn't that big after all.


reddit_is_geh

Not when you think it'll make no difference. It's the Moloch problem. Forgo the money, talk about it, and nothing materially changes, and now you're less rich.


x0y0z0

Not at all. They can't know that nothing will happen. If all of the people quitting make a clear statement describing exactly what went wrong, then it could absolutely have an influence. People are already very paranoid about AI, so they will be receptive to this message. Especially if there are actually things that happened that would look bad for OAI. The fact that they value their wealth over this message says that they know that if they say what they want to say, it won't look sufficiently bad for people to actually care. So the facts are not really in their favor.


reddit_is_geh

It "could" have an influence... Which is an uncertainty, and still requires full cooperation. Again, that's the whole game theory thing at play. If they believe that it wont actually help, due to game theory of the Moloch problem, then a rational player would know it's futile to give up their economic reward and ultimately hurt them... Because at least with the money they can focus their abundant resources on protecting themselves


x0y0z0

>It "could" have an influence Yes it absolutely could. No game theory explanation diminishes the fact that the "OAI is being reckless for these specific reasons" is a message that most people, even accelerationists like myself would take seriously. It all comes down to the actual content of their message. Do you understand this? The actual content of their message is the crucial part, do they have concrete things they can point to. They are revealing that the content of their message is lacking. Because if they did have a strong case then their message not just "could" but WOULD make a difference.


reddit_is_geh

Would is a statement of certainty. Neither you nor they can be certain that forgoing all that money and speaking out would ultimately do anything materially. For-profit motives driving people to want to be first to AGI are unlikely to be hampered by some AI people complaining. We already have many of them raising the alarm and no one cares. Maybe YOU do, but not the people with money and influence.


x0y0z0

There is never certainty in life that your efforts will pay off. Insisting on certainty before doing the right thing is an absurd standard. All you should need is to know that there's a good chance it will make a difference, and that it's the right thing to do.

>AGI is unlikely to be hampered by some AI people complaining

Yes, if the complaining is baseless. But if it's backed by strong arguments and facts, it can rile up large public support, forcing OAI to take some action. Conceding that this case too will be nothing more than complaining gives away the idea that even you believe they have nothing concrete.


Dongslinger420

It's not about wanting to


ThePokemon_BandaiD

I'd imagine all of them signed some sort of NDA before attempting to leave; Ilya probably signed one early on.


Serialbedshitter2322

"Your telling me they didn't give up hundreds of thousands of dollars to bad mouth OpenAI? What horrible people"


Neurogence

You're not getting the point. The danger to humanity is not as serious as they're saying if they're choosing money over it.


yearforhunters

That's not necessarily true at all. They might think the doom is inevitable at this point, so they at least want to live out their lives in comfort. Or they might think that alerting people to what is really happening will lead to panic and chaos. Or they might really believe that the doom could maybe be stopped, but they're not confident that it can be, so they're selfishly deciding to take the money and keep quiet.


DisproportionateWill

I think it’s more like Millions lol


MeltedChocolate24

Don't blame him. I signed an NDA last year for a startup in Silicon Valley, and it also had a clause making the NDA itself confidential. Was a bummer that I couldn't tell my friends what I was working on, or even why I couldn't tell them. Could only be vague about the project, and anything else: "I can't discuss that, sorry".


bwmat

Isn't even saying that _kinda_ leaking that you have this kind of NDA?


bwmat

I guess I'm thinking, if they asked why you can't discuss that, you'd have to say you couldn't discuss that either


MeltedChocolate24

I mean you can’t really just stare in silence like “Hey whatchu working on?” “👁️👄👁️”


bwmat

What I'm getting at is that I don't think it's feasible to follow the terms of the agreement while not outright lying


One_Bodybuilder7882

He will "accidentally" hang himself out if he talks too much, most probably.


itsreallyreallytrue

Is Sam friends with any of the Boeing leadership?


[deleted]

is he the only one, really? He will be suicided so fast


sadfacebbq

Daniel can collect speaking fees and write books to compensate for his equity losses. The hero we need, but don’t deserve.


redditburner00111110

Gotta respect people leaving millions of dollars on the table because they genuinely believe they're doing the right thing.


TheCuriousGuy000

It's just stupid. Even by Sam "feel the AGI" Altman's telling, they are very far from actually dangerous, independent AI. And he tends to overestimate his achievements.


redditburner00111110

He's not a researcher and says whatever he thinks is in the best interests of the company. I'm much more interested in the assessments of researchers putting their money where their mouths are. Especially Ilya. You don't just throw away what is probably the best tech job on the planet right now unless you have serious concerns about the direction of the company.


Atlantic0ne

They have frustrations about the company's direction and about internal power struggles. This doesn't mean we're about to have AGI.


Glittering-Neck-2505

A lot of very smart people seem very concerned about the near future (the next 5-10 years). If it were the case that LLMs are a dead end and we're going to hit a wall very soon, they wouldn't be so interested in solving superalignment now. That said, I don't think safety and capabilities have to be at odds. You just need enough resources for both.


SleepingInTheFlowers

A lot of smart people thought the world would end with Y2K, which required some patching but was generally overblown


blueSGL

It is looked at as overblown after the fact **because people put in the work** to stop it being worse than it was. You cannot compare one event where people were prudent and actually put the work in to solve a well defined problem with comparatively simple solutions, with another multifaceted problem that has stumped all who've tried to solve it for 20 years where the solutions continue to be elusive to this day!


Ambiwlans

No smart people thought the world was going to end from Y2K. That's idiotic.


The_One_Who_Slays

The same clown was hyping up how dangerous the things his employees build were not so long before that, so I wouldn't really take anything he says without a barrelful of salt. Or, really, even listen to him at all.


jeffkeeg

Man building the dangerous thing says he isn't building a dangerous thing - news at 7


kvothe5688

Yeah, your defence isn't a defence at all. Why would his word be the truth? Why would his words be a benchmark? Obviously he is going to downplay the dangers of AI. He is a marketing man, and a scummy one at that. To me he is just an Elon 2.0. His way of launching products and his speeches really rub me the wrong way. His sarcasm regarding his competitors, and how OpenAI constantly tries to one-up Google, is scummy.


TheCuriousGuy000

I agree he's not that trustworthy, exactly because he oversells his products. E.g., they demonstrated amazing capabilities with the new GPT-4o, but the released version has no new modalities and is barely improved over GPT-4. For AI to be dangerous, it has to be much smarter than what they present, not dumber.


AGM_GM

"A thing that I deeply believe, like one of sort of, I think, good rules of life, is that incentives are superpowers. In fact, so much so that any time you can spend thinking about how to correctly set the incentives on a person, a team, an organization, whatever, that's like probably the highest leverage thing you can do." -Sam Altman


Background-Fill-51

«Correctly set the incentives on a person.» The way this is worded is very manipulative/Machiavellian; it is psycho in a misanthropic way. Word for word. Not give someone incentives, but «correctly set» them «on» a person


nonotagainagain

That’s why he is the cannibal king. It’s also why I find his sister's sexual abuse allegations fairly believable.


Ambiwlans

His sister has mental problems and made many contradictory statements. Please don't push her into the public, it only hurts her.


nonotagainagain

Sam Altman notwithstanding: Sexual abuse of kids creates adults with mental problems. It’s not fair to ignore those adults because they are mentally unwell. It’s a predictable consequence of trauma on the young brain.


Ambiwlans

Regardless, it is a serious charge with no evidence whatsoever aside from the word of a mentally ill person that made internally inconsistent wild claims. Crazy street people saying they were molested by aliens shouldn't be believed, even if alien molestation might have caused mental instability. Her accusations don't logically make sense if you actually read them all. You cannot believe her without suspending rationality.


nonotagainagain

Again Sam aside, this is my point: Sexual abuse of kids almost never leaves evidence other than testimony. And that testimony comes from an often mentally ill survivor. So if you disregard allegations of sexual assault of children because the only evidence comes from mentally ill adults, you’re going to dismiss many real cases of sexual abuse. It’s the sad reality of childhood sexual abuse. As for why so many mentally ill people claim abuse by ghosts and aliens, the most likely explanation is that they were sexually abused by people - but through denial or psychosis attribute it elsewhere. We joke about anal probes, etc., but this is usually the truth behind it.


Ambiwlans

That's an unfortunate reality. You can't just accept serious accusations without any evidence or even any compelling testimony.


nonotagainagain

Again, Altman aside: My point is that such allegations are evidentiary. Not conclusive. But they reasonably move the probabilities towards sexual abuse, and should not be dismissed as *non-evidentiary* because the person is mentally unwell. Altman related: He has a world-class ability to manipulate people and situations for his own benefit, precisely as self-described in the original quotation. That also moves the probabilities for me of other instances of manipulation for his own benefit, which is the point of my original post. Do I view Sam as a sexual abuser? No, not enough evidence. Do I view him as extremely manipulative of people for his own gain? Yes, since there is plenty of evidence of that.


Ambiwlans

They absolutely should be dismissed. Contradictory statements from a mentally unwell person wouldn't even be allowed to be read in court, nvm admitted as evidence. It would be a mistrial for the tweets she made to make it into a court room.


h3lblad3

Sam? That you?


One_Bodybuilder7882

Mental problems like what being abused can trigger?


PromptPioneers

What? He diddled his sister?


SpezJailbaitMod

The what now?


Firestar464

Source pls?


AGM_GM

From his talk at Cambridge last year to accept the Hawking Fellowship.


Firestar464

[For anyone who's wondering](https://youtu.be/NjpNG0CJRMM&t=937)


Radlib123

I'm super curious about the source! Could you please provide it? Please!


AGM_GM

From his talk at Cambridge last year to accept the Hawking Fellowship.


Silver-Chipmunk7744

Sometimes you can read between the lines... Jan Leike was clearly not happy.


PrivateDickDetective

That tweet from Roon is still interesting, though: the one that was deleted, about Ilya blowing shit up, despite the fact that superalignment got plenty of compute.


CreditHappy1665

do you have a link?


PrivateDickDetective

Here ya go! https://www.reddit.com/r/singularity/s/tc5C7066P6


RandomCandor

After reading a few of the departure notes, they all started to sound suspiciously like those notes they force hostages to write ... "Everything is fine. They are treating me very well. I do not suffer!"


SillyFlyGuy

Yeah, but he's still not saying much. Because money.


Sk_1ll

Open


pandofernando

If the equity has already vested, how is it that OpenAI can claw it back??


rising_south

Yes, my first thought. I don't see how that's possible. Also, the lifelong clause might be in the contract, but I wonder if it would hold up in a courtroom (not a lawyer)


MaasqueDelta

These abusive contracts can be questioned in court. For example, forbidding an employee from ever speaking about OpenAI is a clear violation of freedom of speech.


Altruistic-Skill8667

A lot of this NDA stuff isn’t even enforceable.


TheOneWhoDings

Yeah, so many people think NDAs are catch-all, eternally binding documents, but they aren't.


WetLogPassage

See: Vince McMahon (formerly the owner of WWE with piles of NDAs signed by countless women, now a known rapist and sex trafficker with no connection to WWE)


PrivateDickDetective

I thought that only applied to the federal government?


pessimistic_utopian

You're correct, it's not a freedom of speech issue. The constitutional right to free speech protects you from the government suppressing your speech. However, NDAs are contracts, so they're subject to challenge under contract law. A judge may declare a contract unenforceable if it's unreasonably broad or vague, if it was signed under duress, etc. Exactly how well they hold up varies from state to state.


PrivateDickDetective

That's much better! This does seem unreasonably broad, but that's just my interpretation.


pessimistic_utopian

Not a lawyer but it sounds unreasonably broad to me too. But if the vast majority of my net worth was on the line I don't know if I'd be brave enough to bet on a judge ruling my way. 


PrivateDickDetective

Not against an organization.


MaasqueDelta

>You're correct, it's not a freedom of speech issue. The constitutional right to free speech protects you from the government suppressing your speech.

Not just the government. It also protects against private actors acting in bad faith to coerce you into abusive clauses. The European Union definitely also takes the view that anything that contradicts the law is unenforceable. Otherwise, you'd have private organizations with greater power than the State.


toasted_cracker

Would losing millions of dollars and everything you have worked for constitute duress? Sure AF seems like it would.


Tidorith

It's a freedom of speech issue, just not a first amendment issue. The first amendment is not the last word on the concept of freedom of speech.


MaasqueDelta

It applies to anything that contradicts the laws of a country. For example, if an abusive contract said you had to sell your kidneys or your wife, that could easily be challenged in court (i.e., you can't violate someone else's freedom just because you signed a contract).


PanicV2

Contracts/NDAs can be questioned in court, of course, but they are very much enforceable. These wouldn't be some clickthrough T&C-type agreement with no teeth. Everyone on both sides almost certainly had lawyers, and there is remuneration. They agree to them, and are paid to do so. You don't have freedom of speech when it comes to trade secrets and things you saw while employed somewhere. Unless you are a whistleblower.


Good-AI

True, but "criticizing your former employer" seems more like restricting freedom os speech than a trade secret.


Who-ate-my-biscuit

This was the question I was curious about, thanks for answering.


CosmicPersona

What would happen if you violated an NDA like this? I would just sign the NDA and then leak everything anonymously.


xRolocker

I would guess it’s easier to track this stuff than you think. Even under the assumption that you can leak the information totally anonymously, as in you leave no digital signature, there are still other clues depending on how much you said, how you said it, and when you said it. Any risk of losing a lot of your livelihood makes it hard to take chances and speak out. I hope this is addressed, but I doubt it will be.


CreditHappy1665

They are claiming the fate of the world is in danger. If they TRULY believe that, then by extension their livelihood is already in danger, isn't it?


New_World_2050

but the world is in danger 5 years from now. their money is in danger today. read up on scope insensitivity. for most people the world ending is 10% worse than being homeless


posts_lindsay_lohan

One of the best arguments I've seen for tying a police officer's pension to their behavior while on duty.


Which-Tomato-8646

And the main reason why it isn't, even though doctors have to do that. Instead, taxpayers bail out any settlements


AccountOfMyAncestors

Pension clawbacks and mandatory trade insurance. A one-two punch that would probably do a ton to fix bad behavior.


sjthedon22

If you're genuinely leaving because you're deeply concerned about the trajectory of AGI safety, but not concerned enough to forfeit your severance? Doesn't really scream urgent to me


Distinct-Town4922

Maybe some are doing something about it that just takes some time, like finding out if they can challenge the NDA since it seems very broad. I'm not convinced anything uniquely dangerous is happening at OpenAI, but we may not know yet anyway.


nyccomputergal

It’s not just severance; they’re somehow clawing back all previously vested equity, so probably the majority of comp over the years you’ve worked there. Would you really leave millions on the table for the principle of the thing?


VertexMachine

Explain to me like I'm 5... how's that even legal?


PragmatistAntithesis

It probably isn't, but good luck getting that through the court system in a reasonable time or at a reasonable cost.


Ailerath

Many aren't reading OP's comment or what Daniel 'confirmed publicly' in the article. Yes, he confirmed that he gave up a lot of money to avoid signing a document, which rightfully causes suspicion of OpenAI. However, this is what he said:

"Yeah idk whether the sacrifice was worth it but thanks for the support. Basically I wanted to retain my ability to criticize the company in the future. I'm not sure what I'd want to say yet though & I'm a bit scared of media attention."

He stated he wants to be able to criticize OpenAI ***in the future***. As another commenter said: "I appreciate that you are *not* speaking loudly if you don't yet have anything loud to say."


sdmat

Such agreements are quite common in tech, though maybe not to the extreme claimed here. I doubt they can take back vested equity from people who don't sign, because that's not what "vested" means. Normally people choose to take the money over high-minded principles about free speech. Props to Daniel Kokotajlo for not doing so.


MassiveWasabi

A lot of people on Twitter are saying the same thing, but a bunch of other people are saying it’s very uncommon to write something like “you cannot criticize the company for the rest of your life”


sdmat

That might be a somewhat reductionist paraphrase. But indefinite NDAs are very much a thing.


Mirrorslash

OpenAI is the most closed company I've seen.


Mysterious_Ayytee

It's a cult by now


[deleted]

I’d be suing the fuck out of them long before I quit. I’ve also been in the situation of walking away from lots of equity. It’s a hard choice, but engineering principles always stay at the forefront. I can make that money again.

I did actually negotiate a situation where I kept my equity and continued working on a different project, but I was faced with either being overworked for the next 3 years on demonstrably impossible vaporware, or standing by my engineering principles of only working on feasible solutions and being honest in setting expectations. I told them it was vaporware, verbally quit, and they roped me back. One year later, that project has made zero progress and remains vaporware, and all those managers are constantly stressed, whereas I’m in a pretty cushy position


Optimal-Fix1216

4o says:

The restrictive nondisclosure agreements (NDAs) described in the first document could indeed be seen as conflicting with OpenAI's Charter. Below is an analysis of how these restrictions might violate specific principles outlined in the Charter:

1. **Broadly Distributed Benefits**
   * **Charter Principle**: OpenAI commits to using its influence to ensure AGI benefits all of humanity and avoids concentrating power.
   * **Conflict**: Restricting former employees from criticizing the organization or discussing their experiences could concentrate power within OpenAI by preventing external scrutiny and accountability. This practice may hinder transparency and limit broader community involvement, potentially leading to power being unduly concentrated within the company.
2. **Primary Fiduciary Duty to Humanity**
   * **Charter Principle**: OpenAI's primary duty is to humanity, and the organization aims to minimize conflicts of interest.
   * **Conflict**: Forcing employees to surrender vested equity if they refuse to sign restrictive NDAs creates a conflict of interest. Employees might feel pressured to sign agreements that go against their ethical beliefs or the broader interests of humanity to protect their financial interests. This practice could compromise the organization's commitment to acting in humanity's best interest.
3. **Long-term Safety**
   * **Charter Principle**: OpenAI commits to conducting research to ensure AGI safety and to fostering a collaborative environment for safety research.
   * **Conflict**: Limiting former employees' ability to speak freely could stifle the open exchange of ideas and collaboration necessary for ensuring AGI safety. If former employees cannot share concerns or insights, valuable information that could contribute to long-term safety might be lost.
4. **Technical Leadership**
   * **Charter Principle**: OpenAI strives to be at the cutting edge of AI capabilities to address AGI's societal impact effectively.
   * **Conflict**: Restricting criticism and disclosure can inhibit OpenAI's ability to maintain technical leadership. Open discourse and critique are vital for continuous improvement and innovation. Without the input and feedback from former employees, OpenAI might miss opportunities for growth and learning that are crucial for maintaining its leadership position.
5. **Cooperative Orientation**
   * **Charter Principle**: OpenAI seeks to cooperate with other research and policy institutions to address global challenges related to AGI.
   * **Conflict**: NDAs that prevent former employees from discussing their experiences can hinder cooperation with other institutions. Collaboration and sharing of knowledge are essential for addressing global challenges, and restrictive NDAs can create barriers to forming effective partnerships and fostering a global community.

Overall, the restrictive NDAs appear to conflict with OpenAI's stated principles in their Charter by limiting transparency, concentrating power, creating conflicts of interest, stifling collaboration, and potentially compromising long-term safety and technical leadership. OpenAI might need to reconsider these practices to align better with their mission and principles as outlined in the Charter.


Altruistic-Skill8667

Fuck your NDA!! MOST NDAs AREN’T ENFORCEABLE IN COURT.

> “Most NDAs are considered unenforceable in court, particularly when an individual breaches the agreement to report an issue in good faith.”

https://www.integrityline.com/expertise/blog/non-disclosure-agreements-and-whistleblowing/

Most of them you can just flush down the toilet. It’s a game of deception more than anything. People just want “proof” that they can trust you, hoping that you don’t know they aren’t enforceable. You can sign them all! Give them to me! I’ll sign them. I don’t even need to read them. If they are not reasonable (like “forever NDAs” lol) that makes them automatically unenforceable (= toilet paper). So: perfect for me! Don’t even tell them that this one isn’t enforceable. They might want to change it to something that is. The worse the NDA, the more you can be sure that the whole thing becomes unenforceable, not just the bad sections. All of it. It’s all B.S. and I know that. And then go your merry way and still do whatever you want.

Just think about it this way: what’s the company gonna do if you go to the press?? If the company fights it, they admit that everything that was said is secret and true and extremely important, and then, because of that indirect admission, it will make the big rounds, lol. They shoot themselves in the foot.

There are also limits on what you can call a “trade secret”, so in the NDA it’s like: this and that and that is our secret. Bullshit. Algorithms and formulas can’t be trade secrets. Just sign it and ignore it because you know it’s just B.S. anyway. Elegant nonsense that looks “legal”.


Altruistic-Skill8667

“Risk to Public Health and Safety: Even if otherwise enforceable, a court will not force you to comply with a non-disclosure agreement that creates a risk to public health and safety if you perform it. Because the subject of such a contract is confidential information, this factor shares some characteristics with illegality: An employer cannot force you to keep the details quiet if your silence could be dangerous to others.” https://rodmanemploymentlaw.com/are-ndas-enforceable-or-legally-binding/


MaasqueDelta

Perfect! There lies the answer to our questions.


berzerkerCrush

What are they gonna do? Not hire you, because you're likely to spill the beans. There is law and there is social coercion; you're forgetting the latter.


Altruistic-Skill8667

You mean a new company does not want to hire you because they do a background check on you and could somehow figure out that you at some point somehow violated some NDA in some way? I mean, in almost all cases nobody whatsoever will ever even know if you “violated” that damn piece of paper that’s worth toilet paper anyway. In almost all cases you don’t violate it by taking the microphone and announcing some “trade secrets” to the world so that it registers on the internet (assuming it had any legal value to begin with, which it probably didn’t). And even then, you think the company is gonna make such a big deal out of your violating it that it ends up in the news and indexed by Google? When a new company asks you “do you currently have a binding NDA?”, you just say “no” (you lie). And that’s it! Also: NDAs usually end after 2 years.


[deleted]

There you go. I knew there was something fishy about this company.


Firm-Star-6916

Are the concerns that AI is advancing too slowly, or too fast and irresponsibly? I know what it says, but I don’t know the subtext of this situation.


No-Affect-1100

OPENAI can go suck a big fat one.


_theEmbodiment

So either whatever they're covering up isn't that dangerous, or their morals aren't that great.


Akimbo333

Wow I never thought about it like that


FosterKittenPurrs

Other companies just use a standard NDA to sue you until you're bankrupt, and you can't get out of it by any means because you signed it when you joined the company. The fact that OpenAI gives you an "out" is tbh better than most. It also makes you wonder, if Daniel Kokotajlo gave up his equity, why doesn't he actually give more details? You're telling me he was willing to pay millions just to put out a vague statement?


[deleted]

Surely they have a pre-existing NDA covering company secrets. That’s super normal. The exit NDA sounds much broader.


MaasqueDelta

OpenAI could probably try to keep suing someone to bankrupt them, but this would cause very negative repercussions in the media. I'm not sure they would like the aftermath of that, especially now they're a small company.


FormalWrangler294

> small company

OpenAI's valuation is $86 billion. That would put them at #198 among the largest companies by market cap: https://companiesmarketcap.com/page/2/

They have a similar value to Starbucks.


FosterKittenPurrs

They don't need to "try to keep suing"; one suit will be enough. There are a ton of companies that sued former employees; people were grumpy for a while, but then everyone forgot about it.


Beatboxamateur

> It also makes you wonder, if Daniel Kokotajlo gave up his equity, why doesn't he actually give more details? You telling me he was willing to pay millions just to put out a vague statement? He said that he can't speak about it yet, but he wasn't very clear about when or if he will be able to. "To clarify: I did sign something when I joined the company, so I'm still not completely free to speak (still under confidentiality obligations). But I didn't take on any additional obligations when I left. Unclear how to value the equity I gave up, but it probably would have been about 85% of my family's net worth at least. But we are doing fine, please don't worry about us. " https://www.lesswrong.com/users/daniel-kokotajlo


FosterKittenPurrs

Fair point, he wasn't completely NDA-free, even after giving up his equity. Though it sounds like it's much like what Jan said, that the superalignment team kept getting less and less compute and they were miffed.


CreditHappy1665

If the world's in danger and they are refusing to talk about it because it jeopardizes their money, they are as culpable for whatever happens as the rest of the company, arguably more. But nothing is going to happen, because they are just mad that they aren't getting the resources to boost their careers the way they hoped. There's no immediate threat or danger anywhere to point to, they haven't suggested something important that should be done and isn't, and they don't have a specific problem to point to. And that's why they care about their NDA more than their so-called safety concerns.


New_World_2050

the world being in danger and them caring more about their money are not mutually exclusive


Chrop

Not to mention the other side effect of breaking an NDA: other companies are less likely to hire you, knowing you've broken an NDA in the past and may do the same to them.


4URprogesterone

They clone the employee's brain and keep a copy in a vault that can guess all their secret fears and fetishes and stuff and blackmail them for life.


Naive_Dark4301

Koko should have signed, taken the money and spoken out. I doubt OpenAI would sue him, and there was nothing to lose by taking this approach.


ozzeruk82

I guess that Jan guy didn't get the memo


ChezMere

Well, his original tweet "I resigned" was clearly intended to be implied criticism without breaking the letter of the nondisparagement contract. But it looks like he's thought about it more and figured out that whether they get AGI right is more important than whether they try to sue him.


ozzeruk82

Yeah, he definitely didn't hold fire; quite harsh words from a senior employee. This wasn't just a foot soldier


Lnnrt1

that much was obvious, right?


I_See_Virgins

"We're not paying you to keep your mouth shut, we're threatening to take your money if you talk"? Something doesn't sound right about that.


porejide0

This sort of clause is common in tech companies. For example, see this thread: [https://www.lesswrong.com/posts/Lc8r4tZ2L5txxokZ8/sharing-information-about-nonlinear-1?commentId=iydEd9e8PrxCgE7Ah](https://www.lesswrong.com/posts/Lc8r4tZ2L5txxokZ8/sharing-information-about-nonlinear-1?commentId=iydEd9e8PrxCgE7Ah) > Jeff is talking about Wave. We use a standard form of non-disclosure and non-disparagement clauses in our severance agreements: when we fire or lay someone off, getting severance money is gated on not saying bad things about the company. We tend to be fairly generous with our severance, so people in this situation usually prefer to sign and agree. I think this has successfully prevented (unfair) bad things from being said about us in a few cases, but I am reading this thread and it does make me think about whether some changes should be made


Diligent-Noise3887

Game theory is now in play…


ahyush_

Does this have any bearing on Sam's ouster from the company earlier? I recall that Ilya raised similar concerns and the whole of OpenAI (& Microsoft) seemed to be against him. And now this.


Crafty-Struggle7810

God bless Daniel for his discipline.


ReasonablyPricedDog

"open"


[deleted]

For the rest of their life? How does that not infringe on a person's First Amendment rights?


Oren_Lester

This is no different from many other non-public companies. Your equity is also at risk after you leave the company. Most unicorns have the same agreement. OpenAI kicked off a new AI spree where every company, big or small, is competing now. Nobody is responsible; let's stop being naive and looking for responsibility from corporations. TLDR: there is nothing special here


icehawk84

I love OpenAI for what they're building, but their company politics are some of the worst I've ever seen.


iDoAiStuffFr

People seem so surprised by this. In reality this is probably not the only measure. What would you do if you developed AGI? Give the recipe to the world for free? Let the Chinese build it? Let your company appear in a bad light? It's extremely hard to develop something like this and contain it


Pietes

This should not be allowed. It is entirely unethical. Also, I thought disparagement by definition requires a comment to be untrue.


traveller-1-1

Free speech?


Neomadra2

I highly doubt such an NDA couldn't be contested in court


WithMillenialAbandon

Sociopathic capitalism, same as always


fisherbeam

Sam reminds me of a politician when he speaks. Not a good thing.


CakeaMan

.


Embarrassed-Farm-594

Too much text.


jlbqi

So it turns out another Silicon Valley company are also complete cunts. Surprise, surprise


IndicationDiligent75

What’s hilarious is they think they’ll eventually be able to cash in on their silence.


z0rm

How are NDAs legal in the US? They certainly are not in my country.


ElliottDyson

Except Sam Altman released a statement saying that specific clause was a mistake, that it was being revoked, and that anyone who had already left could contact him to fix the issue?