Ask HN: OAI employees who signed to follow Sam, what was your decision process?
42 points by eternalyxiii on Nov 23, 2023 | 16 comments



I'm not an OpenAI employee. I used to work in law. I'm looking into your question and have started to collect web archives of pages that might be valuable evidence later, in case Microsoft tries to change their Investor Relations page. I tried to put it on GitHub and wasn't sure what license to use, so I went with GPL 3. I hope this helps. https://github.com/almamater54/investigation-into-possible-f...


Oof, that was DMCA-sniped before I could read it.

While we are at it, it's time for a decentralized git ledger!


That's really wise. Thanks for compiling.


Not an employee and no knock to sama, but I bet the primary motivator was “I want my stock value intact and as high as possible”.


I would tend to agree, but I wonder if they'll be able to have this honest introspection. Also, did they know about Q* and its implications beforehand? My guess is most staff did not.


I think the primary reason was that there's little to lose by signing the letter. It's not a legally binding agreement to quit and never go back. If you changed your mind, OAI would happily take you back. On the other hand, if you don't sign, you'll lose a potential opportunity.

So from a game theory perspective the only rational action is to sign, no matter what you believe.
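
To make that concrete, here's a toy payoff sketch in Python. The outcomes and numbers are entirely made up to illustrate the dominance argument under the assumptions above (signing is non-binding, not signing forfeits a potential opportunity); nothing here is OpenAI-specific data.

    # Toy sketch of the dominance argument with made-up payoffs.
    # Assumption: signing carries no downside either way, while not
    # signing forfeits a potential opportunity if the mass move happens.

    outcomes = ["board prevails", "mass move to Microsoft happens"]

    payoffs = {
        "sign": {"board prevails": 0, "mass move to Microsoft happens": 1},
        "don't sign": {"board prevails": 0, "mass move to Microsoft happens": -1},
    }

    # "sign" weakly dominates "don't sign": never worse, sometimes strictly better.
    never_worse = all(payoffs["sign"][o] >= payoffs["don't sign"][o] for o in outcomes)
    sometimes_better = any(payoffs["sign"][o] > payoffs["don't sign"][o] for o in outcomes)
    print("signing weakly dominates:", never_worse and sometimes_better)  # True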


Also curious whether the letter to the board about the Q* concerns was known to many/most of the people on the list who signed.


I would guess not, so why blindly follow him without knowing the information behind the board's decision?


Green throwaway accounts are highly encouraged in this discussion


And easy to fake


As an OpenAI employee in this scenario, my thought process would involve carefully considering the perspectives of both Ilya and the board, recognizing the importance of ethical AI development and the potential risks associated with AGI. Here's a breakdown of my thought process:

1. Understanding Ilya's Concerns:
   I would seek to understand Ilya's concerns about AI commercialization and the perceived departure from OpenAI's original mission. This involves analyzing how commercialization may impact the ethical considerations of AI and assessing whether the current trajectory aligns with OpenAI's founding principles.

2. Assessing AGI Risks:
   I would acknowledge the board's concerns about AGI and its potential threats to humanity. Understanding the risks associated with AGI is crucial, and I would consider how OpenAI can actively contribute to addressing these concerns through responsible research and development practices.

3. Balancing Ethical Considerations and Financial Stability:
   Recognizing the financial realities of sustaining a cutting-edge research organization, I would consider how to strike a balance between ethical considerations and the need for financial resources. This might involve proposing guidelines for responsible commercial partnerships, ensuring that ethical considerations are at the forefront of any business decisions.

4. Advocating for Dialogue and Collaboration:
   Rather than taking an adversarial approach, I would advocate for open and constructive dialogue between Ilya, the board, and other stakeholders. Collaborative discussions can help identify common ground, address concerns, and work towards a solution that aligns with OpenAI's mission.

5. Promoting Responsible AI Development:
   Emphasizing the importance of responsible AI development, I would propose initiatives and policies that actively contribute to ensuring the safe and ethical use of AI technology. This might include participating in industry-wide discussions on AGI safety and advocating for best practices.

6. Considering the Broader Impact:
   I would weigh the potential impact of various decisions on the broader AI community, OpenAI's reputation, and its ability to contribute meaningfully to the field. Striking a balance between principles and pragmatism is crucial for the long-term success and influence of OpenAI.

7. Expressing Willingness to Collaborate:
   In any communication with the board or colleagues, I would express a genuine willingness to collaborate and find solutions that address concerns on both sides. Building consensus and maintaining a shared commitment to OpenAI's mission would be a priority.

Ultimately, my goal would be to contribute to a resolution that upholds OpenAI's values, maintains its reputation as a leader in ethical AI, and ensures its continued impact on the development of artificial intelligence for the benefit of humanity.


That’s GPT if I’ve ever seen it


Did you write this with ChatGPT?


[flagged]


The company has the week off for Thanksgiving (well, the majority of the staff did, before all of this), and wouldn't you think this is important to know?


Couldn't you say the same thing about literally any other comment and discussion on HN?


You just did!



