Psychoanalyzing the Presidents: AI Dives into the Minds of Biden and Trump (sibylline.dev)
3 points by CuriouslyC 6 months ago | 7 comments



I do think that my ability to understand other people could be greatly augmented by an LLM. The number of lives recorded in the training data exceeds the number of people I will ever be able to encounter or learn about by several orders of magnitude. The neural network model was inspired by the human brain, so I would expect it to be a good fit for modelling human behavior.

"Applications should not be engineered to maximize engagement metrics, but rather to create holistic experiences that transcend the sum of their A/B tests." Amen!


A plane looks like a bird, so clearly it generates lift the same way!

Or, a rehash... people have said this about books for generations.


This analysis was performed using Jung, a psychoanalysis endpoint I built as part of a larger mental health project I'm working on. Jung is obviously still in beta, but I'm happy to work with people who are interested in integrating this functionality into their tools to ensure it's maximally robust for their use case.


"Analysis confirms the biases of the analysts, news at 11"


If you read the article, you'll note that I've designed the endpoint to take a variety of perspectives for this exact reason. If you feel there's still a bias, I'm happy to talk about that intelligently; that quote wasn't a good conversation starter, though.


It's an LLM; there's no novel reasoning from first principles, personal ethics, or insights. Its output reflects the training data, which data were privileged, and their biases, not to mention (if the API is built on top of something like ChatGPT) the explicit guidelines given to it (I mean at the OpenAI level, before it even gets any API-related prompts).

Why wouldn't those biases be just as evident across a "variety of perspectives"?


The default perspective of an LLM is, roughly, a centroid of the training data plus alignment. By cueing it to take different perspectives, you can move away from that centroid, and by providing a variety of perspectives that sample the space evenly, you can get a low-bias estimator.
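To make that concrete, here's a rough sketch of the idea, assuming an OpenAI-style chat API; the personas, prompts, and model name are just illustrative placeholders, not what the Jung endpoint actually uses:

    # Sketch of the "variety of perspectives" approach: ask the same question
    # under several analyst personas and then synthesize, so no single persona
    # (or the model's default "centroid") dominates the result.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PERSONAS = [
        "a Jungian analyst focused on archetypes and the shadow",
        "a behavioral psychologist focused on observable patterns",
        "a skeptical political historian wary of psychoanalyzing public figures",
    ]

    def analyze(text: str) -> list[str]:
        """Collect one analysis per persona for the same source text."""
        results = []
        for persona in PERSONAS:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system",
                     "content": f"You are {persona}. Analyze the speaker's psychology."},
                    {"role": "user", "content": text},
                ],
                temperature=0.7,
            )
            results.append(response.choices[0].message.content)
        return results

    def synthesize(analyses: list[str]) -> str:
        """Reconcile the persona analyses into one summary, flagging disagreements."""
        joined = "\n\n---\n\n".join(analyses)
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Synthesize these analyses, noting agreements and disagreements."},
                {"role": "user", "content": joined},
            ],
        )
        return response.choices[0].message.content

The synthesis over evenly chosen personas is what pulls the final estimate away from the single default perspective.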

I also use anti-refusal techniques to reduce the impact of alignment; however, I'm not instructing the model to take any extreme perspectives, so I don't think it's that big of a deal in the first place.



