ProfileGPT: An Example of AI Agents Collaboration Architecture (sahbichaieb.com)
106 points by sahbic on April 23, 2023 | 15 comments



> it is possible to extract various types of information…

According to what definition of “extract information” are these ChatGPT agents extracting information?

Even granting a sense in which the above claim is true (perhaps limited to cases where the yielded text is actually true of the user), you make repeated claims that the agents can “predict,” “analyze,” and “infer.” There is more to prediction, analysis, and inference than the representation in natural language of those things. Just because ChatGPT produces sentences which have those linguistic forms (and even if those sentences are true) does not mean we can rightly claim ChatGPT can actually perform those activities. This is a problem with so much of the commentary around ChatGPT: it can produce the natural language outputs of some cognitive processes, but that does not mean it performs those processes per se.

And that is fine! ChatGPT can plausibly be described as performing its own very special and powerful kind of generalized reasoning; I just think we should be careful attributing processes to it just because it can produce the corresponding linguistic forms.

Put another way, words like “inference” have a dual meaning: they refer to a type of utterance with a certain form and they refer to a specific process of reasoning. Production of the former does not prove the presence of the latter.

Along similar lines, why say your “Psychoanalyst” agent was “designed… [with] 20 years of experience in the field of profiling”? In what sense is that even remotely true? Better just to say “it was inspired by an expert with 20 years of experience, and we hope it performs with that level of competence.” But then why “20 years”? Would performance at the level of a 30-year professional be considered failure?


> But then why “20 years”? Would performance at the level of a 30-year professional be considered failure?

They're keeping the "30 years of experience" agent for the 2.0 version /s.

This is a good analysis, and I'm glad that there are some people like you giving these things a good deal of thought.


For the public interest, this project might be most useful for showing people unfamiliar with the space examples of the kinds of profiling information that others have been building and using.

But don't read too much into how well this does or doesn't work. Some will do more. Some will do it more accurately. Some will do it with the same accuracy, or less accuracy, which can also be very bad.

Also be aware that, for every person concerned about the public interest, there are countless others who are looking at this through the lens of personal/business advancement, which has been normalized in our techbro circles. So, when you're raising awareness to the public, you're also attracting more sharks to the public beach, and handing a `git clone` starting point to aspiring new sharks.


> There is increasing concern about the safety of AI. Some countries, like Italy, have taken steps to address these concerns by banning the use of ChatGPT, while others like France and Spain are raising questions about it

The ban was because ChatGPT isn't complying with the GDPR, nothing to do with AI safety, as there are no AI safety regulations, at least not yet.


Indeed, it's GDPR issues that led to the ban. I agree with you that the expression "safety of AI" may be too generic. The right to data portability (downloading your own data) was implemented only recently, and that's what made this project possible. There are other concerns, like the right to be forgotten, which implies the ability to delete someone's data. For LLMs this would mean filtering the input data and retraining models, which comes with high costs.
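
For reference, the export you get under that data-portability right is just a zip of JSON. A minimal sketch of pulling a user's messages out of it, assuming the conversations.json layout the export shipped with at the time (field names like "mapping" and "parts" are from one export version and may differ in yours):

    import json
    import zipfile

    # Read a ChatGPT data-portability export (a zip containing conversations.json).
    # The field names here match one export version and may change; treat this
    # as a sketch, not a stable schema.
    with zipfile.ZipFile("chatgpt-export.zip") as zf:
        conversations = json.loads(zf.read("conversations.json"))

    user_messages = []
    for conv in conversations:
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if msg and msg.get("author", {}).get("role") == "user":
                user_messages.extend(msg.get("content", {}).get("parts", []))

    print(f"{len(user_messages)} user messages available for profiling")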


The “little bunny trick” makes me think of Pyroland from Team Fortress 2: https://wiki.teamfortress.com/wiki/Pyroland (watch the “Meet the Pyro” video to get the full effect)


Tip: if a blog post has a "Conclusion" section at the end, it was very likely generated with GPT.


> known for using advanced mathematical models and deep understanding of human behavior to predict future behavior of individuals and large populations.

Isn’t psychohistory only effective on large populations, and unable to predict individual behavior?


Yes, this is key to the entire premise of the book series in all its iterations. Seldon's psychohistory is able to predict the movements of people only in aggregate, with the obvious allegory being the uncertainty principle. In the later novels, many of the interesting problems the Foundation runs into arise from the failure of Seldon's model to account for changes in underlying humanity (e.g. the psychic abilities of the Mule). Of course, we also learn that Seldon's predictions are being 'helped along' from behind the scenes by the Second Foundation, which brings to mind some of Jaron Lanier's critiques of A.I. as dependent on human-curated, reinforced, and updated datasets.


Yes, that's right, but you can see this as a fiction inside the fiction. This agent "thinks" it is capable of predicting the future of individuals and acts as if it can.


Perhaps we are fiction, within fiction, creating a new fiction?


The different kinds of agents are interesting. The personality test choice is impressively accurate.

Of course it will depend on the quality of the search data. But this gives me ideas for other applications.


I've been researching GPT's functionality -- probably 90% of what I've sent it has been hypotheticals. I shudder to think what it thinks of me :-)


I think the real power of AI isn’t (yet) in one general intelligence, but in the architecture and collaboration of individual intelligent agents acting autonomously. Very cool!
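
For what it's worth, the pattern is simple to sketch: each "agent" is just a role prompt over the same model, and a coordinator chains their outputs. A rough illustration, not the author's actual code — complete() is a placeholder for whatever LLM client you use, and the roles are made up:

    # Sketch of the agent-collaboration pattern: each "agent" is a role prompt
    # over the same model, and a coordinator chains their outputs.

    def complete(prompt: str) -> str:
        # Placeholder: swap in a real LLM call (e.g. a chat-completion request).
        return f"[model output for: {prompt[:40]}...]"

    class Agent:
        def __init__(self, role: str):
            self.role = role

        def run(self, task: str, context: str = "") -> str:
            prompt = f"You are {self.role}.\n\nContext:\n{context}\n\nTask: {task}"
            return complete(prompt)

    summarizer = Agent("an assistant that summarizes a user's chat history")
    analyst = Agent("a profiler that infers interests from a summary")

    def profile(history: str) -> str:
        summary = summarizer.run("Summarize the key topics.", history)
        return analyst.run("Describe this user's likely interests.", summary)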


The little bunny!



