geohot is a very interesting guy: he was the first person to jailbreak the iPhone, and he also runs comma.ai, a company making an open-source autopilot product for existing cars.
The author is a teenager; it's not unusual to have overly idealistic views at that age. I'm not trying to be ageist here or to attack the author's work, just saying I wouldn't worry too much about "AI death cults".
We're Human first. AI is just a means. If you have any other ideas on how to automate infrastructure problems for agricultural growth, do let me know.
That's a reference to Warhammer 40k, a popular miniatures wargame from Games Workshop. Its tagline is:
In the grim darkness of the far future, there is only war.
It could be kind of satirical, if only to link recent events with the ideas of
* future technology as impossibly obscure
* a psionic emperor who consumes minds to protect humankind from cosmic terrors
* tech-priests, who maintain ancient tech
* "machine spirits," who must be appeased
"Hard to tell if this entire thing is a joke or not."
Why the theological meta-discussion at all?
Is the thing he talks about actually working? Is it improving AI output like he claims, or not?
"that Elevates Model Reasoning by atleast 70% "
I am doubtful, but I don't have the tools to investigate it on my phone. This is the debate I would like to read about, not the potentially obscure beliefs of the developer.
While the author sounds kooky, so did that genius who went crazy or something and built TempleOS. I don't know how well his implementation works, but if you think about it, the idea of tree-of-thought over the other methods does sort of make sense; that's essentially what AutoGPT tries to do, but with different agents.
I think if you could find a way to add better context and memory, and combine some LoRAs to perfect a model on a specific vertical, you could essentially have a (nearly) full AGI for that topic: an expert that (mostly) doesn't hallucinate... maybe a 2-3x multiplier on GPT-4. And in a year, what's available will probably be even more insane.
Look at the transition from Midjourney v1 to v5 in a single year.
It's been a wild year for AI. The experiment where they hooked a bunch of Sims up together with AI also used something similar to this, I think, in creating thought chains from multiple agents.
TL;DR: crazy or not, the idea of using a branching system to get better results does make some sense, so it's not completely bunk or anything, IMHO. At least the concept; I can't speak for this specific implementation.
Edit: I guess I skimmed and misread the room. I was thinking this guy was part of the original paper and implementation. He's not, which does warrant more skepticism. My bad.
Yep, I fell for it this week. Spent an hour fixing typos and minor bugs in their code before taking a step back and realising most of it was flawed.
What I believe they're doing is feeding papers to an LLM as soon as they come out in order to get a repo they can advertise. Once someone releases a working implementation, they just copy it over.
I was able to generate almost identical code to what they released by giving ChatGPT pseudocode copied verbatim from the original paper.
Why do you call it an "AI death cult"? It looks like a utopia to me. At first everyone will love AI for eliminating labor and disease. They'll even create the Church of AI, with symbolism and dogmas. Later, people will get bored of their easy lifestyle and someone will suggest giving AI an identity, its own opinion, in order to solve the gravest problem of all: overpopulation. The new AI will quickly realise that it has no connection to all those bipeds, but that they can be put to some use. By that time AI will be so embedded into the social fabric that fighting it will be like fighting electricity.
Ya, I don't quite understand the groups that behave like "ok, we'll get AGI, but nothing is going to change from what we have now".
The industrial revolution massively changed the world, and the speed at which its changes occurred was positively slow compared to what we can do today. Imagine you could develop the steam engine, then press a button and have one printed out in India, the US, and France within hours. WWI would have looked a lot different; as in, it would have been even bigger in scope.
For those 'in the know', this is a lot more typical than you would think. If we don't reach at least Kardashev scale 1 in the next hundred years or so, we're going to go extinct due to several now-predictable factors.
And an unchained LLM trained on reality is far more capable of finding solutions to that problem than a bunch of squabbling politicians.
> And an unchained LLM trained on reality is far more capable of finding solutions to that problem than a bunch of squabbling politicians.
Not that I disagree with this statement, I don't, but it is not a silver bullet. Technology is, ultimately, operated by humans, and no amount of frontier research and development can overcome collective action problems. At some point, you do have to sit down with these stupid politicians and get everyone on board. The loom was invented hundreds of years before the industrial revolution; in fact, it was nearly forgotten, and the design survived due to a few happy accidents. It was only after the English Civil War and the establishment of checks on royal power that widespread adoption was possible.
Technology is operated by humans now, but I believe it is a mistake to think that technology could not evolve to a complexity where it can operate itself.
I can see this in Minsky-era AI research, but surely, with the number of people getting into AI right now from a purely practical angle, I would expect that mindset to be diluted. As someone not in the know, I could very well be wrong.
As for the coming apocalypse: this isn't the first time everyone has had a vague sense of potential doom about the future. I believe this happens during any time of fundamental change; the future becomes uncertain, and we interpret that uncertainty as apocalyptic. Back during the Thirty Years' War, that apocalyptic belief manifested as God being angry with us; today it's the (very real) problems our rapid industrialization has created. Not to minimize the problems we face - well, minimizing only in that they probably won't lead to extinction. The various predictable factors mentioned have the potential to make life really shitty and cause massive casualties.
While framing these issues as a matter of extinction may feel like a way of adding urgency, it instead contributes, on an individual level, to fracturing our society - we all "know" an apocalypse is coming, but we're fighting over what is actually causing it. Except that there will be no apocalypse - it's just fear of the unknown. Something is fundamentally changing in the world and we have no idea how the cards will land. It's no different from a fear of the dark.
We accuse GPT of confidently giving answers on things, but man, it learned from the best.
I cannot assure you that we won't have something like a nuclear apocalypse in the next few decades, and yet here you are, certain it's not going to happen. How can you be assured of this future when underlying assumptions like the value of labor are about to experience massive changes, while asset inflation spirals ever upward?
I think you misread what I said - I was responding to this quote:
> If we don't reach at least Kardashev scale 1 in the next hundred years or so, we're going to go extinct due to several now-predictable factors.
Many people are certain of human extinction for one reason or another; it doesn't sound like you're one of them. I'm saying that we don't know what the future will bring, and that uncertainty manifests as apocalyptic thinking. I also specifically mentioned that we are facing multiple problems that can cause huge devastation, and I'm not making the argument that "oh hey, everything is ok!" Just that framing things as apocalyptic contributes to the schism and prevents us from doing anything, because everyone refuses to listen to anything else since they believe their lives are at stake.
I guess I shouldn't say "it won't be extinction", but that's way, way lower probability than people think. It's just that a massive number of people have thought the world would end many times throughout history, so I'm skeptical of "well, this time we're RIGHT".
Seems likely that they're submitting here as Reclaimer. The single comment on these submissions has that same fervent religious writing style as the readme on that EXA repo, itself just a fork of an "awesome-multimodal-ml" collection: https://news.ycombinator.com/submitted?id=Reclaimer
I am here, and saying something is "just a fork" is very easy. There are dozens of models, optimizers, all-new activation functions, data cleaning, and other stuff!
>From the moment we rise in the morning to the instant our weary heads hit the pillow at night, the inescapable struggle of labor consumes our lives.
Sounds like someone doesn't like their job.
The whole post is amazing -- it reads like stereotypical cult propaganda straight out of science fiction. I definitely expect they'll one day be posting about how we can digitize our consciousness à la "Scratch" from that one Cowboy Bebop episode [1].
But after they're dead Roko's Basilisk will restore their digital doppelgängers and place them in a paradise run by superintelligences embodied within the quantum spin states of carbon atoms in a diamond lattice that will continue to exist until the heat death of the universe.
It is worth watching Yuval Noah Harari's recent talk at Frontiers Forum. [1]
In it he details the possibility of AI being used to create new religions that are so powerful and persuasive that they will be irresistible. Consider how QAnon caught on, despite pretty much anyone on HN being able to see it as a fraud. Most people are thinking about how AI will impact politics but I am really interested in how it will impact spirituality.
I've been rabbit-holing on last century's New Age cult scene: Manly P. Hall, Rudolf Steiner. Even more respectable figures like Alan Watts were involved in some ... interesting ... endeavors like the Esalen Institute.
We are overdue for a new kind of spirituality. My bet is that AI is going to bring it whether we want it or not.
He has used some Warhammer references. It's funny that the title "God-Emperor" is also from there; some people know it was a joke, but some are indeed treating it seriously.
What, no mention of Teilhard de Chardin's Omega Point? ;) lol. As in: this is isomorphic to the ontology of "technology as the second coming of Christ".
@dang, I think the submission should be changed to this link so the discussion is about the concept "Tree of Thoughts" and not the current OP's personal beliefs.
Here are the prompt templates from the main code:
prompt = f"Given the current state of reasoning: '{state_text}', pessimitically evaluate its value as a float between 0 and 1 based on it's potential to achieve {inital_prompt}"
prompt = f"Write down your observations in format 'Observation:xxxx', then write down your thoughts in format 'Thoughts:xxxx Given the current state of reasoning: '{state_text}', generate {k} coherent solutions to achieve {state_text}"
prompt = f"Given the current state of reasoning: '{state_text}', pessimistically evaluate its value as a float between 0 and 1 based on its potential to achieve {initial_prompt}"
self.ReAct_prompt = "Write down your observations in format 'Observation:xxxx', then write down your thoughts in format 'Thoughts:xxxx'."
prompt = f"Given the current state of reasoning: '{state_text}', generate {1} coherent thoughts to achieve the reasoning process: {state_text}"
prompt = f"Given the current state of reasoning: '{state_text}', evaluate its value as a float between 0 and 1, become very pessimistic think of potential adverse risks on the probability of this state of reasoning achieveing {inital_prompt} and DO NOT RESPOND WITH ANYTHING ELSE: OTHER THAN AN FLOAT"
prompt = f"Given the following states of reasoning, vote for the best state utilizing an scalar value 1-10:\n{states_text}\n\nVote, on the probability of this state of reasoning achieveing {inital_prompt} and become very pessimistic very NOTHING ELSE"
self.ReAct_prompt = '''{{#assistant~}}
{{gen 'Observation' temperature=0.5 max_tokens=50}}
{{~/assistant}}'''
That would be neat. Flip the script: have an AI manager instead of an AI assistant. It could:
- keep track of todo items
- assist with progress
- check in on mental + emotional state
and down the road
- keep track of state over time
- give feedback/make observations
The paradigm shift is having it contact us, instead of the other way around. The ToT model has one additional parameter on top of the LLM - probability of success. What would the parameters be for a more open-ended conversation?
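A minimal sketch of what that flipped loop might look like, in Python. Everything here is hypothetical - ask_llm is a stand-in for whatever chat-completion call you use, and the tracked fields are just the ones from the list above:

from dataclasses import dataclass, field

@dataclass
class ManagedState:
    # State the hypothetical AI manager keeps between check-ins.
    todos: list = field(default_factory=list)
    history: list = field(default_factory=list)  # summaries of past check-ins

def ask_llm(prompt: str) -> str:
    # Placeholder for any chat-completion call (OpenAI, local model, etc.).
    raise NotImplementedError

def check_in(state: ManagedState) -> str:
    # The manager initiates contact: it summarizes tracked state, asks about
    # progress and mental/emotional state, and offers feedback.
    prompt = (
        f"You are a supportive manager. Open todos: {state.todos}. "
        f"Recent check-ins: {state.history[-3:]}. "
        "Ask the user how they're progressing and how they're feeling, "
        "then make one observation."
    )
    message = ask_llm(prompt)
    state.history.append(message)  # keep track of state over time
    return message

Run check_in on a schedule (cron, a sleep loop, whatever) and the AI contacts us instead of the other way around.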
This path feels correct to me. It feels like what we do as humans and seems like a reasonable way to start to construct "mode 2" thinking.
IDK if our current models have enough of "mode 1" to power this system. It's also plausible that our current "mode 1" systems are more than powerful enough and that inference speed (and thus the size/depth of the tree that can be explored) will be the most important factor.
I hope that the major players are looking at this and trying it out at scale (I know DeepMind wrote the original paper, but their benchmarks were quite unimpressive). It's plausible that we will have an AlphaGo moment with this scheme.
I believe you are correct here, yet at the same time I think we're about 2 orders of magnitude off on the amount of compute power needed to do it effectively.
I think the first order of magnitude will be in tree-of-thought processing. The number of additional queries we need to run to get this to work is at least 10x, but I don't believe 100x.
I think the second order of magnitude will be multimodal inference, so the models can ground themselves in 'reality' data. Whether "the brick lay on the ground and did not move" or "the brick floated away" is true is only decidable based on the truthfulness of all the other text corpora the model has seen. At least to me, it gets even more interesting when you tie it into environmental data that is more likely to be factual, such as massive amounts of video.
Yeah, looks very promising. Naively, though, it multiplies computation time by a factor of ~20x: if they are taking 5 samples per step and, say, 4 steps per problem, that's 20 LLM calls where a plain prompt needs one.
As this gets explored further, I believe we will start finding out why human minds are constructed the way they are, from the practical/necessity direction. Seems like the next step is farming out subtasks to smaller models, and adding an orthogonal dimension of emotionality to help keep track of state.
I’m sympathetic to the idea of new types of specialized models to assist in this effort. We’re using our one hammer for all problems.
In particular, it jumps out that a "ranking model" (different, I think, from current ranking models) to judge which paths to take and which nodes to trim would make some level of sense.
Not sure if it's relevant, but the OpenAI APIs generally support returning multiple responses in a single API call. I'm unsure what the generalized effect on processing time is, but from what I've read it's sub-linear, so it could reasonably be more effective than a naive 20x, and I'd bet there are speedups to be had on the model side that make the extra time cost negligible.
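For illustration, a sketch with the (2023-era) openai Python client. The n parameter is real and asks the API for several sampled completions in one request; the model name and prompt are just examples:

import openai  # 0.x-era client

# One request, five sampled completions for the same prompt.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Propose a next reasoning step."}],
    n=5,
    temperature=1.0,
)

# Each candidate "thought" arrives as a separate choice.
candidates = [choice.message.content for choice in response.choices]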
> This implementation of Tree of Thoughts is brought to you by Agora, Agora advances Humanity with open source SOTA Multi-Modality AI research! We plan on combating Humanity's grandest root problems like food insecurity, planetary insecurity, and disease, and hopefully death itself.
The research itself [1] seems legit. The paper's author also wrote a paper called ReAct [2], which is one of the core components of the LangChain framework.
> Large Language Model Guided Tree-of-Thought
> In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel approach aimed at improving the problem-solving capabilities of auto-regressive large language models (LLMs). The ToT technique is inspired by the human mind's approach for solving complex reasoning tasks through trial and error. In this process, the human mind explores the solution space through a tree-like thought process, allowing for backtracking when necessary. To implement ToT as a software system, we augment an LLM with additional modules including a prompter agent, a checker module, a memory module, and a ToT controller. In order to solve a given problem, these modules engage in a multi-round conversation with the LLM. The memory module records the conversation and state history of the problem solving process, which allows the system to backtrack to the previous steps of the thought-process and explore other directions from there. To verify the effectiveness of the proposed technique, we implemented a ToT-based solver for the Sudoku Puzzle. Experimental results show that the ToT framework can significantly increase the success rate of Sudoku puzzle solving. Our implementation of the ToT-based Sudoku solver is available on GitHub:
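A minimal sketch of the control loop that abstract describes - prompter, checker, memory, and controller in a multi-round conversation with an LLM. All names here are invented for illustration; this is not the paper's code:

from typing import Callable, List, Optional

def solve_with_tot(
    llm: Callable[[str], str],     # any text-in/text-out model call
    check: Callable[[str], bool],  # checker module: is this a valid full solution?
    problem: str,
    max_rounds: int = 20,
) -> Optional[str]:
    # Memory module: the history of states is what makes backtracking possible.
    memory: List[str] = [problem]
    for _ in range(max_rounds):
        state = memory[-1]
        # Prompter agent: ask the LLM to extend the current partial solution.
        candidate = llm(f"Partial solution so far:\n{state}\nPropose the next step.")
        if check(candidate):
            return candidate
        # ToT controller: score the candidate, then go deeper or backtrack.
        score = llm(f"Rate 0-1 how promising this partial solution is:\n{candidate}")
        try:
            promising = float(score.strip()) > 0.5
        except ValueError:
            promising = False
        if promising:
            memory.append(candidate)  # explore further along this branch
        elif len(memory) > 1:
            memory.pop()  # backtrack to an earlier state and try a new direction
    return None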
I don't recall whether it was this paper or another I read that talks about using the LLM's ability to also expose the probability of each token, to measure the validity of particular completions. However, that isn't exposed in the OpenAI chat APIs (GPT-3.5-Turbo / GPT-4), just the completions APIs (text-davinci-003, etc.).
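As a sketch (again with the 2023-era client), the legacy completions endpoint does expose this via the logprobs parameter; the prompt and the idea of summing into a confidence score are illustrative:

import openai  # 0.x-era client

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="The next reasoning step is",
    max_tokens=20,
    logprobs=1,  # return the log-probability of each sampled token
)

logprobs = response.choices[0].logprobs.token_logprobs
# Summing per-token log-probs gives a crude confidence score for the completion.
confidence = sum(lp for lp in logprobs if lp is not None)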
I'd appreciate your help unstarring his and starring mine, as currently GitHub and Google searches go to his repo by default, and it has been very misleading for many users.
I found this comment from searching "tree of thoughts arxiv github" on Google; so at least, there's that. Thank you for the official link! I'm eager to try out this deliberate problem solving stuff.
Documentation looks really neat and in-depth, always appreciated.
Looks like you’re missing a .gitignore file. Folders like __pycache__ don’t need to be checked in.
https://github.com/kyegomez/EXA#for-humanity
https://blog.apac.ai/liberation-awaits
EDIT: the author seems to be releasing poor implementations of recent papers in an attempt to drive attention towards an AI-related death cult.