
I just tried. Claude did exactly what you said, and then figured it out:

Central Park in New York City is bigger than GoldenGate Park (which I think you might mean Golden Gate Park) in San Francisco.

Central Park covers approximately 843 acres (3.41 square kilometers), while Golden Gate Park spans about 1,017 acres (4.12 square kilometers). This means Golden Gate Park is actually about 20% larger than Central Park.

Both parks are iconic urban green spaces in major U.S. cities, but Golden Gate Park has the edge in terms of total area.


If you are a new company starting out (say, aluminum siding), you have to market your product or else nobody will know about it. I'm not sure which parts of marketing are counted as advertising here, but generally in marketing you pay to get the word out about your product. Without that, starting a business might be tough. And then, as a consumer, there are lots of free products I have gotten my entire life by virtue of the fact that some company offered to pay for them for me. If you turn that off I'm not sure how that all would work. I don't think the OP goes into too much detail.


Trump said he wanted peace, but I think he really wanted an end to the sanctions, probably to help with his goal of lowering the price of oil. I don’t think he cares at all about how things turn out in Ukraine.


Trump met with the Russians in Saudi Arabia for 'peace talks'. I doubt lowering oil prices was the concern there.


Trump had promised to end the war in a short time and is a little late on the deadline. That explains him being irritated at that meeting.


I am of the view that the chance of life in a single universe is vanishingly small. Fortunately, there can be many, many universes, or many, many effective universes. I googled the number of stars in the universe and it said 10^23. I am admittedly not sure exactly what all this entails, but that is a pretty small number. How many ways are there to arrange a deck of cards? 10^68. That means you would have to put 10^45 decks of cards in each star system just to get a good chance of finding another deck with the same order of cards as one you shuffle yourself. And life is a lot more complex than a deck of cards. The number of stars grows linearly with the volume of space. Probability shrinks much faster. I don't know what the actual probability of life evolving is, but I wouldn't expect it to be very easy. And I don't think there is any reason to think the universe we see is the only "try" there has been to create life.
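
To put rough numbers on that, the back-of-the-envelope arithmetic is just this (same figures as above, nothing more):

  # Back-of-the-envelope only; 10^23 is the star count quoted above.
  import math

  orderings = math.factorial(52)       # ways to order a deck of cards, ~8e67
  stars = 10**23                       # rough star count for the observable universe

  decks_per_star = orderings / stars   # decks per star system for a likely repeat
  print(f"{orderings:.2e} orderings, {decks_per_star:.2e} decks per star system")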


Comparing a physical count to a permutation count doesn’t really compute for me. The only correlation these two concepts have is the fact that they are numbers. One scalar’s magnitude has no bearing on the significance of another’s in an entirely different domain.


I suspect when Susskind was talking about ideas away from the consensus _usually_ being suspect, he was talking about how the general public or non-practitioners should think about them, especially since they are not usually in a position to judge those ideas. As a practitioner you certainly can’t ignore non-mainstream ideas, or else no new ideas would ever become mainstream.


Gravity is similar to an electric field here. A wave function for a field consists of an amplitude for each field configuration, where a “field configuration” refers to a value for the electric field at each point in space. In GR each field configuration would correspond to a space-time geometry for the universe. We have quantized excitations as distinct “valid” solutions to the wave function, which we call particles, though they are nothing like an electron. The notion of space-time geometry holds throughout. (Edit: in practice, people never calculate wave functions for fields like electric fields. That would be too hard. Different methods are used in calculations. Second edit: the wave function wouldn’t be composed of complete space-time configurations, histories of the universe, but time slices from it, like space geometries. Maybe this can be expanded in responses/comments.)
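
Schematically, the object I have in mind is a wave functional over configurations, something like this (just notation, not a calculation; h_ij here stands for the spatial metric on a time slice):

  \Psi\big[\mathbf{E}(\mathbf{x})\big] \quad \text{(an amplitude for each electric-field configuration)}
  \Psi\big[h_{ij}(\mathbf{x})\big] \quad \text{(an amplitude for each spatial geometry, the GR analogue)}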


I saw an LLM having this kind of problem when I was doing some testing a ways back. I asked it to order three fruits from largest to smallest. I think it was orange, blueberry and grapefruit. It could do that easily with a simple prompt. When the prompting included something to the effect of “think step by step”, it would try to talk through the problem and it would usually get it wrong.


How much does this align with how we learn math? We kind of instinctively learn the answers to simple math questions. We can even at some point develop an intuition for things like integration and differentiation. But the moment we are asked to explain why, or worse, provide a proof, things become a lot harder. Even though the initial answer may be correct.


I definitely don’t learn math by means of gradient descent.

We can possibly say math is not learned, but rather mental models of abstractions are developed. How? We dunno, but what we do know is we don’t learn by figuring out the common features between all previously seen equations only to guess them later…

The mind operates on higher and higher levels of abstraction building on each other in a fascinating way, very often not with words, but with structure and images.

Of course there are people with aphantasia, but I really fail to see how any reasoning happens at a purely language level. Someone on this forum also noted that in order to reason one needs an ontology to facilitate the reasoning process. LLMs don’t do ontologies…

And finally, not least, LLM and ML people in general seem to equate intuition to some sort of biased random(). Well, intuition is not random, and it is hard to describe in words. So are awe and inspiration. And these ARE part of (a precondition to, fuel for) humanity’s thought process more than we like to admit.


> I definitely don’t learn math by means of gradient descent.

https://physoc.onlinelibrary.wiley.com/doi/10.1113/JP282747


The fact that it (is suggested / we are led to believe / was recently implied) the neurons can be explained as doing something like this at the underlying layer still says little about the process of forming the ontological context needed for any kind of syllogism.


Humans learn skills like basic mathematics by reasoning about their environment and building internal models of problems they’re trying to solve. LLMs do not reason and they cannot model their environment.


It's not thinking, it compressed the internet into a clever, lossy format with a nice interface, and it retrieves stuff from there.

Chain of thought is like trying to improve JPG quality by re-compressing it several times. If it's not there it's not there.


  >It's not thinking

  >it compressed the internet into a clever, lossy format with a nice interface, and it retrieves stuff from there.

Humans do both, why can't LLMs?

  >Chain of thought is like trying to improve JPG quality by re-compressing it several times. If it's not there it's not there.

More like pulling out a deep-fried meme, looking for context, then searching Google Images until you find the most "original" JPG representation with the least amount of artifacts.

There is more data to add confidently; it just has to re-think about it with a renewed perspective and an abstracted-away, higher-level context/attention mechanism.


> Chain of thought is like trying to improve JPG quality by re-compressing it several times. If it's not there it's not there.

Empirically speaking, I have a set of evals with an objective pass/fail result and a prompt. I'm doing codegen, so I'm using syntax linting, tests passing, etc. to determine success. With chain-of-thought included in the prompting, the evals pass at a significantly higher rate. A lot of research has been done demonstrating the same in various domains.
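
The shape of the harness is roughly this (a sketch only; call_model and run_lint_and_tests are hypothetical stand-ins for whatever model client and checkers you actually use):

  # Same tasks, same objective checks; only the prompting differs.
  # call_model() and run_lint_and_tests() are placeholders, not a real API.

  COT_SUFFIX = "\n\nThink step by step before writing the code."

  def pass_rate(tasks, use_cot):
      passed = 0
      for task in tasks:
          prompt = task["prompt"] + (COT_SUFFIX if use_cot else "")
          code = call_model(prompt)                    # hypothetical model call
          if run_lint_and_tests(code, task["tests"]):  # objective pass/fail
              passed += 1
      return passed / len(tasks)

  # Compare the two conditions on the same eval set:
  # pass_rate(tasks, use_cot=False) vs pass_rate(tasks, use_cot=True)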

If chain-of-thought can't improve quality, how do you explain the empirical results which appear to contradict you?


The empirical results like OP’s paper, in which chain of thought reduces quality?


The paper is interesting because CoT has been so widely demonstrated as effective. The point is that it "can" hurt performance on a subset of tasks, not that CoT doesn't work at all.

It's literally in the second line of the abstract: "While CoT has been shown to improve performance across many tasks..."


I have no idea how accurate it actually is, but I've had the process used by LLMs described as follows: "Think of it like a form of UV mapping, applied to language constructs rather than 3D points in space, and the limitations and approximations you experience are similar to those that emerge when having to project a 2D image over a 3D surface."


These kinds of reductive, thought-terminating cliches are not helpful. You are using a tautology (it doesn't think because it is retrieving data, and retrieving data is not thinking) without addressing the why (why does this preclude thinking) or the how (is it doing anything else to generate results).


> If it's not there it's not there.

There is nothing in the LLM that would have the capability to create new information by reasoning, when the existing information does not already include what we need.

There is logic and useful thought in the comment, but you choose not to see it because you disagree with the conclusion. That is not useful.


I'm sorry but generating logic from tautologies is not useful. And the conclusion is irrelevant to me. The method is flawed.


Maybe if you bury your head in the sand AI will go away. Good luck!


This is basically a reformulation of "have fun staying poor!". Even contains the exclamation mark.

Those people sure showed us, didn't they? Ah, but "it's different this time!".


It would be interesting to think about how it got it wrong. My hunch is that in the "think step by step" section it made an early and incorrect conclusion (maybe even a subtly inferred conclusion) and LLMs are terrible at walking back mistakes so it made an internally consistent conclusion that was incorrect.

A lot of CoT to me is just slowing the LLM down and keeping it from making that premature conclusion... but it can backfire when it then accidentally makes a conclusion early on, often in a worse context than it would use without the CoT.


Maybe it needs even smaller steps, and a programmatic (i.e. multi-prompt) habit to always double-check / validate the assumptions and outcomes.
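
Something along these lines, say (a rough sketch; ask_model is a hypothetical stand-in for a single model call):

  # Smaller steps with a programmatic double-check: solve, verify, retry once.
  # ask_model() is a placeholder, not a real API.

  def solve_with_check(question):
      answer = ask_model(f"Answer concisely: {question}")
      verdict = ask_model(
          f"Question: {question}\nProposed answer: {answer}\n"
          "Check each assumption made and reply VALID or INVALID with a reason."
      )
      if "INVALID" in verdict:
          answer = ask_model(
              f"{question}\nA reviewer flagged the previous answer:\n{verdict}\n"
              "Give a corrected answer."
          )
      return answer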


I always found it interesting how sorting problems can get different results when you add additional qualifiers like colors or smells or locations, etc.

Naively, I understand these to influence the probability space enough to weaken the emergent patterns we frequently overestimate.


The model has likely already seen the exact phrase from its last iteration. Adding variation generalizes the inference away from over-trained quotes.

Every model has the model before it, and its academic papers, in its training data.

Changing the qualifiers pulls the inference far away from quoting over-trained data, and back to generalization.

I am sure it has picked up on this mesa-optimization along the way, especially if I can summarize it.

Wonder why it hasn't become more generally intelligent yet.


From Claude:

I'll rank those three fruits from largest to smallest:

1. Grapefruit
2. Orange
3. Blueberry

The grapefruit is definitely the largest of these three fruits - they're typically around 4-6 inches in diameter. Oranges are usually 2-3 inches in diameter, and blueberries are the smallest at roughly 0.5 inches in diameter.


ChatGPT, from smallest to largest: Blueberry, Orange, Grapefruit


It looks like they measured the time of the process where the entanglement occurred, not the time of the entanglement itself. I guess they can use this to put an upper bound on the time for entanglement, which is a valid thing to do for an instantaneous process.


I have another definition, or at least this is how I think of it. I’m not sure many people would buy into it. In the Standard Model, the fermions are particles, like the electrons, quarks, and neutrinos. The electroweak force, the strong force, and gravity are fields. This means the photon is not a particle, but just a field excitation. I know people can think of fermions as fields, I just think of them as particles.


Aren't you describing quantum field theory (QFT)?

Anyway, what exactly is a field besides a mathematical object? What is it made of?


I did study quantum field theory and I have a hard time viewing a fermion as a continuous field, whereas a gauge field I do view as a continuous field. I view a fermion as a true point particle, kind of like it is in a lattice. The fermion still has a wave function of course. It is very different from the wave function of a gauge field. The wave function of an electric field is a wave function over field configurations. The fermion wave function is a wave function of fermion spins. I don't think this is an unreasonable view, but I am not trying to force it on anyone else.


I'm still new to learning about these things, but is the viewpoint that a particle is a field excitation sort of like starting with a lattice in the ground state, with a field defined on the points of the lattice, then some excitation happens which causes the field to enter a particular "mode"? Is this mode the particle?


Check out energywavetheory.com. It's essentially the Aether, but it really makes you think.


I flipped through some of the content. It's very well presented, but unfortunately it is pseudo-science gibberish.


I agree with you. I think the place to start would be with the ingredient list. I think they would say to avoid things that don't sound like food. I think chemicals are the real danger. It is very hard to do a conclusive study on how good or bad specific ingredients are, unfortunately, so it is hard to say a given chemical is OK. Second, but not as bad, I think, are ingredients that are refined from foods. There are many levels of refinement so this is tougher. White flour? Sugar? Then there are things that take an industrial process to extract from food, which I just put in the chemical category. Maybe they could color-code the ingredient list on food packages?

