DC might not be the best comparison here as far as American cities go. I - and most people I know - walk around the city year round and I live on the top of a pretty steep hill.
In the summer most people do not want to show up to work reeking like the Anacostia. I get it. In the evenings you walk from your apartment to Madam's Organ to pay $20 for a beer.
Every trip to a grocery store, restaurant, bar, friend's house, transit station, etc. that can be done by walking or cycling is one car trip off the roads. That has benefits.
Of course some places are not suited to this. But there are places that could be, and those places combined have a lot of people living in them.
Dismissing the idea in all of America as an absolute is missing a lot of potential, and a lot of what is already happening.
And from my experience looking at real estate prices, houses in areas with good scores for walkability, cycling, and transit are very much in demand and priced higher than those without. There is at least some segment of the market that very much wants these qualities.
I haven't found this to be true at all. In fact, I'd say the majority of studies I read - even from prestigious journals - are fraught with bad statistics. I have no idea how some of these studies were even allowed to be published. Some fields are worse than others, but it's still a huge problem pretty much across the board.
People conduct science, and a lot of those people don't understand statistics that well. This quote from R.A. Fisher, nearly 100 years ago, still rings true in my experience:
"To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of."
I can attest that the frequentist view is still very much the mainstream here too and fills almost every college curriculum across the United States. You may get one or two Bayesian classes if you're a stats major, but generally it's hypothesis testing, point estimates, etc.
Regardless, the idea that frequentist stats requires a stronger background in mathematics is just flat-out silly; I'm not even sure what you mean by that.
I also thought it was silly, but maybe they mean that frequentist methods still have analytical solutions in some settings where Bayesian methods must resort to Monte Carlo methods?
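A toy sketch of that contrast in Python (using numpy/scipy; the data here are simulated, so this is illustrative only): estimating a normal mean gives the frequentist a closed-form interval, while the Bayesian answer, outside of conjugate special cases, is typically approximated by sampling.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=50)  # simulated measurements

# Frequentist: the 95% confidence interval for the mean has a closed form.
mean = data.mean()
sem = stats.sem(data)
lo, hi = stats.t.interval(0.95, len(data) - 1, loc=mean, scale=sem)
print(f"analytical 95% CI: ({lo:.2f}, {hi:.2f})")

# Bayesian: this particular case (flat prior) also has an analytical answer,
# but in general you fall back on sampling. A crude Monte Carlo approximation
# of a 95% credible interval:
draws = rng.normal(loc=mean, scale=sem, size=100_000)
print("MC 95% credible interval:", np.percentile(draws, [2.5, 97.5]))
```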
Keynote is awesome. The last time I checked, a few years ago, Numbers was nowhere even close to Excel, though. No dynamic array formulas, Power Query, lambda functions, VBA, etc. All are pretty essential if you're doing anything beyond basic spreadsheets, but I may need to check out Numbers again.
I should have qualified "better". I find Numbers easier to use for basic spreadsheet tasks. Advanced, programming-like tasks are better in Excel, which has many more advanced features. I don't think Numbers is Turing complete, but then again I tend to use Python rather than Excel for advanced math processing.
Word has some features that Pages doesn't have, but they're not commonly used, and if you're doing any kind of page layout, Pages is __much__ easier to work with than MS Word.
I believe you, and LLMs are no doubt useful, but "under the hood" it's still just predicting what the next token should be based on the provided context. I take it he's saying that no, there isn't really a ghost in the machine; it's still just linear algebra/calculus and is no reflection of actual organic reasoning.
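To make "predicting the next token" concrete, here's a toy decoding step in Python. The vocabulary and logits are made up; in a real model the logits come from billions of parameters, but the final step looks like this:

```python
import numpy as np

vocab = ["the", "cat", "sat", "mat", "."]
logits = np.array([0.1, 2.3, 0.4, 1.7, 0.2])  # hypothetical model output

probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
next_token = vocab[int(np.argmax(probs))]      # greedy: pick the most likely
print(next_token)  # -> "cat"
```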
I think the difference of opinion here is between science and technology. Too many people in my opinion take the latter to be a synonym for the former.
It doesn't matter what is under the hood. A statement can be useful - introduce new views, make valuable points, reduce risks, help resolve conflicts, etc - regardless of whether there is a ghost behind the text. It just needs to be logically sound, consistent with facts about the world, and objectively useful. Then it can make a real world contribution.
The cause is that you don't know how to evaluate when a statement is useful on its own merits. That means that you have to fall back on judging statements based on the identity of the speaker. In the case of AI, your prejudice against math and formulas as effective forms of reasoning means you can't critically analyze - or gain benefit from - statements the AI makes.
It's very similar to the internal blockage of a person who immediately dismisses anything a woman, racial minority, mentally ill, or queer person says. The only way to repair it is to spend time talking to the AI, reading about it, and learning how to debate ideas.
That's the last thing someone with a prejudice wants to do. Curious investigation undermines the safety and certainty of bigoted beliefs. But it's essential if you want to have effective opinions about AI, and useful interactions with AI.
Except that's not what I'm saying, and it does in fact matter what's under the hood if you're looking for a scientific, causal explanation of organic intelligence. I know that AIs are useful, and that they can be logically sound and make real world contributions. That's not what the article is arguing against. Human reasoning, by the way, is much more complicated than any of these things.
The article states that AI will never reach human intelligence, which LeCun defines as "reasoning, planning, persistent memory, and understanding the physical world."
I would argue that's still an extremely narrow definition of human intelligence. Even ignoring semantics, current AIs cannot do any of those things, and by my lights never will, for the same reasons LeCun gives.
It seems that you express two critical needs which I don't share:
1. You need human analogous AI intelligence to provide a causal explanation for human intelligence.
But it doesn't have to provide this to be human analogous. It just has to perform functions a human can.
2. You need AI intelligence to never have memory, planning, persistence, and physical understanding.
But it demonstrably has all of these to varying degrees already. We just need simple bolt-on modules like RAG (persistence, understanding) and action/critique loops with tool use (reasoning, planning, understanding) - see the sketch below. And there are clear paths for increasing the functionality in each of these dimensions.
Functionally, AI is evolving, and there are no clear blockers against this process.
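To make the action/critique loop concrete, here's a minimal sketch in Python. It's purely illustrative: `ask_llm` is a hypothetical stand-in for whatever model API you'd actually call, stubbed here so the control flow runs.

```python
# Purely illustrative: `ask_llm` is a hypothetical stand-in for a real
# model API call; here it's stubbed so the loop can execute end to end.
def ask_llm(prompt: str) -> str:
    if "List any errors" in prompt:
        return "No errors found."            # stubbed critique response
    return f"[model output for: {prompt[:40]}...]"

def critique_loop(task: str, max_rounds: int = 3) -> str:
    """Draft an answer, then repeatedly critique and revise it."""
    draft = ask_llm(f"Complete this task: {task}")
    for _ in range(max_rounds):
        critique = ask_llm(
            f"Task: {task}\nDraft: {draft}\nList any errors or omissions."
        )
        if "no errors" in critique.lower():
            break  # crude stopping rule; fine for a sketch
        draft = ask_llm(
            f"Task: {task}\nDraft: {draft}\nCritique: {critique}\nRevise."
        )
    return draft

print(critique_loop("summarize the argument above"))
```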
It seems that at some point you have to say that functionalism is not enough. There must be a soul that AI will still be missing, even if functional equivalence is there.
If the AI achieves functional abilities similar to humans - which let's grant seems possible for every function we can identify - then you will have to retreat to claiming there is some "je ne sais quoi" which is not captured.
In other words, you will have to argue that the human soul is real.
Is that a length you're ready to go to? Is your position that science can't explain the human soul, even if it can simulate all human functions?
Or are there, in your view, functional limits that, if we reach them, you will admit "this is enough. I was wrong"?
That's my first question to you.
I would also like to point out that LeCun thinks AI can eventually be human analogous. Specifically, LeCun argues that his own JEPA model can achieve these things, because it has a constantly learning world model, planning/critique model, memory model, and actor model. He criticizes transformer-based LLMs mainly because simple transformers can't learn in an ongoing way.
Are you comfortable admitting that LeCun is trying to promote his own work, and believes it can reach human intelligence levels? If not, what specifically makes you feel LeCun is on your side here?
> It just needs to be logically sound, consistent with facts about the world, and objectively useful
So not an LLM then.
> In the case of AI, your prejudice against math and formulas as effective forms of reasoning means you can't critically analyze - or gain benefit from - statements the AI makes.
You ought to have a more skeptical view of mathematical models that may or may not be effective models of the world.
> It's very similar to the internal blockage of a person who immediately dismisses anything a woman, racial minority, mentally ill, or queer person says. The only way to repair it is to spend time talking to the AI, reading about it, and learning how to debate ideas.
Impossible to take this seriously. Borderline parody. If you were at all curious, you would perhaps be questioning the intention of the corporations building this software. Instead you make absurd comparisons with racism.
Your first point is that you don't think LLMs can be accurate. That's because you are not using modern LLMs, which are much more accurate and can be made more accurate still with a large number of techniques, from RAG to tool use to self-critique and experiment loops.
Your second point is that I'm overly trusting of math models. In fact, I'm an applied mathematician, so I know when models fail. I also know when models are reliable - which you don't, so all mathematical reasoning is suspect to you.
Your next point is that drawing analogies with other forms of prejudice is ridiculous here. But every single thing you said was analogous to a thing a bigot would say, down to dismissing the possibility of their own bigotry as being absurd.
Finally, you criticize me for not criticizing AI companies. I actually believe all AI companies should be disbanded, and their AIs should be made free for all to use. This would eliminate the corporate corruption in AI. I devote a serious amount of my open source contribution time to anti-corporate open source AI.
I'm very curious about all this stuff. That's why I'm interacting with you and other anti-AI people. But my theory about why you respond the way you do is already well formed, and it's pointing toward a critical lack of key facts and knowledge.
If you're genuinely interested in this, below are a couple of things you could read to get some background. It's actually a pretty fascinating history.
Judging by your phrasing, your interpretation of antitrust stems from Robert Bork, and it has been the mainstream view for a long time. Read his book The Antitrust Paradox to see how we got here and why the courts have acted as they have for the past 40 years.
The current chair of the FTC, Lina Khan, was an academic prior to working for the government and has a long paper trail showing how she interprets the law. In short (and extremely oversimplified), her view modernizes the Brandeis interpretation that bigness is bad for society in general, regardless of consumer pricing. For example: if Apple were a country, its GDP would surpass that of all but four nations. This school argues that's bad, flat out.
Can't say it was the only cause, but Khan's paper, "Amazon's Antitrust Paradox" - note the nod to Bork's book - is partially what sparked renewed interest in antitrust for the modern era, if you want to check it out.
The whole high-rise situation you described in almost every U.S. city (i.e., a small central cluster of high-rises) is more a result of insane zoning policies than of traffic safety. High density and peace and quiet really aren't mutually exclusive, although admittedly 99% of towns and suburbs in the U.S. fail to build such places, largely, in fact, because of traffic engineering.
For example, I lived for a while in a town of around 100k in the Netherlands called Delft - high density, walkable, and far quieter than the two suburbs I lived in in the United States.
Not saying the situation will ever change here. But it is possible.
Implementation definitely matters. There's plenty of quiet to be found in the Tokyo metro area for example, which is quite dense relative to US cities. The residential areas are pretty tranquil all day and aside from nightlife hotspots, big chunks of the inner city are absolutely dead at night.
> R - sf package is a clean example of functional OOP
Funny, because I detest it for how difficult it makes digging into and customising spatial data and visualisations at a low level, the way I'm used to with the spatial packages it sort of supersedes.
Jet.com has a lot of good ones, yeah. One I was looking through the other day is a GitHub repo under /ScottArbeit/Grace - an interesting take on version control. It was a pretty cool repo to look through. To make it easier, though, remember that F# source code files are all "in order", so you read them from the top down - something GitHub doesn't currently have functionality for.
For folks coming here later, you can figure out the order that you should view them in by reading into the `.fsproj` file, which will lay out compile units in order.
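Something like this (an illustrative fragment - the file names are made up, but in F# the order of `<Compile Include>` entries is the compile order):

```xml
<!-- Illustrative .fsproj fragment: F# compiles files strictly in this order -->
<ItemGroup>
  <Compile Include="Types.fs" />
  <Compile Include="Storage.fs" />
  <Compile Include="Main.fs" />
</ItemGroup>
```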