bb86754's comments

Keynote is awesome. Last I checked a few years ago, though, Numbers was nowhere close to Excel: no dynamic array formulas, Power Query, lambda functions, VBA, etc. All of those are pretty essential if you're doing anything beyond basic spreadsheets, but I may need to check out Numbers again.

I should have qualified "better". I find Numbers easier to use for basic spreadsheet tasks. Advanced, programming-like tasks are better handled in Excel, which has far more features for them. I don't think Numbers is Turing complete, but then again I tend to use Python rather than Excel for advanced math processing.

Word has some features that Pages doesn't have, but they're not commonly used, and if you're doing any kind of page layout, Pages is __much__ easier to work with than MS Word.


I believe you, and LLMs are no doubt useful, but "under the hood" it's still just predicting what the next token should be based on the provided context. I take it he's saying that no, there isn't really a ghost in the machine; it's still just linear algebra/calculus and is no reflection of actual organic reasoning.

I think the difference of opinion here is between science and technology. Too many people in my opinion take the latter to be a synonym for the former.


It doesn't matter what is under the hood. A statement can be useful - introduce new views, make valuable points, reduce risks, help resolve conflicts, etc - regardless of whether there is a ghost behind the text. It just needs to be logically sound, consistent with facts about the world, and objectively useful. Then it can make a real world contribution.

The cause is that you don't know how to evaluate when a statement is useful on its own merits. That means that you have to fall back on judging statements based on the identity of the speaker. In the case of AI, your prejudice against math and formulas as effective forms of reasoning means you can't critically analyze - or gain benefit from - statements the AI makes.

It's very similar to the internal blockage of a person who immediately dismisses anything a woman, racial minority, mentally ill, or queer person says. The only way to repair it is to spend time talking to the AI, reading about it, and learning how to debate ideas.

That's the last thing someone with a prejudice wants to do. Curious investigation undermines the safety and certainty of bigoted beliefs. But it's essential if you want to have effective opinions about AI, and useful interactions with AI.


Except that's not what I'm saying, and it does in fact matter what's under the hood if you're looking for a scientific, causal explanation of organic intelligence. I know that AIs are useful, and that they can be logically sound and make real world contributions. That's not what the article is arguing against. Human reasoning, by the way, is much more complicated than any of these things.

The article states that AI will never reach human intelligence, which LeCun defines as "reasoning, planning, persistent memory, and understanding the physical world."

I would argue that's still an extremely narrow definition of human intelligence. Even ignoring semantics, current AIs cannot do any of those things, and by my lights never will, for the same reasons LeCun gives.


Thank you for your response!

It seems that you express two critical needs which I don't share:

1. You need human-analogous AI intelligence to provide a causal explanation for human intelligence.

But it doesn't have to provide this to be human-analogous. It just has to perform the functions a human can.

2. You need AI intelligence to never have memory, planning, persistence, and physical understanding.

But it demonstrably has all of these to various degrees already. We just need simple bolt-on modules like RAG (persistence, understanding), action/critique loops, and tool use (reasoning, planning, understanding) - there's a rough sketch of such a loop below. And there are clear paths for increasing the functionality in each of these dimensions.
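To make "action/critique loop" concrete, here's a minimal sketch of the shape I mean. call_llm is just a placeholder for whatever model client you actually use, not a real API:

    # Minimal action/critique loop sketch. call_llm is a placeholder,
    # not a real API -- wire it to whatever model client you use.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model client here")

    def solve_with_critique(task: str, max_rounds: int = 3) -> str:
        draft = call_llm(f"Solve this task:\n{task}")
        for _ in range(max_rounds):
            critique = call_llm(
                f"Task: {task}\nDraft: {draft}\n"
                "List any factual or logical problems, or reply OK."
            )
            if critique.strip().upper() == "OK":
                break  # the critic is satisfied
            draft = call_llm(
                f"Task: {task}\nDraft: {draft}\nCritique: {critique}\n"
                "Rewrite the draft to address the critique."
            )
        return draft

Tool use and RAG slot into the same loop: the actor or the critic can call a retriever or a calculator between rounds instead of relying on the model's weights alone.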

Functionally, AI is evolving, and there are no clear blockers against this process.

It seems that at some point you have to say that functionalism is not enough. There must be a soul that AI will still be missing, even if functional equivalence is there.

If the AI achieves functional abilities similar to humans - which, let's grant, seems possible for every function we can identify - then you will have to retreat to claiming there is some "je ne sais quoi" that is not captured.

In other words, you will have to argue that the human soul is real.

Is that a length you're ready to go to? Is your position that science can't explain the human soul, even if it can simulate all human functions?

Or are there, in your view, functional limits that, if we reach them, you will admit "this is enough. I was wrong"?

That's my first question to you.

I would also like to point out that LeCun thinks AI can eventually be human-analogous. Specifically, LeCun argues that his own JEPA model can achieve these things, because it has a constantly learning world model, a planning/critique model, a memory model, and an actor model. He criticizes transformer-based LLMs mainly because simple transformers can't learn in an ongoing way.

Are you comfortable admitting that LeCun is trying to promote his own work, and believes it can reach human intelligence levels? If not, what specifically makes you feel LeCun is on your side here?

That is my second question to you.


> It just needs to be logically sound, consistent with facts about the world, and objectively useful

So not an LLM then.

> In the case of AI, your prejudice against math and formulas as effective forms of reasoning means you can't critically analyze - or gain benefit from - statements the AI makes.

You ought to have a more skeptical view of mathematical models that may or may not be effective models of the world.

> It's very similar to the internal blockage of a person who immediately dismisses anything a woman, racial minority, mentally ill, or queer person says. The only way to repair it is to spend time talking to the AI, reading about it, and learning how to debate ideas.

Impossible to take this seriously. Borderline parody. If you were at all curious, you would perhaps be questioning the intention of the corporations building this software. Instead you make absurd comparisons with racism.


This illustrates AI bigotry well

Your first point is that you don't think LLMs can be accurate. That's because you are not using modern LLMs, which are much more accurate and can be made more accurate still with a large number of techniques, from RAG to tool use to self-critique and experiment loops.

Your second point is that I'm overly trusting of math models. In fact I'm an applied mathematician, so I know when models fail. I also know when models are reliable - which you don't, so to you all mathematical reasoning is suspect.

Your next point is that drawing analogies with other forms of prejudice is ridiculous here. But every single thing you said was analogous to a thing a bigot would say, down to dismissing the possibility of their own bigotry as being absurd.

Finally, you criticize me for not criticizing AI companies. I actually believe all AI companies should be disbanded and their AIs made free for all to use. This would eliminate the corporate corruption in AI. I devote a serious amount of my open source contribution time to anti-corporate open source AI.

I'm very curious about all this stuff. That's why I'm interacting with you and other anti-AI people. But my theory about why you respond the way you do is already well formed, and it points toward a critical lack of key facts and knowledge.


If you're genuinely interested in this, below are a couple of things you could read to get some background. It's actually a pretty fascinating history.

Judging by your phrasing, your interpretation of antitrust stems from Robert Bork, and it has been the mainstream view for a long time. Read his book The Antitrust Paradox to see how we got here and why the courts have acted the way they have for the past 40 years.

The current chair of the FTC, Lina Khan, was actually an academic prior to working for the government and has a long paper trail showing how she interprets the law. In short (and extremely oversimplified), her view modernizes the Brandeisian position that bigness is bad for society in general, regardless of consumer pricing. For example: if Apple were a country, its GDP would surpass the GDPs of all but four nations. They argue this is flat-out bad.

I can't say it was the only cause, but Khan's paper, Amazon's Antitrust Paradox - note the reference to Bork's book - is partly what sparked renewed interest in antitrust for the modern era, if you want to check it out.


Dependency Injection Principles, Practices, and Patterns by Mark Seemann.

Examples are in C# but it's a great book about OOP in general.


F# has something called Fable. It originally just compiled down to JavaScript; recently they've added Python and Dart targets.


The whole high-rise situation you described in almost every U.S. city (i.e., a small central cluster of high-rises) is more a result of insane zoning policies than of traffic safety. High density and peace and quiet really aren't mutually exclusive, although admittedly 99% of towns and suburbs in the U.S. fail to build such places - largely, in fact, because of traffic engineering.

For example, I lived for a while in Delft, a town of around 100k in the Netherlands - high density, walkable, and far quieter than either of the two suburbs I lived in in the United States.

Not saying the situation will ever change here. But it is possible.


Implementation definitely matters. There's plenty of quiet to be found in the Tokyo metro area, for example, which is quite dense relative to US cities. The residential areas are pretty tranquil all day, and aside from nightlife hotspots, big chunks of the inner city are absolutely dead at night.


So many; I like reading how other people write code.

R - sf package is a clean example of functional OOP

Python - pytudes (Peter Norvig’s notebooks)

Haskell - Elm compiler. I could mostly understand what’s going on even though I barely know any Haskell.

Ruby - Sequel is really nice.

Rust - Ripgrep

Pretty much any F# codebase is super readable too.


> R - sf package is a clean example of functional OOP

Funny, because I detest it for how difficult it makes digging into and customising spatial data and visualisations at a low level, the way I'm used to with the spatial packages it sort of supersedes.


+1 on F#, a highly underrated language IMHO.


Do you have any F# favorites? People often mention jet.com repos, but I’m interested to hear about others.


Jet.com has a lot of good ones, yeah. One I was looking through the other day is a GitHub repo under /ScottArbeit/Grace, an interesting take on version control. It was a pretty cool repo to look through. To make it easier, though, remember that F# source code files are all "in order", so you read them from the top down - something GitHub doesn't currently have functionality for.


For folks coming here later, you can figure out the order in which to read them by looking at the `.fsproj` file, which lays out the compile units in order.
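As an illustration, a minimal project file looks something like this (the file names here are made up); the `Compile` entries are listed in dependency order, which is the order to read them in:

    <!-- Example.fsproj - hypothetical file names, for illustration only -->
    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <TargetFramework>net8.0</TargetFramework>
      </PropertyGroup>
      <ItemGroup>
        <!-- F# compiles files in the order listed, so read top to bottom -->
        <Compile Include="Types.fs" />
        <Compile Include="Domain.fs" />
        <Compile Include="Program.fs" />
      </ItemGroup>
    </Project>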


I've lived in the area my entire life, and in DC proper for around the past 7 years. I love living here and am without question biased, but I'll do my best to give you some objective advice.

- As others have said, it's a very transient city. A lot of people live here for 5 years and then move. Most of my friends/family live here, so this isn't an issue for me, but take it into consideration.

- There is definitely a tech presence outside of the defense industry, especially in the suburbs as others have mentioned. The scene is not nearly as big as in San Francisco or NYC, though. That said, if you are able to get a security clearance, do it. Without question do it. It opens a ton of doors around here, and you can have an extremely lucrative career with one. DC also has the highest rate of remote workers of any city in America, so you really don't need to work for a company around here if you don't want to.

- Not sure what you're into for fun, but it's very much a "drinking" city. A lot of socializing revolves around going to bars, breweries, concerts, brunch, etc.

- Despite what some may tell you, the suburbs right outside DC (Arlington and Montgomery County) are not cheaper than DC proper by any means. In recent years, average rent in Arlington has been more expensive than in DC. Parts of these suburbs are highly developed, though, and if you didn't know better you'd probably think you were in the city anyway.

- The whole area is expensive. No way around that unless you move to Baltimore.

- Dupont Circle and Logan Circle have a ton of transit and are very walkable - two of the best neighborhoods in DC. Consider ditching your car. I haven't had one for 10 years, and I can't think of many places I'd need to get to that aren't accessible by transit/walking/biking. If I do need a car, we have Zipcar, a car rental service that charges by the hour for quick trips.

- Traffic is absolutely absurd. To me, it's worth it to pay more rather than commute from the farther-out suburbs. Anything beyond Reston, VA will be a miserable commute.

- DC is actually a pretty small city relative to other major US cities. I like this about it, but I've heard this come as a surprise to people that have moved here.

- Certain parts are pretty dangerous. The city is gentrifying quickly (for better or worse), but have some awareness and you'll be fine.

- If you do end up moving here, venture out to the neighborhoods away from the Mall and downtown. This is where the fun stuff happens.

Hope this helps. Happy to answer any questions if you have any as well.


Used to go from Logan Circle to Reston and it was god awful. 1h15m some days.

> Certain parts are pretty dangerous.

It always felt pretty compartmentalized to me, but it's also where a lot of the cool shit and "culture" is. Shaw, H St. corridor, Columbia Heights and Petworth. Even Navy Yard, as broey as it is, has a lot of amenities but always feels a bit sketch.


Oh, definitely agree on both parts. And yeah, that's what I meant by the last point. I couldn't care less about politics or the federal government; the outer neighborhoods are where the "real" DC is, in my opinion.

