Hacker News | bangaladore's comments

I'd suggest everyone go read the issue thread (https://gitlab.com/gnachman/iterm2/-/issues/11470) before commenting.

It seems abundantly clear that people are being overly negative about a feature that realistically has no security concerns (even as originally developed). Many commenters did not even know how the feature worked (e.g., assuming all keystrokes were sent by default).

One outright said that the feature should be removed because the developer must "stand against OpenAI and the whole "AI" industry."

To me this just seems like a lot of people whining and trying to inject politics and unfounded safety concerns into a good implementation of something that many people like. This is an opt-in feature. It has a separate panel you use to interact with it. And you need to provide a valid OpenAI API key to use it.


> I don't care if there's a buried flag that enables or disables this behaviour. I want a binary that doesn't have this capability in it at all.

That's a rather absurd way of approaching the threat model of data exfiltration on a terminal app.

By its very definition a terminal needs the ability to spawn unsandboxed processes and send/receive input from/to them, including processes that have network access. Even if the binary doesn't contain specific logic to do this it could invoke curl, or a variety of other binaries that do, either on purpose or accidentally. In addition, it links against AppKit, which includes NSURLRequest. Is that off-limits too?

If one is so allergic to OpenAI that even an opt-in feature is a concern, they're better off using a firewall like Little Snitch, or blocking it at the DNS level.

Additionally, if you don't trust the developer with this, why would you trust a binary from them without this feature?


Agreed. This issue thread makes it apparent why many open source developers give up.


It's one thing for this data to be held locally and under extreme protection through the OS.

But Azure cloud? How can this be legal?


The way we treat corporations doing bad things in America:

It's probably not illegal...

And if it is, they're probably not going to get caught...

And if they do, the government probably won't go after them...

And if it does, they're probably not going to be found guilty...

And if they are, they probably won't face any consequences...

And if they do, nobody's going to go to jail.


Azure Cloud isn’t mentioned in the article.


I overtrusted the original commenter. My bad... Thanks for the correction.


If we envision a spectrum ranging from an HTTP(s) client to a full-fledged web browser (like Chrome or Firefox), I would place cURL very close to the HTTP(s) client end of the scale.


Why are they manually implementing API interfaces for various companies when something like OpenRouter exists? OpenRouter provides a unified API for commercial and open-source models. It seems like the obvious answer for something like this.
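For what it's worth, OpenRouter's unified API is OpenAI-compatible, so the per-vendor differences collapse into a single request shape; a minimal sketch (the helper function and the model identifiers are illustrative assumptions, not anyone's actual code):

```python
# Sketch only: OpenRouter exposes an OpenAI-compatible chat endpoint, so the
# same request shape reaches many vendors' models; only the "model" string
# changes. Nothing is sent over the network here.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model, prompt, api_key):
    """Return (headers, payload) for a chat-completion request."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,  # e.g. "openai/gpt-4o" or a hosted open-source model
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

# The same call shape covers commercial and open-source models alike:
headers, payload = build_request("openai/gpt-4o", "Hello", "sk-placeholder")
print(payload["model"])  # openai/gpt-4o
```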


Lots of reasons!

- They may not know this service exists.

- They may not think the service is actually suitable for their use case.

- They may not want the extra dependency.


OpenRouter repackages vendors' APIs, and most companies prefer having individual relationships with AI vendors. Choosing an AI gateway might be another way to go.


I have had this question. How much better would common LLMs (Llama, GPT-N) be if they were trained in only one language? I have to assume they would perform better, but I might be wrong.


Perform better how? Knowing more languages gives you more data and different points of view, rather than just the English corpus and culture. When I ask ChatGPT for a translation, it seems to understand the meaning behind the words and finds the closest thing in the other language. The datasets seem to merge in some way.


Fair, but there may be overhead that doesn't need to exist. Certainly, given the limited compute my brain can muster, I could gain a deeper understanding of physics if I focused on learning physics and didn't also have to simultaneously learn French.


Wouldn't a better analogy be whether a child growing up in a bilingual household would be worse at physics as an adult? My guess would be that growing up bilingual has no impact.


This hypothetical kid would have the same brain size/number of neurons anyway. In the case of LLMs, one could create a model that is smaller thanks to not including knowledge of unnecessary languages. A problem, though, could be the lack of training data in other languages.


In the short term. In the longer term you'll understand concepts better when you're multilingual.


Humans are not limited by the computational power of the brain (or rather, that is not the limitation we encounter). We are limited by time and the fact that our machinery degrades with age.


Just as adding code to textual training data helps a model develop its reasoning capabilities, it seems like adding more languages helps in other areas too. What is needed is more good-quality data to train on...


We also see humans get worse at specific things when they learn too much in general. There is a cut-off point to how many concepts we can learn with what skill. To be most effective, we have to specialize in the right things while continuing to acquire generalist knowledge. It’s a balancing act.

These architectures are less capable than brains in many ways. So, we should expect them to have such trade-offs. An efficient one should work fine on English, mathematical notation, and a programming language. Maybe samples of others that illustrate unique concepts. I’m also curious how many languages or concepts you can add to a given architecture before its effectiveness starts dropping.


I guess you mean non-textual data then, because the amount of text data they are being trained on ought to be enough for AGI by now?

Some kind of diminishing-returns asymptote from text volume alone must have been hit a long time ago.


It's not the amount that is the problem, it's how the model is trained. The model is trained for zero- and few-shot tasks; it is not surprising that it performs well when you ask for exactly that.


> its reasoning capabilities

To be clear, LLMs are not capable of reasoning.


imo this is an uninteresting debate over semantics/metaphysics


Would you say a deontologist reasons? Evolution survives, but does it reason?

Is it reasonable to show interest in something you call uninteresting?

Was Gödel a reasonable man, starving to death in fear of being poisoned?


I can't track down the citation (from either Google or DeepMind, I think), but I remember reading research from a year or two ago about how adding extra languages (French, German) improved English-language performance. There may have also been an investigation into multimodality, which found that adding vision or audio helped with text as well.


Interesting thought. Maybe an LLM would build deeper insight with only one training language. On the other hand, the model might overfit with just one language -- maybe multilingual models generalize better?


they would perform worse, i promise you


I think this makes sense to the extent that an understanding of the differences between languages helps separate language from the underlying meaning. However, the models used to receive input (i.e., translate from language), to learn/understand, and to output information (i.e., re-encode into language) do not all have to be the same.


"I promise you"?

This is Hacker News; I would have expected data, not promises.


I think this is being misunderstood, partly because of its own claim to be a build system.

The CMake DSL is utter garbage. From my understanding, this converts TOML that follows the same (or similar) naming as CMake commands/options into the CMake DSL, allowing total compatibility with CMake while avoiding the pain of interacting directly with a CMakeLists.txt.

Most people agree that CMake is not a good option, just the only real option. I'd happily put lipstick on this pig.
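If the tool works the way this comment describes, the TOML for a small executable might look roughly like the sketch below; the exact keys are my guess at a convention mirroring CMake's own naming, not verified against the project:

```toml
# Hypothetical sketch: TOML keys mirror familiar CMake commands/options.
[project]
name = "hello"
version = "0.1.0"

[target.hello]
type = "executable"                 # ~ add_executable(hello ...)
sources = ["src/main.cpp"]          # ~ target_sources(...)
compile-features = ["cxx_std_17"]   # ~ target_compile_features(...)
```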


The Make DSL is also utter garbage. As is Autotools. I'm not convinced that wrapping a garbage DSL with another DSL is a value-add when you're building applications.


There are plenty of options other than cmake. Ideally:

1. Read the GNU Make manual.

2. Write a tiny build system with Make, or use any of the precanned ones. I use this: https://github.com/dkogan/mrbuild/ but there are many others.

In any case, learning how to actually use Make is a prerequisite to having an opinion here.


3. Read “Recursive make considered harmful”

4. Follow its advice until you hit the end of where it is useful. (i.e., you can no longer count the number of $ escapes in your own code).


Yes. Recursing your Makefiles produces poor results. You know who hasn't read the Make manual and makes recursive Makefiles? The cmake devs.

If you truly need a lot of complexity, you can end up with unreadable Makefiles, as you say. Most projects don't need a lot of build logic, though. I have lots of projects using mrbuild, and their builds are clear and easily maintainable. If you truly need a lot more, then Make might not be the best choice. And you can do much better than cmake.
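The kind of small, non-recursive build these comments describe can fit in a single Makefile; a sketch (the directory layout and flags are illustrative, not taken from mrbuild):

```make
# One make invocation builds everything: no recursive sub-makes.
SRCS := $(wildcard src/*.c)
OBJS := $(patsubst src/%.c,obj/%.o,$(SRCS))
CFLAGS := -Wall -O2 -MMD -MP        # -MMD/-MP generate header-dependency files

app: $(OBJS)
	$(CC) $^ -o $@

obj/%.o: src/%.c | obj
	$(CC) $(CFLAGS) -c $< -o $@

obj:
	mkdir -p $@

-include $(OBJS:.o=.d)              # pick up generated .d dependency files

.PHONY: clean
clean:
	rm -rf obj app
```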


cmake is not just another way to get a Makefile. If you have a cross-platform project and want to work in multiple IDEs on specific platforms (Xcode/Visual Studio), cmake or some other build-system generator is a huge time saver.


That is the one big use case cmake has, yes. But most (all, actually) of the cmake I see around me is used just to get a Makefile.


It is a website where you can type in some web technology, such as "CookieStore API", and see at a glance which browsers support it, and from which browser versions.


That makes sense. Thanks for the clarification.


It seems to me that there are tens of millions of "L2 driver assist" vehicles, as defined by Synopsys, that have zero driver-supervision checks, such as most cars out there with lane-keep assist and cruise control.


And you believe that federal regulators have simply never heard of these other systems? A more likely explanation is that the driver attention monitoring in a Honda or Cadillac or whatever works better than Tesla's.


You seem to have misunderstood me. I'm talking about vehicles without attention-monitoring systems. Most cars sold in the past ten years technically have "Level 2" driving features but no driver-monitoring systems.

Tesla Autopilot/FSD is immensely more capable than 99% of LKAS systems, so maybe that's why regulators care more. But from a pure attentiveness standpoint, a thousand other car models have grossly worse safety in this regard.

However, my argument is that your original claim that an "L2 driver assist system must ensure that the driver is always actively supervising" is demonstrably false, given that, by pure numbers, "Level 2"-capable cars by and large do not have driver-monitoring features. Yet the government continues to focus solely on Tesla.


Can you name such a vehicle? My personal car with lane keeping and TACC insists that you keep your hands on the wheel.


I might be mistaken about cars requiring no input, but isn't steering sensing alone a problem? The car does not know whether it is you or a weight on the steering wheel, or whether you are on your phone, touching the wheel with your knee.

According to the NHTSA, Tesla was forced to enable driver monitoring via a camera because sensing driver steering input was insufficient.

So, without a doubt, tens of millions of cars have "insufficient" driver monitoring while having "Level 2" capability. Should those cars not be recalled and the feature disabled (or updated to more aggressive monitoring where the appropriate hardware exists)?

To be clear, I fully support driver monitoring for ADAS to whatever extent regulations require. But picking and choosing who to enforce the rules suggests they aren't working for the general public's best interest.


The NHTSA insisted that Tesla switch to camera monitoring because Tesla repeatedly monkeyed with the nag interval via OTA software pushes. All the other manufacturers, except Tesla, complied with NHTSA's 2017 rule on the issue.


The IIHS has repeatedly shown that Tesla airbags are best in class. Deploying airbags when not needed can do significant harm to occupants, not to mention totaling vehicles for no good reason.

There isn't a need to wonder when independent data gatherers have already answered the question.


> The IIHS has repeatedly shown that Tesla airbags are the best in class.

No they haven't. For a start, IIHS doesn't rank "airbags".

And then... Tesla fans love to latch on to this as a trope.

The IIHS -groups- vehicles into safety levels by class. It doesn't rank in class, at all.

So what, you might say, they're still in the top or top two safety levels, they deserve credit... Except:

There are twenty-one other vehicles at that same level or higher in the same segment ("mid-size luxury"), and sixteen if you only count the top tier.

Of those, Tesla could be number 1, or number 16. You don't know. So this whole meme of "Tesla is the safest" needs to die, it's just bluster.


I think you are misrepresenting the actual IIHS results.

For example, the Model Y is one of five cars that achieved the highest rating in its size class, not one of 16. You are grouping their highest and second-highest tiers together.

Additionally, the Model Y is by far the cheapest vehicle in that category.

So, yes. If you consider a car to be classified by size and price, which I think is a very reasonable way to classify a car, the Model Y is very clearly the best in its class.


> For example, the Model Y is one of five cars that achieved the highest rating in its size class, not one of 16. You are grouping their highest and second-highest tiers together.

Who said I was talking about the Y? I also specifically said that I was grouping them ("they're still in the top or top two safety levels"), because the IIHS has "Top Safety Pick+", "Top Safety Pick", and "Others". Though I have been looking at 2023. But let's break it down:

Tesla Model 3: Didn't make the cut.

Model S: Didn't make the cut.

> So, yes. If you consider a car to be classified by size and price, I think it is a very reasonable way to classify a car, the Model Y is very clearly the best in its class.

"So yes" implies a logical conclusion. But your initial point was "IIHS says Tesla has the best airbags in class", which it does not.

For one model of car it says that it is in the top five of that segment.

But then somehow you blow that out to be "clearly, best in class, because it's cheapest".

Like, no. The other cars in the category could have better safety than the Model Y. And if your argument is "it's best because it's cheapest" when it comes to safety? Wow. Huh. I suppose we all know the safest components are the cheapest?


The original comment about Tesla allegedly gaming the system by not deploying airbags when they should is unfounded based on available safety metrics. It's an unsubstantiated claim that derails the conversation from the main point.

Comparing the minor differences in airbag performance among a handful of cars is not particularly relevant to the overall argument, especially when considering the vast number of poorly performing vehicles on the market.

And, yes, I do believe that cost matters here. Cars exist in size and price classes. If you cannot comprehend that, that's fine. But it doesn't change the argument.


> The original comment about Tesla allegedly gaming the system by not deploying airbags when they should is unfounded based on available safety metrics. It's an unsubstantiated claim that derails the conversation from the main point.

If you read my other comments in this thread, I partially agree with you. But I also think Tesla's "we don't count it as an FSD/AP incident if airbags weren't deployed" is "convenient", given that "advanced airbags" (which are a spec) use a whole variety of means to determine deployment which don't correlate to the severity of the incident, i.e.:

Depending on other parameters, you can collide with someone at 20-30 mph but have no airbag deployment, because the algorithm decides that passenger restraint is sufficient. Great. Except if a car operating in FSD/AP mode causes a 20-30 mph collision with something else, that's a notable incident. Well, most people would think so. Tesla explicitly says this is NOT an incident when reporting FSD/AP stats. Huh.

> And, yes, I do believe that cost matters here. Cars exist in size and price classes. If you cannot comprehend that, that's fine. But it doesn't change the argument.

I can comprehend that just fine. But it's not a factor for IIHS, which is what -you- brought into the argument when you said "IIHS says it's best in class for safety".

1. It doesn't.

2. You can't then say "oh, well, if you consider cost as well, then clearly it must be best in class", which is a conclusion that cannot inherently be drawn from the previous.

It -may- be best in class "overall", not for safety alone, but that's got nothing to do with what you said. The flow of that argument was:

You: It's best in class.

Me: Not demonstrably, it's one of several cars that are in the top category for that class.

You: Well, if you factor in price, too, it's "clearly" best in class.

Everyone likes paying less money, sure. But you're already talking about the luxury segment, where just maaaaybe people are conscious of more factors than price when considering even this broader definition of best in class.


It can be best in class, yet still rarely deploy...

It can even be best in class when tested, rarely deploy, and save more lives than competitors. It just needs to correctly detect cases like this[1].

[1]: https://youtu.be/MTX0MvqBqL0?t=10


You claim that Tesla is gaming the system.

I'm not aware of a definition of gaming in which the system is overall safer than, or equivalently safe to, other top-tier competitors. That seems objectively better than deploying airbags when they would harm the occupants or total the vehicle when not needed.


Tesla is not gaming the system by having a good airbag system that only deploys when necessary (that's actually really good!).

Tesla is gaming the system by excluding accidents where the airbags don't deploy from the "Autopilot/FSD accidents" dataset, thus artificially deflating the number of accidents.


This is a point you certainly could make. But it is not what the original commenter wrote.


Agreed, but I am not the original commenter, I am making my own point separately from them. I agree with you that Tesla isn't gaming anything by having a good airbag.


It only makes sense if everyone else is gaming the system, and then it just proves it's still the best lol.


Reading the article or the NHTSA documents would clarify that Tesla is not being blamed here for the crashes, but for not ensuring the driver was paying attention. Drivers had plenty of time to react, but they didn't.

This is not a case of the car quickly jerking into a pedestrian on the side of the road.

