
What's wrong with you?

It's not necessarily about being "one program". It's this part:

"The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities."

I get that it's really hard to make money as an open source company. (That's why I am one of your paying customers.)

The exclusion you are putting on your SDK seems very similar to that of the BitKeeper version control software used for the Linux kernel for a short time. Look how that turned out.


FSF has published a commentary: https://www.gnu.org/licenses/gpl-faq.html#MereAggregation

GPL licenses have always allowed so-called "mere aggregation", where separate programs are distributed together. Such programs don't all have to be covered by the GPL.

On the other hand, if parts are intimately tied to each other such that they are effectively a single program, GPL applies to the whole.

The FSF commentary explains that the judgment depends both on the mechanisms and the semantics of the co-operation. Technical implementation details don't make programs separate if they are intimately designed to work together: "But if the semantics of the communication are intimate enough, exchanging complex internal data structures, that too could be a basis to consider the two parts as combined into a larger program."


So they either have to license their SDK under a GPLv3-compatible license as well, or change the license of the client to a non-GPL one.

In the latter case, IIUC their CLA (https://cla-assistant.io/bitwarden/clients) allows them to change the license unilaterally. (Not a legal expert, so please correct me if I am wrong.)

If so, then I feel strengthened again in my conviction that permissive licenses (as well as closed-source licenses) and CLAs are bad for both users and developers and should be avoided, if possible.


You are sidestepping the issue and answering in bad faith, and you know it.

What do people actually want to hear from you?


Their answer actually sounds alright to me. What is your problem with it, exactly?

ChatGPT, for all its amazingness, _never_ follows instructions. It just appears to, as it generates likely text. This is incredibly important to understand — it isn't a generalized AI.

Larger and more sophisticated models will do the party trick more convincingly... but actually following instructions will require a different approach.


It's ridiculously annoying.

The other thing that I would really like to see is at least an option for deterministic responses. I like the creativity of ChatGPT constantly giving different responses, but it is hugely problematic when you are trying to iterate and it keeps completely changing the thing you are iterating on.


LLMs are a drunk guy at a party. A context machine, or a really good spell checker, maybe like a really outgoing, kinda drunk guy at a party. He can just walk right up to any group, hear someone say the words, "...that's why we never go on vacations anymore..." and he will just jump right in, "Me and the wife just got back from a great VACATION, we love going to Cabo, it's our favorite vacation spot, last year we went..."

The guy has NO IDEA what the group was actually talking about, but it doesn't matter anyway. You can give him more context, but all he can do is further tailor his story to try and fit in better with the group discussion.

I think people get confused about what LLMs are doing because you CAN ask the drunk guy questions and he CAN kind of answer, or ask him to do a party trick and he can kinda do it, but all he is doing is continuing the context you give him, he IMMEDIATELY forgets anything you told him so he can't do the same exact thing twice.


Use the API and specify temperature 0. It will be as close to deterministic as the underlying system allows.
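A minimal sketch with curl, assuming the standard chat completions endpoint (the model name and prompt are placeholders):

    curl https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": "Rewrite this paragraph: ..."}],
            "temperature": 0
          }'

Even at temperature 0 you may see small run-to-run variations, so treat it as best-effort determinism rather than a guarantee.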


> ChatGPT, for all its amazingness, _never_ follows instructions.

Yes it can, with robust system prompts.

The webapp is a misleading representation of ChatGPT's capabilities.
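For illustration, via the API the system prompt goes in as the first message (the prompt text here is made up):

    curl https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "model": "gpt-3.5-turbo",
            "messages": [
              {"role": "system", "content": "Reply only with valid JSON. No prose outside the JSON."},
              {"role": "user", "content": "List three Linux distributions."}
            ]
          }'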


LOL wow that's an old-school deep cut!


I don't know about "sed-replace", but you can download the RHEL selinux-policy source package from https://gitlab.com/redhat/centos-stream/rpms/selinux-policy
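For example (the branch layout may vary by release):

    git clone https://gitlab.com/redhat/centos-stream/rpms/selinux-policy.git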


> There is no any guarantees Stream doesn't break ABI compatibility.

This is incorrect. Or, sure, there are no _guarantees_, but any such break would also break a future RHEL release, and is therefore a bug.

2. Five years.


AFAIK, there is no rule that RHEL is strictly based on Stream.


RHEL is branched from Stream and released from the branch every 6 months.

I challenge you to find _one_ package in RHEL (as of the git.centos.org c9 branch) that reverted an ABI breakage that is in Stream, without said reversion having been applied to Stream as well.


> Five years

So basically half as long, nice.


Isn’t that the same amount of time Rocky would be supported?


Or, access every part of it for free with no strings attached here: https://gitlab.com/redhat/centos-stream


Try https://access.redhat.com/downloads/content/package-browser

You should also be able to use yum/dnf to download the srpms from your dev program system.
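Something like this should work, assuming the "download" plugin from dnf-plugins-core is available (the package name is just an example):

    sudo dnf install dnf-plugins-core
    dnf download --source selinux-policy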


> The distribution needs to do almost nothing to support either EFI or the legacy BIOS or any other booting method.

I think you're seriously underestimating the amount of effort the bootloader and hardware enablement teams who work on Fedora put into making _systems boot Linux at all_.


Okay, this is a serious question. For me, not an official RH position. In my time in HPC, nodes were baked with a specific image, and then that basically never ever got updates. Coming to that as a sysadmin from other areas, I found it somewhat horrifying, but it seemed pretty universal. Have things changed such that applying patches regularly (like, more often than once a month or so, except in emergencies) is a thing?


Not much. In our setup the image is not something that evolves or changes over time, but this practice has some very practical reasons:

Scientific applications can be very picky about the libraries they use or need, down to the minor version, since the results they produce are very, very precise. Even if the results are not very accurate, you need to know the inaccuracy. An optimization in a math library can change this, and that's not something we want. Also, program verification and certification generally include the versions of the libraries used.

Piecewise upgrades are a no-go too. Your cluster generally can't work well in heterogeneous configurations (due to library mismatches), and draining a node is not a straightforward task (due to the length of the jobs). If your cluster has a steady stream of incoming jobs, reducing resources also means queue bloat, and recovering from that is not always easy. If you want to drain the whole cluster, it takes almost 2-3 weeks, so you lose ~1 month of productivity. And when you start an empty cluster to churn through its queues, saturation takes time, so it doesn't go to 11 directly.

Also, worker nodes are highly isolated from the user's point of view. No users can log in, only known people submit jobs, etc. Unless there's a rogue academic trying to do nefarious things, the place is pretty safe and worry-free. In the past 15 years, we got two rootkit infections, via a server which is world-accessible by design. Other than that, nothing ever got infected.

At the end of the day, this approach has some valid reasons to be alive. It's not that we're a bunch of lazy academics who refrain from applying good system administration practices. :D

Addendum: The images generally get updated when new hardware is added, since new processors tend to work better with newer kernels. Also, sometimes we bite the bullet and update the whole cluster at once. XCAT helps a lot in this space: if your image is sane, you can install batches of 150+ servers in 15 minutes while sipping your coffee.
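For the curious, the xCAT flow is roughly this sketch (the node range and image name are made up):

    # point the nodes at the image, then power-cycle them to reinstall
    nodeset compute001-compute150 osimage=compute-image
    rpower compute001-compute150 reset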


Right, so: for this case, CentOS Stream will be virtually identical to the CentOS Linux RHEL rebuild.


We will certainly try. We need to mirror a repo, freeze it, and update our installation infra so it points to the local repo rather than the national mirror.

All repo settings will point to the local repo, so we'd have no dependency problems or version creep if we need to install an additional package.
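A rough sketch of the mirror-and-freeze step, assuming reposync from dnf-plugins-core (the repo id, paths, and URL are placeholders):

    # one-time snapshot of the upstream repo, metadata included
    dnf reposync --repoid=baseos --download-metadata -p /srv/mirror

    # /etc/yum.repos.d/local-baseos.repo on the nodes
    [local-baseos]
    name=Frozen local BaseOS mirror
    baseurl=http://mirror.example.edu/mirror/baseos
    enabled=1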

I haven't completely thought through how to handle the occasional emergency update, though.

Also, we need to compile some packages ourselves, and I hope those won't break; high-performance stuff needs optimized/customized builds.

I just want to add: I hope the packages in CentOS Stream won't end up too cutting-edge for the scientific software community. These communities move slowly due to stability requirements. We'll certainly see, but it might be another potential problem.


I can totally reassure you on your last concern: everything that goes into Stream is approved for a minor release in RHEL. That's not changing at all. Cutting edge is still Fedora's turf. :)


Thanks, because that last point would be actually breaking in some cases.

I think HN is the only place where you can casually provide feedback and get answers about an OS project from one of the core people in it. Fun!

Glad to meet you, BTW.


To be clear, I'm RHEL and CentOS _adjacent_, rather than actively _in_ them. But I think (rough launch and more than a few communication issues aside) this is generally gonna be positive.


I think that's because HPC users are largely non-technical developers. We changed a DHCP schema at one point and had a bunch of angry academics in the IT office because their Matlab scripts were broken. Many of them had been hard-coding IP addresses into the code itself.


The login nodes on our cluster (UChicago) can reach uptimes over a hundred days (which my tmux sessions love).

Seems like the kernel was last updated in May.

    $ uname -r -v
    3.10.0-1127.8.2.el7.x86_64 #1 SMP Wed May 13 10:45:47 CDT


No, they haven't changed in my experience.

