dabockster's comments (Hacker News)

False. Local races directly determine the day-to-day laws and rules you live under, far more than a POTUS ever could by decree. I don't know about you, but I sure enjoy having reliable electrical, water, and sewer systems.

They have that in Saudi Arabia too but I would not want to live there. Set higher standards.

In the US, we have the ability to either confirm or change a significant chunk of our Federal government roughly every two years via the House of Representatives. The argument here is that we could, in theory, collectively elect people who are hostile to domestic mass surveillance to the House of Representatives (and other offices where possible) and remove pro-surveillance incumbents from power on that two-year cycle.

The reasons this hasn't happened yet are many and often vary by personal opinion. My top two are:

1) Lack of term limits across all Federal branches

and

2) A general lack of digital literacy across all Federal branches

I mean, if the people who are supposed to be regulating this stuff ask Mark Zuckerberg how to send an email, for example, then how the heck are they supposed to say no to the well-dressed government contractor offering a magical black-box computer solution to the fear of domestic terrorism (whether or not it's actually occurring)?


> 'secure' (by executive standards)

"Secure" in the sense that they can sue someone after the fact, instead of preventing data from leaking in the first place.


Hanlon's razor.

"Never attribute to malice that which is adequately explained by stupidity."

Yes, I'm calling labs that don't distill smaller-sized models stupid.


You can run AI models on unified/shared memory specifically on Windows, not Linux (unfortunately). It uses the same memory-sharing system that Microsoft originally built for gaming, to handle a game running out of VRAM. If you:

- have an i5 or better or equivalent manufactured within the last 5-7 years

- have an Nvidia consumer gaming GPU (RTX 3000 series or better) with at least 8 GB of VRAM

- have at least 32 GB system ram (tested with DDR4 on my end)

- build llama-cpp yourself with every compiler optimization flag possible

- pair it with a MoE model compatible with your unified memory amount

- and configure MoE offload to the CPU to reduce memory pressure on the GPU

then you can honestly get to about 85-90% of cloud AI capability totally on-device, depending on what program you use to interface with the model.
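To gut-check the "MoE model compatible with your unified memory amount" step, here's a back-of-the-envelope fit calculator. The expert fraction, the 2 GB overhead, and the 80% RAM headroom are all illustrative assumptions, not measurements of any particular model:

```python
# Rough feasibility check for MoE expert offload: dense/attention
# weights stay in VRAM, expert weights go to system RAM.

def fits(model_gb: float, expert_frac: float,
         vram_gb: float, ram_gb: float,
         overhead_gb: float = 2.0) -> bool:
    """Return True if the split plausibly fits.

    model_gb    -- total size of the quantized GGUF on disk
    expert_frac -- fraction of weights that are MoE experts (assumption)
    overhead_gb -- KV cache + GPU context headroom (guess)
    """
    gpu_part = model_gb * (1 - expert_frac) + overhead_gb
    cpu_part = model_gb * expert_frac
    # Leave ~20% of system RAM for the OS and other programs
    return gpu_part <= vram_gb and cpu_part <= ram_gb * 0.8

# Example: a ~30 GB MoE quant that is ~85% expert weights,
# on an 8 GB GPU with 32 GB of system RAM.
print(fits(30.0, 0.85, 8.0, 32.0))  # prints: True (6.5 GB GPU, 25.5 GB RAM)
```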

And here's the shocking part: those system specs can be met by an off-the-shelf gaming computer from, say, Best Buy or Costco right now. You can literally buy a CyberPower or iBuyPower machine, download the source, run the build, and have that level of AI inference available to you.
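As a sketch, the build-and-run steps above might look like the following. The repo URL and the `--cpu-moe` flag match recent llama.cpp versions, but flag names change between releases and the model filename is a placeholder, so check `llama-server --help` on your build:

```shell
# Build llama.cpp with the CUDA backend and release optimizations
git clone https://github.com/ggml-org/llama.cpp
cmake -S llama.cpp -B llama.cpp/build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Release
cmake --build llama.cpp/build --config Release -j

# Serve a MoE GGUF: push all layers to the GPU, but keep the MoE
# expert weights in system RAM to reduce VRAM pressure
./llama.cpp/build/bin/llama-server \
  -m your-moe-model.gguf \
  --n-gpu-layers 99 \
  --cpu-moe
```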

Now, the reason why it won't work on Linux is that the Linux kernel and Linux distros both leave that unified memory capability up to the GPU driver to implement. Which Nvidia hasn't done yet. You can code it somewhat into source code, but it's still super unstable and flaky from what I've read.

(In fact, that lack of unified memory tech on Linux is probably why everyone feels the need to build all these data centers everywhere.)


> Now, the reason why it won't work on Linux is that the Linux kernel and Linux distros both leave that unified memory capability up to the GPU driver to implement. Which Nvidia hasn't done yet. You can code it somewhat into source code, but it's still super unstable and flaky from what I've read.

So it should work with an AMD GPU?


> the Linux kernel and Linux distros both leave that unified memory capability up to the GPU driver to implement

Depends on whether AMD (or Intel, since Arc drivers are supposedly OSS as well) took the time to implement that. Or whether a Linux-based OS/distro implements an equivalent to the Windows Display Driver Model (which needs code outside the kernel, specific to that OS/distro).

So far, though, it seems like people are more interested in pointing fingers and sucking up the water of small town America than actually building efficient AI/graphics tech.


It also lets them keep a lot of the legal issues regarding LLM development at arm's length while still benefiting from them.


It's Meta. They always push to be that fast on paper, even when it's costly and not really needed.


> This is the best kind of open source trickledown.

We shouldn't be depending on trickledown anything. It's nice to see Valve contributing back, but we all need to remember that they can totally evaporate/vanish behind proprietary licensing at any time.


They have to abide by the Wine license, which is the LGPL, so unless they're going to make their own from scratch, they can't make the bread and butter of their compat layer proprietary.


That's why the anti-GPL push is so harmful, especially in the Rust ecosystem.


There is absolutely nothing harmful about permissive licenses. Let's say that Wine was under the MIT license, and Valve started publishing a proprietary fork. The original is still there! Nobody is harmed by some proprietary fork existing, because nothing was taken away from them.


It's a little more nuanced than that. Software and gained freedoms survive not because they exist, but because they are being actively maintained. If your original, never-taken-away software does not get continually maintained, then:

* It will slowly go stale; for example, it may not get ported to newer, increasingly expected desktop APIs.

* It will lose users to competing software (such as your proprietary fork) which is better maintained.

As a result, it loses its relevance and utility over time. People that never update their systems can continue using it as they always have, assuming no online-only restrictions or time-limited licenses. But to new use cases and new users, the open software is now less desirable and the proprietary fork accumulates ever more power to screw over people with anti-consumer moves. Regulators ignore the open variant due to its niche marketshare, increasing the likelihood of things going south.

Harm can be done to people who don't have alternatives. In order to have alternatives, you need either a functioning free market or a working, relevant, sufficiently usable product that can be forked if worse comes to worst. Free software can of course help in establishing a free market, it isn't one or the other.

If a proprietary product takes over from one controlled by the community, much of the time it's not a problem. It can be replaced or done without.

If a proprietary platform takes over from one controlled by the community, something that determines not only how you go about your business but what other people expect from you, everyone gets harmed. The problem with a lot of proprietary software is that every company and their dog wants their product to become a platform and reshape the market to discourage alternatives.

MIT by itself does no harm. If it works like LLVM and everyone contributes because it makes more sense than developing a closed-off platform, then great! If it helps bootstrap a proprietary market leader while the once-useful open original shrivels away into irrelevance, not so great.


A decade or two ago, Wine was under a permissive license (MIT, I think). When proprietary forks started appearing, CodeWeavers (which employs all the major Wine contributors) relicensed it under the LGPL.


It's harmful to the ecosystem, because the reason so many Linux drivers, Wine contributions, and a lot of other things are free software today is the GPL.


How? It's GPL.


Can it vanish behind proprietary licensing? Pretty sure most of Valve’s stuff is under GPL so they can’t exactly evaporate that away.


From what I read, it was a lot of the prosumer/gamer brands (MSI, Gigabyte, ASUS) implementing their part of sleep/hibernate badly on their motherboards. Which honestly lines up with my experience with them and the other chips they use (in my case, USB controllers). Lots of RGB and maybe overclocking tech, but the cheapest power management and connectivity chips they can get (arguably the parts people actually use the most).


Sleep brokenness is ecosystem-wide. My Thinkpad crashes/freezes during sleep 3 times a week. Lenovo serviced/replaced it 3 times to no avail.


I have never had any sleep issues with my Macs.


And my wife's Macbook Air wakes itself up again if it tries to suspend when connected to a Dell monitor. Apple has plenty of bugs too, and only Apple can fix them.


This is a part of Secure Boot, which Linux people have raged against for a long time, mostly because the main key-signing authority was Microsoft.

But here's my rub: no one else bothered to step up as a key signer. Everyone has instead whined for 15 years and told people to disable Secure Boot and the loads of trusted-compute tech that depends on it, instead of actually building and running the necessary infra for everyone to have a Secure Boot authority outside of big tech. Not even Red Hat/IBM, even though they have the infra to do it.

Secure Boot and signed kernels are proven tech. But the Linux world absolutely needs to pull their heads out of their butts on this.
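For what it's worth, the plumbing for a self-managed signing authority already exists at single-machine scale via shim's Machine Owner Key (MOK) mechanism. A hedged sketch using standard tools (openssl, mokutil, sbsign); all file names are placeholders:

```shell
# Generate a signing key and a self-signed certificate (placeholder names)
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=my kernel signing key/" \
  -keyout MOK.key -out MOK.crt
openssl x509 -in MOK.crt -outform DER -out MOK.der

# Queue the certificate for enrollment; shim's MokManager asks for
# a confirmation password on the next reboot
mokutil --import MOK.der

# Sign a kernel image with the enrolled key
sbsign --key MOK.key --cert MOK.crt \
  --output /boot/vmlinuz-signed /boot/vmlinuz
```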


The goals of the people mandating Secure Boot are completely opposed to the goals of people who want to decide what software they run on the computer they own. Literally the entire point of remote attestation is to take that choice away from you (e.g. because they don't want you to choose to run cheating software). It's not a matter of "no one stepped up"; it's that Epic Games isn't going to trust my secure boot key for my kernel I built.

The only thing Secure Boot provides is the ability for someone else to measure what I'm running and therefore the ability to tell me what I can run on the device I own (most likely leading to them demanding I run malware like the adware/spyware bundled into Windows). I don't have a maid to protect against; such attacks are a completely non-serious argument for most people.


Is there any even theoretically viable way to prevent cheats from accessing a game you're running on a local machine without also disabling full user control of your system?

I suppose something like a "reboot into '''secure''' mode" to enable the anti-cheat and such, or maybe we'll just get Steam streaming or whatever, where literally the entire game runs remotely and streams video frames to the user.


And all this came from big game makers turning their games into casinos. The reason they want everything locked down is that money is on the line.


anti-cheat far precedes the casinoification of modern games.

nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.

anti-cheat is essentially existential for studios/publishers that rely on multiplayer gaming.

So yes, the second half of your statement is true. The first half--not so much.


> anti-cheat far precedes the casinoification of modern games.

> nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.

You are correct, but I think I did a bad job of communicating what I meant. It's true that anti-cheat has been around since forever. However, what's changed relatively recently is anti-cheat integrated into the kernel, alongside requirements for signed kernels and Secure Boot. This dates back to around 2012, right as games like Battlefield started introducing gambling mechanics.

There were certainly other games that had some gambly aspects to them, but the early 2010s are pretty close to when esports, along with in-game gambling, started to bud.


There are plenty of locked down computers in my life already. I don't need or want another system that only runs crap signed by someone, and it doesn't really matter whether that someone is Microsoft or Redhat. A computer is truly "general purpose" only if it will run exactly the executable code I choose to place there, and Secure Boot is designed to prevent that.


I don't know about the ecosystem overall, but Fedora has been working for me with Secure Boot enabled for a long time.

Having the option to disable Secure Boot was probably due to backlash at the time and antitrust concerns.

Aside from providing protection against "evil maid" attacks (right?), Secure Boot is in the interest of software companies. Just like platform "integrity" checks.


I'm pro Secure Boot, FWIW, and have had it working on my Linux systems for a while.


I'm not giving a game ownership of my kernel; that's fucking insane. That will lead to nothing but other companies using the same tech to enforce other things, like what software you can run on your own stuff.

No thanks.

