Hacker News | ilyt's comments

Pretty much. We went from a network of blogs and forums to all-encompassing massive social networks that impose the same rules and sensibilities on everyone, regardless of sense and reason, regardless of local culture or really anything else.

Like recently here in Poland there was a case where someone's automotive Facebook page (basically a diary of him building his tuned car) got banned after he posted a poll that dared to contain the word "czarny" (black). Someone else got banned for posting a pic of their kids on vacation, etc.


I think the fundamental problem is that it's basically a monolith, and it's now hard to untangle that.

IMO a better architecture for such an "IoT hub" would be a rule engine that basically just takes code (in an embeddable language like Lua, or outright WASM) and runs it against incoming events. Maybe add pluggable storage so the code can have some kind of persistence on top of it.

Then the UI would basically just be a configuration interface for that rule engine. So to run it highly available you'd just run the engine on 2 nodes and sync the configs, and the configuration/dashboard/whatever else using it could run separately on "bigger" machines.
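A minimal sketch of that rule engine, with plain Python callables standing in for the embedded Lua/WASM snippets and a dict standing in for the pluggable storage (all names here are made up):

```python
# Sketch: incoming events are run through user-supplied rules; each rule
# may read/write the shared store and return an action to perform.

class RuleEngine:
    def __init__(self, store=None):
        self.rules = []            # (name, callable) pairs
        self.store = store or {}   # pluggable persistence; a dict for the sketch

    def add_rule(self, name, fn):
        self.rules.append((name, fn))

    def handle(self, event):
        actions = []
        for name, fn in self.rules:
            result = fn(event, self.store)
            if result is not None:
                actions.append((name, result))
        return actions

# Example rule: turn the heater on when a temperature reading drops below 18C.
def heater_rule(event, store):
    if event.get("sensor") == "temp" and event["value"] < 18.0:
        store["heater_on"] = True
        return {"device": "heater", "command": "on"}

engine = RuleEngine()
engine.add_rule("heater", heater_rule)
print(engine.handle({"sensor": "temp", "value": 16.5}))
# -> [('heater', {'device': 'heater', 'command': 'on'})]
```

Running two such engines with synced rule configs is then just a matter of feeding both the same event stream; the UI only ever edits the rule list.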


With the new propane- or CO2-based heat pumps, the minimum temperature at which they keep working is also lower, so it isn't much of a problem for most places if you're in the market for one currently.


> To help ensure messages you send to Gmail accounts are delivered as expected, you should set up either SPF or DKIM for your domain.

But spammers already do that, so why would enforcing it even help?


I think the explanation is a little incomplete, but it makes sense if you expand the explanation a bit.

What really happens is this: when GMail receives a lot of email from domain xyzzy.com, and a lot of it seems to be spam (either it's marked as spam explicitly by recipients, or maybe Google uses some weird AI or whatever to identify messages as spam), then GMail will start marking email from that domain as spam. Obviously, if you own xyzzy.com and you're not a spammer, you want to avoid this. So what can you do?

SPF and DKIM are ways to prevent unauthorized senders from delivering mail that appears to come from your domain. SPF is a way to list the IP addresses authorized to deliver mail on your behalf, and DKIM publishes a public key that receivers use to verify signatures on email coming from your domain. That means that if you have SPF and DKIM enabled, the only people able to send mail that appears to come from your domain are people authorized to do so (there are a few more bumps in the road, but broadly this is true).
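For a concrete sketch, both mechanisms live in DNS TXT records; the domain, selector, IP range, and key below are made-up placeholders:

```dns
; SPF: only hosts in this IP range may deliver mail as example.com
example.com.                  IN TXT "v=spf1 ip4:203.0.113.0/24 -all"
; DKIM: public key under a selector; receivers use it to verify signatures
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"
```

The `-all` in the SPF record tells receivers to hard-fail mail from any other host; the DKIM private key stays on the sending server and signs each outgoing message.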

It's true that spammers can register their own domains for the sole purpose of sending spam, and they can enable SPF and DKIM on those too, but if they use domains exclusively to send spam, they will still be marked as spam domains by GMail, at least eventually.

But this doesn't explain why GMail should be distrustful of domains without SPF and DKIM records. There are literally hundreds of millions of domains worldwide, and only the tiniest minority (think 1% or less) of those have SPF/DKIM records, and not having those records isn't evidence of being a spammer per se. But look at it from the perspective of spammers. If GMail adopts the policy that email from obscure domains without SPF/DKIM records is accepted as long as they don't send high volumes of spam, then it's trivial for spammers to collect 100 million domains without SPF/DKIM and send literally 1 message from each, which results in 100 million spam messages being accepted by GMail.

That's why GMail wants you to add SPF/DKIM records to your domain if you're not a spammer. It allows GMail to block email from the >99% of domains that don't have SPF/DKIM enabled. And for the remaining 1% of domains, it can either reject email outright (if it fails the SPF/DKIM checks), or else reliably identify a domain as being spammy.


It took like a decade for the average LCD screen to look as good as an old CRT. And it still takes a ton of post-processing to make old pixel-art games look as good on an LCD as they did before.

And I did play a bunch of games 10, 20, 30 years after they were released, and they still hold up. My limit seems to be around SNES-era graphics; before that it just feels too ugly and clunky for me.

Sure, with many games it just feels like any modern title does everything better, but some play just fine if you can stomach some of the obsolete mechanics.

> I fired up Doom 2 again on one of the many ports and it looks great, all the nice memories came flooding back, and yet it wasn't the same. It made me realize that the magic wasn't in the games, or the computers or the people. The magic was in us being a bunch of kids born into infinite curiosity and no (real) responsibilities. That magic unfortunately cannot be recreated in adult life.

I thought about it a lot and came to the conclusion that every new interesting experience bumps our "standard" up, and so once you accumulate a ton of those it's just harder and harder to be wowed by a new game, even if it is perfectly fine, fun, and plays nicely. But getting my first car in my 30s was still thrilling and I was giggling like mad, so dunno about the "kid" part. Yeah, kids know shit all, so everything new is exciting, but that doesn't mean you can't find magic moments in adulthood; the amount of work required is just higher.


You can just buy the same chip, in case someone somehow decided to check which flash chip vendor a random card uses.

A more sensible way of stopping that would be signing the EEPROM contents against a key burned into the GPU itself, but I doubt NVIDIA bothered; the money lost to the few people willing to take their GPU apart and replace a chip is insignificant.
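Purely as a sketch of the idea (no claim this is what NVIDIA actually does): the EEPROM image would carry a MAC computed with a hypothetical per-GPU key, so a chip flashed for a different unit fails verification at boot.

```python
# Sketch: firmware blob + MAC keyed on a (hypothetical) per-GPU fused secret.
import hmac
import hashlib

DEVICE_KEY = b"per-gpu-fused-secret"  # made-up stand-in for a key burned into the die

def sign_firmware(blob: bytes) -> bytes:
    """Append a 32-byte HMAC-SHA256 tag to the firmware blob."""
    return blob + hmac.new(DEVICE_KEY, blob, hashlib.sha256).digest()

def verify_firmware(image: bytes) -> bool:
    """Check that the trailing tag matches this GPU's key."""
    blob, tag = image[:-32], image[-32:]
    expected = hmac.new(DEVICE_KEY, blob, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

image = sign_firmware(b"vbios contents")
print(verify_firmware(image))             # True: image matches this GPU's key
tampered = b"X" + image[1:]
print(verify_firmware(tampered))          # False: contents no longer match the MAC
```

A chip swapped in from another board would have been signed with that board's key, so the check fails the same way the tampered image does.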


The driver side has sucked on AMD since the cards were still ATI, tho.


I am hearing the same claims repeated over and over again.

On Linux they are simply not true.

So are we talking about Windows? Are we talking games?


For GPU compute drivers on the majority of their consumer cards on Linux the claims are most certainly true.


Huh? Not sure if I'm misunderstanding but I'm on Arch and I've been running my 6750XT with SD since like February. Got SDXL running a few months ago and have played with oobabooga a bit. Also compiled whisper.cpp with HIPblas the other day.

I also play a few dozen hours of games a month, some new, some old, some AAA, some indie. All through Steam's Proton with no driver issues whatsoever.


You may be running SDXL, but according to benchmarks I've seen, nowhere near the speed of, say, a 3070, or a 3080 12GB (if you want an Nvidia product with comparable VRAM).


I'm waiting for something like that, but with a built-in buck-boost converter to make it immune to charger quirks.


I use it when it's 12V; it's probably the most popular barrel-plug voltage and it's also used in a lot of audio devices (aside from guitar pedals, which use 9V with reversed polarity... fuck that).


-9V is semi-tolerable for legacy reasons; what's really bad is center-positive 5V on what looks like a 1.3mm or so plug, as found in a cheap wireless DMX system. Still beats wired DMX though.


Pretty much. Near-everything aside from "one-man hacking projects" will have some amount of "office hours" just used to coordinate with co-workers, plan, do code review, or frankly just do stuff like reading the docs or specs.

Even assuming "you can only do 3 productive hours a day" (which is a lie with an appeal to authority mixed in: "Look, that guy that made a shitty web framework did it, it must be good!"), that applies only to coding; there is more to being a developer than just programming.

> Especially the advice that you must stop after three hours even when you're "in the zone" seems hard to justify. Instead of insisting on a hard and fixed number of hours everyday, I think it's much more fruitful to follow a flexible time model (which is actually implemented in many work places today): if you happen to have a very productive day, just keep milking it and put in some extra time. In return, you can take that time off on some other day.

I'd even call that advice outright idiotic. You spent time getting into the zone only to throw it away.

I feel like any recommendation of "do X hours of this" is a delusion. We're not robots; we have better and worse days, and more or less engaging tasks. If the task is "here are the API docs, make a bunch of code and tests for it", I can do it all day without much slowdown.


Being in the zone def is a thing.

The longest-lasting and easiest-to-extend software I've cobbled together (some of it still running in production after 20 years) came from me designing for 18+ hours straight, sharing the general idea with the team, then furiously writing for 36 hours. Those spurts were always the most productive and set up future work to be easy to add onto.

