
Depends on what you do/need I guess. Overall I'd rate the ones I use as rather useful. I use Run and FancyZones every single day, to the point that it's annoying not to have them when I'm on another computer. The others I have enabled (because I use them often enough) are e.g. Text Extractor, Color Picker and Screen Ruler.

> Python is the right thing to compare to here, because it is easily the most popular way to perform these computations in the modern day. Specifically using numpy.

By that reasoning, wouldn't it make more sense to wrap their C code and maybe even make it operate on numpy's array representation, so it can be called from Python?
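
Something along these lines would be enough for that; just a sketch, with a made-up library name and function signature (the article's actual C interface may look different):

    # Hypothetical wrapper: call a C matmul kernel directly on NumPy buffers via ctypes.
    # Assumes the C code was built as libmatmul.so exporting
    #   void matmul(const float *A, const float *B, float *C, int M, int N, int K);
    import ctypes
    import numpy as np
    from numpy.ctypeslib import ndpointer

    lib = ctypes.CDLL("./libmatmul.so")
    lib.matmul.restype = None
    lib.matmul.argtypes = [
        ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),  # A
        ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),  # B
        ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),  # C (output)
        ctypes.c_int, ctypes.c_int, ctypes.c_int,         # M, N, K
    ]

    M = N = K = 1000
    A = np.random.rand(M, K).astype(np.float32)
    B = np.random.rand(K, N).astype(np.float32)
    C = np.zeros((M, N), dtype=np.float32)
    lib.matmul(A, B, C, M, N, K)  # operates on the NumPy arrays' memory directly, no copies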


Popular does not mean best.

Suppose that this blog post were part of a series that questions the axiom (largely bolstered by academic marketing) that one needs Python to do array computing. Then it is valid to compare C directly to NumPy.

It isn't even far-fetched. The quality of understanding something after having implemented it in C is far greater than the understanding gained by rearranging PyTorch or NumPy snippets.

That said, the Python overhead should not be very high if M=1000, N=1000, K=1000 was used. The article is a bit vague on the array sizes; those figures come from somewhere in the middle of it.
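
Easy enough to get a rough feel for that yourself (just a sketch, sizes per the figures above; actual numbers depend entirely on your machine and BLAS build):

    import time
    import numpy as np

    M = N = K = 1000
    A = np.random.rand(M, K)
    B = np.random.rand(K, N)

    t0 = time.perf_counter()
    C = A @ B  # dispatches to whatever BLAS gemm NumPy was built against
    t1 = time.perf_counter()
    print(f"matmul took {(t1 - t0) * 1e3:.1f} ms")
    # The Python-side dispatch overhead is on the order of microseconds,
    # while the multiplication itself takes milliseconds at this size.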


Python is popular precisely because non-programmers are able to rearrange snippets and write rudimentary programs without a huge time investment in learning the language or tooling. It’s a very high-level language with plenty of syntactic sugar and a lot of data science libraries that call into C code for performance, which makes it great for data scientists.

It would be a huge detriment and time sink for these data scientists to take the time to learn to write an equivalent C program if their ultimate goal is to do data science.


I think it’s okay to say “This is the benchmark, now I’m going to compare it against something else.” It’s up to the reader to decide if a 3% (or 300%) improvement is worth the investment if it involves learning a whole other language.

It's a muddy comparison given that NumPy is commonly used with other BLAS implementations, which the author even lists but doesn't properly address. Anaconda defaults to Intel oneAPI MKL, for example, and that's a widely used distribution. Not that I think MKL would do great on AMD hardware; BLIS is probably a better alternative there.
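
For anyone wanting to check what their own install does, NumPy can print which BLAS it was built against (the output format varies quite a bit between versions and distributions):

    import numpy as np
    np.show_config()  # lists the BLAS/LAPACK libraries this NumPy build links against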

The author also says "(...) implementation follows the BLIS design", but then proceeds to compare *only* with OpenBLAS. I'd love to see a more thorough analysis, and using C directly would make it easier to compare multiple BLAS libs.


Their implementation outperforms not only the recent version of OpenBLAS but also MKL on their machine (these are the DEFAULT BLAS libraries shipped with NumPy). What's the point of comparing against BLIS if NumPy doesn't use it by default? The authors explicitly say: "We compare against NumPy". They use matrix sizes up to M=N=K=5000, so the FFI overhead is, in fact, negligible.

If that was the goal, they should have compared NumPy to BLAS. What they did was compare OpenBLAS wrapped in NumPy with their C code. It is not a reasonable comparison to make.

Look, I'm trying to be charitable to the authors, hard as that might be.


There is some reason in this comparison. You might want to answer the question: "if I pick the common approach to matrix multiplication in the world of Data Science (numpy), how far off is the performance from some potential ideal reference implementation?"

I actually do have that question niggling in the back of my mind when I use something like NumPy. I don't necessarily care exactly _where_ the overhead comes from, I might just be interested whether it's close to ideal or not.


If that was your question, you would compare against a number of BLAS libraries, which are already well optimized.

What they are doing here is patting themselves on the back after handicapping the competition. Not to mention that they have given themselves the chance to cherry-pick the very best hyperparameters for this particular comparison, while BLAS is limited to using heuristics to guess which of its kernels will suit this particular combination of hardware and parameters.

The authors need to be called out for this contrived comparison.


Are OpenBLAS and MKL not well optimized, lol? They literally compared against OpenBLAS/MKL and posted the results in the article. As someone already mentioned, this implementation is faster than MKL even on an Intel Xeon with 96 cores. Maybe you missed the point, but the purpose of the article was to show HOW to implement matmul with NumPy-like performance without FORTRAN/assembly code, NOT how to write a BLIS-competitive library. So the article and the code LGTM.

Look, it's indeed a reasonable comparison. They use matrix sizes up to M=N=K=5000, so the FFI overhead is negligible. What's the point of comparing NumPy with BLAS if NumPy already uses BLAS under the hood?

I do not recommend C++/CLI.

Can you elaborate on why?

I looked at various ways for interop between C# and C++ over the years, and overall found C++/CLI to be the best for our particular application types: it's a separate layer between a C++ backend (which is also used in other non-GUI applications) and a Windows-only WPF desktop application on top. Mainly because the C++/CLI code itself is simple, readable and fairly effortless to write and maintain. A bit repetitive at times, but that's going to be the case for any such layer AFAIK. Integration on the C# side is seamless, with code completion etc., and C# interfaces can be implemented in C++/CLI directly, for instance. The initial setup takes some work, but with conversion between common types implemented (e.g. IEnumerable<ManagedT> <-> std::vector<NativeT> or std::iterator<NativeT> etc.) it's all pleasant and good.

or check this library of mine https://github.com/Const-me/ComLightInterop/

Gotta say this looks neat, but it's exactly the type of code I'd rather not write: UUIDs, a bunch of macros, unclear mapping between return types (HRESULT vs bool), having to declare the same interfaces in both C++ and C#, ...


> Can you elaborate on why?

The language is only supported in a single compiler, and is specific to Windows.

The language is based on both C++ and .NET runtimes, and when I used it (admittedly, it was many years ago) I didn’t like the usability consequences. These two runtimes interact in a weird way. It’s hard to compose data structures which contain both managed and unmanaged pieces, see that question https://stackoverflow.com/q/10523268/126995 You can’t include Windows SDK headers in CLI code, not gonna compile: https://stackoverflow.com/q/26502283/126995 Same applies to most third-party C and C++ libraries.

So with C++/CLI you have three languages instead of just two: C#, C++/CLI, and classic C++.

> it's exactly the type of code I'd rather not write

I have used that library in multiple projects in the last 5 years, for Windows and Linux platforms including ARM Linux, both open source and commercial. It worked great for my use cases.

Here’s an open-source example of a relatively complicated COM interface implemented on top of ComLightInterop. C# interface: https://github.com/Const-me/Cgml/blob/master/CGML/CgmlNet/iC... C++ interface: https://github.com/Const-me/Cgml/blob/master/CGML/Cgml/API/i... C++ implementation: https://github.com/Const-me/Cgml/blob/master/CGML/Cgml/D3D/C...

As you can see, it has very few macros, no bool-returning methods, a very clean API between managed and unmanaged parts, and most importantly very readable, idiomatic code on both sides of the interop.

I’d like to add that if you make mistakes when writing these C++ and C# projections of a COM interface, you’ll find out very soon because it’s likely to crash on first use due to the /GS compiler switch. It’s also trivial to debug and fix because VS supports mixed-mode debugging.


Also related, the rather counterintuitive theory / myth that hot water freezes faster than cold water.

It comes up every winter here when it starts freezing and people want to leave out water for the birds. I can't help but urge people to just try it instead of simply believing it. I.e., it's not that I think it's impossible under certain specific circumstances, but having tried it myself a number of times, with the result always being that the cold water freezes first, it's pretty clear to me that it's a bit silly to just dismiss using warm water.


More info (but a little vague on myth vs fact): https://en.wikipedia.org/wiki/Mpemba_effect


I’m curious if other people detect interrupted or irregular patterns so readily

All the time, and I learned not to care a lot, even to like some of it; for instance there's a lot of (mostly abstract, surrealist) art which does all the things wrong on that front but which is extremely enjoyable to me. It's the same weird way with music: exact 4/4 stuff is mostly boring, often even annoying, but give me funky off-beat stuff, chaos and noise, and it brings a smile to my face.

There's only one thing I can't shake off, and that's lines which are meant to be, but aren't exactly, parallel or at right angles. I can keep staring at those. Especially when they are very close to being correct but look like they're off (by like 1mm over 1m). It's not the first time I've actually gotten up and taken a ruler to verify.


But then there is the intentional curvature in ancient stone columns, where the pillar is neither a perfect cylinder nor even a perfect cone, and it's on purpose because actual perfect forms don't look right to humans.

Like one part of the article shows the Apple logo in a circle, and the correct centering is not to have all points on the logo equidistant from the circle, but to allow the leaf to go a lot closer than the rest.


This, and the linked article, show the photo sensor halfway down the monitor. Nothing wrong with that for comparing measurements, but for quite a lot (possibly the majority) of typical monitors out there, at a 60Hz refresh that actually means putting the sensor at the top of the screen will give you about 8ms faster measurements, and at the bottom about 8ms slower ones, because pixels (or rather lines of them) are driven top to bottom. Like a CRT, basically. So if you're getting into the details (just like where to put the threshold on the photo sensor signal to decide when the pixel is on), that should probably be mentioned. Also because 8ms is quite the deal when looking at the numbers in the article :)
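
Back-of-the-envelope numbers, assuming a 60Hz panel that drives lines top to bottom with negligible blanking:

    refresh_hz = 60
    frame_ms = 1000 / refresh_hz  # ~16.7 ms to scan out one full frame
    for fraction in (0.0, 0.5, 1.0):  # sensor at top, middle, bottom of the screen
        print(f"{fraction:.0%} down the screen: pixel driven ~{fraction * frame_ms:.1f} ms after the top line")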

Likewise, just saying 'monitor x' is 30ms slower than 'monitor y' can be a bit of a stretch; it's more like 'I measured this to be xx ms slower on my setup with settings X and Y and Z'. I.e. one should also check whether the monitor isn't applying some funny 'enhancement' adding latency with no perceivable effect but which can be turned off, and whether when switching monitors your graphics card and/or its driver didn't try to be helpful and magically switch to some profile where it applies enhancements, corrections, scaling and whatnot which add latency. All without a word of warning from said devices usually, but these are just a couple of the things I've seen.


If I’m watching a screen scroll in real time I’m much more likely to be looking at the bottom third of the screen.


No one puts on a vinyl record to play one song.

Pretty much every DJ out there begs to differ :) I get your point though, and I do also like to listen to whole albums (be it on vinyl or CD or PC), sometimes. Other times I prefer to create the playlist on the spot depending on my mood, i.e. selecting one track after the other. 'DJ' if you will, except I don't usually do proper mixing, as that can take away part of the 'just listening' experience.


So how does that play out when I work as a contractor for multiple companies which have some overlap?


That depends very much on the details of your contract, your understanding with clients, level of honesty, risk appetite, and legal advice.

There are any number of large companies where being employed by multiple entities is anywhere from frowned upon to expressly verboten. Of course, there are also any number of companies that do not care as long as you deliver, and many that actively encourage / expect nerds to nerd out in our spare time :)


This entire "You're a contractor, but you can only work for us" really makes me ask "Then why are they a contractor", cause it really sounds like they are an employee.


In this situation, they probably work for a consulting firm which hires them as an FTE. And my experience with this is that the consulting firm itself is party to the agreement on how intellectual property rights are transferred, and the employee of the firm signs an agreement that the firm can negotiate such rights on their behalf.

It boils down to: the employee transfers IP to their consulting firm, and the firm agrees to transfer ownership of the IP that the billed hours were used to produce.

IME, the firm I worked for would not bill clients for work done on our shared libraries. I'd record those hours to the firm itself, but I would still get paid the same.


Many reasons, usually a combination of short-term need, a specialized skillset, and a reduction in liability.


You should consult the employment contracts you signed and/or an attorney...


Contractors probably get different contracts than employees


Is there a good source of data on this?

This. It's not that I don't believe the article per se, but it doesn't get much further than 'technician says', which I find hard to consider evidence for various reasons. Moreover, and that's the major flaw imo, it talks about 'appliances' as one big group, as if all are alike and all brands and price ranges behave the same, and gives examples like fridges with icemakers, which are fairly new in e.g. Europe and which I wouldn't buy anyway because I have no need for one.

It would be interesting to see how a plain fridge holds up, how long current 'normal' dishwashers and washing machines etc. last compared to the past, and to have that split up per brand / model group / price category. Also because anecdotally I haven't encountered issues, but that could just as well be because my expectations are different, or I just had luck, or I just happened to buy the right thing. I mean, I have several examples, but just to name one: my expensive cordless drill is now about 17 years old and I got a new battery for it recently, making it as powerful as it used to be again. It has had zero issues and I renovated 2 houses with it. Is that expected? Did I just get lucky? I don't know, but price divided by hours of usage, this thing costs close to nothing yet works great, and has for what I'd consider a pretty long time.


This is the thing, and the main reason I only played around with HA but never fully implemented it. Ideally for things which are controlled in 'absolute mode' you need a watchdog. Like something implemented in hardware to keep it from misbehaving. On the other hand many crucial devices have this built in. But obviously plain lights don't. What I mean is e.g.:

- having HA control your heating's on/off hours, where the watchdog is the heating's thermostat making it stop heating when it's warm enough or forcing heating if it gets too cold. So HA crashing and leaving the heating in heat mode might just be a waste, but not a serious issue.

- having HA control the wanted temperature, on the other hand, is more problematic, because for all I know it could misbehave and make the heating want to go to 30 degrees Celsius, or to a value so low that there's no frost protection anymore, then crash and never get it out of that state again. And there's no watchdog correcting for it. Potentially this can cause issues. Chances are small, but I don't like the idea that, AFAIK, these chances are much larger than those of standalone heating acting up like this.

Likewise we can now opt for a dynamic electricity tariff, the basic idea being that, for instance, when you know prices are going to drop below a certain threshold during the night, you tell your home battery to charge at that time. Of course the thing acts as a watchdog for itself in that it stops charging when full, but there is no watchdog keeping it from charging continuously. In other words: if it's put into charging mode and HA crashes and leaves it in that mode, it will happily continue charging during peak hours. Not 'serious' per se, but pretty stupid.
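
To make the watchdog idea a bit more concrete, this is roughly what I'd want running on the device itself, independent of HA (purely an illustrative sketch, not an existing HA or device feature; names and timings are made up):

    import time

    SAFE_SETPOINT_C = 16.0        # frost-safe fallback temperature
    COMMAND_TIMEOUT_S = 15 * 60   # revert if HA hasn't refreshed its command for 15 minutes

    last_command_time = time.monotonic()
    requested_setpoint_c = SAFE_SETPOINT_C

    def on_command_from_ha(setpoint_c):
        """Called whenever HA (re)sends its wanted temperature."""
        global requested_setpoint_c, last_command_time
        requested_setpoint_c = setpoint_c
        last_command_time = time.monotonic()

    def apply_setpoint(setpoint_c):
        print(f"setting target temperature to {setpoint_c} C")  # stand-in for the real actuator

    def control_loop():
        while True:
            if time.monotonic() - last_command_time > COMMAND_TIMEOUT_S:
                apply_setpoint(SAFE_SETPOINT_C)   # watchdog: HA went silent, fall back to a safe state
            else:
                apply_setpoint(requested_setpoint_c)
            time.sleep(10)

The point being that the fallback lives on the device, so HA crashing can at worst cost some comfort or money, not the frost protection.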


I started out with just toggling the TRV between two target temps, which is pretty much your suggestion: 17° when I'm away, 20° when I'm home. I added Better Thermostat to the mix because I liked the idea of using an external thermometer for deciding whether to heat the room or not, instead of the one built into the TRV sitting right next to the radiator. So Better Thermostat usually overshoots the target temp a bit at first to get the TRV to open the valve fully. I don't even know if I like this tbh, but even in the old setup it would've been annoying to enter the kitchen in the morning to find out it's still 17° because something is broken with the whole setup. I mean, before, I just went to the kitchen and turned on the heat manually before using the bathroom and putting my clothes on back in the bedroom, and I'm still not sure the current automation is that much of an improvement even if it works reliably. Once you start, you realize it's really hard to create automations that are subtle yet useful, and still easy to override, because there are always special cases.


I mean, why not just have HA control a hardware thermostat that has different mode settings (home, away, etc.)? Plenty of them exist and it's, honestly, easier than trying to make HA into a thermostat.

I've done it in "pure" HA due to specific requirements and do not recommend it. Reliability has not been a problem but everything else was. You'd probably need a custom thermostat software component (as the HA built in one is limited), a wi-fi connected switch (like a Shelly), a high amp relay (if heating or controlling an AC due to startup load) and a low latency temp sensor (most have a 10+ minute delay or 2 degree F delay, I found Govee Bluetooth sensors to work well). Then you'd still probably want a remote control or a physical control panel/dashboard. The IR remotes for ACs are absurdly difficult to decipher (I never succeeded) and there's no good Zigbee remotes I found (the best I found is the Yolink remote but that's not local). Dashboards either involve running a browser on some LCD on a wall or an e-paper display wired to esp-home. Making a dashboard in esp-home is like going back 40 years as everything is individual graphic components drawn one by one. It might annoy you (like it did me) enough to build an svg bashed dashboard creator, a renderer on your HA box and then use a dead PR's remote image loading support in esp-home.

edit: The only positive about my approach is that I was able to build a custom "feels like" temperature curve that combines temp and humidity. So I no longer wonder all the time why I'm feeling cold or hot despite the thermostat being set to the same thing as last week.
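
For reference, one common simple version of such a curve is the NWS "simple" heat index approximation (in Fahrenheit); just an illustration, not necessarily the curve I used:

    def simple_heat_index_f(temp_f, rh_percent):
        # NWS simple heat index regression, reasonable below roughly 80 F apparent temperature
        return 0.5 * (temp_f + 61.0 + (temp_f - 68.0) * 1.2 + rh_percent * 0.094)

    print(simple_heat_index_f(75.0, 70.0))  # a humid 75 F "feels" slightly warmer than a dry 75 F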


> I mean, why not just have HA control a hardware thermostat that has different mode settings (home, away, etc.)? Plenty of them exist and it's, honestly, easier than trying to make HA into a thermostat.

Well yes, that's exactly my point: when done like that there's a watchdog in place. My issue (or rather my reluctance to use it) is that HA and the like provide, as far as I'm aware, no watchdog features for systems which don't already have one. Think some hardware AND-style gate which is only going to apply HA's last state if, say, HA can prove it is up and running properly.


There isn't one because by far the most common failure case isn't "HA isn't running" but "the communication protocol isn't working." A watchdog must be on the device itself, because otherwise it's basically useless for all intents and purposes. That said, in my experience it also doesn't matter in like 99% of situations (a light stays on... who cares, and you'll notice the issue with your own eyes anyway), and for most of the rest you can just have HA send you a notification that it lost contact with some device for too long. You want notifications anyway because you cannot assume that a thermostat being on means the heater is on. If you need a heater to avoid a water pipe bursting, then you should have a low-temperature alert as well. The breaker may have blown, the relay may have broken, or the heater's overheat fuse may have blown at some point.

