Hacker News | fizzynut's comments

You should probably just use a quad tree to put your objects into and traverse that with a path finding algorithm.

Yeah, I thought about that too, but I'm also trying to keep pre-processing work as light as possible.
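For reference, the quad-tree suggestion above can be sketched in a few dozen lines. This is a minimal point quadtree assuming square axis-aligned regions and point-sized objects; all names and the capacity constant are illustrative, not from any particular library:

```cpp
#include <cmath>
#include <cstddef>
#include <memory>
#include <vector>

// Minimal point quadtree: each node covers a square region (center + half
// width) and subdivides into four children once it holds more than
// kCapacity points. New points then fall through to the children.
struct Point { float x, y; };

class QuadTree {
public:
    QuadTree(float cx, float cy, float half) : cx_(cx), cy_(cy), half_(half) {}

    bool insert(Point p) {
        if (!contains(p)) return false;            // outside this region
        if (!children_[0] && points_.size() < kCapacity) {
            points_.push_back(p);
            return true;
        }
        if (!children_[0]) subdivide();
        for (auto& c : children_)
            if (c->insert(p)) return true;
        return false;                              // unreachable if contains(p)
    }

    // Collect every point inside the query square (center qx,qy, half width
    // qhalf), skipping whole subtrees whose regions don't overlap it.
    void query(float qx, float qy, float qhalf, std::vector<Point>& out) const {
        if (std::abs(qx - cx_) > qhalf + half_ ||
            std::abs(qy - cy_) > qhalf + half_) return;   // no overlap
        for (const auto& p : points_)
            if (std::abs(p.x - qx) <= qhalf && std::abs(p.y - qy) <= qhalf)
                out.push_back(p);
        if (children_[0])
            for (const auto& c : children_) c->query(qx, qy, qhalf, out);
    }

private:
    static constexpr std::size_t kCapacity = 4;

    bool contains(Point p) const {
        return std::abs(p.x - cx_) <= half_ && std::abs(p.y - cy_) <= half_;
    }

    void subdivide() {
        float h = half_ / 2;
        children_[0] = std::make_unique<QuadTree>(cx_ - h, cy_ - h, h);
        children_[1] = std::make_unique<QuadTree>(cx_ + h, cy_ - h, h);
        children_[2] = std::make_unique<QuadTree>(cx_ - h, cy_ + h, h);
        children_[3] = std::make_unique<QuadTree>(cx_ + h, cy_ + h, h);
    }

    float cx_, cy_, half_;
    std::vector<Point> points_;
    std::unique_ptr<QuadTree> children_[4];
};
```

A pathfinder would then `query` the neighborhood of each node it expands instead of scanning every object; the pre-processing cost is one `insert` per object.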

From zooming into your clip, both the ASCII and Unicode versions are wrong:

- ASCII is off center ~43/50 pixel margins

- Unicode is off center ~20/25 pixel margins

- Both have different margin sizes

- The button sizes of both are the same.

- The Hide button is offset from both the 8/10/16 selector and the ASCII/Unicode buttons

- Even if everything were correct, the lack of contrast between "Off" and the background would make it look wrong anyway


Is it possible to get rid of all the macros TO_FIXED, FROM_FIXED, mult, etc., and replace them with a class with the correct constructors / operator overloads?

Then your code doesn't ever need to be aware of the special fixed point math and horrible syntax everywhere and everything just works?
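A minimal sketch of what such a class could look like, assuming a Q16.16 format (the macro names in the comments refer back to the ones mentioned above; everything else is illustrative):

```cpp
#include <cstdint>

// Fixed-point wrapper replacing TO_FIXED / FROM_FIXED / mult style macros.
// Q16.16: 16 integer bits, 16 fractional bits. Products widen to 64 bits
// before the shift, which reduces (but does not eliminate) overflow risk --
// you still have to pay attention to your value ranges.
class Fixed {
public:
    constexpr Fixed() : raw_(0) {}
    constexpr Fixed(int v) : raw_(v * (1 << kShift)) {}      // was TO_FIXED
    explicit constexpr Fixed(double v)
        : raw_(static_cast<int32_t>(v * (1 << kShift))) {}

    constexpr int toInt() const { return raw_ >> kShift; }   // was FROM_FIXED
    constexpr double toDouble() const {
        return static_cast<double>(raw_) / (1 << kShift);
    }

    friend constexpr Fixed operator+(Fixed a, Fixed b) { return fromRaw(a.raw_ + b.raw_); }
    friend constexpr Fixed operator-(Fixed a, Fixed b) { return fromRaw(a.raw_ - b.raw_); }
    friend constexpr Fixed operator*(Fixed a, Fixed b) {     // was mult
        return fromRaw(static_cast<int32_t>(
            (static_cast<int64_t>(a.raw_) * b.raw_) >> kShift));
    }
    friend constexpr Fixed operator/(Fixed a, Fixed b) {     // b must be nonzero
        return fromRaw(static_cast<int32_t>(
            (static_cast<int64_t>(a.raw_) << kShift) / b.raw_));
    }
    friend constexpr bool operator==(Fixed a, Fixed b) { return a.raw_ == b.raw_; }

private:
    static constexpr int kShift = 16;
    static constexpr Fixed fromRaw(int32_t r) { Fixed f; f.raw_ = r; return f; }
    int32_t raw_;
};
```

With the implicit `int` constructor, expressions like `Fixed a = 3; a = a * 2 + a;` read like ordinary arithmetic, and the fixed-point plumbing disappears from call sites.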


Yes it can. I’ve seen it done properly only once, though. You still have to pay attention to avoid overflows.


I'm curious why you think JavaScript should be on the list; I see a lot of issues:

JavaScript generally runs on a website in a browser: the browser must be continually updated on the client side, every few weeks at the very least.

Your software must work on all future browsers released over the next 50 years.

Security patches/privacy/laws/etc deprecate/change API calls regularly.

The website must continually update certificates, maintain payments to providers / keep valid credentials / not get hacked for 50 years.

Most JavaScript applications have a billion dependencies, and if it's been a few years, people rewrite with a new framework rather than update, never mind after decades.

Keeping dependencies up to date for 50 years is a lot of work.

I don't know of any JavaScript application that has run continuously for even 20 years to date.


Intel basically made the same CPU for about 6 years straight because of 10 nm process issues.

They had to keep pretending the next-gen "Lake" CPU was substantially different from the last, so they just took the last-gen product, made some minor tweaks to break compatibility, and called it a new generation.


Same goes for most cars. No real revolution: tweaks and changes due to regulatory demands, but nothing groundbreaking.


Still, when you’re due for a new car and look for the newest of the newest, would you go with manufacturer A, who released their latest car 8 years ago, or manufacturer B, who released it 1 year ago?

Incremental upgrades get so much hate around the internet (mostly about phones), usually from people who own the version just before. They say things like “Ah, they changed almost nothing! Why would I upgrade?!” Meanwhile someone like me, only on my 3rd smartphone EVER, loves all the incremental updates accumulated over the years when I finally decide I need a new one, because then I get the latest and greatest. If a company doesn’t release anything for a few years, I’d go somewhere else.


The one with reliability data for the past 8 years.

It's surprising to me that people would want to make a major financial decision like a car without knowing about its reliability history.


8 years of the same parts, repair knowledge, and continued software support?

Sign me up immediately.


> continued software support

Unfortunately due to the extremely minimal software rights that exist (see: proprietary software) this is pretty much nonexistent in cars AFAIK.

I would rather get a car that is old enough to not be limited by software constraints. Which is pretty disappointing, because I actually really like electric cars. I think they would work well for my needs. But they are all so intentionally kneecapped, I have no interest in any particular model that's available.


I would love a super bare bones electric car. One that functions the same as any late 90s/early 2000s era car would, except with an electric power train and maybe cruise control.


2011-2013? Nissan Leaf fits the bill.


This.

It's so thoroughly reverse engineered that if you decided you wanted to reconfigure the software so it would only drive at prime numbered miles per hour, you could.


Those cars are essentially illegal to sell in many countries.


> It's surprising to me that people would want to make a major financial decision like a car without knowing about its reliability history.

Some people will always be surprising, but it is pretty clear that the pickup truck is the most purchased type of vehicle (in North America) exactly because it has a much better reliability track record than most cars. This idea doesn't escape the typical buyer.


> it is pretty clear that the pickup truck is the most purchased type of vehicle (in North America) exactly because they have a much better reliability track record as compared to most cars

The pickup truck is also deeply ingrained in American culture as masculine, even if the owner does nothing that requires it.


Yes, it seems most products that gain a reputation for reliability end up taking on a "masculine persona", of sorts. Which I guess stands to reason as in popular culture reliability is what defines the "manly man".


What if the reliability is like "these bearings are known to fail every 10k miles or so, but we have no product refresh planned for at least 3 years so the problem will remain unresolved?"

This is what incremental improvements are supposed to be. Well that and discovering that the vehicle can last till the end of the warranty period with one less bolt in that spot, so you can eliminate it.


You are wrong and right. Wrong, because continuous improvement is not about making a vehicle survive only a 3-5 year warranty period. Right, because the master of continuous improvement has a 10 year warranty (20000 km, 12500 miles) where I live if you do service at a dealership or authorised service centre, and I think this extended warranty influences the decisions about the minimum level of quality the manufacturer will accept.


In the case of cars and CPUs, it's not that people mind incremental upgrades; it's that they mind incremental upgrades sold as big upgrades.

For phones the mindset of people who upgrade when they have to and/or buy cheaper phones is very different from those who regularly upgrade to the latest flagship phone.


Case in point: a new car released 8 years ago but given incremental upgrades since (e.g. the Mitsubishi RVR in NA) still won’t have the same fundamental design considerations around safety or fuel efficiency as a more recent model.


New models every year are fine if they're honestly labeled and have technologically reasonable compatibility.

Cars and phones meet those criteria a lot better than Intel CPUs do. The problem isn't releasing things; it's the way they pretend every release is a big advance and the way they make motherboards almost never upgradeable.


Buying first-gen models is always a crapshoot. Often the same goes for last-gen models if they try to squeeze new capabilities into a platform it wasn’t intended for.

Tesla is particularly terrible but this has been true for every manufacturer.

You want a couple years for them to work out the kinks.


For sure A. I would never buy a car that is the first model year of a revamp. I would give them at least a year to work out the bugs.


Honestly, from a reliability standpoint, the ideal new car is one that had a major refresh ~2 years ago. By then most of the kinks should be worked out.

Or just pay attention to the warranty. If they guarantee it for 10 years, they probably expect it to run for 10+ years.


Car buyers aren't always so dumb. When we bought our car, I was fully aware that major updates to models happen only so often. We bought used (of course), and the "major update" was our main criterion, more so than the specific year of release. (We bought a 2014 model in 2018; 2014 was the year they released significant safety improvements over the 2013 model.)


This is arguably exactly what most people actually need in a vehicle they are spending thousands of dollars on: accumulated refinements seamlessly incorporated over time.

Year over year this typically results in good outcomes on a purely practical basis. However, it inherently makes for very boring publicity/promotional material.

Edit to add: it can also admittedly result in older solutions getting baked in, preventing larger beneficial changes. (Toyota's North American 4Runner and Tacoma models might be good real-world examples of this approach resulting in generally high reliability, but also in larger, "riskier" changes eventually seeming necessary.)


Luckily it’s not common to need to replace your garage every time you get a new car.


In this analogy, Intel sells the parts to make garages too


Cars don't get a truly new model every year. The yearly updates are even called "facelifts" to make it clear that it's essentially the same car with minor modifications and upgrades.

Also, there isn't much "groundbreaking" you can do in a car; except for the EV switch, the industry has evolved through many small upgrades over time (like many other industries).


I mean, I can't speak to ICE cars, but electric cars' ranges seem to scale pretty dramatically with how new they are.


There haven't been significant combustion-engine efficiency changes in a long time. My scrapbox from 2007 still goes 550 miles on a tank of diesel, about the same as my 1997 car did before it.


I guess the only question is whether you include hybrids in ICE cars. I'd personally never buy a strict ICE again, given the high quality of hybrids.


An M5 Ultra would be the best AI chip for a data center in the same way that an array of the best laptop speakers would be a good way to power a music concert.


When you write your own UI, you get used to quickly and easily creating custom elements that behave exactly as you want, and even iterating on those elements to get the best user experience.

Let's say I want to control pan/tilt/zoom/focus/aperture/etc. of a remote camera. If I ask, say, an expert in UI framework Z to do it, it will take them 10x longer to create a very painful experience: standard elements with poor input latency, so someone actually trying to set up a camera over/undershoots everything, but it technically "ticks every box". The path to a better experience just isn't there, and it is difficult to undo or change all the boilerplate/structure, so version 1 isn't improved for years because it took so long to create the first iteration.


Sounds like a very specific example. But I'm still unconvinced: what will make this particular UI slow using Qt? The camera view? My Qt note-taking app is faster and more responsive than native apps like Apple Notes and the best-in-class Bike Outliner, both in loading speed (4x) and in resizing (with word wrapping) a large text file (War and Peace).


Maybe the user wants to do real-time exposure/color correction, so you want to minimise the number of frames between the input and seeing the output. To do it properly, the user would also want to see analysis graphs on screen on the same frame that's being displayed. And do this for 10 cameras at once.

Maybe your definition of "fast" for a large text file is War and Peace, and mine is multiple 1/10/100 GB text files that you want to search and interact with in real time, at a keystroke level.

I've probably written 100+ completely different "very specific examples" in very different industries, because that's where you can create much better experiences.

Generally your expectations are based around what you get as standard from the library, but if you want to get a much better experience then it immediately becomes a lot more difficult.


I believe you have proved my point that you're speaking of very niche examples. Even Sublime Text won't load a 100 GB file instantly on a normal machine, and I consider it a very well-made app. While of course there might be apps that will load such files instantly, they are highly optimized for that task. My point is that Qt is more than enough to replace all those Electron and other web-based apps, while performing as well as or even better than native apps.

At the end of the day, Qt can also be just a wrapper for your highly optimized engine - for example, NotepadNext[1] is using the very performant Scintilla engine while its UI is written in Qt. From my (unscientific) tests, it's even vastly faster than Sublime Text.

BTW, I'm not saying that rendering and creating your own UI is always a bad idea. Many people do it because it's fun and challenging, or to push the boundaries. That's what Vjekoslav Krajačić is doing with Disk Voyager - writing a file explorer in C from scratch[2][3]. But for many people, that's too much. I believe Qt C++ with QML is the best combo for most people, for most applications.

[1] https://github.com/dail8859/NotepadNext

[2] https://diskvoyager.com/

[3] https://www.reddit.com/r/SideProject/comments/103b9fy/disk_v...


If everything you do stays on the rails of Qt, you're going to be fine. But if you try to do something simple, like load a 2 GB file, everything starts to fall to pieces. Then you're going to assume the people who wrote this are super clever, and that a 2 GB file is too complicated, too niche, too hard a problem, that it needs to be "highly optimised", etc.

The reality is that Qt, or whatever program/framework, is doing 100 things you don't care about when loading/rendering a file. If we only care about one thing, we can do it much better: because we skip the 100 other things, naïve code written in 5 minutes outperforms the standard element by a factor of 1,000.
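As a concrete illustration of "only caring about one thing": a minimal line counter over a huge file using plain buffered reads, with no per-line allocation, no decoding, and no widget updates. The function name and buffer size are illustrative; this is a sketch, not a benchmark claim:

```cpp
#include <cstdio>
#include <cstring>
#include <vector>

// Count newlines in a file with raw buffered reads. The inner loop is just
// fread + memchr, so throughput is close to whatever the disk/page cache
// can deliver. Returns 0 if the file cannot be opened.
std::size_t countLines(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return 0;
    std::vector<char> buf(1 << 20);                 // 1 MiB read buffer
    std::size_t lines = 0, n;
    while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0) {
        const char* p = buf.data();
        const char* end = p + n;
        // memchr scans for '\n' a cache line at a time in most libc builds.
        while ((p = static_cast<const char*>(std::memchr(p, '\n', end - p)))) {
            ++lines;
            ++p;
        }
    }
    std::fclose(f);
    return lines;
}
```

A general-purpose text widget doing layout, styling, undo history, and encoding detection on the same file is solving a much bigger problem; that's the gap being described above.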


If you ever work at a place where the software does not make money, or is treated as if it doesn't, you should consider leaving. You will be treated like a cleaner: it doesn't matter how well you can clean the office if it has zero impact on revenue; they just want the cheapest.

Everywhere else, your worth is the value you generated multiplied by the number of people it interacts with.

The closer or more direct the path from the work you do to the value generated, the more you will be valued at that company. This is why Sales can be valued so highly. And if you implement X to win contract Y worth Z, where Z is a very big number, your worth is quite clear, and it's obvious you should move on if you're still treated as a cost.


I think the author has discovered memory bandwidth. When you have a simple function and just scale the number of cores, it's easy to hit.
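A sketch of the kind of workload where this shows up: each thread sums a disjoint slice of one large array, so the per-element work is a single add and throughput is limited by how fast memory feeds the cores. Timing this at 1, 2, 4, 8 threads on an array much bigger than cache typically shows scaling flattening well before the core count runs out. Names are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <thread>
#include <vector>

// Sum a large array across `threads` worker threads, each handling a
// disjoint slice. Correct for any threads >= 1; the point is that wall-clock
// time stops improving once memory bandwidth is saturated.
uint64_t parallelSum(const std::vector<uint32_t>& data, unsigned threads) {
    std::vector<uint64_t> partial(threads, 0);
    std::vector<std::thread> pool;
    std::size_t chunk = data.size() / threads;
    for (unsigned t = 0; t < threads; ++t) {
        std::size_t lo = t * chunk;
        std::size_t hi = (t + 1 == threads) ? data.size() : lo + chunk;
        pool.emplace_back([&, t, lo, hi] {
            uint64_t s = 0;
            for (std::size_t i = lo; i < hi; ++i) s += data[i];  // ~1 add per load
            partial[t] = s;
        });
    }
    for (auto& th : pool) th.join();
    return std::accumulate(partial.begin(), partial.end(), uint64_t{0});
}
```

Wrapping calls like `parallelSum(data, n)` in a timer for increasing `n` is the experiment: the sum stays identical while the speedup curve bends over, which is the memory-bandwidth wall rather than a flaw in the threading.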


It is a common use case.

Every time you buy a computer, it is generally going to be "bleeding edge" to some degree, and it's unlikely to work properly for X months/years.

It is only because people keep their machines for several years that the "average" machine running Linux is several years old. Yet it seems like a rite of passage to blame the user for having the audacity to buy a laptop that isn't 10 years old, with X and Y but avoiding the Z wifi chipset.

