As a software engineer, I insist on giving developers high-end laptops. The reason is very simple: a lot of development environments are very heavy to run, and developers should not waste time on their development tools running slowly. I also don't want developers to disable tools that are meant to keep an eye on the quality of the code. High-end laptops generally serve well for development for up to 5 years.
Developing on high-end laptops should definitely not be an excuse to deliver slow software, and in the teams I work in, we do pay attention to performance. You are right though: a lot of software is a lot slower than it should be, and in my opinion the reason is often developers who lack fairly basic knowledge about data structures, algorithms, databases, latency, and so on. One could argue that time pressure on the project also plays a role, but I strongly believe that lack of knowledge plays a much bigger one.
Now, aside from that, also keep in mind that users (or the product owner) become more and more demanding about what software can and should do (deservedly or not). The more a piece of software must do, the more complex the code becomes and the more difficult it becomes to keep it in a good state.
Lastly, in my humble opinion, the lowest-range budget laptops are simply not worth buying, even for less demanding users. I think that most users on a low budget would be better off with a second-hand mid-range or high-end laptop for the same price. (I am talking here about laptops that people expect to run Windows on; I have no experience with Chromebooks.)
> users (or the product owner) become more and more demanding
I disagree. All my life, customers have been asking for as much as they can imagine. Customers wanted flying cars long before they wanted the latest iPhone.
The thing that changed is that we realised that if we write lower-quality software with more features (useful or not), customers buy it (because they are generally not competent to judge the quality, but they can count the features). So the goal now is to have more features.
> I think that most users on a low budget would be better off with a second-hand
Which is exactly the problem we are talking about: you are pushing for people to get newer hardware. You are just saying that poorer people should get the budget equivalent of newer hardware. But people on a budget would actually be better off if they could keep their hardware longer.
I think one should not use advanced language features just because, but I also think one should not avoid using advanced language features where it is useful.
Why would the code base be worse when advanced language features are used?
I don’t understand your comment: if you develop software as a team, it seems important to communicate and to know what the others in your team are working on?
Also, I really don’t see it as a ‘social’ meeting; to me it’s a focused technical meeting about the work that is going on.
Yes to communicating with each other and knowing what everyone is working on, no to doing it every morning. Once a week, every two weeks, or once a month is fine. Any essential communication that can't wait for the recurring meeting can be handled ad hoc as you go.
But people who have only done capital-A Agile and Scrum are so buried in the philosophy that they don't understand that there are far better ways to do things.
I'm not sure how that's supposed to work if most team members are burning through 14 tasks or 10 tasks or 5 tasks in a two-week period. If your tasks are two-week chunks, then you're doing something completely different.
Over a 40-year career I've done all kinds of methodologies. If you understand that there are far better ways to do things, I'm all ears.
The big advantages of agile/scrum methodologies, in my opinion: dramatically improved predictability. Total elimination of drama. Efficient management of expectations outside of the development group. Never having to do a death march ever again.
Given that the rest was reasonable, I was willing to grant that as "aims for high quality".
Because it wasn't about how good a programmer they are; it was about their intentions and principles. "I try" vs "I don't try" already makes a significant relative difference, regardless of what the absolute values are.
When I read about people using AI to write code, it always seems to me that it would be a lot faster and less hassle if they wrote the code themselves, without even considering the fact that it would give them more experience and make them better.
I have used some of those tools myself, and for the kind of code where I could use an AI tool's help, I receive junk again and again: code that looks plausible but does not compile, uses APIs or libraries that do not exist, and so on. In the end, it just made me waste time.
I find it useful when I'm working with some unfamiliar technology. It can get me up and running much faster, and even debug code to some extent.
I used it yesterday to fix a problem with my makefile. I couldn't see the issue, so I gave it to ChatGPT. It gave me some code that didn't actually work for some reason, but which had the correct solution to my problem in it. I just ported the solution over to mine, job done. This was after spending a couple of fruitless hours reading documentation and blog posts.
we had a guy on our team who was using ChatGPT aggressively in his awful code, to the point of putting his prompts in as comments
he did not last long...
this is the kind of developer you have to explain to (repeatedly) why things need to run on servers under service accounts rather than on his laptop under his personal ID
> Sounds kinda cursed, but running something like Debian sid sucks because there's not many other people doing that.
I am not sure that is correct. I think it is very common for (experienced) people who run Debian to use Debian Testing or Debian Unstable on the desktop and Debian Stable on servers.
Personally, I run Debian Testing with selected packages from Unstable. I rarely have problems with that setup.
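For anyone curious, a minimal sketch of that setup using standard APT pinning (the suite names are the stock Debian ones; priorities are a common choice, adjust to taste). In /etc/apt/sources.list, list both suites:

    deb http://deb.debian.org/debian testing main
    deb http://deb.debian.org/debian unstable main

and in /etc/apt/preferences.d/99pinning, prefer Testing by default:

    Package: *
    Pin: release a=testing
    Pin-Priority: 900

    Package: *
    Pin: release a=unstable
    Pin-Priority: 100

With that in place, `apt install -t unstable <package>` pulls a selected package from Unstable while everything else keeps following Testing.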
I really like working with snapshot isolation and rarely have any problems with it, but you do have to be aware of what guarantees it does and does not give.
In the example given in the article, there would not be a problem if the links on the removed node were reset (which is cleaner, imho), as an update conflict would then be triggered.
When using snapshot isolation, I sometimes implement ‘dummy’ updates (using a version field, for example) on ‘root’ objects to trigger update conflicts when concurrent updates to data linked to that same ‘root’ object must be prevented. (This is similar to implementing optimistic locking with a version field on a root object.)
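A rough sketch of that pattern in Python with psycopg2 against PostgreSQL (where REPEATABLE READ provides snapshot isolation); the orders/order_lines schema and the version column are made up for illustration:

    import psycopg2
    from psycopg2.errors import SerializationFailure
    from psycopg2.extensions import ISOLATION_LEVEL_REPEATABLE_READ

    conn = psycopg2.connect("dbname=app")  # assumed connection string
    conn.set_isolation_level(ISOLATION_LEVEL_REPEATABLE_READ)

    def add_order_line(order_id, sku, qty, retries=3):
        for _ in range(retries):
            try:
                with conn, conn.cursor() as cur:
                    # The 'dummy' update: bump a version counter on the
                    # root row. Two concurrent transactions doing this
                    # cannot both commit under snapshot isolation, so all
                    # changes to rows linked to this order are serialized
                    # through the root object.
                    cur.execute("UPDATE orders SET version = version + 1 "
                                "WHERE id = %s", (order_id,))
                    cur.execute("INSERT INTO order_lines (order_id, sku, qty) "
                                "VALUES (%s, %s, %s)", (order_id, sku, qty))
                return
            except SerializationFailure:
                # Another transaction updated the root concurrently;
                # retry the whole transaction.
                continue
        raise RuntimeError("add_order_line: too many conflicts, giving up")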
This is a lot like how CouchDB handles MVCC. There is a `_rev` field that represents a specific mutation of a document, old revisions stay available until compaction, and you will receive an error if you attempt to write to a document with a different revision than the one you read.
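For illustration, the conflict is visible directly in CouchDB's HTTP API; a sketch using Python's requests (database and document names are made up):

    import requests

    base = "http://localhost:5984/mydb"  # assumed local CouchDB instance

    # Read the document; the body includes its current _rev.
    doc = requests.get(f"{base}/some-doc").json()
    doc["count"] = doc.get("count", 0) + 1

    # The PUT carries the _rev we read. If someone else updated the
    # document in the meantime, CouchDB answers 409 Conflict, and we
    # have to re-read and retry rather than overwrite their change.
    resp = requests.put(f"{base}/some-doc", json=doc)
    if resp.status_code == 409:
        print("conflict: document was modified concurrently")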
Snapshot isolation, after all, is basically a method for implementing MVCC. I guess it's no big surprise that this isolation level is problematic for people who don't implement the other half.
Well… if you know the data is real, then knowing everything about that someone can seriously limit the number of people it could actually be, which makes that someone identifiable.
Like if you only know the full address, and you see that only one person lives at that address. Or you know the exact birthdate and the school that someone went to. Or the year of birth and the small shop that the person works for. And so on…
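To make that concrete, here is a toy sketch of checking how identifying such ‘harmless’ fields become in combination (a basic k-anonymity check; the columns and data are invented):

    import pandas as pd

    people = pd.DataFrame({
        "birth_year": [1984, 1984, 1990, 1990, 1990],
        "postcode":   ["1011", "1011", "2000", "2000", "2000"],
        "employer":   ["ShopA", "ShopB", "BigCorp", "BigCorp", "BigCorp"],
    })

    # Group on the quasi-identifiers and count group sizes: any group of
    # size 1 is a combination of 'anonymous' fields that points at
    # exactly one person.
    sizes = people.groupby(["birth_year", "postcode", "employer"]).size()
    print(sizes[sizes == 1])  # the re-identifiable combinations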
I still consider semver better. When it is used correctly, the version number gives a clear indication of what kind of changes to expect when upgrading. Obviously this is done to the best of the author's knowledge and might not always be 100% correct.
Either way, the amount of work an upgrade takes depends on which parts of the product you are using and whether those parts have changed in the new version. That is why most projects also keep a changelog, which gives you more detailed information; reading it is advised when preparing an upgrade.
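For reference, the convention in a nutshell (per semver.org):

    MAJOR.MINOR.PATCH
      PATCH  1.4.2 -> 1.4.3  backward-compatible bug fixes
      MINOR  1.4.2 -> 1.5.0  new functionality, backward compatible
      MAJOR  1.4.2 -> 2.0.0  incompatible changes; expect migration work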