Don’t get it. Cheaper ways of doing software have always been available; that doesn’t mean they’re the best way. You get what you pay for, as usual. Anybody thinking that 2025 AI is some sort of magical “free lunch” device is just delusional.
Agreed, either you get bit by your choice early on and can’t ship, or you ship and get bit later when trying to maintain the puddle of guck you just shipped.
If you don’t know how to do it, you don’t know how to do it. Learn how to do it.
I’m in both groups A and B. I do programming for the sake of it at home. I read tons of technical books for the love of it. At work, though, I do whatever the company wants or whatever they allow me… I just do it for the money.
Hijacking the thread: also looking for some super cheap VPS for running Go servers (don’t need more than 256-512 MB). I know I have DigitalOcean for ~$5/month, but I’m looking for something more like $1 or €2 per month.
I run a number of Go servers with similar resource requirements on Cloud Run for free. I wanted them to be always on, so I set up an uptime check that keeps those instances alive around the clock without leaving the free tier.
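For anyone curious, a minimal sketch of that kind of setup, assuming the usual Cloud Run contract of listening on the port given in the PORT env var; the /healthz path is just an illustrative name for whatever endpoint the uptime check pings, not necessarily what the parent actually uses:

    // Minimal Go server suitable for Cloud Run: listens on $PORT and exposes
    // a /healthz endpoint that an uptime check can hit on a schedule. Each
    // ping counts as a request, so Cloud Run keeps an instance warm instead
    // of scaling to zero. (Names are illustrative, not the parent's setup.)
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os"
    )

    func main() {
        mux := http.NewServeMux()

        // Endpoint for the uptime check to ping.
        mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
            fmt.Fprintln(w, "ok")
        })

        // Your actual application routes go here.
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from Cloud Run")
        })

        // Cloud Run injects the port to listen on via the PORT env var.
        port := os.Getenv("PORT")
        if port == "" {
            port = "8080"
        }
        log.Printf("listening on :%s", port)
        log.Fatal(http.ListenAndServe(":"+port, mux))
    }

Whether the pings stay inside the free tier depends on your request volume and the check's frequency, so that part is worth verifying against current pricing.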
AI won’t become “normal technology” until the open source versions are more powerful than the closed ones. Just like Linux is the “best” kernel out there, and that doesn’t prevent other kernels from being proprietary (but that doesn’t matter, because they are not better than Linux).
Imagine for a moment what would happen if one company suddenly “bought” the Linux kernel and you suddenly had to pay per process you run on your machine. Awful.
Spreadsheets, for example, became normal technology long before we had a good open source one. And arguably we still don't have an open source one that's more powerful than the closed source ones.
I agree with you. I think OP’s point becomes more valid if you limit the discussion to tools used while developing/maintaining software, as opposed to tools used by a wider audience.
I don’t fully believe that either, but I see where the point would come from.
I know the difference between an OS and the kernel, but a lot of devices still don't run on the Linux kernel. Windows isn't Linux, macOS/iOS aren't Linux, PS5/Xbox/Nintendo don't run on Linux, and Xiaomi and Huawei are transitioning away from Linux.
I stand by my point that Linux isn't particularly dominant in the consumer space, even if we include Android, whose Linux-ness and open source pedigree are questionable.
> I stand by my point that Linux isn't particularly dominant in the consumer space
What if we add the Steam Deck? Chromebooks? Smart TVs, smartwatches, Amazon Echo, Google Home? GoPro and similar cameras? Maybe we should add some drones too. There are way more devices using Linux in the hands of consumers than all other OSes together.
I don't have any particular feelings for or against Linux, and even if I had, they would be irrelevant for the sake of this argument. I'm just saying that for something to objectively be the 'best' at something, it should make little sense to use anything but that thing, except in niche cases.
Which is why you could make a credible case for Linux being the 'best' server OS, but you couldn't make the case for it in other spaces (consumer, embedded etc.), because the alternatives are preferred by huge chunks of the market.
I like a good conspiracy, but based on what? JetBrains has no incentive to force that; they make money by providing flexible tools that people will pay for. And their IDEs are desktop apps, so you could always just... not upgrade. Unlike web or cloud-based "IDEs".
But fixing a bug requires time from your side (mainly doing the investigation) and from others (code reviews). So if the whole team is working on an “important” epic (that is, one with a deadline, like any other epic) and you come out of the blue with a bugfix unrelated to the epic without telling anyone, well, that’s weird, isn’t it? Your EM/PM will ask why you didn’t prioritise the epic’s tasks, and your colleagues could say that they can’t switch their focus or find time to review your fix (all the more so since it’s something the EM/PM hasn’t approved).
So unless you are overworking (e.g., you work on your Jira tasks AND on top of that you fix bugs), I don’t see it.
I would love to work on things that make sense, like stabilising the system and all, but I work on whatever sells or whatever the EM/PM wants. These days, unfortunately, shipping >>> fixing.
In my (albeit limited) experience, there's slack in the workweek, and that slack can provide the required time to do random stuff.
I recognize this isn't true in organizations where everything is micromanaged, work time is tracked in hours or even minutes, and autonomy doesn't exist.
The more the employees are treated like responsible professionals, the more this is possible. And conversely, the more they're treated like factory workers behind a conveyor belt, the less this is possible.
> In my (albeit limited) experience, there's slack in the workweek, and that slack can provide the required time to do random stuff.
How do you incentivise developers to put that slack to good use? In my experience, without an incentive, culture slowly rots to the point where the majority of developers simply don't.
It's extremely dysfunctional to micromanage devs to the extent that they can't take a bit of time to fix a bug without getting permission from someone. Unfortunately, a majority of companies in the industry are extremely dysfunctional.
This requires a lot of passion and motivation from individual developers within the company. Of all the things they could be slacking off with during a pointless video call, they have to choose to spend that time doing thankless bug fixing.
This is a good way to introduce regressions, particularly if you don't have the QA resources to do full regression testing each release and lack automated test coverage.
I don't say this to scold you, but I think most of us should keep in mind that even simple code changes incur risk and add testing requirements.
Why would you interview with a company far away if you aren't willing to travel and eventually relocate there?
Job hunting has become a game of shotgunning your resume while employers cast the widest net, and this has been hugely detrimental. Internships, junior positions, and onsite training are disappearing across the board. Everyone instead wastes time shopping around without any real evidence that this way improves outcomes.
One can easily rehearse answers that sound natural. You could start with a partially wrong answer, realize midway, and correct it. Easily fakeable. The “ums”, the “let me think for a second”s, and even failing to answer 10% of the questions on purpose are all easily doable.
If your resume is not a perfect fit, you don’t get an interview. So either it’s an almost perfect fit or you have no chance of getting the job. What’s wrong with that?
I don't conduct interviews in that manner. It is more important for me to know that I can trust your words and that you are aware of your limits, so you can learn any missing skills on the job.
For example, if you have 3 years of working experience and claim, "I know Docker, Kubernetes, AWS, GCP, Azure, Python, React, PostgreSQL, MySQL, and networking extensively," in 99% of cases, I can no longer trust anything you say.
As for the 1% hidden gem I might miss out on, I likely won't have the budget for them anyway.