- fools who can't do simple math or read the fine print ("I have no idea how much twenty $2-per-hour instances pumping data at 10 Mbit/s add up to in a month - and OMG, that data isn't free when I already pay for the instance?! Besides, they give me a whopping $100,000 credit - that will last an eternity!") - see the quick arithmetic after this list
- corporate tricksters ("if we don't invest in our own hardware and buy AWS instead, our next-quarter bottom line will look GREAT and I get a nice bonus, and by the time the truth comes out, I'll have jumped ship to the next shop where I'll pull the same trick")
- people with gaps in basic logic and a total lack of foresight ("I can't afford to buy all the hardware for my small pet startup, so I'll make do with just $200 a month in AWS - not realising that only works for as long as my startup is unsuccessful and has no users, and that once that's no longer the case, I'll be vendor-locked into AWS-based tech and petabytes of data at $0.05 per GB to download, locked up there, bleeding money for years").
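A quick back-of-the-envelope check of that first scenario (the prices below are illustrative assumptions, not any provider's actual rate card):

```ts
// Rough monthly cost of 20 instances at $2/hour, each pushing 10 Mbit/s.
// All prices here are assumptions for illustration only.
const instances = 20;
const hourlyRate = 2;            // $ per instance-hour (assumed)
const hoursPerMonth = 730;
const egressMbps = 10;           // sustained, per instance
const egressPricePerGB = 0.09;   // assumed egress price, $/GB

const computeCost = instances * hourlyRate * hoursPerMonth;                   // $29,200
const egressGB = instances * (egressMbps / 8) * 3600 * hoursPerMonth / 1000;  // ~65,700 GB
const egressCost = egressGB * egressPricePerGB;                               // ~$5,900

console.log({ computeCost, egressGB, egressCost });
```

At those assumed rates the bill is roughly $35,000 a month, so that "whopping" $100,000 credit is gone in under three months.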
They should be avoided at all costs except for development purposes, and if you don't know how to or can't afford to do something without clouds, you just don't know how to do it or can't afford it.
In Europe, none of my clients use clouds. They have dedicated setups with reputable providers that work a lot better than cloud-based ones and cost pennies. I'll also admit my custom software development biz doesn't really work with EU clients: I barely make a profit with them and they can be a real pain. Which probably suggests the level of education in Europe is a lot higher.
I don't have much experience with insurance companies - I only use them for mandatory things like health insurance, corporate liability insurance, and car insurance - all of which come so cheap here in Europe that frankly idgaf whether they are indeed scams. Plus, my insurer paid a nice amount to the other party when I got into a drunk-driving accident 13 years ago, and I've never had a problem with them.
As for banks, what's the problem with them? They can be a bit of a pain because of KYC, but otherwise, what's wrong with them?
You should start with a single monolithic application on a single server that you scale vertically as much as possible before even thinking of scaling horizontally. Most apps won’t ever need architecture more complex than this.
One thing to remember is that SOA solves two problems: one of organizational scalability, and another of product scalability (with the usual caveat of "if done well").
Monoliths and traditional databases can take a beating before you need something else. It's trickier for rapid-growth organizations where you're trying to take on many new members, but there are other solutions there too.
I'd also note that traditional web monoliths really consist of multiple services too (usually a reverse proxy + CDN, a web application, and a data store). There is plenty of business logic in each of these layers (which also explains the traditional split between Ops, Dev, and DBAs), and this is what lets the setup scale a long way.
I’m also of the opinion that most engineering/product orgs are extremely bloated and could move much faster with higher quality if they were cut 50-90%.
I like this - I've started to think more this way, but I'd almost always deploy three boxes instead of one. I like the flexibility of having something that can auto-failover should an AZ, instance, or disk die.
That being said, I've seen VMs with multiple years of uptime on various clouds, so ymmv.
Making proper use of the functional (stateless) paradigm in non-functional languages brings a bunch of other good practices along with it (testability, isolation, dependency inversion...) - see the small sketch below.
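A minimal sketch of what I mean (the example is mine): the impure version hides a dependency on global state, while the pure version takes everything it needs as input, so testing it requires no mocks and the clock dependency is effectively inverted.

```ts
// Impure: hidden dependency on the system clock, awkward to test.
function greetingImpure(): string {
  const hour = new Date().getHours();
  return hour < 12 ? "Good morning" : "Good afternoon";
}

// Pure: same logic, but the input is explicit and the output depends only on it.
function greetingPure(hour: number): string {
  return hour < 12 ? "Good morning" : "Good afternoon";
}

// Tests just pin the input instead of stubbing global state.
console.assert(greetingPure(9) === "Good morning");
console.assert(greetingPure(15) === "Good afternoon");
```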
Refactoring can always be done on a running (no-downtime) system at no extra cost (time or money) compared to a rewrite or a downtime-requiring approach.
You can always deliver user value, and "paying down technical debt" can and should be done as part of regular work (corollary of the above: at no extra cost).
We'll never do away with physical keyboards for inputting text (yet I only have one mechanical keyboard I don't even use regularly :).
"AI" is the dotcom bubble (notice how every big company HAS to get in on it, no matter how ridiculous their application is?)... Further, it will simply allow those who apply their power unto others to do so in an even more egregious or deeply-reaching way.
Advertising should be illegal.
Proprietary software is basically always a trap (if it's not harmful or coercive at first, it eventually will be, well after you're locked in).
The web has been ruined by turning it into an operating system (also see "advertising should be illegal"). 99% of the time I just want very lightly-styled text, and some images. I don't need (or want) animated, drop-shadowed buttons.
Graphical OS user experience was basically "solved" 30 years ago and there hasn't been much of anything novel since -- in fact, in terms of usability, most newer OSes are far worse to use than, say, Macintosh System 7 (assuming you like a GUI for your OS). The always-online forced updates of modern OSes exacerbate their crappiness -- constant change and thus cognitive load, disrespectfully changing how things work despite how much effort you spent familiarizing yourself with them.
HTML - and retained-mode GUIs and DOMs generally - is all you need. Anything more complex is over-engineering. JavaScript was, broadly speaking, a mistake. 90% of what we need computers to do is some I/O and putting text, colored rectangles, and JPEGs/WEBMs on a screen, and that shouldn't be that complicated.
A lot of good things about the way we wrote websites and native applications back in the early 2000s were babies that got thrown out with the bathwater. That's why we can't seem to do what we could do back then anymore--at least not without requiring 4x as many people, 3x as much time, and 20x more computing power.
(Maybe more than a few people on HN will agree with this, now that I think of it...)
User time is more valuable than programmer time. Read: programmers should operate as if CPU cycles, RAM, disk space, etc. are precious. Less = more.
Why? If a programmer builds something only for themselves, or a few of their peers, it really doesn't matter. Do as you like. But be aware that a one-off / prototype != final product.
The commonly held view is that programmers are a small % of the population, thus their skills are rare (valuable), thus if a programmer's time can be saved by wasting some (user) CPU cycles, RAM, etc. (scripting languages, I'm looking at you!), so be it. Optimize only if necessary.
BUT! Ideally, the programming is only done once. If the software is successful, it will be used by many users, again & again, over a long time.
The time / RAM / storage wasted over the many runs of such software (not to mention bugs), by many users, outweighs any saving in programmer time.
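To put rough, made-up numbers on that aggregate argument:

```ts
// Illustrative only: how a small per-run waste compounds across users and time.
const usersPerDay = 100_000;
const secondsWastedPerRun = 2;    // e.g. a sluggish startup
const runsPerUserPerDay = 3;

const secondsWastedPerYear = usersPerDay * secondsWastedPerRun * runsPerUserPerDay * 365;
console.log(secondsWastedPerYear / 3600, "user-hours lost per year"); // ~60,800 hours
```

Against that, a few extra programmer-weeks of optimization look cheap.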
In short: fine, kick out a prototype or something duct-taped from inefficient components.
But if it catches on: optimize / re-design / simplify / debug / verify the heck out of it, to the point where no CPU cycle or byte can be taken out without losing the core functionality.
The existing software landscape is too much duct tape: compute-expensive but never-used features, inefficient, RAM-gobbling, bug-ridden crap that should never have been released.
And the fact that the developer has a beefy machine doesn't mean users do.
I have always been of the opinion that no software should ever be released until the entire development team has spent at least a week personally running it on ten year old hardware. Nothing motivates a programmer to optimize their code more than to have to experience the same pain that users without the beefiest hardware have to endure.
iOS versions are obsolete within 2 years and Android within 5.
Generally, Android devs tend to have at least one Huawei, Xiaomi, or low end Samsung, because these break a lot and hold a good share of the non-American market.
However, the high-end phones have their own pain - notches, folds, edge screens. I've built apps that didn't function well because the edge screens meant buttons needed extra side padding so they wouldn't fall off the screen. These are also the devices used by investors and in demos, so high-end phones are often a higher priority than the low-end ones.
There are some problems that have nothing to do with device age. Samsung Gallery is one of the top image/file picker apps in the world, and there's weird behavior once it exceeds 2000 images or so. I ended up hacking together a file/image picker that was more optimized than Samsung's, and that's why you see many apps defaulting to their own internal file/image pickers.
We will eventually adopt Capability Based Security out of necessity. Until then you really can't trust computers. I think it's still at least a decade away.
WASM is as close as we've been since Multics. Genode is my backup plan, should someone manage to force POSIX file access in order to "improve" or "streamline" WASM.
TypeScript is worse than plain JavaScript. The type system is not sound and can't actually catch that many bugs; at the same time, it adds a lot of verbosity and time wasted satisfying the type checker.
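For anyone who hasn't run into the soundness point: here is one classic hole (my own minimal example) that the compiler accepts even under strict settings but that blows up at runtime, because TypeScript deliberately treats arrays as covariant.

```ts
interface Animal { name: string }
interface Dog extends Animal { bark(): void }

const dogs: Dog[] = [];
const animals: Animal[] = dogs;   // allowed: Dog[] is assignable to Animal[]
animals.push({ name: "cat" });    // fine for Animal[], but `dogs` now holds a non-Dog

dogs[0].bark();                   // type-checks, throws "bark is not a function" at runtime
```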
Mandatory code reviews on every merge are a net negative. Too many people waste time on nits and YAGNI "improvements". Actually improving the code in a structural way is too hard, and most reviewers won't spend the effort. It would be better to dedicate time and resources to code audits and improvement on a regular cadence, e.g. pre-release.
A/B testing is pure cargo-cult science; it has nowhere near the rigor to actually determine anything about human behavior (look at the replication crisis in real psychology, where they use 10x as much rigor!). You might as well use a magic eight-ball.
Heh. This thread is like an invitation to get downvoted.
I'll bite: Using const for all variables in JavaScript is moronic. It's a trend that should have been killed with prejudice in the crib. If you want type safety, use another language. Use let for variables and const for actual constants. Words have meanings. The const statement wasn't created so developers could litter their code with it to show how cool they are.
Imagine you're learning JavaScript at a young age: "Here are three ways to declare variables - var, let and const - but you should only use the most confusing one, which doesn't actually make verbal sense for general use, nor do what you'd think it would do based on its description. Use it anyways. Because reasons."
If the value of your "variable" doesn't change, it's a constant by definition, and a const declaration is appropriate. If it does change, using const will give you an error. Therefore, it's literally impossible to misuse const in a JavaScript program that runs without errors.
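To make the disagreement concrete (a small sketch of my own): const freezes the binding, not the value, which is exactly the behavior both sides are arguing about.

```ts
const user = { name: "Ada" };
user.name = "Grace";      // allowed: const does not make the object immutable
// user = { name: "X" };  // error: cannot reassign a const binding

let count = 0;
count += 1;               // let is for bindings you actually intend to reassign
```

So "constant" here means a constant reference, not a constant value - which is the naming gripe on one side, and the reassignment guarantee on the other.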
This is a highly bizarre take. Do you also object to text types being referred to as strings when they're not actually rope? Because if your objection is the misleading naming convention, then you have to be consistent in your argument.
Constants are a way to give magic numbers a meaningful name, and at the same time guarantee some minimal form of type safety by instructing the compiler/VM/Linter that they're not allowed to be mutated.
Also, I don't think I've seen any JavaScript tutorial teach anyone to use var for YEARS, it's really just let and const.
I'm a fan of let. When you use it, all possibilities are still alive at declaration time. It could stay null or hold an object. The sky's the limit. When we use const, it limits the variable to whatever I wished for at declaration time, never leaving me room to change my mind. I can't reuse and recycle; I always need something new. Pencils have erasers. No one likes liquid paper.
I'd almost always rather build something than use a 3rd party service.
I understand it, I run it, and I don't have to deal with a third party's changes, performance problems, or downtime. Also, fewer bugs related to data consistency.
Is this really an unpopular opinion? I would think most developers would rather develop a solution themselves than rely on a third party library or solution. The problem is speed and cost will never be in favor of a homegrown solution.
In some smaller startups I've been in, they seem allergic to building anything they can just buy and call via an api despite the problems that might cause - it's another level of tech debt imo.
What about running your own db instance in a cloud VM? Do you really want to do all the infra management? I've done it and it isn't fun--well, it can be when you're making soup-to-nuts IaC recipes, but I wouldn't say it was a good use of my dev/ops working hours.
I think it's not that hard - I have configs and I even build Postgres from source so I can get the same version in dev environments supporting Linux and MacOS.
In production I find it a bit easier to profile what's happening if the db is performing poorly, since I can log in to the box and use standard tools - is it CPU- or I/O-bound, etc.
The actual forms of computing devices have gotten worse and more boring over time. Everything is just a flat slab these days. There seems to be little interest in experimenting with the sculptural design of phones or laptops. A few decades ago, this wasn’t the case.
This is especially relevant with touch screens, which I wish weren’t so omnipresent. I really don’t want to use a touch screen in a car, for example…
In the mainstream market, absolutely, but the hardware keyboard niche has been absolutely thriving as of late. I just bought a chocopad keyboard to make a gift for a friend.
1. Personal names are not generic strings, any more than Dates are numbers or Money is a floating point value. Name, as in a person's given and family name, or whatever, should be a type in every standard library, with behaviors appropriate for the semantics of a Name, language, and country. Yes there are lots of conflicting international differences. We've managed to handle languages and calendars of significant complexity, we can do so sufficiently well for the things we use to identify ourselves.
2. "AI" is a marketing term for a kind of automation technology able to use pattern matching to reproduce plausible new versions of the data used to train the model's algorithms. It couches this automation as especially powerful or magical as a way to draw a smokescreen around the real problems and limitations of the technology.
I love that first point and I wish it was something that was applied even further to other pieces of data - integers and floats are intended for mathematical operations and so to make database row identifiers integers always seemed strange to me.
That's a symptom of what's known as Primitive Obsession. Back in the days when declaring new abstract data types was prohibitively expensive (in time and/or memory), it was a valid limitation. Today, there's rarely an excuse.
So taking a name and applying the reverse() operation on it yields a name? You can concatenate the names of two people together and that becomes a valid identifier for the two-person tuple?
Strings have behaviors that names don't, and names have domain-specific meaning that strings don't. An entire class of bugs can be eliminated if we stop treating people's names as sequences of characters enclosed in quotes.
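A sketch of what that could look like (the PersonName type below is my own illustration, not anything from a standard library): the type exposes only name-appropriate operations, so string operations like reverse() or concatenation simply aren't available.

```ts
// Hypothetical illustration: a name type instead of a bare string.
class PersonName {
  constructor(
    private readonly given: string,
    private readonly family?: string, // mononyms are valid in many cultures
  ) {}

  displayFull(): string {
    return this.family ? `${this.given} ${this.family}` : this.given;
  }

  sortKey(): string {
    // Naive Western-style key; a real implementation would be locale-aware.
    return this.family ? `${this.family}, ${this.given}` : this.given;
  }
}

const ada = new PersonName("Ada", "Lovelace");
// ada.reverse();        // does not compile: reversing a name is meaningless
// ada + somebodyElse;   // neither is concatenating two people into one identifier
```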
Quantum annealing is the only truly useful quantum computing architecture to date.
I further believe that someone will figure out an algorithm that runs on classical computers to match the speed of quantum annealing, thus the Quantum Annealing machines, while currently useful, have a limited shelf life.
Shor's algorithm, which runs on Quantum Computers, requires the use of many repeated cycles of small rotations of qubits in the complex plane of the Bloch sphere. These are analog operations that accumulate phase error. It's not possible to use error correction with these operations, as those techniques necessarily sacrifice information in the complex component plane, leaving the real component "corrected".
Software should not be free for the users. The expectation that profits must be made through advertising, paired with the entitlement of users who scoff at the idea of paying for a product is a recipe for disaster over the long term.
Users, wallets full from a well-paying, satisfying career job, will look at a piece of software that they get tremendous use out of, decide that not only is it worth it, but that they want to support the developers of the product, and gleefully open their wallets and bestow money on the company that made it. Lol no, just kidding. The first stop is to ask The Pirate Bay if it has a crack for it, then to check if there's a free open source version, and at no point are they really going to pay for, say, WinZip.
It's no wonder that free-but-with-ads is the predominant distribution method today.
I knew I harbored one, and apparently my belief that we'll always have general purpose computing is controversial! That is, unlocked bootloaders on general purpose computers; laptops and desktops, will continue to exist well into the future. I guess others here are more cynical about a dystopian future coming about than I. Turns out MacOS is just a few releases away from flipping the switch and only running blessed software. Any day now, we won't be able to run our own programs on our own computers because iphones already don't let you do that.
Days I learn things about other people are always welcome!
We think alike in many ways! I'd like to respond to some of your points.
> * Class Inheritance is an anti-pattern.
Just say no to classes. Period.
> * Python is an anti-pattern.
Yes.
> * The 80/20 of functional programming is use pure functions and acknowledge that everything is a list or an operation on a list.
Yes, though pure functions are way more important.
> * GraphQL is better than REST when you have multiple clients, codegen or any AI/RAG needs
While GraphQL and RPC are better in a lot of ways, I'd propose that none of these forms of API are always necessary, and that there's nothing wrong with simply having a single endpoint that receives a POST request with some arbitrary data that responds with arbitrary data. People speak as if you must at least use REST, but that is not true, and REST probably should be avoided unless it's somehow called for.
I think I just want to do more programming, even though I'm far from a good programmer. It wasn't an issue a few years ago when we didn't have a kid, but nowadays I have no time outside of work, so I'd like to get all my fun during work.
The future of gaming is streaming, and home PC gaming hardware will eventually go the way of DVD players. GeForce Now is succeeding where OnLive and Stadia failed (20+ million users now, apparently). Offloading rendering to the cloud means much improved thermals, graphics, and battery life -- especially for laptop users and Mac owners. Apple Silicon is cool, but it's not going to beat a 4080 for pure graphics performance. And that's just computers... GFN can also stream to tablets, phones, TVs, and anything else with a screen and an internet connection.
It makes more sense for Nvidia to put GPUs in data centers with good cooling, shared between multiple gamers and idle workloads (AI, etc.), instead of having them sit in expensive but unused home desktops most of the day
Nvidia is the only one who has a real shot at this because they're the only ones who can directly allocate GPUs to cloud gaming (unless what's left of AMD wants to get in on the action too). And they're the only ones that Steam specifically partners with for its Cloud Play beta: https://partner.steamgames.com/doc/features/cloudgaming
Stadia failed not because of the technology, but because Google mismanaged it and never understood PC gaming cultures. Nvidia and Steam do, and it's a much, much better product for it.
As a supporting point: right now people have decent GPUs, but as they get a taste of the streaming approach (on a laptop, for example), and with GPU prices increasing, a small subset might not want to spend a whole lot of money on one anymore.
And then the AAA studios will see that a hyper-paced style of game would be a bad experience and reduce revenue, and will be more likely to prefer styles that are comfortable with the latency.
This in turn will make the approach itself more viable (to say nothing of ongoing improvements in the area), and once the most popular games ensure they are playable on these platforms, you won't need to buy a 4XXX card.
People I know who are building a PC or buying a pre-built are looking into 3XXX and 4XXX cards at the moment (4XXX for the latter scenario) - not necessarily a 4090 or anything, really just the ones closer to the entry point - and honestly they aren't great value.
I don't like this situation, but the writing is kind of on the wall; for laptops I already see streaming becoming common.
There doesn't need to be fast and low latency on every last square inch of the Earth for it to be a successful product, it just needs to be ubiquitous enough to be worth it most of the time to enough people. The gamers I know aren't also avid travellers, so if there's gigabit at the primary and secondary and tertiary residence, that covers enough to be worth it to most.
Maybe not if you want to go pro, but amateur/casual FPS play is totally fine. Have you tried it recently?
Between GeForce Now's 120 Hz support, adaptive v-sync, and Nvidia Reflex, it's very playable IMHO. No, it's not as nice as having a 4080 below your desk, but compared to midrange or lower GPUs, the minor input/network latency is more than worth it in exchange for a smooth high FPS, higher viewing distance, and great DLSS.
I don't play competitively (as in ranked), but I do play a lot of shooters on it. It's so much better than it was on Stadia, for example.
I don't play competitively either, but it's pretty easy for me to detect a frame of lag in my mouse controls. (I had to return a non-gaming ergonomic mouse for this reason.) Maybe most people won't care, but it has a material impact on gameplay and immersion.
Anyway, my problem is with the assertion that streaming is "the future" of gaming. It might become a major segment of the market, but I don't expect that it will dominate.
Well, that's why it's an opinion that few agree with :)
IMO most people don't care about playing competitively anyway (as in ranked e-sports). PVP shooters are common even on shitty devices (like PUBG or Fortnite on phones)... and of course consoles. Having a leet PC gaming setup with a super high DPI mouse just isn't a concern of many -- or maybe most -- gamers. Good enough is good enough.
Granted, people have been saying "cloud gaming is the future!" for ~~the better part of~~ (edit: more than!) a decade now, since OnLive was first launched. Stadia was a very public failure, but now there are many (GFN, PSNow, XCloud, Luna, Boosteroid, Shadow, etc.) Among them, GeForce Now is the only one that has the synergy of also being the dominant GPU producer. I believe it won't be long before more players game on the cloud than on PCs.
But feel free to come back in 5-10 years and tell me how wrong I was, lol.
Some applications are compute-heavy, and BitGrid seems a nice architecture for that.
But some applications are not. Where can they get/put their data?
Btw... this screams for implementation on a reasonably sized FPGA (say, 100k+ LUTs) - big enough to have a go at more interesting uses than "hello world" type ones.
Any chance of doing that? A single cell in the grid looks simple enough, and there are plenty of cheap FPGA boards around.
The program RAM is the columns of the LUTs, 64 bits/cell.
The Data RAM is the latches of the data flowing through the LUTs, 4 bits/cell.
I've thought it likely that edge I/O would be best.
The debug tooling would ideally map everything as 2 chunks of RAM, program and state, thus allowing random I/O via a host CPU or DMA.
The major hurdle after that is software for compiling and routing, which I intend to build.
My emulator runs at a good clip: 32 x 32 cells at about 35 kHz, or 1024 x 1024 at about 16 Hz. I haven't multithreaded it yet, but I can; that should give about a 4x performance boost.
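For readers trying to picture it, here is a minimal sketch of one cell as I read the description above (4 one-bit inputs, 4 latched one-bit outputs, each output driven by its own 16-entry LUT, i.e. 4 x 16 = 64 bits of program and 4 bits of state per cell); the names and layout are my own, not the actual emulator's.

```ts
type LUT = number; // 16-bit truth table: one output bit per 4-input combination

interface Cell {
  luts: [LUT, LUT, LUT, LUT];              // "program RAM": 64 bits per cell
  latch: [number, number, number, number]; // "data RAM": 4 latched bits per cell
}

function stepCell(cell: Cell, inputs: [number, number, number, number]): void {
  // Pack the 4 input bits into an index 0..15, then look up each output.
  const index = inputs[0] | (inputs[1] << 1) | (inputs[2] << 2) | (inputs[3] << 3);
  for (let i = 0; i < 4; i++) {
    cell.latch[i] = (cell.luts[i] >> index) & 1;
  }
}
```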
Maybe an emulator library for the Arduino would be a good idea?
The Google chip shuttles should be a good way to get it implemented for real.
This is some sort of cellular automaton, nothing crackpot about that; perhaps an FPGA demo would demonstrate the claim that it's a superior architecture. There are many exotic architectures, like transport-triggered machines, computational RAM, and async CPUs, but they're not easily implemented in hardware, and the claims of these exotics outperforming conventional CPUs didn't materialize.
something being powered by "AI" is actually not that strong of a selling point to most normal people. I've seen several ads that say "try <thing>, now powered with AI". To me, most people don't want to use something just because it's powered by AI. It's almost like bragging about automated phone systems: "Call our customer service, now powered by automated machines!".
... and listen to the recording "your call is important to us" for 30+ minutes before being connected to some harried, underpaid call-center worker in a low-wage country.
I'd say this is mostly due to the fact that building software in many companies is largely a social activity, and there's an interesting intersection of personalities and motivation that ultimately disincentivizes building good quality software
We could have kept social rules outside of the CPU and made sure the machines were unintelligible.
We should have kept the machines blind and retained social control between people and families.
Once we allowed machine code to be intelligible, we poured control of services, governments and speech into machines.
Truly free speech in a CPU became synonymous with the concept of a virus, and real free speech has followed: programmed rules on an electronic device are socially silencing people.
The concept of a virus outlawed one of the best uses computers could have provided: maintaining speech globally.
Now we are racing to the bottom, to put the rules of society into a machine. Then lock society out of the machine, with a few holding all the keys to the computing kingdom.
Slavery to machines, as enjoyable as it is, will eventually become nonsensical.
As power becomes centralized, all the new moves will be made by those outside the system.
So I think that CPUs should be running unintelligible code, and LLMs are a baby step towards that.
I see the consistency in your thoughts. I disagree as to the root cause, and thus the course of action required to correct the situation.
You can take $5 out of a wallet, and buy something, and not risk the rest of your cash, or your other assets.
You can't do that with a computer. You can't tell a program to only run a program for X purpose, or X seconds, etc... because the Operating System has a gaping hole in its design. All of them share that hole.
Thus we can't just run our own web sites; nobody wants to risk their unsafe computers visiting them.
Thus we're forced into walled gardens.
Thus we lose the war for general purpose computing, and democracy.
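A toy sketch of the capability idea referenced here (my own types, not a real API): rather than code running with ambient authority over the whole machine, a caller hands it only the narrow authority it needs, the way you can hand over $5 without exposing the rest of your wallet. Real enforcement needs the runtime or OS to withhold ambient authority entirely (the point of WASM or Genode mentioned earlier); in-process, this is only a discipline.

```ts
import { readFile } from "node:fs/promises";

// A capability to read exactly one file -- nothing else is reachable through it.
interface ReadCap {
  read(): Promise<string>;
}

function readCapFor(path: string): ReadCap {
  return { read: () => readFile(path, "utf8") };
}

// Untrusted code receives only the capability; it can't name other files,
// open sockets, or touch anything it wasn't explicitly handed.
async function wordCount(doc: ReadCap): Promise<number> {
  const text = await doc.read();
  return text.split(/\s+/).filter(Boolean).length;
}

// Usage: the caller decides exactly what authority to delegate.
wordCount(readCapFor("./notes.txt")).then(n => console.log(n, "words"));
```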
There are two types of VR: home VR and commercial VR.
Home VR won't last long and is definitely more fad than anything. Commercial VR is actually a pretty big niche for things you might not even have thought of. A company I worked for turned VR into a "movie experience" that went beyond just playing a VR game: floor rumblers, fans, physical-to-VR mappings, scent sprayers - truly a wild experience.
Beyond that entertainment value, though, are possibilities like military and emergency-aid training. For example, you could have a large warehouse with tracker cams all over, set up various physical objects to match the VR world, then use that VR world to run training exercises on a "live" battlefield. While it's obviously far from the real thing, it's a good way to train without pulling out all the tools.
And let's face it, we all want a giant virtual room to do anything we want in... a holodeck...
It could solve the fantasy girlfriend problem. In today's age you are stuck buying expensive gifts off an OnlyFans page, waiting for the next status update. VR could take over this market.
VR isn't going to steal the video game market or the business conference. If VR were as open-ended as Second Life, I think people would find cool ways to interact. Fan expos, social clubs, esports, concerts, and sex are natural fits.
Technology must serve humanity, not the other way around.
If a new technology threatens to eliminate hundreds of thousands of jobs, and the benefits are marginal or largely in favor of the capital class, then we should probably not pursue that technology.
That's true, though most technology that eliminates jobs is a net benefit to humanity as a whole. Where would we be if stage coach drivers and telephone switchboard operators were assured permanent employment in their field?
1. The less code I depend on written by others, the fewer maintenance problems I have.
2. Most developers 10x over-complicate solutions and spend 20x too much time implementing them.
3. Don't ever deprecate APIs! An API is a contract that potentially thousands of systems or developers rely on. So find ways to move forward without breaking things. You are a pro, right? So act like one. Linus gets it. Most don't.
I see a lot of untapped potential in using a domain-specific language to drive a framework, instead of using the general-purpose language that doesn’t know anything about what the framework has to offer. In general we’re too eager to throw boilerplate at problems, and too reluctant about tools that require learning something extra.
Concurrency via async function types is dumb. Either make and compose futures/promises with normal language features, OR support auto-yielding coroutines/goroutines/fibers (lightweight threads).
On the contrary, it's just too smart for me! With async function types, I just can't wrap my head around what's actually happening and if it's happening the way I want it to. I'd much rather deal directly with explicit coroutines.
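A minimal illustration of the function-colouring problem both comments are circling (example names are mine):

```ts
async function fetchUser(id: string): Promise<string> {
  return `user-${id}`; // stand-in for real I/O
}

// Option 1: the caller is forced to become async itself...
async function greet(id: string): Promise<string> {
  const user = await fetchUser(id);
  return `hello, ${user}`;
}

// Option 2: ...or it composes promises explicitly with ordinary language features.
function greetThen(id: string): Promise<string> {
  return fetchUser(id).then(user => `hello, ${user}`);
}

// With goroutine/fiber-style lightweight threads (Go, Java virtual threads, etc.),
// fetchUser would just be an ordinary blocking call and greet an ordinary function.
```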
I’d say this is more of an argument against camelCase (which does indeed suck in terms of legibility, especially compared to the much_more_readable_snake_case).
That font in particular yes, but proportional fonts are faster to read.
I think anyone can notice it if they try. BTW, there is a study suggesting we start reading by noticing the shape of the word envelope. If that's the case, that could be one reason.
Uppercase letters have left padding, which is 1/3 of a whitespace char. I got used to it within a few minutes.
That padding is for helping with camelCase, but another opinion is that snake_case is better than camel. Better yet, some languages allow them in identifiers.
Late to the party, but my big one of late... job titles.
- Job titles in tech are out of control. Searching for a job requires me to use something like 3-7 different searches because everyone is making up titles as they go, and it's just insane. We call ourselves engineers, and it's time we take another page from the book of engineering for job titles. Engineering titles are very pointed, and thus job function is limited by the title. Example: Cloud Infrastructure Engineer [I] would be a generalist cloud role at the jr/entry level, with eventual knowledge of all the clouds and their services as you reach [III] status (or [V], depending on company size). Their only function is infrastructure for cloud services. Today, such a title would also require devops knowledge, CI/CD, various services for monitoring and logging, Kubernetes, etc. These should obviously be jobs for multiple people, but we just keep letting things get stacked onto a single title. I've even seen ridiculous titles that are very obviously three jobs in one, especially when you look at the responsibilities/qualifications. We really need to get a grasp on this; likely nothing will get fixed without the entire unionization of people in tech, unfortunately.
- React, Next, etc. are steaming piles of code. I was going to build a React frontend for a project that was size-constrained. The out-of-the-box build, adding only React and some bare-minimum packages, resulted in well over 4 MB of JS, AND I DIDN'T EVEN HAVE A WORKING ANYTHING YET. Utterly disgusting. Add to that that it seems any dev who does front/backend work worships their framework of choice to the point of using it in places it has absolutely no reason to be. I've seen these developers trying to write shell scripts in JS/TS, using React. The number of things I've seen written in JS that had no right to be is way too high.