The availability of tooling is heavily dependent on the runtime and language. With most of my work being in interpreted languages, it's just way easier to use either a REPL or print statements - getting good debugging to work involves having Just That Particular (often commercial) IDE, Just That Particular Version of the runtime (often outdated), etc. These things frequently break, and by the time you have gotten them working again you have spent so much time that using them over a REPL just isn't worth it. I never made the effort to master GUI-less debuggers like gdb, though.
That said, on one project I did have a semi-decent experience with a debugger for PHP (a couple of decades back), and when it worked - it was great. PHP didn't have much of a REPL then, though.
Absolutely true that not all runtimes and languages have the same level of tooling. But the state of tooling has dramatically improved and keeps improving.
I use PyCharm for my Python projects, for instance, and it has absolutely fantastic debugging facilities. I wouldn't want to use an IDE that lacked this ability, and my time and the projects are too valuable to go without. Similar debugging facilities exist for Lua, PHP, TypeScript/JavaScript, and on and on. Debuggers can cross processes and even machines. Debuggers can walk through your stored procedures or queries executing on massive database systems.
Several times in this thread, and in the submission, people have referenced Brian Kernighan's preference for print statements over debuggers. He said it in 1979 (when there was basically an absence of automated debugging facilities), and he repeated it in an interview in 1999. This is used as an appeal to authority, and I think it's just massively obsolete.
As someone who fought with debuggers in the year 2000: they were absolute dogshit. Resource limitations meant that using a debugger meant absolutely glacial runtimes and a high probability that everything would just crash into a heap dump. They were only usable for the tiniest toy projects and the simplest scenarios. As things got bigger it was back to printf("Here1111!").
That isn't the case anymore. My IDEs are awesomely comprehensive and capable. My machine has seemingly infinite processor headroom where even a 1000x slowdown in the runtime of something is entirely workable. And it has enough memory to effortlessly trace everything with ease. It's a new world, baby.
Interesting bit of lore: FLTK was developed as a GUI toolkit to power Nuke, which at the time was the in-house high-end VFX compositing tool at Digital Domain. Nuke was then sold to The Foundry and its UI rewritten in Qt, but the legacy (in the form of FLTK) continues.
I think I remember reading that there is some relation to (or inspiration from) an older X11 toolkit called xforms, which might also be of interest to someone.
I've been redoing some old projects in FLTK recently, so this sparked my interest. And I just like delving into the history of things.
Even better. Discreet (later acquired by ADSK) used to ship proprietary RAID arrays for use on SGIs with their high-end editing systems. They had this smart thing where frames of larger sizes would be placed on the outer tracks of the platter for faster reading/writing - one of their promises was "if you press play, it plays" - and it always did, uncompressed. What they did was use a custom filesystem ("stonefs") that you had to use: the disks would not be formatted with XFS but with that proprietary filesystem instead. Now here is the tasty bit: they would also provide the replacement hard drives themselves. Apparently, the "proprietarization" of a disk for such an array involved overwriting a few sectors of the disk with custom magic bytes so that the stonefs formatting software would accept it as "native". Obviously, when disks in arrays died and the array was too old to be supported, those disks would become unobtainable. I knew (via via) of a guy who could "flash" an off-the-shelf disk with the magic bytes so that it would be recognized as a proprietary stonefs disk.
Luckily they developed, in parallel, a system for frame-addressable storage on standard filesystems, and when they decided to step away from the "proprietary hardware" story all they had to do was enable it. Now it's just a directory structure.
I have observed the new European right-wing neo-liberal sunrise with great interest, and I think the "people entering the country", aka "asylum seekers and other pesky poor people I don't like" (where there still is social housing) or "IT professionals and bank boys and all those pesky rich hipsters I don't like" (for gentrified/gentrifying cities), are a much smaller part of the problem than it seems. It is easy to blame them because they are foreigners and thus don't vote, and it makes "externalising" the problem (which is multifaceted, complex, and absolutely the fault of the current generation of politicians/investors and the middle class) easy - it gives you a super simple "guilty party" you can smack at with a baton and feel good about it. Whereas you might be a part of the problem yourself (that lovely single-family home you have invested in so well could have been a 4-story housing unit for 4 families, but then it wouldn't have been your investment pot, would it?)
I think NIMBYism and the financialisation of housing in general are the actual problem here. If a mortgage or a development is a financial instrument to invest in, and you can expect those insane returns simply by being able to "afford" a home, you will naturally be skewed towards making homes more expensive and less available, as scarcity raises value. Where I am, the trope of the last year or two in politics is that "those pesky foreigners have gobbled up all the real estate", all the while construction is regulated up the wazoo and older people won't allow even an electricity cabinet to be built if they can so much as see it with binoculars...
It is in the interest of those who bought their houses early and for cheap (some - even before the Euro) and (some) paid off the mortgages. Since pension schemes are getting dismantled, and solidarity is decreasing - a lot of people tend to see "selling the big house" as their only plausible way to have savings that do not depreciate and do not get taxed. And they sure AF do not want housing to be more affordable, because their pretty single-family home will become more affordable too. Boo boo foreigners.
Naturally they would not want this investment to depreciate.
Mortgages not balanced with rents are certainly a problem, as are fiscal benefits to those taking mortgages (and the general neo-liberal thing with taxing investment way less than income). But the biggest issue IMO is the fact that housing is investment first, basic need second. This needs to change if folks in populated western countries want to live... somewhere.
> It is easy to blame them because they are foreigners and thus don't vote
I don't care if people vote. That doesn't inform my opinion on supply and demand.
> asylum seekers and other pesky poor people I don't like
If you're saying something that only attacks a person's imagined character, and not their argument, you shouldn't say it.
> If it is a financial instrument to invest into a mortgage or development and you can expect those insane returns simply by being able to "afford" a home
It's become a financial instrument because of the likelihood of future increase in demand vs supply.
* Multipart uploads cannot be performed from multiple machines using instance credentials (the principals will be different, and they don't have access to each other's multipart uploads). You need an actual IAM user if you want to assemble a multipart upload from multiple machines.
* LIST requests are not only slow, but also very expensive if done in large numbers. There are workarounds ("bucket inventory") but they are neither convenient nor cheap
* Bucket creation is not read-after-write consistent, because it uses DNS under the hood. So it is possible that you can't access a bucket right after creating it, or that you can't delete a bucket you just created until you have waited long enough for the changes to propagate. See https://github.com/julik/talks/blob/master/euruko-2019-no-su...
* You can create an object called "foo" and an object called "foo/bar". This will make the data in your bucket impossible to port into a filesystem structure (a file would clobber a directory).
* S3 is case-sensitive, meaning that you can create objects which will not port into a case-insensitive filesystem structure (Rails file storage assumed a case-sensitive storage system, which made it break badly on macOS - this was fixed by always using lowercase identifiers).
* Most S3 configurations will allow GETs but will not allow HEADs. Apparently this is their way of preventing probing for object existence, I am not sure. Either way, cache-honoring flows involving, say, a HEAD request to determine how large an object is will not work (with presigned URLs for sure!). You have to work around this by doing a GET with a very small Range: (say, the first byte only) - see the first sketch right after this list.
* If you do a lot of operations using pre-signed URLs, it is likely you can speed up the generation of these URLs by a factor of 10x-40x (see https://github.com/WeTransfer/wt_s3_signer)
* You still pay for storage of unfinished multipart uploads. If you are not careful and, say, these uploads can be initiated by users, you will be paying for storing them - there is a lifecycle setting for deleting unfinished multipart uploads automatically after some time (the second sketch after this list). Do enable it if you don't want to have a bad time.
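Two quick sketches for the HEAD and multipart-cleanup bullets above, in TypeScript - names and values are illustrative, not anything you must use. First, the ranged-GET workaround for finding an object's size when HEAD is not available:

```ts
// Sketch: determine object size via a ranged GET when HEAD is not allowed.
// `presignedUrl` is a placeholder for a presigned GET URL you already have.
async function objectSizeViaRangedGet(presignedUrl: string): Promise<number> {
  const res = await fetch(presignedUrl, { headers: { Range: "bytes=0-0" } });
  if (res.status !== 206) {
    throw new Error(`Expected 206 Partial Content, got ${res.status}`);
  }
  // Content-Range looks like "bytes 0-0/1048576"; the total size follows the slash.
  const contentRange = res.headers.get("content-range");
  if (!contentRange) throw new Error("No Content-Range header in response");
  return Number(contentRange.split("/")[1]);
}
```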
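And the lifecycle rule that auto-aborts unfinished multipart uploads - shown here via the AWS SDK v3 for JavaScript; the bucket name and the 7-day window are placeholders:

```ts
import {
  S3Client,
  PutBucketLifecycleConfigurationCommand,
} from "@aws-sdk/client-s3";

// Sketch: abort (and stop paying for) incomplete multipart uploads after 7 days.
async function enableMpuCleanup(bucket: string): Promise<void> {
  const s3 = new S3Client({});
  await s3.send(
    new PutBucketLifecycleConfigurationCommand({
      Bucket: bucket,
      LifecycleConfiguration: {
        Rules: [
          {
            ID: "abort-incomplete-multipart-uploads",
            Status: "Enabled",
            Filter: { Prefix: "" }, // apply to the whole bucket
            AbortIncompleteMultipartUpload: { DaysAfterInitiation: 7 },
          },
        ],
      },
    })
  );
}
```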
These are just off the top of my head :-) Paradoxically, S3 used to be revolutionary and still is, on multiple levels, a great product. But: plenty of features, plenty of caveats.
The one that caught me a couple of weeks ago is that multipart uploads have a minimum part size of 5 MiB (https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts...). I built a streaming CSV post-processing pipeline in Elixir that uses Stream.transform (https://hexdocs.pm/elixir/Stream.html#transform/3) to modify and inject columns. The Elixir AWS and CSV modules handle streaming data in, but the AWS module throws an error (from S3) if you stream "out" a total of less than 5 MiB, as it uses multipart uploads - which made me sad.
The last part can be any size, so with a few tweaks to the streaming code you should be fine. Ready-made AWS SDKs handle this chunking for you. Truth be told, multipart upload on GCP is even worse :/
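The tweak is essentially "buffer until you have at least 5 MiB, then flush as a part; the final flush can be any size". A sketch of that buffering in TypeScript - uploadPart stands in for whatever UploadPart call your SDK exposes:

```ts
const MIN_PART_SIZE = 5 * 1024 * 1024; // 5 MiB minimum for every part except the last

async function uploadInParts(
  chunks: AsyncIterable<Uint8Array>,
  uploadPart: (part: Uint8Array, partNumber: number) => Promise<void>
): Promise<void> {
  let pending: Uint8Array[] = [];
  let pendingBytes = 0;
  let partNumber = 1;

  const flush = async () => {
    if (pendingBytes === 0) return;
    // Concatenate the buffered chunks into one part and upload it.
    const part = new Uint8Array(pendingBytes);
    let offset = 0;
    for (const c of pending) {
      part.set(c, offset);
      offset += c.length;
    }
    await uploadPart(part, partNumber++);
    pending = [];
    pendingBytes = 0;
  };

  for await (const chunk of chunks) {
    pending.push(chunk);
    pendingBytes += chunk.length;
    if (pendingBytes >= MIN_PART_SIZE) await flush();
  }
  await flush(); // the last part is allowed to be smaller than 5 MiB
}
```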
First, whitespace makes things look _expensive_, more luxurious (look at all that material we surround our content with). This comes from print, and as screens have gotten bigger (and resolutions have gotten finer) this trend has entered screen design too.
Second is the perennial "grandma argument" - i.e. if your website or software or whatnot is not built in such a way that "a grandma could figure it out", it gets proclaimed "high barrier" and folks say that "nobody will ever take the time to learn how this works". This often results in bikeshedding over features which are absolutely clear to anyone who has ever used a computer and are genuinely useful - but if product design is ruled by a person driven purely by aesthetics, the features get killed. The issue, though, is that most software is useful exactly because it does not place a single button called "Do Thing Nao" in the middle of the screen, but actually tries to be a tool.
Replayable event streams (but be careful there: a complete drag interaction of multiple minutes may produce tens of thousands of events, so deduping/vector addition is needed). Also "interlocked" interactions (if one interaction is in progress, no other interaction may start), orchestrated global shortcut installation...
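To make the coalescing/interlocking part concrete, a minimal sketch (TypeScript; all names are illustrative, not anyone's actual API):

```ts
type Delta = { dx: number; dy: number };

class InteractionManager {
  private active: string | null = null;
  private pendingDelta: Delta = { dx: 0, dy: 0 };

  // "Interlocked": only one interaction may be in progress at a time.
  begin(name: string): boolean {
    if (this.active !== null) return false;
    this.active = name;
    return true;
  }

  end(name: string): void {
    if (this.active === name) this.active = null;
  }

  // Instead of storing/replaying tens of thousands of raw move events,
  // accumulate them (vector addition) and flush once per frame.
  accumulate(d: Delta): void {
    this.pendingDelta.dx += d.dx;
    this.pendingDelta.dy += d.dy;
  }

  flush(): Delta {
    const d = this.pendingDelta;
    this.pendingDelta = { dx: 0, dy: 0 };
    return d;
  }
}
```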
It is actually a fairly indicative story. At the start there is a brilliant individual (Evan) who sets up an entire toolchain + core of the product. They then move on (or get pushed out, or get bored), and with the team (and the product) now being much bigger, things get replatformed to a more familiar, widely used stack. The success of these steps heavily depends on how robust the eng culture is at the organisation. I suspect (no evidence though!) that Evan and other founders have set up an excellent eng culture at Figma and even if they make a mistake at some point, there is sufficient resilience in place to correct. All power to them!
It's ironic. For the past 5 years I've been writing strictly typed PHP. People love to shit on PHP, yet I found that when I started using strict types my code quality improved, the number of lines needed to produce a result decreased, and the number of unit tests necessary to produce the same result decreased as well.
Then a few months ago I decided to write a TS project from scratch. For the record, I have 18 years of JavaScript experience. What I found was that the biggest barrier to entry was configuring webpack to be "just right". Other devs on my team with a similar level of experience would have their eyes glaze over when webpack came up and get annoyed - for good reason. It took me several days to get it to work right. The fact that tsconfig has 20 options that can affect the transpiler, without good docs for them, is a problem. The fact that there need to be two tsconfigs in a React Native project that compiles down to web as a secondary build target is another problem. The fact that you need a very experienced dev to spend days configuring webpack is another problem. Finding information about the right configuration is like the blind leading the blind: most search results on the topic are riddled with half-truths and nonsense. Many devs rely on a pre-existing webpack config, and if they do anything to mess it up they are often completely unable to fix it.
Typescript is fine. I guess. The code produced is nicer. But having to rely on webpack is an issue.
I actually like TS, but I wish I didn't need to transpile anything or have to bang my head against a webpack config for days. It's by far the biggest barrier to entry, because while you're banging your head against it you're not writing code. And for non-technical stakeholders, when you have nothing visual to show at stand-ups, that can create friction and make the engineers seem like they aren't doing anything.
So far at multiple companies I've had to configure webpack for extremely complex JavaScript-based single-page apps, which took me literally months of messing around until it worked just right. And until it does work "just right", the non-technical folks think you're wasting time.
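For context, the "hello world" end of a TS-through-webpack setup looks something like the sketch below (assuming ts-loader; this is not any real project's config). Every real project I've touched grows far beyond it, and that growth is where the months go:

```ts
// webpack.config.ts - roughly the minimum for bundling a TypeScript single-page app.
import * as path from "path";
import type { Configuration } from "webpack";

const config: Configuration = {
  entry: "./src/index.tsx",
  module: {
    rules: [{ test: /\.tsx?$/, use: "ts-loader", exclude: /node_modules/ }],
  },
  resolve: { extensions: [".tsx", ".ts", ".js"] },
  output: { filename: "bundle.js", path: path.resolve(__dirname, "dist") },
};

export default config;
```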
> What I found was that the biggest barrier to entry was configuring webpack to be "just right".
As someone who has just spent a whole week trying to plumb Vite + Rollup into an ASP.NET web application, I can relate to this on many levels.
I can produce 90% of our functionality with vanilla javascript + a sprinkling of JQuery, but to get something 'modern' in Vue.JS fitting into the application comfortably is a bloody chore. Sparing the gory details, it feels like orchestrating a thousand moving parts while being blind with a gun named 'ship or die' held to your back.
For comparison, EF Core at least gives me logs. C# is a delight to debug. Print statements can tell me what I need. Those parts feel holistic.
Yet the web stuff is just so scattered, so much to configure, so many options where if you want to do something even slightly non-standard you are in the dark, mashing the conf files until it works and you aren't even sure why but you have to move on.
This feels different from mastering one language, even though it has a steep learning curve. I hit roadblocks in perl but they weren't as frustrating and it felt like everything was feeding back to a cohesive whole. With webdev, it doesn't feel like that at all. I don't know why, I wish it wasn't so.
Yep, you're walking in one of the voids that is largely ignored by modern front-end web dev. Frameworks like React and Vue advertise that you can easily add them to any page, and that's technically true... but when you have a real-world app built with a backend framework and you want to integrate it with React/Vue in a sane way... good luck to you!
All the pieces exist to make it work, but you won't find much documentation to help you. You'll have to rely on finding blog posts, but of course if a post is more than a year old, most of the libs or tools it talks about will have totally changed. Once you do get everything up and running, you'll often find that the dev experience is less than great.
That is really unfortunate. Webpack is a nightmare and outdated. I wonder how you came to use it? Node has a nice intro: https://nodejs.org/en/learn/getting-started/nodejs-with-type... (skip ts-node and go directly to tsx). Or Deno or Bun run your TS code directly. Modern frontend frameworks like Vue or Svelte have their own tooling, mostly based on Vite and Esbuild. I think it was just bad luck that you came across Webpack ...
They do fundamentally the same things but with very different approaches and tradeoffs.
Webpack and Vite are very different approaches to the same problem with different tradeoffs[0][1]
[0]: namely, webpack and its inevitable successor rspack are way more flexible and arguably more powerful, but at the cost of higher complexity and more proprietary features like the webpack/rspack-specific runtime. They are superior in asset handling in many respects, though, and the level of optimization you can reach once you hit a certain complexity threshold is greater than what Vite/Rollup currently offers without extensive custom plugins.
[1]: Vite or Rollup is most likely what most projects need. I’d recommend always starting there, as the advanced and flexible features of webpack/rspack are very much not what most projects need.
Yes, so on the only sizeable TS project I did (which was a library, to be used by other teams) I bypassed webpack entirely and went for a mixture of tsc and esbuild. But knowing to steer clear of webpack (or even - that you can!) is a barrier.
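Roughly, the split looked like the sketch below (paths and options are made up): esbuild does the bundling, and tsc runs separately just for type checking and declaration output.

```ts
// build.ts - bundle with esbuild; run `tsc --emitDeclarationOnly` (or --noEmit)
// as a separate step for types.
import { build } from "esbuild";

build({
  entryPoints: ["src/index.ts"],
  bundle: true,
  format: "esm",
  sourcemap: true,
  outfile: "dist/index.js",
}).catch(() => process.exit(1));
```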
I use TypeScript and esbuild for all my stuff. Even then, I often spend a crazy amount of time getting modules working.
Between the various TypeScript module options and various package.json module options (and various code patterns used), modules make JavaScript way more painful than it should be.
I think most of the JS language standards work of the past 10 years has been awesome, but modules were definitely rushed and poorly thought through, causing years of frustration.
I agree that config and tooling are the hardest part of getting TypeScript working. Everybody is saying use a framework, but if your use case deviates from the frameworks it can get pretty difficult. My use case, which was very tricky to configure, was:
- SSR rendering of React in an Express app (both TypeScript) - a minimal sketch of this is at the end of this comment.
- Trying to get the VSCode visual debugger to work for both the client and server code paths.
- Getting the various test libraries to work correctly (I still can't get the NYC code-coverage library to work).
- A mix of ESM, CommonJS, and misconfigured npm packages that don't expose their types correctly.
I ultimately used Vite, and got things working 90% the way I wanted and called it good enough.
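For what it's worth, the SSR core itself ends up being only a few lines once the tooling cooperates - the pain is in everything around it. A sketch (App stands in for your root component):

```ts
import express from "express";
import { createElement } from "react";
import { renderToString } from "react-dom/server";
import { App } from "./App"; // placeholder for your root component

const server = express();
server.get("*", (_req, res) => {
  // Render the React tree to a string on the server and ship it as HTML.
  const markup = renderToString(createElement(App));
  res.send(`<!doctype html><html><body><div id="root">${markup}</div></body></html>`);
});
server.listen(3000);
```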
Or just go straight to esbuild. I've found vite just makes things more complicated and slower. Particularly, the "smart reloading" breaks in subtle ways and turning every source file into a request doesn't scale well. This can probably be configured away somehow, but again, that just makes things more complicated.
Vue, Nuxt, or Svelte are even better. No need to waste time and energy on React and its peculiarities. However, if you want or must use React, then Remix, hands down.
While I very much like Svelte, it's not fully mature yet and it doesn't have a very deep ecosystem; plus the additional hiring time/cost basically means building anything other than a small-scale solo project with it is going to leave you in the red.
On the Vue vs React debate, honestly it comes down to preferring templates vs components. Vue is simpler but there are good reasons for a lot of the React complexity, and React still has a stronger ecosystem and more developers.
I'd make the exact opposite suggestion: always use webpack. At some point a package will come along that needs a particular webpack configuration, and you don't want to have to figure out how to replicate that in another bundler.
I'm always amazed to see the number of SaaS companies based over here in the more privacy-focused, non-Microsoft, open-source, EU-centric spaces that use PHP without a care in the world for strict typing.
It leads to scenarios where I receive OpenAPI specs that look like this:
type:
- string/integer
They just don’t give a shit because this kind of crap works in PHP.
They could use:
oneOf:
- type: string
- type: integer
Which is nastier to deal with in a typed-language client, but at least it conforms to the spec.
So thank you for actually caring about types in PHP.
Thank you. The replies to your post all seem to be variants of “you should’ve used X instead of Y”, but when you’re transpiling, you’re inviting a world of subtle bugs and edge cases. The added value, if it exists at all, is almost never worth the trouble, IMO.
As a solo dev with a successful electron app I can say that the 5+ year journey from babel+flow+webpack to typescript+webpack, between two targets (main node and renderer chromium) not to mention native modules, node ABI, dual package jsons, electron itself as a giant shifting foundation… has been one of the most intimidating challenges in my dev career and I’m coming out the other side much stronger and confident. Props to everyone involved.
This is the reason that JS frameworks are a thing. Next is buggy and overbuilt, but Remix is pretty much plug and play, I strongly recommend checking it out.
I'm surprised that Remix doesn't get much love in the community. Or is it because Vercel and their influencer team are yelling so loudly about Next that we can't hear the Remix people?
I feel like Remix is rising pretty fast. The death of create-react-app has pushed people towards frameworks, and Next (while loudly marketed by Vercel) feels overweight and underpolished for people who just want something that focuses on the most common use cases with minimal setup/fiddling - which is where Remix shines.
You want to do a lot but you don't want to pay for it. There is a shit ton of complexity on the web, and the current frameworks (i.e. Next.js/React/TypeScript) try to hide/manage this complexity, but that only goes so far.
As soon as you hit an edge outside of their matrix of management, you open the dark Pandora's box of front-end development.
I remember when I first read this blog post, https://www.figma.com/blog/how-we-built-the-figma-plugin-sys.... Besides now feeling a bit old realizing this was 5 years ago, I remember thinking what an amazing engineering culture they must have at Figma (besides having a bunch of brilliant people). I mean, they talked about essentially trying out a tech path for a month and then deciding that path was a dead end - I find this so rare in startups where there is a lot of pressure to continually demonstrate "progress".
As a corollary, though, I think those kinds of cultures are only possible if your team is composed of primarily brilliant people, because these brilliant people can move faster than most competitors even if they do wander down an unproductive path for a while, and there is total trust that the folks on your team are capable and self-motivated.