
This blog post[1] may interest you. As you suggested, the workflow seems to be:

1. Try various techniques that might trick the firewalls on both ends to let the connection through. This requires a relay for the initial negotiation only.

2. If (1) fails, then use a relay for everything.

[1]: https://tailscale.com/blog/how-nat-traversal-works/
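
In very rough Python, the client side of that flow might look something like this. The rendezvous protocol and relay here are hypothetical stand-ins (not anything Tailscale actually ships); it's just a sketch of "try to punch a hole, else relay":

    import socket

    def connect_p2p(rendezvous_addr, peer_id, relay_addr, timeout_s=5.0):
        # Hypothetical rendezvous/relay protocol, for illustration only
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout_s)

        # Step 1: use the relay/rendezvous for negotiation only; ask it for the
        # peer's public ip:port as seen from outside both NATs
        sock.sendto(b"WHOIS " + peer_id.encode(), rendezvous_addr)
        data, _ = sock.recvfrom(1024)
        host, port = data.decode().split(":")
        peer_addr = (host, int(port))

        # Both sides now fire packets at each other; the outbound traffic opens
        # a mapping in each NAT/firewall that the other side's packets can reuse
        try:
            for _ in range(10):
                sock.sendto(b"PUNCH", peer_addr)
                msg, addr = sock.recvfrom(1024)
                if addr == peer_addr:
                    return sock, peer_addr  # direct path established
        except socket.timeout:
            pass

        # Step 2: hole punching failed, so fall back to relaying everything
        return sock, relay_addr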


>you can get a sport pilots license in as little as 20 hours of training

This is not enough training to operate safely, in my opinion. Even in VFR conditions, I think it would be negligent for an instructor to turn a 20-hour pilot loose without further oversight. Even if a student pilot has flown solo by that point in their training, their instructor still needs to sign off on all cross-country flights. Entering Class B with 20 hours is out of the question in my view.

I also disagree that the Sport license meaningfully decreases the cost of general aviation. The largest expenses are fuel and aircraft rental fees or maintenance, neither of which are mitigated in the long term by a shorter training period.

It sounds like you are probably more experienced than I am, so maybe I'm wrong.


I think this means that a previous iteration of this effort had a value of £249,000. The article is mainly about the current iteration, which has some other total value that is presumably larger. It isn't clear how much the current iteration is worth, at least from the article alone.

In general I agree that this might not be significant, unless the total value of the current contract is large or there are notable research or engineering results.


Specifically, this new stage of the project has £2.9 million of new funding, according to the linked press release. It still feels like pocket change; I wonder how much they can really do with that little.


I use the Redirector[0] browser extension to do this, and it works great.

[0]: https://einaregilsson.com/redirector/


Good lookin' out, thanks

Edit: Here's the config I use to redirect twitter -> nitter

    Redirect: https://twitter.com/*
    to: https://nitter.net/$1
    Hint: Any word after twitter.com leads to nitter
    Example: https://twitter.com/some-username → https://nitter.net/some-username
    Applies to: Main window (address bar), IFrames

That got me wondering what other frontends there are for popular sites. I don't use reddit, but I know a lot of folks do, so this should work for a thing I just found called 'teddit', which is inspired by Nitter:

    Redirect: https://reddit.com/*
    to: https://teddit.net/$1
    Hint: Any word after reddit.com leads to teddit
    Example: https://reddit.com/u/some-username → https://teddit.net/u/some-username
    Applies to: Main window (address bar), IFrames


Here's a great compilation of privacy-preserving, JS-free front-ends for many popular sites like Twitter, Reddit, Youtube, and more: https://github.com/mendel5/alternative-front-ends


Thanks!


This seems to imply that there isn't any high-resolution precipitation data available that could provide these "minute-by-minute" forecasts, but that isn't true. The National Weather Service provides several radar products that give data with resolutions in the range of 500 m using their NEXRAD technology[0]. This allows for some pretty good estimates of when precipitation will start and end over the next hour or so. This kind of forecast product is called a precipitation nowcast. Other nations have similar systems.

If you use the NOAA desktop tool[1] to view the data from NEXRAD stations, you can compare to services like DarkSky and see that they are very likely using it without much editing.

The simplest nowcasts use optical flow techniques rather than meteorological modelling. On short time scales (less than an hour), these methods can give passable results. I built a tool[2] that pulls this NWS data from their Web server and gives you a nowcast.

[0]: https://en.wikipedia.org/wiki/Nexrad#Super_resolution

[1]: https://www.ncdc.noaa.gov/wct/

[2]: https://github.com/bmgxyz/threecast
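
If anyone wants to see what the "simplest nowcasts" look like, here's a minimal sketch of the optical flow idea (not the actual threecast code). It assumes you already have two consecutive radar reflectivity frames as 8-bit grayscale NumPy arrays and uses OpenCV's Farneback flow:

    import cv2
    import numpy as np

    def nowcast(frame_prev, frame_curr, steps=6):
        # Dense optical flow between the two most recent reflectivity images
        # (args after None: pyr_scale, levels, winsize, iterations, poly_n,
        # poly_sigma, flags)
        flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = frame_curr.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Advect the latest frame along the motion field, once per time step
        map_x = (grid_x - flow[..., 0]).astype(np.float32)
        map_y = (grid_y - flow[..., 1]).astype(np.float32)
        frames = []
        frame = frame_curr.astype(np.float32)
        for _ in range(steps):
            frame = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
            frames.append(frame)
        return frames

Each element of the returned list is the current precipitation field pushed one more step along its recent motion, which is roughly all these short-horizon forecasts do.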


Moreover, I've used minute-by-minute forecasts myself, and (at least where I lived at the time) they were quite accurate. My use case was "where is a 30 minute gap when I can walk home without getting soaked", and I never once got soaked. So much of the OP falls into the category of things-I-know-to-be-false-from-experience.

The article really has a strong vibe of "algorithms are faulty, we need humans in the loop to make sure they're behaving well!", with a hidden assumption of "humans are less faulty than algorithms". That's an empirical assertion to be determined on a domain-by-domain basis. It's certainly true that having a human in the loop leads to worse outcomes in chess (unless the human has enough modesty to just not do anything). The same is increasingly true of other domains as well.

Perhaps someday, incorrect, largely content-free FUD articles about how algorithms suck will themselves be written entirely by algorithms.

This is pushing way too many of my buttons, so I'll just close by pointing out (on the other side of the apps/humans scale) that a substantial fraction of the time, when I check the weather on NWS, it says something like "Today's high: 56; current temperature: 58". I certainly hope that a human in the loop would fix that problem.


> It's certainly true that having a human in the loop leads to worse outcomes in chess (unless the human has enough modesty to just not do anything).

No, this is actually totally false. There is a world championship in computer-aided correspondence chess [1], and you won't get anywhere near the top ranks by having "enough modesty to just not do anything."

[1] https://www.iccf.com/


Does Deep Blue participate?

If not, your assertion is false.


The best chess-playing program is Stockfish, and it could give Deep Blue and any human a handicap and still win easily.

The state of the art is much better now than it was then.


Deep Blue is 25 years out of date and far, far weaker than modern chess engines.


I think that strengthens the point, don't you? Deep blue could beat humans a long time ago, and it's still the case that computers don't need humans to play chess, and play it better than humans do.


> Deep blue could beat humans a long time ago

Yes

> and it's still the case that computers don't need humans to play chess,

Sure, I guess

> and play it better than humans do.

In the sense that they can beat humans in a 1v1, yes.

But none of that is relevant to the original claim, which is that a human in the loop makes a computer play worse--i.e., that human+computer is worse than computer alone. This claim is false, as the ICCF championship demonstrates each year.


I can’t comment on America, but when I lived in Amsterdam everyone I knew used an app called Buienradar for this exact use case. The accuracy was astonishing.


Any chance of this working in Canada? Even just right across the border?


I hadn't tested it before, but yes, it looks like there's coverage for some major Canadian cities near the border, including Montreal, Ottawa, Toronto, and Vancouver. You can check your locations of interest on this map[0] (requires JS). Just click "Maximum Radar Ranges" on the left and see if you fall within the coverage circles.

As I write this, there's some light precipitation just north of Toronto, and threecast seems to give reasonable output there. Note that the radar coverage areas aren't always perfect circles because terrain can block the radar beam.

Canada may have a similar system with better coverage, but I'm not sure.

[0]: https://www.ncei.noaa.gov/maps/radar/


I tried using this a year or two ago—seemed like a really cool idea, and I still think it is—but I couldn't find an actually useful application for it in my life.

Not saying there aren't plenty of neat applications out there, but once the initial "that's cool" feeling wore off, I wasn't sure how to actually get some value out of the system. For me, it felt like a time sink.

Curious to know if others have found use cases that are low-maintenance and high-value.


I set up Huginn about two years ago primarily just because it looked interesting. I then sat there for a while wondering what to do with it. I set up the basic things like a morning weather alert and a daily digest email, and basically ended up with the same feeling. I decided to leave it for a couple of weeks or so and just see if I came up with anything else more useful. That was definitely the right way for me to get going with Huginn. It's ultimately a bit of a multi-tool: if you keep it around, you're bound to reach for it, although you may not need it the second you buy it.

Since setting it up, I've used it to:

* Monitor covid travel restriction changes on the UK gov site and email me when updates are made.

* Scrape the site of a housing development where I wanted to view a house early as a potential purchase, so I'd be the first to know when they opened viewings.

* Look for a price drop on some computer monitors on Amazon as the price was quite high when I found them.

* Send a daily Discord report on the price of flights on a particular date, when a group of friends and I were thinking about a holiday together.

* Alert me when an exchange rate dropped or rose by a certain amount, to help figure out when to exchange money for the same trip.

I find most of my uses involve some kind of web scraping, although not always, and they're usually things I want to run frequently and reliably. Most don't live more than a few months, but it's a great tool to have knocking around, since the effort to spin up an automation is much lower than writing a custom script each time.

I actually have n8n.io self-hosted as well, because it has some nice pre-bundled integrations with various services like Spotify which Huginn doesn't. That said, I've not really enjoyed using it: I find myself hitting F5 a lot because the UI breaks and won't recover, and while it's more visual, it's actually much harder to view the logs of a workflow. I tend to default to Huginn unless n8n has a particular integration that will make the task faster to set up. Once set up, though, I've not had any problems with it.


I agree, I've never quite been able to figure out anything useful over the alternatives. Take the usual "send me an SMS when the weather will be rainy" example: I use a highly customizable weather app that already gives me the power to do stuff like that without having to fiddle with a new system.

This system's integration with Mechanical Turk could be interesting, but I'm too dumb right now to figure out some cool things I could do with it.


I have “frenemies” who are semi popular. I want to keep alerts on their LinkedIn and Twitter. Similarly, for startups, I want to know if they’ve stagnated or gotten funding. I would like to get earnings-call emails for some companies I like. I want to get a ping if the forex rate changes for some combos, or if the gold price drops by some percent. My family and I have many bank accounts, and it’s a real thing here that you need to keep an eye on the finances of your banks (one goes down every few years). I would like to have news alerts on all the small towns my family is in, so I know if something big happened anywhere.

I think it’s definitely a mindset: think about everything you do or read every day and ask whether you can offload it to someone or something else. Once you switch that knob on, you notice so much every day.


> I have “frenemies” who are semi popular. I want to keep alerts on their LinkedIn and Twitter.

That sounds so far away from how I deal with "frenemies" (I just ignore them) that I have to ask: What do you get out of reading what they are up to on Twitter and LinkedIn?


This sounds fun. Rust is amazing, and I've been playing with it on AVR chips lately. And I spent several summers sailing years ago. Email me if you'd like. Curious to know what you're working on.


I wonder how difficult it would be to implement a language for specifying Factorio factories without worrying about physical layouts. This post provides a lot of good insights into how that could be done. Once a factory is specified, a compiler could turn the high-level specification into a blueprint or a valid save file to be loaded up in the game.

Of course, the compiler would have no way to lay out the factory with respect to available resource deposits. Maybe that would have to be part of the specification as well? Or maybe the programmer (player?) could pass an empty map file as a compiler argument.

Also, this doesn't give any consideration to the tech tree or enemies within the game. I guess those could be compiler flags, as long as we're dreaming.

Look, all I'm saying is that it would be really cool to be able to check factories into version control.
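
As a sketch of the easy half of such a compiler (the ratio math, ignoring layout, furnaces vs. assemblers, belts, and the tech tree entirely), with made-up recipe numbers rather than real game data:

    from math import ceil

    # Toy recipe "spec": craft time in seconds, items per craft, and inputs.
    # These numbers are illustrative, not pulled from the actual game files.
    RECIPES = {
        "iron-gear-wheel": {"time": 0.5, "yield": 1, "inputs": {"iron-plate": 2}},
        "iron-plate":      {"time": 3.2, "yield": 1, "inputs": {}},
    }

    def machines_needed(item, rate_per_sec, crafting_speed, demand=None):
        # Walk the recipe tree and total up how many machines each item needs
        if demand is None:
            demand = {}
        recipe = RECIPES[item]
        crafts_per_sec = rate_per_sec / recipe["yield"]
        demand[item] = demand.get(item, 0) + crafts_per_sec * recipe["time"] / crafting_speed
        for ingredient, count in recipe["inputs"].items():
            machines_needed(ingredient, crafts_per_sec * count, crafting_speed, demand)
        return demand

    # e.g. two gears per second at crafting speed 0.75:
    # {k: ceil(v) for k, v in machines_needed("iron-gear-wheel", 2, 0.75).items()}

The hard part, of course, is turning those counts into an actual layout.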


I actually made this, a factory "compiler" that takes in a target output (eg. 1000 science / minute), and lays out a blueprint working from raw inputs to the desired output. It's hacky and buggy (and by this point probably out of date) but you can see it here:

https://github.com/ekimekim/factoriocalc/tree/generator

The factory it generates is a main bus design, with discrete "steps" on the bus that take certain inputs, process them in a standardized area, then put the output onto the bus. It also stops every so often for "compaction" where it reduces several low-throughput belts into fewer high-throughput belts. The whole area is covered by roboports to enable automated construction (no logistic robots are used, production is entirely done by belt) along with power lines etc.

The full layout looks like this:

    ||bus||
    |||||||  ...beacons...
    vvvvvv\-> input to process
    ||||||
    ||||||/-- output from process
    |||||||  ...beacons...
in a repeating pattern, each process step sandwiched between two lines of beacons.

It is very much not optimised; the idea was to make something as simple as possible that would work.

There's also a blueprint-to-ascii-art renderer which I added to help debug layout issues.

I haven't played factorio in a while, but I do plan to keep working on this. Due to some poor choices early on, implementing the last few fiddly (and less interesting) bits ended up being the bottleneck. In its current state it can go from raw inputs to a full suite of science packs, though some of the recipes are probably out of date, and I think there are some bugs in a few of the layouts that manifest in the finished blueprint as belts facing the wrong way, etc.


Oh it's a python app. I read `virtual factory constructor generator` and assumed it would be Java.


This sounds interesting! I would love to read a post about this. (I am currently working on automatically generating belt balancer layouts, but that is an entirely different beast.)


Yeah, I should write up how it works. There's a README but it was written as an initial plan of attack and the results didn't always reflect the original plan.


Just wanted to say that this is awesome.


Personally I think most of the game is actually about managing changing requirements and layout, but there's certainly a subset of gamers that are after the perfect endgame base (the X-science-per-second crowd). If that's the target group of the compiler, then you can assume that enemies don't exist (they waste CPU power), that resources are decoupled from the factory using trains (they run out too fast for tight integration), and that the entire tech tree is explored (everything before that is just an unimportant bootstrap base). Making those assumptions certainly makes the compiler more tractable, but it also gives you another interesting metric to optimize (CPU load of running the produced layout).


If you can make some assumptions like train length and loading/unloading station design, you can probably guess how many trains you need to supply a given design, which means you can probably automate the placement of resource collection outposts.

You'd need some empirical data on train acceleration and top speed, but once you have that plus some generous assumptions about latency tolerances, you can probably automatically lay out tracks and train routes and resource outposts.
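
Back-of-the-envelope, the train count is just a ratio. Every number in this sketch (train length, wagon slots, stack size, round-trip time) is one of those generous assumptions, to be swapped for real measurements:

    from math import ceil

    def trains_required(items_per_min, round_trip_min,
                        wagons_per_train=4, slots_per_wagon=40, stack_size=50):
        # One delivery per round trip: load, travel, unload, return
        items_per_train = wagons_per_train * slots_per_wagon * stack_size
        deliveries_per_min = items_per_min / items_per_train
        return ceil(deliveries_per_min * round_trip_min)

    # e.g. 4,000 items/minute with a 6-minute round trip:
    # trains_required(4000, 6)  # -> ceil(0.5 * 6) = 3 trains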


Laying out the perfect Factorio game would be akin to laying out a high-performance superscalar out-of-order CPU. CPUs are math factories.


There was a poster a while ago who designed a series of mods that would automate the sprawl of his base. It would detect ore patches and set up mining and train loading.

https://forums.factorio.com/viewtopic.php?t=41377


Did we just automate automation?


Welcome to Factorio, the game where the goal is to automate your automation of automation.


I’m a beginning megabaser myself. For how engrossing Factorio is, the vanilla gameplay to the end of the tech tree is just not enough ;)

At some point, the inserter-to-belt interface becomes a serious bottleneck. CPU load does too. As for enemies, you can basically treat them as a solved or solvable problem after artillery even if you don’t turn them off.


Such a language could be an extension of VHDL or Verilog (or, more likely, just a library in such a language). The map could plausibly be the equivalent of an FPGA architecture, letting you build upon existing tooling. Caveat: I'm a bit of an FPGA noob and have zero clue how one specifies a detailed FPGA fabric to such tooling.


No, you are right. One would likely need to define more logic types to be fully accurate, but those could be abstracted away.

Just connect blocks A and B in some hardware description language. Further describe B as an assembly of blocks A and C.

Ask a compiler such as yosys to flatten your hierarchy down to a set of primitive blocks (say, A and C). Once you have the diagram, use a custom placer and router to position it.

There might be additional constraints while placing blocks. In silicon, this relates to impedance, design rules, and making sure the clocks are properly synchronized (adding buffers, etc). In factorio, this could be making sure conveyors are the same length, placing extractors on resources, etc. But the general topology doesn't change, since you are the one specifying it.

Note: I've done some, but very little, logic synthesis; this is really a bird's-eye view.


I have a somewhat nooby question about Verilog:

Does Verilog allow the description of throughput or a capacity constraint? I'm imagining a situation where a specific component (or belt) can only allow so many messages per second or needs less than a specified amount of current. Or is this concept somehow handled in a different way when specifying circuits?


I am the latest in a chain of noobs in this thread, but I think what you're talking about is handled by clock cycles. Components use the clock to gate the flow of information down lanes.


Yeah, usually you want to sync everything with the same clock.

Verilog and VHDL allow specifying delays, which is basically throughput. Roughly:

    when input changes:
        computed_output = f(input)
        after 10ps:
             actual_output = computed_output
That's not actual syntax, mind you; I am a bit rusty at this. But the idea is that you either make sure every delay fits in your clock period, use different clocks, or wait a few clock cycles to sync everything (which is the same as having different clocks).

Of course, you can oversize a driver to charge a high-capacitance line faster, and that would require more current. I don't think logic synthesis tools can handle that sort of compromise yet.

This is also why non-sequential (no clock) logic is hard: you would likely need synchronization signals so that the rest of the circuit knows when it can change the inputs (maintaining those during setup and hold times is necessary to guarantee valid output... try changing the input numbers while you perform a multiplication by hand).

And that class of issues is likely not a problem at all in factorio: I have only played mindustry and infinifactory, but I guess that factories do not run until they have the right input materials? Control signals are already there, in the form of {material present, material absent} on the belt, and factories are already fully-fledged state machines.


The belts are horrible little buggers. They've got two sides, which get out of sync going around turns. But ultimately, factorio runs on a common clock, and the belts are state machines and should be modeled as such.


There was a game way back on the TRS-80(?) - I think it might have been called "robots" - and you had to program these robots to do things based on logic gates... (I can't recall the details, but surely some HNer will)

---

It would be cool to have a CPU/GPU/circuit design game that literally made a game out of designing logic gates and instructions - but based on real-world circuit design...


Do you mean Robot Odyssey: https://slate.com/technology/2014/01/robot-odyssey-the-harde... ?

As for your idea, Zachtronics had one for a while, but it was flash, so I don’t know if it still exists.


YUP!!!!

That's the one.


I thought you meant TIS-100, which is also a game by Zachtronics where you write a basic assembly language to solve puzzles:

https://store.steampowered.com/app/370360/TIS100/


You absolutely can check factories into version control. They are exported as 'blueprints' and many blueprint management sites exist. One such example: https://factorioprints.com/


That's not quite as satisfying as text-based files managed via proper version control


"infrastructure" as code :)


"supply chain manufacturing as code"


Blueprints are text-based, though.


This is about as meaningful as saying base64 encoded pngs are text based.

Most tools operate on the compressed, base64-encoded blueprint strings (binary header included), not on the raw blueprint JSON payload.


base64 is an encoding, and a base64-encoded PNG is still an image. Factorio blueprints are base64'd deflate'd JSON with a small single-byte header[0].

It would be comically trivial to set up a .gitattributes filter[1] to do the conversion for you on checkout (to the base64 form) and commit everything as JSON, allowing for beautiful diffs.

It's a bash one-liner[2] to turn the JSON into its base64 form and into your clipboard anyways.

[0] https://wiki.factorio.com/Blueprint_string_format [1] https://git-scm.com/docs/gitattributes#_filter [2] left as an exercise for the reader
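
For reference, the round trip described above is only a few lines of Python as well (following the wiki format in [0]; the version prefix is currently the character '0'):

    import base64, json, zlib

    def blueprint_to_json(bp_string):
        # Strip the one-character version prefix, base64-decode, then inflate
        return json.loads(zlib.decompress(base64.b64decode(bp_string[1:])))

    def json_to_blueprint(obj, version_prefix="0"):
        raw = zlib.compress(json.dumps(obj).encode("utf-8"), 9)
        return version_prefix + base64.b64encode(raw).decode("ascii")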


Hardware description languages such as VHDL/Verilog feel very similar to Factorio in that "everything is happening at the same time in parallel". It definitely takes a while to wrap your head around that, coming from procedural languages.

The "compilers" for these languages have very sophisticated "routing" algorithms which synthesize efficient physical layouts of circuits.

Programs like Quartus even let you edit your description visually as an abstract block diagram by dragging around wires/placing blocks.


As an (technically) EE who studied mostly CS courses (and worked in software engineering since), that's something that's really (occasionally) bugged me with software, especially when deliberately choosing asynchronous (in the broadest concurrency-inclusive sense) patterns and syntax. It just seems so... much harder?

Electronics is just inherently 'parallelised', and software generally isn't, I get that. But somehow we built up from parallel hardware to procedural, synchronous software (I know it isn't all, but let's be honest, it ~all is), then built up further to sometimes wanting parallel again, and it's just sort of hard(er than it seems it should need to be) to use?

I challenge anyone who's used exclusively procedural languages to try an HDL (like Verilog or VHDL) or at least a high level declarative language (like Prolog or HCL/Terraform, as long as you view it as a language rather than config files) and not feel refreshed.

> Programs like Quartus

Egh, if I have nightmares tonight I'll know why!


Most folks will have used at least a little declarative programming if they've ever interacted with a DB - you can drop out into procedural land using cursors and functions, but SQL really shines when you tell your DBMS what you want and let it figure out the details itself.

Building complex logic in declarative languages is one of the most rewarding things I've ever done. I tend to prefer a strong focus on type transformation when approaching a problem in any paradigm (I'd tend to read "Get the user's name" as "Create a transformation from a User ID to the User's Name and then cast the value"), but just putting all the blocks for declarative programming together and then saying "Now, go!" is quite satisfying.


If you use a language that lets you program in terms of data rather than operations, this principle can be applied to programs - it's a hard problem, however.

Clock signals do seem like a very useful abstraction (when I have my software-hat on) - not sure how you'd do it for a general purpose CPU however.


I have been daydreaming about a program taking a Factorio blueprint and running it through a genetic algorithm to produce a compressed but still functional version. I can imagine it could produce some interesting spaghetti layouts!

One way to ensure that the end result still works the same would be to only let the algorithm work with a selection of predefined refactoring steps that guarantee that the factory does not break. Another, maybe more interesting, approach would be to include some actual simulation that could test the factory. With the simulation, the algorithm could work more freely with possibly destructive changes and apply them until the factory works as expected. This, I figure, would produce even more interesting spaghetti...
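
The first variant (only behaviour-preserving refactoring moves, fitness = how compact the result is) would be something like this skeleton, where the refactoring list, the area metric, and the blueprint structure are all placeholders you'd have to supply:

    import random

    def compress_blueprint(blueprint, refactorings, area, generations=200, pop_size=30):
        # refactorings: functions returning a modified copy that still works
        # area: scores how compact a candidate is (lower is better)
        population = [blueprint] * pop_size
        for _ in range(generations):
            # Mutate each candidate with a random behaviour-preserving step
            offspring = [random.choice(refactorings)(bp) for bp in population]
            # Keep the most compact half of parents + children
            population = sorted(population + offspring, key=area)[:pop_size]
        return population[0]

The simulation-based variant would swap the guaranteed-safe moves for arbitrary mutations and fold a "does it still produce the right outputs" check into the fitness function.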


There's a Haskell project called Clash[1] that produces VHDL/Verilog and gets pretty close to this for FPGA development.

[1] https://clash-lang.org/


Bluespec is also based on Haskell and is now open source


This will take a specification: https://kirkmcdonald.github.io/calc.html#tab=graph&data=1-1-...

But 98% of the problem remains: generating a layout.


I don't think it's actually that hard, as long as you don't try too hard to optimize the layout itself. Write a few heuristics into the system and you can get really close to the thing you're describing. Most of the throughput modeling is pretty straightforward from an algorithmic perspective.

This probably gets the most interesting in the super-late game where players have unlocked all of the tech and are building megabases. Now that the making of everything has been automated, perhaps the building of everything should be automated as well.


I think optimizing layout or at least attempting to minimize the amount of crossed conveyors is most of the issue.

Comparing Factorio to Satisfactory (basically a simpler Factorio, but in 3D space), component layout gets a lot easier to optimize in the latter, since you can essentially float random chunks of logic in random parts of the sky. Factorio does have an interesting mod[1] that allows what's essentially a black-box subroutine, but the majority of the headache is trying to build a relatively compact (and thus defensible) layout while also allowing yourself the room to re-tool the setup as needs change.

Building absolutely can be automated - there are a lot of self-replicating structures out there, most of which automate the forward deployment of defenses since that's a real issue you're going to need to deal with.

1. https://mods.factorio.com/mod/Factorissimo2


> I think optimizing layout or at least attempting to minimize the amount of crossed conveyors is most of the issue.

Yup that's exactly what I was thinking about. If you don't have to optimize, then you can mix and match heuristics to get the job done. If you have to optimize... now the search space gets real big real fast.


I work with a rather complex database in my day job and occasionally maintain a very human-oriented ERD for employee education. I've looked at various auto-ERD tools in the past and failed to find one that can actually produce a decently pleasant visual diagram as output; most of them just give up on the task and lay tables out in a roughly hierarchical display based on distance from a relatively core table, then let links between tables lie where they may.

Optimizing layout is a very interesting problem, and one interesting aspect of Factorio in particular is that you have some real values to measure as output, like:

1. Total space used

2. Total energy required (more inserters means more power - more belts usually means more space)

3. Throughput

4. Pollution produced

5. Tile-ability of the design.

6. Proportion of unused space within a bounding box

Probably a few other fun ones - it actually sounds like a pretty approachable problem to get objectively good results out of.


It could take inspiration from some speedrun formats where the "seed" is pre-selected. That way you could have multiple people compete to write the most efficient factory, in terms of resources and space, to generate as many rockets as possible, with everyone under the same resource limitations and map layout!


I’ve been noodling away on this related question:

Given that sub factories in Factorio can be graded numerically on units/s output, and given that Factorio has a built in blueprint system, could one use ML to automatically create the most efficient blueprint possible for a given output?


Probably. The ratios are well known and easy to compute, but it gets interesting with upgrade modules and effect transmitters. So you can, for example, trade power consumption for higher throughput. But belts only support so much throughput, so you can only feed an area with so many resources, so you chain assemblers with direct insertion.

This would be a great application for ML or genetic programming.


I really want to see a factorio app with a pipeline that takes an HTTP request as input and eventually converts it into a useful JSON at the end.


Not too far off from circuit design languages (VHDL, Verilog) to me.


It sounds like it would be really easy to make a game like that, without the hassle of a 2D grid.


"Fractorio"


When I started my last job, I thought I was going to be writing a lot of Python and C. It turned out that the position had a lot more React and TypeScript than I expected, and at first I was annoyed and afraid. I wasn't a frontend developer---or worse, a designer---but I didn't have much of a choice, so I dug in and learned the stack.

At first I resisted every change. What good is VS Code when I have Vim? Why would I learn TypeScript when vanilla JS has "worked" for me for so long? What's a Webpack config?

Once I began using the tools that my coworkers recommended, I started treading water and even swimming with purpose in the ocean of Web UI technology. I still have a lot to learn, but I probably would have kept on avoiding this area if my situation hadn't forced me into it. Letting my guard down and following the trends in my group helped a lot in this case.

The best lessons I learned during that period are that learning can't kill me and using good tools doesn't make you a bad engineer.


I'm glad that you had such a positive experience! After years of web development I'm just burnt out by the tooling.

Layers upon layers, just make debugging so unnecessarily hard. The tooling is brittle and buggy.

I've seen typescript compiler bugs, webpack segfaults, and whatnot. I've started to ban typescript and jsx from all future projects, and it's better, but still a nightmare.


I’m on the fence, but painfully, with both feet on the ground. However, it wouldn’t take a stiff wind to knock me back onto the plain ol’ JS side.

I’m currently “rewriting” a vue.js app for the sole reason that we’ve just lost a senior dev who was the only one who could stomach the thing. We’ve taken on two juniors in his place and there is absolutely no way they would be able to dig into this thing.

The process has been quite enjoyable and we’re just about at feature parity at 1/10 LOC. And the juniors are quite keen at picking up typescript and lots of other useful things along the way.

Had they just been dumped into the vue pool, things would have turned out much differently.

I await the day a few months from now when they “discover” this new thing called vue and want to rewrite the entire thing!


> “rewriting” a vue.js app

Rewriting to what?


As a mainly frontend developer, I agree. I've spent more time configuring tooling than writing code in this new project I'm starting. I don't want to write plain JS, but the top used frameworks have strayed so far from basic JS that it's getting a bit ridiculous.

Svelte appears to get rid of some of the boilerplate and verbosity stuff you find in other frameworks, though it's still a pretty magic framework. Looking forward to trying out SvelteKit.


I've seen this take a lot lately, and frankly most of them are straight out lies. Your post may be one of them.

The top three modern frameworks all use CLI tools that do the config for you. Most are astonishingly simple to use. There are rare times when you have to venture into custom webpack configurations, but they are few and far between.

You say the top used frameworks have strayed far from basic JS, but that is not true. Most of them are 90% vanilla JS with the exception of Angular. Hell, even React Components are simple JSX transforms. A component transforms into React.createElement(<name>, <props>, <markup>)


Idk what projects you are working on that don't require any configuration.

I just started a new Vue 3 project with typescript, Vite, VuePrime and some other dependencies, and there were a lot of undocumented steps and annoying issues to get everything working. JS-shims, beta version browser extensions, colliding ESLint rules, buggy dependencies that get compiled into broken JavaScript, etc.


Well, I’d wager it’s a likely experience if you venture into experimental frameworks. Vue3 is pretty new, as are dependencies that work with it.

I have not had any of these problems with `create-react-app` yet


The thing is, complexity can't be destroyed. You can just move it from one framework to another.


How do you mean? If there’s a repeated pattern that is hand-written all over the place without people realizing it (or it not making sense within the local context to build something more general), centralizing that pattern certainly destroys complexity, does it not? The pattern existed before and after, but the centralization means you have only one instance of this pattern.


That wasn't complexity; it was repetition or verbosity. It's tedious to manage rather than complex.

In making it DRY you have introduced a dependency for all of the usages of that snippet of code, and made it harder to have individual uses deviate if they need to.

Of course that might be exactly what you want! It's just good to be aware that code reuse is adding complexity by way of adding a new system to manage.

A little function, no big deal. But if you find yourself writing models and extending classes just to save yourself a couple of repetitions you may have jumped off the deep end!


You now have to integrate that pattern/module, and learn/work with the tooling to integrate the pattern/module, which adds new complexity and constraints on top of the now-centralized hidden complexity.

On the whole, it’s worth it - building on the shoulders of giants lets us achieve great things that used to require out-of-reach amounts of resources.

But as a result our work has shifted towards more integration and tooling (everything from node/npm to cloud services, orchestration, containers, and ML/AI) and total complexity keeps marching on upwards.


And that works well. After years of using react, I only have positive thoughts on it compared to the jquery apps I used to build.

Things that used to be so verbose and brittle before become trivial and reliable solutions in react.


Interesting that you're reacting against the Typescript trend...

I'm personally starting to wonder if/when the backlash against Typescript will gain steam.


The problem is that Typescript seems amazing when you're first starting out, and is especially appealing to devs coming from strongly typed languages - but, it's a productivity drag almost immediately (for seasoned JS devs), and as the software gets more complex you either get more and more type spaghetti or devs who spend days figuring out just how they're going to make that one type elegant.

All this to maybe catch one or two bugs, since the boogeyman of accidental type abuse rarely makes an appearance.

Some of the sacrifices made to turn it into a superset of vanilla JS come back to bite it as well. I think banning it from projects is a very wise move, but it's the kind of wisdom that's counter-intuitive and requires more of a business sense of things.


> All this to maybe catch one or two bugs

I think you’re significantly underestimating the bugs that could be trivially caught with types. For example, Airbnb stated a while ago that 38% of their bugs could have been prevented with TypeScript[1]. Types aren’t the solution to all bugs - this is why we have strong test cases as well - but they bring a lot of benefit, especially if done with diligence, not just to prevent bugs but also to aid in understanding the system as a whole (seeing the types of an object can make it much easier to understand what data is being passed around).

1: https://news.ycombinator.com/item?id=19131272


Postmortems like this are highly suspicious. There are many, many questions such as whether these bugs could've been caught simply via linting, and whether whoever did this research engaged in p-hacking.

I could not find a single paper - the only reference is a slide in a presentation that may well have been pulled out of someone's ass.

Anyway, I do concede that typing has its value, but in a large project, the cons ironically outweigh the pros (for TS specifically). I'm sure other JS typing systems are actually better at that, especially the ones that aren't trying to be JS supersets.


Given that AirBnB supplies one of THE strictest ESLint / TSLint configs, I'd wager they were using linting alright.


I don’t know about your experience but TypeScript has saved my team an enormous amount of time and resulted in the near complete elimination of showstopping bugs on deploy at my company. We’ve had maybe one fire in two years of rapid growth and a large part of that is thanks to TypeScript.


I know this joke, but in the version I've heard it's mechanical engineering students instead of programmers, their professors instead of conference participants, and the aircraft itself instead of the flight control software.

But the structure is the same.


I think Radio Yerevan wants you to be chief announcer.


The Soviet Union's greatest accomplishment was convincing everyone else in the world that Yerevan exists.


I have been there, I tell you. You gotta believe!

