
Just curious. How does that work on changes between depending states within a cycle? E.g.

    const [foo, setFoo] = useState(1);
    const [bar, setBar] = useState();
    setFoo(10);
    setBar(foo * 2);  // Is bar 2 or 20 at this point?

You would not write this code for a very simple reason: all `setState` calls queue up a new render. Since you're setting the state every time the component is rendered, it will loop infinitely.

Let's say you instead did the `setFoo` and `setBar` in a click handler. In that case, when you click the button it would tell React "Hey, in the next render set `foo = 10` and `bar = 2` (since `foo = 1`)". Then React goes ahead and runs your function again, and `foo` and `bar` will both be updated.

The value of `foo` does not change within a render cycle, so you literally don't have to worry about it changing under you.
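
A minimal sketch of that handler version (the component and handler names here are made up):

    import { useState } from "react";

    // Inside the handler, `foo` still holds the value from the current
    // render (1), so setBar(foo * 2) queues bar = 2, not 20.
    function Demo() {
      const [foo, setFoo] = useState(1);
      const [bar, setBar] = useState();

      const handleClick = () => {
        setFoo(10);       // queues foo = 10 for the next render
        setBar(foo * 2);  // foo is still 1 here, so this queues bar = 2
      };

      return <button onClick={handleClick}>{bar}</button>;
    }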


It’s 2. We’re just running a js function here with nothing magical going on.

It might seem that there's a chance people will be confused about what `bar` is, but in practice all the changes happen in handlers, and `bar` will be a `const` anyway, so you have no plans to mutate it directly.


Thanks for the clarification. I'm trying to understand the rationale behind React's design decision. Why would state changes within the same render cycle lead to all sorts of bugs? And why is it not a problem with other frameworks like Vue or Svelte?

Just a follow-up. If foo and bar are expected to be constant in the rendering code, what's the point of the setXXX() functions? Why not just make them explicitly constant and only mutate them elsewhere, like in the handlers/controllers?


If you have state changes within your render cycle, then you can't write pure functions anymore, you have a stateful class. And if you have a stateful class, then React can't control when you render, because it can't know when you update a variable. So the onus is on you to decide when your component needs to re-render. This is extremely easy to screw up. You will end up in scenarios where the UI does not match the underlying state, just because the two have gone out of sync.

React inverts the MVC paradigm and says "no, you don't get to decide when to render." So if you want to update state, React will manage that for you and queue up the render. With React, your UI and state literally cannot go out of sync (except for refs), and all the data dependencies are managed for you.

Facebook gave a great talk on Flux architecture vs MVC ten years ago that you should check out here: https://www.youtube.com/watch?v=nYkdrAPrdcw

> what's the point of the setXXX() functions? Why not just make them constant explicitly

Look at the code again: it is explicitly constant! `const [myState, setMyState] = useState(0)`. `myState` is a const, and will literally not change its value for the entire render cycle. There is no mutable state to manage, no race conditions. You just tell React what to render based on what the value is now, and React injects the current value for you. The first time, `myState` is 0 since that's the initial value. Every subsequent render, React will give you the latest value. You can pretend everything is constant, because it is.


I'm glad someone who's an expert in React can answer these questions. I have several thoughts.

First of all, I don't believe a React component is purely functional when state hooks with side effects are embedded inside it. Calling UI=F(props) twice with the same props would not produce the same result, given the different sets of states embedded in them.

Second, the local variable myState is const but the state is mutable; the variable myState is a snapshot alias of the state. Otherwise what's the purpose of handing out setMyState? setMyState is for mutating the state, right? It seems the need to keep myState constant is due to React's inability to detect changes to the variable and sync up the UI; it uses setXXX() to trigger the UI re-rendering. However, whether setXXX() queues up a re-render or causes a re-render loop should be inconsequential to the programmer. It's React's implementation detail.

Third, I'm interested in learning what kind of bugs mutable state would cause in React, so that we can design better systems down the road.


`bar` is 2 because `setFoo` does not update the value of `foo` until the next render

Will bar become 20 eventually? In the next cycle perhaps?

So there will be 3 cycles?

   1: foo = 1, next(foo = 10)
      bar = undef, next(bar = 2)
   2: foo = 10
      bar = 2, next(bar = 20)
   3: foo = 10
      bar = 20
Does the above sound right?

Bar is 2.
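
To sketch the timeline, assuming the two set calls happen in a click handler as discussed above, there are only two renders, not three:

    // render 1: foo = 1, bar = undefined
    //   (click) handler runs once: setFoo(10), setBar(1 * 2)
    // render 2: foo = 10, bar = 2
    //   nothing calls setBar again, so there is no third render
    //   and bar never becomes 20

    // If bar should always equal foo * 2, derive it instead of storing it:
    const bar = foo * 2;  // recomputed on every render, always in sync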

Why is state management so complicated in React?

React is the only major FE lib/framework that basically re-renders each (affected) component and sub-tree on update cycles.

The model they've chosen is the opposite of what one would intuit and because of this model, it is necessary to manage state a bit differently in React compared to Vue or Svelte.


I, for one, would intuit that exact process flow. If my UI is a function of my state, I expect state changes to re-render my UI. How else would you handle it, except for imperatively?

This is a genuine question; I've not used Vue or Svelte.


If rendering or not can change the state, then your UI is not a function of your state.

It re-invokes the entire render function of the component, so the state in the component (meaning state not externalized to a hook or declared outside of the component) is effectively wiped out.

Vue and Svelte don't work that way.

In a Vue SFC, this is fine:

    <script setup>
    import { ref, watch } from 'vue'  // (import added; needed in a real SFC)

    let counter = 0           // <-- This code only runs once

    const someRef = ref(0)    // (assumed: someRef declared as a ref)

    watch(someRef, () => {    // <-- Only this code runs when someRef changes
      counter++
      console.log(counter)
    })

    const fn = () => { ... }  // <-- Normal functions are stable
    function fn2 () { ... }   // <-- Stable as well
    </script>
Basically, it behaves like you would expect normal JS closures to behave because the `setup` function is invoked only once. When `someRef` changes, only the `watch` function is executed (this is also why Vue and Svelte perform better because they can execute a smaller subset of code on updates).

JS and DOM itself work with the same conceptual model, right?

    <button onclick="handler()">...</button>

    <script>
    let counter = 0           // <-- Allocated once

    function handler() { }    // <-- Only this code is executed

    const fn = () => { ... }  // <-- Allocated once
    </script>
In React, this (obviously trivial example) doesn't work:

    export const App = () => {
      let counter = 0          // <-- New allocation on each render

      const fn = () => { ... } // <-- New allocation on each render

      // ...
    }
Because the entire `App()` function gets invoked again. So you have to use hooks to move state out of the path of the component function because the entire component function is re-invoked. Imagine if React, instead of invoking the entire `App()` function, only invoked the code affected by the change as captured in the dependency array. That's how Vue and Svelte work; the same way that a handler function via `addEventListener()` only executes the designated handler function. React does not work this way. Think hard about that and how React is conceptually opposite of how events work in DOM itself.

This difference in design means that in Vue there is -- in practice -- almost never a need to do manual memoization, and never really a thought of "where should I put this?"

It might seem obvious here, right? If `fn` has no dependencies on the component state, then it can just be moved out of the `App()` function. But this is a common mistake, and it ends up with excess allocations (also why React is generally the poorest performer when it comes to memory usage).

React's model is a bit of an outlier for UIs which generally assume state encapsulated by a component is stable. React says "no, the component function is going to be invoked on every change so you cannot have state in the component."

A different way to think about this is to go look at game engines or desktop UI frameworks, which render stateful objects onto a canvas, and see how many game engines or UI engines assume that your object is recreated each time. This is certainly true of the underlying render buffer of the graphics card, right? But generally the developer builds on top of stateful abstractions. React has, in fact, moved the complexity onto the developer to properly place and manage state (while being neither faster nor more memory efficient than Vue or Svelte).

I believe that a lot of React's expanding complexity flows from this design decision to model inherently stateful UIs in a "pretend everything is a stateless function" way because it is fundamentally playing opposite day against the underlying JavaScript and DOM.


Thanks for the detailed explanation. That matches my understanding of how React works.

Another thing I found unintuitive with React is the state of multiple instances of the same component. The useState() hook basically references some global state in the global context. When the component is used in multiple places, i.e. <my-comp></my-comp> <my-comp></my-comp>, the useState() hook inside is called twice, and somehow each gets a different copy of the global state. I know the magic of the global array used to track the states; it just feels a bit too magical.
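
Something like this toy model (not React's actual implementation) captures that magic:

    // Toy model of the hook array, not React's actual implementation.
    // Each component instance owns its own hooks array; a cursor
    // advances on every useState call, which is also why hooks must be
    // called in the same order on every render.
    let hooks, cursor;

    function useState(initial) {
      const i = cursor++;
      if (!(i in hooks)) hooks[i] = initial;
      const setState = (value) => {
        hooks[i] = value;
        // ...queue a re-render of this instance here
      };
      return [hooks[i], setState];
    }

    function renderComponent(Component, instance) {
      hooks = instance.hooks;  // e.g. instance = { hooks: [] }, one per <my-comp>
      cursor = 0;
      return Component();
    }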

I chuckle when it's claimed that a React component is functional. And cringe whenever people put out the UI=F(props) statement, as if calling F with the same props multiple times and getting a different set of state values weren't the antithesis of being functional.



It’s not.

I don't want to come off as negative, but here's an off-the-shelf alternative. My home network has a VPN set up that lets me access the home video feed securely and privately. The NVR software doing the video recording can process the videos to detect abnormal activities, and it will send a push notification to my phone. I then connect to the VPN and view the videos as if I'm on my home's local network, totally private and secure.

This is a reasonable setup. We discussed it in other comments as well

How do they measure the memory usage?

Setting up a dynamic DNS record to map a hostname to my home network’s dynamic IP actually makes private VPN usable. It’s really a game changer to be able to access all the local services and resources on the road without exposing them to the public internet.

Are you using an internal or external service? Curious what you or others recommend...

I've done a bit of both... I used Cloudflare, which works fine, and then I moved over to Tailscale when playing with PXE/netboot. I've not decided what to use beyond Tailscale's MagicDNS. Unbound looks pretty nice.


Unbound is perfect. The CLI is very handy, as it allows you to invalidate specific domains from the local cache. I have had a good experience with dnsmasq and dnscrypt2 as well.
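
For example, the cache-invalidation commands look like this (the domain is a placeholder):

    unbound-control flush example.com        # drop cached records for one name
    unbound-control flush_zone example.com   # drop everything at or below it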

I’m using an internal machine as the VPN server, with a port forwarded to it from the router. I also have Tailscale set up but if I remember correctly Tailscale requires all devices participating in its VPN to install its software, which is too much.

> I also have Tailscale set up but if I remember correctly Tailscale requires all devices participating in its VPN to install its software, which is too much.

This isn't true. You can use Tailscale "Subnet Routers" to access devices within a network without the Tailscale software installed. You still need one device to act as SR, but that would be a requirement for leveraging any traditional VPN as well.

[0] https://tailscale.com/kb/1019/subnets
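
Roughly, for anyone curious (the subnet is an example, and advertised routes must also be approved in the admin console):

    # On the machine acting as the subnet router:
    tailscale up --advertise-routes=192.168.1.0/24

    # On Linux clients, opt in to advertised routes:
    tailscale up --accept-routes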


Is that true? I’m not 100% sure, but I think I’ve printed while I was away from home and I only have Tailscale software installed on my AppleTV.

I'm intrigued. Could you please elaborate on your setup, what Apple TV provides in this mix and how it is used? Is the Apple TV always powered on (24x7)?

There isn't much to say. The AppleTV is like any other computer. I installed Tailscale, set it as an exit node and turned on subnet routing.

The AppleTV is always powered on, but it only uses 0.3 watts while idle.


Wha... since when does Tailscale have an AppleTV subnet node!??! Those guys are on fire and I missed this.

I use mine as my Tailscale exit node.

A pretty common setup is to have a public VPS/dedicated server with wireguard/openvpn hosted at some trusted company and use that as an entry/exit point. It's basically what Tailscale is (massively simplified, obviously).

As far as I understand it, that's not how Tailscale works most of the time. The actual connection is established between the VPN nodes, and actual traffic doesn't travel through Tailscale's servers.

The VPS solution is usually the hub of a star-shaped network, so everything has to go through it, which may be limiting, given that, at least where I live, gigabit fiber is fairly widespread and reasonably priced. Most VPSs I see have less bandwidth than that.

There's Headscale, which allows setting up Tailscale with a private server: https://github.com/juanfont/headscale/


Tailscale will fall back to its DERP servers, which are dumb "cloud" relays, if a direct connection can't be established.

I think what the original post was referring to was using their home (dynamic IP) network instead of a public VPS/dedicated server. That’s what I used to do — I’d use Cloudflare’s dynamic DNS to keep my home IP up to date and have a dedicated VM running at home that handles Wireguard connections.

Now, I have found it easier to manage devices using Tailscale. Also, Tailscale makes it very easy to manage some very dynamic routing (managing connections to external VPNs that mandate different non-wireguard clients).

Sadly, I’ve hit some issues with using Tailscale’s DNS provider (my work-configured Mac doesn’t like to have the DNS server changed), so I still have some work to do on that side.


> I think what the original post was referring to was using their home (dynamic IP) network instead of a public VPS/dedicated server.

Personally, I wouldn't let incoming traffic hit my home IP/router by itself; that's why I suggested having something in between the public internet and your local network.

But, any way that works obviously works, the rest is just details :)


Wireguard running on my router (Unifi Dream Machine Pro) - but I have a static IPv4 address, as well as a routed /48 IPv6 block.

Anything that needs to be exposed to the internet (which was essentially TeslaMate during setup) goes through a Cloudflare tunnel, which terminates on a server behind my router.


I've been very pleased with powerdns for my self hosted internal DNS services. It implements basically everything you want for even the most esoteric DNS setups, and IMO, quite sanely.

I've tried many times to set up PowerDNS and never completed it because I get bogged down in the complexity. I saw they had ansible/terraform scripts for deployments. Do you just use the team's docs or something else?

You can also just set up a Pi-hole ad blocker on a VM. It has a local DNS feature as well (which is nothing more than a text file containing all your local records). Super easy to set up and maintain :)

Yeah, just the PDNS docs. They're excellent. I'll admit my personal setup isn't particularly complex, but I'm not sure how much more complex it can get. I've just got an authoritative server for `lan.` and two secondaries, all 3 using SQLite as their database.

I just added their Debian repo and apt install'd the two packages (dnsdist and pdns-server). Set the respective config files appropriately (dnsdist is a little hard, but googling got me there) and bam. I've got dnsdist serving DoH, DoT, and plain port-53 DNS with some ACLs; it was really easy IMO.
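
For anyone else attempting it, a rough sketch of the dnsdist side of the config (it's Lua; the addresses, cert paths, and ACL ranges below are placeholders, not a tested setup):

    -- Rough sketch of dnsdist.conf; placeholders, not a tested config.
    setACL({'192.168.0.0/16', '10.0.0.0/8'})   -- who may query
    addLocal('0.0.0.0:53')                     -- plain port-53 DNS
    addTLSLocal('0.0.0.0:853', '/etc/ssl/cert.pem', '/etc/ssl/key.pem')  -- DoT
    addDOHLocal('0.0.0.0:443', '/etc/ssl/cert.pem', '/etc/ssl/key.pem', '/dns-query')  -- DoH
    newServer({address='127.0.0.1:5300'})      -- forward to the pdns-server backend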


Cool! I'll have to try once more. That sounds a lot more reasonable than going straight to postgres.

For example, https://freedns.afraid.org/dynamic/ plus a cron job on the router to periodically update the DNS record.

Just give in and use tailscale, life is so much better on the dark side!

I prefer ZeroTier's approach to the relationship between account and devices. In ZeroTier, for each device added there's no need to log in to the ZeroTier account; just add the network ID and approve it from the account. In Tailscale I have to log in from each device to add it to the network.

Staying with Wireguard. The article, by the way, is about Wireguard, not an opinion piece comparing alternative technologies.

@smw just says that tailscale is more convenient than dynamic DNS.

Why would you need a dynamic DNS record though? Within the VPN you should feel free to hard code any address you want. You control the network after all. In my own VPN I've never had a need to have IP addresses changed.

Dynamic IP. Hard coding an address is exactly what we want to avoid.

Let's go one level deeper. Why do you need dynamic IP in your own private network?

There is a dynamic IP on the external address, from their ISP.

Yeah but you don't use the external IP for the purpose of accessing your VPN (not via a DNS record anyway). I am also unclear on the purpose of the dynamic DNS.

Your external IP is dynamic because the ISP can rotate it. You want to reach your home's external IP to VPN in. One common way is to create a public DNS record that's dynamically updated (by a cronjob or whatever) to always contain whatever IP your ISP last handed you.

That's what I do. Just a cronjob on my TrueNAS server to query my IP and update my subdomain's A record if my IP has changed. That way when a power outage happens and my IP gets rotated, it makes no difference.
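
A sketch of what such a job can look like, here as a Node script run from cron (api.ipify.org is a real "what's my IP" service; the DNS provider endpoint and token are placeholders, since every provider has its own API):

    // Hypothetical dynamic-DNS updater; requires Node 18+ for global fetch.
    async function updateRecord() {
      const ip = (await (await fetch("https://api.ipify.org")).text()).trim();
      // Placeholder endpoint; substitute your DNS provider's real API.
      await fetch("https://dns.example.com/api/zones/example/records/home", {
        method: "PUT",
        headers: {
          "Authorization": `Bearer ${process.env.DNS_API_TOKEN}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ type: "A", content: ip }),
      });
    }

    updateRecord().catch((err) => { console.error(err); process.exit(1); });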

How do you connect to your VPN from your phone when you travel on the road?

Really?

Imagine, if you will, the following scenario: I have a wireguard endpoint on my home router. The home router uses a residential ISP connection and I don't want to pay $10/mo for a static IP because my ISP is cheeky and expensive. I want to have my phone connect to said wireguard endpoint to establish a secure link. I don't want to have to change my wireguard configuration on my phone every time my home IP changes.

So, I set my phone to peer with the wireguard endpoint on `home.denk.moon:1234`. Every time my home router's external IP changes, it sends a dynamic DNS update to my DNS server such that `home.denk.moon` reflects the new IP for my router. Now, whenever my phone attempts to connect to wireguard, it will resolve that domain name, get the latest IP for my router, and connect.
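
The phone-side peer entry in the WireGuard config then looks like this (the key is a placeholder, and AllowedIPs depends on your addressing):

    [Peer]
    PublicKey = <router public key>
    Endpoint = home.denk.moon:1234
    AllowedIPs = 192.168.1.0/24
    PersistentKeepalive = 25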


To find your private network when you're away and plugged into a public one, since the former's IP may have changed. I gather the OP is talking about discovering their public-facing address, not doling out IPs on their internal VPN.

Don't ask next "Why do you need to know your home IP address?"


I assume the fourth point is about billing clients at a high rate while paying a quarter of that to the associates. But some law firms will use the more efficient tools to take on more cases for the partners to bill at their higher rates. Competition will force the firms to adopt the more efficient tools.


One hundred percent! And you can already see this playing out. More and more large law firms are spinning up technology and innovation teams. This is a huge investment on their part, indicating they're incentivized.

Also, the billable hour is somewhat misunderstood. It is more of a process tool than a hard reality. Repeat clients expect the bill to end up within a certain range. And law firms expect their people to work a certain number of hours.


One of my production web-based projects managing critical transit data hit the 12-year mark. It was based on straight HTML/CSS, plain JavaScript, Knockout.js (yeah, this was before ReactJS and friends), jQuery, Bootstrap CSS, and a Play Java backend on CentOS Linux against an Oracle 11 DB. It needed minimal maintenance and updates, only when the spec versions changed. It reached the end of its rope when it failed a new security audit because the software versions were too old, i.e. Oracle, Linux, and the JDK. The customer came up with the funding to do a rewrite rather than spend the money retrofitting the old code. You bet one of the main goals of the new tech stack is reducing 3rd-party dependencies and choosing dependencies carefully.


For "write everything twice" I would say write some parts many times. You don't have to write everything twice. Most code are optimal or good enough already, but you should be willing to re-write some parts over and over again.


I found WSL2 to have similar performance to native Windows. The only slowdown is loading files from the Windows side via /mnt/ which goes across the VM boundary. Move the files to the Linux side and it’s good.


Thanks for the wonderful article.

I’ve tried running models locally. I found that colocating the models on my computer/laptop took up too many resources and impacted my work. My solution is to run the models on my home servers, since they can be served via HTTP, then VPN into my home network to access them when I’m on the road. That actually works well and it’s scalable.


