Hacker News | libeclipse's comments

It looks like an AI-generated idea and execution. The README reads like meaningless words pretending to mean something.


Why not just change the nonce instead of appending it and save some space


We do. Unless I’m mistaken?


Innovative how? Many travel routers already exist and support similar features


The way it automatically connects to your home and presents to your devices as part of your home WiFi. So you bring that device with you and everything else works like you're back home.

I use OPNSense and OpenWRT myself and there's no way you can make travel routers this convenient with them.


Tailscale running in subnet router mode on a GL.iNet router comes close. You can set up Tailscale through the GL.iNet GUI, but to have it also route traffic for everything over to your Tailnet you need to flip one setting via an SSH command.

Not as convenient as this travel router sounds, but close-ish for techies. (Wish it didn't require that tweak via SSH. Maybe it'll be added.)
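For the curious, the SSH tweak described above is roughly along these lines (the flags are standard Tailscale CLI options, but treat the exact invocation as a sketch; which combination you want depends on your setup):

```
# On the GL.iNet router, over SSH:

# Advertise the router's LAN so other tailnet nodes can reach devices behind it
# (192.168.8.0/24 is GL.iNet's default LAN subnet):
tailscale up --advertise-routes=192.168.8.0/24

# Or send all client traffic back through a home machine acting as an exit node:
tailscale up --exit-node=<home-node-ip> --exit-node-allow-lan-access
```

Subnet routes also need to be approved in the Tailscale admin console before they take effect.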


Something something dropbox is simple :) https://news.ycombinator.com/item?id=9224

I wish Eero offered this feature. I bring three Eeros to Airbnbs to replace their crappy WiFi with my same SSID, but it would be nice if it connected back through the home internet.


hah, 2nd time in the last couple months I've been compared to that LEGENDARY Dropbox comment...

In my defense, I'd argue that the average Tailscale user would be comfortable running an SSH command! And GL.iNet is just one very minor tweak away from making this entirely possible from the GUI. (Though they might be intentionally avoiding it because of the support burden of quirks caused by Tailscale acting as a subnet router...)


Why do you think this would be difficult to do using OpenWRT? Wouldn't you just set up the travel router with the same SSID and password as your home network and configure a WireGuard tunnel from the travel router to your home network (that is, if you want to be in your home network)?
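The tunnel half of that suggestion is a fairly standard WireGuard peer config on the travel router; a minimal sketch (all addresses, hostnames, and keys here are hypothetical placeholders):

```
# /etc/wireguard/wg0.conf on the travel router (hypothetical values)
[Interface]
PrivateKey = <travel-router-private-key>
Address = 10.0.0.2/32

[Peer]
# The home router's WireGuard endpoint
PublicKey = <home-router-public-key>
Endpoint = home.example.com:51820
# 0.0.0.0/0 routes everything through home; use just the home
# subnet (e.g. 192.168.1.0/24) if you only need LAN access
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```

On OpenWRT you'd normally enter the same settings through the LuCI WireGuard interface rather than editing this file directly.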


Because manually configuring WireGuard tunnels on random devices is a simple task for most people lol. UniFi's whole stack is all about making powerful tools easier to use for people who don't want to fuck around with networking.


Agreed. I use Tailscale for my stuff (the GL.iNet devices support it, because they're basically a pretty front end for OpenWRT, which supports Tailscale), since I can do it and it's not a real pain, but you do have to know at least a bit about networking. This thing looks extremely promising both for the "I know this should be possible and I want to do it but have no idea how" level of knowledge and for the "I want to spend as little time as possible on configuring things" people.


But you don't need to configure WireGuard on the individual devices, just on the OpenWRT router. That's one device, and you can keep that on permanently.


Except that sometimes you can’t. I don’t know if the Unifi router checks for this, but I’ve run into more than one network where the VPN conflicted with either the captive portal or the wireless network itself (and at least one in the DFW Admiral’s club that had draconian blocking)


Although it does sound really nice from a user-experience perspective, I'm really hesitant to carry a device that, without any additional authentication, would gain access to my home network wherever it's plugged in. I'd hate to lose it or have it taken from me.


Why would you assume there's no additional authentication imposed? You definitely need to establish a connection wherever you are, and most likely you do so using a dedicated, pre-authenticated app on your phone.


> presents to your devices as part of your home WiFi

That will be fun for browser geolocation based on WiFi name.


In a 1-bit environment (== only a single SSID visible), sure. But most of the time multiple SSIDs are visible and correlate with each other, making detection of abnormalities easier. And the lat/long is also visible to help disambiguate.


Would both the stationary and mobile instances of that SSID be visible on public databases like https://wigle.net?


I think OP meant the opposite issue of broadcasting "I live at 123 evergreen terrace" everywhere you go, because SSIDs are vaguely unique.


You've reminded me of a project I started and never got working: a home network on a VPN to another location.

So the usual SSID is in my home country, and another SSID is based somewhere else geographically.


It probably needs a panic/border mode to disable all home access in an emergency. You don't want to be crossing borders and handing customs officials full access to your home network.


If you disable password saving, I think that would prevent them somehow.


You don't need NixOS to use Nix as a package manager/build system


If you configure your server(s) through Nix and Nix containers, then even on another host OS you are basically running Nix.


They didn't block anyone. They're making a mockery of Chinese censors. It's a political and satirical act.


Yes, it's the famous satire and political stance that is invisible to people.

(this was satire)


They took an action with the intent of making the website inaccessible in China.

It is political, and it is satirical, and it is discrimination. Better to argue that it is righteous discrimination than that there was no discriminatory intent.


Chinese censors use active probing to scrape hosts and block anything with problematic content or services. It's not just DPI-based.


They're exaggerating. It's a very casual insult, often used playfully.


I thought so when I was younger, but I’ve found that it’s often considered worse than words like ‘idiot’ or ‘twit’. Some seem to find it — to my surprise — not just insulting, but crude.


The sample size is minuscule (14)


That’s true of most neuroimaging studies. Have you ever tried to get a bunch of people into an MRI for a study? Not easy, not cheap.

Like they said, the effect size is large. With a large enough difference, you can distinguish the effect from statistical randomness, even with a small sample size.

As with any study, this result must be replicated. But just waving around the sample size as if every study can be like a live caller poll with n = 2,000 is not helpful.
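As an aside on effect size versus sample size: the standard normal-approximation power formula shows why a large effect can survive a tiny n. A sketch (α = 0.05 two-sided and 80% power are conventional choices, not figures taken from the study under discussion):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison
    with standardized effect size d (Cohen's d), via the normal
    approximation: n = 2 * ((z_{1-a/2} + z_{power}) / d) ** 2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(1.5))  # very large effect: ~7 per group suffices
print(n_per_group(0.3))  # small effect: ~175 per group needed
```

So for a very large effect, 14 subjects can genuinely be enough; for a small one, it is hopeless.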


Also, this idea that bigger is better with sample sizes can lead to problems on the other side, when we see people assuming an effect must be real because the sample size is so large. The problem is, sample size only helps you reduce sampling error, which is one of many possible sources of error. Most of the others are much more difficult to manage or even quantify. At some point it becomes false precision, because it turns out that the error you can't measure is vastly greater than the sampling error.

Which in turn gets us into trouble with interpreting p-values. It gets us into a situation where the distinction between "probability of getting a result at least this extreme, assuming the null hypothesis" and "probability the alternative hypothesis is false" stops being pedantic hair-splitting and starts being a gaping chasm. I don't like getting into that situation because, regardless of what we were all taught in undergrad, scientific practice still tends to lean toward the latter interpretation. (Except experimental physicists. You people are my heroes.)

For my part, the statistician in me rather likes methodologically clean controlled experiments with small sample sizes. You've got to be careful about how you define "methodologically clean", of course. Statistical power matters. But they've probably led us down a lot fewer blind alleys (and, in the case of medical research, led to fewer unnecessary deaths) than all the slapdash cohort studies that we trusted because of their large sample sizes that were so popular in the '80s and '90s.


Diet studies can also fall into a similar trap.

Huge sample size where all food intake is self-reported, or a tiny sample size where test subjects were locked into a chamber that measures all energy output from their body while being fed a carefully controlled diet.

The latter is super expensive, but you can be pretty confident of the results. On the flip side, it also misses any conditions that only present in a small % of the population.

You can see this with larger dietary studies where, out of 2 cohorts of 100 each doing different diets, 15 or 20% of each group does really well on some "extreme" diet (e.g. keto) but the group on average has no unexpected results.

If your sample size is 5, it is quite possible none of your test subjects are going to be strong responders to, for example, keto.

So then the study headline comes out: "Keto doesn't work! Well-controlled expensive trial!"

Meanwhile the large cohort study releases results saying "on average Keto doesn't work".

But in reality, it works really well for some % of the population!

Some non-stimulant ADHD drugs have a similar problem. If a drug only works for 20% of the population, you need to be aware of that when doing the study design.
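The responder-masking effect described above is easy to see with a toy calculation (every number here is hypothetical, invented purely for illustration):

```python
# Hypothetical: 20% of the diet group are strong responders (lose 8 kg),
# the other 80% respond like the control group (lose 1 kg).
responders, non_responders = 20, 80
diet_group = [8.0] * responders + [1.0] * non_responders
control_group = [1.0] * 100

diet_mean = sum(diet_group) / len(diet_group)
control_mean = sum(control_group) / len(control_group)

print(diet_mean)     # 2.4 kg average loss
print(control_mean)  # 1.0 kg average loss

# With n=5 per arm and a 20% responder rate, a trial samples
# zero responders about a third of the time:
p_no_responders = 0.8 ** 5
print(p_no_responders)  # 0.32768
```

The group average hides that 20 people did dramatically better, and the tiny trial frequently never sees a responder at all.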


You seem to be implying that subgroup analysis never happens?

I guess I don't follow weight loss research closely, but I would be genuinely amazed that they don't do it, too, given how ubiquitous it is everywhere else in medical science. And the literature on ketogenic diets goes back over a century now, so it's hard to imagine nobody has done one. Could it be instead that people did do the subgroup analysis, but didn't find a success predictor that was useful for the purposes of establishing medical standards of care or public health policy? Or some other wrinkle? Or maybe people are still actively working on it but have yet to figure out anything quite so conclusive as we might wish? But that this nuance didn't make it into any of the science reporting or popular weight loss literature, because of course it didn't, details like that never do?

Disclaimer, I'm absolutely not here to trash keto diets in general. I have loved ones who've had great success with such a diet. My concern is more about the tendency for health science discussions to devolve into a partisan flag-waving contest where the first useless thing to get chucked out the window is a sober and nuanced reading of the entirety of the available body of evidence.


> Could it be instead that people did do the subgroup analysis, but didn't find a success predictor that was useful for the purposes of establishing medical standards of care or public health policy?

If we are all being generous with assumptions, this could very well be the reason.

I haven't seen much research on trying to predict which dietary interventions will be most effective on an individualized treatment basis, but I also haven't kept up with the literature for five or six years.

Then again, the same promise exists for ADHD medicine, where there are now some early genetic studies showing how we could perhaps guide treatment, but the current standard of care remains throwing different pills at the patient and seeing which works best with the fewest side effects.

Of course dietary stuff is complicated due to epigenetics, environmental factors, and gut microbiomes.

That said, progress is being made, and the knowledge we have now is worlds different from the knowledge we had 20 years ago, but sadly it seems outcomes for weight loss are not improving.


That's a great point. If your experimental methodology is flawed, it doesn't matter how big your sample size is. A study like this lets you gather some compelling evidence that you may have a real effect. Then you can refine the technique. Autism is a very active area of research, so I suspect we'll see other groups attempt to replicate this study and adapt its techniques while the original authors refine the technique and get funding to perform larger studies.


Here is a deep neuroimaging study of 52 fetal humans and their brain maturation states.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8583264/

I recognize the Catch-22 that the diagnosis is not possible until several years after birth. But a prospective study of this sort is "in scope" at UCSD. They already have big MRI studies of kids with hundreds or even thousands of scans.


It would tell you that the system is inconsistent


It would not. If a system could prove its own consistency (I know sufficiently rich systems can't do this, but suppose one could), you still couldn't conclude it is consistent.

EDIT: I'm working under the hypothetical situation in which PA could prove its consistency. I know it can't, but assuming it could prove its own consistency, you still couldn't conclude that it was consistent, since an inconsistent system can also prove its consistency.


GP is (I think correctly) stating that a system that can "prove" its own consistency is definitely inconsistent. Inconsistent systems can "prove" anything; if a system can appear to prove its own consistency, it isn't.
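The argument here can be sketched formally; this is just the standard statement of Gödel's second incompleteness theorem plus the principle of explosion, for a theory T that is recursively axiomatizable and interprets enough arithmetic:

```latex
% Gödel II: a sufficiently strong consistent T cannot prove its own consistency
\mathrm{Con}(T) \implies T \nvdash \mathrm{Con}(T)

% Contrapositive: an apparent proof of Con(T) inside T means T is inconsistent
T \vdash \mathrm{Con}(T) \implies \neg\,\mathrm{Con}(T)

% And an inconsistent T proves every sentence, Con(T) included
\neg\,\mathrm{Con}(T) \implies \forall \varphi \;\, (T \vdash \varphi)
```

So for such a T, seeing "T ⊢ Con(T)" is evidence against consistency, not for it.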


Yes, I know this. What I'm saying is that even if a consistent system that was strong enough could prove its consistency, it still wouldn't tell you anything. There are systems that can prove their own consistency for which it is known that they are consistent.

https://en.wikipedia.org/wiki/Self-verifying_theories


These are weaker than the systems Gödel's theorems refer to, as discussed in the opening paragraph. So these systems are not "strong enough" in the sense described in this thread.


Obviously I’m aware of this. As stated several times, my original comment refers to the hypothetical situation in which PA could prove its consistency without Godel’s theorems being true/known. One would not be able to conclude anything.

The point being, having PA prove its own consistency couldn’t tell you anything of value even in the case that Godel’s theorems were not true. This is an interesting phenomenon. The only way to know a system is consistent is to know all of its theorems.


Did you see the authors of the paper?


Yes - that's what I meant. Kaufmann rehashing his old work (but presumably adding something to it too).

