This looks super cool but I could only get 10/21 to scan with the classic zxing scanner (>100M downloads), and 3 of those took a bunch of playing with camera angles and distance, so I'd recommend against using anything too fancy for real world barcodes.
Thanks for trying it out! I definitely agree, the fancier it gets the harder it is to scan so these should not be used for real world stuff where reliability is important.
I basically only tested these on my own phone, a pixel 7 with default camera app, so I'm not surprised people with different hardware/software are getting varied results. I think I might try to automate testing using different scanners if I can figure that out.
Hmm, yes, I did not try these, but I have noticed that the default camera app on my S22 is freakishly fast at detecting QR codes, something I don't remember zxing ever being.
Does anyone know if these are implemented with neural networks these days? Specialized hardware? Or just very well optimized? Sometimes it feels like a single camera frame is enough, and distance, angle, lighting and focus don't matter.
Ah, yes. The classic “works on my device” argument.
Edit: from testing with zbarimg on my laptop: Basic, Camo, Neon (inverted), Glass, Blocks, Alien all seem to work reliably. Not sure what the problem is with some others, but maybe they’ll also work with some tweaking (Mondrian seems simple enough! maybe contrast issues?)
Edit 2: for codes that will definitely be scanned using a camera [1], some styles can work because phone camera isn’t perfect. E.g. if you slightly blur Dots, it works with zbarimg, which also means it would work if you slightly move around your phone.
[1]: Screenshots are used sometimes! E.g. for importing TOTP codes into a password manager, etc.
I used python so it's annoying to install, and I need to work on the readme, but it supports the command line, so pbpaste > foo.jpeg works as expected. There's also a copy command.
More importantly, it's sad and bad when an obvious negative comment sits at the top of a front-page Show HN, one that misrepresents and then deigns to provide recommendations against this nascent, beautiful project that needs our support, effectively shutting down the experience and enjoyment for the incoming audience and for the creator.
That is a very bad thing and should not be tolerated. Speak up against it.
OP is being respectful and merely points out an issue: the codes generated don’t work with some devices. If you want a code that works with everything, you can’t do too much customization.
It’s a very okay thing to point out and there isn’t much negativity in it. I agree that we should support projects like this, but being dismissive of real issues is not the way.
No; top comment: "don't use it" because "I tested on <my setup>" and <it failed> - you don't see the problem?
It is OK to point out, but not to dissuade others from using it, you see? Especially on a Show HN for a cool project where some good person has clearly poured their passion into it. You've got to stand up for that stuff. Bring the positivity, man! Frame real issues in the correct (qualified and personal) light instead of proclaiming a blanket recommendation.
I get how making such quick generalizations can serve you well as an engineer, but when people's work is involved it pays to think about it a bit more! :)
Example of respectful: "I tried this, super cool, and I encourage everyone to give it a try. For my setup (details), I could only get 10 to work <more details if you want>, but that might just be me. All around super awesome project, love it!"
Example of truly awesome comment if you want to be a super star: "I tried this, super cool, and I encourage everyone to give it a try. For my setup (details), I could only get 10 to work <more details if you want>, but that's probably just me! edit: Thinking about it some more, I realized it might be because my device uses an outdated lib<something> and I just updated that and I got 19 of them to work! I checked <the qr code scanning app>'s GitHub and saw they also hadn't updated this dependency. So I opened an issue, and let them know about this, here's the link: <gh issue>. All around super awesome project, love it!"
You get the idea?
Few word mantra: contribute and build, not deplete and discourage.
The problem here is that it cannot possibly work robustly. It does what it does by trampling all over the error correction codes and using out of spec pixel shapes.
This is not the sort of problem that will go away once the technology matures. It is fragile by its very nature.
It is absolutely OK to recommend against it on that basis.
I don't think you're right about that, how could it work at all? Are you even sure how it works? I was able to scan all 21 perfectly, less than 1 second each, on iOS 18 a couple of inches from retina display. I have no idea how it works, but it seems pretty robust to me.
I know people can't do it on some Android phones, and apps. Okay. But still seems pretty good and not a fundamental flaw. Absolutely not OK to blanket recommend against trying that. Why limit anyone? Just encourage everyone to try for themselves. Get more data. Premature to make these negative recommendations on 1 test. Rather than short-circuiting further curiosity and exploration, expand it!
You were scanning a screen, which is perfectly flat, has no crumbling and no smudges. You are using a high resolution camera close to a high resolution screen, on a relatively large QR code. You have a best case scenario, and if that's all you care about, you'll be happy with it.
For now, anyway. If you are using the same scanner implementation that the author used to check their results, then the good results you are seeing are due to the generation algorithm being tuned to work with that particular implementation. But there's no guarantee that the implementation will stay the same in some future iOS. Say Apple makes some change, to better recognise bleached codes or something, and then suddenly some of the drop shadows are interpreted as 1's instead of 0's. They could do that, because that wouldn't require changing anything for codes that follow the spec. But it might break that pretty, pretty code that you just put on a million milk cartons.
> I know people can't do it on some Android phones, and apps. Okay. But still seems pretty good and not a fundamental flaw.
Might not seem like one to you, since you've got a shiny iPhone running the latest iOS, but maybe fundamentally it actually is? They're pretty and all, but accessibility and not being ableist is kind of a thing?
I know we can't all have shiny things, but does that mean everyone must miss out? Why forbid pre-2018 droids from using this pretty square, or everyone else if they can't? If a11y means lowest common denominator to you, well, not holding double standards is kind of a thing too, right?
I see your point! But there’s an issue specific to QR codes here.
See, if you generate a code and it works for you, you generally expect it to work for other QR readers as well. If all you want is to create a QR code for yourself to use (say, to deactivate an alarm, or a Wi-Fi code for your guests to connect), this should be fine. However, things like this will inevitably be used in production, and it would be really bad if, say, a QR code gets printed on a million copies of something and turns out to only work for some users.
I agree that we should support new projects and not choke them down with overly negative commentary. However, piling up positivity doesn’t help either: your second example doesn’t feel genuine to me, more like a spam review in app store or something. I love the sandwich approach in your first one though!
How about this: “Cool project! Some styles are really nice, I didn’t know you can make QR codes like this ever work. However, when I tried reading them with [app name] scanner app, only a few styles actually worked for me. Perhaps it works with other scanners, but this one has [99M+] downloads last month [/ active user base or something]. So if you want to use those be sure to check it works everywhere, and maybe stick to more plain styles? I also hope this gets addressed in the qrframe itself – perhaps tweaking the contrast in styles [X] and [Y] a bit would help.”
It is still way too fluffy IMO, but much more substantial. It is harder to write, of course.
I think you're underestimating the importance of positivity. But aside from that: really? The one where the guy does a technical deep dive to solve some problem seemed like a spam review to you, in the context of HN? How do you figure?
They're both genuine from me, one way I might write one. I feel sorry for you that you didn't feel that genuine emotion, or see the utility, but I'm glad you got something from it, hopefully it can help you write better ones in future!
Just curious: have you done many Show HN's yourself? You might be finding it hard to empathize with the uniqueness of that without having some experience doing it.
Regarding the drafts: the first one's not so much a sandwich as genuinely owning "maybe this is my setup" (totally correct), and expressing love for this cool project.
I like your one, it's a good start. I think focusing clearly on your test setup and expressing positivity is enough. Making recommendations or suggestions runs the risk of making too many unfounded assumptions about intended use, and of clouding the market with unfounded restrictions when in reality you don't have the research to back them up. So, temper your pronouncements to the quality of your data, I guess; otherwise you might come across as gratuitously negative or fault-finding, as OP did here, if not in "tone" per se, then in harmful effect (authoritative dismissal dimming engagement, etc).
Another way to say it might be: don't overestimate your ability to see real limitations, while underestimating your ability to harm a project or creator with the same.
These ideas, are they all really so shocking? If so, glad I could help bring a bit more light to the Show HNs by prompting some reflection!! :)
In this case, the builder actually agreed with the feedback. You had no reason to call it “poosey”, in fact that’s a stupid word to even have in your vocabulary.
Hahaha! No. What would you know? In fact, that word is perfect for the meaning I expressed. What do you think it means?
The creator can do what he wants, it doesn't change the principles.
Just like I do what I want, and you have no ability to judge my reasons, nor pretend you can judge a word is stupid for someone to use or not - besides yourself.
So maybe turn all those judgment attempts on yourself because you are getting your personal boundaries all wrong, is that something you do a lot? Hahahaha!
I tested it with my phone and got similar results: I got only 9 out of the 21 QR codes to work. I don't know why you think such a finding should not be mentioned here.
I think it should be mentioned but without the "so I'd recommend against using", because that's wrong. Also, specify OS, surface, time, etc for better test!
> "so I'd recommend against using", because that's wrong.
Someone's opinion is not "wrong". It may be misinformed, it may be different than yours, but you can't just assign a "correctness" value to something subjective just because you disagree with it.
No, that's not it, but I get why you read it like that. By "wrong" I mean lifting his personal test/opinion to the level of a general "recommendation", in effect discouraging others from exploring and harming the creator. That is the wrong here. You get it?
If you want to use something for production, you shouldn’t only test on the latest stuff you personally have at home but on the crappiest stuff you reasonably expect users to have. Otherwise, you get websites that lag and spin up the fans on everybody’s laptops while the developer is happy on his workstation with 32 cores and a gaming GPU.
But did OP effectively "recommend against using" this by that statement? Yes. Especially given how many people are likely to quickly scan that top comment given the context.
It's not correct, and actually morally wrong, to make blanket recommendations against something based on your own specific setup. Share your experience! Don't discourage others from trying it, or pretend your single-source, device-specific test now permits you to direct others with recommendations on this thing you didn't build, but have effectively now misrepresented as a "too fancy", "non real world" toy. How many people is that comment going to lose the creator? How many joyful experiences are robbed from the people who get misguided by that comment?
I get your more literal, word-parser-y reading, and I can definitely see how a narrow, deftly parsed reading precisely defines just what you think it does. But that doesn't mean it's the only reading, or even the intended one.
I think the people who will bother parsing any such possible nuance while making microsecond decision to either explore further or not bother is low. Congrats on being an exception tho! Or maybe you are just overanalyzing for the purpose of this argument? Hahaha :)
I get you don't see the issue right now, and how it undermines the very best aspects of the project, but i think with a slight shift of perspective, you might! I encourage you to explore a few more Show HN's yourself and empathize with the dynamics, the effects of comments, and even try posting a few and seeing the response you get and how you feel about that. Through long experience you will know the values important here.
You sound personally involved. Your comment is the one being downvoted, because the hubris of "it works for me so everyone should be using it!" is absurd. The fact is many of the QR codes shown do not in fact work on many devices. Your use of all-caps also violates HN rules.
Judge in proportion to downvotes and you'll miss great ideas, as demonstrated in your thinking. The only hubris is your misrepresentation, and the only absurdity? Your low-fact, untested 'fact'. Also, you hallucinated the all-caps. Hahahaha! :)
>Agree, but OP should not then "not recommend" and it shouldn't be top comment dissuading others from seeing this AWESOME project.
Maybe you don't understand what "all caps" is? Let me explain it to you. You capitalized the entire word "awesome" in your comment above. That is against the HN rules.
Your entire comment to me is a troll. I'm not going further than this with you. Goodbye.
I didn't mean "you're mandated by law to do this", I meant "if you don't do this, people won't use your stuff if they can avoid it". If that's OK with you, that's awesome. We were talking about production use though, where adoption and user friendliness is normally a goal.
Ah yes, scanned using a proprietary app on a thousand-dollar phone from the bright screen of a thousand-or-more-dollar computer: the only use-case imaginable, no reason any person would ever be using anything else...
If it doesn't work on a $150 Android phone using all of the top 10 results for "qr scanner" in the Play Store, when scanned from a dirty TN panel or a faded inkjet print in bad lighting, you are failing your users: turn off your graphic-designer brain and think about usability.
But are you? That depends on who your users are, so it's a misplaced point; for mass appeal there are alternatives. Celebrate it for what it is instead of judging it by irrelevant standards.
I'm surprised to hear this. Fortunately it doesn't look like the source code itself got taken over [1], and of course F-Droid, which is always the best place to get any open source Android application, still has the same version as the latest Github release. [2]
These applications are blessedly feature complete, and I haven't noticed any issues being "stuck" on the F-Droid versions.
It's kind of surprising that a presumptively legitimate company (and YC-funded startup) would out themselves as buying black market residential proxy bandwidth, isn't it?
Their frontpage also advertises the ability to pass CAPTCHAs, whether by automation or more likely by delegating them to third-world CAPTCHA farms. If that's a major selling point for your automation service then your target market probably ranges from dubious (e.g. data scrapers trying to get around limits) to extremely dubious (e.g. ticket scalpers, spammers, click fraud, etc).
Just because something can be used for sketchy purposes doesn't mean that's the only purpose of it. There are thousands of situations where people are forced to interact with a shitty website 100x per day and the site won't provide an API. Imagine if your job was booking plane tickets all day. United could provide you an API key to do so via an API, but in practice they won't, only some enterprisey travel software company can get that kind of access, for a steep fee. You could build a tool which automatically puts together an itinerary based on rules and books it, through a tool like this. Perhaps a slightly contrived example, but I believe things like this definitely happen.
> United could provide you an API key to do so via an API, but in practice they won't, only some enterprisey travel software company can get that kind of access, for a steep fee. You could build a tool which automatically puts together an itinerary based on rules and books it, through a tool like this. Perhaps a slightly contrived example but I believe things like this definitely happen.
And you think that's NOT sketchy?
I'm almost afraid to ask where you think the bar is...
And why is it? A company provides you an API for a "fee" and a free web-based interface, as long as you are agile enough to use it, with some limitations per ip/cookie. You choose the second path and automate it. What's wrong with that? Limits of the free access are the public contract. You're not obliged to play along with someone's "monetary spirit".
And in practice, APIs are often much more of a PITA than the actual interface, but you can't buy unlimited web automation. A few years ago, one of my projects literally OCRed data from an Android phone screen because receiving it via the API took a couple of minutes longer and involved email-like back and forth, with polling and ID matching, after a convoluted authentication that failed every few requests.
I really wish I was a better programmer with more time, I would install the accursed "MyQ" garage door app on a dedicated Android, and bridge it into Home Assistant using an OCR type of strategy. (they are notorious for flipping the bird to the whole open home automation community by not integrating with anything)
A very common and pro-consumer use for residential proxies is price scraping and price comparisons.
Most businesses don't want to compete on price and are extremely unhappy if you tell people that their competition sells the same stuff but for less, that their "best deal of the month" is actually a price raise, or that they significantly raise toilet paper prices every time there's a natural disaster.
Agreed. Just for reference, one of our most popular use-cases is automating data entry into CRMs without APIs... No one wants to be doing this stuff manually, and automating it has some serious positive QoL impact
We get a lot of requests for bad usage (e.g. spinning up upvote rings on Reddit) but we don't want to support things like that
> one of our most popular use-cases is automating data entry into CRMs without APIs... No one wants to be doing this stuff manually, and automating it has some serious positive QoL impact
No-one would need captcha automation or residential proxies for a use case like that that's all on the level.
>one of our most popular use-cases is automating data entry into CRMs without APIs... No one wants to be doing this stuff manually, and automating it has some serious positive QoL impact
Good question! A lot of CRMs have CloudFlare enabled with location based blocking by default, so we needed a way to spoof a local location to be able to interact with the website
> Imagine if your job was booking plane tickets all day. United could provide you an API key to do so via an API, but in practice they won't, only some enterprisey travel software company can get that kind of access, for a steep fee.
Even your example sounds sketchy though. If you're not legitimate enough to use the enterprise software, why are purchasing tickets all day? And why do you need to proxy your bandwidth instead of just accessing the site directly. And why aren't you concerned about the fact that it's a crime to bypass the approved channels by hiding behind the proxy to do this?
Imagine a legitimate travel agency that cannot book 100 United tickets a day via methods outlined in business contracts and needs to resort to shady practices.
Dude, please provide some real solid evidence to back this up, and perhaps come up with another realistic scenario where bypassing captcha is justified.
> Imagine a legitimate travel agency cannot book 100 United tickets a day
That's the whole point, I never said travel agency, I was thinking a company with travelling consultants.
How TF is it "shady" to purchase and use airfare?
And again, bypassing CAPTCHA, say, to purchase tickets isn't evil either, if you are purchasing them for use and not for resale. It would just allow a person to book tickets for 50 people without wasting 6 hours completing 25 CAPTCHAs and typing in their information 25 times.
CAPTCHA is a blunt instrument deployed in an attempt to mitigate abuse, but it has a massive bad side effect that for every heavy user (not just evil users), it requires a human butt to be in a seat somewhere to do mindless busywork that could otherwise be automated. Working around that (sounds like OP agrees to do so on a case by case basis) is not inherently evil. It's as evil (or benign) as whatever you're using it for.
You ever see that video of the women paying a thousand dollars to skip to the front of the release day line to buy one of the first generation iPhones?
Then when she did and the employees told her they limited customers to buying one or two iPhones per person she becomes incredibly flustered. The guy who sold his spot in the line celebrates with a free phone.
What you’re describing is analogous and there’s a reason that went viral on the internet and was reported on in the mainstream, but I won’t spell it out for you.
It's almost never done with the full understanding of the person providing the proxy; it doesn't matter whether they get promised some change, their browser addons betray them, or they install bundleware/adware.
I'd say it has about the same moral standing as a payday loan.
The look of surprise on their faces is almost universal when the feds knock on the door with a search warrant in relation to something that came from their IP address.
I don't think this is right. If there's a shared secret like a TOTP seed, that's in theory a "something you know", but if I don't know it, then who does? The point of "something you have" is that you own a device that "knows" it for you, and you never even need to see or expose the underlying secret, you just copy a token proving that the device you have knows the secret. I think that does count as an additional factor.
Of course if someone is memorizing the TOTP seed and generating the proof on the fly every time, then there's no shift in factor, but no one is doing that. And if they're saving the password on the same device that stores the TOTP code, then we're back to one factor, but now it's just 2x "something you have" at that point.
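For the curious, a device "knowing" the seed is very literal: TOTP (RFC 6238) is just an HMAC over a time counter, so anything holding the seed, whether your phone, the server's database, or an attacker who dumped it, can derive every future code. A minimal stdlib-only Python sketch (the seed and timestamps below are the RFC's published test vector, not anything real):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: RFC 4226 HOTP applied to a time-based counter."""
    counter = unix_time // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits):
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

Note how nothing here is "something you know" in the memorization sense; the factor is possession of wherever `secret` lives.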
"that's in theory a "something you know", but if I don't know it, then who does"
An attacker. Your knowledge is much less interesting than the knowledge the server has, which is what the attacker can obtain. Grabbing a TOTP key out of a database is not materially different from grabbing a password.
TOTP's different characteristics mean it's harder to intercept, but passwords tend to be stolen nowadays moreso than intercepted, if only because you can intercept only one at a time but can steal the entire database.
The different characteristics mean it can add a bit of utility to a normal password, but I think it's less night-and-day than it was presented as.
re "pro-slavery" specifically that seems very clearly intended as a hypothetical argument in response to someone else; e.g. you can imagine an argument about whether you ought to raise the minimum wage where one person points out that someone on minimum wage can't even afford to pay rent and purchase food, or something.
The rest of it sounds extremely sarcastic and tongue-in-cheek. It's also (all of it) more than ten years ago. I think it's a bad idea to judge someone on the basis of a handful of out of context tweets like this.
Stuff like
> Justine Tunney is petitioning the White House to make Eric Schmidt the "CEO of America."
seems clearly like a pointed act of satire by someone who thinks corporations have too much power.
The fact that a JSON parser is the sort of ecological niche where you naturally get a single best library which does basically what everyone wants it to do with minimal dependencies is exactly the argument for putting it in the stdlib, though.
That would require moving the entirety of serde and its traits (or equivalent) into the standard library. Considering how long it takes to stabilize basic things in the stdlib, I think that’s a terrible idea.
IMO Rust has the best tradeoff between stdlib and ecosystem libraries I’ve ever seen in a (compiled) language, and that includes serde/serde_json.
> Number of dependencies is correlated with ease of package management.
I disagree with this, Python makes it trivial to add dependencies. [1] And yet Python developers tend to have the mentality of an older generation of software developers, pre-node/npm, and believe in keeping most things light. That starts with the language maintainers having a vision that says this should be possible, by including the batteries most people need for mid-size projects.
This has a compounding effect. People writing the dependencies that your dependencies will use might not even need dependencies outside of the Python standard library. In Rust, those dependencies have dependencies which have dependencies. You get a large tree even when everyone involved is being relatively sane. When one or two projects are not careful, and then everyone uses those projects ...
When I'm writing Rust, I eternally have the infuriating dilemma of picking which dependency to use for a relatively common operation. Having one blessed way to do something is an enormous productivity advantage, because it's never on you to vet the code. The disadvantages of a large stdlib are pretty overstated; even when an outside library outperforms the internal one (like Python's "requests"), the tendency will be for everyone to form a consensus about which outside library to use (like Python's "requests"), because that's how the ecosystem works.
Using technology designed 25 years ago has big advantages for Linux developers as well. Even when I need to add a dependency in Python, chances are I don't have to install one in my development environment because I can experiment with the shared dependencies already installed on my operating system. And I can look to the maintainers of the software I use - and my system maintainers - for implicit advice about which libraries are reliable and trustworthy, and use those. Rust, Go, etc., throw all of this out in the name of convenience for developers, but it's precisely as a developer that I find this mentality frustrating.
[1] People complain about the UX of e.g. pip in certain situations, but (regardless of whether it's a reasonable complaint) it's not the same issue as maintaining a stack of dependencies in the language itself being difficult.
> I disagree with this, Python makes it trivial to add dependencies.
In Python it's surely easier to add dependencies than in C/C++, but it's harder than in Rust and not so trivial. Everyone using your script has to manually run pip to _globally_ install dependencies, unless you use virtual environments, which can take a requirements.txt file; that alone shows how the builtin dependency management is not enough.
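For readers who haven't hit this, the workaround being alluded to looks roughly like the following (the requirements.txt and its pinned package are hypothetical examples; the install step needs network access, so it's shown commented out):

```shell
# Hypothetical per-project setup: dependencies pinned in requirements.txt
# instead of a global "pip install" on every user's machine.
printf '%s\n' 'requests>=2.31' > requirements.txt   # example pin (hypothetical)
python3 -m venv .venv                               # project-local environment
./.venv/bin/pip --version                           # pip scoped to .venv only
# ./.venv/bin/pip install -r requirements.txt       # then: ./.venv/bin/python script.py
```

Compare this multi-step ritual with `cargo add`, which is the parent's point.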
> When I'm writing Rust, I eternally have the infuriating dilemma of picking which dependency to use for a relatively common operation.
Likewise in Python you might wonder which library to use between urllib, urllib2, urllib3 and requests. People not familiar with Python might even fail to guess which ones are part of the stdlib.
Granted that's just one specific example, but it shows pretty nicely how there will always be a library with a better design and a stdlib module that's good enough right now will likely not hold the test of time, ultimately resulting in bloat.
The folks on the wormhole-rs fork (who appear to share your Github organization? [1]) already have NAT punching working 95+% of the time in my testing, so maybe what they're doing could be ported over to the Python implementation.
The Rust implementation over Tailscale worked well for me, except that behind a layer 7 firewall you have to be quick to permit the connection or else it tries the fallback.
> For some more exotic testing, I was able to run my own magic wormhole relay[1], which let me tweak some things for faster/more reliable huge file copies.
The lack of improvement in these tools is pretty devastating. There was a flurry of activity around PAKEs like 6 years ago now, but we're still missing:
* reliable hole punching so you don't need a slow relay server
* multiple simultaneous TCP streams (or a carefully designed UDP protocol) to get large amounts of data through long fat pipes quickly
Last time I tried using a Wormhole to transmit a large amount of data, I was limited to 20 MB/s thanks to the bandwidth-delay product. I ended up using plain old HTTP; with aria2c and multiple streams I maxed out a 1 Gbps line.
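For context, a ~20 MB/s ceiling is exactly what a bandwidth-delay product cap looks like: a single TCP stream can have at most one receive window in flight per round trip. A toy calculation (the window and RTT values are illustrative, not measurements from this transfer):

```python
def tcp_stream_cap_bytes_per_s(window_bytes: float, rtt_s: float) -> float:
    """Upper bound for one TCP stream: one full window per round trip."""
    return window_bytes / rtt_s

# e.g. a 2 MiB receive window over a 100 ms long-distance path:
cap = tcp_stream_cap_bytes_per_s(2 * 1024 * 1024, 0.100)
print(f"{cap / 1e6:.1f} MB/s")  # → 21.0 MB/s, no matter how fat the pipe is
```

Which is why n parallel streams (each with its own window) or a properly scaled window raises the effective cap.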
IMO there's no reason why PAKE tools shouldn't have completely displaced over-complicated stuff like Globus (proprietary) for long distance transfer of huge data, but here we are stuck in the past.
I overall agree, but "reliable hole punching" is an oxymoron. Hole punching is by definition an exploit of undefined behavior, and I don't see the specs getting updated to support it. UPnP IGD was supposed to be that, but well...
Well, with v6 you're down from NAT-hole-punching to Firewall-hole-punching, which in principle should be as simple as arranging the IP:Port pairs of both ends via the setup channel, and then sending a "SYN" packet in both directions at once.
Then, trying to use e.g. TCP Prague (or, I guess, its congestion control with UDP-native QUIC) as a scalable congestion controller, to take care of the throughput restrictions caused by the high bandwidth-delay product.
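The "packet in both directions at once" step is easiest to show with UDP, where no kernel handshake gets in the way. A loopback-only sketch (a real punch would use public addresses learned via the rendezvous channel, and loopback has no firewall to defeat, so this only illustrates the choreography):

```python
import socket

# Each side binds a port and learns the peer's address out of band
# (in real hole punching, via the rendezvous server).
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for s in (a, b):
    s.bind(("127.0.0.1", 0))
    s.settimeout(2.0)

a.sendto(b"punch", b.getsockname())  # outbound packet creates the mapping...
b.sendto(b"punch", a.getsockname())  # ...that lets the peer's packet back in

msg_at_a = a.recvfrom(16)[0]
msg_at_b = b.recvfrom(16)[0]
print(msg_at_a, msg_at_b)  # both sides received: the "hole" is open
```

With TCP the same dance needs simultaneous open (both ends calling connect at once), which is much flakier in practice.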
That's why protocols like this have what Wormhole calls a "mailbox server", which allows two ends separated by firewalls to do secure key exchange and agree upon a method for punching through directly. See also STUN: https://en.wikipedia.org/wiki/STUN
I'm working on a branch that considerably improves the current code and hole punching in it works like a swiss watch. If you're interested you should check out some of the features that work well already.
I (Electrical + Software Engineer) once worked for a physicist who believed that anything less than an order of magnitude was merely an engineering problem. He was usually correct.
I was taught the same. To not care a lot about things under an order of magnitude. Over the years when planning large software projects or assessing incidents and so on, the 1 order of magnitude threshold helped me often.
As a physicist, I think this is correct too :). You don't start to see problems with things under that, unless they are deviations from standard model predictions.
Not as far off as the casual reader might think: 20 MB vs 1 Gb sounds like way more than the actual 160 Mb vs 1 Gb. One shouldn't use bytes and bits together in a direct comparison; pick one or the other, otherwise it's misleading/confusing.
In this case transferring the data at the slow rate would have taken more than a week, so it's no small difference. Actually one side had a 10 Gbps line, so if the other side had had faster networking I could easily have exceeded the limit and gotten the transfer done more than 6x faster.
I used the term "1 Gbps line" just because it's a well known quantity - the limitation of Gigabit Ethernet. The point wasn't that multiplexing TCP can get you 6x better speeds, it's that it improved the speed so much that the TCP bandwidth-delay product was no longer the limiting factor in the transfer.
Yeah, but with magic wormholes, you see, there could be other universes where that's not the case and 160 Mbps is close to 1024 Mbps, or 1000 Mbps, or whatever the cool kids call a gigabit nowadays.
As a protocol, TCP should be able to utilize a long fat pipe with a large enough receive window. You might want to check what window scaling factor is used and look for a tunable. I accept that some implementations may have limits beyond the protocol level, and even low levels of packet loss can severely affect the throughput of a single stream.
A bigger reason you want multiple streams is because most network providers use a stream identifier like the 5-tuple hash to spread traffic, and support single-stream bandwidth much lower than whatever aggregate they may advertise.
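That per-flow pinning is easy to model: the router hashes the flow identifier and every packet of a stream takes the same link. A toy stand-in (sha256 in place of whatever hash real hardware uses; the addresses are documentation-range examples):

```python
import hashlib

def flow_bucket(src: str, sport: int, dst: str, dport: int,
                proto: str, n_links: int = 4) -> int:
    """Toy ECMP-style hashing: one 5-tuple always maps to one link."""
    key = f"{src}:{sport}->{dst}:{dport}/{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_links

# A single stream stays on one link; opening more streams (fresh source
# ports) spreads the transfer across links.
buckets = {flow_bucket("198.51.100.7", p, "203.0.113.9", 443, "tcp")
           for p in range(50000, 50016)}
print(sorted(buckets))
```

So even with perfect window tuning, one stream can never exceed a single member link of an aggregated path; multiple streams can.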
Yeah, that's the issue. I didn't have root permissions on either side. Moreover, a transfer tool should just work without requiring its users to have expert knowledge like this.
In this case, I checked the roundtrip ping time and multiplied it by the buffer size, and it agreed with the speeds I was seeing within ~5%, so it was not an issue with throttling. Actually, if I were a network provider interested in doing this, I would throttle on the 2-tuple as well.
Do you have examples of a few popular sites that support them?
I honestly don't know that I've seen one, unless the Apple / Google / Github / Gitlab "single sign on" links have all quietly switched to using passkeys under the hood (I thought they were all OAuth 2.0). Would be frustrating if so, because it wouldn't provide for custom / hardware implementations of the standard.
Seems like the overwhelming majority of the listed sites require you to already have an account, then go to a difficult-to-find URL on a settings page, enroll a passkey, and then you can login with the passkey. So granted, a handful of sites support it, but I have trouble believing the Google number. Maybe that's including people using their device to sign into Google itself?
Putting myself in the position of a typical user, passkeys haven't "replaced" passwords until I don't have a password for Home Depot or what have you. Otherwise there's still a password I have to write down or remember somewhere.
I'm not even here as a hater - I do like the idea of cryptographic authentication replacing passwords - but I'm just saying I've seen zero real world uptake of this so far.
Er, Google uses Passkeys and yes is an OAuth2/OIDC IdP for federated sign-in. I think you're missing the point though. I can still login to Google and Tailscale without ever typing a single password, trusting my url bar/eyes, sharing any secret material, etc.