I want to be supportive, and I believe this solves a real issue, but this is giving me serious pause:
> We take security very seriously, especially when it comes to our users. This is why we offer end-to-end SHA-256 encryption
You take security seriously, but are confusing pretty basic and fundamental concepts of encrypting vs hashing.
Given that the point of this service is to expose local services to the internet, and that it only provides compression benefits if I expose the plaintext traffic of my service, I'm not seeing much information that gives me confidence you truly understand the importance of doing what you are doing securely and safely. Not to mention confidence that you can defend against attackers, given what an attractive target this makes you for passively tapping traffic or pivoting into your customers.
This was a mistake on our website, which has since been removed.
1. We'll be open sourcing our client in the coming weeks so you can check out our code yourselves.
2. We will be offering a self-hosted version which will decouple you entirely from our infrastructure and let you provide your own SSL certificates.
3. Lynk can forward traffic to your encrypted services - which of course would mean losing out on compression benefits, but Lynk is designed primarily for quick development work like testing out a Stripe or Github webhook on your local machine, or demoing your webapp to a remote client. For production use we recommend a reverse proxy or self-hosting Lynk.
You allow the user's machine to obtain a valid certificate for a subdomain of lynk.sh? (This would seem to me the only way to accomplish E2EE, given the example of connecting over TLS to a lynk.sh host, and it also seems very unlikely.)
You could use this "for dev" and "for prod" split as a marketing/business opportunity: distinct bullet points of use cases in "For Dev" and "For Prod" sections. For prod there are also add-on options that would not make sense for dev and could provide additional value (custom support, pre-built packages for distros, traffic/usage analytics about the tunneled services, etc.)
This product offering is a bit confusing. Not that I don't get the offering - but that the marketing around it confuses me.
It's designed for forwarding applications for development purposes? But then why the in-depth monitoring and alerting (something that seems to imply a production system might be running on top of this)?
The "meaning your website performance won't take a hit no matter where your users are" doesn't really make sense in this context either. My "users" are hopefully my coworkers/the client I'm demoing to. And if I'm tunneling up my site over wifi from my laptop, a "highly available" and "globally distributed" load balancer setup really doesn't make sense, nor is it relevant for the development use-case.
And if it's "6x more performant" than ngrok- is that really relevant if you're using it for a trivial amount of traffic in a development environment? Why is this a selling point to me?
Further - if I'm tunneling TCP, does "end to end" really matter? If I'm tunneling something that's not encrypted by my application (let's say a plaintext redis connection), the link between this service and the person connecting to the tunnel is still unencrypted. Which means the whole "end to end" marketing really doesn't make much sense - unless of course I don't trust the network I'm running on but for some reason trust the network the person accessing the tunnel is on (a super unlikely situation...)
The example also irks me - exposing a database directly to the internet (though if it's a database for development, it's hopefully fine?)
Is this trying to compete with ngrok, or something like Cloudflare's Argo tunnels? Because the marketing material leads me in two different directions, which is pretty confusing.
The standard answer to this problem is ngrok, so it'd be useful to have a direct comparison. I couldn't find any encryption code in the GitHub repos you have exposed (I didn't look hard); obviously, since you've documented "SHA-256 encryption", you should probably write that up a lot better. I assume that at least the client side of this will be open source, since people are going to be squeamish about running closed-source desktop tunneling software.
If I can just offer one bit of feedback, if you just type any email and password in the "Sign in / Register" page, it just creates an account for you if it doesn't exist.
There's a ton of reasons you don't want that, and should probably have a separate "sign up" flow for email/password login. Here's a few:
1. It's not at all clear you can actually do that. One guy in the comments below thinks you can only sign up using Github/Gitlab/etc.
2. Many of us have multiple emails, what if I don't remember what email I used for your service? Every time I try the wrong one, it'll just create a separate account, and it'll take me a few minutes to realize this isn't my account, but I just created a new one.
3. No double "password confirmation". Some sites skip this step now, so I guess it's not required, but not having it is part of why people will think this isn't a sign up field.
That makes a lot of sense, thanks for pointing this out! I assumed that asking people to verify their emails would be enough but it never hurts to make things clear.
We're excited to announce that the Lynk Beta is now live!
We've been hard at work building out Lynk's tunnelling protocols to make them faster, more stable, and all-around better. We're happy to announce that our tunnels perform up to 6 times faster than ngrok (source: https://medium.com/@shivanshvij/building-a-better-ngrok-dbc1...) and support technologies such as HTTP/2 (with HTTP/1 fallback) and WebSockets.
Check out our open beta and documentation here: https://lynk.sh
This is great. Congratulations on the launch. It reminds me of DERP [0].
A few questions:
1. What are the top few things in the protocol that improved speed up to 6x in comparison to ngrok?
2. Is Lynk based on your earlier work on Parasite?
3. The website claims "CDN-like performance" with "highly-available load-balancers" over the tunnel, and so:
3a. Is the limit on number of connections (100 to 200 per minute) a soft limit? Can it be raised? If so, by how much?
3b. Do you have servers across the globe like CDNs do? Or, plan to?
4. I see a flat $5 fee for enterprise usage and a notice about "no tricks pricing"... Is bandwidth something you monitor for "fair-use" and might charge extra for? Or, is bandwidth simply unlimited?
5. "Lynk Web UI for Request Capture and Playback" Neat. And so:
5a. This works for both TCP and HTTP? Any video/screencast that shows this in action?
5b. Is the HTTP tunnel a reverse proxy, or in fact an HTTP CONNECT tunnel, or something else?
5c. Does TCP tunnel over SSH? Or HTTP? Or both? Or neither?
I'm curious about the note that you apply compression. Are you proposing your customers rely on your encryption rather than doing their own E2E encryption? If not, how significant are the compression savings if you're just compressing encrypted data?
Yes, we are doing E2E encryption and compression between the Lynk Infrastructure and Lynk Clients. As with Ngrok, for a quick hosted tunnel our encryption will be more than suitable, and we are working to release a self-hosted version soon that will allow you to bring your own certificates (for both ingress traffic and the traffic between the Lynk Client and the Lynk Infrastructure).
You didn't really answer the question about the value of the compression. There are 2 options:
1. If people are using their own E2E encryption below your tunnel, then your compression provides essentially zero value, since properly encrypted traffic should not have repeating patterns to compress.
2. If you are telling people to not use their own E2E encryption, and instead rely on the Lynk tunnel's E2E encryption (with Lynk applying compression before encryption), then people are exposing their raw traffic to you, a seemingly random person on the internet.
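The first point is easy to demonstrate. A minimal sketch (not Lynk's actual pipeline): compare how well zlib compresses repetitive plaintext versus random bytes, which well-encrypted ciphertext is indistinguishable from.

```python
import os
import zlib

# Repetitive plaintext, like a typical JSON API response.
plaintext = b'{"status": "ok", "items": []}' * 100
# Random bytes stand in for ciphertext: good encryption output
# has no repeating patterns for a compressor to exploit.
ciphertext_like = os.urandom(len(plaintext))

pt_ratio = len(zlib.compress(plaintext)) / len(plaintext)
ct_ratio = len(zlib.compress(ciphertext_like)) / len(ciphertext_like)

print(f"plaintext compresses to {pt_ratio:.0%} of original size")
print(f"ciphertext-like data compresses to {ct_ratio:.0%}")
```

The plaintext shrinks to a few percent of its original size, while the random data doesn't shrink at all (zlib actually adds a little framing overhead), which is why compressing below your own E2E encryption buys nothing.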
The client compresses responses from your local services before they're encrypted and sent to the Lynk infrastructure. This application is designed primarily for development work and takes the hassle out of setting up a reverse proxy or dealing with port-forwarding.
If your local application provides its own encryption (ie, it's running over HTTPS), then your traffic won't be exposed to Lynk. In this scenario, you're right - there would be very little compression gain.
There's an important security tradeoff here. Compress-then-encrypt leaves you vulnerable to attacks like CRIME[1]. How much this matters depends on the application.
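A toy sketch of the leak behind CRIME-style attacks (the `session=` secret and the guesses here are invented for illustration, not anything from Lynk's protocol): when attacker-controlled data is compressed together with a secret, the compressed length alone reveals whether a guess matched, because the compressor deduplicates the repeated substring.

```python
import zlib

secret = "session=s3cretvalue"  # hypothetical secret in the stream

def compressed_len(attacker_guess: str) -> int:
    # Attacker-controlled input compressed alongside the secret,
    # as happens when compression runs before encryption.
    body = (attacker_guess + "\r\n" + secret).encode()
    return len(zlib.compress(body))

right = compressed_len("session=s3cretvalue")  # matches the secret
wrong = compressed_len("session=q8vZk2mTr4w")  # same length, no match

print(right, wrong)
```

The correct guess compresses smaller because the whole 19-byte repeat collapses into one back-reference; the attacker never needs to break the encryption, only observe ciphertext sizes.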
HTTP will compress traffic from the Lynk endpoint, but we also compress the traffic from the client to that endpoint, which helps save bandwidth and keeps things snappy even in slow network conditions.
I honestly don't see the use of this. Why would anyone want to expose a development environment to the internet, especially in the age of devops and reproducible environments? 98% of all my development happens over 127.0.0.1 now. I have zero need to expose a service when I can replicate multiple sites on a LAN with the lightweight environments containers and VMs give me.
One of the original popular use cases for services like this was if you're developing against a third-party service that wants to send you webhooks.
But it's also useful if you want to run something locally and show it to a friend or co-worker. The co-worker bit is especially useful now with people working from home.
Regardless, this space is pretty crowded, so I'm not sure what a new offering could do to differentiate. I'm skeptical of their "6x faster than ngrok" claim, and even if it is true, I don't think this is a use case where that metric really matters all that much.
(Full disclosure: the founder of ngrok is a friend of mine, so I'm certainly biased here.)
If you are working on something and want to show a friend, or if you want to open a project on another device that isn't necessarily on the same network. And you don't have to pay for a droplet or set up a PaaS if you're in the middle of the nitty-gritty of development.
How are you handling connections? I think the original ngrok had a control connection and then opened a new connection for each request. I think they have now switched to multiplexing. I've heard multiplexing multiple HTTP connections over a single HTTP connection can have issues if there is packet loss.
For example, if you have a 1% chance of dropping a packet, and your conversation involves 1 packet, then if you have 10 conversations each with its own TCP connection you will average 0.1 conversations being stalled from packet loss.
But if you multiplex those 10 conversations over 1 TCP connection then you will average ~ 1 conversation being stalled from packet loss.
I'm guessing this is not such a big problem with typical packet loss and a low number of conversations but if you are pushing a lot of conversations simultaneously over a single TCP connection then it could become noticeable.
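The back-of-envelope numbers above can be written out. This is a crude model (one packet per conversation, and any loss on the shared connection treated as stalling every conversation queued behind it, since TCP delivers bytes strictly in order), not a measurement of any real tunnel implementation:

```python
p_loss = 0.01   # 1% chance of dropping a packet
n_convs = 10    # 10 one-packet conversations

# Separate TCP connections: each conversation stalls independently,
# only when its own packet is lost.
expected_stalled_separate = n_convs * p_loss  # 10 * 0.01 = 0.1

# One multiplexed TCP connection: head-of-line blocking means a
# conversation can stall if ANY of the 10 packets on the shared
# connection is dropped (a pessimistic upper bound).
p_any_loss = 1 - (1 - p_loss) ** n_convs
expected_stalled_multiplexed = n_convs * p_any_loss  # ~0.96

print(expected_stalled_separate, expected_stalled_multiplexed)
```

So with these assumptions the multiplexed setup averages roughly ten times as many stalled conversations, matching the ~1 vs 0.1 figures above; with realistic loss rates and few conversations the absolute numbers stay small either way.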
> We only ask for personal information when we truly need it to provide a service to you. We collect it by fair and lawful means, with your knowledge and consent. We also let you know why we’re collecting it and how it will be used.
Be specific on this in your privacy policy. What information do you collect?
I echo the sentiments that the features you promote seem overkill for the "primary use-case" of local development.
On another note I suggest you improve the accessibility of your pages. The contrast of the text in the navigation bar and on the FAQ page makes it hard to read, and is below the WCAG standards.
Interesting. But I hate getting random endpoints; that means I need to keep updating configs. I'd rather use a self-hosted solution - there are some easy ones out there.
We'll be releasing an update soon that will enable both custom domains (so you can bring your own) and reservable subdomains. Furthermore we plan on releasing a self-hosted version and open-sourcing the client.
Author of https://inlets.dev here. inlets is open-source (client and server) and can be self-hosted, with link encryption via TLS. We use WebSockets, which can penetrate almost any firewall/NAT situation. It's also important to note the Kubernetes and cloud integration for provisioning your own exit hosts. Take a look. TCP support is in the PRO edition, with a personal license available.
We'll be releasing a self-hosted server and client soon that will allow you to decouple completely from our infrastructure and bring your own SSL certificates.
However, to be clear, this is intended primarily for local development use, for traffic that isn't necessarily sensitive.
It is all about trust, I guess. Many people trust AWS, Microsoft, Digital Ocean, Dropbox, Apple, and others (FB should be mentioned here as well) with far more than just their local test setup (which is what I use ngrok for).