
The reason iodine works is not that any traffic to any IP on port 53 is allowed, but that traffic to one specific IP (the local resolver) is allowed, and that resolver forwards DNS queries.

Let's say the plane's DNS server is 192.168.1.53 and your DNS server is 1.1.1.53.

You can't talk to 1.1.1.53 because that's blocked, so running wireguard on 1.1.1.53:53 doesn't help you. Instead, you run iodine there and delegate "*.mydomain.com" to 1.1.1.53 as its authoritative DNS server.

Now, you can talk to 192.168.1.53 to make DNS queries, and the DNS server there, which isn't firewalled, will forward them to 1.1.1.53:53 and proxy back the responses.

Obviously, the plane's DNS server won't speak wireguard, nor forward wireguard for you.

That's the use case for iodine: you have access to some local DNS server which will forward requests for you, but no access to the public internet, so you can reach your own DNS server indirectly and do IP-over-DNS that way.
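
To make that concrete, here's a rough sketch of the setup (the domain, password, and tunnel IPs are illustrative, and iodine's flags may differ slightly by version):

    # DNS delegation so queries for t.mydomain.com end up at 1.1.1.53:
    #   t.mydomain.com.   NS  tun.mydomain.com.
    #   tun.mydomain.com. A   1.1.1.53
    #
    # On the server at 1.1.1.53:
    $ iodined -f -P hunter2 10.0.0.1 t.mydomain.com
    #
    # On the laptop, talking only to the plane's resolver:
    $ iodine -f -P hunter2 192.168.1.53 t.mydomain.com
    #
    # Queries flow laptop -> 192.168.1.53 -> 1.1.1.53 as ordinary DNS queries,
    # and you get a tun interface (10.0.0.x) to route whatever you want over it.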


What feature do you need that isn't in the open source version?

Why not fork it and implement that feature, and then use that?

If you don't want to deal with a company trying to extract profit from you, use the open source version. That's the way to ensure your own freedom.


> What feature do you need that isn't in the open source version?

ODBC support; Cassandra, Redis, and Azure (Table) Storage support; support for the kinds of obsolete RDBMSes and "4GLs" you see in large enterprises and small mom-and-pop back rooms (Progress, Clipper, AdvantageDB, ElevateDB, etc.); and SQL script debugging for Postgres.

Currently I have separate tools/clients/IDEs for all the different data systems I target (e.g. SSMS, MySQL Workbench, Azure Storage Explorer, Azure Data Studio, Excel PowerQuery, and LINQPad on .NET Framework 4.x in both x86 and x64 flavors, because ODBC and OLE-DB are like that), plus a modest set of portable-ish VM images of obsolete Windows installs where I can run MS Access 2003 or some Clipper derivative. I'm also still running Firebird and even FoxPro 9 in another VM somewhere.

...so yes, having a single tool which handles 85% of my tasks without needing to juggle VMs and multiple bloated Electron apps would be an improvement for me, and worth paying money for.

> Why not fork it and implement that feature, and then use that?

"Fork it" is not a reasonable suggestion considering the amount of work involved (see above).

> If you don't want to deal with a company trying to extract profit from you, use the open source version. That's the way to ensure your own freedom

As I said, I'm perfectly fine with throwing money at DBeaver. I'm just expressing my frustration that I think they're being callous by not offering perpetual offline licenses for DBeaver Pro, for any amount of money. The fact that they used to[1] but don't anymore suggests they were getting disappointing repeat business from deep-pocketed "enterprise" customers, but I'm disappointed that their solution to this problem of theirs leaves my money on the table, because now they don't have any product license I can work with.

[1] According to https://dbeaver.com/docs/dbeaver/Differences-between-license... - they stopped doing perpetual licenses after v23.3.


Your reasons are valid and make complete sense to me. I wonder if you could write to them and explain your use case and offer to pay for a perpetual license? No idea if it'll work, but worth a shot maybe?

If you try this, do update and let us know what they say.


Can you explain more about the security vector you're talking about here? I just don't see it.

Like, as far as I can tell, grub or whatever is a bundle of filesystem and device drivers, with enough info to then execute a kernel.

Linux also is a bundle of filesystem and device drivers, but better tested ones I think.

To me, it seems like using the kernel's filesystem drivers, which you have to use already anyway once you've booted, means you have to trust fewer total implementations of these drivers, so it seems more secure.

What attack or threat vector are you trying to talk about here?


It is the same security abstraction as not allowing network socket support in process ID 1.

(Looking at you, systemd.)

You don’t allow access to the bootloader from any kernel, thereby affording relative security in starting the 2nd stage (kernels). One abstraction is that the TPM, et al., can lockstep assurances on each stage. At a minimum, you have a bootloader, in case of SNAFU/FUBAR.

Bricking (or worse, a malicious kernel) seems more of a possibility with the upcoming Red Hat design.


Sorry, I still don't follow.

> You don’t allow access to the bootloader from any kernel, thereby affording relative security in starting the 2nd stage

You install and update the bootloader and its configuration from your running linux system.

In this new world, you would also update the kernel from your running linux system. That's the same, right? To update the kernel, you need to update bootloader configuration anyway, so it's obviously required that the running system can at least update the kernel, and that's true either way.

> Bricking (or worse, a malicious kernel) seems more of a possibility with the upcoming Red Hat design.

If your kernel is malicious, it's game over whether or not you're using grub, right? Like, that doesn't seem like a new threat model.

I don't really care about bricking because, frankly, I've made my system unbootable via grub bugs more often than via kernel bugs, and the kernel developers seem to take these bugs more seriously. So bricking feels like a possibility with either design, but less likely without grub.

Either way, I need to have a liveusb off to the side to fix these issues.


/boot should never be mounted.

If you look at how cgo is linked, you'll see it's a magic comment in go source code: https://github.com/xthexder/go-jack/blob/bc8604043aba0b6af80...
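
For anyone unfamiliar, that "magic comment" looks roughly like this (a minimal sketch of the mechanism, not the linked file verbatim):

    package jack
    /*
    #cgo LDFLAGS: -ljack
    #include <jack/jack.h>
    */
    import "C"

The #cgo directives in the comment immediately above import "C" are what tell the go toolchain to look for jack's header and library on the build host.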

If bazel allows impurely using that dependency from the host, like bazel allows for python, then you'll be able to run into similar issues.

Admittedly, you'll usually get compilation errors for undefined references in compiled languages (modulo dlopen), so at least you'll get a build error instead of a runtime error like you do in python.

The solution for both Go and Python is for bazel to also control the system dependencies, i.e. to force you to run all code in a container sandbox bazel controls, force you to specify all external dependencies from linux kernel version up to go compiler version, and build all of that itself.

In other words, bazel is a lacking implementation of NixOS.


> If bazel allows impurely using that dependency from the host, like bazel allows for python, then you'll be able to run into similar issues.

That’s not how Bazel + CGO works at all. It sounds like you are making some guesses here but those guesses turned out to be wrong, sorry.

Go has an internal build system as well as a compiler. The build system drives the compiler. When you use Bazel with Go, you’re bypassing the Go build system entirely. Bazel directly invokes the Go compiler. This means that any of those comments will also get ignored. (Generally, this is how Bazel works for other languages too. Bazel + Rust does not use Cargo. Bazel + Java does not use, like, Maven or Gradle.)

You have to specify the dependencies in your build file. Let's say you have a Go library abc, with a C library xyz.

  go_library(
    name = "abc",
    srcs = [
      "abc.go",
    ],
    cdeps = [
      "//path/to/xyz",
    ],
    cgo = True,
    importpath = "path/to/abc",
  )
> In other words, bazel is a lacking implementation of NixOS.

Bazel and NixOS are both good tools for making reproducible builds.

They’re even a good match for each other—grab your compiler from NixPkgs, and build the rest of your project in Bazel. You get the benefit of Bazel’s fine-grained dependencies (Nix has very coarse dependency management), and you can get some specific version of GCC built for both Linux and macOS by tapping into NixPkgs. A marriage made in heaven. (Except for the fact that you are now using two tools which both have very steep learning curves.)
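
For the curious, a minimal sketch of that combination (nixpkgs attribute names and mkShell arguments vary a bit between channels, so treat this as illustrative):

    # shell.nix: a dev shell providing the toolchain Bazel will find on PATH
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      packages = [
        pkgs.bazel  # the build tool itself
        pkgs.gcc    # the C/C++ toolchain Bazel's autodetection picks up
        pkgs.go     # if you also want the Go SDK to come from nixpkgs
      ];
    }

Then something like `nix-shell --run 'bazel build //...'` gives everyone the same compilers, regardless of what the host happens to have installed.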

There are a lot of interesting similarities between Bazel and Nix, which is not surprising, since they are solving similar problems.


> That’s not how Bazel + CGO works at all. It sounds like you are making some guesses here but those guesses turned out to be wrong, sorry.

> This means that any of those comments will also get ignored

I cloned this example: https://github.com/bazelbuild/rules_go/tree/634fc283f8d84ea6...

And then ran "go get github.com/xthexder/go-jack" and added an 'import _ "github.com/xthexder/go-jack"' to 'cmd/roll.go'.

    $ bazel run //:gazelle-update-repos
    $ bazel run //:basic-gazelle roll
    __main__/external/com_github_xthexder_go_jack/jack.go:11:10: fatal error: jack/jack.h: No such file or directory
    compilation terminated.
    compilepkg: error running subcommand external/go_sdk/pkg/tool/linux_amd64/cgo: exit status 2

    $ apt-get install libjack-dev
    $ bazel run //:basic-gazelle roll
    Number rolled: 80

Sure looks like you're wrong. Bazel obviously didn't ignore those comments because it errored out on trying to find the include. (And of course it didn't ignore them, those comments are preprocessor instructions for the go compiler, which bazel runs under the hood). It obviously didn't need me to specify cdeps because it found the C dependency on my host when I installed it without changing a single bazel-related file.

It looks like your understanding might be wrong, sorry.


You’re making a complaint about Gazelle. If you import third-party packages with C dependencies via Gazelle then, obviously, the two choices are that it breaks hermeticity or that you vendor the C code.

Bazel does ignore those comments. Gazelle does not.


> Bazel does ignore those comments. Gazelle does not.

I really don't understand what you're saying.

gazelle does not seem to do anything related to the C dependencies there. The full diff gazelle generated just changes "deps" in 'BUILD.bazel' files to include the library, and adds a "go_repository" to "deps.bzl". Not that it would matter anyway; gazelle is part of bazel.

There are no references to C dependencies anywhere in what gazelle produced. I could have hand-written those changes just as easily, without gazelle.

I'm only using gazelle here because that's the only go example that upstream has in the repo.

The "bazel build" part is very clearly the part that is looking for these C files, and doing so in a non-hermetic way.

It's possible to make it hermetic, sure, if you pay attention and vendor things or such, but it's obviously not required.

That's all I was saying: just as you can have python break hermeticity and depend on external C libraries, you can have go break hermeticity and depend on external C libraries. It's possible to use bazel such that it's hermetic, but it doesn't stop you from doing it wrong.

That's all this thread is about: whether it's also possible to make these non-hermetic references in languages other than python.

Like, if you go back and read my comment and your comment a few up, you'll see that my original claim was just that you could impurely reference the host "libjack-dev", which I have very clearly shown you can, while your claim was that it's impossible to do that, which it very obviously isn't.


> There are no references to C dependencies anywhere in what gazelle produced. I could have hand-written those changes just as easily, without gazelle.

I get that this scenario is confusing but this is incorrect. The problem is that you are running Gazelle to generate the build files for third-party dependencies, and not just the build files in your own workspace.

  $ cat bazel-basic-gazelle/external/com_github_xthexder_go_jack/BUILD.bazel
  load("@io_bazel_rules_go//go:def.bzl", "go_library")

  go_library(
    ...
    cgo = True,
    clinkopts = ... "-ljack" ...,
    ...
  )
This file is generated by Gazelle. Gazelle is the tool you are running when you run this command:

  bazel run //:gazelle-update-repos
When you run that command, it generates go_repository() rules, which are a part of Gazelle. There are three things at play here—there’s Bazel, rules_go, and Gazelle.

Bazel by itself doesn’t have any Go rules. You need to use rules_go for that. The rules_go rules completely ignore any of those comments you’re talking about—all libraries you link in are specified by things like cdeps and clinkopts.

Gazelle is what fills those in, and go_repository() is a part of Gazelle. You can see that in the imports:

  # This is part of @bazel_gazelle, not @io_bazel_rules_go.
  load("@bazel_gazelle//:deps.bzl", "go_repository")
There are a few practical suggestions I have for dealing with this:

- Nix. Run Bazel inside a Nix environment that contains your C dependencies. (You can have Bazel bring in C dependencies using Nix, too, if you can set aside multiple weeks to figure out how to do that. I wouldn’t.)

- Make your Bazel more hermetic. It’s not completely hermetic out of the box, but you can configure it to be more strict.

- For pure Go projects, consider disabling CGO with --@io_bazel_rules_go//go/config:pure (see the sketch after this list).

- For third-party dependencies which use CGO, vendor those dependencies. You don’t need to vendor everything, just the ones that use CGO.
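
As a sketch of that CGO-disabling option (assuming rules_go's config flags behave as documented; adjust to your setup):

    # .bazelrc: build everything without cgo
    build --@io_bazel_rules_go//go/config:pure
    #
    # or per target, in a BUILD file:
    # go_binary(
    #     name = "tool",
    #     srcs = ["main.go"],
    #     pure = "on",
    # )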

My own personal experience is that Bazel has a damn steep learning curve, which is why I don’t like engaging in Bazel apologetics. Bazel is reproducible when used/configured correctly, and it’s way easier to use/configure Bazel to do that than it is to configure other systems—but it’s still a pain in the ass.

And my own personal experience with the Go ecosystem is that CGO usage is uncommon enough that, most of the time, I can get work done with CGO turned off. YMMV.


Okay, yup, you're right, the C dependency stuff is gazelle.

I'll still claim the meat of what I said was correct:

> If bazel allows impurely using that dependency from the host, like bazel allows for python, then you'll be able to run into similar issues.

> Admittedly, you'll usually get compilation errors for undefined references in compiled languages (modulo dlopen)

That's the important bit of my comment, and it does indeed seem true that bazel, as commonly used for go with gazelle, allows and even facilitates those impure references to host dependencies. So even though I didn't understand the details of it fully, my overall point seems entirely accurate.

I appreciate you taking the time to explain things in more detail and share some advice/wisdom on dealing with this!


They say right there, "To hopefully mitigate the impact of this".

Having a mutex right there in the hot path of Instant::now is not great for performance. You expect getting monotonic time to be very fast generally, and some code is written with that assumption (e.g. tracing code measuring spans).
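
As a trivial sketch of the kind of code that makes that assumption (the workload here is just a stand-in):

    use std::time::Instant;
    
    fn do_work() { /* stand-in for the code being traced */ }
    
    fn main() {
        let start = Instant::now(); // would contend on that mutex in the hot path
        do_work();
        println!("span took {:?}", start.elapsed());
    }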


Ah, that's fair. Didn't realize there was a significant performance impact.

Eventually Rust just gave up. Here you go, here's your "monotonically increasing clock", courtesy of your operating system. It might go backwards; try asking your vendor to "fix" that and see if they laugh at you or just ignore you.

Sometimes the OS is broken in a way Rust can fix. For example, Rust's current std::sync::RwLock actually does what you wanted on Windows, while the C++ std::shared_mutex doesn't. It's documented as working, but it doesn't work, because the OS is broken and the fix is just on their internal git "next release" branch, not in the Windows you or your customers are running.

But sometimes you're just out of luck. Some minority or older operating systems can't do std::fs::remove_dir_all correctly, so, too bad, you get the platform behaviour. It's probably fine, unless it isn't, in which case you should use a real OS.


> Safari is actually part of the solution ...

> Google, like microsoft, <1-3>

If you're going to complain about 1-3 for google and ms, I don't think you can praise safari in the same breath.

Apple's abused their position with the iPhone to make safari relevant, and unlike Chrome and IE, users can't just install another browser.

Apple's behavior is the only reason I can't run the addons I've written for firefox on iOS (they run _fine_ on android of course), why I can't run uBlock origin on iOS, and so on.

Apple's behavior on iOS is far more egregious than anything microsoft or google has ever done.

I never once had to run IE or Chrome unwillingly since I could always install netscape, or mosaic, or firefox.

I'm forced to run Safari, unable to decently block ads, unable to use the addons I've written, unable to fork and patch my browser to fix bugs, and I've generally had my software freedoms infringed... and if I don't run safari, then I can't talk to my family group chat (no androids allowed, sms breaks the imessage group features too much) or talk to my grandma, who only knows how to use facetime.

I wish so much I could use a phone with firefox, but I can't justify having a spare iPhone just to talk to my family, so I'm kinda forced to suffer through safari, held hostage by apple's monopolistic iMessage behavior.

The only thing that comes close to Apple's behavior is Google's campaign to force Chromebooks upon children in classrooms, requiring them to use Chrome, but at least Google isn't holding their grandmothers hostage... and managed work/school devices are already kinda expected to have substantially less freedom than personal devices, so it feels much less egregious.


Maybe I missed something, but your arguments seem to be about how Apple's locking down of iOS/iPadOS and Safari is harmful to user freedom. That's a very different argument from the one the person you're replying to was making. They were saying that the popularity of Apple's mobile devices, coupled with their only running Safari, holds back a Chrome monopoly in the browser space. If people don't support Safari they lose out on a large portion of users.

> If people don’t support Safari they lose out on a large portion of users.

If people don't support Safari, it's because the free market has spoken and overwhelmingly chooses alternative options: https://gs.statcounter.com/browser-market-share/desktop/worl...

The story would be different if Apple weren't miserly with their native APIs and app distribution. But this is indeed a harmful and competition-restricting decision, even in Mozilla's opinion: https://mozilla.github.io/platform-tilt/

So I think we can safely assume that Apple's policy harms browser diversity by forcing their users to support a single minority option. If their users preferred a more feature-filled browser, we would never know; they aren't sincerely presented an alternative choice. If Apple wants their users to defend Safari, maybe they should invest in it until their browser (or Operating System, for that matter) competes with Chrome. Until then, they're promoting a megalomaniac solution and being a sore loser about it at the same time.


> because the free market has spoken

You mean the company dominating the internet heavily promoted and pushed users towards its own browser.

> If their users preferred a more feature-filled browser

Where by "feature-filled" you mean "all the Chrome-only non-standards because free market or something"


> You mean the company dominating the internet heavily promoted and pushed users towards its own browser.

If the company dominating their hardware did any better, maybe the majority of them wouldn't leave Safari. If Apple doesn't want to build a competitive browser, then they need some (non-anticompetitive) strategy to retain their users. Otherwise we're doing the Microsoft Shuffle again.

> Where by "feature-filled" you mean "all the Chrome-only non-standards because free market or something"

No, at this point I really do just mean "feature-filled". iOS has notoriously restrictive APIs and it makes full sense that those users would want a browser to do the things Apple prevents their iPhone from doing natively. At the rate Apple's heading, I wouldn't be surprised if next-gen iPhone apps were just PWAs that hook into WebGPU. Big business has no reason to keep living under Apple's thumb, and market regulators can't justify it in Europe, Japan, or even the United States.


> If the company dominating their hardware did any better

Apple doesn't dominate all of hardware. Google, however, dominates major access points to the internet, and has used them to aggressively promote its browser.

> No, at this point I really do just mean "feature-filled".

I doubt it

> iOS has notoriously restrictive APIs and it makes full sense that those users would want a browser to do the things Apple prevents their iPhone from doing natively.

Ah. So you are talking about Google-only non-standards

> I wouldn't be surprised if next-gen iPhone apps were just PWAs that hook into WebGPU

Android has been the dominant OS for over a decade now. It has none of the real or perceived limitations of iOS. We've yet to see this amazing PWA future we hear so much about.


> We've yet to see this amazing PWA future we hear so much about.

Then maybe it's time you gave Android another try. Chrome runs on mobile just as well as it does on desktop, so any of the web apps you use on your computer work fine on a phone too. It makes modern Safari look like a tofu browser substitute by comparison.


> Then maybe it's time you gave Android another try. Chrome runs on mobile just as well as it does on desktop

So?

> so any of the web apps you use on your computer work fine on phone too.

So where's the amazing PWA future we hear so much about? All the "amazing web apps" we hear about are shitty, slow monstrosities that can barely display a few lines of text without jank.

The very few actually great apps which are made at great engineering effort and expense (like Figma) don't run in full mode on mobile, for obvious reasons.

So, my question remains and you haven't answered it.

Edit: There are some web apps here and there which are surprisingly good. E.g. I'm quite impressed by Foodora's app. And it runs well on iOS, too. However, 99.9999999% of the "great PWA future" is just garbage despite the "Chrome runs just as well on Android".


Orion Browser includes experimental Firefox extension support on iOS: https://kagi.com/orion/

And it works really well from what I see.

Orion also has a built-in (simpler) implementation of the most important Firefox extension for me, and I assume for many others: Tree Style Tabs. Orion's built-in version doesn't have the full customizability of TST, but it works and presents tabs nested under the tab they descend from, which is the most important feature.


nickel's performance definitely will need some work. It's several orders of magnitude slower than nix for even simple tasks:

    $ time nix eval --expr "builtins.foldl' (l: r: if l > r then l else r) 0 (builtins.genList (x: x) 5000000)"
    0.839s
    memory: 627 MB
    $ time nickel eval <<<"std.array.fold_left std.number.max 0 (std.array.generate (fun x => x) 5000000)"
    1:20.06s
    memory: 10540 MB
    
I know we don't actually need to deal with 5 million element lists in practical code, but this is also not nickel's most pathological case, and it's easy to write fairly reasonable and normal code which is prohibitively slow.

I rewrote some code I had lying around from nix, where it evaluated in a mere 3s or so (using perhaps a GiB of memory), into nickel.

At first the nickel version crashed after using all my memory, but with hours of optimization, I managed to get it to run in just under 2 hours with only 60GiB of memory usage.


That looks bad indeed.

I must say that contracts, while very nice for error reporting and all, also bring additional challenges related to performance (cf. https://dl.acm.org/doi/10.1145/2914770.2837630). Our stance is that it's a reasonable trade-off for configuration, but the cost would be unacceptable in a general-purpose language.

Also, it's true that we've focused on language design and tooling before performance, and we then work on performance when users report unacceptable numbers for their concrete use cases.

There's been some improvement recently-ish. For example, with the latest Nickel, I get (by inlining `max` to mimic the Nix version):

    $ time nix eval --expr "builtins.foldl' (l: r: if l > r then l else r) 0 (builtins.genList (x: x) 5000000)"
    0,74s user 0,11s
    $ time nickel eval <<<"std.array.fold_left (fun x y => if x > y then x else y) 0 (std.array.generate (fun x => x) 5000000)"
    36,63s user 3,40s

Nothing to be too excited about, though. Although it doesn't make it unusable in small and medium-sized projects (we have users with 120kLoC of Nickel; while they'd definitely like it to be faster, it's still usable), I agree that performance is a big area for improvement, to say the least.


(also, it seems `foldl'` is an ad-hoc builtin operation in Nix, probably for perf reasons, so the comparison isn't entirely fair either)


> I feel like the real problem is not helm, nor argocd ... What was really annoying was the constant moving and changing the yaml that kubernetes wanted.

That to me sounds like you're angry at helm and argocd, but don't realize it.

The kubernetes apiserver publishes all the resources it supports, including custom resource definitions, along with typed specifications that can be used to validate what you're submitting client-side.

If helm weren't a dumb layer of yaml templating, it could tell you locally, like compiling a typed programming language, "this helm chart won't work on your cluster because you have the beta version of this CRD and need the alpha version", or it could even transform things into the correct version.

The kubernetes API provides everything that's needed to statically verify what Groups/Kinds/Versions exist, and tooling like helm is just too dumb to work with it.
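
If you want to see what the apiserver actually publishes, here's a rough illustration (assumes a working kubeconfig, plus jq for the last command):

    $ kubectl api-versions     # every group/version the cluster serves
    $ kubectl api-resources    # every Kind, with its group/version and scope
    $ kubectl get --raw /openapi/v2 | jq '.definitions | keys | length'
    # typed schemas for every resource, which is exactly what client-side validation needs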


That's what the --api-versions flag in helm is for. If you look at the helm commands argocd uses, you'll see a very long string of flags that pass every available api version to helm.
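
Roughly what that looks like (illustrative, not argocd's exact invocation):

    $ helm template myrelease ./mychart \
        --kube-version 1.28 \
        --api-versions apps/v1 \
        --api-versions batch/v1 \
        --api-versions monitoring.coreos.com/v1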


The vector db comparison is written so much like an advertisement that I cannot possibly take it seriously.

> Shared slack channel if problems arise? There you go. You wanna learn more? Sure, here are the resources. Workshops? Possible.

> wins by far [...] most importantly community plus the company values.

Like, talking about "You can pay the company for workshops" and "company values" just makes it feel so much like an unsubtle paid-for ad I can't take it seriously.

All the actual details around the vector DB (for example, a single actual performance number, or a clear description of the size of the dataset or problem) are missing, making this feel like a very handwavy comparison, and the final conclusion is so strong, and worded in such a strange way, that it feels disingenuous.

I have no way to know whether this post is actually genuine and not a piece of stealth advertising, but it sets off so many alarm bells in my head that I can't help but ignore its conclusions about every database.


> Also, this has been public for months:

Posting the hash to twitter as proof that "something" exists reveals no actual information, so it's not considered making the exploit "public" in any meaningful way.

From the blog's timeline, it's been visible in code diffs since ~April, but only called out as a CVE since 10 days ago, so I'd consider this one hot off the presses.

