
Whiteboard coding almost always devolves into leetcode, which also requires at-home study. You're going to be spending evenings and weekends coding something in either case.


That part is understandable. Perfecting your craft requires many hours of hard work and dedication, and you can never know everything.

For me, I work on a lot of open-source as side projects, so there's always the coding "something" factor.


Haven't most plans moved to PPOs since Obamacare? They usually allow you to just skip primary care nowadays.


Flatbuffers, Cap'n Proto, or SBE over RDMA.


None; bazel's caching implementation is broken because they don't even know or specify what constitutes a build hash/key. See this issue from 2018 that's still open [1].

[1] https://github.com/bazelbuild/bazel/issues/4558


Probably because building targets with tools outside of the workspace is an antipattern, as it violates hermeticity principles. In fact, Bazel generally makes it quite hard to do this, so anyone who ends up in this scenario must have jumped through many hoops to get there.

I agree that the linked issue is legitimate, but I'd argue that this isn't a problem Bazel itself needs to solve--you should fix your build to be fully hermetic.


Not that I'd recommend it, but if you symlink your system library into the bazel build area, as long as your sandboxing setup doesn't hose you (or you just turn it off), bazel will track system tools/libraries in the same way as everything else.

Bazel's rules_cc even has a system_library.bzl from which you can import a `system_library` rule that automates this for you. https://github.com/bazelbuild/rules_cc/blob/main/cc/system_l...
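
A sketch of what that looks like; I'm going from memory here, so the exact attribute names (and whether it's loaded in a WORKSPACE vs. BUILD context) should be checked against the .bzl itself:

    # WORKSPACE -- sketch from memory; check system_library.bzl for the
    # real attribute names before relying on this.
    load("@rules_cc//cc:system_library.bzl", "system_library")

    system_library(
        name = "system_z",
        lib_name = "z",  # look up the host's libz
        lib_path_hints = ["/usr/lib", "/usr/lib/x86_64-linux-gnu"],
    )

Your cc_* targets then depend on whatever that rule generates like any other dep.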

I'd still recommend building everything from scratch (and understanding the relationships and graph of your dependencies), but if your build isn't that complicated and you want to roll the dice on UB, this isn't that hard.

As an aside, the most galling part of bazel's cache key calculations has to be that it's up to the individual rules to implement this how they see fit. The rules native to bazel, written in Java, vary wildly compared to Starlark-written rules. One thing you (or someone in your org) end up becoming pretty comfortable with while using bazel in anger is RTFC.


> Probably because building targets with tools outside of the workspace is an antipattern, as it violates hermiticity principles.

Nonsense. Nothing forces you to use tools outside of your workspace. CMake just requires you to set CMAKE_<LANG>_COMPILER_LAUNCHER[1] to point to a compiler launcher, which can live anywhere you see fit, including a random path within your workspace.

People try too hard to come up with excuses for design flaws.

[1] https://cmake.org/cmake/help/latest/prop_tgt/LANG_COMPILER_L...


Non-hermetic is the default for C/C++. And if you plan on using system-provided libraries to support multiple OSes, then you can't build hermetically anyway.


That is precisely the point--using system-provided libraries in your Bazel project is an antipattern that should be avoided.


It's literally the default. How can the default be an anti-pattern? I doubt you're using C/C++, because you don't seem to understand the issue.


Learn to take others seriously without asking for credentials. Whether or not I use C/C++ is irrelevant. Bazel is flexible, and you can use it however you want, correctly or incorrectly. The principles I've mentioned above are language-agnostic, and are recommended best practice regardless of whatever programming language you are building.

https://bazel.build/basics/hermeticity

> When given the same input source code and product configuration, a hermetic build system always returns the same output by isolating the build from changes to the host system.

> In order to isolate the build, hermetic builds are insensitive to libraries and other software installed on the local or remote host machine. They depend on specific versions of build tools, such as compilers, and dependencies, such as libraries. This makes the build process self-contained as it doesn't rely on services external to the build environment.

My overarching point is: you should fix your build rather than blame the tool's authors because you're using it in an unsupported way.


Many people believe that the "traditional" way of building C/C++ applications is an antipattern. Such a belief is, in fact, a core reason to adopt bazel. If you don't believe that, then bazel may not be for you. It is intentionally opinionated in a way that you aren't.


I'm assuming you're referring to the golang model of statically linking everything. That's not really doable when many popular libraries are (L)GPL'd, like glibc and libstdc++. It also doesn't work if you want to provide a shared library and need to be compatible with every possible system. That's not my opinion; it's just a deficiency of bazel.


Then you build the lowest-supported-version of GCC and glibc, use that as your toolchain in Bazel, and build a dynamic shared library as normal. Using a system-provided toolchain also works, but you have to build on that system using something like Docker, which is certainly an alternative to Bazel but isn't quite meant to serve the same niche.


Non-hermetic is non-reproducible. I can only produce the same build outputs as you if I use the same toolchain, which essentially implies I must run the same OS and patch level, among a host of smaller impurities that can change the output of a build.

Sometimes this is desirable; for example, if you are packaging for a Linux distribution. But that's not the use case Bazel was invented to serve.


No, they are extremely broken. You can only choose:

- Link everything static.

- Link everything dynamic.

- Link user libs as static and system libs as dynamic.

There is no easy way to link a single user lib static/dynamic without resorting to hacks/workarounds like re-importing the shared library or defining weird intermediate targets. It's completely broken.


I don't know what you're talking about. This is trivial to accomplish using `linkstatic` as documented on `cc_library` and `cc_binary`. I uploaded a demo to github here: https://github.com/emidln/bazel_static_dynamic_c_demo/blob/m...

Try it out like this:

    # you'll need a c compiler installed, xcode is fine on macos
    git clone https://github.com/emidln/bazel_static_dynamic_c_demo
    cd bazel_static_dynamic_c_demo
    bazel run //:foo
    ldd bazel-bin/foo  # otool -L bazel-bin/foo if you're on MacOS
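
If you just want the gist without cloning, the BUILD is roughly this (paraphrased from memory rather than copied from the repo, so treat names and details as approximate; `bar` is a hypothetical second library to show the mixed case):

    # BUILD.bazel -- rough paraphrase of the demo, not the exact file
    cc_library(
        name = "qux",
        srcs = ["qux.c"],
        linkstatic = True,  # qux may only ever be linked statically
    )

    cc_library(
        name = "bar",
        srcs = ["bar.c"],
    )

    cc_binary(
        name = "foo",
        srcs = ["main.c"],
        linkstatic = False,  # prefer dynamic linking where deps allow it
        deps = [
            ":bar",  # linked dynamically via its implicit libbar.so
            ":qux",  # linked statically, as forced above
        ],
    )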


qux is now forced to be static. Consumers should be able to choose whatever they want. I don't know what end users want to do.


*You are the consumer.* You are consuming it in foo. Are you building some straw man consumer who might want to delve into my build and rearrange my libraries with no work at all? You can even do that if you want. Remove the linkstatic line from qux. Now it has two outputs: `libqux.a` (default) and a `libqux.so` (implicit). If you want to ship both of them in a release artifact, you can. If you want to mark some binaries `linkstatic = True` and statically link them, you can. If you want to dynamically link some binaries, you can do that too.

I was demonstrating that you can force some libraries to be only static and still partially link some things static and some dynamic. If you want to get really into the weeds, you could even affect the link line and individually pick dependencies in a custom rule (that is fully compatible with the rest of the bazel tooling). Almost nobody ever needs to do that, but maybe you want to make only every other dependency dynamic to satisfy some weird slide in a talk.


You can use the "linkstatic" feature at the cc_library level. Then that library will be linked statically while other cc_library targets will be dynamically linked.


The logic is backwards though. I may have multiple consumers of a library some of which may want static some of which may want dynamic. You need to create/import new targets to do this even though the original target creates both static and dynamic libs by default.


This is just false. Bazel creates both static and dynamic libs by default for every cc_library. The default is to link static, but you can control this on a per-binary basis. You don't need parallel trees of targets or even to write custom rules to access the non-default output groups that contain the shared library. This all just works out of the box.
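
To make "non-default output groups" concrete, here's a sketch; "dynamic_library" is the output group name as I remember it, so verify it against your Bazel version:

    # BUILD.bazel -- sketch; output group name is from memory
    cc_library(
        name = "qux",
        srcs = ["qux.c"],
    )

    # Expose qux's implicit .so so another target (or a release rule)
    # can consume the shared library directly.
    filegroup(
        name = "qux_shared",
        srcs = [":qux"],
        output_group = "dynamic_library",
    )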


> You're assuming you have control over all the deps and can set `linkstatic` on them.

In bazel you have complete control over all of your deps. Ultimate power and ultimate responsibility. Even an external BUILD.bazel can be patched when you bring it in (bazel has provisions for patching stuff from http_archive if you supply a patch). You can even ignore an existing BUILD.bazel and write your own if you only care about part of an external thing. If it's a target in your graph, you can certainly modify it. If there is some internal process that prevents you from modifying your own code, I can't help you. Maybe fix that or go somewhere that gives you the authority to do your job.
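
For reference, the patching mechanism looks something like this (every name, URL, and path below is a placeholder):

    # WORKSPACE -- all names/URLs/paths here are placeholders
    load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

    http_archive(
        name = "somelib",
        urls = ["https://example.com/somelib-1.2.3.tar.gz"],
        sha256 = "",  # fill in the real checksum
        strip_prefix = "somelib-1.2.3",
        # Patch upstream's BUILD files (or sources) as they're fetched:
        patches = ["//third_party:somelib_linkstatic.patch"],
        patch_args = ["-p1"],
        # ...or ignore upstream's build entirely and supply your own:
        # build_file = "//third_party:somelib.BUILD",
    )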


> bazel has provisions for patching stuff from http_archive if you supply a patch

That's a hack.


You're assuming you have control over all the deps and can set `linkstatic` on them.


CUDA Driver API or Runtime API remoting?


Driver.


That's the biggest problem with this model. With inference it's better to just use a dedicated model server. For training it's better to deploy on a massive dedicated machine. The only real use case left over is experimentation and debugging for devs or students.


I doubt this does multi-server. All the GPUs probably have to be on the same machine.


InfiniBand bypasses the kernel network stack. It has ~2us latency these days over a LAN.


bazel is trash.

