
What are you using to bridge these three chat providers?


An Ansible playbook running on a relatively cheap Hetzner box. In general, bridges require you to self-host a homeserver. A federated homeserver will need at least 4 GB of RAM, and 8 GB if you plan on joining large rooms. The playbook is controlled from a single config file and sets up all the bridges you need.

https://github.com/spantaleev/matrix-docker-ansible-deploy


I think this isn't true anymore, at least since the introduction of the unwanted GitHub activity view: https://docs.github.com/en/repositories/viewing-activity-and...


GPT4All?


And there is the additional trade-off that some bugs are only noticed at runtime, when that particular line is executed, even though they could have been caught by the compiler of a statically typed language. Pytype helps, but at that point you have a static analyzer that potentially runs as slowly as a compiler, without the additional performance benefit.


There’s no good reason for type checking to be super slow. I’m no fan of Go, but the language compiles insanely fast while being fully statically typed.

As I understand it, C++’s slow compilation comes from the fact that it usually parses all of your header files n times instead of once. This isn’t a problem with static typing. It’s a problem with C++, and to a lesser extent C.


> As I understand it, C++’s slow compilation comes from the fact that it usually parses all of your header files n times instead of once.

That's one of the things that can slow compilation down, but it's definitely not the only one. It helps that precompiled headers (and maybe modules?) can go a long way towards reducing, and possibly eliminating, these costs as well.

I think some (most?) of the larger remaining costs revolve around template instantiation, especially since it impacts link times as well due to the fact that the linker needs to do extra work to eliminate redundant instantiations.


> due to the fact that the linker needs to do extra work to eliminate redundant instantiations.

Yeah, I see this as another consequence of C++'s poor compilation model:

- Compilation is slow because a template class in your header file gets compiled N times (maybe with precompiled headers). The compiler produces N object files filled with redundant code.

- Then the linker is slow because it needs to parse all those object files, and filter out all the redundant code that you just wasted time generating (a sketch of a common workaround follows below).

It's a bad design.
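
As a minimal sketch of the usual mitigation (the Vec class and file layout are invented for the example): an explicit instantiation declaration ("extern template", since C++11) tells every other translation unit not to emit its own copy, and a single .cpp file provides the one real instantiation.

    // vec.hpp - a class template included by many .cpp files
    template <typename T>
    struct Vec {
        T x, y;
        T dot(const Vec& o) const;
    };

    template <typename T>
    T Vec<T>::dot(const Vec& o) const { return x * o.x + y * o.y; }

    // Without this declaration, every .cpp file that uses Vec<double>
    // emits its own copy of Vec<double>::dot, which the linker then
    // has to deduplicate.
    extern template struct Vec<double>;

    // vec.cpp - exactly one translation unit instantiates it for real
    template struct Vec<double>;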


I'm not sure I'd call the design "bad". At the very least it's a product of the design constraints, and I'm not sure there's an obviously better implementation without sacrificing something else. I think separate compilation and monomorphization are the biggest contributors, but I wouldn't be surprised if there was something I was forgetting.

Somewhat related, there was some work in Rust about sharing monomorphized generics across crates, but it appears it was not a universal win at the time[0]. I'm not sure if anything has changed since that point, unfortunately, or if something similar could be applied to C++ somehow.

[0]: https://github.com/rust-lang/rust/issues/47317#issuecomment-...


> I'm not sure I'd call the design "bad". At the very least it's a product of the design constraints, and I'm not sure there's an obviously better implementation without sacrificing something else.

It was a product of the design constraints in the 70s when memory was expensive, and compilers couldn't store a whole program in memory during compilation.

The problem C++ has now is that the preprocessor operates on the raw text of a header file (which is a relic from C). This means the same header file can generate totally different source code each time it's included in your program. C++ can't change that behaviour without breaking backwards compatibility. So headers get parsed over and over again "just in case" - wasting time producing excess code that just gets stripped back out again by the linker.
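
A contrived sketch of why the compiler cannot simply reuse a previous parse of a header (the header and macro names are made up for illustration): the expanded contents legitimately depend on what was #defined before the #include.

    // size.hpp - the "same" header, but its contents depend on the
    // preprocessor state at the point of inclusion.
    #ifdef USE_WIDE
    typedef long long value_t;
    #else
    typedef int value_t;
    #endif

    // a.cpp
    #include "size.hpp"    // value_t is int in this translation unit

    // b.cpp
    #define USE_WIDE
    #include "size.hpp"    // value_t is long long in this one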

The way C++ works doesn't make any sense now that memory is so much cheaper. Go, C#, Java, Rust, Zig - basically every compiled language younger than C++ compiles faster because it doesn't repeat C++'s design mistake.

Rust doesn't share monomorphized generics across crates, but at least each crate is compiled as a single compilation unit.


This was already the case with languages like Modula-2 and Object Pascal in the 1980s; C++ works that way because it was designed to be a drop-in within UNIX/C without additional requirements.


> As I understand it, C++’s slow compilation comes from the fact that it usually parses all of your header files n times instead of once.

Sort of. The primary issues are:

1) The C/C++ grammar is garbage.

Note that every single modern language has grammatical constructs so that you can figure out what is a "type" and what is a "name" without parsing the universe. "typedef" makes that damn near impossible in C, and C++ takes it to a whole new level of special (see the sketch at the end of this comment).

2) C++ monomorphization

You basically compile up your template for the universe of every type that works, and then you optimize down to the one you actually use. This means that you can wind up with M*N*O*P versions of a function, of which you use only one. That's a lot of extra work that simply gets thrown away.

The monomorphization seems to be the biggest compile-time problem. It's why Rust struggles with compile times while something like Zig blazes through things; both of those have modern grammars that don't suck.
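
To make point 1 concrete, here is a classic sketch (identifiers invented for illustration) of why a C or C++ parser needs the symbol table just to parse a statement:

    typedef int T;          // after this line, "T" names a type

    void f()
    {
        T * x = nullptr;    // parsed as a declaration: x is a pointer to int
        int U = 3, y = 4;
        U * y;              // same token shape, parsed as a multiplication
        // The grammar alone cannot tell these two statements apart; the
        // parser must consult the symbol table to know whether the first
        // identifier names a type. C++ templates make it worse still,
        // e.g. "a < b > c;" can be a declaration of c or a comparison.
    }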


1. No, the grammar is not the issue per se. As you say, C has the same problem, yet C code invariably compiles dozens of times faster than C++. Both Zig and Rust have modern grammars, but Zig compiles about as quickly as C while Rust is only somewhat faster than C++ (depending on the features used).

2. This is incorrect. What's happening is that each template instantiation for a new set of template arguments requires reprocessing the template to check types and generate code, and that this is done per translation unit instead of per project. Each distinct template instantiation increases the compilation time a bit, much more than it takes to parse the use itself. That's why it's easy to have a small C++ source that takes several seconds to compile.
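
As a toy illustration of that last sentence (the template is invented for the example), a few lines can force the compiler to instantiate hundreds of distinct types, and every translation unit that includes them pays that cost again:

    #include <cstddef>

    // Each distinct N is a brand-new type and triggers another round of
    // type checking and code generation inside the compiler.
    template <std::size_t N>
    struct Chain {
        Chain<N - 1> rest;   // instantiating Chain<N> instantiates Chain<N-1>, ...
    };

    template <>
    struct Chain<0> {};      // ...until this base case stops the recursion

    Chain<500> c;            // this one line triggers about 500 instantiations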


> 1. No, the grammar is not the issue per se. As you say, C has the same problem, yet C code invariably compiles dozens of times faster than C++. Both Zig and Rust have modern grammars, but Zig compiles about as quickly as C while Rust is only somewhat faster than C++ (depending on the features used).

Sorry, the C++ grammar is terrible. There are lots of places where a C++ parser can't figure out whether something is a class, a name, a template, or a constructor call without looking way down the chain.

However, you are the first person I think I have ever heard claim that Rust compiles faster than C++. Rust is notoriously slow to compile.

Zig generally compiles much faster than most C projects I've used. However, that is hard to blame on C itself, as a lot of it comes down to the build systems being obtuse.


>Sorry, the C++ grammar is terrible.

When did I say otherwise? What I said was that it's not the main cause of C++'s long compilation times. The grammar causes other problems, such as making it more difficult to write parsers for IDEs, and creating things like the most vexing parse.

>However, you are the first person I think I have ever heard claim that Rust compiles faster than C++. Rust is notoriously slow to compile.

It's kind of a mixed bag. Given two projects of similar complexity, one in C++ and one in Rust, the one written in Rust will take longer to compile if organized as a single crate, because right now there's no way to parallelize compilation within a single crate. However, compiling the C++ version will definitely be the larger computational task, and would take longer if done in a single thread. Both contain Turing-complete meta-languages, so both can make compiling a fixed length source take an arbitrarily long time. Rust's type system is more complex, but I think C++'s templates win out on computational load. You're running a little dynamically typed script inside the compiler every time you instantiate a template.
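
To make the "dynamically typed script" comparison concrete (the function and names are invented for illustration): a template body is only fully checked when it is instantiated, much like a dynamically typed function is only checked when it is called, and the work is redone for every new argument type.

    #include <string>

    // Compile-time "duck typing": twice() accepts anything that supports
    // operator+, and the body is re-checked and re-compiled for every
    // distinct T it is instantiated with.
    template <typename T>
    T twice(T x) { return x + x; }

    int         a = twice(21);                    // instantiates twice<int>
    double      b = twice(1.5);                   // instantiates twice<double>
    std::string c = twice(std::string("ab"));     // instantiates twice<std::string>
    // twice(nullptr);   // the error would appear only at this instantiation,
    //                   // much like a TypeError at runtime in a dynamic language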


(one of the reasons Go compiles fast is that its compiler is really bare bones; comparatively speaking, it does very little in the optimization area versus what you would see out of .NET and JVM implementations, not even mentioning GCC or LLVM)


It isn't just that, but there are a number of ways that templates can cause superlinear type checking behavior.


Idk, as long as you can develop at speed, I don't see why a static analyser that's a few times slower than the compiler couldn't run on the latest commit overnight? More as a SonarQube-type thing, I suppose.


Chrome trace format files also use JSON, can likewise become large, and are a pain to work with.


After learning that ß is literally a ligature of "sz", I am astounded that "ss", rather than "sz", is used to replace it.


systemd-oomd has also been the default since Ubuntu 22.04. I remember it vividly because it effectively kept killing X when RAM filled up instead of sanely killing the process that had last filled up the RAM, which in my case is either gcc or Firefox. An absolutely user-unfriendly default configuration. I removed it and reinstalled earlyoom, which I have been using for years with a suitable configuration. I can only concur: RAM behavior isn't user-friendly on Ubuntu.


Thank you for mentioning earlyoom - I'll install and try it, because the current behavior of a total, complete lockup, with no ability to do anything besides a reset with the hardware button, infuriates me unbelievably. I really don't comprehend how something like this can be possible and the default behavior in 2023, in an OS marketed as 'desktop' and 'casual/user friendly'.


Had the same experience in the past with systemd-oomd; nowadays it does a better job of killing the greedy process rather than the entire user slice/scope.


I second the earlyoom recommendation

it's a lifesaver


Personally, I run my systems without swap, and the kernel's OOM behavior has been adequate.


np.random itself, with that configuration, can go negative with a probability of 0.62%, as can be derived with scipy.stats.norm(loc = 7500, scale = 3000).cdf(0) or by looping a million times and counting the negative numbers.

Maybe the f2d function filters negative values, but it sounds like a simple float-to-double conversion.

I'm not sure whether it was intentional, but, contrary to the headline, this random value was used to update the fund size daily as shown in the rest of the code. So, a single day for which the fund actually decreases wouldn't matter much. It might even be beneficial to make it look more real.


f2d does seem like float-to-decimal (edit, typo). My brain is off for the evening, but might that part have a misplaced parenthesis? They f2d() the random number but then multiply it by the higher-precision expression (previous day's volume / 1 billion), which gives 8 digits after the decimal point in USD, as in the tweet they posted. Still, it's fewer digits than without the call, I suppose.


Because some people don't get a static IP from their ISP and they don't want to go through e-mail verification every day. At this point, some sites require this workflow from me:

    1. Solve CAPTCHA for log-in form
    2. Log in with valid password
    3. Open E-Mail client, maybe even log into your e-mail with the same workflow if not done yet
    4. Verify the IP via E-Mail 
    5. Surf to website log-in form again
    6. Solve CAPTCHA for log-in form again
    7. Log in again with a valid password
    8. Verify with 2FA code
Thanks, I hate it. It feels like steps 1 to 7 could be skipped.


You can even click on the window, hold the mouse button down, and then all mouse movements, even outside the window, register and increase the randomness meter.

