elcritch's comments

How so?

The intuitive way to think about it is that with very few dimensions you have very few degrees of freedom, so it's easy to prove things possible or impossible. With lots of dimensions, you have enough wiggle room to prove most things possible. Somewhere in between, you have enough complexity to not trivialize the problems but not enough wiggle room to be able to easily circumvent the issue.

Often in practice, that boundary is around 3-4 dimensions. See the Poincaré conjecture, various sphere-packing shenanigans, graph embeddings, ....


There's a section here about phenomena in 4 dimensions: https://en.wikipedia.org/wiki/4-manifold

One of the most surprising is that all smooth manifolds of dimension not equal to four have only a finite number of distinct smooth structures. For dimension four, there is a countably infinite number of distinct smooth structures. It's the only dimension with that property.


Fascinating that higher-dimensional manifolds are more restrictive!

Though in a _very_ handwavy way it seems intuitive given properties like the one in TFA, where 4-d is the only dimension in which the inner sphere exactly meets the sides of the bounding cube. Especially given that that property seems related to the possible neighborhoods of points in 4-d manifolds. Though I quickly get lost in the specifics of the maths on manifolds. :)

> However in four dimensions something very interesting happens. The radius of the inner sphere is exactly 1/2, which is just large enough for the inner sphere to touch the sides of the cube!
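
For reference, here's the calculation behind that claim, assuming the usual construction (which I believe is what TFA uses): a unit cube with spheres of radius 1/2 centered at its 2^d corners, plus an inner sphere centered at the cube's center that touches all of them.

    \text{center-to-corner distance} = \sqrt{d \cdot (1/2)^2} = \tfrac{\sqrt{d}}{2}
    r_{\text{inner}} = \tfrac{\sqrt{d}}{2} - \tfrac{1}{2} = \tfrac{\sqrt{d} - 1}{2}
    d = 4: \quad r_{\text{inner}} = \tfrac{2 - 1}{2} = \tfrac{1}{2} = \text{center-to-face distance}

For d > 4 the inner sphere actually pokes out through the faces of the cube.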


> One of the most surprising is that all smooth manifolds of dimension not equal to four have only a finite number of distinct smooth structures. For dimension four, there is a countably infinite number of distinct smooth structures. It's the only dimension with that property.

Can you give some intuition on smooth structures and manifolds? I've read the Wikipedia articles a few times but still can't grasp them.


I am not sure the other comment was especially intuitive. Here is my understanding:

Euclidean space is a vector space and therefore pretty easy to work with in computations (especially calculus) compared to something like the surface of a sphere, but the sphere doesn't completely lose that Euclidean structure either. We can take halves of the sphere and "flatten them out," so instead of working with the sphere we can work with two planes, keeping in mind that the flattening functions define the regions of those planes we're allowed to work within. Then we can do computations on the plane and "unflatten" them to get the results of those computations back on the sphere.

Manifolds are a generalization of this idea: you have a complicated topological space S, together with some open subsets S_i of S which cover S, and smooth, invertible functions f_i: S_i -> R^n that tell you how to treat elements of S locally as if they were vectors in Euclidean space (and since the functions are invertible, they also tell you how to map the vectors back to S, which is what you want).

The manifold is a pair, the space S and the smooth functions f_i. The smoothness is important because ultimately we are interested in doing calculus on S, so if the mapping functions have "sharp edges" then we're introducing sharp edges into S that are entirely a result of the mapping and not S's own geometry.
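
As a concrete illustration of such a flattening map (my example, not the parent's): stereographic projection from the north pole takes the unit sphere minus that pole to the plane, and its inverse maps the plane back onto the sphere:

    f(x, y, z) = \left( \tfrac{x}{1 - z},\ \tfrac{y}{1 - z} \right), \qquad
    f^{-1}(u, v) = \tfrac{1}{1 + u^2 + v^2} \left( 2u,\ 2v,\ u^2 + v^2 - 1 \right)

A second chart projecting from the south pole covers the missing point, and on the overlap the change of coordinates (u, v) \mapsto (u, v) / (u^2 + v^2) is smooth, which is exactly the compatibility a smooth structure asks for.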


Applying a smooth structure to a manifold to make it a smooth manifold is like a patching process that makes it locally look like Euclidean space.

Most of calculus and undergraduate math, engineering, and physics takes place in Euclidean space R^n. So all the curves and surfaces directly embed into R^n, usually where n = 2 or n = 3. However, there are more abstract spaces that one would like to study, and those are manifolds. To do calculus on them, they need to be smooth manifolds. A smooth structure is a collection of "patches" (normally called charts) such that each patch (chart) is homeomorphic (topologically equivalent) to an open set in R^n. Such a manifold is called an n-dimensional manifold. The smoothness criterion is a technical condition ensuring that the coordinates and the transformations between overlapping charts are smooth, i.e., infinitely differentiable. Smooth manifolds are basically the extension of calculus to more general and abstract spaces.

For example, a circle is a 1-dimensional manifold since it locally looks like a line segment. A sphere (the shell of the sphere) is a 2-dimensional manifold because it locally looks like an open subset of R^2, i.e., it locally looks like a two dimensional plane. Take Earth for example. Locally, a Euclidean x-y coordinate system works well.
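
To make the "patches" concrete for the circle example (my own illustration): the unit circle can be covered by two angle charts,

    \varphi_1: (\cos t, \sin t) \mapsto t \in (0, 2\pi), \qquad
    \varphi_2: (\cos t, \sin t) \mapsto t \in (-\pi, \pi)

Each chart misses a single point, together they cover the circle, and on the overlap the transition map is t \mapsto t or t \mapsto t - 2\pi, which is smooth. That smooth-transition condition is exactly what a smooth structure requires.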


Conversely, I'd argue most brilliant people tend to have more dumb ideas than others, usually on oddly specific topics which most people would find inconsequential.

It's true. Smart people tend to have a lot of novel ideas, most of which are going to be retarded. Most people just have no ideas.

Interesting, Elixir should scale far better than that. Are you doing a lot of non-IO processing or computation? I run Elixir on Raspberry Pi 4s doing IoT and they easily handle, say, generating graphs with hundreds of thousands of data points.

One possibility is you're using a single process instead of parallelizing things. For example, you may want to use one process per event, etc. Though if the hardware is very underpowered and, say, single-core, I could see it becoming problematic.
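
A minimal sketch of what I mean by one process per event (the module and function names here are made up for illustration; `Task.async_stream` keeps the concurrency bounded to the number of schedulers):

    events
    |> Task.async_stream(&MyApp.Handler.process_event/1,
      max_concurrency: System.schedulers_online(),
      timeout: :timer.seconds(30)
    )
    # force the stream; each event runs in its own supervised task process
    |> Stream.run()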


> Are you doing a lot of non-IO processing or computation?

Unfortunately.

From metrics, computing AWS signatures takes up an absurdly large amount of CPU time. The actual processing of events is quite minimal and honestly well-architected, a lot of stuff is loaded into memory rather than read from disk. There's syncing that happens fairly frequently from the internet which refreshes the cache.

The big problem is each event computes a new signature to send back to the API. I do have to wonder if the AWS signature is 99% of the problem and once I take that burden off, the entire system will roar to life. That's what makes me so confused, because I had heard Erlang / Elixir could handle significantly more events per minute even with pretty puny hardware.

One thing I am working on is batching; then I am considering dropping the AWS signatures in favor of short-lived tokens, since either way it's game over if someone gets onto the system, as they could exploit the privilege directly. The systems are air-gapped anyway, so the risk is minimal in my opinion.

> One possibility is you're using a single process instead of parallelizing things. For example, you may want to use one process per event, etc.

This is done by pushing it to a task, i.e. `Task.Supervisor.async_nolink`? That's largely where I found my gains, actually.

It took a dive into how things get scheduled, because a big issue was that the queue would get massively backed up, and I realized I apparently needed to toggle on a flag telling it to pack the schedulers more (`+scl true`). I also looked into the wake-up lengths of threads. I am starting to get my head around "dirty schedulers" but I am not entirely sure how to affect those, or whether I even can, besides letting the VM do it for me.

The other one just for posterity is that I believe events get unnecessarily queued because they don't / didn't have locks. So if event A gets queued then creates a timer to re-queue it in 5 minutes, event A (c|w)ould continue to get queued despite the fact the first event A hadn't been processed yet. So the queue would just continue to compound and starve itself.


I don't know the specifics of your app so I don't feel comfortable commenting in more than generalities, but generally speaking, if you are doing work in native code, and that native-code work is CPU-bound (roughly, more than a millisecond of CPU time), you should try to run it on a dirty scheduler. If you don't, that CPU-bound code will interfere with the "regular" BEAM schedulers, meaning it will start to disrupt how the BEAM schedules all of the other work in your app, from regular function calls to IO to job queuing to serving requests, and whatever else.

I'm also suspicious of the `+scl true` setting as maybe being a bit of a red herring. I've been using BEAM off and on for 10 years both professionally and as a hobbyist and I've never used this myself nor seen anyone else ever need to use it. I'm sure there are circumstances where someone, somewhere has used this, but in a given Elixir app it is extremely likely that there is lower-hanging fruit than messing with scheduler flags.

In terms of queuing, are you using Oban or Broadway or something hand-built? It's common for folks to DIY this kind of DAG/queuing stuff when 99.9% of the time using something like Oban or Broadway would be better than hand-rolling it.


It looks like others have addressed the first 90% of your post, so I'll refrain from commenting on that. I am curious about your timer code, though, because the timer shouldn't be firing at all unless the task associated with it has completed successfully. You shouldn't run into an issue where a timer is re-queueing the same task in Elixir.

> From metrics, computing AWS signatures takes up an absurdly large amount of CPU time. The actual processing of events is quite minimal and honestly well-architected, a lot of stuff is loaded into memory rather than read from disk. There's syncing that happens fairly frequently from the internet which refreshes the cache.

Oh, sounds nice! Caching in Elixir really is nice.

Okay, that makes sense. Elixir isn't fast at pure compute. It can actually be slower than Python or Ruby. However, the signatures are likely computed in NIFs (native code). If the AWS signatures are computed using NIFs, then the CPUs likely just can't keep up with them. Tokens would make sense in that scenario. But you should check the lib or code you're using for them.

> The big problem is each event computes a new signature to send back to the API. I do have to wonder if the AWS signature is 99% of the problem and once I take that burden off, the entire system will roar to life. That's what makes me so confused, because I had heard Erlang / Elixir could handle significantly more events per minute even with pretty puny hardware.

Yeah, crypto compute can be expensive, especially on older / smaller CPUs without built-in crypto primitives. Usually I find Elixir performs better than equivalent NodeJS, Python, etc. due to its built-in parallelism.
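
For context on where that CPU time goes: SigV4 derives a signing key through a chain of HMAC-SHA256 calls and then signs each request. A rough sketch using `:crypto` directly (placeholder values; not necessarily how your AWS lib does it). Worth noting: the derived key only depends on the date/region/service, so it can be cached rather than recomputed per event:

    hmac = fn key, data -> :crypto.mac(:hmac, :sha256, key, data) end

    # placeholder example inputs
    secret_key = "EXAMPLE_SECRET"
    date = "20240101"
    region = "us-east-1"
    service = "execute-api"
    string_to_sign = "example-string-to-sign"  # in real SigV4 this is built from the canonical request

    # signing-key derivation: four chained HMACs, reusable for the whole day
    k_date    = hmac.("AWS4" <> secret_key, date)
    k_region  = hmac.(k_date, region)
    k_service = hmac.(k_region, service)
    k_signing = hmac.(k_service, "aws4_request")

    # per-request cost is then a single HMAC plus hex encoding
    signature = hmac.(k_signing, string_to_sign) |> Base.encode16(case: :lower)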

Also, one thing to look out for would be NIF C functions blocking the BEAM VM. The VM can now do "dirty NIFs", but if they're not used and the code assumes the AWS signatures will run fast, it can create knock-on effects by blocking the BEAM VM's schedulers. That's also not always easy to find with the BEAM's built-in tools.
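
A quick way to check whether the signing work is in "dirty NIF" territory (roughly, over a millisecond of CPU per call) is just to time it. Rough sketch; `MyApp.Signer.sign_event/1` and `event` stand in for whatever your actual signing call is:

    # {time_in_microseconds, result}; anything consistently over ~1000 µs is a
    # candidate for a dirty scheduler or for moving off the hot path
    {micros, _signature} = :timer.tc(fn -> MyApp.Signer.sign_event(event) end)
    IO.puts("signing took #{micros} µs")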

On that note, make sure you've tried the `:observer` tooling. It's fantastic.
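
For reference, from an IEx shell on the node you want to inspect it's just:

    :observer.start()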

> One thing I am working on is batching; then I am considering dropping the AWS signatures in favor of short-lived tokens, since either way it's game over if someone gets onto the system, as they could exploit the privilege directly. The systems are air-gapped anyway, so the risk is minimal in my opinion.

Definitely, seems logical to me.


Thank you! You gave me a great term that I can jump off from (NIF).

I'll have to rig up Observer. I've been using recon because I was being lazy overall.


You might also look into Elixir's Broadway, as it provides back-pressure [1].

1: https://hexdocs.pm/broadway/Broadway.html
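
A minimal Broadway skeleton looks roughly like this (the producer module and concurrency numbers are placeholders you'd swap for your actual event source):

    defmodule MyApp.EventPipeline do
      use Broadway

      def start_link(_opts) do
        Broadway.start_link(__MODULE__,
          name: __MODULE__,
          producer: [
            # swap DummyProducer for your real event source
            module: {Broadway.DummyProducer, []},
            concurrency: 1
          ],
          processors: [
            default: [concurrency: System.schedulers_online()]
          ]
        )
      end

      @impl true
      def handle_message(_processor, message, _context) do
        # per-event work goes here; Broadway handles the back-pressure for you
        message
      end
    end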

> I also looked into the wake-up lengths of threads. I am starting to get my head around "dirty schedulers" but I am not entirely sure how to affect those, or whether I even can, besides letting the VM do it for me.

Note that dirty schedulers really only matter for NIFs which run longer than what the BEAM schedulers expect. I mentioned it in regard to the possibility that the AWS sigs are taking longer than they should; if so, they'd cause havoc on the schedulers.


Once upon a time I needed to do hashes en masse for a specific blockchain project. Just a tad of Rust (via a NIF) really helped the performance. It might be of help to you; check this out (not my lib):

https://github.com/ExWeb3/ex_keccak

For my usage, benchmarks went from 2.79K ips to 346.97K ips.
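
For reference, numbers like that typically come from Benchee; a rough sketch of the comparison (`ExKeccak.hash_256/1` is the NIF-backed call, IIRC; the "before" side is a placeholder for whatever implementation you'd be replacing):

    data = :crypto.strong_rand_bytes(1024)

    Benchee.run(%{
      "pure Elixir keccak (placeholder)" => fn -> MyApp.LegacyKeccak.hash(data) end,
      "ex_keccak NIF" => fn -> ExKeccak.hash_256(data) end
    })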


As the article says, money talks.

That sounds like an ideal attack vector! Norton and other AVs have elevated privileges and an opaque data format ready to be exploited.

I believe that was exactly the other commenter's point.

The funniest part is that the update was an exe to be run from the USB stick. The one thing you should not ever do on any system.

Unfortunately I wasn't prepared to broach the subject in a way that didn't have me say "you'd be safer without the AV". So I got nowhere.


Oh even worse! Yeah, you likely wouldn't have made any headway.

I’m of the opinion that 3rd party security software is malware. If it isn’t today, a future acquisition or enshittification ensures that it will be.

While true, the future is the future, and not entirely relevant.

Or do you eschew using a fork, because in 12 weeks it will fall on the floor?

Certainly, but the problem is when it falls on the floor in secret. The ones we can see can be handled.

This problem even happens with brand names, with hardware. You buy a fridge, and a decade later go to buy another. Meanwhile, the megacorp has been bought by a conglomerate, and the brand name is purposefully crap.


Imagine, if you will, a bed of gold, embroidered and wrought with the most exquisite works. Above the bed, however, is a sharp sword suspended on a single hair of a horse's tail. Would you avoid relaxing on the bed because the sword may fall and kill you at some point in the future?

What’s wrong with the brand-name AV engines and security controls shipped with the OS? To me, it’s mostly just a lack of trust on the part of management.

Kaspersky is/was a brand-name AV. Look at what happened on their way out after the US ban...

Everyone should build their own security software?

All the major desktop OSes have AV engines built by excellent teams. I do trust this more than McAfee or Norton. I also trust it not to take my machine down as much as CrowdStrike.

You trust native Windows security? I'm hoping it's not, but what if a hospital's decision looks like a choice between ransomware and a root-level system like CrowdStrike?

Have fun running your business with no third party software. You'll have to start by writing your own OS.

Speaking of which... it's remarkable that Microsoft Windows probably has code from 50,000 people in it. Yet there haven't been any (public) cases of people sneaking malicious code in. How come?


If Windows had malicious code in it, would we be able to tell the difference?

Sure, I’m sure somebody who is going to go through the effort of slipping malicious code into Windows would also make sure to do some QA on it. So it would be suspiciously unbuggy.

Seeed Studio also does custom CM4 designs. They might be an option.

Nim has a similar `with` for the same use case. It can be handy!

> and if we pretend we can't hear the Haskell developers it's one of the best type systems out there

Eh, Rust's type system isn't one of the best out there. It's lacking higher-kinded types, etc. Its abilities in type-level programming are frustratingly limited as well.

So it's advanced, aside from Haskell, OCaml, Idris, Scala, etc. Compare OCaml's effect system for an example of an advanced type-system feature.


I 100% agree with you (I really miss HKTs), but I don't think the GP was using "best" to mean "most fully-featured".

And with that in mind, I do agree with the GP. Scala's type system, for example, is full of warts (well, Scala 2; I still haven't tried Scala 3). Rust's is cleaner and has fewer gotchas. I very rarely have to look up how to express something in Rust's type system, but I remember when I last did Scala, I often ran into weird type-system related errors that I just didn't understand and had to dig into to figure out. Some of that, sure, is likely due to Scala's type system having more features, but some of that is because it's just more complex. And I would usually rate something of lower complexity as better than something with higher complexity.


One of the major things about Scala 2 vs Scala 3 is the removal of many of the type-system warts, in particular the type hierarchy now forms a lattice (if I'm getting my terminology right) rather than being rather ad hoc in various places.

Lots of other small annoyances like the whole tuple situation have also been fixed.

EDIT: Plus: intersection types, sane macros/inlining, etc.

Unfortunately (for me), some of our projects still have to cross-compile to 2.x, but that's irrelevant for greenfield. I'd give it a whirl -- it's a great improvement over Scala 2.x.


OCaml's effects are fairly new, and the Rust maintainers are also looking into effect systems. Who knows, we might also get HKTs and dependent types too in the future.

Also, it's usually pretty obvious when fermenting goes bad. My take is that fermentation takes a while, which allows the biological process to go either really well or really rancid.

I'm assuming the parent meant pills with a few different strains. Seems like a massive dose of a few different strains would approach that of a fecal transplant.
