
If this interests you, you might also like the below:

https://diffusionillusions.com

https://dangeng.github.io/visual_anagrams/

There are also videos explaining the process.


Whenever I hear about neuromorphic computing, I think about the guy who wrote this article, who was working in the field:

Thermodynamic Computing https://knowm.org/thermodynamic-computing/

It's the most high-influence, low-exposure essay I've ever read. As far as I'm concerned, this dude is a silent, prescient genius working quietly for DARPA, and I had a sneak peek into future science when I read it. It's affected my thinking and trajectory for the past 8 years.


This doesn’t work quite as well as people assume. The first limit is simply size: you can only cram a few petabytes of NVMe into a server (before any redundancy), and many operational analytic workloads are quite a bit larger these days. One of the major advantages of disaggregated storage, in theory, is that it completely removes size limits. Many operational analytic workloads don’t need a lot of compute, just sparse on-demand access to vast amounts of data. With good selectivity (another open issue), you could get excellent performance in this configuration.

Ignoring the storage size limits, the real issue as you scale up is that the I/O schedulers, caching, and low-level storage engine mechanics in a large SQL database are not designed to operate efficiently on storage volumes this large. They will work technically, but scale quite a bit more poorly than people expect. The internals of SQL databases are (sensibly) optimized for working sets no larger than 10x RAM size, regardless of the storage size. This turns out to be the more practical limit for analytics in a scale-up system even if you have a JBOD of fast NVMe at your disposal.


I am about a quarter of the way through Modern Library’s top 100 and it has been a worthwhile journey. It is “just” literary fiction but it is among the best humanity has produced. I have learned so much about the human condition, my ability to articulate ideas has improved tremendously, and I feel like my mind has been “freed from the tyranny of the present” (to quote Cicero).

https://sites.prh.com/modern-library-top-100


I've spent a little time in this space, and I'm not sure it's a good idea to write shaders in Rust, although it's probably better than GLSL or WGSL.

Let me start with the pros:

1. Don't have to learn 2 different languages

2. Modules, crates, and the easier ability to share code

3. Easier sharing between Rust structs and shader code.

Now the cons, in comparison to Slang [1]:

1. No autodiff mode.

2. Strictly outputs SPIR-V, while Slang can target CPU, CUDA, PyTorch, OptiX, and all the major graphics APIs.

3. Less support: Slang is backed by the Khronos Group, and it gets use at Nvidia, EA, and Valve.

4. Safety isn't very valuable: most GPU code does not use pointers (they're so rare that Slang considers supporting them a feature!).

5. slangc probably runs a lot faster than rustc (although I would like to see a benchmark.)

6. Worse debugging experience: Slang has better interop with tools like NSight Graphics and its Shader Debugger. Slang recently got support in NSight Graphics for shader profiling, for example.

7. Slang has support for reflection, and it has a C++ API to directly output a JSON file that contains all the reflected aspects. This makes handling the movement between Rust <-> GPU much easier. Also, the example shown on the website uses `bytemuck`, but `bytemuck` won't take into consideration the struct alignment rules[2] when using WebGPU. Instead, you have to use a crate like `encase`[3] to handle that (see the sketch after the links below). I'm not sure, given the example on the website, how it would work with WebGPU.

8. If you have pre-existing shaders in GLSL or HLSL, you can use slangc directly on them. No need to rewrite.

9. In reality, you may not have to learn 2 languages, but you do have to learn 2 different compute models (CPU vs. GPU). This is actually a much harder issue, and AFAICT it is impossible to overcome with a different language. The problem is that the programmer needs to understand how the platforms differ.

[1] https://shader-slang.org/

[2] https://webgpufundamentals.org/webgpu/lessons/resources/wgsl... (WGSL struct alignment widget)

[3] https://github.com/teoxoy/encase
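
As mentioned in (7), here's a rough sketch of the bytemuck-vs-encase alignment pitfall (crate APIs from memory, assuming encase's "glam" feature; the Particle struct and sizes are illustrative, not from the rust-gpu example):

    // Cargo deps assumed: encase (with the "glam" feature) and glam.
    use encase::{ShaderType, UniformBuffer};
    use glam::Vec3;

    #[derive(ShaderType)]
    struct Particle {
        position: Vec3, // offset 0, 12 bytes of data
        velocity: Vec3, // WGSL aligns vec3<f32> to 16, so offset 16, not 12
    }

    fn main() {
        let p = Particle { position: Vec3::ZERO, velocity: Vec3::ONE };

        // encase emits the WGSL-compatible layout, padding included.
        let mut buf = UniformBuffer::new(Vec::<u8>::new());
        buf.write(&p).unwrap();

        // Expect 32 bytes here; a naive packed #[repr(C)] byte-cast via
        // bytemuck would produce 24 and silently misalign `velocity`.
        println!("uniform-layout size: {}", buf.into_inner().len());
    }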


What's the advantage of integrating this at a library level instead of just compiling it and running in Shadow? https://github.com/shadow/shadow

That's basically how I designed the magic bytes for the OpenTimestamps proof files:

    $ hexdump -C foo.ots 
    00000000  00 4f 70 65 6e 54 69 6d  65 73 74 61 6d 70 73 00  |.OpenTimestamps.|
    00000010  00 50 72 6f 6f 66 00 bf  89 e2 e8 84 e8 92 94 01  |.Proof..........|
0) Magic is at the beginning of the file.

1) Starts with a null-byte to make it clear this is binary, not text.

2) Includes a human-readable part to make it easy to figure out what the file is in hex dumps.

3) 8 bytes of randomly chosen bytes, all of which are greater than 0x7F to ensure they're not ASCII.

4) Finally, a one-byte major version number.

5) Total length (including major version) is 32 bytes to fit nicely in a hex dump.
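
A minimal sketch of validating that 32-byte header in Rust (a hypothetical helper, not from the OpenTimestamps codebase; the bytes are taken from the hexdump above):

    // The 31 magic bytes; the 32nd byte is the major version.
    const OTS_MAGIC: [u8; 31] =
        *b"\x00OpenTimestamps\x00\x00Proof\x00\xbf\x89\xe2\xe8\x84\xe8\x92\x94";

    /// Returns the major version byte if `data` starts with the magic.
    fn parse_header(data: &[u8]) -> Option<u8> {
        let rest = data.strip_prefix(&OTS_MAGIC[..])?;
        rest.first().copied()
    }

    fn main() {
        let mut file = OTS_MAGIC.to_vec();
        file.push(1); // major version 1
        assert_eq!(parse_header(&file), Some(1));
        assert_eq!(parse_header(b"not an ots file"), None);
    }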


One big privacy issue is that there is no sane way to protect your contact details from being sold, regardless of what you do.

As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok, your name, phone number, email, etc. are all out in the crowd.

And I buy this stuff. Every time I need customer service and I'm getting stonewalled, I just go onto a marketplace, find an exec, and buy their details for pennies and call them up on their cellphone. (This is usually successful, but can backfire badly -- CashApp terminated my account for these shenanigans.)


I'm still holding out for something that can monitor my bank account and automatically register transactions instead of me having to manually enter them. https://maybe.co/ is working on a solution for American banks.

I understand that Europeans already have protocols in place for this sort of thing. Why must the EU always get the nice things?


iOS exposes an API for this

Here’s a local keyword filtering app that works great: https://github.com/afterxleep/Bouncer


I also have largely abandoned attempts to "force" myself to sleep after years of insomnia. It results in some tired days, which in turn has resulted in some problems with consuming too much caffeine. But it's largely better for my mental health, I think, to simply get up and find some way to occupy myself until I actually feel tired. The alternative, trying to force myself to sleep when I don't feel tired, with mounting anxiety about getting too little sleep, simply doesn't have any upside.

Evolutionarily, insomnia makes sense in the context of a tribe, where it's useful to have people up and about, watching for danger. But in the modern day, with synchronized workplaces, we've seemingly decided that not waking up early verges on a moral failing. "Early to bed, early to rise, makes a man healthy, wealthy and wise," is just the beginning. Showing up late to work is looked down upon, but staying at work late is underappreciated, in my experience. Being on the East Coast is a surprising benefit when working with West Coast clients because it creates the impression that I'm getting more done simply because of time zones. There's something deeply ingrained in US culture going on here that I'm not sure I understand the full extent of.


This is generally considered one of the most well-written games of all time. The developers were writers who wanted to have a go at multimedia. This would be their only game.

We regularly talk about it on /r/adventuregames!

I also highly recommend I Have No Mouth and I Must Scream. Similarly psychological horror, excellent writing. Brilliantly voiced by its author.

Both among a small handful of games that have made me cry.


If going with client-side CSS rendering, I'd also play with CSS filter functions like contrast. The OSM tiles have always looked too low-contrast to me, and thus difficult to read. Making them grayscale makes it worse.

I've been using the webgpu inspector extension[0] and so far it's proving very useful.

There are some occasional bugs but the author is very responsive on github and quick to fix issues.

Couldn't get anything useful out of PIX on the other hand.

- [0]: https://github.com/brendan-duncan/webgpu_inspector


The article gave me déjà vu about the no-code wave a few years back and an excellent article I read on the subject [1].

> The logic doesn’t go away. Just because a decision is embedded into the wiring of a Zapier rule doesn’t remove any of the burden of maintenance / correctness.

Of course, AI is a lot more powerful than no-code, but the "End of Programming" suffers from the same delusion. If AI can reliably make every decision around engineering, design, and product, it would be capable of doing every task in the world. It's surprising that so many engineers believe writing things in plain text would obviate the need to learn programming.

[1]: https://www.alexhudson.com/2020/01/13/the-no-code-delusion/


I’ve been using a software solution for this for over a decade. It’s called Synergy (https://symless.com/synergy) and it is fast - switches instantly over wifi and also works across Windows/Mac/Linux.

Why is it scummy? Hiring people to produce goods or services and compensating them based on the value of their knowledge and abilities is sort of the entire value pitch of capitalism, and it's unlikely anyone would have moved over to Apple if the compensation wasn't more worthwhile to them.

Super cool that it includes high(ish)-resolution buildings! Folks interested in using this data may also be interested in OpenTopography[0], which is a repository of topographical datasets (some very high resolution) with possibly more friendly licensing terms. I used some data from them to make a physical topographic map of a mountain peak. The tooling is a little opaque from a newcomer's standpoint, so I wrote up a quick howto[1]. In short you go from GeoTIFF to an STL surface with phstl, then extrude into a volume using Meshmixer (could use something else).

0: https://opentopography.org/start

1: https://giferrari.net/blog/2023/1/2023-1-13-printing-lidar-t...


13 minutes in.

Check out the Nobara home page; it probably has more info than you expect: https://nobaraproject.org

I guess the idea is that yeah you could do everything Nobara did to your own distro, but then you’d just end up with the same thing.


I would highly recommend Fooocus to anyone who hasn't tried: https://github.com/lllyasviel/Fooocus

There are a bajillion local SD pipelines, but this one is, by far, the one with the highest-quality output out of the box, with short prompts. It's remarkable.

And that's because it integrates a bajillion SDXL augmentations that other UIs do not implement or enable by default. I've been using Stable Diffusion since 1.5 came out, and even having followed the space extensively, setting up an equivalent pipeline in ComfyUI (much less diffusers) would be a pain. It's like a "greatest hits and best defaults" for SDXL.


Note that this technique and its results are unrelated to the infamous "spiral" ControlNet images a couple months back: https://arstechnica.com/information-technology/2023/09/dream...

Per the code, the technique is based off of DeepFloyd-IF, which is not as easy to run as a Stable Diffusion variant.


This is covered in "Information Theory, Inference, and Learning Algorithms" by David MacKay ( https://www.inference.org.uk/itprnn/book.pdf ):

> Why unify information theory and machine learning? Because they are two sides of the same coin. In the 1960s, a single field, cybernetics, was populated by information theorists, computer scientists, and neuroscientists, all studying common problems. Information theory and machine learning still belong together. Brains are the ultimate compression and communication systems. And the state-of-the-art algorithms for both data compression and error-correcting codes use the same tools as machine learning.

* In compression, gzip is predicting the next character. The model's prior is "contiguous characters will likely recur." This prior holds well for English text, but not for H.264 data.

* In ML, learning a model is compressing the training data into a model + parameters.

It's not a damning indictment that current AI is just compression. What's damning is our belief that compression is a simpler/weaker problem.
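
The gzip point is easy to demonstrate: feed DEFLATE data that matches its prior (recurring substrings) and data that doesn't. A quick sketch using Rust's flate2 crate (inputs made up for illustration):

    use flate2::{write::GzEncoder, Compression};
    use std::io::Write;

    fn gzip_len(data: &[u8]) -> usize {
        let mut enc = GzEncoder::new(Vec::new(), Compression::default());
        enc.write_all(data).unwrap();
        enc.finish().unwrap().len()
    }

    fn main() {
        // English-like text matches the prior: substrings recur constantly.
        let english = "the cat sat on the mat and the cat sat again ".repeat(100);

        // Pseudo-random bytes via a tiny LCG: nothing recurs, so the
        // compressor cannot "predict" the next byte.
        let mut x: u64 = 0x243F6A8885A308D3;
        let noise: Vec<u8> = (0..english.len())
            .map(|_| {
                x = x.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
                (x >> 33) as u8
            })
            .collect();

        println!("english: {} -> {} bytes", english.len(), gzip_len(english.as_bytes()));
        println!("noise:   {} -> {} bytes", noise.len(), gzip_len(&noise));
        // Expected: the English-like text shrinks dramatically; the noise
        // barely shrinks (or even grows slightly).
    }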



Different debts carry different "interest rates," and the skill is to pay off the high-interest ones at the expense of the 0-rate ones.

I have a closet in the basement where, when I did the vinyl plank floor, I ran out of planks, so they don't quite reach the back of the closet. A problem? Yes. A bit ugly? Yes. But in reality the flaw is covered by boxes 100% of the time, and I can live a happy life in this house for decades and not be affected. That's 0%-interest tech debt.

On the other hand, if my gutters are clogged, there's interest on that, because the longer I wait, the costlier it will be to deal with: clogged gutters can lead to basement leaks or to the gutters themselves detaching. Or if my stoop is broken, that's not just an eyesore; I keep tripping on it, and the sooner I fix it, the sooner I stop tripping. That's high-interest debt that should be fixed ASAP.

In engineering, high-rate debt could be some architectural issue that slows down the development of every single feature. You want to quickly pause features and pay this down so everything can move faster. On the other hand, a file you never touch that has some shitty code or lots of TODOs may be very low-interest debt, since in practice you never touch it and aren't otherwise bothered by it, other than knowing that it's ugly - like my closet floor.

Engineers make two mistakes around this. On one hand, fixing zero-interest debt when there are more important things to do. On the other hand, when they say "oh, product management/leadership didn't sponsor our tech-debt fixing," it's often because we failed to articulate the real cost of the problem - explaining that it's high-rate and how it's costing us.


I'm not sure about this. On one hand the idea sounds good; on the other, it raises lots of questions. For example, who will decide who is a "worthy load bearer"? Then, once people are funded, who will manage their workloads?

Also, I'm dubious the whole thing is actually needed in the first place. No examples of "load bearers needing funding" were given. DNS was mentioned, but what part of it exactly? The TLD registries are doing pretty well financially, from what I've heard, by selling all the subdomains in their TLDs, so what are we talking about exactly? Reverse DNS? Things like .org or .edu? Those are funded by governments.

Network-wise, we have telcos paying for subsea cables, and Internet exchanges like the one in London are self-funded (by connection fees). Routing is managed by the same telcos and IXs.

What are these individual "Internet Load Bearers" that keep the entire Internet afloat without making a dime from it? Usenet admins? I might have paid to keep Usenet alive, but Google bought it all, didn't they?


This doesn’t appear to support rich-text formatting ranges like bold, italic, etc., unless I’m missing something in the API. AFAIK Peritext is still the state of the art in rich-text CRDT algorithms: https://www.inkandswitch.com/peritext/

I’d love to see this build the rich text stuff from the Peritext algorithm.
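
For context, Peritext's core move is to anchor formatting marks to stable character IDs rather than to indices, so concurrent edits elsewhere can't shift a bold span. A rough Rust sketch of that shape (all names hypothetical, not Peritext's actual data model verbatim):

    /// Stable identity of a character: (replica id, per-replica counter).
    /// Unlike an index, this never changes as text is inserted or deleted.
    #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
    struct CharId {
        replica: u32,
        counter: u64,
    }

    #[derive(Clone, Debug)]
    enum Style {
        Bold,
        Italic,
    }

    /// A formatting mark anchored between two stable character IDs.
    #[derive(Clone, Debug)]
    struct Mark {
        start: CharId,
        end: CharId,
        style: Style,
        // Peritext also tracks whether a mark "expands" when new text is
        // typed at its boundary (e.g. continuing to type in bold).
        expand_end: bool,
    }

    fn main() {
        let bold = Mark {
            start: CharId { replica: 1, counter: 4 },
            end: CharId { replica: 1, counter: 9 },
            style: Style::Bold,
            expand_end: true,
        };
        println!("{:?}", bold);
    }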


I'm looking into client-side syntax highlighting as well at the moment and lezer popped up: https://lezer.codemirror.net/. It's directly based on tree-sitter but tailored to be more web friendly (and written by the same author as codemirror) - https://marijnhaverbeke.nl/blog/lezer.html

There definitely are some: https://shop.pimoroni.com/search?q=e-ink

And now I think I know what my next project is going to be, I am sure I can find some desk space


I started building out tools to track congressional stock trading in 2020.

Since then, I believe there have been 9 other proposals similar to this one. None of them have even been called to a vote.

It seems like Congress is pretty unwilling to regulate its own trading.

Until they are, feel free to track the trading here: https://www.quiverquant.com/congresstrading/

