Meh, typical overblown “take” imho.

> All this stuff is just done automatically in Rust. If you have to write out the code by yourself, that's more work, and a huge chance for bugs. The compiler is doing some very complex tracking to figure out what to drop when and where. You'd have to do that all in your head correctly without its help.

What prevents anyone from dedicating a Zig memory allocator to the job (and all of its subtasks), and simply freeing the entire allocator at the end of the job? No baby-sitting needed.

Or if the mindset is really to be assisted, because it’s “very complex” and too “much work”, you may as well use a garbage-collected language.

> It's knowing the compiler is on your side and taking care of all this that makes it magical.

Until you get used to it and trust it so much that when it suddenly misses something - either after a compiler update, or after some unsupported code is introduced - that shit takes down prod on a Friday. I’m not going to take the chance, thank you; I can call free() and run valgrind.


> What prevents anyone from dedicating a Zig memory allocator to the job (and all of its subtasks), and simply freeing the entire allocator at the end of the job? No baby-sitting needed.

Given the whole ecosystem is built around the Allocator interface, it's entirely feasible for the consumer of a library to pass in a Garbage Collecting allocator, and let it do the job of deciding what gets dropped or not.

Downside is that this is all at runtime, and you don't get the compile-time memory management that Rust has.


I am sorry, are you non-sarcastically arguing that being able to pass through airport security, potentially accessing cockpits and planting bombs onboard airplanes, via a high-school-level SQL injection on a federal website used by dozens of airlines & airline employees, is actually "fine"?

Besides, I am not sure what sort of "security through obscurity" you are talking about? Ian and Sam found it, and frankly - with a public page, page title + first h1 tag clearly stating that this relates to a Cockpit Access system, this has got to show up in a shit ton of security research search engines instantly.
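For context, the class of bug being discussed - string-built SQL - and its standard fix, parameterized queries, can be sketched with Python's sqlite3 (the table and data here are hypothetical, not from the actual site):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crew (id INTEGER, name TEXT)")
conn.execute("INSERT INTO crew VALUES (1, 'Alice')")

payload = "' OR '1'='1"  # classic injection string

# Vulnerable: interpolating user input lets it rewrite the WHERE clause,
# so the query matches every row regardless of the name.
vulnerable_rows = conn.execute(
    f"SELECT * FROM crew WHERE name = '{payload}'"
).fetchall()

# Safe: a bound parameter is always treated as data, never as SQL,
# so the literal string "' OR '1'='1" matches nothing.
safe_rows = conn.execute(
    "SELECT * FROM crew WHERE name = ?", (payload,)
).fetchall()
```

The depressing part is exactly how little skill this requires: the placeholder form is the first example in most database-driver documentation.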


Not 100% sure why it’s often idolized on HN.

We’ve maintained a financial exchange w/ margining on it for 8 years, and I guarantee you that everyone - customers and employees alike - was more than relieved once we were able to lift and shift the whole thing to Java.

The readability and scalability are abysmal as soon as you move on from a quant desk scenario (which, everyone agrees, it is more than amazing at - pandas and Dask frames all feel like kindergarten toys in comparison). The disaster recovery options are basically bound to distributed storage - which is, by the way, “too slow” for any real KDB application, given that the whole KDB concept marries storage and compute in a single thread. And use cases involving historical data, such as the one mentioned in the article, very quickly become awful: one kdb process handles one request at a time, so you end up having to deploy & maintain hundreds of RDBs keeping the last hour in memory, HDBs with the actual historical data, pauses for hourly write-downs of the data, mirroring trees replicating the data using IPC over TCP from the matching engine down to the RDBs/HDBs, and recon jobs to verify that the data matches across all the hosts.

Not to mention that such a TCP-IPC distribution tree of single-threaded applications means that any single replica stuck down the line (e.g. a big query, or too slow to restart) will typically lead to a complete lockup - all the way up to the matching engine - so then you need to start writing logic for circuit breakers to trip both the distribution & the querying (nothing out of the box). And at some point you need to start implementing custom sharding mechanisms for both distribution & querying (nothing out of the box, once again..!) across the hundreds of processes and dozens of servers (which has implications for the circuit breakers), because replicating the whole KDB dataset across dozens of servers (to scale the requests/sec you can factually serve in a reasonable timeframe) gets absolutely batshit crazy expensive.
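The circuit breakers mentioned above are a standard pattern; a minimal sketch in Python (class name, thresholds, and cooldown are illustrative - this is the generic pattern, not KX code):

```python
import time

class CircuitBreaker:
    """Trips open after consecutive failures, then allows a probe
    request through again once a cooldown has elapsed."""

    def __init__(self, max_failures=3, cooldown=30.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.opened_at is None:
            return True  # closed: traffic flows normally
        if now - self.opened_at >= self.cooldown:
            return True  # half-open: let one probe request through
        return False     # open: shed load instead of locking up

    def record(self, ok, now=None):
        now = time.monotonic() if now is None else now
        if ok:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = now
```

Wrapping each downstream replica's IPC calls in something like this is what keeps one stuck subscriber from backing pressure all the way up the tree - and, as noted, you get to build and tune it yourself.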

And this is the architecture as designed and recommended by the KX consultants that you end up having to hire to “scale” to service nothing but a few billion dollars in daily leveraged trades.

Everything we have is now in Java - all financial/mathematical logic ported over 1:1 with no changes in data schema (neither in-house nor for customers). It uses disruptors, convenient Chronicle/Aeron queues that we can replay anytime (recovery, certifying, troubleshooting, rollback, benchmarks, etc.), and infinitely scalable, sharded S3/Trino/ScyllaDB for historical data. Performance is orders of magnitude up (despite the thousands of hours micro-optimizing the KDB stack + the millions in KX consultants - and without any real Java optimization effort), incidents became essentially non-existent overnight, and the payroll + infra bills were also divided by a very meaningful factor :]


I think the adulation is mainly driven by a few things:

1. it was fast by a huge margin for its time

2. the reason for its speed is the language behind it

3. it uses an esoteric language and still attains success

4. the core engine is implemented using surprisingly few lines of code

5. the core has been written and maintained by one person

All of these are things I've heard, so I can't claim they're 100% true, but I'm sure it's a combination of some of them.

I feel like APL and all its relatives had long ago gained legendary status. So the legend lives on - maybe longer than it should.

Don't get me wrong. It's still amazing!


Compared to similar dynamic scripting languages, Q is very fast. Compared to statically compiled languages, it can be surprisingly competitive, but is usually slower. The truly distinctive thing about Q is its efficiency as a user interface: at a REPL you can rattle off a short sequence of characters to transform and interrogate large datasets at interactive speeds and flexibly debug complex distributed systems live. In the right hands, it's a stunningly effective rapid-application-development tool (the above "quant desk scenario"); this was perhaps even more true in the k2 days when it was possible to build ugly but blisteringly fast and utilitarian data-bound GUIs for K programs in a few lines of code. There's certainly an abundance of romanticism and mythology surrounding it, but some of the claims are real and enduringly unmatched.


Python in a notebook is “REPL-like” and much more modern.

And though I agree low code is important, Streamlit or Dash are a much more fully featured and open way to do that.

I agree KDB has a good development workflow, but I think the same is available in an open source stack like ClickHouse + Python + Jupyter.


   And this is the architecture as designed and recommended by the KX consultants that you end up having to hire to “scale” 
I think this hits on one of the major shortcomings of how FD/Kx have managed the technology going back 15+ years, IMHO.

Historically it’s the consultants that brought in a lot of income, with each one building ad-hoc solutions for their clients and solving much more complicated enterprise-scale integration and resilience challenges. FD/Kx failed to identify the massive opportunity here, which was to truly invest in R&D and develop a set of common IP, based on robust architectures, libraries and solutions around the core kdb+ product that would be vastly more valuable and appealing to more customers. This could have led to a path where open sourcing kdb+ made sense, if they had a suite of valuable, complementary functionality that they could sell. But instead, they parked their consultants for countless billable hours at their biggest paying customer’s sites and helped them build custom infra around kdb+, reinventing wheels over and over again.

They were in a unique position for decades, with a front row seat to the pain points and challenges of top financial institutions, and somehow never produced a product that came close to the value and utility of kdb+, even though clearly it was only ever going to be a part of a larger software solution.

In fairness they produced the delta suite, but its focus and feature set seemed to be constantly in flux and underwhelming, trying to bury and hide kdb+ behind frustratingly pointless UI layers. The more recent attempts with Kx.ai I’m less familiar with, but seem to be a desperate marketing attempt to latch onto the next tech wave.

They have had some very talented technical staff over the years, including many of their consultants. I just think that if the leadership had embraced the core technology and understood the opportunity to build a valuable ecosystem, with a goal towards FOSS, things could look very different. All hindsight of course :)

Maybe it’s not too late to try that…


I'm very curious about this rewrite in Java, especially the orders of magnitude improvement. That sounds extremely impressive, and something that I wouldn't have considered possible. Can you share a bit more about how this performance improvement is achieved?


Well, I don't think the founders of that exchange complain about KDB that much. After all, KDB allowed them to go to market quickly and make billions, and then they changed the tech stack when demand justified it. So what? KDB was never meant to run a large exchange, but you just demonstrated that it can run a smaller one.

> ... and without any Java optimizations really ...

Come on, be honest! All of the core tech needs to be implemented in highly optimized GC-free Java. And you need to hire senior Java consultants who are highly specialized and do that for 10+ years and they also cost millions. I happen to know that BitMEX (located in Asia) has such consultants working from the EU. So, it's that easy to hire them!


"without ever having a job"

First Google Result for "nofx retire":

> The punk group have all got normal jobs, which they will continue to pursue, but are pulling out all the stops for one final tour for their army of fans.

Seems in direct contradiction with this anarchist click bait title.


Their wonderful song, The Death of John Smith, describes the life of a normal man with a "proper job": https://www.youtube.com/watch?v=EGgUt9vNGlo

NOFX's version of a job is better described by "Thank God It's Monday": https://www.youtube.com/watch?v=22rDfUc9HgA

"I live a 5-day weekend, I got a year-long holiday, Thank God it's Monday..."

I love this band!!


It's a reference to their music, although a very poorly thought-out one. But I don't think the result is entirely fair, as Fat Mike is a record label owner. That's not what one may think of when you say "a job"; these people are all middle-aged and aren't working the cash register at a Walmart or something like that. A job is a job, but NOFX are far past some idealistic idea of "live off the music, skip college" and whatever else people thought of them.


Please do not smear the good name of anarchism with mass media clickbait practices.


The pop culture/mass media portrayal of anarchism vs. the movement/anarchist communities themselves have almost nothing to do with each other.


Anarchists' portrayal of themselves is actually much more discrediting than the pop culture version; it's basically an ideology about everyone being in meetings all day.

Unlike some other ideologies they will happily answer any questions about it, it's just that the answers are extremely bad, whether it's "how would you make insulin" (you won't) or "how does criminal justice work" (sometimes meetings, sometimes lynch mobs).


Not my experience at all, esp. in terms of the people/communities I've met or the books I've read.

> Unlike some other ideologies they will happily answer any questions about it, it's just that the answers are extremely bad, whether it's "how would you make insulin" (you won't) or "how does criminal justice work" (sometimes meetings, sometimes lynch mobs).

That's precisely what I meant by pop culture/mass media. At this stage, this is more of a meme regarding anarchy than reality and this couldn't be farther from truth. (and it's quite an old one, think: French enlightenment era philosophers using pre-partition Poland as an example of anarchy).


How will you make insulin?


> Unlike some other ideologies they will happily answer any questions about it, it's just that the answers are extremely bad, whether it's "how would you make insulin" (you won't) or "how does criminal justice work" (sometimes meetings, sometimes lynch mobs).

To be fair, every ideology and/or mode of societal organization sucks at handling the nasty edge cases (even the best organizational mode we've found has a saying encapsulating this fact: "hard cases make bad law"), but yeah, anarchists mainly thrive in places where they aren't subject to the consequences of bad answers (e.g. twitter, reddit, bluesky, etc).


The problem with ideologies is that they work really well in small communities.

Communism works. In a small community/village. Same with anarchism. Because that's basically how small villages work. You can adjust as needed in the moment without too much effort.

The problem happens when you grow. What happens when you will never meet the people you are governing? What happens when you're busy and can't research all the complexities of local politics, and some old person is stirring up trouble yelling about not-in-my-backyard stuff... and one person starts bribing, and so on?


The problem is that we want to find ideologies that work across millions of people. Even democracy is pretty terrible in modern countries because of how large they are, so it creates a lot of inequality and centralisation of expenditure.

I think the secret is to make the state smaller, not in the libertarian sense, but in actual geography and number of "subjects". No organisation has any business deciding the lives of tens or hundreds of millions of people. The trend is for governments to become larger and larger, just like any other empire. A one-world government, for example, would not be a utopia, it would be a veritable hell on earth.

(Disclaimer: I'm an anarchist, and as you say, anarchism works only at smaller scales, so I'm biased)


You're probably onto something. Being too big leads to problems. But also leads to power.

You can't fuck with the US/China, even if nukes were removed, because they are a massive force that will crush you beneath their heel. So there is mass power in unity. However, look at Europe, and you see a lot of gains as well, where multiple smaller governments exist, and a unifying body was created to compete with the likes of the US without giving up their individuality. But the US is also distributed, since states govern themselves with overarching federal oversight.

It is massively complex. At the end of the day we need not only a small anarchistic state, we also need reasons why some cult of personality won't be able to rally his million followers and come in guns blazing, taking over his neighbors. Unfortunately I feel like democracy is the least bad system we've got.


As anti-AI as I am, I think democracy might be the best we get until the day we are able to create benevolent machine kings organising our lives. Monarchies/authoritarian governments can in theory be fairer and much more efficient at tackling big problems than any democracy, but fail spectacularly, and with a lot of bloodshed, when paired with human stupidity and greed.

Until then, we're in kind of a political limbo of mediocrity.

The image I have of anarchism is not the mainstream one of "millions of people doing whatever they want"; organisation and hierarchy are not in conflict with anarchism, as long as you are free to leave and form your own. So, under that point of view, anarchism-as-political-force is little more than a multitude of small heterogeneous communities collaborating and trading with each other. Maybe the secret is to embrace our tribal nature while avoiding the issue of cults of personality—this is an interesting dilemma to which I don't have a good answer, so thank you for the food for thought.


The problem with "free to leave" is that it only works with unlimited resources. Why should I allow a random person into my house to eat my food on a promise to assist, if they are free to leave at any time? And when they go to another house, why would someone there do the same?

Being altruistic is great, but at some point you run out of necessary resources. And altruism dies at scale when resources dry up.


The NYT article says this:

> "Controversies aside — most of which involved drugs, onstage banter taken too far and the unpredictability of both the band and its fans — the members of NOFX managed to do something most people can only dream of: They avoided having a day job for 40 years."

We're going to need some deep investigative journalism to sort this mystery out...


Also, I remember reading when I was young that NOFX quickly abandoned the "drugs/alcohol/sex" ethos and became quite healthy people despite having the opposite branding.

I have no idea if it is true or not.


The article talks about how Smelly got his name because of the drugs but has been clean and sober for a long time.

It also mentions how instead of drugs, they do yoga and get massages.

No mention of their current sex life.


It hasn't exactly been a clean path. Fat Mike has been on and off through the years and they never stopped singing about doing drugs. That said I think they are vegetarians and otherwise relatively health conscious.


Indeed. I think many people don't realise how many artists do actually have day jobs as well - it's not always as lucrative as you'd think.


Dexter Holland, the frontman of The Offspring, finally got time to finish his PhD in molecular biology, after they stopped touring.

https://en.m.wikipedia.org/wiki/Dexter_Holland

For him it wasn't about money, though.


Milo Aukerman of the Descendents has a PhD in molecular biology - there's a great interview (maybe in the Filmage documentary?!) where he's discussing a conversation with his colleagues who are asking "what did you do at the weekend?" and he replies "Oh, my band was playing Woodstock..." (or something along those lines!)


That's Dr. Holland to you.

Ain't a lot of money in molecular biology these days anyway.


Well, if Dr. Holland had invented a cure for Covid, or a vaccine, there would have been some money, I guess.


I'm sure the company would've paid him a small bonus, and kept the 99.9999% of the profits from it for themselves.


I know it kills you, but guess what: There is no "Danny's idea." Everything that comes in here belongs to the agency.

I give you money, you give me ideas.

https://youtu.be/BnNV4_8izkI?si=Ptz0QCr6V01MOMpN&t=57


That isn't what happened to the people who actually did invent them at BioNTech/Moderna.


Greg of Bad Religion I believe also has some sort of PhD.


Interesting, did not know that. Punks seem to like biology it seems ..

"Graffin obtained his PhD in zoology at Cornell University and has lectured courses in natural sciences at both the University of California, Los Angeles and at Cornell University."


They've toured for 40 years, sold 8 million records and have over a million monthly listeners on Spotify. The singer's label has put out many of the big punk bands for the last 30 years. I agree with your point, but this lot are fine.


They are indeed “doing just fine” which I am sure is what you must have been referencing — from one of their most famous songs:

  Buy me a Becks beer
    or pass me the bong
  Gimme some Bushmills
    I'll sing you this song
  Open another
    big box of cheap wine
  We're over 30
    we’re doing just fine
The last time I saw them live it was pushing 40 instead of over 30 and that was nearly a decade ago.


Hahaha, yeah, they're a fun band. I saw them (for the last time, I guess?!) a couple of weeks ago (and oddly in reference to the lyrics, I actually saw them tour Pump Up The Valuum as well). Mike is pushing 60 now!


T.S. Eliot (poet) worked in a bank

Philip Glass (composer) was also a plumber, which surprised the London music critic who unwittingly employed him to install a dishwasher

Anthony Trollope was a postal surveyor

Kurt Vonnegut was a car dealer

See https://www.mentalfloss.com/article/52293/11-celebrated-arti...


Same goes for hard sciences in academic fields. Cant feed family on physics postdoc pay, goto program or drive a bus. You goto do what you gota do!


"goto" considered harmful, use "gotta" instead


> "without ever having a job"

It's not entirely different from all the successful artists who live lavish lifestyles (for example, many musicians are complete car nuts and own insane car collections [1]) but sing songs criticizing "money" (even though they have plenty) or criticizing "Wall Street" (even though most of their savings, in addition to the cars and real estate, sit at their broker).

I have nothing against money or wealth.

But the irony of a musician or a band criticizing money while flying private is not lost on me.

[1] As an example of such musicians (though not necessarily ones who criticized money): Miles Davis used to race Herbie Hancock in the streets, "Ferrari vs AC Cobra" style. And I love that.


To be fair, I don't think anarchism (at least as espoused by the CNT-FAI et al) suggests that anyone can just sit on the couch without doing anything.

If you want that, may I recommend capitalism?

If your expenses are in USD, 30Y TIPS are above 2% last I saw, so putting 50x your burn rate in those should allow you to sit right back, clip those risk-free coupons, and watch the world go by.
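The arithmetic behind that claim, with hypothetical numbers:

```python
annual_burn = 100_000          # hypothetical yearly expenses, USD
real_yield = 0.02              # ~2% real (inflation-adjusted) yield on 30Y TIPS
principal = 50 * annual_burn   # the suggested 50x burn rate in TIPS

# 50 * 2% = 100%: the coupons alone cover the burn rate,
# and TIPS principal adjusts with CPI, so it holds in real terms.
annual_coupons = principal * real_yield
```

Of course "risk-free" here means free of nominal default and inflation risk, not of tax drag or lifestyle inflation.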


I don’t get it. If you do not want to participate in the preservation and distribution of the archive, why don’t you just move on instead of complaining?

Besides, gluetun+chihaya+qbit containers do the job without breaking a sweat, and without ever having to remember that you run a VPN - as it’d only be tunneling the containers of your choice. gluetun is the best image ever made!


Hang on. I think the OP is trying to find a middle ground between their level of comfort and contributing to the project. I think their intentions are positive; they want to help but are constrained by their other needs or priorities.


They’re not complaining, they’re asking a question. I think you completely misread the tone.


No I definitely want to. I was stating why I haven't thus far. I support piracy.


Everyone’s talking about how you MUST hit the database for revocations / invalidations, and how it may defeat the purpose.

How is no one thinking of a mere pub-sub topic? Set the TTL on the topic to whatever your max JWT TTL is, make your applications subscribe to the beginning of the topic upon startup, problem solved.

You need to load up the certificates from configuration to verify the signatures anyways, it doesn’t cost any more to load up a Kafka consumer writing to a tiny map.
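To illustrate the consumer side of that idea (not any particular library's API): the state each service keeps really is just a tiny map keyed by token ID, where entries can be dropped once the token would have expired on its own. A hedged Python sketch, with the Kafka consumer abstracted to an `on_message` callback:

```python
import time

class RevocationMap:
    """In-memory revocation map fed by a pub/sub consumer (sketch).
    Entries expire once the token itself would have expired anyway,
    which is why the topic TTL can match the max JWT TTL."""

    def __init__(self):
        self._revoked = {}  # jti (token ID) -> token expiry, epoch seconds

    def on_message(self, jti, token_exp):
        # Wire this to the pub/sub consumer loop; called per revocation event.
        self._revoked[jti] = token_exp

    def is_revoked(self, jti, now=None):
        now = time.time() if now is None else now
        exp = self._revoked.get(jti)
        if exp is None:
            return False
        if exp <= now:             # token expired on its own; drop the entry
            del self._revoked[jti]
            return False
        return True
```

On startup the service replays the topic from the beginning (bounded by the TTL), after which `is_revoked` is a local dictionary lookup on the hot path - no database round-trip per request.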


For maximum scalability you'd want a bloom filter at each service for testing the token, and some central revocation list where you verify the tokens the filter flags.

But this is way overkill for anybody that isn't FAANG, and it's probably overkill for most of FAANG too. For normal usage, it's standard to keep the revocation filter centralized at the same place that handles renewals and the first authentication. This is already overkill for most people, but it's what comes pre-packaged.
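A Bloom filter works for this because it has no false negatives: a revoked token always hits the filter, so only filter hits need the central lookup. A minimal sketch in Python (bit size and hash count are illustrative):

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: no false negatives, tunable false-positive rate.
    A hit only means "possibly revoked" -- confirm against the central list."""

    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive num_hashes independent positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False means definitely absent; True means "check the central list".
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

The filter for even millions of revoked tokens fits in a few megabytes per service, which is why it scales; the cost is rebuilding or versioning it as entries expire, since a plain Bloom filter can't delete.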


So, basically a database where you store a replica in memory on every edge node.


Not really. A pub/sub bus cluster (or PaaS) is pretty different from a database.


I’m not talking about the pub sub cluster.

If you have a pub sub cluster that you push revocation details into, and running servers subscribe to that feed and then track a rolling list of tokens that would otherwise still be active but have been revoked, you are effectively storing a revocation database on every edge node.


Again not really. If the pub/sub broker is persistent, you don't have to persist the revocation list on edge nodes. And just pointing out that it's a db in some loose sense of the word doesn't help with the actual challenge of organizing the flow of data reliably in a federated system (i.e. one that can't share a single database).


Frankly, who can read this!? I am not sure what's worse: the multi-line comments interleaved with the code, having multiple instructions on a single line, or the apparent disconnect from the article's pseudo-code.


I would blame the majority of your criticism on the fact that HN is not the best place to read code. Also, syntax highlighting & basic familiarity with Nim help.

His code is doing a few more things than necessary. The actual algorithm is inside the `uniqCEcvm` template. The `it` it receives is anything you can iterate over (a collection or an iterator). Multiple things in one line really only appear where they directly relate to the part at the beginning of the line.

The `when isMainModule` is Nim's equivalent of Python's `if __name__ == "__main__"`. The entire part below that is really just a mini CLI to bench different (random) examples. One final thing to note: the last expression of a block (e.g. of the template here) is returned.

And well, the style of comments is just personal preference of course. Whether you prefer to stay strictly below 80 cols or not, shrug.

I grant you that the usage of 2 sets + pointer access to them + swapping makes it harder to follow than necessary. But I assume the point of it was not on "how to write the simplest looking implementation of the algorithm as it appears in the paper". But rather to showcase a full implementation of a reasonably optimized version.

Here's a version (only the algorithm) following the paper directly:

    proc estimate[T](A: seq[T], ε, δ: float): float =
      let m = A.len
      var p = 1.0
      var thr = ceil(12.0 / ε^2 * log2(8*m.float / δ))
      var χ = initHashSet[T](thr.round.int)
      for i in 0 ..< m:
        χ.excl A[i]
        if rand(1.0) < p:
          χ.incl A[i]
        if χ.card.float >= thr:
          for el in toSeq(χ): # clean out ~half probabilistically
            if rand(1.0) < 0.5:
              χ.excl el
          p /= 2.0
          if χ.card.float >= thr:
            return -1.0
      result = χ.card.float / p
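For readers more at home in Python, here is a hedged translation of the same algorithm (mirroring the structure of the Nim `estimate` above; `eps`/`delta` are the accuracy and failure-probability parameters from the paper):

```python
import math
import random

def estimate(A, eps, delta):
    """CVM distinct-elements (F0) estimator, following the paper directly."""
    m = len(A)
    p = 1.0
    thr = math.ceil(12.0 / eps**2 * math.log2(8 * m / delta))
    chi = set()
    for a in A:
        chi.discard(a)            # forget any earlier decision about a
        if random.random() < p:
            chi.add(a)            # keep a with the current probability
        if len(chi) >= thr:
            # buffer full: clean out ~half probabilistically
            chi = {el for el in chi if random.random() < 0.5}
            p /= 2.0
            if len(chi) >= thr:
                return -1.0       # failure case from the paper
    return len(chi) / p
```

Note that when the threshold is never reached, `p` stays at 1.0 and every distinct element is retained, so the estimate is exact; the probabilistic halving only matters for streams with more distinct elements than the buffer allows.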


Thank you very much for having taken the time.. Your comments and function are both very helpful!


Having used Memories on Nextcloud, and having spent hours trying to micro-optimize the Nginx & PHP configuration, I can safely say that, while it is better than Nextcloud’s native Photos app, it is absolutely nowhere near Immich, Filerun, or surprisingly even a dumb SMB share (which doesn’t have thumbnail caching…!). I really tried hard, as Immich’s support for external libraries was still in a PR at that time, and I didn’t want to have two separate tools to grab files and grab photos.

A big part of the problem, it seems, is that when you have a large library and you jump/scroll to a specific year or so, it won’t cancel the previous pages’ worth of thumbnail requests. So as soon as you scroll around searching for something, it quickly accumulates hundreds of useless requests that overload the PHP workers and bring everything to a standstill.

I personally had to give up. When trying to grab photos from abroad for my upcoming proposal, I literally deleted Nextcloud/Memories, plopped Immich into docker compose, and let it index/transcode/generate thumbnails from scratch against my “external library” (so Immich doesn’t duplicate the media). That ended up saving me days of buffering, and I was able to find the nice pictures for the occasion!

(R740xd with 48 cores and 96TB SSD-backed ZFS pool)


It's silly to micro optimize nginx / php when you have docker. Just use the Nextcloud Docker image or AIO and be done with it, everything is pre-optimized.

Thumbnail caching exists (it's even highly configurable), there's absolutely zero buffering even with 100k photos+ on a raspberry pi. You obviously did not read the documentation or install the preview generator (which the docs clearly tell you to)

Your deployment skills are hot garbage

EDIT 3: ^the last line was in response to something that has been edited out of the original comment

EDIT: the comment this is in reply to was edited multiple times. This is pointless and a lot of it is just false.

EDIT 2: (at least currently the previous comment claims unnecessary PHP requests) this only happens if your configuration is incomplete; you didn't install preview generator as the docs say. Secondly it happens exactly once, the first time you see the image. All other requests are gracefully cancelled.


Absolutely was using the AIO image, with thumbnail generation enabled for every format in my library (another thing you need to manually edit in Nextcloud’s configuration, as the default format list is limited).

And it’s only “pre-optimized” if you are cool with PHP memory limit crashes, PHP operation timeouts, PHP request size limits, and the works.

Another joy associated with using Nextcloud sync is that uploads don’t even seem to support multi-part resumable uploads. So not only is it crazy slow, but if there’s any error during the auto-upload of a 2G video clip, or the app is temporarily backgrounded by iOS, it’ll go into an exponential back-off (which you can force-start), and eventually just start the upload of that/those file(s) over from scratch - a good way to waste days burning in your screen while trying to ensure your media is backed up in case you lose your phone on a trip. Try uploading raw images & 4K clips shot on iPhone to Nextcloud using the Nextcloud app + the AIO image from abroad.

I’m telling you, I’ve tried to use them for quite some time, and I’m far from DevOps-illiterate - I’ve been using k8s since its infancy; we wrote the original Operators at CoreOS way back.


I don't know what to say if you think flipping a switch in the admin UI is "manually" configuring.

Otherwise, mostly all of this is just false. I routinely upload massive files (both RAW and 4K, yes) with almost default configuration and it just works. You also lied with "no thumbnail caching" in the first comment, no idea why.


Wow, your first comment was completely rude and unnecessary. Why do you feel the need to say, "you must be lying or you suck at deploying, because it works for me."

also, they meant that their SMB share didn't have thumbnail caching


Hmm I can reply now, strange. That comment was edited multiple times so this is pointless. Also the original commentor started the rude exchange with "hot garbage" (wonder if they'll edit that out too now)

EDIT: yeah, they edited that out too.


I understand now that you are the developer of this app.

I'm sure it doesn't feel very good to have someone criticize it, I get that. But, this person cared enough about the thing you made to use it, troubleshoot it, and post a comment about it on HN.

At the end of the day, it's valuable user feedback :)


No, just no.

Valuable user feedback (which I absolutely love) is someone pulling the server logs, filing a bug on GitHub and following through till it gets fixed. Or, even attempting to see what parts are slow and reporting it. Worse but still very helpful, providing a link to an affected instance that might help "see" what might be happening.

Spending a few hours trying random things and then complaining loudly like a know-it-all is NOT valuable feedback; it's bullshit. Nothing here is helpful, at all. There's absolutely zero indication of what could be fixed and why this particular person's deployment is broken while thousands of others on much slower hardware work just fine. None.


Yeah, you're right. You should say "please file a detailed bug report and consider contributing to the project", instead of being a dick about it.

The other comments you posted are also a bit odd without you disclosing you're the author. just saying


100% agree, generally speaking.

In this case I was rather annoyed since the original comment was very offensively worded and the person obviously had zero intention of helping out. Their only goal was to stroke their own ego by shouting out how something they couldn't get to work is crap.

This is part of the reason for open source maintainer burnout -- useless comments about how something is broken with zero intention of helping to fix it. Hey, it's free -- if you don't like it then either help, or stop crying and move on to something else.


You asked for feedback in your post. No more, no less. Then you started flaming a person for giving their feedback. And start defending the flaming because you actually wanted feedback _in a certain format and worded nicely_.

You are doing great stuff with Memories. Community building skills need some work though.

That is my feedback. Which you asked for.


Totally understandable sentiment!


Well I for one would like to say I truly appreciate the brilliant work you have done. The app is a joy to use and I have had several coworkers ask what website I was using when I show them something.

Your work has given me reminders to memories I long forgot about, and nothing can come close to the importance of recalling good memories.


I, for one, am sick of "just run the Docker image" as a deployment strategy and the be-all end-all of support. On my last attempt at serving a photo gallery, I deployed Hetzner's Photoprism image on a Hetzner server... and it failed. You would think such a thing would be bulletproof! They don't tell you an IPv4 address is needed and the log does not indicate anything is wrong other than Traefik has problems connecting to the certificate server.

If something doesn't work—regardless of how unhelpful the report or oddly configured the deployment machine is—I would love to hear about it so I temper my own expectations before trying it myself.

While I sympathize with the developer whose product is popular enough to collect 1000 issues as of two days ago, some of your many thousands of users can also get fatigued by spending resources (time, money, mental effort) on deployments that fail because the machine and network running Docker is still different enough from yours that issues arise.

My Hetzner Photoprism bug report has been sitting unanswered for two weeks. Getting the log data and trying out different DNS configurations and writing the bug report took a few hours, because I had to SSH into the Docker image and run curl verbosely and figure out which of the five docker-compose elements was causing problems; running Docker and setting up servers isn't my day job.

I don't feel like paying 25 bucks a year for an IPv4 address and don't really want to figure out how to get Let's Encrypt to work on Hetzner's IPv6 by manually adjusting the Docker Compose configuration. I thought that's the point of Docker Compose: that you wouldn't need to dick around with it to get it working.

I'll probably delete the thing and replace it with something else—potentially Nextcloud as there's no preconfigured Immich image. So, you know... expect my Memories bug report in a few days.

I can't imagine this user's complaint was fabricated from thin air. Rude or not, they are having problems with the thing you made. Make a mental note, "at least some small percent of users are still having issues, in this case no clear root cause, probably a small enough population to ignore, maybe one day further reduce the friction for reporting bugs or find a way to gather more detailed info." Maybe put them in their place if they actually attack you personally or actually have no useful information, e.g. "Product Sucks!!"

Beyond that, I (as a potential fellow user) find these not-very-dev-helpful reports insightful, as there are two dozen competing FOSS photo storage programs and I want to efficiently figure out which application has features I prefer, is actually stable and easy to deploy, not likely to switch licenses going forward, has a clear goal and steady progress, documentation that's well-written and not just a "Brothers Karamazov" dump of one developer's stream of consciousness, etc.

Should I take two or three hours to file bug reports for each of the 20 photo albums I'd consider testing instead of spending time with family or practicing music? Maintainer fatigue is no joke, but it's also a burden on users if the software does not run, and they've already sunken some opportunity cost, and then not every user knows how to be kind and helpful through their frustration.

Anyway, your reaction is valid. I hope you keep working on the project, but I'd also be okay with not having so many different FOSS options and still no clear winner.


Last post can't be edited.

I got done loading a Nextcloud image and it works fine. It's also a different base server and configured differently, and it has IPv4 without extra cost. The only issue so far is that ffmpeg is not detected by Memories so transcoding cannot be enabled, even if I install the only app related to ffmpeg, "Automated media conversion." I'll have to keep reading to see if that's the right app. The server is managed in a way that I can't ssh or change anything Docker-related. I can only log in to Nextcloud at a given URL, so I don't know how to run commands from the documentation such as "occ ..." With enough time, I can search if this is usable or not.

It will take probably 20 or 30 minutes to figure out running commands and if ffmpeg can be installed/accessed. I've already committed an hour to this platform even before uploading a single ARW, although I'm already farther along than I was with Photoprism...

EDIT: 24 minutes. I can run occ commands. I can't install ffmpeg. Many others have the same well-known problem: no video thumbnails. Oh well, not a dealbreaker.


The day SMB supports server-side thumbnail generation/caching, kindly let me know :]


This is moot.

- Immich supports external libraries

- Use docker compose and never worry about versions breaking


> This is moot. Immich fully supports external libraries.

You're correct, Immich does support external libraries. To elaborate on my original comment, I meant the inbuilt apps of Nextcloud which integrate well and complement the Memories app. An example would be the Face Recognition app, or Recognize if you fancy a different implementation. Nextcloud is, after all, an ecosystem, so using Memories gains you the other benefits of that ecosystem. This might be overkill for some, so it's up to your use cases.

Versions breaking is an issue since the mobile and server clients have to be on the same version; with Nextcloud Memories this is not a problem. This was the case when I last used Immich, so it may have changed since then.


External libraries? What do you mean by that? External storage?

The last time I looked, Immich couldn’t work with an existing file and folder structure without importing (copying) everything into its own structure (database). That’s a big no-go for me.

In Memories, the file structure of your photos is preserved as-is. And you can run it alongside other solutions that respect your folder structure.

EDIT: looks like Immich can work with external folders. But: Does it put pictures from my phone in that external folder or in its own folder?


It absolutely can, and it does not duplicate or modify the media. I mount my several-TB library with the read-only flag in Docker.

https://immich.app/docs/features/libraries#external-librarie...


Immich on mobile doesn't give you much flexibility with where each local folder gets uploaded to yet so it doesn't preserve folder structure. If you're using the CLI you can program the structure and tell it which album a folder can map to.


You can add any folder to immich as external library. No need to use cli.

So if you want custom structure, synchronize files from mobile to server in any way you prefer (Syncthing, PhotoSync, etc.) and add that folder as an external library.


This is quite a basic feature which should be built into the Immich mobile app. It's a common use case to want your screenshots or WhatsApp media albums not to be displayed on the main timeline.

If you're running an instance for less technical users, it's more hiccups to set up Syncthing etc. and then have to explain why another app is needed.


This, seriously. The long-term maintenance, tribal knowledge & risks associated with this giant hack will be greater than anything they'd ever have expected. Inb4 global outage post-mortem & key-man dependency salaries.

There's virtually no excuse not to spin up a pg pod (or two) for each tenant - heck, even a namespace with the whole stack.

Embed your 4-phase migrations directly in your releases / deployments, slap a py script on top to manage progressive rollouts, and you're done.
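To make the "py script to manage progressive rollouts" concrete, here's a rough sketch. Everything in it is invented for illustration (tenant names, the tier field, the apply_migration hook) - the real script would shell out to kubectl/helm per tenant namespace and watch metrics between batches.

```python
# Hypothetical progressive-rollout driver: least-critical tenants first,
# in small batches, so a bad migration is caught on low-stakes tenants
# before it ever reaches the heavy hitters.
from dataclasses import dataclass


@dataclass
class Tenant:
    name: str
    tier: int  # 0 = internal/canary, higher = more business-critical


def rollout_order(tenants: list[Tenant], batch_size: int = 2) -> list[list[Tenant]]:
    """Sort tenants by ascending criticality and chunk them into batches."""
    ordered = sorted(tenants, key=lambda t: t.tier)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]


def run_rollout(tenants: list[Tenant], apply_migration) -> list[str]:
    done = []
    for batch in rollout_order(tenants):
        for tenant in batch:
            apply_migration(tenant)  # e.g. helm upgrade in the tenant's namespace
            done.append(tenant.name)
        # A real script would pause here, watch error rates, and abort
        # (or roll back) on any regression before touching the next batch.
    return done
```

The inventory driving this is just the same yaml file mentioned below for analytics - one list of tenants, consumed by both tooling paths.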

Discovery is automated, blast / loss radius is reduced to the smallest denominator, you can now monitor / pin / adjust the stack for each customer individually as necessary, sort the release ordering / schedule based on client criticality / sensitivity, you can now easily geolocate the deployment to the tenant's location, charge by resource usage, and much more.

And you can still query & roll-up all of your databases at once for analytics with Trino/DBT with nothing more but a yaml inventory.
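As a sketch of that last step: a tiny inventory file can be expanded into one Trino catalog per tenant database, which is all the fan-out analytics needs. The hostnames and inventory layout are made up; the catalog property names match Trino's PostgreSQL connector, and in practice you'd parse the yaml with PyYAML rather than the toy parser below.

```python
# Illustrative only: yaml tenant inventory -> per-tenant Trino catalogs.
INVENTORY_YAML = """\
tenants:
  - {name: acme, host: pg-acme.svc, db: app}
  - {name: bigco, host: pg-bigco.svc, db: app}
"""


def parse_inventory(text: str) -> list[dict]:
    # Ad-hoc parser for the flow-style entries above, so the sketch has
    # no third-party deps; real code would use yaml.safe_load().
    tenants = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("- {"):
            body = line[3:-1]  # drop "- {" prefix and "}" suffix
            tenants.append(dict(kv.split(": ") for kv in body.split(", ")))
    return tenants


def trino_catalog(t: dict) -> str:
    # One PostgreSQL connector catalog per tenant database; Trino can
    # then query and union across all of them in a single statement.
    return (
        "connector.name=postgresql\n"
        f"connection-url=jdbc:postgresql://{t['host']}:5432/{t['db']}\n"
    )


catalogs = {t["name"]: trino_catalog(t) for t in parse_inventory(INVENTORY_YAML)}
```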

No magic, no proprietary garbage.


Figma has millions of customers. The idea of having a Postgres pod for each one would be nearly impossible without completely overhauling their DB choice.


You are making a major conflation here. While they do serve millions of users, they were last reported to only have ~60k tenants.

Decently sized EKS nodes can easily hold nearly 800 pods each (as documented), which would make it 75 nodes. Each EKS cluster supports up to 13,500 nodes. Spread across a couple of regions to improve your customer experience, and you're looking at 20 EKS nodes per cluster. This is a nothingburger.

Besides, it's far from being rocket science to co-locate tenant schemas on medium-sized pg instances, monitor tenant growth, and re-balance schemas as necessary. Tenants' contracts don't evolve overnight, and certainly don't grow by orders of magnitude on a week-over-week basis - a company using Figma either has 10 seats, 100 seats, 1,000, or 10,000 seats. It's easy to plan ahead for. And I would MUCH rather think about re-balancing a heavy hitter customer's schema to another instance every now and then (can be 100% automated too), compared to facing a business-wide SPOF and having to hire L07+ DBAs to maintain a proprietary query parser / planner / router.
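The "100% automated" rebalancing part can be sketched in a few lines: when an instance runs hot, move its largest tenant schema to the least-loaded instance. All names, sizes, and the threshold below are invented; a production version would add hysteresis, schema-size forecasting, and the actual pg_dump/logical-replication move.

```python
# Rough greedy rebalance planner. `instances` maps instance name ->
# {schema name: size in GB}. Returns the (schema, src, dst) moves needed
# to bring every instance under the threshold, mutating `instances`.
def plan_rebalance(instances: dict[str, dict[str, int]],
                   hot_threshold: int = 100) -> list[tuple[str, str, str]]:
    moves: list[tuple[str, str, str]] = []
    while True:
        load = {name: sum(schemas.values()) for name, schemas in instances.items()}
        src = max(load, key=load.get)
        dst = min(load, key=load.get)
        if load[src] <= hot_threshold or len(instances[src]) <= 1 or src == dst:
            return moves
        schema = max(instances[src], key=instances[src].get)
        if schema in {m[0] for m in moves}:
            return moves  # don't ping-pong the same schema back and forth
        instances[dst][schema] = instances[src].pop(schema)
        moves.append((schema, src, dst))
```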

Hell, OVH does tenant-based deployments of Ceph clusters, with collocated/coscheduled SSD/HDD hardware and does hot-spot resolution. And running Ceph is significantly more demanding and admin+monitoring heavy.


Reported where? Does that include a monolithic "free tenant" that would be larger than thousands of their other tenants put together? Every Figma user has their own personal workspace along with the workspace of any organizations they may be invited to.


Thousands of directories over thousands of installations? It’s not that far-fetched.

