Show HN: Drop-in SQS replacement based on SQLite (github.com/poundifdef)
627 points by memset 1 day ago | 149 comments
Hi! I wanted to share an open source API-compatible replacement for SQS. It's written in Go, distributes as a single binary, and uses SQLite for underlying storage.

I wrote this because I wanted a queue with all the bells and whistles - searching, scheduling into the future, observability, and rate limiting - all the things that many modern task queue systems have.

But I didn't want to rewrite my app, which was already using SQS. And I was frustrated that many of the best solutions out there (BullMQ, Oban, Sidekiq) were language-specific.

So I made an SQS-compatible replacement. All you have to do is replace the endpoint using AWS' native library in your language of choice.

For example, the queue works with Celery - you just change the connection string. From there, you can see all of your messages and their status, which is hard to do today in the SQS console (and Flower doesn't support SQS).

It is written to be pluggable. The queue implementation uses SQLite, but I've been experimenting with RocksDB as a backend and you could even write one that uses Postgres. Similarly, you could implement multiple protocols (AMQP, PubSub, etc) on top of the underlying queue. I started with SQS because it is simple and I use it a lot.
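As an illustration of the pluggable idea (a Python sketch of the concept, not the project's actual Go interface), the protocol layer only needs a small backend contract that SQLite, RocksDB, or Postgres could each satisfy:

```python
# Python sketch of a pluggable queue backend: the protocol layer talks
# only to QueueBackend, so the SQLite implementation below could be
# swapped out without touching the SQS-compatible front end.
import sqlite3
from abc import ABC, abstractmethod
from typing import Optional

class QueueBackend(ABC):
    @abstractmethod
    def enqueue(self, queue: str, body: str) -> int: ...

    @abstractmethod
    def dequeue(self, queue: str) -> Optional[str]: ...

class SQLiteBackend(QueueBackend):
    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS messages "
            "(id INTEGER PRIMARY KEY, queue TEXT, body TEXT)"
        )

    def enqueue(self, queue, body):
        cur = self.db.execute(
            "INSERT INTO messages (queue, body) VALUES (?, ?)", (queue, body)
        )
        self.db.commit()
        return cur.lastrowid

    def dequeue(self, queue):
        # Oldest message first; delete-on-read keeps the sketch simple.
        row = self.db.execute(
            "SELECT id, body FROM messages WHERE queue = ? ORDER BY id LIMIT 1",
            (queue,),
        ).fetchone()
        if row is None:
            return None
        self.db.execute("DELETE FROM messages WHERE id = ?", (row[0],))
        self.db.commit()
        return row[1]
```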

It is written to be as easy to deploy as possible - a single go binary. I'm working on adding distributed and autoscale functionality as the next layer.

Today I have search, observability (via prometheus), unlimited message sizes, and the ability to schedule messages arbitrarily in the future.
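One plausible way to get arbitrary future scheduling out of SQLite (an assumption about the mechanism, not the project's actual schema) is a deliver_at column that dequeue filters on:

```python
# Schedule-into-the-future sketch: each message carries a deliver_at
# timestamp, and dequeue only considers messages that are due.
import sqlite3, time

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT, deliver_at REAL)"
)

def enqueue(body, delay_seconds=0.0):
    db.execute(
        "INSERT INTO messages (body, deliver_at) VALUES (?, ?)",
        (body, time.time() + delay_seconds),
    )
    db.commit()

def dequeue():
    # Only messages whose deliver_at has passed are visible.
    row = db.execute(
        "SELECT id, body FROM messages WHERE deliver_at <= ? "
        "ORDER BY deliver_at LIMIT 1",
        (time.time(),),
    ).fetchone()
    if row:
        db.execute("DELETE FROM messages WHERE id = ?", (row[0],))
        db.commit()
        return row[1]
    return None

enqueue("now")
enqueue("later", delay_seconds=3600)  # not visible for an hour
```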

In terms of monetization, the goal is to just have a hosted queue system. I believe this can be cheaper than SQS without sacrificing performance. Just as Backblaze and Minio have had success competing in the S3 space, I wanted to take a crack at queues.

I'd love your feedback!






The whole idea here is superlative.

+1 for k8s, kubernetes, cloud native, self-hosted, edge-enabled at low cost, no cost.

I ran rq and minio for years on k8s, but I've been watching SQLite as a drop-in replacement since most of my work has been early stage at or near the edge.

Private cloud matters. This is an enabler. We've done too much already in public cloud where many things don't belong.

BTLE sensors are perfectly happy talking to my Apple Watch directly with enough debugging.

I'd argue the trip through cloud was not a win and should be corrected in the next generation of tools like this, where mobile is already well-primed for SQLite.


Really interesting. Question: when it comes to running this software on k8s, do you prefer to manage and host it yourself, or do you use managed solutions on top of your own infra? (Do you pay for minio support?)

Asking from a business perspective - I of course intend to keep developing this, but am also really trying to think through the business case as well.


Good question!

The answer depends on funding, i.e. in my own never-leaves-my-house case it is always self-host, much like SOC work.

In the case of startup or research lab work (day job, for lack of a better descriptor), it's frequently a slice of AWS, GCP, or Azure, i.e. 6-figure/mo cloud bills.

I think those two broad cases are worth considering.


Managed systems make a lot of sense in small companies, where keeping head count down is the key to agility & communication. For personally funded projects it's never worth the cost, and for big teams (100+) it's typically worth the savings to self-host IF you can find good people. Low quality/rushed/novice self-hosting is almost never a good idea for any size of company.

k3s for on premises deployments. I'm also using it for local self hosting side projects.

Definitely a fan of k3s as k8s lite - it drops complexity that your particular project may not need, particularly since not every project needs k8s and many are better off with less.

If you look at the history from J2EE to the k8s prototype in Java to what we have now, it's a great idea to encapsulate all of these things into a single container, particularly at Google scale. But many unintended consequences arise from complexity accruing to features and functions which weren't actually requirements for your particular project, i.e. the notion of YAGNI, because few orgs have Google-scale problems. If yours does, great! Carry on... If not, consider k3s or aptible or more emergent platforms I haven't actually used.

The mere presence of unneeded items in source and documentation presents a "why am I here?" choice paradox. That's before we even get into keeping track of deprecations in the never-at-rest source/release evolution.

Federation is a good example. I've worked at places that needed it and places that didn't.


can you share more about the BTLE sensors talking with Apple Watch? I'm aware that BLE heartrate sensors are detected as 'Health Devices' if I want to connect to it, but what other sensors are you working with?

Yes, thanks for asking.

Continuous Glucose Monitors (CGM). That used to be a diabetic-only problem until Abbott, Dexcom, and other vendors expanded their markets beyond diagnosed diabetics into pre-diabetes markets and exercise, health, and well-being applications like:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10635370/

and:

https://www.levelshealth.com/

This has exploded beyond the hundred years of ketogenic research and as research on glycemic variability in mental health has grown.

My current kit includes an Apple Watch series 9 and an iPhone SE. Prior to that it was Google Pixel and a Fitbit Sense, though direct BTLE->Watch was not an option in that generation.

I have two complications: the Abbott FreeStyle Libre 3 and the Dexcom G7, both Continuous Glucose Monitors (CGMs). The Dexcom is entirely proprietary. The Abbott is accomplished via a series of 3rd-party hacks:

https://www.youtube.com/watch?v=YqUZjXo5VXY

I'd say open source, but I'm not certain that every link in the chain is open source. I've run many generations of various open source tools on Android and iPhone since no vendor ships a complete end-to-end solution that is perfect.

Nightscout and watchdrip are two open source examples:

https://nightscout.github.io/

https://watchdrip.org/

When the G7 originally shipped last year, sensor data left the sensor and used the iPhone as a proxy to cloud storage, which has become the default mode across many IoT devices because it is easy and obvious, despite the unintended consequences of that design choice.

At that point, the data had already taken a long and perilous journey into the cloud, when BTLE->Watch would have been dramatically shorter, cheaper (in terms of hops and requirement for service), and arguably better. Any data request pays that full routing price into and out of the cloud, even for the most trivial display, such as the watch.

After a belated Dexcom BTLE->Apple Watch update a year later, I don't even need to carry my iPhone anymore, despite the fact that I don't have service beyond WiFi on my Apple Watch, since the data exchange is entirely BTLE.

The sad fact is that straightforward questions around a person's glycemia were not answerable directly without an entire belt-worn cloud ecosystem being paid for and fully functional 24x7x365.

BTLE->Apple Watch is by no means perfect, but it's dramatically better than my previous 5 years of 24x7x365 routing through cloud.

HTH!


rq as in Redis Queue?



I linked Redis Queue in the previous reply but had not heard of rqlite, so thanks for this!

It looks like this may solve at least another part of my quest to replace various back-ends with sqlite.


Keeping questions of scale & benchmarks aside, this is a cool thing for functional/unit testing a module that uses SQS, instead of dumb mocks.

Thank you! Yes, you should definitely use this instead of localstack :)

Although I wouldn't recommend using LocalStack's SQS implementation for production workloads, calling it a dumb mock is a bit outdated. It emulates pretty much every SQS behavior, including long polling, delayed messages, visibility timeouts, DLQ redrive, batch send/receive, ...

Tangential, but I looked at the codebase - because I like to imagine I can code (I can’t) - and for a layman, I could follow the code.

Makes me think quite positively of Go and of the dev for designing it in such a way. I can understand why teams like it, given the easier maintenance.

Quite elegant from my uneducated pov.


That's the beauty of Go, generally, and why I immediately became attracted to it. The code is supremely readable because it enforces a style guide, and doesn't allow much magic.

A number of people don't like it for limiting their expression and abilities, and I understand that feeling too. But as a middle-aged programmer I realized that readability trumps conciseness and cleverness in the long run.


I can't remember who said it, but I remember hearing that a good Go dev and a new Go dev should be able to produce generally similar code, for maintainability's sake. I do like playing with Go; it sits in a nice little playground in my head, mostly thanks to the compile times I think.

Two experienced Go devs tackle the same task, and their code will look almost identical.

Two experienced Rust devs tackle the same task, and their solutions will be worlds apart.

(I write both, but I do love Go for its simplicity)


Thank you for the kind words! I'll be sure to work on making it as complicated as I can :)

Honestly shows the capabilities of both:

- The dev for making it both fast & simple to understand

- Golang for making the codebase easy to follow


> In terms of monetization, the goal is to just have a hosted queue system. I believe this can be cheaper than SQS without sacrificing performance. Just as Backblaze and Minio have had success competing in the S3 space, I wanted to take a crack at queues.

are you monetizing this as a separate business from: https://www.ycombinator.com/companies/scratch-data


Truthfully, I don't know yet - I haven't even built a paid/hosted version at all. It is related to my existing business in the sense that it deals with realtime data.

But I started working on this as something I wish existed, as opposed to having some big VC strategy and pitch deck behind it.

(Also, I appreciate all of your feedback on this a month ago! It was really helpful to encourage me to keep looking into this and also figuring out the "first" things to launch with!)


Not every open source project needs to be monetized. Projects that were created to "scratch one's itch" tend to fare better than those built to make money. Devs put more love and less stress into them than into things they want to build a business out of.

The monetization paragraph reads really weird, as if you believe HN is a community of VC-adjacent people looking for new ways to make money (it isn't), and talking about how you plan to exploit your new project is mandatory (it isn't either).


There are plenty of people here not interested in the VC space, not interested in startups (in the pg sense of the word), and not strictly looking for new ways to make money. But it's run by a VC firm. Its primary purpose is to be one part of a large funnel for getting new applicants to that VC firm. It is by definition VC-adjacent, and by extension many or perhaps even most people on here, certainly most of the people posting and commenting a lot, will be somewhat in that market as well. You can ignore that if you want at no risk to yourself, but just declaring "it isn't" as objective incontrovertible fact is kind of silly when at the very least it's up for debate. But honestly, whatever HN is or is not is a pretty irrelevant point to argue.

What isn't irrelevant, I think, is the accusation that monetizing something you've worked on is somehow exploiting it. If this brings value to people, and they can make money using it, or make more money using it, the OP deserves to charge them for it if they want to. There's nothing wrong or exploitative about that. You are free to donate all your time and not charge anyone for anything, but it's silly to frame someone charging for their work as an exploitation of either that thing or the people paying for it.


> HN is a community of VC-adjacent people looking for new ways to make money

it kinda is. There are all sorts of people here, but HN is owned by YC, a well known VC fund. That doesn't mean that everyone here is one, but it certainly influences the community here.

> and talking about how you plan to exploit your new project is mandatory

it's not mandatory but it's a frequently asked question. OP might as well answer it while they have the mic.


> Not every open source project needs to be monetized.

Where was this claim made? I missed it.

> Projects that were created to "scratch one's itch" tend to fare better than those built to make money. Devs put more love and less stress into them than into things they want to build a business out of.

This project is based on SQLite, which is several people's livelihood. As a result, it's rock-solid, reliable, and available to all without fee or restriction.

So it's an odd choice of project to choose to express this sort of eminently debatable sentiment.


What's the point of AGPL though? The enterprises with the budget for self hosting this sort of software usually have requirements and scale beyond what SQLite can offer out of the box.

He's going to make backends pluggable, so backends more scalable than SQLite could work.

In practice, I don’t plan on adding an entire proprietary layer - that is just way too much work, and it defeats the purpose. If the open source code has bad performance, why would someone even bother with a hosted version? Like, I want people to be so impressed by the open source code that they’d trust a cloud version to be even better. Clickhouse and duckdb do this very well.

The main difference I expect with a hosted solution are things like multiple tenants or billing integrations. These aren’t core to the product and only necessary when you need to host someone else’s data.


Ugh I didn't see that.

My enthusiasm has instantly waned.

Why is AGPL needed? Just be MIT and make it easy for people, especially if you're not planning on monetising it.

I won't use AGPL code just on principle.


Not gonna lie, man - this is just as obnoxious as when GPL zealots shit on someone's permissively licensed project because they think it should be GPL. The man has a right to use whatever license he wants.

Why? Does AGPL really matter if you're not planning on monetising whatever it is you're using it for?

Not trying to be snarky, I'm genuinely curious why you'd be so vehemently opposed to it.


AGPL is the best OSS license to ensure project continuity as Open Source. That simple.

Permissive licenses allow for proprietary forks, which may become more successful than the upstream project.

AGPL would be able to benefit from any improvements from any fork, and all those will remain OSS for everyone.

Nothing written here is related in any way to monetization.


I think the crux of the argument is that if you are not planning to monetize it, why stop other people from doing so?

AGPL doesn't stop anybody from monetizing anything; they just have to make their modifications public.

If you're anywhere close to a technical or developer space it's pretty clear how making your entire codebase public could negatively impact a monetization strategy. So yeah you're right that AGPL doesn't explicitly prevent monetization (other than "pay us for the source code" stuff of course) but in practice nobody with a serious monetization strategy is going to be releasing all their code AGPL either.

> but in practice nobody with a serious monetization strategy is going to be releasing all their code AGPL

Which fulfills the developer's goal of attracting higher-quality users who plan to collaborate with him. You got it right :)

You got something wrong though: if you're anywhere close to a free software/open source space, you should know that when using his AGPL SQS replacement, the only thing that would need to be public is whatever you change in it, not the things you build with it.


Also, is there really any money left in message queues?

Every man and his dog has made a message queue with Postgres. Message queues are everywhere on github and often posted on HN.


Provoking this sort of response is the best feature of the AGPL. I love it.

The goals are a little different, but I think it's worth pointing out ElasticMQ. I use it to simulate SQS in a docker environment. https://github.com/softwaremill/elasticmq

I have looked at elasticmq but not played with it myself. You might also be interested in their benchmarks of all of the existing queues out there: https://softwaremill.com/mqperf/

I also just used ElasticMQ in a docker environment. Did you see any specific downsides, when you looked at it? In which scenarios should I consider replacing it with your solution?

We use ElasticMQ to have an SQS compatible service for local development. We use it with docker-compose locally. In our remote envs we use SQS.

So far I have had no problems with ElasticMQ.

I'm quite intrigued by the small LOC count of SmoothMQ. When I compare it to ElasticMQ, it's much smaller (probably by leaning on SQLite's features).

https://github.com/softwaremill/elasticmq


Congratulations!

I also love writing AWS API-compatible services. That's why I did Dyna53 [1] ;P

(I know, unrelated, but hopefully funny)

[1] https://dyna53.io


It is fun, indeed. Could be a candidate for the best technology abuse of the year.

This is an absolutely unhinged project idea. Great work

This is really fun!

This is super cool! I love projects that aim to create simple self-hostable alternatives to popular services.

I assume this would work without much issue with Litestream, though I'm curious if you've already tried it. This would make a great ephemeral queue system without having to worry about coordinating backend storage.


I haven't tried this with litestream! It will be worth exploring that as a replication strategy.

The nice thing about queues is that backend storage doesn't really need to be coordinated. Like, you could have two servers, with two sets of messages, and the client can just pull from them round robin. They (mostly) don't need to coordinate at all for this to work.

However, this is different for replication where we have multiple nodes as backups.
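The round-robin point can be sketched in a few lines - two fully independent "nodes", a client that alternates between them, and no coordination anywhere:

```python
# Two fully independent queue "nodes" and a client that round-robins
# between them: plain enqueue/dequeue needs no cross-node coordination
# (global FIFO ordering across nodes is lost, though).
from collections import deque
from itertools import cycle

node_a = deque(["a1", "a2"])   # messages held by server A
node_b = deque(["b1"])         # messages held by server B
rotation = cycle([node_a, node_b])

def pull():
    # Try each node once, starting from the next one in the rotation.
    for _ in range(2):
        node = next(rotation)
        if node:
            return node.popleft()
    return None

received = [pull() for _ in range(4)]  # drains both nodes, then None
```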


Since this is a single go binary I bet it would work perfectly with litestream. I use litestream in the exact same way with Pocketbase and it works perfectly (Pocketbase is also a single go binary). I'm going to give it a try and report back.

One quick suggestion on project structure:

Move all the structs from models/ into the root directory.

This allows users of this package to have nice and short names like q.Message and q.Queue, and avoids import naming conflicts if the user has their own "models" package.


Thanks for the tip - and for even looking at the code! I always struggle to figure out how to organize things.

Just noticed that your root directory is already "package main", so you can either move that to /cmd/something/ or simply rename models/ to q/. That would have the same effect and is also idiomatic.

Actually, the "model/" (note: singular form) directory (package model) would be preferred in the golang world.

Correct me if I’m wrong, but SQLite sounds like it runs on one server. While that will work most of the time, it won’t work 100% of the time. I don’t know the specifics, but I’m fairly sure that if a queue server crashes, SQS will keep working, as stuff is redundant. So while this can work in the best case, it (probably) won’t have the same reliability as SQS has.

I break it down two ways.

First, the project does not yet have a distributed implementation, you’re correct. Stay tuned!

Second, SQLite is incidental. Data is still stored on disk, just as SQS must be, but I’ve chosen SQLite as the file format for now.


Strictly speaking, SQS does not necessarily store data on disk and being highly available and fault tolerant does not exclude one from operating a completely volatile data store. Indeed, SQLite itself has an in-memory option which, when appropriately architected, could be used as the primary data store for a highly available and fault tolerant queue.

Having said that, I think it is safe to assume that SQS probably stores information to non-volatile storage.
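For reference, SQLite's in-memory mode mentioned above is just a special connection string - same SQL engine, nothing on disk:

```python
# SQLite's in-memory mode: the ":memory:" path gives the same SQL engine
# with no file on disk; contents vanish when the connection closes.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (body TEXT)")
db.execute("INSERT INTO messages VALUES ('hello')")
count = db.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
```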

Nice work on the project!


Does this even need to compete feature wise with SQS?

If it faithfully reproduces the SQS Api what could possibly stop me from using this product now (if he ever does hosted) and then switching to SQS if the scale is ever justified?

I'm all for a full suite of solutions targeting the same API. Can I run this thing on a single server with my dashboard app for the 3-person team I'm developing for, where all it costs them is hourly tech support and server hardware, and deploy my multi-million-user app on SQS with effectively the same code?


The choice of SQLite is a great one, as the distributed SQLite space is also growing. You have things like rqlite (Raft + SQLite), and you have cloud services like Cloudflare D1, which is basically distributed/HA SQLite. Plus you can swap it for basically any other SQL database.

Good, we need open implementations of all the AWS stuff.

I swear they reimplement stuff we have just so there are more places to bill us.


Curious: if you were going to switch, how would you want to run this? Would you want to deploy it to your own EC2 instances, or would you want a hosted solution (just as SQS itself is?)

Self hosted probably. Why buy knock off SQS in the cloud when real thing is right there?

If you are greenfield and scared of hitching yourself to Amazon, why not go with something like RabbitMQ? There are also RabbitMQ cloud providers.


Unless you need the features, I would steer away from the "real message queues". Running these adds complexity, especially if you want higher availability and they are not always cheap.

An HTTP-based solution is easy to understand and implement (if basic functionality is enough for your use case). Real MQs obviously have more complex protocols, and not all client libraries are perfect.


I find the basics of most AMQP systems to be extremely reliable: message goes on, one-time delivery, with message hiding until expiration of the TTL. Sure, you can get into some insane setups with RabbitMQ/Kafka, and that's where library features don't always match up. But SQS probably wouldn't work for you either in those cases.

Ideally this would be run within a k8s cluster, to easily fit with the other services.

SQS is cheap.

10m requests for $4.

You will need hundreds of millions of requests per month for it to be noticeable.

And can an implementation like this even help you at that point?
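Back-of-envelope math from the figure above ($4 per 10M requests, i.e. $0.40 per million; actual SQS pricing varies by region, payload size, and the free tier, so this is illustrative only):

```python
# $4 per 10M requests works out to $0.40 per million. Real SQS pricing
# varies by region, payload size, and the monthly free tier, so treat
# these numbers as illustrative only.
def sqs_monthly_cost(requests_per_month: int, price_per_million: float = 0.40) -> float:
    return requests_per_month / 1_000_000 * price_per_million

small = sqs_monthly_cost(10_000_000)    # a modest workload: ~$4/mo
large = sqs_monthly_cost(500_000_000)   # where self-hosting starts to look tempting
```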


Not just cheap per message, but queues themselves are free so there is zero “reserved” cost - it’s pay per message.

Add to that, it’s enormously scalable in terms of throughput and retained messages.

And it’s globally available.

A single file app is no comparison, really. The value of SQS is in the engineering.


Question: is there any feature set (in contrast to pricing) that would make it worthwhile for someone to switch?

Operations. At my $bigCo we are always rolling our own ops tooling for queues, as it’s more than just “reprocess message”; there is always some business logic that separates different types of consumers/destinations, and so queue management is never straightforward. Might be a differentiating feature.

Can you tell me more about the types of queue things you need to implement? (Feel free to email me too - I’d be grateful for the feedback!)

Why not use LocalStack? It has SQS and a lot of AWS services for testing/development. Well documented, open source.

https://docs.localstack.cloud/overview/


I think the vibe might be that this is something someone could use for production.

With no distribution model and built on SQLite, it certainly doesn't scream "production candidate" to me.

There are more production systems on this planet running on a single node than running in a distributed setup. And SQLite is a rock-solid database, deployed in more places than Postgres. The only thing that could be weighed here for production readiness is the maturity and stability of this project itself.

You know there are millions of production deployments of SQLite right? It's perhaps the most widely deployed database.

At the very least, I'm guessing this performs better since it's written in Golang.

Because SIMPLE.

single executable binary

golang with sqlite so fast

minimal config so no ridiculous hours or days of config and operations

This blitzes other ideas for use cases that do not require distributed queues - and that is likely many use cases.


I may be asking a naive question, but what is the rationale behind disabling foreign key support and using them anyway in the database schema? See https://github.com/poundifdef/SmoothMQ/blob/46f8b22/queue/sq...

The "TODO: check for errors" comment, combined with what seems like disabling foreign key constraint checks, makes me a bit hesitant to try this out.


Good question. It was an evolution. Originally I enforced foreign keys, but inserts/updates were unbearably slow. So I updated the connection string to disable them (but, as you point out, I haven't updated the CREATE TABLE statements.)

In practice, I did not find they were necessary - I was only using foreign keys to automatically CASCADE deletes when messages were removed. But instead of relying on sqlite to do that, I do it myself and wrap the delete statements in transactions.

There are many TODOs and error checkings that I will, over time, clean up and remove. I'm glad you've pointed them out - that's the great thing about open source, you at least know what you're getting into and can help shine a light on things to improve!
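The approach described - foreign keys off, cascade done by hand inside a transaction - looks roughly like this sketch (placeholder schema, not the project's actual tables):

```python
# Foreign-key enforcement off (SQLite's default, and what the connection
# string opts into), with the cascade done by hand inside one transaction.
# The schema is a placeholder, not the project's actual tables.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = OFF")
db.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY)")
db.execute(
    "CREATE TABLE kv (message_id INTEGER REFERENCES messages(id), k TEXT, v TEXT)"
)
db.execute("INSERT INTO messages (id) VALUES (1)")
db.execute("INSERT INTO kv VALUES (1, 'attr', 'x')")
db.commit()

def delete_message(message_id: int) -> None:
    # Manual cascade: both deletes commit together or roll back together.
    with db:  # the connection context manager wraps this in a transaction
        db.execute("DELETE FROM kv WHERE message_id = ?", (message_id,))
        db.execute("DELETE FROM messages WHERE id = ?", (message_id,))

delete_message(1)
```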


Some of the slowdown will come from not indexing the FK columns themselves, as they need to be searched during updates / deletes to check the constraints.
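To illustrate: with an index on the child table's FK column, SQLite's query plan for the per-message delete becomes an index search instead of a full scan (placeholder schema again):

```python
# An index on the child table's foreign-key column turns the lookup done
# on every parent update/delete into an index search instead of a full
# table scan. Placeholder schema for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY)")
db.execute("CREATE TABLE kv (message_id INTEGER REFERENCES messages(id), v TEXT)")
db.execute("CREATE INDEX idx_kv_message_id ON kv (message_id)")

# EXPLAIN QUERY PLAN shows the delete now searches via the index.
plan = db.execute(
    "EXPLAIN QUERY PLAN DELETE FROM kv WHERE message_id = ?", (1,)
).fetchall()
```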

Could you elaborate why you didn't choose e.g. RabbitMQ? I mean you talk about AMQP and such, it seems to me, that a client-side abstraction would be much more efficient in providing an exit strategy for SQS than creating an SQS "compatible" broker. For example in the Java ecosystem there is https://smallrye.io/smallrye-reactive-messaging/latest/ that serves a similar purpose.

Where AWS is the likely migration path for an app if it needs to be scaled beyond dev containers, already having tested with an SQS workalike prevents rework.

The celery Backends and Brokers docs compare SQS and RabbitMQ AMQP: https://docs.celeryq.dev/en/stable/getting-started/backends-...

Celery's flower utility doesn't work with SQS or GCP's {Cloud Tasks, Cloud Pub/Sub, Firebase Cloud Messaging FWIU} but does work with AMQP, which is a reliable messaging protocol.

RabbitMQ is backed by mnesia, an Erlang/OTP library for distributed Durable data storage. Mnesia: https://en.wikipedia.org/wiki/Mnesia

SQLite is written in C and has lots of tests because of aerospace use, IIUC.

There are many extensions of SQLite; rqlite, cr-sqlite, postlite, electricsql, sqledge, and also WASM: sqlite-wasm, sqlite-wasm-http

celery/kombu > Transport brokers support / comparison table: https://github.com/celery/kombu?tab=readme-ov-file#transport...

Kombu has supported Apache Kafka since 2022, but celery doesn't yet support Kafka: https://github.com/celery/celery/issues/7674#issuecomment-12...


I don't get what you are pointing at.

RabbitMQ and other MOMs like Kafka are very versatile. What is the use case for not using SQS right now, but maybe later?

And if there is a use case (e.g. production-grade on-premise deployment), why not a client-side facade for a production-grade MOM (e.g. in Celery instead of sqs: amqp:)? Most MOMs should be more feature-rich than SQS. At-least-once delivery without pub/sub is usually the baseline and easy to configure.

I mean if this project reaches its goal to provide an SQS compatible replacement, that is nice, but I wonder if such a maturity comes with the complexity this project originally wants to avoid.


When you are developing an information system but don't want to pay for SQS in development or in production; it's akin to running tests with SQLite instead of MySQL/Postgres, though.

SQS and heavier ESBs are overkill for some applications, and underkill for others where an HA configuration for the MQ / task queue is necessary.


So I assume it does the back-end as well?

I never cared to figure out what parts of SQS are client-side and server-side, but - does SmoothMQ support long polling, batch delivery, visibility timeouts, error handling, and - triggers? Or are triggers left to whatever is implementing the queue? Both FIFO and simple queues? Do you have throughput numbers?

As an SQS user, a table of SQS features vs SmoothMQ would be handy. If it's just an API-compatible front-end then that would be good to know. But if it does more that would also be good to know.

The reason you'd use this is because there are lots of clients who still want on-prem solutions (go figure). Being able to switch targets this way would be handy.


Everything here is on the backend. The client does very little except make api calls.

It implements many of these features so far (i.e., visibility timeouts) and there are some that are still in progress (long polling). A compatibility table is a good idea.
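For anyone curious how a visibility timeout falls out of SQLite, here is a toy sketch of the mechanism (an illustration, not SmoothMQ's actual implementation): receiving hides the message until a deadline rather than deleting it, and deletion is the real acknowledgement:

```python
# Toy visibility timeout on SQLite. Receiving a message hides it until
# a deadline instead of deleting it; an unacknowledged message becomes
# visible again once the deadline passes.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT, visible_at REAL DEFAULT 0)"
)
db.execute("INSERT INTO messages (body) VALUES ('job-1')")
db.commit()

def receive(now: float, visibility_timeout: float = 30.0):
    row = db.execute(
        "SELECT id, body FROM messages WHERE visible_at <= ? LIMIT 1", (now,)
    ).fetchone()
    if row is None:
        return None
    # Hide rather than delete; a separate delete call is the real ack.
    db.execute(
        "UPDATE messages SET visible_at = ? WHERE id = ?",
        (now + visibility_timeout, row[0]),
    )
    db.commit()
    return row[1]

first = receive(now=100.0)   # consumer receives the message
during = receive(now=110.0)  # hidden inside the 30s window
after = receive(now=140.0)   # reappears, never having been acked
```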


Perhaps do a small example application.

  go  get github.com/poundifdef/SmoothMQ/models
  go: github.com/poundifdef/SmoothMQ@v0.0.0-20240630162953-46f8b2266d60 requires go >= 1.22.2; switching to go1.22.4
  go: github.com/poundifdef/SmoothMQ@v0.0.0-20240630162953-46f8b2266d60 (matching github.com/poundifdef/SmoothMQ/models@upgrade) requires github.com/poundifdef/SmoothMQ@v0.0.0-20240630162953-46f8b2266d60: parsing go.mod:
          module declares its path as: q
                  but was required as: github.com/poundifdef/SmoothMQ

Good idea! FYI, this is not meant to be used as a library. It runs as a standalone server, and then your application connects to it using an existing AWS SDK.

This is very interesting! The self-hosted aspect is something I'll have to consider for certain purposes.

My lab also developed an SQS-esque system based on the filesystem, so no dependencies whatsoever and no need for any operational system other than the OS. It doesn't support all SQS commands (because we haven't needed them), but it also supports commands that SQS doesn't have (like release all messages to visible status).

https://github.com/seung-lab/python-task-queue


Very cool. I needed something like this. Looking at the short video in the readme - I suspect adding live stats similar to Sidekiq would make the UI look more dynamic and allow quick diagnostics. Docs for current features are more important though.

Can you tell me more about why you needed this (and what you ended up using?)

Totally agree on adding more metrics and information to the UI - but how much of that should be on the dashboard vs exposed as a prometheus metric for someone to use their dashboard tool of choice?

I am not very good at visual design and have chosen the simplest possible tech to build it (static rendering + fomanticUI). I sometimes wonder if the lack of react or tailwind or truly beautiful elements will hold the project back.


This is really pedantic, totally outside any actual functionality, and may be a me thing, but a trigger for me is seeing left-justified (or worse, centered) columns of numbers (e.g. your number-of-messages column).

I'll grant it's small to infinitesimal, but you asked for feedback.


Hadn't even thought about that. Now I can't unsee it. Thanks for the suggestion.

I think it’s better to say it’s sqs compatible versus an sqs replacement.

I'd have named it MQLite. Congrats on finishing and delivering a project, this is quite a challenge in itself. I think SQLite can be a great alternative for many kinds of projects.

There is a lot of new ecosystem being built around SQLite in the browser using wasm, as a primary data store or as a local replica for every client, and there are some interesting interactions with crdt and peer to peer applications; does this fit into your business case? Would be interesting to see a massively distributed - via browser embedding - queue, that even uses standard sqs bindings.

This looks great! How were you planning on tackling distributed?

This is a great question and something I've been thinking about a lot. Big picture:

Each queue node can operate (mostly) independently, and this is good. As a consumer, I don't really care where my next message comes from, so I can minimize the amount of data that needs to have a "leader".

The only data that needs to be synced is the list of queues, which doesn't change often. If one server is full, it should be able to route a request to another server.

When we downscale, we can use S3/Dynamo (GCS/firestore) to store items and redistribute.

There's more nitty gritty here (what about FIFO queues? What about replication?) but the fact that the main actions, "enqueue" and "dequeue", don't require lots of coordination makes this easier to reason about compared to a full RDBMS.


You're hand-waving away all the complexity in the "nitty gritty."

Enqueue absolutely requires coordination, if not via leader then at least amongst multiple nodes, if you want to guarantee at least once delivery

If you don't guarantee that, cool, but you're not competing with sqs


This would increase the complexity, but you could always run something like rqlite [0], even if only for the items that require distribution and synchronization.

Or if you truly only need to store simple values in a distributed fashion, you could probably use etcd for that part.

[0]: https://rqlite.io/


Have you considered libsql? It is a distributed SQLite database, written mostly in Rust. There is a go database/sql and GORM driver available - https://github.com/tursodatabase/libsql

disclaimer: I am one of the maintainers


I did not know about libsql! Thanks for sharing. How do you think I should reason about making this distributed - should I use one of the (various) sqlite replication libraries? Or should this be something I roll on my own (on top of some other protocol, such as raft, or something else)?

> How do you think I should reason about making this distributed - should I use one of the (various) sqlite replication libraries? Or should this be something I roll on my own (on top of some other protocol, such as raft, or something else)?

It really depends on the semantics of the MQ you want to provide. There is rqlite if you want a distributed SQLite over Raft.

The question is what sort of guarantees you'd like to provide and how much of latency / performance you are willing to compromise


The only reason I use SQS is its compatibility with other AWS services: whenever there's an upload to an S3 bucket, I can send an SQS message and have it trigger a Lambda.

Minor thing - the gif browser demo should have been maximized. The video zooming into the action on what would have fit onto a single screen was distracting.

Are there maintenance actions that the admin needs to perform on the database? How are those done?


Thank you for the feedback - agree on the gif. Also it should probably show real messages as an example instead of rand().

re: maintenance - I have tried to build this to be hands-off. The only storage this uses is SQLite, and I have the code set to automatically vacuum as space increases.

It also has a /metrics endpoint which has disk size. This is going to be used for two things in the future: first, as a metric for autoscaling (scale when disk is full) and second so that a server can stop serving requests when its disk is full (to prevent catastrophic failure.)


A testcontainer could be useful. I always use the Adobe S3 mock for my tests.

I was not aware of testcontainer until you just mentioned it! I have a very basic Dockerfile for deploying to fly.io. What/how would I get started?

It looks like localstack is a supported testcontainer and they do support SQS (but I haven't tried it myself.)


Hmm, I never provided a testcontainer myself, but I have used Docker images in a testcontainer. That is pretty straightforward.

The specialized testcontainers just pass down or expose more config parameters, imho, because they know more about the Docker container they are about to start. For SQS, I could imagine it is convenient to expose the URI, ARN, or even auth parameters, which one can then refer to when setting up tests with dynamic config parameters and configuring the SQS client.

I have not used Localstack so far. I also often use containers for database tests.


How would one use this to replace Sidekiq for instance?

Rate limiting is something I miss on Sidekiq (only available on the premium plan that I can't afford) and the gems that extend it break compatibility often.


Glad to find out what you've been up to! Very cool and I am excited to see where you take this.

Actually pretty excited to try this, there are so many cases where I need a bare-bones (aka "simple"), local, persistent queue, but all the usual suspects (amqp, apache whatever, cloud nonsense, etc.) are way too heavy.

I'll probably try poking at it directly through the HTTP API rather than an SDK ... does it need AWS V4 auth signatures or anything?


The only HTTP API that is exposed is actually the SQS one. (I'm not opposed to a "regular" HTTP API but the goal was to make it easy for people to use existing libraries.)

If you do use your language's AWS SDK, the code handles [1] all of the V4 auth stuff. https://github.com/poundifdef/SmoothMQ/blob/main/protocols/s...

I'd love your feedback! Particularly the difficulties you find in running it, of which I'm sure there are many, so I can fix them or update docs.


I'd also give a shout out to beanstalkd. I use it to teach message systems and it's great.

Sounds like a great project. How do you handle performance with large message sizes?

Not sure about the goal of providing a hosted service cheaper than SQS. SQS is already one of the cheapest services on Earth. It's pretty hard to spend more than a few bucks a month, even if you really try!

That is fair. Two things to validate from a biz perspective:

1. Is there some threshold where this would make sense financially (n billions of messages.)

2. Are the extra developer features (ie, larger message sizes, observability, DAGs) worth it for people to switch?

Would love your thoughts - what, if anything, would make you even entertain moving to a different queue system?


Not the GP, but I also think you'll struggle to compete in the hosted service space.

Maybe you could add a web admin GUI as a paid add on?


What is your target market? Cloud-native is, like the commenter said, going to be difficult to differentiate based on cost. I can see this being useful for hybrid cloud (onprem instances), functional or local testing, “localfirst” workloads, enthusiasts etc.

Now you sound like an investor :)

My own vision is to take a queue that is relatively dumb and make it smarter. I want it to be able to, for example, allow you to rate limit workers without needing to implement this client side. And so on for all of the other bits that one needs to implement in the course of distributed processing.

I’m still figuring out the market. Very large firms spending thousands on queues? Or developers who want a one-stop solution built on familiar tech? Or hosting companies who want to offer their own queue as a service?


I've heard this use case come up in hybrid new/legacy products where there is a scalability chasm between the legacy side (probably a monolith in the compute or storage layer, or both) and the new side. The "new" side needs to be able to self-regulate, and if you can put that into the service side it would be a helpful transition enabler (and, let's face it, given these transitions often never finish, your product would be effectively sticky).

I don't think the cost of queues is a problem anywhere (I'm sure it is somewhere, but not a market's worth). The problems created by queues, on the other hand, are myriad and expensive.


This depends on your definition of "a few", but SQS costs us more than the servers handling the messages… It's cheap until you do billions of messages…

I wrote sasquatch, which is similar. sasquatch is simpler than SQS, and queues are virtual (i.e. a queue exists because you put a message in it, and is gone when there are no messages in it).

https://github.com/crowdwave/sasquatch

sasquatch is also a message queue, also written in Golang and also based on sqlite.

sasquatch implements behaviour very similar to SQS but does not attempt to be a dropin replacement.

sasquatch is not a complete project though, nor even really a prototype, just early code. Likely it does not compile.

HOWEVER - sasquatch is MIT license (versus this project which is AGPL) so you are free to do with it as you choose.

sasquatch is a single file of 700 lines so easy to get your head around: https://raw.githubusercontent.com/crowdwave/sasquatch/main/s...

Just remember as I say it's early code, won't even compile yet but functionally should be complete.


Thanks!

Loving all this self-hosted KISS stuff :)


I love that we're seeing a lot of projects apply the KISS principle and leverage (or take inspiration from) SQLite, like this, PocketBase and even DuckDB. An entire generation of developers (including myself) were tricked into thinking you had to build for scale from day one, or worse, took the path of least resistance right to the most expensive place for cloud services: the middle. I'm hopeful the next generation will have their introduction to building apps with simple, easy to manage & deploy stacks. The more I learn about SQLite, the more I love it.

Thank you for the kind words! I wanted to get something out even if it doesn't have the scalability stuff built. But I have been thinking and architecting that behind the scenes, and now I "only" need to turn it into code.

I think the queue itself will end up using a number of technologies: SQLite for some data (ie, organizing messages by date and queue), RocksDB for other things (fast lookup for messages), DuckDB (for message statistics and metadata), and other data structures.

I find that some of the most performant software often uses a mix of different data structures and algorithms by introspecting the nature of the data it has. And if I can make one node really sing, then I can have confidence that distributing that will yield results.

I think SQLite is a great start, but I really do want the software to be able to utilize the full resources of the underlying hardware.


"Build for scale": it really depends on what needs to scale and what "scale" means. Nearly every company these days, in nearly every interview, thinks they need Google-level scaling. Nearly all do not.

SQS: Amazon Simple Queue Service

Thank you, I was scratching my head trying to figure that out.

This is why some people think we need AI everywhere to "augment" and "enrich" our "experiences".

People don't bother with a Google search after, what, 20 years of it being around?


[flagged]



Have you run any benchmarks yet? I'm guessing it happily handles thousands of queue operations a second given that it's SQLite and Go - would be interesting to see how settings like SQLite WAL mode affect its performance.

I'm using WAL mode - I'd almost given up on sqlite until I remembered to enable it.
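For anyone curious, a typical WAL setup for a write-heavy SQLite queue looks something like this (the project's actual pragmas may differ):

```sql
PRAGMA journal_mode = WAL;    -- readers no longer block the writer
PRAGMA synchronous = NORMAL;  -- fewer fsyncs; generally safe under WAL
PRAGMA busy_timeout = 5000;   -- wait up to 5s on lock contention instead of erroring
```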

I've been playing with benchmarks, yes! I haven't written them up yet because I worry about doing them the "right" way. But since you asked, I did a quick test with a single server and single client on a t2.nano. With 3 sending threads and 5 receiving threads, and a message size of 2kb, I can send 700 msgs/s and receive 500 msgs/s.

It is way faster on my laptop, and I have a lot of tricks that I've been playing with to improve this.


I had to build a queue implementation not too long ago. We used t2.nano for our benchmarks as well. Your results are in-line with ours pre-optimizations (except we are using nats jetstream for queuing).

I’d also recommend reducing the number of threads as that can increase performance on single-processor machines (context switching will kill you). Try to find the sweet spot (for us, it was 2x-4x the number of cpus to threads, at least, for our workload).

If you are using go, it kinda sucks at single-cpu things, and if you detect that you have a single core, you can lock a go-routine to a thread and that sometimes helps.

Also, try to batch sends (aka, for-loop) to attempt to saturate the send-side so by the time you come back with your next batch they’re still messages sitting in a network buffer somewhere. For example, we have a channel that wakes up our sender, then we honest-to-god wait like 60ms in the hopes there will be more than one message to pick up — and there usually is in production.


These are great suggestions. Today every db write is indeed wrapped in a mutex. The two optimizations I am experimenting with are:

1. When new messages are inserted, immediately append to a file on disk. Then, in batch, insert to SQLite.

2. When dequeuing messages, keep n message IDs in memory as a ready queue, and then keep dequeued message IDs in another list. Those can be served immediately (using a SELECT which is fast) and then updating messages to the dequeued status can happen in batch.

Appreciate the tips!


For 1. We went with a per-routine bytes.Buffer that was batch "inserted" every x milliseconds or n messages. However, we don't care if we lose some messages on a crash. For integrity, some queues are set to 0ms, 1msg because we don't want to lose anything, but when it is ok that messages are lost, this is great for perf.

For 2. you could probably do something like this:

    BEGIN TRANSACTION;
    -- Select the oldest pending messages
    SELECT id, message FROM queue WHERE status = 'pending' ORDER BY created_at ASC LIMIT 100;
    -- Mark that batch as 'processing'; bind ? to the max created_at returned by the SELECT
    UPDATE queue SET status = 'processing' WHERE status = 'pending' AND created_at <= ?; -- assuming created_at is monotonic
    COMMIT;
Basically, select a batch and then abuse the ordering properties to batch mark them. Then all messages in your select you can dispatch evenly to sender threads. Sender threads can then signal a buffered channel that they've completed/failed, and the database can be updated. At startup, you can just SELECT where status = 'processing' and recover.

This is a pretty decent translation of how ours works.


You might need dedicated resources, including storage speed guarantees, to get reproducible benchmarks out of a cloud provider.

Love the red sweater btw very classy yet simple


That sweater is one of my favorite items, thank you :)

The goal is for this to be able to run well on commodity hardware, which I think is possible. If I can run this on $platform with $cheap instances and an $autoscaler then that would be my ideal design goal because I think it matches the setup that most people have access to.

However, I agree - I am a big fan of Hetzner's servers and am excited to try out benchmarks on beefier hardware too.


FYI, this has AGPL 3.0 licensing.

When I see this I assume VC backed cloud monetization is in the works.

We need to include license information in Show HN projects to save people time; our company finds non-MIT/BSD-licensed projects problematic, and they needlessly complicate things.


The purpose of Show HN is not really ‘product catalogue’

I didn't ask for a product catalogue; all I'm asking is for you to clarify whether you are truly FOSS or not by being transparent about the license.

Any licensing condition, like AGPL 3.0, that forces you to share your code is a no-go for many, many companies.

I understand the desire to make money and you can do that without AGPL 3.0


It's right there in the LICENSE file: https://github.com/poundifdef/SmoothMQ/blob/main/LICENSE

What's unclear or not transparent about that?


All I'm asking is to include the type of license in the Show HN title

I had no idea there would be so much push back against this.


I actually agree.

The AGPL, for something like this, is a total non-starter, IMO, and just changes the conversation.

It'd be one thing to choose the AGPL because you believe in the FOSS movement. Or if you're releasing something end-users can benefit from, directly, by self hosting it.

To release something that will always be a component of something bigger, license it as AGPL, then talk about monetization… just cut to the chase and release it as "source available," with a licence people can actually use, even if it has a bunch of not-really-FOSS strings attached.

Because who can realistically use this as is? Who can download the source and actually do something with it? What AGPL-compatible FOSS project is dying to use a drop-in AWS SQS replacement?


Just from an IP/legal point of view, AGPL and AGPL-adjacent licensing that imposes control over how you use the code is a non-starter for many companies, especially ones with "backed by YC" proudly displayed on their website, because that signals money will be exchanged in the future by design.

An MIT/BSD license that imposes no control over how the code is used is the only FOSS we are allowed to use, and we do donate regularly to maintainers working selflessly.

What makes me angry is when somebody then takes that MIT/BSD-licensed code, makes some modifications, and releases it as AGPL with clear intent to make a buck from it, without paying the original maintainer any money.


I actually have no beef with either “money will change hands in the future” or “AGPL because we want to advance FOSS.”

The problem is that AGPL is a total no starter at “day job,” and even at “side gig,” I'm obviously not going to release the source code of the entire service because of a message queue dependency.

I mean, if I was trying to switch clouds for whatever reason, I might want this, and could even be persuaded to pay to self host it.

But at this point there's no pay option. Just source I couldn't possibly use, not even for a month, not even as a trial, company policy or no company policy.

It makes no sense.


My issue with money being mixed up with FOSS via AGPL is that it inevitably leads to a fork where the paid version has features and performance superior to the open source "community" version.

I'd much rather people just release closed-source proprietary software that I can pay for, where the value proposition is clear, than be led to surprises down the road where the community version is neglected.


AGPL 3.0 is _absolutely_ “truly FOSS”. Probably the most “truly Free Open Source Software” license in existence.


