
A question I've had for a while that I can't seem to find an answer to: for teams using the various columnar-store extensions to turn Postgres into a viable OLAP solution, are they doing so in the same Postgres instance they use for OLTP? Or are they standing up a separate Postgres instance?

I'm trying to understand if there is any potential performance impact on the OLTP workload from running the OLAP workload in the same process.


And further, with this pg_mooncake extension allowing you to store the data in S3, is Postgres simply providing compute to run DuckDB? I suppose it's also providing a standardized interface and "data catalog."

Transactions are also managed by Postgres as if they were native tables, so you don't need to worry about coordinating commits between Postgres and the S3 data.
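
For anyone trying to picture how that fits together, here is a rough sketch, assuming pg_mooncake's "USING columnstore" table access method and the psycopg2 driver; the database, table, and column names are made up for illustration, and the exact DDL/config may differ from the extension's current docs:

    import psycopg2  # assumption: plain psycopg2, nothing pg_mooncake-specific on the client

    conn = psycopg2.connect("dbname=app")  # hypothetical database
    with conn, conn.cursor() as cur:
        # One-time setup; S3 storage is configured through the extension's
        # settings rather than per statement.
        cur.execute("CREATE EXTENSION IF NOT EXISTS pg_mooncake;")

        # An analytics table in columnar format, living in the same database
        # (and the same transactions) as your regular heap/OLTP tables.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS events_analytics (
                user_id bigint,
                event   text,
                at      timestamptz
            ) USING columnstore;
        """)

        # Because Postgres owns the commit, this insert succeeds or rolls back
        # atomically with any heap-table writes in the same transaction; there
        # is no separate commit coordination with the S3 data.
        cur.execute(
            "INSERT INTO events_analytics "
            "SELECT user_id, event, created_at FROM events_oltp;"
        )

The point of the sketch is the last comment: since Postgres owns the transaction, the columnstore write and any OLTP writes in the same transaction commit or roll back together.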

Congrats! It is never too late to be doing this type of study and work.

I'm doing something similar: I just turned 50 and have been taking graduate ML classes where I work (at Carnegie Mellon). When I finish the graduate certificate program in generative AI and LLMs that I am enrolled in, I will be only two semesters away from earning a full master's degree.


I spent 10 minutes or so playing around with this and it is impressive. I've also built a couple of small side projects around language learning out of frustration with Duolingo.

I'm around a C1 level in French and am just starting to learn Polish. The audio pronunciations are a great feature here, and (at least in French, where I can judge it) the accents are quite good. I can see this being quite useful for learning some of the basics in my Polish studies.


Having used it a bit more, the sentences generated at the C1 and C2 levels don't seem anywhere close to actual C1 or C2 difficulty. I'd be curious to understand the sentence generation process and how it does or doesn't attempt to target the various CEFR levels.


Agreed, I have heard that feedback a few times now. I tried to make sure the A and B levels were pinned down, as it was quite verbose in my testing. I'll have a look into the C1 and C2 levels. If you have examples I can back-test against, I'll use them.

Thanks for the feedback!


Thanks for the feedback. I started with the web text-to-speech APIs, but beta testers hated the robotic voices, so I spent a fair amount of time getting text-to-speech right.

Looking forward to seeing more advanced ML models in the browser, as that's really how this project started.


How is this article worthy of the front page of HN? It must simply be the click-bait nature of the title, as there is nothing of substance beyond that.


It is a question. I tried to give my opinion on a few statements, but I absolutely cannot summarize 160 pages (Business Insider did, using GPT, which I find insulting and funny), nor can I have a 100% opinion on something that involves national security, secrets, and other material I don't have access to.


I thought it was extremely thought-provoking. Are you perhaps overconfident in your weights?


Thank you.


The irony of these conversations is bizarre.

As the (tautological) saying goes: everyone is doing their best. My interest is whether this can be improved - perhaps at some point when AI gets closer to challenging us for cognitive supremacy we will awake from our slumber.


This article is blogspam. It's the paper that has people's attention.


It was an earthquake.

Full stop.


Came here to see this. Thank you.


Same here, glad I'm not the only one.


But does it taste like the real thing?


Yeah, it was indeed. You essentially had to write event handlers that reacted to window size changes and repositioned your UI elements to maintain/scale the layout. I remember some of the first 'library' code I ever wrote as a professional developer was a set of reusable "layout" handlers for VB3. Good times!


If I recall what I've read on this in the past, the horizontal scaling limits of distributed Erlang are encountered somewhere around 500 to 1,000 nodes? Which I have to imagine means the large, large majority of shops using Elixir and scaling horizontally will be just fine, as they typically run on the order of 2 to 10, or maybe 20, nodes.


There is a reason stuff like Partisan was conceived: https://arxiv.org/abs/1802.02652

If you do a lot of cross-node talk with, say, Phoenix channels, then lowering the total number of nodes helps (n*(n-1) for a full mesh). And since the BEAM scheduler will try to use all available cores by default, past a certain threshold of nodes it makes more sense to use a smaller number of nodes with more cores per node. When you also have, say, an observability agent taking up 1000 mcore, doubling the instance size makes a difference. At my last shop, we would idle on 4-core instances, jump to 16-core instance sizes for the morning rush, and drop back to 8-core for the late afternoon traffic. I had tried to scale it horizontally with 4-core machines, but saw a lot more instability and performance issues.
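
To put rough numbers on the mesh point, a back-of-the-envelope sketch (plain Python, nothing BEAM-specific):

    # Directed connections in a full mesh of n nodes is n*(n-1), so halving
    # the node count (by moving to bigger machines) cuts cross-node chatter
    # roughly 4x.
    for n in (4, 8, 16, 20):
        print(f"{n} nodes -> {n * (n - 1)} directed links")

So, for example, going from twenty 4-core machines to five 16-core machines keeps the same total core count but drops you from 380 to 20 directed links.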

This is one of the advantages of using Elixir or Erlang. You simply don't get these kinds of operational characteristics with Ruby, Python, or Node.js.


Agreed, naming things is hard. That and off by one errors are the three hardest things in software development, IMO.

