No problem, I'll explain, and after that I'll also add a guide.
AETHRA: How to use it
First, go to the GitHub link I gave you in the post and download or clone AETHRA. After downloading, open the folder named 'AETHRA', then go to bin, then Release, then the .NET 10 Windows folder. There you'll find an exe named AETHRA v0.8. Run it and start your music journey. To see all its commands, go to the GitHub project and read the README. A built-in AETHRA script is included; if it works, you're ready to go!
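If you'd rather script those steps, here is a rough Python sketch. The repository URL is a placeholder for the link in the post, and 'net10.0-windows' is just the usual .NET output folder name, which may differ in the actual repo:

```python
import subprocess
from pathlib import Path

# Clone the repository (URL is a placeholder -- use the link from the post)
subprocess.run(["git", "clone", "https://github.com/<user>/AETHRA.git"], check=True)

# Path to the release build; 'net10.0-windows' follows the usual .NET output
# layout and may differ in the actual repo -- check the folder names yourself
exe = Path("AETHRA") / "bin" / "Release" / "net10.0-windows" / "AETHRA v0.8.exe"

# Launch the app (Windows only)
subprocess.run([str(exe)], check=True)
```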
Open Hardware + Open Software is good enough for me to hit the buy button.
Seems like a good toy, I hope I don't lose interest within a month of buying.
Looks like great work. My main concern with any CMS is SEO/GEO.
Where would this stand in terms of SEO/GEO?
Any particular reason to choose Next.js? Could there have been a better choice to optimize for SEO/GEO?
The UK neobank completed a share sale at a $75 billion valuation, up from $45 billion last year (Bloomberg), led by Coatue, Fidelity, and, wait for it, Nvidia's NVentures (PYMNTS.com). Because apparently Nvidia is now just investing in everything that moves.
Revolut's 2024 revenue grew 72% to $4 billion, with profit before tax increasing 149% to $1.4 billion (Disruption Banking). Unlike most fintech darlings, they're actually profitable and growing fast across 65 million customers.
But here's the kicker: this is a secondary sale, meaning existing shareholders are cashing out. When insiders are selling at these valuations, it's worth asking who's buying at the peak. Especially when traditional banks are still struggling to innovate and Revolut is building a global bank from scratch.
The company is solid, but $75 billion for a neobank? That's more than many established global banks. The fintech premium is alive and well, even as the AI bubble inflates around it. What do you think?
As others have commented, all the mentioned issues are resolved, so I would favour using PGVector.
If Postgres can be a good choice over Kafka for delivering 100k events/sec [1], then why not PGVector over Chroma or other specialized vector stores (unless there is a specific requirement that can't be solved with minor code/config changes)?
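For anyone weighing this up, here is a minimal sketch of what the PGVector path looks like from Python, using psycopg and the pgvector package; the connection string, table, column names, and dimension are all hypothetical:

```python
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

# Connect to an ordinary Postgres instance (connection string is a placeholder)
conn = psycopg.connect("dbname=app user=app", autocommit=True)

# Enable the extension, then register the vector type with the driver
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
register_vector(conn)

# A hypothetical table: documents with a 384-dimensional embedding column
conn.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(384)
    )
""")

# Insert a row, then run a nearest-neighbour query with the cosine operator
emb = np.random.rand(384).astype(np.float32)
conn.execute("INSERT INTO docs (body, embedding) VALUES (%s, %s)", ("hello", emb))
rows = conn.execute(
    "SELECT id, body FROM docs ORDER BY embedding <=> %s LIMIT 5", (emb,)
).fetchall()
```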
So it's a longish article, and a point-by-point response is probably too much for a single post. But several of the points are solved by just standing up a dedicated Postgres instance for the vector use cases instead of doing this inside an existing instance.
Most of the rest of his complaints come down to: this is complex stuff. True, but it's not a solution, it's a tool used in making a solution. When using pg_vector directly, you need to understand databases to a more significant degree than you would with a custom solution, which won't work for you the moment your requirements change. You certainly need to understand databases better than the author does. He doesn't point to a single thing that pg_vector doesn't do or doesn't do well. He just complains it's hard to do.
In summary, pg_vector is a toolkit for building vector-based functionality, not a custom solution for a specific use case. What is best for you comes down to your team's skills and expertise with databases, and whether your specific requirements will change. Choose poorly and it could go very badly.
> He doesn't point to a single thing that pg_vector doesn't do or doesn't do well. He just complains it's hard to do.
He very clearly complains that IVFFlat indexes have to be periodically rebuilt, that HNSW has high overhead (both during inserts and rebuilds), and that the query planner is not particularly good at optimizing queries involving these kinds of indexes. None of this is a problem if the dataset is puny enough, but it's deadly if you want to scale up without investing significant engineering effort.
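For concreteness, this is roughly what standing up the two index types he complains about looks like, reusing the hypothetical docs table from the sketch upthread; the parameter values are illustrative, not recommendations:

```python
import psycopg

conn = psycopg.connect("dbname=app user=app", autocommit=True)

# HNSW: good recall/latency, but index builds and inserts are expensive --
# the overhead the comment above refers to
conn.execute("""
    CREATE INDEX ON docs USING hnsw (embedding vector_cosine_ops)
    WITH (m = 16, ef_construction = 64)
""")

# IVFFlat: cheap to build, but the cluster centroids are fixed at build time,
# so the index has to be periodically rebuilt as the data drifts
conn.execute("""
    CREATE INDEX ON docs USING ivfflat (embedding vector_cosine_ops)
    WITH (lists = 100)
""")

# Query-time knobs the planner won't tune for you
conn.execute("SET hnsw.ef_search = 40")
conn.execute("SET ivfflat.probes = 10")
```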
Agree. The resistance has become exhausting.
The solution cannot just be to ask businesses not to collect data.
The solution has to be orthogonal.
Maybe one solution is to develop new tools that act as a seamless filter/mask between us and the party collecting data, so that we have to think about it less while still satisfying the collector's requirements.
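As a toy illustration of the idea, not a real design: a local layer that swaps stable pseudonyms in for real identifiers before a form ever reaches the collector. Every name here is made up.

```python
import hashlib

# Fields a hypothetical filter would treat as sensitive
SENSITIVE = {"email", "phone", "name"}

def pseudonym(value: str, secret: str) -> str:
    """Derive a stable, non-reversible stand-in for a real value."""
    digest = hashlib.sha256((secret + value).encode()).hexdigest()[:12]
    return f"masked-{digest}"

def mask(form: dict, secret: str) -> dict:
    """Replace sensitive fields with pseudonyms; pass everything else through."""
    return {
        k: (pseudonym(v, secret) if k in SENSITIVE else v)
        for k, v in form.items()
    }

# The collector's form is still filled in, but learns nothing real about us
print(mask({"email": "me@real.net", "plan": "pro"}, secret="local-only-key"))
```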