For numerical PDE solvers, you never do the actual inversion, but the discretization of the system (denoted A) can have significant sensitivity to numerical precision. Look up Condition Number and Preconditioning. In fact, resolving the problems with numerical precision for some A takes as much time as actually solving the system, and it's worth it.
Oh I'm very aware, I'm lumping those all into "tricks used to avoid/work around the numerical issues associated with matrix inversion" for simplicity, because explicit computation of a matrix inverse is one of the classic examples of a numerically unstable task. Hence a large subfield of applied math can be summarized as "coming up with ways to avoid matrix inversion." PDE solvers like you mention are one of the main applications for that.
Tricks like clever factorization (though many factorization algorithms have severe numerical issues of their own, e.g. some of the ways to compute QR or SVD), preconditioners, sparse and/or iterative algorithms like GMRES, randomized algorithms (my favorite), etc. are all workarounds you wouldn't need if there were a fast and stable way to exactly invert any arbitrary non-singular matrix. Well, you would have less need for them; there are other benefits to those methods, but improving numerical stability by avoiding inverses is one of the main ones.
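To make the "avoid the explicit inverse" point concrete, here's a minimal NumPy sketch using a Hilbert matrix, a classic ill-conditioned example. The matrix and tolerances are illustrative, not from any specific solver:

```python
import numpy as np

# Hilbert matrices are notoriously ill-conditioned; n = 10 already loses ~13 digits.
n = 10
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

x_true = np.ones(n)
b = H @ x_true

print(f"condition number: {np.linalg.cond(H):.2e}")

# Forming the explicit inverse and multiplying amplifies rounding error...
x_inv = np.linalg.inv(H) @ b
# ...while a factorization-based solve (LU under the hood) is generally preferred.
x_solve = np.linalg.solve(H, b)

print("error via inv(H) @ b: ", np.linalg.norm(x_inv - x_true))
print("error via solve(H, b):", np.linalg.norm(x_solve - x_true))
```

Even the factorization-based solve loses most of its accuracy here, which is exactly why preconditioning and the other tricks above exist: the condition number, not the algorithm alone, bounds what you can recover.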
Assuming they're 40, how far do you think $250k will go 20-30-40 years from now? It's not a stretch to think dollars could be devalued by 90%, possibly even worthless, within 30 years.
I love how the comment I'm responding to literally says "then collect $250k per year risk free after taxes," and then you all pile onto me with downvotes telling me that he's not just going to invest in treasuries (which is exactly the implication of HIS comment and not mine).
Ok. Asian Americans are consistently overrepresented in US higher education, especially in math-heavy fields. What discriminatory practices by these institutions are causing this?
So how does this work? Asian Americans are vastly overrepresented relative to their population in higher education. This "implies" they've been receiving special treatment or favoritism?
Pretty much; might or might not; "suggest" may be a better word. It means there may be something to check into.
What OTHER advantages, disadvantages, preparation levels, etc. are working for and/or against that group?
Can those factors account for the differential?
What is the group's actual relative performance in the situation (i.e., do they tend to perform at|above|below the level to which they have risen, e.g., legacy college admissions with "Gentleman's Cs" are performing below)?
And so forth.
The point is that statistical disparities are not conclusive, but are most often indicative of some kind of structural favoritism, like smoke to fire. Not always, and you can always cherry-pick some counter-example, but the default assumption is that it merits investigation.
FWIW, I didn't flag you. Just wanted to say you're accusing me of a straw man, but when I give a particular example that is a concrete example within a general statement, that's not a straw man.
Between the two of us, I am looking at a well-researched and widely understood example of why a disparate outcome doesn't actually imply what we would intuitively think, while you are talking about metaphors of smoke and "taking closer looks" as if those justify angrily jumping to conclusions.
If you have some sort of correlation coefficient to justify jumping to conclusions even mildly, I'd appreciate that contribution. But "something isn't equal" isn't even necessarily a reason to look into something further, let alone a reason to presume the cause. Nothing is ever equal. Equality is a fictional concept, and doing deep dives into every instance of inequality would exhaust our collective resources.
IDK what the story is; my comment is non-displaying and it looks like yours is flagged.
In any case, none of this conversation is at the level of a dissertation; there's neither enough time nor space.
The start was a seven-word comment with a tone I read as entirely dismissive of even the concept that IBM could be practicing ageism.
It cannot be dismissed so easily (and I have family working there trying to evade exactly those ageist axes to get to retirement with full benefits).
The point of the smoke/fire metaphor is that a single good counter-example, where disparate results likely did not imply discrimination, does not disprove all the other examples. The fact remains that such statistics are a good starting point.
You have written many words, but beyond the initial hand-wave that statistics do not imply discrimination, plus one non-correlating example, I've missed where you provided any info or evidence to show IBM is not practicing ageism, in particular to effectively cut pensions.
Since IBM is clearly managing the workforce for profit, the multiplier for cutting older people is far larger: you save not only a few years of higher salaries, but also up to decades of zeroed or reduced pensions. That's a pretty strong financial incentive.
I think you're right that IBM is pursuing the most effective cost-savings strategy, and it happens that cutting older workers is the most effective way to reduce costs. But that's not ageism.
Ultimately it doesn't matter. Your "blue" is just a translation of that frequency to some distinguishable impression to allow you to see. But it's a good bet that the same wiring that went into your brain making that translation also went into other brains.
I believe the performance reported in their paper depends heavily on the benchmark circumstances. It's not that much faster when I tried it, and not worth the horrible macro syntax.
spdlog is designed as a general-purpose logging library, and it can't beat low-latency loggers. It doesn't scale across multiple threads because its async mode uses a single mutex and a condition variable to notify the background thread.
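A minimal sketch of the design being described (this is not spdlog's actual code, just the same pattern in Python): every producer thread acquires the SAME lock to enqueue, and a condition variable wakes the one background writer. Under many threads, that single lock is the scaling bottleneck; low-latency loggers use per-thread or lock-free queues instead.

```python
import threading
from collections import deque

class AsyncLogger:
    """Toy single-queue async logger: one shared mutex + condition variable."""

    def __init__(self):
        self._queue = deque()
        self._cv = threading.Condition()  # the one lock ALL producers contend on
        self._sink = []                   # stand-in for a log file
        self._running = True
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def log(self, msg):
        with self._cv:              # every logging call serializes here
            self._queue.append(msg)
            self._cv.notify()       # wake the background writer

    def _drain(self):
        while True:
            with self._cv:
                while not self._queue and self._running:
                    self._cv.wait()
                if not self._queue and not self._running:
                    return          # shut down only once the queue is empty
                msg = self._queue.popleft()
            self._sink.append(msg)  # "write" outside the lock

    def shutdown(self):
        with self._cv:
            self._running = False
            self._cv.notify()
        self._worker.join()

logger = AsyncLogger()
for i in range(100):
    logger.log(f"message {i}")
logger.shutdown()
print(len(logger._sink))  # 100
```

The latency win of async logging is that `log()` only pays for an enqueue, not for I/O, but with this design all threads still pay for the same lock acquisition.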
Chipotle's stock dropped 10% and Starbucks's rose 25% when the CEO switched from one to the other this past week. If this guy is just a meritless lottery winner, then you can make a lot of money betting against the market and the extremely large group of heavily motivated and intelligent investors who think otherwise.
Ah yes, the "if you're truly right clearly you would be rich using that information to trade" argument. Would you be willing to consider that the market is not always smart or a good measure of how we should value things?
Would you be willing to consider that some half baked theory about why CEOs are the manifestation of luck is actually just an unhealthy way of coping with failures?
So, bad CEOs are bad? What a revelation. We're not isolating this analysis to bad ones (who generally don't last very long). We're talking about a blanket, overarching, ill-informed and bitter "analysis" that contends CEOs are meritless lottery winners.
Have you ever seen any process of filtering job candidates that doesn't have massive failures? Look no further than sports. There is no profession with more accessible information about candidate qualifications, and we still see significant failures, signing people to hundreds of millions of dollars who never produce any value over a replacement player. Does that mean the process is no better than throwing darts at a phone book? No. There are still actual skills that are hard to attain, and are decent predictors of performance.
When it comes to CEOs, the fact that this process doesn't value YOUR skills very highly doesn't mean it's purely random. It means you've either chosen to ignore this filtering process, never learned what the process is, or aren't good enough to stand out within it. But that doesn't mean it's a zero-signal filter.
Basic question. Is there a service out there where I can easily link my database to an LLM to do this exact same type of analysis, except on one of my own Postgres DBs instead of one backed by PGlite? My org has several non-technical people who would greatly benefit from being able to interact with our DB via an LLM, rather than SQL 101 queries. The PostgreSQL Explorer extension on VS Code helps some, but doesn't quite make it as seamless as this.
Another possibility might be to export the database (or a subset of it) to be loaded in a more ephemeral environment like PGlite, so that you don't have to worry about non-technical users running inefficient/unindexed queries taking down the prod DB.
Mine has been a little bit more along the lines of helping them understand how everything is linked. They can't really even grasp the power of JOIN, which greatly limits what they can do with what I set up. Basically, I'm sort of stuck in a place where I built something that's too powerful for them to use, and I can see an LLM finally bridging that gap.
I'm in the middle of building something like this but it's not ready yet.
You'll just provide an openAI/anthropic api key and connection details to the db/schema. I intend for it to work a lot like postgres.new but with regular postgres instances.
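For readers unfamiliar with how these tools typically work, here's a hypothetical sketch of the basic text-to-SQL flow: put the schema in the prompt, have the model return SQL, and gate what gets executed. The function names, prompt wording, and the stubbed `llm` callable are all illustrative, not this product's actual API:

```python
def generate_sql(question: str, schema: str, llm) -> str:
    """Ask an LLM (any callable taking a prompt, returning text) for one query."""
    prompt = (
        "You are a PostgreSQL expert. Given this schema:\n"
        f"{schema}\n"
        f"Write one read-only SQL query answering: {question}\n"
        "Return only the SQL."
    )
    return llm(prompt).strip().rstrip(";") + ";"

def is_read_only(sql: str) -> bool:
    """Crude guardrail worth having before running model output on a real DB."""
    forbidden = ("insert", "update", "delete", "drop", "alter", "truncate", "grant")
    head = sql.lower().lstrip()
    return head.startswith(("select", "with")) and not any(w in head for w in forbidden)

# Usage with a stubbed model; a real service would call the provider's API here.
fake_llm = lambda prompt: "SELECT count(*) FROM orders"
sql = generate_sql("How many orders do we have?", "orders(id int, total numeric)", fake_llm)
print(sql)
print(is_read_only(sql))
```

In practice you'd also run the generated query under a read-only database role rather than trusting string checks alone, which is part of why the ephemeral-copy approach mentioned above is attractive.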
Well this is the second time Supabase has had a product announcement very similar to mine.
My implementation is more focused on data analysis and visualization via a natural language interface. However, the straightforward database operations that postgres.new tries to tackle are included.
I was mainly referring to the plethora of other services doing exactly that: analysis and visualization via natural language. Supabase is different since they are doing it locally in the browser but just regular text-to-SQL is one of the most common applications of LLMs. How will you differentiate yourself from the rest of the "chat with your database" services?
I don't know enough to endorse a specific one but there are probably hundreds of services doing text-to-SQL using LLMs. "Chat with your database" is one of the most popular products in the space.