
> The difference is that the A6000 absolutely smokes the Mac.

Memory Bandwidth: Mac Studio wins, just barely (both ~800 GB/s)

VRAM: Mac Studio wins (4x more)

TFLOPs: A6000 wins (38 vs ~32)


VRAM in excess of the model one is using isn’t useful per se. My use cases require high throughput, and on many tasks the A6000 executes inference at 2x speed.


As someone familiar with the USG data management tech landscape — it’s probably because it’s by far the best product with no remotely close second.


> As someone familiar with the USG data management tech landscape — it’s probably because it’s by far the best product with no remotely close second.

That is sweetly naive, unless you are talking about their marketing department.


Let me rephrase: I am extremely familiar with the USG data management tech landscape.


I'd love to know (in as much detail as you are allowed) what you feel the strengths and weaknesses of CHEETAS are.


I cannot comment specifically on CHEETAS, but what I can say is that the USG developing in-house software solutions almost always produces a disastrous product that goes over budget and has extreme maintenance overhead.

To see why, you can simply ask yourself: do you think that the unelected officials overseeing government agencies that embark on enterprise software development projects have sufficient expertise and enterprise software project management experience to be able to do this well?

Furthermore, do you think the engineers that the NHS or DoD can attract, at less than half the compensation of an actual software company, stand a chance of developing something good in house?

It’s unfortunately almost impossible for these projects to go right.


CHEETAS isn't really developed in house though; it's mainly developed by Dell. Certainly the leadership is USG-associated, but I think the leadership is actually really good. Unfortunately I seem to be unable to get _real_ access to CHEETAS and finding anyone who has worked with it is a challenge.

I suspect underneath it's mostly Hadoop but it's impossible to separate the roadmap from the implementation without getting my hands on it.


Interesting, thank you for sharing!

That experience speaks more to the perils of in-housing than to why Palantir is the best COTS option for the specific needs here. Are there specific leading COTS competitors you view it as being so far ahead of for such a contract?

Closer to our own practice: modern LLMs have basically reset the field for SOTA in this space, with Palantir, by definition, being behind OpenAI on the most basic tasks, and thus being in the same race as everyone else to retool.

Speaking from our own USG experience, we are deep tech leads in some other intelligence areas (graph, ...), and before OpenAI we often chose to adopt the previous generation of leading LLM models (BERT, ...) for tasks closer to the NLP side, since we recognized that wasn't where our in-house deep tech had an advantage. We basically had to start over on some of those projects as soon as GPT-4 came out, because it changed so much that the incumbent advantage of already delivering on a contract was a dead end for core functionality. Almost a year later, it's now obvious that was the right choice when we get compared to companies that haven't done the same. Palantir has been publicly resetting as well to use GenAI-era tech, which suggests the same situation.


It seems like you don’t know what Palantir is. Nothing OpenAI does is competitive with what Palantir does. Palantir, like every other software company out there, is exploring what “my product + AI” means.


That's a fair surface-level view, but worth thinking through a bit.

Palantir is several main products, plus a whole ton of custom software projects on top, and a good chunk of them rely on the quality of their NLP & vision systems to stay competitive with others. My question relates to the notion that they are inherently the best when, by all public AI benchmarks, they don't make the best components and, in the context of air-gapped / self-hosted government work, don't even have access to them. Separately, I'm curious how they compare to their COTS competitors (vs gov in-house) given the claims here. For example, their ability to essentially privatize and resell the government's data to itself and turn that into a network-effects near-monopoly is incredible, but it doesn't mean the technology is the best.

I've seen cool things with them, and on the flip side, highly frustrated users who have thrown them out (or are forbidden to). It's been a fascinating company to track over the years. I'm asking for any concrete details or comparisons because, so far, there are none in the claims above, which is more consistent with their successful gov marketing & lobbying efforts than with technical competitiveness.


I mean the topic of this thread is data management. That’s their bread and butter.

It just doesn’t make sense to be having this conversation through the lens of AI.


AI leadership seems existential to being a top data management company and providing top data management capabilities:

* Databricks' data management innovations, now that the basics are in, are half on the AI side: adding vector & LLM indexing for any data stored in it, moving their data catalog to be LLM-driven, adding genAI interfaces for accessing stored data, ...

* Data pipelines spanning ingestion, correction, wrangling, indexing, integration, and feature & model management, especially for tricky unstructured text, photo, and video, and for the wide event/log/transaction recordings important to a lot of the government, are all moving or have already moved to AI. Whether it is monitoring video, investigating satellite photos, mining social media & news, entity resolution & linking on documents & logs, linking datasets, or OCR + translation of foreign documents, these are all about the intelligence tier (a minimal sketch of what that looks like for entity resolution is below). Tools like ontology management and knowledge graphs are especially being reset because modern LLMs can drastically improve their quality and their scalability & usability through automation.

* Data protection has long been layering on AI methods for alerting (UEBA, ...), classification, policy synthesis, configuration management, ...

Databricks is a pretty good example of a company here. They don't preconfigure government datasets on the government's behalf and sell that back to them, but we do see architects using it as a way to build their own data platforms, especially for AI-era workloads. Likewise, they have grown an ecosystem of data management providers on top rather than single-sourcing; e.g., it's been cool to see Altana bring supply chain data as basically a Databricks implementation. For core parts, Databricks keeps adding more of the data management stack to their system, such as examining how a high-grade entity resolution pipeline would break down between their stack and ecosystem providers.
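
To make the entity resolution point concrete, here is a minimal, hypothetical sketch of what LLM-assisted record linking tends to look like. This is my own illustration, not Databricks or Palantir code; call_llm is a stand-in for whatever hosted or self-hosted model endpoint is available.

    # Hypothetical sketch of LLM-assisted entity resolution; not any vendor's actual API.

    def call_llm(prompt: str) -> str:
        """Placeholder: wire this up to your own chat/completion endpoint."""
        raise NotImplementedError

    def same_entity(record_a: dict, record_b: dict) -> bool:
        """Ask the model whether two records describe the same real-world entity."""
        prompt = (
            "Do these two records refer to the same real-world entity? Answer YES or NO.\n"
            f"Record A: {record_a}\n"
            f"Record B: {record_b}"
        )
        return call_llm(prompt).strip().upper().startswith("YES")

    def link_records(left: list[dict], right: list[dict]) -> list[tuple[dict, dict]]:
        # Naive all-pairs comparison; a real pipeline would block/filter candidates
        # first so the model only ever sees plausible pairs.
        return [(a, b) for a in left for b in right if same_entity(a, b)]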


What are the better products that are available?


Which product specifically? As I understand Palantir has several products and the NHS isn't buying one of them but paying for something bespoke.


https://www.palantir.com/uk/healthcare/

They are using Palantir Foundry, which is Palantir's big data platform, or as they call it: "The Ontology-Powered Operating System for the Modern Enterprise".


What they call it doesn't sound like the legendary marketing I was expecting.


What would be the second in your opinion, even if it's not close?


It varies by agency: either something built in house (very bad), or something built by one of the few companies that know how to win government contracts, which frankly always have worse tech than Palantir. If product efficacy is not absolutely critical, the acquisition process will be driven by nepotism or other forms of corruption.

As an example of the second case in the DoD space, there's Advana.


“As if LLMs are doing something creative and aren’t just algorithms”

You have no idea what you’re talking about huh?


It's AR


So you just gave three examples off the top of your head, but you need to make sure he picked the right one...?


Sorry, I thought it’d be obvious to everyone reading he clearly isn’t talking about:

- labor organizers getting fired

- Palestine

- Folks getting sexually harassed


I believe the author was pointing out that pg is selective in his application of "heresy."


So you agree that the issue of heresy is a problem, but instead of addressing the actual issue, you are more interested in knowing which team pg is on, in the ongoing pattern of treating politics like team sports.


I am first and foremost interested in consistency, a property that is consistently in short supply with pg.


I somewhat follow, but it seems shares have privileges that cannot be synthesized in the same way that dividends and value can. For example, if the firm votes for a new CEO, these shares should have voting power, but B cannot fulfill this obligation to A, so how can these shares be resold to multiple buyers?


When A loans out her shares, she accepts the loss of voting rights as part of the deal. If she cares more about voting her shares than about the income from lending, she will simply direct her broker not to loan out her shares.

In the “A loans to B who sells to C” scenario, C is the one who gets to vote.
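
A minimal numeric sketch (my own illustration, not how any broker's back office actually works) of why there is still only one vote per share in that scenario:

    # Hypothetical illustration: share lending plus a short sale, and who holds the vote.
    # Only names on the register hold shares and can vote; the lender holds an IOU.

    register = {"A": 100}      # A owns 100 shares outright
    ious = {}                  # promises to return borrowed shares

    # A lends her 100 shares to B: the shares move to B, A gets an IOU from B.
    register["A"] -= 100
    register["B"] = register.get("B", 0) + 100
    ious[("B", "A")] = 100

    # B sells the borrowed shares to C (a short sale from B's point of view).
    register["B"] -= 100
    register["C"] = register.get("C", 0) + 100

    voters = {name: n for name, n in register.items() if n > 0}
    print(voters)   # {'C': 100} -- only C can vote these shares
    print(ious)     # {('B', 'A'): 100} -- A is owed 100 shares by B, not a vote

So there is still exactly one vote per share outstanding; A has traded her vote for lending income until the shares come back.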


This is not entirely accurate. It is the same position as borrowing some amount of money to buy the stock. If I buy a put and a call and the price at expiry is exactly at that strike, I am guaranteed to lose money.


Agreed. One of the reasons is that the option carries risk, and therefore there's a cost associated with managing that risk and taking on the trade.


Not quite. It's the 'risk-free bond' term that I dropped from my explanation of the put-call parity.

To replicate the stock, you'd buy the call and sell the put. The risk exactly balances out in the sense that a total portfolio of 1 stock short, 1 call long and 1 put short would have zero risk and behave like a risk-free bond.

(If the risk premia of the long call and the short put did not exactly balance, you could make money with very simple arbitrage trades.)
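
To make the zero-risk claim concrete, a small sketch (my own; expiry payoffs only, premiums and interest ignored) showing that short 1 share, long 1 call, short 1 put, all struck at K, pays a constant amount whatever the terminal price, which is the risk-free bond term in put-call parity (C - P = S - K*exp(-r*T)):

    # Payoff at expiry of: short 1 share, long 1 call, short 1 put, same strike K.
    K = 100.0

    def portfolio_payoff(s_t: float) -> float:
        short_stock = -s_t
        long_call = max(s_t - K, 0.0)
        short_put = -max(K - s_t, 0.0)
        return short_stock + long_call + short_put

    for s_t in (0.0, 50.0, 100.0, 150.0, 500.0):
        print(s_t, portfolio_payoff(s_t))   # always -100.0: a fixed obligation of K

The payoff is riskless (a fixed payment of K at expiry), so any mismatch between the call and put premia would be free money, which is why they have to balance.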


Yes, my explanation dropped the bond term from the put-call parity.


Both AirPods models that have been released have already been wayyyyy better than every “established” earbud product in their price category. I don't see why you would assume that the AirPods Max would all of a sudden be worse.


heard of gold?


human invention? heard of rocks?


Rocks are fine for throwing at targets, which behavior certainly dates back much further, but you can't play polo with them. (I found TFA's blithe dismissal of the polo mallets found at the same cemetery unconvincing.)

