
That quoted IOPS number is only achievable with an 8-disk stripe, which I believe requires renting the full instance, even if you don't need 488GB of RAM or a $3600/mo bill.

The per-disk performance is still nothing to write home about, and 8 actually fast disks would blow this instance type out of the water.


> Privacy and Security by Default [...] you can also run custom models on your own hardware via Ollama.

That's nice for the chat panel, but the tab completion engine surprisingly still doesn't officially support a local, private option.[0]

Especially with Zed's Zeta model being open[1], it seems like there should be a way to use that open model locally, or what's the point?

[0]: https://github.com/zed-industries/zed/issues/15968

[1]: https://zed.dev/blog/edit-prediction


We definitely plan to add support for this! It's on the roadmap, we just haven't landed it yet. :)

That’s good to hear!

People have been perfectly capable of making that mistake themselves since long before "vibe coding" existed.

I've never actually tried them, but if you google "RPLIDAR", there seem to be some budget-friendly options out there.

Historically, the term you're looking for might have been "patronage". Wealthy individuals supported artists, scientists, or explorers not purely for financial return, but because they believed in the person, the cause, valued the association, enjoyed the influence, or whatever else.

This sounds right to me

ChatGPT is the number one free iPhone app on the US App Store, and I'm pretty sure it has been the number one app for a long time. I googled to see if I could find an App Store ranking chart over time... this one[0] shows that it has been in the top 2 on the US iPhone App Store every month for the past year, and it has been number one for 10 of the past 12 months. I also checked, and ChatGPT is still the number one app on the Google Play Store too.

Unless both the App Store and Google Play Store rankings are somehow determined primarily by HN users, it seems like AI isn't only a thing on HN.

[0]: https://app.sensortower.com/overview/6448311069?tab=category...


Close to 100% of HN users in AI threads have used ChatGPT. What do you think the percentage is in the general population? Is it more than that, or less than that?


Another thing you’re running into is the context window. Ollama sets a low context window by default, like 4096 tokens IIRC. The reasoning process can easily take more than that, at which point it is forgetting most of its reasoning and any prior messages, and it can get stuck in loops. The solution is to raise the context window to something reasonable, such as 32k.
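
If you want to override that per request instead of saving a new model, here's a minimal sketch against Ollama's REST API (assuming Ollama is running locally on its default port; the model name is just a placeholder):

  # Minimal sketch: raising the context window per-request via Ollama's REST API.
  # Assumes Ollama is running locally on its default port (11434) and the model
  # below has already been pulled -- the name is a placeholder, swap in your own.
  import requests

  resp = requests.post(
      "http://localhost:11434/api/chat",
      json={
          "model": "deepseek-r1:14b",  # placeholder model name
          "messages": [{"role": "user", "content": "Hello!"}],
          "options": {"num_ctx": 32768},  # raise the context window from the low default
          "stream": False,
      },
  )
  print(resp.json()["message"]["content"])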

Instead of this very high latency remote debugging process with strangers on the internet, you could just try out properly configured models on the hosted Qwen Chat. Obviously the privacy implications are different, but running models locally is still a fiddly thing even if it is easier than it used to be, and configuration errors are often mistaken for bad model performance. If the models meet your expectations in a properly configured cloud environment, then you can put in the effort to figure out local model hosting.


I can't believe Ollama hasn't fixed the context window limits yet.

I wrote a step-by-step guide a while ago on how to set up Ollama with a larger context length: https://prompt.16x.engineer/guide/ollama

TLDR

  ollama run deepseek-r1:14b
  /set parameter num_ctx 8192
  /save deepseek-r1:14b-8k
  ollama serve


Nope... this stuff is 96.5% copper, and copper is ~3x as expensive as stainless steel. Even if tantalum and lithium were free, it would be substantially more expensive. Tantalum is not free, though: it's a very expensive material, roughly 100x the cost per kg of stainless steel, so even at a ~3% share it nearly doubles the cost of the raw material inputs by itself. The process of making this alloy is also likely to be expensive.
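
As a rough back-of-the-envelope check, with prices normalized to stainless steel per kg (the ~3x and ~100x ratios are the approximate figures above, not exact quotes, and lithium's small remaining share is ignored):

  # Rough per-kg raw material cost, in multiples of the stainless steel price.
  # The ~3x copper and ~100x tantalum ratios are the approximate figures above;
  # lithium's small remaining share is ignored for simplicity.
  copper_share, tantalum_share = 0.965, 0.03
  copper_ratio, tantalum_ratio = 3, 100

  copper_cost = copper_share * copper_ratio        # ~2.9x stainless
  tantalum_cost = tantalum_share * tantalum_ratio  # 3.0x stainless
  print(copper_cost + tantalum_cost)               # ~5.9x stainless overall

The tantalum term alone is about as large as the copper term, which is where the "nearly doubles" comes from.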

I'm also not sure how much being in an alloy would impact the antimicrobial effects of copper.


You're right about the cost angle, though it might be cheaper than Stellite, Inconel, Monel, that kind of thing.

Generally, copper does retain its antibacterial properties in alloys where it makes up a high proportion, like this one.


Well, this could dramatically increase the demand for tantalum, which (econ 101) could dramatically increase the supply over time? Is tantalum in much demand today?


Huge demand for copper hasn’t brought its price down to the price of stainless steel, has it? Most definitely not, so it seems like Econ 101 was incomplete. Not all goods have perfectly elastic supply, and goods with inelastic supply do not get cheaper just because demand increases.

Tantalum is in demand today, yes. Tantalum capacitors are a well known application, but it is used in all sorts of things.

My point was that even if tantalum were free, a material that is 96.5% copper is still not going to be significantly cheaper than copper, which I think is a pretty self-evident outcome.


Copper has been in high demand for centuries. Lithium might be a more similar situation to tantalum: huge spikes in demand over the last decade have absolutely floored prices.


"When you have thinking turned on, all output tokens (including thoughts) are charged at the $3.50 / 1M rate"[0]

[0]: https://x.com/OfficialLoganK/status/1912981986085323231


I downloaded a sample RAF file from the internet. It was 16 megapixels, and 34MB in size. I used Adobe DNG Converter, and it created a file that was 25MB in size. It was actually smaller.

Claiming that DNG takes up 4x the space doesn't align with any of my own experiences, and it didn't happen with the RAF file I just tested.

