Hacker News | sieve's comments

This is a typical technical solution to a sociopolitical problem. The powers-that-be are not comfortable with the free-for-all that exists on the internet. All these laws are meant to fix that squeaky wheel, one ball-bearing at a time.

"Children" gets the Right to march behind you unquestioningly. "Misinformation/Nazis" does the same for the Left. This is now a perfect recipe for a shit sandwich.


I agree. But if you find a different way to protect the children, that normal people can understand and relate to ("It's like buying beer"), and still maintain privacy, you take away at least one leg of support for what a lot of states really want to do (remove anonymity).

It's better than the fatalism in your comment IMO.


What is the useful life of something like this compared to an RCC structure? Do you have to keep painting them to protect them from rust?

You do see steel used in mobile towers, etc., because you may not be able to place an RCC structure of that height on top of a building not designed for those loads. And in single-storey workshops/sheds.


The useful life is less than RCC's. An RCC structure is good for about 50 years, steel for 25. The 4x4 is even less, about 15 years.

No, you paint them initially when you build the structure. It's quite hard to paint afterwards.

Also, if you look closely, the steel is welded rather than bolted. Newer buildings are bolted nowadays, which increases their useful life.

Example: https://imgur.com/a/f4z84dx

This is the building currently under construction that I talked about. Notice (1) the two layers of paint, and (2) the bolts used instead of welds, compared to the steel-structure photos in the essay.


The issues with subscriptions to streaming services are manifold (if you ignore the gargantuan waste of time that mindless TV-watching is):

- the UI is deliberately crap

- the library is deliberately incomplete

- accessing content is deliberately complicated

I had an experience recently where my phone provider bundles 20+ OTT services in a single plan within a single app that runs on your TV/phone/browser. The kicker: you can add stuff to a watch list, but the watch list is never exposed anywhere. While they want you to pay for stuff, they do not want you to be choosy about it.

YT has, to my mind, the best user interface of all the services I have tried.


Jellyfin is quite good.


Nice! His Shakespeare generator was one of the first projects I tried after ollama. The goal was to understand what LLMs were about.

I have been on an LLM binge this last week or so trying to build a from-scratch training and inference system with two back ends:

- CPU (backed by JAX)

- GPU (backed by wgpu-py). This is critical for me as I am unwilling to deal with the nonsense that is rocm/pytorch. Vulkan works for me. That is what I use with llama-cpp.

I got both back ends working last week, but the GPU back end was buggy. So the week has been about fixing bugs, refactoring the WGSL code, making things more efficient.

I am using LLMs extensively in this process and they have been a revelation. Use a nice refactoring prompt and they are able to fix things one by one resulting in something fully functional and type-checked by astral ty.


Unwilling to deal with pytorch? You couldn't possibly hobble yourself any more if you tried.


If you want to train/sample large models, then use what the rest of the industry uses.

My use case is different. I want something that I can run quickly on one GPU without worrying about whether it is supported or not.

I am interested in convenience, not in squeezing out the last bit of performance from a card.


You wildly misunderstand pytorch.


What is there to misunderstand? It doesn't even install properly most of the time on my machine. You have to use a specific python version.

I gave up on all tools that depend on it for inference. llama-cpp compiles cleanly on my system for Vulkan. I want the same simplicity to test model training.


pytorch is as easy as you are going to find for your exact use case. If you can't handle the requirement of a specific version of python, you are going to struggle in software land. ChatGPT can show you the way.


I have been doing this for 25 years and no longer have the patience to deal with stuff like this. I am never going to install Arch from scratch by building the configuration by hand ever again. The same with pytorch and rocm.

Getting them to work and recognize my GPU without passing arcane flags was a problem. I could at least avoid the pain with llama-cpp because of its Vulkan support. pytorch apparently doesn't have a Vulkan backend, so I decided to roll my own wgpu-py one.


FWIW, I've been experimenting with LLMs for the last couple of years, and have exclusively built everything I do around llama.cpp exactly because of the issues you highlight. "gem install hairball" has gone way too far, and I appreciate shallow dependency stacks.


Fair enough I guess. I think you'll find the relatively minor headache worth it. Pytorch brings a lot to the table.


I suspect the OP's issues might be mostly related to the ROCm build of PyTorch. AMD still can't get this right.


Probably - but the answer is to avoid ROCm, not pytorch.


Avoiding ROCm means buying a new Nvidia GPU. Some people would like to keep using the hardware they already have.


The cost of dealing with ROCm exceeds the cost of a consumer Nvidia GPU by orders of magnitude.


If you’re not writing/modifying the model itself but only training, fine tuning, and inferencing, ONNX now supports these with basically any backend execution provider without needing to get into dependency version hell.


What are your thoughts on using JAX? I've used TensorFlow and Pytorch and I feel like I'm missing out by not having experience with JAX. But at the same time, I'm not sure what the advantages are.


I only used it to build the CPU back end. It was a fair bit faster than the previous numpy back end. One good thing about JAX (unlike numpy) is that it also gives you access to a GPU back end if you have the appropriate stuff installed.
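Part of the appeal is that jax.numpy mirrors numpy's API closely, so CPU code ports to an accelerator with little change. A minimal sketch of the idea (the function and shapes are my own illustration, not the comment author's code), assuming jax is installed:

```python
import jax
import jax.numpy as jnp


@jax.jit  # traced and compiled with XLA; runs on GPU/TPU automatically if one is present
def rmsnorm(x, eps=1e-6):
    # The exact expression you would write with numpy's np.* functions.
    return x / jnp.sqrt(jnp.mean(x * x, axis=-1, keepdims=True) + eps)


x = jnp.ones((2, 4))
y = rmsnorm(x)  # on a CPU-only machine this still works, just without the GPU speedup
```

The same source runs unchanged on whichever backend JAX finds, which is what makes it a convenient drop-in upgrade over a plain numpy back end.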


> The CEO is also more puritan than the pope himself considering the amount of censorship it has.

In that case, you should try OpenAI's gpt-oss!

Both models are pretty fast for their size and I wanted to use them to summarize stories and try out translation. But it keeps checking everything against "policy" all the time! I created a jailbreak that works around this, but it still wastes a few hundred tokens talking about policy before it produces useful output.


Surely someone has abliterated it by now


I started using this a couple of days ago. It is a fully functional replacement for what I have been doing with WhatsApp. About 30-40% of my network is on it now, and I have also created our Sanskrit channel on it.

What it is missing:

- E2E encryption for text messages

- Communities as a container for groups

- Chat exports

- UPI payment integration

Also, the servers are under pressure so messages can get delayed sometimes.

But Vembu has promised continuous development. So let's see.

I am using it regularly and do hundreds of messages every day across groups and contacts.


I have been planning to put out a quarterly Sanskrit newsletter for some time now, and was dreading having to deal with LaTeX. For basic stuff, LibreOffice PDF export works. But that is not a plain text workflow.

I then discovered typst and it is a breath of fresh air. Unicode/Dēvanāgarī support out-of-the-box, no installing gigabytes of packages, near-instant compilation.

My compliments to those who got this done.


Where can we sign up for the newsletter?


I will post it on our website as well as reddit when it is ready. I am taking my time to ensure that it does not become a one-off thing and can continue for many quarters.

- https://www.adhyeta.org.in/

- https://old.reddit.com/r/adhyeta/


They are very good at some tasks and terrible at others.

I use LLMs for language-related work (translations, grammatical explanations, etc.) and they are top-notch at that, as long as you do not ask for references to particular grammar rules. In that case, they will invent non-existent references.

They are also good for tutor personas: give me jj/git/emacs commands for this situation.

But they are bad in other cases.

I started scanning books recently and wanted to crop the random stuff outside an orange sheet of paper on which the book was placed before I handed the images over to ScanTailor Advanced (STA can do this, but I wanted to keep the original images around instead of the low-quality STA version). I spent 3-5 hours with Gemini 2.5 Pro (AI Studio) trying to get it to give me a series of steps (and finally a shell script) to get this working.

And it could not do it. It mixed up GraphicsMagick and ImageMagick commands. It failed even with libvips. Finally I asked it to provide a simple shell script where I would provide four pixel distances to crop from the four edges as arguments. This one worked.

I am very surprised that people are able to write code that requires actual reasoning ability using modern LLMs.


Just use Pillow and python.

It is the only way to do real image work these days, and as a bonus LLMs suck a lot less at giving you nearly useful python code.

The above is a bit of a lie as opencv has more capabilities, but unless you are deep in the weeds of preparing images for neural networks pillow is plenty good enough.
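For the four-edge crop described upthread, a Pillow version is only a few lines. A minimal sketch (the function name, filenames, and margin values are my own illustration):

```python
from PIL import Image


def crop_margins(im: Image.Image, left: int, top: int,
                 right: int, bottom: int) -> Image.Image:
    # Shave a fixed number of pixels off each edge.
    # Pillow's crop box is (left, upper, right, lower) in absolute coordinates.
    w, h = im.size
    return im.crop((left, top, w - right, h - bottom))


# Hypothetical usage on a scanned page:
# page = Image.open("scan_0001.jpg")
# crop_margins(page, 120, 80, 120, 80).save("cropped_0001.jpg")
```

Unlike shelling out to ImageMagick or GraphicsMagick, there is only one API to keep straight, which also makes it harder for an LLM to mix up command dialects.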


pyvips (the libvips Python binding) is quite a bit better than pillow-simd: 3x faster, 10x less memory use, same quality. On this benchmark, at least:

https://github.com/libvips/libvips/wiki/Speed-and-memory-use


I'm the libvips author, I should have said, so I'm not very neutral. But at least on that test it's usefully quicker and less memory hungry.


I think Gemini is one of the best example of an LLM that is in some cases the best and in some cases truly the worst.

I once asked it to read a postcard written by my late grandfather in Polish, as I was struggling to decipher it. It incorrectly identified the text as Romanian and kept insisting on that, even after I corrected it: "I understand you are insistent that the language is Polish. However, I have carefully analyzed the text again, and the linguistic evidence confirms it is Romanian. Because the vocabulary and alphabet are not Polish, I cannot read it as such." Eventually, after I continued to insist that it was indeed Polish, it got offended and told me it would not try again, accusing me of attempting to mislead it.


As soon as an LLM makes a significant mistake in a chat (in this case, when it identified the text as Romanian), throw away the chat (or delete/edit the LLM's response if your chat system allows this). The context is poisoned at this point.


>Eventually, after I continued to insist that it was indeed Polish, it got offended and told me it would not try again, accusing me of attempting to mislead it.

I once had Claude tell me to never talk to it again after it got upset when I kept giving it peer-reviewed papers explaining why it was wrong. I must have hit the Tumblr dataset, since I was told I was sealioning it, which took me aback.


Not really what sealioning is, either. If it had been right about the correctness issue, you’d have been gaslighting it.


I find that surprising, actually. Gemini is VERY good with Sanskrit and a few other Indian languages. I would expect it to have completely mastered European languages.


That's hilariously ironic, given that all LLMs are based on the transformer architecture, which was designed to improve Google Translate.


Would you share your system prompt for that grammatical checker?


There is no single prompt.

The languages I am learning have verb conjugations and noun declensions. So I write a prompt asking the LLM to break the given paragraphs down sentence-by-sentence by giving me the general sentence level English translation plus word-by-word grammar and (contextual) meaning.

For the grammar, I ask for the verbal root/noun stem, the case/person/number, any information on indeclinables, the affix categories etc.
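A sketch of what such a prompt might look like (the wording below is my own paraphrase of the structure described, not the actual prompt):

```text
For each sentence in the passage below:
1. Give the overall English translation of the sentence.
2. Then, for each word, give:
   - the verbal root or noun stem
   - case / person / number (or mark it as an indeclinable)
   - the affix category, where relevant
   - its contextual meaning in this sentence

<passage goes here>
```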


> a subscribe and save order

Yeah. This is a joke. They give us a 5-10% discount to do this. But when the time for the next delivery came, they had doubled the prices instead of locking in the price I had subscribed at. I had to cancel the order.

If I had been informed during subscription that fulfillment would be done at the price prevailing at that time, I would never have subscribed in the first place.


Instead of Subscribe and Save, which is almost useless for the reason you give, I wish I could place a standing limit order to buy up to X of something every Y months, but only if its price is Z or better.


I have had serious issues with Amazon these last two or three months, which has resulted in me moving a majority of my purchases to a different online retailer.

I bought some ASSIMIL language-learning books being sold by a (known) POD firm. But I got some random (or so I thought) POD crap instead of the books I had ordered. I returned them and tried again a month later (after confirming with customer care that I would get what I saw in the listing), only to see the exact same books sent again.

When I compared the ISBNs, I found that the books I had ordered were from the older 978 series, which can be reduced to the 10-digit version, while the ones they were sending were from the newer 979 series, with only the check digit differing. I had to call them 15-20 times before I got my money back, because they would repeatedly set up a return pickup, not do it, and then claim that I had cancelled it. The books are still lying with me. They haven't bothered to collect them.
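For context on the 978-vs-979 distinction: a 978-prefixed ISBN-13 reduces to an ISBN-10 by dropping the prefix and recomputing the check digit over the remaining nine digits (979-prefixed ISBNs have no ISBN-10 form). A quick sketch of the reduction:

```python
def isbn13_to_isbn10(isbn13: str) -> str:
    """Convert a 978-prefixed ISBN-13 to its ISBN-10 equivalent."""
    digits = isbn13.replace("-", "")
    if not (len(digits) == 13 and digits.startswith("978")):
        raise ValueError("only 978-prefixed ISBN-13s have an ISBN-10 form")
    body = digits[3:12]  # drop the 978 prefix and the ISBN-13 check digit
    # ISBN-10 check digit: weighted sum with weights 10 down to 2, mod 11.
    total = sum((10 - i) * int(d) for i, d in enumerate(body))
    check = (11 - total % 11) % 11
    return body + ("X" if check == 10 else str(check))
```

So two listings can share nine digits yet carry different ISBNs, which is exactly the kind of near-identical pair that is easy for a warehouse to mix up.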

They have routinely sabotaged multiple other deliveries by not visiting my house and providing bogus "OTP not provided/unable to contact" updates.

The absolute worst thing that happened was with some books I bought from a small publisher that were, unfortunately, sent via Amazon Shipping. After a month of calls/emails complaining about Amazon's bogus delivery attempts, they completely ghosted me and the publisher had to RTO the books. I talked to the publisher, and they said they cannot afford to be out of pocket on shipping, so I paid them INR 1,000 to cover their expenses for the two unnecessary legs of shipping.

I have now decided that I am dealing with absolute scoundrels who do not value their customers' time, and I plan my purchases accordingly.

