
It seemed quite promising but from the outside momentum appears to be almost completely stalled, with only a handful of posts per year on the forum.

I'd be curious to know if there's progress being made behind the scenes.


From forum threads in the last year or two it sounds like they have gotten quite far and only need investment to progress further.

https://millcomputing.com/topic/any-plans-for-2024/ https://millcomputing.com/topic/yearly-ping-and-see-how-thin...

Edit: posted a HN thread about getting investors for Mill:

https://news.ycombinator.com/item?id=43054697


It has historically been solved in different ways [1] :)

Unfortunately most people would not be happy taking the same approach to their web browser.

[1] https://devblogs.microsoft.com/oldnewthing/20180228-00/?p=98... where the subject post was from comp.lang.ada.


>I recommend you to sometime go to look at second-hand markets for books. That is less selective antiquarians and especially something like salvation army or flea markets. Places that accept donations of books and then try to resell them for very cheap are good picks.

This is a very dangerous recommendation! You will inevitably leave with more books, and soon enough you'll find you've filled a bookshelf and need another, which will then look bare with only a few books, so you go and acquire more books... next thing you know, your house is primarily composed of bookshelves.


We have folks who believe in a flat earth; I wonder if there is also a "flat core society"...

Tool use is fine, when you have the education and experience to use the tools properly, and to troubleshoot and recover when things go wrong.

AI is not just a labour-saving device; it lets the user bypass thinking and learning, and robs them of an opportunity to grow. If you don't have the experience to know better, it can masquerade as a teacher and a problem solver, but beyond a trivial level relying on it is actively harmful to one's education. At some point the user will encounter a problem that has no existing answer in the AI's training data, and come to realise they have no real foundation to rely on.

Code generative AI, as it currently exists, is a poisoned chalice.


If you know the enemy and know yourself, you need not fear the result of a hundred battles. - Sun Tzu

Failing that, I have heard that removing your jacket, wrapping it around one arm, and allowing the dog to bite that arm is a decent move, in that it will hopefully protect you long enough to make use of humanity's strengths (finding a tool to do extreme violence with, and community to rescue you).

If there are multiple dogs and no one around to help, you're probably screwed.


Cripes [1].

[1] https://qntm.org/perso


Everyone who eats a banana (eventually) dies. Coincidence? I think not!


> but I'm yet to find an instance of it explaining a concept wrong.

How do you know for sure? LLM output is often plausible-sounding but incorrect - usually it's fairly obvious, but it can be subtle enough that I would not suggest using it until you've learned the old-fashioned way and can better judge whether the LLM is wrong.


Like all software you run it and see if it works.


> Like all software you run it and see if it works.

"Oh my god, where are my files gone ?"


At this point it's more about being scared of the bird.

