Hacker News | new | past | comments | ask | show | jobs | submit | d-lisp's comments

I found ChatGPT to be bad at VimGolfing.

"Here is a 35-keystroke solution that beats your 36-keystroke solution!" <89 keystrokes>

And then it keeps looping the same way it does when you ask it about the seahorse emoji (or sometimes it just lies about the keystroke count).

In fact that's not surprising; what is rather surprising is that some of the solutions (>= 100 keystrokes) actually work.


They should probably train LLMs to be bad at vim golf. The whole point of vim’s funky language is that human keypresses are very valuable and should not be wasted. Saving keystrokes for an LLM is a non-goal at best.

Why would you need webapps when you could just talk out loud to your computer?

Why would I need programs with colors, buttons, an actual UI?

I am trying to imagine a future where file navigators don't even exist: "I want to see the photos I took while I was on vacation last year. Yes, can you remove that cloud? Perfect, now send it to XXXX's computer and say something nice."

"Can you set some timers for my sport session? Can you plan a pure body-weight session? Yes, that's perfect. Wait, actually, remove the jumping jacks."

"Can you produce a Detroit-style techno beat? I feel like I want to dance."

"I feel life is pointless without work. Can you give me some tasks to achieve that would give me a feeling of fulfillment?"

"Can you play an arcade-style video game for me?"

"Can you find me a mate for tonight? Yes, I prefer black-haired people."


> "Can you set some timers for my sport session? Can you plan a pure body-weight session? Yes, that's perfect. Wait, actually, remove the jumping jacks."

Better yet, why exercise (which is so repetitive) when we can create a machine that just does it for you, including the dopamine triggering? Why play an arcade video game when we can create a machine that fires the neurons needed to produce exactly the same level of excitement as the best video game?

And why find mates when my robot can morph into any woman in the world, or, better yet, when brain implants can trigger exactly the same feelings as having sex and love?

Bleak: we are oversimplifying existence itself, and it doesn't lead to a nice place.


Maybe I should have rephrased everything as: "Make me happy"

"Make me happy"

"Make me happy"

"Make me happy"


That's an infinite loop.

Stop the loop of wanting

The Tao of Zen in 5 words.

> Bleak: we are oversimplifying existence itself, and it doesn't lead to a nice place.

We have already been on this path for many, many years, certainly decades if not centuries, although availability was definitely spotty in the past.

It is also kind of impossible to hop off this train: while it is individually possible to reject any of these conveniences, in general they just become a part of life. Which is not necessarily a bad thing, just different.


> although availability was definitely spotty in the past.

lol, that seems like a reference to the William Gibson quote "The future is already here, it's just not evenly distributed"


Citation needed on that last sentence about it not being a bad thing; also, I'm pretty sure climate change 100% counts as collateral damage of this behavior.

You say bleak, but a huge number of people would consider what you're describing a utopian paradise... especially the morphing robot part.

I should know; I am one of them. I mean exclusively the robot part.

Žižek is a good reference here. What's the word for it, interpassivity?

I think this is well illustrated in a lot of science fiction. Irregular or abstract tasks are fairly efficiently articulated in speech, just like the ones you provided. Simpler, repetitive ones are not. Imagine having to ask your shower to turn itself on, or your doors to open.

Contextualized to "web apps," as you have: navigating a list maybe requires an interface. It would be fairly tedious to differentiate between, for example, the 30 pairs of pants your computer has shown you after you asked "help me buy some pants" without using a UI (ok, maybe eye-tracking?).


On a tangent, but I still don't know why we don't have showers where you just press a button and get water at the correct temperature. It seems like the simplest thing, and something everyone wants. A company that manufactures and installs this (a la SolarCity) should be an instant unicorn.

What's "correct" for you might not be "correct" for others. Furthermore, your own definition of "correct" changes depending on circumstances; sometimes you want it hotter, sometimes you want it colder. Sometimes you want to change it partway through.

How do you calculate for that?

Back in the 90s, fuzzy logic was thought to be the solution. In a way it was, but only for niche/specialized purposes, and they still had to limit the variables being evaluated.


Water + electronics/power typically isn't very durable or reliable. Most people want their shower valves to work for at least 20 years, ideally 50-100.

Can be mitigated to a degree by separating the (cheaper) sensors and the (pricier) logic.

But then it will become a tradeoff of complexity vs longevity.


Nah, because it would still need servicing.

And why bother? There are reasonably well-done, low-maintenance, temperature-balancing valves out there.

And they typically last 20 or more years.


Maybe you don't even need a list if you can describe what you want, or are able to explain why the article you are currently viewing is not a match.

As for repetitive tasks, couldn't you just explain a "common procedure" to your computer?


They actually aren't done well via voice UI either, if you care about the output.

We just gloss over the details in these hypothetical irregular or abstract tasks because we imagine they would be done as we imagine them. We don't have experience trying to tell the damn AI, via a voice UI, not to delete that cloud (which one exactly?) but the other one. Which would suck and be super irritating, btw.

We know how irritating it would be to turn the shower off/on, because we do that all the time.


Voice interfaces are not the be all and end all of communication. Even between humans we prefer text a lot of the time.

What GP describes sounds to me like having a friend control the computer and dictate to them what to do.

No matter how capable the friend is, it's oftentimes easier to do a task directly in a UI than to have to verbalize it to someone else.


Plus there are many contexts where you don't want to use your voice (public transport, night time with other people sleeping, noisy areas where a mic won't pick up your voice...).

And there are people that unfortunately cannot speak.


Well, there are people who cannot see.

Fortunately, there are solutions.

I want to add that I think you are missing my argument here. Devices that allow you to speak without speaking shall soon be available to us [0].

The important aspect of my position is to think about the relevance of "applications" and "programs" in the age of AI, and, as an exaggeration of what is shown in that post, I was wondering whether, in the end, UI is not bound to disappear.

[0] https://www.media.mit.edu/projects/alterego/overview/


I cannot speculate about this, because I am not sure I observe the same.

We've had writing for only around 6000 years. It shall pass.

I just this week vibe-coded a personal knowledge management app that reads all my org-mode and logseq files and answers questions, and can update them, with WebSpeech voice input. Now it's my todo manager, grocery list, "what do I need to do today?", "when did I get the leaves done the last few years?" and so on, even on mobile (bye bye Android-Emacs). It's just a basic chatbot with a few tools and access to my files, 100% customized for my personal needs, and it's great.
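The core of a setup like that is surprisingly small: a chat loop plus one or two file "tools" the model can call. Here is a minimal sketch of the search side, assuming nothing about the commenter's actual code (the name `search_notes` and the file layout are purely my own illustration):

```python
import re
from pathlib import Path


def search_notes(root: str, query: str) -> list[str]:
    """Case-insensitive line search over org-mode / markdown files.

    A chatbot would expose this as a tool: the LLM requests
    search_notes(root, query) and the hits are fed back into its context,
    so only relevant lines (not every file) reach the model.
    """
    pattern = re.compile(re.escape(query), re.IGNORECASE)
    hits = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in {".org", ".md"}:
            for lineno, line in enumerate(
                path.read_text(encoding="utf-8").splitlines(), start=1
            ):
                if pattern.search(line):
                    hits.append(f"{path.name}:{lineno}: {line.strip()}")
    return hits
```

Whether the real app greps like this or loads whole files into context is exactly the question a sibling comment asks; grepping scales better once the notes outgrow the context window.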

I did that in the past, without a chatbot. Plain text search is really powerful.

A full assistant and a text search are quite different things in terms of usefulness.

Not for org-mode.

Very cool! Does it load all of the files into context or grep files to find the right ones based on the conversation?

This will eventually cause such reduction of agency that it will be perceived as fundamental threat to one's sense of freedom. I predict it will cause humanity to split into a group that accepts this, and one that rejects it at its fundamental level. We're already seeing the beginning of this with vinyl sales skyrocketing (back to 90s levels).

I must be really dumb because I enjoy producing music, programming, drawing for the sake of it, and not necessarily for creating viable products.

I've been imagining the same thing. We're kinda there with MCPs. Just needs full OS integration. Or I suppose you could write a bunch of CLIs and have the LLM call them locally.

Well, if you have a terminal emulator, a database, voice recognition software, and an LLM wrapped in such a way that it can interact with the other elements, you get a similar stack.

This is what all the people put out of work by AI are going to do.

> “Hell of a world we live in, huh?” The proprietor was a thin black man with bad teeth and an obvious wig. I nodded, fishing in my jeans for change, anxious to find a park bench where I could submerge myself in hard evidence of the human near-dystopia we live in. “But it could be worse, huh?”

> “That’s right,” I said, “or even worse, it could be perfect.”

-- William Gibson: The Gernsback Continuum


Basic engineering skills (frontend development, Python, even some kind of high-level 3D programming) are covered. If you do C/C++, or even Java in a preexisting project, then you will have a hard time constantly explaining to the LLM why <previous answer> is absolute nonsense.

Every time I tried LLMs, I had the feeling of talking with an ignoramus trying to sound VERY CLEVER: terrible mistakes on every line, surrounded by punchlines, rocket emojis, and tons of bullshit. (I'm partly kidding.)

Maybe there are situations where LLMs are useful, e.g. if you can properly delimit and isolate your problem; but when you have to write code that is meant to mess with the internals of some piece of software, they don't do well.

It would be nice to hear from both the "happy users" and the "discontented users" of LLMs in what context they experimented with them, to be better informed on this question.


Exactly. But also, I enjoy experiencing good things more than once, and sometimes need that kind of help to be reminded of good things I may have forgotten.


more of an "and" than a "but" then


Can "but also" be a convoluted form of "and"?


If this isn't cosplay, I'd be glad to know how you do it.


Incrementation?


I believe in UML's usefulness as a whiteboard/blackboard language. A fun way to explain what you need, or what you imagine to be a good architecture, but that's all: it's a drafting tool. But then, why not use it as a communication tool? You would draft something on the board, and the LLM would generate the program. Sometimes it is simpler to draw 5 rectangles, name them, and show their relationships in UML class modeling than to explain it textually.


UML class diagrams in mermaid syntax require less code than just defining actual classes with stubbed attrs and methods in some programming languages.
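For instance, a minimal two-class diagram in mermaid's classDiagram syntax (the class names here are purely illustrative, not from the thread):

```mermaid
classDiagram
    Order "1" --> "*" LineItem
    class Order {
        +total()
    }
    class LineItem {
        +price
        +qty
    }
```

The equivalent class stubs in Java or Python would need field declarations, method bodies, and constructors just to express the same shape.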

Years ago I tried ArgoUML for generating Plone classes/models, but there was a limit to how much custom code could be round-tripped and/or stuffed into the UML XML, IIRC.

Similarly, no-code tools are all leaky abstractions: they model with UI metaphors only a subset of patterns possible in the target programming language, and so round-trip isn't possible after adding code to the initial or periodic code-generation from the synced abstract class diagram.

Instead, it's possible to generate [UML class] diagrams from minimal actual code. For example, the graph_models management command in django-extensions generates GraphViz diagrams from subclasses of django.models.Model. With a code-to-diagram workflow (instead of the reverse), you don't need to try to stuff code into the extra_code attr of a modeling syntax so that the generated code can be patched and/or transformed after every code generation from a diagram.

https://django-extensions.readthedocs.io/en/latest/graph_mod...

I wrote something similar to generate (IDEF1X) diagrams from models for the specified waterfall process for an MIS degree capstone course.

It may be easier to prototype object-oriented code with UML class diagrams in mermaid syntax, but actual code shouldn't be that tough to generate diagrams from.

IIRC certain journals like ACM have their own preferred algorithmic pseudocode and LaTeX macros.


That is not what is implied here; OP seems to dislike the speech aesthetics produced by the model. I feel the same; the sugar-coating provided before and after any actual valuable information is (to me):

- not succeeding in its awkward attempt to achieve an experience comparable to talking with a human person
- not efficient, not enjoyable
- perfectly matching the experience of talking with a highly standardized and gimmicky version of human_v0.01.

Now, that being said, I don't really care about all of this.

The USA population is equivalent to approximately 4% of the total world population.


If HTML is a markup language and markdown a... markdown language, then org is all of this and much more. I wish there were an org-mode for everything non-textual, as org-mode solves the textual part of my life quite well.


I always got 19.5/20 at school, no matter the subject, as long as it involved logic, even if I hadn't studied anything. In other subjects, such as history, my grades were much more "ordinary". I skimmed through the academic part of my life without paying any attention to what the teachers were saying, drawing stuff in a notebook.

On the other hand, I cannot remember simple things such as where to go to <enter the name of a place> and I often have to wander and explore when I ride my car ...

Also, if you ask me what I did yesterday, it is highly probable that I won't be able to tell.

Also, I never know what day of the week it is, and sometimes I don't even know what month it is.

In every other respect, learning new things is like learning things I already know.


I had the exact same experience. Top grades in math etc without paying attention or studying for tests etc. Terrible at history and similar classes that mostly require you to remember trivia.

I also struggle to remember directions; I lean heavily on Google Maps and might use it 3-5 times before I actually remember the route. Terrible with faces and names. I get easily distracted, but sometimes I can focus for hours at a time. Other times I can't focus for the life of me; it's like my brain just isn't working.


I feel I am in a perpetual day-dream; hence space and time go unnoticed by my "brain".

Logical articulation (and I don't know why) resonates perfectly with my state of mind. If the "memory task" can be represented as a tree of axioms and theorems, then it feels like I already know it by heart (the information is compressed and can be developed from a small piece of information, be it a formula or a fact).

The weirdest part is that some aesthetic ability in my brain allows me, through some kind of hallucination (there's no other word), to perceive logical articulation where it doesn't exist: I can remember phone numbers only because I see a "discourse" or articulation in them, when truly that's plainly wrong.

I also noticed I am much more "affected" by signs that aren't part of the official road signage when driving, and sometimes I know where a graffiti is, for example, better than I know which of the next two villages is nearer. (But at least I make people laugh.)

I am amazed by my wife, who is able to travel a whole country knowing almost exactly which towns and villages we will go through without looking at a map, while telling me about the names of the different mountains, hills, and plant species around us (??????).

