
Quite honestly, as long as the UX is _actually_ improving, I'm completely fine with having to adapt. I don't want to live in a world where things stay the same just because it's comfortable.

Having said that, at least 50% of the time that people change the experience, they make it worse. So I agree that for companies that don't know how to design interfaces, keeping things the same is maybe a benefit.


I disagree with this. The touchscreen on my phone allows for far more versatile applications than are possible with physical buttons.

I really don't miss the days where applications had to retrofit their controls onto a fixed physical layout.

Sure, maybe for dialling a phone number or texting it was better. But for everything else I do on a phone, give me a touchscreen.


Having no kind of gatekeeping or moderation whatsoever before changes are published seems like a fantasy of another age. Nowadays, the wiki would be co-opted by some type of spam or malicious activity basically instantly.

It wasn't a fantasy, but it does belong to another age. The Internet was small back then, and a much higher-trust place. Today it's a war zone and everything must be locked down.

"Speak friend, and enter."


I didn't suggest no gatekeeping or moderation whatsoever. I explicitly qualified 'gatekeeping' with the word 'pre-emptive'.

A wiki allows anyone to edit by default and their changes are deployed without verification by another user. Moderation takes the form of locking specific pages on a case-by-case basis to users with tenure (who can still edit instantly) and rolling back misguided changes.

There's nothing wrong with having a site that isn't a wiki, but this is just a static site maintained by one person, so calling it a wiki is odd.


Yeah, even Claude Artifacts have replaced scripting for me to some extent. For example, if I need some data transformation thing: "Create a web app that takes text input and does X, Y, and Z".

It's correct for me 99% of the time, and for the remainder I can trivially ask it to tweak something (especially since, for those kinds of one-off tools, I really don't care about the actual UI and styling).

Beats figuring out, every time, the right incantation of jq, regex, or whatever other tool I only use once every 3 months. And I can trivially go back to that artifact and iterate on it later.
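For a concrete (made-up) example, the kind of incantation I mean, assuming a JSON file with an `items` array:

```
# Extract the id and name of every active item as CSV. Easy enough to
# write once, hard to reconstruct from memory three months later.
jq -r '.items[] | select(.status == "active") | [.id, .name] | @csv' data.json
```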


I'll be honest: I've never had the command line tool to set up a React / Next.js / Solid / Astro / Svelte / any framework app fail to make something that runs, ever.

I had create-react-app break for me on the first try, because I managed to luck into some transient issue with dependencies.

What magic command line tool exactly are you referring to? What CLI tool configures the frontend framework to use a particular CSS framework, sets up webpack to work with your backend endpoints properly, sets up CORS and authentication with the backend for development, and configures the backend to point to and serve the SPA?

dotnet new <the_template>
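For example, assuming the React SPA template that shipped with earlier SDKs (`MyApp` is a placeholder name):

```
# Scaffold an ASP.NET Core backend with a React frontend already wired up
# (dev-server proxying, an example API endpoint, build config), then run it.
dotnet new react -o MyApp
cd MyApp
dotnet run
```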

We had to encode a lot of videos at work once to play back to users. The resulting FFmpeg commands ended up being so complicated that every argument had about a paragraph-long comment explaining it.

I had to try to fix a bug in that script, and it took me hours just to get some understanding of it.
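To give a flavor, a minimal sketch of the kind of script I mean (the filenames and settings are made up, and the real comments were closer to a paragraph each):

```
#!/bin/sh
# -c:v libx264          encode video with x264, chosen for broad playback support
# -preset veryfast      speed/size trade-off; faster presets produce larger files
# -crf 23               quality knob; lower means better quality and bigger output
# -vf scale=1280:-2     downscale to 1280 wide, height rounded to an even number
# -c:a aac -b:a 128k    re-encode audio as 128 kbps AAC
# -movflags +faststart  move the index to the front so playback starts immediately
ffmpeg -i input.mp4 \
  -c:v libx264 -preset veryfast -crf 23 \
  -vf scale=1280:-2 \
  -c:a aac -b:a 128k \
  -movflags +faststart \
  output.mp4
```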

Makes me wonder if video processing is just that complicated, or if someone could make some sort of simpler tool for it.


There are simpler tools like HandBrake that present a GUI for handling many video encoding tasks, or even things like Tdarr for distributed transcoding.

But if you're documenting it right then you'll still need a paragraph explaining what every checkbox does...


ffmpeg should be config-based, with verbose, descriptive names for everything. It doesn't make sense in its CLI form; it stopped being human-readable a very long time ago.


Google will reliably index dynamic sites rendered using JS. And other search engines do the same. There's really no good reason to do this if you want to be indexed on search engines.


Agreed. Yet whether it should be done is different from whether it is done. Google was recommending it in 2018 and downgraded it to a "workaround" just two years ago. Sites still do it, SaaS products still tout its benefits, and Google doesn't penalize sites for it. GP's gripe about missing SERP blurbs is still very much extant and blessed by Google.


Well, one answer might be that someone could spin up emulators.

Or reverse engineer whatever app you have.

Or reset their phone? (or would you restrict it somehow to one account per physical phone? What happens if it gets sold or given away?)

Having worked in fraud detection a bit, I can say it's _really_ hard to prevent people from making multiple accounts, short of requiring ID-based verification. And even then.

And then you still have to not go overboard, and keep the onboarding low-friction enough that people will be willing to go through it.


Your points are all true. But we must not forget that security works in layers, like an onion. The fact that something can't be made military-grade hack-proof doesn't mean we should leave it wide open for the whole world to abuse.


I've found it useful for things where the auto-complete is both long enough and boring enough that glancing over it doesn't take longer than typing it myself would have.

For a practical example, let's say I'm defining protobuf models. I start by writing:

```
service X {
  rpc doSomething([cursor]
}
```

It'll generally be smart enough to complete it to the same pattern as every other one in the codebase.

I can then put my cursor after it and get it to generate the models by just tabbing a bunch:

```
service X {
  rpc doSomething(DoSomethingRequest) returns (DoSomethingResponse);
}

[cursor]
```

After a few tabs, that becomes:

```
service X {
  rpc doSomething(DoSomethingRequest) returns (DoSomethingResponse);
}

message DoSomethingRequest {}

message DoSomethingResponse {}
```

Then I can go to the actual code file where this is implemented, and having both the context of the codebase and the context of the protobuf file, it'll generate:

```
class DoSomethingImpl extends some.grpc.package.DoSomethingService {
  override def doSomething(request: DoSomethingRequest): Future[DoSomethingResponse] = {
    // Some usually bad code I just delete
  }
}
```

Nothing here is super complicated, I know it's right at a glance, and it's super easy to write, but I hate having to do the boilerplate. Could I write a simple script that kind of autogenerates this for a given API? Maybe. But a bunch of typing and piping things around becomes just a bunch of "tab" presses, and an LLM is more flexible to slight changes in the pattern.

This is multiplied by 10 if I then want to consume this from some other language and it has both in its context. I literally just need to create the one definition, and the LLM will complete the required code to produce / consume the API on both sides, as long as it's all in the same context window (and tools are getting good at doing this).
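(For what it's worth, even before any client code exists, something like grpcurl can poke the new endpoint; the proto file name and port here are a hedged sketch, not our actual setup:

```
# Call the new RPC on a local dev server with an empty request body,
# since DoSomethingRequest has no fields yet. Port is hypothetical.
grpcurl -plaintext -proto x.proto -d '{}' localhost:50051 X/doSomething
```
)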

What I really want now is to just say "Add a proto API for `doSomething`, add boilerplate to give a response and then consume it, and put it in X existing gRPC service", and have it do all of this without a series of smaller completions.


> Maybe we are in this strange situation where the people who make the products are so hyped about this new tech but the consumers are really hating it.

I think everyone is tired of conversing with a chatbot over text by now. But there are ways of integrating AI without making the primary interactions a pain. I'm also a bit puzzled by why people think it's a good idea in the first place.

Using a chatbox as a way to do primary interactions? Nope. But use that same AI to quickly summarize reviews for a product into a digestible format that I can easily glance at, or ignore? That last one actually saves me time, and doesn't harm my user experience, so why not?


Most LLM-based chat user interfaces are just so terrible.

They seem interesting on a surface level, because it feels like they should be able to help with whatever issue you have, and the demos make it look as if they do, but in reality, they almost never do anything. They just pretend to be a human who has the ability to actually do something.

When Microsoft showed Windows Copilot, what they showed made it feel like you could do things like tell it to "create a new user account with the name John and the password 285uoa29tu and put the picture of a dandelion as the profile picture," but you can't. It can't really do anything, other than give you (often misleading) advice on how you can do things yourself.

People have learned that. They have learned that these chat LLMs are just a facade used to waste their time, a simulacrum not of a human being, but of the outward appearance of a human being. It's another hurdle companies put in place for people to jump over before they can talk to an actual person who has the actual ability to do something for them.

So when people see "AI", they don't see "helpful tool," they see "an obstacle I have to get rid of to actually get anything done."

Who would want to pay for that?


I agree, IMHO people don't like to interact with AI, but they love it when the tedious work is done by AI.

The interaction part is cool at first, until your curiosity about the tech itself vanishes. The current AI tech has some very useful use cases and it's here to stay, but it's not replacing human interaction or the human-computer interface as I previously believed.

The AI Pin, Rabbit, and who knows who else failed in attempting to replace human interactions or screen UI with speech or text.

Chatbots look deceptively capable, but they are completely useless when it comes to holding authority and trust.

Every real human will provide you with accurate information to the best of their knowledge, and when they fail at that (sometimes in bad faith), it's considered a big deal: it can lead to anything from anger, to never trusting that person again (and therefore ignoring them when possible), to, in some cases, imprisonment.

Lying is too cheap for an AI to merit the trust we give a human interaction. It's a pure waste of time when it comes to doing anything consequential.


Maybe because chatbots in general, not just AI chatbots, are a terrible idea in the first place? And even in the cases where the idea might be somewhat OK (instead of wading through tons of support documents or FAQs, just ask a question and get redirected to the answer), it's usually implemented terribly.

Honestly, what's the point of even considering implementing it if the only interaction will be "I want to talk to a human"?


> But use that same AI to quickly summarize reviews for a product into a digestible format that I can easy glance at, or ignore?

You want to give them more plausible deniability in lying to you about reviews?



