
The best? I tried the Photoshop AI features to clean up an old photo for the first time this week and it crashed every time. After a bunch of searching I found a post identifying the problem: it always crashes if there are two or more faces in the photo. Guess someone forgot to test the more-than-one-person edge case.

I know 5 AI image-gen apps that are better than Photoshop and cost around $10-20/month. For example, Ideogram. Photoshop doesn't even come close.

Thanks. I will check this out. I was shocked at how terrible the output of Photoshop's AI tools is. Not even Midjourney 4 level.

Also check out GPT-4o image generation. It can fit in up to 20 objects with correct text, and it's very good at following instructions, in my experience.

There have been several posts on HN about Google no longer indexing the entire internet. My understanding is that AI-generated content has become so common that Google has become selective about which content it will index. If low-traffic sites don't put effort into getting their content indexed, it won't appear in Google results.


It's not only an indexing problem.

Sometimes I search for a whole sentence, e.g. the title of a video I know to exist, and it doesn't find it. Then I look for a vague sentence related to the video, and with some luck it finds it.


With Google Photos and YouTube they should have had enough leverage to build a social platform based on photo and video sharing. The implementation, marketing and commitment just weren't there.


Google is so bad at marketing. Gemini should have been an internal-only name. Google's ChatGPT competitor should be branded as Google AI or GoogleGPT, and it should be in the Google app.

Google+ was particularly awful. They had to break the established search behavior of using + and - to indicate required and excluded terms. Now we have quotes for required and - for excluded? It should have been Google Social.

The thing that is Google One would have been better off with the Google Plus name.

Don't even get me started on Google Chat/Messenger....


Google's branding strategy historically has been terrible, I grant you that, but Gemini is not an example of it.


Can you elaborate?


Not parent, but "Google AI" is overloaded - Google has many AI products that won't be "Google AI". "Gemini" refers to a specific set of capabilities, which are a subset of Google AI efforts[1]. Imagine Apple developing a new, non-iPad slate and branding it the "Apple Tablet".

Granted, Google's AI strategy is still muddled, e.g. Gemini is maybe replacing Google Assistant in some scenarios, but I'm able to express my meaning clearly with Gemini in the preceding sentence, as opposed to "Google AI is replacing Google Assistant - which is Google's AI assistant".

1. Gemma, Flash, and anything Google DeepMind develops would be Google AI products that won't fall under the "Google AI" branding.


Gemini has already replaced Assistant for Pixel users and on modern Nest devices. In the current Android Auto beta, it's replaced it there, too.

The thing that confuses me, though, is that they use the Gemini branding for the dev-oriented products you can license via Google Cloud, as well as the consumer-facing AI interfaces, and then also for the ties into Workspace products. ... but then there are standalone AI products (or are they features?) like NotebookLM that aren't associated with Gemini.


It's a great name. "G" matches the company, it's easy to say, it's a known word, it sounds good spoken, and the word itself has many subjective interpretations as to what it might mean (e.g. gemini = twins = you and AI).

Same reason that it's Alexa and not Amazon Assistant, Siri and not Apple Assistant, etc.

Google Pay/Android Pay/Google Wallet/Android Wallet/Pay Pay/Yap Yap should be the focus of our ire.


Siri was not named by Apple. It was an independent app that Apple bought out in 2010.


Entirely irrelevant. They could have renamed it after buying it and chose not to.


I ran a lot in two pairs of Altra Instinct 1.0s. Altra shoes used to be very minimalist shoes that also happened to be zero drop with big toe boxes. The most notable thing was how little cushion they had; it was like running on a piece of leather, close to barefoot running. You wouldn't even think the Altra shoes made today are from the same company, they have so much cushion. I don't think it's a bad thing; what I eventually took away from my Altras was that I need a huge toe box for my feet to be happy, and that if my shoes have too much drop I tend to heel strike. I'm glad my shoes have more cushion today, I just wish the cushion didn't break down so fast.


The book Artificial Intelligence: A Modern Approach starts by talking about schools of thought on how to gauge whether an agent is intelligent. I think it was mimicking human behavior vs. behaving rationally, which I thought was funny.


LLMs are replacing Google for me when coding. When I want to get something implemented, let's say making a REST request in Java using a specific client library, I previously used Google to find examples of using that library.

Google has gotten worse (or the internet has more garbage), so finding a code example is more difficult than it used to be. Now I ask an LLM for an example. Sometimes I have to ask for a refinement, and usually something is broken in the example, but it takes less time to get the LLM-produced example to work than it does to find a functional example using Google.
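To give a concrete idea, the kind of self-contained example I'm hoping to get back looks something like this - a minimal sketch using the JDK's built-in java.net.http.HttpClient (Java 11+) rather than any specific third-party client library, with a placeholder endpoint URL:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestGetExample {
        public static void main(String[] args) throws Exception {
            // Build the client once and reuse it; it's immutable and thread-safe.
            HttpClient client = HttpClient.newHttpClient();

            // api.example.com is a placeholder, not a real endpoint.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/users/1"))
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }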

But the LLM has only replaced my previous Google usage. I didn't expect Google to develop my applications, and I don't expect that of LLMs either.


This has been my experience of successful usage as well. It's not writing code for me, but pulling together the equivalent of a Stack Overflow example and some explanatory sentences that I can follow up on. Not perfect, and I don't blindly copy-paste it (same as Stack Overflow ever was), but faster and more interactive. It's helpful for wayfinding, but not for producing the end result.


I used the Kagi free trial, as well as ChatGPT occasionally, when I was doing Advent of Code in a somewhat unfamiliar language (Swift) last year.

The LLM was obviously much faster and the information was much higher density, but in my limited experiment it made up APIs quite literally about 20% of the time. But I was very impressed with Kagi's results and ended up signing up, and I now use it as my primary search engine.


It is really a double-edged sword. Some APIs I would not have found myself. In some ways an AI works like my mind fuzzily associating memory fragments: there should be an option for this command to do X, because similar commands have this option and it would be possible to provide it. But in reality the library is less than perfectly engineered and the option is not there. The AI also guesses the option is there. But I do not need a guess when I ask the AI - I need reliable facts. If the cost of an error is not high I still ask the AI, and if it fails it is back to RTFM; but if the cost of failure is high, then everything that comes out of an LLM needs checking.


I did the Kagi trial in the fall of 2023 and tried to hobble along with the cheapest tier.

Then I got hooked by having a search engine that actually finds the stuff I need, and I've been a subscriber for a bit over a year now.

Wouldn't go back to Google lightly.


In order to use a library, I need to (this is my opinion) be able to reason about the library’s behavior, based on a specification of its interface contract. The LLM may help with coming up with suitable code, but verifying that the application logic is correct with respect to the library’s documented interface contract is still necessary. It’s therefore still a requirement to read and understand the library’s documentation. For example, for the case of a REST client, you need to understand how the possible failure modes of the HTTP protocol and REST API are translated by the library.
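To make that concrete, here is a sketch using the JDK's built-in java.net.http.HttpClient (Java 11+, placeholder endpoint). Transport failures and HTTP-level errors surface through different parts of the documented contract, and you only learn that by reading it:

    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class FailureModesExample {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(5))
                    .build();

            // api.example.com is a placeholder, not a real endpoint.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/orders/42"))
                    .timeout(Duration.ofSeconds(10))
                    .GET()
                    .build();

            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());

                // Part of this client's contract: 4xx/5xx responses do NOT throw;
                // checking the status code is the caller's responsibility.
                if (response.statusCode() >= 400) {
                    System.err.println("API error: HTTP " + response.statusCode());
                } else {
                    System.out.println(response.body());
                }
            } catch (IOException e) {
                // Transport-level failures (DNS, refused connection, timeout)
                // surface as IOException, not as an HTTP status code.
                System.err.println("Request failed: " + e.getMessage());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }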


I wonder how good Google could be if they had the charge-per-query model that these LLMs do. AI or not, dropping the ad incentive would be nice.


Where are you ordering glasses from that you can make selections like this?


Firmoo, for example. A local optician should probably also be able to offer this choice.


Many bodybuilders go through bulk and cut phases.


It depends. The 5mg, 10mg, and 15mg doses were the ones tested and recommended as maintenance doses. The 2.5mg dose is meant as a starter to reduce side effects, but since most people don't see results with it, the recommendation is that you only take it for 4 weeks. The 7.5mg and 12.5mg doses are meant as transitional doses, but you can stay on them longer than 4 weeks.

Some doctors will go by the Lilly recommendations, but I think more are allowing people to stay on the lowest dose that provides benefits. That leaves your health insurance as the only other obstacle.

There is a chart at https://zepbound.lilly.com/hcp/dosage

I was one of the rare people who saw results with 2.5mg, but by the end of that first month I had plateaued. After 3 months at 5.0mg I've plateaued again and will probably move up to 7.5mg in the near future and stay on it as long as I can.

Some people like to go back to 2.5mg as their maintenance dose after reaching their weight loss goals.

