
i was thinking the same but with respect to this entire article -- feels like we're missing the second half and/or much more detail. feels like the article was due in to the editor by 11pm and the author forgot and started writing it at 10pm. :x

either way, very fascinating experiment. i look forward to hearing about the results!


i miss the early aol days. i remember all the havoc you could wreak using punters and mass mailers. ah, to be a script kiddie again! xD


what's a realistic use case for ai on a mobile phone? i have yet to find myself saying "gee, if only i had ai on my phone, i could do XYZ!"


On iPhone, if I take a picture of a plant or animal, it identifies it for me. It's not 100% accurate by any means, but it's useful enough. I've figured out which seedlings were plants I wanted vs. weeds. I've figured out species of birds I'd taken photos of with my SLR (i.e., the phone takes a picture of Lightroom editing the image, and it's able to identify the bird from that... I'd prefer there were a way to not require me to take a photo of my monitor, either by doing it "live" and/or by adding the functionality to the Mac.) For people and pets it can find other images that contain the same subject.

When my daughter was studying Chinese, I could use the live-video translation app and see the lesson text translated to English, and see her hand-written answers also translated to English. I could see this being more broadly useful when travelling, along with live translation of spoken words.


While it's true your examples are AI, I believe "AI" in this context is being used to mean LLM-based AI.

I don't know if LLM-based translation is better than previous translation models.


While the AI focus these days is on LLMs, AFAICT, the NPUs and GPU accelerators are just generically fast MUL and MAD machines with varying precisions, which should help any AI, and even non-AI tasks like image filter kernels.

Getting hardware to enable faster AI processing on phones should be a good thing if used for useful tasks, LLM or not.
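To make the point concrete, here's a minimal NumPy sketch (names and shapes are made up for illustration) showing the same multiply-accumulate core behind both a plain image filter and an LLM-style dense layer -- which is why generic MUL/MAD hardware helps either workload:

    import numpy as np

    def conv2d_gray(img, kernel):
        """Non-AI image filter: slide a small kernel, multiply and accumulate."""
        h, w = img.shape
        kh, kw = kernel.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(img[y:y+kh, x:x+kw] * kernel)  # MUL + ADD
        return out

    def dense_layer(x, weights, bias):
        """Neural-net layer: the same MUL + ADD, just expressed as a matmul."""
        return x @ weights + bias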


Well, I'm still waiting for an AI feature that recognizes my usage patterns, and adapts the system's behavior.

E.g. if it sees that I always reopen an application 2 seconds after the OS kills it in the background, then maybe it shouldn't be killed.

Or if I wake up 3 minutes before the alarm would go off, and take a trip to the toilet, maybe it shouldn't blow up the speaker while I'm frantically pulling up my underpants, but recognize that I'm already awake, or at least hold the alarm until I'm near the phone again.

Or automatic backlight shouldn't go crazy when I walk in the night under the streetlamps, it should recognize that lamps are coming and going, and that backlight adjustment every 5 seconds is silly and annoying.

I could go on. IMO there is definitely a place for machine learning/AI in phones (and other places too), especially for quality-of-life thingies. Just nobody is doing them, I guess because these are not as visible as image generation. My credit card has been ready to spend on such developments since at least 2021. One of these days I will have enough of waiting and do it myself, out of spite...


Spellcheck, voice control, voice-to-text, autocomplete and next-word-prediction are all AI features that are already in use. Voice-to-text could certainly be much better if something like Whisper was integrated. I pretty much never actually listen to voicemails, so having a reliable transcription there would be great.

I'd also love to be able to give commands that traverse multiple apps (e.g. take my google sheet and venmo request everyone the specified amount). Most likely this would happen by teaching an AI tool use and having apps expose an API.
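A rough sketch of what "teaching an AI tool use" could look like: each app exposes a function schema, the model emits a structured call, and the phone dispatches it to the app. The schema format and names like venmo_request are hypothetical, just for illustration:

    # Hypothetical: the Venmo app registers a "tool" the assistant can call.
    TOOLS = {
        "venmo_request": {
            "description": "Request money from a contact",
            "parameters": {"to": "string", "amount_usd": "number", "note": "string"},
        },
    }

    def dispatch(tool_call):
        """The phone OS routes the model's structured output to the right app API."""
        if tool_call["name"] == "venmo_request":
            args = tool_call["arguments"]
            print(f"Requesting ${args['amount_usd']} from {args['to']}: {args['note']}")

    # Given the spreadsheet rows and the tool schema, the model would emit calls like:
    dispatch({"name": "venmo_request",
              "arguments": {"to": "alice", "amount_usd": 12.50, "note": "group dinner"}})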

I'd love to be able to give voice commands for certain things (e.g. flipping through recipes when my hands are wet) and have the phone be able to do the actual thing I want.

I actually think phones are a much better place for AI since they're so difficult to type on that voice could provide a higher-bandwidth interface.


One use case could be improving navigation directions. Right now, map apps provide granular, step-by-step instructions that include unnecessary details, such as how to exit your own neighborhood.

AI could provide more human-oriented directions that focus on key landmarks and decisions rather than every minor turn. For example:

"Hop on 80 West, cross the bridge, take Sir Francis Drake onto 101 South, take the Alexander Avenue exit, don't go through the tunnel, and your destination will be on the right."


I'd love it if CarPlay/Siri just could read out stuff it finds on the Internet. Currently, all I can get out of it is "Sorry, I cannot show this to you right now" for basically everything except trying to control multimedia.

At one point, I had ChatGPT working via voice in CarPlay mode (via Shortcuts, I think?), but it seems Apple has since disabled that, probably for some stupid reason.


It will likely become available for application developers to use. At work, we use it to assist warehouse check-ins by allowing the guard to take photos of the truck, paperwork, seal, etc. and fill out the forms going in and out. If it's built in, then it can run on-device, so over time a lot more workflows can be seamless.


I used Google Lens yesterday to get the artist's name for a painting I liked; that was neat.


That's not "on a phone," though. That's just schlepping an image up to the Google data center, and getting a result back. That you're using the phone as an interface to a datacenter doesn't make it "AI on the phone."


The only one I use regularly is object replacement in photos. It's great for editing a street sign out of a picture of the sky or something, especially if you just don't want to dox yourself posting a pic. It's definitely not high quality most of the time. Just a blurry redraw of what the background might look like.

Otherwise, totally with you. No idea why my phone needs AI. I can just open the ChatGPT app if I want to have a discussion with ChatGPT about something. I'm so tired of apps updating to "Add a new AI assistant!" like why do I need to talk to an LLM in most of the apps I use?


can someone explain how the "mini-moon" breaks out of earth's gravitational pull? it would seem logical that once an object is affected by the gravitational pull of another object, it would require some outside force to "break" free. what am i missing?


In the article, it says the sun's gravity is what pulls it back out of Earth's influence.


what's particularly disappointing is that the USDA knew about this for TWO YEARS prior to the outbreak and forced no action. and of course there will be no accountability.


i find it fascinating that there are likely engineers working on this program who are younger than the spacecraft.


FDA clearance only means a company can legally market the device.

additionally, product testing != human trials. they are two very different things.

final product testing is called "design verification and validation" (aka dV+V) where a statistically significant number of devices are tested against several different categories.

human trials can run the gamut from small IDE studies to phase I through IV.

the drug approval process is MUCH different than the medical device process.


another confounding problem is how the FDA classifies and clears devices. most regulatory strategies have the company strive to find a predicate and prove they are substantially equivalent (SESE), which generally means a class II device, which does NOT require clinical data for clearance.

it's frustrating that the article uses the term "approve" which is specific to PMA devices (class III), whereas class II devices are "cleared" by the FDA.

- a medical device executive


i don't agree with any of the above. in fact, it's cringy, tbh.

once you appreciate the dance that all functional teams perform in an organization, you'll experience true success.


i find that the automotive/gearhead youtube channels i enjoy most tend to be ones run by individuals who do it on the side and enjoy documenting and sharing their work. the content on these channels is also very technical in nature.

a few to check out are Dieselcreek, I Do Cars, and Watch Wes Work.

