
Even then there is no transparency on how it decides what runs on the ANE/GPU etc

Correct. OS-level stuff gets first priority, so you can’t count on using it.

Turns out third-party apps actually get priority for the ANE.

Isn't the whole point for it to learn what features to extract?

This used to be the most embarrassing thing that could happen: a team member asks you during a PR why you did something a certain way and you can't provide an answer. This seems to be becoming the norm now.

It also used to be an indicator that potentially someone was outsourcing their work overseas.

Edit: I had an instance once where, about once a month, another developer would ask me about my workplace setup. I mentioned it to someone and was told they were maybe the English speaker of the group. Upon further investigation, that seemed to be the case.


This is yet to be seen. Certainly feels like I'm more productive but I'm not seeing any faster results. It would be nice for this to be studied more.

What people mostly see is the illusion of productivity. But the measure should be outcomes, not the amount of stuff made. If a factory produces 10x the product but it is only 1/3rd the quality of what it was before, that is unsustainable long term and leaves the door open for a competitor to attack them on quality.

This is the key driver behind all those 'enshittification' problems that we see. Quantity versus quality is almost always a balance, not a binary; if you start treating it as if one should always trump at the expense of the other, then sooner or later it will catch up with you.


I know Anthropic is burning cash but I'm pretty sure they can afford to pay the developer fees for those platforms.

This seems like a bot comment.


So is yours.


Not true. Phone sensors are amazing even without any processing. The difference is not as large as you might think.


As a person who has an expensive phone and a professional camera, let me retort by saying that the difference is larger than you think. On some level, it's basic physics. You get fewer photons, etc. Apple hasn't unlocked the secrets of optics or semiconductor manufacturing that are out of reach for Canon or Nikon. So if they keep making sensors and optics that are many times larger and bulkier than in a phone, there's probably a reason for it.


I like to think I have some experience in this area. I have an app on Android that records RAW video (MotionCam Pro). We've compared large expensive cameras to phone sensors many times (you can see it on our YouTube channel if you like).


It's dark outside. I'm sitting in my living room with reasonable indoor lighting. I point my "Pro" phone camera on the tele setting (but no digital zoom) at the wall and take a photo. I zoom in on the capture and there is basically no real pixel detail anywhere in sight. It's all smudged to algorithmically cover up severe sensor noise. Crown moulding edges are not even straight lines and all have weird jaggies.

If I go and grab my full-frame mirrorless, every pixel of the image will be usable at 1:1 crop. "Taking photos indoors and wanting to zoom in and crop" is not an extreme test. The usability of 1:1 crops from cell phones is limited even in daylight.

But yeah, if you look at a photo taken in good light with no cropping and at screen display resolutions, it looks pretty close to the output from a pro mirrorless. My only point is that you get maybe 3-5 good megapixels, not 45+.

And that's before we get into dynamic range, bokeh, etc, all of which phones more or less need to fake to approach the look and feel of photos taken with grown-up cameras.


Not GP, but for me the biggest differentiators of larger sensors are less perspective distortion and better low-light performance. There are probably other details like f-stop range, but I haven't played with those much. I'm just a smartphone shooter (I don't even own a large sensor), but I still prefer to use the telephoto when possible to get squarer-looking shots with less noise, and to me that feels like what a larger sensor should deliver.


Really depends on the environment. Low light and nighttime are much worse than you might think; anything else isn't so bad.

(Try taking a photo of the moon with an iPhone. You can't do it, not even with Halide.)

The lenses are also different and direct lighting can cause annoying internal reflections. I don't know this area as well, but lenses are more important than sensors for photos.


You absolutely can:

https://mastodon.social/@heliographe_studio/1156653713048409...

(taken with BayerCam.app, not Halide, but Halide can capture the same raw Bayer data)

It's not an amazing photo by any means. But it is a photograph of the moon - the seas are all well delineated, Copernicus/Kepler/Aristarchus/Grimaldi are visible/recognizable.

A test that smartphones did not pass a few years ago.


What I've found out is that a lot of people don't actually care. They see it work and that's that. It's impossible to convince them otherwise. The code can be absolutely awful but it doesn't matter because it works today.


My issue with any of these claims is the lack of proof. Just share the chat and show how it got to the discovery. I'll believe it when I can see it for myself at this point. It's too easy to make all sorts of claims without proof these days. Elon Musk makes them all the time.


But what if you want both on a shared memory system?


No problem: then you provide an optional, more complex API that gives you additional control. That's the beautiful thing about CUDA: it has an easy API for the common case that suffices 99% of the time, and additional APIs for the complex cases if you really need them, instead of making you go through the complex API all the time.
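
As a rough sketch of that "easy default, explicit control when needed" split (my own example, not necessarily what the parent had in mind): unified memory is the simple path on a shared-memory-style setup, and cudaMemAdvise/cudaMemPrefetchAsync are the optional knobs if you want to manage placement yourself. The kernel and sizes below are placeholders.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Trivial placeholder kernel: doubles each element in place.
    __global__ void scale(float* data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    int main() {
        const int n = 1 << 20;
        float* data = nullptr;

        // Easy path: unified memory. One allocation visible to host and
        // device; the runtime migrates pages for you, no explicit cudaMemcpy.
        cudaMallocManaged(&data, n * sizeof(float));
        for (int i = 0; i < n; ++i) data[i] = 1.0f;

        // Optional "more control" layer: hint placement and prefetch yourself.
        int device = 0;
        cudaGetDevice(&device);
        cudaMemAdvise(data, n * sizeof(float),
                      cudaMemAdviseSetPreferredLocation, device);
        cudaMemPrefetchAsync(data, n * sizeof(float), device);

        scale<<<(n + 255) / 256, 256>>>(data, n);
        cudaDeviceSynchronize();

        printf("data[0] = %f\n", data[0]);  // expect 2.0
        cudaFree(data);
        return 0;
    }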

