
It's pretty standard practice for all camera manufacturers to use a basic incremental filename. Much more useful data is embedded in the JPEG EXIF metadata.

On the contrary, including a date in the filename could be perceived as user-hostile, because none of the multiple ISO (or non-ISO) date representations is universally used and understood by the general public.

E.g.: 20241112, 1112024, 1211024, 131208, 081213, and so on...
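
If you do want dates, the EXIF timestamp is the unambiguous place to pull them from. A minimal sketch with exiftool (the filename and rename layout here are just illustrative):

    # read the embedded capture time instead of trusting the filename
    exiftool -DateTimeOriginal -S IMG_0001.JPG

    # rename files from EXIF into an explicit, sortable layout
    exiftool '-FileName<DateTimeOriginal' -d '%Y%m%d_%H%M%S%%-c.%%e' .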


I think the issue is more that the battery runs out, and now it's 2007 again and you start overwriting img_20070101_01.jpg; last-directory-entry++ is a bit more robust.

One upside is that it hopefully prevented developers from shipping half-baked software that relies on filenames and can't handle duplicate names gracefully.

You can't prevent collisions (multiple sources, counter resets, date resets, etc.). So it's actually nice to have an unforgiving standard that will bite you if you make unfounded assumptions.
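
A minimal sketch of what handling that gracefully can look like on the importing side, assuming a hypothetical imports/ directory: key each copy on a short content hash so identical names from different cards can't clobber each other.

    # hypothetical import step: suffix each photo with a content hash
    mkdir -p imports
    for f in /media/card/DCIM/*.JPG; do
        h=$(sha256sum "$f" | cut -c1-8)
        cp -n "$f" "imports/$(basename "${f%.JPG}")_${h}.JPG"
    done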


I guess not in the final release, but security researchers used the developer beta, probably with some verbose mode enabled.


It's probably more of a tradeoff.

This longer delay won't prompt hectic headlines about users angry at random reboots; it is long enough that federal agencies won't publicly react and plead with Trump for their backdoor again; and it is a low-profile update that won't necessarily be noticed outside tech circles, so "small fry" bad actors won't know how to correctly cover their backs.

A user-hostile design would have been to never implement it in the first place. It's basically Apple's signature move to pick a generic default value and not bother the user (for better and sometimes for worse).


M3 and M4 aren't there because the Asahi team has a roadmap and sticks to it.

They don't want to leave M1/M2 support half-finished before moving on to the next generation, which will ultimately support more features.

If you're not happy with the pace, go contribute, but don't invent false issues.


You are misreading the comment. It is indicting Apple, not the Asahi team, for not caring. If Apple cared and hired the Asahi folks and provided them with help, they would probably be able to churn out drivers faster.


Apple does not want it.


Nothing is barring Apple from supporting Vulkan natively on macOS. This is essentially the closing statement of Alyssa Rosenzweig's talk.

With Apple's knowledge of its internal documentation, they are best positioned to produce an even better low-level implementation.

At this point the main roadblock is the opinionated stance that Metal porting is the only officially supported way to go.

If Valve pulls off a witch-crafted way to run AAA games on Mac without Apple's support, that would be an interesting landscape. And it might force Apple to reconsider their approach if they don't want to be cornered on their own platform...


> If Valve pulls off a witch-crafted way to run AAA games on Mac without Apple's support, that would be an interesting landscape. And it might force Apple to reconsider their approach if they don't want to be cornered on their own platform...

Right, except that Game Porting Toolkit and D3DMetal were an exact response to this scenario. Whether it's the right approach, time will tell, but Apple definitely already headed this one off at the pass.


Game Porting Toolkit isn't a response to this scenario. All advertising for GPTK is aimed squarely at publishers, and even Whiskey has to pull a binary out of a back alley for D3DMetal. Apple is essentially doing nothing and hoping it works.


> Metal porting is the only officially supported way to go.

Apple already provides its own native implementation of a DirectX to Metal emulation layer.


And yet I see more games available for the Steam Deck than for Apple Silicon... Maybe because porting, as opposed to emulating, requires action on the developer's side.

And this is especially true for existing games that "aren't worth porting" but are still fun to play. (Is there a Vulkan-to-Metal / OpenGL-to-Metal layer in the Game Porting Toolkit? Is it the NexStep?)

There is actually a sweet spot here for Valve that could benefit everyone:

  - Valve as a necessary third party

  - Gamers to access a wider catalog

  - Apple so they don't have to bother developing a porting process for old games


> Maybe because porting, as opposed to emulating, requires action on the developer's side.

The Steam Deck is also just emulating native Windows APIs, but on Linux.

https://www.wikipedia.org/wiki/Proton_(software)

Game compatibility isn't 100% with either, and both have community maintained lists with compatibility information.


Are you sure about your point?

From what I recall, notarization is only done on the developer's side before publishing. Client-side, it's just a check against Apple's certificates to verify that the binary hasn't been tampered with since notarization; no phoning home should be involved (or maybe just to update Apple's certificates).


According to this article macOS does do a network request to check the notarization ticket:

https://eclecticlight.co/2023/03/09/how-does-ventura-check-a...

They also check the developer certificate in the OCSP stage.

Both of these are mechanisms through which Apple can, at its discretion, effectively lock developers out of a smooth install experience for their software.
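
For the curious, the client-side pieces can be poked at with stock macOS tools; a hedged sketch (the app path is a placeholder):

    # Gatekeeper assessment: signature, notarization and policy in one verdict
    spctl -a -vv /Applications/SomeApp.app

    # check the stapled notarization ticket, if the developer stapled one
    xcrun stapler validate /Applications/SomeApp.app

    # inspect the signing identity and certificate chain
    codesign -dvv /Applications/SomeApp.app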


Isn’t this how certificate revocation flows work?


Proposing new competing hypotheses is the very nature of scientific progress.

Dark matter and string theory are still imperfect working drafts compared to other paradigms.

Unless you can link to a systematic study proving that CCC+TL is flawed, there is no guarantee you are on the right side of scientific progress. Flat earthers were once convinced they were not on the side of crackpottery.


As someone with a PhD in cosmology, I'm well aware of how scientific progress works. This paper is simply not well done. It's lacking basic plausibility checks. It's a waste of time to give every badly formulated idea your attention; you'll drown in noise.

> there is no guarantee you are on the good side of scientific progress

I know, but I'm very confident. To quote Carl Sagan: 'It pays to keep an open mind, but not so open your brains fall out.'


It's minimalist until you read the installation part...

As a backend SQL guy I always feel overwhelmed by "minimalist" software that actually depends on me knowing how to deploy safely on Docker, or mastering N dependencies, before actually having something to try. Long gone are the LAMP days... they had their own set of problems (wrong versions!), but it was a simpler time when you felt a little bit more in control.

Old man yells at cloud, I guess...


I often feel the same way. I had someone ask me to make a microservice on some platform she built. I was told it would take 10 minutes. In reality, it took a couple of weeks, and then every week for a year I was told something was changing and I had to mess with this or that, and also attend daily meetings about the project. 10 minutes turned into 30% of my whole year. The whole platform she built lasted maybe 2 years before it was decided we needed to move on from it. It was a total waste of time.

Meanwhile, I have a little LAMP project that is used significantly more than the microservice, that I've run for 15 years, and that I only have to touch when it needs feature updates. The platform itself just works. Occasionally I'll need to move to a newer OS, which takes a few hours to get the new server built, run the job to configure it (doing it manually doesn't take too much longer), then submit a request to point the load balancer at the new servers.

Granted, some of this comes down to experience. However, needing to know all the tools involved for the microservice was much more annoying and they broke half the time.


Boring technology just works. That's why it is boring and not appealing to younger developers.

https://boringtechnology.club/


People forget that you need a huge ranch, several farmhands, etc., to handle cattle.

Meanwhile, anyone can care for pets with little effort.


That's another subject altogether. Makers of huge ranch tools don't advertise their solutions as "simple".

They usually don't advertise to the general public at all because they are B2B-oriented.


I got it running locally like this, using uv to manage dependencies:

    git clone https://github.com/redimp/otterwiki.git
    cd otterwiki

    mkdir -p app-data/repository
    git init app-data/repository

    echo "REPOSITORY='${PWD}/app-data/repository'" >> settings.cfg
    echo "SQLALCHEMY_DATABASE_URI='sqlite:///${PWD}/app-data/db.sqlite'" >> settings.cfg
    echo "SECRET_KEY='$(echo $RANDOM | md5sum | head -c 16)'" >> settings.cfg

    export OTTERWIKI_SETTINGS=$PWD/settings.cfg
    uv run --with gunicorn gunicorn --bind 127.0.0.1:8080 otterwiki.server:app
I filed an issue here suggesting that for the docs https://github.com/redimp/otterwiki/issues/146 - and also that it would be great if getting started could be as simple as this:

    pip install otterwiki
    otterwiki \
      --repository app-data/repository \
      --sqlite app-data/db.sqlite \
      --secret-key secret1 \
      --port 8080


And to think people used to think

  ./configure --prefix=/home/user/appname
  make
  make install
was too complicated


The problem was mainly that even "apt-get build-dep" is not enough to handle all the problems that arise from that. Even if configure was standardized, there were always problems with the diversity of systems.

NIH syndrome is still big in software build tools: everything is complicated unless you have written it yourself, in your own environment. Admittedly I seldom run those commands manually anymore, but things have gotten way worse when I do try: specific versions of tools, libraries and kernels, or just kernels. Nix build scripts are actually some of the worst offenders here, often ignoring every other standard available. Not saying Nix is bad, just an example of why what you describe above is more complicated than it sounds.


I got it to work under Piku (https://piku.github.io) in much the same way (since I support uwsgi, that bit was trivial).

I did have to hardcode the data path, and I think having some form of export/snapshot would help as well, but submitting a patch might be a fun weekend project.


But deploying on Docker is simpler than LAMP! All dependencies included. All binaries included. You can even just tell systemd to run it (systemd also usually being included).
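
A hedged sketch of what that looks like here, assuming the image name, tag and internal port from the project's Docker docs (adjust them to whatever the README actually specifies):

    # run the wiki in a container, persisting its data on the host
    docker run -d --name otterwiki \
        -p 8080:80 \
        -v "$PWD/app-data":/app-data \
        redimp/otterwiki:2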


It's comprehensive, not complex. They just show all of the typical ways one would want to deploy.


I can't see any issue with what's in the installation part. It all looks very straightforward for _each_ installation method.


Yeah I’m with you. I cannot stand having to jump through hoop after hoop just to get started - things to download, command line utils to install, line after line after line to copy into the terminal, layers and layers of dependencies, possibly with version incompatibilities that the “getting started” page was never updated to reflect… it’s a nightmare.

Sometimes you just want to sit down and write code and see it working.


Come on, you just need to teach a bit of git/k8s/docker/tls/proxies/storage/vault/markdown/linux/apt, and then your family will be autonomous at managing this wiki, IF you are allowed to take vacations.


Most probably they employ overseas, underpaid workers with non-standard English accents, and so they include text-to-speech in the production process to smooth out the end result.

I won't argue whether text-to-speech qualifies as AI, but I agree they must be making bank.


I wonder if they are making bank. Seems like a race to the bottom, there’s no barrier to entry, right?


Right, content creators are in a race to the bottom.

But the people who position themselves to profit from the energy consumption of the hardware will profit from all of it: the LLMs, the image generators, the video generators, etc. See discussion yesterday: https://news.ycombinator.com/item?id=41733311

Imagine the number of worthless images being generated as people try to find one they like. Slop content creators iterate on a prompt, or maybe create hundreds of video clips hoping to find one that gets views. This is a compute-intensive process that consumes an enormous amount of energy.

The market for chips will fragment, margins will shrink. It's just matrix multiplication and the user interface is PyTorch or similar. Nvidia will keep some of its business, Google's TPUs will capture some, other players like Tenstorrent (https://tenstorrent.com/hardware/grayskull) and Groq and Cerebras will capture some, etc.

But at the root of it all is the electricity demand. That's where the money will be made. Data centers need baseload power, preferably clean baseload power.

Unless hydro is available, the only clean baseload power source is nuclear fission. As we emerge from the Fukushima bear market where many uranium mining companies went out of business, the bottleneck is the fuel: uranium.


You spent a lot of words to conclude that energy is the difference maker between modern western standards of living and whatever else there is and has been.


Ok, too many words. Here's a summary:

Trial and error content-creation using generative AI, whether or not it creates any real-world value, consumes a lot of electricity.

This electricity demand is likely to translate into demand for nuclear power.

When this demand for nuclear power meets the undersupply of uranium post-Fukushima, higher uranium prices will result.


Continuing that thought: higher uranium prices and real demand will lead to unshuttering and exploiting known, proven deposits that are currently idle, and to increased exploration of known resources to advance their status to measured and modelled for economic feasibility, along with revisiting radiometric maps to flag raw prospects for basic investigation.

More supply and lower prices will result.

Not unlike the recent few years in (say) lithium: anticipated demand surged exploration and development, actual demand didn't meet anticipated demand, and a number of developed, economically feasible resources were shuttered... still waiting in the wings for a future pickup in demand.


Spend a few months studying existing demand (https://en.wikipedia.org/wiki/List_of_commercial_nuclear_rea...), existing supply (mines in operation, mines in care and maintenance, undeveloped mines), and the time it takes to develop a mine. Once you know the facts we can talk again.

Look at how long NexGen's Rook 1 Arrow is taking to develop (https://s28.q4cdn.com/891672792/files/doc_downloads/2022/03/...). Spend an hour listening to what Cameco said in its most recent conference call. Look at Kazatomprom's persistent inability to deliver the promised pounds of uranium, their sulfuric acid shortages and construction delays.

Uranium mining is slow and difficult. Existing demand and existing supply are fully visible. There's a gap of 20-40 million pounds per year, with nothing to fill the gap. New mines take a decade or more to develop.

It is not in the slightest like lithium.


> Spend a few months studying existing demand

Would two decades in global exploration geophysics and being behind the original incarnation of https://www.spglobal.com/market-intelligence/en/industries/m... count?

> Once you know the facts we can talk again.

Gosh - that does come across badly.


Apologies.

When someone compares uranium to lithium, I know I'm not talking to a uranium expert.

All the best to you, and I'll try to be more polite in the future.


Weird... and to think I have several million line-km of radiometric surveys behind me, worked multiple uranium mines, made bank on the 2007 price spike, and that we published the definitive industry uranium resource maps in 2006-2010.

Clearly you're a better expert.

> when someone compares uranium to lithium, I know I'm not talking to a uranium expert.

It's about boom bust and shuttering cycles that apply in all resource exploration and production domains.

Perhaps you're a little too literal for analogies? Maybe I'm thinking in longer time cycles than you are and don't see a few years of lag as anything other than a few years.


Once again, allow me to offer my sincere apologies.

You are well-prepared to familiarize yourself with the current supply/demand situation. It's time to "make bank", just like you did in 2007... only more so. The 2007 spike was during an oversupplied uranium market and mainly driven by financial actors.

I invite you to begin by listening to any recent interview with Mike Alkin.

Good night and enjoy your weekend.


> Most probably they employ overseas, underpaid workers with non-standard English accents, and so they include text-to-speech in the production process to smooth out the end result.

Might also be an AI voice-changer (i.e. speech2speech) model.

These models are most well-known for being used to create "if [famous singer] performed [famous song not by them]" covers — you sing the song yourself, then run your recording through the model to convert the recording into an equivalent performance in the singer's voice; and then you composite that onto a vocal-less version of the track.

But you can just as well use such a model to have overseas workers read a script, and then convert that recording into an "equivalent performance" in a fluent English speaker's voice.

Such models just slip up when they hit input phonemes they can't quite understand the meaning of.

(If you were setting this up for your own personal use, you could fine-tune the speech2speech model like a translation model, so it understands how your specific accent should map to the target. [I.e., take a bunch of known sample outputs, and create paired inputs by recording your own performances of them.] This wouldn't be tenable for a big low-cost operation, of course, as the recordings would come from temp workers all over the world with high churn.)


Can you identify any of these models?


I think it's unusual to assume they are based in the US and employ/underpay foreigners. A lot of people making the content are just from somewhere else.


Yes, it's analogous; using a CA is still a higher bar, but it would arguably be better to also use a CA to validate OpenSSH host certificates, for all the reasons he listed above.

So maybe we should ask ourselves why we can't just figure out a way to improve CA handling? Thanks to Let's Encrypt, HTTPS coverage dramatically improved; now is maybe the time for more people to switch to a self-hosted CA.
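
For the OpenSSH side, the building blocks already exist; a minimal sketch of a self-run host CA (hostnames, validity period and paths are made up for illustration):

    # create the CA key pair once, and keep the private key offline
    ssh-keygen -t ed25519 -f host_ca -C "host CA"

    # sign a server's host key (-h marks it as a host certificate)
    ssh-keygen -s host_ca -I server1 -h -n server1.example.com \
        -V +52w /etc/ssh/ssh_host_ed25519_key.pub

    # the server advertises the cert via HostCertificate in sshd_config;
    # clients then trust the CA with a single known_hosts line:
    # @cert-authority *.example.com ssh-ed25519 AAAA... host CA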

I agree, though, that promoting adoption through good tooling and pedagogy would be a nicer approach than a slap on the wrist from Apple.

