Scientific knowledge evolves as new discoveries are made; immutable and unequivocal “truth” is the realm of religion, not science (which makes the former much more appealing to many than the latter).
Trust really should not erode if X acted in good faith based on the consensus knowledge at the time.
If the consensus is evolving, sure. If the consensus is going back and forth for decades, and each time it is presented with the authority of medical or physical science, then it is normal and correct to stop listening at some point.
When scientists have weak theories that they're not sure of, they're not supposed to share them breathlessly with the public, and certainly not to try to shape public laws around theories they know are weak.
And nutrition science has been guilty of this for over a century. You can find people in the field making confident recommendations and setting dietary standards from a time when they didn't even know vitamins were a thing. If you had followed the science on nutrition and adjusted your diet accordingly 100-150 years ago, you could literally have ended up with scurvy or other vitamin deficiencies. The field has evolved a little since then, but it's still extremely weak as scientific fields go.
I’m not sure how this decade is any different than the one that preceded it?
The current console generation is 4 years old and it’s at mid-cycle at best.
Games running on modern consoles are only marginally better visually than those of the previous generation, and AAA titles are so expensive to develop that consoles will remain the target hardware.
I really could not be bothered to upgrade my 3080…
>I’m not sure how this decade is any different than the one that preceded it?
2010: your game has to run on a PS3/Xbox 360. That didn't matter much for PC games because all three platforms had different architectures, so development was more or less done in parallel.
2015: PlayStation and Xbox have both converged on x86. Porting between platforms is much easier and more unified in many ways. But the big "mistake" (or benefit, for your case) is that the PS4/XBO didn't really try to "future proof" the way consoles usually did. A 2013 $400-500 PC build could run games about as well as a console, and from there PCs would only pull ahead.
2020: the PS5/XBX come out at the very end of the year, so games are still more or less stuck with the PS4/XBO as the "minimum spec", but PCs have advanced a lot. SSDs became standard, tech like DLSS and hardware ray tracing emerged, 60fps is becoming more normalized, and RAM standards are starting to shift from 8GB to 16GB. But your minimum spec can't use any of this, so we still need to target 2013 tech. Despite the "pro versions" releasing, most games still ran adequately on the base models, just not at 60fps or above 720p internal rendering.
Now comes 2025. PlayStation has barely tapped into the base model's power and is instead already releasing a Pro model. Instead of optimization, Sony wants to throw more hardware at the problem. The Xbox Series S should in theory have kept the minimum spec in check, but we have several high-profile titles working around that requirement.
The difference is happening in real time. There's more and more of a trend not to optimize (or at least to push the minimum spec to a point where the base models are only lightly considered, à la launch Cyberpunk), and all of this will push up specs quite a bit in the PC market as a result. The console market always influences how PCs are targeted, and the console market in Gen 9 seems to be taking a lot less care with the low spec than Gen 8 did. That worries me from a "they'll support 10 year old hardware" POV.
>Have I missed a new “Crysis”?
If anything, Cyberpunk was the anti-Crysis in many ways: it kind of showed how we were already pushing past the "current gen" back then, but also how haphazardly older platforms were disregarded for lack of proper development time/care. Not because the game was "ahead of its time". It's not like the PS5 performance was amazing to begin with. Just passable.
Specs are going up, but not for the right reasons IMO. I blame 4K marketing for a good part of this, rather than a focus on using the huge jump in hardware for more game features, but that's a rant for another day.
Yes. For example, I fed it a public tender and the associated regulations in Norwegian, and it was able to answer questions about the parts I was interested in correctly and succinctly. I have also fed it research papers that I would normally not have the patience or knowledge to go through on my own.
In terms of actual usefulness, it’s one of the AI tools that most impressed me.
The main issue is, of course, privacy.
I have tried to reproduce something similar using AnythingLLM and the low-tier Llama models, but of course the experience is much worse in terms of results, response times, and UI. If someone knows of a better local setup, I’m all ears.
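For what it's worth, the core of what I do locally is just stuffing the document into the prompt and asking questions against it. A minimal sketch of that idea, assuming the ollama Python client and a locally pulled llama3 model (the model name, file path, and question are placeholders, not a recommendation):

    # Minimal local document Q&A sketch.
    # Assumes the `ollama` Python client is installed and a model such as
    # `llama3` has already been pulled; file path and question are placeholders.
    import ollama

    def ask_document(path, question, model="llama3"):
        # Read the whole document and put it in the prompt.
        # Fine for short files; long documents need chunking/retrieval.
        with open(path, encoding="utf-8") as f:
            document = f.read()
        response = ollama.chat(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Answer questions using only the provided document."},
                {"role": "user",
                 "content": f"Document:\n{document}\n\nQuestion: {question}"},
            ],
        )
        return response["message"]["content"]

    print(ask_document("tender.txt", "What are the submission deadlines?"))

Obviously nowhere near NotebookLM's experience, but it keeps everything on your own machine, which addresses the privacy issue above.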
I would consider a Workspace subscription if I could actually trust Google to make good on the commitment of not reading your stuff, which I’m finding hard to do…
I fed it an Italian Wikipedia entry and it generated a (relevant) podcast and summaries in English, so at least the input seems to support multiple languages even if the output is in English.
If I had come across something like this 2-3 years ago it would have looked like magic, but even now it’s quite impressive.
Edit: You can change the language following the indications in the FAQ; after that, text generation seems to be in the configured language, but the podcast is still in English.
Even though I was unable to see the eclipse live, I felt compelled to read Nightfall again yesterday.
One of the things I appreciate about Asimov as a sci-fi author is that he never goes too deeply into explaining the science or physics of his novels (no technobabble or midichlorians); he is more interested in the Big Ideas and characters.
I'm not sure that a world like Kalgash can exist, and many details of the story could be nitpicked to death, but that doesn't make it any less fascinating and compelling.
They didn't hire people to "train AI"; they hired people to do a task that today can be done successfully by an LLM, to check how many of them would actually use one.
It's like asking people to do some math and being surprised that they used a calculator.
In Midjourney you get fantastic results just by using their Discord and a text prompt.
To get similar results in Stable Diffusion you need to set it up, download the models, understand how the various moving parts work together, fiddle with the parameters, download specific models out of the hundreds (thousands?) available, iterate, iterate, iterate...
Setting up the environment and tooling around in the code is not a burden, it's a nice change of pace from the boring code I have to deal with normally. Likewise, playing around to build intuition about how prompts and parameters correspond to neighborhoods in latent space is quite fun.
Beyond that, being able to go to sleep with my computer doing a massive batch job state space exploration and wake up with a bunch of cool stuff to look at gives me Christmas vibes daily.
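For anyone curious what such an overnight batch run can look like, here is a minimal sketch using the Hugging Face diffusers library (the model ID, prompt, and parameter grid are placeholder assumptions, not my actual workflow):

    # Overnight parameter-sweep sketch with Hugging Face diffusers.
    # Model ID, prompt, and parameter grid are placeholders.
    import itertools
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompts = ["a foggy harbor at dawn, oil painting"]
    guidance_scales = [5.0, 7.5, 10.0]
    steps_options = [25, 50]
    seeds = range(4)

    # Sweep the grid and save every image with its parameters in the filename,
    # so the morning review is just a matter of browsing the output folder.
    for prompt, scale, steps, seed in itertools.product(
        prompts, guidance_scales, steps_options, seeds
    ):
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(
            prompt,
            guidance_scale=scale,
            num_inference_steps=steps,
            generator=generator,
        ).images[0]
        image.save(f"out_g{scale}_s{steps}_seed{seed}.png")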
Sure, but if Midjourney outputs a low-quality result for your prompt, it is going to be much more difficult to improve. It's a black box at that point.
With SD, on the other hand, there can be multiple solutions for a single problem, but yeah, you have to develop your own workflow (which will inevitably break with new updates).