It's too much energy to keep up with things that become obsolete and get replaced in a matter of weeks or months. My current plan is to ignore all of this new information for a while; whenever the race ends and some winning workflow/technology actually becomes the norm, I'll spend the time needed to learn it.
Are we moving to some new paradigm the same way we did when we invented compilers? Amazing, let me know when we're there and I'll adapt to it.
I had a similar rule about programming languages. I would not adopt a new one until it had been in use for at least a few years and had grown in popularity.
I haven't even gotten around to learning Golang or Rust yet (mostly because they passed the threshold of popularity after I had kids).
It's a collectors' market: the value is in the demand and scarcity, same as with all other collectibles like baseball cards and such. Or even wines: some are so old they've become undrinkable but cost as much as a car. In a collectors' market the price is detached from any practical purpose of the item.
Also consider that most Magic cards are valuable only because of their collector status. The valuable ones are mint first editions, and nobody is buying them to play them.
So who fuels this collectors' market? Nostalgic 30-somethings who now have disposable income and want to buy the things they wanted as children. Same as with video game collectors and such. You don't need an original copy of Super Mario to play it, but people still spend thousands to buy one.
> I think what we should really ask ourselves is: “Why do LLM experiences vary so much among developers?”
My hypothesis is that developers work on different things, and while these models might work very well in some domains (React components?) they fail quickly in others (embedded?). So on one side we have developers working on X (LLM good at it) claiming it will revolutionize development forever, and on the other side we have developers working on Y (LLM bad at it) claiming it's just a fad.
I think this is right on, and the things that LLMs excel at (React components was your example) are really the things there's just a ridiculous amount of training data for. This is why LLMs are not likely to get much better at code. They're still useful, don't get me wrong, but the 5x expectations need to get reined in.
Breadth and depth of training data are important, but modern models are excellent at in-context learning. Throw them documentation, outline the context for what they're supposed to do, and they will handle some out-of-distribution things just fine.
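As a rough sketch of what that looks like in practice (the docs path, prompts, and model name are all made up, and the OpenAI Python client is just one example provider):

    # Sketch: paste the relevant documentation into the prompt so the model
    # can work with an API it never saw during training.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical internal docs the model has never seen before.
    internal_docs = open("docs/billing_api.md").read()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are helping with our internal billing API. "
                    "Use only the documentation below.\n\n" + internal_docs
                ),
            },
            {
                "role": "user",
                "content": "Write a function that retries failed invoice syncs with backoff.",
            },
        ],
    )
    print(response.choices[0].message.content)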
I would love to see some detailed failure cases from people who used agentic LLMs and couldn't make them work. Everyone is asking for positive examples, but I want to see the other side.
- on some topics, I get the 100x productivity that some devs push; for instance, this Saturday I was able to build two features I had been rescheduling for years because, for lack of knowledge, they would have taken me many days, but a few back-and-forths with an LLM and everything was working as expected; amazing!
- on other topics, no matter how I explain the issue to an LLM, at best it tells me it's not solvable, at worst it tries to push an answer that doesn't make any sense and pushes an even worse one when I point that out...
And when people ask me what I think about LLMs, I say: "it's nice and quite impressive, but it still can't be blindly trusted and needs a lot of overhead, so I suggest caution".
I guess it's the classic half empty or half full glass.
I believe what Wikipedia tries to do (simplifying here) is report the "opinion" of reputable sources that should have an informed view on the matter. If reputable sources call it a genocide, then Wikipedia will report that; if not, it won't.
Calling these sources biased because they do not corroborate your view of the situation is your subjective opinion and doesn't mean they actually are biased. The whole point of considering them reputable sources is that they should be as unbiased as possible (even though 100% neutrality is impossible); if they had "significant bias" as you claim, they would not be considered reliable sources to begin with.
Actually there's a Wikipedia guideline (WP:BIASED) along the lines of "bias doesn't necessarily make a source unreliable", which in practice is taken to mean that bias doesn't matter.
Of course in practice, editors have their own biases and decisions come down to popularity contests. Wikipedia's own biases seem to get worse over time, as more neutral editors give up, so we end up with some weird things like:
- Almost all conservative news sources having low reliability ratings.
- The Daily Mail, for example, is deprecated, the lowest possible rating outside of literal spam.
- Al Jazeera, which seems largely controlled by the Qatari monarchy, has the highest reliability rating and is the most-used source in Israel-Palestine. Even their blog is the top source on many articles, despite news blogs being against policy.
- Al-Manar, the Hezbollah mouthpiece, which is very unashamedly biased (e.g. referring to their terrorists as "men of god"), has a somewhat low reliability rating, but still higher than several conservative sources like the Daily Mail.
There's also a tricky situation where some political factions consistently report closer to reality than others. This makes it hard to be both reality-focused* and politically neutral at the same time.
* It's not this page, but there's a separate Wikipedia policy which says that editors should only insert content which is true.
Circular reasoning that is completely ignorant of the last 2 years of analysis of media reporting on Gaza.
The evidence of media bias is extensive and extremely blatant: it spans framing ("[horrible event, war crimes, etc.] happened, according to Hamas" vs no such qualification for Israeli claims, "20 people killed in Gaza" without mentioning who or what killed them), dehumanisation ("2 people killed" when reporting on the deaths of children in Gaza vs "2 teenagers in hospital" when talking about IDF soldiers), selective reporting (remember the pogroms in Amsterdam that got debunked on social media while every head of state was sending their condolences?), the constant repetition of Israel's "right to self-defence" while Palestinian context goes unmentioned, etc., etc., etc.
If you need something more visual/real-time, Newscord has been reporting on this consistently: https://newscord.org/editorials
The media might largely be a reputable source when it doesn't contradict the preferred narrative, and the Gaza genocide was probably the strongest example we could have had of this.
I'm not sure why I even wrote this out, because 2 years in, calling it "subjective opinion" is obviously not a position based on facts or reason.
The only reason I can find for anyone to be bored by the inside is if they visited on a cloudy day. The way the light enters through the stained glass and colors the environment (and how the light changes during the day) is astonishing; I've never experienced anything similar, tbh.
I don't want/need the whole thing to be flat, but I do prefer it to be stable. For instance, if the plateau were a bit thicker so that the camera lens sat flush with the surface (even just an extra bar sort of inside the plateau), then when I put the phone down it would never rock back and forth while I'm tapping at it on a table.
The problem with the one-sided camera bump is that the phone is unstable. It wobbles when you touch it, making using it while lying “flat” on the table incredibly annoying.
What prevents anyone from taking a signed picture by photographing a generated/altered picture? You just need to frame it perfectly and make sure there are no reflections that could reveal it's a picture of a picture rather than a picture of the real world, which is very doable with a professional camera. Any details that could give it away would disappear just by lowering the resolution, which can be done in any camera.
With a bit (OK quite a lot) of fiddling, you could probably remove the CCD and feed the analog data into the controller, unless that's also got a crypto system in it.
Presumably if you were discovered, you would then "burn" the device, as its local key would then be known to be used by bad actors; but now you need to check all photos against a blacklist. Which also means that if you buy a second-hand device, you might be buying a device with "untrusted" output.
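A sketch of what that checking might look like (all names here are hypothetical; it assumes the vendor publishes a revocation list of burned device-key fingerprints, and uses Ed25519 just as an example signature scheme):

    # Hypothetical sketch of the flow described above: a photo is trusted only
    # if the signature verifies AND the signing device key has not been
    # "burned" (i.e. added to a published revocation list after a compromise).
    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Revocation list of key fingerprints, assumed published by the vendor.
    REVOKED_FINGERPRINTS = {
        "deadbeef...",  # placeholder fingerprint of a leaked device key
    }

    def is_photo_trusted(image_bytes: bytes, signature: bytes,
                         device_key: ed25519.Ed25519PublicKey) -> bool:
        raw = device_key.public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw
        )
        fingerprint = hashlib.sha256(raw).hexdigest()
        # A "burned" device still produces cryptographically valid output,
        # so the blacklist check has to come on top of signature verification.
        if fingerprint in REVOKED_FINGERPRINTS:
            return False
        try:
            device_key.verify(signature, image_bytes)
        except InvalidSignature:
            return False
        return True

Which is exactly why the second-hand-device case is awkward: a perfectly valid signature from a revoked key looks the same as a bad actor's output.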
Any solution that requires cryptographic attestation or technical control of all endpoints is not one we should be pursuing. Think of it as a tainted primitive. Not to be implemented.
The problem of Trust is a human problem, and throwing technology at it just makes it worse.
I'm absolutely in agreement with that. The appetite for technical solutions to social problems seems utterly endless.
This particular idea has so many glaring problems that one might almost wonder if the motivation is less about "preventing misinformation" or "protecting democracy" or "thinking of the children" or whatever, and more about making it easier to prove you took the photo as you sue someone for using it without permission. But any technology promoted by Adobe couldn't be about DRM, so that's just crazy talk!
So all I would have to do to make a "legitimate" fake picture is generate it, print it, take a signed picture of the print with a camera, and then upload it to the web?
With the right setup I could probably just take a picture of the screen directly, making it even easier (and enabling it for videos too).