I think you're spot on. It feels like parts were edited with AI and parts were left alone.
> This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.
The statement this is making is presumably the crux of the problem (Digg cannot survive without trust!) but it's worded so poorly that it's hard to imagine someone sat down and figured these three sentences were the best way to make the point.
I was thinking about that recently. Maybe decades from now people will look at things like the Linux kernel or Doom and be shocked that mere humans were able to program large codebases by hand.
I was being a little facetious, but there are things that most people would find tedious today that we would put up with in the past. Writing anything long by hand (letters, essays), doing accounting without a spreadsheet, writing a game in only assembly language, using punch cards, typesetting newspapers and books manually...
I've noticed that too and it's not too different from political discussions. At the end of the day, I think the split is really about different values people have, their identity, and justice.
A lot of developers' identities are tied to their ability to create quality solutions as well as having control over the means of production (for lack of a better term). An employer mandating that they start using AI more and change their quality standards is naturally going to lead to a sense of injustice about it all.
> I think the real divide is over quality and standards.
I think there are multiple dimensions that people fall on regarding the issue and it's leading to a divide based on where everyone falls on those dimensions.
Quality and standards are probably in there, but I think risk tolerance/aversion could be behind some of how you look at quality and standards. If you're high on risk-taking, you might be more likely to forgo verifying all LLM-generated code, whereas if you're very risk-averse, you're going to want to go over every line of code to make sure it works just right for fear of anything blowing up.
Desire for control is probably related, too. If you desire more control in how something is achieved, you probably aren't going to like a machine doing a lot of the thinking for you.
This. My aversion to LLMs is much more that I have low risk tolerance and the tails of the distribution are not well-known at this point. I'm more than happy to let others step on the land mines for me and see if there's better understanding in a year or two.
I am a high quality/craftsmanship person. I like coding and puzzling. I am highly skilled in functional-leaning object-oriented decomposition and systems design. I'm also pretty risk averse.
I also have always believed that you should always be "sharpening your axe". For things like Java development, or anywhere I couldn't use a concise syntax, I would make extensive use of dynamic templating in my IDE. Want a builder pattern? Bam, auto-generated.
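To make that concrete: this is the kind of boilerplate an IDE live template can stamp out in one keystroke. A minimal sketch, assuming a made-up `Order` class (the class and its fields are illustrative, not from any real project):

```java
// A hypothetical Order class with the builder shape an IDE template
// typically generates: private constructor, fluent setters, build().
public class Order {
    private final String item;
    private final int quantity;

    private Order(Builder b) {
        this.item = b.item;
        this.quantity = b.quantity;
    }

    public String item() { return item; }
    public int quantity() { return quantity; }

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private String item;
        private int quantity = 1; // default filled in by the template

        public Builder item(String item) { this.item = item; return this; }
        public Builder quantity(int quantity) { this.quantity = quantity; return this; }
        public Order build() { return new Order(this); }
    }

    public static void main(String[] args) {
        Order o = Order.builder().item("widget").quantity(3).build();
        System.out.println(o.item() + " x" + o.quantity());
    }
}
```

The point isn't the pattern itself, it's that none of those ~25 lines were typed by hand.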
Now when LLMs came out they really took this to another level. I'm still working on the problems.. even when I'm not writing the lines of code. I'm decomposing the problems.. I'm looking at (or now debating with the AI) what is the best algorithm for something.
It is incredibly powerful.. and I still care about the structure.. I still care about the "flow" of the code.. how the seams line up. I still care about how extensible and flexible it is for extension (based on where I think the business or problem is going).
At the same time.. I definitely can tell you, I don't like migrating projects from Tensorflow v.X to Tensorflow v.Y.
> I'm looking at (or now debating with the AI) what is the best algorithm for something.
That line always makes me laugh. There are only two points to an algorithm: domain correctness and technical performance. For the first, you need to step out of the code. And for the second you need proofs. Not sure what there is to debate about.
Not true. There is also cost, in money or opportunity. Correctness and performance aren't binary, either -- 4 or 5 nines, 6 or 7 decimal places of precision, just to name a few. That drives a lot of discussion.
There may be other considerations as well -- licensing terms, resources, etc.
I was using a M1 Mac Mini and only 8GB of RAM on it to build iOS apps for maybe a year. It's absolutely doable, though it very noticeably gets a little less snappy when building projects. When building in Xcode and then switching to Firefox to browse for instance, I could tell it took slightly longer to switch tabs and YouTube playback would occasionally stutter if too much was happening.
I also was using an Intel MacBook Pro with 16GB at the time. Doing the same thing there was much smoother and snappier. On the whole, it actually made me want to just use the laptop instead since it "felt" nicer. (This isn't measuring build times or anything like that, just snappiness of the OS.)
I definitely encountered this second-system effect recently. I have an app that works well because it was written to target a specific use case. User (and I) wanted some additional features, but the original architecture just couldn't handle these new features, so I had to do a rewrite from the ground up.
As I rewrote it, I started pulling in more "nice to haves" or else opening up the design for the potential to support more and more future features. I eventually got to a point where it became unwieldy as it had too many open-ended architectural decisions and a lot of bloat.
I ended up scrapping this v2 before releasing it and worked on a v3 but with a more focused architecture, having some things open-ended but choosing not to pursue them yet as I knew that would just introduce unneeded bloat.
I was quite aware of the second-system effect when doing all this, but I still succumbed to it. Thankfully, the v3 rewrite didn't take as long since I was able to incorporate a lot of the v2 design decisions but scaled some of them back.
I wrote some BBS door games back in the day and was thinking of making a new one today, although not multi-player. It would be in the style of the old games (ANSI-style art and text) but for a single player, with a daily play limitation as well. You'd only play a few minutes each day, and if you died, you'd have to come back the next day. Nothing concrete yet, but I definitely would like to make one just for old time's sake.
You're probably right about the terminology being around for a while, but I think most people just called them smileys (i.e. ;) would be called a "winking smiley"). I remember seeing the term used maybe in the early- or mid-90s either on a BBS or Usenet and thinking "Ah, that's what they're called" and as a nerd being annoyed that nobody used that term colloquially.
Yup. AI can't automate long-term responsibility for and ownership of a product. It can produce output quicker, but somebody still has to be accountable to the customer using said product. The hard limit is still the willingness of the human producing the code to stand behind what's been output.