Hacker News | ear7h's comments

> At least it is a lot more realistic than silly 3D animation approach used in many previous movies (e.g. "hacking the Gibson" on Hackers, or the much worse portrayals on Swordfish)

One of the things I love about Hackers is that it portrays the feeling of hacking and programming to someone who might not have done it. Yeah, I think a lot of people have the green-text hackerman image when they think about hacking, but it hardly conveys what's happening inside the head of the hacker; it's just some cryptic magic that solves a problem and advances the plot. In Hackers, the Gibson is a space: some people live there and oversee it, others have to transport themselves (there's a montage with fast shots of a subway, then computer circuit boards, then the "buildings" of the Gibson that works really well imo). Not every film has to convey all of this, but I really appreciate that Hackers does.


> but it hardly conveys what's happening inside the head of the hacker

Mr Robot is another great one at that. It has layers of trippy stuff, but the hacking stuff is both real-ish and pretty well explained by the main character's monologues.


One of the odd things about Hackers is how it created a cultural feedback loop. When it came out, the style it showed was pretty weird and kind of campy, but I think it got integrated into actual hacker culture over time (e.g. visible in hacker spaces, conferences, online culture etc), and because of that today the movie as a whole seems less weird than originally.

I mean, it was a solid interpretation of cyberspace as envisioned by one W. Gibson (the name of the system not being a coincidence, obviously). As it was meant to be. You (hopefully) wouldn't see boring nmap terminals in a hypothetical Neuromancer film adaptation, either!

Would that be the same Cowboy Gibson who was mentioned in Hyperion?

Yes. A reference to the real author whose work inspired the cyberpunk chapters.

what u mean, the swordfish decrypting cubez is fake? :((

This idea of LLMs as a vehicle of midlife crisis is fascinating. I'm not sure if it's just about "throwing the fastball", though. Most of the usual midlife crisis things are a rejection of virtue. For example: buying a Porsche, picking up a frivolous hobby, or cheating on your wife; these are irresponsible uses of money, time, or attention that a smart, dedicated family man wouldn't partake in.

In relation to LLM usage I think there are two interpretations. 1) This midlife crisis is a rejection of empathy, understanding, and social obligation, however minute. Writing a one-sentence update on an issue, understanding the design decisions of another developer, reading documentation: all boilerplate holding them back from their full potential in a perfectly objective experience. Of course, their personal satisfaction still relies on adoption of their products by customers (though decades of viewing customers through advertising surveillance have stripped away the customers' humanity from their perspective). Or 2) economic/political factors such as inflation, rising unemployment, supply chain issues, starvation of public services, and general instability mean the usual midlife crisis activities are too expensive or risky, and LLMs present a local optimum allowing them to reject societal virtues (e.g. craftsmanship, collaboration, empathy) without endangering their financial position. Funny enough, I feel this latter point was also a factor in the NFT bubble (though there, the finances were more clearly dubious).


This identity politics/virtue signaling seems off topic.

> I havent used Loops

I think the worst repercussion of consuming short form content is that it gives the _consumer_ a false sense of engagement. That their passive consumption endows them with knowledge and credibility, leading to the deluded belief that a display of disinterest such as this one is 1) appropriate and 2) a profound condemnation rather than the petty, irrelevant whine that it is.


Nope, the term comes from BitKeeper, which does refer to master/slave.

See this email for some references:

https://mail.gnome.org/archives/desktop-devel-list/2019-May/...


I'm fully on-board with not using master-slave terminology. I work in the embedded space where those terms were and still are frequently used, and I support not using them any more. But I've been using git pretty much since it was released and I've never heard anyone refer to a "slave repo" or "slave branch". It's always been local repo, local branch, etc. I fully believe these sorts of digital hermeneutics (e.g. using a 26-year-old mailing list post to "prove" something, when actual usage is completely different) drive division and strife, all because some people want to use it to acquire status/prestige.


I would have thought someone of such extensive life experience would be more comfortable with the uncovering of an unknown than to characterize it as "driving division and strife". It is understandable to have a chip on your shoulder in the face of the ageism rooted in the tech industry, but my "digital hermeneutics" is simply a fact, not an attempt at toppling your "status/prestige" as a day-1 git user; there is no need to be defensive about it.


Does git use "slave"?

Then does simply performing a search of BitKeeper's documents for "slave" automatically imply that any particular terminology "came from bitkeeper"?

Did they take it from BitKeeper because they prefer antiquated chattel slavery terminology? Are there any actual documents that show this /intention/?

Or did they take it because "master" without "slave" is easily recognizable, as described above, and accurately describes how it's _actually_ implemented in git?

Further git is distributed. Bitkeeper was not.

This is just time wasting silliness.


Does asking rhetorical questions count as effective argumentation?

If I do enough sealioning, will my unsupported thesis be believed?

What about imposing my modern perspective onto a chain of historical events to prove my own perspective?

Further, I'm going to use technical jargon to get around Occam's razor.

You seem very serious about this, I think wasting time on something silly could be good for you.


I wonder what was happening 6 years ago that gave him a chance to develop and explore the hobby.


Wow you're so right, you did such a good job asking computer mommy to confirm your priors!

But actually, that's not the goal here. AI, at least the kind of product that needs dedicated datacenters (i.e. generative), isn't critical infrastructure. The focus is on documents, collaboration tools, file servers, single sign-on, databases, etc. that are seemingly monopolized by US providers.


> documents, collaboration tools, file servers, single-sign on, databases

All being (or soon to be) fed through LLM agents running on fibers and datacenters controlled by NOT-European entities. And if you build DCs, you'll be powering them with energy imports.

Software being built on library repositories also under foreign jurisdictions. Network infrastructure built on imported tech running whatever backdoors "partners" see fit.

It's like you didn't notice the Snowden revelations, the shift from dependence on Russian gas to US gas, the Nord Stream sabotage, Stuxnet, etc.


Also, to be honest, suppose the EU uses the Kimi model, which is open source. They can literally swap out one word in their config and move from, say, American datacenter companies to European ones.

Quite frankly, there is literally zero moat, and it's great to see the EU focus on the real moat/lock-in issues.
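To sketch the "swap out one word" point: open-weights models are typically served behind OpenAI-compatible endpoints, so moving between hosting providers is roughly a one-line config change. The provider names and URLs below are hypothetical stand-ins, not real endpoints.

```python
# Hypothetical OpenAI-compatible endpoints for the same open-weights model.
PROVIDERS = {
    "us_host": "https://api.us-provider.example/v1",
    "eu_host": "https://api.eu-provider.example/v1",
}

def make_client_config(provider: str, model: str = "kimi-k2") -> dict:
    # The request/response shape is identical across compatible hosts,
    # so only the base URL (and credentials) changes when migrating.
    return {"base_url": PROVIDERS[provider], "model": model}

print(make_client_config("us_host"))
print(make_client_config("eu_host"))
```

Switching from `"us_host"` to `"eu_host"` is the entire migration in this sketch; the model weights and API surface stay the same.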


"before it was cool" people have been talking about this since at least 1848. But for the average American the fear of the C word seems to outweigh any sense of self preservation.


> I own more books than I can read.

> I started asking for things I did not need.

For a community that prides itself on depth of conversation, ideas, etc., I'm surprised to see so much praise for a post like this. I'll be the skeptic. What does it bring you to vibe code your vibe shelf?

To me, this project perfectly encapsulates the uselessness of AI. Small projects like this are good learning or relearning experiences, and by outsourcing your thinking to AI you deprive yourself of any learning, ownership, or the self-fulfillment that comes with it. Unless, of course, you think engaging in "tedious" activities with things you enjoy has zero value, and that getting lost in the weeds isn't the whole point. Perhaps in one of those books you didn't read, you missed a lesson about the journey being more important than the destination, but idk, I'm more of a film person.

The only piece of wisdom here is the final sentence:

> Taste still does not [get cheaper].

Though, only in irony.


I don't think you fully understood the purpose of the project. He wanted an end product (the bookshelf app) that he had been putting off due to the time commitment. He did not say he wanted to learn about how to program in general, nor did he even say he liked programming. People care about results and the end product. If you like to program as a hobby, LLMs in no way stop you from doing this. At the end of the day, people with your viewpoint fall short of convincing people against using AI because you are being extremely condescending to the desires of regular people. Also, it is quite ironic that you attempted to make a point about him not reading all 500 books on his bookshelf, yet you don't seem to have read (or understood) the opening section of the post.


I'm not trying to _convince_ a stranger on the internet whether to use AI for their vibe shelf hobby project; I'm engaging with a project being presented by its creator. Interesting that you think continuing to use AI is some enormous own against my presumed attempt at persuasion. Sounds like maybe you're the one needing validation for your viewpoint. It's clearly easy to achieve such validation given the evidence in this comment section, so I'm not sure why you're seeking it from me.

As for the main concern in your comment: I did in fact read the blog post; see how I quoted multiple parts, verbatim ("word for word")? I now understand this audience may not be entirely familiar with literature or reading beyond basic instructions from their preferred datacenter or advertising company, but generally the beginning of a piece of writing (the "introduction") serves as the premise, while the end (the "conclusion") describes the abstract ideas a reader should take away from the entire piece. I'll even let you in on a little secret: the word "conclusion" is synonymous with "a judgement following logical steps". As I mentioned in my original comment, there is also a middle section, which can often be more important or meaningful (to both characters and readers) than the introduction or conclusion. However, in this piece of writing it amounted to "I didn't know how to do something, so I asked AI, and when it didn't do the right thing I asked it again", which isn't a very engaging story (there's a similar famous premise about an "oracle" that can respond to three "queries", but there the entertainment relies on this limitation). Anyways, the basic premise seems to be well received already, and lacking any interesting description of the process, I chose to engage with the conclusion: the question of taste.

The author believes, or rather instructed an LLM to generate an article from the perspective of someone who believes, that generative AI can enable the good taste of someone in prototype hell to come to fruition. But in my original comment I'm making the point that creating something of good taste is inextricably linked to engagement with the medium, and the author shows a willful lack of engagement with their medium, whether that be software or a bookshelf.

If you'd like to engage with my original comment in good faith, here are some questions:

* Do you really think this project constitutes good taste? For software? For bookshelves?

* Can someone with an apathy for a craft as extreme as the author's have good taste?

* Might this even be considered bad taste, given the technological sensibilities of this forum (disdain for JS bloat, FOSS, "elegant solutions")?


Since you said you read the post, explain this to me:

>To me, this project perfectly encapsulates the uselessness of AI. Small projects like this are good learning or relearning experiences, and by outsourcing your thinking to AI you deprive yourself of any learning, ownership, or the self-fulfillment that comes with it. Unless, of course, you think engaging in "tedious" activities with things you enjoy has zero value, and that getting lost in the weeds isn't the whole point.

In the context of his first paragraph

>I own more books than I can read. Not in a charming, aspirational way, but in the practical sense that at some point I stopped knowing what I owned. Somewhere around 500 books, memory stopped being a reliable catalog.[...] For years, I told myself I would fix this. Nothing elaborate, nothing worthy of a startup idea. A spreadsheet would have been enough. I never did it, not because it was hard, but because it was tedious.

Wouldn't your statement be completely moot because he plainly said the purpose of the project was to create a system to handle his books, and the only reason he hadn't done it yet was because it was tedious? (Hint: if you need the paragraph or the rest of his article broken down for you further, I suggest asking ChatGPT to give you a summary.)


N=32 and

> We want to start creating a developmental story and start understanding whether the things that we’re seeing are the root of autism or a neurological consequence of having had autism your whole life


Yeah, how many studies are done a year? Random chance is the #1 explanation with that small a sample size. It doesn't take a degree in stats to say that the next thing that needs to be done is to replicate the study a few times before making any claims or seeking any publicity. This subject is so emotional for the families involved that publicizing without more confirmation is a bit irresponsible, especially if follow-up studies are easy to do.
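The "random chance across many studies" point can be sketched with a quick simulation: run many small null studies (no real effect, n=32 per group) and count how many clear a nominal p < 0.05 threshold by luck alone. The study count and test are illustrative, not taken from the article.

```python
import random

random.seed(0)

def null_study(n=32):
    # Two groups drawn from the SAME distribution: any "effect" is pure noise.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = abs(sum(a) / n - sum(b) / n)
    se = (2 / n) ** 0.5          # SE of the difference, known unit variance
    return mean_diff / se > 1.96  # ~ two-sided p < 0.05

studies = 1000  # a made-up number of small studies run in a year
hits = sum(null_study() for _ in range(studies))
print(hits)  # roughly 5% come out "significant" despite no effect existing
```

With enough small studies floating around, dozens of "significant" findings are expected even if nothing real is there, which is why replication before publicity matters.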


Follow-up studies cost money, and you don't get any of that if you don't publish.


Agreed. Publish, but don't publicize. My remarks were aimed at the article, not the paper. This sounds like a promising, very initial study that needs a lot more data before any claims of having found anything. Qualified headlines like "Early study hints at..." or "Initial research potentially shows a promising..." would be better, but even then a study with this little data should be approached very cautiously by any type of science reporting. More than mentioning it in passing as promising is probably not warranted until the n value is a lot higher and other teams and other methods are involved.


It's a university press release. Hyperbole in practice.

Wish I could read the paper.


The reduction of mGluR5 was reported 10 years ago in postmortem tissue.

doi: 10.1016/j.bbi.2015.05.009


Can't you just work around all of this by proxying to the third party site(s) with a subdomain?


I think you're right. I imagine if third party cookies were ever banned, we'd quickly see googleads.whatever.com become a common sight.
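A minimal sketch of the subdomain-proxy idea: the first party serves a reverse proxy on its own subdomain, so cookies the third party sets arrive as first-party cookies. This is a stdlib-only illustration with a local stand-in for the third-party origin; the function names and ports are hypothetical.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM_PORT = 8901  # stand-in for the third-party origin (e.g. an ad server)
FRONT_PORT = 8902     # would really be e.g. googleads.whatever.com

class Upstream(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Any cookie set here rides back through the first-party subdomain.
        self.send_header("Set-Cookie", "track=1")
        self.end_headers()
        self.wfile.write(b"hello from upstream")
    def log_message(self, *args):  # keep test output quiet
        pass

class FirstPartyProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request to the third party and relay its response.
        with urllib.request.urlopen(f"http://127.0.0.1:{UPSTREAM_PORT}{self.path}") as r:
            body = r.read()
            self.send_response(r.status)
            cookie = r.headers.get("Set-Cookie")
            if cookie:
                self.send_header("Set-Cookie", cookie)
            self.end_headers()
            self.wfile.write(body)
    def log_message(self, *args):
        pass

def serve(handler, port):
    srv = HTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

up, front = serve(Upstream, UPSTREAM_PORT), serve(FirstPartyProxy, FRONT_PORT)
resp = urllib.request.urlopen(f"http://127.0.0.1:{FRONT_PORT}/")
body, cookie = resp.read(), resp.headers.get("Set-Cookie")
print(body, cookie)
up.shutdown(); front.shutdown()
```

From the browser's point of view, everything (including the `Set-Cookie`) comes from the first-party domain, which is why a third-party-cookie ban alone wouldn't stop this pattern.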

