
https://github.com/mcabbott/Tullio.jl is my favorite idea for extending einsum notation to more stuff. Really hope numpy/torch get something comparable!


Yeah, Tullio.jl is a great package. Basically, it is just a macro for generating efficient for loops.

I guess it might be hard to achieve a similar feature in Python without metaprogramming.
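
For anyone who hasn't used it, here's a minimal sketch of what einsum already covers in numpy, next to the explicit loops a Tullio-style macro would generate (array names and shapes are just for illustration):

    import numpy as np

    A = np.random.rand(3, 4)
    B = np.random.rand(4, 5)

    # Matrix multiplication via einsum: C[i,k] = sum over j of A[i,j] * B[j,k]
    C = np.einsum('ij,jk->ik', A, B)

    # The equivalent explicit loops, which a Tullio-style macro would
    # generate automatically (and then fuse, tile, and parallelize)
    C2 = np.zeros((3, 5))
    for i in range(3):
        for k in range(5):
            for j in range(4):
                C2[i, k] += A[i, j] * B[j, k]

    assert np.allclose(C, C2)

Part of Tullio's appeal is that it accepts index arithmetic (shifts like A[i+j], useful for convolutions) that einsum's string mini-language can't express.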


My favorite has always been "I can't log in when I stand up" https://www.reddit.com/r/talesfromtechsupport/comments/3v52p...


Well, that is the whole trick. ML models ideally generalize from the training inputs to whatever new inputs show up during inference. For example, a vision model should recognize an image of a dog as a dog even if that exact image was never in the training set. But that generalization always has limits. Usually scores decrease substantially the further "out-of-domain" the inputs are. So this model works fine when running a randomly generated dungeon it has never seen, but not when running a set of game rules it has never seen.


They do participate pretty heavily in ML research from what I've seen. To continue your metaphor, they try to invent as many gold digging techniques as possible which exclusively work with their own shovels and buckets.


Yep, see for example ‘Earth 2’.


As a person midway through a PhD, I personally like these sorts of competitions. My dissertation is so all-consuming that I have to be able to make fun of it a little bit! Life would just be too depressing otherwise.


> Intel and AMD don't use x86 cores but [...] instead translate the x86 code to RISC code on the fly

I used to say this all the time and I've been informed that it's something of a misunderstanding. For example, most RISC-V processors also decompose instructions into multiple μops: https://docs.boom-core.org/en/latest/sections/execution-stag...

So it isn't like there is a literal RISC processor inside the x86 processor with a tiny little compiler sitting in the middle. It's just that the out-of-order execution model requires instructions to be broken up into subtasks which can separately queue at the core's various execution units. Even pipelining instructions still wastes a lot of silicon (while you're doing an integer add, the floating-point ALU is just sitting there, bored), so breaking things up this way greatly improves parallelism. As I understand it, modern μop-based processor cores can actually have dozens of ALUs, multiple load/store units, virtual->physical address translation units, etc., all working together asynchronously to chug through the incoming instructions.
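
As a toy illustration (purely made up, not any real core's decoder), a read-modify-write instruction like add [rax], rbx logically splits into a load μop, an ALU μop, and a store μop, each queuing at a different execution unit:

    # Toy sketch of μop decomposition; illustrative only, not a
    # description of any real microarchitecture.
    uops = [
        ("load",  "tmp = mem[rax]",  []),        # load/store unit
        ("alu",   "tmp = tmp + rbx", ["load"]),  # integer ALU
        ("store", "mem[rax] = tmp",  ["alu"]),   # load/store unit
    ]

    # The out-of-order scheduler issues each μop once its dependencies
    # finish; unrelated μops from neighboring instructions fill the
    # otherwise-idle units (FP ALU, other integer ALUs) in the meantime.
    for unit, op, deps in uops:
        print(f"{unit:>5}: {op}  (waits on: {deps or 'nothing'})")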


I think the framing around these sorts of discussions always annoys me, because few sci-fi books (and certainly no Neal Stephenson books) are primarily written as predictions. They are written as stories, with technology serving narrative above all else.

Snow Crash in particular uses the metaverse mostly as an excuse to include sword fights and motorcycle/monorail chase scenes. Fantastically fun in my opinion! But that motivates all kinds of choices (making the Internet a literal place with a street grid and real estate, where people can get chopped up by swords) that real XR tech has no particular technical use for. And I would implore any engineers using Snow Crash as an inspiration to consider how absurd it would be to take all the world's most sophisticated technology and dedicate it to gratifying personal power fantasies. It starts with the almost pornographic display of advanced weaponry and logistics deployed to deliver a pizza, and just gets more gloriously ridiculous from there. The main character is named "Hero Protagonist". The main antagonist has a nuke strapped to his motorcycle. Take a hint!

Anyway, I am happy Diamond Age gets a call-out, because it is by far my favorite of Stephenson's novels. And I think the Young Ladies Illustrated Primer is one of the all-time most interesting technological plot contrivances (the Imago machine, the game/civilization of Azad, and the Shrike all providing strong competition). But the technical constraints/capabilities of the Primer have almost nothing to do with realistic limitations/advantages of AI technology, and everything to do with getting the right characters into the right places at the right times. We need a Miranda to provide the Primer's voice so that Nell can have some kind of human connection in the end, and we need Miranda to be paid anonymously so that Nell won't get that human connection too soon. The Primer is a language model, not a robot, so that Nell will have to solve problems on her own. Yet she can learn kung fu from a language model because we need a few action scenes. I think the really interesting question posed is "Can a person grow up to be influential given no resources except a perfect education?", not so much "Can a language model provide a perfect education?". Many characters in Diamond Age seem to agree with the former notion, but in the end it (SPOILER) gets shot down when Nell needs control of a literal army to come out on top.


> It starts with the almost pornographic display of advanced weaponry and logistics deployed to deliver a pizza

The opening chapter takes on a whole different meaning when you consider it as a heavily embellished self-narrative by a guy who lives in a storage unit, doesn't really have a career or any prospects, delivers pizzas for a massive chain through difficult-to-navigate suburban sprawl, and fucks up his job while sticking his neck out for his employer. He's driving a noisy, used-and-abused, yet tricked-out vehicle in risky situations, and they give him the cheapest, lamest weapon they can to defend himself, seemingly some kind of taser that charges in the cigarette lighter. So he feels more comfortable and empowered carrying swords. He calls himself The Deliverator, a member of an elite order; this is him telling himself "I am somebody".

The opening chapter isn't necessarily about all the high-tech things of this dystopian future. It's the grandiose story a normal, struggling guy tells himself to feel more important and cooler than he actually is. Everyone considers themselves to be the hero protagonist of their own story, and Hiro (assuming that's his real name and not the name he's given the persona he created to deal with his station in life) is no exception.

There's definitely some near-future sci-fi technology/situations in the story, such as Reason, the Rat Things, Ng's deal, most of Y.T.'s accoutrements, and the nuclear warhead sidecar. But even today, things like the metaverse, the Librarian, and Earth are increasingly becoming more mundane than futuristic, yet they get described with more flourish and flowery language than is actually warranted.


I haven't thought too much about Hiro potentially being an unreliable narrator, and it definitely puts a different spin on things! In that case, I certainly think the specific design of the technologies in Snow Crash is motivated by the narrative need to make Hiro's mundane life into something exciting, rather than by the literally mundane motivations behind technology in the real world. Assuming, anyway, that we aren't careening into a future where all technology is driven primarily by hype... that's a scary thought.


> Snow Crash in particular uses the metaverse mostly as an excuse to include sword fights and motorcycle/monorail chase scenes.

I don't think that's "most" of the usage of the metaverse in Snow Crash. The entire point of the metaverse in Snow Crash is to deal with the hubris of thinking that one protocol standard to rule them all could last, and to provide the opportunity for a Tower of Babel-like fall of civilization as protocol wars break down an overly centralized metaverse. All of the cool stuff is just there to show how high the Tower got built before it crumbled and fell.

I think any engineers using Snow Crash as an inspiration are probably missing the "Don't Build the Torment Nexus" message of the direct ending of the book.


> Snow Crash in particular uses the metaverse mostly as an excuse to include sword fights and motorcycle/monorail chase scenes.

That's an interesting take. Personally, I thought those parts were the least interesting, and the larger social commentary was the most. As always, everyone takes something different from compelling fiction.

I've never viewed Snow Crash (or The Diamond Age) as predictions of the future though. They both seem to be commentary squarely about the present to me.


Yah "mostly" might have been a bit strong on my part. I think Neil Stevenson is capable of both expressing interesting ideas and shoehorning in great excuses for action set pieces at the same time. Certainly any good science fiction can inspire different interpretations by different people!


The ideas in both Snow Crash and Diamond Age are wonderful, worthy of considerable investment, but Stephenson's writing is harmed by pandering to a lowest common denominator of reader: already mentioned is the main character's name being "Hero Protagonist"; in addition, he goes into a bit too much lurid detail during a main character's rape, complete with a pretty demeaning sexual name; and the climax of Snow Crash is literally a description of the abstract visual effects one would see if Snow Crash were a VFX-heavy feature film. Diamond Age is more mature, but the Primer has "super book" capabilities closer to the Matrix-style download of knowledge, there to enable more excitement in the narrative. Great ideas, but a bit too comic-book in execution.


I always thought both of those books would make excellent graphic novel or anime adaptations.


Beyond what you have pointed out, this list of tropes helps remind us of how weak some of the characters were, as well as details that seemed unnecessary or out-of-place.

https://tvtropes.org/pmwiki/pmwiki.php/Characters/SnowCrash


I agree, I don't think the purpose of sci-fi is to predict the future. The future is just impossible to predict due to the myriad factors, variables, unknown unknowns, and second- and higher-order effects an action can have.

The purpose of sci-fi IMO is moreso to:

1. Provide an entertaining story/narrative with technology as the main focus of the world and characters' actions

2. Define a set of concepts to help you think about technology and its possible effects on humans and the world

3. Nudge people to think about what kind of future they would want or not want and how they can use or control technology to achieve that

Here's Ken Liu talking about the purpose of sci-fi: https://www.youtube.com/watch?v=5knkpmxXu-k


I do think predictions are an unavoidable side effect of writing science fiction, but those predictions are almost always in service to the purposes you listed. Stephenson must think a nanotech book AI is more plausible as a form of idealized childhood education than, say, magic fairies. But in the end he relies on our suspension of disbelief to get past that to the big questions of nurture vs. nature and the quality of human intelligence.


Yeah, I'm annoyed by writers equating "any computer that speaks like a real person" to "predicting ChatGPT". My four-year-old wanted a robot to clean up her LEGOs. Did she predict iRobot/Roomba?

I would be more impressed by a scenario where, e.g., a writer predicts email, and then the side effects: (1) you have to have an email address to create accounts for things, and (2) inboxes are constantly clogged by junk mail.


Reminds me of the joke: "I invented the laptop when I opened a book turned 90° at the age of four."


Particularly when talking computers are an extremely common trope in sci-fi, and had been for at least three decades before Stephenson wrote The Diamond Age, as, for that matter, had primitive chatbots.

The aspects of the setup for the Primer I remember (handcrafted by an individual for a specific end, reliant on human voices for narration, far-future advanced technology all around) don't feel at all like "predicting ChatGPT" either.


Written end-to-end electronic communication was an obvious prediction going back to at least the 70s (and arguably much longer). What was arguably a lot less obvious was that email/chat would become almost ubiquitous and increasingly a prerequisite for fully participating in society (including on the go).


> They are written as stories, with technology serving narrative above all else.

Greg Egan’s stories are almost completely the opposite, with characters and plot only there to explore the science and technology.


Yah, I was thinking of Diaspora a bit when I included the word "few" lol. Although even in that case, Egan is willing to hand-wave away technology in one place if it is the most narratively convenient way to move the story toward some other technological or philosophical conundrum he's after (like how the computer technology of a Polis is basically just magic), or if it makes the story more accessible to a human audience (like how the "gestalt" sense is usually written as straightforward human-style vision even if it is ostensibly some distant derivative thereof). If we do start uploading our brains into computers, I don't think I'm going to give Greg Egan credit for "predicting" it, but he did take good advantage of that trope to explore some really cool ideas.

IMO the point of sci-fi is to ask interesting questions. Very rarely is that question "what is the absolutely most plausible thing that could happen next century?".


Diaspora was a bit of a grab bag; he re-used elements from some of his short stories, including basically the whole of "The Planck Dive" and "Wang's Carpets". But the main theme, as far as I can tell, was how far you can change a person's mind and still have them be the 'same' person.


Just a nitpick - The main character in Snow Crash is named Hiro, not Hero.


I have yet to come across an SF novel that is as fun as the first 2/3 of Snow Crash. Recommendations?


In terms of well-written fun I would rank "Murderbot Diaries" very highly, alongside any of the John Scalzi books I've read. For similar levels of ridiculous (and trying-not-to-be-but-kinda-still sexist) look no further than "Off to Be the Wizard". That one is best if you recruit a few programmer buddies to also read it, so you can all compare each other's resulting self-insert fan fiction. I'll also occasionally reread the "Synchronicity Wars" series if I'm feeling ill and want something sorta kooky.


This is still pretty idealized. In my experience the pytorch abstraction leaks _constantly_. In particular, any interesting ML project probably pulls in at least a few dependencies with custom pytorch extensions somewhere.


At my old university one of the intro classes moved all their students to Replit. Which might have been fine in isolation, but I think it created a year of students who struggled endlessly to use gcc, Makefiles, debuggers, or local tooling in general. Starting your CS journey with the local command line seems to put you into the perfect headspace right off the bat, even if it is a little slower to get going.


IMHO, the ugliest part of local environments is the OS, and fighting with its quirks and interactions.

But VMs can abstract away most of this, especially in a learning environment where performance isn't a primary goal.

Then you can have a known-good set of instructions that doesn't have to include messing with whatever Apple/Microsoft/Ubuntu changed recently.


Docker was getting us pretty close to this until the M1 Macs came out and our Intel CPU presumptions blew up.


Docker has plenty of ARM images, and I think they also have built-in Rosetta support now to run x64 images, albeit with lower performance.
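
For example, on an Apple Silicon machine you can force the x64 variant of an image (a sketch; exact behavior depends on your Docker Desktop version and whether Rosetta/QEMU emulation is enabled):

    # Runs the x86_64 image under emulation, so expect a performance hit
    docker run --platform linux/amd64 ubuntu:22.04 uname -m   # prints x86_64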


Well, those OS differences taught us to write (or strive to write) portable code. It also taught me to better appreciate the strengths of different operating systems over the years.


Absolutely! But in a learning format, the student probably needs to be focusing on the task at hand.

Learning to paper over OS errata isn't as generally useful as, say, grokking multithreaded coding models.

Yes, there are environment quirks. Yes, you'll have to deal with them. Yes, you can look up documentation when you run into those situations.


There's a lot of magic behind being able to write basic code and see the output on the screen.

I hate setting up environments, even with a folder of templates I've assembled over the years. The most frustrating barrier to get to the "magic" of seeing something happen is figuring out the proper/correct way to structure a project.

Towards the end of intro courses seems like an appropriate time to start bringing up tooling.


Reflecting a little bit more, I don't think it was Replit's fault, per-say. But that change should have been made together with a larger adjustment to the program, like adding a class/unit in the style of The Missing Semester (https://missing.csail.mit.edu/) to make sure people came away with a good range of intuitions.


For the future, the term you probably intended was "per se", a Latin phrase meaning "by itself" or "in itself".


sounds great from Replit's point of view

they donate a couple of thousand dollars and in return get a CS class that will go out into the world completely dependent on them


Looks more like 4 s/iter, which is actually quite good for CPU inference in Automatic1111. It is an apples-to-oranges comparison to the LCM models used by fastsdcpu (since those can get away with fewer sampling steps) or to the quantized models favored by Apple (since those use totally different underlying compute capabilities). Currently Automatic1111 and other popular GUIs are pretty focused on users with beefy GPUs that can sample images in seconds, so they haven't yet implemented some of these really significant CPU-friendly speedups. But we'll get there!
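
If you want to play with the LCM route yourself, here's a hedged sketch using Hugging Face diffusers; the checkpoint name and argument values are assumptions that may vary across diffusers versions:

    import torch
    from diffusers import DiffusionPipeline

    # LCM checkpoints need only ~4 sampling steps instead of the usual
    # 20-50, which is most of what makes CPU inference tolerable.
    pipe = DiffusionPipeline.from_pretrained(
        "SimianLuo/LCM_Dreamshaper_v7",  # assumed checkpoint, swap in your own
        torch_dtype=torch.float32,       # CPUs generally want fp32
    )
    pipe.to("cpu")

    image = pipe(
        "a watercolor painting of a lighthouse",
        num_inference_steps=4,
        guidance_scale=8.0,
    ).images[0]
    image.save("out.png")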


Are those cpu speedups out there? Where can I find them?


Not for A1111's SAI backend, as far as I know.

VoltaML has some CPU-optimized options for its torch.compile optimization.

