Symmetry's comments | Hacker News

I've found that refactoring to fix a lack of abstraction is usually easier than refactoring to fix the wrong abstraction.

Definitely. Among other things, this is akin to Work Hardening.

Refactoring tries to avoid this but the slope of that line can still end up being positive and you can’t refactor forever unless you’re very careful. And “very careful” is also not yet quantified.


The concept of Don't Repeat Yourself and the concept of You Ain't Gonna Need It are the yin and yang of software development.
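
A toy sketch of the tension (the names here are made up): DRY pushes you to merge the two near-duplicates, YAGNI pushes you to wait until you know which parts will actually stay the same, and merging too early is exactly how you end up with the wrong abstraction from the comment above.

    struct Order { double subtotal; double tax; };

    // Duplication that DRY wants to remove:
    double invoice_total(const Order &o) { return o.subtotal + o.tax; }
    double refund_total (const Order &o) { return o.subtotal + o.tax; }  // identical... for now

    // The premature merge: one "flexible" function that sprouts flags as the two
    // cases drift apart, i.e. the wrong abstraction that's hard to refactor out.
    double total(const Order &o, bool is_refund, bool waive_tax_on_refund) {
        double t = o.subtotal + ((is_refund && waive_tax_on_refund) ? 0.0 : o.tax);
        return is_refund ? -t : t;
    }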

When a robot is moving itself it has a very good idea of what each part of itself weighs, how big it is, and has sensors to tell it what angle all its joints are at. These are all much more difficult with an object the robot is holding in its hand. But you can still find videos of Atlas doing practical tasks, just not as impressively as it jumps around.

https://www.youtube.com/watch?v=F_7IPm7f1vI

A mentor of mine in robotics talks about how the people in his research group would watch a video from an old Julia Child TV show of her chopping vegetables and doing other cooking tasks, and they would try to figure out how long it would be until a robot could do those things.


This spasm would have ruined my dishes: https://youtu.be/F_7IPm7f1vI?t=80

I think the author probably made a mistake in using that subheading. For those familiar with the meme[1] it says that this isn't that important, but read straight it says the opposite. I'm all for playful subheadings, and I love it when The Economist uses them, but they shouldn't radically alter the meaning when a big fraction of the audience won't get a particular subtle reference, and I think that makes this a failure of writing.

[1] https://knowyourmeme.com/memes/you-sit-on-a-throne-of-lies


Having a system level cache for low latency transfer of data between CPU and GPU could be very compelling for some applications even if the overall GPU power is lower than a dedicated card. That doesn't seem to be the case here, though?

Strix Halo has unified memory, which is the same general architecture as Apple's M series chips. This means the CPU and GPU share the same memory, so there is no need to copy CPU <-> GPU.
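
To make that concrete, here is a minimal sketch of what "no explicit copy" means at the API level, in CUDA syntax (HIP on AMD mirrors these runtime calls almost one for one); whether the bytes physically move between caches and RAM underneath is a separate question.

    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void scale(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0f;
    }

    int main() {
        const int n = 1 << 20;
        float *x = nullptr;
        // On a discrete card you'd cudaMalloc device memory and cudaMemcpy back and
        // forth. With unified/managed memory, one allocation is visible to both sides.
        cudaMallocManaged(&x, n * sizeof(float));
        for (int i = 0; i < n; ++i) x[i] = 1.0f;   // CPU writes
        scale<<<(n + 255) / 256, 256>>>(x, n);     // GPU reads/writes the same pointer
        cudaDeviceSynchronize();                   // make the GPU's writes visible
        printf("%f\n", x[0]);                      // CPU reads the result, no memcpy anywhere
        cudaFree(x);
        return 0;
    }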

There's no need for an explicit copy, but physically, does the data move from the CPU cache to RAM to the GPU cache, or does it stay on the chip?

> does the data move from the CPU cache to RAM to the GPU cache

Probably not, because that would need a dedicated channel at the hardware level.

- GPUs are mostly for streaming applications with large data blocks, so the CPU cache architecture is usually too different from the GPU's to simply copy (move) data. Plus, they are on different chiplets, and a dedicated channel means additional interface pins on the interposer, which are definitely very expensive.

So while it is possible to make an SoC with a dedicated CPU<->GPU channel (or between chiplets), it is usually only done on very expensive architectures, like Xeon or IBM Power, and not on consumer products.

For example, on older AMD APU products the graphics core usually has priority over the CPU for access to unified RAM, but the CPU cache has no additions for handling memory shared with the GPU.

The latest IBM Power (and similarly Xeon) introduced a shared L4 cache architecture, where blocks of an extremely large L4 (nearly 1 GB per socket on Power, and as I remember somewhere around 128 GB on Xeon) can be assigned programmatically to specific core(s) and can give an extremely high performance gain for applications running on those cores (usually most beneficial for databases or things like zip compression).

Added: as an example of the difference between CPU and GPU caches, the usual CPU transaction size is 64 bits or less, maybe 128..256 bits currently, but that is not common on consumer hardware (it could be on server SoCs), simply because many consumer applications aren't written to use large blocks efficiently; for a GPU it is normal to use a 256..1024-bit bus, so its cache certainly also has blocks of 256 bits and larger.


I have seen the video. It states exactly this: the CPU doesn't have access to the GPU cache, because "they ran tests, and with this configuration some applications saw a double-digit speed increase, but nearly none of the applications they tested showed significant gains from the CPU having access to the GPU cache".

So when the CPU accesses GPU memory, it just accesses RAM directly via the system bus without trying to check the GPU cache. And yes, this means there could be a large delay between the GPU writing to its cache and the data actually being delivered to RAM and seen by the CPU, but probably a smaller one than with a discrete GPU on PCIe.


Plus, the main idea of a GPU is that its "Compute Units" don't work alone.

I mean, in a CPU you could cut out any core and it would still work, completely separate from the other cores.

A GPU typically has blocks of, for example, 6 CUs that share one pipeline, and this is how they get to a thousand CUs or more. So all CUs in a block basically run the same program; some architectures can do limited independent branching with huge speed penalties, but mostly there is just one execution path for all CUs.

Very similar to a SIMD CPU; some GPUs were basically SIMD CPUs with an extremely wide data bus (or even just VLIW). So the GPU cache is certainly optimized for such usage: it provides a buffer wide enough for all CUs at the same time.
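
As a rough illustration of why branching is so penalized when all lanes share one pipeline, a minimal sketch in CUDA syntax (NVIDIA terms; AMD wavefronts behave the same way, just with different widths and names):

    // All 32 threads of a warp share one instruction stream. When they disagree on
    // a branch, the hardware runs both sides one after the other, masking off the
    // lanes that didn't take that side, so you pay for both paths.
    __global__ void divergent(float *out, const float *in, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (i % 2 == 0)
            out[i] = in[i] * 2.0f;   // even lanes execute while odd lanes sit idle...
        else
            out[i] = in[i] + 1.0f;   // ...then odd lanes execute while even lanes sit idle
        // Total cost is roughly both branches combined, not just the one "taken".
    }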


Or if you don't want to trudge out to a hospital but are happy with things written by doctors, or quotes from them:

https://www.nytimes.com/2013/11/20/your-money/how-doctors-di...

https://slatestarcodex.com/2013/07/17/who-by-very-slow-decay...


There's a whole scientific study of consciousness that actually comes out of behaviorism. The thought is that if I have a conscious experience, I can then exhibit the behavior of talking about it. From this developed a whole paradigm of investigation, including things like research on subliminal images.

Stanislas Dehaene's book Consciousness and the Brain does a great job of describing this, though it's 10 years old now.


Trouble is that you can also exhibit the behavior of talking about it just by being exposed to the idea, even if you don't have the experience. If you were never exposed to the idea and you started talking about it, then I'd be convinced you had the experience, but nobody is actually like that. The fact that the idea exists at all proves to me that at least one human somewhere had conscious experience, and I know there's at least one more (me), but that's it.


I was evidently unclear. I mean, if an image of a parakeet is flashed up on a screen for 100ms and you can say "I saw a parakeet" you were conscious of the image. If the image is flashed for 50 ms and you can't you weren't conscious of the image. In this paradigm being conscious is being conscious of particular things.


That seems to imply that a fairly simple machine could be conscious, which is not usually how the word is used. Typically consciousness means that there is some ill-defined entity that has a subjective experience, what philosophers call qualia.


That's fine for wheeled robots or robots bolted to the floor but for legged robots, especially bipeds, the hard question is how to prevent them from falling over on things. These don't look heavy enough to be too dangerous for a standing adult but you've still got pets/children to worry about.


He certainly didn't have any trouble keeping C++ out.


Also in that thread is Greg KH: https://lore.kernel.org/rust-for-linux/2025021954-flaccid-pu...

> C++ isn't going to give us any of that any decade soon, and the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time.


The benefits of Rust over C are much more impactful than those of C++ over C.


Oh certainly. And many fewer potentially dangerous complex corner cases than C++ brings.


They can put away clutter but if they could chop a carrot or dust a vase they'd have shown videos demonstrating that sort of capability.

EDIT: Let alone chop an onion. Let me tell you having a robot manipulate onions is the worst. Dealing with loose onion skins is very hard.


Sure. But if you showed this video to someone 5 or 10 years ago, they'd say it's fiction.


Telling a robot verbally "Put the cup on the counter" and having it figure out what the cup is and what the counter is in its field of view would have seemed like science fiction. The object manipulation itself is still well behind what we saw in the 2015 DARPA Robotics Challenge, though.


There's something hilarious to me about the idea of chopping onions being a sort of benchmark for robots.

