
What are the hero use cases for AI agents? Reading the website, I don't see a good demonstration of what "AI agents" are and what they're good for.

AI agents excel in tasks like automating workflows, handling complex decision trees, and managing multi-step processes across APIs. For example, an agent can monitor a sales pipeline, send follow-ups, update CRMs, or manage logistics autonomously. See our demo here: https://github.com/picahq/onetool-demo.

This particular Oxide and Friends podcast includes a fantastic discussion of AI agents, the problems with their definitions, and at least one promising use case: https://oxide-and-friends.transistor.fm/episodes/predictions...

I believe this post is referring to device-scoped memory barriers - also sometimes called fences - as opposed to execution barriers.

The former is a mechanism to ensure memory accesses follow a well-defined order (e.g. it'd be bad if the memory accesses executed inside a critical section could be reordered before or after the lock and unlock calls).

The latter is a mechanism that ensures all threads (within some scope, perhaps all threads running on the "device") reach the same point in the program before any are allowed to proceed.
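A rough CPU-side sketch of the difference, using C++ atomics and std::barrier (GPU shading languages expose the same two concepts, just under different names and scopes):

    // Illustrative sketch only; the GPU analogue uses memory/storage barriers
    // and workgroup barriers rather than these exact C++ facilities.
    #include <atomic>
    #include <barrier>
    #include <cstdio>
    #include <thread>

    std::atomic<bool> ready{false};
    int payload = 0;                 // ordinary, non-atomic data
    std::barrier sync_point{2};      // execution barrier for two threads

    void producer() {
        payload = 42;
        // Memory barrier: the store above becomes visible to any thread that
        // later observes ready == true with acquire ordering.
        ready.store(true, std::memory_order_release);
        // Execution barrier: do not proceed until the consumer also arrives here.
        sync_point.arrive_and_wait();
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) { /* spin */ }
        std::printf("payload = %d\n", payload);  // guaranteed to print 42
        sync_point.arrive_and_wait();
    }

    int main() {
        std::thread a(producer), b(consumer);
        a.join();
        b.join();
    }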


That's correct, it's the memory scope that I expect to be device-scoped. GPUs tend not to have execution barriers in the shader language beyond workgroup scope; generally the next coarser granularity for synchronization is a separate dispatch. However, single-pass prefix sum algorithms, including decoupled look-back, can function just fine with device-scoped memory barriers, and do not require execution barriers with coarser scope than workgroup.
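For illustration, here is a CPU-side sketch of the simpler chained-scan version of that idea (real decoupled look-back adds per-partition aggregates so a partition can look back past unfinished predecessors). The point is that a release/acquire handshake on a per-partition flag is enough; no device-wide execution barrier is required:

    // Illustrative sketch, not the post's shader code: each "workgroup"
    // publishes its inclusive prefix and its successor spins on a flag.
    #include <atomic>
    #include <cstdio>
    #include <functional>
    #include <numeric>
    #include <thread>
    #include <vector>

    constexpr int kPartitions = 8;
    constexpr int kPartitionSize = 1024;

    std::atomic<int> flag[kPartitions];  // 0 = not ready, 1 = prefix published
    int inclusive_prefix[kPartitions];   // sum of partitions [0, i]

    void process_partition(int i, const std::vector<int>& data) {
        int local = std::accumulate(data.begin() + i * kPartitionSize,
                                    data.begin() + (i + 1) * kPartitionSize, 0);
        int exclusive = 0;
        if (i > 0) {
            // "Look back": wait until the previous partition has published.
            while (flag[i - 1].load(std::memory_order_acquire) == 0) { /* spin */ }
            exclusive = inclusive_prefix[i - 1];
        }
        inclusive_prefix[i] = exclusive + local;
        // Publish with release semantics so the next partition sees the value.
        flag[i].store(1, std::memory_order_release);
    }

    int main() {
        std::vector<int> data(kPartitions * kPartitionSize, 1);
        std::vector<std::thread> workers;
        for (int i = 0; i < kPartitions; ++i)
            workers.emplace_back(process_partition, i, std::cref(data));
        for (auto& t : workers) t.join();
        std::printf("total = %d\n", inclusive_prefix[kPartitions - 1]);  // 8192
    }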


The post also mentions unspecified behavior (mixing atomic and non-atomic memory accesses) where everybody has to cross their fingers and hope that the hardware designers had the same idea about how it should work. Which is almost fine with enough test coverage, but a shader translation layer adds uncomfortable complexity on top of it.


Threads are also a tool for concurrency.


And concurrency can be a tool to implement threads (albeit, with no parallel speedup).


The author states it wasn't actually work-life balance that was making him unhappy and tired. Rather, he discovered:

> By mid-2021 I was tired all the time. I know I wasn’t alone, because it was an ongoing meme inside Google. It’s only now that I realize what was wrong: I missed the satisfaction of building things and finishing projects.


This is the classic justification that leads people to self-defeating workaholism: The idea that you can fill the voids in your life by just working harder.

The false dichotomy is the idea that the alternative to Google is to work more hours + evenings + weekends at a startup. He's replacing one problem with another, but this new problem feels fresh and new and like turning over a new leaf. At least for now.


I get what he’s saying though. There can be great joy and a positive feeling of “losing yourself” in your work when you actually get to create. I think his role and the internal bureaucracy prevented him from using that creative energy.


Indeed, finally fixing something that may have been a thorn in your side for years is a very rewarding thing.

I've put in a little extra work on a few of these myself, and I'm glad for it.

I've found a tendency in my line of work/colleagues to call patchwork acceptable, and it regularly comes back to haunt us.

We find a way to stop the 3 AM calls and stop there, neglecting that maybe only one person has the necessary context and the problem may grow out of control again.


I don’t think it makes you a workaholic to observe that a shit work environment drains your energy and burns you out, whereas a good one can leave you feeling energized.

They weren’t saying that they needed to work harder at Google to be happy, they were saying they needed to move somewhere else where they could get job satisfaction from completing projects.


I mean, I'm the same way. I look back at my life and the times I didn't create things of value seem so meaningless. I don't want to go back to creating meaningless things. Even if I'm working harder, I'm enjoying what I'm doing.

The article really hit home for me, personally.


I enjoy my work… But it is not the thing that gives meaning to my life!


It must be nice to be so sure about how others should live their lives.


This is a bizarre non sequitur.


This is the same for any specialized software engineering role. Compilers, GPGPU, embedded systems, computer graphics, image processing, etc. In an interview panel for any of these roles, you will be expected to be a competent software engineer and have domain knowledge about the sub-field.


Do you have references to the incident(s) or published work(s)?


> The players are traded freely among teams

Yes, this happens more often than in the past, but there are still many players and staff that stay with an organization for an extended period of time (e.g. Tom Brady and Bill Belichick).

> all teams are owned by the same business

What do you mean? Each team has a different owner.

> So for example the San Francisco 49ers have very little to do with San Francisco, except for the name

In many geographical regions, you grow up watching the team that is closest in proximity. For example, I grew up in rural western New York State watching the Buffalo Bills because they were the closest franchise, despite being 100 miles away.

The name is just semantics. Would you be happier if the 49ers were called the "Northern California 49ers"?


There is an American baseball team that started its life as the Florida Marlins. The thinking was that they would establish this tribe mentality for all of Florida. After years of middling results in that regard, they changed their name to the Miami Marlins for the same reason: to better establish a tribe. They haven’t moved locations; they are just re-targeting their brand.


My impression was that the iTunes stunt was not received well because they had already lost popularity. The iTunes stunt didn't help, but I doubt it materially hurt their popularity either.


To me, it exposed U2, in a very bad light, to a bunch of people who had never heard of them, killing off any chance of new fans. Not that they would have grown in popularity, but the trail-off got much steeper.


I agree. Other things are going on. Rock has long been in decline, the social messages and trends U2 focused on 30-40 years ago have passed, and U2 is no longer breaking new musical ground like they were from the late 70s until Pop.


As a lifelong rock fan, it's really been sad to live through the long slow inexorable decline of the genre. The vast majority of current popular music doesn't appeal to me at all, which simply wasn't true even as recently as the 90s.


I think the machine that leveraged interest and controlled attention for so long through radio, magazines, videos, and festivals has basically failed, and young people have turned away from guitar-based rock.

My teenaged kids have no interest in what I listen to, with the possible exception of Queen, and I think that has more to do with the biopic than organic discovery.


I won't argue that young people are turning away from guitar-based rock, but I'm not seeing the connection between that and the changes in the popular music machine. Are you implying that people would not have liked guitar-based rock this entire time if not for said machine? That the natural state of things, which we're now reverting to, is to prefer other types of music? I think it's more just that tastes change and go through cycles, and guitar-based rock is on the decline (and might or might not recover; there are so many dead dead dead musical genres and instruments out there).


There are roughly equal levels of complexity in any modern, high-performance, general-purpose/programmable chip.


But it's not equally exposed to the programmer. The branch predictor in an Intel CPU may be insanely complex but when writing code you usually don't need to care. On the other hand if you want to use a GPU you have to care about a huge amount of complexity right from the start.


Except you really do need to care about this complexity on CPUs. Things like cache locality & predictable access patterns are critical to achieving good CPU performance. This is why there are things like data-oriented design, SoA vs. AoS, and Z-order curves. It's also why linked lists are so incredibly awful in practice, despite having superb algorithmic performance in theory.
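For example, the classic SoA vs. AoS layout choice (hypothetical Particle fields, purely illustrative):

    #include <cstddef>
    #include <vector>

    // Array of Structures: each particle's fields sit together, so a pass that
    // only needs x still drags y, z and the velocities through the cache.
    struct ParticleAoS { float x, y, z, vx, vy, vz; };

    float max_x_aos(const std::vector<ParticleAoS>& p) {
        float m = -1e30f;
        for (const auto& q : p) m = q.x > m ? q.x : m;  // touches 4 of 24 bytes
        return m;
    }

    // Structure of Arrays: each field is contiguous, so the same pass streams
    // through memory and auto-vectorizes easily.
    struct ParticlesSoA { std::vector<float> x, y, z, vx, vy, vz; };

    float max_x_soa(const ParticlesSoA& p) {
        float m = -1e30f;
        for (std::size_t i = 0; i < p.x.size(); ++i) m = p.x[i] > m ? p.x[i] : m;
        return m;
    }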

A big reason programming for CPUs doesn't seem as complex is that the vast, vast majority of the time nobody actually cares about CPU performance. We all just prefer to pretend a runtime or JIT or compiler managed to magically make a language that's god-awful horrendous on modern CPUs run fast. They didn't; we all just look the other way.

The difference between CPUs & GPUs is that when people reach for GPUs, such as for games or HPC, they are also the people who care a lot about performance. And guides like this are for them.


I don't think we're fundamentally in disagreement. But I will say that there's a huge amount of value in CPUs not forcing you to care about the complexity when it doesn't matter, and it doesn't always matter.


How much would the trade-offs change if GPUs shared the same main memory as the CPU?


Not sure I understand exactly which trade-off you're referring to, but even on systems without that GPU handicap (e.g. IBM Power, and also when you link up many GPUs together with NVLink), there is still a significant design and implementation challenge in producing a full-fledged analytic DBMS that is competitive with state-of-the-art CPU-based systems.

There are also other considerations such as: The desire to combine analytics and transactions; performance-per-Watt rather than per-processor; performance-per-cubic-meter; existing deployed cluster hardware; vendor lock-in risk; etc.


Here’s also a big misconception. When a CPU runs, it has access to the entire memory (or more specifically, a process can chew up almost all available memory if allowed). The cost of copying from RAM to L1/L2/L3 cache is a lot lower than on a GPU.

A GPU, on the other hand, is a bit more complex. It behaves like lots of small CPUs with their own local memory. They can access the full swath of memory, but only in parts, and copying between the two is much more expensive. If the problem can be boiled down to a map on the GPU followed by a reduce, the GPU excels. If the problem is serial, or can be parallelized with SIMD instructions, the CPU will run circles around the GPU.

SIMD has come pretty far.
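For a sense of what that means in practice, a minimal sketch of an 8-wide SIMD reduction (assumes an x86-64 CPU with AVX2; compile with something like -mavx2):

    #include <immintrin.h>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    float sum_avx2(const float* a, std::size_t n) {
        __m256 acc = _mm256_setzero_ps();
        std::size_t i = 0;
        for (; i + 8 <= n; i += 8)                  // 8 floats per iteration
            acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));

        float lanes[8];                             // horizontal sum of the lanes
        _mm256_storeu_ps(lanes, acc);
        float s = 0.0f;
        for (float v : lanes) s += v;

        for (; i < n; ++i) s += a[i];               // scalar tail
        return s;
    }

    int main() {
        std::vector<float> data(1 << 20, 1.0f);
        std::printf("%f\n", sum_avx2(data.data(), data.size()));  // 1048576.0
    }

This is what modern compilers try to do automatically for simple loops, and wider variants exist (AVX-512, SVE), which is part of why a well-written CPU loop can be surprisingly competitive.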

