
I work in tech for Executive Search, which is often (way) lower volume than generalist recruitment, but scheduling is still an issue. Keeping an eye on this - best of luck.

Thank you very much! We are actually already working with some of the largest US executive search firms (don't have a case study out yet), solving for that exact 'high-touch' scheduling hurdle. If you are ever curious, we would love to share more: https://tryvela.ai/book-demo

Bro, it reeks of AI.

I find this a better argument.

I get instantly turned off by a mere whiff of AI when reading something, and consequently I refuse to foist such garbage on my fellow human beings. But by god if I read another two-line (!!!) comment with an emdash on LinkedIn, I'm going to drop a bollock.

Please.

I am seeing similar dynamics at work, but also in my graduate studies.

I am currently doing the OMSCS at Georgia Tech and taking Machine Learning (7641) which has always had a reputation for being difficult. I don't mind a challenge, but I feel that the AI policy creates a sense of permanent and unpayable cognitive debt and learning deficits.

The class has traditionally taken a "data-first approach" to ML, where instead of focusing on the details of the different algorithms, students must apply them to datasets and analyze their performance and trade-offs. There are four colossal end-to-end ML projects which culminate in an 8-page IEEE-style paper each. (I actually prefer this general direction rather than an algo-heavy one - I find it more valuable to my work in business applications.)

For their AI policy, they've decided that all code can be generated by AI - the only rule is that the paper contents must be original analysis. To avoid taking any risks, I do not even use spell-checking AIs on the paper.

However, it seems to me that to compensate for the AI help, they've cranked up the amount of ground that needs to be covered in the projects. In the first project we were given two datasets, six algos to test, and a bunch of params and metrics to experiment with, producing a real combinatorial explosion of stuff to work on. This is on top of 150+ pages of scientific reading in some weeks.
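To give a sense of the scale, here is a hypothetical sketch of what that grid looks like in scikit-learn (the datasets, the four algos shown, and the param grids are all invented for illustration - the actual assignment covers more):

    # Hypothetical sketch of the project's experiment grid; names and
    # parameter grids are invented, not the actual assignment code.
    from itertools import product

    from sklearn.datasets import load_breast_cancer, load_digits
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    datasets = {
        "cancer": load_breast_cancer(return_X_y=True),
        "digits": load_digits(return_X_y=True),
    }
    algos = {
        "tree": (DecisionTreeClassifier(), {"max_depth": [3, 5, 10, None]}),
        "knn": (KNeighborsClassifier(), {"n_neighbors": [1, 5, 15]}),
        "svm": (SVC(), {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10]}),
        "mlp": (MLPClassifier(max_iter=500), {"hidden_layer_sizes": [(16,), (64,)]}),
    }

    # Every (dataset, algo, params) cell is a fit, a learning curve,
    # and ultimately a paragraph of analysis in the paper.
    for (ds_name, (X, y)), (algo_name, (model, grid)) in product(
        datasets.items(), algos.items()
    ):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        search = GridSearchCV(model, grid, scoring="f1_weighted", cv=5)
        search.fit(X_tr, y_tr)
        print(ds_name, algo_name, search.best_params_, search.score(X_te, y_te))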

I am leaning very heavily on LLMs to generate massive chunks of the code, but I feel like I can't keep up at all.

It's not even that my skills coming in were poor. I am a confident programmer, recently brushed up on math, and this is actually my second CS degree and my fourth course at Georgia Tech. I am rather familiar with the feeling of difficult courses or work problems pushing me to my intellectual limits, where I stare into the abyss, but this feels radically different.

I am pushed to work at a higher (less detailed) level of abstraction, as many have foretold LLMs would do. I feel like I am learning about the data science meta-process but cannot keep up with details that are not even that fine. There is some complex math in there that could probably make my head spin, but I cannot even get to it - I am cognitively stuck at higher abstractions, keeping up with so many families of algos, datasets, APIs, and thousands of lines of AI-generated code.

In some sense this may be a shape of things to come at work too, but here's where that analogy breaks down: in the class, the performance of our models doesn't actually matter, and we're not even graded on it. As long as we convincingly explain why things happen, we should be good - but even as I start to get the point of the class and focus on that, I feel like I can barely keep up. If only they had used a bit of the AI productivity increase to make room to focus longer on that!

I thought I was losing it but this morning I found a Reddit thread with dozens of current students venting and found some solace in seeing that I'm not alone.

I also feel for the teaching staff, who I think are absolutely well-meaning, competent and attentive, but who just like the rest of us are trying to wing it in this brave new world.

AI is transformative for the good and the bad, and it's going to take us all many years to sort it out. We haven't even started to understand social media, and AI could be orders of magnitude more complex, while also further complicating the former.


A question I've been asking myself and which I honestly want to put out there - and I apologize in advance, because you will see me repeat it in other threads, out of genuine curiosity:

Does your life have so much friction that you need a digital agent to act on your behalf?

Some of the use cases I saw on the OpenClaw website, like "checking me into a flight", are non-issues for me.

I work in business automation, but paradoxically I don't think too much about annoyances in my private life. Everything feels rather frictionless.

In business, I see opportunities to solve friction and that's how I make money, but even then, often there are barriers that are very hard to surmount:

(a) problems are complex to solve and require complex solutions such as deterministic or ML systems that LLMs are not even close to being able to create ad-hoc

(b) entrenched processes and incumbent organizations create moats that are hard to cross (ex: LinkedIn makes automation very hard)

(c) some degree of friction, in some cases, may actually be useful!

I imagine there are similar dynamics in the consumer space, but more than anything, I may not be seeing issues with such a critical eye (I like to relax after work, after all).

So, do you have problems in your private life that you'd want to take on the risks - and friction - of maintaining these agents?


Similar CV, similar take. My guess? Anyone involved in automation for >2 years at the enterprise level knows in their gut all the silent, sudden, annoying ways automation can fail, and so has a higher internal bar for "must save this much time to be worth automating."

That said, old beliefs should be challenged by new technological capabilities!

If LLM-based automation is (a) less fragile and (b) quicker to develop, then that bar should be lowered.
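As a back-of-the-envelope version of that bar (all numbers invented for illustration):

    # Toy automation break-even: build cost plus expected upkeep must
    # come in under the hours the automation actually saves.
    def worth_automating(build_hrs, maint_hrs_per_yr, saved_min_per_run,
                         runs_per_wk, horizon_yrs=2):
        cost = build_hrs + maint_hrs_per_yr * horizon_yrs
        saved = saved_min_per_run / 60 * runs_per_wk * 52 * horizon_yrs
        return saved > cost

    # Classic scripting: 40h to build, 20h/yr babysitting silent failures,
    # for a 5-minute task done 5 times a week. Doesn't clear the bar.
    print(worth_automating(40, 20, 5, 5))  # False
    # If LLMs cut the build to 4h and upkeep to 5h/yr, the same task clears it.
    print(worth_automating(4, 5, 5, 5))    # True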


Do not underestimate modern marketing and its capability to create needs that didn't exist before.

It is not about removing friction; it is about convincing you that the friction existed in the first place.


I don't get it either. I even sat down to try it out and see what it's all about, but I can't think of a single thing in my life that I want an agent to automate.

Summarizing news? No, I'd rather just read it. Besides, it's hard to say what will or won't be interesting at any given moment.

Reply to emails? No, I want to make sure I say what I mean to say and I don't see why I would tell the whole story to an LLM just so it could rephrase it all.

Trade stocks? Dear God no! That's a good way to lose my life savings, and if the solution is to just put in a little then what's the point?

Every video and post I see about the things people have automated with agents describes things I would never want. Most of them describe content farms - look at what's hot on Twitter, generate a video, post to Reddit, etc. Others, like preparing a morning summary, are neat and all, but so what? If my life was so hectic that I needed a personal assistant to take my calls and book my meetings, I'd hire one.

Seriously, what is the thing that agents can do that every ordinary person just can't live without?


I take him to be a good programmer on top of a pioneering venture capitalist and entrepreneur, but Hackers and Painters contains some pretty bad predictions and takes on programming, and if his foresight wasn't great then, it has probably only gotten worse with the years.


The article is hard to read, paywall notwithstanding, and tells us very little about Herzog's book other than that the critic didn't like it.

I really appreciate Herzog as an artist. I think Grizzly Man is a unique piece of art, and Herzog's commentary is an integral part of it - original, and very worth listening to.

Tonight I was planning to watch either Fitzcarraldo or Aguirre after having listened to Herzog on the Freakonomics podcast earlier this week. But after hearing about the book there, I was really put off by some of the things he said and concluded that the book would be a hard pass for me. Nothing persuaded me that he had anything interesting to add - neither rationally, nor aesthetically - about a topic which has been extensively covered by very diverse thinkers throughout the millennia.


>Nothing persuaded me that he had anything interesting to add - neither rationally, nor aesthetically - about a topic which has been covered by philosophers throughout the millennia.

That sounds more like an emotionally charged reaction than a calm assessment of the book on its own merits.

Especially when the idea here is that he presents his idiosyncratic vision of the concept of "truth" - not some claim that he solves the problem of truth "which has been covered by philosophers throughout the millennia", and which could very well be inherently unsolvable anyway.

A writer (even more so, an artist with a unique viewpoint) can add lots of very interesting observations and new ways of seeing the concept of truth, or our approaches to it, even when they do it "in the small", without taking on or pretending to tackle the philosophical / ontological core issue.

It's even more useful if an author says some things that rub you the wrong way, or challenge your core tenets. Else, I guess one can always just resort to some echo-bubble-friendly comfort reading.


Which things put you off?

He has some extreme takes on things, many of which I don't agree with, but I love that humans like him exist. He's one of the rare humans who has truly "sucked out all the marrow of life".


>He's one of the rare humans who has truly "sucked out all the marrow of life".

I think the biggest problem with him is that he styles himself as exactly that, but as the article points out, he's a self-mythologizer who frequently makes stuff up. He's pretentious in the literal sense of the term: he has constantly exaggerated his own life, and has even said as much. That doesn't seem as profound or counter-cultural to me as he thinks it is, given that we're mostly surrounded by exactly these kinds of types now.


I take it as part of his act on film and in reality. He does it with a straight face and people are happily buying it and love him for it. Heck, I like him, too!

One can still enjoy the stories, and not everything is scripted; the fun puzzle is figuring out how much is staged and made up.


> But after hearing about the book there, I was really put off by some of the things he said and concluded that the book would be a hard pass for me.

What are some of those things? I am not trying to be snarky, I like Herzog and I am curious.

One thing to keep in mind is that he views documentaries as almost fiction. They are not supposed to be taken in as pure information or facts. Treat them like you'd treat any movie.

You said you liked Grizzly Man. Well, there you can tell why he put the coroner in there and told him to "act", and the extra-long pauses after the scenes are there to add awkwardness. It should be obvious what is going on.


"Calvinism makes pretty lofty claims for a religion who has yet to demonstrate soul salvation anywhere near Lutheranism if they're leaning into the reformation angle"

- Someone in the 16th century, probably


I feel the same. I can't believe the amount of shit I am throwing at Codex for a measly 20€.
