Hacker News | dennis_moore's comments

Could the conclusions generalize to other types of anxiety such as GAD or OCD?

Which profilers in particular are you referring to? I've always thought that Callgrind is a profiler. perf?


perf or Intel VTune are the two standard choices AFAIK. Both have a certain learning curve, and both are extremely capable in the right hands. (Well, on macOS you're pretty much locked into using Instruments; I don't know if Callgrind works there, but I would suspect it's an uphill battle.)

Callgrind is a CPU simulator that can output a profile of that simulation. I guess it's semantics whether you want to call that a profiler or not, but my point is that you don't need a simulator+profiler combo when you can just use a profiler on its own.

(There are exceptions where the determinism of Callgrind can be useful, like if you're trying to benchmark a really tiny change and are fine with the bias from the simulation diverging from reality, or if you explicitly care about call count instead of time spent.)
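The call-count point can be made concrete: deterministic instrumentation counts every call exactly, so repeated runs give identical numbers, unlike a sampling profiler. A minimal Python sketch of that idea (hypothetical illustration only; Callgrind itself instruments at the machine-code level):

```python
import sys
from collections import Counter

def count_calls(func, *args, **kwargs):
    """Run func and return exact per-function call counts.

    Deterministic in the same sense as Callgrind's call counts:
    every run of the same workload yields identical numbers."""
    counts = Counter()

    def tracer(frame, event, arg):
        # Count only Python-level function calls.
        if event == "call":
            counts[frame.f_code.co_name] += 1

    sys.setprofile(tracer)
    try:
        func(*args, **kwargs)
    finally:
        sys.setprofile(None)
    return counts

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

counts = count_calls(fib, 10)
print(counts["fib"])  # 177 -- identical on every run
```

A sampling profiler would give you a time-weighted picture of the same run, but the sample counts would jitter between runs; here the numbers are exact by construction.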


perf on the whole system, with the whole software stack compiled with frame pointers, plus flamegraphs for visualisation, is an essential starting point for understanding real-world performance problems.
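The flamegraph step boils down to aggregating stack samples into "folded" lines of the form `frame;frame;frame count`, which is the intermediate format Brendan Gregg's stackcollapse/flamegraph scripts pass around. A tiny sketch with made-up sample data (the function and samples are hypothetical, not part of any real tool):

```python
from collections import Counter

def fold_stacks(samples):
    """Collapse raw stack samples (outermost frame first) into
    folded 'a;b;c count' lines, the format flamegraph.pl consumes."""
    counts = Counter(";".join(stack) for stack in samples)
    return [f"{stack} {n}" for stack, n in sorted(counts.items())]

# Pretend samples, as a sampling profiler like perf would collect them.
samples = [
    ("main", "parse", "read"),
    ("main", "parse", "read"),
    ("main", "render"),
]

for line in fold_stacks(samples):
    print(line)
# main;parse;read 2
# main;render 1
```

In the real pipeline, `perf script` output goes through `stackcollapse-perf.pl` to produce exactly this kind of folded text, and `flamegraph.pl` renders it as an interactive SVG.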


It works until it doesn't at which point you have a massive, useless pile of uninterpretable garbage.


Then you ask another LLM to explain it to you lol, like - I don’t think people are thinking hard enough about what this future is likely to look like.


Well, someone (a human being) still maintains it, and ultimately someone will likely find the code unmaintainable even with LLM help. If you use ChatGPT enough, you know it has its own standards too, actually pretty high ones. At some point the code will still need to be refactored, by a human or not.


Crazy that people say shit like this without seeing a problem with it.


It’s really not a problem at a certain point. Also, we’ll probably have “remove and replace” but for software in the next couple years with this stuff.


This is great: I like both the concept and how responsive the implementation is. Please consider open sourcing it.


Thank you! I'm looking into open sourcing it.


> A canonical list of references from a leading figure would be appreciated by many.

That confirms the opposite, no?


I failed to find any evidence that this person is an OpenAI employee, but please correct me if I'm wrong.


> “if I didn’t chose to do it you cannot be angry at me."

Yes I can, because I didn't choose to be angry either.


But would anger be reasonable? Of course not.


I disagree. But we’re not going to solve free will in HN comments. Personally I don’t think “free will” means anything or makes sense any more than “god” makes sense. It’s just a bundle of feelings that means something different to everyone.


I think that the term "software system" can have a much broader meaning than the one you are using here.


For sure. I suppose the unspoken step is "work out which component is the bottleneck".

If you're lucky then it's CPU bound and everything runs on the same box and you can just look in htop.


Not sure if raw data size is a good metric. One usually gains more information by reading a book than watching a movie.


I suppose we could debate that. Regardless, the point stands that there's still more data outside of text that can be mined.


This made me think that we, as a society, ought to have some sort of convention to mark AI-generated images as such, like a small watermarked symbol in a corner.


Doesn't really seem necessary unless one is claiming that the image is real. For the image in this post, it seemed obviously fake to me, so I didn't feel the need to label it as AI-generated.


In a factual article, the default assumption is that the pictures are real. You should tag generated images.


It's also useful to give sources for pictures you didn't generate yourself.

Either because they are from a camera or from someone else.

(Just like it's useful to give sources for almost any other piece of media or factual claim or even exercise you are using.)


Whether it's obvious or not seems to depend on a lot of factors. For instance, I've seen a lot of photos of Wittgenstein, so that made me suspect it was made up. Someone else might think it's real.

Anyway I think you did a great job with Midjourney. Even the coarse clothes correspond to clothes Wittgenstein wore. Would you like to share your prompt?


Thanks, sure the prompt was:

*Wittgenstein in the front row of a movie theatre, 8k, hyper realistic --q 2 --v 5 --s 750*

I had to do a few re-runs as it kept putting him in a suit and tie. In reality he rarely wore one.


This will solve literally nothing.


[flagged]


I don't think it's unreasonable to ask to know the difference between fact and fiction. Articles on astronomy are clear when they share an artist's depiction of some distant phenomenon so readers don't mistakenly believe that our telescopes picked up that impressive image.

With AI tools getting better and better, it's already getting to the point where viewers will struggle to differentiate. Where's the harm in labelling the images?


[flagged]


I was not advocating for "online slacktivism", I was merely giving my opinion of what rules or laws ought to be introduced to the effect, akin to copyright laws or open-source licenses (it is not "online activism" to give correct attribution according to the terms of a license). Shouldn't we be able to have this debate, or do you think that this decision-making should be confined to policymakers?


What's the difference between your behavior and theirs? You're both trying to influence other people's behavior with words. How you seek to claim the moral high ground does not seem like an important discriminator.


The difference is I'm not saying what everyone should do, but rather expressing dislike for people who like to fantasize about ordering everyone around.

