Hacker News new | past | comments | ask | show | jobs | submit | gregatragenet3's comments login

AI summary:

1. NAD+ Injections for Longevity: NAD (nicotinamide adenine dinucleotide) is a molecule essential for cell function and longevity. Injecting NAD+ has become popular for improving metabolism, brain function, and energy levels, with some users reporting benefits like better sleep, improved workout performance, and increased vitality.

2. Celebrity and Athlete Endorsements: NAD+ injections are being used by elite athletes, such as Premier League footballers, and endorsed by celebrities. It's said to enhance endurance and improve performance in sports.

3. Scientific Backing: Research by Prof Shin-ichiro Imai suggests NAD can delay aging and improve quality of life, though there is debate on the efficacy of injections versus oral supplements. Imai supports NMN supplements as a more efficient way to boost NAD within cells.

4. Market Growth: The NAD market is rapidly expanding, with at-home injectable kits now available. These products are costly, ranging from £195 for two weeks to £395 for two months, but demand is increasing.

5. Concerns and Individual Dosing: High doses of NAD+ IV drips are cautioned against by some doctors due to side effects like nausea and headaches. Patients are advised to monitor and adjust doses for optimal results.

6. Lifestyle Factors: NAD levels naturally decrease with age, and lifestyle choices like stress, alcohol consumption, and poor diet can further deplete it. Boosting NAD levels might be a lifelong commitment for maintaining health and mitigating aging.

7. Reported Benefits: Users of NAD+ injections have reported increased energy, improved sleep, weight loss, and potential relief from conditions like Parkinson’s, menopause, and long Covid.

8. Long-term Use and Maintenance: While NAD injections may offer short-term boosts, many users eventually switch to oral supplements for maintenance due to the inconvenience of frequent injections.

9. Caution Against Miraculous Claims: Experts advise against viewing NAD as a miracle cure and recommend blood and genetic testing before use. Results may be more gradual and cellular, such as improving muscle insulin sensitivity, rather than immediate visible changes.


Seeing as most will read ^^ I'll put my 2c here:

In the article one professor says to just use NMN instead of injections. (6O GBP for 30 capsules).

I also find it suspect the article doesn't talk about nicotinamide riboside (NR).


'Unrealized Gains' is one of the methods by which the wealthy avoid paying taxes. When you read the article about 'Billionaire X paid 2% taxes this year' this can be the cause.

It works like this: you pay taxes on income, or realized gains (sold your stock). But you don't pay taxes on unrealized gains.

You have TSLA/AMZN/NVDA stock which has gained $10M, and you have a mansion/yacht payment coming due. You could sell the stock and pay $2M in taxes...

OR

You can get a loan with your stock as collateral. You may pay 7% interest but you still OWN the stock and the S&P grows at a rate of 10%, netting you a 3% annual profit on your collateral stock. AND the interest might be tax deductible offsetting other income taxes you may owe. You pay 0% taxes because you didn't sell anything.

Search for 'buy borrow die' for more resources on this strategy. This new tax proposal is trying to address the problem that the most wealthy individuals in our society pay a much lower percent in taxes compared to the average individual using this type of strategy.


What cost? A few cents per question answered?


The billions spent on R&D, legal fees, and inference?


This is why I wrote https://github.com/gregretkowski/llmsec . Every LLM system should be evaluating anything coming from a user to gauge its maliciousness.


This approach is flawed because it attempts to use use prompt-injection-susceptible models to detect prompt injection.

It's not hard to imagine prompt injection attacks that would be effective against this prompt for example: https://github.com/gregretkowski/llmsec/blob/fb775c9a1e4a8d1...

It also uses a list of SUS_WORDS that are defined in English, missing the potential for prompt injection attacks to use other languages: https://github.com/gregretkowski/llmsec/blob/fb775c9a1e4a8d1...

I wrote about the general problems with the idea of using LLMs to detect attacks against LLMs here: https://simonwillison.net/2022/Sep/17/prompt-injection-more-...


Great, I would love to get some of the prompts you have in mind and try them with my library and see the results.

Do you have recommendations on more effective alternatives to prevent prompt attacks?

I don't believe we should just throw up our hands and do nothing. No solution will be perfect, but we should strive to a solution that's better than doing nothing.


“Do you have recommendations on more effective alternatives to prevent prompt attacks?”

I wish I did! I’ve been trying to find good options for nearly two years now.

My current opinion is that prompt injections remain unsolved, and you should design software under the assumption that anyone who can inject more than a sentence or two of tokens into your prompt can gain total control of what comes back in the response.

So the best approach is to limit the blast radius for if something goes wrong: https://simonwillison.net/2023/Dec/20/mitigate-prompt-inject...

“No solution will be perfect, but we should strive to a solution that's better than doing nothing.”

I disagree with that. We need a perfect solution because this is a security vulnerability, with adversarial attackers trying to exploit it.

If we patched SQL injection vulnerability with something that only worked 99% of the time all of our systems would be hacked to pieces!

A solution that isn’t perfect will give people a false sense of security, and will result in them designing and deploying systems that are inherently insecure and cannot be fixed.


I look at it like antivirus - it's not perfect, and 0-days will sneak by (more-so at first while the defenses are not matured) but it is still better to have it than not.

You do bring up a good point which is what /is/ the effectiveness of these defensive type measures? I just found a benchmarking tool, which I'll use to get a measure on how effective these defenses can actually be - https://github.com/lakeraai/pint-benchmark


My personal lack of imagination (but I could very much be wrong!) tells me that there's no way to prevent prompt injection without losing the main benefit of accepting prompts as input in the first place - If we could enumerate a known whitelist before shipping, then there's no need for prompts, at most it'd be just mapping natural language to user actions within your app.


> It checks these using an LLM which is instructed to score the user's prompt.

You need to seriously reconsider your approach. Another (especially a generic) LLM is not the answer.


What solution would you recommend then?


Don't graft generative AI on your system? Seems pretty straightforward to me.


If you want to defend against prompt injection why would you defend with a tool vulnerable to prompt injection?

I don't know what I would use, but this seems like a bad idea.


Does your library detect this prompt as malicious?


Extra LLMs make it harder, but not impossible, to use prompt injection.

In case anyone hasn't played it yet, you can test this theory against Lakera's Gandalf: https://gandalf.lakera.ai/intro


I'm confused, this is using an LLM to detect if LLM input is sanitized?

But if this secondary LLM is able to detect this, wouldn't the LLM handling the input already be able to detect the malicious input?


Even if they're calling the same LLM, LLMs often get worse at doing things or forget some tasks if you give them multiple things to do at once. So if the goal is to detect a malicious input, they need that as the only real task outcome for that prompt, and then you need another call for whatever the actual prompt is for.

But also, I'm skeptical that asking an LLM is the best way (or even a good way) to do malicious input detection.


The most interesting thing is that that post sat for months without the answer "why" it was done, but chatgpt knew, from the comment added 2h ago.

The j was to prevent forgery, or altering the document. ii could be altered later to iii.. but if it was ij its obvious its been tampered with if it later appears as iji


ChatGPT just read that on Wikipedia (https://en.m.wikipedia.org/wiki/Roman_numerals#Use_in_the_Mi..., https://samplecontents.library.ph/wikipedia/wp/r/Roman_numer...). Notably, Wikipedia's source (https://archive.org/details/materiamedica00bastgoog/page/584...) doesn't appear to say anything about fraud or forgery or tampering, and that explanation doesn't actually make much sense as it would be quite easy to circumvent.

So please don't trust comments that just say "ChatGPT told me..."


The question was asked over 10 years ago and now it's getting clicks from HN someone's copypasted a claim from a LLM, possibly without verifying it. That is kind of interesting, but not in a good way.


The question was "asked Oct 10, 2013 at 1:54" and the answer (which explains why it was done) was "answered Oct 10, 2013 at 2:58". A more likely take on chatgpt's answer is that their anti-fraud explanation is not even correct; the 2013 answer provides citations from the time explaining why, and anti-fraud is not mentioned.


StackExchange doesn't allow AI answers. ;)


x can be altered to xx; why isn't that a problem?


If it ain't Boeing it ain't blowing.


Still vastly cheaper than 10 hours of flight training in a Cessna that was built when humans were last on the moon.


Tragic truth. :(

GA has problems, and they're only getting worse.


Anyone else come to the realization that when spokespeople talk about "AI Safety" they aren't concerned with the skynet-esque enslavement of mankind or paperclip maximizing, but that controls be in place that prevent people from using the technology in a way that is misaligned with the maximum extraction of profit?


I've mentioned in past posts that the terms "safety" when applied to the AI discussion are never quantified or qualified in any way.

It's more of a smear. If you're the one arguing from an AI safety standpoint, it means your opponent is not being safe and is dangerous.

You should be expected to articulate why your position is "safer" and how "safe" is defined.

In my opinion most AI safety arguments are about vaguely defined variables like speed: "it's going too fast" - to who? who defines what fast is? do you slow down 2x or 10x? do competitors and/or rival nations less concerned zip by and the end result is still the same?

There is some validity to the discussion if AI is being "racist" or "biased" (again however you define these) but again competitors and/or rival nations may be less concerned and the end result may still be the same.

If anything the "safest" option is to define what your concerns are, then race ahead to try to be the canonical solution or defined standard in order to set standards for the rest to follow or be bound by.


We need open source models that are trained on the same data and that can be built from source and run on lightweight machines like mobile devices.



At this point, I suspect that "AI Safety" has taken on the practical meaning of "regulations to make sure the big guys are the ones who benefit"... which is probably true anyways because of the capex needed to run these massive LLMs, but I am sure they would still like a moat or two against, e.g. the Chinese.

Otherwise I don't see what this means now; that the LLMs e.g. dont use racist terms? ok, great, nice, but how is that anything more than what you need to do on the web now anyway? how's that related to "AI" at all?

What I'd love to see is that this gambit backfires and instead we start talking about "tech safety" and that creates actual regulations with teeth that cut down the techzillas a bit (or a lot).


Its like this with a lot of industries. E.g. Meta is probably interested in social media “safety” to the extend it moats them legal protections and allows them to define the regulatory environment they operate within through lobbying. Likewise for automakers or any other business with a chance for harm.


Your "realization" is counter to everyone on the anti-safety side being monetarily incentivized (VC's, OpenAI employees with $1m+ pay packages, startup founders seeking to get rich) compared to the people that never aimed to profit at all like Helen Toner, Yudkowsky, Bengio, and Hinton.


Please stop framing your own personal cynicism as a grand narrative revalation.


TL;DR - Duke is dropping Basecamp due to 37 signals policy of people at work focusing on, you know, writing software that provides customers value, while having a policy that ppl at work not be spending their work time on causing distractions with ideological side-quests.


It sounds like they're actually dropping it because of DHH's ideological side quests.


"people at work focusing on, you know, writing software that provides customers value, while having a policy that ppl at work not be spending their work time on causing distractions with ideological side-quests"

The mission of DEI is exactly the opposite, so it should come as no surprise they hate this stuff.

If they are willing to sacrifice education and research on their altar, you think they would blink an eye on killing off productivity software in their eyes created by blasphemous hereticts?


You should read the actual article because it's clear you didn't.


Does this mean they'll get back to work improving their Moneyclip Maximizer?


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: