I’ve found that the RLHF’d ChatGPT is way too submissive these days. I really do not enjoy asking for minor clarification and getting back “I apologize for the confusion…” followed by a completely revised, and now incorrect, reply.
I was searching for implementations to get a more concrete idea of the algorithm and came across https://github.com/Yawning/secp256k1-voi which appears to be well commented, robust, and oddly entertaining. It’s got Schnorr, and ECDSA for comparison.
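For anyone who just wants the shape of the algorithm before digging into the Go source, here's a minimal, purely illustrative Schnorr sketch in Python over a toy multiplicative group. The modulus, base element, and hash-to-challenge details below are my own assumptions and are not secp256k1-voi's API; the real library works over the secp256k1 curve with constant-time arithmetic.

```python
# Toy Schnorr signature sketch over the multiplicative group mod a Mersenne prime.
# Illustrative only: parameters are arbitrary and NOT secure, and this mirrors the
# textbook scheme rather than BIP-340 or secp256k1-voi's actual API.
import hashlib
import secrets

P = 2**127 - 1        # toy prime modulus (a Mersenne prime), not secp256k1
Q = P - 1             # order of the multiplicative group mod P
G = 3                 # assumed base element standing in for the curve's generator

def challenge(R, pub, msg):
    """Hash the commitment, public key, and message into a scalar challenge."""
    data = f"{R}|{pub}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # private scalar
    return x, pow(G, x, P)             # (private key, public key = G^x)

def sign(x, pub, msg):
    k = secrets.randbelow(Q - 1) + 1   # fresh per-signature nonce (reuse leaks x)
    R = pow(G, k, P)                   # commitment
    e = challenge(R, pub, msg)
    s = (k + e * x) % Q                # response
    return R, s

def verify(pub, msg, sig):
    R, s = sig
    e = challenge(R, pub, msg)
    # Accept iff G^s == R * pub^e, since G^(k + e*x) == G^k * (G^x)^e
    return pow(G, s, P) == (R * pow(pub, e, P)) % P

if __name__ == "__main__":
    x, pub = keygen()
    sig = sign(x, pub, "hello")
    print(verify(pub, "hello", sig))     # True
    print(verify(pub, "tampered", sig))  # False
```

That verification identity is the whole trick; what a production library adds on top is doing the same check over the secp256k1 curve group, in constant time, with careful encodings.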
This study has some nice data in it, but as far as I can tell they didn't start with a very strong hypothesis. The paper opens with a citation about stress being inversely associated with telomere length, and proposes that psilocybin decreases psychological stress, but then goes on to demonstrate decreases in entirely different in vitro physiological stress markers. How is one to tell that this isn't just cherry-picking dependent variables that look good?
Cellular senescence can also be caused by mechanical stress [1]; this is the basis of senescent cell formation in osteoarthritis, etc. So maybe psilocybin's effect somehow generalizes across different kinds of stress.
> Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
The site guidelines suggest not accusing posts of being written by bots :(
When I put the word 'bot' in that guideline I had in mind something quite different from LLMs - it was more about asking people not to post things like "zomg this thread is overrun by $nation bots i guess that's the end of hacker news".
The community response to LLMs (including to comments that read like they were generated by an LLM) is more complex and not necessarily abusive, and I'm inclined to let it play out. So yeah, technically we should probably take the word 'bots' out of that guideline as ozarker's sibling comment suggests.
Three weeks ago I quit my job of five years to work on building what I'm calling autonomous computing [0].
I'd always been hoping to do a startup, having read all of the varying experiences shared here. The opportunity arose during the EthDenver hackathon, where I hacked up a winning proof of concept. The decision to set off was easy: I felt that if I didn't at least try to make secure serverless for web3 work, I'd regret never taking the chance.
So far, with ChatGPT as my legal and marketing assistant, things are progressing and I remain optimistic about controversial things like trusted hardware, blockchain, and human-AI interaction. Actually, what I would really love is an AutoGPT for bizdev if anyone has that...
Are Polkadot’s parachains an equivalent centralized approach? I’m trying to understand the problem you’re trying to solve. Sorry it’s a bit over my head atm. The blockchain space truly has lots of opportunities and work to be done.
My Osprey Daylite+ backpack cost $80 and has been used every day for five years now. It has a small tear and a broken zipper pull, but neither affects functionality. If it ever has a problem, I can always send it back to Osprey for repair or replacement under the warranty. It’s too good, actually. I want to get a GR1, but the Daylite+ leaves almost nothing to be desired.