
Would you mind giving some insight into your approach?

aider.chat

Command-line driven: you give it commands, and the system directly changes your code and commits the changes so you can easily go back.


I personally had a good experience with a ZMA supplement for exactly the problem you described (though not as severe). More vivid dreams, too.


The core issue the parent is talking about is that the decision-tokens should be built on the reasoning-tokens, rather than the reasoning-tokens being generated according to the decision-tokens. RAG just provides the context the LLM should reason about.


Could you elaborate on your setup and what tasks you delegate?


Sure, I use Raycast, including its Raycast AI feature and snippets feature. I also have Raycast script commands (mostly specially formatted shell and Python scripts) that integrate with internal systems, such as our firewall, help desk, and MDM systems, as well as Linux servers via SSH.

For help desk tickets, I have a script that pulls new tickets, reads the information for each one, determines a likely next action (response, resolution, or follow-up questions), and asks me if I want to proceed with the response. Most of the time, I hit "y" + Enter, and the script handles the response.
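
For a rough idea of the shape of that script (the helpdesk_api module and its functions below are hypothetical stand-ins for whatever client your ticket system provides, not a real library):

    #!/usr/bin/env python3
    # Sketch of the ticket-triage loop. helpdesk_api is a hypothetical
    # stand-in for your help desk system's client.
    import helpdesk_api

    def triage():
        for ticket in helpdesk_api.fetch_new_tickets():
            # LLM-drafted next action: response, resolution, or follow-up questions
            draft = helpdesk_api.suggest_next_action(ticket)
            print(f"--- Ticket {ticket.id}: {ticket.subject} ---")
            print(draft.text)
            if input("Send? [y/N] ").strip().lower() == "y":
                helpdesk_api.post_response(ticket, draft)

    if __name__ == "__main__":
        triage()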

The responses are always well-written, cheery, and concise, regardless of my current mood or level of distraction, and I've received good feedback on them.

I also use the Raycast AI commands "Improve Writing" and "Summarize" several times a day on emails, documentation, tickets, and other text. I select the text in any window, hit a hotkey to launch the action, and it quickly performs the action on the selected text and optionally replaces the selected text or copies it to my clipboard. It's a very efficient process.

My goal is to automate anything I do at least once per day.

In addition to Raycast, I had much of this set up in Alfred (which I used previously), in Albert and now Ulauncher on my Linux box, and in the launcher that comes with PowerToys on Windows. I could also do all of this with scripts in zsh.

1. https://www.raycast.com/

2. https://manual.raycast.com/ai

3. https://manual.raycast.com/snippets

4. https://github.com/raycast/script-commands and https://manual.raycast.com/script-commands

Script examples here: https://github.com/raycast/script-commands/tree/master/comma...


Interesting idea. This made me think of these audio illusions[0] where what you hear depends on what you expect to hear. I wonder if this would present challenges for the proposed approach.

[0] https://www.youtube.com/watch?v=8FXQ38-ZQK0 (sorry for the fast-food-tier video; it was the best I could find that was not a short)


Check out the work of Eamonn Keogh. I think his methods are very up to date and applicable, as most of his algorithms have implementations in sktime, tslearn, stumpy, etc.
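
For example, the matrix profile work out of his lab is just a few lines with stumpy (toy data here, purely to show the API):

    import numpy as np
    import stumpy

    # Toy series: a repeating pattern plus noise, so the matrix profile
    # dips wherever the motif recurs.
    rng = np.random.default_rng(0)
    ts = np.tile(np.sin(np.linspace(0, 2 * np.pi, 50)), 10)
    ts += 0.1 * rng.standard_normal(ts.size)

    m = 50  # subsequence (window) length
    mp = stumpy.stump(ts, m)  # column 0: profile values, column 1: nearest-neighbor index

    motif_idx = int(np.argmin(mp[:, 0].astype(float)))
    neighbor_idx = int(mp[motif_idx, 1])
    print(motif_idx, neighbor_idx)  # locations of the best-matching pair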


Hit me up if you want to chat. I've been a hobbyist algotrader for some years (with mixed results).

I’d say this: it is very hard to beat the market consistently. It is even harder to statistically prove, and convince yourself, that your new strategy actually beats the market. There are a lot of gotchas and caveats to watch out for when backtesting.

I spent most of my time on time series techniques, as those were the most fun for me. My current stack is ccxt, Binance, polygon.io, and self-made backtesting in Python.
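
Not my actual strategy, but a minimal sketch of what that stack looks like in practice: pull candles with ccxt, then run a toy moving-average crossover over them in pandas. Note it ignores fees and slippage, which is exactly the kind of caveat I mean.

    import ccxt
    import pandas as pd

    # Hourly candles from Binance via ccxt (public endpoint, no API key needed).
    exchange = ccxt.binance()
    ohlcv = exchange.fetch_ohlcv("BTC/USDT", timeframe="1h", limit=1000)
    df = pd.DataFrame(ohlcv, columns=["ts", "open", "high", "low", "close", "volume"])

    # Toy crossover: long when the fast MA is above the slow MA.
    df["fast"] = df["close"].rolling(20).mean()
    df["slow"] = df["close"].rolling(100).mean()
    # Shift by one bar so we trade on the next bar, avoiding lookahead bias.
    df["position"] = (df["fast"] > df["slow"]).astype(int).shift(1)

    df["returns"] = df["close"].pct_change()
    df["strategy"] = df["position"] * df["returns"]
    print("buy & hold:", (1 + df["returns"].fillna(0)).prod() - 1)
    print("strategy:  ", (1 + df["strategy"].fillna(0)).prod() - 1)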


Interesting indeed, only one lagged feature for time series forecasting? I’d imagine that including more lagged inputs would increase performance. Rolling the forecasts forward to get n-step-ahead forecasts is a common approach. I’d be interested in how they mitigated the problem of the errors accumulating/compounding.
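
To illustrate the rolling-forward part (a toy sketch with sklearn, not their setup): fit on a single lagged feature, then feed each forecast back in as the next input. The compounding problem falls straight out of that loop.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Toy AR(1) series: y[t] = 0.8 * y[t-1] + noise.
    rng = np.random.default_rng(42)
    y = np.zeros(300)
    for t in range(1, 300):
        y[t] = 0.8 * y[t - 1] + rng.standard_normal()

    # One lagged feature: predict y[t] from y[t-1].
    model = LinearRegression().fit(y[:-1].reshape(-1, 1), y[1:])

    # Recursive n-step-ahead forecast: each prediction becomes the
    # next input, so its error compounds with the horizon.
    last, forecasts = y[-1], []
    for _ in range(10):
        last = model.predict([[last]])[0]
        forecasts.append(last)
    print(forecasts)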


Seems to have very similar ingredients to the DIY solution from another poster: essentially beeswax and some kind of oil and/or butter. I’ll give it a try, thanks!

