Hey, I'm Conor, a software engineer specializing in the backend, but capable of using whatever technologies are required to solve a problem. I have experience with large multinationals, but much prefer my time working with startups. I was the first hire at my current company and am now looking for a new opportunity, having helped the company find product/market fit, scale the solution, and reach a position of stability. Most of my details can be found on my LinkedIn, including links to my GitHub, blog, and other technical work.
I am a software engineer specializing in Python backends, but with demonstrated ability to use whatever technology best solves the problem at hand. I have experience working with large multinationals and startups, both in person and on a remote-only basis. My current role is fully remote and involves software architecture/design decisions, PoC prototyping, DevOps, improving developer tooling, and general-purpose programming.
The best place to see my professional experience is on LinkedIn, but feel free to reach out to me by any means you prefer so we can discuss opportunities.
Another interesting thing in this space is Traindown[1]. It's simply Markdown for your exercises. As an example for those who need convincing to go to a link, this could be a simple workout:
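Something in this spirit; I'm writing it from memory of the Traindown docs, so treat the exact syntax as approximate and check the spec[1] for the real grammar:

```
@ 2024-01-15

# unit: kg

Squat:
  100 5r
  105 3r

* felt strong today
```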
It's much more free-form than this, which is a bonus for me: it allows me to track metadata such as weight, the time of day I'm exercising, or my general mood/feeling about the workout. I can ultimately just take these plain Markdown files from the app (I use a basic Android app that visualizes this Markdown[2]), import them, and do whatever processing I like in Python.
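The "process it in Python" step can be as simple as a regex pass. This is a hypothetical sketch: the `Movement: load reps` line format here is my own simplification for illustration, not the real Traindown grammar.

```python
import re

# Toy workout log in a simplified, made-up line format.
LOG = """\
Squat: 100 5r
Bench: 80 8r
"""

# One set per line: movement name, load, rep count suffixed with "r".
SET_RE = re.compile(r"^(?P<movement>[\w ]+):\s+(?P<load>\d+)\s+(?P<reps>\d+)r$")

def parse_log(text: str) -> list[tuple[str, int, int]]:
    """Return (movement, load, reps) tuples for every set line that matches."""
    sets = []
    for line in text.splitlines():
        if m := SET_RE.match(line.strip()):
            sets.append((m["movement"], int(m["load"]), int(m["reps"])))
    return sets

print(parse_log(LOG))  # [('Squat', 100, 5), ('Bench', 80, 8)]
```

From tuples like these it's a short hop to a pandas DataFrame or a matplotlib progress chart.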
Interesting. I wrote a small program some time ago that would have needed this... well, I'm the one needing the workout, so I should go and refactor the app instead of refactoring my body...
There it is! I've been looking for something like this!
I've been using my own home-brewed TOML spec, but since I'm more experienced with code than with training, I was worried it wouldn't hold up as I become more experienced. Not sure if I'll use this, but it's good to see another interpretation of this kind of data!
I expressed myself too succinctly and without context, sorry.
I meant that we need a new DSL better suited to prompt engineering, and a UI that better supports longer strings. Ideally, this UI would be something compatible with Python.
But overall a reimagination of the dev experience is what I am getting at (like Jupyter for LLMs).
1) Can you get the actual code output, or will this end up calling OpenAI on each function call?
2) What latency does it add? What about token usage?
3) Is the functionality deterministic?
1) The OpenAI API will be queried each time a "prompt-function" is called in Python code. If you provide the `functions` argument in order to use function-calling, magentic will not execute the function the LLM has chosen; instead, it returns a `FunctionCall` instance which you can validate before calling.
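The validate-before-calling pattern looks roughly like this. To keep the sketch self-contained I've used my own stand-in class rather than magentic's real `FunctionCall` type, whose details differ; the point is only that the LLM's chosen call is inert until you invoke it.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class PendingCall:
    """Illustrative stand-in for a returned function call: nothing runs
    until the object itself is invoked."""
    function: Callable[..., Any]
    arguments: dict[str, Any] = field(default_factory=dict)

    def __call__(self) -> Any:
        return self.function(**self.arguments)

def delete_user(user_id: int) -> str:
    return f"deleted user {user_id}"

# Pretend the LLM chose this call; we can inspect it before running it.
pending = PendingCall(delete_user, {"user_id": 42})

assert pending.function is delete_user        # validate the chosen function
assert pending.arguments == {"user_id": 42}   # validate its arguments
print(pending())  # deleted user 42
```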
2) I haven't measured the additional latency, but it should be negligible compared to the LLM's generation speed. And since magentic makes it easy to use streaming and async functions, you might achieve much faster generation speeds overall - see the Async section in the README. Token usage should also be a negligible change from calling the OpenAI API directly: the only "prompting" magentic currently does is in naming the functions sent to OpenAI; all other input tokens are written by the user. A user switching from explicitly defining the output schema in the prompt to using function-calling via magentic might actually save a few tokens.
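The async speedup comes from plain `asyncio` concurrency. Here's a sketch with a fake prompt-function whose `asyncio.sleep` stands in for LLM generation latency; three calls complete in roughly one call's wall time rather than three.

```python
import asyncio
import time

async def fake_prompt_function(text: str) -> str:
    """Stand-in for an async prompt-function; the sleep fakes generation time."""
    await asyncio.sleep(0.1)
    return text.upper()

async def main() -> list[str]:
    # gather() runs all three calls concurrently on the event loop.
    return await asyncio.gather(*(fake_prompt_function(t) for t in ("a", "b", "c")))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results)  # ['A', 'B', 'C']
print(f"{elapsed:.2f}s")  # ~0.1s, not ~0.3s
```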
3) Functionality is not deterministic, even with `temperature=0`, but since we're working with Python functions, one option is to just add the `@cache` decorator. This would save you tokens and time when calling the same prompt-function with the same inputs.
- Getting familiar with new APIs
- Bouncing general knowledge questions off it
- Having "discussions" to interact with it in a Socratic style
- Giving some "personality" to automated services by calling it as an API
2) Thanks for the pointers! Will check them out today!
I don't have much interest in playing with the internals for now, but I generally like keeping my data personal and the services I use self-maintainable as much as reasonably possible! I also suspect I'd run into the token limits and pricing limits with ChatGPT.
I try to write monthly about technical projects I've managed to complete. More recently, however, I've begun mixing in musings on non-technical topics!
What does the monitoring actually do for you? I've seen these setups, and have even set one up for myself a few times (Grafana, or something similar such as Netdata or Linode's Longview), but I've never really seen what it does for me beyond the "your disk is almost full" warnings.
Setting an email address you actually check in /root/.forward would provide most of this, and all of it with the addition of low-tens of lines of shell script and a cron job or two, no? I get that tastes vary, but adding more services to worry about and keep updated on my home server(s) is not my idea of a good time. I doubt the custom pieces required to get all of those alerts via email would take longer to write than installing and configuring that stack, and then the maintenance is likely to be zero for so long that you'll probably replace the hardware before it needs to be touched again (and if you scripted your setup, it'll very likely Just Work on the replacement).
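For the disk-full case, one of those low-tens-of-lines cron pieces could look roughly like this; the threshold, the `df` flags, and the mail delivery are all assumptions to adapt to your system:

```shell
#!/bin/sh
# Cron sketch: report any filesystem past a usage threshold.
# In a real crontab you'd pipe the output to something like
#   mail -s "disk alert" root
# so it lands wherever /root/.forward points; here it just prints.
LIMIT=90

df -P | awk -v limit="$LIMIT" 'NR > 1 {
    used = $5
    sub(/%/, "", used)                      # "95%" -> "95"
    if (used + 0 >= limit)
        printf "%s on %s is %s%% full\n", $1, $6, used
}'
```

The `-P` flag keeps `df` output on one line per filesystem so the `awk` field numbers stay stable.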
Oh definitely, but only if you are not interested in the visualization side.
I wanted the ability to quickly see the current & historical state of these and other metrics, not just configure alerts.
I’m also omitting the fact that I have collectors running inside different VMs on the same host. For example, I have Telegraf running on Windows to collect GPU stats.
Ah, yeah, that probably won't be enough for you then. If you need Windows monitoring and want the graphs, it's a much bigger pain to get anything like that working via email.
Continuous performance monitoring of a service, from its inception. I'm building a storage service using SeaweedFS, and also a web UI for another project. One thing I'm looking at doing is using k6[1] to do performance/stress testing of API endpoints and web frontends on a continuous basis, under various conditions.[2] For example, I'm trying to lean hard into using R2/S3 for storage offload, so my question is: "What does it look like when Seaweed offloads a local volume chunk to S3 aggressively, and what is the impact of that with a 90/10 hot/cold split on objects?" Maybe 90/10 storage splits are too aggressive or optimistic to hit a specific number. Every so often -- maybe every day at certain points, or with a bigger global test once a week -- you run k6 against all these endpoints, record the results, and feed them into Prometheus so you can see if things get noticeably worse for the user. Test login flows under bad conditions, when the objects users request are really cold, when large paginations occur, etc.
You can run the numbers manually, but I think designing for this up front is really important to keep performance targets on lock. That's where Prometheus and Grafana come in. Looking at performance numbers is also a really good way to understand system dynamics; it helps you ask why something is hitting some threshold. On the other hand, there are so many tools, and they're often so fun to play with, that it's easy to get carried away. There's also a reasonable amount of complexity involved in setting it all up, so it's also easy to just say "fuck it" and respond to issues on demand instead.
[2] It can test not only normal REST endpoints but also browsers, thanks to its use of headless Chrome/Chromium! So you can actually look at first-paint latency and things like that too.
Remote: Preferred
Willing to Relocate: No
Technologies: Python (+ Frameworks), AWS, Linux, Docker, ...
Résumé/CV: https://www.linkedin.com/in/conorjflynn/ | https://blog.randombits.host/resume (PDF Preview)
Email: hn@randombits.host