
I wholly disagree with the comic, but here's an anti-AI-art take I'm more sympathetic to: https://x.com/soi/status/1815584824033177606?s=46


I don’t think this hits at the heart of the issue? Even if we can catch AI text with 100% accuracy, any halfway decent student can rewrite it from scratch using o1's ideas in lieu of actually learning.

This is waay more common and just impossible to catch. The only students caught here are those who put in no effort at all.


> rewrite it from scratch ... in lieu of actual learning

If one can "rewrite it from scratch" in a way that's actually coherent and gets facts correct, then they learned the material and can write an original paper.

> This is waay more common and just impossible to catch.

It seems a good thing that this is more common and, naturally, it would -- perhaps should, given the topic -- be impossible to catch someone cheating when they're not cheating.


Just another +1 that if you’re going to give vscode a fair shot, it’s much better to go with vscode-neovim than the standard vim extension. You can even map most of your config right over.

E.g. (mine) https://github.com/tom-pollak/dotfiles/tree/master/nvim


How is this different from instructor? github.com/jxnl/instructor

Namely, why did it take them so long to ship something that just seems like a wrapper around function calling?
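
For context, the core instructor pattern I'm thinking of is roughly this (a sketch; the Pydantic model, prompt, and model name are just placeholders):

  import instructor
  from openai import OpenAI
  from pydantic import BaseModel

  class User(BaseModel):
      name: str
      age: int

  # patch the OpenAI client so responses are parsed and validated into the Pydantic model
  client = instructor.from_openai(OpenAI())

  user = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder model name
      response_model=User,
      messages=[{"role": "user", "content": "John is 30 years old."}],
  )
  print(user)  # User(name='John', age=30)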


Looks really nice, but one concern: how does the perf compare to more mature libraries like faiss?


Benchmarking for this project is a bit weird, since 1) only linear scans are supported, and 2) it's an "embeddable" vector search tool, so it doesn't make a lot of sense to benchmark against "server" vector databases like qdrant or pinecone.

That being said, ~generally~ I'd say it's faster than using numpy and tools like txtai/chromadb. Faiss and hnswlib (bruteforce) are faster because they store everything in memory and use multiple threads. But for smaller vector indexes, I don't think you'd notice much of a difference. sqlite-vec has some support for SIMD operations, which speeds things up quite a bit, but Faiss still takes the cake.


Author of txtai here - great work with this extension.

I wouldn't consider it a "this or that" decision. While txtai does combine Faiss and SQLite, it could also utilize this extension. The same task was just done for Postgres + pgvector. txtai is not tied to any particular backend components.


Ya, I worded this part awkwardly - I was hinting that querying a vector index and joining with metadata in sqlite + sqlite-vec (in a single SQL join) will probably be faster than tools like txtai that do the joining at a higher level, e.g. in Python. That isn't an entirely fair comparison, especially since txtai can switch to much faster vector stores, but I think it's fair for most embedded use cases.
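
Roughly what I mean, as a sketch with the Python bindings (table names, dimensions, and data here are made up):

  import sqlite3, struct
  import sqlite_vec  # pip install sqlite-vec

  db = sqlite3.connect(":memory:")
  db.enable_load_extension(True)
  sqlite_vec.load(db)
  db.enable_load_extension(False)

  db.execute("CREATE TABLE docs(id INTEGER PRIMARY KEY, title TEXT)")
  db.execute("CREATE VIRTUAL TABLE vec_docs USING vec0(embedding float[4])")

  def f32(v):
      # pack a Python list into the compact float32 blob sqlite-vec accepts
      return struct.pack("%sf" % len(v), *v)

  for id_, title, emb in [(1, "a", [0.1, 0.1, 0.1, 0.1]), (2, "b", [0.9, 0.9, 0.9, 0.9])]:
      db.execute("INSERT INTO docs VALUES (?, ?)", (id_, title))
      db.execute("INSERT INTO vec_docs(rowid, embedding) VALUES (?, ?)", (id_, f32(emb)))

  # KNN search and metadata join in a single SQL statement
  rows = db.execute("""
      SELECT docs.id, docs.title, knn.distance
      FROM (SELECT rowid, distance FROM vec_docs
            WHERE embedding MATCH ? ORDER BY distance LIMIT 1) AS knn
      JOIN docs ON docs.id = knn.rowid
  """, (f32([0.1, 0.1, 0.1, 0.2]),)).fetchall()
  print(rows)  # closest doc plus its metadata, e.g. [(1, 'a', 0.1)]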

That being said, txtai offers way more than sqlite-vec, like builtin embedding models and other nice LLM features, so it's definitely apples to oranges.


I'll keep an eye on this extension.

With this, what DuckDB just added, and pgvector, we're seeing a blurring of the lines. Back in 2021, there wasn't an RDBMS with native vector support. But native vector integration makes it possible for txtai to just run SQL-driven vector queries...exciting times.
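
For anyone unfamiliar, a SQL-driven vector query in txtai looks roughly like this (a sketch; the embedding model and data are just examples):

  from txtai.embeddings import Embeddings

  # content=True stores the original text so SQL can return it alongside scores
  embeddings = Embeddings({"path": "sentence-transformers/all-MiniLM-L6-v2", "content": True})
  embeddings.index([(0, "maple syrup production hit a record high", None),
                    (1, "the championship game went to overtime", None)])

  # similar() runs the vector search inside the SQL statement
  print(embeddings.search(
      "select id, text, score from txtai where similar('sports') limit 1"))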

I think systems that bet on existing databases eventually catching up (as txtai does), rather than trying to reinvent the entire database stack, will win out.


The most useful feature of LLMs is how much output you get from so little signal. Just yesterday I created a fairly advanced script from my phone on the bus ride home with ChatGPT, which was an absolute pleasure. I think multi-prompt conversations don't get nearly as much attention as they should in LLM evaluations.


I suppose multi-prompt conversations are just a variation on few-shot prompting. I do agree, though, that they don't play a big enough role in evals, or in the heads of many people. So many capable engineers I know nope out of GPT because the first answer isn't satisfactory, instead of continuing the dialog.
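
Mechanically, continuing the dialog is just appending to the same message list and resending it, e.g. (a sketch with the OpenAI Python client; the model name and prompts are arbitrary):

  from openai import OpenAI

  client = OpenAI()
  messages = [{"role": "user", "content": "Write a script that dedupes my photo library."}]

  first = client.chat.completions.create(model="gpt-4o", messages=messages)
  messages.append({"role": "assistant", "content": first.choices[0].message.content})

  # instead of giving up on a weak first answer, add a correction and resend the history
  messages.append({"role": "user", "content": "Close, but skip RAW files and make it dry-run by default."})
  second = client.chat.completions.create(model="gpt-4o", messages=messages)
  print(second.choices[0].message.content)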


Switched to Firefox a month or two ago, mostly for uBlock Origin on Android and unlimited history (seriously, why is this not the standard?)

I've tried to like it, but honestly it's been painful. macOS Sonoma seems to have a hover bug, which has gone unresolved through the last 3 bug-fix updates. Performance is "fine" but seems to lag with many tabs open, which was never an issue in Chrome (this is on an M2 Pro!). The PDF reader also seems significantly slower. At this point I'm considering going back to Chrome.


Unlimited history is nice but I hate how history works out of the box. I might be doing something wrong but ctrl+h pops up a sidebar which shows every single website visited today in no discernible order. I've learned about ctrl+shift+h which is better but even there, the UI is a bit lacking compared to what Chrome has out of the box. Is there anything I can do to improve this?


There's a filter at the top right, if I remember right. Change it to "Last visited".


Have you tried Brave?


Moved to vscode with the neovim extension (vim mode was slow for me on large files). vscode is super customisable, and you can fairly easily remove all the tab bars, side panels, and anything else you don't use. It's also far more stable than the neovim ecosystem; I don't have time to mess with neovim plugins breaking bi-weekly anymore.


Using fzf to search bash history, with # comments as tags on common commands, works just as well for me. Not sure what else this adds.


I created it for convenience: you can edit, delete, and back up entries more easily. I'm not sure whether # comments can do that.


Eh, you can probably get very close to fish with zsh + loads of plugins, but fish has lots of niceties out of the box (syntax highlighting, autosuggestions based on your history and directory). I've been using fish + starship with just an fzf plugin and it's got everything I need.

Check it out; it's effective with very little config.


Speaking of starship, today I realized that I don't really need any of its features; I just like the way it looks. So I replicated it with pure fish by creating a simple function:

  function fish_prompt
      # Fedora Silverblue: show /home/... instead of the /var/home/... mount path
      set -l pwd (string replace /var/home /home $PWD)

      echo # blank line before each prompt, starship-style
      # line 1: shortened cwd in cyan, then git/vcs status in magenta
      echo -s (set_color -o cyan)(prompt_pwd -D 3 $pwd) (set_color magenta)(fish_vcs_prompt)
      # line 2: green prompt character ("❯" assumed here, matching starship's default)
      echo -ns (set_color green) "❯" (set_color normal) " "
  end

