
I imagine it is the only option if you want your AI to do anything with Twitter

How are they making SOTA models if they're third-rate? You forget how late they came into the game.

That is because the language is pre-1.0. The new language I follow a lot is Mojo; it also has this problem.

I think the only way to follow a new (unstable) language is to join whatever community where the conversation happens; otherwise, what you think you know about the language will become outdated pretty quickly.


I have a simple philosophy in how I approach everything: Too much of anything is bad.

When I started taking notes with Obsidian, I almost fell into the trap of over-analysing everything: what should go into a note, which folders and sub-folders to make. It quickly became obvious to me that the mental burden of this accumulates fast.

These days I store most of my notes in one folder. The only times I now make a note are: 1. When I'm reading. 2. Very rare these days, but sometimes I still have nagging thoughts that want to be written down. 3. When I have important information that needs to be stored, like an IP address.

I've found that not thinking about notes obsessively like this works better for me. Most thoughts are useless and fleeting; they're not worth writing down imo. Best to just leave those in your mind.

The outcome of this is that my vault has remained simple and small even after a year. When I search it for information, it's almost always for some important detail I knew I wrote down, and I don't get overloaded with junk.

To keep my notes space clean, I also regularly move things to an archive, which I rarely check.


I’ve also struggled with over-analyzing where stuff should go. I’ve restarted with a new Obsidian vault based on PARA [1], and am experimenting with using LLMs (both Cursor and Claude Code) to help me decide where stuff should go. It’s been a big help so far.

[1] https://fortelabs.com/blog/para/


I've started seeing a number of people talk about using Claude Code for searching, writing, and organising text documents. It's an interesting trend to me. I tried it out with my non-technical girlfriend, and she really liked using it to help analyse interview transcripts. It seems like the agentic workflow is really effective outside of coding as well.

I just hope non-technical people who pick this up also pick up version control. Or is there a better alternative to Claude Code that accomplishes a similar thing while being more friendly to non-technical people?


> am experimenting with using LLMs (both Cursor and Claude Code) to help me decide where stuff should go.

Can you elaborate on this?


My notetaking has devolved in much the same way. Nowadays I pretty much just have one big "Work" note, and then occasional smaller temporary notes for specific things (planning a trip, shopping list, etc).

PKM can often turn into a form of procrastination (it's more fun to make lists and folders and grand archives, than to work on your actual projects). I decided to cut all that off and do the bare minimum instead.


> I have a simple philosophy in how I approach everything: Too much of anything is bad.

This. Everything in moderation, including moderation.


> most thoughts are useless and fleeting

While I can't disagree with your own analysis of your own thoughts, I would push back on generalizing this to what other people think. One's own thoughts have an intrinsic value, and to seize on them and let them flourish is one of my personal greatest joys. As a thinking being, how could I call my thoughts "useless"? Sure, I don't record every thought of every day, but I sure hope I continue at least having one or two interesting and, indeed, noteworthy thoughts a week or so through till my old age.


Hi, I may not have written that well.

I meant that most thoughts are not important enough to write down, not that they're unimportant. For those I prefer to just let myself think. Most times the thought just goes away; other times I subconsciously refine it until it becomes pretty obvious it's something I want to write down. Said another way, I don't write down every thought I have.


My mother found a pile of letters from 30 years ago. Mundane conversations that today we would carry out over messaging apps.

It made her very emotional and nostalgic. They ended up much more precious than she thought they were when she wrote them.


Interesting to think about whether finding a database dump of messages 30 years from now would bring back the same nostalgia. I would think the tangible aspect has a more profound impact.


I also think the format of letters lends itself a bit better to being re-read. A single message talking about the "last little while" rather than atomic thoughts and reactions. More context about the snapshot of life it describes. Like a page from a diary that was shared with someone.



Isn't it heavily used in Google Cloud?


But those people at Google were veteran researchers who wanted to make a language that could scale for Google's use cases; these things are well documented.

For example, Ken Thompson has said his job at Google was just to find things he could make better.


They also built a language that can be learned in a weekend (well, now two) and is small enough for a fresh grad hire to learn on the job.

Go has a very low barrier to entry, but also a relatively low ceiling. The proliferation of codegen tools for Go is a testament to its limited expressive power.

It doesn't mean that Go didn't hit a sweet spot. For certain tasks, it very much did.


What does the performance of Julia's GPU kernels look like in comparison to kernels written by Nvidia or AMD?


Benchmarks of an early version on NVIDIA's blog show that it is about the same (the average across the benchmarks actually shows Julia as a little faster, but it's basically a wash): https://developer.nvidia.com/blog/gpu-computing-julia-progra....

While much has changed since then, the architecture is effectively the same. Julia's native CUDA support boils down to compiling via LLVM's PTX backend (Julia always generates LLVM IR; the CUDA infrastructure "simply" retargets LLVM to PTX, generates the binary, and then wraps that binary in a function that Julia calls), so it's really just a matter of the performance difference between code generated by LLVM's PTX backend and by the NVCC compiler.
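
To make that concrete, here's a minimal sketch of what the pipeline looks like from the user's side with CUDA.jl (the kernel, array sizes, and launch configuration are my own illustrative assumptions, not taken from the benchmark above): you write a plain Julia function, and @cuda compiles it through the PTX path just described and launches the result.

    using CUDA

    # A plain Julia function used as a GPU kernel; CUDA.jl compiles it
    # through Julia's usual LLVM IR pipeline, retargeted to PTX.
    function axpy_kernel!(y, a, x)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        if i <= length(y)
            @inbounds y[i] += a * x[i]
        end
        return nothing
    end

    x = CUDA.rand(Float32, 1024)
    y = CUDA.zeros(Float32, 1024)

    # @cuda triggers the compile-to-PTX-and-wrap step described above,
    # then launches the resulting kernel.
    @cuda threads=256 blocks=cld(length(y), 256) axpy_kernel!(y, 2f0, x)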


He works on Mojo a lot, mostly on weekends. In the past few months, he has worked on strings, collection literals, dependent types, reference captures, comprehensions, and many other nice language features.


Indeed, and not only GPUs but accelerators in general. Mojo will be able to target weird, esoteric hardware (portably, if that is important).


The plan is to someday have a mechanical tool that can transpile Python code to Mojo.

