
This is really interesting, do you have other recommendations for improvements (gladly with sources, if you have any)? I have to build a RAG solution for my job and right now I am collecting information to determine the best way to proceed.

I'm exploring tooling for building these graphs and would love to pick your brain about your use case, if you're willing. No pressure! wade at tractorbeam dot ai

You can find the comparison to uBO under 5.5

Apparently, the name goes back to the Unicode emoji for a hug (🤗):

https://qz.com/hugging-face-microsoft-artificial-intelligenc...


I use Consentomatic, which automatically rejects cookies and can also reject dark pattern banners. Of course this should be solved properly at the source, but at least this way I have some control over my privacy without being disturbed.


I looked through the list three times and can't find a "deluxe-chat" there; how do I select it?


It has only ever been available in battle mode. If you do the battle enough you will probably get it eventually. I believe it is still there but doesn’t come up often, it used to be more frequent. (Battle mode is not uniformly random, some models are weighted to compete more often than others.)


Most materials contract when they cool down. For example, differential contraction between metal contacts and semiconductor materials can cause them to detach or break, disrupting electrical connections. Also extremely low temperatures can lead to changes in semiconductor properties such as carrier mobility, which affects how efficiently electronic signals are processed.
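To get a feel for the scale of that mismatch, here is a rough back-of-the-envelope sketch; the expansion coefficients and temperature swing below are illustrative assumptions, not figures for this particular lander:

    #include <cstdio>

    int main() {
        // All values below are illustrative assumptions, not mission data.
        const double cte_copper  = 17.0e-6;  // approx. thermal expansion coefficient of copper, 1/K
        const double cte_silicon = 2.6e-6;   // approx. thermal expansion coefficient of silicon, 1/K
        const double delta_t     = 250.0;    // assumed cool-down in kelvin, e.g. ~300 K to ~50 K

        // Differential strain between a metal contact and the die it is bonded to.
        const double strain = (cte_copper - cte_silicon) * delta_t;

        // Relative displacement the two materials "want" over a hypothetical 10 mm bond.
        const double mismatch_um = strain * 10.0e-3 * 1.0e6;

        std::printf("differential strain: %.3f %%\n", strain * 100.0);   // ~0.36 %
        std::printf("mismatch over 10 mm: %.1f um\n", mismatch_um);      // ~36 um
        return 0;
    }

A few tenths of a percent of differential strain, repeated over many thermal cycles, is the kind of stress that fatigues solder joints and bond pads.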


Couldn't all connectors be made with some kind of expansion joint, like infrastructure (bridges etc.) is? Given that they use older nodes, this might even be possible for the transistors (assuming those suffer from the same expansion problem)?


Normally materials in chips are selected so that they have similar thermal expansion coefficients - otherwise they would fall apart just from reaching normal operating temperatures.

There's another problem here: below a certain temperature semiconductors become insulators. You're running the risk of your chip shutting down in a disorderly manner.


Of course you can do all sorts of things to make the electronics able to survive, but ultimately that supposedly just wasn't part of the initial design requirements they settled on.


It's good to remember that the total lifetime budget of this lander is only $121.5 million, and that includes all the staff still receiving data. It was always intended to be a technology demonstrator, not an ongoing science lab.


I am currently finishing my PhD and will start next month as a software architect. Among others, I also looked into these books. What is the problem with them, and what would you recommend instead?


This lecture is a decent criticism of "clean code" and other non-expert advice.

https://youtu.be/7YpFGkG-u1w


The issue with those books is that they don't have any concrete data showing that they are worth following. And, in fact, that style of code is largely to blame for why modern software feels so sluggish; it often reduces performance by 10x if not 100x or worse. I like this video on the topic, which covers reasons not to follow the SOLID principles[0]. Muratori also has an excellent talk on writing APIs that are flexible and performant[1]. As for books on understanding hardware performance, Computer Systems: A Programmer's Perspective is the best example (not the international edition though, according to the authors' recommendation).

I'm not aware of any architecture books recommended by anyone who cares about performance, unfortunately. Most high-performance software is written iteratively, meaning its authors aren't committing to a code structure from the start. Andreas Fredriksson, a lead engine programmer at Insomniac Games, has an excellent quote on how he writes high-performance software[2]:

> Work backwards from hard constraints and requirements to guide your design. Don’t fall into the trap of designing from a fluffy API frontend before you’ve worked out how the thing will handle a worst case. Don’t be afraid to sketch stuff in while you’re proving out the approach.

> The value is what you learn, never the code. Hack it and then delete the code and implement “clean” or whatever you need. But never start there, it gets in the way of real engineering.

> As an industry we spend millions on APIs, documentation and abstraction wrapping a thing that isn’t very good to start with. Make the thing good first, then worry about fluff.

Casey Muratori also has written blogs about his programming style[3]. (He also runs a great course about performance at computerenhance.com). Abner Coimbre has a great article on how NASA approaches writing software[4]. Of course, there is also Mike Acton's famous CppCon talk about Data-Oriented Design[5].

The standard advice usually boils down to this: focus on the problem you have to solve, and be aware of how damaging solving the wrong problem can be. It's a good idea to focus on what data your program receives and on handling worst cases.

Since it is difficult to tell who is worth listening to, I suggest always investigating what actual software the person speaking has written. Those that write real time software or software that must not fail under any condition tend to speak very differently about typical industry practices for good reason.
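As a rough illustration of the data-oriented style Acton argues for in [5], here is a minimal sketch contrasting two layouts of the same data; the Particle example is my own hypothetical one, not code from any of the linked talks:

    #include <cstddef>
    #include <vector>

    // Array-of-structs: convenient to write, but updating only positions still
    // drags every other field through the cache with it.
    struct Particle {
        float x, y, z;
        float vx, vy, vz;
        float mass;
        int   material_id;
    };

    void update_aos(std::vector<Particle>& ps, float dt) {
        for (auto& p : ps) {
            p.x += p.vx * dt;
            p.y += p.vy * dt;
            p.z += p.vz * dt;
        }
    }

    // Struct-of-arrays: the fields the hot loop actually touches are packed
    // contiguously, so the loop streams through memory and vectorizes easily.
    struct Particles {
        std::vector<float> x, y, z;
        std::vector<float> vx, vy, vz;
    };

    void update_soa(Particles& ps, float dt) {
        for (std::size_t i = 0; i < ps.x.size(); ++i) {
            ps.x[i] += ps.vx[i] * dt;
            ps.y[i] += ps.vy[i] * dt;
            ps.z[i] += ps.vz[i] * dt;
        }
    }

The specific loop doesn't matter; the point is that the hot path only touches the data it needs, which is the data-first, worst-case-first thinking the quotes above describe.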

[0] https://youtu.be/tD5NrevFtbU?si=Jkg6VKBHns32_IU_

[1] https://youtu.be/ZQ5_u8Lgvyk?si=tMuPFxKbrboKrBFr

[2] https://twitter.com/deplinenoise/status/1782133063725826545

[3] https://caseymuratori.com/blog_0015

[4] https://www.codementor.io/@abnercoimbre/a-look-into-nasa-s-c...

[5] https://youtu.be/rX0ItVEVjHc?si=buLbaqoc3Zugfwr7


I don't know how stressful my life will be then, but I have thought about reading to my kids later and creating audiobooks in my voice for them, so that when I am traveling for work they can still listen to me "reading" to them.


Sometimes I ask myself whether WALL-E is really that unrealistic a dystopia, as we can already see that people are worse at remembering numbers compared to the time before smartphones, which save this information for you. Right now you can let ChatGPT do your homework without knowing anything about the subject. Where does this stop? Are we heading toward a future where machines do everything for us and we become dumber and dumber?


From the perspective of a hunter gatherer, we already live in this future. Our skills have nothing to do with theirs. From their perspective we are phenomenally dumb because we couldn't survive very long in the wild, and know very little of our natural surroundings.

In fact, you can argue those natural pressures have stopped driving our evolution. We no longer need our reasoning skills to track and follow the paths of the animals we hunt, or our memory to remember where the berries are.


We're not worse at remembering numbers, there's just no need for it so we don't bother.


I think what people memorize has become optional, but people still memorize a lot. Sure, not phone numbers, but the amount of lore they can recite from some fandom seems to be of similar depth.


Not OP, but for me the following combination works great:

- Obsidian for knowledge management, to help with the forgetfulness

- Todoist to get the daily stuff done and for personal project management

- Timetree to coordinate with my wife and be able to plan ahead more than a week

- any alarm app for the critical stuff the next day

