
That's interesting, because Kinro (キンロ) is an actual name in Japanese (or at least in some anime that I watch)


It's been a meme for a few years already: https://knowyourmeme.com/memes/btw-i-use-arch

And obviously, I use arch btw.


Honestly. Arch is my favorite distro. I've been using arch for years even before it went mainstream.


I don't see a difference there. If someone sells something for much more than it costs to make, without doing much innovation for over 10 years (doesn't matter if it's electronics or clothing), why is one overpriced and the other not?


I suspect our disagreement is over the "much" in "without doing much innovation". The difference between electronics and clothing (I am not a clothing designer, though I've sewn my own clothes; there may be more to it than I imagine) is that electronics have to change to keep up with the times, which incurs a large expense. Human bodies haven't changed size so dramatically as to need a whole new process to handle extra arms.

There is a non-trivial amount of engineering work required to go from USB 1.0 to 1.1, then 2.0, and then all the way to 3.0, even though to the end customer it's just updating to the latest version of USB.

I see that non-trivial engineering cost as what makes the difference.

Using a newer kind of fabric doesn't require new sewing machines.

If you're able to make a product, not change it for a decade, not change how it's made for that same decade, and on top of that not have competitors pop up, then it's overpriced.


> There is a non-trivial amount of engineering work required to go from USB 1.0 to 1.1, then 2.0, and then all the way to 3.0

Yes, and this is in the context of a thread about a device originally marketed by Cypress (Anchor Chips) as “EZ-USB”. All this engineering work was done by Cypress for a device sold at a few dollars or so in quantity. Hardware-wise, most of these sig cap devices were reference designs, clearly leaning heavily on reference libraries.

This isn’t bad, but the whole point of these relatively expensive devices (compared to, say, a bare 8051, which costs literal pennies) is to save all this R&D money.

It also isn’t bad when someone takes this same off-the-shelf design, puts it in slightly shittier packaging, and sells it closer to cost.

This “infamous” line is silly, as this microcontroller line existed nearly a decade before it became a thing in low-end/hobbyist sig cap devices. It was originally produced by a company called Anchor Chips in the late 90s, which was bought out by Cypress. It has been used in a lot of shit.


Why not? There's still 7 months left for breakthroughs.


"Small" leaves wiggle room, but it's extremely unlikely a traditionally small model, <= 7B, will get there this year, even on these evals.

UX matching is a whole different matter and needs a lot of work: I worked heavily with Llama 8B over the last few days, and Phi-3 today, and the Q+A benchmarks don't tell the full story. E.g. it's nigh impossible to get Llama _70_B to answer in JSON; when Phi sees RAG from search it goes off inventing new RAG material and a new question.
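
(Not the parent's setup; just a generic sketch of the usual workaround: wrap the model call in a validate-and-retry loop. Here `generate` is a hypothetical stand-in for whatever inference call you use, e.g. a llama.cpp server or Ollama.)

    import json

    def ask_json(generate, question: str, retries: int = 3) -> dict:
        # `generate` is any callable that takes a prompt string and
        # returns the model's raw text completion.
        prompt = (
            'Answer ONLY with a JSON object like {"answer": "..."}.\n'
            f"Question: {question}"
        )
        for _ in range(retries):
            raw = generate(prompt)
            # Models often wrap JSON in prose or code fences; grab the outermost braces.
            start, end = raw.find("{"), raw.rfind("}")
            if start != -1 and end > start:
                try:
                    return json.loads(raw[start:end + 1])
                except json.JSONDecodeError:
                    pass
            # Feed the failure back so the model can self-correct on the next try.
            prompt += f"\n\nYour last reply was not valid JSON:\n{raw}\nTry again."
        raise ValueError("model never produced valid JSON")

Grammar-constrained decoding (e.g. llama.cpp's GBNF grammars) is the more robust fix when the runtime supports it.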


Both??


I've read that Matthew Walker isn't a good source on sleep, as he tends to just make up some claims or bend the data to support whatever point he wants to make [1].

[1] - https://guzey.com/books/why-we-sleep/

EDIT:

>Melatonin levels peak in the middle of the day and are lowest around midnight.

Isn't that the opposite of how melatonin works?


Based on https://slatestarcodex.com/2018/07/10/melatonin-much-more-th..., as well as every chart of melatonin I can find, it doesn't seem to peak during the day. It seems to rise before bedtime and peak in the early hours of sleep.

According to the blog post, the treatment for some sleep disorders is to take a dose of melatonin during the day, which moves your natural melatonin production earlier for unclear reasons. Maybe there's confusion between when you take the melatonin and when melatonin is naturally highest?


Yeah, it was my understanding that melatonin is secreted under low-light conditions according to one's circadian rhythm.

> A substantial number of studies have shown that, within this rhythmic profile, the onset of melatonin secretion under dim light conditions (the dim light melatonin onset or DLMO) is the single most accurate marker for assessing the circadian pacemaker. Additionally, melatonin onset has been used clinically to evaluate problems related to the onset or offset of sleep. DLMO is useful for determining whether an individual is entrained (synchronized) to a 24-h light/dark (LD) cycle or is in a free-running state. DLMO is also useful for assessing phase delays or advances of rhythms in entrained individuals. Additionally, it has become an important tool for psychiatric diagnosis, its use being recommended for phase typing in patients suffering from sleep and mood disorders. More recently, DLMO has also been used to assess the chronobiological features of seasonal affective disorder (SAD). DLMO marker is also useful for identifying optimal application times for therapies such as bright light or exogenous melatonin treatment. [1]

[1] https://pubmed.ncbi.nlm.nih.gov/16884842/


>Flops really are quite cheap by now, e.g. vision inference chip ~$2/teraflop/s !!

I'm really interested, can you share where you got these numbers?


Axelera [1] or Hailo [2] give you 100-200 tflops for ~$200.

8-bit ops, inference only, low-memory embedded, excluding the host; implied utilization from FPS specs is ~20%.

But the trend is there.
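
Back-of-the-envelope in Python, using only the rough figures quoted above (vendor peak specs, not measured values):

    # Rough cost-per-throughput math on the figures quoted above
    # (vendor peak specs, 8-bit ops; all numbers approximate).
    peak_tops = 150      # midpoint of the 100-200 claim
    price_usd = 200
    utilization = 0.20   # implied by FPS specs, as noted above

    print(price_usd / peak_tops)                  # ~1.33 -> ~$1.3 per teraop/s at peak
    print(price_usd / (peak_tops * utilization))  # ~6.67 -> ~$6.7 per teraop/s effective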

There are also newer ADAS/AV units from China which claim 1000 tflops and can't really cost more than $1000-$2000 per car.

These are all tiled designs (see also Tesla's Dojo), heavily over-weighted on flops vs. memory.

[1] https://www.axelera.ai/

[2] https://hailo.ai/


You can't get flops on a Hailo-8; they're fixed-point only. As much as these specialised inference chips are cool, we're a long way from just being able to drop them in where a GPU was. Not to mention the memory is hugely constrained: the Hailo chips I've worked with were all limited to 20 MiB for the weights, which is a squeeze even at 4-bit.
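
For scale, a quick sketch of what that 20 MiB budget buys:

    # How many parameters fit in a 20 MiB weight budget, by weight width
    # (ignores quantization overhead like per-tensor scales/zero-points).
    BUDGET_BYTES = 20 * 1024 * 1024  # 20 MiB

    for bits in (8, 4):
        max_params = BUDGET_BYTES * 8 // bits
        print(f"{bits}-bit: ~{max_params / 1e6:.0f}M parameters")
    # 8-bit: ~21M, 4-bit: ~42M -- tiny next to even a 7B model.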


Yeah, and Belarus is doing so much better than those "degrading" Baltics.


I currently have an OpenAI subscription and I'm really interested in moving to Kagi, but it's hard to find information on their Ultimate plan. What AI features does it offer?


You can choose between gpt-3.5, gpt-4, claude-1, claude-2, and google 'bison' (I didn't try that one though).

Mysteriously, it seems like the chat box on openai.com gave me slightly better answers than their API did; and sometimes Kagi.com renders a bit poorly while streaming responses, but meh. I can throw a random Istio error into it or whatever and it gets me 75% of the way to a solution, so it works well enough. YMMV


Thanks! Can you do image generation, or upload your own images for it to use?


> I have ptsd just thinking about [...] what it means to have hardware devices with drivers that barely work most of the time.

Same, but for my MacBook Pro. The Bluetooth implementation is borked and I can't even use a single device properly - I have to constantly unpair/pair my headphones. If I use 2 devices there is some stuttering in the sound. But if I use 3, all devices stutter unless I kill some Bluetooth daemon; then it works normally for 2-5 seconds and goes into bork mode again.

But then the software is awful too:

1. Switching between workspaces takes up to a full second before the new workspace becomes active, even with all animations disabled

2. Some windows just randomly decide not to show the three control buttons, and it becomes impossible to close them without messing with the process.

3. For simple screen recording I have to open QuickTime Player. Then the screenshot tool becomes a screen recording tool with no apparent way to switch it back to being a screenshot tool.

And these are just the ones I experienced this week and last. Don't get me started on the mouse getting stuck on a secondary monitor or disappearing completely, and the other shitty UX I've experienced while being forced to work with macOS for the past 2 years. Can't wait to move away and not look back.


Can't comment on the Bluetooth issue, since Apple devices have the fewest BT issues for me by far. But for screen recording there's the Cmd+Shift+5 shortcut (or the Screenshot app) to do the same thing without opening QuickTime Player.


It is the only platform giving me these Bluetooth headaches, and the reason my nice BT headset sat in the closet for almost a year.

Regarding the screenshots, I've just learned these shortcuts, but if I open the screenshot tool via Spotlight (Cmd + Space, then typing "screenshot"), it often just defaults to screen recording, and I see no option to switch that.

