
A lot of people wake up at 3 or 4am. This is typically stress related.

Possible solution:

1. Sit up or get up. If getting up, I usually get a mint tea.

2. Journal. Write down your thoughts. Very important: write down _what you are going to do tomorrow_, step by step. Usually the brain is worrying about something and by telling it what you’re going to do about it tomorrow it seems to calm down.

3. When you’ve gotten everything out, read. Just keep reading until you can’t keep your eyes open.

I’ve found this almost always works. Waking in the middle of the night is caused by stress around a problem. Your brain just wants to know the narrative around how that problem will get resolved or improved. Then it will fall asleep again.


> Waking in the middle of the night is caused by stress around a problem

I don't think this is universal. For example: rotating shift workers can have mostly physiological insomnia, due to various disrupted rhythms.


Problems with f/oss for business applications:

1. Great UX folks almost never work for free. So the UX of nearly all OSS is awful.

2. Great software comes from a close connection to users. That works when your software is an OS kernel and the programmers are the users, but how many OSS folks want to spend their free time on Zoom talking to hundreds of businesses and understanding their needs, so they can give them free software?

See also: year of Linux desktop


The good news for FOSS is that the UX of most commercial software is also awful and generally getting worse. The bad news is that FOSS software is copying a lot of the same UX trends.


What happened when someone from the cool club got promoted and became a boss?


I actually think Slack is great and it has improved over the last 12 months.


Tbf the same debate has raged over SO snippets for a long time.


For folks without responsibilities like kids, aging parents, etc. I really don’t think startups are very “risky”.

What’s the worst that happens? It doesn’t work out, and after five years you go get a job at some boring corp with an incredible skillset and vast life experience.

You’ve sacrificed some income perhaps, but so what? People make choices like that all the time. Your working career could easily be 40 or 45 years, 5 is not that much and it’s not like you went bankrupt. Your skillset might even mean you more than make up for lost time.

I don’t understand the talk of “risk” unless you’re Elon Musk betting the farm on your businesses and facing bankruptcy.

Work in your spare time until you have something Angel worthy, then get a modest salary to get to the next level and on you go. Or just bootstrap.

Is it easy? No, it’s the hardest thing you’ll ever do. Is it risky? Not so much.

So why do Canadians and Brits see it as a risky thing to do? I think they don’t. What they see is _uncertainty_: where will I be in six months? What if it doesn’t work out? What if I fail and people judge me? They don’t like uncertainty. That is conservative with a small c, and probably a cultural artefact rather than anything remotely rational.

The problem is you end up in an equilibrium where the society is conservative (“what you wanna do wasting your time with that”), so the ambitious people just leave and go somewhere like (parts of) the US, where people want to change things, make things, improve the world. And the conservative society gets more conservative until it is ossified.

Startups carry high uncertainty but not high risk.


There are countries where the business culture makes you unemployable, and makes it almost impossible to get a loan for the rest of your life, if you have ever had a business fail (which is bad enough on its own). Many countries aren't as open to failure as the US.


This is a fair point. I was talking in the context of the USA, UK, and Canada, but it might not generalise.


This depends on what you see as risk. On one path, I can safely earn way above the national average for 5 years and build a strong nest egg that can provide income forever.

Or I can fail at a startup and be close to zero five years later. The fact that you aren't homeless and starving and can get another job doesn't mean it wasn't risky; you still wasted a bunch of years compared to slow and steady accumulation.

I've read the majority of millionaires in the US get created like this, working and saving through decades.

You're basically repeating investor kool-aid, because for their model to work, 100 people will fail and 1 succeed, and so they tell you to not worry if you're in the 99.


Of course you could probably say at least some of the same things about grad degrees that may not really translate into appreciably different/better career outcomes. Of course, some say exactly that, especially about PhDs.


I don’t think they are risky either.

For me risk is “could go horribly wrong” but the worst case for most startup founders is … get a job?


“Wasting years” is not a risk. It’s a choice. And as I pointed out it’s likely not wasted anyway.


Wasting from a savings perspective, come on now.


Hmm, no I think people don't get what I'm saying.

Yes, you might waste five years (in your words) of income. But that is not a "risk".

A "risk" for me is "this could all go badly wrong". Not having an extra five years of savings is just a fairly straightforward consequence. It's predictably going to happen, I can decide if I want it.

Real risk is "something can go catastrophically wrong". If say you've taken out a huge bank loan to fund the business and you have to declare bankruptcy if it fails, now _that_ is a risk. But nearly every founder I've ever met has taken nothing like that risk.

That's my point, startups are not risky in that sense, for most people, most of the time. It's kinda strange that so many people think they are.


The increased cost of living in the last few years has changed this somewhat. That 5 years of lower earnings now means less nice groceries, fewer holidays and being under the yoke of landlords for considerably longer.


I can't speak for Canada and I may be wrong, but it seems to me it's harder to borrow money for a business here than in NA. Banks are the ones that don't want to take risks, not necessarily the people with ideas.

Also failures aren't considered the same in every job market.


Turns out there’s only enough people with this mindset to fill a couple of hubs around the world. The rest prefer less volatility and happily take on less downside risk for capped reward and/or less upside risk.


The tech wasn’t ready. Alexa is the same. No progress.

Businesses have to focus and it made sense to drop this as a priority.


Sounds like excuses.


What do you mean by “an LLM doesn’t reason”?


I mean that it does not follow basic logic rules when constructing its thoughts. For many tasks it will get things right, but it's not that hard to find a task for which an LLM will yield an obviously logically wrong answer. That would be impossible for a human with basic reasoning.


I disagree, but I don’t have a cogent argument yet. So I can’t really refute you.

What I can say is, I think there’s a very important disagreement here and it divides nerds into two camps. The first think LLMs can reason, the second don’t.

It’s very important to resolve this debate, because if the former are correct then we are likely very close to AGI, historically speaking (<10 years). If not, then this is just a stepwise improvement and we will plateau until the next level of model sophistication or compute power is reached.

I think a lot of very smart people are in the second camp. But they are biased by their overestimation of human cognition. And that bias might be causing them to misjudge the most important innovation in history. An innovation that will certainly be more impactful than the steam engine and may be more dangerous than the atomic bomb.

We should really resolve this argument asap so we can all either breathe a sigh of relief or start taking the situation very very seriously.


I'm actually in the first camp, for I believe that our brains are really LLMs on steroids and logic rules are just in our "prompt".

What we need is an LLM that will iterate over its output until it feels that it's correct. Right now LLM output is like a random thought in my mind, which might be true or not. Before writing a forum post I'd think it over, and maybe rewrite it before submitting. And when I'm solving a complex problem, it might take weeks and thousands of iterations. Even reading a math proof can take a lot of effort. An LLM should learn to do the same. I think that's the key to imitating human intelligence.


I think what this argument is missing is the emergent properties of the LLM.

In order to “predict the next word”, the LLM doesn’t just learn the most likely word from a corpus for the preceding string. If that were true, it would not generalise outside of its training set.
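To make the contrast concrete, here is a toy sketch (made-up corpus, hypothetical names, nothing from the original comment) of the pure frequency-lookup predictor described above; it has literally nothing to say about a prefix it never saw in training:

  from collections import Counter, defaultdict

  # Toy "most likely next word" lookup table built from raw bigram frequencies.
  corpus = "the cat sat on the mat the cat ate the fish".split()
  counts = defaultdict(Counter)
  for prev, nxt in zip(corpus, corpus[1:]):
      counts[prev][nxt] += 1

  def predict(prev_word):
      # Return the single most frequent continuation, or None if the
      # prefix never appeared in training: no generalisation at all.
      following = counts.get(prev_word)
      return following.most_common(1)[0][0] if following else None

  print(predict("the"))   # 'cat'  (seen in training)
  print(predict("dog"))   # None   (unseen prefix: a pure lookup cannot generalise)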

The LLM learns about the structure of the language, the context, and in the process of doing so constructs a model of the world as represented by words.

Admittedly the model is still limited, but it seems to me that there is something more insightful to be gleaned here: given enough data and sufficient pressure to learn, excelling at scale on a relatively simple task leads indirectly to a form of intelligence.

For me the biggest takeaway of LLMs might be that “intelligence is pretty cheap, actually” and that the human brain is not so remarkable as we’d like to believe.


Each word is taken to a distribution over words; this is where the illusion of "context" largely comes from. E.g., "cat" is replaced by a weighted (cat, kitten, pet, mammal, ...), which is obtained via frequencies in a historical dataset.

So technically the LLM is not doing P(next word | previous word) -- but rather P(associated_words(next word) | associated_words(previous word), associated_words(previous word - 1), ...).

This means its search space for each conditional step is still extremely large in the historical corpus, and there's more flexibility to reach "across and between contexts" -- but it isn't sensitive to context; we just arranged the data that way.
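A rough toy sketch of the word-association expansion being described; the embedding table and neighbour counts here are invented for illustration (a real model learns these vectors from data):

  import numpy as np

  # Hypothetical tiny embedding table; a real LLM learns these vectors.
  emb = {
      "cat":    np.array([0.9, 0.1, 0.0]),
      "kitten": np.array([0.85, 0.15, 0.0]),
      "pet":    np.array([0.7, 0.3, 0.1]),
      "car":    np.array([0.0, 0.1, 0.9]),
  }

  def associated_words(word, k=2):
      # Expand a word into its nearest neighbours by cosine similarity,
      # i.e. the weighted "cat -> (cat, kitten, pet, ...)" idea above.
      v = emb[word]
      sims = {w: float(v @ u / (np.linalg.norm(v) * np.linalg.norm(u)))
              for w, u in emb.items() if w != word}
      return sorted(sims, key=sims.get, reverse=True)[:k]

  print(associated_words("cat"))  # e.g. ['kitten', 'pet']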

Soon enough people with enough money will build diagnostic (XAI) models of LLMs that are powerful enough to show this process at work over its training data.

To visualise it roughly, imagine you're in a library and you're asked a question. The first word selects a very large number of pages across many books (and some whole books); the second word selects both additional books and pages within the books you already have. Keep going: each further word you're asked is converted to a set of associated words, which finds more pages and books and also narrows the paragraph samples from the ones you have. Finally, with the total set of pages and paragraphs to hand at the end of the question, you find the most probable next word.

This process will eventually be visualised properly, with a real-world LLM, but it'll take a significant investment to build this sort of explanatory model, since you need to reverse from weights to training data across the entire inference process.


The context comes from the attention mechanism, not from word embeddings.
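For anyone following along, here is a minimal NumPy sketch of scaled dot-product attention (toy random inputs, no learned projections); the point is that each position's output is a weighted mix over the whole sequence, which is where the context sensitivity comes from:

  import numpy as np

  def attention(Q, K, V):
      # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
      d = Q.shape[-1]
      scores = Q @ K.T / np.sqrt(d)
      weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
      weights /= weights.sum(axis=-1, keepdims=True)
      return weights @ V

  rng = np.random.default_rng(0)
  x = rng.normal(size=(5, 8))   # 5 tokens, 8-dim embeddings
  out = attention(x, x, x)      # self-attention over the sequence
  print(out.shape)              # (5, 8): each token's output now mixes in the others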


Run attention on an ordinal word embedding and see what happens


Well yes, necessary but not sufficient, obviously.


> and that the human brain is not so remarkable as we’d like to believe.

Well, it IS pretty seamlessly integrated with a very impressive suite of sensors.


Yes, our human sensor fusion is remarkable. The input signal of say our eyes is warped, upside down and low resolution apart from a tiny patch that races across the field of vision to capture high resolution samples (saccades). Yet, to us, it feels seamless and encompassing.


Bingo.

When I write some 100% bespoke code that is rather hastily composed and then paste it all into ChatGPT4 asking it to "refactor this code with a focus on testability and maintainability" and not only does it do so, but it does a pretty damn good job about it, it feels rather reductive to say "it's just providing the next most likely word".

I mean, maybe that's how it works, but that statistical output clearly involves modeling what my code does and what I want it to do. Rather than make me think LLMs are a cheap trick, it just has me thinking, "shit - maybe that's all I do too."


Averaged faces are beautiful, averaged code is clean. Not sure how that is hard to believe. Just don't extrapolate it too far or it will get strange.


This is a highly disingenuous argument, flagged by your reference to STV, which anyone versed in UK politics knows was a stitch-up of the Lib Dems by the Conservatives.

The UK is controlled by a powerful ruling class that maintains FPTP and other undemocratic institutions to their benefit, as I imagine you well know, and regular polls of the population (as noted by other responses) clearly contradict your assertion.

