Non-determinism in GPT-4 is caused by Sparse MoE (152334h.github.io)
397 points by 152334H on Aug 4, 2023 | 181 comments



Floating point inaccuracies are generally deterministic - running the same calculations twice ought to yield the same results, down to the bit.

You only get divergent results if there is some other source of state or entropy: not zeroing buffers correctly, race conditions, not setting rounding mode flags consistently, etc…

From the quality of the code I’ve seen being cobbled together in the AI/ML ecosystem, I would assume all three of those issues are going on, and maybe more.


No, this is not true for GPUs. https://www.twosigma.com/articles/a-workaround-for-non-deter...

(In this particular case, the order in which the numbers are summed up is non-deterministic due to GPU parallelism, which may change the result slightly.)

I would generally refrain from insulting other people's code if you don't know much about the system it's written on.


Editing here since all the replies to this are mostly saying the same thing: Yes, CPUs can also be parallel and it can happen there as well, but unlike a CPU where most instructions on their own are deterministic, CUDA provides primitives that aren't. This is very much by design (as they're faster than their deterministic counterparts), and I mostly just take issue with how parent phrased this as a bug caused by bad code.


GPUs are deterministic machines, even for floating point.

The behavior in the linked article has to do with the use of atomic adds to reduce sums in parallel. Floating point addition is not associative, so the order in which addition occurs matters. When using atomic adds this way, you get slightly different results depending on the order in which threads arrive at the atomic add call. It's a simple race condition, although one which is usually deemed acceptable.
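
A minimal CPU-side sketch of that effect (plain Python rather than CUDA; the shuffle stands in for the order in which threads happen to reach the atomic add):

    import random

    values = [random.uniform(-1.0, 1.0) for _ in range(1_000_000)]

    def racey_sum(xs):
        order = list(xs)
        random.shuffle(order)   # stand-in for non-deterministic thread arrival order
        total = 0.0
        for x in order:
            total += x          # each add rounds; the rounding depends on the order
        return total

    print(racey_sum(values) == racey_sum(values))   # usually False: same numbers, different order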


I just edited my comment while you were writing your comment to add an explanation. The point here is that some primitives in e.g. cuDNN are non-deterministic. Whether you classify that as a race condition or not is a different question; but it's intended behaviour.


Right but that's not an inherent GPU determinism issue. It's a software issue.

https://github.com/tensorflow/tensorflow/issues/3103#issueco... is correct that it's not necessary, it's a choice.

Your line of reasoning appears to be "GPUs are inherently non-deterministic don't be quick to judge someone's code" which as far as I can tell is dead wrong.

Admittedly, there are some cases and instructions that may result in non-determinism, but they are inherently necessary. The author should think carefully before introducing non-determinism. There are many scenarios where it is irrelevant, but ultimately the issue we are discussing here isn't the GPU's fault.


What I'm saying is "there are non-deterministic primitives", not "there are no deterministic primitives".


Yes, and `gettimeofday` is a non-deterministic primitive. There is nothing special about GPUs here. If you write tests that fail sometimes because you used non-deterministic primitives like gettimeofday and someone files a bug we don't throw up our hands and say "this is not a bug but due to how CPUs work." We remove the non-deterministic bit.

There's no difference here. This isn't a GPU problem.


Except the issue is inextricably linked to GPUs. All of the work in practical DNNs exists because of the extreme parallel performance available from GPUs, and that performance is only possible with non-deterministic threading. You can't get reasonable training and inference time on existing hardware without it.


1000 threads can run in parallel. That doesn't prevent us from summing their results deterministically:

    from multiprocessing.pool import ThreadPool
    import math
    results = ThreadPool(processes=1000).imap_unordered(calc, inputs)
    print(math.fsum(results))
Due to the magic of the fsum alg, the result is deterministic whatever order we get results in. https://docs.python.org/3/library/math.html#math.fsum


That's not the operation that's the problem on GPUs. The issue is that fundamentally GPUs allow for high-performance operations using atomics, but this comes at the cost of non-deterministic results. You can get deterministic results, but doing so comes with a significant performance cost.


Using atomics is easier than warp operations (using warp shuffle for example), but warp shuffle is quite fast.

I guess if determinism is so important implementations can be changed, it is just maybe not that high priority.


That summation is slow and would not be used in practice.

You could use just one thread on your 10000 thread GPU too and it would be deterministic, sure. Completely beside the point.


In my experience cuBLAS is deterministic. Since matmul is the most intensive part, I don't see other reasons for non-determinism besides sloppiness (at least when just a single GPU is involved).


Yeah. In curated transformers [1] we are seeing completely deterministic output across multiple popular transformer architectures on a single GPU (there can be variance between GPUs due to different kernels). Of course, it completely depends on what ops and implementations you are using. But most transformers do not use ops that are typically non-deterministic to be fast (like scatter-add).

One non-determinism we see with a temperature of 0 is that once you have quantized weights, many predicted pieces will have the same probability, including multiple pieces with the highest probability. And then the sampler (if you are not using a greedy decoder) will sample from those pieces. So, generation is non-deterministic with a temperature of 0.

In other words, a temperature of 0 is a poor man’s greedy decoding. (It is totally possible that OpenAI’s implementation switches to a greedy decoder with a temperature of 0).

[1] https://github.com/explosion/curated-transformers
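
A toy sketch of that tie-breaking point (hypothetical numbers, not any real model's distribution), showing how greedy argmax and a sampler restricted to the tied maxima diverge:

    import numpy as np

    # Hypothetical post-quantization distribution: three pieces tie for the top probability.
    probs = np.array([0.25, 0.25, 0.25, 0.15, 0.10])

    def greedy(p):
        return int(np.argmax(p))                  # always returns the same tied index (0)

    def sample_at_t0(p, rng):
        tied = p == p.max()                       # sampler at "temperature 0": mass only on the maxima
        masked = np.where(tied, p, 0.0)
        return int(rng.choice(len(p), p=masked / masked.sum()))

    rng = np.random.default_rng()
    print(greedy(probs), [sample_at_t0(probs, rng) for _ in range(5)])   # e.g. 0 [2, 0, 1, 0, 2]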


If the hardware is deterministic, so are the results. You can't generate random numbers purely in software with deterministic hardware.


The behaviour of atomic operations is definitely not deterministic. E.g. if you have a lot of atomic adds, every time you run the code you'll get a different result without a random number generator.


Read the article you linked.

It literally says that the GPU is deterministic, the NVIDIA libraries on top are deterministic, but it is Tensorflow that introduces variability (errors!) for “performance”.

My argument is that it is the AI/ML code that is introducing non-determinism, usually by sacrificing repeatability to gain performance.

That's precisely what's happening here. Tensorflow introduced a "harmless"[1] data race to improve performance by not having to use a deterministic but slower algorithm.

The individual floating point computations are deterministic, it's the multi-threaded design on top that's introducing the variability in the output.

[1] Used to be harmless, but cutting corners like this will make it nigh impossible to repeatably validate the safety of future models like GPT5. That seems pretty dangerous...


As the article says, cuBLAS is deterministic, but other CUDA primitives (e.g. some of those in cuDNN) are not.

Yes, the non-determinism is being introduced somewhere, but that is splitting hairs. The point is that the primitives that you work with on GPUs are non-deterministic by design.

I mostly take issue with you phrasing it as a bug and using it to insult the authors.


How is that splitting hairs?

> The point is that the primitives that you work with on GPUs are non-deterministic by design.

This is just blatantly wrong. There are _some_ operations that can be non-deterministic in some scenarios but they are not necessary.

GPUs are deterministic. If you ask them to add a million floats in order, you get the same result every time. If you ask them to add a million floats in some arbitrary order, then you may get different results every time. The distinction is that someone had to ask the GPU to do that. It's a choice.

> I mostly take issue with you phrasing it as a bug and using it to insult the authors.

It's a bug, whether it insults the authors or not is irrelevant. It's most definitely a bug.


Basically any parallel map-reduce operation using non-associative reduce operators[0] is non-deterministic unless you specifically sort after/during the gather, or block on the gather (and gather to a thread-determined memory location). Sorting and blocking take time. If you remove the sort/block, you will get a non-deterministic answer when operating on floats for a wide variety of reduce operations, but it will be faster. This is true of any parallel map-reduce, done anywhere (MPI, cuda kernels, openMP, spark, etc.), and is not unique to gpus/cuda.

> If you ask them to add a million floats in order, you get the same result every time.

There are a bunch of ways to add a million floats on a GPU, and they will all get you different results:

* split the million floats into ‘n’ chunks, each chunk is summed, then you sum the ‘n’ results.

* if you sum results as they are gathered (you don’t need to block), you will get a non-deterministic result, as the order in which threads finish (outside of a warp) is non-deterministic.

* if you change ‘n’, your result will change.

* if you sort after gathering, your result will change.

TLDR: parallel race-conditions are nondeterministic. Map-reduce has an underlying race-condition that you can prevent, but it costs time/performance. Sometimes you don’t care about the non-determinism enough to pay the performance penalty to fix it.

[0] https://www.microsoft.com/en-us/research/wp-content/uploads/...
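
The chunking point above, as a quick CPU-side sketch (pure Python; a GPU reducer changing its block count has the same effect):

    import random

    random.seed(0)
    xs = [random.uniform(-1.0, 1.0) for _ in range(1_000_000)]

    def chunked_sum(xs, n_chunks):
        size = (len(xs) + n_chunks - 1) // n_chunks
        partials = [sum(xs[i:i + size]) for i in range(0, len(xs), size)]
        return sum(partials)

    # Same numbers, same "in order" chunked scheme, different 'n': the sums typically differ slightly.
    print(chunked_sum(xs, 1), chunked_sum(xs, 10), chunked_sum(xs, 1000))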


Your comment, along with cpgxiii and n2d4’s are all really good. I have a question: suppose training and inference of an LLM were made to be deterministic at the cost of performance.

Would the cost be “everything will take twice as long” or would it be more like “inference will take a week and training will take a couple lifetimes”?

If it’s the latter, then it seems disingenuous to call this a “bug.” It’s like saying F1 cars could be horse drawn, and they only use internal combustion for “performance reasons.” If it’s the former, then maybe there is a more interesting discussion to be had about the potential benefits of determinism? (That said, I agree with n2d4 that it’s stupid to insult the authors. Talk is cheap and building is hard.)


> That said, I agree with n2d4 that it’s stupid to insult the authors. Talk is cheap and building is hard.

If your code offers an expectation of determinism then it's sloppy to not distinguish where there isn't determinism. There's nothing difficult about writing a comment to the effect of "this function is non-deterministic. For deterministic results, use X".

The code is sloppy if the developers didn't consider determinism and offer nothing to consumers, or if the consumers writing software cannot know where non-determinism is introduced.

If that's somehow insulting then I'd say someone has very thin skin.


There are flags[1] for that indeed. It feels like half of the people commenting here don't know all that much about the topic they're commenting upon.

1: https://pytorch.org/docs/stable/generated/torch.use_determin...
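
Roughly what those flags look like in practice (a sketch following the PyTorch reproducibility notes; details vary by version and backend):

    import os
    import torch

    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"   # required for some deterministic CUDA ops
    torch.manual_seed(0)
    torch.use_deterministic_algorithms(True)   # raise an error if a non-deterministic op is used
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False     # benchmark mode can pick different kernels run to run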


Nothing I said conflicts with this, though?

Yes, if you eschew determinism for the sake of raw performance then the result will be non-deterministic. But you don't have to do this, nor is it inherently untenable to solve these problems in a deterministic way.

Sure it may require some performance overhead, and increase development time, but it's no different than writing deterministic code elsewhere. It's disingenuous to hand-wave away the solution because of some opaque cost or overhead we're unwilling to entertain. None of the parent posts ever mention performance tradeoffs.

In particular there is no indication that the problem being discussed couldn't be solved with determinism in an equivalent amount of time. You're making my point: GPUs are deterministic, software may decide not to be.


FWIW, I took “GPUs are deterministic” to mean they are deterministic in all possible intended use cases. This is not strictly true, since the whole point of using them is massive parallelism, which brings along non-determinism, for reasons that others have noted. Of course it’s possible to choose to forego that, but what is the point of a GPU in that case?


This is a false dichotomy. You can have massive parallelism and determinism.

You can trade determinism away for convenience, but that doesn't make things easier: now you have to deal with the non-determinism.

But to suggest that massive parallelism somehow implies non-determinism is quite disingenuous from my perspective.

We have mutexes and lock-free ring buffers and stable sorts and all sorts of bells and whistles to make parallelism safe elsewhere. We also already have tools to solve this for GPUs.


I think whether it’s a bug or not depends on the software requirements and expectations. If the code has some expected bounds on runtime, switching the GPU code to sequential processing (for the sake of exact reproducibility) would break that expectation and could be considered a bug as well. If we expect performant code and exact reproducibility, that just might not be possible…


It's hard to call it a bug given that any concurrent float sum or product will vary with the amount of concurrency. Even if you order the final values per thread before reducing, the result will differ if you use a different number of threads to split the problem.

Because in floating point arithmetic, ((1 + 2) + 3) + 4 can differ from (1 + 2) + (3 + 4).
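
A concrete double-precision example of the grouping effect:

    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c)                 # 0.6000000000000001
    print(a + (b + c))                 # 0.6
    print((a + b) + c == a + (b + c))  # False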


The PyTorch documentation has an entire section about how to make your code deterministic. In my experience, the performance difference is negligible.

https://pytorch.org/docs/stable/notes/randomness.html#avoidi...

Unfortunately, determinism across devices or even driver versions is not that easy. You'd have to write your own BLAS kernels using only basic operations, which are guaranteed to follow IEEE 754 semantics.

https://docs.nvidia.com/cuda/floating-point/index.html

One gotcha is fused multiply-adds, which the compiler may or may not introduce, so you have to wrap all your floating point operations with __fma* intrinsics to make sure the compiler does not interpret them differently.
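
For intuition on why contraction changes the numbers, here is a minimal sketch using math.fma (which, as far as I know, requires Python 3.13+; the CUDA intrinsics behave analogously):

    import math   # math.fma is available from Python 3.13 onward

    a = 1.0 + 2.0 ** -27
    c = -(1.0 + 2.0 ** -26)

    print(a * a + c)          # 0.0: the multiply rounds first and the 2**-54 term is lost
    print(math.fma(a, a, c))  # 5.551115123125783e-17 (= 2**-54): one rounding, term preserved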


As far as I can tell this article doesn't explain why this happens on the GPU (for example, why Tensorflow's reduce_sum is non-deterministic). My hypothesis is that this is entirely due to concurrency: if the same code can be run in two or more different interleavings, they can produce different results. This is corroborated by the first answer here [0].

If so, this exact same issue happens in CPU code as well: have two or more threads, run the program many times, observe different interleavings that expose race conditions which (depending on the algorithm) may or may not produce different results. This can happen even if you don't use floating point, and has nothing to do with floating point non-determinism itself. For example, have a thread print "Hello" and another thread print "World"; even without tearing, you may see either Hello World or World Hello on the screen.

Now, proper floating point non-determinism happens in two cases. One is that when you run the same code on two different architectures you could get different answers (because of rounding modes, or because some architecture doesn't support subnormal numbers or signaling NaNs, or because transcendental functions like sine are implemented with different accuracy, etc). In this case it's deterministic when run on the same machine, but may run differently on another machine with a different architecture.

The other case is that some "optimizations" actually break your code if applied carelessly (you enable those broken optimizations with -ffast-math in C for example). Among other things, this may break numerical stability of algorithms like Kahan summation. And, if you let the compiler decide which exact optimizations will be applied and in what order, you get non-determinism between different compilers. So in this case it's deterministic when compiled with the same compiler, but may run differently with another compiler.

[0] https://stackoverflow.com/questions/50744565/how-to-handle-n...
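
For reference, the Kahan summation mentioned above is tiny; a sketch of why -ffast-math-style reassociation breaks it:

    import math

    def kahan_sum(xs):
        total, c = 0.0, 0.0
        for x in xs:
            y = x - c                # `c` carries the low-order bits lost so far
            t = total + y
            c = (t - total) - y      # a "fast-math" compiler may simplify this expression to 0
            total = t
        return total

    xs = [0.1] * 10_000_000
    print(sum(xs))        # naive running sum drifts from the exact value
    print(kahan_sum(xs))  # compensated sum stays much closer
    print(math.fsum(xs))  # exactly-rounded reference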


To nitpick in addition to the already existing comments: this has nothing to do with GPUs per se. You would see the same issue in multithreaded code on a CPU. Even on a single core CPU this can happen with a multithreaded program depending on how the OS schedules and interrupts the threads. It just happens to be an implementation choice in a GPU library/API.


> I would generally refrain from insulting other people's code if you don't know much about the system it's written on.

Well, the utterly shoddy state of most of the code in the AI/ML ecosystem is observable to anyone trying to follow a guide on how to set up Stable Diffusion on AWS. It's a fucking mess of trying various combinations of driver versions, Ubuntu kernel versions, and Python versions, and the fact that Python requirements.txt (similar to NodeJS) doesn't pin versions of transitive dependencies doesn't help, because it makes for very brittle and non-reproducible builds/guides. Oh, and at least some of that stuff won't work without root.

Yeah I'll keep AI shit cordoned off in its own subnet.


Years before ChatGPT I made the joke that AI would want to take over the world like a computer virus, but it’s written in Python, so it can’t figure out how to install itself on other computers.

I think the joke was on Twitter, RIP.


I 'member that joke. Think it must have gone around while OpenStack transitioned from Python 2 to 3. What a fucking mess.


There isn't much of a culture around code quality in ML / AI / DS.


It's not a code quality issue. There are ways to ensure determinism (sometimes you just need to set a flag); however, they are intentionally not used in order to gain performance.


I don’t know about how insulting it is, I don’t like rushing things out but we’ve all had to.

People are rushing like crazy to get there first with X for AI all over the place, it would be pretty shocking if there weren’t wires sticking out everywhere.

I don’t think that says anything positive or negative about the hackers involved.


it’s basically always reasonable to insult someone’s code because we are computer programmers and we know what we have done


So you can generate true random numbers using just the GPU parallelism? Consider me impressed!



You've moved the goal posts. You're conflating CUDA with GPUs. From Wikipedia:

> CUDA (or Compute Unified Device Architecture) is a proprietary and closed source parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for general purpose processing, an approach called general-purpose computing on GPUs (GPGPU). CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements, for the execution of compute kernels.

Is the issue we're discussing because of the GPU or is it because of choices made in software libraries?

The parent is right, there is a deterministic, reproducible way to solve these problems, so if determinism is a desired or expected property, then this is a bug. It's not an inherent problem like you make it out to be. The fact that "workarounds" are given in what you link proves this.


What you said can be violated when parallelism is involved. One example: we know some floating point operations such as addition and multiplication are non-commutative, so the result of a reduction depends on the order of execution. In a parallel situation, some implementations make the order of reduction non-deterministic (for performance reasons), and hence the final result is also non-deterministic.


Minor nit but commutative is the wrong term. Floats always obey a+b == b+a, but not associativity: (a+b)+c != a+(b+c).


Right!


It's still deterministic even if the results appear not to be. If you have memory, CPU cache, and CPU registers in the same state, you will get the very same results. You need a source of entropy for the results to be non-deterministic.


Actually, clock domain crossing for asynchronous clocks (which is AFAIK typical for granular dynamic frequency scaling, i.e. running CPU cores at individual frequencies instead of all at the same one, because the clock smoothly ramps over to any new target frequency to prevent glitches) implicitly includes thermal noise: the raw transistors that decide which of the two involved clock edges happened earlier end up making a truly random decision when the edges arrive at (almost) exactly the same time. And this is involved in even an L3 hit's latency.


Sure, but they will never be in the same state, which can even be used as a source of entropy: https://link.springer.com/article/10.1007/s11071-015-2287-7


Mathematically, computation is deterministic. The author dismisses or ignores the many ways that the physical apparatus driving the computation can force the result of a software application to be a function of time.

Calling GetTimeOfDay() could do it.

Clock frequency drift between multiple processors could do it.


Quantum computers fall under the category of computers.

Quantum computation relies on quantum mechanics.

Quantum mechanics is not deterministic.

So, quantum computers are not deterministic.

Therefore, unless P=NP, not all computations are deterministic.


When theory fails to consult reality.


Hmm, I wonder whether a simulation of Alhazen's Circular Billiard Problem[1], run for n steps, would give the same results across multiple runs.

[1] https://forumgeom.fau.edu/FG2012volume12/FG201216.pdf


On a large scale, not having memory with good ECC is enough to have entropy.


Small nit: you mean errors due to floating point math.


Not sure I understand the excerpt from the referenced paper.

Is it saying that part of its more-efficient inferencing relies on mixing tokens from completely-separate inputs – e.g., from other users? And then, depending on what other inputs chance into the same grouping, the relative assignment-to-'experts' varies, and thus the eventual completions?

If so, I'd see that as not just introducing non-determinism, but also potentially making the quality of your responses dependent on how-many-concurrent-requests are fighting for the same expert-allocations.

(For example, maybe the parts of the system best at translating/interpreting Hindi give worse results during peak usage hours-of-the-day in India, when the most concurrent inputs are competing for that same competence.)

Perhaps also, this is another possible explanation for perceived quality-degradation over time. When certain tests were reliably succeeding earlier, there was less congestion for the relevant 'experts'. Now, with more concurrent use, those same tests aren't as reliably winning as much of relevant 'experts' effort.

This may also suggest a bit of a quagmire: on whatever domains some sub-experts seem impressively good, initially, even more proportionate use will be attracted. But such new congestion means all the copycat use no longer gets the same expert allocations – and thus the initially-impressive performance degrades.

(And if the effect is strong, & known-but-undisclosed-by-OpenAI, does it amount to a bait-and-switch? Attract users with unrepresentative excellence on an initially-uncongested Mixture-of-Experts system, but then offer them the lower-quality results from a more-congested system.)


The results are showing essentially 12 unique responses from 30 tries… not what you would expect from mixing tokens.

I think it groups the batch up differently, so if I have a batch of 10, and it groups it up into 2 groups of 5, if my prompt makes it to the second group or 1st group I get a different answer. But if I’m in the same location in the batch, then I get the same answer.

The whole batch is deterministic given the same batch (sequences and ordering), but if you shuffle the batch then you lose that determinism.
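
A toy sketch of that batch effect (top-1 routing with a per-expert capacity; invented numbers, not OpenAI's actual implementation):

    import numpy as np

    N_EXPERTS, CAPACITY = 4, 2   # hypothetical sizes

    def route(router_logits):
        """router_logits: (n_tokens, n_experts), processed in batch order."""
        load = [0] * N_EXPERTS
        assignment = []
        for scores in router_logits:
            e = int(np.argmax(scores))
            if load[e] < CAPACITY:
                load[e] += 1
                assignment.append(e)
            else:
                assignment.append(-1)   # expert buffer full: token is dropped / overflows
        return assignment

    rng = np.random.default_rng(0)
    mine = rng.normal(size=(3, N_EXPERTS))      # "my" tokens
    batch_a = rng.normal(size=(5, N_EXPERTS))   # co-batched tokens, run 1
    batch_b = rng.normal(size=(5, N_EXPERTS))   # co-batched tokens, run 2

    print(route(np.vstack([batch_a, mine]))[-3:])   # routing of my tokens can change...
    print(route(np.vstack([batch_b, mine]))[-3:])   # ...depending on who shares the batch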


This seems like a plausible outcome, and if true it could spell disaster for OpenAI models relative to the competition and open source models. Currently, reliability is one of the core obstacles preventing widespread adoption of LLMs in many business-critical workflows. And if these rumors, that GPT-4 is inherently non-deterministic and unreliable, are true, then most enterprises are better off finetuning open source LLMs—which are just as capable—for their specific domains. They stand to gain better performance that way anyways, as domain-specific models will always outperform generalist ones.


> And if these rumors, that GPT-4 is inherently non-deterministic and unreliable, are true, then most enterprises are better off finetuning open source LLMs—which are just as capable

Wait, am I misunderstanding you? I feel like I've had a head injury or something, because I've never heard of an open source LLM that's as capable as GPT-4 (in most scenarios).


Only on specific domains, these models don't become generalists like GPT-4, they can become task experts for a single task.


Fine-tuned MedPalm is worse than GPT-4 on most Medical Challenge Tests. Fine-tuned Minerva is much worse on arithmetic benchmarks.

The LLM space is just different. There's no guarantee a fine-tuned model will beat a bigger generalist one.


> domain-specific models will always outperform generalist ones

That's only true assuming you have enough data to train a domain-specific model, and the expertise to train it and test it correctly.

I've encountered cases where an image recognition task could be accomplished well with a very general model like CLIP, but people still fine-tuned another model on their own small data set because that's considered better.

A domain specific model might be more likely to fail on weird outliers not present in the small domain specific training data.

> could spell disaster for OpenAI

Nah I don't think so. They are not all in on one specific model architecture. If the current architecture is found to have serious unfixable flaws then they'll just change architecture.


>as domain-specific models will always outperform generalist ones

This is not even close to true for Language models.


Fine-tuned MedPalm is worse than GPT-4 on most Medical Challenge Tests. Fine-tuned Minerva is much worse on arithmetic benchmarks.

The LLM space is just different. There's no guarantee a fine-tuned model will beat a bigger generalist one.


_If_ 3.5 is a MoE model, doesn't that give a lot of hope to open source movements? Once a good open source MoE model comes out, maybe even some type of variation of the decoder models available (I don't know whether MoE models have to be trained from scratch), that implies a lot more can be done with a lot less.


I agree, and really hope that Meta is doing something in that vein. Reducing the FLOPs:Memory ratio (as in Soft MoE) could also open the door to CPU (or at least Apple Silicon) inference becoming more relevant.


It would be bad for single-consumer-GPU inference setups.


Not an expert (no pun intended), but MoE where each expert is actually just a LoRA adaptor on top of the base model gets me pretty excited. Since LoRA adaptors can be swapped in and out at runtime, it might be possible to get decent performance without a lot of extra memory pressure.


While MoE-LoRAs are exciting in themselves, they are a very different pitch from full on MoEs. If the idea behind MoEs is that you want completely separate layers to handle different parts of the input/computation, then it is unlikely that you can get away with low-rank tweaks to an existing linear layer.


Could this work well with distributed solutions like petals?

https://github.com/bigscience-workshop/petals

I don't understand how petals can work though. I thought LLMs were typically quite monolithic.


Petals does a layerwise split I think. You could probably run separate experts on each system. I don't think this sort of tech is very promising so I haven't looked.


It could be good if the relevant expert(s) can be loaded on demand after reading the prompt? If the MoE is, say, 8x8B params, then you could get good speed out of a 12GB GPU, despite the model being 64B params in size. Or am I misunderstanding how this all works?


I feel like this introduces the potential for weird and hard-to-implement side channel attacks, if the sequences in a batch can affect the routing of others.


I think you’re right. Would be very hard to exploit I imagine though.


Hard like building a virtual machine in an image decoder? If there’s a way there’s a will.


the tools available to imagine such things are limited today.

the language models in our heads have not caught up to the ones in our browsers.

as the similarities and associations crystallize a bit better, it won’t look so hard.

bookmark this if you think it bullshit. eight months.


I don't expect LLMs to be good enough at engineering to trivialize this kind of thing for a while - possibly never, if something else comes along and outcompetes them.


not models.

monkeys.


Same thing was said about Spectre-like bugs


This is _excellent_ work, I've been adamantly against MoE for a set of reasons, this is the first compelling evidence I've seen that hasn't been on Substack or a bare repeating of rumor.

I had absolutely no idea GPT4 was nondeterministic and I use it about 2 hours a day. I can see why a cursory look wasn't cutting it: they "feel" the same in your memory, with a lot of similar vocab usage, but are formatted entirely differently, and have sort of a synonym-phrase thing going where some of the key words are the same.


Thanks. I'm really no expert (:P) on MoE research; I just noticed what was written in the Soft MoE paper and felt a need to check.

The non-deterministic outputs are really similar, yeah, if you check the gist examples I linked https://gist.github.com/152334H/047827ad3740627f4d37826c867a.... This part is at least no surprise, since the randomness should be bounded.

I suspect OpenAI will figure out some way to reduce the randomness at some point, though, given their public commitment to eventually adding logprobs back to ChatCompletions.


I don't think this commitment had any plausibility. Token "probabilities" only have a straightforward probabilistic interpretation for base models. In fine-tuned models, they do no longer represent the probability of the next token given the prompt, but rather how well the next token fulfills the ... tendencies induced by SL and RL tuning. Which is presumably pretty useless information. OpenAI has no intention to provide access to the GPT-4 base model, and they in fact removed API access to the GPT-3.5 base model.


Topic laundering, the probabilities are the probabilities, you don't suddenly get wrong probabilities with more training on more data


You do, because it’s not just more training it’s PPO updates instead of MLE. It’s no longer trying to estimate the token distribution of the training corpus, it’s trying to shift logprobs into tokens that maximize expected reward from the RM. The GPT-4 technical report has a figure showing that logprobs become less well calibrated as confidence scores in the RLHF vs pre-train model.


Fascinating, ty


GPT4 web chat for two hours a day? I buy that. Using the API repeatedly for the same inputs, eg developing a program, and the non-determinism is hard to miss.


I would imagine that most people use nonzero temperature, so they won't need to look for any explanation for non-determinism.


Literally the first thing I did when I had llama.cpp working was set the temperature to 0 and repeat queries.

(but that's mainly because I'm a weird old scientist with lots of experience with nondeterminism in software).


I did too; k-means broke me a couple years ago. But I never ran temperature 0 with long outputs, and I trusted my instinct instead of actual diffs. This was the first time I actually diffed.


Yeah, it's one of the first things you notice when trying to do some kind of "feed GPT some data and get it to produce a novel answer to a question" task with the API.


No, because if you wanted a novel answer, why would you set 0 temperature? ;)


> I've been adamantly against MoE for a set of reasons

Such as?


It was completely unsubstantiated, based on rumours from a blog, but everyone repeated it as fact.


I think it is pretty compelling that almost all of the people doing research into Switch Transformers at Google were hired into OAI. I am not sure if that is publicly reported, but once Hotz leaked those details about the models, I went to check where the authors of those papers are now and.... yep


What do you use it for? Are you using many plugins? Curious what sort of insights someone using the tool this much might have, perhaps even through the batch of features released this week.


Mixture of Experts


Thanks. I assumed it was Margin of Error. The article doesn't expand the acronym until midway through the post, where it appears almost accidentally. Perhaps the intended audience is a mixture of experts, of which I'm not a part.


I suspect the article is written primarily to be clear to people sufficiently immersed in the relevant areas to be able to have a concrete opinion on the theory.

Also I strongly suspect that at least in the case of -me-, an article that was easier for me to understand wouldn't make the underlying theory any easier for me to judge.

(on the upside, at least I -did- understand and appreciate your self deprecating pun :)


Thank you! I knew it couldn't mean "Merger of Equals"... but then again, if those experts are equals, then maybe that acronym also works ;-)


The GPT-3.0 "davinci-instruct-beta" models have been returning non-deterministic logprobs as early as early 2021. This is speculation. CUDA itself often has nondeterminism bugs.

text-davinci-001 and text-davinci-002 were trained through FeedMe and SFT, while text-davinci-003 was RLHF; the models themselves have more variance at high temperature.


What about the foundation models, i.e. davinci and code-davinci-002?


"these tokens often compete against each other for available spots in expert buffers. " So is this also why ChatGPT is often just writing placeholders in place of functions when I ask him for some long code?


> these tokens often compete against each other for available spots in expert buffers.

Hold up, does this mean that under heavy load the results change? Does this explain why it sometimes feels like the output quality changes?


MoE: Mixture of Experts


There’s a comment that’s 3 hours older than yours that clarifies this.


I searched for MoE in the comments and didn't see it. ah, you must mean this one https://news.ycombinator.com/item?id=37006549, which doesn't include "MoE", so that's why I didn't find it. Still, my comment's upvotes show it was helpful to some - maybe they searched for "MoE" too, instead of "mixture of experts".


I asked GPT to explain this:

>In the MoE approach, different "experts" or portions of the model are selected for different parts of the input data. The selection of which experts to use can be influenced by several factors, including the specific content of the input data, the order in which data is processed in a batch, and possibly even minor variations in the internal state of the model.

>This "expert selection" process introduces a level of stochasticity, or randomness, into the model's operation. For example, if you process the same input data twice in slightly different contexts (e.g., as part of different batches), you might end up consulting slightly different sets of experts, leading to slightly different outputs.


> It’s well-known at this point that GPT-4/GPT-3.5-turbo is non-deterministic, even at temperature=0.0

Interestingly, on another discussion there was a claim that setting the temperature to 0.0 made gpt-4 deterministic: https://news.ycombinator.com/item?id=36503146


This guy probably never did anything nontrivial with the API - you notice almost instantly that the chat models (both 3.5 and 4) are nondeterministic at 0 temperature. Source - built a documentation search bot and had it crap out on me on copy pasted prompts when I was demoing it.


Apparently, and I haven't tested this, just from what I read, the simpler GPT-2 models are deterministic at 0 temperature.


If you want to make it deterministic, just cache the responses keyed by queries.
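
Something like this, keyed on the full request (a sketch; `call_model` is a stand-in for whatever API call you use):

    import hashlib
    import json

    _cache = {}

    def cached_completion(call_model, **request):
        key = hashlib.sha256(json.dumps(request, sort_keys=True).encode()).hexdigest()
        if key not in _cache:
            _cache[key] = call_model(**request)   # only hit the API on a cache miss
        return _cache[key]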


How interesting. I was just discussing this last night with our analysts after I experimentally noticed that temp=0.0 (and all penalties/top_p set accordingly) still showed non-deterministic behavior. Wasn't sure why this was, and now this article comes about.

The explanation makes quite a bit of sense.


This is a plausible hypothesis. I’m curious whether OpenAI has considered this already and examined it. I feel like an average senior eng could eval this in under two focused days, but maybe OpenAI has less unit testing than I expect.


Well, a colleague of mine managed to build a non deterministic GET REST API endpoint. :D


This hypothesis makes a lot of sense. If indeed GPT-4 is a sparse MoE—which I believe it is—then OpenAI must have tested and proven their initial idea of a large-capacity MoE LLM by first training/building a smaller one. This smaller test model might be gpt-3.5-turbo.


I see in the comments there seems to be a big misunderstanding between two uses of “non-deterministic”: 1) from normal English: cannot be determined beforehand (results may vary); 2) from theory of computation: loosely “parallel computation” (unknown path to the solution).


For floating point math, there's no distinction, as "parallel computation with unknown path to the solution" inherently implies "results will vary", as (a+b)+c != a+(b+c).


I wonder if there’s a side channel attack in there waiting to happen..


Determinism should always be an option in any system.


can somebody make some quantum AI, that's super deterministic.


Off topic

> 3 months later, reading a paper while on board a boring flight home, I have my answer.

I noticed people from hacker news routinely read scientific papers. This is a habit I envy but don't share.

Any tips or sites for someone interested in picking up more science papers to read?


For just getting started I recommend collections:

1. Ideas That Created The Future[1]. It's a collection of fiftyish classic CS papers, with some commentary.

2. Wikipedia's list[2].

3. Test of Time awards[3]. These are papers that have been around for a while and people still think are important.

4. Best paper awards[4]. Less useful than ToT as not every best paper is actually that good or important, and sometimes the award committees can't see past names or brands for novel research.

5. Survey Journals[5]. Students often get their research started with a literature review and some go the extra step to collect dozens of papers into a summary paper. I subscribe to the RSS feed for that one, and usually one or two are interesting enough to read.

6. Citation mining -- As you read all these, consider their citation list as potential new reading material, or if an old paper leaves you wanting more, use Google Scholar to find a papers that cited what you just read.

[1]: https://www.amazon.com/Ideas-That-Created-Future-Computer/dp...

[2]: https://en.wikipedia.org/wiki/List_of_important_publications...

[3]: https://www.usenix.org/conferences/test-of-time-awards

[4]: https://jeffhuang.com/best_paper_awards/

[5]: https://dl.acm.org/journal/csur


I'd like to disagree with this. In particular, about [1]: It is a collection of papers on many different topics. There is little technical overlap between Alan Turing's Entscheidungsproblem paper, for instance, and Hoare's paper on axiomatic semantics. Also, the papers are all from the 70s. They're uniformly influential papers, and have shaped the field, but the fields and the vernacular used by working researchers are very different now. At best, the papers approximate a four-year undergrad curriculum in CS, and at worst, are a recipe to get distracted and overwhelmed. The link to Wikipedia [2] is somewhat better in that the papers appear to be more modern, but suffers even more from the problem of diversity.

A somewhat similar problem arises with test-of-time and best paper awards. To elaborate on my complaint, imagine the exaggerated case of someone trying to understand modern science by intensely focusing on the work of researchers who won the Nobel Prize. Clearly all very important work, but understanding the 1990 Physics Nobel Prize (on electron-proton scattering) is of no use to understanding the work for which the 1991 Nobel was awarded (complex systems and polymers).

There are two things that (I'm assuming the OP's field of interest is computing) a CS education provides: At the undergrad and in the early stages of grad school, breadth of topics, and their modern synthesis. You don't spend much time reading papers (at least in an undergraduate education), but you understand the basics, and get a feel for the problems considered and the sensibilities of researchers. In an intermediate-level graduate seminar, you pick a narrow topic, and focus on papers in that topic. The first papers in the area (like Dijkstra's papers on distributed computing), the best / most important papers in the area, and the latest papers on topical interests (like Merkle trees and blockchains). There is thematic and technical continuity from one paper to the next, and you start to understand the story being told. Then, late in graduate school, and in the rest of one's professional career, one starts reviewing papers that haven't even been published. At this point, you see the story being written: the steps and the missteps, and the memorable and not-so-memorable papers in a field. To truly understand a field, one needs to read not just the great papers, but also the middling ones.

And one needs to concentrate on a topic. The thing about a forum such as HackerNews is that for every topic of interest, there's likely a person here who's an expert in the area, but it is easy to confuse that observation with the much stronger claim that there's a person here who's an expert on every topic. The last of those people died in the mid-20th century, if they ever existed.


I feel like you're giving advice on how to become a PhD student, and frankly, that's not the point of the question, and if it is: any grad student who can't read papers should ask their advisor for advice.

So I take OP's perspective to be from a practitioner (such as myself). Apart from my colleagues in R&D, we aren't called upon to write new papers that demand expertise in ever-increasing narrowness. Instead we are to solve the needs of the product, usually regardless of specific expertise. So we need to be more broadly equipped, as it's typically better to have a hammer and a screwdriver in the toolbox than ten different screwdriver bits of varying niche application.

As an example, the TF-IDF paper curated in [1] has been broadly useful as a log analysis tool to surface interesting log lines and remove the mundane common "error" logs. There have been many advancements since then, using Bayesian techniques or deep learning, but this one is simple enough and cheap enough to deploy.
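
For the curious, that log-analysis use looks roughly like this (a sketch using scikit-learn; "app.log" is a hypothetical file):

    from sklearn.feature_extraction.text import TfidfVectorizer

    log_lines = open("app.log").read().splitlines()

    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(log_lines)            # (n_lines, n_terms) sparse matrix
    scores = tfidf.max(axis=1).toarray().ravel()    # lines dominated by rare terms score high

    # Surface the most "interesting" lines; mundane boilerplate sinks to the bottom.
    for score, line in sorted(zip(scores, log_lines), reverse=True)[:20]:
        print(f"{score:.3f}  {line}")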


Old ideas that were good but didn't become common/standard are something I run across a fair bit in papers and yeah, they're often way behind the state of the art but also a lot easier for me to understand/implement and far better than the relatively naive approach I'd've taken otherwise.


From there, just keep a reading queue. If you notice a particular journal is a good source of material, consider subscribing to it.


> I noticed people from hacker news routinely read scientific papers.

Do they? I suspect that most don't, and those that do are either in specialized careers or are engaged in some kind of scientific research.

Some interesting research gets disseminated via Twitter and chatrooms. Or maybe you follow a podcast that mentions new research. But you might also be following new publications from a handful of reputable journals, or following an Arxiv category, or looking through new conference papers. It's very easy to get overwhelmed with new research to read, and not knowing what's worth your time, unless you're already very familiar with the field and well-versed in the material.


Long time HN'er college dropout and I read a LOT of scientific papers. Probably an average of 4 a week over the past couple of decades, sometimes reading 40 in a week.

I probably averaged 20 a week back in March when open source AI was booming in the wake of Llama and on the heels of GPT-4.


> Long time HN'er college dropout and I read a LOT of scientific papers. Probably an average of 4 a week over the past couple of decades, sometimes reading 40 in a week.

I'm guessing that you don't actually dive into each paper to 100% understand it? I find it takes me at least 10 hours of reading/looking things up per paper before I could consider that I fully understand it. But that would mean, if I want to do 4 papers per week, I'd spend at least 40 hours/week, that's like a full-time job, so obviously I don't have time for that.

How much time would you estimate it takes you to read through one paper? And how much of the content would you estimate gets retained and can be recalled when you wish?


> I'm guessing that you don't actually dive into each paper to 100% understand it

Depends on the paper's content but there's often sections that you don't need to 100% understand to get value. For example, in survey papers, there's typically a section that is basically "what queries we typed in at the library." I skip those and I think you can too =)

For practical papers, sometimes the evaluation can be skimmed. Author's benchmarks are usually designed to be the most favorable to the paper's novel approach, so I don't spend too much time thinking about them.

Similarly, Related Work sections can be skimmed. If you're well read in the field, you probably won't learn anything from it, and if you're entirely unread "its like X but different because Y" isn't helpful as you have no idea what X is, beyond the one dense sentence the paper just gave you.

> And how much of the content would you estimate gets retained and can be recalled when you wish?

If I really want to remember a paper, it goes into Anki flashcards. This is rare, personally. Usually only for tech I support in prod.


How much I understand, and how long it takes to get there, depends on a lot on how well-read I already am into a field.

I can read and fully understand an ML paper in an hour or so. But 6 months ago it took me a day to get through a couple of ML papers and I did not fully grok the mechanics of things like attention heads.

I'm more read in material science, chemistry, pharmacology, and cognitive science. Computer science (especially quantum computing, networking, and cryptography), photonics, and pure math are also big areas of interest for me.

Anything outside of that wheelhouse will take longer and I'll initially understand less, depending on how distant it is from my stronger subjects.


That's quite a range. How do you manage the signal-to-noise ratio? Normally that requires significant familiarity with the field, or a very specific query in mind. For example I only read papers in medicine when I'm researching an actual medical issue that I or someone else is having.


I follow a lot of highly respected researchers and (research minded) operators in the fields I'm interested in. Very often they post about papers of interest on X/twitter or their personal blogs. I also follow a handful of science communicators on YouTube who do short summary videos of papers of interest (Two-Minute-Papers, Anton Petrov, Sabine Hossenfelder, to name a few).

Other times I notice a general trend (ex. increasing discussion of a new paradigm X, more startups raising to work on Y, or a large chunk of talks at an annual conference being variations of Z).

Then I ask the aforementioned academics and operators in my circle what papers I should read to get a handle on XYZ and/or simply follow the citations.

Given the amount of followers a lot of these researchers, operators, and science communicators have, I do not think I'm remotely unique in my efforts.


You don't typically need to pore over the paper and absorb every detail. Usually you can skim a little and backtrack if you missed something.


I strongly agree.

Once upon a time, I was in condensed matter physics. I was (and remain) interested in a very specific niche within that, and I read a small handful of the papers that were published each week. I’m not actively researching or publishing anymore so I cap this to one or two per month now, and mostly scan over them to see if anything piques my interest.

I was still interested in condensed matter as a whole, at the time, and attended group seminars once a month to see what other people were currently excited about - there wasn’t any hope of me reading a cross section of all condensed matter papers because there is far more published per week than I’d be physically able to even glimpse at, and most of it is stuff I don’t understand or particularly care about.

I was likewise interested in physics as a whole, and twice a year I’d attend a departmental seminar and see what people in the entire department were interested in. Most was far over my head, but it still directed me to a small handful of papers that I’d read for the hell of it. Of course, I couldn’t do this without first hearing people review the research. There’s far more published per day in physics as a whole than I could read in a year, and most of it I’d find unrelatable and uninteresting.

I guess where I’m going with this is that anyone with a specific interest is already reading papers. It’s their job. Anyone with a general interest would find actively pursuing paper hunting to be a waste of time with a ridiculously bad signal to noise ratio. Instead, they should use channels that align closely with their own interests, through which they can get recommendations to read papers from the aforementioned specialists who have already filtered out much of the noise themselves. At that point, they should actually read the resulting papers.

There is another trick, though, and that’s to find an individual who publishes two unrelated pieces of work that you find interesting, then read their work and maybe those of their coauthors. Be careful, though, because this is a slippery slope to specialising, after which you’ll find yourself back at the point where you aren’t following 99.9% of the stuff you wanted to follow in the first place.


I typically look up and read a paper when it's referenced in discussion or cited in something else, I'm reading/watching, and the purported contents seem surprising to me. This normally happens 3 or 4 times a week.

Honestly many papers are written in a way that's hard to approach and difficult to understand unless you're prepared to reread them a few times.

You're better off just getting your science news from actual science communicators and not the raw source.


> I noticed people from hacker news routinely read scientific papers.

Highly doubt that. It’s very hard to actually read scientific papers when you are not actively doing research.

You can’t just read a research paper in isolation. It’s next to useless. You need to understand its context, where it stands with regard to its sources and what it brings which is actually new and valuable. It’s nearly impossible to do properly if you are not fully immersed in a research subject.

I don’t even know how you would skim the introduction and sources to filter out articles which are immediately obviously useless without being immersed in a field.

I guess you can obviously go through lists of papers which have been deemed worthwhile by someone else or got prizes. That solves the filtering issue, but then nearly every time you will be better served reading a textbook presenting the ideas in said papers.

I fully expect the HN readership to contain a significant amount of students and actual researchers which explain why you encounter people reading papers but these people aside I would be surprised if the habit is common.


You don't need to be doing research to read an ML paper. With some general knowledge in AI you should be able to understand most papers.

And even then, sometimes you don't understand or care about their procedures, and you just want to look at the pretty results (check out this song they generated using AI!). There's even a very popular YouTube channel that focuses on this (two minute papers).

Finally, you usually hear about these cool papers via Twitter / X


> You don't need to be doing research to read an ML paper. With some general knowledge in AI you should be able to understand most papers.

I have a degree which involved reading some ML papers and I seriously doubt that. The field is flooded with papers which look good when you quickly read them but are actually worthless because they misrepresent the state of the art or intentionally don’t compare their methods with other papers they should know about.

> And even then, sometimes you don't understand or care about their procedures, and you just want to look at the pretty results

That’s fair but I wouldn’t call that reading a scientific paper.


Don't read them for the sake of reading them. Read them to solve your current problem or to keep up with advancements in a narrow field you love. Most papers (especially the ones in deep learning) also seem to have a mathematical fetish (to put it mildly) where needless representations are used where none are required and things are self-evident (for example, that inputs belong to the real number set). It ends up making the paper pseudo-complex and unapproachable. Most papers are doing average/summation/series operations, but instead of just saying so, use the symbols all over the place.

So even if a few papers appear tough, keep reading them and digest your first paper thoroughly. You will find subsequent papers are mostly a rehash of existing work with a similar fetish to make trial and error appear like mathematically sound research. Once in a while, you will find some paper which is fully theoretical and tries to prove that the inputs/outputs/components of models have certain well-known mathematical properties and hence can be reasoned about similarly. These are rare and would be difficult to parse through.

PS: Best papers I have seen are from deepmind where the approaches usually described are novel, varied and path breaking. Worst ones are - well no names but those that just use training and eval sets generated by GPT4 and try to prove things empirically


> Most papers (especially the ones in deep learning) seem to also have a mathematical fetish (to put it mildly) where needless representations are used where none are required and are self evident (for example inputs belong to Real number set). It ends up making the paper pseudo complex and unapproachable.

I completely disagree with that. Spelling out math is literally something out of the 12th century. It just hinders understanding, if you have basic STEM-level math literacy, which anyone who reads an ML paper is assumed to have (how could you seriously study linear algebra and calculus without it?).

Math may actually be the first thing you recognise in a paper, which can help you cross-reference the text to understand it.


Build the habit.

When google doesn't return a good result to a specific question, switch to scholar.google.com and start reading abstracts. Everything may seem like an opaque maze at first, but just keep reading and patterns start emerging quickly and become useful.


I don't mind reading research papers, but they're really annoying to read on a phone screen. I remember a few years ago, an HN comment shared a link to some tool that could convert a PDF to single column text and make it more readable on a phone screen, but I can't find it. Anyone remember this or have the link?


I use an android (and iOS I think) app called Xodo. The "reader mode" re-flows the PDF into a screen-width single column like an e-book. The latest update really buried the option in the menus, but it's there somewhere and works pretty well.


> but they're really annoying to read on a phone screen.

+1. I've already read probably 100 research papers this year in search of solutions to some technical problems, mostly while lying in bed with a tablet. I wouldn't read as much without it.


Once phones got relatively big (i.e. 'phablet' ceased to exist as a concept because that size was just 'phone' now), I switched to using a 7/8" tablet with my SIM in it as my primary portable device (Nexus 7 and now Galaxy Tab A6).

It means I have to carry it in my jacket pocket or a side pocket on my combats, but the bigger phones weren't comfortable in my trousers' top pocket anyway, so for me at least the trade-off is well worth it.


How big is your phone screen, and what are you using to read it? A few inches makes a lot of difference. In landscape mode my phone is 6.5" wide, and I read PDFs with moonreader in full screen because it's wide enough to read without having to reformat anything. You can also tap on a figure to view only that figure.

If that isn't enough you might consider a tablet or e-reader instead of trying so hard to make existing options work.

You CAN convert to something like EPUB, which is trivially reflowed; that's just fine for reading fiction, but it isn't as pleasant or as nicely formatted as a PDF.
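
For what it's worth, a minimal conversion sketch, assuming Calibre's ebook-convert CLI is installed and on PATH (results depend heavily on the source PDF's layout):

    import subprocess

    # Shell out to Calibre's ebook-convert; output quality varies a lot
    # with the source PDF's layout, so this is only a rough starting point.
    subprocess.run(["ebook-convert", "paper.pdf", "paper.epub"], check=True)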


The software KOReader [1] has a PDF reflow setting which you can try.

[1] http://koreader.rocks/


Check out the papers and talks from Papers We Love, a "repository of academic computer science papers and a community who loves reading them":

https://paperswelove.org/


It depends on why you want to read papers and what you want to get out of it.

https://news.ycombinator.com/item?id=37006967 suggested some avenues for finding some classic papers. The follow-up https://news.ycombinator.com/item?id=37007360 pointed out some circumstances where that's not ideal, but in the process it implicitly assumes that you want to become familiar with current research, rather than just enjoying classic papers for some other reason.

I mostly read papers in mathematics and computer science. For other disciplines I mostly rely on pop science, like Slate Star Codex or Money Stuff and blogs. There's also The Monad Reader (https://wiki.haskell.org/The_Monad.Reader) if you are interested in functional programming.

There's various blogs with interesting articles. Eg Vitalik Buterin has great stuff, like https://vitalik.ca/general/2017/11/09/starks_part_1.html and he links to the original papers. (I have no conclusive opinions on whether crypto-currencies are useful or good for the real world, but I do find the math behind some of them endlessly fascinating. Especially zero-knowledge proofs.)

Wikipedia is also often a good starting point. Whenever you read about a random topic, Wikipedia usually has an article that comes with plenty of references. Eg https://en.wikipedia.org/wiki/Forth_Bridge#References links to http://www.bath.ac.uk/ace/uploads/StudentProjects/Bridgeconf... and down the rabbit hole you go.

https://gwern.net/ also has great write-ups and links to original papers.


Honestly a lot are really hard to read. You start with the easy ones, learn the lingo, and then just keep going. Eventually you can enjoy reading the harder ones.

You learn pretty quickly that if you want answers, it's better to just go straight to the source, rather than have it filtered through someone else, where the message can (and often does) get twisted.

What are you interested in reading about? Maybe some people can recommend you some papers to start with.


There are certainly easier and harder papers. Though when you are struggling, keep in mind that there are also papers that are just badly written (and some papers that are well written).


> I noticed people from hacker news routinely read scientific papers. This is a habit I envy but don't share.

> Any tips or sites for someone interested in picking up more science papers to read.

Personally, the older I get, the more bored I've been getting with the level of information that "crosses my desk".

Eventually I basically stopped reading blogs et al and started getting my insights from books. Those books would often mention papers. Then I noticed a lot of books (and deep well-researched podcasts) mentioning the same papers. So I started reading those papers.

When you read a couple papers, you notice most of them reference a bunch of other papers. Now you have an exponentially growing queue of interesting papers that you'll never get to. Mission accomplished.

The main trick is to read stuff you're interested in knowing and understanding. Many papers can be quite difficult to read, but getting through a single paper will fuel your brain with more valuable information than 2 weeks of "the internet". In my experience at least.

Ultimately, life is short and papers give you a better information density return on your time than almost anything else. Even the bad ones.


For computer science, there's The Morning Paper (https://blog.acolyer.org/), which discusses one interesting paper per post.

Edit: It seems to've gone on indefinite hiatus but there's a lot of backlog already there and some of it's really quite fascinating.


There are some materials about "how to read a scientific paper", like the PDF from U Waterloo [3], which has some methodological advice. There's also some good advice in this old HN thread [1].

But I don't see the point of reading a scientific paper unless you're actually curious about a specific topic. They are often hard to read, dense, and so full of field-specific jargon that if you're new, you won't be able to read one paper and grasp everything. You would have to read the references, or a book/blog that summarizes the core points.

So find a specific field you're interested in, find a good book/blog/homepage/tutorial/video to get your basics going so that when you start reading papers you won't be completely lost.

Then find a highly cited survey paper to understand what progress has been made beyond what is now basic. Then you can follow your curiosity along that survey and decide on a branch of research to read up on. You'll probably realize that a few labs research/publish a lot in a specific direction. Now you can follow those professors (Twitter, Google Scholar email notifications) to keep up to date. By reading a lot you'll also start to notice papers that were "published just to get my PhD", and soon enough you can read just the abstract + intro/results to judge whether a paper is valuable or not.

If ML/LLMs are your curiosity, Lilian Weng's blog [2] is probably a good start for tutorials/surveys.

[1] https://news.ycombinator.com/item?id=24986727

[2] https://lilianweng.github.io/

Edit: direct link [3] https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPape...


For me it's very helpful to print out papers and read them with a pen in hand, away from my computer. Papers tend to be dense and require a level of focus that I (at least) cannot maintain when reading on a screen. It also helps to be able to easily take notes and annotate the paper.


Pick ones that are easy to read. Some are written like a magazine article. Others are math-dense, reference another paper you can't get hold of every other sentence, and are a kind of marketing material anyway.

Also YouTube and code: "Attention Is All You Need" is not a nice paper for Joe Programmer to read, but you can understand what it is doing by watching Karpathy and reading his code (or someone else who has implemented it - Llama, for example). But you need to do some basic torch training first (Karpathy again!).


Anyone can read scientific papers. All you need to do is pierce the layer of jargon. It takes practice but you kind of just pick it up. Reading on a computer helps because you can get words defined by clicking on them. Reading on paper is good too, it’s easier to keep at it and it sticks better.

Some sense of urgency helps. Most people will have a medical ailment or physiological issue of some sort. I promise you that there exist useful papers on it.


Once you obtain subject mastery, you just need to read the abstracts.

To get a cold start, look for "survey", "literature review", or "systematization of knowledge" papers. Those organize a lot of papers; check out the ones that look cool and read their abstracts.

Rinse and repeat for five years and you get a PhD.


Build the habit.

When Google doesn't return a good result for a specific question, switch to scholar.google.com and start reading abstracts. It'll seem like an opaque maze at first, but just keep reading and it'll start clearing up pretty quickly and become useful.


Don’t feel like you need to understand 100%. You can always give yourself an hour to read a paper and gloss over some notation. If you read 5 papers over the course of a month, you can go back to your favorite and dive into the notation.


Feedly with keywords for your favorite topics or researchers works decently.

I imagine this routine comes from people with research backgrounds, where browsing papers is the academic way of googling around for answers.
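
If you'd rather roll your own alerts, a rough sketch using the feedparser package against arXiv's per-category RSS feeds (the URL pattern and keyword list here are just illustrative assumptions):

    import feedparser  # pip install feedparser

    # arXiv publishes per-category RSS feeds; cs.LG is used here as an example.
    feed = feedparser.parse("http://export.arxiv.org/rss/cs.LG")

    KEYWORDS = ("mixture of experts", "sparse")  # hypothetical keywords
    for entry in feed.entries:
        text = (entry.title + " " + entry.summary).lower()
        if any(k in text for k in KEYWORDS):
            print(entry.title, "->", entry.link)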


I usually just read the abstract and synthesize that with the comments on HN to get the gist (and legit-ness) of the research.


They read scientific papers in the same way that everyone "read" Capital in the Twenty-First Century, when that was a thing.


Read textbooks instead; most papers are obtuse and poorly written, even famous ones. You can find good textbooks in Wikipedia footnotes.


Step 1. Find papers you're interested in.

Step 2. Open them.

Step 3. Read them.


Step 4: do a depth-first lookup of every citation, and read/finish that paper before continuing


Step 4. Get lost within a minute.


Step 3.5, see some other interesting paper is referenced in the related work, go to step 1.


Step 3.5-turbo, have ChatGPT summarize papers for you to speed up your reading


LlaMAo :)


Semantic Scholar for search. Scihub for any paywalled papers. Libgen for books. Zotero to organize.
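
For programmatic search, Semantic Scholar also exposes a public Graph API; here's a small sketch (the exact endpoint and field names should be checked against their docs):

    import requests

    # Search Semantic Scholar's Graph API for papers matching a query.
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": "sparse mixture of experts", "fields": "title,year,url", "limit": 10},
    )
    resp.raise_for_status()
    for paper in resp.json().get("data", []):
        print(paper.get("year"), paper.get("title"), paper.get("url"))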


Do you like Semantic Scholar more than Google Scholar, and if so, why?


Pick something you’re interested in and have a passing knowledge of.


just set up a desktop service to randomly open a paper once every few hours

if they're not too boring, and you're not doing anything important, you'll read it for fun
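
A minimal sketch of that idea, assuming your PDFs live in ~/papers and you're on Linux (swap "xdg-open" for "open" on macOS); run it from cron or a systemd timer every few hours:

    import random
    import subprocess
    from pathlib import Path

    # Pick a random paper from ~/papers and open it in the default viewer.
    papers = list(Path("~/papers").expanduser().glob("*.pdf"))
    if papers:
        subprocess.run(["xdg-open", str(random.choice(papers))])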


I read the abstract and look at the pretty figures :)


I want to know what a non-boring flight would be like


High turbulence definitely makes it less boring. So will a crying baby, disruptive passenger, or someone getting sick. After a few of those, you'll prefer the boring flights.



Snakes on a Plane


Air Force One


The Langoliers


Airplane!



