Hacker News
PyTorch and VLLM (pytorch.org) — 1 point by andrewstetsenko 18 days ago | past
Flux Fast: Making Flux Go Brrr on H100s (pytorch.org) — 1 point by sayak_paul_hf 23 days ago | past
Fault Tolerant Llama training (pytorch.org) — 66 points by Mougatine 25 days ago | past | 14 comments
Torch Backends (pytorch.org) — 1 point by fzliu 36 days ago | past
VLLM is now a PyTorch Foundation-hosted project (pytorch.org) — 2 points by djhu9 72 days ago | past
PyTorch Foundation Expands and Welcomes VLLM and DeepSpeed (pytorch.org) — 2 points by Philpax 72 days ago | past
PyTorch 2.7 Release with Blackwell support (pytorch.org) — 3 points by lnyan 85 days ago | past
PyTorch 2.7 Release (pytorch.org) — 3 points by jonbaer 86 days ago | past
PyTorch 2.7.0 Release (pytorch.org) — 2 points by DreamFlasher 86 days ago | past | 2 comments
Quantization-Aware Training for Large Language Models with PyTorch (2024) (pytorch.org) — 2 points by tosh 3 months ago | past
TorchServe is no longer actively maintained (pytorch.org) — 2 points by tbobm 4 months ago | past
PyTorch 2.6 (pytorch.org) — 1 point by tosh 5 months ago | past
VLLM Joins PyTorch Ecosystem (pytorch.org) — 2 points by reqo 7 months ago | past | 1 comment
Distilling Llama3.1 8B into 1B in torchtune (pytorch.org) — 1 point by tosh 7 months ago | past
PyTorch Deprecation of Conda Nightly Builds (pytorch.org) — 3 points by yeldarb 8 months ago | past | 1 comment
PyTorch Deprecation of Conda Nightly Builds (pytorch.org) — 3 points by nmstoker 8 months ago | past | 1 comment
Torch.load flipping default to weights_only=True (pytorch.org) — 2 points by formalsystem 8 months ago | past
PyTorch 2.5.0 Release, SDPA CuDNN backend, Flex Attention (pytorch.org) — 1 point by lnyan 9 months ago | past
PyTorch Conference 2024 Recap (pytorch.org) — 1 point by jonbaer 9 months ago | past
PyTorch Native Architecture Optimization: Torchao (pytorch.org) — 169 points by jonbaer 9 months ago | past | 52 comments
Async Tensor Parallelism in PyTorch (pytorch.org) — 2 points by lnyan 10 months ago | past
CUDA-Free Inference for LLMs (pytorch.org) — 3 points by ororm 10 months ago | past
PyTorch 2.4 Now Supports Intel GPUs for Faster Workloads (pytorch.org) — 19 points by soulbadguy 10 months ago | past | 2 comments
FlexAttention: The Flexibility of PyTorch with the Performance of FlashAttention (pytorch.org) — 210 points by limoce 11 months ago | past | 24 comments
A guide on good usage of non_blocking and pin_memory() in PyTorch (pytorch.org) — 1 point by yu3zhou4 11 months ago | past
Torchchat (pytorch.org) — 2 points by jonbaer 11 months ago | past
Torchchat: Accelerating Local LLM Inference on Laptop, Desktop and Mobile (pytorch.org) — 2 points by OutOfHere 11 months ago | past
PyTorch 2.4: Python 3.12, AOTInductor freezing (pytorch.org) — 4 points by DreamFlasher 11 months ago | past
Int4 Decoding GQA CUDA Optimizations for LLM Inference (pytorch.org) — 1 point by jxmorris12 on June 13, 2024 | past
ExecuTorch Alpha: Taking LLMs and AI to the Edge (pytorch.org) — 4 points by brainer on May 1, 2024 | past | 1 comment