
Warp-CTC: Fast parallel GPU/CPU CTC loss for deep learning - cbcase
https://github.com/baidu-research/warp-ctc
======
dplarson
It's cool that they provided bindings for Torch, and also a bit surprising. Torch
is obviously very widespread/popular as a deep learning framework, but I got
the impression that Baidu's Silicon Valley AI Lab (SVAIL) ran mostly a custom
C/C++ codebase.

Most likely the Torch bindings were added to encourage a wider variety of
researchers to use their CTC implementation (i.e., it doesn't mean they've
switched to Torch internally). But still interesting to see.
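
For readers unfamiliar with what the library actually computes: CTC loss is the
negative log-probability of a label sequence summed over every way of aligning
it to the network's per-frame outputs, computed with a dynamic-programming
forward recursion. The sketch below is a minimal, non-log-space Python version
of that recursion for illustration only; it is not warp-ctc's API, and real
implementations (warp-ctc included) work in log space and also return gradients.

```python
import math

def ctc_forward_loss(probs, target, blank=0):
    """Negative log-likelihood of `target` under CTC, via the forward
    (alpha) recursion. Hypothetical helper, not warp-ctc's interface.

    probs:  T x V list of per-timestep probability distributions
    target: list of label indices (no blanks)
    """
    # Extend the target with blanks: [blank, l1, blank, l2, ..., blank]
    ext = [blank]
    for label in target:
        ext += [label, blank]
    S, T = len(ext), len(probs)

    # alpha[s] = total probability of all partial alignments ending at ext[s]
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]
    if S > 1:
        alpha[1] = probs[0][ext[1]]

    for t in range(1, T):
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]                      # stay on the same symbol
            if s >= 1:
                a += alpha[s - 1]             # advance from the previous symbol
            # Skipping a blank is allowed only between distinct labels
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[s - 2]
            new[s] = a * probs[t][ext[s]]
        alpha = new

    # Valid alignments end on the last label or the trailing blank
    p = alpha[S - 1] + (alpha[S - 2] if S > 1 else 0.0)
    return -math.log(p)
```

With a 2-frame uniform output over {blank, 'a'} and target "a", the three
alignments (blank,a), (a,blank), (a,a) each have probability 0.25, so the loss
is -log(0.75). The point of warp-ctc is running this recursion (and its
backward pass) in parallel across many sequences on GPU/CPU.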

