Location: Los Angeles, CA
Remote: Office/Hybrid/Remote
Willing to relocate: Yes, within CA - prefer Bay Area
Technologies: Reinforcement Learning, Robotics - Python, Pytorch, Isaac Lab, MuJoCo, C++, ROS, HTML/CSS/JS
Résumé/CV: https://indweller.github.io/assets/Prashanth-Ravichandar-Resume.pdf
Email: rprash99[at]gmail[dot]com
Website: https://indweller.github.io/
----------------------------
I hold an MS in CS from USC. Before joining USC, I worked as a Software Developer at Morgan Stanley for two years. I graduated from the Indian Institute of Technology (IIT) Guwahati with a B.Tech in Engineering Physics and a minor in Computer Science and Engineering.
I’ve worked on humanoid locomotion, deep learning, and RL for robotics - paper submitted to ICRA on dynamic loco-manipulation (like playing soccer): https://indweller.github.io/ogmplm
I have work authorization and am looking for a 6-month internship from January to June 2025.
This is amazing! Can you extend this to a) allow choosing regions and b) provide some weekly digest kind of thing where the news items from the past week are summarized?
I don’t wanna add a summary, or host older news in general - the more news there is to read, the more time you’ll spend. I don’t want that. It’s like a newspaper: if something important happened this week and you missed it, the news will still be reporting on it.
Where do you keep the rest of the things that you need to do at some point? Like, do you maintain some other list of all the things you need/want to do, but not necessarily today?
What advantage would a trie's prefix searching give you in a word count task? Worse, you're following word.length() pointers, potentially all over your memory. You could collapse with Patricia or Merkle or whatever, but they'll all be worse than the one pointer lookup of a hash table (modulo collision chasing if you pick a bad hash/size) & all in the service of giving you a feature (prefix lookup) you don't need.
The comments explain why I think it's slower. The TL;DR is that using a trie requires more memory accesses than a hash table (per byte of input), and those memory accesses slow the whole enterprise down.
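To make the access-count argument concrete, here is a minimal Python sketch (a stand-in for the compiled implementations in the thread, with dicts playing the role of both the hash table and the trie nodes): the hash-table counter does one lookup per word, while the trie descends one node per character, so counting a word costs len(word) scattered lookups.

    # Hash-table counter: one lookup per word
    # (plus collision chasing for a bad hash/size).
    def count_hash(words):
        counts = {}
        for w in words:
            counts[w] = counts.get(w, 0) + 1
        return counts

    # Trie counter: one child lookup per *character*, i.e. len(w)
    # pointer chases per word, each a potential cache miss.
    def count_trie(words):
        root = {}
        for w in words:
            node = root
            for ch in w:                      # len(w) hops through memory
                node = node.setdefault(ch, {})
            node["#"] = node.get("#", 0) + 1  # count stored at final node
        return root

    words = "the quick brown fox the lazy dog the".split()
    print(count_hash(words)["the"])           # 3

    node = count_trie(words)
    for ch in "the":
        node = node[ch]
    print(node["#"])                          # 3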
But not all memory accesses are created equal. So perhaps a cleverer representation that exploits locality would do better. "Obvious" techniques for shrinking memory usage made a big difference and brought the performance of the trie program closer to C: https://github.com/benhoyt/countwords/pull/2
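One plausible locality-friendlier layout (hypothetical here - the actual changes in that PR may differ) packs every node's child table into one contiguous array, so a child hop becomes an index computation into flat memory instead of a chase to a scattered heap node:

    # Flat, array-backed trie: node i's 26 child slots live at
    # children[i*26 : (i+1)*26]. Slot value 0 means "no child"
    # (node 0 is the root, which can never be a child).
    NUM_CHILDREN = 26

    class FlatTrie:
        def __init__(self):
            self.children = [0] * NUM_CHILDREN  # node 0 = root
            self.counts = [0]

        def add(self, word):                    # word: lowercase a-z
            node = 0
            for ch in word:
                slot = node * NUM_CHILDREN + (ord(ch) - ord("a"))
                child = self.children[slot]
                if child == 0:                  # allocate a new node
                    child = len(self.counts)
                    self.children[slot] = child
                    self.children.extend([0] * NUM_CHILDREN)
                    self.counts.append(0)
                node = child
            self.counts[node] += 1

    t = FlatTrie()
    for w in "the quick brown fox the".split():
        t.add(w)

    node = 0
    for ch in "the":                            # walk t-h-e by index math
        node = t.children[node * NUM_CHILDREN + (ord(ch) - ord("a"))]
    print(t.counts[node])                       # 2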
Focus on the means rather than the end. This takes the pressure off and helps you work efficiently so that you actually achieve your goal instead of just dreaming about it. It also helps overcome procrastination.
In the blog, he refers to test losses at an early stage, like in "add significant digits to your eval". Does he actually mean the test data, or is he referring to validation data? I was under the impression that we were supposed to touch the test data only once, at the end of all training and validation. What is the right way to handle the test data?
By "eval" you can also mean the training subset. As I understood is at the code to evaluate the network at a given point with a given dataset. For instance, after epoch epoch, the model is evaluated for both training and validation (you see both losses)
As you said, the test subset should only be used at the very end.
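A minimal PyTorch sketch of that workflow, with a made-up tiny model and random data: the "eval" losses on the train and validation splits are computed every epoch, and the test split is touched exactly once at the very end.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset, random_split

    torch.manual_seed(0)
    X, y = torch.randn(1000, 10), torch.randn(1000, 1)
    train_ds, val_ds, test_ds = random_split(TensorDataset(X, y), [700, 150, 150])

    train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
    val_dl = DataLoader(val_ds, batch_size=32)
    test_dl = DataLoader(test_ds, batch_size=32)

    model = nn.Linear(10, 1)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    def evaluate(loader):                  # "eval": loss on any given dataset
        model.eval()
        total, n = 0.0, 0
        with torch.no_grad():
            for xb, yb in loader:
                total += loss_fn(model(xb), yb).item() * len(xb)
                n += len(xb)
        return total / n

    for epoch in range(5):
        model.train()
        for xb, yb in train_dl:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
        # evaluated on both training and validation: you see both losses
        print(epoch, evaluate(train_dl), evaluate(val_dl))

    # The test subset is used exactly once, after all training/tuning is done.
    print("test:", evaluate(test_dl))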