Hacker News | jwyang's comments

Very good question! Right now we are mainly focusing on building the foundation for multimodal perception and atomic action taking. Integrating trace-of-mark prediction for robotics and human video data does enhance the model's medium-length reasoning, but this is certainly not sufficient. The current Magma model will serve as the basis for our next step, i.e., longer-horizon reasoning and planning. We are looking at exactly this part for the next version of Magma!


Thanks for your great interest in our Magma work, everyone!

We will gradually roll out the inference/training/evaluation/data preprocessing code in our codebase: https://github.com/microsoft/Magma, and this should be finished by next Tuesday. Stay tuned!


How far are we from making peanut butter sandwiches? Is that a valid benchmark to look towards, in this space?


Good catch! A minor correction: Magma = M(ultimodal) Ag(entic) M(odel) at M(icrosoft) (Rese)A(rch); the last part is similar to how the name Llama came about. :)


How many 'M's in "Magma"? ;)


Oops, a typo: no M from Microsoft.


It's ok, GPT.


There are two r's.


