Hacker News

I've had a question for a while about potential spill strategies that may be appropriate to ask here, given that a bunch of compiler people are probably reading this thread.

Do compilers ever do "computed spilling" (for lack of a better phrase)?

For example, if you have code such as:

  int x = f();
  int y = x & MASK;
  
  use(y);
  // ... bunch of code that uses x, but causes y to spill ...
  use(y);

If register pressure forces 'y' to spill, the compiler could theoretically notice that 'y' is purely derived from 'x', turn the "spill" of 'y' into a no-op (assuming 'x' is kept live), and turn the "load" of 'y' into recomputing it from 'x' (in this case, a single AND instruction).

Mostly academic curiosity, but is this technique used by any major compiler?




> (assuming 'x' is kept live), and the "load" of 'y' into re-computing y from x

If I understand correctly, what you describe is called rematerialization (https://en.wikipedia.org/wiki/Rematerialization), and yes, it's standard. As with everything else in register allocation, it's difficult to decide when and how to do it.


Cool! Thanks for the info and correct terminology.





