Enric Boix, Neil Mallinar, James B. Simon, Mikhail Belkin (Under Review), FACT: the Features At Convergence Theorem for neural networks.
Daniel Beaglehole, Adityanarayanan Radhakrishnan, Enric Boix, Mikhail Belkin (Under Review), Toward universal steering and monitoring of AI models.
Parsa Mirtaheri, Ezra Edelman, Samy Jelassi, Eran Malach, Enric Boix (Under Review), Let Me Think! A long chain of thought can be worth exponentially many short ones.
Enric Boix and Philippe Rigollet (Under Review), The power of fine-grained experts: Granularity boosts expressivity in Mixture of Experts.
Enric Boix (Under Review), On the inductive bias of infinite-depth ResNets and the bottleneck rank.
Rimon Melamed, Lucas Hurley McCabe, Tanay Wakhare, Yejin Kim, H. Howie Huang, Enric Boix (2024), Prompts have evil twins, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024).
Enric Boix, Omid Saremi, Emmanuel Abbe, Samy Bengio, Etai Littwin, Joshua Susskind (2024), When can transformers reason with abstract symbols?, International Conference on Learning Representations (ICLR 2024).
Enric Boix (Under Review), Towards a theory of model distillation.
Enric Boix, Etai Littwin, Emmanuel Abbe, Samy Bengio, Joshua Susskind (2023), Transformers learn through gradual rank increase, Advances in Neural Information Processing Systems 36 (NeurIPS 2023).
Enric Boix and Etai Littwin (2023), Tight conditions for when the NTK approximation is valid, Transactions on Machine Learning Research.