
Hi, I’m Hamish! I’m (currently) a PhD student at the University of Washington, working in H2Lab and advised by Hannaneh Hajishirzi. I’m broadly interested in NLP research, particularly in making language models easier to use and more open, exploring alternative architectures, and linking model abilities to training data.

I’m from Sydney and did my undergraduate degree at the University of Sydney, completing a Bachelor of Arts and IT with a triple major in Linguistics, Classical Greek, and Computer Science. I also did some NLP research with the UsydNLP group, examining multi-hop question answering. During my undergrad (and just after), I spent some time at the Commonwealth Bank of Australia, dabbled in start-up-y stuff, and worked at Optiver. Before my PhD, I was a predoctoral researcher at AI2 on the AllenNLP team.

If you have questions about my work, general academia/software/research-related things, or just want to chat, feel free to reach out at hamishiv [at] cs [dot] washington [dot] edu. I’m happy to answer most questions!


Papers

See below for papers I’ve worked on. You can also check out my Semantic Scholar and Google Scholar profiles.

    Tülu 3: Pushing Frontiers in Open Language Model Post-Training. Nathan Lambert*, Jacob Morrison*, Valentina Pyatkin*, Shengyi Huang*, Hamish Ivison*, Faeze Brahman*, Lester James V. Miranda*, Alisa Liu, Nouha Dziri, Shane Lyu, Yuling Gu, Saumya Malik, Victoria Graf, Jena D. Hwang, Jiangjiang Yang, Ronan Le Bras, Oyvind Tafjord, Chris Wilhelm, Luca Soldaini, et al. 2024.
    Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning. Sriyash Poddar, Yanming Wan, Hamish Ivison, Abhishek Gupta, and Natasha Jaques. 2024. In NeurIPS.
    Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback. Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert, Noah A. Smith, Yejin Choi, and Hannaneh Hajishirzi. 2024. In NeurIPS.
    OLMo: Accelerating the Science of Language Models. Dirk Groeneveld, Iz Beltagy, ..., Hamish Ivison, ..., Noah A. Smith, and Hannaneh Hajishirzi. 2024. In ACL.
    Backtracking Mathematical Reasoning of Language Models to the Pretraining Data. Yasaman Razeghi*, Hamish Ivison*, Sameer Singh, and Yanai Elazar. 2024. In The Second Tiny Papers Track at ICLR 2024.
    Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2. Hamish Ivison*, Yizhong Wang*, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023. Technical report.
    How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources. Yizhong Wang*, Hamish Ivison*, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023. In NeurIPS Datasets and Benchmarks Track.
    TESS: Text-to-Text Self-Conditioned Simplex Diffusion. Rabeeh Karimi Mahabadi*, Hamish Ivison*, Jaesung Tae, James Henderson, Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2024. In EACL.
    HINT: Hypernetwork Instruction Tuning for Efficient Zero-Shot Generalisation. Hamish Ivison, Akshita Bhagia, Yizhong Wang, Hannaneh Hajishirzi, and Matthew Peters. 2023. In ACL.
    Data-Efficient Finetuning Using Cross-Task Nearest Neighbors. Hamish Ivison, Noah A. Smith, Hannaneh Hajishirzi, and Pradeep Dasigi. 2023. In Findings of ACL.
    Hyperdecoders: Instance-specific decoders for multi-task NLP. Hamish Ivison and Matthew E. Peters. 2022. In Findings of EMNLP.
    Local Interpretations for Explainable Natural Language Processing: A Survey. Siwen Luo*, Hamish Ivison*, Soyeon Caren Han, and Josiah Poon. 2021. ACM Computing Surveys.
    Would you like fries with that? Modular Multi-hop Reasoning. Hamish Ivison. 2020. Honours Thesis, University of Sydney, November.