This page will focus on the research I am undertaking as part of my PhD in Hardware Optimisation of Machine Learning. I’ll update the page as and when I find another useful resource.


My Research Publications

Here you can find my publications on FPGA and ASIC implementations and power-saving methods for machine learning.

Useful Resources

Here is a list of resources I’ve found interesting, useful, educational or just purely entertaining whilst undertaking my PhD research.




Podcasts

  • Machine Learning Guide – Teaches the high-level fundamentals of machine learning without getting too engrossed in the maths.
  • Learning Machines 101 – Teaches general artificial intelligence with lots of resources on the web site too!
  • TWiML & AI – Interviews a lot of high profile people in the ML and AI space.
  • Data Skeptic – Usually a couple of hosts discussing stories and tutorials to help understand data-driven worlds.
  • Talking Machines – Interviews experts in a specific research area and examines an algorithm in the latter section of the podcast.


YouTube Channels

  • Siraj Raval’s entertaining and educational YouTube videos on deep learning.
  • DeepLearning.TV’s YouTube channel on deep learning.
  • Dr. Andrew Ng’s Coursera course on machine learning, available as a channel on YouTube.
  • Dr. Fei-Fei Li’s CS231n lecture series on the MachineLearner YouTube channel.


Websites / Blogs

Papers Cited in My Research

  • S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally, “EIE: Efficient inference engine on compressed deep neural network,” in Proc. 43rd Int. Symp. Comput. Archit., Seoul, Republic of Korea, 2016, pp. 243–254, doi: 10.1109/ISCA.2016.30.
  • S. Han, H. Mao, and W. Dally, “Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding,” CoRR, vol. abs/1510.00149, 2015.
  • C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015.
  • C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing FPGA-based accelerator design for deep convolutional neural networks,” in Proc. ACM/SIGDA Int. Symp. Field-Programmable Gate Arrays, 2015, pp. 161–170, doi: 10.1145/2684746.2689060.
  • Y.-H. Chen, J. Emer, and V. Sze, “Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks,” in Proc. 43rd Int. Symp. Comput. Archit., 2016, pp. 367–379, doi: 10.1109/ISCA.2016.40.
  • M. Fürer, “Faster integer multiplication,” in Proc. 39th Annu. ACM Symp. Theory Comput., 2007, pp. 57–66, doi: 10.1145/1250790.1250800.
  • S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan, “Deep learning with limited numerical precision,” in Proc. 32nd Int. Conf. Mach. Learn., Lille, France, pp. 1737–1746, 2015.
  • T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, “DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning,” in Proc. 19th Int. Conf. Archit. Support Program. Languages Operating Syst., 2014, pp. 269–284, doi: 10.1145/2541940.2541967.

Other Interesting / Useful Papers

  • H. Lee, R. Grosse, R. Ranganath, and A. Ng, “Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations,” in Proc. 26th Int. Conf. Mach. Learn., Montreal, Canada, 2009, pp. 609–616, doi: 10.1145/1553374.1553453.
  • V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing Atari with deep reinforcement learning,” in NIPS Deep Learning Workshop, 2013.
  • L. Orseau and S. Armstrong, “Safely interruptible agents,” in Proc. 32nd Conf. Uncertainty Artif. Intell., Jun. 2016.
  • Y. Yang, Y. Li, C. Fermüller, and Y. Aloimonos, “Robot learning manipulation action plans by “watching” unconstrained videos from the World Wide Web,” in Proc. 29th AAAI Conf. Artif. Intell., 2015, pp. 3686–3692.



Email Newsletters