This page will focus on the research I am undertaking as part of my PhD in Microarchitectural Optimisations of Machine Learning. I’ll update the page as and when I find another useful resource.
My Research Publications
Here you can find my publications on FPGA and ASIC implementations of machine learning, and on methods for reducing its power consumption.
Here is a list of resources I’ve found interesting, useful, educational or just purely entertaining whilst undertaking my PhD research.
Online Courses
- Machine Learning Specialisation Coursera course by Emily Fox and Carlos Guestrin (both Amazon Professors of Machine Learning, University of Washington).
- Machine Learning Coursera course by Andrew Ng (Associate Professor, Stanford University)
- Stanford’s CS229 Machine Learning Course (Andrew Ng, John Duchi)
- Stanford’s CS231n: Convolutional Neural Networks for Visual Recognition (Fei-Fei Li, Justin Johnson, Serena Yeung)
- Neural Networks for Machine Learning (Geoffrey Hinton, Professor, University of Toronto)
- Fast.ai Course (Jeremy Howard, Fast.ai founding researcher)
Books
- Deep Learning Book by Ian Goodfellow, Yoshua Bengio and Aaron Courville
- Constraining Designs for Synthesis and Timing Analysis: A Practical Guide to Synopsys Design Constraints (SDC) by S. Gangadharan and S. Churiwala
- The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World by Pedro Domingos
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
- A Mind for Numbers: How to Excel at Math and Science (Even If You Flunked Algebra) by Barbara Oakley
- How Not to Be Wrong: The Power of Mathematical Thinking by Jordan Ellenberg
- Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence by Jerry Kaplan
- How to Create a Mind: The Secret of Human Thought Revealed by Ray Kurzweil
- Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian, Tom Griffiths
- Thinking, Fast and Slow by Daniel Kahneman
- Thinking Statistically by Uri Bram, Hannah Vazquez
- Machine Learning: The New AI (The MIT Press Essential Knowledge series) by Ethem Alpaydin
- Your Brain at Work: Strategies for Overcoming Distraction, Regaining Focus, and Working Smarter All Day Long by David Rock
- Top Brain, Bottom Brain: Surprising Insights into How You Think by Stephen Kosslyn, G. Wayne Miller
- Brain Rules (Updated and Expanded): 12 Principles for Surviving and Thriving at Work, Home, and School by John Medina
Podcasts
- Machine Learning Guide – Teaches the high-level fundamentals of machine learning without getting too engrossed in the maths.
- Learning Machines 101 – Teaches general artificial intelligence with lots of resources on the web site too!
- TWiML & AI – Interviews a lot of high profile people in the ML and AI space.
- Data Skeptic – Usually a couple of hosts discussing stories and tutorials to help understand data driven worlds.
- Talking Machines – Interviews experts in a specific research space and looks at an algorithm in the latter section of the podcast.
Videos
- Siraj Raval’s entertaining and educational YouTube videos on deep learning.
- DeepLearning.TV’s YouTube channel on Deep Learning.
- Dr. Andrew Ng’s Coursera course on Machine Learning as a channel on YouTube.
- Dr. Fei-Fei Li’s CS231n lecture series on the MachineLearner YouTube channel.
Code
- Xilinx has open-sourced their BNN-PYNQ code on GitHub.
- See Hardware below for the PYNQ-Z1 board.
- ZynqNet: An FPGA-Accelerated Embedded Convolutional Neural Network
- See Hardware below for ZC706 Evaluation Board.
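As a taste of what is inside the BNN code above: binarised neural networks constrain weights and activations to ±1, which lets hardware replace multiply-accumulates with XNOR and popcount operations – the trick that makes them so FPGA-friendly. A minimal sketch of that equivalence (my own toy code, not taken from the Xilinx repository):

```python
def binarize(ws):
    """Deterministic sign binarisation: map each real weight to +1 or -1."""
    return [1 if w >= 0 else -1 for w in ws]

def dot(a, b):
    """Conventional dot product on {-1, +1} vectors."""
    return sum(x * y for x, y in zip(a, b))

def pack(signs):
    """Pack a {-1, +1} vector into an integer bitmask (bit i set iff +1)."""
    bits = 0
    for i, s in enumerate(signs):
        if s == 1:
            bits |= 1 << i
    return bits

def xnor_popcount_dot(a_bits, b_bits, n):
    """Dot product of two packed n-bit sign vectors: 2 * popcount(XNOR) - n.
    Matching bits contribute +1, differing bits -1, so no multiplies needed."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)
    return 2 * bin(xnor).count("1") - n
```

In an FPGA, the XNOR-popcount path maps to LUTs and a small adder tree instead of DSP multipliers, which is where the area and power savings come from.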
Websites / Blogs
- Animation of Multi-Channel Multi-Kernel Convolution
- How Convolutional Neural Networks Work
- Deep Learning Demystified
- Machine Learning Mastery
- Neural Networks and Deep Learning
- Deep Learning
- KD Nuggets
- A Beginner’s Guide To Understanding Convolutional Neural Networks (part 1)
- A Beginner’s Guide To Understanding Convolutional Neural Networks (part 2)
- The 9 Deep Learning Papers You Need To Know About (Understanding CNNs Part 3)
- The Neural Network Zoo
- A Visual and Interactive Guide to the Basics of Neural Networks
- Open AI
- Christopher Olah’s Blog
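To complement the convolution animation and CNN guides above, here is a minimal pure-Python sketch of the single-channel “valid” convolution they illustrate (my own toy code; real CNN layers extend this across multiple input channels and kernels, and most libraries actually compute cross-correlation, as below, without flipping the kernel):

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D convolution (cross-correlation, CNN convention) of an
    H x W image with a kh x kw kernel, both given as lists of lists."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1    # output height: no padding
    ow = len(image[0]) - kw + 1  # output width
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # Slide the kernel over the image and accumulate the window.
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out
```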
Papers Cited in My Research
- S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally, “EIE: Efficient inference engine on compressed deep neural network,” in Proc. 43rd Int. Symp. Comput. Archit., Seoul, Republic of Korea, 2016, pp. 243–254, doi: 10.1109/ISCA.2016.30.
- S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding,” CoRR, vol. abs/1510.00149, 2015, http://dblp.uni-trier.de/rec/bib/journals/corr/HanMD15
- C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015.
- C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing FPGA-based accelerator design for deep convolutional neural networks,” in Proc. ACM/SIGDA Int. Symp. Field-Programmable Gate Arrays, 2015, pp. 161–170, doi: 10.1145/2684746.2689060.
- Y.-H. Chen, J. Emer, and V. Sze, “Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks,” in Proc. 43rd Int. Symp. Comput. Archit., 2016, pp. 367–379, doi: 10.1109/ISCA.2016.40.
- M. Fürer, “Faster integer multiplication,” in Proc. 39th Annu. ACM Symp. Theory Comput., 2007, pp. 57–66, doi: 10.1145/1250790.1250800.
- S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan, “Deep learning with limited numerical precision,” in Proc. 32nd Int. Conf. Mach. Learn., Lille, France, pp. 1737–1746, 2015.
- T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, “DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning,” in Proc. 19th Int. Conf. Archit. Support Program. Languages Operating Syst., 2014, pp. 269–284, doi: 10.1145/2541940.2541967.
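Several of the papers above (deep compression, limited numerical precision) rest on representing weights and activations in low-bit fixed point rather than 32-bit floating point. A toy round-to-nearest quantiser with saturation – a sketch of the general idea, not any one paper’s exact scheme:

```python
def quantize_fixed(x, int_bits, frac_bits):
    """Quantise x to a signed fixed-point grid with int_bits integer bits
    and frac_bits fractional bits: round to the nearest multiple of
    2**-frac_bits, then saturate to the representable range
    [-2**int_bits, 2**int_bits - 2**-frac_bits]."""
    step = 2.0 ** -frac_bits          # smallest representable increment
    lo = -(2.0 ** int_bits)           # most negative representable value
    hi = 2.0 ** int_bits - step       # most positive representable value
    q = round(x / step) * step        # round-to-nearest on the grid
    return max(lo, min(hi, q))        # saturate instead of wrapping
```

In hardware, shrinking `int_bits + frac_bits` directly shrinks multiplier width and memory traffic, which is where the energy savings in these accelerator papers come from.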
Other Interesting / Useful Papers
- H. Lee, R. Grosse, R. Ranganath, and A. Ng, “Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations,” in Proc. 26th Int. Conf. Mach. Learn., Montreal, Canada, 2009, pp. 609–616, doi: 10.1145/1553374.1553453.
- V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing Atari with Deep Reinforcement Learning,” in NIPS Deep Learning Workshop, 2013.
- L. Orseau, S. Armstrong, “Safely Interruptible Agents,” in Proc. 32nd Conf. on Uncertainty in Artificial Intelligence, Jun. 2016.
- Y. Yang, Y. Li, C. Fermüller, and Y. Aloimonos, “Robot Learning Manipulation Action Plans by ‘Watching’ Unconstrained Videos from the World Wide Web,” in Proc. 29th AAAI Conf. on Artificial Intelligence, 2015, pp. 3686–3692.
Hardware
- Movidius’ Fathom Neural Compute Stick
- Xilinx PYNQ
- See Code above for BNN-PYNQ
- Xilinx ZC706 board
- See Code above for ZynqNet
Software
- Berkeley’s Caffe
- Google’s TensorFlow
- Université de Montréal’s Theano
- Microsoft’s CNTK
- Over 40 other software development environments