Professor George A. Constantinides
Title: Rethinking Arithmetic for Deep Learning
We will consider the problem of efficient inference using deep neural networks from a hardware accelerator perspective. Deep neural networks are currently a key driver for innovation in both numerical algorithms and architecture. While algorithms and architectures for these computations are often developed independently, we will argue for a holistic approach. We will explore computation on typed graphs as a unifying paradigm for deep neural networks and digital circuits. This will allow us to explore, and make precise, some of the links between neural network design methods and hardware design methods. Bridging the gap between specification and implementation requires us to grapple with questions of approximation; we will formalise these and explore opportunities to exploit them. In particular, we prove that, on the one hand, extreme quantisation down to one-bit values is sufficient for any practical neural network, while on the other hand such extreme quantisation requires a rethink of network topologies. We shall explore some promising ideas for future efficient hardware topologies.
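The one-bit quantisation the abstract refers to can be illustrated with a minimal sketch. This is not the talk's construction, just the standard idea behind binary neural networks: each real-valued weight is replaced by its sign, with a single per-row scaling factor (the mean absolute value, which minimises the squared reconstruction error for a fixed sign pattern).

```python
def binarise_row(weights):
    """Quantise a row of real-valued weights to one-bit values {-1, +1}.

    Returns (alpha, bits), where alpha * bits approximates the row:
    alpha is the mean absolute value, bits are the signs.
    """
    alpha = sum(abs(w) for w in weights) / len(weights)
    bits = [1.0 if w >= 0 else -1.0 for w in weights]
    return alpha, bits

# A toy row of weights and its one-bit approximation.
row = [0.4, -1.2, 0.1, -0.3]
alpha, bits = binarise_row(row)
approx = [alpha * b for b in bits]  # reconstruction from 1 bit per weight
```

At inference time the multiply-accumulate against `bits` reduces to additions and subtractions, which is what makes such quantisation attractive for hardware accelerators.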
Prof George A. Constantinides holds the Royal Academy of Engineering / Imagination Technologies research chair in Digital Computation at Imperial College. He leads the Circuits and Systems Research Group, consisting of approximately 90 researchers exploring a broad range of analogue, digital, and bio-inspired technology. Prof Constantinides is a leader in the field of FPGA-based computing, where he is known for his development of numerical approximation techniques and research into high-level synthesis. He also leads the international research Centre for Spatial Computational Learning (http://spatialml.net), a collaboration bringing together Imperial, Southampton, UCLA and Toronto. He has chaired the major FPGA-related conferences FPGA, FPL and FPT, and is currently Associate Editor of IEEE Transactions on Computers.
Professor George Roussos
Title: Data-driven Digital Healthcare: Developing Effective Digital Biomarkers for Parkinson’s Disease
Sustained improvements in healthcare, nutrition and technology have resulted in humans living longer. An unintended consequence of this trend is that humans also live longer with illness and disability, so that recent decades have witnessed accelerated growth in the prevalence of long-term neurodegenerative diseases such as Huntington’s, Parkinson’s and Alzheimer’s disease and other dementias. Parkinson’s Disease (PD) in particular is associated with a wide spectrum of motor and non-motor symptoms, including tremor, slowness of movement and freezing, swallowing difficulty, sleep-related difficulties and psychosis. Since there is no cure, symptom management is a life-long process that typically involves pharmacological treatment with L-Dopa, physiotherapy and, in its later stages, surgery. The expanding population of People with Parkinson’s places considerable pressure on healthcare services for specialist assessment of symptoms and monitoring of disease progression. In this context, the popularity of smartphone apps and wearables offers distinct opportunities for self-monitoring. To this end, we have designed, developed and validated PDkit, a comprehensive software toolkit for the management and processing of performance data captured continuously by wearables or by high-use-frequency smartphone apps such as mPower and cloudUPDRS. In this talk, we demonstrate how PDkit facilitates the application of a comprehensive data science methodology to the analysis of patient data, leading to the development of robust measures of disease progression.
In particular, we highlight how the adoption of this approach can support the reproducibility of outcomes in therapeutic clinical trials.
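A core step in deriving motor-symptom measures from wearable data is frequency analysis of accelerometer traces; parkinsonian rest tremor typically falls in a 4–6 Hz band. The sketch below is purely illustrative (it does not use PDkit's actual API): it finds the dominant frequency of a toy signal with a naive discrete Fourier transform.

```python
import math

def dominant_frequency(samples, rate_hz):
    """Return the frequency (Hz) of the strongest DFT bin of a real signal.

    Naive O(n^2) DFT over the positive-frequency bins; fine for a toy
    trace, whereas a real pipeline would use an FFT over windowed data.
    """
    n = len(samples)
    best_k, best_power = 0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * rate_hz / n

# Synthetic 5 Hz "tremor" sampled at 100 Hz for 2 seconds.
rate = 100
signal = [math.sin(2 * math.pi * 5.0 * i / rate) for i in range(200)]
freq = dominant_frequency(signal, rate)  # recovers roughly 5 Hz
```

A real digital biomarker would aggregate such features over time and across tasks before feeding them into a measure of disease progression.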
George Roussos is Professor of Pervasive Computing at Birkbeck College, University of London, where he is Head of the Experimental Data Science section and leads the Internet of Things Laboratory. He has over 20 years of experience in leading and successfully delivering research projects, most recently the development of PDkit, an open source data science toolkit for Parkinson’s (supported by the Michael J. Fox Foundation), and cloudUPDRS, the first mobile app to receive Class I medical device certification in Europe for the assessment of the motor symptoms of PD (supported by InnovateUK). His work also pioneered participatory cyber‐physical computing as the predominant methodology for the construction of mobile and pervasive computing systems. With contributions in systems architecture, privacy protection and human dynamics, his work has demonstrated how the user’s activity can be exploited as the core ingredient for building such systems.
He is the author of four books and over 100 research papers. He is currently Associate Rapporteur for Study Group 20 of the International Telecommunication Union on the IoT and Smart Cities, and during 2011/12 was a member of the EU‐China Internet of Things Expert Group. In 2011, his work on the iBats app received the Medal for best environmental project at the British Computer Society awards. Since 2004, he has served on the ACM US Public Policy Committee, with special interest in data privacy and security.
Further details: http://www.dcs.bbk.ac.uk/~gr/
Birkbeck group page:
Dr. Mehrnoosh Sadrzadeh
Title: Principles of Natural Language, Logic, and Tensor Semantics
The first formal approaches to natural language go back to the division calculus of Ajdukiewicz in the 1930s, where structures similar to groups were used to provide a functional interpretation for grammatical types and their composition. In the 1950s, these systems were refined with two, rather than one, division operators, and Lambek developed a residuated monoid semantics and a cut-free sequent calculus for them. I will show how one can develop a vector space semantics for residuated monoids and how this solves an open problem in the field of “distributional semantics” in statistical Natural Language Processing. This semantics provides higher-order tensor representations for sentences by composing the vectors/tensors of the words therein, themselves populated by statistics of occurrence in large corpora of data. I will present experimental results showing that these models beat non-compositional baselines in tasks such as disambiguation, similarity and entailment. I will also go through recent work in which adding a copying and a moving operation to residuated monoids enables us to lift the models from the sentence level to the level of discourse and to reason about phenomena such as ellipsis and anaphora. These models enjoy a categorical foundation in terms of functors between compact closed categories and Frobenius algebras and bialgebras over them.
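The two division operators the abstract mentions are the standard residuation laws of the Lambek calculus: in a residuated monoid (a partially ordered monoid with left and right division), composition and the two divisions are related by

```latex
A \otimes B \le C
\quad\Longleftrightarrow\quad
B \le A \backslash C
\quad\Longleftrightarrow\quad
A \le C / B
```

so that, for instance, a transitive verb of type $(N \backslash S)/N$ combines with a subject noun phrase on its left and an object on its right to yield a sentence of type $S$.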
Mehrnoosh Sadrzadeh graduated with a BSc in Computer Software Engineering and an MSc in Logic from Sharif University of Technology, Tehran, Iran. After obtaining her PhD in Montreal in 2006, she received an EPSRC Postdoctoral Fellowship on Algebra/Coalgebra in Oxford in 2008, together with a Research Fellowship at Wolfson College; in 2011 she received an EPSRC Career Acceleration Fellowship on Compositional Distributional Models of Natural Language Processing in Oxford. In 2013, she became a lecturer in the School of Electronic Engineering and Computer Science, Queen Mary University of London, and co-founded the Computational Linguistics Lab. Since August 2019, she has been an associate professor at UCL and holds a second round of a Royal Academy of Engineering Industrial Fellowship to work with the BBC R&D Data Team.
Professor Peter Cochrane OBE
Title: Science and Engineering Out of The Box!
There has never been a time in the history of our species that has seen such innovation and rapid progress; and we have never been so confounded by the world we have realised! For sure, we have crossed the Rubicon from a linear past to a non-linear future and find ourselves lacking many of the basic tools we need to fully address the major problems confronting us.
Engineering solid solutions has never been so difficult and challenging!
In such an environment we have to be prepared to be ‘unreasonable’, to challenge established wisdom, conventions and practices. So in this session I present three challenging cases that do just that:
1) Wireless Spectrum: It is actually infinite and there is no bandwidth crisis!
2) Cyber Security: We need auto-immune systems, as in biology
3) Information War: The biggest threat to the survival of our species
Peter Cochrane is a seasoned professional with decades of hands-on management, technology and operational experience, who retired from BT as CTO in 2000 to form his own consultancy company. This saw the founding of eBookers, Shazam Entertainment, and a raft of smaller start-ups. Peter has also undertaken assignments with UK, Singapore and Qatar government departments, and with HP, Motorola, 3M, DuPont, Ford, Sun, Apple, Cisco, Rolls-Royce, BMW, Jersey Tel, Chorus, Facebook, QCRI and others.
In 2017, Peter was appointed Professor of Sentient Systems at the University of Suffolk, and is a visiting Professor at the Universities of Hertfordshire, Nottingham Trent and Salford. He has received numerous awards, including the IEEE Millennium Medal, the Martlesham Medal, the Prince Philip Medal, the Queen’s Award for Export and Technology, and an OBE. He has also received numerous honorary degrees.
Dr. Lonneke van der Plas
Title: Natural Language Processing for Creative Thought: Predicting Novel Concepts
Creative thinking is an essential cognitive ability that underlies innovation and open-mindedness. There has recently been an upsurge of labs and projects in many large IT companies on computational creativity, ranging from recipe creation to the automatic generation of music and art. The key to all this is using machine learning as a tool in the creative process; yet there has been relatively little focus on computational creativity from a linguistic perspective. Since the so-called statistical revolution, the field of Natural Language Processing (NLP) has been mostly concerned with finding patterns in data, mostly texts, and using these patterns to categorise texts, to structure them and to discover their meaning. These analyses are used for many well-known NLP tasks, such as named entity recognition, question answering and machine translation.
In this talk, I will bring computational creativity and NLP together by showing some recent work that I have been developing with my team on the automatic generation of novel concepts through compounding. Compounding is a linguistic process that combines different words to create new concepts. Compounds are known for their productivity (novel compounds are created every day: ‘echo chamber’, ‘dark web’), and flexibility (a given combination can have multiple interpretations: ‘baby belly’), and are therefore a great vehicle for creative thought. After discussing some important characteristics of compounds and some tools we created for analysing them, I will show how we used a time-stamped corpus to train a system to predict novel, but plausible combinations of words.
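The compound-prediction idea can be sketched with a toy example. This is not the team's trained system: the vectors below are invented for illustration, and the composition used is the simplest additive baseline. A candidate two-word combination is composed into a single vector and scored by cosine similarity against vectors of attested compounds.

```python
import math

# Toy word vectors (made up for illustration; a real system would
# learn them from a time-stamped corpus).
VECS = {
    "echo":    [0.9, 0.1, 0.2],
    "chamber": [0.2, 0.8, 0.1],
    "dark":    [0.1, 0.2, 0.9],
    "web":     [0.3, 0.1, 0.8],
}
# Vector of an attested compound, also invented for the example.
ATTESTED = {"echo chamber": [1.1, 0.9, 0.3]}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def compose(w1, w2):
    """Additive composition: the simplest compositional baseline."""
    return [a + b for a, b in zip(VECS[w1], VECS[w2])]

def plausibility(w1, w2, attested):
    """Score a candidate compound by its best similarity to attested compounds."""
    cand = compose(w1, w2)
    return max(cosine(cand, v) for v in attested.values())

# "echo chamber" should score higher than the less plausible "dark chamber".
```

Ranking unseen combinations by such a score is one way to separate novel-but-plausible compounds from implausible ones; the talk's system replaces the toy ingredients with representations trained on a time-stamped corpus.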
Lonneke van der Plas is a senior lecturer at the Institute of Linguistics and Language Technology of the University of Malta and currently chairs the Erasmus Mundus European Masters Program in Language and Communication Technologies. Before this, she held a junior professorship at the University of Stuttgart and a post-doc position at the University of Geneva. She earned a PhD from the University of Groningen and an MPhil from the University of Cambridge. She has managed several research projects, among them a project funded by the German Research Foundation on the cross-lingual analysis of noun compounds. She has over 50 publications in the field of natural language processing, in particular on the following topics: cross-lingual natural language processing for lesser-resourced languages, distributional semantics, terminology extraction, question answering, semantic role labelling, and computational creativity.