Intelligent Artificiality and the Algorithmic Foundations of Learning and Action



We are developing models and modeling frameworks that help humans learn more effectively, reliably, and accurately from the ways in which machines learn: in part by adapting the algorithmic and meta-algorithmic approaches that have enabled the rapid development and large-scale deployment of self-refining algorithms (‘machine learning’), and in part by critically examining and re-engineering the modeling assumptions behind ‘fundamental models of humans’ in the social sciences to reflect our improved understanding of the algorithmic structure of learning and intelligent behavior. We are also developing models of human tasks – such as deciding, predicting, optimizing, designing sequences of motor actions, relating, communicating, and making deductive, inductive, and abductive inferences – and of the learnability of the skills that enable humans to perform them, models that will let us engineer faster and more productive approaches to human learning and skill development.

Current questions we are focusing on include:

How can humans use the algorithmic ‘tips and tricks’ that have enabled machine learning engineers to implement self-refining algorithms at scale and in near real time, in order to become better everyday problem solvers?

How should the foundational assumptions of decision theory and the microeconomic foundations of modeling be modified in light of what we understand about how machines process information?

What constraints do ‘using a brain’ and ‘minding a body’ impose on a mind, and what opportunities do they afford?

How do brains, drawing less than 300 watt-hours per day, perform calculations that cost the server banks training large language models megawatt-hours of energy? (See the back-of-the-envelope comparison following these questions.)

How should we think about causation and causality in order to amplify and refine our own abilities to cause desired effects under time pressure and energy constraints?

How can we design ideograms and ideographies that perform better at communicating thoughts across linguistic or disciplinary barriers than previous attempts have?

How can we use insights from the ways in which large language models ‘generate’ text to help humans become better generators of word and action sequences?

What is the brain's analogue of ‘backpropagation’, and how can we use neural network training and encoding architectures to accelerate learning? (See the sketch following these questions.)

How can distributed learning architectures help humans learn a net new skill or ability more quickly and thoroughly?

What are useful machine learning analogues of skill transfer? How can we use these analogues to enhance the transferability of skill in humans?

What encodings of objects, events, predicates, and ‘scenes’ enable machines to ‘get around environments’, and what can humans learn from these encodings?
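
As a concrete anchor for the energy question above, the sketch below gives a back-of-the-envelope comparison using only the two figures quoted in that question – a brain budget of under 300 watt-hours per day and training runs measured in megawatt-hours. The one-megawatt-hour quantity is purely illustrative, not a measurement of any particular training run.

```python
# Back-of-the-envelope comparison using only the figures quoted in the
# question above: a brain budget of under 300 watt-hours per day versus a
# training run measured in megawatt-hours (1 MWh = 1,000,000 Wh).
# The 1 MWh figure is illustrative, not a measurement of any actual run.

BRAIN_WH_PER_DAY = 300        # upper bound quoted above
TRAINING_RUN_MWH = 1          # illustrative unit of training energy

brain_watts = BRAIN_WH_PER_DAY / 24                              # continuous draw
brain_days_per_mwh = TRAINING_RUN_MWH * 1_000_000 / BRAIN_WH_PER_DAY

print(f"Continuous brain power draw: {brain_watts:.1f} W")
print(f"One megawatt-hour runs a brain for about {brain_days_per_mwh:,.0f} days "
      f"({brain_days_per_mwh / 365:.1f} years)")
```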
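
For readers who have not met the term, the sketch below shows what ‘backpropagation’ does in the smallest possible setting: a two-layer network fitted to a toy regression task, with the error signal propagated backwards through the layers by hand (NumPy only). It is a minimal illustration of the algorithm itself, not a claim about how – or whether – the brain implements anything analogous.

```python
# Minimal backpropagation sketch: a 1 -> 16 -> 1 network with tanh hidden
# units, trained by hand-written gradients on a toy regression problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))   # toy inputs
y = np.sin(X)                                   # target: y = sin(x)

W1 = rng.normal(0.0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)              # hidden activations
    y_hat = h @ W2 + b2                   # predictions
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: push the error signal back through each layer
    d_yhat = 2.0 * (y_hat - y) / len(X)   # dL/dy_hat
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T                   # error routed back to the hidden layer
    d_pre = d_h * (1.0 - h ** 2)          # through the tanh nonlinearity
    dW1 = X.T @ d_pre
    db1 = d_pre.sum(axis=0)

    # Gradient step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final mean-squared error: {loss:.4f}")
```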

Representative Publications

• Moldoveanu, M.C. 2023. Explananda and Explanatia in Deep Neural Network Models of Neurological Network Functions. Behavioral and Brain Sciences.

• Reeves, M., M.C. Moldoveanu, and A. Job. 2023. Radical Optionality. Harvard Business Review, May 2023 (nominated for the HBR/McKinsey Prize).

• Moldoveanu, M.C. 2023. A Source and Channel Coding Approach to the Analysis and Design of Languages and Ideographies. Behavioral and Brain Sciences.

• Moldoveanu, M.C. 2022. Probably, Approximately Useful Frames of Mind: A Quasi-Algorithmic Approach. Behavioral and Brain Sciences, 45.

External Collaborators:
Martin Reeves, Managing Director, Boston Consulting Group and Chairman, BCG Henderson Institute; Professor Mike Ryall, Florida Atlantic University; Dr. Joel Leibo, Google DeepMind.

Contact


Desautels Centre for Integrative Thinking
Rotman School of Management
105 St. George Street, Toronto, Ontario M5S 3E6