Join me as I explore the limits of knowledge and technology in weekly interviews with academics, writers & entrepreneurs. In discussion with philosophers, poets, and physicists we’ll delve into foundational questions in arts and science.
Coffee table conversations with people thinking about foundational issues. Multiverses explores the limits of knowledge and technology. Does quantum mechanics tell us that our world is one of many? Will AI make us intellectually lazy, or expand our cognitive range? Is time a thing in itself or a measure of change? Join James Robinson as he tries to find out.
34 | Animal Minds — Kristin Andrews on why assuming consciousness would aid science
by James Robinson
There is no consensus on what minds are, but there is plenty of agreement on where they can be found: in humans. Yet human consciousness may account for only a small proportion of the consciousness on our planet.
Our guest, Kristin Andrews, is a Professor of Animal Minds at York University in Ontario, Canada. She is a philosopher working in close contact with biologists and cognitive scientists, and has spent time living in the jungle to observe research on orangutans.
Kristin notes that comparative psychology has historically resisted attributing such things as intentions, learning, consciousness, and minds to animals. Yet she argues that, in light of the evidence, this is misguided: often the best way to make sense of the complexity of animal behavior is to invoke minds and intentional concepts.
Recently Kristin has proposed that the default assumption — the null hypothesis — should be that animals have minds. Currently, biologists examine markers of consciousness on a species-by-species basis, for example looking for the presence of pain receptors and for trade-offs in behavior. But everywhere we have looked, even in tiny nematode worms, we find multiple markers present. Kristin reasons that switching the focus from asking “where are the minds?” to “what sort of minds are there?” would prove more fruitful.
The question of consciousness and AI is at the forefront of popular discourse, but to make progress on a scientific theory of mind we should draw on the richer data of the natural world.
Science and poetry are sometimes caricatured as opposing paradigms: the emotional expression of the self versus the objective representation of nature. But science can be poetic, and poetry scientific. Our guest this week, Sam Illingworth, bridges these worlds. He’s researched scientists who were also poets, and organized workshops for scientists and laypeople using the medium of poetry to create an equitable and open dialogue.
In addition to being an Associate Professor at Edinburgh Napier University, Sam is the founder of Consilience, a peer-reviewed journal publishing poetry (which presents such beautifully titled gems as What You Don’t See on David Attenborough is All the Waiting) and hosts the Poetry of Science podcast where, each week, he writes a poem in response to recent scientific research.
Instead of undertaking the perhaps foolhardy (and certainly arduous) task of attempting to summarise this conversation, I will share some of my favourite quotes from it and from Sam’s book, A Sonnet To Science:
Percy Bysshe Shelley, in A Defence of Poetry
… a poet essentially comprises and unites both these characters [legislator and prophet]. For he not only beholds intensely the present as it is, and discovers those laws according to which present things ought to be ordered, but he beholds the future in the present, and his thoughts are the germs of the flower and the fruit of latest time.
Miroslav Holub on the demise of his Lepidoptera collection, as quoted in A Sonnet to Science
Unfortunately the world is real. Unfortunately within ten years all my butterflies in their twenty-five boxes were eaten by museum carpet-beetles and other parasites which nurture a grudge against human immortality. Unfortunately all that’s left is the pins the boxes and the labels. And so I started writing poems again. Poems aren’t eaten by anything, except stupidity.
Space and time appear in charts as axes oblivious to the points they demarcate. Similarly, we may feel that we, and all the objects of our worlds, are like such points — and spacetime is a container in which we sit.
Julian Barbour is a physicist who has spent six decades arguing against this. He takes the relationist approach of Leibniz and Mach: there is no space without objects and no time without change. Rather, space is just the geometric relationships between things.
Julian has pioneered theories that recover the predictions of Newtonian mechanics and General Relativity while dropping their invocation of imperceptible space, time, and spacetime. Recently he has taken an iconoclastic approach to the arrow of time — looking to a new measure of structure, the complexity, and to the expansion of the universe, instead of the traditional accounts in terms of entropy.
Space and time are intimately familiar; without them we experience nothing. In t’s and x’s and μ’s they suffuse the equations of physics. Yet their nature remains the subject of debate and conjecture. Are they entities in and of themselves, or do they simply represent relationships between objects or events?
Our guest this week is Julian Barbour, a physicist and thinker who has led a program of work over six decades to champion the relationist position. According to this view, there is no space without objects — space is a codification of the geometric relationships between objects. Likewise, there is no time without change — time emerges from the varied configurations of objects in different “nows”.
This is in direct contrast to a tradition going back to Isaac Newton:
Absolute, true, and mathematical time, in and of itself and of its own nature, without reference to anything external, flows uniformly and by another name is called duration.
For this claim Newton provided scant argument; however, he was able to make a case that space exists independently of particles. His law of inertia implies that rotating masses experience centrifugal forces. These are easily measurable: spinning a bucket of water causes the surface of the liquid to curve. What is the water spinning relative to, that it should experience this? Newton reasoned it had to be absolute space.
Newton’s contemporary and arch-rival, Gottfried Wilhelm Leibniz, pushed back against this absolutist picture, but although he offered many good arguments he was unable (or unwilling) to provide an answer to the bucket experiment. Two centuries later the physicist Ernst Mach, deeply uncomfortable with the intangibility of absolute space, proposed that centrifugal forces might be caused not by motion relative to the ethereal substance of space but by movement relative to ordinary stuff — in particular the fixed stars.
In 1977, with Bruno Bertotti, Julian made good on Mach’s principle for a universe of N (any finite number) particles and vanishing overall angular momentum. That is, the predictions of Newtonian mechanics — including the water in the bucket — were completely recovered without any reference to absolute space. This was an extraordinary achievement.
Julian has gone on to show that, again with some constraints (that the universe is closed — roughly speaking, that it curves back on itself), the predictions of General Relativity can also be recovered within a completely relational theory — Shape Dynamics, developed with collaborators Tim Koslowski and Flavio Mercati. While in General Relativity space and time are presented as malleable substances with a starring role in determining motions, in Shape Dynamics they are gone.
In Julian’s latest work, presented in his book The Janus Point, he has turned to the question of the arrow of time. Why do processes seem to happen in only one direction even though the laws of physics are time-symmetric? Unsatisfied with the frequently given explanation in terms of entropy increase (and in particular its lack of explanation for the “past hypothesis” — that the entropy of the early universe needed to have been low), Julian had an insight into an alternative picture.
Once again he explored the Newtonian problem first, showing that, for arrangements of particles interacting under gravity with a vanishing or negative total energy, there will be a moment of closest approach from which they move apart. Julian presents strong arguments that on either side of this moment there will be two arrows of time. He also provides a mathematical proof that a quantity he introduces — the complexity, which measures the amount of structure in the world — will inexorably increase as a configuration expands relative to its earlier scale.
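For the curious, here is the quantity itself, as I understand it from the published work of Barbour, Koslowski, and Mercati (any slip in the exact normalisation is mine): the complexity is a dimensionless ratio of two length scales of the N-particle configuration, where M is the total mass and r_ab the separation between particles a and b.

```latex
C_S = \frac{\ell_{\mathrm{rms}}}{\ell_{\mathrm{mhl}}}, \qquad
\ell_{\mathrm{rms}} = \frac{1}{M}\sqrt{\sum_{a<b} m_a m_b\, r_{ab}^{2}}, \qquad
\frac{1}{\ell_{\mathrm{mhl}}} = \frac{1}{M^{2}}\sum_{a<b} \frac{m_a m_b}{r_{ab}}
```

Intuitively, the root-mean-square length tracks the overall size of the system, while the mean harmonic length is dominated by the shortest separations; so the ratio grows as clusters form within an expanding configuration.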
Physics is not settled. And Julian freely admits that there could be observations that scupper the relationist ship. But his intuitions and evidence are strong. Let us suppose, then, that the relationists are right. What would it mean?
Stand up, spin, and let your arms fly out. You are feeling the dust clouds, stars, and black holes of the universe.
Watch the unfurling of a leaf and the folding of a paper swan. That too is the cryptic touch of gravity.
Questions I’d ask if I had this conversation again
How did being an independent academic shape his research and collaboration?
Julian chose not to follow a typical academic path. He lives on a small farm near Oxford and has strong relations with members of the Physics and Philosophy faculties there, latterly being a visiting professor, but he has never been a fully paid-up member of the academy. He has commented that the “Publish or perish” culture does not encourage the sort of work he has done — which one may characterize as infrequent, but groundbreaking papers that challenge the foundations of physical thought. I wonder what challenges and opportunities this choice presented him.
What did you learn from translating seventy million words of academic texts?
For income, for many years Julian would translate physics texts from German and Russian three days a week; the other two he would dedicate to his research. How did he find this rhythm? Had he been given the means to spend five days a week on the questions of time and space, would he have taken it?
How would you compare the senses of wonder and insight you get from reading Kepler and Shakespeare?
In The Janus Point, there are more quotes from Shakespeare than from Newton, Leibniz, or Mach. Julian is somewhat unusual in that he is a physicist who reads the original works of natural philosophy — such as Newton’s Principia and Descartes’ The World — and I believe he reads them for both pleasure and insight. Has Shakespeare also informed his thinking on time, space, and structure?
We live in a branching universe. If it can happen, it does happen.
These are the almost incredible claims of the Many Worlds Interpretation of quantum mechanics. Yet today’s guest, David Wallace, makes a case that this is the most grounded way of reading our best theory of nature.
While at first sight quantum mechanics seems to say that things (famously, cats!) can occupy impossible states, David argues that a careful reading shows we can take seriously “superpositions” (these apparently weird states) not only at the microscopic level but all the way up to the scale of the universe.
This way of thinking about quantum mechanics was first proposed in 1957 by Hugh Everett. David has made important contributions — particularly on the “preferred basis” or “counting” problem, which asks how many worlds there are, and also in understanding how a deterministic theory of the world can appear indeterministic — probabilistic — to agents.
David has PhDs in both physics and philosophy from the University of Oxford and currently holds the Mellon Chair in Philosophy of Science at the University of Pittsburgh.
“When you come to a fork in the road, you should take it”
So goes the jocular advice of Yogi Berra.
But what if this is what the universe actually does? What if we live in a garden of forking paths, of events that can go one way or another and in fact do both?
This is the subject of today’s podcast with David Wallace. David holds the Mellon Chair in Philosophy of Science at the University of Pittsburgh. He is one of the leading advocates of the Everett interpretation of quantum mechanics.
Originally proposed in 1957 by Hugh Everett, and also called the Many Worlds Interpretation, it offers a way of unpacking the precise mathematical predictions of this theory.
The issue is this: quantum mechanics permits small things — atoms, electrons, photons — to be in frankly weird states. A single particle can be in many places at once, or traveling at many different speeds.
These superpositions are integral to quantum mechanics — the theory that explains how stars work, has enabled us to build computer chips, and, indeed, lies at the very foundations of our understanding of matter: the uninspiringly named “Standard Model” is a quantum mechanical theory.
Perhaps you have no issue with these tiny particles being in such weird states — you can’t see them. But quantum mechanics does not draw a qualitative distinction between small and big things. Like the rest of physics, it treats big things as agglomerations of small ones; the difference between them is one of quantity, not kind. So quantum mechanics appears to tell us that big things like people, or (famously) cats, can be in multiple places at once or even in seemingly contradictory states of being dead and alive. This is not what we observe.
The original attempt to explain this away was to say that when a measurement takes place these superpositions break down and crystallise into a single state. You might have come across phrases like “the collapse of the wavefunction” to describe this idea that things go from being spread out, or wavelike, to being localised. But what’s so special about measurement that it should provoke such a change of behaviour? What even is measurement, if not just another physical process?
Other attempts propose modifying quantum mechanics — adding a new mechanism that would cause the crystallisation or collapse that doesn’t privilege measurement. However, it is no mean feat to try to modify a theory that has had such predictive success.
But what if we do not try to explain anything away?
What if we take seriously this idea of superpositions at all levels, not just the microscopic but all the way up to human and even universal scale?
Does quantum mechanics tell us we will observe something being in two states at once? No. Hugh Everett, David Wallace, and many others reason that quantum mechanics tells us that the world branches: as the small superpositions become large, those large superpositions represent worlds, and each world looks much like the world we inhabit — where objects are one thing or another but never both at once.
When a photon can follow two different paths, it does; when the detection apparatus can observe it in two different places, it will; when I can see that apparatus registering two different things, I will. But there is one me and another me, and neither sees anything extraordinary. The world has followed the fork in the road.
This is a theory with almost incredible consequences. But it has unassuming origins. It does not assume that there is anything special about measurement, nor that quantum mechanics is incomplete. It is a radical theory, for it goes to the roots of quantum mechanics; from these the branches emerge.
David was my tutor when I studied Physics and Philosophy as an undergraduate at Oxford, my thanks to him for giving his time to my curiosity once again.
References
The Yogi Berra line is one Harvey Brown used in his lectures on MWI.
The Emergent Multiverse — the most comprehensive book so far on the Many Worlds interpretation — if you’re still curious after listening to the podcast and reading this far, this book might be for you!
Casey Handmer is the founder of Terraform Industries (TI).
TI is pioneering air-to-fuel technology to manufacture methane (natural gas) from the air. Currently, we continue to extract enormous quantities of hydrocarbons from the crust, burn them, and release carbon dioxide. Instead, TI wants to mine the air: displacing the transport of carbon from the crust to the atmosphere.
Casey Handmer has a PhD in theoretical astrophysics from Caltech; he’s worked at NASA’s JPL and on Hyperloop One.
For a transcription and further references see multiverses.xyz
My thanks to Mark Shilton, Sam Westwood and Maciej Pomykala for help with this episode.
Hydrocarbons are not bad. Over the last three hundred years they have propelled global growth and technology. Steel, trains, planes, plastics, and processors owe a historical debt to the energy that we have readily extracted from coal, oil and gas.
The way we get hydrocarbons is bad. In taking them from the ground and burning them we transport carbon from the crust to the air. The deleterious consequences of this for our climate are well known. Furthermore, the inhomogeneity of their distribution has led to global inequities; indeed, reserves of such natural resources continue to prop up unsavory regimes, even eliciting deference from other powers that profess to uphold more democratic principles.
Casey Handmer, founder and CEO of Terraform Industries (TI) joined me on the inaugural episode of the Multiverses podcast. He has a plan to create a cleaner, fairer hydrocarbon economy:
1. Extract hydrocarbons from the air, keeping the atmospheric balance intact — even improving it.
2. Make this work almost everywhere, by relying on more equitably distributed resources: sunshine and air.
3. Do this more cheaply than drilling.
Casey has a PhD in theoretical astrophysics and has worked at NASA’s JPL and at Hyperloop One.
The technology behind the plan is old: scrub carbon dioxide from the air (like plants!), use water to create hydrogen (electrolysis — discovered ~1800), then combine the hydrogen and carbon dioxide using the Sabatier process (discovered ~1900) to produce methane. Methane, or natural gas — CH4 — is the gateway hydrocarbon: easy to transport, and usable as the basis for more complex synthetic fuels.
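To get a feel for the mass balance, here is a minimal back-of-envelope sketch of the Sabatier stoichiometry (CO2 + 4 H2 → CH4 + 2 H2O). This is my own illustration, not TI’s figures, and the function name is invented for the example:

```python
# Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O
# Back-of-envelope feedstock masses per kilogram of methane produced.
M_CO2, M_H2, M_CH4 = 44.01, 2.016, 16.04  # molar masses in g/mol

def feedstock_per_kg_ch4():
    """Return (kg of CO2, kg of H2) consumed per kilogram of CH4."""
    mol_ch4 = 1000.0 / M_CH4               # moles of CH4 in 1 kg
    kg_co2 = mol_ch4 * M_CO2 / 1000.0      # 1 mol CO2 per mol CH4
    kg_h2 = mol_ch4 * 4 * M_H2 / 1000.0    # 4 mol H2 per mol CH4
    return kg_co2, kg_h2

kg_co2, kg_h2 = feedstock_per_kg_ch4()
# Roughly 2.7 kg of CO2 and 0.5 kg of hydrogen per kg of methane.
```

That half-kilogram of hydrogen per kilogram of methane hints at where the energy goes: electrolysis dominates, with air capture and the (exothermic) Sabatier step alongside.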
The economics that makes this work is new: it requires copious, low-cost energy from solar PV for the process to undercut crust-mined methane. That energy is used to turn the fans that churn through the air, electrolyse the water, and run the Sabatier process. Using projections of solar energy costs, Casey estimates that by the mid-2030s it will be cheaper in most inhabited places to generate hydrocarbons this way than by drilling.
Because TI is confident that the cost of solar power will continue to fall, its efforts are focused on building something that can quickly achieve mass adoption — that means building cheap machines rather than ones using expensive components that could operate at higher efficiency. When PV is super cheap, we can afford to be wasteful with it if that means a faster transition to net zero. We don’t need highly efficient processes to create fuels; we just need a lot of solar.
If the TI thesis plays out, it will enable a phase change in solar adoption. In many cases it does not currently make sense to connect more solar to grids — it only adds generation at hours that are already well covered. More storage solutions, and HVDC to move energy between time zones, will change that. Even then, it’s hard to connect new solar farms — it can require years of permits to get the grid interconnections laid. If it becomes cheaper to produce methane from the air, then the grid constraints are bypassed: a solar farm can be constructed anywhere with access for trucks, and the methane produced can be stored in mundane ways (tanks).
I hope it happens.
Questions I’d ask if I had this conversation again
Is there a floor to solar costs? A couple of reasons to think there might be:
It uses up some natural resources, and the cost of these has a floor. Does plywood installation display a learning rate? Perhaps slightly, but it’s masked by resource costs.
Solar needs land area, another constrained resource.
What about air-to-food? Startups like Solar Foods are following a similar model, turning energy from the sun into food. Could there be advantages to colocating facilities for food and methane production? Will the prices of food and fuel equalize in terms of dollars per joule? The cheapest food is currently about 18 MJ per dollar (see https://efficiencyiseverything.com/calorie-per-dollar-list/) whereas gasoline is more like 60 MJ per dollar — so gasoline is about three times cheaper. Will food become relatively cheap compared to gas, even with gas coming down in price? More good news?!
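As a quick sanity check on that comparison (both numbers are the rough figures quoted above, not measured data):

```python
# Energy per dollar, using the rough figures quoted in the text
food_mj_per_usd = 18.0      # cheapest food, ~18 MJ per dollar
gasoline_mj_per_usd = 60.0  # gasoline, ~60 MJ per dollar

# Gasoline delivers roughly 3.3x more energy per dollar than the
# cheapest food, i.e. food costs ~3x more per joule.
ratio = gasoline_mj_per_usd / food_mj_per_usd
```

Note this compares raw chemical energy; as Casey’s reply below points out, the gap widens further once you account for how much of that energy is usable.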
Update — Casey got back to me by email with some comments on these:
Cost floor: Might get as low as $30k/MW. Land cost becomes important at that level without severe regulatory assistance! [JR: for reference, the turnkey installed cost is currently $900k per MW at the low end — so the floor is a long way down]
Air-to-food: Food is actually about 100x more expensive than gasoline per usable unit of mechanical energy. Probably better to collocate synthetic food factories (if any) with centers of demand, as food is less transportable than natural gas, which can flow through existing pipelines!
I am working on a podcast discussing topics across the arts and sciences — the branching of the multiverse, of texts, and of ideas. Currently, my work on this consists of infrequent imaginary interviews conducted in my head. Once I sort out my mercurial broadband connection and set aside some time, my daydreams will be realized.