Sentient Stuff


Those of us who have worked in the data storage industry often wonder how our computers match up to the processor we carry around in our own heads. Comparisons are difficult to come by – we can estimate the average number of neurons in a human brain (~86 billion), but they are quite different from the “bits” that comprise computer memory. If they functioned in the same binary way, we would have the storage equivalent of a typical flash drive, and we’d have to start deleting less important memories by the time we reached sixth grade.
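For the curious, here is a back-of-the-envelope sketch of that comparison in Python. The one-bit-per-neuron figure is the same deliberate oversimplification made above, an assumption for illustration rather than a claim about how neurons actually store information.

```python
# Back-of-the-envelope: if each neuron stored a single binary bit,
# how much "storage" would a human brain have?

NEURONS = 86_000_000_000        # ~86 billion neurons
BITS_PER_NEURON = 1             # the deliberately naive one-bit assumption

total_bits = NEURONS * BITS_PER_NEURON
total_gigabytes = total_bits / 8 / 1e9   # 8 bits per byte, 1e9 bytes per GB

print(f"Naive capacity: {total_gigabytes:.1f} GB")   # ~10.8 GB, flash-drive territory
```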


Each neuron shares information with about 1,000 others, putting the total number of connections in the tens of trillions. We also know that neurons cooperate with one another in storing memories, resulting in an overall estimated capacity of several petabytes. This amount of computer memory would store about 3 million hours of video. If you think of your life as one big reality TV show, that’s about 300 years’ worth of narcissistic binge watching.
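Extending the same sketch to synapses and video hours: the 2.5-petabyte capacity and the one gigabyte per hour of standard-definition video used below are ballpark assumptions standing in for “several petabytes”, and the totals shift with whichever figures you prefer.

```python
# Rough synapse count and the "lifetime of video" comparison.
# The 2.5 PB capacity and 1 GB/hour video rate are ballpark assumptions.

NEURONS = 86_000_000_000
CONNECTIONS_PER_NEURON = 1_000

synapses = NEURONS * CONNECTIONS_PER_NEURON          # ~8.6e13, tens of trillions

capacity_petabytes = 2.5                             # assumed overall capacity
gb_per_video_hour = 1.0                              # assumed standard-definition rate

video_hours = capacity_petabytes * 1_000_000 / gb_per_video_hour   # PB -> GB -> hours
video_years = video_hours / 24 / 365

print(f"Synapses:    {synapses:.1e}")       # 8.6e+13
print(f"Video hours: {video_hours:,.0f}")   # 2,500,000
print(f"Video years: {video_years:.0f}")    # roughly 285 – a few centuries of binge watching
```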


The size of an individual human memory is difficult to estimate; our more detailed memories probably take up the most room. As we grow and learn, some memories are discarded to free up space, while others are just too good to let slip away. A great deal of information we consume is just not worth remembering in the first place. Computers and brains have much in common.


The more interesting comparison, one that has intrigued great thinkers for centuries, involves consciousness. How sentient can a non-human entity be? We like to think that a computer doesn’t have feelings and can only mimic them. But is it aware of itself, and if so, how does it feel about that? If you tell a new computer that it will never amount to much because its memory is too small or its processor is too slow, will it eventually need counseling?


The presence of consciousness is more than just idle conjecture. Neuroscientist Alysson Muotri of UCSD maintains Petri dishes in his lab where hundreds of tiny, sesame-seed-sized brains float around. Known as brain organoids, they have been connected to walking robots, used as models for advanced AI systems, and lately employed in the testing of SARS-CoV-2 drugs. None of this seems too alarming – except perhaps for the walking robots.


Things get a bit more disconcerting in the Muotri Group’s August 2019 Cell Stem Cell article. In that research, the little organoids began to generate coordinated waves of activity much like those seen in a conscious brain. Anticipating the philosophical and moral questions that would surely arise, Dr. Muotri shut down the experiment after a few months. In the meantime, other researchers were having their own epiphanies.


Developmental Biologist Madeline Lancaster knows that, like a computer, a brain without input and output isn’t worth much. Her research team tried growing brain organoids next to spinal cord and muscle tissue from a mouse. Once a connection was established, the muscles began to contract. Harvard Molecular Biologist Paola Arlotta was able to induce light sensitivity in some brain organoids. She then observed that their neurons started firing when illuminated. These discoveries and others like them have produced some attention-grabbing research papers – and put many ethicists and theologians on notice – but where do we go from here?


There are some uniquely human conditions (e.g., autism) that cannot be studied in animal models. Effective research on these could benefit greatly from “consciousness in a jar”. In a culture that still debates the dangers of genetically modified tomatoes, this is a heavy lift. Both the research itself and the ethical guidelines that must accompany it require a standard way to define and measure consciousness. So far this has proven elusive.


Peter Singer, a philosopher and advocate for living things, famously noted that a particularly brilliant chicken might surpass some humans in certain capacities. A quick stroll through the meat department should convince you that this isn’t a very good metric.


Computers and brains have some similarities, but comparisons are sketchy at best. Our silicon tools start with simple Boolean logic gates and build on those to produce striking complexity. Similarly, brain organoids grown in the laboratory start out as simple multicellular structures that can be coaxed into some very human-like behaviors. Whether or not consciousness is one of those remains to be seen, but how will we ever know if that collection of organoids in a jar is sentient?
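As a small aside, here is a toy Python sketch of that bottom-up idea on the silicon side: a single NAND gate, the classic universal Boolean primitive, is composed into AND, OR, XOR and a one-bit half-adder. This is standard digital-logic material, included only to show how modest the starting pieces are; it says nothing about what organoids can or cannot do.

```python
# Toy illustration: everything below is built from a single NAND gate,
# the classic "universal" Boolean primitive.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def inv(a: bool) -> bool:
    return nand(a, a)

def and_gate(a: bool, b: bool) -> bool:
    return inv(nand(a, b))

def or_gate(a: bool, b: bool) -> bool:
    return nand(inv(a), inv(b))

def xor_gate(a: bool, b: bool) -> bool:
    return and_gate(or_gate(a, b), nand(a, b))

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Add two one-bit numbers; returns (sum, carry)."""
    return xor_gate(a, b), and_gate(a, b)

# 1 + 1 = binary 10: sum bit 0, carry bit 1
print(half_adder(True, True))   # (False, True)
```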


Someday we may be able to just ask it.


Author Profile - Paul W. Smith - leader, educator, technologist, writer - has a lifelong interest in the countless ways that technology changes the course of our journey through life. In addition to being a regular contributor to NetworkDataPedia, he maintains the website Technology for the Journey and occasionally writes for Blogcritics. Paul has over 40 years of experience in research and advanced development for companies ranging from small startups to industry leaders. His other passion is teaching - he is a former Adjunct Professor of Mechanical Engineering at the Colorado School of Mines. Paul holds a doctorate in Applied Mechanics from the California Institute of Technology, as well as Bachelor’s and Master’s Degrees in Mechanical Engineering from the University of California, Santa Barbara.
