> That’s not nearly enough cells to have an internal experience; they’re fine.

We don’t know enough about human consciousness to know that for sure. Plenty of animals have fewer brain cells than humans, but we don’t know enough about their consciousness to say whether they have an internal experience.
That’s what I mean. It’s hubris to assume it’s fine to culture human brain cells in a petri dish just because there’s no evidence one way or the other about whether they’re aware.
There’s a lack of evidence for anything not being conscious.
Neurons work by generating electrical signals in response to stimuli (either electrical input from other neurons or physical/sensory input triggered by light, touch, etc.), and they do this in a physical way.
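To spell out what I mean by “a physical way”, here’s a toy version of the textbook leaky integrate-and-fire model: charge accumulates, leaks away, and crossing a threshold fires a spike. (Every constant and the simulate function here are made-up illustrations, not numbers from any real neuron.)

```python
# Toy leaky integrate-and-fire neuron: voltage drifts toward rest,
# input current pushes it up, and crossing a threshold fires a "spike"
# followed by a reset. All values are illustrative.
import random

V_REST, V_THRESHOLD, V_RESET = -70.0, -55.0, -75.0  # millivolts
LEAK_RATE = 0.1   # fraction of the gap back toward rest closed per step
INPUT_GAIN = 1.0  # scaling from input current to voltage change

def simulate(input_current, steps=100):
    """Return the time steps at which the model neuron spikes."""
    v = V_REST
    spikes = []
    for t in range(steps):
        v += LEAK_RATE * (V_REST - v)       # passive leak toward rest
        v += INPUT_GAIN * input_current(t)  # stimulus pushes voltage up
        if v >= V_THRESHOLD:                # threshold crossed: fire
            spikes.append(t)
            v = V_RESET                     # reset after the spike
    return spikes

print(simulate(lambda t: random.uniform(0.0, 4.0)))  # noisy drive: spikes
print(simulate(lambda t: 0.0))                       # no drive: silence
```

That’s the whole mechanism: accumulate, leak, threshold, reset.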
If they’re conscious, then there’s a pretty good chance that power plants are conscious, computers are conscious and pretty much everything else in the world is conscious.
I’m not sure there’s any requirement for consciousness to include “human-like reasoning” or “understanding” for it to have some kind of experience and perspective or awareness. Humans make a lot of assumptions about the world to make it fit the patterns we’re used to.
A cluster of neurons trained to play Doom might have consciousness, but it’s not likely to think like a human, just as a rock or a plant or an ant or an iPhone might have consciousness without thinking like a human.
Whether it’s ethical to squash an ant or turn off an iPhone or stimulate a lab-grown neuron depends on your ethical framework and your philosophical worldview.
I think there’s a lower limit of complexity for sentience, based on memory-persistence, self-firing, and self-recognition. I think there’s no need for moral concern for non-sentient things. (But, that’s just my ethical framework and philosophical worldview; the only “evidence” I’m at all aware of is thin and vague.)
But as far as having a subjective experience goes, I think that might go quite small and alien, including fungi and plants or even certain sub-cellular structures. Probably anything that maintains a border and internal homeostasis, including parts of the bodies of larger experiencers, could be having an internal perspective, and any human words applied to those experiences would tell you more about human bias than about the experience.
In my view, although I am neither a neurologist nor a philosopher, moral concern should absolutely scale with the complexity of the neuron blob, and it should do so in a non-linear way. I dislike harming an animal with a complex brain, like a mammal or a cephalopod, much more than I dislike harming the equivalent nerve mass in an insect, for instance.
That’s also the way I feel, but I think that’s probably human bias, closely tied to the evolutionary pressure behind my mirror neurons: how strongly they fire correlates with how sentient the other creature’s outward phenotype looks.
I think if I knew what it felt like (if anything) to be an ant colony, I might have different views about the casual use of boric acid (and related products) to keep them out of human spaces.
> There’s a lack of evidence for anything not being conscious.
So should we just assume that nothing is conscious? After all, I can’t prove that you’re conscious, nor you I. So should we relegate ourselves to an amoral solipsism?
> Neurons work by generating electrical signals in response to stimulus and they do this in a physical way.
I know how neurons work. Nobody knows why they produce consciousness or what particular mechanism is responsible for human awareness.
> I’m not sure there’s any requirement for consciousness to include “human-like reasoning” or “understanding” for it to have some kind of experience and perspective or awareness.
That’s… irrelevant. I never said they have “human-like reasoning” or “understanding.” I said we don’t understand enough, meaning humanity writ large, including the experts. There are too many unknowns about the nature of consciousness.
> A cluster of neurons trained to play Doom might have consciousness but it’s not likely to think like a human
Again, it doesn’t need to think like a human in order to be capable of experiencing suffering. Babies don’t “think like humans,” or at least we don’t have any solid evidence that they do, but they’re certainly capable of suffering.
Your mentality is the same one people have used for generations to justify circumcising infants without anaesthetic. How far are you willing to extend it? Do pets “think like humans”? Do uncontacted tribes “think like humans,” in whatever vague way you define it in order to justify cultivating human brain cells in a petri dish?
Do you not see how problematic this is? What if the technology grows and in a decade they’re studying a clump of 2 billion neurons in a vat? Will it suddenly become human enough to deserve your consideration? What about when it becomes 20 billion?
> Whether it’s ethical to squash an ant or turn off an iPhone or stimulate a lab-grown neuron depends on your ethical framework and your philosophical worldview.
Whether it’s ethical to murder an entire village of your enemies “depends on your ethical framework and philosophical worldview.” See what a slippery slope moral relativism is? Amoral people exist, moral cynicism exists, nihilism exists, solipsism exists, hell, even social Darwinism exists.
Any of those frameworks and worldviews can be used to justify atrocities in the minds of those who hold them. And yes, an unethical or even anti-ethical persuasion is still an “ethical framework,” in the strictest sense of the term.
Just because something can be couched in philosophical jargon doesn’t mean we should grant whoever holds it license to do whatever they want.
> So should we just assume that nothing is conscious?
Not at all! In fact, I believe that we should assume almost everything is conscious. I think it’s a bit of human arrogance to think that we brain creatures have a monopoly on perspective.
> Nobody knows why they produce consciousness or what particular mechanism is responsible for human awareness.
Exactly my point.
> That’s… irrelevant
I don’t think it is. If the argument is that it’s unethical to poke a neuron because it might have consciousness, would the same argument not apply to anything else? I think you might be getting a bit hung up on the “think like a human” thing. My point is not that it’s okay to torture something if it doesn’t “think like a human.” It’s that there are potentially a lot of things in the world that are conscious that don’t often get the same consideration.
> capable of experiencing suffering
This is an interesting one. It shifts the question from “does it have a consciousness?” to “does it have a consciousness that is suffering or able to suffer?”
The idea of suffering is a very human concept that we have a whole section of our brains devoted to. There’s a lot of ethics devoted to alleviating suffering (e.g. humanitarianism), and we sorta use it as a means of directing our goals: we avoid things that make us suffer and seek things that bring us happiness. What makes us happy or makes us suffer varies a bit from person to person due to experience and learning/training, but a lot of it is biologically evolved. Physical and emotional pain makes us suffer for evolutionary reasons.
So in one sense, you could define suffering as a stimulus that some conscious system avoids? In which case, training neurons essentially teaches them what suffering is. They’re trained to activate or not activate based on what avoids irregular stimulus (suffering) and results in regular stimulus (happiness).
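Here’s a deliberately cartoonish sketch of that kind of training loop: actions followed by predictable feedback get reinforced, actions followed by noise don’t. (The weights dict, TARGET, and the surprise rule are all invented for illustration; nothing here models real electrophysiology or any specific experiment.)

```python
# Cartoon of "train by regular vs. irregular stimulus": after each action
# the system gets predictable feedback if the action was good and random
# noise if it was bad, and it drifts toward actions that were followed by
# predictability. Purely illustrative.
import random

weights = {"left": 0.5, "right": 0.5}  # action preferences
TARGET = "left"                        # the behaviour being trained
LEARNING_RATE = 0.05

def feedback(action):
    """Regular stimulus for the target action, noise otherwise."""
    return 1.0 if action == TARGET else random.random()

for step in range(500):
    # Pick an action in proportion to current preference.
    action = random.choices(list(weights), weights=list(weights.values()))[0]
    # Low surprise (predictable feedback) reinforces; noise weakens.
    surprise = abs(feedback(action) - 1.0)
    weights[action] = max(0.01, weights[action] + LEARNING_RATE * (0.5 - surprise))

print(weights)  # the "left" preference should dominate after training
```

On this definition, “suffering” is just whatever signal the update rule steers away from.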
If that’s how you define it, though, there could be many other systems that work the same way: obviously animals and plants and fungi, but also computers and lots of mechanical systems. If making decisions to avoid or seek electrical stimulus is suffering, then a computer is basically a pleasure/torture box.
Personally I think that suffering is more than that. I think it’s a larger system we brain creatures have developed that doesn’t necessarily apply very well outside the context in which we use it. Would a vat of 20 billion neurons be able to suffer? I think that depends on how they’re arranged and whether they have that concept.
> Whether it’s ethical to murder an entire village of your enemies “depends on your ethical framework and philosophical worldview.” See what a slippery slope moral relativism is?
Just because different ethical frameworks and worldviews exist doesn’t mean they should all be treated equally. Sure, if someone is super utilitarian, they might be fine with torturing people for medical research when they feel the ends justify the means. If someone has a strict deontological code of ethics that tells them homosexuality is a sin punishable by death, they might campaign for that. I think those people suck and their beliefs are evil because of my own ethics and worldview.
When it comes to a question like “is an ant capable of suffering?” or “is it okay to swat a fly or set a mouse trap?” or “how many human neurons does it take to suffer while changing a light bulb?”, you’ll get varying answers from people based on who they are. Personally, I think the right answer to those questions depends on the brain of the person answering them.
Moral universalists face the same slippery slopes you mentioned. If right and wrong are fixed and objective and not dependent on people, then groups claiming to know the one true morality will use it to persecute those labelled as evil or morally bankrupt (see the homophobic asshole example above).
Moral relativism doesn’t mean that morality doesn’t matter or that it’s wrong to fight against what you think is evil. I believe you should fight for what is right and I’m hopeful that the things that I think are good will win out against the things that I think are evil. Absolutism is maybe a bit easier for that because it simplifies moral choices a lot, but I think it’s hubris to think that evil is the same everywhere to everyone and not an artifact of the human mind.
We should teach ants to play Doom.
Nononono we don’t want World War Ants