To Be Immortal

Turritopsis nutricula can do something amazing. It can turn back the clock.

This tiny creature, which begins life as a polyp and then metamorphoses into a jellyfish, can change itself back into a polyp after it has reached maturity. This reversal of aging is caused by cellular transdifferentiation – the transformation of a cell from one type to another.

Turritopsis nutricula has been nicknamed “the immortal jellyfish” because it does not have to age continuously and therefore, as long as it can survive disease, starvation or predation, it appears that it will never die.

As expected, news of the existence of this ageless hydrozoan has engendered speculation on whether we humans can be ageless, too.  Can scientists learn to apply the processes behind Turritopsis nutricula's return to youth as a means of forestalling human death?

To prevent death, we must understand what death means.

When we speak of death, however, we are really referring to the end of the Self, the thing that makes each of us the person that we are.  This can be the soul, the spirit, the mind or a collection of electrochemical reactions between neurons.

David Hume defined the Self as a collection of perceptions.  The idea that I am a continuous string of my memories seems valid.  I am the person who remembers calling herself Marcia Malory, having tuna for lunch yesterday and getting married in 2005.

People with Dissociative Identity Disorder have different Selves that share the same body. These Selves have different perceptions and different memories.  I wonder if it is possible for one of these Selves to die – to cease perceiving, while the body it had once associated with continues to function.

When you remember a dream, you remember the protagonist in the dream – the one who jumped off the top of a building and was able to fly, who suddenly became paralyzed when fleeing from an attacker, who was standing stark naked in front of an audience of 100 people – as being you, even though the body that you always thought to be your body did not take part in the dream.

In Buddhism, there is no discrete self. Instead, there are the Khandas – five components that make us who we are: physical form, sensation, perception, volition and consciousness.

If someone promised me immortality, I would expect that they would enable my Self – my mind, my soul, the parts of my brain that make me – to exist forever.

What if science enabled me to return to my youth, as Turritopsis nutricula can return to its polyp stage?  Would I have lost the memories of my adulthood?   (My brain would have become a more youthful brain.) When I grew to adulthood once again, I would gain new experiences and have new memories and new perceptions. The memories of Marcia I would not be the memories of Marcia II. Therefore, Marcia I would have died.

Immortality is not so easy, after all.

Science and Speculation


Scientist Seeks Woman to Give Birth to Neanderthal Baby

In an interview with Der Spiegel magazine last month, Dr George Church, Professor of Genetics at Harvard University, claimed that now that the Neanderthal genome has been sequenced, a Neanderthal could be cloned using human stem cells, and that a human woman could serve as a surrogate mother.

This fascinating suggestion was repeated in a number of different media outlets. It became a topic of discussion on various social media sites. After all, it does have important ethical implications.

However, a number of headlines appeared to indicate that Dr Church was actually seeking a woman to give birth to a Neanderthal RIGHT NOW. In some articles, this idea went beyond the headline and infiltrated the article itself, so that Dr Church’s musings became an “ambitious plan”.

Thus, with a few words, Dr Church was transformed from a brilliant scientist – one who helped to initiate the Human Genome Project – into a crackpot.

As would be expected, this made Dr Church upset.

Nobody likes to have their words misinterpreted, especially when it makes them seem like a nut.

Dr Church got into trouble because he was speculating, but his words were interpreted as though they were statements of fact.

We live in a world where people want their information fast. They get their news via Tweets and soundbites.

Web designers are advised to make their sites as skimmable as possible.

Unfortunately, this means that it becomes difficult for scientists and others to present complicated ideas to a mass audience. It becomes harder for speculation to be recognized as speculation.

Skimming means that people miss phrases like “What if”. Long, conditional sentences become misinterpreted because people don’t take the time to figure out exactly what those sentences mean.

In fact, I’ve seen people who write about science endure criticism for speculating because they were “spreading ideas that weren’t true” – sometimes the critics have been scientists or people who share an interest in science.

I’ve seen writers who have speculated be criticized for “not understanding the science”, when it was clear to me that they did understand.

A while back, I wrote an article in which I asked what would happen if scientists could turn gay people straight – I was looking at the philosophical implications of this possibility.  I stated in the article that nobody has ever been able to change anyone’s sexual preference.  In my mind, this was obviously a thought experiment.

Nevertheless, I was criticized because “I might give homophobes wrong ideas”.

Do scientists and people who write about science have a responsibility to refrain from speculating? Should Dr Church have restricted his interview to speaking about the sequencing of the Neanderthal genome and allowed discussions of Neanderthal cloning to be found only in books clearly labeled Science Fiction?

To me, the beauty of science is that it allows me to speculate.  I read about an exoplanet in a star’s habitable zone and I imagine what intelligent life would be like in another section of the universe – even though as of yet there is no proof that life, let alone intelligent life, exists anywhere other than our planet.

I want to know about this (imaginary) Neanderthal child with a human mother.  Does the child learn to speak, go to school, have friends?

I want scientists to be able to discuss their wildest ideas without having to worry about being misunderstood.

Having It All

Recently, I heard someone say that it is unusual for a person to be both creative and able to understand science and technology.

As someone who has been reading science fiction for a very long time, I found this statement quite surprising.

You can’t write good science fiction without knowing about science and knowing how to write.

I know people who are scientists and engineers and, at the same time, photographers and musicians.

A voice teacher once told me that people who are good at mathematics usually also have musical talent, because mathematical and musical ability both involve recognizing patterns. In my personal experience, I have found this to be true.

I probably shouldn’t have been surprised, though.

In our society, there is a myth that if you excel in one field, you must be a failure in another field and that any positive qualities you have must be countered by negative ones.

If you write poetry, you can’t possibly understand calculus.

Make a living writing code? Your social skills can’t be very good.

Good looking? A good athlete?  Popular? You must be stupid.

In reality, things don’t work out that way.

A recent study published in Social Cognitive & Affective Neuroscience appears to reveal a link between general intelligence (the kind of intelligence measured on IQ tests) and emotional intelligence.

So much for the stereotype of the scientific genius who doesn’t know how to get along with other people.

Beautiful but dumb? Actually, there appears to be a positive correlation between physical attractiveness and intelligence. One possible explanation of this phenomenon is evolutionary – a long time ago, the smart guys got all the hot women and together, they made a bunch of beautiful, brilliant babies.

Why then does the myth that you can’t have it all persist?

Perhaps the perpetuation of this myth is a way of maintaining a society’s status quo – by discouraging people from developing their abilities to the extent that they become capable of challenging the social order.

If someone who is good looking and popular constantly receives the message that they can’t also be intelligent, they probably won’t go out of their way to learn how they can change things around them or why things should change. Thus, they won’t be able to use their looks and popularity as tools to help effect those changes.

Someone who thinks of innovations in technology that could revolutionize the way essential tasks are performed may never share their ideas with the rest of the world if they have been convinced, from a young age, that they will never develop the skills required for eloquent communication.

Convincing a brilliant person that smart people can’t be popular can be a way of preventing such a person from seeking political power.

People who are especially talented or good looking are likely to be victims of bullying. This could indicate that societies don’t like people who are too smart, too skilled or too good looking – they threaten the stability of the social structure. Bullying is a way of keeping such people in their place.

Powerful people have often been threatened by intelligence. Good looks, talent, charisma and intelligence combined can be even more threatening.

Bullying, though, may be a last resort. Perhaps it is easier to convince someone that they can never be popular, intelligent or creative, so that they never try to develop their social skills, learn new things or discover effective ways of expressing themselves, than to terrify them after they have already become aware of their potential.

Planes, Trains and Past Participles

I’ve just returned from a short break in Stratford-upon-Avon, where Shakespeare was born and where my husband was flying drones (unmanned aerial vehicles).

This has made me think about the juxtaposition of the old and the new – not something unusual to me, as I am a science writer living in a walled city that was once part of the Roman Empire, on a street that, as I heard a tour guide put it, “has architecture from the 15th century, the 19th century and the 1980s” – and why we seem to accept changes in science and technology more willingly than we accept changes in language.

UK television is rife with programs telling us about the technological advances of the Celts, the Romans, the Victorians.  The British are expected to be proud of their country’s innovations in the fields of medicine, electronics and engineering, just as members of other nations are proud of their country’s achievements.

Few, if any, Americans are ashamed of the fact that the first person to walk on the Moon was born in Ohio.

When it comes to language, though, we are Luddites.

Changes in English that happen naturally – because all languages evolve, and language itself has been evolving since a human being living in prehistoric times uttered the first word – are seen as corruptions of a tongue that should be pure and unblemished.

A high speed broadband connection is considered superior to dial-up, and both are seen as better than snail mail (itself a derogatory term), but the use of “u” for “you” or the omission of an apostrophe are treated by some as almost criminal acts. Why isn’t spelling out the word “for” when the symbol “4” could be substituted called “snail English”?

We read jokes about the misunderstandings that come about through misspellings, the incorrect use of capital letters and faulty punctuation, but that’s what puns are.  The response to a language-related misunderstanding does not always have to be anger or violence; it can be laughter and the acknowledgement that different people think in different ways.

Besides, spoken language existed for tens of thousands of years before alphabets, spelling and punctuation came into existence (and even when they did, many remained illiterate), yet somehow people managed to survive without being able to decorate their sentences with commas, semi-colons and question marks.   There are cultures today where people get along despite a lack of a written language.

So why are we so conservative when it comes to language, when many of us will gladly replace relatively new technology with something that is even more up-to-date?

My husband’s fellow British drone operators will purchase the latest UAV equipment so that their vehicles can fly faster and further than ever before, yet they still waste time by spelling colour with a “u”.

Why use “they’re” (7 characters) instead of “there” (5) when almost everyone will be able to figure out what you mean either way? Why refuse to make the change when you have no problem replacing your old iPad with one that has a faster processor – and the language upgrade is free?

It’s not as though English hasn’t experienced changes like these before.

Some English past participles used to have the prefix “ge-” in front of them; the word “sung” was once “gesungen”.  The prefix was lost because life was easier without it. According to the linguist John McWhorter, author of The Power of Babel, the Vikings got rid of it. They were people who certainly depended on tools (language is a tool) being functional and easy to use.

So why the discrepancy?

Both language and tool building are inherent parts of being human; rudiments of both language and tool-making ability can be found in all of the other great apes (orangutans, gorillas, bonobos and chimpanzees), indicating that these skills probably developed in a common ancestor.

Why do we seem more likely to accept change when it comes to our tools and less likely to accept change when it comes to our words?

Is it possible that engineering and technology draw the interest of people with a more progressive outlook, while, contrary to common stereotypes, those who are interested in “softer” pursuits like writing tend to be more set in their ways?

Is gender an issue?  In western culture, more men than women seem to be drawn to technology, while women are thought to have better language skills than men.  (Whether these differences are mostly biological or cultural is a matter of debate.)

Maybe society encourages men to seek out new opportunities and women to protect the status quo – Man goes forth and hunts; Woman makes sure that the home runs smoothly.  Therefore, technology is associated with the daring hunter, while language is associated with the comforting homemaker.

An apostrophe before a possessive “s” is like a drop of milk in a nice, warm cup of tea.

Class issues are almost certainly involved. Language has often been used to stratify people by class. After the Norman Conquest, Norman French became associated with the upper classes and Old English with the lower classes – which is why today it is considered preferable to (Latin-derived) defecate than to (Old English-derived) shit.

Later on, the standardization of spelling and punctuation (after Shakespeare – the literary genius spelled his own name more than one way) made it easy to distinguish the educated from the uneducated – the ability to obtain a formal education has often been directly related to social class.

Today, we can designate someone as “one of us” or “one of them” according to whether or not she or he uses language the same way we do.

By restricting the rate at which language can change, we uphold the barriers between correct and incorrect English (and the distinctions between the right and the wrong sort of people).

Technology ownership, like language use, is also a measure of social class.  People with more money tend to have more things – more expensive things and newer things.  The possession of well-maintained “classic” items can also be a sign of high social standing.

However, some producers of technology may find that in the long run, it is more advantageous to sell many products to people of various social classes than to sell a small number of products to members of an elite.  Microsoft began with the vision of “a computer on every desk and in every home”.

The constant purchase of new and improved products doesn’t just encourage the development of new technology; it helps technology producers to make more money. Planned obsolescence has often been a successful marketing strategy.

I find it interesting that changes in language have often been associated with technological changes.  Pidgin languages developed as a way of facilitating trade (which has also been helped by advances in transportation technology). The invention of the telegraph resulted in the development of a short, clipped, punctuationless writing style – telegraphese – known for the use of STOP to represent the end of a sentence. And we have since developed txtspk.

Language, like technology, is affected by capitalism, but in the case of language, the commodity that is bought and sold is often the aforementioned formal education.  So convincing a parent that knowledge of when to use the subjunctive is essential for their child’s future success is, in a way, similar to convincing them that their children can only be healthy if there is a Wii Fit in the house.

The difference, of course, is that society is structured so that your ability to conform to language norms really does have a powerful effect on your social status and, consequently, on your life chances.

Maintaining old language norms increases your chance of success in life; hanging on to old technology does not.

Science and Race

The study of race from a scientific perspective has always been controversial.

In the 19th century, studies of skull structure were used to “prove” that different races were at different rungs on the evolutionary ladder – the “inferior” races being more ape-like.

Some scientists believed that Africans were a missing link between apes and “true” (i.e. European) humans; this resulted in some Africans being placed in zoos.

In The Mismeasure of Man, Stephen Jay Gould contends that the results of some of these studies were falsified to fit with society’s preconceived notions of white racial superiority.

Today, we know that race cannot be defined scientifically; there is no way to divide humanity into discrete groups on the basis of physical differences. Tens of thousands of years of human migration and interbreeding throughout the world mean that it is impossible to divide humanity into a few genetically distinct “races”.  Studies of the human genome prove this.

A recent genetic study has revealed that three hunter-gatherer populations in Africa show evidence of interbreeding, between 20,000 and 80,000 years ago, with an unknown group of hominins that had diverged from the lineage leading to humans and Neanderthals – further evidence that when it comes to sex, humans don’t always stick to their own kind.

There are small, genetically isolated human populations in which certain medical conditions and diseases are more prevalent than in the general human population, but these groups are usually thought of as being too small to be considered separate races.

Race is, in fact, a concept defined by culture, and racial definitions vary from culture to culture.

A person who is considered mixed race in one culture may be thought of as black, white or Asian in another.  This can be very important when laws treat people defined as mixed race differently than people defined as being purely of one race.

“Hispanic” is a racial term used in the United States to categorize people with a wide variety of genetic backgrounds whose ancestors spent some time in Latin America.

In America, people of South Asian (Indian, Pakistani, Bangladeshi) ancestry have sometimes been classified legally as Asian and at other times as white.

Having been born in America myself, I was raised to think of South Asians as white, and of the Asian race as consisting of people whose ancestors came from East Asia. (Yes, Americans do realize that the Indian subcontinent is part of Asia.)

When I moved to the UK, I was surprised to find that people from South Asia are considered non-white.

Because today’s scientists realize that race cannot be defined scientifically, they no longer study the differences between races or attempt to create a hierarchy of races.

Now, scientists are examining our attitudes toward race and trying to understand how and why these attitudes develop.

Evolutionary biologists looking at race focus on the human tendency to form large social groups and to divide human beings between members of one’s own group and outsiders (us vs. them).

They look at antecedents to racism in other primates.  A 2011 study shows that macaques spend a longer time looking at photos of unfamiliar macaques than at photos of macaques in their social group.  The researchers who performed this study interpreted the results as meaning that the macaque subjects viewed the strange macaques as more threatening than the macaques that they knew well.

Chimpanzees take the tendency to prefer members of their own group to outsiders to extremes; male chimpanzees within a troop will sometimes form a coalition that works together to attack and kill members of another group.

Nevertheless, even if humans have inherited a tendency toward racism, we are not necessarily destined to be racist.

Studies examining the neurobiological bases for racism in humans have found that when someone looks at a photo of a member of another race, activity increases in the amygdala, which is associated with fear and with learning to be afraid of things. (Racism can be encouraged by social conditioning.)

However, the anterior cingulate cortex (ACC) and the dorsolateral prefrontal cortex (DLPFC) are also activated.

The ACC is associated with error detection and dealing with conflicting information; it is activated during the Stroop test.  The ACC may be activated when people are presented with photos of members of other races because they experience conflict between an instinctive or learned fear of people who are different and the knowledge that they should treat all people equally.

The DLPFC is believed to play a part in regulating behavior.  While the amygdala may be signaling someone to behave negatively toward a member of another race, the DLPFC can prevent them from acting on that impulse.

Thus, our human brains allow us to overcome our primate racism.

Implicit Association Test

The Implicit Association Test (IAT) is supposed to determine whether people have racist feelings, and if they do, how intense these feelings are.

During the IAT, subjects separate words (such as “happy” or “disgusted”) into positive and negative categories by pressing one of two keys on a computer keyboard.

At the same time, they press the same two keys to separate photos of human faces into two racial groups.

Test results seem to show that the majority of people are inherently racist, as they are more likely to associate negative words with members of the group that they do not identify with.

Before I took the IAT online myself, I was warned that it was likely that I would discover that I was at least slightly racist, and if I couldn’t deal with it, I shouldn’t take the test.

In fact, my results showed that I have no preference for one race over the other.  Apparently, this is unusual.

One flaw in the test is that it may actually prime people to identify a particular race with good or bad qualities.

When I took the test (which was designed for Americans and identified people as either European-American or African-American), I first had to press one key if I either read a “good” word or saw a photo of a European-American, and press another key if I read a “bad” word or saw a picture of an African-American.

Although the categories are later switched – African-Americans and good words get one key, while  European-Americans and bad words get another – by that time the test may have already primed the subject to associate African-Americans with negative qualities and European-Americans with positive ones.
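
What the test actually measures is response speed: most people respond faster when the pairing on the keys matches associations they already hold. A rough sketch of that scoring logic might look like the following – the block labels and reaction times here are invented for illustration, and the published scoring algorithm is more involved than this:

    from statistics import mean, stdev

    # Hypothetical reaction times (in milliseconds) for one subject.
    # "Congruent" block: Group A faces share a response key with "good" words.
    congruent = [612, 580, 645, 598, 630, 575, 610, 590]
    # "Incongruent" block: the pairings are switched.
    incongruent = [702, 688, 730, 695, 710, 680, 720, 690]

    # Simplified D-score: the difference in mean latency between the two
    # blocks, divided by the standard deviation of all trials combined.
    pooled_sd = stdev(congruent + incongruent)
    d_score = (mean(incongruent) - mean(congruent)) / pooled_sd

    print(f"Simplified D-score: {d_score:.2f}")
    # A score near zero suggests no measured preference; larger positive
    # values mean faster responses when Group A was paired with "good".

On this crude measure, a result like mine – no preference either way – would come out near zero.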

Some studies have shown that changing the names of the racial categories on the IAT can affect its outcome.

Additionally, people who score as very racist on the IAT often show no signs of racist behavior.

On Altruism

Why do people do things that benefit others but disadvantage themselves?

On the surface, altruism seems to be inconsistent with the process of evolution.  If we view evolution as a competition in which the winners survive and reproduce at the expense of the losers, then selfishness should create an evolutionary advantage.

In fact, this interpretation of Darwin’s concept of natural selection has been used to justify social policies that hurt those who are already disadvantaged (because of income, class, nationality or race), with the idea that the laws of nature dictate that those who are less able to survive should not survive.

If altruism were at odds with evolution, then it would be very rare.

But altruism can be found throughout the animal kingdom.

In the Origin of Species, Darwin points out the apparent contradiction of worker bees risking their lives to protect their hive.  He explains this by saying that “selection can be applied to the family, as well as the individual.”

In the Descent of Man, he states that animals that live in social groups have warning systems, in which one member of the group signals to the rest of the group when it discovers a threat, exposing itself to danger in the process, and that sometimes a few animals will risk their own lives fighting to defend their entire group.

Biologists such as Fisher, Haldane and Hamilton further developed the idea of kin selection – that animals will sometimes behave in ways that are harmful to themselves as individuals in order to benefit their relatives.

In The Selfish Gene, Dawkins states that kin selection makes sense once you understand that, ultimately, evolution is not about the survival of individuals but about the survival of genes. When you help your siblings (or other close relatives with whom you share a large percentage of genetic material) to survive, you increase the chances that your genetic material will continue to be replicated.
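
To make the arithmetic explicit: Hamilton’s rule states that a gene for altruistic behavior can spread whenever rB > C, where C is the reproductive cost to the altruist, B is the benefit to the recipient, and r is the degree of relatedness between them (about 0.5 for a full sibling, 0.125 for a first cousin). Sacrificing for a sibling still “pays”, in gene-survival terms, as long as the benefit to the sibling is more than twice the cost to you.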

Humans and other animals, however, do not confine acts of altruism to their immediate genetic relations, or even their own species. We would not be surprised by a story of a dog risking its life for its owner.

There have been anecdotes, from ancient times to the present day, of dolphins surrounding humans to protect them from sharks and pushing drowning humans to the water’s surface so that they can breathe.

And human beings are known for treating cats, dogs and members of various other species like their own children.

Since it is not always possible to determine who is one’s kin, evolution may have led some animals to develop a predilection to help other animals that are part of their social group (a dog sees itself as part of its owner’s “pack”) or that are like them in some way (a dolphin identifies humans, as well as members of its own species, as creatures that need air to survive).  This can make decision-making quicker, which is essential when there is a threat. If it shares your home, shares food with you, plays with you and cuddles you, protect it.  If a shark is swimming toward it, form a circle around it.  If it appears helpless and hungry, feed it.

Altruistic behavior in non-humans has been seen in the laboratory. In a 2011 experiment studying altruism in rats, pairs of rats were placed in pens, with one rat trapped in a cage while the other rat was able to roam free. In the majority of cases, the free rat learned how to open the cage and then liberated the other rat. The free rats did not open empty cages or cages that held toy rats – they specifically chose to help their fellow rats.

When the free rats were given the choice of freeing the trapped rat or opening a cage that was full of chocolate (a favorite rat treat), most rats opened both cages, and then shared their chocolate with the other rat.

Researchers at the University of Zurich have just discovered a connection between altruism in humans and the structure of the brain.  In an experiment, subjects played a game in which they had an opportunity to divide money between themselves and another player.  Subjects who behaved more altruistically had more gray matter in the right temporoparietal junction (TPJ) than those who behaved more selfishly.

The right TPJ is associated with the development of theory of mind – the ability to understand the mental state of another.  Understanding another’s mental state can lead you to develop empathy toward them, which can in turn cause you to put their needs above your own.

The natural tendency toward altruism has philosophical and political implications.  The ancient Golden Rule – “Treat others as you wish to be treated” – can be seen as a manifestation of natural altruistic impulses.  When you put someone else’s wellbeing above your own, you help to ensure the survival of genetic material that may be your own.  (This might explain why the Golden Rule is sometimes forgotten when there is an opportunity to make a sacrifice for someone of a different nationality or ethnic background.)

Kant’s dictum that you should only perform acts that could become universal laws – for example, you should not steal, because if everybody were allowed to take whatever they wanted whenever they wanted it, there would be chaos, and therefore stealing could not be allowed universally – reflects a concern for the well-being of the group above that of the individual.

The Enlightenment theory that human societies were formed by social contracts, in which people gave up some of their individual liberties in order to ensure the protection of the group, is also reflected in natural examples – the vervet monkey who gives an alarm call to warn other monkeys that a predator is near gives up his “right” to protect himself from danger by hiding quietly.

There are implications regarding how we treat non-human animals. If other animals behave altruistically toward others, do we have a moral obligation to behave altruistically toward them?

If it is possible to identify how altruistic a person is by examining their brain, as the University of Zurich study suggests, then will we someday define people as altruistic or selfish before we have observed their actions?

Will reports of the amount of gray matter in a defendant’s right TPJ be used as evidence in criminal proceedings?  “Your Honor, my client could not have stolen that bracelet.  His fMRI shows that he is not a selfish person.”

As human brains are plastic, perhaps children’s “altruism levels” will be tested in school to ensure that they have the right level of altruism in adulthood.

If the gray matter in a child’s right TPJ is deemed too low, she might be sent to special classes where she learns how to share and get along with others better. Conversely, if a child is thought to be “too altruistic”, based on an examination of his brain, perhaps he will be taught to stand up for himself more and avoid being bullied.

Of course, many factors will be involved in determining how altruistically a person will behave in a given situation, including past experiences, their relationship with the other person, and whether there is a perceived possibility of long-term benefit, and it is extremely unlikely that we will be able to predict whether someone will be generous or selfish by looking at one small part of that person’s brain.

Nevertheless, these are interesting possibilities.

Meat Eating and Animal Intelligence

Remember when cans of tuna used to say they were dolphin friendly? Some wise guy would always say, “They’re not friendly to tuna, though, are they?”

Of course, we all knew what that was about.

Dolphins are intelligent. They form social groups. They communicate with sound. They pass the mirror test.

A dolphin can learn how to do tricks, perform in front of an audience or in front of a camera.

Dolphins have names.

Tuna, on the other hand, are just fish.

But it gets more complicated.

A recent study suggests that an animal’s perceived intelligence is the main factor in determining whether or not we will be disgusted by the thought of eating it.

The key word here, though, is “perceived”.

According to this study, people tend to ascribe lower levels of mental functioning to animals that they are about to eat.

So do we eat tuna because tuna aren’t very intelligent, or do we tell ourselves that tuna aren’t very intelligent so that we can eat them?

What about cultures where eating dogs or horses is considered normal? Would the people who eat these animals ever think about training them to perform in front of an audience?

The thing is, our beliefs about animal intelligence don’t always mesh with scientific observations.

One paper reports on experiments suggesting that chickens might have a primitive form of self-consciousness (an understanding that one is an individual separate from other individuals), a limited sense of time, and the ability to delay gratification in exchange for a greater reward.

Some fish seem to be able to recognize other individuals within their shoals, to work together to catch food, to form long term memories, and to use tools.

It seems that the more we learn about animal cognition, the more we are surprised by how intelligent familiar animals can be.

If the thought of eating an intelligent animal is repulsive to you, then the obvious solution is to become a vegetarian.

In fact, vegetarian organizations go out of their way to publicize research suggesting that animals are smarter than most people think.

That’s a simplistic answer, though. Although it is certainly possible to live one’s life as a vegetarian, the health benefits of vegetarianism vs. omnivorism are in dispute – and like any other topic that touches on ethical beliefs and cultural traditions, research on the topic is subject to confirmation bias.

When looking at meat eating from an ethical standpoint, it is important to understand that ethics is never about deciding between absolutes – it is about making tough decisions and understanding the necessity of compromise.

Behaving ethically means considering whether it is better to lie or to hurt someone’s feelings, or whether you should report the poor old woman who has hidden food in her handbag to the store manager or pretend to look the other way.

If you believe that an omnivorous diet is essential for optimum human health, then you have to make some difficult choices.