More than a few readers sent me today’s Dilbert strip by artist Scott Adams:
Well, I’m with Dilbert (the guy in the red shirt). But one reader added this:
I must say Wally [the bald guy] expresses a conclusion that I also reach… that if humans have no free will there is no argument against manipulating or reengineering their behaviours in any way one might please… “A Clockwork Orange” world indeed…..
My response: of course there’s an argument against manipulating or re-engineering behaviors to control people. Just because it’s possible in theory to do that (and if you think you can’t do it, at least in principle, you’re a dualist) doesn’t mean that “everything is permitted.” The argument against it is that manipulating behavior erodes people’s sense of agency, a feeling that is real, and could in that and other ways be harmful to society and well-being. So not only is that an argument against it, but such arguments can be effective, for they can dissuade people from doing bad stuff. (Some people also think that if there is no free will, it’s useless to try to convince others.)
Statements like the one above, from a reader who was kind enough to send me the cartoon, sometimes make me think that many people who argue in favor of free will don’t really grasp the issues it raises.

Saw it this morning and wondered if it would be posted. Happy you did.
As an engineer, I see Dilbert as a documentary.
Same here, though too often the reality (in terms of the ineptitude of management types) is far worse than what Dilbert depicts… unfortunately.
It seems he is compassionate. I only had one pointy-haired boss in my career, but my best friend, a programmer, had one who used to “improve” his code after work, using his master password to access the files. Fortunately, my friend quit and became a contract programmer, with a better life and income than before. One of his best clients was his former employer, on the proviso that his former boss have nothing to do with his work.
I think the weirdest people work in IT. I’ve seen a lot of behaviour like this along with just plain abusive behaviour. Some of the issues I think are that promotions in IT centre on moving into management (though this is getting better) and many people aren’t good at that but want the promotion.
The only thing competing with IT weirdness is sales, where the best salesmen are promoted into management despite lacking the team skills needed to succeed there, skills at odds with the Lone Ranger persona of the super salesman.
In the 1970s, I was a director-level manager (managers reported to me) in a software/hardware company. They hired a guy as a director for a parallel group. The first day he was there, he came into my office and said he didn’t like me and my department was worthless. I just laughed at him. Four months later they called the local police to escort him out of the building permanently and obtained a restraining order.
Given your encounter, it’s odd that it took four months to get rid of him. I think people are always hoping the miscreant will straighten up, so they can avoid a confrontation. Nobody wants to take the bull by the horns.
There were several reasons why it took so long. He came highly recommended, and they were looking for someone to shake things up because upper management had been going in the wrong direction. Another reason is that they had hired a new vice president who, it turned out, had a form of Alzheimer’s. I was the one who discovered it, when on Monday morning he couldn’t remember a four-hour meeting he’d had with me on Friday. No one believed me. He was fired about the same time, which is about when I left to become a consultant for a couple of decades. None of us could have done otherwise. 😊
Yeah or they just don’t want to deal with it because they are bad leaders (because of said poor development paths).
I once worked (for a very short time once he was promoted) for a manager who was a true misogynist. He tormented women and would try to get them to quit. He was always making inappropriate sexual jokes. He hadn’t gotten around to harassing me, but he did condescend to me and refused to give me things like a smartphone when he allowed them for the guys. He actually said I couldn’t have a smartphone because I wore skirts, so where would I put it?
I got the hell out before he got a chance to make my life hell. I was very clear why I was leaving and what he had said in my exit interview. I also gave HR permission to share my interview. Still, nothing was done. He fired many women in the department after months and years of harassment and caused many a lawsuit when these women sued. Still, it took 10 years before he was let go!
I’m glad I left right away. I had been through misery with bad leaders in the past and promised myself not to let that go on again.
Diana, that’s appalling!
It was quite surreal. I’m glad I saw the warning signs and got out. My male coworkers were appalled as well and they all left eventually too.
Right you are. I spent a nearly 40-year career in IT resisting promotion. I kept asking, “Would you rather have a first-rate developer or a lousy manager?” Fortunately I worked for a company early in my career that paid its best developers well enough that any temptation that might have arisen was easy to resist.
I wish more of the managers I had had resisted the Peter Principle. It explains why managers rise to the level of their incompetence, giving everyone else indigestion.
It’s funny because we were talking about development paths at work, and I described how pushing technical people into management, when they didn’t really want it but saw no other path, resulted in hell for their reports. I often got saddled with new managers like this. I don’t know why it happened all the time, but it did, especially earlier in my career. Now I try to avoid those people. Thankfully, when you work in IT long enough, you start to know people, or know people who know people, so you can usually avoid the bad ones.
The Peter Principle operates powerfully. I still remember a visit by Rohm and Haas to Cornell, where I was a chemistry graduate student. The Rohm and Haas representatives wanted to let us know what a career in the chemical/pharma industry would be like and presented a series of examples: the career paths of chemistry PhDs at Rohm and Haas. Chemistry PhDs were moved not just out of research into development, but into marketing, sales, etc. That’s fine if that’s where their talents really resided, but that’s not usually the case for science grad students. The one person who remained as a research group leader was viewed as having a dead-end career.
For the company, it’s a damned-if-you-do-damned-if-you-don’t situation. Scientists and engineers often resent bosses who don’t understand their work – they want someone above them with the professional chops, someone who can act as a sounding board or at least discuss with them intelligently the pros and cons of various ideas. But, more often than not, that person is a lousy people person and then the workforce ends up resenting them for not being a good manager.
What we really want is a manager who is good at both. But that is really hard to find, and honestly a bit of an unrealistic standard for us grunts to demand someone meet.
In some orgs there are leaders who manage services and leaders who manage people and help with your career and development. It works pretty well and gives non-management types a place to go.
Girl, you said a mouthful! I know one IT professional whose hobby is spotting nightjars and making smartass comments on a certain science website. And his avatar is running away with my briefcase!
I’m on the creative side of things so I deal with a whole different breed of slightly-more-human weirdos, but I’m never surprised by the lack of basic people skills among techies: I think it is mostly the people who don’t need people that self-select into interacting with computers.
People…
Who don’t need people…
Are the luckiest people
in the world….
I often get along with the ones that lack people skills. Probably because they give me what I need without the social overhead. 🙂
Yup. As long as you don’t insult our code (as in software), we’re pretty impervious. You’re welcome!
That is common in lots of fields (all? most?).
It really does seem to be common that people rise to a level they are incompetent at. It is not just the individual’s ambition. Our culture, particularly any kind of business, is fixated on, and demands, constant growth, expansion. It often doesn’t matter that a person is very good, and happy, doing what they do at a certain level. That’s not good enough. It is either up or out, so you get a lot of people pushed to a position that they can’t perform well at.
Then there’s that whole Dunning Kruger thing too.
It actually makes sense. People who complain about the Peter Principle are basically using 20/20 hindsight to armchair quarterback promotion decisions. In reality, the Peter Principle is working like this: the company keeps promoting a person who is doing well until they stop doing so well. Then the promotions stop, because they are no longer deserved. Should they have stopped one grade/step below that? Yes. Could the company have known that beforehand? No. You know it after the decision because you have 20/20 hindsight. They, however, do not have 20/20 foresight with which to predict when to stop promoting people.
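The stopping rule described above can be sketched as a toy simulation (the numbers and the assumption that competence at each level is independent are invented purely for illustration): promotions continue while performance holds up and halt at the first level where it doesn’t.

```python
import random

def simulate_careers(n_employees=10_000, n_levels=6, p_competent=0.5, seed=42):
    """Toy model of the Peter Principle's stopping rule: keep promoting
    while performance at the current level is good; stop at the first
    level where it isn't. Competence at each level is drawn independently
    (a deliberate simplification)."""
    rng = random.Random(seed)
    stalled = 0
    for _ in range(n_employees):
        level = 0
        # Promote as long as the employee performs well at the current level.
        while level < n_levels - 1 and rng.random() < p_competent:
            level += 1
        # Anyone below the top level stalled because they underperformed there.
        if level < n_levels - 1:
            stalled += 1
    return stalled / n_employees
```

Because promotion stops exactly when performance drops, nearly everyone in this model ends up parked at a level where they underperformed; no foresight is required, just the stopping rule.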
At a company I’m familiar with, quite often people are moved from tech to management and, if they feel it’s not working, they move back to tech. I think in an ideal world, you would be able to find technical people who spent time getting a management degree to fill the ranks.
In the real world though, for the worker bees, it usually amounts to having good or bad luck.
“As an engineer, I see Dilbert as a documentary.”
Me too, sadly.
By the way, Wally is possibly my favourite cartoon character of all time.
I like Alice which may say more about me than I should reveal in a posting.
I often feel like Alice, especially when she says, “must stop fist of death”
That’s the heart of it. 🙂
I can’t help thinking of the “Argument from Consequences” fallacy, the argument that something can’t be true because, you know, think of the children!
This is a disease afflicting all ideologies. It has given rise to the totalitarian notion that some ideas are ‘politically incorrect’.
That reader definitely does not understand free will. Being able to persuade someone, or laws that punish or induce behavior, are part of the set of inputs to our decisions, but they are no more “free will” than the urge to eat.
B.F. Skinner wrote a book about this. Beyond Freedom and Dignity.
It created quite a stir back around 1970.
I read some of the comments on the Dilbert strip too. Clearly those people have no idea what free will means and why it can’t exist.
I suspect only a tiny fraction of humanity ever even gives free will a thought. They assume dualistic, contracausal free will, because that’s what life feels like. ‘Nuff said.
For most of my life I was taught dualism. It never really “felt” like I was some soul that could exist without a body, but that is what my mom insisted was true and I thought everybody else believed it. I had no idea how crazy it was at the time until I learned more about psychology.
Of course dualism still doesn’t help free will but most aren’t educated enough to see that.
Yes, the early training will get you every time. But, even for many without that exposure, the default assumption comes from intuition. I think it takes some concerted effort to overcome it.
I was under the impression that America had already become, with the objection of no one, a manipulocracy.
At least it isn’t a kleptocracy. Imagine living in a manipulocracy kleptocracy!
The first thing we would have to do would be to ban the suffix ocracy.
Secretly and in a cunning, Machiavellian way.
Oh my!
The whole “libertarian free will” thing annoys me. It basically assumes that “will” happens in a vacuum. It doesn’t (unless you’re a substance dualist of the daftest kind).
I have no idea why this is still even an issue. We need to be arguing about degrees, not about a best-case scenario.
Sigh.
Not necessarily. That’s only if you manipulate badly. The key is to manipulate someone so that they don’t know they’re being manipulated.
And we do that all the time. When I say “thank you” and smile, I’m manipulating someone into increasing the frequency of the behavior that pleased me.
Or you could actually be doing it because you were happy and thankful.
That just changes the manipulator to evolution. 🙂
But many people are grateful and happy, yet they don’t express those things.
Yes, and they call us cold when we don’t express ourselves. I remind people that just because we aren’t going berserk doesn’t mean we don’t have feelings. 🙂
“they call us cold”
Another attempt at manipulation. 🙂
Those explanations for smiling and saying “thank you” are not mutually exclusive.
Do I have to point out that most people who engage in such manipulation probably already believe in free will? And in a soul, as well?
Actually, I’m not sure about that. The advertising business is probably the most manipulative of all businesses and I would think that the most effective “hidden persuaders” are likely to be deeply cynical and may believe their marks to be little more than sheep.
Ironically, being “deeply cynical and believing your marks to be little more than sheep” will probably make you a poor persuader. I don’t think you have to believe in free will to recognise that “the Hidden Persuaders” are nowhere near as smart as they think they are. Does anyone read Vance Packard nodding their head in agreement?
Cop pulls over a person speeding in a school zone.
Driver: “But officer, at the time I was speeding, I could not have done anything different.”
Cop: “I agree, and at this time I can’t do anything different than give you this ticket.”
Driver now has a new input that will affect behavior (hopefully to slow down).
I don’t get why a good number of people conflate the idea of determinism with fatalism. They’re really two entirely different things.
Well, I would say that there are very sound reasons why it is better that free will should exist. Of course I totally agree that this is just an argument from consequences. But I have also argued strongly, on numerous other threads here at WEIT, that free will is an ACTUALITY, so I think it’s worthwhile this one time to focus (as Wally does) just on the consequences of whether free will exists or not.

I believe the issue that arises is a crucial moral one, in that the concept of free will is the only concept that supports, endorses, and necessitates any allowance at all for the freedoms built into human social structures. If efficiency is all that matters, then social insects have a far more effective structure than we do. Why cater for individual will and aspiration in our own social structure if no such individual will really exists, if we are equivalent to nothing more than complex “human ants” or robots? Any consideration that arises for individual agents rests solely on the assumption that this autonomous “free-willed” personal agency exists; otherwise there is no moral imperative whatsoever to cater for it.
Those who dismiss the existence of free will often argue that their stance supports more lenient treatment of those who exhibit anti-social behaviours. I would argue that it is only the concept of a necessary respect due to the wrongdoer’s independent free agency that makes it morally wrong to simply reprogram them in some Clockwork Orange fashion.
This is the same argument we get from theists asserting that if there’s no god, then there can be no morality/purpose/meaning/etc.
“This is the same argument we get from theists asserting that if there’s no god, then there can be no morality/purpose/meaning/etc.”
I claim that yours is simply a straw-man argument, if only because god does NOT exist but free will does. Moreover, god as a concept is a “third party” (and an immaterial one at that): a causal agent without any material basis, upon which theists base their arguments and justifications. Free will, however (this is compatibilist, Daniel Dennett-defined free will), is materialistically based and compatible with our material nature. It is real in a material sense. Moreover free will, unlike god, is an innate human factor in what we, as humans, actually are. It is not a third-person construct.
But then there is the more pressing moral question to which I and Dennett allude:
“What do I, as a robot, owe to other robots with respect to considering their needs and their ‘rights’?” versus “What do I, as an autonomous free-willed agent, owe to another autonomous free-willed agent with regard to its needs and ‘rights’?” The different moral imperatives arising from our true nature, robot or free-willed agent, are the core of the requirement for any morality at all.
Your arguments, and even the point of your arguments, don’t make much sense to me. After reading and listening to these discussions for a couple of years now, one thing that seems very clear to me is that the phenomena you are using the label “free will” to identify are agreed upon by both compatibilists and incompatibilists. For your arguments to make sense, incompatibilists would have to disagree with you about the phenomena themselves, not just the label.
I don’t think anything in your reply has anything to do with what I wrote.
I am finding myself more sympathetic to what at least some compatibilists are saying, which is that “free will” can be, and very often is, thought of as the capability that complex, conscious agents have and rocks don’t; i.e., the difference between them. That difference may be a matter of degree, but it is one hell of a huge degree, and we should acknowledge it.
Anyhoo, what I wrote does not presume an incompatibilist stance. But the argument I quoted is still wrong, and it is like the theists’ argument that claims that without god (free will), there’s no reason to treat others well (no reason not to manipulate or re-engineer behavior).
The first thing that’s wrong with it is that manipulating people’s behaviors isn’t necessarily wrong to begin with, as the argument seems to imply. As Greg Esres wrote above, smiling and saying “thank you” is a subtle kind of manipulation.
If we assume the argument only addresses really diabolical manipulation, then the other thing that’s wrong with it is that you can in fact determine such manipulation is wrong in the absence of free will. The existence or non-existence of free will doesn’t bear on the fact that even if we are complete automatons, lying, cheating, manipulative behaviors will lead to an unstable society, and we need stability. Even if we are automatons. It’s the same argument we atheists make for a naturally evolved morality.
Yeah. That whole argument seems like a non sequitur to me. A clean miss.
This sounds very much like Dennett’s argument against incompatibilism: it erodes people’s sense of agency by denying the reality of their experience, and thereby harms society and individual well-being.
I was thinking the same thing. I also wonder whether Jerry therefore thinks that such manipulation is OK as long as the manipulators are sufficiently surreptitious that the manipulated remain unaware of being manipulated (“unaware” meaning that they would tell you they have not been so manipulated).
I seriously doubt it.
The logical arguments based on materialism tell me that free will cannot and does not exist. But the illusion of free will does exist. It is real. And it is that illusion of free will which prevents harm to society and individual well-being. There is no contradiction here.
But isn’t that a Little People argument of the sort Jerry accuses Dennett of making? That the folk must be allowed to keep their illusions?
I think Dennett’s actual argument is different, and more respectful of the folk. He thinks their intuitions about agency are largely correct. Decisions do get made, and our thoughts and intentions do play a role in that process. So the best way to prevent harm is to recognize these truths, while correcting any misunderstandings about the physical basis of thoughts and intentions.
I’d like to ask a question that’s been bothering me (just a bit; I’ve not been shaking strangers by the shoulders about it) about free will and how it comports with strict determinism and natural selection.
It’s this – assuming that the universe is determined by physics, and assuming that as a result free will doesn’t exist… what possible adaptive purpose is served by self-awareness? Why should we also be self-aware?
There’s no apparent reason why we should be self-aware organisms lacking free will, as opposed to just organisms lacking free will. If our actions are determined (and I believe they are – I’m just genuinely puzzled about this), what’s the point of being self-aware into the bargain?
What can we do (assuming determinism and no free will) with self-awareness that couldn’t be done if we lacked it?
If this has come up before please tell me. I’d be interested in the arguments – I’ve looked online and I couldn’t find anything specifically relating to this.
If I could sum up my problem it’s that, in a deterministic context, self-awareness seems rather like the windows in a train carriage – very nice and all that but far from necessary.
In the first place, physics isn’t quite as deterministic as it was for Newton. In the second–or so it seems to me–this is another one of those areas where philosophers have leapt to conclusions in advance of the science. Partly it’s a matter of definition–what exactly is ‘free will’ anyway?–and partly it’s a matter of staking out either-or positions. My guess–and it’s only that–is that, when we understand the brain better, there will be a residue of rational and autonomous decision-making lurking behind the veil of compulsion. But maybe I have to think that.
There already are a lot of autonomous functions and your brain happily edits out a lot of information for you before it even gets to your visual cortex. Thanks brain, for doing the heavy lifting and not bringing it to my consciousness because that gives me time to obsess over things. 🙂
Well, that is probably wrapped up with the hard problem of consciousness, but it could be that a sense of self is needed and advantageous to the organism: it lets it assemble its past, its future, and all the stuff that happens to it in a coherent way, and it keeps things coherent in the sense of knowing you have a body and that there are organisms outside the body.
I see that, but all that information only needs to be integrated if you’re making decisions. Otherwise it could all be reflexively processed, like knee-jerk reactions. We could conceivably operate in such a fashion – without self-awareness playing any part.
I’m not sure, but you come close to saying self-awareness is epiphenomenal, and that’s the only explanation for its presence that I can think of – that it’s a side-effect of some other adaptation. If it is an adaptation I’m having trouble seeing what purpose it serves.
I think a unified self has definite advantages over other organisms and I suspect that consciousness itself is a phenomenon arising from complex systems. Self awareness is a higher form of consciousness.
Well, what Dennett would say on this matter is that evolution favours “evitability”: the ability of an individual organism to avoid any undesirable (from a survival/reproductive point of view) consequences of letting things just take their “natural” course. Evolution favours agency; the more effective the agency, the better. Now in humans this agency has become incredibly sophisticated, involving not only the ability to model possible outcomes and select the “better” ones, but also a massive memory to store historic and factual information. And to solve these problems of survival the agent must recognise itself as an entity in the problem to be solved (self-awareness). Above all is the capacity of humans to “self-programme” to achieve high evitability. At some point this sophistication crosses a line, where the evitable agent can be described/defined as free-willed.
Yeah, the problem with this is that with all our sense of agency we can still look on aghast as the usual culprits (greed, power hunger, and a restless desire for expansion) destroy the planet. We do not appear to have any free will to change this behaviour. We also have a lamentable tendency to be manipulated into adoring heroes, looking for saviours, and following cults, which means that charlatans, who can be scientists and engineers as well as preachers, are able to come along and persuade us that their self-interest is for the good of all. The only purpose of self-awareness in this context, as I see it, is the brief consolation of the perception of beauty: like admiring the fantastic sunsets that environmental degradation brings about.
Are you suggesting Jason, that if we did not have a sense of agency the situation would somehow be better?
It seems that a sense of agency, self awareness, would be beneficial in modeling other organisms, i.e. empathy. And being able to model others seems like a very advantageous ability, especially among social species.
Sorry for being late to the reply – time differences I’m afraid.
I perfectly understand all the advantages that self-awareness would bring if we actually had free will – in that context, like you say, modelling the environment and being able to ‘let your thoughts die in your stead’ would be an obvious, uncontroversial advantage. But without free will no decisions are being made, so why have this complex self-awareness to model things in the first place? What can we do with self-awareness that we can’t do without it?
Does anyone else see a problem here? Imagine a rollercoaster had evolved. Now imagine it evolving self-awareness – what’d be the point? Self-awareness doesn’t give it any more control over its direction or speed. That’s all determined by the tracks and the dodgy carny at the controls.
I am not sure I understand your quandary, but this jumped out at me. From that excerpt I take it that “free will is necessary to be capable of making decisions” is an accepted and key premise in your argument?
I disagree with that premise. It of course comes down to definitions, and I am not sure what your concepts of “free will” and “decision” are. But, in any sense in which humans can be said to make decisions, computers are also capable of doing the same.
Regarding free will, I agree with the conception that compatibilists and incompatibilists share (though ICs, of course, disagree about the label), which expressly rejects any notion of dualism or being able to bypass determinism in some way, and I don’t see any conflict between that and decision making.
“complex self-awareness to model”
This premise is questionable to me. It very well could be that self-awareness is not really a very complex phenomenon. After all, a number of other mammal species show significant indications of self-awareness. It could be that all they lack is enough of the right type of intelligence to allow for recursive thinking, such as the recursion we use in language. There is not really a vast chasm between recognizing objects outside us and recognizing ourselves as individual objects with a role to play in the world. The human perspective seems to me to be a natural outgrowth of the development of language and social living. It is difficult to imagine a social species arising which achieved the ability to communicate through language without self-awareness and what we call conscious experience.
You’d be right. I believe that in order to make decisions free will is necessary. I don’t think that’s a particularly controversial viewpoint.
The general definition of ‘a decision’ is ‘a choice between two or more possibilities, that the chooser might have decided not to make’. If the world is deterministic then no choice is being made – the laws of physics are simply unfurling naturally. There’s no ‘decision’ being made.
It’s possible to redefine ‘decision’ to mean ‘the route through the world that an organism takes due to entirely determined, physical causes’. That’s a redefinition. You could say that ‘something that appears to be a decision takes place, although it’s still entirely determined’, but again that’s a redefinition. I don’t think you are saying anything so unsubtle but I’ve read plenty of compatibilist arguments that do use those definitions.
Re. compatibilism – I flat out think it doesn’t make sense. It flits between contradiction and simple semantic sophistry.
“it of course comes down to definitions”. Agreed.
“in any sense that humans can be said to make decisions, computers are also capable of doing the same” Again, agreed, but in a deterministic world I see no way decisions are possible, unless one has a particular, different definition of ‘decision’.
How can one ‘decide’ to do something that is entirely determined? There’s a world of difference between ‘a decision being made’ and ‘the appearance of a decision being made’.
I think, as you say, that your definitions differ from mine. Nevertheless, the quality of your writing (and others’) is helping me wrestle the question into shape.
Saul, do you think an electronic calculator calculates? We all agree that whatever it does is a deterministic physical process, electrons doing what electrons must do. But does it calculate?
It seems to me that by your eliminative logic you would have to insist that it doesn’t, and that there is nothing more going on than particles interacting.
If that’s not your position — if you think all that electronic activity adds up to real calculation, and the fact that it gets the right answer is not a coincidence — then what prevents you from understanding decision-making in the same terms? It’s a form of calculation in which a lot of low-level physical or neurochemical activity adds up to high-level information processing that evaluates and ranks alternatives. And the result of that calculation guides our behavior.
To use another analogy, lightning is still lightning, even though we now understand it in terms of atmospheric electricity rather than the wrath of the gods. That’s not a redefinition; that’s a step forward in our knowledge of the world.
Definitions can become a slippery slope which I don’t want to approach except to say that I think the notion “free-will” and the use of the term, in many cases, for many people, entails dualism (surreptitiously). Once dualism is denied, the term and the notion are left adrift like an abandoned net to tangle any users thereafter. There is, it seems, no word or clear definition of “free-will” remaining. This makes it hard to discuss or think about.
Then you’re using a different definition of “decision” than I am. I thought it would be uncontroversial to say that humans make decisions, and the question is whether those decisions are based on some spooky “free will” or are based on the laws of physics.
For my definition of “decision,” a computer doing a complex task makes decisions also. The example I always use is a computer monitoring a complex manufacturing process, adjusting chemical flows, temperatures, throughput rates, etc. If the output result is complex and not readily apparent, then “decision” seems the right word to use.
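A deterministic control loop of the sort described here can be sketched in a few lines (the setpoints, thresholds, and actions are invented purely for illustration); whether its branches count as “decisions” is exactly the definitional question at issue:

```python
def adjust_flow(temperature, throughput, setpoint=350.0, max_rate=100.0):
    """Toy process controller: deterministically picks an action from
    sensor readings. The same inputs always produce the same output,
    yet 'decision' still seems like the natural word for each branch."""
    if temperature > setpoint + 10:
        return "reduce flow"      # overheating: cut the feed
    if temperature < setpoint - 10:
        return "increase flow"    # running cold: add feed
    if throughput > max_rate:
        return "throttle output"  # downstream can't keep up
    return "hold steady"
```

Nothing here escapes the laws of physics, yet the controller evaluates alternatives and selects one; that is the sense of “decision” being defended above.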
To some extent it’s a question of what you mean by self-awareness. If you mean our ability to think about ourselves as agents and mentally model our own actions, it’s hard to imagine how we could plan for the future without that ability. And presumably you’ll grant that planning for the future is advantageous.
If you’re wondering why it subjectively feels like something to do all that thinking and modeling and planning, if you imagine that a robot could do all of that without feeling like anything, then I think you’re flirting with the zombie fallacy. Could smart robots or unconscious humanoids really do everything we can do, without actually being conscious? Could they, for instance, talk about their feelings, or write scholarly articles about the nature of consciousness?
I don’t think so. I think it’s self-evident that the subjective sensation of consciousness does play a causal role in influencing our behavior, at minimum to the extent of enabling us to talk about consciousness, and probably to a much larger extent than that. As such, it’s fodder for natural selection, like any other behavioral trait.
In my view intelligent robots could be conscious, but not really in a human way. That’s because our self-awareness, and awareness in general, is not just information managed by the brain. We are also emotional creatures. Our brains are bathed in hormones which push us along through our experience. Some perceptions and experiences are given special significance because they stimulate emotional reactions. Computers and robots do not have this extra dimension (yet).
As far as consciousness and awareness go, I’d say consciousness is largely epiphenomenal and probably arose as a spandrel. But, again, I think we experience it within a complex emotional framework, and that may be what gives it its awesomeness.
You might want to check out Dan Dennett’s “Consciousness Explained”.
“As far as consciousness and awareness, I’d say it is largely epiphenomenal and probably arose as a spandrel” – that would be my very tentative conclusion too
One objection might be that intuitively speaking self-awareness seems pretty complex, pretty adaptive, to be just a spandrel. But then what does intuition know?
I’d suggest that spandrels can become adaptive once they are established.
Again, agreed, but what adaptive function does self-awareness serve? Remember, I’m talking about a universe without free will…
Thanks for the book recommendation BTW. I read Darwin’s Dangerous Idea, which was brilliant (I didn’t like Breaking the Spell, though), so once I get through the Albatross and various others I might give it a go.
But as I said, the view that self-awareness is epiphenomenal is my tentative hypothesis too. Tentatively, always tentatively, like a little chimp on a stringy bridge.
It is difficult to tease out the notion of self-awareness from the behaviors which include self-aware individuals. When I am negotiating in a job interview I am, in a way, hyper self-aware. Psychologically my behavior is highly constrained by my sense of self. There is a huge emotional component involving adrenalin which significantly affects the outcome. In such a situation, self-awareness is both critical to my negotiating success (the goal I have set for myself), and a potential penalty. It is fortunate that prior to the interview I had the self-awareness, knowing me, to down a couple of sedatives to enhance my performance. I got the job, got married, and had 6 children – all of whom share my predisposition to become hyper-aware under stress.
I don’t think you quite get what I mean. It’s possible to imagine us doing all the things you mention whilst not being self-aware (difficult to imagine us talking about self-awareness, perhaps, but what adaptive advantage would such an ability confer?).
We could talk, we could feel (self-awareness is not a necessary component of the mind of a feeling lifeform); we could do pretty much anything without the self intervening.
The crux is this: in a deterministic world, where you are not making decisions, why is it necessary for self-awareness to get involved? We can’t use the subtleties and reflexive modelling abilities of self-awareness to guide our decisions because we’re not making any. Why should self-awareness evolve in a deterministic world?
By the way, I don’t think this is a major objection to hard determinism — it feels like a puzzle whose solution requires the kind of subtlety and determination (mind the pun) I don’t have.
I can imagine self-awareness arising epiphenomenally, as a kind of side-effect of something else, like language, but I can’t see why it should have evolved by natural selection.
“I can’t see why it should have evolved by natural selection.”
There was likely an amount of serendipity here. The development of language and higher order thinking that accompanies it may have spawned self-awareness as a spandrel or side effect. Once it became part of the mix, it offered our ancestors advantages that then became adaptive.
Self-awareness would be an input to your decision process. Lacking free will does not mean you cannot make decisions, but that your decisions are really made without you being conscious of them (you are aware of the decision after it has been made, and have an illusion of having made it consciously).
“but I can’t see why it should have evolved by natural selection.”
I think it’s not answerable by anyone ever.
Probably you mean “how could it have evolved by natural selection?” Natural selection has no foresight or intention.
To the how question my answer would be:
Well we don’t know (yet).
To put it in perspective, some animals seem to have self-awareness.
I’ve heard of a suggestion that the self is needed to attach memories to, and to generate memories in the future (for planning).
There are people with Cotard syndrome who seem to have a distorted self-awareness. They think they don’t exist.
I wonder if fMRI shows that Cotard syndrome patients all have the same brain anomaly. That might tell us that there is a specific function for self-awareness.
This is from the book “The Tell-Tale Brain” by V.S. Ramachandran:
“mirror neurons might not only be used in modelling other people’s behavior but may also turn “inward” to inspect your own mental states. This could enrich introspection and self-awareness.”
And he suggests which functions might not be working in Cotard syndrome:
“all or most pathways to the amygdala seem to be severed, together with a derangement of the reciprocal connections between the mirror-neuron system and the frontal lobe system”.
All this leads to “losing the self and losing the world in patients with the Cotard syndrome”.
This is speculation, of course.
Ramachandran sounds like a good read.
How much free will do we have? It isn’t absolute; some try to change what is free to do and what isn’t. We have limitations in our own brains, and therefore in our genomes too. There are outliers within the overall range of our species; most are ignored, killed, trapped, or don’t reproduce. Regardless, there are freedoms — it just depends on how far you will go. Psychopaths, usually because of a lack of empathy with anyone else, do things most people would not. Even Dilbert had a head injury and became a psychopath for a while, doing things he knew should bother him but didn’t.
I like the analogy of the free choices a rat has in a maze. Is that free will to choose within limitations set up by others?
Jerry – “The argument against it is that manipulating behavior erodes people’s sense of agency, a feeling that is real, and could in that and other ways be harmful to society and well being. So not only is that an argument against it, but such arguments can be effective, for they can dissuade people from doing bad stuff.”
I do not follow this argument Jerry. Are you saying that the feeling that we have free-will based agency is a necessary one in social structures? If this is so, wouldn’t you as a materialist, still be endorsing an argument that we need to have illusions? But if we allow one sort of illusion, why not all sorts of illusions? And to dissuade people from doing “bad stuff” by endorsing their illusions is surely not the path to a rationally based society?
And finally, just to underpin the points I have been making here, let me again link to the study carried out by Vohs and Schooler, which provides some experimental evidence that moral behaviour really does draw upon a belief in free will, while less moral behaviour is influenced by believing that free will does not exist.
http://assets.csom.umn.edu/assets/91974.pdf
Surely then, if free will is just an illusion, we as rationalists really do not want to foster good moral behaviour merely by means of promoting falsehoods. Better to justify a Clockwork Orange reprogramming of human behaviours, along the lines that Dilbert suggests.
I think you are confusing “illusion” with “delusion”. It’s not a matter of allowing an illusion. It either exists or it doesn’t. The illusion-of-freewill and the illusion-of-self are real. You don’t allow them. They are real existents.
The illusion of freewill is the feeling that we really do have freewill when we actually don’t. That feeling is real, and it feels real enough for us to act as if we actually do have freewill.
Illusion or delusion, both are a false sense of reality. The scientific mind cannot build any sort of rational infrastructure on a false premise.
It seems to me somehow that hard determinists want to have their deterministic cake, but also want the pleasure of eating the free-will cake too.
In today’s (Feb 5, 2015) follow up Dilbert, the robot is a rather decent papal approximation, I’d say …
There seem to be two threads in this argument. Paradoxically I am aware that I am not generally acting out conscious decisions and can almost enjoy the sensation that things happen to me, by me and around me without taking any account of my conscious mind. However, earlier on there was discussion of manipulators. How is that possible? I prefer my explanation that those people who think they are manipulating the system are motivated by the same old greed, hunger, sexual drives etc that the rest of us are.
We “manipulate” people all the time. There is a lot of baggage associated with the word.
Which of these is not manipulation:
education
punishment
parenting
marriage
politics
advertising
religion
???
Just about every form of human interaction has a degree of “manipulation” associated with it. If we look at the rest of the world, we see it there too.