
[Contains spoilers for The Bell Curve, The Second Apocalypse series, Twilight, The Dark Fields, Ender’s Game, and Card’s Gatefather series.]
I.
If the most certain, well-studied, and robustly proven result in a scientific field is widely doubted both within the field and outside of it, then one has to wonder – why do people trust any other results that the field in question generates? In fact, anyone who did continue to trust under these conditions might seem to lack a certain degree of, well, intelligence.
The scientific field Jordan Peterson is referring to when he says this (I'll link to the lecture later, when I find it again) is psychology. And the robustly proven result is, yes, the existence of general intelligence.
So, do you, friend, believe there exists a general factor of intelligence, the so-called ‘g’? And if so – precisely how ‘general’ do you suppose it to be?
How dare I discuss such a topic without using data! someone objects. And yet, considering the above, recourse to data hardly seems likely to generate truth here. For a field that to all appearances has no more truth in it than a dead Dûnyain has compassion, I will consider myself free to speculate as I please. Perhaps someday neuroscience will advance to the point at which it can give more definite answers, but that day is not this day.
Where science fails, though, please recall that one can always fall back on intuition and common sense – and my common sense here suggests that the existence of a g factor is at least plausible, if not outright probable. Taking a closer look at modern institutions and some of the writings of our contemporaries reveals that I am not alone in this. And yet, there remains significant variation in how ‘general’ exactly people suppose this factor to be.
II.
Belief in the nature of general intelligence – or in the non-existence thereof – comes in a few different degrees.
At one end of the spectrum are the people who do not believe in g at all, and who take all such research on intelligence from the field of psychology as mistaken or fraudulent. These people may believe intelligence is domain-specific only and cannot carry over between domains, or they may not believe the concept is coherent in the first place. In either case, they must believe that the correlations observed in, say, an individual's standardized test scores across academic fields are due not to some underlying factor of general intelligence but to cultural factors, bias, or poor test and study design – a view we might find not unreasonable were it not for these claimants' immediate about-face when it comes to studies that agree with whatever their pet beliefs happen to be. The various 'multiple intelligences' theories fall into this category, as does the belief, defensible only in a moral sense, that all people are inherently equal in their ability to reason.
But what about those who do agree that g exists, whether because they believe in the studies, or because common sense suggests it? Within this category, there are further degrees.
In the first degree, a person might agree that g exists as measured, but only in the strictest sense possible. The SAT and ASVAB are measuring something, the same something, but it’s a wholly academic type of ability that doesn’t extend any further. This still can be called ‘general’ intelligence, since it describes a factor of ability underlying several fields, but it is relatively restricted in its application.
But that’s impossible! a friend from the earlier category cries. You need only look around you to see that a single factor of intelligence cannot exist! Haven’t you met people that are good at writing essays, but bad at, say, math?
Of course I have. But the mere existence of counterexamples does not disprove a theory of this sort, applying as it does across entire populations. While the existence of g suggests that abilities such as math and writing should be correlated, it by no means precludes intrapersonal variation in these traits. This is what all the regression graphs are for. Although g is a single factor that stays more or less constant for an individual throughout their life, the way it manifests in any given individual and affects their skill in any given domain will still depend on other factors – genetic, environmental, personal preference, and so on. Thus, ability in any g-weighted field – writing, math, logic, and so forth – will be correlated when measured across a population: people who are good at math are more likely to also be good at writing. But for any single individual, there is no guarantee their abilities across different g-weighted fields will match up exactly. Only in the aggregate can the existence of g be truly perceived. (How much aggregate you need to see the correlation is another question, which I will not discuss here.)
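The population-versus-individual distinction is easy to see in a toy simulation. In the sketch below (a minimal illustration with made-up weights and sample sizes, not a calibrated model of real test data), each simulated person's skill in two domains mixes a shared general factor with domain-specific noise. The population-level correlation comes out strongly positive, yet thousands of simulated individuals still land above average in one domain and below it in the other.

```python
import random

random.seed(0)

def simulate(n=10_000, g_weight=0.6):
    """Toy model: each person's skill in a g-weighted domain is a mix of
    a shared general factor g and independent domain-specific noise.
    The weight 0.6 is purely illustrative."""
    pairs = []
    for _ in range(n):
        g = random.gauss(0, 1)  # the shared general factor
        math_skill = g_weight * g + (1 - g_weight) * random.gauss(0, 1)
        writing_skill = g_weight * g + (1 - g_weight) * random.gauss(0, 1)
        pairs.append((math_skill, writing_skill))
    return pairs

def correlation(pairs):
    """Pearson correlation, computed by hand to stay dependency-free."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

pairs = simulate()
r = correlation(pairs)
print(f"population correlation: {r:.2f}")  # strongly positive

# ...and yet plenty of individuals still buck the population trend:
discordant = sum(1 for x, y in pairs if (x > 0) != (y > 0))
print(f"above average in one domain, below in the other: {discordant}")
```

With these illustrative numbers the correlation lands around 0.7, while roughly a quarter of the simulated population is "good at writing but bad at math" or vice versa – exactly the counterexamples my friend points to, coexisting peacefully with a strong aggregate correlation.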
As you can see, this sort of question can only be answered with ‘recourse to data’ – and thus when one doesn’t trust the data, truth and falsehood about the subject become very hard to determine, making one’s reliance on one’s own intuition proportionately greater.
But there are also non-academic domains relevant to general human activity – social abilities, what might euphemistically be referred to as "problem-solving" abilities, artistic abilities, and so on. Is there some underlying factor determining skill in these fields as well? Does the g-correlation extend even this far? Is there a reason to think it would? Is there a reason to think it wouldn't? Is it even reasonable to believe that so-called academic abilities truly form an inherent class of their own, separate from everything else?
In the second degree come those who are willing to take the concept of g a bit further. Those who agree that g must correlate across all domains, not just academic ones, will here be referred to as believing in an "holistic g". This is only a small extrapolation from what Herrnstein and Murray argue in The Bell Curve. They find correlations in their data between measured intelligence and life outcomes, even after accounting for socio-economic status and educational background. This is exactly what one would expect from the existence of an holistic g – intelligence helps those who have more of it succeed in any domain they find themselves in, not just school.
I would guess more people hold this view implicitly than would ever admit to it (especially in public). In some ways, this view is the default today, in the sense that a faith in its truth is implied by certain practices and institutions of modern society.
Consider the process of hiring and promotion. An employer's first choice for a job is of course someone with the right training and experience – if such a person can be found. But when they cannot, the employer's next choice is to hire someone who has been trained on something else – sometimes something similar, but often not – trusting that the new hire's demonstrated trainability at one task will carry over to whatever task the employer is hiring them to do.
Or consider promotion within a company. If someone is doing a good job in their old position, they get moved to a new position with different, probably greater, responsibilities. The employer, then, is trusting that their ability to learn to handle their old position will translate into an ability to learn the work of their new position. For all that people make jokes about the Peter principle, would this method still be in use if it didn’t to some fair extent actually work?
So holistic g is an even more general interpretation of general intelligence, where the relevance of the single underlying factor is taken to extend to ability at most tasks in most domains. But – why stop there? There are more domains in heaven and hell than are dreamt of in The Bell Curve’s philosophy – and there is a third degree of belief possible, an even more general interpretation of general intelligence.
Consider, if you will, “moral intelligence.”
Ha, the modern reader chuckles condescendingly, there’s no such thing. Isn’t it obvious from the fact that different individuals and cultures can possess different values that morality is not subject to objective measurement?
The nineteenth century just sent a telegram. It would beg to differ. (Previous centuries are expected to also agree, but their ponies have yet to arrive.)
But even moderns might profit from taking a moment to consider the point. Some things are almost universally held to be more morally good or bad than other things, right? This is why people can find common ground enough to form societies and follow laws. For example, most people believe peace (at least within a society) is better than war, and that helping deserving family members or even strangers in their time of need is better than turning them away or murdering them. And some people can be more morally good or bad than other people as well, yes? There are people who will return a wallet or stop to help a pregnant mother stranded on the side of the road change her tire, and there are people who rob convenience stores and beat their wives, and these people are not seen as having the same quality of moral values. This knowledge of right and wrong, together with the implementation of these values in one's actions and life, we define as "moral intelligence." We can then ask: who has more of it, and who has less? You perhaps couldn't easily measure it with a standardized test – but then, you don't need to; you can just look at how people act in their lives (the same as you can with regular intelligence – there's a reason there's such a thing as a job interview).
So, assuming the existence of holistic g, we might then ask – are people with higher holistic g more likely to have higher moral intelligence? Are these intelligences – moral and skill-based – correlated? If you believe they are, and positively, we will say you believe in "theological g".
If one does not believe this, the obvious alternative is to believe there is no correlation. Good people come in high and low intelligences at random, as do bad. Of course, there is another alternative as well – that of anti-correlation. That is, people of higher holistic intelligence could be more likely to have lower moral intelligence – and there are people who believe this too.
I have not seen this question posed very often, likely for the obvious reason that even just considering the consequences makes people upset. When people become distressed about the social implications even of regular old g, much less holistic g, we can hardly expect calm and rational debate on the concept of theological g. Thus, pretty much no one writing under their own name or with a readership of more than five people seems willing to touch this question with a ten-foot pole. But as I am neither of those things, I shall take a firm grip on the end of the rod and give the bear a poke.
III.
Most people, whether they’ve explicitly considered the subject or not, do in fact have some beliefs on this topic, and these beliefs are implicit in their actions and worldview.
This often becomes quite apparent in novels. Just as an author’s belief in which systems of government work and which don’t will influence his portrayal of the success of various governments in his books, so too the characters in a novel will follow the rules and correlations of whichever degree of g the author believes in.
Like I said earlier, not everyone believes in holistic g, common though the belief may be (at least implicitly). For example, take Stephenie Meyer, author of the Twilight series. Here is a woman who believes in a general g of academic ability, but not an holistic g. We can see this because vampire transformations in the aforementioned novel series come with certain benefits, among which are increased speed, reflexes, strength, and, notably, intelligence. Thus, when her protagonist becomes a vampire, she receives boosts in many areas – in attractiveness, strength, and reflexes, but also in academic abilities like math. But revealingly, there is no change in the protagonist's social skills.

Compare her worldview to that of Alan Glynn, author of Limitless (or by its other title, The Dark Fields). In this novel, the protagonist begins taking a drug, MDT-48, which greatly increases his intelligence. And what are the specific effects of this increase? Not only increased skill in math, writing, memory, and speed of learning, but also increased drive and concentration, organization, social skills, motivation – this is as holistic as it gets.
But notice that Glynn does not believe, so far as I can tell, in theological g. In fact, he quite possibly believes in its opposite. When the protagonist is using the drug, he becomes more cutthroat in his dealings, more calculating of cold-blooded self-benefit at the expense of compassion and friendship. At one point he even starts having blackouts and committing murders. Now, whether Glynn intends this moral decline to have been merely a side effect of the drug (alongside several other undesirable side effects), or whether he believes that people with higher intelligence in general care less about their fellow humans, is difficult to say. But it is clear that becoming more intelligent does not make the protagonist a better person – if anything, it makes him worse in the moral sense.
On the other hand, there are those who very clearly and distinctly believe in a theological g. For example, Orson Scott Card. We see this to some extent in Ender's Game, and to an even greater extent and more clearly stated in his later Gatefather series. Characters with higher intelligence have higher abilities in all areas – academic, social, political, etc. They have more agency, more ability to act and impact the world in the way they please. But not only do they have more abilities in these domains – they are also better people morally, in Card's framework. They are more compassionate, less petty in their dealings with others, more caring not only about humanity generally but also about alien species, and more likely overall to make morally correct decisions and to act on them. (The explicitly religious nature of some of this in Card's novels may or may not have something to do with the fact he's a crazy Mormon, but I digress.) This division brings to mind the old idea of aristocracy, from the Greek aristos (best) and kratos (rule) – "rule by the best". This scale as portrayed in Card's work is the same sort of thing Carlyle is referring to when he describes an "aristocracy of nature". The same goes for Ayn Rand's distinction between those who love life and work, and those who in their actions avoid the latter and in their hearts revile the former. These authors, it is clear, really do believe that some people are born with higher abilities across all fields – including the field of distinguishing right from wrong and then acting on these distinctions.
And of course, we would be remiss if we did not consider Bakker here as well. It is clear that Bakker at least believes in holistic g, considering the domain-spanning abilities of the Dûnyain. But beyond that Bakker is a little hard to place, since there aren't really any clear moral divides in his work and all the characters seem to be more or less evil. But some characters are more evil than others, and the Dûnyain and the Consult, who top the list, have some of the highest levels of intelligence and general ability in the books. In fact, the Dûnyain tradeoff – emotions for logic, compassion for relentless pursuit of goal – suggests that Bakker may believe that morality, at least in the deontological sense, and intelligence are anti-correlated – that the further you can see and reach with your actions, the worse a person it will make you morally. Only stupid people can see in black and white, and thus act unambiguously in good faith. Intelligence means one always acts in the grey, and one knows it. But then again, it's another question whether Bakker really believes in morality at all.
Everyone, not only novel authors, has some intuition about the nature of g, of course. And these assumptions come across in their other, less fictional work as well.
Herrnstein and Murray, for all that their evidence points to the existence of an holistic g, do not take this any further, or appear to believe it can be taken any further. At the end of section one of The Bell Curve, after positing the emergence of a so-called cognitive elite enabled by the increasingly meritocratic, intelligence-based ordering of modern society, the authors, in a tone of moderate hysteria, pose certain loaded questions:
As we leave Part I here is a topic to keep in the back of your mind: What if the cognitive elite were to become not only richer than everyone else, increasingly segregated, and more genetically distinct as time goes on but were also to acquire common political interests? What might those interests be, and how congruent might they be with a free society? How decisively could the cognitive elite affect policy if it were to acquire such a common political interest?
That is, if intelligent people came to rule the world, what malicious things might they set about doing?
The first time I read this I guessed: Um, rule more intelligently?
But of course, my answer then was naïve. Ruling effectively and ruling benevolently are by no means the same thing. Herrnstein and Murray saw that my guess was by no means self-evidently true, since people of higher intelligence are not necessarily more moral than others (or even of average morality).
Is there any reason for us to expect higher intelligence to correlate with higher morality, or with the opposite?
Especially in a consequentialist framework, there are a few reasons to believe people with higher g might be expected to be more moral. If the results of one's actions are what determine one's moral worth, then those with higher intelligence would be better able to figure out what the results of their actions would be, and which actions would have the results they desire – and thus to know how to act more morally. That is, they would have a greater capacity for morality.
But by the same token, by being able to purposely achieve more far-reaching consequences, they would have a greater capacity for evil as well. Ability to do something does not necessarily correspond to desire to do it. A certain will to goodness is needed beyond just mere ability, in order for a person to be good.
And what about more mainstream beliefs? Scott Alexander has some discussion of this in a recent post, in which he defends the very commonsense position that everyone has equal moral worth regardless of intelligence. For him this means that morality and intelligence are independent of each other, which is probably the most commonly held opinion, especially among those who haven't thought about it.
But then Scott Alexander is contradicting himself a bit, considering some previous articles in which he correlated people's perceptions of the moral worth of various animals with their neuron counts, and came to the conclusion that people judge moral worth based to some extent on intelligence. I suppose he draws a clear bright line between animals and humans – but what about superhumans? This is a funny thing for a near-transhumanist like Alexander not to have thought about. For if humans are more morally valuable than animals because they are more intelligent, would not superintelligent AIs be much more morally valuable still? Should we all bow down and worship Skynet in its goodness and perfection?
IV.
Of course, you might object, I am mixing up the moral values a person possesses with their moral worth as a person. Two people might possess different moral values – one might be a 'good' person who believes in compassion and kindness, while the other might be a 'bad' person who believes in hate and violence – and yet they might still possess the same moral worth as people. That is, when the out-of-control trolley is rolling down the tracks, we might still think it best to redirect it from running over two bank robbers to running over one Doctors Without Borders volunteer who donated his entire trust fund to charity, simply because they are two and he is one and we value all human lives equally. We might make this choice consistently, believing all people have the same moral worth as people, without ever trying to claim bank robbing is more virtuous than charity.
While this might seem wrong in the limit – what if it's four Hitlers on one side of the tracks and two Mother Teresas on the other? – it does solve the problem of whether we ought to give up all our resources to superintelligent computers just because they're better than us.

But what if the computers are both? What if they have not only more moral worth, but actually better values than ours? Or if they are able to carry out and implement our own values more perfectly? Wouldn't saying 'no, go away' to them then be a contradiction, since we should value our values for themselves and not for the mere fact that it's us implementing them? (That is, if you can save a starving child by personally giving them food, or save ten starving children by using drones to deliver food to their families, which is more moral? And would pursuing the former instead of the latter – or even hindering efforts at the latter in order to allow for more of the former – be moral?)
Alternatively, if more intelligent beings can be expected to be more evil, then perhaps there is reason to fear the development of superintelligent AI. Some people are fairly certain that these AIs would instantly develop value misalignment with humanity as a whole and go tearing off on ends-justify-means paperclip-maximizing quests that would destroy all that made humanity human, perhaps even end human existence entirely, and that this outcome can only be prevented by the most careful and controlled development of AI value frameworks. These people likely do not believe in theological g.
But who is correct? Does having higher intelligence make one more moral, less moral, or have no effect? (Of course, I still mean here whether there exist population-wide correlations.) Certainly, I can think of people who are both very intelligent and very good, and others who are not very intelligent and not very good. But I can also think of people who are not very intelligent but are very good, and others who are very intelligent but not very good. But even if one feels the former two classes occur more often than the latter, these are all anecdotes, and the plural of 'anecdote' is not 'multiple regression plot' – the data, of course, not being randomized enough ("I only hang out with the more sophisticated type of bank robber – at the very least they must have read Dostoyevsky and Flaubert."). But without falling back on data, is there any way to pass judgement on the question?
Of course, if there are at root no moral values to speak of, then the question is somewhat moot. In this case, we might expect more-intelligent people to be less conventionally moral in their ultimate values and in private when no one is watching, since they would be more likely to recognize morality as just a socially convenient fiction; however, they could at the same time be more moral in their public appearances, since they are better equipped to judge what will make them look good socially (we’re assuming holistic g here) and to implement it in their actions (at least the high-visibility ones).
If something like MDT-48 were created, that could bring everyone all at once up to the same level of intelligence, then it would quickly become apparent what effect intelligence had on the honesty and integrity of people in general. Would we suddenly live in a fair world where everyone had equal opportunity? Or would we have a world much like the current one, but instead of a bachelor’s in art history, now your barista has a PhD in theoretical physics? Or would murderous gangs run rampant through the streets, using game theory to calculate the net expected effects of tit-for-tat vs purely expansionist strategies? Alas, without MDT, we can only guess.
From a personal perspective, of course, morality and intelligence should be judged independently. That is, attempting to become a more moral person will not make you a whit more intelligent, any more than learning Chinese will make you better at calculus.
But even so, you might ask whether higher intelligence gives people, such as possibly yourself, higher moral obligations to humanity, god, justice, or whatever you like, that you must then live up to. Did god put more shot into the shell because he expects it to do more damage? Or was someone on the factory line just off to take a bathroom break, and let random chance play out as it wills? Anasûrimbor Kellhus seems to believe such obligations exist. Hopefully though, friends, we ourselves are less insane. After all, couldn’t we just as easily ask whether greater intelligence gives one an obligation to greater evil?
But if we truly do live in an unfair and amoral world, then the correlation that really matters is not intelligence with morality, but that between intelligence and power – and after all, wasn't that what The Bell Curve was all about, in the end? So maybe Herrnstein and Murray were right after all, and we really should dread the thought of our new high-IQ overlords. At least they aren't superintelligent AIs – but, well, give it another thirty years.

