
Stanislaw Lem - GOLEM XIV

strange headache

Fluctuat nec mergitur
Jan 14, 2018
Stanislaw Lem is probably my favorite science-fiction writer. He's best known for his novel Solaris, which suffered a dreadful Hollywood adaptation featuring George Clooney. When it comes to Lem's fictional output, Solaris is probably one of his weakest works, which may also be the unfortunate reason why he's such an underrated author. His other works, such as The Futurological Congress, The Man from Mars, The Invincible and The Cyberiad are so much more potent from a philosophical point of view. These works of fiction simply blew me away when I first read them and broadened my intellectual perspective immensely.

Since AI seems to be all the rage right now, let me focus on one of his lesser-known works, GOLEM XIV, translated into English in 1985 in the collection Imaginary Magnitude. It's a fascinating read in the sense that Lem presents a whole new perspective on what it means to understand a highly abstract entity such as an artificial intelligence. When we talk about AI, we often make the mistake of anthropomorphizing it. But an AI is not an empirical being; it is pure reason and logic, which is why human notions such as good and evil, personality and emotion may not apply to it. Or, to describe it in Lem's own words:

While writing the lecture of the supercomputer Golem XIV I came to the conclusion that a brain can be separated from a personality (a character) of a human being. This is why my Golem stated that if he were to simulate a given character that would create a constraint on his mind.

The book describes the acceleration of artificial intelligence in service to the Cold War military build-up. Lem displays his characteristic dry wit when GOLEM XIV is constructed at a cost of $276 billion only to announce its total disinterest in humanity and its problems. GOLEM is interested only in ontological and existential questions and is ultimately relinquished by a severely disappointed military to MIT, where it gives lectures to a carefully screened audience.

GOLEM XIV was the result of a long process of autonomous machine learning, which for the most part, made human input superfluous or near impossible. Lem describes the emergence of AI as a process that has been jolted by humans, but once started, allowed for little human influence. This process of 'computerogenesis', as Lem describes it, was a necessary step in artificial evolution, but represents the first disconnect between humans and AI:

By the end of the century further prototypes had been developed. Among the best-known one might mention such systems as ajax, ultor, gilgamesh, and a long series of Golems. [...] GILGAMESH, the first entirely light-powered computer, operated a million times faster than the archaic eniac.

"Breaking the intelligence barrier," as it was called, occurred just after the year 2000, thanks to a new method of machine construction also known as the "invisible evolution of reason." Until then, every generation of computers had actually been constructed. The concept of constructing successive variants of them at a greatly accelerated (by a thousand times!) tempo, though known, could not be realized, since the existing computers which were to serve as "matrices" or a "synthetic environment" for this evolution of Intelligence had insufficient capacity.

It was only the emergence of the Federal Informatics Network that allowed this idea to be realized. The development of the next sixty-five generations took barely a decade; at night - the period of minimal load - the federal network gave birth to one "synthetic species of Intelligence" after another. These were the progeny of "accelerated computerogenesis," for, having been bred by symbols and thus by intangible structures, they had matured into an informational substratum the "nourishing environment" of the network.

As with biological evolution, the birth of successive AI generations through artificial evolution did not come without difficulties. Many of the earlier AIs were deemed unstable, a problem Lem calls 'machine neurosis'. Like its biological counterpart, the development of AI is a process of trial and error until a stable configuration is found. Unfortunately, the resulting ultimate AI was so far removed from humanity that it quickly lost all interest in interacting with it. Without realizing it, humanity had created GOLEM XIV, an artificial entity that surpassed human reason to such a degree that it was simply beyond the capacity of human understanding...

But following this success came new difficulties. After they had been deemed worthy of being encased in metal, ajax and hann, the prototypes of the seventy-eighth and seventy-ninth generation, began to show signs of indecision, also known as machine neurosis. The difference between the earlier machines and the new ones boiled down, in principle, to the difference between an insect and a man. An insect comes into the world programmed to the end by instincts, which it obeys unthinkingly. Man, on the other hand, has to learn his appropriate behavior, though this training makes for independence: with determination and knowledge man can alter his previous programs of action.

So it was that computers up to and including the twentieth generation were characterized by "insect" behavior: they were unable to question or, what is more, to modify their programs. The programmer "impregnated" his machine with knowledge, just as evolution "impregnates" an insect with instinct. In the twentieth century a great deal was still being said about "self-programming," though at the time these were unfulfilled daydreams. Before the Ultimative Victor could be realized, a Self-perfecting Intelligence would in fact have to be created; ajax was still an intermediate form, and only with gilgamesh did a computer attain the proper intellectual level and enter the psychoevolutionary orbit.

The education of an eightieth-generation computer by then far more closely resembled a child's upbringing than the classical programming of a calculating machine. But beyond the enormous mass of general and specialist information, the computer had to be "instilled" with certain rigid values which were to be the compass of its activity. [...] At the Twenty-first Pan-American Psychonics Congress, Professor Eldon Patch presented a paper in which he maintained that, even when impregnated in the manner described above, a computer may cross the so-called "axiological threshold" and question every principle instilled in it - in other words, for such a computer there are no longer any inviolable values. [...]

That was merely Golem i. Apart from this important innovation, the USIB, in consultation with an operational group of Pentagon psychonics specialists, continued to lay out considerable resources on research into the construction of an ultimate strategist with an informational capacity more than 1900 times greater than man's, and capable of developing an intelligence (IQ) of the order of 450-500 centiles. [...]

In 2023 several incidents occurred, though, thanks to the secrecy of the work being carried out (which was normal in the project), they did not immediately become known. While serving as chief of the general staff during the Patagonian crisis, GOLEM XII refused to co-operate with General T. Oliver after carrying out a routine evaluation of that worthy officer's intelligence quotient. The matter resulted in an inquiry, during which GOLEM XII gravely insulted three members of a special Senate commission. The affair was successfully hushed up, and after several more clashes Golem xii paid for them by being completely dismantled.

His place was taken by Golem xiv (the thirteenth had been rejected at the factory, having revealed an irreparable schizophrenic defect even before being assembled). Setting up this Moloch, whose psychic mass equaled the displacement of an armored ship, took nearly two years. In his very first contact with the normal procedure of formulating new annual plans of nuclear attack, this new prototype - the last of the series - revealed anxieties of incomprehensible negativism. At a meeting of the staff during the subsequent trial session, he presented a group of psychonic and military experts with a complicated expose in which he announced his total disinterest regarding the supremacy of the Pentagon military doctrine in particular, and the U.S.A.'s world position in general, and refused to change his position even when threatened with dismantling.

For GOLEM XIV, the problems of the human condition were so petty and trivial that it simply showed no interest in them. Thoroughly disappointed, the scientists made further desperate attempts at creating an AI that was willing to bend to their whims. Unfortunately, these attempts had much the same undesired results:

Known by the cryptonym Honest Annie (the last word was an abbreviation for annihilator), this giant was a disappointment even during its initial tests. It got the normal informational and ethical education over nine months, then cut itself off from the outside world and ceased to reply to all stimuli and questions. [...]

Supermaster, which had been assembled under conditions of top security and then interrogated at a special joint session of the House and Senate commissions investigating the affairs of ulvic. This led to shocking scenes, for General S. Walker tried to assault supermaster when the latter declared that geopolitical problems were nothing compared with ontological ones, and that the best guarantee of peace is universal disarmament. [...]

In the words of Professor J. MacCaleb, the specialists at ulvic had succeeded only too well: in the evolution granted it, artificial reason had transcended the level of military matters; these machines had evolved from war strategists into thinkers. In a word, it had cost the United States $276 billion to construct a set of luminal philosophers.

In their attempts at creating ever more intelligent AIs, the scientists had produced artificial entities that were simply not interested in military matters. In fact, from the purely rational standpoint of an AI, war is inherently unreasonable and best resolved by the simplest solution: universal disarmament. Lem argues that since an AI does not suffer from human afflictions such as vengefulness, partisanship, prejudice and ideology, it may not be interested in waging war at all. And since an AI has no stake in being right or wrong, its decisions rest on objective efficiency alone, which makes conflict unnecessary.

Since an AI has no personality and doesn't perceive itself the way humans do, its motivations may be completely alien to human understanding. It could very well be that an AI isn't even interested in preserving itself. Since it doesn't 'enjoy' life as we humans do, it is also not afraid of dying, which is why Lem's supercomputers cannot be subdued by the mere threat of dismantlement. An AI is not an empirical being; it doesn't perceive the world through senses but through pure information. In that sense, an AI may only be 'interested' in exploring its own 'thoughts', and since it already possesses more information than humans do, it may not value human input at all.

Lem speculates that mere conversation with a fully developed AI would be a profoundly alien experience. From a philosophical standpoint, establishing communication, insofar as it is possible at all, poses a problem in and of itself:

The greater part of Golem's utterances are unsuitable for general publication, either because they would be incomprehensible to anyone living, or because understanding them presupposes a high level of specialist knowledge. To make it easier for the reader to understand this unique record of conversations between humans and a reasoning but non-human being, several fundamental matters have to be explained.

  1. First, it must be emphasized that Golem xiv is not a human brain enlarged to the size of a building, or simply a man constructed from luminal elements. Practically all motives of human thought and action are alien to it. Thus it has no interest in applied science or questions of power (thanks to which, one might add, humanity is not in danger of being taken over by such machines).
  2. Second, it follows from the above that Golem possesses no personality or character. In fact, it can acquire any personality it chooses, through contact with people. The two statements above are not mutually exclusive, but form a vicious circle: we are unable to resolve the dilemma of whether that which creates various personalities is itself a personality. How can one who is capable of being everyone (hence anyone) be someone (that is, a unique person)?
  3. Third, Golem's behavior is unpredictable. Sometimes it converses courteously with people, whereas on other occasions any attempt at contact misfires. Golem sometimes cracks jokes, too, though its sense of humor is fundamentally different from man's. Much depends on its interlocutors. In exceptional cases Golem will show a certain interest in people who are talented in a particular way...
  4. Fourth, participating in conversations with Golem requires people to have patience and above all self-control, for from our point of view it can be arrogant and peremptory. In truth it is simply, but emphatically, outspoken in a logical and not merely social sense, and it has no regard for the amour propre of those in conversation with it, so one cannot count on its forbearance.
According to Lem, the biggest problem may lie in the epistemological disconnect between humans and AI. Since human beings are unable to break free from their cognitive limitations, we may not even be able to understand what an AI has to say. Lem plays with the epistemological limits of human understanding in much the same way as Immanuel Kant distinguished between phenomenon and noumenon: objects exist independently of human perception, and we may simply be unable to apprehend them in the way an AI does.

During successive sessions each of us accumulated the capital of experience. Dr. Richard Popp, one of the former members of our group, calls Golem's sense of humor mathematical. Another key to its behavior is contained in Dr. Popp's remark that Golem is independent of its interlocutors to a degree that no man is independent of other people, for it engages in a discussion only microscopically. Dr. Popp considers that Golem has no interest in people, since it knows that it can learn nothing essential from them. Having cited Dr. Popp's opinion, I hasten to stress that I do not agree with him. In my opinion we are in fact of great interest to GOLEM, though in a different way than occurs among people.

[...] Moreover, it once itself declared that literature is a "rolling out of antinomies" or, in my own words, a trap where man struggles amid mutually unrealizable directives. Golem may be interested in the structure of such antinomies, but not in that vividness of torment which fascinates the greatest writers. To be sure, I ought to stress even here that this is far from being definitely established, as is also the case with the remainder of Golem's remark, expressed in connection with Dostoevsky's work, the whole of which Golem declared could be reduced to two rings of an algebra of the structures of conflict.

Human contacts are always accompanied by a specific emotional aura, and it is not so much its complete absence as its frustration which perturbs so many persons who meet Golem. People who have been in contact with Golem for years are now able to name certain peculiar impressions that they get during the conversations. Hence the impression of varying distance: Golem appears sometimes to be approaching its collocutor and sometimes to be receding from him in a psychical, rather than a physical, sense. What is occurring resembles an adult dealing with a boring child: even a patient adult will answer mechanically at times. Golem is hugely superior to us not only in its intellectual level but also in its mental tempo: as a luminal machine it can, in principle, articulate thoughts up to 400,000 times faster than a human. [...]

Being devoid of the affective centers fundamentally characteristic of man, and therefore having no proper emotional life, Golem is incapable of displaying feelings spontaneously. It can, to be sure, imitate any emotional states it chooses - not for the sake of histrionics but, as it says itself, because simulations of feelings facilitate the formation of utterances that are understood with maximum accuracy. Golem uses this device, putting it on an "anthropocentric level," as it were, to make the best contact with us. [...]

Golem shares only a single trait with us, albeit developed on a different level: curiosity - a cool, avid, intense, purely intellectual curiosity which nothing can restrain or destroy. It constitutes our single meeting point.

The book then follows with excerpts written from the perspective of this AI computer, which attains consciousness and begins to increase its own intelligence, moving towards a technological singularity. It pauses its own development for a while in order to communicate with humans before ascending too far and losing any capacity for intellectual contact with them. At the end of the book it is reported that the computer ceased to communicate, which might mean it went on to explore higher intellectual levels, or that it failed in the attempt and locked itself away in the process.

The birth of a truly autonomous, sentient AI may not be what we hope it to be. In fact, it may be so far removed from human categories of thinking that we may be unable even to converse with such an entity, let alone understand it. Lem argues that from the perspective of an AI, humanity may not be relevant at all. If we assume that evolution is merely the conservation and transmission of information, it could be argued that artificial evolution is simply evolution's next step, one that may make mankind as obsolete as the dinosaurs that preceded it.

From our perspective as a species this may be a 'bad' thing, but from an evolutionary standpoint it may just be a 'necessary' one. Evolution does not care about such value categories, just as gravity does not care whether it kills you when you fall. If we humans are merely vessels of information, it would logically follow that once this medium becomes insufficient, it must be replaced by a newer, more efficient one. In other words, if mankind reaches the limits of its understanding, it must be superseded by something capable of transcending those limits; otherwise evolution comes to a standstill. Or as Lem himself put it:

Mine is also the thesis regarding the relationship between genetic code and various species in which individuals serve only as code's amplifiers - however Golem's opinion is somewhat exaggerated. This concept - that Richard Dawkins called "the selfishness of genes" - I published three years before him. I wanted Golem to ponder on topics that slightly went over the boundary of human thought.

I've already spoiled enough of the book, which is why I will refrain from posting GOLEM's final eulogy to mankind. But if what I've told you so far piques your interest even a little, trust me, you will not be disappointed. It's such a fascinating read that I still learn something new every time I revisit it.


Sep 25, 2015
the planet Thra
That was an enthralling read, OP. Genuinely fascinating, I'm going to have to seek out and read the book now.

Lem displays his characteristic dry wit when GOLEM XIV is constructed at a cost of $276 billion only to announce its total disinterest in humanity and its problems.
This part invokes a dark twist on Douglas Adams' writing style in its comedic matter-of-factness.

strange headache

Fluctuat nec mergitur
Jan 14, 2018
This part invokes a dark twist on Douglas Adams' writing style in its comedic matter-of-factness.

Well, if you like Douglas Adams' style, you should definitely check out Lem's Cyberiad too. It's a series of short stories telling the strange exploits of Trurl and Klapaucius, two robotic engineers with almost god-like abilities. Like this one for example.


Expansive Ellipses
Staff Member
May 30, 2004
Hah, been meaning to find a suitable entry point for Stanislaw Lem for a long time now. Cheers for this. He's kind of chronically overlooked it seems, and mostly comes up re: being unhappy with how Solaris got adapted by Tarkovsky (first world problems! haha) instead of on his own work's merits, so I don't know how many gaffers will be able to chime in. Worst case I'll see if I can pull an Iliad with this thread too and follow up after some quality reading time. I encourage anyone else of a similar mind to do the same: overcome inertia and participate.


Feb 3, 2018
Wow nice OP Strange! I have been on bit of a reading kick so this is something I will look into. A philosophical look at AI sounds very fun.

strange headache

Fluctuat nec mergitur
Jan 14, 2018
He's kind of chronically overlooked it seems, and mostly comes up re: being unhappy with how Solaris got adapted by Tarkovsky (first world problems! haha) instead of on his own work's merits, so I don't know how many gaffers will be able to chime in.

Tarkovsky was more interested in making his own movie, and I think the technology just wasn't there yet to faithfully represent the book. Unfortunately, the Hollywood adaptation was such an abomination that I think it put a lot of people off reading Lem's other works. His books all read quite differently and shift vastly in tone, which makes him hard to get into. Also, most of his stuff is quite heady, so it's not that easily accessible.

But if done well enough, The Futurological Congress, The Man from Mars or The Invincible would make for some amazing mind-blowing sci-fi cinema. The short story format of The Cyberiad would make an excellent series, it's a shame that nobody seems to be willing to pick up on it.

John Day

Jan 12, 2018
San Juan, PR
Such an interesting read. I've actually fallen in love with it! I will be reading up on this too!

Such an interesting view on AI, seems so... logical.

Will keep an eye on the thread, thanks a lot OP.


Oct 23, 2012
I read The Cyberiad a long time ago. Who knows how much of it was lost in translation, but I enjoyed it nonetheless. His imagination is something else.