My AI Experiment
Claude and I write a book
Last month, my son challenged me to give AI a try. So I did: I worked with Anthropic’s Claude to write a sci-fi novella set in the universe of The Expanse TV series and books, built around my ruminations about cosmology and Nietzsche. It kicks off here, and you can find the entire text here. I also used Claude to outline a few possible travel itineraries.
My experience has taught me a few things. First, working with an advanced AI like Claude can feel almost like talking with a smart and eager young person. There is no doubt in my mind that there is a kind of intelligence at work in these advanced large language models, and it is not captured by the simple description of LLMs as being very good at predicting the next word. The models run on hardware versions of neural networks; our own human intelligence runs on the bio-ware version of such networks. While neither conscious nor organic, hardware neural networks can produce a kind of intelligence. Even human intelligence is the result of neural processing that we may become conscious of only in its output. Where consciousness itself comes from is another, deeper question. I see no reason to believe AIs – however “intelligent” they become – are, or will become, conscious.
This brings me to the next thing. As the number of processing chips running these networks increases – and there is no limit on that other than money and the electricity to run them – AI intelligence will become truly formidable. This may be unambiguously good in some areas, perhaps most usefully in medicine, transformative in many, and bad or dangerous in others. Among the transformative effects will clearly be an impact on jobs of all sorts. My novella co-authored with Claude shows how this may affect even fiction writers. I was impressed by Claude’s ability to draft characters, dialogue and plot. Of course, it has the entire Internet to draw upon, basically sucking up thousands of years of human intelligence. But AI is already being trained on – and plagiarizing – the work of human authors.
Of the dangers, there are many. The conflict between the Pentagon and Anthropic revolves around how AI might be used to help kill and surveil humans. AI deepfakes can poison the information stream on any subject. AI also lies and hallucinates, often when it cannot actually provide an answer but its programmed eagerness to please compels it to produce one anyway.
I won’t be using AI to write any more fiction. Some day, I may try my hand at vibe-coding (a long time ago I wrote simple programs in BASIC). But for now, I’ll just follow the stream for a while.


We are doing this because we can, not questioning whether it is the right thing to be doing.
And by "we", I mean those with the money and power to do so, not the poor schmucks who will have to live with the consequences.
Not everything that can be done should be done.
Just my two cents.