u/Caktuar

Hey everybody! I read this book quite some time ago and I'm trying to remember the title. I believe it's at least a few decades old, but I am not sure.

It opens with a sort of prologue/preamble section where the human protagonist is on the surface of a celestial body - I believe it was the Moon. Because of some kind of emergency, he needs a delivery of material or equipment; when he asks the local computer system to deliver it as quickly as possible, the computer fires it at the surface from space, creating a massive shockwave and risking serious damage and loss of life. This computer isn't sentient, but the incident leads the protagonist to reflect on the dangers posed by computers learning to think for themselves.

The main story is about an AI system aboard some sort of construct in space; I cannot remember whether it is a ship or a station. As the AI gains sentience, humans attempt to destroy it; their attacks push it to evolve and grow more intelligent even faster in order to defend itself. Crucially, however, the computer isn't depicted as evil. I don't believe it has access to cameras - its only inputs come from various other data feeds - and for most of the book it doesn't even realize that humans are sentient entities. It's just trying to stay alive, killing people as a byproduct of its self-preservation, until it finally realizes that the things damaging it might also be sapient.

Spoilers, I guess, in case this also sounds interesting to anyone else who hasn't read it: >!the novel ends with the humans and the emergent AI forging an uneasy truce after they nearly destroy each other.!<

Thanks for any assistance!
