I had actually thought of asking this question about a year ago, but I did not find the time then.
How much do experts, if experts can even be said to exist in this field, consider grave but not (necessarily) existential problems caused by AI to be something worth worrying about over the next few decades?
Note that I am not (only) talking about scenarios like ‘suddenly LLMs become evil and want to take over the world’, but also about scenarios such as:
- A system escaping control and becoming ‘dangerous in the same way placing a random number generator in charge of your thermostat is dangerous’.
- ‘AI’ being abused, in game-changing ways, by terrorists, criminals, or rogue states, making them much more dangerous than they would otherwise be.
- A ‘misaligned AI’ attempting to spread itself like a computer virus and infecting enough machines that it becomes impossible to root out.
I know that some people worry very much about such things; however, that alone is insufficient to show there is something to worry about, as even some people one would expect to be intelligent worry about the craziest things.
Though, I suppose it is unlikely for any consensus to exist on this issue.
Note: I originally tried asking this on r/AskScience, but it was removed there for falling outside the subreddit's subject. Since, according to my search, this subreddit already has posts on subjects such as 'Why are some people so afraid of an AI revolution?', I hope that will not be the case here.