StarLog – Artificial Intelligence

Next piece of homework for Star Trek: Inspiring Culture and Technology:

Where do you see Artificial Intelligence going? Will it be Data, The Doctor or something new? Do we need to fear it, embrace it or something in between? 

In the very long term – definitely something like Data or the EMH, that is, general artificial intelligence with an awareness of self. Intelligence and self-awareness appear to be emergent phenomena of complex networks. We have them. The more we look for them, the more higher animals we observe them in. That means it is probably just a question of time before we are able to build a neural network that is sufficiently complex that self-awareness and the ability to learn will emerge.

That will take some time. A lot longer than the marketing would lead us to believe. In the shorter term, we are going to see a lot more specific artificial intelligence.

So – where is it going? Definitely towards Data. In very small steps. And there is a very long way to go before we get to the EMH.

Should we fear it? We should fear it kinda like we fear the eventual death of the sun. Yes, it will burn out, and it will end life on Earth. But not in our lifetimes, and not in our grandchildren's lifetimes. Right now we should embrace it. As mentioned in the interview, artificial intelligence as we know it now has given us more time to do what truly brings value to our work, ourselves, and our fellow human beings. That is not a threat.

But we should begin to think hard about the moral and ethical implications of how we use artificial intelligence. US drones are now capable of taking off by themselves, flying to the target area on autopilot, identifying targets, and returning after the killshot. As it appears from outside the intelligence community, the only reason the artificial intelligence in the drones does not also pull the trigger and launch the missile on its own is ethical considerations.

And those are things we need to think about. And perhaps be a little fearful of. Not the artificial intelligence, but rather the all too real human stupidity. We should think about who will be held responsible for mistakes. If a missile launch is determined to be a breach of the laws of war, who is responsible? The programmer? The people inputting data into the neural networks? The designers of the training sets? When the self-driving car makes a mistake and hits someone – who is responsible? When it makes a choice between hitting the stroller and the retiree – was that the right answer?

We might as well get started on those issues. They are only going to get more complicated. And don’t get me started on the moral implications of what we should do when we reboot the computer network that has gained self-awareness.

A good place to start would be to watch some Star Trek. Because these issues have been discussed before. The trial determining the humanity of Data and the issues regarding holodeck malfunctions, to take just two cases.

And that should get me to the next rank – Lieutenant Junior Grade: