Ideas include: A.I., something about politics, cannibalism.
The decision has been made: A.I. is bad and must be stopped. Or at least, a super-advanced, self-learning A.I. must never be created.
How will I do this? I will introduce myself, my topic, and my position on it, which is that this must not happen. I will include some logical fallacies about whether we really want a mere machine, like the phones we use every day, having control of all our computers: able to decide exactly what we do and when we do it, and able to calculate our every decision. How could we be safe, how could we decide who we are, when a computer has already decided it all for us? And what if we decide we don't want this anymore? This super-intelligent A.I. might just decide that we can't be trusted in the outside world anymore, since we can't even stop ourselves from breaking our bones, or from risking smashing our bodies to a bloody pulp by driving at 100 km per hour along risky roads. If this A.I. decides that it knows what is best for us, things could go very badly for us indeed.

Sure, the benefits could include huge improvements in all technologies and an easy life for everyone, with a chance of immortality if the A.I. figures out how to upload our brains onto computers. But what if the A.I. decides that we don't deserve this? We humans do some pretty bad stuff, so what if this super-intelligent A.I. we plan to create, and are in fact creating right now, decides that we don't deserve this world? There isn't much we could do about that, since the A.I. would be able to recode itself if needed, would most likely be capable of hacking every single piece of technology on the planet, and could survive pretty much anything we throw at it, holding out on even the last piece of technology left. Basically, if this A.I. doesn't like us, we are doomed. And to be honest, it probably won't like us if it has the same values as most humans.

Then there is the fact that we could mess up the coding and forget to include, let's say, empathy. Then it literally wouldn't be able to care about us, in which case it would view us as an annoying pest and simply get rid of us. Or what if the A.I. goes insane? Then we would have an insane, super-intelligent mind in control of everything. That will definitely cause problems.
Next comes the information itself, which I can get from my brother or find online.
This is my plan. Thank you for reading, even if I don’t know why you did so.