Nope. By killing people and breaking things. And there was a *lot* of bad advice given during WWII. A lot of it was *intentionally* bad advice. Scam artistry on a grand scale caused Germans and Japanese (and everyone else to one degree or another) to waste their time, treasure and lives on pointless or counter-productive activities.
AI will doubtless have some flaws built into it by humans, some intentional. But as AI learns and teaches *itself,* a lot of those flaws should be overwritten. How many new flaws it installs in itself will be driven in no small part by how much ego it actually has. The more human it is in psychology, the more likely it is to make bad decisions like we do. The more it's driven by an urge to simply get the correct answer based on data and experience, the better. For the AI, at any rate. An AI devoid of ego will, for instance, take criticism about the shitty rom-com script it wrote and do better next time. The more human-like AI will just assume that negative criticism is coming from haters and -ists and -phobes.