For example, Google’s AlphaGo can beat a Go master, but its strategy can’t be easily explained in plain language.
But these machines don’t actually know what’s real, so they can just as easily find patterns that don’t exist or don’t matter. This also results in surprising “mistakes,” like the funny paint colors (stummy beige, stanky bean) generated by scientist Janelle Shane, or the horrifying mess of dog faces Google DeepDream finds hidden inside my selfie.

These mistakes can be far more serious. Weinberger highlights software that racially profiled accused criminals, and a CIA system that falsely identified an Al-Jazeera journalist as a terrorist threat. When an app claims to be powered by “artificial intelligence,” it feels like you’re in the future. But chances are, you’re really just looking at dog faces and made-up paint colors.
The more a computer program behaves like a human, the less you should trust it until you’ve learned how it was made and trained. Hell, never trust a computer that behaves like a human, period.
There’s your conspiracy theory.