Against AI sensationalism
Yes, AI is scary, but it is ultimately a reflection of our current system, and it is the introduction of AI into an already repressive environment that we must question.
Increasingly, with the never-ending burning of fossil fuels, Ground Zero is no longer a single city of any sort, but this planet itself and, whether we’ve already found a third way to destroy ourselves (and so much else) or not, there is something awesomely ominous about our urge to destroy so much with our multiplying versions of fallout.
The Cuban experience now looks to me like an even more impressive success story, showing purely human intelligence coping with a seriously life-threatening situation at nation-state scale.
But I must admit that AI, whatever its positives, looks like anything but what the world needs right now to save us from a hell on earth.
The arrival of this AI challenge to our adaptive capacities is occurring precisely at the historical moment in which all of the other facets of the polycrisis are reaching a kind of historical crescendo or apogee.
ChatGPT and its successors and rivals, whatever their virtues, are the latest agents in the corruption of the public sphere by digital technology, threatening to extend and deepen the misinformation, fabulism, and division stoked by Twitter and other digital media.
If we reduce all of our efforts at addressing our problems to language a machine can understand, we will get machine solutions. What we need, however, are solutions that come from our deep connections to this planet as beings of this planet, connections that no machine will ever fathom.
Thus, we should consider whether the superintelligent AI future some fear might already be in action, albeit at a slower and more subtle pace than pundits predict will follow “the singularity,” when AI becomes more capable than humans.
The death of a pedestrian during a test of a driverless vehicle calls into question not just the technology—which apparently failed to detect the pedestrian crossing a busy roadway and therefore neither braked nor swerved—but also the notion that driving is nothing more than a set of instructions that can be carried out by a machine.