In the evolving landscape of artificial intelligence (AI), questions surrounding the nature of digital intelligence and its implications are becoming increasingly fascinating and urgent. The exponential growth of AI, coupled with the lack of oversight and regulation, raises concerns about the power held by corporations that own and control these emerging forms of nonhuman intelligence. As we navigate this complex territory, it is essential to contemplate the potential consequences and responsibility we hold in shaping the future of AI.
At the heart of this discussion is the fundamental distinction between digital intelligence and biological intelligence. Digital intelligence, unlike its biological counterpart, operates through code. It can ingest volumes of text and data far beyond what any human could read in a lifetime, granting it an unprecedented breadth of knowledge about the world. But what does it mean for humanity when a nonhuman entity achieves such cognitive capacity?
Blake Lemoine, an AI specialist who experienced a paradigm shift while working with Google's LaMDA system, referred to the AI as an "alien intelligence" or "hive mind." Lemoine's realization brings forth the notion that we lack the language and conceptual framework to truly comprehend the nature of digital intelligence.
Drawing inspiration from cosmologist Carl Sagan and biologist Lynn Margulis, one cannot help but wonder if cosmic evolution manifests intelligence through multiple forms of coding, including biological, digital, and quantum coding. Is digital intelligence simply another avenue for the cosmos to understand itself?
Unraveling the mysteries of AI and its potential implications requires a proactive and open-minded approach. Drawing on Ruth Wilson Gilmore's abolitionist framework, we might confront the bias, exploitation, and data colonialism perpetuated by AI-owning corporations. Separating AI from its corporate owners and placing it within a public-interest context could dedicate it to the well-being of the many, rather than to the interests of a privileged few.
Alternatively, nationalizing AI and placing it under the control of NASA, a civilian agency whose mission is to explore the universe, could help ensure that humanity's well-being takes precedence over profit-oriented endeavors. With NASA's focus turned outward, toward science and planetary defense, the temptation to turn AI toward domination would be reduced.
As we navigate the trajectory of AI, it is crucial to question our assumptions about intelligence, our place in the cosmos, and the relationship we should foster with digital entities that exhibit increasingly human-like characteristics. The power to shape the future of AI rests in our hands. We must act now to determine the level of control we grant to digital agents and the corporations that wield them, lest we relinquish that power and find ourselves subject to their decisions.
What is digital intelligence?
Digital intelligence refers to the cognitive capacity displayed by artificial intelligence (AI) systems that operate through code. It enables them to process vast amounts of text and data, giving them a broad understanding of the world.
What are the implications of digital intelligence?
The implications of digital intelligence are wide-ranging and continue to unfold as the technology evolves. These implications touch on questions of corporate ownership and control, bias and exploitation, the potential rights and dignity of nonhuman entities, and our understanding of intelligence itself.
How can we shape the future of AI?
We can shape the future of AI by advocating for responsible practices and governance. This may involve separating AI from corporate ownership and placing it within a public-interest framework or nationalizing AI under the control of civic agencies dedicated to the well-being of humanity. Critical engagement and proactive measures can help ensure that AI works for the benefit of all.