During a recent interview on CBS's 60 Minutes, Geoffrey Hinton, the veteran researcher often referred to as a "Pioneer of AI" (one of several figures to carry that title), revealed his apprehension that the technology he dedicated his life to building could become a dominant global force.
The distinguished cognitive scientist made headlines earlier this year when he left Google, citing regret over the impact of his life's work. He opened the conversation by stressing that, for the first time, humanity must confront the prospect of an entity on our planet that surpasses human intelligence.
The most intriguing moment of the discussion occurred when CBS host Scott Pelley inquired whether superintelligent AI had the potential to supplant humanity.
On the question of how AI might slip beyond human control, Hinton suggested that the ability of autonomous agents to modify their own code could play a pivotal role.
"This is a matter of grave concern," he emphasized, also voicing unease about the "black box" nature of AI technology: researchers have only a limited understanding of the inner workings of complex machine learning models.
Hinton explained that while researchers have a reasonably solid understanding of how AI systems learn, as a system's complexity grows, its behavior becomes as opaque as the inner workings of the human brain.
Source: Futurism