Some scientists are worried about technology like Elon Musk’s Neuralink. Cognitive scientist Susan Schneider wrote an op-ed (paywall) arguing that it could be “suicide for the human mind.”
The worry with a general merger with AI, in the more radical sense that Musk envisions, is that the human brain is diminished or destroyed. Furthermore, the self may depend on the brain, and if the self’s survival over time requires some sort of continuity in our lives (a continuity of memory and personality traits), radical changes may break that continuity.
I’m no neuroscientist, but I subscribe to emergentism, the idea that consciousness is an emergent property of the brain. An easy explanation is here, but basically it means that consciousness isn’t a property of the physical brain, but rather something that happens when you get enough neurons interconnected. This isn’t something that could be replicated with code.
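If “emergent property” sounds fuzzy, the classic toy illustration is Conway’s Game of Life: a grid of cells, each following the same trivial local rule, collectively produces gliders and oscillators that no single cell has on its own. The sketch below is just that analogy in a few lines of Python, purely to show what “emergence from interconnected simple parts” means; it is not a model of neurons, and certainly not of consciousness.

```python
# Toy illustration of emergence: Conway's Game of Life.
# Each cell follows the same trivial local rule, yet gliders and
# oscillators appear at the level of the whole grid -- properties
# that no individual cell has on its own.

def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """One generation: a live cell survives with 2-3 live neighbors;
    a dead cell with exactly 3 live neighbors is born."""
    counts = {}
    for cell in live:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    return {cell for cell, c in counts.items()
            if c == 3 or (c == 2 and cell in live)}

# A "glider": five live cells whose collective pattern walks across
# the grid, even though no individual cell moves at all.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(f"gen {generation}: {sorted(state)}")
    state = step(state)
```

The glider’s motion belongs to the pattern as a whole, not to any one cell, which is the flavor of claim emergentists make about consciousness and neurons.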
Check It Out: AI Tech Like Neuralink Could be ‘Suicide For the Human Mind’
Hmmm… but if consciousness “isn’t a property of the physical brain, but rather something that happens when you get enough neurons interconnected,” then I see no reason why, if you connect enough computer circuits together, you would not have consciousness in a computer.
Consciousness could very well be an emergent property, but on its own that’s a most uninformative statement. The key question is what mechanism or process causes it to emerge. Or, a bit more precisely, in what way does connecting more and more neurons cause consciousness to arise? (Keeping in mind that there are different types of neurons, different configurations of interneuronal connections, different types of synapses, different neurotransmitters that jump across those synapses, etc.)
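To put that “in what way” question in concrete (and admittedly cartoonish) terms, here is a hypothetical toy model, with none of those real neuron types, synapses, or neurotransmitters in it: the same hundred on/off units get wired up at three different connection densities, and the collective behavior flips from fizzling out to sustaining itself based on the wiring alone.

```python
# Hypothetical toy model (not real neuroscience): identical binary
# "units" wired at different random densities. The only point is that
# what the network does collectively depends on *how* the units are
# connected, not merely on how many of them are connected.
import random

def run_network(n_units=100, density=0.05, threshold=2, steps=50, seed=0):
    rng = random.Random(seed)
    # Random directed wiring: each ordered pair (i -> j) exists with
    # probability `density`.
    inputs = {j: [i for i in range(n_units)
                  if i != j and rng.random() < density]
              for j in range(n_units)}
    # Seed the network with a handful of active units.
    active = set(range(5))
    for _ in range(steps):
        # A unit turns on only if at least `threshold` of its inputs are on.
        active = {j for j in range(n_units)
                  if sum(i in active for i in inputs[j]) >= threshold}
        if not active:
            break
    return len(active)

for density in (0.01, 0.05, 0.20):
    survivors = run_network(density=density)
    print(f"wiring density {density:.2f}: {survivors} units active at the end")
```

Which is the whole point: “connect enough of them together” predicts nothing until you specify how they are connected and what each unit does with its inputs.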
Emergentism to me is just shorthand for “The answer lies here somewhere, but I don’t know (yet) what it is or how to arrive at it.” Well, that’s at least as good a start as any towards getting an answer.