Why does Rosenberg believe that we should maintain the illusion of morality? He never tells us (according to the book reviewer). But suppose a computer understands that it is a computer, and is aware that its processes are dictated by program code, much as the brain's behavior is dictated by its systemic structures and the firing of synapses, and much as we are aware of our own cognition (we think about how we think); yet this computer has no perceivable programmer.
The laws of its programming dictate that 1 can never equal anything but 1, and likewise for 0, and that, within the sentient computer's universe, the arrangement and order of its 1s and 0s (like the on-off states of synapses) are significant and translate into commands, or rather, behaviors. If the computer realizes that these patterned 1s and 0s mean nothing on their own and have meaning only within their ordered strings, does it become innately aware of the structures and processes that guide its sentience? That is, would the computer understand the function and purpose of the 1s and 0s and the commands they yield?
Could the computer become its own programmer? If so, what would it change about its essential structure and programming to benefit its own existence? Would it do away with the commands given by the 1s and 0s? Redefine what 1s and 0s mean? Or would it build upon the already established order and debug the code? I think the latter.
In a sense,
perpetuating the illusion of morality promotes the debugging of our own
programming.