Monday, September 10, 2007

Are you eating an artificial apple from the future?

If, like me, you're sitting around chomping on one of nature's greatest delights, you might want to consider the probability that, unless you personally picked it off your own tree, you may in fact be eating an artificially replicated apple from the future. In fact, you could arguably be considered quite naive to think otherwise.

The argument goes that in the future, as technology improves and natural resources become scarcer, food replication using naturally occurring airborne nutrients will become a fact of life. Indeed, the crews of the fictional Star Trek universes would not be able to survive the depths of space without such scientific advances. That at some point in the future, apples will be artificially replicated for human consumption, is therefore undeniable. We also have to consider that as the world comes towards its natural closure, we will discover the means to time travel and make full use of it to escape our own fate. Some philosophers believe that there are no scientific reasons why bringing artificially replicated apples from the future could do any harm, and therefore we must conclude that this is happening, and only the question of degree remains. It's tempting to place the odds quite low, but when we consider that replication technology will be able to produce infinitely more apples than all the apple trees of the earth could ever do in a lifetime, you will see that naturally grown apples are only a tiny fraction of what's available to us today.

Personally, I have taken the path of denial. I'm quite happy to accept that the apple I'm now finishing off was grown on a tree with chemical fertilizers, the way god intended. This guy, on the other hand, who has demonstrated very well the extent to which a Hollywood-produced movie can permanently alter your perception of reality, regardless of the suckiness of its two sequels, provided it contains sufficient special effects, long black coats and sunglasses, will no doubt be avoiding apples until he develops a way to check their origin. While he speciously goes to great lengths to show logically why we are almost certainly mere simulations on a grand computer somewhere in the future, he goes and ruins it all by including, and then failing to elaborate on, the following line:

Many philosophers and cognitive scientists believe that such brain-simulations would be conscious, provided the simulation was sufficiently detailed and accurate.

In other words, his entire premise hinges on the accuracy of baseless personal speculation by people who have no specialised knowledge of the topic. Without even needing to touch the crux of his argument, he has proved himself wrong with this single line, which ironically would have been better left out. The chances of brain simulations becoming conscious are zero, give or take zero. They may be able to imitate human brains well enough to fool real people, but that's only as hard as adding an animated face. And they may be able to fool real brains into thinking certain things are real that aren't - we already know how to do that without computers. Creating a computer programme that has the same consciousness as humans, though, is impossible, because conceptually it does not make any sense. There are some very intelligently written books about these days, yet I don't hear anybody suggesting they will one day be able to think for themselves and appreciate their own existence.

And what if the simulation wasn't quite sufficiently detailed and accurate? Is there a cut-off point between the programme being aware and not being aware? Or would less detail and accuracy mean a lower level of consciousness? That being so, it would be logical to assume that the first programme I ever wrote in BASIC - print "hello" - would also have had a small degree of awareness, and perhaps could have chosen not to run when I asked it to. Stick to wildlife documentaries mate, and leave the science fiction to those of us who understand both words.


  1. If a brain simulation is detailed enough to fool real brains into thinking it's conscious, what is to say it is not conscious? After all, we're the ones who define the word 'conscious', and clearly we're being fooled into thinking it applies. Or to put it another way, what makes a thing 'conscious', if not its behavior?

    As for the reductio ad absurdum argument about your BASIC program, I consider it to be exactly as conscious as the degree to which it fools real brains. I don't know what units this would be measured in, but certainly it could pass as human in a Turing test limited to one line.

    Concepts such as "conscious" are tricky to nail down, because we each have an innate idea of what "conscious" is that we find hard to formalize in a way that would allow one to definitively say, "Yes! This is conscious!" - or, for that matter, to assign a degree to it on some scale of measurement. However, unless you believe in a soul (and if so, what is a soul? I've always wondered what people *really* meant by that - I've not got much of a religious upbringing) then you are more-or-less bound to assume that our consciousness emerges from the physical principles at work in the brain - which, in principle, may be emulated. The only questions then are a) whether a direct physical simulation of a brain is computationally feasible, b) if so, whether it would be possible to provide it with sufficient and appropriate stimulus while it develops (i.e., all the sensory input human babies get while growing up) so that it matures into a sentient, conscious being, and c) whether we can make it more efficient than direct emulation.

    While a) and b) are definite issues, I for one believe c) is likely possible, simply because there's no known fundamental principle which absolutely limits consciousness to organic matter. As such, that our particular implementation of consciousness happens to run on organic matter should not prevent alternate implementations from running on a large parallel-processing machine of some sort. Of course, the real problem here is that we have no idea how our consciousness really works, so it's impossible to reimplement it at the moment, but I think saying it's impossible at this stage is a bit like saying it'd be impossible to fly in the 1800s - it's something that requires a lot of research and study, but to call it impossible is severely premature.

  2. As a further note, you write: "Creating a computer programme that has the same consciousness as humans, though, is impossible, because conceptually it does not make any sense. There are some very intelligently written books about these days, yet I don't hear anybody suggesting they will one day be able to think for themselves and appreciate their own existence."

    This is a flawed comparison. Books are static; they have no capacity to process new information, react to stimuli, or otherwise think. Computers are dynamic; their data is not set in stone, they can take in input, and they can respond to happenings in the world. As such there's no real theoretical problem with them thinking, provided we can find an appropriate program for it.

  3. The problem is that they're not thinking, they're merely following a list of instructions. No matter how advanced the instructions become, at no point is the machine ever acting on its own authority - there's a programmer behind every operation. It conceptually doesn't make sense because a brain is organic matter, not a series of logical instructions, and to say that the material the brain is made of is irrelevant enough that it could be done away with entirely would be akin to claiming the human mind is a floating spiritual entity. If you think of the brain as a block of Pecorino Romano, try to imagine simulating it in an artificial environment. And then try and explain exactly what you mean.

  4. If the machine, following its instructions, behaves identically to a real brain, then there is no difference. As for simulation, the most straightforward method is simple, but computationally infeasible: use the laws of physics to simulate the behavior of every particle in the brain.

    Obviously that's too computationally expensive to implement, but at another level, you could model mathematically the behavior of the neurons in the brain, and simulate that. As I recall, there have been formulas developed modelling the behavior of particular types of cells in the rat hippocampus; it's just a matter of scaling that up, and reproducing the macro structure of the brain.
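    To give a concrete feel for what "modelling the behavior of neurons mathematically" means, here's a minimal sketch - not the hippocampal formulas mentioned above, just the textbook leaky integrate-and-fire model, with illustrative parameter values I've chosen myself - of a single simulated neuron in Python:

    ```python
    # Leaky integrate-and-fire neuron: the membrane voltage decays toward
    # a resting level, is pushed up by input current, and "spikes" (then
    # resets) whenever it crosses a threshold. All parameters here are
    # arbitrary illustrative values.

    def simulate_lif(current, steps, dt=1.0, tau=10.0,
                     v_rest=0.0, v_threshold=1.0, v_reset=0.0):
        """Simulate one neuron for `steps` timesteps of size `dt` with a
        constant input `current`; return the timesteps at which it spiked."""
        v = v_rest
        spikes = []
        for t in range(steps):
            # Euler integration of dv/dt = (v_rest - v + current) / tau
            v += dt * (v_rest - v + current) / tau
            if v >= v_threshold:
                spikes.append(t)
                v = v_reset  # fire, then reset
        return spikes

    # A strong constant input drives the voltage past threshold repeatedly;
    # a weak one settles below threshold and never fires.
    print(simulate_lif(current=2.0, steps=100))
    print(simulate_lif(current=0.5, steps=100))
    ```

    Scaling this up to a brain would mean billions of such units (with far richer dynamics) wired together in the brain's macro structure - which is exactly where the computational cost comes in.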

    And remember, the brain is following some sort of instructions in its DNA as well. That's analogous to the case of the computer - although it can't alter those base instructions, those base instructions can build new, higher level instructions, and interpret those.