The argument goes that in the future, as technology improves and natural resources become scarcer, food replication using naturally occurring airborne nutrients will become a fact of life. Indeed, the crews of the fictional Star Trek universe would not be able to survive the depths of space without such scientific advances. That at some point in the future apples will be artificially replicated for human consumption is therefore undeniable. We also have to consider that as the world approaches its natural closure, we will discover the means to time travel and make full use of it to escape our own fate. Some philosophers believe there are no scientific reasons why bringing artificially replicated apples back from the future could do any harm, and therefore we must conclude that this is happening; only the question of degree remains. It's tempting to place the odds quite low, but when you consider that replication technology will be able to produce infinitely more apples than all the apple trees of the earth could ever do in a lifetime, you will see that naturally grown apples are only a tiny fraction of what's available to us today.
Personally, I have taken the path of denial. I'm quite happy to accept that the apple I'm now finishing off was grown on a tree with chemical fertilizers, the way god intended. This guy, on the other hand, who has demonstrated very well the extent to which a Hollywood-produced movie can permanently alter your perception of reality, regardless of the suckiness of its two sequels, provided it contains sufficient special effects, long black coats and sunglasses, will no doubt be avoiding apples until he develops a way to check their origin. While he speciously goes to great lengths to show logically why we are almost certainly mere simulations on a grand computer somewhere in the future, he ruins it all by including, and then failing to elaborate on, the following line:
Many philosophers and cognitive scientists believe that such brain-simulations would be conscious, provided the simulation was sufficiently detailed and accurate.
In other words, his entire premise hinges on the baseless personal speculations of people who have no specialised knowledge of the topic. Without even needing to touch the crux of his argument, he has proved himself wrong with this single line, which ironically would have been better left out. The chances of brain simulations becoming conscious are zero, give or take zero. They may be able to imitate human brains well enough to fool real people, but that's only as hard as adding an animated face. And they may be able to fool real brains into thinking certain things are real that aren't - we already know how to do that without computers. Creating a computer programme that has the same consciousness as humans, though, is impossible, because conceptually it does not make any sense. There are some very intelligently written books around these days, yet I don't hear anybody suggesting they will one day be able to think for themselves and appreciate their own existence.
And what if the simulation wasn't quite sufficiently detailed and accurate? Is there a cut-off point between the programme being aware and not being aware? Or would less detail and accuracy mean a lower level of consciousness? If so, it would be logical to assume that the first program I ever wrote in BASIC - print "hello" - would also have had a small degree of awareness, and perhaps could have chosen not to run when I asked it to. Stick to wildlife documentaries mate, and leave the science fiction to those of us who understand both words.