But there's one thing I want to dwell on, because in considering the things that computers can or can't do, there is an interesting insight into human consciousness at the end of all this effort. Bethell mentions Cyc, a multi-decade AI project intended to rigorously describe all the rules that a young child knows by about the time he enters school.
"The New Scientist reported earlier this year that Cyc now contains around 300,000 concepts, 'such as 'sky' and 'blue,' and around 3 million different assertions, such as 'the sky is blue,' in a format that can be used by computers to make deductions.'
There's still a long way to go, though. 'Despite more than 20 years' work, the Cyc project contains only about 2 percent of the information its designers think it needs to operate with something like human intelligence.'"
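To make the New Scientist description concrete, here is a minimal sketch of what "assertions in a format that can be used by computers to make deductions" looks like. The predicate names, the facts, and the single inference rule are invented for illustration; Cyc's actual representation language (CycL) is vastly more expressive, and its millions of assertions include the rules themselves.

```python
# Toy knowledge base: assertions stored as (subject, relation, object)
# triples, loosely in the spirit of the Cyc example quoted above.
facts = {
    ("sky", "hasColor", "blue"),
    ("blue", "isA", "Color"),
    ("grass", "hasColor", "green"),
    ("green", "isA", "Color"),
}

def deduce(facts):
    """Forward-chain one hand-written rule to a fixed point:
    if X hasColor Y and Y isA Color, then X isA ColoredThing.
    A system like Cyc applies a huge number of such rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, rel, y) in list(derived):
            if rel == "hasColor" and (y, "isA", "Color") in derived:
                new_fact = (x, "isA", "ColoredThing")
                if new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

all_facts = deduce(facts)
print(("sky", "isA", "ColoredThing") in all_facts)  # True
```

The point of the sketch is how little each assertion carries on its own: "the sky is blue" yields nothing further unless someone has also told the machine what "blue" is and written a rule connecting the two, which is why the knowledge base has to grow so enormous.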
Why is something so simple so difficult? Because the child knows very early that reality exists, something that he is part of but not all of. The child psychologist Jean Piaget famously claimed that babies acquire the idea of object permanence around the age of 1. For example, if a ball rolls behind a sofa and out the other side, the ball on one side of the sofa is the same object as the ball on the other side, and furthermore, it still existed even when it was out of sight. Because things like 'ball', 'sofa', 'sky', or 'blue' are in the child's consciousness as parts of reality, every observation is an opportunity to make inferences or generalizations about reality or specific objects in it.
For all the things it does well, the computer is at a monumental disadvantage here. It has no comprehension of reality to generalize from, so every conceivable aspect of a 'sofa' has to be directly input as a rule or a data point. And even for the very simplest things, there are too many such aspects for the computer ever to get a good handle on them.