I came across this while studying for my next class (Computer Vision). I had never heard of motion blindness before.
I wonder what is going on here that would cause a person to be able to see and understand shapes but simply not be able to process motion.
This is reminiscent of other conditions where something in the brain goes wrong (sometimes due to damage, sometimes present from birth). Another fascinating one is face blindness.
The current theory is that face blindness arises when the part of the brain normally used for facial recognition (the fusiform gyrus and the right occipital lobe) is disabled. This forces the brain to fall back on its general object recognizer instead of the specialized face recognizer. Imagine trying to recognize all your friends by looking only at their hands. It would probably be a lot harder.
The condition doesn't prevent the person from noticing other attributes of a face, such as whether it is attractive, and usually they can still read emotions from it. Attempts to treat the problem, or to get the brain to relearn how to recognize faces, have been unsuccessful so far. It seems that once this module of the brain is damaged (in an adult, anyhow), it stays damaged.
I’ve wondered how to reconcile conditions like this with the idea of universal explainers (UE). My initial conjecture would be that the brain contains various ‘machine-learning-like recognizers’ (or something like that?) that aren’t part of the software that makes up the universal explainer portion but assist it — not unlike a peripheral or module of some sort.
The retina is an uncontroversial example of this (i.e. of something that is a peripheral and not part of the UE software), but in that case there is obvious hardware involved. Could the same explanation account for face blindness and motion blindness while referencing only non-UE software modules?
If so, this might explain why you can NOT simply relearn face recognition once it’s broken (and instead have to come up with alternative strategies) — because we never recognized faces (or motion) using the UE software in the first place.
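The conjecture above can be sketched as a toy program. This is purely illustrative — all the names (`FaceModule`, `Agent`, the cue strings) are hypothetical, and it claims nothing about how brains actually work. It just shows the architecture being proposed: a UE layer that delegates to a specialized, non-UE recognizer module, and when that module is broken, cannot repair it but can only invent slower workaround strategies.

```python
# Toy sketch of the "UE + non-UE peripheral" conjecture.
# All class and cue names are hypothetical illustrations.

class FaceModule:
    """Specialized, hard-wired recognizer (analogous to the fusiform gyrus)."""
    def __init__(self, working=True):
        self.working = working
        self.known = {"alice-face": "Alice", "bob-face": "Bob"}

    def recognize(self, face):
        if not self.working:
            return None  # damage: the module simply produces no output
        return self.known.get(face)


class Agent:
    """UE layer: it can reason and invent workarounds, but it cannot
    reimplement or repair the specialized module itself."""
    def __init__(self, face_module):
        self.face_module = face_module
        # Explicit, learned associations — the "alternative strategies"
        # prosopagnosics report using (clothing, gait, voice).
        self.cues = {"red-scarf": "Alice", "limping-gait": "Bob"}

    def identify(self, face, other_cues=()):
        name = self.face_module.recognize(face)
        if name:
            return name  # fast path: the peripheral did the work
        # Fallback: slow, deliberate cue matching by the UE layer.
        for cue in other_cues:
            if cue in self.cues:
                return self.cues[cue]
        return "unknown"


intact = Agent(FaceModule(working=True))
damaged = Agent(FaceModule(working=False))

print(intact.identify("alice-face"))                  # Alice
print(damaged.identify("alice-face"))                 # unknown
print(damaged.identify("alice-face", ["red-scarf"]))  # Alice
```

Note that nothing the `Agent` can do restores the fast path; it can only add more entries to its cue table — which mirrors the observation that face recognition can’t be relearned, only worked around.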
But would that then imply that an AGI will naturally be face blind? (At least until we learn how to build an equivalent peripheral/algorithm for it to use?)
I’d be interested in other conjectures to consider.