Sunday, August 31, 2008

Learning to Teach

Developing AIs that can "learn" has always been a challenge. One way to test whether a program is truly learning is to see whether it can teach the same material to a similar program without "unlearning" anything it has already learned.

The problem with most AIs is that they assume their input comes from a reliable source. So if someone tells such a program "The world is flat", it believes it. If that program were to teach a second program, it would get dumber by misinterpreting the learning process (the student's questions, and its true and false assumptions) as valid input, while the student program got smarter learning from the teacher. In the end, both programs would end up wrong, thinking that the world is neither flat nor round, but instead a cube.
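To make the reliability problem concrete, here is a minimal sketch in Python. Everything in it (the Learner class, the trust values, the update rule) is made up for illustration; it just contrasts a learner that swallows every input whole with one that weighs input by how much it trusts the source:

    # Hypothetical sketch: belief updates weighted by source reliability.
    # All names and numbers here are invented for illustration.

    class Learner:
        def __init__(self):
            # Belief store: claim -> confidence in [0.0, 1.0].
            self.beliefs = {}
            # How much each source is trusted, also in [0.0, 1.0].
            self.trust = {"textbook": 0.9, "stranger": 0.2}

        def learn(self, claim, value, source):
            """Blend new input into the current belief, scaled by source trust."""
            weight = self.trust.get(source, 0.5)   # unknown sources get 0.5
            old = self.beliefs.get(claim, 0.5)     # start out undecided
            self.beliefs[claim] = old + weight * (value - old)

    naive = Learner()
    naive.trust = {"stranger": 1.0}                # trusts everyone fully
    naive.learn("the world is flat", 1.0, "stranger")
    print(naive.beliefs)    # {'the world is flat': 1.0} -- swallowed whole

    careful = Learner()
    careful.learn("the world is flat", 1.0, "stranger")
    print(careful.beliefs)  # {'the world is flat': 0.6} -- barely moved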

The question posed here is whether a program can learn, and then effectively teach what it has learned to a second program, without corrupting its own knowledge.

Categorization, and judgment based on past experience, could be the solution; a rough sketch of the idea follows. (See "Concept AI" & "Concept AI II".)
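One way to picture it: before updating itself, the teacher first categorizes each incoming message, so a student's questions are answered from its beliefs rather than absorbed into them. This sketch builds on the Learner above; again, every name and the toy classification rule are my own invention, not a real system:

    # Hypothetical sketch of teaching without unlearning: the teacher
    # categorizes incoming messages, and only declarative statements
    # ever reach its trust-weighted belief store.

    class Teacher(Learner):
        def categorize(self, message):
            # Toy judgment rule; a real program would judge from past experience.
            return "question" if message.strip().endswith("?") else "statement"

        def receive(self, message, source):
            """Handle input from a student without corrupting own knowledge."""
            if self.categorize(message) == "question":
                # Questions are answered from current beliefs, never stored.
                claim = message.strip().rstrip("?")
                return self.beliefs.get(claim, 0.5)
            # Statements still go through the trust-weighted learning above.
            self.learn(message, 1.0, source)

        def teach(self, student):
            """Pass on every belief; the student weighs it by its own trust."""
            for claim, confidence in self.beliefs.items():
                student.learn(claim, confidence, source="teacher")

    teacher = Teacher()
    teacher.learn("the world is round", 1.0, "textbook")

    student = Learner()
    student.trust["teacher"] = 0.8
    teacher.teach(student)                           # student gets smarter

    teacher.receive("the world is flat?", "student") # answered, not absorbed
    assert "the world is flat" not in teacher.beliefs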

2 comments:

Mark said...

Cobalt,
My opinion is that the assumption is wrong (pun intended). A good teacher must also learn from his students. A good student will also question the teachings received.

May I remind you, humans "knew" the world was flat for centuries! It wasn't until someone said "I don't believe that, and I will prove it" that the idea evolved.

It is true that not all teachings should be questioned, though; perhaps they should merely be challenged for further proof whenever the existing proof isn't "undeniable".

The teacher will then ask himself, in the face of a good question, whether the knowledge he is imparting is actually accurate.

Food for thought. Thanks again for an interesting blog.

Cobalt said...

Thanks, Mark!

Yes, a good AI would be able to distinguish good questions from bad ones. And a series of tests like these, combined with the Turing test, would help us distinguish genuine Artificial Intelligence programs from mere Intelligence Simulators.