# Information Theory: 400 years in the making.

**Posted:** February 6, 2011

**Filed under:** Information Theory

Mathematics is a language. We use it to describe and quantify things. Our first exposure to the language is when we learn to describe counts of things: one apple, two cats, three dogs, etc. Later in life, we use Mathematics unconsciously: when we order a pizza, we order a certain diameter – 16″. Our subconscious mathematician visualizes the area of the pizza as π*(16″/2)². It then splits the pizza eight ways and figures out that we probably need another large pizza to feed the guests on game night. In our day-to-day lives, this deductive language is never spoken except when it renders a result (we need another pizza) or succinctly describes an event (a 16″ pizza). We are out of practice when it comes to communicating with each other using our innate mathematical language. And so, like a student who learns French grammar for a year and is then dropped in Paris, a student with less than a year of college calculus who is dropped into a graduate Maths course finds himself incapable of communicating more than his name, awkwardly revealing that English is his first (and only) language.
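The subconscious pizza arithmetic above can be made explicit in a few lines. This is just a sketch of that back-of-the-napkin reasoning; the guest count and slices-per-guest are made-up numbers for illustration, not anything from the post.

```python
import math

# Area of a 16-inch pizza: pi * (diameter / 2)^2
diameter = 16
area = math.pi * (diameter / 2) ** 2  # about 201 square inches

# Split the pizza eight ways
slice_area = area / 8  # about 25 square inches per slice

# Hypothetical game night: 6 guests eating ~2 slices each (assumed numbers)
guests, slices_each = 6, 2
slices_needed = guests * slices_each
pizzas_needed = math.ceil(slices_needed / 8)  # 12 slices -> 2 pizzas

print(round(area, 1), round(slice_area, 1), pizzas_needed)
```

With these assumed numbers, one 16″ pizza falls short, which is exactly the conclusion the subconscious mathematician reaches.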

My exposure to an Information Entropy course has so far been an alienating experience. From day one, I was flooded with definitions, theorems, lemmas, and their proofs, three at a time, twice a week, in one-hour sessions. Frustration and self-disappointment are hallmark experiences of a graduate student, but this was far more frustrating and discouraging than usual. Why wasn’t it clicking? Why couldn’t I make a coherent story out of it?

It then hit me that, as students, we never draw enough parallels between learning Maths and learning a language: you can’t really write a good French essay on political reform without years spent absorbing vocabulary and sentence structure, conversing, and making mistakes. Once we understand that it takes practice to get concepts right, we need to approach them with patience. After all, it takes years, even centuries, to come up with the very basics of any Mathematical subject, even if this isn’t evident from class: a theorem and its proof are drawn out on the blackboard in less than twenty minutes. This pedagogic process creates a sense of student impotence: if you didn’t get the implication of the theorem, you are set up to fail on the next round of theorem-proving. If you missed a derivation step and were led astray into doubting your mental faculties, you might as well get up and leave class.

But if we are patient, if we understand that these simple proofs were never conjured by a super genius in less than twenty minutes but took years to come into form, we are more likely to digest them and maybe even fall in love with them. This blog series is dedicated to tracing the history of Information Theory from its infancy (the 1600s) to its current state. The series will mostly describe key ideas in chronological order, but the very first post will, by necessity, be on Claude Shannon’s work on quantifying information in the 1940s, as it will provide a framework for the series. Stand by for more to come.
