Science as a candle in the dark

Finally read “The Demon-Haunted World” by Carl Sagan. I had borrowed it from the library earlier, but for lack of time had to return it unread.
Reading it now, I find myself nodding in agreement at every other sentence: “he’s so right, that’s exactly right, and what a beautiful way of putting it too!”
Some quotes from this wonderful book by a wonderful man.

“In all this time he continued to work for peace and amity. When Ann and I once asked Pauling about the roots of his dedication to social issues, he gave a memorable reply: “I did it to be worthy of the respect of my wife,” Helen Ava Pauling.”

“…the cure for a fallacious argument is a better argument, not the suppression of ideas.”

“In his celebrated little book On Liberty, the English philosopher John Stuart Mill argued that silencing an opinion is “a peculiar evil.” If the opinion is right, we are robbed of the “opportunity of exchanging error for truth”; and if it’s wrong, we are deprived of a deeper understanding of the truth in “its collision with error.”
If we know only our side of the argument, we hardly know even that; it becomes stale, soon learned only by rote, untested, a pallid and lifeless truth.”

A Collection of Holes Tied Together

I recently read “Flaubert’s Parrot” by Julian Barnes and was kicking myself for not having read him earlier. More than once I had passed over his books at book sales.
The language he uses is simply delicious. Sample this:

“You can define a net in one of two ways, depending on your point of view. Normally, you would say that it is a meshed instrument designed to catch fish. But you could, with no great injury to logic, reverse the image and define a net as a jocular lexicographer once did: he called it a collection of holes tied together with a string.”
Or this:
“Instead he learned that life is not a choice between murdering your way to the throne or slopping back in a sty; that there are swinish kings and regal hogs; that the king may envy the pig; and that the possibilities of the not-life will always change tormentingly to fit the particular embarrassments of the lived life.”

Julian Barnes now sits pretty much near the top of my “to-read” list.

Some more on simple systems

Continuation of my notes from “Reflection Groups and Coxeter Groups” by James E. Humphreys.

$\Delta$ is a simple system. Let $\alpha$ be in $\Delta$ and let $s_\alpha$ be the reflection associated with $\alpha$. Of course, $s_\alpha$ must take $\alpha$ to $-\alpha$; that’s what a reflection does. But what does $s_\alpha$ do to the other members of $\Pi$ (the positive system containing $\Delta$)? It is comforting to know that $s_\alpha$ will not take any member of $\Pi$ (except $\alpha$, of course) outside $\Pi$.

A simple reflection maps a positive system onto itself, except for the one root it sends to its negative.

How annoying it would be if this were not the case: like the contents of a neat cupboard thrown all over the room. This neatness, this organization, serves as a good rule of thumb when I am thinking about stuff. If the end result is chaotic, it is highly likely I have done something wrong.

Anyway, the upshot of a positive system being mapped onto itself (but for that one root) is that any two positive systems lying within a root system are conjugate to each other under the action of the associated reflection group. Ditto for simple systems in a root system. Hence one can fix a simple system when proving results, without loss of generality.
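A small worked example of my own (the type $B_2$ configuration; not an example from this part of the book): take the eight roots $(\pm 1, 0), (0, \pm 1), (\pm 1, \pm 1)$ with simple system $\Delta = \{\alpha, \beta\}$, where $\alpha = (0,1)$ and $\beta = (1,-1)$, so that $\Pi = \{\alpha, \beta, \alpha+\beta, 2\alpha+\beta\}$. The simple reflection $s_\alpha$, which just negates the second coordinate, permutes the positive roots other than $\alpha$:

```latex
s_\alpha(\alpha) = -\alpha, \qquad
s_\alpha(\beta) = 2\alpha + \beta, \qquad
s_\alpha(\alpha + \beta) = \alpha + \beta, \qquad
s_\alpha(2\alpha + \beta) = \beta .
```

Nothing positive is thrown out of $\Pi$ except $\alpha$ itself, exactly as the theorem promises.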

Root system and simple root system

These are my notes from the book “Reflection Groups and Coxeter Groups” by James E. Humphreys.

A root system is just a bunch of vectors in a vector space which satisfy two requirements. The first is that for any vector v in the bunch, the only scalar multiples of v in the bunch are v and -v; the second is that the bunch should be closed under reflections. What this means is that if I take any vector v from this set and reflect it in the (hyper)plane orthogonal to any other vector from the set, say w, the reflected vector should also be in the set.

This reflection operation, i.e. reflecting a vector v in the hyperplane orthogonal to a vector w, can be associated with each nonzero vector in the vector space. In other words, every nonzero vector determines a reflection. Conversely, a reflection determines a hyperplane, and a line orthogonal to that hyperplane, in the vector space.
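To make this concrete, here is a small sketch of my own (not from the book), using the standard reflection formula $s_w(v) = v - \frac{2(v,w)}{(w,w)}w$, to check that eight vectors in the plane really do form a root system:

```python
from fractions import Fraction

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def reflect(v, w):
    """Reflect v in the hyperplane orthogonal to w (the standard formula):
    s_w(v) = v - 2 (v . w) / (w . w) * w."""
    c = Fraction(2 * dot(v, w), dot(w, w))
    return tuple(a - c * b for a, b in zip(v, w))

# Candidate root system: the eight vectors of the type-B2 configuration.
roots = {(1, 0), (-1, 0), (0, 1), (0, -1),
         (1, 1), (-1, -1), (1, -1), (-1, 1)}

# Requirement 1: for each root v, -v is also a root.
assert all(tuple(-a for a in v) in roots for v in roots)
# Requirement 2: closure under every reflection s_w, with w a root.
assert all(reflect(v, w) in roots for v in roots for w in roots)
print("both requirements hold")
```

Using Fraction keeps the arithmetic exact, so the set-membership tests are honest rather than floating-point approximations.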

The group generated by the reflections associated with the vectors in the root system is a finite reflection group. So the group and the root system are tied to each other.

In accordance with a general theme in algebra, we look for some subset of this (possibly big) root system, which will allow us to capture the whole of the root system. This search leads us to the concept of a simple root system.

Let the root system be $\Phi$ and the vector space be V. By imposing a total ordering on V, I can label some vectors “positive” and some “negative”. The labeling is of course not completely arbitrary; nothing in mathematics is, I think. It follows some rules, but the point is that once the vectors have been labeled in this way, it is obvious (and this one really is) that the root system splits into disjoint sets of positive and negative roots (of course $\alpha$ and $-\alpha$ can’t have the same label, and the root system is made up of such pairs).
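A concrete choice of ordering (my own illustration, not from the book): the lexicographic order on coordinates, under which a vector counts as “positive” when its first nonzero coordinate is positive. For the eight roots $(\pm 1, 0), (0, \pm 1), (\pm 1, \pm 1)$ of the type $B_2$ configuration, this splits the root system as:

```latex
\Pi = \{(1,0),\ (0,1),\ (1,1),\ (1,-1)\},
\qquad
-\Pi = \{(-1,0),\ (0,-1),\ (-1,-1),\ (-1,1)\} .
```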

From the set of positive roots, we can pick a set $\Delta$ of roots such that:

a) $\Delta$ is a vector space basis for the linear span of $\Phi$, and

b) every other root is a linear combination of these hand-picked roots, with the coefficients in the linear combination all having the same sign (i.e. label).

There is no reason to believe that a set satisfying such stringent requirements should exist, but in fact it does, and it can be found inside the positive system. It is called (by the ingenious title of) a simple system. The proof hinges on the interesting fact that the dot product of any pair of distinct roots in a simple system is at most 0. What this means is that the simple roots pairwise lie at obtuse (or right) angles. Choosing a root at an acute angle to the others would destroy the minimality of the set: such a root would be superfluous and can be discarded.
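A tiny illustration of my own (again the type $B_2$ configuration): with positive system $\Pi = \{(1,0), (0,1), (1,1), (1,-1)\}$, the set $\Delta = \{\alpha, \beta\}$ with $\alpha = (0,1)$ and $\beta = (1,-1)$ is a simple system:

```latex
(1,0) = \alpha + \beta, \qquad (1,1) = 2\alpha + \beta,
\qquad (\alpha, \beta) = -1 \le 0 .
```

Both positive roots outside $\Delta$ are combinations with nonnegative coefficients, and the two simple roots meet at an obtuse angle, as promised.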

Anagnorisis

I start the week knowing next to nothing about a topic. Come Friday, I can write a post about my (newfound) understanding of it. This is the happiness for which I chose to do a PhD: to experience this joy every day of my working life, and to be paid for it. And also in the hope that at the end of it all, I will know something that nobody knew before me.

And books, blessed books! Does anyone love you more than I do?

The world-wide web. I have written earlier about the kindness of unknown people: sharing their wisdom, giving their time and effort to put a little more good stuff on the web. It makes me believe in humanity again!

That brings me to the reason why I write here: most of all, to help me remember the new things I learn. But also in the hope that if I am wrong, some kind stranger who stumbles upon these (unlikely, I know) pages will let me know; and if I am not wrong, then some lost soul will find a wee bit of help.

NMinimize in Mathematica could drive you insane!

For the past week, I had been breaking my head over a piece of Mathematica code. I wanted to use NMinimize to minimize a function written as a module. Inside this module, I was solving a linear system of equations (sparse and overdetermined, but that’s not the point). The problem was that even at moderate dimensions, say 20×17, Mathematica ran out of memory and crashed on me.

I realized that the first invocation of my objective function by NMinimize was being done symbolically rather than numerically. Why the hell was it doing that? I read and re-read the documentation (what little of it there is) until I knew it by heart. I was just about to punch my computer when the good folks over at Stack Overflow told me what the problem was.

It turns out that NMinimize does not hold its arguments. This means that as the list of arguments is read from left to right, each argument is evaluated and replaced by the result of the evaluation. So for example, something like the following:

NMinimize[dummy[x], x]

would cause the evaluation of dummy with the symbol “x” as input, which is exactly what was happening in my case.

I changed the call to

NMinimize[Hold[dummy[x]], x]

and lo and behold, everything was well in my world again.
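For completeness, there is another idiom for this situation, one I have since seen recommended in Stack Overflow answers: write the objective so that its definition applies only to explicitly numeric arguments. The premature symbolic call then simply returns unevaluated and does no harm. A sketch, with dummy standing in for the real objective and a made-up quadratic body:

```mathematica
(* The _?NumericQ pattern restricts this definition to numeric
   arguments, so NMinimize's initial symbolic evaluation of
   dummy[x] simply leaves it unevaluated. *)
dummy[x_?NumericQ] := (x - 3)^2 + 1

NMinimize[dummy[x], x]
```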

So, what are the lessons for me in this totally forgettable experience?

#1. If a piece of software (which is not free) is driving you insane, the fault is most probably yours.

#2.  You may have worked through all the advanced notebooks your prof gave you while learning Mathematica, but it won’t hit you until it has hit you.

#3. The www makes the world a better place to live in.  Strive to contribute sense to this sea of sense and nonsense.


Learning (from) NLP

I have been studying natural language processing these days, and the feeling is, well, what’s the word I’m looking for? Exhilarating. Why is studying NLP exhilarating? It’s by no means easy, and that’s part of what makes it so much fun. But that’s not the point of this post.

The point is this: think about the language we speak. Think about its infinite richness, its poems and essays, its idioms and metaphors. Now set them aside for a moment, and think of the simplest possible sentence you can. For example: I wanna eat someplace that’s close to X. (The example is taken from Jurafsky and Martin.) If you were a computer, stripped of any knowledge about humanity, how could you be sure that the speaker wants to eat “in” a place near X, and not that he actually wants to eat a place near X? You couldn’t be.

So here’s the thing I have been thinking all this while: making computers understand our language in all its glory is a huge, huge undertaking. There are so many problems, so many stumbling blocks, so many buts, that if someone at the beginning of this task took cognizance of them all, he wouldn’t think it possible at all.

But that’s not how it is done. You don’t think about all the possible problems that could arise and try to design an answer for each of them. You take the tiniest step, make a barely perceptible dent in the problem, follow it with another blow, and then another, and keep going as long as you can. Someone will follow you, and eventually a tunnel will have been bored.

This has got nothing to do with NLP: it characterizes every human effort. People did this, that’s why we have electricity and we fly and sail and drive. That is why we have computers and laptops and iPads.

When I went to Switzerland, my tour guide told us the story of a tunnel to one of the highest mountain peaks in Europe. He said this tunnel was built without using any machines at all: entirely by hand! I don’t think it is true, but it made a good story.

I imagined the man breaking stones to make that tunnel and asked him, “Why do you want to build a tunnel in the middle of nowhere?”

“I want to go to the other side,” he said.

“And why do you want to do that?” I persisted.

“Because I want to, that’s all.”

I had to shut up then.

God must be laughing his head off whenever I say “never”. When I am tempted to say that, or its equivalents “No way in heaven can I do that” or “pigs will fly before that is possible”, I am going to take a moment and think about NLP and Machine Learning and GSM. It may help me go from “never” to “maybe”.