Friday, April 26, 2013

Time In Programming Languages

One of the most remarkable aspects of programming is the way time is modeled in our programs. In this post I try to explain how time is handled in functional and imperative languages.

What is a variable?

A variable in an imperative language is a named memory location. That is, in a statement like

                       int a = 37;      // in C, C++, and Java

'a' denotes a named memory location which can hold a value. That memory location gets updated each time we assign a new value to a. So if you say

                  a = a + 1;

a now holds the value 38.

(Note that in languages like python, variables are not typed but values are. So if you were to say a = [1, 2, 4] and later say a = 20 , the principle is still valid. That is, 'a' refers to a memory location which can be updated and accessed. The difference is:  a is not constrained to hold objects of a single type.)
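A quick sketch of this in Python (the variable name here is just for illustration):

```python
a = [1, 2, 4]            # the name 'a' is bound to a list
print(type(a).__name__)  # list
a = 20                   # the same name is rebound to an int
print(type(a).__name__)  # int
```

The name 'a' is not constrained to one type, but it is still an updatable reference, so the point about assignment and time applies unchanged.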

What does this have to do with time?

As it turns out, everything. If you do not have assignment in your programming language, your programming language becomes purely functional in nature. That is, it loses the ability to model time.

Let me illustrate that with an example from Lisp where assignment is discouraged.

     ;; computing the factorial of a number in Lisp
     (define (factorial n)
       (define (fact-iter fact n)            ;; inner function for making the process iterative
         (if (or (= n 0) (= n 1))
             fact                            ;; return fact when n becomes zero or one
             (fact-iter (* fact n) (- n 1))))  ;; recursive call
       (fact-iter 1 n))

In this factorial function, we have no variables to which anything is assigned. No assignment is necessary. If you were to write the factorial function in C or its descendants, it would look something like this:

int factorial(int n)
{
    int fact = 1;
    for (int i = 1; i <= n; i++)
        fact *= i;
    return fact;
}

Notice that there is assignment in almost every statement. 

What is a variable in a functional language?

In functional languages, a "variable" stands for a value. That is, you must stop thinking of a variable as a location in memory that holds a value. Instead, think of the variable as a "shorthand".

So instead of typing 3.14159 each time, you alias it by saying

(define Pi 3.14159)

in Lisp. There is no concept of updating the value of Pi to a different value "later on". Why? Because "later on" doesn't even make sense when you have no time.

It still isn't clear what I mean when I say time doesn't exist without assignment. So let me explain further: When you have assignment, you are updating a value in a memory location somewhere. So if you were to call a function with a variable as an argument, it will return a result. If you now update the variable's value and call the same function with the same variable passed in as argument, you now get a different result. That is, there are points in time when you get different results. And the reason you get different results is because you have assigned a different value. So time comes into play. There is a distinct concept of "result before assigning the new value" and "result after assigning the new value".
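A minimal Python sketch of that before/after distinction (the function and variable names are my own):

```python
x = 10

def double_x():
    # the result depends on the current value of x,
    # i.e. on *when* you call the function
    return x * 2

before = double_x()   # 20
x = 50                # assignment: this moment splits time into before and after
after = double_x()    # 100
print(before, after)
```

The same call, with nothing about the call itself changed, returns different results at different points in time, purely because of the assignment in between.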

What happens when you get rid of assignment?

If you have no assignment, it means that variables truly are the values they alias. So no matter how many times you call a function with a variable, you always get the same answer. If you want a different answer, you call the function with a different value. Note that "before" and "after" don't exist in this scenario.

Why is time or the lack of it important?

Well, if you don't have time in your language, then the programs you write will be mathematical in nature. They will be akin to mathematical functions like f(x) = x^2 or f(x, y) = x + y, which specify a distinct mapping. They exist "timelessly", which means there are no synchronization errors. Also, the order of substitutions doesn't matter. What do I mean by that? Well, consider the sequence of statements:
1. i = 1;
2. j = 2;
3. j = i +1;
4. i  =  j + 1;
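You can run both orders and see the difference directly (plain Python, for illustration):

```python
# statements 3 then 4, as written
i, j = 1, 2
j = i + 1              # j becomes 2
i = j + 1              # i becomes 3
as_written = (i, j)    # (3, 2)

# statements 4 then 3, interchanged
i, j = 1, 2
i = j + 1              # i becomes 3
j = i + 1              # j becomes 4
interchanged = (i, j)  # (3, 4)

print(as_written, interchanged)
```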

If I interchange statements 3 and 4, I get different values for i and j. So the order of substitution matters. However, in the factorial procedure written in Lisp, in the line:

(fact-iter (* fact n) (- n 1))

the order in which I substitute the value of n doesn't matter, because n is the same throughout. If n is, say, 5, the expression becomes:

(fact-iter (* fact 5) (- 5 1))

On the other hand, if you have time in your programming language, then the order of statements matters and your programs will have 'state'. As it turns out, having state in a programming language leads to some horrible things like worrying about synchronization when you have multiple threads or when you are running  parallel algorithms. And you get a lot of bugs if you update the wrong variable first. 

The advantage of course is that, you have the power to represent and model the time [which we observe in the real world] in your computation. There are some situations where modeling time is of immense importance: how would you model the transactions of a  bank account in a purely functional language for example? The time of transactions is tied to having the correct balance.

There seem to be situations that purely functional languages cannot handle. So even though Lisp is considered a functional language, it provides an assignment operator called set!. And Lispers are careful not to use it too often. The challenge is to retain as much functional nature as possible while admitting state into our programs.
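SICP models exactly this bank-account situation with a closure and set!. A rough Python equivalent of that idea (my own sketch, not the book's code) looks like:

```python
def make_account(balance):
    # 'balance' is the only piece of state; each withdrawal
    # happens at a point in time and changes what later calls see
    def withdraw(amount):
        nonlocal balance
        if amount > balance:
            return "insufficient funds"
        balance -= amount
        return balance
    return withdraw

acct = make_account(100)
print(acct(30))   # 70
print(acct(30))   # 40 -- the identical call gives a different answer: time is back
```

The moment the account remembers a balance between calls, "before the withdrawal" and "after the withdrawal" become meaningful again.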

Why did you write this post?

Nobody explained to me the consequences of having an assignment statement and how it relates to time. In fact, I had not even thought about it. Luckily, I read Structure and Interpretation of Computer Programs and watched the Abelson and Sussman videos which explained what the consequences of having an assignment statement in a language were. I hope readers of this post see assignment in a new light. And I am fascinated at how a simple thing like an assignment statement in a programming language can raise questions about a deep concept we call time. Perhaps things would be different if we all existed in some timeless eternal universe...

Saturday, April 13, 2013

Not quite at Home?

As companies try to control and compete for users' attention, value diminishes.

A couple of days ago, David Pogue reviewed Facebook's latest offering: Facebook Home for Android, an app that comes pre-installed on the HTC First and is downloadable for certain other Android phones.
He raised a very important and pertinent question:

What exactly is the point?

Mobile devices are experiencing a meteoric rise in usage. PC sales have dropped 14% this quarter. How do you compete when the screen is just 5 inches, or even 10 inches?
Google figured this out very nicely. It open-sourced the Android OS so that hardware manufacturers could use it to make smartphones, all of which would come pre-loaded with Google's apps: its search engine, voice search, Street View, and tons of others. Google thus created an ecosystem similar to Apple's. The genius lies in two things: 1) the ecosystem is so pervasive now that most people (nearly all) look at what apps are available for a phone before they buy it; 2) Google made this happen out of thin air, without the hassle of making its own handsets and without the cost and associated risks of entering the hardware market (the Motorola acquisition is relatively recent). But others were not so fortunate and fell behind. (Yahoo! comes to mind.)

Google gained users' attention through a process that can only be described as 'passive hypnosis'. This does not mean that Google is bad. I am merely appreciating the genius and the nonchalance with which it placed itself at the center of the smartphone market. Other companies have been slow to react. Facebook paid Instagram a billion dollars for a reason. Yahoo! acquired Summly for the same reason.

Yet, for all the hype created by the companies that ask you to download their app, the value provided to the user is actually diminishing. Websites were neat and pretty once. Now every webpage is a mammoth that hogs the network and carries a lot of surplus: stuff you don't actually want to look at. The same goes for smartphone apps: Google removed the ad-blocker, one of the most popular apps, so ads could be displayed in the apps you used. And every time I access Facebook through my mobile web browser, there is an ad (thinly disguised as a "suggested post") right in the middle of my news feed. Facebook's new Home for Android is its latest attempt to imitate Google, to become what Google has become on smartphones: the very core.

The early reviews suggest that users are not very happy with this new offering. As Pogue writes, "The ads are coming soon." That means whenever you wake your phone, there may be an ad waiting. Suppose I use my phone to find out what time it is: instead of just showing the clock widget on the home screen, I am (probably) going to be greeted with a nice offer to buy jewelry or movie tickets. Nobody would want to own such a phone. In fact, and I am really sticking my neck out here, it may increase the popularity of G+.

Users once controlled technology. You chose to call someone. You decided to text your friends. You decided which website was your homepage. That isn't the case anymore. All the stuff that's out there is thrown at you whether you like it or not, and you have to mine through it all to find what you are really looking for.

We should control technology. Users must be free to choose which sites they want to visit, what their wallpaper looks like, what their default search engine should be, and when and where they want to connect with their friends. Most importantly: the user should decide to log in to your website rather than you trying to log in to his/her life.

Monday, March 11, 2013

Distance from Philosophy

I was playing around with the HTML parsers available for Python. I wrote various little 'toy scripts' to scrape content from websites. While doing this absolutely pointless thing just for fun, I remembered something I had read in an xkcd mouseover text: if you start from any page on Wikipedia, click on the first non-underlined link, and keep doing this, you will eventually end up on Wikipedia's page for 'Philosophy'.

I didn't believe it at first, of course. It seemed absurd. So I decided to verify the claim and started picking random stuff totally unrelated to philosophy, like, say, cats, or apples, and began to follow the links from those pages. I was amazed when, in every single case, I ended up on the page for Philosophy.

As I was writing my toy scripts and playing around, I had an idea: measure "how far from philosophy everything is". It sounds crazy when you hear it. It's even weirder to type. (Having such thoughts is probably the weirdest of all.)

So I wrote a script which accepts a word from the user, goes to the corresponding Wikipedia page, starts following the first non-underlined links, and counts how many links it has to follow before it hits the page for Philosophy. I thought I'd share it with you, in case you want to take a break and amuse yourself:

from HTMLParser import HTMLParser
import httplib2

links = []

class MyHTMLParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        # collect links to other articles, skipping files and meta pages
        if tag == "a" and attrs and attrs[0][1].startswith("/wiki/") \
                and not attrs[0][1].startswith("/wiki/File") \
                and attrs[0][1].find("Wikipedia") == -1:
            links.append(attrs[0][1])

def countlinks(keyword):
    '''counts the number of pages which are visited when you
    begin with the 'keyword' on wikipedia and follow the first
    links until you reach the page for 'Philosophy' '''
    h = httplib2.Http()
    site = 'http://en.wikipedia.org/wiki/'
    parser = MyHTMLParser()
    count = 0
    keywords_seen = []
    global links
    url = site + keyword
    while keyword != "Philosophy":
        response, content = h.request(url)
        content = str(content)
        content = content.decode("utf-8").encode('ascii', 'ignore')
        start = content.find("<p>")   # wikipedia explanations begin at a <p> tag
        parser.feed(content[start:])
        keyword = links[0].replace("/wiki/", '')
        # skip links that lead back to pages we have already visited
        initial = 1
        while keyword in keywords_seen:
            keyword = links[initial].replace("/wiki/", '')
            initial += 1
        keywords_seen.append(keyword)
        links = []
        url = site + keyword
        print "visiting page about: ", keyword
        count += 1

    print "\n found Philosophy after ", count, " links\n"

if __name__ == "__main__":
    countlinks(raw_input("enter a word to start with: "))


Save the code to a file and run it with Python; then type the word you want to start with (e.g. dog, cat, Rolling Stones) and hit enter.

P.S.: The distance from 'dog' to Philosophy is 23 links.

Sunday, February 17, 2013

Lisp versus C/C++/Java

I was reading an interesting study that Peter Norvig has mentioned on his website, called "Comparing Java vs. C/C++. Efficiency Issues to Interpersonal Issues", which asked 38 programmers to implement versions of a program in C, C++, or Java. A follow-up study was conducted subsequently. I was curious to know how Lisp fared when compared with other languages. The results of the study showed that Lisp is comparable in its run-time efficiency to C and C++. The authors say, "this contradicts the conventional wisdom that Lisp is slow. Conventional wisdom is plain wrong." They also wonder why Lisp is not used as much as C and C++.

I started learning Lisp six months ago and I can now understand why Lisp is not used as much as C and C++. The programs that you write in Lisp look so different that people who have backgrounds in other programming languages wonder "what the heck is going on here?!" the first time they see a Lisp program. I have experienced that feeling myself when I started Lisping. The same is not true of programmers who are switching from C/C++ to Java or C++ to Python. Those languages have a definite "look and feel". The reason I think that is so is because they are all (mostly) imperative languages. Python is actually closer to Lisp than people realize but when you look at a Python program, you don't feel like you're looking at an ancient manuscript which modern day historians have failed to decipher. That is perhaps one of the reasons why people don't take the trouble to learn Lisp. I visited all the major bookstores in the city looking for Paul Graham's "ANSI Common Lisp". I never found it. Instead there were lots and lots of books about C, C++ ranging from "C++ for dummies" to "Professional C++".

Another reason may be that not a lot of teachers teach Lisp in colleges. And even when it is taught, only a tiny portion of the language is covered, and it is not given the rigorous treatment that languages like C and C++ receive. I have not seen any university insist that students implement a project of reasonable size and complexity in Lisp.

Different set of tools

Richard Feynman once said that the reason he succeeded in solving integrals while he was at MIT was that he had learnt how to differentiate under the integral sign in high school. It is a method for evaluating integrals that was not normally taught to students. So whenever any integral became too hard to solve, they would give it to Feynman, who would solve it and become a superhero in their eyes. In fact, he had a reputation for evaluating integrals at MIT. And the reason, he admitted, was that he had a different box of tools. He knew all the usual ways of solving integrals, the ways everyone knew, but he also had this extra tool in his mental tool set. Lisp provides you with something similar: a whole set of new tools, things you cannot find in other languages.

The most useful tool I have come across so far is the treatment of functions as first-class citizens. The way Lisp programmers think of data and functions is perhaps unique: they make no distinction at all. And no other "mainstream" language has yet succeeded in incorporating functions into data structures the way Lisp has. Also, there is a limit to the "closure" property that C and C++ try hard to achieve. In C/C++ you can have arrays of arrays, but after the first or second level of nesting the code becomes messy and tangled, and a small error in a pointer assignment could crash your whole program, or worse.
Java is no better. Though it is like a paradise for someone who has coded extensively in C++, because of its support for synchronization, threading, and automatic garbage collection, the language is too verbose. It also forces you into doing things the object-oriented way, so you have to crudely map situations that don't naturally resolve themselves into well-defined objects interacting with one another onto some UML diagram. And it has a very ugly way of dealing with multiple inheritance and fails to account for situations where object orientation is superfluous.
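The first-class-functions point can be seen even in Python: storing functions inside an ordinary data structure, which is routine in Lisp, looks like this (a small illustrative dispatch table of my own):

```python
import math

# functions sit in a dictionary like any other value
operations = {
    "square": lambda x: x * x,
    "sqrt":   math.sqrt,      # a library function, stored the same way
    "negate": lambda x: -x,
}

# pick and call an operation by name
print(operations["square"](7))   # 49
print(operations["negate"](5))   # -5
```

Nothing distinguishes the stored functions from stored numbers or strings; that uniformity is the tool Lisp hands you everywhere.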

Language addiction

I have observed something I have dubbed "language addiction" among programmers. Once they get really good at using a particular language, they are very reluctant to switch, and most who try end up going back to their original language. This happens because sticking to what you know is easy, and languages take time and effort, lots of effort, to master. Anyone can start writing programs in C++ if they have programmed in C, but there is a substantial difference between a seasoned C++ programmer and a novice who has not yet assimilated the object-oriented way of thinking.
The only way to gather new tools is by diving in and learning as many languages as you can. But you should be careful in choosing which languages to learn. (No Java programmer would learn C# willingly.) Learning languages that teach you to think in different 'paradigms' is better than learning languages simply because they are popular.

Sunday, January 27, 2013

Wanda The Fish in Ubuntu Unity (Precise 12.04)

When I upgraded my Ubuntu OS to 12.04, I was really happy with the way things were. The new interface is a huge improvement over the old GNOME environment. And the launcher is such a brilliant idea it makes you wonder why it took so long for someone to come up with it. But I was missing a rather fun little applet: Wanda The Fish was gone. So I decided to do what all Linux users do and write it myself.

I have used GTK+ with Python to access the fortune database and display fortunes. (I didn't like the Indicator-fish port because of the absence of a "Speak Again!" button.) The code for resurrecting good old Wanda is hardly a page long. Here it is.


#!/usr/bin/env python
from gi.repository import Gtk
import os

class Wanda(Gtk.Window):

    label = None

    def __init__(self):
        Gtk.Window.__init__(self, title="Wanda The Fish Says...")
        self.label = Gtk.Label()
        self.display_fortune()                    # first fortune when it starts up
        table = Gtk.Table(2, 2, True)

        button1 = Gtk.Button(label="Speak Again!")
        button1.connect("clicked", self.on_button1_clicked)

        button2 = Gtk.Button(label="Enough Wisdom for Now!")
        button2.connect("clicked", self.on_button2_clicked)

        table.attach(self.label, 0, 2, 0, 1)
        table.attach(button2, 1, 2, 1, 2)
        table.attach(button1, 0, 1, 1, 2)
        self.add(table)

    def on_button1_clicked(self, widget):
        self.display_fortune()

    def on_button2_clicked(self, widget):
        Gtk.main_quit()

    def display_fortune(self):
        # run fortune and show its output in the label
        os.system("fortune -a > .last.txt")
        with open(".last.txt") as f:
            text = str(f.read())
        self.label.set_text(text)

win = Wanda()
win.connect("delete-event", Gtk.main_quit)
win.show_all()
Gtk.main()

You could make it a startup application and then lock it to the launcher. Wanda will continue to swim on our desktops peacefully for many years to come.
For complete instructions follow this link to github:

Wednesday, January 16, 2013

Why social ads may take off on Graph Search

Facebook fired a missile directly at Google this week by launching what it calls Graph Search. In essence, Graph Search is a version of search that can take arbitrary queries and process them in a 'social context', much as Google processes your search query to find relevant web pages. The key difference is that the results you get through Graph Search come not from the web, but from your Facebook connections.

For example, you can search for 'friends who haven't watched Life of Pi'. Graph Search may look through your friends' status updates, filter those who have made a status update or commented on the movie (a possible indicator that they have seen it), and give you a list of those it thinks haven't seen it, whom you could then invite to watch it with you. That's just an example of what Mark Zuckerberg says Graph Search can do, according to this Wired article. Now, I hardly know any friends who click on the ads they see on Facebook pages. How many times have you clicked on an ad on your profile?

'Social ads' have not performed well on Facebook, at least not till now. I explored a cause for this on my blog before. Logging in to your Facebook account, seeing who is online, and connecting with them is like being at a party, virtually. No guest at a party will notice the huge advertising billboard right across the street. Instead, they concentrate on talking to people, meeting new people, and catching up on stuff. That's exactly the kind of atmosphere that exists on a typical Facebook user's homepage. He hardly ever notices the ads, so clicking them is probably out of the question, and if he does click, it's probably by accident.

All that could change with Graph Search. Just consider the scenario I described earlier. If you could see all the theaters near you which are playing 'Life of Pi', along with show timings and ticket prices (as ads), alongside friends who haven't seen it yet, you could find out which theater is most convenient for your friends and yourself and book tickets online right away, WITHOUT using Google to find the movie and theater information.

That's the key point. Google now receives one less query thanks to Graph Search. Multiply that one query by millions of Facebook users and you have a substantial number of queries that never reach the search giant. So Facebook may in fact have an edge over Google here. Social ads may propel Facebook's revenues and, with them, its market value.

Of course, the relevance of social ads needs to match Google's or there won't be any takers, and Google is a veteran in that field. So there are hurdles that Facebook must cross before it can (if ever) overtake (take over?) Google. But Facebook seems to be on the right course. If it can even partially cut into Google's search share, it means billions more in ad revenue for the social network.

We'll just have to wait and see what Google has to counter this missile launch. I'm betting on its Android products, Google Glass, and the rising popularity of Google+. Whatever the end result is, it's far more interesting to watch companies outdo each other through innovation than to watch one company take over the world and accept its monopoly.