Archive for the ‘Opensource’ Category

Portable Meld on Windows

February 12, 2011

Often I need to compare and merge files or folders to see the changes between them. Gladly, many applications exist for such a task. One option is the diff command line tool, but sadly it is not visually pleasant. Therefore, several visual tools exist to ease side-by-side comparison.

A detailed comparison between these tools and others can be found here. My personal favorite is meld (especially for folder comparison). On a Linux system it is quite easy to set up, especially if it is deb based, because there is a package for it. A problem shows up when wanting to use meld on Windows. After some googling I found this guide showing how to install it for Windows. Using this guide I managed to get it to work, but I prefer a portable installation that I can take anywhere.

Following are the steps I followed to get a portable version of meld up and running. Note that performing the steps requires administrative rights, but the result can run anywhere.

  1. Create a directory to contain the meld binaries and their dependencies. Let us refer to it as <meld install dir>
  2. Download Portable Python 2.6, and install it under <meld install dir>\Python26
  3. Add the portable Python installation to the registry. This is done by opening the portable Python interpreter and running the script from here
  4. Download the GTK+ 2.2 runtime all-in-one bundle from here. Extract it to <meld install dir>\gtk+
  5. Download and install the latest PyGTK libraries. These are the PyGTK, PyCairo, and PyGObject modules. They will install to the correct Python based on the registry entry from step 3.
  6. Download meld 1.5 from here. Extract it to <meld install dir>\meld-src
  7. Create a file “<meld install dir>\start-meld.bat”. The content of this file is (note that the original PATH is appended so system commands keep working):
    @echo off
    set PATH=%CD%\Python26\App;%CD%\gtk+\bin;%PATH%
    python meld-src\bin\meld

And that is it, run start-meld.bat and enjoy 🙂

Layar POI service using Google App Engine

April 4, 2010

Layar is a cool handheld augmented reality application that allows one to overlay “layers” on the image seen by a handheld device’s camera. One can think of these layers as content that is shown based on your current location. This allows overlaying digital data on actual live imagery.

The Layar API requires that the source of data (Points Of Interest) be a RESTful web service that receives an HTTP GET request and responds with a JSON object. The details of the GET parameters and the required JSON object can be found here in the Layar API documentation.

To create a layer, one needs to provide such a service that serves the Points Of Interest (POI). Such a service can easily be written and hosted on Google App Engine. This article discusses how to do so using the Google App Engine Python SDK, and depends heavily on the getting started guide for Google App Engine with Python. Sections 1-4 and 6 are sufficient for understanding the coming content.

Handling Requests

Since requests from Layar to the POI web service come in the form of GET requests, the parameters can simply be accessed in the request handling method through webapp's self.request.get('name').
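As a sketch of the idea: the stand-in request object below is only there to make the snippet runnable outside the App Engine SDK; webapp's real request object exposes the same get method, and the parameter names follow Layar's getPOIs request.

```python
# Sketch: reading Layar's GET parameters (lat, lon, radius) the way a
# webapp.RequestHandler would, via request.get('name').
# FakeRequest is a stand-in so the snippet runs outside the App Engine SDK.
class FakeRequest(object):
    def __init__(self, params):
        self._params = params

    def get(self, name, default=''):
        return self._params.get(name, default)

def read_layar_params(request):
    # inside POIHandler.get() this would be self.request.get(...)
    lat = request.get('lat')
    lon = request.get('lon')
    radius = request.get('radius')
    return lat, lon, radius

request = FakeRequest({'lat': '29986707', 'lon': '31438864', 'radius': '500'})
print(read_layar_params(request))
```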


POI Response

The POI response that Layar expects is a JSON object. The API places many restrictions on the response. The first concerns the content type of the HTTP response, which has to be set as follows:

self.response.headers['Content-type'] = 'text/javascript; charset=utf-8'

Another restriction is in the JSON object returned as a response, where almost all the fields are required by Layar. To make things simpler, the response is first represented as a Python object, which is later converted to JSON. The JSON object of the response could be represented as follows (as a minimum):

{'layer':'layer name', 'hotspots':list_of_POI, 'errorCode':0, 'errorString':'ok'}

Where hotspots is a list of POI objects. These objects could be represented as:

class POI:
    def __init__(self, poi_id, title, lat, lon):
        self.actions = []
        self.id = poi_id
        self.imageURL = None
        self.lat = lat
        self.lon = lon
        self.distance = None
        self.title = title
        self.line2 = None
        self.line3 = None
        self.line4 = None
        self.attribution = ""
        self.type = 0
        self.dimension = 1
        self.transform = {'rel': True, 'angle': 0, 'scale': 1.0}
        self.object = {'baseURL': ""}

From python to JSON

As Python follows a batteries-included strategy, there are libraries that convert Python dictionaries to JSON objects. Even though Google App Engine uses Python, the “json” module is not available. Thankfully, an equivalent one is available through “from django.utils import simplejson”. It can be used to convert a Python dictionary (that looks frightfully like a JSON object) to a JSON string as follows:

simplejson.dumps({'layer':'guc', 'hotspots':poi_list, 'errorCode':0, 'errorString':'ok'})

The problem with simplejson is that it only takes dictionaries or lists. This poses a problem when using the POI class mentioned earlier. Again, Python comes to the rescue: a dictionary representing the object can be obtained as follows:

poi = POI('C1','C1',29986707,31438864)
poiDictionary = poi.__dict__
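Putting the two steps together, a minimal sketch (using the standard json module here in place of simplejson, and a trimmed POI with only a few fields):

```python
import json

class POI(object):
    # trimmed version of the POI class above, for illustration only
    def __init__(self, poi_id, title, lat, lon):
        self.id = poi_id
        self.title = title
        self.lat = lat
        self.lon = lon

poi = POI('C1', 'C1', 29986707, 31438864)
# __dict__ yields a plain dictionary, which the JSON encoder accepts
print(json.dumps(poi.__dict__, sort_keys=True))
```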

Putting the code together

Now that we have the POI object and the conversion mechanism, we are ready to write a request handler for Layar requests. An example with static POIs is as follows:

from django.utils import simplejson
from poi import POI
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class POIHandler(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-type'] = 'text/javascript; charset=utf-8'
        # latitude and longitude are integers that will be divided by 10^6,
        # so take care of accuracy after the division
        poi1 = POI('C1','C1',29986707,31438864).__dict__
        poi2 = POI('C2','C2',29986744,31439272).__dict__
        poi3 = POI('C3','C3',29986995,31438923).__dict__
        poi4 = POI('C4','C4',29987153,31439245).__dict__
        poi5 = POI('C5','C5',29986326,31438810).__dict__
        poi6 = POI('C6','C6',29986688,31438569).__dict__
        poi7 = POI('C7','C7',29986442,31438370).__dict__
        pois = [poi1,poi2,poi3,poi4,poi5,poi6,poi7]
        # final getPOI response dictionary
        d = {'layer':'guc', 'hotspots':pois, 'errorCode':0, 'errorString':'ok'}
        self.response.out.write(simplejson.dumps(d))

# the URL path below is illustrative; it must match your app.yaml mapping
application = webapp.WSGIApplication([('/getpoi', POIHandler)], debug=True)

def main():
    run_wsgi_app(application)

if __name__=='__main__':
    main()


With the new Layar 4 API coming soon, and the new features being in beta 2, an update on how to add one of my favorite new features is needed: having actions on the entire layer. It can easily be done by adding an actions attribute to the final getPOI response dictionary (see code above). An example of the addition is as follows:

d['actions'] = [{'uri':'','label':'action on the entire layar'}]
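Serialized, the response then carries the layer-level action next to the hotspots. A sketch (the standard json module stands in for simplejson, and the URI is a made-up placeholder):

```python
import json

d = {'layer': 'guc', 'hotspots': [], 'errorCode': 0, 'errorString': 'ok'}
# the label shows up in the Layar client as an action on the whole layer;
# the URI here is a hypothetical placeholder
d['actions'] = [{'uri': 'http://example.com/feedback', 'label': 'action on the entire layar'}]
print(json.dumps(d, sort_keys=True))
```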

SVN post-commit hook cronjob

April 4, 2010

Few SVN hosting sites provide SSH access to their SVN servers. Such access is needed to create hooks, like a post-commit hook that sends an email with every commit. Such hooks are often required by development teams. This article discusses the creation of a cron job that polls the SVN repository to simulate a post-commit hook that sends an email with every commit.

Setting up sendEmail

sendEmail is a nice tool for sending emails from the command line. It is needed so the cron job can script the sending of an email. On Ubuntu, install it through apt-get:

sudo apt-get install sendemail libio-socket-ssl-perl libcrypt-ssleay-perl

The latter two packages are needed for connecting to an SMTP server that uses a secure connection (like Gmail, which is used in the example).
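As a quick smoke test, a one-off invocation can be run by hand before wiring it into the script (the addresses and password are placeholders; Gmail's SMTP server listens on smtp.gmail.com:587 with TLS):

```shell
# Hypothetical one-off test mail through Gmail; replace the addresses
# and password with your own before running.
sendEmail -f me@gmail.com -t team@example.com \
  -u "cron hook test" -m "hello from the commit hook" \
  -s smtp.gmail.com:587 -o tls=yes \
  -xu me@gmail.com -xp PASSWORD
```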

The post-commit hook script

The following is a Python script that can be added as an entry in the cron table:

#! /usr/bin/env python

import os

log_file = LOG_DIR
svn_log_separator = '------------------------------------------------------------------------'
svn_dir = SVN_DIR
mail_filter = '[special mail identifier in subject]'
message_file = '/tmp/commit_mail'

mail_from = MAIL_FROM
mail_to = MAIL_TO

smtp_server = SMTP_SERVER
smtp_user = SMTP_USER
smtp_pass = SMTP_PASS

def send_mail(subject, mail):
    # write the mail body to a temporary file and hand it to sendEmail
    mail_file = open(message_file, 'w')
    mail_file.write(mail)
    mail_file.close()
    mail_command = 'sendEmail -f %s -t %s -u "%s" -s %s -xu "%s" -xp "%s" -o message-file="%s"' % (mail_from, mail_to, subject, smtp_server, smtp_user, smtp_pass, message_file)
    print mail_command
    os.system(mail_command)

print "analyzing old log file"

old_log_f = open(log_file)
old_log = old_log_f.read().split(svn_log_separator)
old_log_f.close()

print "updating svn"
os.system("/usr/local/bin/svn update %s" % svn_dir)
os.system("/usr/local/bin/svn log %s > %s" % (svn_dir, log_file))
print "got new log"

new_log_f = open(log_file)
new_log = new_log_f.read().split(svn_log_separator)
new_log_f.close()

# commits present in the new log but not in the old one
delta = set(new_log) - set(old_log)

for commit in delta:
    l = [x for x in commit.split('\n') if x]
    if not l:
        continue
    details = l[0].split('|')
    subject = "%s %s %s" % (mail_filter, details[0], details[1])
    f_diff = os.popen('/usr/local/bin/svn diff -c %s %s' % (details[0], svn_dir))
    diff = f_diff.read()
    mail = "%s\n%s" % (commit, diff)
    send_mail(subject, mail)
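The delta trick above works because svn log separates entries with a fixed dashed line; splitting the log on it and taking the set difference of the two runs leaves exactly the new commits. A minimal illustration:

```python
# Each svn log entry sits between separator lines of 72 dashes.
SEP = '-' * 72
old_log = "%s\nr1 | alice | ...\nfirst commit\n%s\n" % (SEP, SEP)
new_log = "%s\nr2 | bob | ...\nsecond commit\n%s\nr1 | alice | ...\nfirst commit\n%s\n" % (SEP, SEP, SEP)

# set difference keeps only the entries that appeared since the last run
delta = set(new_log.split(SEP)) - set(old_log.split(SEP))
print(delta)
```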

Prerequisites before running the script

      Having a working copy of the SVN repository
      Running the following command once before the script:

      svn log > $PATH_TO_LOG_FILE

Open source Projects, The Life Cycle

November 21, 2009

Recently, I have been involved in the development of an open source project that is currently undergoing major modifications. Seeing this change happen before my eyes got me thinking: how do open source projects evolve, and how did existing well-matured (old) projects reach their current state?

I feel that most open source projects start with a crazy idea, usually driven by some sort of need (the itch, as it is called in The Cathedral and the Bazaar). But when one thinks about it, not many people have the guts to go out in the open to satisfy that drive. Those who do mostly aim to get an up-and-running prototype that shows the idea to the community. This is mainly to get people in the community excited, and to get people who had a similar itch involved.

To reach that initial prototype, the working team is often small, as the community knows nothing yet. The lack of an outsider perspective might limit the developers to their own coding style, and the drive to get a prototype out might lose focus on certain coding aspects.

After the prototype is out, the effects of the bazaar model are seen: bugs are fixed, code is optimized, improved, and documented. At this stage, the project takes its first step to maturity.

The changes over time can range from the minor to the drastic. The main issue for developers wanting to get involved with an evolving project, whether it changes rapidly or slowly, is that patience is needed, because it takes time to obtain a deep understanding of the code. Furthermore, patience is needed when a project is changing rapidly, as changes often end up in code breaks; keep in mind they will eventually be fixed (possibly by you). Such an aspect also requires high flexibility.

After these many iterations by the community, the project starts to stabilize into a state of maturity. Usually this is called the 1.0 release. After that stage, progress starts to slow down, and focus shifts to new features. The main code base is rarely changed except in major subsequent releases, or for core feature additions that require it due to the structure of the code.

“Bug Free Zone”

November 2, 2009

As you can see from the title of my post, it’s all about bugs. But not bugs as in insects, rather computer bugs/problems. In the open source development course, we were introduced to the Mozilla development ecosystem.

This introduction was done via Thunderbird, the Mozilla email client. The first step was to compile a debug build of Thunderbird from source. Instructions for that can be found here. We hit several bumps along the way, but managed to get past them.

After the successful compilation, I had to start tackling the bug famous in the open source educational community: the Thunderbird auto-linkification bug. This bug usually exists in Thunderbird, and is mainly found on the test Bugzilla.

Thunderbird Autolinkification Bug

To start fixing that bug, a search of the source code is needed. Since Thunderbird is a Mozilla project, they offer an online portal for searching their code using MXR. This tool searches the content of the repository. After some searching, the best query to find the file related to this problem was “mailto”. This is because the bug is related to the mail rendering component, when looking at it from a purely user’s point of view.

Upon finding the correct file, the hacking started. Once that was done, I generated a patch of the file I changed from inside the comm-central/mozilla folder.

Thunderbird Bug Fixed

Thunderbird Bug

Once the results of my fixing looked promising, I decided to submit a bug report on the test Bugzilla containing a patch. But before the submission, I decided to test my patch first with an automated reviewer. To my surprise, it showed me problems that would never have crossed my mind.


JST Review


Passing the automated reviewer gives one a boost of confidence, even if it responds with the message “Congratulations! I found no coding problems in your patch. That does not mean that it is ready for check in though… you still need to pass something more intelligent than a script. :-)”.

Now it was time to submit the bug, attach the patch, and request review. This was amazingly simple: just filling a form, being very detailed, and attaching the patch file. After the submission of the bug report, I requested reviews from 2 parties, and I am still waiting for a response.

On a final note, I found this exercise and experience very entertaining and thrilling. The rush of hacking the code and submitting it was a new feeling for me. This test experience drove me to submit an actual bug related to Bespin’s new refactoring code. Hopefully I get a good review on it too, and possibly a commit :D.

Opensource Needs Open Internet

October 26, 2009

When installing open source software, dependencies are usually downloaded over the internet via build automation scripts. With this in mind, when trying to install software in a location with restricted internet access, downloading dependencies is not possible, and thus installing the software is challenging.

This was seen while trying to install Bespin locally. Under restricted access, it was not possible. With open internet, it took 3 minutes.

The ubiquitous jQuery, the nouns, and the documentation

October 19, 2009

With the German University in Cairo Open Source Community (g-osc) starting up alongside the open source course, we were assigned to write Ubiquity commands for our community to use. This would make it easier for users to access different functionalities. I chose to build a command that would submit issues to the g-osc issue tracker.

As I had some experience building very simple commands for my own day-to-day internet surfing, I decided to focus on this command a bit more to see the full potential of Ubiquity. The aim was to provide the means of submitting an issue in the most natural way. As I went through the details of the Ubiquity tutorial, I figured that a natural approach for defining how the command will be used is as follows:

gosc post issue {issue title} to {user issue will be assigned to} as {type of issue}

Such a formulation is similar to how it would be written in normal English.

The first design question was: how will the issue submitter be known? This is automatically figured out by MediaWiki through the browser session. In other words, if a user is already logged in, then MediaWiki knows this information on its own. With that issue out of the way, there was no need to write specific login functions.

The next step was to reverse engineer the issue posting form, so that AJAX would be able to send the information as if a user were filling in the form manually. After obtaining the form information, the next obstacle was actually sending the data. This was a straightforward task, due to the inclusion of the jQuery JavaScript library within Ubiquity. This is a very strong feature, due to the strength of jQuery.

With that out of the way, the command was able to submit issues to the tracker. The next step was to be able to format the command naturally, and to provide auto-completion for the available users and types of issues. This was done using Ubiquity nouns. These tell Ubiquity that a certain argument has a specific format. The idea is the same as auto-completion, where the specific format is a specific set of words. For both the users and the issue types, this data has to be dynamically loaded. Again, jQuery was used to load the form page and parse it to obtain the data inside the combo boxes.

A problem that comes up with auto-completion and loading dynamic data is the need to do the AJAX call with every keystroke. This is very expensive. The first solution that came to my mind was to cache the data, and refresh it every time the Ubiquity command line is loaded. Sadly, as I mentioned in a previous post, the Ubiquity documentation is not very accurate, so this cache technique was not successful, as the function for handling the loading was never called. The alternative solution was to cache the content only once.

With the auto-completion working well and a preview of the possible users present, the command was complete. The complete code for the issue submission command can be found here.

Software versions, documentation, and the PAIN

October 12, 2009

When working on a piece of software using a framework or an API, the first source of help to rely on is the official documentation. As development on that code base continues, so should the documentation, changing accordingly. Sadly, this is not always the case!

At the time of this post, the latest stable release of the Ubiquity project is version 0.54, and the version pulled from Mercurial is 0.55pre8. I was trying to write an enhancement to provide feedback to a command’s author. My testing environment had the latest stable version, and as the Ubiquity wiki is under 0.5, I assumed it matched my currently running version. At the end of the wiki, a link is provided to the documentation of the CmdUtils library, containing the command CmdUtils.getCommand(id).

Such a command is very helpful, as it returns the object representing the command matching the given id. Thus, details about the command’s author can be obtained, and hence feedback can be sent to them. Being very excited about this, I gave it a test, and to my shock nothing worked. To my even bigger surprise, Firebug, which is used for debugging Ubiquity commands, showed that CmdUtils.getCommand is not a function.

This caught my curiosity, so I opened the Ubiquity extension’s installation folder and took a look at the “modules/cmdutils.js” file. Unsurprisingly, the file did not contain the CmdUtils.getCommand function.

At the same time, Abdallah El Guindy was working on the same enhancement and reached the same conclusion, but had the function correctly working. When I asked about his running Ubiquity version, it was the latest development version, 0.55pre8. After getting and installing that version of the code, the “modules/cmdutils.js” file was the one matching the current online documentation.

Having the correct version number attached to the online documentation would have saved development time and confusion. Such a problem is not restricted to the Ubiquity project, but is also seen in other frameworks and APIs. The main lesson learned here is that one has to be very careful during development, and not give up easily on finding the root of the problem.

On a small side note, this same problem with Ubiquity 0.54 also exists in 0.55pre7 (the current latest beta). So CmdUtils.getCommand(id) is a brand new addition to the code base.
