## The Recurring Beauty

December 27, 2009

Recently, I started reading one of the essential books for computer scientists, namely Concrete Mathematics. After the first chapter, it really got me thinking “Oh My!! Recurrences could solve anything.” This, of course, does not match the thrill of the challenge of finding recurring patterns in a structure.

Throughout my college years, courses tended to step away from recurrences. In fact, in many cases it is the easier approach to tackle a problem. Furthermore, it could in some cases be the more efficient approach. This reminds me of many algorithm techniques that largely depend on finding substructures or small solutions that are used for generalizations. The most vivid example comes to me in the form of memoization in dynamic programming (DP). For me, the cornerstone of DP is finding the “magical” recurrence. Thus, from super slow, to super fast (lots of memory use though :D).
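
The idea can be sketched with the classic Fibonacci recurrence (a generic illustration in JavaScript, not tied to any particular problem from the book):

```javascript
// Naive recurrence: recomputes the same subproblems over and over,
// so the running time blows up exponentially.
function fibSlow(n) {
  if (n < 2) return n;
  return fibSlow(n - 1) + fibSlow(n - 2);
}

// Memoized version: each subproblem is solved once and cached,
// turning the exponential recurrence into a linear-time one.
var fibCache = {};
function fibFast(n) {
  if (n < 2) return n;
  if (fibCache[n] === undefined) {
    fibCache[n] = fibFast(n - 1) + fibFast(n - 2);
  }
  return fibCache[n];
}

console.log(fibFast(50)); // 12586269025 -- fibSlow(50) would take ages
```

The recurrence itself is untouched; only the cache turns the slow solution into the fast one, at the cost of extra memory.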

Other examples of how recursive thinking makes complex problems elegantly simple can be seen in graph theory. Just by examining some properties of nodes, arcs, and degrees, huge complexities boil down to elegantly beautiful properties that are easily generalizable.

Such beauty is easiest to see when recurrences end up producing something visually breathtaking. Fractals are a stunning example of how simple math with recursion leads to amazing imagery.
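
As an illustrative sketch (my own example, in JavaScript), the Sierpinski triangle boils down to one recursive rule: subdivide a triangle into its three corner triangles. Drawing the returned triangles, e.g. on an HTML5 canvas, yields the fractal image:

```javascript
// Midpoint of two 2D points.
function midpoint(a, b) {
  return [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2];
}

// Recursively subdivide triangle (a, b, c) into its three corner
// triangles, as in the Sierpinski fractal. Returns the list of
// triangles remaining at the given depth.
function sierpinski(a, b, c, depth) {
  if (depth === 0) return [[a, b, c]];
  var ab = midpoint(a, b);
  var bc = midpoint(b, c);
  var ca = midpoint(c, a);
  return sierpinski(a, ab, ca, depth - 1)
    .concat(sierpinski(ab, b, bc, depth - 1))
    .concat(sierpinski(ca, bc, c, depth - 1));
}

var tris = sierpinski([0, 0], [1, 0], [0.5, 1], 5);
console.log(tris.length); // 243, i.e. 3^5
```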

## Two Columned LaTeX Presentation Slides

December 7, 2009

Coming from a PowerPoint background of creating presentation slides, I am often tempted to use a two-column approach when laying out the content of my slides. This is where text is shown in the left column and related pictures are placed in the right.

After I decided to move all my document creation work to LaTeX, I started to create my presentations that way too. Such a feat is made possible using the beamer document class, whose user guide could be found here.

The environment for a single slide is the “frame” environment. It is used as follows:

\begin{frame}{frame title}

\end{frame}


To be able to split the frame (slide) into two columns, the “columns” environment is placed inside the slide content. It is used as follows:

\begin{columns}
\column{column_width} %first column
column 1 content
\column{column_width} %second column
column 2 content
\column{column_width} %third column
column 3 content
...
\end{columns}


It is best to specify the column_width in terms of the text width. So for example:

\column{0.5\textwidth} %half slide's text width


An example of a two columned slide looks as follows:

\documentclass{beamer}
\mode<presentation>
{
\usetheme[width=2cm]{Hannover}
\setbeamercovered{transparent}
}
\title[Presentation]{Presentation}
\author[Saher Mohamed El-Neklawy]{Saher Mohamed El-Neklawy}

\begin{document}
\begin{frame}{Frame Title}
\begin{columns}
\column{0.5\textwidth}
\begin{itemize}
\item Item 1
\item Item 2
\item Item 3
\end{itemize}
\column{0.5\textwidth}
\includegraphics[width=\textwidth]{i1.eps}

\includegraphics[width=\textwidth]{i2.eps}
\end{columns}
\end{frame}
\end{document}


Two Columned Slide

The next step is to sequence the appearance of the list items along with the images. In the beamer class, this could be done using the “<n>” overlay specification, following a certain component in the frame. The “n” represents the transition at which the component appears in the frame. For example:

\begin{itemize}
\item<2> Item 1 % seen after 1 transition
\item<3> Item 2 % seen after 2 transition
\item<4> Item 3 % seen after 3 transition
\end{itemize}


In a sense, the “<n>” specification splits a frame into n slides, placing the respective component in the nth slide of that frame. For more advanced uses of “<n>”, check out the beamer class user guide.

To do the same for images, the “\visible” or “\only” macros could be used along with the “<n>” specification.

Using “\visible”, the slides would now look like:

\documentclass{beamer}
\mode<presentation>
{
\usetheme[width=2cm]{Hannover}
\setbeamercovered{transparent}
}
\title[Presentation]{Presentation}
\author[Saher Mohamed El-Neklawy]{Saher Mohamed El-Neklawy}

\begin{document}
\begin{frame}{Frame Title}
\begin{columns}
\column{0.5\textwidth}
\begin{itemize}
\item<2> Item 1
\item<3> Item 2
\item<4> Item 3
\end{itemize}
\column{0.5\textwidth}
\visible<2>
{
\includegraphics[width=\textwidth]{i1.eps}
}
\visible<3>{
\includegraphics[width=\textwidth]{i2.eps}
}
\end{columns}
\end{frame}
\end{document}


Sequenced Two Columned 1

Sequenced Two Columned 2

The problem with “\visible” is that it keeps the place of the component from the previous transition reserved in the next; the component is just not seen. The fix to this issue is the use of “\only”, where the space for components is not reserved across transitions. This makes the slides look as follows:

\documentclass{beamer}
\mode<presentation>
{
\usetheme[width=2cm]{Hannover}
\setbeamercovered{transparent}
}
\title[Presentation]{Presentation}
\author[Saher Mohamed El-Neklawy]{Saher Mohamed El-Neklawy}

\begin{document}
\begin{frame}{Frame Title}
\begin{columns}
\column{0.5\textwidth}
\begin{itemize}
\item<2> Item 1
\item<3> Item 2
\item<4> Item 3
\end{itemize}
\column{0.5\textwidth}
\only<2>
{
\includegraphics[width=\textwidth]{i1.eps}
}
\only<3>{
\includegraphics[width=\textwidth]{i2.eps}
}
\end{columns}
\end{frame}
\end{document}


Two column slide using only macro 1

Two column slide using only macro 2

When taking a closer look at the output pdf, a slight change in the position of the bulleted list can be observed. This is most visible between pages 3 and 4 of the pdf. It is due to the nature of “\only”, as it does not reserve the position of a component across transitions of the frame. Thus, the location of the bulleted list shifts when transitioning from a slide that has an image to one without, and vice versa.

This problem could be fixed by going back to the “\visible” macro, as it reserves the space for components. The difference this time is that the location of the image in the slide is forced. This could be done using the “picture” environment as follows:

\begin{picture}(0,0)(x,y)
\put(0,0){\includegraphics[width=\textwidth]{i1.eps}}
\end{picture}


Take care that the x and y values are measured from the top right corner of the column.

The final code results in this pdf, and looks as follows:

\documentclass{beamer}
\mode<presentation>
{
\usetheme[width=2cm]{Hannover}
\setbeamercovered{transparent}
}
\title[Presentation]{Presentation}
\author[Saher Mohamed El-Neklawy]{Saher Mohamed El-Neklawy}

\begin{document}
\begin{frame}{Frame Title}
\begin{columns}
\column{0.5\textwidth}
\begin{itemize}
\item<2> Item 1
\item<3> Item 2
\item<4> Item 3
\end{itemize}
\column{0.5\textwidth}
\visible<2>
{
\begin{picture}(0,0)(40,50)
\put(0,0){\includegraphics[width=\textwidth]{i1.eps}}
\end{picture}
}
\visible<3>
{
\begin{picture}(0,0)(50,50)
\put(0,0){\includegraphics[width=\textwidth]{i2.eps}}
\end{picture}
}
\end{columns}
\end{frame}
\end{document}


## Open source Projects, The Life Cycle

November 21, 2009

Recently, I have been involved in the development of an open source project that is currently under major modifications. Seeing this change happen before my eyes got me thinking: how do open source projects evolve, and how did existing, well-matured (old) projects reach their current state?

I feel that most open source projects start with a crazy idea, usually driven by some sort of need (the itch, as The Cathedral and the Bazaar calls it). But when one thinks about it, not many people have the guts to go out in the open to satisfy that drive. Those who do mostly aim to get an up-and-running prototype that shows the idea to the community. This is mainly to get people in the community excited, and to get people who had a similar itch involved.

To reach that initial prototype, the working team is often small, as the community knows nothing yet. The lack of an outsider perspective might limit the developers to their own coding style, and the drive to get a prototype out might cause them to lose focus on certain coding aspects.

After the prototype is out, the effects of the bazaar model are seen: bugs are fixed, and code is optimized, improved, and documented. At this stage, the project takes its first step to maturity.

The changes over time could range from the minor to the drastic. The main issue for developers wanting to get involved with an evolving project, whether it changes rapidly or slowly, is that patience is needed. This is due to the time it takes to obtain a deep understanding of the code. Furthermore, patience is needed when a project is changing rapidly, as such changes often end up breaking code; but keep in mind they will eventually be fixed (possibly by you). Such an aspect also requires high flexibility.

After these many iterations by the community, the project starts to stabilize into a state of maturity. Usually this is called the 1.0 release. After that stage, progress starts to slow down, and focus shifts to new features. The main code base is rarely changed, except in major follow-up releases, or for core feature additions that require it due to the structure of the code.

## Running launchpad remotely

November 4, 2009

Launchpad is a software collaboration platform developed by Canonical (famous for Ubuntu :D). Recently the Launchpad project went open source. This allowed people to host copies of it locally. The installation process could be found here.

The next natural step is to configure remote computers to be able to access launchpad. After following the how-to by launchpad, and trying to access the local copy of launchpad from an external source, the following error is shown by the running launchpad server:

NotFound: Object: <canonical.launchpad.webapp.servers.ProtocolErrorPublication
object at 0x8157b50>, name: ''

The solution to this lies in the /etc/hosts file on the client accessing the launchpad copy. The file should contain the following:

{ip for server}      launchpad.dev answers.launchpad.dev api.launchpad.dev

{ip for server}      bazaar.launchpad.dev

{ip for server} is the ip of the machine running the copy of the launchpad server. Also take care that each entry, from the {ip for server} field to the end of its host names, should be a single line in the file. For client systems that are not Linux, check out the Wikipedia article concerning hosts files for the equivalent of “/etc/hosts”.

On a side note, installing launchpad does not add scripts for running it automatically at startup; this has to be done manually.

## Searching Source Code: LXR and grep

November 2, 2009

Looking for a certain functionality in someone else’s code could prove to be a tough challenge, and the challenge grows with the size of the project. Many systems exist today that facilitate easy search within a project. For Java, Eclipse is best used for that task, as it contains strong Java class indexing, classification, and searching.

Some projects provide a portal for searching their code base. Famous examples of these are LXR for the Linux kernel, and its customized version MXR, used for Mozilla projects.

When all the above options are not available, an alternative exists natively on most Linux systems, namely “grep”. Grep is a strong command for regular expression matching. One of its features is that it can also match content within files. Putting that in mind, the following command could be very useful:

 grep -R "search query" .

This looks recursively for all mentions of the search query in all files under the current directory. For each match, it prints to the console the name of the file where the query was found, and the line that included it.

To store this search result in a file, the output could be redirected as follows:

 grep -R "search query" . > log.txt

## “Bug Free Zone”

November 2, 2009

As you can see from the title of my post, it’s all about bugs. But not bugs as in insects, rather computer bugs/problems. In the open source development course, we were introduced to the Mozilla development ecosystem.

This introduction was done via Thunderbird, the Mozilla email client. The first step was to compile a debug build of Thunderbird from source. Instructions for that could be found here. Along the way, several bumps were met, but I managed to get past them.

After the successful compilation, I had to start tackling the bug famous in the open source educational community: the Thunderbird auto-linkification bug. This bug exists in Thunderbird, and is mainly found on the test bugzilla.

To start fixing that bug, a search of the source code is needed. Since Thunderbird is a Mozilla project, they offer an on-line portal for searching their code using MXR. This tool searches the content of the repository. After some searching, the best query to find the file related to this problem was “mailto”. This is because the bug is related to the mail rendering component, when looking at it purely from a user’s point of view.

Upon finding the correct file, the hacking started. Once that was done, I generated a patch of the file I changed from inside the comm-central/mozilla folder.

Thunderbird Bug

Once the results of my fixing looked promising, I decided to submit a bug report on the test bugzilla containing a patch. But before the submission, I decided to test my patch first on an automated reviewer. To my surprise, it showed me problems that would never have come to my mind.

JST Review

Passing the automated reviewer gives one a boost of confidence, even if it responds with the message: “Congratulations! I found no coding problems in your patch. That does not mean that it is ready for check in though… you still need to pass something more intelligent than a script. :-)”.

Now, it was time to submit the bug, attach the patch, and request review. This was amazingly simple: just filling in a form, being very detailed, and attaching the patch file. After the submission of the bug report, I requested reviews from two parties, and I am still waiting for their responses.

On a final note, I found this exercise and experience very entertaining and thrilling. The rush of hacking the code and submitting a patch was a new feeling for me. This test experience drove me to submit an actual bug related to Bespin’s new refactoring code. Hopefully I get a good review on it too, and possibly a commit :D.

## Converting System::String to char *

October 31, 2009

The official way to convert a Visual C++ System::String to a char * could be found here. Such strings are everywhere in Windows Forms applications.

A simpler way (for people who are used to standard C/C++) to do the conversion is as follows:

void string2charPtr(String ^orig, char *&out)
{
    int length = orig->Length;
    out = new char[length + 1];
    // copy character by character, narrowing each System::Char
    for (int i = 0; i < length; i++)
        out[i] = (char) orig[i];
    out[length] = '\0'; // null terminator
}

Please feel free to comment on the function above.

## Opensource Needs Open Internet

October 26, 2009

When installing opensource software, dependencies are usually downloaded over the internet. This is done via build automation scripts. Putting this in mind, when trying to install software in a location with restricted internet access, downloading dependencies is not possible. Thus, installing software is challenging.

This was seen while trying to install Bespin locally. Under restricted access, it was not possible. With open internet, it took 3 minutes.

## Processing.js: Reinventing the wheel

October 26, 2009

HTML5 is the new standard for web pages. One of the most exciting features it introduces is the canvas tag. This component allows rendering vector graphics in the browser, which was not possible with older HTML versions.

With that advancement in HTML, new JavaScript libraries have emerged for creating vector visualizations on the client side. One of these libraries is Processing.js, a port of the Java Processing library.

At first look, Processing.js shows very exciting examples. But the weak point of the framework shows when trying to develop an application that requires much interaction. To reach such an application, the Processing.js code written has to be highly optimized: no room for unnecessary code execution, no use of floats or doubles, cache as much as you can, etc.

My other issue with Processing.js is the fact that they ported everything from the Java library, even the syntax. This comes across as very frustrating to web developers. The first reason is that most web developers are used to the syntax of JavaScript, so having things like int x=0; and int [] arr = new int[4]; comes as a big surprise. This could be good for the Java developer, but not for the web developer.

My other bone to pick with Processing.js is how primitive it is relative to the current state of the web. There are no components, and you can’t define events on specific components. Everything has to be done from scratch, further strengthening the need for highly optimized code.

Coming back to the title of my post, Processing.js tries to provide an alternative to Flash. On the web, the major disadvantage of Flash is its isolation from the rest of the webpage, making it an independent entity. Processing.js does not overcome this problem, as the visualization code for each canvas is a world of its own. It can only access normal JavaScript code in the same page (which is also possible from ActionScript 3 using ExternalInterface). The problem even escalates with the lack of direct communication between different canvases using Processing.js code. Furthermore, Flash is currently cross-browser compatible, unlike the canvas tag, due to the tag’s recent addition to the HTML standard.

Putting all the above factors in mind, I feel that using the much more mature Flash libraries, which are closer to web developers, is the best option currently on the market, until the canvas tag matures with richer libraries.

## The ubiquitous jQuery, the nouns, and the documentation

October 19, 2009

While starting up the German University in Cairo Open Source Community (g-osc) alongside the open source course, we were assigned to write ubiquity commands for our community to use. This would make it easier for users to access different functionalities. I chose to build a command that would submit issues to the g-osc issue tracker.

As I had some experience building very simple commands for some of my own day-to-day internet surfing, I decided to focus on this command a bit more to see the full potential of ubiquity. The aim was to provide the means of submitting an issue in the most natural way. As I went through the details of the ubiquity tutorial, I figured that a natural approach for defining how the command will be used is as follows:

gosc post issue {issue title} to {user issue will be assigned to} as {type of issue}

Such a formulation is similar to how it would be written in normal English.

The first design question was: how will the issue submitter be known? This is automatically figured out by mediawiki, through the browser session. In other words, if a user is already logged in, then mediawiki knows this information on its own. With that issue out of the way, there was no need to write specific login functions.

The next step was to reverse engineer the issue posting form, so that AJAX would be able to send the information as if a user were filling in the form manually. After obtaining the form information, the next obstacle was actually sending the data. This was a straightforward task, due to the inclusion of the jQuery JavaScript library within ubiquity. This is a very strong feature, due to the strength of jQuery.
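
A minimal sketch of that idea (the field names and URL below are hypothetical placeholders; the real ones come from inspecting the tracker’s form HTML):

```javascript
// Build the payload exactly as the browser would submit the form.
// "issue_title", "assigned_to", and "issue_type" are assumed field
// names, stand-ins for whatever the real form inputs are called.
function buildIssuePayload(title, assignee, type) {
  return {
    issue_title: title,
    assigned_to: assignee,
    issue_type: type
  };
}

// Sending it is then a single jQuery call from inside the command:
//   jQuery.post(trackerFormUrl, buildIssuePayload(title, user, type));
```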

With that out of the way, the command was able to submit issues to the tracker. The next step was to format the command naturally, and to provide auto-completion for the available users and types of issues. This was done using ubiquity nouns. These tell ubiquity that a certain argument has a specific format. The idea is the same as auto-completion, where the specific format is a specific set of words. For both the users and the issue types, this data has to be dynamically loaded. Again, jQuery was used to load the form page and parse it to obtain the data inside the combo boxes.
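
The auto-completion logic behind such a noun boils down to filtering the cached list by what the user has typed so far. A sketch of that logic only (the actual ubiquity noun-type API has its own shape, which the tutorial documents; the user names are made-up placeholders):

```javascript
// Cached list of candidate users, loaded once from the form page
// (via jQuery). These names are placeholders for the real users.
var knownUsers = ["saher", "admin", "guest"];

// Suggest function in the spirit of a ubiquity noun: given the text
// typed so far, return the candidates that match it as a prefix.
function suggestUsers(typed) {
  var prefix = typed.toLowerCase();
  return knownUsers.filter(function (u) {
    return u.indexOf(prefix) === 0;
  });
}

console.log(suggestUsers("sa")); // [ 'saher' ]
```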

A problem that comes up with auto-completion and dynamically loaded data is the need to do an AJAX call with every key stroke. This is very expensive. The first solution that came to my mind was to cache the data, and refresh it every time the ubiquity command line is loaded. Sadly, as I mentioned in a previous post, the ubiquity documentation is not very accurate; this cache technique was not successful, as the function for handling the loading was never called. The alternative solution was to cache the content only once.

With the auto-completion working well, and a preview of the possible users present, the command was complete. The complete code for the issue submission command could be found here.