January 15th, 2008
I’m reading Blink, by Malcolm Gladwell, which is about the ability to arrive at correct decisions from minimal information–in other words, in an instinctive or intuitive way. I’ll write more about the book later, but I’ve been thinking about Gladwell’s argument and how it would apply in the field of software development.
What occurred to me is that experienced programmers are able to make the same kind of instantaneous judgments, especially when they are debugging a program. I can remember countless occasions in my programming career when the simple act of looking at the code, without even trying to read in detail what was written, would generate a clear picture of what was wrong with that specific part of the application.
I think any other programmer would be able to say the same. That ability seems to be a mix of general programming knowledge and specific application knowledge. And the longer you program, the better you will be at spotting problems in the presumed function and structure of the code. It doesn’t matter if the problem is simple–duplicate rows because of a missing join statement, for example–or complex–subtle behavior problems in the application because of slightly changed configuration parameters.
It’s interesting to compare the behavior of two programmers with different levels of experience. Curiously, I have been doing something like that for a while, even before I started to read the book, and I think Gladwell is quite right here. I don’t agree with many of his arguments in the book, but the basic relationship between expertise and intuition is something we often miss.
The converse is also interesting: the times when instinct fails. That may cause a programmer to spend hours looking for a ridiculously small problem–a single wrong letter in a protocol definition, say, that prevents the entire program from working and yields a misleading error message. The fact that this kind of problem can often be solved by stepping back (taking some time away from the problem or asking for a second opinion) indicates that the mechanism is, to a certain extent, resettable.
Anyway, it’s quite interesting to think about the way our mind works and the ability it has to make those instantaneous comparisons and classifications.
January 11th, 2008
The equivalence between elegance, beauty and correctness is almost an axiom in the field of mathematics. Bertrand Russell expresses this correlation thus:
Mathematics, rightly viewed, possesses not only truth, but supreme beauty–a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show. The true spirit of delight, the exaltation, the sense of being more than Man, which is the touchstone of the highest excellence, is to be found in mathematics as surely as poetry.
— Bertrand Russell, The Study of Mathematics
Code, once we consider its mathematical roots, presents the same intrinsic correlation. Although it is too mutable to evoke the cold and austere beauty to which Russell alludes, the fact that code and its products exhibit the same aesthetic imperatives is obvious even to the most inexperienced programmers. Even users can occasionally apprehend those aspects of code when they talk about the way a given application works and how functional and usable it is.
Most of that elegance derives from the incremental economy one can achieve by successively refining a body of code. The author of The Little Prince describes those steps with the following words:
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
— Antoine de Saint Exupéry, Terre des Hommes
Exupéry’s criterion is an excellent validation tool for what code should be–and by extension, any of its products–in its final form. There is beauty and perfection to be found in code, to borrow Russell’s words, as surely as there is beauty and perfection in the most cherished poems.
To the intellect of programmers, this beauty is visually clear in what they produce, easily expressed in the successive reductions they can perform to achieve a core of functionality that will stand the test of time. Obviously, that perfection depends both on the programmer and on the tools he chooses to employ, but it’s available to any practitioner of the craft willing to make the effort to become a master craftsman. As another great programmer said:
Ugly programs are like ugly suspension bridges: they’re much more liable to collapse than pretty ones, because the way humans (especially engineer-humans) perceive beauty is intimately related to our ability to process and understand complexity. A language that makes it hard to write elegant code makes it hard to write good code.
— Eric S. Raymond
Being a function of a developed sense of programming, I believe it’s possible to purposefully choose to code beautifully. It’s a matter of time and options, something every programmer should think about regularly in the course of his career. Training oneself to recognize beauty may seem far-fetched, considering that reading code is much harder than writing it, but that may be the key to the task: beautiful code will be much more readable than ugly code, and that will help programmers to identify and recognize good code.
Ultimately, the challenge of every programmer is to learn to code with elegance, teaching himself or herself to recognize code that meets standards of concision, simplicity and beauty–which brings us to another quotation:
Simplicity carried to the extreme becomes elegance.
— Jon Franklin
My advice to those who are beginning their programming careers, and also to those who feel that their code is becoming bloated and unwieldy, is this: train yourself to code in a way that shows the problem-solving intent of each line, and to make sure your code is the best way to solve the problem at hand.
In less time than you realize, elegance will be second nature to you, with all the benefits it brings. It’s hard work, but worth it.
January 6th, 2008
Tim Bray began his predictions for 2008 by saying that this is the decisive year for RIA applications: either they become mainstream or they will be relegated to the dust bin of history. Given the news about Microsoft planning to overhaul its entire site to show off its RIA platform, Tim Bray is probably right in saying this will be an important year for RIA technologies.
Bray makes an interesting point when he says that he tends to associate “richness” not with interface–which, he also says, is something only developers care about–but with the interactive capabilities of applications, regardless of the technology they use. I agree, but I also think that Silverlight and Flex (and similar technologies) may have a useful role in a different place, providing different levels of interface in a very specific class of applications: internal sites.
Another interesting point Tim Bray makes is that most applications are Web-enabled to some extent–even if users don’t realize it. Add that to the growing research in offline/online integration and we are dealing with an entirely different playing field.
Contrary to Bray, I will risk a definitive prediction: RIA, at least as far as Flex and Silverlight are concerned, will indeed be recognized as a secondary option this year, and no big applications–Microsoft’s site notwithstanding–will be launched using either technology. Conversely, we will see people using Silverlight or Flex in internal applications.
The rest of the year, however, will belong to Ajax.
January 5th, 2008
I guess I can safely say that most programmers consider testing an essential part of the software development process–even those who are not using any formal framework right now beyond following a prepared script about what should be tested and how it should be tested.
Ironically, the parallel ascension of Web applications as the preferred form of modern user interfaces and of agile methodologies as a more efficient alternative to the usual development processes offered a unique opportunity for experimentation in the testing arena. Web applications are usually easier to test because you can automate most of the testing: since they are not event-driven but based on linear request/response protocols, testing them can be done at lower cost and with more productivity. Likewise, agile methodologies brought to the playing field a need for experimentation with more competitive practices, which generated hundreds of new tools with a very pronounced effect on testing.
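As a minimal sketch of why that linearity matters, consider the following Ruby snippet. The handler and its routes are hypothetical, standing in for a real Web application: because the interaction is a plain request/response exchange, a test is just a call followed by assertions, with no user events to simulate.

```ruby
# A toy request handler standing in for a Web application endpoint.
# The routes and the response format are made up for illustration only.
def handle_request(path, params = {})
  case path
  when '/greet'
    { :status => 200, :body => "Hello, #{params[:name] || 'world'}!" }
  else
    { :status => 404, :body => 'Not Found' }
  end
end

# Testing a linear protocol: send an input, assert on the output.
response = handle_request('/greet', :name => 'Ana')
raise 'unexpected status' unless response[:status] == 200
raise 'unexpected body'   unless response[:body] == 'Hello, Ana!'
```

An event-driven desktop interface offers no such single entry point, which is one reason automating its tests costs so much more.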
The end result is an increased awareness by developers of the testing process. Automated tests are becoming a premise of modern development techniques instead of an optional step in the development process. The benefits are clear: better management of changes in requirements, more robust products, improved integrations, and even better documentation, depending on the tools a developer is using.
Even though those benefits are usually touted as the main gains from testing, there is an additional benefit that is almost always overlooked when people talk about the subject: the motivational gains testing can bring to day-to-day coding.
Most new projects have complicated beginnings, with choices made on the spur of the moment that will heavily influence their life cycle. The motivational benefits of tests at the beginning of such projects can contribute to their development in two different ways: first, by making the project’s quality level visible from the first second; second, through the pure pleasure a passing test suite can bring to a developer.
People can be strongly influenced by what they see, and a passing test suite can show that the work being done is not random but follows a precise structure that developers will then strive to keep.
Even legacy projects can use that to their advantage. By incrementally creating a testing process, developers will feel they are gaining control of an otherwise unyielding mass of code, and that will be converted into other benefits as well, better understanding of the code and progressive knowledge diffusion being two of the most important ones.
To underestimate the effect this kind of motivation can have on developers is to underestimate the human factor itself. Testing provides exactly the characteristics needed to increase motivation while also providing tangible technical benefits. And although the human factor is rarely factored into the choice of a methodology, the past few years have shown an increased awareness of this subject that is quite heartening.
So, the next time somebody complains that testing is a waste of time, maybe you don’t need to point out only the technical benefits–the human benefits can be a strong selling point as well.
December 23rd, 2007
In the past couple of .NET projects my company developed, since we met with no objections from our clients, we decided to use Castle (by way of its two sub-projects, MonoRail and ActiveRecord) to see how well it would perform. Unsurprisingly, considering the care that went into the Castle code, it made .NET development altogether more bearable.
Castle is a collection of projects that includes database access layers (using NHibernate to power an ActiveRecord implementation), templating engines (of which NVelocity and Brail are but two examples), and a series of other services geared to rapid application development.
Although my experience with Castle is still small, I’m liking it. I always considered C# a good programming language, and many of its characteristics fit very nicely with the way I like to develop when using an ORM implementation. For example, the way Castle implements ActiveRecord is, at least in my opinion, a much better way to see what’s going on–a nice blend, indeed, of the Rails and Django approaches.
Obviously, since C# is not a dynamic language, some things are much harder–or at least, much less flexible–than their Rails or Django counterparts. Castle is also lacking some accessories we’ve grown to love in Rails; to wit, the console and the database shell. Nonetheless, it also shines when debugging is necessary, since Rails lacks a decent debugger (although NetBeans, if you are inclined to use it, solves the problem nicely) and Django is also missing debugging tools.
Looking at the changes already present in C# 3.0, I can see Castle becoming even more pleasant. At the moment, it is already saving us a lot of work and I’m sure it will be a lot better in the near future.
December 6th, 2007
The first programming tool I ever used was Turbo Pascal 5.0, in 1994. A 5 1/4 disk, passed around by a professor, was the gate to a world that had interested me since my first readings about computers and their capacity to be told what to do. From release 5.0, I quickly jumped to 5.5, which offered rudimentary OOP support, and soon I was using 6.0, which allowed programmers to use much more interesting OOP features and had excellent graphics support. I started programming my own graphical window manager, until I realized it would be too hard to compete with Windows.
My interest in Borland products didn’t dwindle soon. After a brief fling with Turbo C++ 3.0, I went on to program in Delphi from 1997 to 2003, with sporadic use until 2006. When the company I worked for changed its entire product platform to .NET, I had no choice but to follow. Borland’s frequent strategic mistakes didn’t help either. Soon, one of my favorite tools was just a memory. I still have a copy of Borland Delphi 6, which I purchased with my own money, but the CD has probably stopped working by now.
After so much time away from the community–I used to be very active in the Borland newsgroups, especially those related to Web programming–I was surprised to hear that Borland has restored and modernized its Turbo line of tools. There are now both Turbo Delphi and Turbo C++, new versions of the classic Turbo Pascal and Turbo C++ products. For those into Microsoft tools, there are also Turbo Delphi for .NET and Turbo C#.
Obviously, those are basic versions, stripped of any professional or enterprise features. Nonetheless, it’s nice to see Borland returning to its roots, even though those tools won’t sell enough to justify their existence. Then again, who knows–names can be powerful. Since there is a free version of Turbo Delphi, I guess I’ll be programming in Delphi again soon.
Better than Turbo Delphi would be a new version of Turbo Pascal. I still have a disk around with lots of interesting programs to run.
March 5th, 2007
The project has been approved at RubyForge, so it can be downloaded from there now or installed via the usual gem install htmldoc command. The documentation is accessible as well, and there is a Subversion repository for those interested in downloading the files directly.
As I said in the previous entry, I hope this library will be as useful to others as it was to me.
March 5th, 2007
Anyone using Ruby and Rails knows that there are few decent reporting tools, and many of them are very hard to use. Since there’s no native solution, most developers resort to a combination of other tools to put their reports together.
In my last projects, I tested a lot of solutions and found one setup that I consider satisfactory: I’m using HTMLDOC to generate PDF reports from HTML input. HTMLDOC is an open-source application that converts HTML input to formatted HTML, PDF or PostScript output.
HTMLDOC is an excellent tool, but it’s an application, which means you need to invoke it and control its execution as a process. To make the job easier, I created a plugin/gem that allows me to use HTMLDOC via a Ruby class, with a couple of simple methods.
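For illustration, here is a sketch of what such a wrapper can look like. The class name and methods are hypothetical, not the actual API of my gem, and the flags mirror common htmldoc command-line options (consult the htmldoc documentation for the exact set); the point is only the pattern of accumulating options, building a command line, and driving the binary through a pipe.

```ruby
# Hypothetical wrapper around an external command-line converter such as
# htmldoc. Options are accumulated, then turned into a command line; the
# HTML pages are fed to the child process through its standard input.
class PDFWrapper
  def initialize
    @options = { '--format' => 'pdf' }
    @pages = []
  end

  def set_option(flag, value)
    @options[flag] = value
  end

  def add_page(html)
    @pages << html
  end

  # Builds the argument list that would be handed to the binary;
  # the final '-' asks the tool to read its HTML from standard input.
  def command
    args = ['htmldoc', '--quiet']
    @options.each { |flag, value| args << flag << value }
    args << '-'
  end

  # Runs the binary as a child process and returns its output (the PDF).
  # A production version would also check the process exit status and
  # escape the arguments properly.
  def generate
    IO.popen(command.join(' '), 'r+') do |pipe|
      pipe.write(@pages.join("\n"))
      pipe.close_write
      pipe.read
    end
  end
end
```

A caller would then do something like `wrapper = PDFWrapper.new; wrapper.add_page(html); pdf = wrapper.generate`, which is essentially what the gem hides behind its methods.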
While I search for a permanent place for the project (RubyForge, for example), I’m making both the gem and the plugin available here. The usage information can be found in the README.txt file.
In order to use the class, you will obviously need the executable. Although HTMLDOC is open source, binaries are only available in paid versions. For Linux, the easiest option is to compile it yourself, which is quite simple. For Windows, old binaries can be found on the Internet. Obviously, they lack the most recent fixes. If you are deploying to Linux, though, that’s not much of a problem since you can test most of the options on Windows and run the most recent binary on Linux.
I hope this library will be as useful to others as it has been to me. If you have any comments, suggestions or fixes, please contact me.
February 15th, 2007
After hours of frustrated attempts, I finally managed to get a working Squeak development image with both Seaside and Glorp loaded–with invaluable help from Ramon Leon of On Smalltalk fame, who found out what I was doing wrong. The image is built upon the full 3.9 Squeak image, and was tested on both Windows and Linux.
It took me a few tries to get Glorp loaded into the image since the port was created for the 3.8 release, but it seems to be working well now. I had to patch a couple of methods, and now most of the tests pass (only three out of seven hundred are failing now, and I believe one of them is too version specific and should be rewritten anyway). This version requires PostgreSQL 8.2 with plaintext authentication enabled.
If you are interested in giving Smalltalk a try, Squeak is a good way to start. This development image requires only the stable release, which can be easily found on the Squeak site, and will work on any of the supported platforms.
February 7th, 2007
In the past few days, I downloaded and briefly tried a bunch of different Smalltalk implementations, trying to decide on one of them. There are dozens of different implementations, each with its own pros and cons. The language itself is, obviously, the same across implementations. What makes each implementation unique is the kind of environment offered to the developer, which can vary from a simple command-line workspace to packages allowing a developer to build and deploy applications or services to multiple platforms.
One of the first things I noticed about the implementations was the price of some of them. There are free implementations, of course, and non-commercial versions of most of the paid ones. But some non-free versions, like VA Smalltalk, can cost upwards of eight thousand dollars. For others–like VisualAge, from IBM, and Cincom Smalltalk, from Cincom–I couldn’t even find the price. I don’t want to imagine what they would cost for a single developer or a small company. In my opinion, if a company has to hide the price of a product, it’s always beyond the reach of mere mortals.
Both Cincom and Object Arts make free implementations available for non-commercial development: Cincom with its VisualWorks product, and Object Arts with Dolphin Smalltalk. Dolphin Smalltalk is a purely Windows implementation, much like Smalltalk MT, from Object Connect.
As I intend to develop both desktop and Web applications, my primary choice, of course, would be an implementation with support for both tasks. For Web applications, I’d like to use Seaside and Glorp, a requirement which narrows the playing field. On top of this, I’d like to use an implementation supporting both Windows and Linux. There’s only one implementation that fits those criteria: Cincom’s VisualWorks.
There’s also Squeak, which fits those requirements to a certain extent, but whose implementation is too geared toward educational use, and whose own version of a graphical interface is too weird and changes too much for my tastes. It’s a fast and good implementation, but not what I’d like to use now.
At the moment, I have Squeak, Smalltalk MT, Dolphin Smalltalk and VisualWorks running on my computers. Smalltalk MT and Dolphin Smalltalk are excellent products, very polished, which is kind of expected since they only run on Windows and can afford to adopt all the conventions of that platform. On Linux, I have Squeak (just for the fun of it) and VisualWorks. VisualWorks is very complete, but lacks some polish in terms of GUI development. On Windows, it uses its own components; on Linux, it runs on top of OpenMotif. As far as I know, there’s no support for Qt or GTK.
With all those differences, I wonder if that’s one of the causes behind the lack of enthusiasm about a language that, by all accounts, is still way ahead of its time. Even the fact that it’s based on images is not a problem, considering that the possible objections to that–lack of “proper” executables and team development–were addressed a long time ago. I can understand that Lisp failed to reach mainstream acceptance because it’s too esoteric. But Smalltalk is a normal imperative language, with a simple and powerful syntax. I don’t know if a standard implementation would make it more acceptable in market terms, but it would certainly make it more interesting. I guess that’s why people are interested in #Smalltalk, which is based on .NET. Unfortunately, it isn’t ready for production yet.
Those considerations apart, Smalltalk remains a mature and extremely relevant language. Seaside and Glorp, with their conceptual similarity to Rails and ActiveRecord, are giving it a lot of visibility now and I wouldn’t be surprised if more public commercial applications based on that combination started appearing this year. Undoubtedly, most languages in use today would gain a lot with a growing Smalltalk user base.