Building LLVM’s shared library on Mac OS X 10.6

January 15th, 2011

I’ve been working a little bit on the ruby-llvm project (Ruby bindings for LLVM using Ruby-FFI)–mainly adding tests to the already existing functionality–and so had to build LLVM as a shared library.

Building LLVM as a shared library on most platforms is trivial: it’s just a matter of enabling a flag during the configure phase of the build.

./configure --enable-shared

I actually use brew to build it, but the principle is the same:

brew install llvm --shared

However, just building it like this on Mac OS X 10.6 results in the following errors when loading the library via Ruby-FFI:

dyld: loaded: /Users/<user>/llvm/2.8/lib/libLLVM-2.8.dylib
dyld: lazy symbol binding failed: Symbol not found: 
    __ZN4llvm2cl6Option11addArgumentEv
  Referenced from: /Users/<user>/llvm/2.8/lib/libLLVM-2.8.dylib
  Expected in: flat namespace

dyld: Symbol not found: __ZN4llvm2cl6Option11addArgumentEv
  Referenced from: /Users/<user>/llvm/2.8/lib/libLLVM-2.8.dylib
  Expected in: flat namespace

Trace/BPT trap

After some investigation and an e-mail exchange with Takanori Ishikawa, I arrived at the following patch which solves the problem and allows LLVM to load cleanly as a shared library:

diff --git a/Makefile.rules b/Makefile.rules
index 9cff105..44d5b2d 100644
--- a/Makefile.rules
+++ b/Makefile.rules
@@ -497,7 +497,7 @@ ifeq ($(HOST_OS),Darwin)
   # Get "4" out of 10.4 for later pieces in the makefile.
  DARWIN_MAJVERS := $(shell echo $(DARWIN_VERSION)| sed -E 's/10.([0-9]).*/\1/')

-  SharedLinkOptions=-Wl,-flat_namespace -Wl,-undefined,suppress \
+  SharedLinkOptions=-Wl,-undefined,dynamic_lookup \
                     -dynamiclib
   ifneq ($(ARCH),ARM)
     SharedLinkOptions += -mmacosx-version-min=$(DARWIN_VERSION)

The new options keep OS X’s default two-level namespace and defer the resolution of undefined symbols to run time.
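To sanity-check the rebuilt library, a quick Ruby-FFI probe can confirm that symbols now bind at load time. This is just a sketch: the install path is hypothetical, and LLVMModuleCreateWithName is simply one convenient LLVM C API function to bind against.

require 'ffi'

# Minimal probe: load the dylib and bind a single LLVM C API function.
# The path below is hypothetical; point it at your own build.
module LLVMProbe
  extend FFI::Library
  ffi_lib '/Users/you/llvm/2.8/lib/libLLVM-2.8.dylib'
  attach_function :LLVMModuleCreateWithName, [:string], :pointer
end

# With the old flat-namespace flags, a call like this is typically where
# dyld would abort with "lazy symbol binding failed"; with the patch it
# just returns a module pointer.
puts LLVMProbe.LLVMModuleCreateWithName('probe')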

Using those options doesn’t seem to have any ill effects, but I’m curious why LLVM doesn’t use them already, especially considering that many other dynamic libraries for Mac OS X are compiled this way. The old flags actually seem to be remnants from pre-10.3 days. Just in case, I’ve asked that very question on the LLVM-dev mailing list.

Meanwhile, the patch works for me. I also made available a modified version for brew. YMMV.

On jedis, ninjas, and samurais

January 8th, 2011

Geeks of all stripes often like to refer to themselves as the Computer Science equivalents of some warrior society or other that, let’s be honest, had tons of cool in their day-to-day affairs, imagined or not, that real people don’t. I can’t really fault anyone who does that because I’ve done it as well.

Granted, I never liked Star Wars that much. Being a Star Trek fan, I always considered Star Wars something you grew out of after a while–good for kids but not much else (please, don’t kill me, I’m just kidding–well, not that much). Star Wars is fantasy, Star Trek is science. But, yes, the Jedi are cool. I’d rather wield a light-saber than a phaser, but give me a quantum torpedo any day over any weapon the Empire or the Republic can devise.

And there are also the samurai–old-school, valiant, often involved in hopeless, honor-bound battles. From Seven Samurai to The Last Samurai–and let’s not forget Eiji Yoshikawa‘s novel–we Westerners have always admired the way those mostly Japanese warriors conducted themselves, considering their Way of the Warrior something at least to aspire to.

Finally, there are always the ninja or shinobi. Sure, they are not very popular nowadays, but there was a time when they were all the rage among the young. As with the samurai, theirs was an art grounded in pretty much the same principles of honor and duty–although at their height they were the functional equivalent of a Black Ops team, while the samurai could be considered more like Special Forces, sometimes for hire (as ronin) and sometimes bound to a given House.

But one thing all those orders have in common is that they were mostly monastic-like orders, based on a strict code of conduct and a very strict training discipline, and, in many cases, honor-bound not to contract any formal relationships beyond those formed with their brothers in arms.

As geeks, we often like to compare ourselves to members of those orders because, as I said, they are cool. Except for Ninjutsu, which preserves much of the shinobi training, you can’t really become one of them, but you can aspire to some of the same ideals and try to live by the same codes, because they apply as much to interpersonal relationships as to war and survival itself.

But there’s something we mostly forget about those orders–the fact that they were, primarily and above all, about discipline. Both the real, historical training of the samurai and ninja and the imagined Jedi education required an immense, life-long commitment to discipline that overshadowed anything else the person would do. And, most of the time, it required sacrifices.

Which brings me to my point.

In the past four years, I’ve been part of almost ten different teams. I’ve seen teams succeed and fail, recover and proceed, bond and become great, disband and move on with their lives. In short, I’ve been part of a large number of situations in which to participate in and observe how teams interact and get things done.

And in all those years, one of the most important things separating bad and even good teams from great ones was discipline, often the most overlooked trait in the examples geeks try to emulate when choosing their heroes.

It’s quite ironic that people often profess to like Agile methodologies because they seemingly create order from chaos through self-managed teams, teams that supposedly don’t need much direction to get going and do great things, teams that don’t need to be told what to do.

But the truth is, Agile will only succeed with teams that are very disciplined and that understand the trade-offs needed to make a project happen. Yes, Agile is about embracing change, but that only means you will have to work better with your peers and with the organization as a whole–and understanding change and those trade-offs requires discipline and a down-to-earth approach that most people seem to overlook when becoming enchanted with Scrum and its sister disciplines.

I was talking to a friend a couple of days ago and we were discussing how often geeks of the younger generations use the semi-ADD excuse to go off track on projects and postpone things. Geeks, he was saying, are notorious for their short attention spans.

I think–and said as much to him–that the opposite is true. The true geeks are those disciplined enough to maintain their focus and keep going in spite of distractions. You need to be pretty focused if you want to debug that heisenbug that has been plaguing you for the past 40 hours, crashing your server every couple of hours. You need discipline to keep poring over documentation, going back and forth, to find that elusive piece of information that will optimize your routine so that it will really run on large datasets. And you need a strong sense of direction to participate in a team and keep track of everything that’s going on in an ever-changing environment.

In short, discipline is what separates the dilettantes from the craftsmen. It’s what makes things happen and what really creates great teams. It doesn’t mean you need to be a prick, or that you can’t have fun, or even that you need to follow pre-ordained steps every time you do something. But it does mean you need to practice and give thought to what you’re doing until it becomes second nature, until you really master your art.

And that’s what ninjas and Jedi and samurai do. They don’t dabble, they don’t run when the going gets weird and the tough turn pro. They just–you know–do it, and do it well.

I’d rather have a whale

April 8th, 2009

The whole Twitter brouhaha impressed me particularly in one key aspect: how people who have no experience whatsoever with big systems think they can give valid opinions about them (regardless of the language, framework, or platform used).

I won’t offend readers by saying I have extensive experience in the matter; nor will I claim any knowledge beyond what a good software engineer should have. My current experience centers on closely following the development of an application that recently surpassed 60 million monthly page views, and which is growing constantly each month.

This particular application is entirely written in Ruby on Rails and, considering how much effort is needed to maintain, evolve, and operate it, I have nothing but sympathy for the Twitter team. Keeping an application the size of Twitter online, with all the distributed complexity it implies, is laudable.

It’s even more impressive how people assume the Twitter code is shitty. Even if it were–and even assuming it is–criticizing it for that is still bullshit. Even for an application riddled with technical debt, the balance between that debt and the value delivered to the user–something even the Twitter detractors have to agree on–is a fundamental and sound business decision.

Martin Fowler talks eloquently about that balance in one of his recent articles:

The metaphor also explains why it may be sensible to do the quick and dirty approach. Just as a business incurs some debt to take advantage of a market opportunity developers may incur technical debt to hit an important deadline. The all too common problem is that development organizations let their debt get out of control and spend most of their future development effort paying crippling interest payments.

From my point of view, the fact that Twitter has experimented with other technologies, benchmarked the application, and sought better solutions is a clear indicator that they are trying to pay down their debt. Asking for more than that is a shallow display of arrogance and ignorance of how business is done and how real code is produced.

Freely and publicly admitting to problems while trying to create a coherent discourse is something I respect. Saying things like “As far as I’m concerned, Twitter is a case-study in how Ruby on Rails does scale, even in their hands”, on the other hand, eliminates any possibility of a rational dialogue. The Rails community should be ashamed of its luminaries by now.

Programmers do not operate in ideal worlds. Until the people criticizing Twitter are able to show that they’ve done their homework dealing with the questions Twitter is facing, I’d rather have the whale. Only proper for humans, after all.

The last D in TDD is for Design

February 3rd, 2009

In my last post, I wrote about how tests are meant to express the relationships between specific parts of the code, not to repeat knowledge of interfaces and contracts. In my experience, the most valuable tests are those that exercise those interfaces and contracts indirectly, through the particular architecture implicit in their design.

The growth of agile testing is a recent phenomenon, and it now offers a good opportunity to talk about good practices, philosophy, and methodologies of development in the context of Agile testing. In particular, the Rails community is doing exceptional work in bringing tests to the forefront of the Agile discussion in the Web development community.

However, the success of testing lends itself to a lot of misunderstanding among novice developers, and also among developers not so used to TDD and BDD. Moreover, the recent multiplication of testing frameworks has resulted in a lot of bad code, as frameworks compete with each other by offering new features that, in some cases, are actively detrimental to the health of the test suites.

In some ways, this is the same discussion about the real difference between TDD and BDD, but I think the particulars of the subject deserve a little more emphasis. To sum up the argument: you should never use tests as a replacement for good architectural practices.

That may sound simple and obvious, but it’s easy to find examples where testing frameworks not only fail to abide by that principle but actively encourage bad behavior. Take Shoulda, for example: it’s very common to see code like this in projects using it:

class UserTest < ActiveRecord::TestCase
  should_belong_to :account
  should_have_many :posts
  should_have_named_scope('recent(5)').finding(:limit => 5)  
  should_have_index :age
end

This kind of code doesn’t prove anything about the architecture of the class. The code above:

  1. It’s redundant, because the first three clauses can and will be tested through their use in other parts of the code, viz., the controllers;

  2. It’s brittle, because it’s too tied to the class implementation details;

  3. It’s little more than sanity testing to see if the developer remembered to properly declare some model stuff;

  4. It’s exposing orthogonal implementation issues, like the fact that the application is using a database-backed persistence engine, in the case of the index matcher.

Overall, the tests above are almost completely useless. There may be some justification for the named scope test, but it’s still redundant.
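For contrast, here’s a rough sketch of what exercising that same scope through behavior might look like. The model, attribute, and record counts are all hypothetical; the point is that the assertion captures what callers rely on (at most five records come back) rather than how the scope happens to be declared:

class UserScopeTest < ActiveSupport::TestCase
  test 'recent caps the number of records returned' do
    6.times { |i| User.create!(:name => "user #{i}") }

    # Exercise the scope through the behavior callers depend on,
    # not through its declaration details.
    assert_equal 5, User.recent(5).length
  end
end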

Worse yet, there are examples like the Remarkable matcher named should_have_before_save_callback, which is actually detrimental. A test that exposes so much of the inner functionality of a business object has absolutely no justification to exist in the first place. It’s a complete deviation from what TDD represents.

Tests, once again, are about the interoperability between parts of the code. They are part of an architectural discourse that tries to remain focused not on implementation details but on the growth of the code base. The goal, as always, is to write the smallest body of tests–axioms, if you will–that will give a proper indication of the validity of a given body of code. Simplicity, in other words–which, as I believe, should be an explicit goal of good architectures.

Tests: Pragmatism or ideology?

February 1st, 2009

I like most of what Joel Spolsky and Jeff Atwood write, but the last conversation between the two of them on their regular podcast shows a blatant lack of knowledge about what tests and TDD really are.

At the core of their argument is the idea that high code coverage through tests–Jeff Atwood mentions the 95%-plus range–makes the maintenance of the tests themselves time-consuming, considering the proportion of tests that need to change when the code changes. A secondary argument is that tests are better suited to legacy code, except for the kind of new code that has natural rigidity–for example, the specification for a compiler.

The answer to the second argument is simple: all code is legacy. Simple as that. Code that becomes production code is instantly made legacy, and the argument that there is some difference between “older” and “newer” code is dubious at best.

Reading the transcription of their dialogue, it is possible to identify a confused notion of what tests really are–especially when both talk about the relationship between testing and architecture, something that in the agile context is commonly referred to as TDD or BDD.

That confusion–that tests are meant to cover method or class interfaces–is extremely common even among practitioners of agile testing methods, whether among those who propose tests as design tools, as TDD and BDD adopters do, or among those who simply use tests as post-coding tools to verify code behavior in an automated way.

I can sympathize with the argument that 100% code coverage is usually unnecessary. In fact, 100% code coverage never means that your code–and by extension your architecture–is without flaws.

First, because 100% real code coverage is impossible to achieve for any meaningful body of code–dependencies make that a given. Second, because no matter how many tests you have, cyclomatic complexity will always get you at the most inappropriate times. No matter how much white- or black-box testing you’re doing, the number of paths you would need to cover grows exponentially with your code.
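A toy example makes the combinatorics concrete: every independent branch doubles the number of distinct paths through a method, so exhaustive path coverage becomes intractable almost immediately.

# Three independent branches already mean 2 ** 3 = 8 distinct paths
# through this trivial method; n branches mean 2 ** n test cases for
# exhaustive path coverage, which no realistic suite keeps up with.
def classify(a, b, c)
  flags = []
  flags << (a ? 'a+' : 'a-')
  flags << (b ? 'b+' : 'b-')
  flags << (c ? 'c+' : 'c-')
  flags.join(',')
end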

There is also another factor, a casual variation on the 80/20 rule: most of the benefit you will ever get from testing lies in the most complex parts of your code, but the real gain comes from the tiny deviations that blindside you on a lazy Tuesday. In that case, the more coverage you have, the easier it is to introduce new tests.

And that’s the real reason why Spolsky and Atwood’s argument fails: tests are not about interfaces, or APIs, or contracts. They’re about the relationships between the different pieces of your code. In that distinction lies the root of one of the biggest debates raging in the agile testing community: the real difference between TDD and BDD.

My answer is centered around a small reinterpretation of what TDD is. Instead of seeing it as Test-Driven Development, I see it as Test-Driven Design.

If you’re using tests as a way to guide your design, that means you’re worried more about knowing how the pieces fit together than about how they work, as mentioned above.

Joel says:

But the real problem with unit tests as I’ve discovered is that the type of changes that you tend to make as code evolves tend to break a constant percentage of your unit tests. Sometimes you will make a change to your code that, somehow, breaks 10% of your unit tests.

Of course you can make changes that will break 10% of your tests, but in my experience that will only happen if your tests are brittle and your design is already compromised. In that case, you can throw the tests away because they’re not helping anyone.

A couple of weeks ago, I made a substantial change to a system I wrote: I had to switch a middleware protocol engine from DRb (distributed Ruby) to JSON over HTTP. This particular code is 100% covered.

Because of the protocol change, a considerable part of the code was touched in some way. But only three or four new tests had to be written to deal with representation changes–something that will also be useful for future protocol additions–and none of the existing tests was modified. Code was moved around and reorganized into new classes but, all in all, the tests remained the same.

The explanation for what happened is simple: while there are a few tests dealing with specific interfaces, most of them are concerned with the relationships between the parts of the application–how data leaves one part of the application in one format and is reinterpreted in a different format suitable for another part, how a given AST is reorganized to suit the language generator in a different part of the application, and so on.
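The post doesn’t show the actual code, but a sketch of that kind of representation-focused test might look like the following, with OrderEncoder and OrderDecoder as invented stand-ins for the middleware’s real classes:

require 'test/unit'
require 'json'

# Hypothetical stand-ins for the middleware's serialization layer.
class OrderEncoder
  def self.encode(order)
    { 'id' => order[:id], 'items' => order[:items] }.to_json
  end
end

class OrderDecoder
  def self.decode(payload)
    data = JSON.parse(payload)
    { :id => data['id'], :items => data['items'] }
  end
end

class RepresentationTest < Test::Unit::TestCase
  # The assertion pins down the relationship between the two sides:
  # whatever leaves the encoder must come back intact from the decoder,
  # no matter whether DRb or HTTP carries the bytes in between.
  def test_round_trip_preserves_order_data
    order = { :id => 42, :items => ['book', 'pen'] }
    assert_equal order, OrderDecoder.decode(OrderEncoder.encode(order))
  end
end

A test like this survives a transport swap untouched, which is exactly why only the representation-specific tests needed to change.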

Jeff continues to say:

Yeah, it’s a balancing act. And I don’t want to come out and say I’m against [unit] testing, because I’m really not. Anything that improves quality is good. But there’s multiple axes you’re working on here; quality is just one axis. And I find, sadly, to be completely honest with everybody listening, quality really doesn’t matter that much, in the big scheme of things…

This is something that made me rethink the entire context of the discussion. I’m really surprised that somebody who considers Peopleware and The Mythical Man-Month basic references for programmers would say something like that. Both books contain entire discussions of quality as the foundation of robust code that can be delivered in less time and that adds more value for businesses and users. Saying that quality is just one axis is the same as saying that good enough is enough, even if you have to throw it away later and start all over again because you couldn’t be bothered to design your architecture in a better way.

To sum up, TDD–or testing in general–is not an end in itself. But the argument that using tests is an ideological waste of time fails when one considers how tests can help ensure architectural decisions.

Joel is well known for his pragmatic approach to bug fixing. Tests are a very pragmatic way to ensure that a given set of conditions won’t trigger the same flaw in your applications. That’s the business value–in hours saved–that Joel and Jeff are talking about.

At the end of the day, pragmatism is what really counts. And tests, when done right, are some of the most pragmatic tools a programmer has in his arsenal.

A conversation with Randal L. Schwartz

May 2nd, 2008

During FISL, I had the opportunity to watch Randal L. Schwartz talk about Seaside. Schwartz is very well known in many open source communities–especially the Perl one–and is now evangelizing Smalltalk and Seaside. I asked him if we could talk a bit about the subject, given my previous interest in the field, and he graciously agreed to an interview.

Without further ado, here’s what we talked about:

Tell us a bit about yourself: what’s your background, how did you start programming, and what are you doing today?

I taught myself programming when I was 9. By the time I was 15, I was teaching programming from the front of the room to my classmates, and writing contract code on the weekends for real money.

I worked for three different companies for a total of eight years before starting Stonehenge in 1985. Stonehenge has grown over the years: we can count 17 of the Fortune 100 companies as our clients.

I spend a lot of my time lecturing and writing these days, but I also still design, create, and review code as well. I answer questions for free for about an hour or two each day on the dozens of mailing lists and blogs and web communities I frequent.

You are extremely famous in the Perl community, but now you are strongly advocating Smalltalk/Seaside. What changed? When did you start using Smalltalk?

I started using Smalltalk before Perl was even invented, back in 1982. I’ve already written that story up at my blog.

What are Smalltalk advantages over other traditional languages like Perl, Ruby or Python, for example?

Smalltalk has a very simple syntax: I can teach the entire syntax in about 20 minutes, and I include it as part of my talk introducing people to Seaside. The major Smalltalk implementations (except GNU Smalltalk) also have a mature IDE, allowing easy exploration of code relationships and letting you learn the libraries as needed by looking at both the implementation and the uses.

And that’s a bonus as well: we have two commercial Smalltalks (Cincom and GemStone/S) as well as two open-source Smalltalks (Squeak and GNU Smalltalk), all supporting Seaside. This allows a nervous manager who might be hesitant about selecting a strictly “volunteer-based” language to have two commercial vendors to pick up support. Options are good!

Do you believe Smalltalk will finally reach mainstream status?

Well, it *had* mainstream status in the mid 90s, just before Java entered, at least with large Wall Street firms and other institutions who wanted rapid GUI development to stay ahead.

But yes, I believe Smalltalk is positioned today to reenter as a major player. For details, see my “Year of Smalltalk” post.

Also, your talk was entitled, “Seaside: Your Next Web Framework”. What is really interesting about Seaside?

I like how Seaside can abstract both control flow (along one axis) and representation (along the other axis) with relative ease. Seaside seems to put the right related things near each other. I also like the “debug the broken webhit within the webhit”: when something blows up, I can explore in the standard debugger, fix what’s broken, patch up any mess, and then continue within the same web hit, as if nothing broke.

Also, traditional Rails persistence is provided by Active Record, which requires objects to go through an object-relational mapper to drive SQL queries. Seaside can do the same thing (via GLORP), but a better solution is to avoid the mapping entirely, using things like the open source Magma solution, or the commercial GemStone/S Virtual Machine. When you can get rid of the ORM layer, you get a lot of speed back, and a much easier programming environment.

What do you see in Seaside’s future, and how does it compare to the future of the other frameworks?

The Seaside team is currently refactoring and repackaging Seaside so that portability will be easier to manage and so that you can pull in just the parts that you need. I also see a lot of bolt-ons being created, like the Pier CMS and adaptors for various APIs such as Google Graphs.

Do you think the market is ready for Seaside?

Yes. Ruby on Rails reopened the discussions about what to do in a post-Java world, by going back to the late-binding languages like Perl and Python and Smalltalk. And Seaside is a mature framework–even older than Rails–just not as well known. I’m hoping to change that.

Have you deployed anything using Seaside? If so, what were the challenges?

I’m working on a few projects now, but nothing is public yet. The initial challenge was the relative lack of documentation, so I spent the better part of two days going through every posting to the Seaside mailing list. I feel much better informed now, but my eyes were pretty bleary. I hope to repackage the knowledge I gained into postings to my blog as well as helping to answer questions on the IRC channel and mailing list.

You are now part of the Squeak Foundation Board. What are your plans for the Foundation?

My primary concerns are licensing issues, release management, and proper publicity. All of these issues are being addressed, but of course, we’re all volunteers and always looking for more qualified volunteers to help.

Are there any Squeak Foundation plans for Seaside?

Nothing formal that I’m aware of. However, Squeak is the primary development platform for Seaside, so we’re sure that Squeak will remain an essential component.

What are the most promising developments in the Smalltalk/Seaside world currently?

Well, what got me involved is GLASS, the GemStone/Linux/Apache/Seaside/Squeak solution to get people up and running with Seaside quickly. This also entailed the GemStone management creating a zero-cost commercial license for a fully functional (but limited) version of GemStone/S. With this free version of GemStone/S, you can build a business, and when your business exceeds the capabilities, there are strategies about migrating to larger licenses that are reasonable. It’s a great solution for getting a rock-solid commercially-supported Smalltalk VM with persistence and clustering into your plans.

What about next year’s FISL? How did you manage to get three entire days for Smalltalk?

As I said, “it all started over a couple of Caipirinhas…”

What are your plans for those three days? Do you plan to bring other Smalltalkers?

I will be working with the FISL organizers and the various vendors and groups of the Smalltalk community to produce a full mini-conference. I hope to have both beginning and advanced Smalltalk training, as well as various Seaside tutorials. I expect this conference will attract a significant number of Smalltalk developers to FISL for the first time, as well as expose Smalltalk to the remainder of FISL, so it’s a win for everybody.

Many thanks, Mr. Schwartz, for the interview.

Arc

March 3rd, 2008

Arc’s Out:

Arc is still a work in progress. We’ve done little more than take a snapshot of the code and put it online.

I’ve been working on this for a long, long time and realized I’ll never get it done properly, so I’ll release it anyway.

Why release it now? Because, as I suddenly realized a couple months ago, it’s good enough.

It’s shit but I’m famous enough that people will be talking about it for a long time. People will think it’s good even if it’s really just a bunch of macros on top of Scheme.

I worry about releasing it, because I don’t want there to be forces pushing the language to stop changing.

I’m not going to change it, but if you’re idiot enough to want to use it, remember that there’s no documentation. In other words, don’t call me if you can’t understand a single line of the code.

Which is why, incidentally, Arc only supports Ascii. MzScheme, which the current version of Arc compiles to, has some more advanced plan for dealing with characters. (…) But the kind of people who would be offended by that wouldn’t like Arc anyway.

I don’t understand and don’t care for any character set other than my precious ASCII. I learned it forty years ago and I’m not giving it up now. No way. Ah, that’s why Yahoo! completely rewrote the application I sold them. Bunch of losers.

Why? Because Arc is tuned for exploratory programming, and the W3C-approved way of doing things represents the opposite spirit.

Also, I don’t understand anything about new and modern standards and technologies like XHTML and CSS. And I’m not going to waste my precious VC time learning them. And I don’t care about you people who dare to make the Web less complicated. Did I mention why Yahoo! had to rewrite the program they bought from me?

Tables are the lists of html. The W3C doesn’t like you to use tables to do more than display tabular data because then it’s unclear what a table cell means.

I told you. I don’t understand anything about HTML.

So experience suggests we should embrace dirtiness. Or at least some forms of it; in other ways, the best quick-and-dirty programs are usually quite clean.

Look! A dumpster! Let’s have some fun!

Arc tries to be a language that’s dirty in the right ways. It tries not to forbid things, for example. (…) For now, best to say it’s a quick and dirty language for writing quick and dirty programs.

I lost so much time with this shit that the world should share my pain. Basic, watch yourself. It’s Arc time! Now it’s Arc’s turn.

The Seaside Bookshelf

February 14th, 2008

To those curious about the way Seaside applications are structured, or just looking for a simple example of how they differ from other, more usual Web frameworks, I’m making available the code of a simple experiment of mine: a small system to keep information about the books I’m reading, have read, or intend to read.

On the remote possibility somebody asks about this, the system is inspired by and modeled after Caffo‘s bookshelf. Of course, his system is prettier–and faster too, at the moment.

Some caveats about the code:

  • It’s running on a very old and underpowered server.
  • This is an alpha version so don’t expect subtleties in the code. I’m still learning Seaside, and migrating from 2.6 to 2.8 proved an interesting exercise.
  • The application depends on an instance of GOODS. The connection data for the instance can be configured in the application settings.
  • The login is a beautiful example of how things should not be done. I started with a normal login system, got lazy along the way, and adapted it to allow just one user to log in. The user can be configured in the application settings as well.
  • I’m not using any deployment optimizations. Everything is in memory, and thumbnails are generated on the fly.
  • The code is Squeak-specific.

That said, the system shows how a Seaside application runs, and how Magritte can be used to model data. It’s enough to show how Seaside differs from any of the other usual Web frameworks in use today.

The code can be found below:

While the code will run in any Squeak 3.9 image, I recommend Damien Cassou’s Squeak-Web image. With his latest image, it’s just a matter of loading GOODS and the code to begin development. GOODS configuration, of course, is left as an exercise to the reader.

Software Craftsmanship

February 7th, 2008

Recently, somebody recommended that I take a look at Software Craftsmanship, by Pete McBreen, as a good treatment of software engineering versus software craftsmanship as approaches to software development.

The theme is indeed interesting, but I was surprised to see how badly the book is written. McBreen, granted, does a decent job of presenting the main arguments for both sides–which is more than you would expect from a proponent of a specific approach–but he also repeats those same arguments endlessly. I don’t know how an editor let something like that happen, but if the incessant repetition were eliminated, the book would lose at least three quarters of its almost three hundred pages.

McBreen’s argument is simple: software engineering is appropriate only for huge projects (those in the range of 100 developer-years and above). For simpler projects, needing faster development and no critical hardware infrastructure, the old concept of craftsmanship is much more interesting: a master craftsman running a team of journeymen and apprentices.

I agree with the arguments and many of the other conclusions presented by McBreen. In fact, as far as I’m concerned, that’s exactly the way I’ve been running my own small company. The results, so far, have been excellent.

Many people reading the book, however, will quickly give up after reading two or three entire chapters essentially saying the same thing. Nor will they look kindly on statements like the one below:

Software craftsmanship is the new imperative because many members of the software development community are starting to chase technology for its own sake, forgetting what is important.

The fact that the second part of that sentence is painfully obvious, and that the relationship between the first and second parts is clearly a non sequitur, doesn’t seem to bother McBreen, though.

Nevertheless, much of what McBreen says is valid and necessary, as when he describes how good-enough software is not really good for users or the industry. Some of his analysis of the prevalent (and wrong) metaphors–like car building versus car design–was interesting enough to motivate me to finish the book.

Ultimately, the book is necessary and part of one of the most important debates taking place in the industry today. I’m afraid, however, that many readers will abandon it after a couple of chapters, put off by McBreen’s redundant style. More’s the pity, because a good editor could have made it the new Peopleware.

Needed: a new paradigm for Web development

January 18th, 2008

In the past few days I have been thinking about the future of development–especially about the growing interest in tests and domain-specific languages, and about the new trends in Web development. I was surprised to realize that, despite all the talk about how those concepts may revolutionize the field, no significant application or framework is combining them into something truly new.

The history of the field is abysmal. We are now forty years into a period of very few changes in the conceptual basis of software development. For twenty years we have been using basically the same tools, practicing the same moves, and not moving at all. The industry remains bound to the minefield of object-oriented programming, relational databases, and bottom-up design.

With regard to Web development, for example: although innovative in many ways, Rails and Django share two flaws that will make them obsolete as quickly as the many other frameworks that have graced the field in the last decade.

The first flaw is conceptual fragmentation. In an attempt to make Web development “painless”, those two frameworks and their descendants have diluted the way the application domain is represented in the application. It’s a more manageable–dumbed-down, if you will–way to develop applications, but the disconnection between code and domain is fairly evident.

The second flaw is a fixation on opinionated solutions. The use of REST by Rails is a good example of this kind of fixation. REST is a very useful concept, even necessary for some applications, but Rails’ half-baked, bolted-on solution, full of accessory tricks, is sub-optimal. Yet Rails developers stick to it without questioning what it represents for their applications, because Rails is opinionated software.

In fact, many of those so-called modern frameworks are just pretending complexity does not exist, or that it can be easily managed by the use of small methods and a couple of tests.

Test-driven development is now being treated as a silver bullet. New developers are using it as a panacea–as a way to guide design, as if it were possible to analyze the problem domain of an application by peering through the small window offered by TDD. The enormous number of examples showing how to test what has already been tested is quite insane.

Seaside, which I tend to defend as the next step in Web development because of its innovative application of old concepts and its refusal to submit to the common solutions, is not the full solution either. It’s great, it’s necessary, but it is still a step below what we really need.

Hopefully, the interest in concepts like language-oriented programming will invite attempts to solve the Web development problem in new ways that will transform the field until the next revolution is needed.

Maybe we need a way to generate executable specifications that are really a way to build applications, and not an inferior way to model the expected behavior of an application. Maybe that can be a New Year’s resolution: to think of a way to connect the dots, to join the loose threads created in the past twenty years. Is anybody up to it?
