Building LLVM’s shared library on Mac OS X 10.6

January 15th, 2011

I’ve been working a little bit on the ruby-llvm project (Ruby bindings for LLVM using Ruby-FFI), mainly adding tests to the existing functionality, and so I had to build LLVM as a shared library.
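Ruby-FFI needs a shared library because it opens the dylib at run time and binds C entry points from it; a static-only LLVM build is useless to it. As a rough sketch of what such bindings boil down to (this is illustrative, not ruby-llvm’s actual code, though LLVMModuleCreateWithName is a real function in LLVM’s C API):

require 'ffi'

module LLVM
  module C
    extend FFI::Library
    # Resolves to libLLVM-2.8.dylib on Mac OS X
    ffi_lib 'LLVM-2.8'

    # One representative binding from the LLVM C API
    attach_function :module_create, :LLVMModuleCreateWithName,
                    [:string], :pointer
  end
end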

Building LLVM as a shared library is trivial on most platforms. It’s just a matter of enabling a flag during the configure phase of the build:

./configure --enable-shared

I actually use brew to build it, but the principle is the same:

brew install llvm --shared

However, just building it like this on Mac OS X 10.6 results in the following errors when the library is loaded via Ruby-FFI:

dyld: loaded: /Users/<user>/llvm/2.8/lib/libLLVM-2.8.dylib
dyld: lazy symbol binding failed: Symbol not found: 
    __ZN4llvm2cl6Option11addArgumentEv
  Referenced from: /Users/<user>/llvm/2.8/lib/libLLVM-2.8.dylib
  Expected in: flat namespace

dyld: Symbol not found: __ZN4llvm2cl6Option11addArgumentEv
  Referenced from: /Users/<user>/llvm/2.8/lib/libLLVM-2.8.dylib
  Expected in: flat namespace

Trace/BPT trap

After some investigation and an e-mail exchange with Takanori Ishikawa, I arrived at the following patch which solves the problem and allows LLVM to load cleanly as a shared library:

diff --git a/Makefile.rules b/Makefile.rules
index 9cff105..44d5b2d 100644
--- a/Makefile.rules
+++ b/Makefile.rules
@@ -497,7 +497,7 @@ ifeq ($(HOST_OS),Darwin)
   # Get "4" out of 10.4 for later pieces in the makefile.
  DARWIN_MAJVERS := $(shell echo $(DARWIN_VERSION)| sed -E 's/10.([0-9]).*/\1/')

-  SharedLinkOptions=-Wl,-flat_namespace -Wl,-undefined,suppress \
+  SharedLinkOptions=-Wl,-undefined,dynamic_lookup \
                     -dynamiclib
   ifneq ($(ARCH),ARM)
     SharedLinkOptions += -mmacosx-version-min=$(DARWIN_VERSION)

The new options keep OS X’s default two-level namespace and defer symbol resolution to run time.

Using those options doesn’t seem to have any ill effects, but I’m curious why LLVM doesn’t do this already, especially considering that many other dynamic libraries for Mac OS X are compiled with the options above. In fact, the old options seem to be remnants from pre-10.3 days. Just in case, I’ve asked that very question on the LLVM-dev mailing list.

Meanwhile, the patch works for me. I also made available a modified version for brew. YMMV.

On jedis, ninjas, and samurais

January 8th, 2011

Geeks of all stripes often like to refer to themselves as the Computer Science equivalents of some warrior society or other that, let’s be honest, had tons of cool in its day-to-day affairs, imagined or not, that real people don’t. I can’t really fault anyone who does that, because I’ve done it as well.

Granted, I never liked Star Wars that much. Being a Star Trek fan, I always considered Star Wars something you grew out of after a while: good for kids but not much else (please, don’t kill me, I’m just kidding... well, not that much). Star Wars is fantasy, Star Trek is science. But, yes, the Jedi are cool. I’d rather wield a lightsaber than a phaser, but give me a quantum torpedo any day over any weapon the Empire or the Republic can devise.

And there are also the samurai: old-school, valiant warriors, often involved in hopeless, honor-bound battles. From Seven Samurai to The Last Samurai (and let’s not forget Eiji Yoshikawa‘s novel), we Westerners have always admired the way those Japanese warriors conducted themselves, considering their Way of the Warrior something at least to aspire to.

Finally, there are always the ninja, or shinobi. Sure, they are not very popular nowadays, but there was a time when they were all the rage among the young. As with the samurai, theirs was an art grounded in much the same principles of honor and duty, although at their height they were the functional equivalent of a Black Ops team, while the samurai could be considered more like Special Forces, sometimes for hire (as ronin) and sometimes bound to a given House.

But one thing all those orders have in common is that they were mostly monastic-like orders, based on a strict code of conduct and a very strict training discipline, and, in many cases, honor-bound not to contract any formal relationships beyond those formed with their brothers in arms.

As geeks, we often like to compare ourselves to members of those orders because, as I said, they are cool. Except for Ninjutsu, which preserves much of the shinobi training, you can’t really become one of them, but you can aspire to some of the same ideals and try to live by the same codes, because they apply as much to interpersonal relationships as to war and survival itself.

But there’s something we mostly forget about those orders: the fact that they were, first and above all, about discipline. Both the real, historical training of the samurai and ninja and the imagined Jedi education required an immense, lifelong commitment to discipline that overshadowed anything else the person would do. And, most of the time, it required sacrifices.

Which brings me to my point.

In the past four years, I’ve been part of almost ten different teams. I’ve seen teams succeed and fail, recover and move on, bond and become great, be disbanded and get on with their lives. In short, I’ve had plenty of chances to participate in and observe how teams interact and get things done.

And in all those years, one of the most important things separating bad and even good teams from great ones was discipline, often the most overlooked trait in the heroes geeks choose to emulate.

It’s quite ironic that people often profess to like Agile methodologies because they seemingly create order from chaos through self-managed teams: teams that supposedly don’t need much direction to get going and do great things, teams that don’t need to be told what to do.

But the truth is, Agile only succeeds with teams that are very disciplined and that understand the trade-offs needed to make a project happen. Yes, Agile is about embracing change, but that only means you have to work better with your peers and with the organization as a whole. Understanding change, and those trade-offs, requires discipline and a down-to-earth approach that most people seem to overlook once they become enchanted with Scrum and its sister disciplines.

I was talking to a friend a couple of days ago, and we were discussing how often geeks of the younger generations use the semi-ADD excuse to go off track on projects and postpone things. Geeks, he was saying, are notorious for their short attention spans.

I think, and said so to him, that the opposite is true. The true geeks are those disciplined enough to maintain their focus and keep going in spite of distractions. You need to be pretty focused to debug the heisenbug that has been plaguing you for the past 40 hours, crashing your server every couple of hours. You need discipline to keep poring over documentation, going back and forth, to find that elusive piece of information that will optimize your routine so that it really runs on large datasets. And you need a strong sense of direction to participate in a team and keep track of everything that’s going on in an ever-changing environment.

In short, discipline is what separates the dilettantes from the craftsmen. It’s what makes things happen and what really creates great teams. It doesn’t mean you need to be a prick, or that you can’t have fun, or even that you need to follow preordained steps every time you do something. But it means you need to practice and give thought to what you’re doing until it becomes second nature, until you really master your art.

And that’s what ninjas and Jedi and samurai do. They don’t dabble, and they don’t run when the going gets weird and the tough turn pro. They just, you know, do it, and do it well.

Doing it with GroupOn

December 27th, 2010

There is a big discussion going on about GroupOn‘s business model. After the company refused Google’s acquisition offer, nobody could decide whether the company’s owners were too crazy or too brilliant, confident in their ability to outperform any offer Google could come up with, and that after Google virtually doubled its initial offer.

John Battelle is one of those who think GroupOn made the right choice. He writes:

Good sources have told me that GroupOn is growing at 50 percent a month, with a revenue run rate of nearly $2 billion a year (based on last month’s revenues). By next month, that run rate may well hit $2.7 billion. The month after that, should the growth continue, the run rate would clear $4 billion.

Battelle attributes this to a combination of factors (relationships, location, and timing; see the article for a more in-depth explanation) that makes GroupOn’s appeal to small businesses pretty much irresistible. As he notes in his article, that run rate is triple what Google itself experienced in its early years.

I was talking to a friend a while back about social buying, and he said the major problem that will hurt and eventually kill GroupOn (and, by extension, all of its clones) is churn: the fact that a lot of the offers were creating problems for the businesses using the platform. In fact, there have been many reports of people being mistreated for showing up with a GroupOn or equivalent coupon, and I have heard some of those stories first-hand. In many of those cases, the business owners had miscalculated what they could or should offer and were unhappy with the entire experience, consequently becoming less and less interested in working with GroupOn or their local clone again.

But I believe the churn we are seeing right now is just a consequence of the way new markets behave. If Battelle is right, and I believe he is, the rate of churn will fall with time as businesses begin to find their sweet spots in the social buying ecosystem. I don’t see why, for example, GroupOn couldn’t offer a tool that lets a business input some parameters and find the ideal price for a given offer. Granted, that will never be exactly precise, but it would give most business owners using the platform a way to avoid the most extreme problems.
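To sketch the idea (the parameters and the break-even rule here are hypothetical, just the shape such a tool could take): given the unit cost of the offer, its regular price, and the platform’s cut, the tool could compute the deepest discount at which a redeemed coupon still breaks even.

def max_discount(unit_cost, regular_price, platform_cut)
  # Lowest coupon price that still covers the unit cost, given that
  # the platform keeps platform_cut of every coupon sold
  breakeven_price = unit_cost / (1.0 - platform_cut)
  1.0 - (breakeven_price / regular_price)
end

# A meal that costs $12 to serve and sells for $40, with a 50% cut:
puts max_discount(12.0, 40.0, 0.5)   # => 0.4, i.e. at most 40% off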

But, ultimately, I believe GroupOn will succeed because it’s changing the way people relate to the businesses that use social buying to attract them. Recently, two other friends told me how our local clones had impacted their buying patterns.

One of them, a 40-ish divorced guy, said he wouldn’t dine out anymore unless he had a coupon, and that coupons were helping him raise the bar on the kind of places he could go to in a single month. Previously, going to a more expensive place was something he could do just a couple of times a month. With the help of coupons, he was going to more expensive places once or more each week. That’s a huge change in spending patterns, and one that benefits both him and the restaurants he likes.

The other, a guy of 30 or so, said social buying was actually helping him get laid. You see, this is a single guy who is using a variety of coupons (for restaurants, spas, clothing, small items) to impress and convince women to have sex with him. He is still spending a considerable amount of money, but GroupOn and the like are helping him spend that money more efficiently towards his objectives, which, right now, are pretty much limited to getting laid as many times as possible with as many women as possible. And it’s pretty evident from the way the market works that any business that helps people get laid (or find any measure of sexual satisfaction, for that matter) is in a much better position to thrive.

So there you have it: people are getting laid using GroupOn. That makes GroupOn’s business position a much stronger one. Battelle is right from the small business’ point of view, but my friend is also right from the consumer’s point of view.

Either way, GroupOn wins.

Use dynamic languages

August 12th, 2009

Ladies and gentlemen of the class of 2009:

Use dynamic languages.

If I could offer you only one tip for your future programming careers, dynamic languages would be it. The long-term benefits of dynamic languages have been proven by thousands upon thousands of programmers, whereas the rest of my advice has no basis more reliable than my own admittedly limited experience.

I will dispense this advice now.

Enjoy the power and expressiveness of homoiconic languages. Or forget they exist. You will never really understand the power and expressiveness of homoiconic languages until you have spent forty hours straight debugging some heisenbug. But trust me, twenty years from now, you’ll look back at all the code you have written and wish you had used a homoiconic language. Your non-homoiconic code is elegant, but not that elegant.

Don’t worry about the LOC of your programs. Or worry, but know that measuring lines of code is as effective as trying to count parentheses in Lisp. The real troubles in your programming career will come from metrics that never crossed your mind, like the number of type declarations in your classes, the kind that will make you curse the compiler and its pretend-safe type system at 4am in some caffeine-driven code marathon.

Write one line of code every day that scares other programmers.

Comment your code.

Be careful with other people’s code. Don’t put up with people who aren’t careful to keep your shared code as easily maintainable as when you wrote it.

Don’t use TODO, HACK or FIXME comments in your code.

Don’t waste time on programming language wars. Sometimes your favorite language is ahead on the TIOBE index, sometimes it’s not. The race to deliver code is long and, in the end, only your lines count.

Remember the forks and patches your code receives. Forget the innuendo about its quality. If you succeed in doing this, tell me how.

Throw away obsolete documentation. Keep old beautiful code.

Fork.

Don’t feel guilty if you still haven’t learned Assembly. The best programmers I know only bothered to learn it when they really needed it. Some of the most incredible programmers I know make a point of not learning it.

Drink coffee moderately. Be kind to your hands. You’ll miss them when RSI comes knocking.

Maybe you’ll write a compiler, maybe you won’t. Maybe you’ll write a Linux kernel driver, maybe you won’t. Maybe you’ll write artificial intelligence systems in ML, maybe you won’t. Whatever you do, remember that any of those accomplishments is as relevant as discussing whether Emacs is better than Vi.

Enjoy your test suites. Use them in whatever way you need. Don’t be afraid of what people say about TDD or of what people think of BDD. Sanity when developing is the greatest tool you’ll ever have.

Celebrate every successful build even if you are alone in the datacenter and nobody can share your happiness.

Write a Makefile at least once, even if you never have to bother with writing one again.

Don’t read Microsoft’s technological magazines, they will only make you despair of seeing beautiful code.

Get to know the big names in computing. You will miss knowing what Alan Turing and Donald Knuth did some day. Be kind to your fellow programmers. In the future, they will be the ones who help you find the proper libraries when you need them.

Understand that languages come and go, but that there are a few you should always keep yourself proficient in. Work hard to understand the features of each language you come across because, the older you get in your career, the more you will need to understand the purpose of certain features and techniques.

Write a couple of programs in C, but dump the language before it makes you believe manual memory management is good. Write a couple of programs in Haskell, but dump the language before you come to believe that cryptic error messages are tolerable. And remember to learn a new language now and then.

Accept certain inalienable truths: market languages like Java and C# suck, dynamic typing is better than static typing, and your programming career will end someday. And when it does, you will fantasize that back when you were a hot-shot programmer, market languages were not that bad, static typing was safer, and your career would never end.

Respect those whose careers have ended, because they helped put you where you are now.

Don’t expect anyone to teach you to be a better programmer. Maybe you will have a mentor. Maybe you will have access to better manuals. But you never know when either one might run out.

Collect a reusable code library but don’t add too much to it or you will find, just when you need it, that most of the code there is too terrible to use.

Be careful whose algorithms you use, but be patient with those who created them. Algorithms are like pets. Everybody thinks theirs are trustworthy, clean, and fast, but the truth is always different, and they are rarely worth the bytecode they generate.

But trust me on the dynamic languages.


Best enjoyed while listening to “Wear Sunscreen”, of which, I hope you notice, this text is an obvious parody.

I’d rather have a whale

April 8th, 2009

The whole Twitter brouhaha impressed me in one key aspect in particular: how people with no experience whatsoever in big systems think they can give valid opinions about them (regardless of the language, framework, or platform used).

I won’t offend readers by claiming extensive experience in the matter; nor will I claim any knowledge beyond what a good software engineer should have. My current experience is centered around closely following the development of an application that recently surpassed 60 million monthly page views, and which is still growing every month.

This particular application is entirely written in Ruby on Rails and considering how much effort is needed to maintain, evolve and operate it, I have nothing but sympathy for the Twitter team. Keeping an application the size of Twitter online, with all the distributed complexity it implies, is laudable.

It’s even more impressive how people assume the Twitter code is shitty. Even if it were, and even assuming it is, criticizing it for that is still bullshit. Even for an application riddled with technical debt, the balance between that debt and the value delivered to the user (something even the Twitter detractors have to agree on) is a fundamental and sound business decision.

Martin Fowler talks eloquently about that balance in one of his recent articles:

The metaphor also explains why it may be sensible to do the quick and dirty approach. Just as a business incurs some debt to take advantage of a market opportunity developers may incur technical debt to hit an important deadline. The all too common problem is that development organizations let their debt get out of control and spend most of their future development effort paying crippling interest payments.

From my point of view, the fact that Twitter has experimented with other technologies, benchmarked the application, and sought better solutions is a clear indicator that they are trying to pay off their debts. Asking for more than that is a shallow display of arrogance and ignorance about how business is done and how real code is produced.

Freely and publicly admitting to problems while trying to build a coherent discourse is something I respect. Saying things like “As far as I’m concerned, Twitter is a case-study in how Ruby on Rails does scale, even in their hands”, on the other hand, eliminates any possibility of rational dialogue. The Rails community should be ashamed of its luminaries by now.

Programmers do not operate in ideal worlds. Until the people criticizing Twitter can show they’ve done their homework on the problems Twitter is facing, I’d rather have the whale. Only proper for humans, after all.

The last D in TDD is for Design

February 3rd, 2009

In my last post, I wrote about my opinion that tests are meant to express the relationships between specific parts of the code, not to repeat knowledge of interfaces and contracts. In my experience, the most valuable tests are those that exercise those interfaces and contracts indirectly, through the particular architecture implicit in their design.

The growth of agile testing is a recent phenomenon, and it offers a good opportunity to talk about good practices, philosophy, and methodologies of development in the context of Agile testing. In particular, the Rails community is doing exceptional work in bringing tests to the forefront of the Agile discussion in the Web development community.

However, the success of testing lends itself to a lot of misunderstanding among novice developers, and also among developers not so used to TDD and BDD. Moreover, the recent multiplication of testing frameworks has resulted in a lot of bad code, as frameworks compete with each other by offering new features that, in some cases, are actively detrimental to the health of test suites.

In some ways, this is the same discussion about the real difference between TDD and BDD, but I think the particulars of the subject deserve a little more emphasis. To sum up the argument: you should never use tests as a replacement for good architectural practices.

That may sound simple and obvious, but it’s easy to find examples where testing frameworks not only fail to abide by that principle but actively encourage bad behavior. Taking Shoulda as an example, it’s very common to see code like this in projects using it:

class UserTest < ActiveRecord::TestCase
  should_belong_to :account
  should_have_many :posts
  should_have_named_scope('recent(5)').finding(:limit => 5)  
  should_have_index :age
end

This kind of code doesn’t prove anything about the architecture of the class. The code above:

  1. It’s redundant, because the first three clauses can and will be tested through their use in other parts of the code, viz., the controllers;

  2. It’s brittle, because it’s too tied to the class’s implementation details;

  3. It’s little more than a sanity check that the developer remembered to properly declare some model stuff;

  4. It’s exposing orthogonal implementation issues, like the fact that the application uses a database-backed persistence engine, in the case of the index matcher.

Overall, the tests above are almost completely useless. There may be some justification for the named scope test, but it’s still redundant.
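For contrast, here is a sketch of the kind of test I’d rather see, with hypothetical model and attribute names: it exercises the association and the named scope through actual behavior, so it keeps passing as long as the relationship holds, no matter how the declarations are written.

class UserPostsTest < ActiveSupport::TestCase
  def test_recent_limits_the_posts_returned_for_a_user
    user = User.create!(:name => "anna")
    6.times { |i| user.posts.create!(:title => "post #{i}") }

    recent = user.posts.recent(5)

    # The contract we care about: the scope limits results and the
    # records belong to the user. The belongs_to/has_many/named_scope
    # declarations behind this are details the test never mentions.
    assert_equal 5, recent.size
    assert recent.all? { |post| post.user == user }
  end
end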

Worse yet, there are examples like the Remarkable matcher should_have_before_save_callback, which is actively detrimental. A test that exposes so much of the inner workings of a business object has absolutely no justification to exist in the first place. It’s a complete deviation from what TDD represents.

Tests, once again, are about the interoperability of parts of the code. They are part of an architectural discourse that tries to stay focused not on implementation details but on the growth of the code base. The goal, as always, is to write the smallest body of tests (axioms, if you will) that gives a proper indication of the validity of a given body of code. Simplicity, in other words, which, I believe, should be an explicit goal of good architectures.

Tests: Pragmatism or ideology?

February 1st, 2009

I like most of what Joel Spolsky and Jeff Atwood write, but the latest conversation between the two of them on their regular podcast shows a blatant lack of knowledge about what tests and TDD really are.

At the core of their argument is the idea that high code coverage through tests (Jeff Atwood mentions the 95%-plus range) makes maintaining the tests themselves time-consuming, given the proportion of tests that need to change when the code changes. A secondary argument is that tests are better suited to legacy code, except for the kind of new code that has natural rigidity, such as the specification for a compiler.

The answer to the second argument is simple: all code is legacy. Simple as that. Code that goes into production instantly becomes legacy, and the argument that there is some difference between “older” and “newer” code is dubious at best.

Reading the transcript of their dialog, it’s possible to identify a confused notion of what tests really are, especially when both talk about the relationship between testing and architecture, something that in the agile context is commonly referred to as TDD or BDD.

That confusion (the idea that tests are meant to cover method or class interfaces) is extremely common even among practitioners of agile testing methods, be they those who propose tests as design tools, as TDD and BDD adopters do, or those who simply use tests as post-coding tools to verify code behavior in an automated way.

I can sympathize with the argument that 100% code coverage is usually unnecessary. In fact, 100% code coverage never means that your code, and by extension your architecture, is without flaws.

First, because 100% real code coverage is impossible to achieve for any meaningful body of code; dependencies make that a given. Second, because no matter how many tests you have, cyclomatic complexity will always get you at the most inappropriate times. No matter how much white- or black-box testing you do, the number of paths you would need to cover grows exponentially with the size of your code.
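A toy example of that explosion, with made-up names: three independent branches already mean 2**3 = 8 distinct paths, even though a single well-chosen call executes every line.

# Full path coverage needs 8 cases; one call with all three flags
# true already reports 100% line coverage.
def describe(user)
  parts = []
  parts << "admin"   if user.admin?
  parts << "active"  if user.active?
  parts << "premium" if user.premium?
  parts.join(", ")
end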

There is also another factor, a variation on the 80/20 rule: the greatest benefits you will ever get from testing are in the most complex parts of your code, but the real gain comes from the tiny deviations that blindside you on a lazy Tuesday. In this case, the more coverage you have, the easier it is to introduce new tests.

And that’s the real reason Spolsky and Atwood’s argument fails: tests are not about interfaces, or APIs, or contracts. They are about the relationships between the different pieces of your code. In that distinction lies the root of one of the biggest debates raging in the agile testing community: the real difference between TDD and BDD.

My answer is centered around a small reinterpretation of what TDD is. Instead of seeing it as Test-Driven Development, I see it as Test-Driven Design.

If you’re using tests as a way to guide your design, it means you’re more worried about knowing how the pieces fit together than about how they work, as mentioned above.

Joel says:

But the real problem with unit tests as I’ve discovered is that the type of changes that you tend to make as code evolves tend to break a constant percentage of your unit tests. Sometimes you will make a change to your code that, somehow, breaks 10% of your unit tests.

Of course you can make changes that break 10% of your tests, but in my experience that only happens when your tests are brittle and your design is already compromised. In that case, you can throw the tests away, because they’re not helping anyone.

A couple of weeks ago, I made a substantial change to a system I wrote: I had to change a middleware protocol engine from DRb (distributed Ruby) to JSON over HTTP. This particular code is 100% covered.

Because of the protocol change, a considerable part of the code was touched in some way. But only three or four new tests had to be written, to deal with representation changes (something that will also be of use in future protocol additions), and none of the existing tests was modified. Code was moved around and reshaped into new classes, but, all in all, the tests remained the same.

The explanation for what happened is simple: while there are a few tests dealing with specific interfaces, most of them are concerned with the relationships between the parts of the application: how data leaves one part of the application in one format and is reinterpreted in a different format suitable for another part, how a given AST is reorganized to suit the language generator in a different part of the application, and so on.
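As a sketch of what such a relationship-focused test looks like (the Protocol module here is hypothetical, not the actual code from that system), note that nothing in it mentions DRb or HTTP, which is exactly why swapping the transport left tests like it untouched:

require 'test/unit'

class ProtocolRoundTripTest < Test::Unit::TestCase
  def test_request_survives_an_encoding_round_trip
    request = { "command" => "compile", "source" => "1 + 1" }

    # Was DRb marshaling before, JSON now; the test doesn't care
    encoded = Protocol.encode(request)
    decoded = Protocol.decode(encoded)

    assert_equal request, decoded
  end
end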

Jeff continues to say:

Yeah, it’s a balancing act. And I don’t want to come out and say I’m against [unit] testing, because I’m really not. Anything that improves quality is good. But there’s multiple axes you’re working on here; quality is just one axis. And I find, sadly, to be completely honest with everybody listening, quality really doesn’t matter that much, in the big scheme of things…

This is something that made me rethink the entire context of the discussion. I’m really surprised that somebody who considers Peopleware and The Mythical Man-Month basic references for programmers would say something like that. Both books contain entire discussions about quality being the key to robust code that can be delivered in less time and that adds more value for businesses and users. Saying that quality is just one axis is the same as saying that good enough is enough, even if you have to throw the code away later and start all over because you couldn’t be bothered to design your architecture in a better way.

To sum up, TDD, and testing in general, is not an end in itself. But the argument that using tests is an ideological waste of time fails when one considers how they can help ensure architectural decisions.

Joel is well known for his pragmatic approach to bug fixing. Tests are a very pragmatic way to ensure that a given set of conditions won’t trigger the same flaw in your applications again. That’s the business value, in hours saved, that Joel and Jeff are talking about.

At the end of the day, pragmatism is what really counts. And tests, when done right, are some of the most pragmatic tools a programmer has in his arsenal.

A conversation with Randal L. Schwartz

May 2nd, 2008

During FISL, I had the opportunity to watch Randal L. Schwartz talk about Seaside. Schwartz is very well known in many open source communities, especially the Perl one, and is now evangelizing Smalltalk and Seaside. I asked him if we could talk a bit about the subject, given my previous interest in the field, and he graciously agreed to an interview.

Without further ado, here’s what we talked about:

Tell us a bit about yourself: what’s your background, how did you start programming, what are you doing today?

I taught myself programming when I was 9. By the time I was 15, I was teaching programming from the front of the room to my classmates, and writing contract code on the weekends for real money.

I worked for three different companies for a total of eight years before starting Stonehenge in 1985. Stonehenge has grown over the years: we can count 17 of the Fortune 100 companies as our clients.

I spend a lot of my time lecturing and writing these days, but I also still design, create, and review code as well. I answer questions for free for about an hour or two each day on the dozens of mailing lists and blogs and web communities I frequent.

You are extremely famous in the Perl community, but now you are strongly advocating Smalltalk/Seaside. What changed? When did you start using Smalltalk?

I started using Smalltalk before Perl was even invented, back in 1982. I’ve already written that story up at my blog.

What are Smalltalk’s advantages over other traditional languages like Perl, Ruby, or Python, for example?

Smalltalk has a very simple syntax: I can teach the entire syntax in about 20 minutes, and include it as part of my talk introducing people to Seaside. The major Smalltalk implementations (except GNU Smalltalk) also have a mature IDE, allowing easy exploration of code relationships, and to learn the libraries as needed by looking at both the implementation and the uses.

And that’s a bonus as well: we have two commercial smalltalks (Cincom and GemStone/S) as well as two open smalltalks (Squeak and GNU Smalltalk) all supporting Seaside. This allows a nervous manager who might be hesitant at selecting a strictly “volunteer-based” language to also have two commercial vendors to pick up support. Options are good!

Do you believe Smalltalk will finally reach mainstream status?

Well, it *had* mainstream status in the mid 90s, just before Java entered, at least with large Wall Street firms and other institutions who wanted rapid GUI development to stay ahead.

But yes, I believe Smalltalk is positioned today to reenter as a major player. For details, see my “Year of Smalltalk” post.

Also, your talk was entitled, “Seaside: Your Next Web Framework”. What is really interesting about Seaside?

I like how Seaside can abstract both control flow (along one axis) and representation (along the other axis) with relative ease. Seaside seems to put the right related things near each other. I also like the “debug the broken webhit within the webhit”: when something blows up, I can explore in the standard debugger, fix what’s broken, patch up any mess, and then continue within the same web hit, as if nothing broke.

Also, the traditional Rails persistence is provided with Active Record, which requires objects to go through an object-relational mapper to drive SQL queries. Seaside can do the same thing (via GLORP), but a better solution is to avoid the mapping entirely, using things like the open source Magma solution, or the commercial GemStone/S Virtual Machine. When you can get rid of the ORM layer, you get a lot of speed back, and a much easier programming environment.

What do you see in Seaside’s future, and how does it compare to the future of the other frameworks?

The Seaside team is currently refactoring and repackaging Seaside so that portability will be easier to manage and so that you can pull in just the parts that you need. I also see a lot of bolt-ons being created, like the Pier CMS and adaptors for various APIs such as Google Graphs.

Do you think the market is ready for Seaside?

Yes. Ruby on Rails reopened the discussions about what to do in a post-Java world, by going back to the late-binding languages like Perl and Python and Smalltalk. And Seaside is a mature framework, being even older than Rails, but just not as well known. I’m hoping to change that.

Have you deployed anything using Seaside? If so, what were the challenges?

I’m working on a few projects now, but nothing is public yet. The initial challenge was the relative lack of documentation, so I spent the better part of two days going through every posting to the Seaside mailing list. I feel much better informed now, but my eyes were pretty bleary. I hope to repackage the knowledge I gained into postings to my blog as well as helping to answer questions on the IRC channel and mailing list.

You are now part of the Squeak Foundation Board. What are your plans for the Foundation?

My primary concerns are licensing issues, release management, and proper publicity. All of these issues are being addressed, but of course, we’re all volunteers and always looking for more qualified volunteers to help.

Are there any Squeak Foundation plans for Seaside?

Nothing formal that I’m aware of. However, Squeak is the primary development platform for Seaside, so we’re sure that Squeak will remain an essential component.

What are the most promising developments in the Smalltalk/Seaside world currently?

Well, what got me involved is GLASS, the GemStone/Linux/Apache/Seaside/Squeak solution to get people up and running with Seaside quickly. This also entailed the GemStone management creating a zero-cost commercial license for a fully functional (but limited) version of GemStone/S. With this free version of GemStone/S, you can build a business, and when your business exceeds the capabilities, there are strategies about migrating to larger licenses that are reasonable. It’s a great solution for getting a rock-solid commercially-supported Smalltalk VM with persistence and clustering into your plans.

What about next year’s FISL? How did you manage to get three entire days for Smalltalk?

As I said, “it all started over a couple of Caipirinhas…”

What are your plans for those three days? Do you plan to bring other Smalltalkers?

I will be working with the FISL organizers and the various vendors and groups of the Smalltalk community to produce a full mini-conference. I hope to have both beginning and advanced Smalltalk training, as well as various Seaside tutorials. I expect this conference will attract a significant number of Smalltalk developers to FISL for the first time, as well as expose Smalltalk to the remainder of FISL, so it’s a win for everybody.

Many thanks, Mr. Schwartz, for the interview.

Arc

March 3rd, 2008

Arc’s Out:

Arc is still a work in progress. We’ve done little more than take a snapshot of the code and put it online.

I’ve been working on this for a long, long time and realized I’ll never get it done properly, so I’ll release it anyway.

Why release it now? Because, as I suddenly realized a couple months ago, it’s good enough.

It’s shit, but I’m famous enough that people will be talking about it for a long time. People will think it’s good even if it’s really just a bunch of macros on top of Scheme.

I worry about releasing it, because I don’t want there to be forces pushing the language to stop changing.

I’m not going to change it, but if you’re idiot enough to want to use it, remember that there’s no documentation. In other words, don’t call me if you can’t understand a single line of the code.

Which is why, incidentally, Arc only supports Ascii. MzScheme, which the current version of Arc compiles to, has some more advanced plan for dealing with characters. (…) But the kind of people who would be offended by that wouldn’t like Arc anyway.

I don’t understand and don’t care about any character set other than my precious ASCII. I learned it forty years ago and I’m not giving it up now. No way. Ah, that’s why Yahoo! completely rewrote the application I sold them. Bunch of losers.

Why? Because Arc is tuned for exploratory programming, and the W3C-approved way of doing things represents the opposite spirit.

Also, I don’t understand anything about new and modern standards and technologies like XHTML and CSS. And I’m not going to waste my precious VC time learning them. And I don’t care about you people who dare to make the Web less complicated. Did I mention why Yahoo! had to rewrite the program they bought from me?

Tables are the lists of html. The W3C doesn’t like you to use tables to do more than display tabular data because then it’s unclear what a table cell means.

I told you. I don’t understand anything about HTML.

So experience suggests we should embrace dirtiness. Or at least some forms of it; in other ways, the best quick-and-dirty programs are usually quite clean.

Look! A dumpster! Let’s have some fun!

Arc tries to be a language that’s dirty in the right ways. It tries not to forbid things, for example. (…) For now, best to say it’s a quick and dirty language for writing quick and dirty programs.

I lost so much time with this shit that the world should share my pain. Basic, watch yourself. It’s Arc’s turn now.

The Pragmatic Programmer

March 2nd, 2008

This is another book about software as a craft but written in a style that’s much more interesting and accessible. Dave Thomas and Andy Hunt have a lot of experience in the field and it shows.

Most of the advice given is pretty obvious, but every programmer should remind himself now and then of what’s important to his programming career. This book does exactly that.

Much of the advice in the book concerns the areas where programmers have the most trouble, like communication and dealing with managers. That’s very necessary, considering how much more integrated programming has become today, and more so in Agile contexts, where outside interaction is paramount. But there is also a lot of practical advice about the best way to prototype, how to handle the problem domain in terms of language choice, and lots of similar subjects.

Also, reading this book reminds me again of how bad Software Craftsmanship was. McBreen really sounds like he read this book, had a couple of nice insights and decided to write an entire book on what should have been an article or an essay.

Of course, two of the most interesting parts of the book are the challenges and exercises. The challenges are questions about the text just read, leading readers to expand their comprehension and think about how what they have just learned applies to their work. The exercises, on the other hand, are about practicing the knowledge in code. Both are good tools to make sure the knowledge acquired is fixed in the reader’s memory.

Of course, the book is not without flaws, although many of them can be attributed to the time in which it was written. For example, there is a tendency in the text to present Java and its related technologies as leading the way to the future. But those are small problems in an otherwise great book.

The ending was a little slow as well, given everything that had already been said, but I would encourage readers to stick with the book. Even in the slow chapters there’s a lot of food for thought.

All in all, this is a practical and current book that will benefit every programmer reading it. Even programmers with a lot of experience will learn something or at least be reminded of things they should be doing and may not be doing right now. I strongly recommend it.