I must be the last one to realize it, but a popular online store here in Brazil, Americanas, launched three digital stores: a photo printing service, a music download service (in the mold of iTunes and Yahoo! Music), and a mobile ring tone store. The music files are still DRM-locked (and it’s likely they will remain so), but it’s a start, given the store’s popularity.
June 25th, 2005
June 24th, 2005
I’ve been working with UNIX and UNIX-like systems for a long time now, and I never cease to be amazed at the differences in philosophy between those systems and other operating systems.
Recently, I was moving a system from one server to another and, for various reasons, I had to modify every page in the system. On a Windows system, that would have required a variety of complete applications, but on Linux it took me only two utilities: a recursive wget to download the system from one server to the other, and a similarly recursive rpl to change what I needed in each page — directly on the server’s command line. wget was needed because I had only FTP access to the first server.
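A minimal sketch of that two-step move, with hypothetical hostnames and paths (the replace step is demonstrated on a small mock directory, using find and sed as a stand-in in case rpl is not installed):

```shell
# Step 1 -- mirror the site over FTP (only FTP access was available):
#   wget --mirror ftp://user:password@old-server.example.com/site/
#
# Step 2 -- recursively replace the old hostname in every page:
#   rpl -R 'old-server.example.com' 'new-server.example.com' site/
#
# The same recursive replace with find and GNU sed, shown here on a
# mock "site" directory so the commands can actually run:
mkdir -p site/sub
printf '<a href="http://old-server.example.com/a">a</a>\n' > site/index.html
printf '<a href="http://old-server.example.com/b">b</a>\n' > site/sub/page.html
find site -name '*.html' -exec sed -i 's/old-server\.example\.com/new-server\.example\.com/g' {} +
# List the files that now contain the new hostname:
grep -rl 'new-server.example.com' site
```

Two commands, no GUI applications, and the whole thing can run directly on the server over SSH.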
On Windows, even if I had remote access to the server, I would likely have to install an FTP client — a complete application — to download the files. Then I would have to install a text editor with support for recursive search and replace. That’s not hard, but I would still have to install them. (And most of the time, when I’m consulting on Windows, that’s exactly the setup I find on clients’ servers: FTP clients and text editors.) Without remote access — something Windows administrators, unlike UNIX administrators, don’t like to grant — it would have been even harder.
Of course, you can simulate a UNIX environment on Windows. But the unholy trinity of power, flexibility, and simplicity that I so value in any computing environment is not as inherent to Windows as it is to Linux, out of the box.
June 23rd, 2005
All this talk about the use of open source by the Brazilian government sometimes seems like a great idiocy to me. Despite what the open source zealots say, the government understands open source about as well as somebody who has never seen a computer in his life — with or without Gilberto Gil. I’m a big supporter of open source — I have talked about it, written and distributed open source code, and fought for its adoption in the places where I worked when it was warranted — but I believe the government’s stance is wrong on many accounts and, if some of those are not seriously discussed, bad for the country in the long term.
There are two basic kinds of open source projects: those that were created because some need exceeded the inertia required to bootstrap a development community — JBoss and Apache are examples — and those that were created to scratch an itch. The former tend to succeed and the latter to fail. Of course, there are exceptions — Linux itself is one of them.
Nonetheless, projects of the first kind tend to keep themselves on their feet, going strong and creating a whole ecosystem around them — complete with parasites and forks, but keeping the project’s original vision within its building parameters. Projects of the second kind, on the other hand, tend to start strong, but are soon releasing only small updates that are not worth the download and reconfiguration they require.
Of course, where projects of the first kind fit the needs of the government, the government would do well to use them. Like any business, the government is more interested in stability and maintainability than in ideology. Projects of the second kind, in this sense, can even hurt the government’s technological base, because of their limited lifetime and the possibility of successive reinventions of the wheel — something that already happens too much in bureaucracies and needs no help from open source projects to get worse.
So, the choice of open source code, where the government is concerned, should not be guided only by ideology. More likely, the current open source revival has more to do with cost than the other benefits open source brings. That kind of thinking will surely become a problem as more open source projects are adopted. The government has many itches to scratch, but itches are not the areas that open source scratches well.
I know of at least one example in which the government adopted an inferior open source codebase and tried to coax it into a working application. They succeeded only in creating a huge mess of code that’s hard to maintain and read — and that only after rewriting half of the initial code.
There is also another fundamental question: the internal market. No company will willingly become a philanthropy — which is how the government seems to understand open source — and release all of its projects as open source, especially those that require a good degree of customization. There are systems that do not benefit from an open source approach, including the specialized systems of restricted use that are so common in government.
A unilateral decision favoring open source can eventually slow the internal market, raise costs, and reduce quality. Can WordPress function as a CMS, complete with workflow? Maybe, but only in a limited way. Can it be customized to become a full-blown CMS? Yes, if it’s acceptable to fork it and make it fragile and brittle to changes.
In those instances — and especially those that require integration with legacy systems — I believe a strong internal market, not bound to open source, is the best way to go. Open source for open source’s sake, merely because it seems healthier, prettier, and less costly, will do the government no good.
Of course, I think there are people in the government thinking about those issues. I doubt people like Sérgio Amadeu would be so careless. But the zealots are everywhere, and I see no harm in bringing the subject up again. Open source can only benefit from this kind of discussion.
It’s not that American Gods is better written than Neverwhere. It’s more that Neverwhere seems to be an introduction, or a first attempt at what would later become American Gods. The two stories have so much in common that reading them in such a short time span made me pay more attention to some of the book’s flaws than I would have otherwise.
That said, Neverwhere is a good book. Neil Gaiman once again shows his talent as a teller of modern fairy tales, whose magic is explicit on every page. It’s impossible not to be fascinated and surprised by the situations and characters Gaiman paints in the pages of Neverwhere. Even though it’s a much smaller book than American Gods, it achieves the same verisimilitude in terms of world-building.
Neverwhere tells the story of Richard Mayhew, an Englishman who moves to London for a job, searching for a better life. He succeeds, and his quiet life in London seems to be heading in the right direction: he has a good job, the perfect fiancée, and everything is going well. Until, one fateful night, he finds a girl lying on the sidewalk near his home. She is hurt and afraid, and he helps her — a decision that transforms his whole life. Suddenly, nobody seems to know — or even see — him, except for the mysterious inhabitants of another London, London Below. Recruited for a cause in which he doesn’t believe, Mayhew needs to learn to deal with the dangerous world he’s now part of if he expects to survive the day.
In London Below, Gaiman creates a convincing vision of London’s underworld: an extraordinary place filled with mythical creatures whose lives flow at the margins of London Above. As with other famous cities of literature, London Below is complex and multifaceted, with surprises sprouting from every dark corner. Obviously, a London reader will see much more than I did, but Gaiman is careful enough to make most of the references readily understandable in the international version of the book.
After I finished the book, I wanted to watch the TV series on which it was based, but I don’t think that will happen anytime soon. Even though it was made for a different medium, such a visual version would have been interesting to see.
With another of Gaiman’s books in my collection, it’s time to buy Coraline, which I’ll do as soon as I finish the other books I’m reading now. Judging by everything I’ve read from Gaiman until now, I won’t be disappointed.
June 22nd, 2005
In the past ten years, I accumulated more e-mail accounts than I care for. Some free, some I still pay for — by the way, who still pays for e-mail accounts today? — and, of those, fourteen still remained active.
After the Great Inadvertent System Erasure, I decided to reduce the number of accounts, given the trouble I had configuring all of them again after I reinstalled the system, with all their assorted filters and tools. Also, checking and filtering those accounts was taking a long time every day, and some of them received mostly garbage.
So I started by simply deleting eight of them, which were perfectly useless. Others, which are on my server, were redirected to a single account, since the individual accounts no longer serve any purpose and only mean more configuration.
I’m now down to two personal accounts, which serve completely different needs. I also exchanged all POP3 accounts for IMAP4 accounts to avoid problems with local backups. IMAP4 has its flaws, but it will serve me now.
Nice. Now I only need to find a better e-mail client. Evolution is good at POP3 but sucks at IMAP4. Thunderbird is interesting, but its interface is still a bit crude. Not that I mind testing new software, anyway.
In the ten years since I started developing Web applications, I’ve learned something well: never trust a library to write the HTML for you. The more I work with Web applications, the more persuaded I am that this is a good rule. Any library that hides the generation of the presentation layer behind complex routines or components tends to limit choice and to make optimizing the resulting HTML a real pain — if it’s possible at all.
The problem with this kind of abstraction is that it leaks too easily, leaving the developer in a situation where he must use the abstraction only for simple things and break it every time something more complex is needed.
Take .NET, for example, since it’s especially awful in this area. Created to completely eliminate the need to write HTML by hand, it ends up needlessly complicating the programmer’s job, forcing him to deal with two interfaces at the same time to achieve an objective that concerns only one of them. The result is a half-baked solution, hard to read and hard to maintain, with unpredictable results on different platforms.
Effectively, those libraries ignore the decisions made by the programmer, generating code that may, at first glance, behave the way he intended, but that is often so riddled with side effects that it will surely cause problems in the future.
This is fatal when Web standards come into the equation. Taking .NET as an example again, one of its components is intended to simulate a panel in a normal UI. On Internet Explorer, the component is correctly rendered as a <div>. On Mozilla, however, it comes out as a <table>, completely changing the page’s semantics — which, in turn, affects the style sheets and scripts related to it.
When I started programming for the Web, it was the sheer boredom of creating all the widgets needed for a page by hand that led me to create my first libraries to take care of writing HTML for me. Learning by experience that those libraries hindered me more often than they helped, I went to the opposite extreme, using template engines that simply exposed the data coming from the business layer to the presentation layer in the form of arrays and hashes, with looping statements and simple functions to enumerate items or select between two values. That didn’t help me either.
With my recent experiences in Rails, I’m finally seeing how a framework can eliminate the boredom of writing HTML code while still keeping readability. Rails offers functions to generate HTML but those functions are generally simple and transparent, a simple layer over the presentation layer itself. This thin layer is flexible and it’s simple to customize the functions to generate only what you really need. They save time on simple tasks without preventing complex tasks from being done.
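To make the contrast concrete, here is a minimal sketch — plain Ruby, not Rails itself, with a deliberately simplified signature — of the kind of thin, transparent helper described above. The mapping from arguments to markup is direct, so the output is easy to predict and to customize:

```ruby
# A thin HTML helper in the Rails spirit: it writes the tag for you,
# but nothing is hidden behind components or server-side state.
# (Sketch only -- Rails's real link_to also handles escaping and more.)
def link_to(text, url, options = {})
  # Turn {class: "nav"} into ' class="nav"', preserving insertion order.
  attrs = options.map { |name, value| %( #{name}="#{value}") }.join
  %(<a href="#{url}"#{attrs}>#{text}</a>)
end

puts link_to("Home", "/", class: "nav")
```

Because the helper is just string assembly, replacing or extending it for a special case is trivial — exactly the property that heavier component-based generators tend to lose.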
Rails, of course, is not perfect. Some of its functions can only be customized if they are overridden — error_messages_for comes to mind, for example.
Other tools are worse. PHP blends logic and presentation in an unacceptable way. .NET, on the other hand, presumes to separate them, but they remain tightly coupled behind the scenes. The various libraries and frameworks for Java make both mistakes. Rails attained a good balance in this area mainly because of its good use of MVC, but care must be taken when designing new functions to avoid oversimplifying things.
Anyway, it seems things are starting to get better. More and more people are thinking about those subjects, and I believe new libraries will be even more practical in this area.
June 20th, 2005
Some time ago, I said I was impressed by the number of blogs Scoble reads every day — at the time, more than 1400.
Now, I’m more and more impressed by the number of entries some bloggers can produce in a single day. If I stop reading the blogs to which I’m currently subscribed for a couple of days, some exceed Bloglines’s limit of 200 saved entries per blog. And most are individual blogs, since I’m subscribed to just a couple of group blogs and newspaper feeds.
Apparently, some people are spending more time blogging than working.
June 19th, 2005
Laziness is one of the fundamental virtues of programming, together with impatience and hubris. Laziness: never solving manually what you can solve automatically. As you can see, nothing bad about it at all.
Unfortunately, it doesn’t work all the time. In the next few weeks, I will have to edit almost 2000 PHP and HTML files manually, cleaning up garbage left by other programmers in countless modifications made with many different tools over the years. I can’t fully automate the task because each file is virtually unique, modified to a state in which you can hardly recognize any structure. There are areas of content that look identical on the screen but whose code is utterly different from that in similar files in terms of formatting, case, and nesting.
Since I’m not doing this, I will have to pay somebody else to do it. Laziness. Laziness.
June 15th, 2005
There’s nothing less interesting to me in a game than final bosses. I think it’s quite anticlimactic to spend sixty hours in a game, racking your brains, just to end up in a closed room with a character ten times more powerful than you that you can only hope to beat if you click fast enough. I would vote for the abolition of final bosses if I could.
June 14th, 2005
My recent experiments with the mobile Web gave me a new understanding of how far we still are from really supporting mobile users. Except for a few sites specifically designed for the small and limited screens of handhelds and cell phones, most sites don’t offer any support at all for mobile devices.
The screens on my cell phone (176×208) and my handheld (320×320) are reasonably large for such devices. Using them, it is possible to browse most sites I want to, but sometimes at the cost of many frustrating minutes. Not because the sites don’t render well (Opera, my preferred mobile browser, does a very good job of converting most sites to a layout better suited to mobile devices), but because of additional complexities relating to usability and accessibility.
A couple of sites — for example, Bloglines and Google Search — are available in special versions specifically suited to mobile devices. But even those sites sin by trying to keep their mobile versions as close as possible to their normal versions.
Google Search’s result screen, for example, is nearly identical in both versions. I’d rather have more context and fewer links in each page because that would save me both money and time allowing me to choose the right result quicker.
Bloglines’s mobile version is very good, but fails slightly by keeping a huge image on each page and by forgetting to optimize the feed page, which can sometimes be really huge. Considering that some phone companies charge by the KB, the cost of access can grow quickly.
Other sites, like Fictionwise, which I use a lot, are virtually unusable on mobile devices — which, in Fictionwise’s case, is ironic, considering mobile users are their primary audience. Fictionwise’s homepage is over 100KB even without loading its related images. Were the site designed for mobile devices, it would be possible to buy and download e-books directly to a Palm or cell phone, without having to use the PC as a middleman.
Ironically, many blogs look good on mobile devices, thanks to the proliferation of Blogger- and MovableType-based layouts, which are commonly done in XHTML and CSS, degrading gracefully in most situations.
That was my experience so far. In short, it’s perfectly possible to browse the Web on mobile devices, but the experience is still very far from that supplied by a desktop browser.