Implementing SCORM: Building the backend

November 26th, 2003

In my previous entry about SCORM, I presented some considerations about the implementation of its API on an LMS, covering both the client part, which runs in the browser, and the server part, which runs on a Web server. That entry was an overview of the issues found in the implementation, but it didn't go into the finer details of the process. This entry tries to complement the information found in the previous one, explaining some details of the server-side implementation.

One of the fundamental aspects of the SCORM API is that it's built around a Web environment, which implies that it uses HTTP for the communication between the local part of the API, which exposes the SCORM run-time environment to the learning objects, and the remote part of the API, which is responsible for the real implementation of the functions in the backend. HTTP is, by its very nature, a stateless protocol, which means that no information is retained between requests: each request is served as if it were unique, as if no other requests existed, either in the past or in the future. To work around this limitation in situations that require some kind of information to persist between requests, many mechanisms were developed, most of them involving the transmission of a value, called a session identifier, between the browser and the server. By tracking this identifier, an application running on the server can tell whether a request belongs to a given session or not.

The fact that HTTP is stateless is important because the implementation of the transport layer between the local API and the remote API requires some kind of session tracking. Without knowledge of which session a request belongs to, the LMS will not be able to execute it, as it will lack the required context. As stated in the previous entry, the responsibility of launching a SCO belongs to the LMS, and, since it has full control over what happens at that moment, it can use information only it has access to (like data about users and their history of utilization of the system) to establish a suitable session environment for further invocations. Regardless of the actual mechanism used to implement the session, it must be established at this point.
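
As a concrete illustration, here is a minimal sketch of what establishing that context at launch time could look like, written in TypeScript. The names used (`createScoSession`, the `sid` parameter, the `lms.example.com` URL) are assumptions made for the example; SCORM does not prescribe any particular mechanism.

```typescript
// Minimal sketch of session establishment at SCO launch time.
// All identifiers here (createScoSession, sid, lms.example.com) are
// illustrative; SCORM does not prescribe any of them.

interface ScoSession {
  sessionId: string;
  userId: string;
  scoId: string;
  startedAt: Date;
}

// In-memory store used only for the example; a real LMS would
// persist this in its own database.
const sessions = new Map<string, ScoSession>();

function createScoSession(userId: string, scoId: string): ScoSession {
  const session: ScoSession = {
    // The identifier carries enough context for later requests.
    sessionId: `${userId}:${scoId}:${Date.now()}`,
    userId,
    scoId,
    startedAt: new Date(),
  };
  sessions.set(session.sessionId, session);
  return session;
}

// The LMS embeds the identifier in the launch URL (or in a value the
// API adapter reads), so every later API call can send it back.
function buildLaunchUrl(scoHref: string, session: ScoSession): string {
  const url = new URL(scoHref, "https://lms.example.com/content/");
  url.searchParams.set("sid", session.sessionId);
  return url.toString();
}

const session = createScoSession("user-42", "sco-intro-01");
console.log(buildLaunchUrl("module1/index.html", session));
```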

That said, a first recommendation about the implementation of the transport layer between the local API and its remote counterpart is to avoid depending on the session mechanism provided by the development language, if one exists. Taking ASP as an example: although it supports session tracking, that support doesn't extend to distributed servers. So, if you are using a language with similar session semantics, it's better to implement your own session mechanism, either using cookies or passing a parameter with each request. The motive is simple: the remote API can be located on a different server than the LMS, and you can't make any assumptions about it. As most languages do not support distributed sessions out of the box, keeping your own session mechanism will give you greater control over it, and reduce or eliminate problems resulting from changes in core assumptions about the location of the server API. If a server is changed, for example, you won't need to rewrite any of your code to deal with sessions spanning multiple servers. Moreover, the implementation of such a session mechanism is usually very simple: in many cases, you only need to send a session identifier with every request. This identifier can be easily built from information initialized by the LMS when the SCO is launched. For instance, you can use the user identifier and the SCO identifier as parts of the session identifier.
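
To make the recommendation concrete, the sketch below shows a home-grown session identifier, built from the user and SCO identifiers at launch, being attached to every call from the local API adapter to the backend. The endpoint path and payload format are invented for the example; only the element names belong to SCORM.

```typescript
// Sketch of the local API adapter attaching a home-grown session
// identifier to every backend call, instead of relying on the server
// language's own session mechanism. Endpoint and payload shape are
// assumptions for this example.

interface ApiRequest {
  sessionId: string; // e.g. built from user id + SCO id at launch
  command: "LMSGetValue" | "LMSSetValue" | "LMSCommit";
  element?: string;
  value?: string;
}

async function callBackend(req: ApiRequest): Promise<string> {
  const response = await fetch("https://lms.example.com/scorm/api", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  return response.text();
}

// Example: persisting a lesson status for the session created at launch.
callBackend({
  sessionId: "user-42:sco-intro-01:1069822800000",
  command: "LMSSetValue",
  element: "cmi.core.lesson_status",
  value: "completed",
}).catch(console.error);
```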

A second recommendation, about the implementation of the backend API processing itself, is to be careful about what information the LMS uses to serve a request. The SCORM specification makes it clear that each SCO must be an independent entity, able to function by itself in a proper environment that conforms to the standard, without requiring any support beyond its own dependencies. So a good implementation will use the minimum possible data from its own environment to execute the SCO, extracting the necessary data from the correct places. A common mistake is to use data from the LMS backend database when it should instead have been extracted from the SCO itself. If your LMS doubles as an LCMS, the risk is magnified, since the temptation to use the LCMS database to serve data is stronger. Information like whether the SCO is a credit module or not should come from the manifest, even if it is duplicated in the database of the LMS or LCMS for performance reasons, or any other reason for that matter.
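
A small sketch of that idea: a resolver that always prefers the value extracted from the package manifest and treats the database copy as a mere cache. The `ManifestData` and `DbCache` shapes here are assumptions for the sake of the example, not anything defined by SCORM.

```typescript
// Sketch of resolving a SCO attribute with the manifest as the source
// of truth and the LMS/LCMS database only as a fallback cache.
// ManifestData and DbCache are illustrative shapes.

interface ManifestData {
  // Values extracted from imsmanifest.xml when the package was imported.
  masteryScore?: number;
  launchData?: string;
}

interface DbCache {
  masteryScore?: number;
}

function resolveMasteryScore(
  manifest: ManifestData,
  cache: DbCache,
): number | undefined {
  // Prefer the value declared by the content package itself; the
  // database copy is only a convenience that may be stale.
  if (manifest.masteryScore !== undefined) {
    return manifest.masteryScore;
  }
  return cache.masteryScore;
}

// The manifest value takes precedence over the cached one.
const score = resolveMasteryScore({ masteryScore: 80 }, { masteryScore: 75 });
console.log(score); // 80
```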

Another recommendation, also mentioned in the previous entry, is to use a dictionary table to describe the SCORM elements you implement. Such a table helps to simplify the implementation of the API, grouping common processing together, and can be reused in other parts of the system to provide easy aggregation of data about modules, courses and users. If a table like that exists, it's quite easy to create a simple state machine to process the requests from the SCOs, which will simplify development and minimize the chance of bugs. It can also help to quickly identify which elements need to be implemented only as run-time information, without requiring persistent storage. Examples of such elements are "cmi.core.student_id" and "cmi.core.student_name", which come from the LMS's own information about its users. A dictionary can also simplify the implementation of root and array elements.
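
Below is a rough sketch of such a dictionary and of the kind of common processing it enables. The element names, access rules, vocabulary and error codes follow SCORM 1.2, but the table layout and the `setValue` helper are just one possible arrangement, not the only way to do it.

```typescript
// Sketch of a dictionary table describing the data-model elements the
// backend implements, plus a tiny dispatcher that uses it.

type Access = "read-only" | "write-only" | "read-write";

interface ElementSpec {
  access: Access;
  persistent: boolean; // false = run-time only, never stored
  validate?: (value: string) => boolean;
}

const dictionary: Record<string, ElementSpec> = {
  "cmi.core.student_id": { access: "read-only", persistent: false },
  "cmi.core.student_name": { access: "read-only", persistent: false },
  "cmi.core.lesson_location": { access: "read-write", persistent: true },
  "cmi.core.lesson_status": {
    access: "read-write",
    persistent: true,
    validate: (v) =>
      ["passed", "completed", "failed", "incomplete", "browsed", "not attempted"].includes(v),
  },
  "cmi.core.session_time": { access: "write-only", persistent: true },
};

// Grouping the common checks here keeps the per-element code small.
// Return values are SCORM 1.2 error codes.
function setValue(element: string, value: string, store: Map<string, string>): string {
  const spec = dictionary[element];
  if (!spec) return "401";                       // not implemented
  if (spec.access === "read-only") return "403"; // element is read only
  if (spec.validate && !spec.validate(value)) return "405"; // incorrect data type
  store.set(element, value);
  return "0";                                    // no error
}
```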

The choice of which SCORM elements the LMS will implement depends on the conformance level intended for it. As many elements are optional, it would seem that leaving them out of an implementation can save development time. However, besides adding value to the LMS by increasing its conformance level, the implementation of some optional elements can reduce development time in other areas of the SCORM process as well. For example, the creation of SCOs that apply assessment tests can be greatly facilitated by the implementation of the optional "cmi.student_data.*" and "cmi.interactions.n.*" elements.
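
As an illustration of why the optional interaction elements pay off, the sketch below shows one possible way the backend could map "cmi.interactions.n.*" writes onto per-question records. The storage shape is an assumption; only the element names come from the specification.

```typescript
// Sketch of persisting "cmi.interactions.n.*" writes so an assessment
// SCO can record each question result.

interface InteractionRecord {
  id?: string;
  type?: string;            // e.g. "choice", "true-false", "fill-in"
  studentResponse?: string;
  result?: string;          // e.g. "correct", "wrong"
  latency?: string;
}

const interactions: InteractionRecord[] = [];

// Maps an element such as "cmi.interactions.0.result" onto the record
// with index 0, creating it on demand.
function setInteractionValue(element: string, value: string): boolean {
  const match = element.match(/^cmi\.interactions\.(\d+)\.(\w+)$/);
  if (!match) return false;
  const index = Number(match[1]);
  while (interactions.length <= index) interactions.push({});
  const record = interactions[index];
  switch (match[2]) {
    case "id": record.id = value; break;
    case "type": record.type = value; break;
    case "student_response": record.studentResponse = value; break;
    case "result": record.result = value; break;
    case "latency": record.latency = value; break;
    default: return false;
  }
  return true;
}

setInteractionValue("cmi.interactions.0.id", "question-1");
setInteractionValue("cmi.interactions.0.result", "correct");
console.log(interactions);
```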

Also regarding the development of the API, as the previous entry mentioned, some extra elements, specific to the LMS itself, can be added to the dictionary. However, great care must be taken to ensure compatibility with the standard here. Firstly, the elements must be named in a way that won't conflict with present and future versions of the standard. Secondly, the implementation of those elements must be restricted to the communication between the local API and the remote API; any use of, or dependence on, those elements inside a SCO would violate the standard.
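
One way to enforce that restriction, assuming a hypothetical "xlms." prefix for the extra elements, is to flag them in the same dictionary as internal-only and refuse them when the request originates from a SCO rather than from the API adapter itself:

```typescript
// Sketch of LMS-specific elements kept under a vendor prefix and
// restricted to the internal channel between the local and remote API.
// The "xlms." prefix and the internalOnly flag are illustrative.

interface ExtensionSpec {
  access: "read-only" | "read-write";
  internalOnly: boolean; // never to be requested or required by a SCO
}

const extensions: Record<string, ExtensionSpec> = {
  "xlms.debug.trace_level": { access: "read-write", internalOnly: true },
  "xlms.session.server_node": { access: "read-only", internalOnly: true },
};

// Requests coming from a SCO are refused for internal-only elements,
// keeping the content package portable to other LMSs.
function isElementAllowed(element: string, fromSco: boolean): boolean {
  const spec = extensions[element];
  if (!spec) return true; // not an extension, handled by the main dictionary
  return !(fromSco && spec.internalOnly);
}

console.log(isElementAllowed("xlms.debug.trace_level", true));  // false
console.log(isElementAllowed("cmi.core.lesson_status", true));  // true
```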

To conclude, I believe that following these recommendations can help avoid many problems in the implementation of the SCORM standard in an LMS, while also increasing its flexibility and extensibility, even if some of them imply development compromises. As happens in the implementation of many emerging standards, compromises must be made in some places to attain a fully functional system. However, they must be made in a way that won't require big changes to the code in the future. I believe the compromises described above are compatible with this goal.

I hope this entry has helped. In a future entry, I will deal with details of the implementation of SCOs.

§ One Response to Implementing SCORM: Building the backend

  • rosy says:

    I want to implement a SCORM-enabled LMS. Please tell me where I should get the API for SCORM implementation, or what steps I should take to implement SCORM in my LMS.
