Saturday, August 12, 2006

ROA and Microformats

The most recent feedback I've been getting on my ruminations regarding the Resource Oriented Architecture has been mostly concerned with the programmability of the web. In its vanilla state, the web is very easy to program. Basically, all the computer program needs to know how to do is identify the resource and then send it one of the four rigid, predefined messages that apply no matter what. These messages are:
  1. Add this resource
  2. Give me the representation of this resource
  3. Make the resource state transition
  4. Destroy the resource
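Mapped onto HTTP, these four messages correspond to the uniform verbs POST, GET, PUT and DELETE. A minimal sketch in Python's standard library (the host and the tennis-court resource are hypothetical, and we only construct the requests here, without sending them over the wire):

```python
# Sketch: the four uniform messages of the web, expressed as HTTP verbs.
# The host and resource paths are made up for illustration.
from urllib.request import Request

base = "http://example.com/tennis-courts"

messages = [
    Request(base, data=b'{"name": "Court 5"}', method="POST"),        # 1. add this resource
    Request(base + "/5", method="GET"),                               # 2. give me the representation
    Request(base + "/5", data=b'{"state": "booked"}', method="PUT"),  # 3. make the state transition
    Request(base + "/5", method="DELETE"),                            # 4. destroy the resource
]

for req in messages:
    print(req.method, req.full_url)
```

The same four verbs apply to every resource on the web, which is exactly why the client code never needs upgrading.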

Works like a charm every time. The beauty of this model is that it is unbreakable. Adhering to this model, one will never be forced to go through the pain of 'upgrading the web'. One's code will keep working no matter what.

Is Simplicity the Problem?

Talking to most web developers, you'd get the impression that this beautiful simplicity is more of a problem than a solution. Basically, it all boils down to the fact that programmers complain that this protocol (we're talking HTTP here) is too plain. It doesn't give them the 'power' they're used to when working with the Java API, or the .NET API, and so on.

For example, in Java we have an open-ended world of unlimited custom-made, home-grown protocols. Anyone in the world is free to create their own monster mash, and invent their own capabilities and name them however they feel like. This is what programmers call 'power'.

But that's what I call 'weakness'. Why? Simply because it's so goddamn confusing. How can a thing that's so confusing be considered powerful?

The Problem of Discoverability

Some developers do recognize this problem (i.e. the problem with open-ended, unlimited world of home grown capabilities). Yes, it may be wonderful to have this vast world of incredibly sophisticated capabilities, but what's the point if no one knows about them? It would be absolutely unrealistic to expect that there be a central control instance that would maintain the world-wide inventory of all the ever growing capabilities that are being added to the web daily.

So instead of abandoning the wild-goose chase, these architects suggest we use methods of piecemeal discovery. Various techniques have been proposed to that end: reflection, introspection, Web Services Description Language (WSDL), god knows what else. None of these really work, because even after you've discovered that there is a capability out there you had no idea existed, there isn't anything you can do to use it. This is because, while you may be able to discover the remote procedure call signature of that capability (i.e. how to call it, what types of parameters it expects, and what type of value it returns), you still have absolutely no way of deciphering the meaning of that capability. What does it really mean, what does it really do?

You could always assume, but there is inevitably a big ass in every assumption.

It is very hard to interpret the intentions that some content conveys by relying on formally measurable parameters. That would be akin to trying to figure out whether a person likes something or not by measuring that person's pulse, blood pressure, blood sugar level, brain wave activity, etc. Sure, all these things are measurable, but are they really conducive to reaching an unambiguous conclusion?

Work from the Known, not from the Assumed

All the RPC methodologies prefer to work from the assumed standpoint. In other words, the RPC client prefers to engage the server in a preliminary conversation. The conversation goes something like this:

Client: "Hi, I am about to request that you render a service for me. Could you please tell me what you're capable of?"

Server: "Hi there, I offer a wide variety of top-notch services for your exquisite enjoyment. What would be your pleasure today?"

Client: "Oh, I was hoping that you could help me convert inches to centimeters. Can you do that?"

Server: "Here is the list of things I can do (offers a long list of convoluted names)."

Client: "OK, let's see... (tries to find the name that would resemble the inches-to-centimeters conversion)"

Once the client makes a decision, the real conversation commences, meaning the real data may be exchanged.

In contrast, a resource-oriented client does not engage the resource in any sort of preliminary chit-chat. The client simply identifies the resource and asks it to send its representation to the client. The client examines the received representation and decides to either give it a miss, ask the resource to make a state transition, or ask it to destroy itself (or perhaps ask it to add a new resource). Simple as that. The conversation between the client and the resource commences right out of the gate. There's no pussyfootin'.

The Problem of Enriching the Protocol

So discoverability didn't really get us anywhere, nor could it ever do so. People are slowly but surely beginning to reach the conclusion that it is much safer and ultimately much better to ask the resource for its representation than to interrogate it about its dubious capabilities. At least by sticking to the representation model, we know that our request will always get serviced in a predictable way.

But the problem now seems to be that the representation of the resource is not structured enough. What does that mean? Let's go back to my tennis court example for a minute -- if we identify a certain tennis court in our town, and request to get its representation, the response will travel to our client and will be rendered for our consumption. We will then be able to read about it in more detail. For instance, we may be able to see that this tennis court is not booked on Saturday morning, which is exactly the information we've been looking for (i.e. we've been searching for a tennis court in our town that would be free this coming Saturday morning).

So right there we see that this resource (i.e. tennis court) is endowed with the capability to be in a booked or free state. And that's all we need to know in order to fulfill our goal (and thus we'll find the web site that's hosting this resource to be very useful to us).

Now, most programmers see this situation as being very problematic. Basically, they are complaining that this representation of the resource is only human-friendly, and that machines have been left out of the equation. The highly unstructured content of the resource's representation may be fine for humans, but is all but useless for the machines.

Because of that, they propose that the rock solid HTTP protocol be enriched, ameliorated, and opened up for allowing us to enforce more structure upon the content of the resource's representation.

How do they propose to do that? Microformats are one way that seems to be getting many people's hopes quite high. So let's look at how Microformats propose to enrich the HTTP protocol.

The 80/20 Myth

Microformats offer a very non-intrusive approach to ameliorating the protocol. That approach is based on the more 'organic' view of things. In other words, it's a bazaar rather than a cathedral, a garden rather than a crystal palace.

The so-called Zen of Microformats states that it only makes sense to cope with 80% of the problem space, and leave the remaining 20% to take care of itself.

This, of course, is very reasonable. It is rather unacceptable from the engineering standpoint, but we all know by now that software development is about as close to engineering as tap dancing is to Dave Chappelle's Block Party.

In a nutshell, then, Microformats propose to open up the playing field for structuring the wild and woolly content that is being served on the web as we speak.

Right now, it is possible to see some of the Microformats in action. Plenty of good ideas that definitely add value to the structure and meaning of the resource representation.
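To make this concrete, here is a sketch of what a Microformat looks like and how a machine might pick it up. The markup below follows the hCard convention (the class names `vcard`, `fn` and `tel` come from that convention; the club name and phone number are made up), and the extraction uses only Python's standard-library HTML parser:

```python
# Sketch: extracting data from an hCard microformat with the standard
# library's HTML parser. The markup is illustrative; real pages vary wildly.
from html.parser import HTMLParser

HCARD = """
<div class="vcard">
  <span class="fn">City Tennis Club</span>
  <span class="tel">555-0142</span>
</div>
"""

class HCardParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current = None   # microformat class of the tag we're inside
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "")
        for name in ("fn", "tel"):
            if name in classes.split():
                self.current = name

    def handle_data(self, data):
        if self.current:
            self.fields[self.current] = data.strip()
            self.current = None

parser = HCardParser()
parser.feed(HCARD)
print(parser.fields)   # {'fn': 'City Tennis Club', 'tel': '555-0142'}
```

Notice that the extra structure rides on ordinary HTML class attributes; nothing about HTTP itself changes, which is why the approach is non-intrusive.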

So where's the problem? It's in the unsubstantiated belief that this additional structuring of the resource representation will catch on in approximately 80% of the cases. My hunch is that this expectation is hugely blown out of proportion.

The Selfish Web

One of the fascinating qualities of the web is that it offers one of the most altruistic experiences that emerge out of the most selfish motives. This is called 'harvesting the collective intelligence'. Each individual on the web pursues his/her proprietary, selfish goals, and yet the community tends to benefit vastly from such selfish pursuits.

But it would behoove us to keep in mind that, on the web, work avoidance is the norm. People mostly blurt things out on the web and then go on their merry ways. No one has the time nor any intention to stop and carefully structure their content.

Presently, the content offered on the web is at best structured to offer the following semantics:
  • HTML head with a half-meaningful title (hopefully)
  • body with (hopefully) only one H1 (heading one) markup tag
  • ordered/unordered lists enumerating some collection
  • divisions with semi-meaningful class names and ids
If one is extremely lucky, one may find an HTML representation of a resource that offers such well-formedness. But in most cases, the representations we do get are even below such extremely lax standards.

How are we then to expect that Microformats will catch on and reach 80% of all representations? I think it's a pipe dream. I am doubtful that Microformats will ever reach even 20% of the representations out on the web. I hate to say this, but I'm afraid that we're more realistically looking at a 2% to 5% rate of adoption.

Only time will tell, as always.

Thursday, August 10, 2006

Is the Web Machine-Friendly?

People would like the web to be machine-friendly. What does that mean? Basically, people would like to be able to teach the machines to go out on the web and do the legwork for them.

Now, machines are notoriously brittle, and tend to break easily. You throw an unexpected piece of information at a machine, and it freaks out.

Humans are different. Humans can cope with irregularities. That's because we are blessed with common sense.

Is the Web Broken?

We know for a fact that the web is not machine-friendly. But does that mean that it is broken? Some people tend to think yes: since the web cannot offer a regular, uniform and predictable experience to the machines, it is broken.

Some other people, myself included, tend to think differently. I don't believe that the web should be made uniform, just so that the machines could traverse it without experiencing any hiccups.

So, in my view, the web is not broken. The web is just fine the way it is. It is the expectation that the web must be machine-friendly that is broken.

Smart Servant does not Imply Automation

Most people make a primordial mistake upon hearing about the Smart Servant: they think it means some really smart piece of automation. But that's very 19th-century thinking. Today, in the 21st century, this is not what we're after anymore.

We're really after humanization of the technology. We want machines to learn to bend over backwards and kiss their own ass and serve our human needs. Nothing more.

And for that, we don't need massive automation. We don't need to turn the web into the wasteland of bland uniformity. Let the web be what it already is -- an enormous mass of messy, irregular, wacky and crazy stuff. That's life. That's what human beings thrive on.

We need to harness the technology that will help us participate and contribute to this mess. We don't need technology that will help us clean up and solve this mess. It is up to us, humans, to decide what's a problem and what isn't a problem.

Should the Web be Easy to Manipulate Programmatically?

My answer is: why? Who needs programmatic ways of accessing the content on the web? I don't, because I want to be in charge. I am the one who's in the driver's seat. Even when I hire a chauffeur to drive me, and am sitting in the back of the car, it is still me who is in the driver's seat.

The same goes for the web. I am the consumer, the participant, the contributor on the web. I don't want machines to do that. I don't see any value or benefit in expecting the machines to do that.

How Can ROA Help Us Build Smart Servants?

Pat Maddox and Abhijit Nadgouda are two clever dudes. They tend to ask tough questions, which is fine by me -- keeps me on my toes. I see people like them being at the forefront of the next wave of software development. People who don't take things at face value, who keep probing and exploring and are not satisfied until everything becomes crystal clear.

Those who do not see any value in such behavior will be left behind, partying on their Titanic while it gradually and almost imperceptibly continues to sink. Eventually, it will run out of lifeboats and life jackets.

All right, onto addressing Pat's questions:
How does this thinking allow us, as developers, to build smart servants?
Bingo! Right smack in the middle, he asks the absolute perfect question. But just to flip it around a bit, I'd like to ask a slightly modified question:

Why Haven't We Been Building Smart Servants?

In other words, what was it that was stopping us from trying to build smart servants? I mean, after more than 40 years of developing all kinds of software, why is it that only now are we starting to talk about smart servants?

The reason for it is quite simple: if you're drowning, and are gasping for air, you don't have enough free time and energy to start reciting poetry. And we've been drowning for the past 40 years or so in the turbulent waters of oppressive computing infrastructure woes.
If 98% of our time must be dedicated to serving the finicky infrastructure, there is really no free time left to think about finer things in life. Such as -- a smart servant.

Computing Infrastructure is Rapidly Losing its Luster

We have reached a point where there aren't any more compelling reasons to be fascinated by the computing infrastructure. Naturally, in previous times, when a single computer used to command a million-dollar price tag, there was lots to be fascinated with. But today this infrastructure is dirt cheap, and therefore ceases to be the subject of heated conversations. A similar thing happened to the light bulb, the radio, the TV, etc.

We are therefore standing at the threshold of the realization that we are not to serve this dirt cheap computing infrastructure. We are slowly coming to our senses and are beginning to insist that the computing infrastructure must serve us.

What that means is that we're turning the tables on the computing infrastructure. From now on, instead of spending 98% of our time servicing this infrastructure, we don't want to spend more than 2% of the time doing it. It is a drastic, radical turning point, where we basically turn things upside down.

So if you're planning to continue being engaged in using computing infrastructure the way tool vendors are shoving it down our throats (like Microsoft, IBM, Sun, Oracle, etc.), you know with a frightening degree of certainty that 98% of your efforts will continue to be wasted on servicing that infrastructure. That means that 98% of your decisions will add 0% value to your business (and will keep adding 100% value to the vendors' business).

If, however, you make a healthy transition and stop eating junk food and embrace the world of resource-oriented programming, you will be forsaking your love affair with the computing infrastructure and will begin embracing the smart servant model.

The Honeymoon is Over

Oftentimes, even though the honeymoon is long gone, people still don't feel ready to acknowledge that fact. This is what's happening with infrastructure-centric software development. Lots of people have fallen into a bad habit, created by the pushers (i.e. the tool vendors), and are now convinced that they must keep feeding the beast. But in actuality they really don't have to keep feeding the beast. Walking away from your dealer is easier than you might think.

It is hard to consider a divorce if you're convinced that the honeymoon is still in full swing. But the time for filing a divorce has come.

Building Smart Servants Takes Full Attention

It would not be possible to build a smart servant if we cannot focus our undivided attention on it. So long as we keep using the tool vendors' vision of the computing infrastructure-centric model, we won't have the ability to focus on building the smart servant. There will always be something else that is more pressing, more important than the smart servant project.

But if we switch to liberating technologies such as resource-oriented programming and embrace principles of radical simplicity, the computing infrastructure concerns will cease popping into our heads while we're developing software. That experience is very liberating (as all people who've made an effort to learn Ruby can attest to).

Browsing the Resources

Pat then continues his inquiry:
Right now with a resource-oriented paradigm, the only way to use the resources is through a browser.
You don't have to use the resources via a browser. If you can identify and locate a resource, you can send it an HTTP request in more than one way. Then, you'll receive an HTTP response from that resource, and it's up to you how you would like to handle that response.
This is because, as I said, you still need to know the attributes of a resource in order to do anything with it programmatically.
You can do a lot of things programmatically to a resource even if you don't know any of its attributes. You can ask it to represent itself to you. You can ask it to destroy itself. You can ask it to make a transition to a different state.
So we can publish useful resources, because humans are smart enough to process them, but how can we consume the resources?
As I've already said, we can consume them by asking them to represent themselves etc.
Even if it’s through a browser, you’ve already proclaimed that the browser is a broken model.
True, but we're not forced to stay with the browser. AJAX is leading the way out of the browsing paradigm.
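As a small illustration of browser-free consumption, the sketch below stands up a throwaway HTTP server in a background thread and asks a made-up tennis-court resource to represent itself with a plain GET, using only Python's standard library:

```python
# Sketch: consuming a resource without a browser. The resource, its path,
# and its representation are all invented for this example.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class CourtHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The resource decides how to represent itself.
        body = b"Court 5: free on Saturday morning"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

server = HTTPServer(("127.0.0.1", 0), CourtHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client identifies and locates the resource, then asks for its
# representation -- no browser anywhere in sight.
url = f"http://127.0.0.1:{server.server_port}/tennis-courts/5"
representation = urlopen(url).read().decode()
print(representation)
server.shutdown()
```

Any program capable of speaking HTTP can play the client's role here; the browser is just one such program among many.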

Wednesday, August 9, 2006

Should the Purpose of Conversation be Pre-Established?

In response to my recent post on Resource Oriented Architecture (ROA), Abhijit Nadgouda made the following comment:
Not sure if this meant that the capability was a surprise to the client. If the client did not know of the capability, why did the client engage with the resource? Shouldn't the purpose of conversation be pre-established? Should the client expect any capability from the resource?
Just to clue you in, I was talking about the absence of a need for a client to know the particulars of the resource the client is interacting with. So long as the resource understands the request to represent itself, to make a state transition, and to eventually destroy itself, the client can accomplish its goal.

Now, when the client identifies and locates the resource, it is most natural for the client to expect to receive the representation of that resource. To use the example from my original post, if I, as a client, am looking for a tennis court where I could play with my friends, upon identifying and locating the potential court, I'd like to see its representation.

At that point, the identified tennis court will ship its representation to me. This representation will then be rendered in my browser. By examining the representation I've just received, I should be able to get a better picture about the resource's capability.

Now, Abhijit's question is: is this capability a surprise to me, the client? Well, hard to say, isn't it? I mean, it all depends on what I, as a client, was expecting when engaging in the conversation with the resource.

If, for instance, I was expecting the resource to offer shower facilities and it didn't, then yes, maybe I'd be surprised. But then again maybe not, because many of the public tennis courts in my city do not offer any amenities.

The Web Is About Exploratory Behavior

Another interesting question is this:
If the client did not know of the capability, why did the client engage with the resource?
It's called exploration. And the web is all about exploring. Poking around. Does this tennis court have a wall to bounce the ball off of, or not? I don't know; let's explore and find out.

Shouldn't the Purpose of Conversation be Pre-Established?

Why enforce such a constraint? The web is also about freedom. Let me start with an informal chat, and see where it takes us. There's no need to impose an authoritarian, military type of rigid conversation that mustn't deviate from the very narrow 'norm'.

Should the Client Expect any Capability from the Resource?

Yes, and we've covered that expectation in great detail already. To reiterate, a web client invariably expects any resource on the web to be capable of representing itself, of making a transition of its state, and of destroying itself.

Tuesday, August 8, 2006

Learn to Walk Before You Try to Run

It is wonderful that so many people are nowadays serendipitously discovering the power of Resource Oriented Architecture (ROA). However, one of the most problematic aspects of human nature is impatience. I feel qualified to speak about this foible because I am one of the most impatient people around.

And that's always been the downfall in any venture. So it is with rediscovering the world of resources. When I say 'rediscovering', I'm trying to remind the reader that resources have been built into the web from day one. But somehow, we chose not to notice them. Instead, we chose to focus on remote procedure calls (i.e. the services).

The Follies of Impatience

Humans like to complicate things. Here we have a very robust and simple situation -- an ever growing collection of resources which can be accessed via three simple commands: represent yourself, modify yourself, destroy yourself. But in our blind impatience we jumped to the conclusion that things are just too simple and that we need a much more complicated protocol in order to make things work. We rushed to invent the dreaded web services.

This impatient over-engineering is so foolish that one is at a loss when trying to find an equivalent folly in other engineering fields. The closest one I could come up with is the system of traffic lights.

Today, we have a traffic regulation system that consists of a protocol based on the state of the physical semaphores out in the streets. At any point in time, each semaphore can assume one of three possible states: it can turn green, which means 'go', it can turn yellow, which means 'get ready to stop', or it can turn red, which means 'stop'.

That's all there is to it. Very simple, very elegant, and it serves its purpose perfectly -- to regulate otherwise astronomically complex traffic patterns.

Now imagine if we'd given in to the engineering follies of impatience, and let civil engineers take over the asylum and go nuts with their 'solutions'. We'd then quite easily be looking at 64 different colors that a traffic semaphore could take. Mauve would possibly mean 'go, but be forewarned that there's construction ahead', blue would mean 'speed limit around the corner', purple would mean 'food and lodging ahead', brown would mean 'traffic congestion after the next intersection', and so on. The possibilities are endless.

Would that be an improvement? Quite the contrary, it would be a veritable disaster. Not only would the chances of drivers getting totally confused be blown out of all proportion, but we'd probably witness the 'vendor wars' all over again. Different municipalities would strive to come up with their own version of a comprehensive traffic-regulating color scheme. Drivers would have to learn and memorize numerous color patterns, and to be additionally very aware of when they have crossed the boundary from one jurisdiction to another.

All in all, an incredible mess that was thankfully avoided by patiently sticking to the dull but extremely reliable protocol of only three traffic lights.

Just as reducing the number of decisions when driving leads to more reliable public traffic, reducing the number of decisions when traveling the web leads to more reliable web software. On the web, all we need to know is that a resource is identifiable and locatable, and that it can represent itself to us, and can respond to our requests to change its state or to destroy itself. And if we keep these radically simple things in mind when developing web software, we'll realize that we can do absolutely anything on the web.

Not only that, but by observing these five simple rules (identify the resource, locate it, ask it to represent itself, ask it to make a state transition, ask it to destroy itself), we open up the world of endless possibilities for other interested parties to join in on the conversation and to ameliorate and augment our business. All that without expecting anyone to learn anything in particular.

This is the world of true democracy, where everyone is welcome and everyone is qualified and is free to feel adequate to contribute their own value.

Contrast that with the world of web services, where each and every business is free to invent its own system of 'traffic lights' and then insist that anyone driving through their territory learns their convoluted system and obeys it. It is painfully obvious how web services have regressed us and damaged our credibility big time. It'll take many years before the damage can get undone. In the meantime, we'd do well to really learn how to walk properly on the web, before we try to run and again fall flat on our faces.

Replacing Service Oriented Architecture with Resource Oriented Architecture

We've already seen that resource is the central abstraction upon which the web is based. The problem in today's mainstream web development is that most developers have been forced to make a quick switch from the mainframe or the desktop or the client/server environments to the web environment. Now, we've seen that in the mainframe or in the desktop etc. environments, other equally powerful abstractions found their way into the collective mindshare. For example, file is a very powerful abstraction that allows software developers to program all kinds of nifty things on the *nix box.

At the same time, object is an equally powerful abstraction that allows software developers to develop all kinds of nifty things in a distributed system.

Also, remote procedure call (RPC) is a very powerful abstraction allowing developers to define all kinds of contracts across the system boundaries.

All these abstractions, as powerful as they are, have proven inadequate in the world of web development. The biggest problem right now is that most web developers seem to be barking up the wrong tree by relying on another strong abstraction -- service. This reliance led to the very popular Service Oriented Architecture (SOA) on the web, which, being inadequate, is slowing web development down and thus incurring a lot of unwanted delays and damages.

What is a Service?

As the name itself implies, a service is basically a rendered capability. This concept actually comes from the world of objects. In the world of software, objects are often referred to as 'bundles of capabilities'. When we set up the environment in such a way that a contract can be made between the provider of the service (i.e. an object) and a consumer (i.e. a client), we arrive at the service-oriented architecture.

A service, being a general purpose abstraction, can be anything. Thus, services are extremely diverse, and that creates a huge problem. Any consumer-to-be must keep an inventory of the available services of interest that are on the web. Not only that, but because each service comes with its own arbitrary protocol, the consumer-wannabe must also make sure that the inventory of corresponding protocols is kept up to date.

This places enormous pressure on the consumers. How are they to keep up with the incredibly rich and diverse world of web-based services? It simply isn't possible. Certain systems have been proposed in the past to allow interested parties to discover the availability of web services and then engage in learning the ins and outs of a particular service's protocol. But we haven't seen a truly workable solution to that problem. My hunch is that we never will.

The Web is All About Conversation

The nature of the web, as a medium and as an enabling technological platform, is conversational. The web is abuzz with incessant streams of conversations going on around the world. Some private, some public.

Because of that, the web is built on the premise that there will always be plenty of third parties who will be interested in joining the conversation. A classic example -- a group of friends interested in playing tennis may strike up a conversation about who, when and where. At the same time, there could be a facilities provider who may be interested in joining the conversation by offering a nice court with all amenities for a very reasonable fee. It's a win-win situation.

But how is the third party (i.e. the tennis court provider) to join in on the conversation?

The Web is All About Relinquishing Control

In the non-web world, where control is the highest order of the day, such 'joining in on a conversation' could typically not happen in a haphazard way. A specialized protocol would have to be developed and all the interested parties would have to be notified beforehand. But often just being notified isn't sufficient. Sometimes the notified parties who would want to join in on the conversation tend to learn that, in order to join, they first need to enroll in some sort of an educational course. High barriers to entry etc. There would therefore be very little possibility for someone to just jump in the middle of a fairly advanced conversation.

And yet that's exactly what the web is all about. The web as a medium is extremely (some even say radically) open, extremely trusting. It places almost no restrictions on who joins in. Not only that, but it also places almost no restrictions on when the third party joins in. So, usually on the web, there's no need to take a university course before we can participate in the conversation.

This is possible because no one owns the web, and consequently no one can control it. The web is everyone's property, and everyone is welcome to join in. It's the world of sharing. The need for control has been relinquished, and it is left to the participants to sort out their differences. The web as a medium and as a technology is not going to offer any assistance in that regard.

Back to the Resources

Because of the uniqueness of the web as a medium, the only abstraction that does it true justice is resource. The web is a collection of resources. These resources are astronomically diverse, and it would be mathematically impossible to maintain any semblance of a reasonable inventory of web resources.

This reality then forces us to give up on trying to control and maintain the inventory. And instead, we turn our attention to the web protocol.

Each resource on the web, no matter how unique and complicated it may be, obeys only one protocol. This protocol has three major aspects to it. In no particular order, these aspects are:
  1. Each resource knows how to represent itself to the consumer
  2. Each resource knows how to make a transition from one state to another state
  3. Each resource knows how to self-destruct
In addition to the above three aspects of the web protocol, there is a fourth aspect that allows clients to add a new, never before existing resource to the web.

All in all, four very simple aspects of a single protocol. And using these four aspects, it is possible to accomplish absolutely anything on the web.

How Does the Web Protocol Work?

If there is a resource out on the web, such as a tennis court, that resource could be identified. Once identified, that resource could be located on the web (using its corresponding URI). Once located, a request, obeying one of the three aspects outlined above, could be sent to the resource.

Typically, a client would initially want the resource (i.e. the tennis court) to respond to the request by shipping its representation. In reality, no one knows where the actual resource sits, nor how its state is implemented. And that's a good thing -- one less thing to worry about, one less decision to make.

The resource decides how to represent itself (based on some sort of business logic, of which we as clients know nothing, and frankly don't want to know). Its representation travels to the requesting client, who renders it and then consumes it.

Once the representation of the resource's state has been consumed, the requesting client may decide to modify that state. For example, if the state of the identified tennis court is that it is free on Saturday morning at 10:00 am, the client may decide to change that state to booked. The client sends another request to the resource, this time using a different aspect of the protocol (basically, aspect number 2). The resource handles this request and, obeying its business logic, makes the transition from its original state (free) to its new state (booked).

Keep in mind that in this scenario the client didn't have to know in advance that the resource is endowed with the capability to be in the free or in the booked state. The client merely engaged the resource in the conversation, and upon receiving the representation of the resource, realized that the resource's state could be changed. That realization prompted the client to try and request the resource state transition.

And if the resource didn't find any objection to that request (for example, if the client's request was deemed authoritative enough to request the state transition), the resource would obey and would transition to its new state (i.e. booked).
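The conversation above can be sketched in a few lines. The `TennisCourt` class and its slot names are entirely hypothetical; the point is that the client discovers the `free` state only from the representation it receives, and only then attempts the transition:

```python
class TennisCourt:
    """Server-side resource; its business logic is invisible to clients."""
    def __init__(self):
        self.slots = {"saturday-10am": "free"}

    def represent(self):                      # aspect 1: representation
        return dict(self.slots)

    def transition(self, slot, new_state):    # aspect 2: state transition
        # Business logic: only a free slot may be booked.
        if self.slots.get(slot) == "free" and new_state == "booked":
            self.slots[slot] = new_state
            return True
        return False

court = TennisCourt()

# Client side: request the representation first...
rep = court.represent()

# ...discover from the representation that the slot is free...
if rep["saturday-10am"] == "free":
    # ...and only then request the state transition.
    court.transition("saturday-10am", "booked")
```

Nothing in the client's code presumes advance knowledge of the court's capabilities; everything it acts on arrived in the representation.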

Representational Logic

It is very important to realize at this point that we are dealing with representational logic only. What does that mean? Well, when dealing with representations, we should not take things too literally.

For instance, if an authorized person identifies the tennis court and sends it the request to self-destruct, the resource may obey and destroy itself. But just because we may witness this event take place on the web doesn't necessarily mean that the resource actually got obliterated from the system. We have no way of knowing how the resource is implemented, and thus no way of knowing its true fate.

It may easily be that, upon receiving a request to destroy itself, the resource merely makes a state transition from 'represent yourself upon request' to 'ignore any requests to represent yourself'. But in reality the resource would still be there, with all its state intact.

Next time someone requests a representation of that tennis court, they may get an exception response stating that the tennis court does not exist. However, that may not be entirely true. The resource may actually still exist; it simply is no longer represented to the clients requesting it.
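One plausible (and entirely hypothetical) implementation of such a 'destroyed' resource: the destroy handler merely flips a visibility flag, subsequent representation requests answer 404, and yet the state survives intact underneath:

```python
class Court:
    def __init__(self, state):
        self.state = state
        self.visible = True        # governs representation, not existence

    def handle(self, method):
        if method == "DELETE":
            # "Self-destruct": just stop representing yourself.
            self.visible = False
            return 204, None
        if method == "GET":
            if not self.visible:
                return 404, None   # looks destroyed from the outside
            return 200, self.state
        return 405, None

court = Court({"saturday-10am": "free"})
court.handle("DELETE")
status, _ = court.handle("GET")    # the client sees 404,
                                   # but court.state is still intact
```

From the client's vantage point the two implementations (real deletion, hidden state) are indistinguishable, which is exactly the point of representational logic.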

Procedural to Object-Oriented to Resource-Oriented

It is possible to develop object-oriented software using procedural languages (such as C), but it is very impractical to do so. Still, a full-blown object-oriented system is implemented, under the hood, using lower-level procedural constructs.

In a similar fashion, we are about to make a transition from the object-oriented (or service-oriented) discipline to the resource-oriented discipline. But contrary to how some people interpret this transition, it does not imply supplanting the object-oriented way of doing things. Just as object orientation implies a procedural underpinning, the resource-oriented approach can easily imply an object-oriented underpinning.

For example, when a tennis court (the resource we've mentioned above) receives a request to make a state transition, the logic governing that transition may be implemented using a full-fledged object-oriented system. But the clients making that request absolutely need not know that. Resource-oriented systems expect the clients to know only a simple four-pronged protocol.
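A sketch of this layering, with hypothetical class names: the client speaks only the narrow uniform protocol, while a full object-oriented model does the actual work underneath:

```python
class BookingEngine:
    """Full object-oriented machinery, invisible to web clients."""
    def __init__(self):
        self._slots = {"saturday-10am": "free"}

    def book(self, slot):
        if self._slots.get(slot) != "free":
            raise ValueError("slot not available")
        self._slots[slot] = "booked"

    def snapshot(self):
        return dict(self._slots)

class CourtResource:
    """Resource-oriented facade: clients see only GET and PUT."""
    def __init__(self):
        self._engine = BookingEngine()

    def handle(self, method, body=None):
        if method == "GET":
            return 200, self._engine.snapshot()
        if method == "PUT":
            try:
                self._engine.book(body["slot"])
            except ValueError:
                return 409, None    # conflict: the transition was refused
            return 200, self._engine.snapshot()
        return 405, None

resource = CourtResource()
status, rep = resource.handle("PUT", {"slot": "saturday-10am"})
```

Swapping `BookingEngine` for any other implementation changes nothing for the client, which is the whole attraction of the facade.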


Thursday, August 3, 2006

Transitioning From Object-Oriented to Resource-Oriented Programming

Ten years ago I was hired by a fairly sizable corporation to help them transition from procedural to object-oriented software development. At that time, many businesses were going through the growing pains of adopting the object-oriented model, and most of them didn't find it very palatable.

Fast forward to the present, when we see that most businesses have made a successful transition to the object-oriented model of development. But have they made a successful transition to a more productive way of developing software?

Time for a Change (Again!)

Just as everyone is settling into a comfortable groove, there comes another jolt. Unexpected, to be sure, but nevertheless absolutely necessary. We're talking about the resource oriented development.

What is resource oriented development? It's the type of development that adopts one powerful abstraction (i.e. resource) and then enforces it onto any problem area.

This is not the first time that we've done this. In the Unix world, for example, we had file-oriented development. The Unix mindset adopted a very powerful abstraction named the file, and decided to enforce that abstraction onto any problem Unix developers might be trying to solve. It proved to be a very powerful way of doing things, as the ongoing success of the Unix platform attests to this very day.

A similar thing was happening in other areas with procedure-oriented development (also known as procedural development). A central abstraction (i.e. the procedure) was used as a philosopher's stone, applied to any and every kind of programming problem.

Then object-oriented development became the order of the day. This type of development adopted the notion of the object. Everything is an object. By enforcing this abstraction onto any problem domain, object-oriented development was able to make major inroads in the world of software development.

But today we're finding that objects, as an abstraction, are not cutting the mustard when it comes to the challenges of software development. We will now explain why that is.

Objects are an Extensible Protocol

The great thing about objects is that they behave. Unlike files and resources, which just passively sit there, objects actively behave. That was, and still is, their biggest selling point.

But that's also their biggest shortcoming. By being endowed with the ability to behave, objects have become unpredictable.

If we examine a typical object, such as, for example, a Customer, we see that it has a public interface. What that means is that we can send messages to it and it will respond to those messages by behaving. In less technical parlance, we usually say that objects are 'bundles of capabilities'.

The collection of these capabilities is what we call a protocol. A protocol exists in order to enable third parties to join the conversation. If a third party does not understand the protocol, or isn't even aware that the protocol exists, its attempts to join the conversation will be futile.

So where's the problem? The problem lies in the fact that the object-defined protocol is entirely arbitrary. This means that an innocent bystander (i.e. a third party) stands very little chance of knowing about that arbitrary protocol. It must learn about it, it must somehow discover it.
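A hypothetical Customer makes the point concrete. Every method name below is an invention of this one class; nothing about it could be guessed in advance, so each capability has to be discovered or learned by every would-be client:

```python
class Customer:
    """An arbitrary, home-grown protocol. The method names are
    inventions of this class alone -- no client could know them
    without discovering or learning them first."""
    def __init__(self, name):
        self.name = name
        self._orders = []

    def enroll_in_loyalty_scheme(self):
        self.loyal = True

    def register_purchase(self, amount):
        self._orders.append({"amount": amount, "paid": False})

    def tally_outstanding_invoices(self):
        return sum(o["amount"] for o in self._orders if not o["paid"])

c = Customer("Alice")
c.register_purchase(40)
owed = c.tally_outstanding_invoices()
```

Another team's Customer might call the same capabilities `addLoyaltyMember`, `recordSale`, and `getBalance`; the client must relearn the protocol each time.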

The very fact that the onus of discovering/learning the protocol is placed on the client (i.e. on the consumer) means that the solution is very expensive. In contrast, any client using the 'file' abstraction in the Unix world is free from any such expectations. An innocent bystander in the Unix world is only expected to know the standard, immutable protocol (which is, by the way, quite simple) in order to accomplish anything.
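Contrast that with the file abstraction, whose entire protocol fits in a handful of immutable operations. A minimal sketch, using a throwaway temporary file (the path and contents are arbitrary):

```python
import os
import tempfile

# The whole 'file protocol': open, write, read, close.
# No file, however exotic its contents, demands anything more.
path = os.path.join(tempfile.mkdtemp(), "note.txt")

f = open(path, "w")      # open for writing
f.write("hello, web")    # write
f.close()                # close

f = open(path)           # open for reading
contents = f.read()      # read
f.close()                # close
```

Whether the file holds a shopping list or a kernel image, the conversation is the same four moves.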

The limitlessly extensible protocol that the world of objects imposes on software developers is very detrimental to achieving elegant, acceptable solutions. Something must change.

Rediscovering the World of Resources

Resources are limitless. So how's that an improvement over the world of objects?

In resource-oriented development, we temper the infinite and unpredictable variety of resources with an extremely rigid and simple protocol. True, there are possibly countless resources we might encounter, but the way to engage in the conversation is extremely simple and immutable.

Basically, we have adopted the Hypertext Transfer Protocol (HTTP) for dealing with resources. This protocol defines an extremely stringent set of acceptable behaviors. In a nutshell, when dealing with resources, we can identify and locate them via the abstraction called the Uniform Resource Identifier (URI). Once identified and located, a resource is susceptible to only a handful of possible behaviors. And in practice, this set of acceptable behaviors boils down to four notorious ones:
  1. Deliver the representation of the resource
  2. Modify the resource
  3. Destroy the resource
In addition to the above three, it is also possible to add, i.e. create, a resource that never existed before.
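In HTTP terms, these behaviors correspond to a small, fixed vocabulary of verbs. A sketch of the mapping (GET, PUT, DELETE, and POST are the standard verbs; the operation labels on the left are just informal names for the list above):

```python
# The uniform protocol: every resource on the web answers the same
# fixed, worldwide-published vocabulary of verbs.
PROTOCOL = {
    "represent": "GET",      # 1. deliver the representation
    "modify":    "PUT",      # 2. modify the resource
    "destroy":   "DELETE",   # 3. destroy the resource
    "create":    "POST",     # the additional 'add' behavior
}

def request_line(operation, uri):
    """Build the first line of an HTTP/1.1 request for an operation."""
    return f"{PROTOCOL[operation]} {uri} HTTP/1.1"

line = request_line("represent", "/courts/1")
```

Because the vocabulary never grows, software written against it never needs to chase a moving target.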

This protocol is published worldwide, and shouldn't come as a surprise to anyone using the web to accomplish something.

The rigidity and immutability of this protocol offer a lot of power and robustness to the software written to accommodate it.