SOAP and REST

I am glad that Havoc brings up the simplicity of jamming and pulling XML from a socket:

Miguel, that's cool, I know SOAP (with a library) doesn't involve a ton of application code. My point is more that (to me) it's just not worth learning about for this sort of thing. If you want to get a couple of pieces of data on an Amazon product you just do "get url contents" and "stuff into XML parser" and "grab the Item node" and that's about it. All programmers already know how to download a URL and pull out an XML node, and being able to try out requests in the browser by editing the address in the location bar is pretty useful. "REST" is just very conceptually simple and requires no ramp-up time, even if it involves a couple more lines of code.
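Havoc's three steps really are that short in code. A minimal sketch using only Python's standard library, with a canned XML string standing in for the downloaded Amazon response (the element names here are illustrative, not the real AWS schema; step 1 would be a single `urlopen` call):

```python
import xml.etree.ElementTree as ET

# Canned response standing in for what the Amazon REST API might return.
# Step 1, "get url contents", would be:
#   body = urllib.request.urlopen(url).read()
body = """<ItemLookupResponse>
  <Items>
    <Item>
      <ASIN>0596007124</ASIN>
      <Title>Head First Design Patterns</Title>
    </Item>
  </Items>
</ItemLookupResponse>"""

# Steps 2 and 3: stuff into the XML parser and grab the Item node.
root = ET.fromstring(body)
item = root.find(".//Item")
title = item.find("Title").text
print(title)
```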

Recently I have been considering the implementation of a system that would replace D-BUS and CORBA with a simple HTTP framework.

At the core of such a replacement is an activation and arbitration system running at a well-known location, for example: http://localhost:123/. This can be an Apache module or a custom-built HTTP server.

The daemon, very much like our existing activation systems (bonobo-activation and d-bus), has a list of services that can be started on demand and provides assorted services to participants.

When an application starts, it registers itself with the local arbitration daemon. It does this with a RESTful call, let's say:

http://localhost:123/registration?id=f-spot&location=http://localhost:9000/

Where "f-spot" in this case is the name of the application, and "localhost:9000" is the end point where F-Spot happens to have an HTTP server running that exposes the F-Spot RESTful API.
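The client side of that registration call is a one-liner on top of any HTTP library. A sketch assuming the daemon accepts an ordinary query string; `ARBITER` and `registration_url` are names made up for illustration:

```python
from urllib.parse import urlencode

# Well-known location of the hypothetical arbitration daemon.
ARBITER = "http://localhost:123"

def registration_url(app_id, location):
    # Build the registration call; the actual request would just be
    # urllib.request.urlopen(registration_url(...)).
    return "%s/registration?%s" % (
        ARBITER, urlencode({"id": app_id, "location": location}))

print(registration_url("f-spot", "http://localhost:9000/"))
```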

A client application that wants to communicate with F-Spot always contacts the well-known end-point:

http://localhost:123/app/f-spot/request

The arbitration daemon merely returns an HTTP 303 result pointing to http://localhost:9000/request. The HTTP client library already has support for this redirection.
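That redirect dance can be sketched end to end with Python's stdlib HTTP server and client; the handler classes are made up, and ephemeral ports stand in for the fixed :123 and :9000:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AppHandler(BaseHTTPRequestHandler):
    # Stands in for F-Spot's own HTTP server.
    def do_GET(self):
        body = b"f-spot says hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args): pass

app = HTTPServer(("127.0.0.1", 0), AppHandler)
app_port = app.server_address[1]

class ArbiterHandler(BaseHTTPRequestHandler):
    # The well-known end-point: merely answers with a 303 redirect.
    def do_GET(self):
        self.send_response(303)
        self.send_header("Location",
                         "http://127.0.0.1:%d/request" % app_port)
        self.end_headers()
    def log_message(self, *args): pass

arbiter = HTTPServer(("127.0.0.1", 0), ArbiterHandler)
arb_port = arbiter.server_address[1]

for srv in (app, arbiter):
    threading.Thread(target=srv.serve_forever, daemon=True).start()

# The client only ever contacts the arbiter; urllib follows the
# 303 to the application's server transparently.
reply = urllib.request.urlopen(
    "http://127.0.0.1:%d/app/f-spot/request" % arb_port).read()
print(reply.decode())
app.shutdown(); arbiter.shutdown()
```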

If F-Spot were not running, the arbitration daemon would look in a special directory that lists the available applications. The file could be called "f-spot.service" and it would describe how to launch f-spot, very similar to the way d-bus does it:

	[Service]
	Name=f-spot
	Exec=/opt/gnome/bin/f-spot
	

The request comes in on http://localhost:123/app/f-spot/request, the activator launches f-spot, f-spot registers itself using the previously discussed call, a redirect is then sent to the client, and you are done.
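The activator's side of that flow is mostly parsing the .service file and spawning the Exec line. A minimal sketch, assuming Python's configparser and an ini-style file exactly as above:

```python
import configparser

# The .service file from above; ini-style, so configparser reads it directly.
service_file = """\
[Service]
Name=f-spot
Exec=/opt/gnome/bin/f-spot
"""

cp = configparser.ConfigParser()
cp.read_string(service_file)
name = cp["Service"]["Name"]
exec_line = cp["Service"]["Exec"]
print(name, exec_line)
# The activator would now subprocess.Popen(exec_line.split()), wait for
# the application's registration call, and then issue the redirect.
```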

To use this system as a message bus, in the same spirit as d-bus, you merely connect and listen to another well-known end-point: http://localhost:123/bus/event-name. The connection is never closed by the server, and clients keep reading data from this stream.

To push information into the bus, a POST request is done to http://localhost:123/bus/event-name. The contents of the POST are then delivered to all the clients that are listening on that endpoint.
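A toy version of the bus fits in a page of Python: the GET handler registers a queue and streams from it forever, the POST handler fans the body out to every registered queue. The handler names and the one-line-per-message framing below are illustrative choices, not part of any real protocol:

```python
import queue
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

listeners = {}          # event path -> list of listener queues
lock = threading.Lock()

class BusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A listener: register a queue and stream from it forever.
        q = queue.Queue()
        with lock:
            listeners.setdefault(self.path, []).append(q)
        self.send_response(200)
        self.end_headers()
        while True:                      # the server never closes this
            msg = q.get()
            self.wfile.write(msg + b"\n")
            self.wfile.flush()
    def do_POST(self):
        # A pusher: deliver the POST body to every listener.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        with lock:
            for q in listeners.get(self.path, []):
                q.put(body)
        self.send_response(204)
        self.end_headers()
    def log_message(self, *args): pass

srv = ThreadingHTTPServer(("127.0.0.1", 0), BusHandler)
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/bus/photo-imported" % port
stream = urllib.request.urlopen(url)             # listener: stays open
urllib.request.urlopen(url, data=b"hello bus")   # pusher: POST
line = stream.readline().decode().strip()
print(line)
srv.shutdown()
```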

The format and protocol for the information that flows over a particular endpoint is up to its creator; users would have to follow that format. For example, it could be one line per message, or a URL pointing to the full message.

As for the security of this system, it should use a mechanism similar to what we have used in the past with Bonobo Activation: a random password is generated and stored on the file system in a well-known, private location (~/.config/private/something).
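Generating and storing such a secret is a few lines of code; the path and function name below are made up, and the file is created with owner-only permissions:

```python
import os
import secrets
import tempfile

# Hypothetical helper: generate the session password and store it where
# only the owner can read it (the real path would be under ~/.config/private/).
def write_session_password(path):
    password = secrets.token_hex(16)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(password)
    return password

path = os.path.join(tempfile.mkdtemp(), "something")
password = write_session_password(path)
print(open(path).read() == password)
```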

The security process is the only component that requires a little bit of code. The rest can be implemented with just an HTTP client, plus an HTTP server for those providing information. A client application would only need something like this:

	char password [256];
	FILE *f = popen ("http-get-key", "r");
	fgets (password, sizeof (password), f);
	pclose (f);
	

The benefits I think are multiple:

  • HTTP is a well known protocol.
  • There are plenty of HTTP client and server implementations that can be employed.
  • The protocol can be as simple or as complex as needed by the applications: from passing all the arguments in the URL, REST-style, to tool-aware SOAP.
  • HTTP works well with large volumes of data and large numbers of clients.
  • Scaling HTTP is a well understood problem.
  • Users can start with a REST API, but can easily move to SOAP if they need to.
  • HTTP includes content negotiation: clients can request a number of different formats and the server sends the best possible match; it is already part of the protocol. This colors the request with a second dimension, if they choose to use it.
  • Servers can take advantage of HTTP frameworks to implement their functionality on the desktop.
  • It is not another dependency on an obscure Linux library.
  • The possibility of easily capturing, proxying and caching any requests.
  • Redirects could be used to redirect requests to remote hosts transparently and securely.
  • And finally, makes the desktop "Web 2.0 ready" :-)

HTTP on its own has plenty of fantastic advantages that today we are bound to reimplement with things like D-Bus or live without them.

The implementation of this infrastructure is trivial. All I need now is a name for the project.

Posted on 26 Nov 2005 by Miguel de Icaza
This is a personal web page. Things said here do not represent the position of my employer.