If you look back at how we did computing in the 1960s, you’ll find that there used to be one big server, and users accessed it via dumb terminals. All of a user’s applications and data resided at the server. Dumb terminals, consisting of a VDU (visual display unit) and a keyboard, took character input from the user and sent it over to the server for processing. The terminals then displayed the character output from the server to the user.
As the prices of computer hardware dropped, we saw the PC era, when every user had his own personal computer. Now a user’s applications and data resided on his own machine. We saw many powerful applications in this era.
Then the internet came up in a big way. Today, the applications and data have moved back to the server. The clients use web browsers to access their applications. The applications at the server emit HTML, which the browsers display, much like the dumb terminals that displayed character output from the server. When you click on something, the browser tells the server that you clicked on that thing; the server then does the required processing, depending on the application, and again emits HTML, which the browser displays to you. Isn’t the browser today what the dumb terminal was yesterday? Isn’t history repeating itself?
The reason the applications and data moved back to the server with the advent of the internet is the inherent heterogeneity of the internet environment. On the internet, a particular website is accessed from a variety of computers with varying architectures. Hence, if the services of the website are to be made available to all of those computers, it only makes sense to do the processing at the server and use some common language to communicate the results and take input from those computers. That language is HTML, which both the server and the clients understand.
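The cycle described above can be sketched in a few lines. This is a toy model, not a real web server: the event dictionary and function name are illustrative assumptions, and the only point is that the server does all the processing and its entire contract with the client is the HTML it emits.

```python
# Toy sketch of the browser-server cycle: the client reports an event,
# the server processes it, and the server replies with HTML that any
# browser on any architecture can render. All names are illustrative.

def handle_request(event: dict) -> str:
    """Server-side processing: turn a client event into an HTML page."""
    if event.get("action") == "click":
        result = f"You clicked on {event.get('target', 'nothing')}"
    else:
        result = "Welcome"
    # HTML is the common language both sides understand.
    return f"<html><body><p>{result}</p></body></html>"

# The 'browser' side: send an event, display whatever HTML comes back.
page = handle_request({"action": "click", "target": "the Play button"})
print(page)
```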
But not only does this require the servers to be very powerful; it also leaves the computing power of the client devices largely unutilized.
Which makes me wonder, what will the future be like?
As computer hardware continues to become cheaper, we can expect a single user to be surrounded by a variety of computing devices. He will want to access his applications and data from all of those devices. The future definitely belongs to an architecture that makes this possible.
One solution, widely adopted today, is to have the application reside at the server and be accessed via a web browser. But imagine playing a cricket or baseball game this way. By the time you actually see the ball on your device’s screen, it might already have hit the wickets. My point is that the latency inherent in a network environment will leave you irritated and frustrated.
A better solution is to move the game application to your device and let it execute there. This requires the game application to be written in a language that all devices in the world understand.
But who should decide where to execute the application: at the server or at your device? The simplest solution would be to make the user decide, but it would be great if the system could take this decision on its own.
Today, we have Java applets: applications which the browser obtains from the server and executes on the client’s computing device. What I’m talking about is an architecture that decides on its own whether to send the application as a Java applet to execute at the client’s device, or to execute it at the server and use HTML to communicate the results to the browser.
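How might such an architecture decide? One can imagine it weighing, per request, the client’s capabilities against the latency of the link. The rule below is a purely hypothetical sketch: the inputs, the threshold, and the function name are all my assumptions, not part of any existing system.

```python
# Hypothetical decision rule for the architecture described above: given
# rough measurements, choose whether to ship the code to the client (as a
# Java applet would be) or run it at the server and send back HTML.
# All names and the 100 ms threshold are illustrative assumptions.

def choose_execution_site(round_trip_ms: float,
                          client_can_run_code: bool,
                          interactive: bool) -> str:
    """Return 'client' or 'server' for a given application request."""
    if not client_can_run_code:
        return "server"       # the client can only render HTML
    if interactive and round_trip_ms > 100:
        return "client"       # network latency would ruin the experience
    return "server"           # default: keep the logic centralized

# A fast-paced game over a slow link should move to the client,
# while a mostly static application can stay at the server.
print(choose_execution_site(250, True, True))    # -> client
print(choose_execution_site(250, True, False))   # -> server
```

The point of the sketch is only that the decision can be automated once the system has even crude estimates of latency and client capability, instead of being pushed onto the user.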
The basic idea here is code mobility, and this realization is what got me so excited when I wrote EtherYatri.NET, a mobile-agent toolkit for the Microsoft .NET platform.