Plan 9 looks like it is attempting to achieve nirvana: a distributed computing system built on high-speed networks and shared resources, one that can take immediate advantage of newer, faster hardware.

CPU servers are assumed to be multiprocessors, and the authors assume that in the future additional CPUs will be as easy to buy and plug in as new disks are now. The CPU servers have no local disk storage, just a lot of memory; the only disks they can access are over the network. This seems like a flaw to me because I live in a bandwidth-limited world, but maybe it works in real life. The paper claims a 20 MB/sec DMA link to the disks. I don't know if that is enough bandwidth or not.
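To get a feel for the numbers, here is my own back-of-the-envelope arithmetic, not the paper's; the 10 Mbit/sec Ethernet figure is my assumption about typical hardware of the day:

    #include <u.h>
    #include <libc.h>

    /* Compare the paper's 20 MB/sec DMA link with an assumed
     * 10 Mbit/sec Ethernet: 20 MB/sec = 160 Mbit/sec, 16x Ethernet. */
    void
    main(void)
    {
        double dma = 20.0 * 8;    /* DMA link, Mbit/sec */
        double ether = 10.0;      /* Ethernet, Mbit/sec (my assumption) */

        print("DMA link: %g Mbit/sec = %gx Ethernet\n", dma, dma/ether);
        exits(nil);
    }

So the dedicated link is more than an order of magnitude faster than the shared network, which may be why diskless CPU servers work in their building.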

The file server just serves files. It makes daily backups and requires that a large (300 GB) write-once optical disk be connected to it. Once a day it stores a snapshot of the file system on this huge write-once disk; during the day it stores only incremental changes, on the magnetic disks attached to it. The disks are transparent to the clients. Personally, I don't think this is enough storage. This OS is written for a programming environment, and I know that such environments can generate huge quantities of data for regression tests, especially if one is writing a special-purpose compiler whose regression tests must include both the source and the binary output. The backup system may also lure users into a false sense of security. In New Jersey they don't have earthquakes; in California it is vital to have an off-site backup. With a system such as this one, it is too easy not to bother with one.
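One redeeming detail from the paper: the daily snapshots are themselves visible in the file system, under a date-structured tree. A minimal sketch of reading yesterday's version of a file, assuming the dump is attached at /n/dump as the paper describes (the date and file name here are made up for illustration):

    #include <u.h>
    #include <libc.h>

    /* Read a file as it existed in the daily dump of March 15, 1995.
     * The /n/dump/yyyy/mmdd/... layout follows the paper's convention;
     * the particular file is hypothetical. */
    void
    main(void)
    {
        int fd;
        long n;
        char buf[1024];

        fd = open("/n/dump/1995/0315/usr/rob/doc/review.ms", OREAD);
        if(fd < 0)
            sysfatal("open: %r");
        while((n = read(fd, buf, sizeof buf)) > 0)
            write(1, buf, n);
        exits(nil);
    }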

In the Plan 9 system, the terminals for individual users (the Gnot) were custom-built at Bell Labs. They are cheap and effective.

Networking is central to this operating system. Computing and file serving happen transparently -- except for performance. Over longer physical distances performance takes a big hit, because the back-to-back DMA controllers are not everywhere; traditional networking over Ethernet or Datakit is slower. This is not as much of a problem if the network is designed appropriately, with the CPU and file servers close to each other and the terminals farther away.

The name space resembles (to me) a system with an automounter that attaches remote services to the local file system so that, by default, they are accessed through local names. Access to the global file system is still permitted; to use it, one must give the globally qualified name.
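To make the analogy concrete, here is a minimal sketch using Plan 9's bind(2) call, assuming the remote machine's tree is already attached under /n (the machine name kremvax is hypothetical):

    #include <u.h>
    #include <libc.h>

    /* Splice a remote machine's binaries into the local /bin as a
     * union directory, searched after the local entries.  From here
     * on, exec'ing a program from /bin may transparently run a file
     * that lives across the network. */
    void
    main(void)
    {
        if(bind("/n/kremvax/bin", "/bin", MAFTER) < 0)
            sysfatal("bind: %r");
        exits(nil);
    }

The difference from an automounter is that this splice belongs to the process's own name space rather than to a global mount table.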

The Plan 9 window system, 8½, is implemented as a user-level file server. You can run the window system in a window. This is very strange to me; it is a side effect of the way the window system implements windows as little pieces of the file system. When you cat something to /dev/cons, it is not just an alias or a way of addressing a window: it is a real file, /dev/cons, with all the properties of a file. I don't yet see the advantages of this.
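For what it's worth, the mechanics are ordinary file I/O; a hedged sketch of a program printing to whatever window it runs in:

    #include <u.h>
    #include <libc.h>

    /* To a program running in a window, /dev/cons is an ordinary file
     * served by the window system; opening and writing it prints to
     * that window, with no window-system API involved. */
    void
    main(void)
    {
        int fd;

        fd = open("/dev/cons", OWRITE);
        if(fd < 0)
            sysfatal("open /dev/cons: %r");
        fprint(fd, "hello from this window\n");
        close(fd);
        exits(nil);
    }

Because each window is served its own /dev/cons, running the window system inside a window just layers one file server over another, which is presumably the point.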

Security is not addressed directly by Plan 9, but it does have one feature that may be detrimental to sysadmins living in the United States: "The contents of a file with read permission for only its owner will not be divulged by the file server to any other user, even the administrator." This is a big problem in my mind, considering the government's stance that ISPs and sysadmins are responsible for the material their users store and transmit (for example, caches of dirty pictures on LLNL systems). I personally think that privacy is better than government intrusion by court order, but I also believe in a sysadmin's and a company's right to read the current work of an employee who died in a tragic car accident.

The authors claim that the system is efficient, and it appears to be for that environment. I wonder about the flexibility of the environment: if the supercomputing resources available to those who work at Bell Labs are not available at a small college, will the system still be efficient?