
From COM to COM+

COM+ currently encompasses two areas of functionality: a fundamental programming architecture for building software components (which was first defined by the original COM specification), plus an integrated suite of component services with an associated run-time environment for building sophisticated software components. Components that execute within this environment are called configured components. Components that do not take advantage of this environment are called unconfigured components; they play by the standard rules of COM. While you can use the COM+ programming model without the component services and run-time model, much of the power of COM+ is realized when these two parts are used together.

The COM+ programming architecture described so far provides a model for creating component software. For many developers, however, this is insufficient. In the typical corporation, for example, developers build business components that operate as part of a larger application, often using a client/server or three-tier approach. Developers expend a great deal of effort to build the simplest of components. Even if the goals of a business component are relatively modest, developers must create a robust and secure housing for it. If many clients connect to the component simultaneously, developers must ensure that only clients with the proper authorization can perform certain privileged operations using the component. Scalability is another concern if the component might be accessed by a large number of clients simultaneously.

Large systems need to anticipate client failures in the midst of complex operations involving the database server. The client might have been storing data locally and then updating data on the database server based on some local information. If the client fails in the middle of this type of operation, you cannot be sure of the integrity of the database without implementing some sort of transaction protocol. Finally, if the internal data structures of the component become corrupted, this corruption must be detected before the integrity of the database itself is compromised. You can see that a major problem with the basic model for component software is that developers are left to implement an enormous amount of functionality themselves—functionality that has little to do with the goals of the application itself.

Microsoft has realized that writing robust server-side systems requires intensive work. Threading and concurrency, security and administration, robustness, and the like are crucial to any distributed system, and developing software with these features seamlessly integrated requires tremendous effort. This effort is also completely unrelated to the actual processing done by the system. Microsoft SQL Server, for example, is a database server that deals with these issues in addition to its bread-and-butter work of processing SQL queries.

Out of this realization, Microsoft developed Microsoft Transaction Server (MTS), the first Windows-based implementation of a run-time environment that provides these services for software components. The run-time environment and component services offered by COM+ have evolved from ideas that originated with MTS. COM+ provides a robust run-time environment that deals with most of the issues facing developers of server-side systems and enables them to develop simple, in-process COM+ components containing the actual business-related functionality.

Although any in-process COM+ component can run in the COM+ run-time environment, components not designed specifically for execution within that environment (executable components, for example) cannot take full advantage of the services offered by COM+.

The fundamental architecture of COM+ imposes minimal overhead, but Microsoft realized that many developers of customized corporate applications could benefit from a standard set of application services. For example, developers are often confronted with similar challenges when building line-of-business applications, such as security, reliability, concurrency, and scalability. Instead of forcing developers to create their own solutions to these problems for each application, COM+ offers these built-in services to components at run time. Not all components need or want these services. In specialized applications where performance is of the utmost importance, the run-time overhead imposed by these services causes developers to reject them. In many standard applications, however, these services offer tremendous value to both the developer and the overall stability of the entire project.

Windows DNA: A Three-Tier Approach

Many new enterprise information systems are being developed to run on Windows 2000. To help developers take better advantage of the application services offered by Windows 2000, Microsoft coined the term Windows DNA—Windows Distributed interNet Applications Architecture. Windows DNA is the application development model for the Windows platform. Basically, Windows DNA offers a three-tier architecture based on COM+, as shown in Figure 1-3. Because Windows DNA provides a comprehensive and integrated set of services on the Windows platform, developers are free from the burden of building or assembling the required infrastructure for distributed applications and can focus on delivering business solutions. The goal of the three-tier approach is to separate the business logic from a client/server system by moving it to a middle tier that runs on Windows 2000. The resulting three-tier architecture consists of a presentation layer, business logic components, and the data services layer.


Figure 1-3. The Windows DNA Architecture.

Presentation

The client side of a client/server system typically encompasses the functionality of both the user interface and the business logic that drives the system, leaving only the database on the server side. This design leads to heavyweight client-side applications that tend to be tied to a particular operating system and can be difficult to deploy and support. In a three-tier architecture, the client is designed to be as lightweight as possible, normally handling only the user interface. Such a thin client might consist of forms designed in Visual Basic or perhaps only of HTML pages designed to run in a Web browser such as Internet Explorer.

Developing client-side applications composed solely of HTML pages is alluring to many corporations because of their platform independence and ease of distribution. Developers of applications that require a user interface richer than the one possible purely with HTML might consider using Dynamic HTML or scripting code (Internet Explorer supports both VBScript and JScript) or including ActiveX controls or Java applets in their Web pages. Like HTML, Java applets are platform-independent, and both Java applets and ActiveX controls offer automated distribution. Applications requiring an even more robust presentation layer can be built with full-fledged programming languages including Visual Basic, Java, and C++. While client programs built in HTML use the Hypertext Transfer Protocol (HTTP) to communicate with the Web server, applications built in Visual Basic, Java, or C++ typically make direct method calls to the business components running in the COM+ environment.

Business Logic

While the client/server architecture is relatively fixed, with the client-side and server-side components deployed on different computers, the business logic tier of a three-tier design allows for more flexible solutions. For example, the business logic of an application might be implemented as an in-process COM+ component designed to run in the process of the client application on the client side or in the process of a Web server on the server side. Alternatively, the business logic component might run in the COM+ environment on a third machine that is separate from both the client and the database server.

Data

The data tier of the Windows DNA model consists of database servers such as Microsoft SQL Server, Oracle, Sybase, and DB2, or any other database server that supports OLE DB or Open Database Connectivity (ODBC). Typically, COM+ components running in the middle tier use ActiveX Data Objects (ADO, the COM+ component that provides a high-level wrapper for OLE DB) to connect with and query the database. OLE DB makes it possible to access data from a wide variety of database servers, including legacy systems.

Component Services

Microsoft found that developers spend too much of their time writing housekeeping code—as much as 30 percent of the total time they spend building COM+ components. COM+ component services provide a standard implementation of services that are frequently needed by component developers, thereby freeing developers to concentrate on the business problem at hand. This should bring the ideas that originated with MTS to an even wider audience, which is important if COM+ is to fulfill its goal of becoming the component object technology of the future for Windows services. Figure 1-4 shows the evolution of services from COM to COM+.


Figure 1-4. The evolution of component services from COM to COM+.

Just-In-Time Activation

Although COM+ makes certain demands of your components, it also offers them much in the way of functionality. One major feature of COM+ is its ability to scale middle-tier components so that they can support hundreds or even thousands of simultaneous clients. A client that attempts to instantiate a COM+ object running in a COM+ environment receives a reference to a context object implemented by COM+—not a reference to the component's object (as shown in Figure 1-5). Only when the client later makes a method call into the component does COM+ finally instantiate the actual object. This technique, known as just-in-time activation, lets client programs obtain references to objects that they might not intend to use immediately, without incurring unnecessary overhead.

When implementing complex business logic components, one component can access other components, and those components can invoke still other components. Managing this chain reaction properly is a complex task. In COM+, the system-created context object that shadows each user object contains information that helps manage these complex relationships.


Figure 1-5. A client application using a configured COM+ component while it is transparently activated and deactivated.

COM+ also extends the COM model to allow early deactivation of an object. COM+ can deactivate a component even while client programs maintain valid references to that component. It does this by releasing all references to the object. This in turn causes properly built COM+ components to be destroyed when their internal reference count reaches zero. If the client requests services from an object that has been deactivated by COM+, that object is transparently reactivated. So while it might appear to a client process that it is using a single object from the time of creation until the time it releases the object, the client might in fact be working with many different instances of the same class. This ensures that the semantics of COM are not violated.

Just-in-time activation is a powerful resource-sharing concept because it enables COM+ to share the resources of the server more equitably among the active components. Imagine that a client process spends 10 percent of its time requesting services from a particular object. With the automated deactivation of objects running in COM+, the object is instantiated only 10 percent of the time instead of 100 percent. This can make a server machine far more scalable than if all objects remained active for the entire lifetime of their clients.
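The activation dance described above can be modeled in ordinary C++. This is a conceptual sketch, not real COM+ code: the names Account and ContextObject are invented, and the real context object is supplied by the COM+ run time. The point is that construction is deferred until the first method call, and an object that COM+ has deactivated is transparently re-created on the next call.

```cpp
#include <memory>

// Stands in for a configured component; tracks how many real instances exist.
class Account {
public:
    Account() { ++live_instances; }
    ~Account() { --live_instances; }
    int balance() const { return balance_; }
    static int live_instances;
private:
    int balance_ = 100;
};
int Account::live_instances = 0;

// What the client's "reference" actually points to in this model.
class ContextObject {
public:
    int balance() {
        if (!impl_) impl_ = std::make_unique<Account>();  // just-in-time activation
        return impl_->balance();
    }
    // COM+ may deactivate the real object while the client still holds us;
    // the next call to balance() reactivates it transparently.
    void deactivate() { impl_.reset(); }
private:
    std::unique_ptr<Account> impl_;
};
```

The client holds a single ContextObject for its entire session, yet across deactivations it may in fact be served by several distinct Account instances, just as the text describes.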

Object Pooling

To enhance the overall scalability of a distributed application, COM+ supports object pooling. When a client application releases an object that supports object pooling, instead of destroying the object, COM+ recycles it for later use by the same or another client. When a client later attempts to access the same kind of object, COM+ obtains an object from the pool if one is available. COM+ automatically instantiates a new object when the pool of recycled objects is empty. Objects that support pooling are required to restore their state to that of a newly manufactured object.

You should decide whether or not to support recycling of an object by weighing the expense of creating new objects against the cost of holding the resources of that object while it is stored in the object pool. An object that takes a long time to create but does not hold many resources when deactivated is a good candidate for recycling. Imagine an object that creates a complex memory structure on startup. If this type of object supports pooling, it can simply reinitialize the structure when deactivated and thereby increase performance at run time because the structure need not be re-created at each activation. With other objects, recycling might not be advantageous. For example, an object that is cheap to create and stores a lot of state for each client is not a good candidate for recycling because its state is not reusable by other clients.
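The recycling contract above, restoring a released object to the state of a newly manufactured one, can be sketched as a small pool. The Connection and Pool names are invented for illustration; the real COM+ pooling service is configured administratively rather than coded this way.

```cpp
#include <memory>
#include <vector>

// A pooled object must be able to scrub itself back to like-new state.
class Connection {
public:
    void reset() { state_ = 0; }          // restore to newly manufactured state
    void set_state(int s) { state_ = s; }
    int state() const { return state_; }
private:
    int state_ = 0;
};

class Pool {
public:
    std::unique_ptr<Connection> acquire() {
        if (pool_.empty())
            return std::make_unique<Connection>();  // pool empty: manufacture one
        auto obj = std::move(pool_.back());
        pool_.pop_back();
        return obj;                                 // recycled instance
    }
    void release(std::unique_ptr<Connection> obj) {
        obj->reset();                               // scrub per-client state
        pool_.push_back(std::move(obj));
    }
    std::size_t size() const { return pool_.size(); }
private:
    std::vector<std::unique_ptr<Connection>> pool_;
};
```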

Load Balancing

A distributed COM+ application can potentially have thousands of clients. In such cases, the just-in-time activation and object pooling features can fall short of providing the required application scalability. Therefore, the client workload should be distributed among multiple servers in a network. In COM+, load balancing is implemented at the component level. This means that a client application requesting a specific component first contacts a load balancing router. The router contains information about a cluster of machines belonging to the distributed application and balances the workload among these servers. Once the desired object has been instantiated on one of the servers in the application cluster, the client receives a reference directly to the component on the particular server. Any future requests by the client go directly to the component. While many load balancing algorithms have been devised, COM+ uses a simple response-time analysis algorithm to load balance servers. (In the future, COM+ might enable other load balancing algorithms to be installed.)
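The response-time analysis mentioned above reduces to a small routing decision. This is an invented sketch, not the router's actual implementation: the router steers each activation request to the cluster member with the best recent response time, after which the client talks to that server directly.

```cpp
#include <string>
#include <vector>

struct Server {
    std::string name;
    double avg_response_ms;  // recent response-time measurement for this server
};

// Pick the server in the application cluster with the lowest measured
// response time. Assumes the cluster is non-empty.
std::string route(const std::vector<Server>& cluster) {
    const Server* best = &cluster.front();
    for (const auto& s : cluster)
        if (s.avg_response_ms < best->avg_response_ms) best = &s;
    return best->name;  // the client's object is instantiated on this server
}
```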

Load balancing is also important in a failure scenario. If a client has a reference to a component on a server that goes down, COM+ automatically routes a client request from the server to another server in the application cluster. This failover support helps provide uninterrupted client service, increasing the overall stability of the system. Since the load balancing router itself represents a single critical failure point, you can use the Windows 2000 clustering service to set up one or more backup routers to be used in the event of a failure.

In-Memory Database

The In-Memory Database (IMDB), another powerful COM+ service, is a transient, transactional database-style cache that enhances the performance of distributed applications. Implemented as an OLE DB provider, the IMDB provides extremely fast access to data on the local machine. Client applications use high-level data access components such as ADO to create and access indexed, tabular data. These cached databases can be generated dynamically by the COM+ application or loaded from a persistent data store.

Queued Components

Queued components are a key feature of COM+ based on the Microsoft Message Queue Server (MSMQ) infrastructure included with Windows 2000. Using queued components, a client can easily execute method calls against a COM+ component, even if that component is off line or otherwise unavailable. The MSMQ system records and queues the method calls and automatically replays them whenever the component becomes available. Figure 1-6 illustrates how MSMQ is used to transfer data between the client and component.


Figure 1-6. A client application accessing an object via MSMQ.
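The record-and-replay behavior of queued components can be modeled as follows. This is a toy sketch with invented names; the real mechanism marshals method calls into MSMQ messages rather than std::function objects, but the shape is the same: calls made while the component is offline are queued, then replayed when it becomes available.

```cpp
#include <functional>
#include <queue>

// Stands in for the target COM+ component.
class Ledger {
public:
    void deposit(int amount) { total_ += amount; }
    int total() const { return total_; }
private:
    int total_ = 0;
};

// Records method calls instead of executing them immediately.
class QueuedProxy {
public:
    explicit QueuedProxy(Ledger& target) : target_(target) {}
    void deposit(int amount) {
        // Component may be offline: record the call for later replay.
        queue_.push([amount](Ledger& l) { l.deposit(amount); });
    }
    void drain() {  // component reachable again: replay recorded calls in order
        while (!queue_.empty()) { queue_.front()(target_); queue_.pop(); }
    }
private:
    Ledger& target_;
    std::queue<std::function<void(Ledger&)>> queue_;
};
```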

Transactions

Using COM+, you can build components that can automatically participate in a distributed transaction. While transaction processing is one of its important features, COM+ actually enlists the help of the Microsoft Distributed Transaction Coordinator (MS DTC) to perform the transaction management. Microsoft originally designed OLE Transactions, an object-oriented, two-phase commit protocol based on COM, and then implemented the specification in MS DTC, a transaction manager originally bundled with Microsoft SQL Server. (The OLE Transactions specification defines four fundamental interfaces: ITransaction, ITransactionDispenser, ITransactionOptions, and ITransactionOutcomeEvents.) However, Microsoft did not design the transaction management services provided by MS DTC solely for use by SQL Server. MS DTC is now an integrated service in Windows 2000, where its functionality is available to a wide variety of applications that require transaction management services.

In addition to the OLE Transactions specification, COM+ also supports the X/Open DTP XA standard. XA is the two-phase commit protocol defined by the X/Open DTP group. To allow COM+ to work with XA-compliant resource managers, the COM+ Software Development Kit (SDK) provides a special component that maps OLE Transactions to the XA standard. This makes it relatively straightforward for XA-compliant resource managers to provide resource dispensers that accept OLE Transactions from COM+ and then carry out the transaction with XA.

A transaction is typically initiated when an application is to perform some critical operation. The application initiates the transaction by notifying a transaction manager such as MS DTC. It then enlists the help of various resource managers to perform the work. (A resource manager is any service that supports the OLE Transactions specification, such as SQL Server.) Resource managers work in cooperation with MS DTC so that when the client application calls various resource dispensers, it carries with it information identifying the current transaction. (Resource dispensers are similar to resource managers, but without the guarantee of durability.)

Transaction processing is most often applied to database access because of the crucial nature of the information stored there. However, transaction processing is not limited to the database management system (DBMS) domain. COM+ itself provides two resource dispensers: the ODBC Driver Manager, which manages pools of database connections for COM+ components, and the Shared Property Manager. You can also develop add-on resource dispensers using the COM+ SDK.

A resource manager enlisted to perform work on behalf of the client application also registers itself with the transaction manager. The transaction manager then keeps track of that resource manager throughout the remainder of the transaction. In transaction processing parlance, a transaction ends when the client application either commits or aborts the transaction. An abort operation causes the transaction manager to notify all resource managers involved in the transaction to roll back any operations performed as part of that transaction. A rollback can be likened to a humongous undo operation. If the client application fails before committing or aborting the transaction, MS DTC automatically aborts the transaction.

If everything goes well and the client application requests that the transaction be committed, MS DTC executes a two-phase commit protocol to commit the operations performed within the transaction. (A two-phase commit protocol ensures that transactions that apply to more than one server are completed on all servers or none at all.) The two-phase commit protocol results from coordination between MS DTC and supported resource managers. First, MS DTC queries each resource manager enlisted in the transaction to determine whether they agree to the commit operation. The vote must be unanimous; if any resource manager fails to respond or votes to abort the transaction, MS DTC notifies all the resource managers that the transaction is aborted and their operations must be rolled back. Only if all resource managers agree in the first phase of the protocol does MS DTC broadcast a second commit message, thereby completing the transaction successfully. A client application using transactions must be guaranteed that concurrent transactions are atomic and consistent, that they have proper isolation, and that once committed, the changes are durable. These conditions are sometimes referred to as the ACID (atomic, consistent, isolated, and durable) properties of transactions.
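The unanimous vote described above can be captured in a few lines. This is a minimal model of the two-phase commit decision, with an invented interface rather than MS DTC's: phase one collects votes from every enlisted resource manager, and only a unanimous "yes" lets the coordinator broadcast the phase-two commit; any dissent (or silence, treated here as a "no" vote) rolls everything back.

```cpp
#include <vector>

// One enlisted resource manager's view of the transaction.
struct ResourceManager {
    bool vote_commit;   // phase 1: does this RM agree to commit?
    bool committed;     // set during phase 2 on success
    bool rolled_back;   // set when the coordinator aborts
};

// Returns true if the transaction committed, false if it was rolled back.
bool two_phase_commit(std::vector<ResourceManager>& rms) {
    // Phase 1: query every resource manager; the vote must be unanimous.
    for (const auto& rm : rms)
        if (!rm.vote_commit) {
            for (auto& r : rms) r.rolled_back = true;  // abort all operations
            return false;
        }
    // Phase 2: broadcast the commit message.
    for (auto& rm : rms) rm.committed = true;
    return true;
}
```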

Each object running in the context of COM+ can be set to one of four levels of transaction support. A component can declare that it requires a transaction, requires a new transaction, supports transactions, or does not support transactions. The context objects of components that do not support transactions are created without a transaction, regardless of whether their client is running in the scope of a transaction. Unless the component developer or system administrator specifies otherwise, this setting is the default. Components that do not support transactions cannot take advantage of many features of COM+, including just-in-time activation. (Such components are never deactivated while clients hold valid references because COM+ does not have enough information about their current state.) Thus the default value is not recommended and is primarily intended to support components not originally designed for use with COM+.

Most COM+ objects are declared as either requiring a transaction or supporting transactions. Objects that support transactions can participate in the outcome of the transaction if their client is running in the scope of a transaction. If the client is not executing within a transaction, no transaction is available for the COM+ object. Objects that require a transaction either inherit the transaction of the client or have a transaction created for them if the client doesn't have one. Objects that require a new transaction never inherit the client's transaction; COM+ automatically initiates a fresh transaction regardless of whether the client has one.
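The four declarative settings and their inheritance rules can be summarized as a single decision function. The identifiers below are invented for illustration (COM+ expresses these settings as catalog attributes, not a C++ API), but the logic follows the rules in the two paragraphs above.

```cpp
#include <optional>

enum class TxAttr { NotSupported, Supported, Required, RequiresNew };
using TxId = int;  // stand-in for a real transaction identifier

// Decide which transaction (if any) a new object's context receives, given
// the component's declared attribute and the caller's current transaction.
std::optional<TxId> choose_transaction(TxAttr attr,
                                       std::optional<TxId> caller_tx,
                                       TxId& next_id) {
    switch (attr) {
        case TxAttr::NotSupported: return std::nullopt;  // never transactional
        case TxAttr::Supported:    return caller_tx;     // inherit if present
        case TxAttr::Required:     return caller_tx ? caller_tx
                                       : std::optional<TxId>(next_id++);  // else create
        case TxAttr::RequiresNew:  return next_id++;     // always a fresh transaction
    }
    return std::nullopt;
}
```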

Role-Based Security

As discussed, COM+ was designed to save developers from having to code a robust server-side process for every component that runs in the middle tier of a three-tier architecture. To this end, COM+ automatically helps components manage threading, concurrency, scalability, transactions, and security. The COM+ security model leverages that of Windows 2000. However, to simplify security issues, it offers two types of security, declarative and programmatic. You can use both when you design a COM+ object.

The key to understanding COM+ security is to understand the simple but powerful concept of roles. Roles are central to the flexible, declarative security model employed by most COM+ objects. A role is a symbolic name that abstracts and identifies a logical group of users—similar to the idea of a user group in Windows 2000. When a COM+ object is deployed, the administrator can create certain roles and then bind those roles to specific users and user groups. For example, a banking application might define roles and permissions for tellers and for managers. During deployment, the administrator can assign users Fred and Jane to the role of tellers and assign executive management to the role of managers. Fred and Jane can access certain components in the banking package, while executive managers can access all components. You can even configure role-based security on a per-interface, rather than a per-component, basis. The administrator can completely configure declarative security without help from the component developer. This is infinitely simpler than the low-level COM security model.

Sometimes, however, you might want to configure certain parameters to limit the access of users in particular roles. Perhaps you want tellers to be able to authorize withdrawals and transfers of up to $5,000, but only a manager should be able to authorize those above $5,000. Declarative security as configured by the administrator does not offer the fine degree of control you need. When you develop a COM+ object, you can use roles to program specific security logic that either grants or denies certain permissions.
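The banking example above, combining declarative role membership with a programmatic limit check, might look like this. The role names and the authorize_withdrawal function are hypothetical; in real COM+ code the role test would go through the object context rather than a std::set.

```cpp
#include <set>
#include <string>

// Declarative part: the caller must hold the Teller or Manager role at all.
// Programmatic part: amounts above $5,000 additionally require Manager.
bool authorize_withdrawal(const std::set<std::string>& roles, int amount) {
    bool teller  = roles.count("Teller") > 0;
    bool manager = roles.count("Manager") > 0;
    if (!teller && !manager) return false;  // role-based access check
    if (amount > 5000) return manager;      // fine-grained business rule
    return true;
}
```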

COM+ Events

Distributed applications use COM+ events to advertise and deliver information to other components or applications without prior knowledge of the identity of the components or applications. Event models can be categorized as either internal or external. With internal event models, the event semantic is completely contained within the scope of the publisher and subscriber. This generally requires that the publisher and subscriber run simultaneously. (Connection points are an example of this type of event model.)

The COM+ event service implements an external event model. This model removes as much of the event semantics as possible from the publisher and subscriber. The subscriptions are maintained outside the publisher and the subscriber and are retrieved when needed. The publisher and subscriber are thus greatly simplified. In particular, the subscriber need not contain any logic for building subscriptions. In fact, an event subscriber is any component that implements a given event class interface. Anyone can build an event subscriber with no additional work. In a world where subscribers greatly outnumber publishers, this is a big advantage. Plus, because of the removal of the subscription logic from the subscriber, a third party such as an administrator can build subscriptions between publishers and subscribers that were built and sold independently.
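The external model can be pictured as a subscription store that neither side owns. This is a toy sketch with invented names (the real COM+ event service keeps subscriptions as event classes in the COM+ catalog): the publisher simply fires an event and never sees the subscriber list, while a third party wires subscribers in from outside.

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

using Handler = std::function<void(const std::string&)>;

// Subscriptions live here, outside both publisher and subscriber.
class SubscriptionStore {
public:
    // A third party (e.g., an administrator) registers a subscriber.
    void subscribe(const std::string& event_class, Handler h) {
        subs_.emplace_back(event_class, std::move(h));
    }
    // The publisher just fires; delivery is the store's job.
    void publish(const std::string& event_class, const std::string& payload) const {
        for (const auto& s : subs_)
            if (s.first == event_class) s.second(payload);
    }
private:
    std::vector<std::pair<std::string, Handler>> subs_;
};
```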

Another benefit of maintaining subscriptions outside the publisher is that the subscription's lifecycle need not match that of either the publisher or the subscriber. You can build subscriptions before either the publisher or the subscriber is up and running. This type of subscription, known as a persistent subscription, allows publishers to activate subscribers prior to calling them. (In this unusual relationship, the lines between clients and components are blurred.)