IDL has to make up for the shortcomings of the rather weakly typed C programming language on which it is loosely based. C and C++ have a number of implementation-dependent features, including the size of the short, int, and long types. When you use a single compiler on a single computer, you needn't be overly concerned with internal data formats because data is handled in a consistent manner. However, a distributed environment involving multiple machines with different architectures can become a minefield of inconsistencies. For example, some compilers define the internal representation of an integer as 16 bits, while others use 32 bits. In order for IDL to define interfaces between programs that run on different machine architectures, implementation-dependent data types are unacceptable. For this reason, IDL's designers developed a strongly typed language that concretely defines the size of all base types in IDL. These base types are listed in the following table.
Base Type | Description |
---|---|
boolean | A data item that can have the value TRUE or FALSE |
byte | An 8-bit data item guaranteed to be transmitted without any change |
char | An 8-bit unsigned character data item |
double | A 64-bit floating-point number |
float | A 32-bit floating-point number |
handle_t | A primitive handle that can be used for RPC binding or data serializing |
hyper | A 64-bit integer that can be declared as either signed or unsigned |
int | A 32-bit integer that can be declared as either signed or unsigned |
long | A 32-bit integer that can be declared as either signed or unsigned |
short | A 16-bit integer that can be declared as either signed or unsigned |
small | An 8-bit integer that can be declared as either signed or unsigned |
wchar_t | A 16-bit wide-character type |
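As a minimal sketch of how these base types are used, consider the following hypothetical interface fragment (the ISensor name and its methods are invented for this example). Every parameter is declared with a fixed-size IDL base type, so the marshaling code that MIDL generates behaves identically regardless of the underlying machine architecture:

```
interface ISensor : IUnknown
{
    // Each parameter uses a fixed-size IDL base type.
    HRESULT ReadSample([in] short channel, [out] double *value);
    HRESULT GetSampleCount([out] hyper *count);
    HRESULT SetLabel([in, string] wchar_t *label);
}
```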
In addition, different machines might be designed around either a little endian or a big endian architecture, which determines the order in which bytes are stored in memory. The little endian architecture used by the Intel platform assigns the least significant byte of data to the lowest memory address and the most significant byte to the highest address. Processors that use the big endian architecture do the opposite. For example, the base-10 value 654 (0x028E in base 16) is represented in memory as 0x8E02 by an Intel CPU but as 0x028E by a Motorola CPU of the big endian variety. IDL uses the Network Data Representation (NDR) transfer format to ensure that network transmissions are independent of the data-type format on any particular computing architecture.
You can define enumerated types in IDL using the enum keyword, as shown in the next code fragment. Note the use of the [v1_enum] attribute, which directs the marshaling code generated by MIDL to transmit the enumerated type as a 32-bit entity; by default, enumerated types are transmitted as 16-bit values. Enumerated types defined in IDL are also added to the type library file generated by MIDL, so high-level languages such as Microsoft Visual Basic and Java can use this information to provide syntax completion for enumerated types used as method parameters.
```
interface IWeek : IUnknown
{
    typedef [v1_enum] enum DaysOfTheWeek
    {
        Monday, Tuesday, Wednesday, Thursday,
        Friday, Saturday, Sunday
    } DaysOfTheWeek;

    HRESULT Test(DaysOfTheWeek day);
}
```
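Once defined, the enumerated type can be used like any other IDL type. The following hypothetical addition to the IWeek interface body (the Appointment structure and GetNextAppointment method are invented for illustration) uses it as a structure member that is then returned through an output parameter:

```
typedef struct tagAppointment
{
    DaysOfTheWeek day;    // marshaled as a 32-bit value because of [v1_enum]
    long hour;            // fixed-size IDL base type
} Appointment;

HRESULT GetNextAppointment([out] Appointment *pAppointment);
```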