Most framework-style software spends an appreciable amount of time dynamically loading code, and some of that code is executed quite frequently. I've recently been working on a web framework where URLs map to type names and methods, so I've been digging into this sort of pattern a great deal lately.
The canonical means to map a type name to a System.Type instance is via System.Type.GetType(string). In a framework which performs a significant number of these lookups, it's not clear what sort of performance characteristics one can expect from this static framework function.
Here's the source for a simple test pitting Type.GetType() against a cache backed by a Dictionary<string, Type>. All tests were run on a Core 2 Duo 2.2 GHz, .NET CLR 3.5, and all numbers indicate the elapsed CPU ticks.
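The original test source isn't reproduced here, but a minimal sketch of that kind of benchmark might look like the following. The iteration count and the type name are illustrative placeholders, not the values used in the original test, and `Stopwatch` ticks are used as the timing unit:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class TypeLookupBenchmark
{
    // Hypothetical values; the original test's iteration count and
    // type name aren't given in the post.
    const int Iterations = 1000000;
    const string TypeName = "System.Text.StringBuilder";

    static void Main()
    {
        // Warm up both paths so JIT compilation isn't included in the timings.
        Type.GetType(TypeName);
        var cache = new Dictionary<string, Type>
        {
            { TypeName, Type.GetType(TypeName) }
        };

        // Time repeated lookups through the framework call.
        var getTypeTimer = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
            Type.GetType(TypeName);
        getTypeTimer.Stop();

        // Time the same lookups against the dictionary cache.
        var cacheTimer = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            Type t;
            cache.TryGetValue(TypeName, out t);
        }
        cacheTimer.Stop();

        Console.WriteLine("Type.GetType():           {0} ticks", getTypeTimer.ElapsedTicks);
        Console.WriteLine("Dictionary<string, Type>: {0} ticks", cacheTimer.ElapsedTicks);
    }
}
```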
| Type.GetType() | Dictionary<string, Type> |
| --- | --- |
| 6236070640 | 51351056 |
| 6236193856 | 51440360 |
| 6237466224 | 51463192 |
| 6238210488 | 51583336 |
| 6240645816 | 51599480 |
| 6242089400 | 51687448 |
| 6244450392 | 51719808 |
| 6245201664 | 51757472 |
| 6248327048 | 51793696 |
| 6249253736 | 51800056 |
| 6250640672 | 51859704 |
| 6251133912 | 51885992 |
| 6253544768 | 51897264 |
| 6254336632 | 51946408 |
| 6255117872 | 52046512 |
| 6256060648 | 52106936 |
| 6256159176 | 52140984 |
| 6259453568 | 52391000 |
| **Average: 6247464250.67** | **Average: 51803928** |
Each program was run 20 times, and the resulting timing statistics were run through Peirce's Criterion to filter out statistical outliers.
You can plainly see that using a static dictionary cache is over two orders of magnitude faster than going through GetType(). This is a huge savings when the number of lookups being performed is very high.
Edit: Type.GetType is thread-safe, while a plain Dictionary<string, Type> is not, so I updated the test to verify that these performance numbers hold even when every dictionary lookup takes a lock. The dictionary is still two orders of magnitude faster. A concurrent program would need significant lock contention before Type.GetType became the better choice over a dictionary cache.
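The locked variant of the cache can be sketched as follows. This is a minimal illustration, not the test code from the post; it falls through to Type.GetType only on the first lookup of each name, and serializes all access with a single lock, which is the pattern available on .NET 3.5 (later framework versions could use ConcurrentDictionary.GetOrAdd instead):

```csharp
using System;
using System.Collections.Generic;

// A minimal thread-safe type-name cache. Every lookup takes the lock,
// matching the worst case measured in the updated test.
static class TypeCache
{
    static readonly Dictionary<string, Type> cache = new Dictionary<string, Type>();
    static readonly object gate = new object();

    public static Type Resolve(string name)
    {
        lock (gate)
        {
            Type t;
            if (!cache.TryGetValue(name, out t))
            {
                // Slow path: hit Type.GetType once, then remember the result.
                t = Type.GetType(name);
                cache[name] = t;
            }
            return t;
        }
    }
}
```

Even with the lock on every call, the fast path is a single dictionary probe, which is why it still beats Type.GetType by such a wide margin in the absence of contention.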