Closing the Connection
Two golden rules characterize any code working with connections. First: open the connection as late as possible. Second: close the connection as early as possible. You want your application to work with connections for the shortest time possible.
When you close a connection, it is returned to the connection pool and made available for reuse; the physical connection to the database is not severed. If you disable connection pooling, the physical connection is closed instead. Note that you must explicitly close the connection to make it return to the pool; do not rely on the garbage collector to free connections. In fact, the connection is not necessarily closed when its reference goes out of scope. The garbage collector merely destroys the .NET wrapper object that represents the physical connection, which by no means implies that the underlying connection is closed.
Connections are released to the pool when you call methods such as Close or Dispose. A connection object is removed from the pool only if its lifetime has expired or a serious error has occurred.
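The open-late/close-early rules, plus a guaranteed return to the pool, are typically expressed with a C# using block, which calls Dispose (and therefore Close) even if an exception is thrown. A minimal sketch, assuming a local SQL Server instance and a made-up connection string:

```csharp
using System.Data.SqlClient;

// Hypothetical connection string; pooling is on by default.
string connString = "Server=(local);Database=Northwind;Integrated Security=SSPI;";

using (SqlConnection conn = new SqlConnection(connString))
{
    SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn);
    conn.Open();                      // open as late as possible
    int count = (int)cmd.ExecuteScalar();
}                                     // Dispose -> Close: the connection returns to the pool here
```

Because the using block scopes the connection to the few statements that need it, the connection spends the minimum possible time out of the pool.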
Pooling and Security
The real advantage of connection pooling comes when programmers use the same connection string for all data access tasks in their applications. However, this approach may clash with other characteristics of the application. For example, using a single connection string makes it hard to manage security at the database level. On the other hand, if you give each user his or her own connection string (i.e., you impersonate user accounts) to access the database, you end up with many small pools and few reused connections. As often happens in life, the solution lies somewhere between these opposite positions. You could use a common set of accounts and modify your stored procedures to take an extra parameter that indicates the user's identity. Based on the identity passed in, each stored procedure could perform different tasks. This is an instance of a more general problem: determining the identity (or identities) to use to perform data access.
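This middle-ground scheme can be sketched as follows: the connection string is shared (so one pool serves everybody), and the caller's identity travels as an extra stored procedure parameter. The procedure name, parameter name, and variables below are invented for illustration:

```csharp
using System.Data;
using System.Data.SqlClient;

// sharedConnString and currentUserName are assumed to be defined elsewhere.
// All users share one connection string, hence one connection pool.
using (SqlConnection conn = new SqlConnection(sharedConnString))
using (SqlCommand cmd = new SqlCommand("GetOrders", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    // The user's identity is data, not a credential: the stored
    // procedure can branch on it to return role-appropriate rows.
    cmd.Parameters.AddWithValue("@UserIdentity", currentUserName);

    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // process results...
    }
}
```

The design trade-off is that authorization logic moves from database accounts into the stored procedure, in exchange for a single, well-reused pool.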
You can access data through a variety of accounts; the final choice is determined by the data access model in use. Overall, I suggest three possible approaches:
- The process identity of the calling process
- One or more service identities
- The original caller's identity (impersonation)
Let's see what happens if the process identity is used. This scenario is particularly common among ASP.NET applications. Here, the application must take advantage of the Windows authentication model to connect to SQL Server.
The idea is that the account of the currently logged-on user is silently passed to SQL Server, and no explicit user ID and password (nor any other sensitive information) is embedded in the connection string. The main advantage of this approach is that you deal with only one security model (the Windows security model) and inherit features such as account lockout and automatic password expiration that can only increase the overall level of security of the system. Sensitive data is still passed, but that happens outside the application's control and, more importantly, the data travels as an encrypted stream.
To inform SQL Server that it has to retrieve the user's credentials from Windows, you set either the Integrated Security or the Trusted_Connection attribute to true in the connection string. You'll notice that SQL Server needs more time to handle a trusted connection; however, the performance degradation is negligible compared to the overall execution time of any SQL command. Also note that SQL Server must be configured to run in either Windows Authentication Mode or Mixed Mode for trusted connections to succeed.
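In practice, a trusted connection string simply omits credentials; the server and database names below are placeholders:

```csharp
using System.Data.SqlClient;

// No user ID or password: Windows credentials are passed implicitly.
// "Integrated Security=SSPI" and "Trusted_Connection=yes" are equivalent forms.
string connString =
    "Server=MyServer;Database=Northwind;Integrated Security=SSPI;";

using (SqlConnection conn = new SqlConnection(connString))
{
    conn.Open();   // authenticated via the calling process's Windows account
}
```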
With regard to ASP.NET applications in particular, you create your data-tier components to use trusted connections to perform data-driven operations. The process account (i.e., the ASPNET or NetworkService account) must be mirrored at the database level and granted the needed privileges. This approach has no negative impact on performance or scalability, but it is ideal only when one fixed identity can serve all database-related requests. If the requirements of the application dictate that more identities be used, a different approach is needed.
The second approach is based on one or more service identities. The user is authenticated at the gate (i.e., IIS/ASP.NET) and assigned a role. A role is a string that indicates the part the account plays within the context of the application. Each role is bound to a collection of privileges and permissions. The key point, though, is that neither the database nor the operating system checks these permissions. Instead, the middle tier of the application itself guarantees that calls to the database are valid and authorized according to role membership. In other words, the middle-tier component performs database operations only if the user's role enables the task. Once the call reaches the data tier, any involved components correctly assume that the user is legitimate. The data-tier components then look at the role and impersonate an implicit built-in account configured to access the tables and stored procedures defined for that role. Of course, the account is given database permissions to operate according to its privileges. Each role is associated with a unique account. As a result, a few accounts serve all users, thus ensuring both effective connection pooling and security.
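The role-based scheme can be sketched as a simple role-to-connection-string map in the data tier. The role names, account names, and helper below are invented for illustration; the elided passwords would come from protected configuration:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Hypothetical mapping: each application role maps to one fixed
// service account, so all users in a role share the same pool.
static readonly Dictionary<string, string> RoleConnections =
    new Dictionary<string, string>
{
    { "Reader", "Server=MyServer;Database=Shop;User ID=svcReader;Password=...;" },
    { "Editor", "Server=MyServer;Database=Shop;User ID=svcEditor;Password=...;" }
};

static SqlConnection GetConnectionForRole(string role)
{
    // The middle tier has already verified the user's role membership;
    // here we only pick the service account bound to that role.
    return new SqlConnection(RoleConnections[role]);
}
```

With two roles there are only two pools, regardless of how many users the application serves.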
The third approach is for the circumstance in which limiting the number of accounts is impossible because your data-layer logic requires user-level authorization. In this case, you need to impersonate the original caller when accessing the database. Database objects (i.e., stored procedures, database tables) determine what a user can and cannot do. In ASP.NET applications, you use impersonation together with trusted connections to maintain a high level of security. Windows Forms applications, in contrast, can simply use trusted connections under the current user's identity. User-level authorization affects the overall scalability of the application because it defeats connection pooling; you should not take this route unless strictly necessary. In general, a "trusted model" based on roles and a fixed number of user-independent accounts alleviates this issue while maintaining a certain association between users and functions.
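In ASP.NET, impersonating the original caller is typically switched on in web.config; combined with a trusted connection, each request then reaches SQL Server under the caller's own Windows account. A configuration sketch:

```xml
<!-- web.config: authenticate via Windows and impersonate the caller -->
<configuration>
  <system.web>
    <authentication mode="Windows" />
    <identity impersonate="true" />
  </system.web>
</configuration>
```

Because every distinct Windows account produces its own pool, this setting is precisely what makes per-user impersonation costly for pooling.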