It’s now been more than two years since I wrote the article “
Most of the tips come from my real-world projects, but there is another source too.
I developed some of the tips while writing my book.
Without further ado, let’s get started.
Lesson Learned #1: Using Shared Methods
For years I’ve regarded the usage of modules (.BAS) in VB6 as not being purely object oriented. Nevertheless, I have used modules a lot, especially for helpers. Now, having worked a lot with Visual Basic .NET and its Shared methods, modules in VB6 do in fact feel very object oriented! If you call a sub in a module in VB6, this is how it could look if you prefix with the module name:
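For instance (the module and sub names here are made up for illustration):

```vb
'VB6: calling a sub in a module, prefixed with the module name.
OrderHelper.SaveOrder strOrderData
```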
If you call a Shared method in Visual Basic .NET, it could look like this:
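Again with hypothetical names, the call is prefixed with the class name instead:

```vb
'Visual Basic .NET: calling a Shared method, prefixed with the class name.
OrderHelper.SaveOrder(orderData)
```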
Shared methods and VB6 modules are more or less the same thing. For instance:
- You don’t need to do any instantiation before using them.
- You have to take care when using module-level/Shared data.
- You don’t get any context or interception overhead for COM+ when using them.
- You don’t have to worry about cleaning up instances.
I really like the idea of using Shared methods for my secondary COM+ layers in Visual Basic .NET. That is, the layer called by the client (named the Application layer in my architecture) will be built with ordinary classes that are instantiated and have members. However, the later layers, those used by the Application layer classes, will often be built with Shared methods. These layers are completely stateless anyway, and then Shared methods are a very good solution.
When I came to the insights above, I decided to favour modules for my later layers in VB6 too. The drawbacks with using Shared methods/modules for the later layers as proposed here are as follows: you can’t use user-defined interfaces for getting consistent code, and flexibility is lost because you can’t move a module to another COM+ application and call it directly from the Application layer in the first COM+ application. Apart from these, there are more advantages than disadvantages, for example, efficiency.
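As a sketch of what such a stateless later layer could look like in Visual Basic .NET (the class and member names are made up for illustration):

```vb
Imports System

'A stateless later layer built only from Shared methods.
Public Class OrderPersistence

    'Called as OrderPersistence.InsertOrder(...) - no instantiation
    'is needed, and there is no instance state to clean up afterwards.
    Public Shared Sub InsertOrder(ByVal customerId As Guid, _
                                  ByVal orderDate As Date)
        'Open a connection, call a stored procedure, close the connection.
        'Everything the method needs comes in as arguments.
    End Sub
End Class
```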
#2: Stored Procedures as the Only Way to Access the Database
The way to access SQL Server is by using stored procedures, which present numerous advantages, such as:
- Fewer roundtrips
- A small entry point to the database
- A better mechanism for security
- Efficiency (just in case you missed it before)
The only commonly referred-to drawback is the lack of portability of stored procedures between database platforms. This may be true, but in a way I think using stored procedures can actually help portability, as all your database code is put in one single place. I recently reviewed some code from a project where they had decided not to use stored procedures due to what they considered the portability problem. On the other hand, they had scattered PL/SQL (Oracle’s dialect) code all around the VB code, which is not easier to port, believe me!
I’ve been a fan of stored procedures since the early 1990s, but it’s not been so many years since I decided to use only stored procedures for communicating with the database. Now this is completely natural. Some things are a bit clumsy (such as sending arguments for IN-clauses), but still doable.
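The clumsy IN-clause case can, for example, be handled by passing the list as one comma-separated string and splitting it inside the stored procedure. A sketch (the procedure, table and column names are made up):

```sql
-- The caller sends the IN-list as a comma-separated string, e.g. '1,7,42'.
CREATE PROCEDURE a_Order_FetchByIds
    @Ids VARCHAR(8000)
AS
BEGIN
    DECLARE @IdTable TABLE (Id INT)
    DECLARE @Pos INT

    -- Split the comma-separated list into rows.
    SET @Ids = @Ids + ','
    SET @Pos = CHARINDEX(',', @Ids)
    WHILE @Pos > 0
    BEGIN
        INSERT INTO @IdTable (Id)
            VALUES (CAST(LEFT(@Ids, @Pos - 1) AS INT))
        SET @Ids = SUBSTRING(@Ids, @Pos + 1, 8000)
        SET @Pos = CHARINDEX(',', @Ids)
    END

    SELECT o.*
    FROM [Order] o
    WHERE o.Id IN (SELECT Id FROM @IdTable)
END
```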
#3: One Resource Manager, Use Local Transactions
This is actually not a recently learned lesson, but I find it so important that I have included it here anyway. COM+ transactions are distributed transactions, and that means they solve the problem of transactions when several resource managers (RMs) are involved. If you have only one RM in your application, using COM+ transactions is overkill. You’ll save resources and get higher throughput if you go for local T-SQL transactions instead.
OK, there is one more situation where COM+ transactions shine, apart from the case of multiple RMs. This is when your components have to interoperate with components that you don’t know much about, yet you still want your components to participate in the same transactions. (Moreover, COM+ transactions provide encapsulation and composition, since you expose less of the transaction semantics and you don’t have to send around connection objects.)
Because of the possible need for integration (and because having to add one more RM might happen sooner than you think), it’s always a good idea to program so that you can move from local T-SQL transactions to COM+ transactions as easily as possible, if need be.
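One way to keep that move easy is to let the stored procedure check @@TRANCOUNT, so it only starts a local transaction when no outer (for example COM+-controlled) transaction is already active. A sketch with made-up names:

```sql
CREATE PROCEDURE a_Order_Insert
    @OrderId UNIQUEIDENTIFIER,
    @CustomerId UNIQUEIDENTIFIER
AS
BEGIN
    DECLARE @TranCountAtEntry INT
    SET @TranCountAtEntry = @@TRANCOUNT

    -- Only start a local transaction if there is no outer transaction.
    IF @TranCountAtEntry = 0
        BEGIN TRAN

    INSERT INTO [Order] (Id, CustomerId)
        VALUES (@OrderId, @CustomerId)
    IF @@ERROR <> 0
        GOTO ErrHand

    IF @TranCountAtEntry = 0
        COMMIT TRAN
    RETURN 0

ErrHand:
    IF @TranCountAtEntry = 0
        ROLLBACK TRAN
    RETURN 1
END
```

If you later switch to COM+ transactions, the procedure runs unchanged inside the distributed transaction, since it never opened one of its own.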
#4: UNIQUEIDENTIFIER Is a Great Datatype for Primary Keys
If we assume you favour surrogate keys instead of natural keys in your database designs, you have several datatypes to choose from.
UNIQUEIDENTIFIER, or Globally Unique Identifier (GUID), is, in my opinion, a great datatype for primary keys. Yes, a GUID is large (four times the size of an INT), and that could affect the search times. Another disadvantage is that it’s hard to read and edit GUIDs when you browse the database tables. Apart from this, you get the following advantages:
- They are not too small for large tables.
- There is no risk that you might choose to expose them to users and that the users will learn a certain value.
- The database won’t have to maintain a counter. The work of assigning a key can be done at the client or middle tier.
- They will be unique, not only for the tables but for the complete database.
Since GUIDs are unique for the complete database, there are several advantages, the most important one being that you can construct a complete structure of, say, orders and their order rows (complete with values for the primary keys and the foreign keys) at the client or in the middle tier, before sending it all to the database.
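In Visual Basic .NET this could look something like the following sketch (the names are made up): the keys are generated in the middle tier, so the foreign keys can be wired up before the database is involved at all.

```vb
Imports System

'Assign primary and foreign keys in the middle tier, before anything
'is sent to the database.
Dim orderId As Guid = Guid.NewGuid()
Dim row1Id As Guid = Guid.NewGuid()
Dim row2Id As Guid = Guid.NewGuid()

'Both order rows can point at the order via orderId right away -
'no roundtrip to the database to ask for an IDENTITY value is needed.
```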
Microsoft has changed the implementation of how GUIDs are created, so now they no longer use the ID of the network card as one part of the GUID. Instead, the GUIDs are random values, so there is a theoretical risk of duplicate values. This risk is so small that in practical scenarios it’s not a problem.
#5: Separate Public and Private Stored Procedures
If you have started to work with Visual Basic .NET, I’m sure you find it hard to go back to VB6, but for most of us this is a must every now and then. What you see then is that VB6 is very primitive when compared to Visual Basic .NET. On the other hand, if you go from VB6 to T-SQL, the relative difference is much larger. T-SQL is outdated in many ways; for example, you can’t differentiate between public and private stored procedures. I like to expose as little as possible to the outside, because the less that is visible, the more I can change without the risk of breaking other code. Therefore, I have decided on a naming convention in order to distinguish the stored procedures that are public: I prefix them with a_. Of course this requires discipline, but contracts like this are pretty common. Furthermore, if the client doesn’t adhere to the contract, he is at risk of having his code broken when I decide to change a private stored procedure.
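With that convention, a public/private pair could look like this (both procedure names are hypothetical): only the a_-prefixed procedure is part of the contract.

```sql
-- Private helper: no prefix, free to change without warning.
CREATE PROCEDURE Order_FetchRows @Id UNIQUEIDENTIFIER AS
    SELECT * FROM OrderRow WHERE OrderId = @Id
GO

-- Public stored procedure (part of the contract): prefixed with a_.
CREATE PROCEDURE a_Order_Fetch @Id UNIQUEIDENTIFIER AS
    EXEC Order_FetchRows @Id
GO
```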
It’s extremely important to master T-SQL if you want to write scalable and efficient applications; therefore we’ll have to work hard to overcome the worst obstacles. In a forthcoming version of SQL Server it will be possible to write stored procedures in a managed language, such as Visual Basic .NET. I would also like to see Microsoft give T-SQL a face-lift, but this might not happen.
#6: Often Skip JIT
In the “
Nevertheless, I still think it’s a good idea to declare module-level variables at the client side for COM+ components. At least do so when you want to call a method more than once. You will then keep an instance alive at the server side, but the relative difference in memory consumption at the server side is very small compared to your COM+ component using JIT instead!
Also ensure that you always code as if JIT has been used. Otherwise you can’t re-declare so that you enable JIT without a lot of work.
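A sketch of what that can mean in VB6 (the server and class names are made up): the reference is module-level to avoid re-instantiation, but every call stays stateless, so enabling JIT later is only a matter of moving the declaration.

```vb
'VB6 client side. The module-level variable keeps the server-side
'instance alive between calls (no per-call re-creation).
Private mOrderService As SomeServer.OrderService

Private Sub SaveTwoOrders(ByVal FirstOrder As String, _
                          ByVal SecondOrder As String)
    If mOrderService Is Nothing Then
        Set mOrderService = New SomeServer.OrderService
    End If

    'Stateless calls: everything travels as arguments and nothing is
    'remembered between calls, so switching JIT on later only means
    're-declaring the variable locally.
    mOrderService.SaveOrder FirstOrder
    mOrderService.SaveOrder SecondOrder
End Sub
```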
#7: Remember the Difference Between Cheap and Expensive Roundtrips
Roundtrips are among the worst enemies of distributed applications. Hunting and reducing roundtrips is a sure way to increase performance. This is why I’ve been a roundtrip hunter for a long time, although I think I was hunting a bit blindly, because I tried to reduce all roundtrips, which is not important. What is important is hunting the expensive roundtrips. If we assume you have a design with three layers in one COM+ application, it doesn’t cost you much at all to make calls between those layers. On the other hand, each time a call needs to go out from the COM+ application to SQL Server, for example, or to another COM+ application (which is a server application), that is an expensive roundtrip to be avoided, if possible. To sum up, try to reduce roundtrips (or chatter) between processes and hosts as much as possible.
One explanation for why I earlier tried to reduce roundtrips between layers within a process was that I wanted to keep the flexibility of being able to split a COM+ application into several COM+ applications (of server type). I also wanted to be able to move some of the components to another machine. In reality this seldom happens, and if you find it absolutely necessary in the future, it would be better to redesign, because you can’t prepare for all possible situations without getting negative results now.
It is also better to clone than to partition a COM+ application, which is yet another reason why you won’t get expensive roundtrips between your layers in COM+.
#8: Autocomplete Is Great
In my article “
There is also little reason for me to use a module-level variable for keeping the ObjectContext. I use GetObjectContext instead when I need the ObjectContext. This, together with AutoComplete, helps to simplify the code structure enormously. Keep watching VB-2-The-Max in the coming weeks, because you will soon find an article where my updated code structure is discussed in more detail.
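In VB6, with AutoComplete checked for the method in Component Services, the pattern could look like this sketch (SaveOrder and the work it does are made up): a normal return votes to commit, and a raised error votes to abort, so no explicit SetComplete/SetAbort calls and no module-level ObjectContext are needed.

```vb
'VB6, "AutoComplete" enabled for this method in Component Services.
Public Sub SaveOrder(ByVal OrderData As String)
    On Error GoTo ErrHand

    '... do the work ...

    'Only if the context is really needed, fetch it on the spot:
    'Debug.Print GetObjectContext.IsInTransaction

    Exit Sub
ErrHand:
    'Re-raise so COM+ aborts the transaction on our behalf.
    Err.Raise Err.Number, Err.Source, Err.Description
End Sub
```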
#9: Use “Must Be Activated in Caller’s Context”
As you probably know, COM+ lets your instances live in contexts, and COM+ uses the context border as the place for adding interception. That is, when a call to your instance passes the context border, COM+ can intercept the call and do stuff before the call moves on to your component and at the return from the method. This is a great model, but as always, there are drawbacks too, the most obvious one being overhead. Each context takes quite a lot of memory, and interception eats CPU cycles. When you benefit from services, this is just fine, but when you have components that don’t need services it is a waste of resources.
In these cases you should check “Must be activated in caller’s context.” This means that if an instance of this object can’t be located in the caller’s context, you will get an error message. Otherwise the caller and the new instance will share a context, and there is no interception between those instances. You have also saved the memory of one context.
In order to succeed with the co-location, your component with “Must be activated in caller’s context” has to follow some rules. It must satisfy the following:
- Disabled COM+ transactions
- Disabled JIT
- Disabled events and statistics
And the application may not use component-level security. (Because of this last requirement, those components will often be put in a separate Library application.)
If you make a quick and dirty test, you will see that the instantiation time for “co-located” components is much lower, but it’s not often that you create thousands of instances of a component, so I didn’t use to think this would make much of a difference in real-world applications. That was before I was invited to a brainstorming meeting about increasing the throughput of a certain use case. When we re-configured some components so they could “co-locate,” the throughput increased fourfold!
#10: Some Scripts Are Good
Each and every time I work with ASP, I hate the script nature of it. That said, there are other types of scripts that I find good. The example I’m thinking about is letting your components generate a T-SQL script with several calls to stored procedures, and then sending this script to SQL Server for execution in one single roundtrip. Automatically generated scripts like these are great!
If you decide to use this methodology, you will have to watch out that you don’t fall into the trap of too many string concatenations in VB6. (In Visual Basic .NET, the StringBuilder class is terrific here.)
If you use this script technique, you will also solve the problem of using local T-SQL transactions. Most often I let the stored procedures control the transactions, but when a transaction has to span several stored procedures, the script technique comes in as a neat solution.
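A Visual Basic .NET sketch of the idea (the stored procedure names are made up, and connection is assumed to be an open SqlConnection): several stored procedure calls are gathered into one script, wrapped in one local transaction, and sent in one roundtrip.

```vb
Imports System
Imports System.Text
Imports System.Data.SqlClient

Dim orderId As Guid = Guid.NewGuid()
Dim rowId As Guid = Guid.NewGuid()

'Gather the calls for the whole use case into one script.
Dim script As New StringBuilder()
script.Append("SET XACT_ABORT ON ")
script.Append("BEGIN TRAN ")
script.Append("EXEC a_Order_Insert @OrderId ")
script.Append("EXEC a_OrderRow_Insert @RowId, @OrderId ")
script.Append("COMMIT TRAN")

Dim cmd As New SqlCommand(script.ToString(), connection)
cmd.Parameters.Add("@OrderId", orderId)
cmd.Parameters.Add("@RowId", rowId)

'One single roundtrip for the whole transaction.
cmd.ExecuteNonQuery()
```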
#11: Control Transactions from the “First” COM+ Layer
In the past I decided to control transactions in the last layer before I hit the database. (I call that layer the Persistence Access layer, but a more common name for it is the Data Access layer.) The idea was to get as short transactions as possible, and it works very well. The main problem is that it has huge effects on the design. It means that I have to ensure that all the data is sent to the method in the Persistence Access layer in one go. That often makes the design of previous layers less natural, and it means that I always need a controller in the Persistence Access layer that can receive all the data and then distribute it to primitive methods.
The solution to the problem is to let the first COM+ layer control the transactions instead. I call that layer the Application layer, and I have one class in the Application layer per use case. As a matter of fact, in my opinion transactions are an integral part of use cases. Oops, won’t that mean longer transactions and longer lock periods? Is a catastrophe coming? Nope: when I use my script solution (discussed in #10), I only gather information into the script during the complete use case, and the physical transaction isn’t started (and the locks aren’t taken) until I’m ready to send the script to the database. It’s the same with both COM+ transactions and local T-SQL transactions. A better design, with transactions just as short as before.
I have now presented several tips from lessons learned from a couple of years of work with COM+, VB6 and SQL Server. Hopefully you have found the tips useful.