Heard on .NET Rocks!: Pobar and Abrams on the CLR


I am the host of “.NET Rocks!”, an Internet audio talk show for .NET developers. Together with my co-host Richard Campbell, I interview the movers and shakers in the .NET community. We now have over 125 shows archived online, and we publish a new show every Monday morning. For more history of the show, check out the May/June 2004 issue of CoDe Magazine, in which the first column appeared. In each issue of CoDe Magazine I’ll highlight some of my favorite moments from a recent show.

Show #127 was a discussion with two distinguished members of the CLR team, Brad Abrams and Joel Pobar. These guys engaged us in a great discussion about memory management, the JIT compiler, and NGEN, to name a few great topics. The highlight of the show for me was listening to them answer the question, “If you had it to do over again (design the CLR, that is), what would you do differently?”

Carl Franklin: Let me ask both of you guys this, and I think this is what you are about to answer, Joel. If you had it to do over again, and maybe you do have to do it over again in CLR 2.0, but maybe you don’t, what would you do differently in terms of memory management? [I ask] because obviously you are making some assumptions, and whenever you make assumptions, somebody somewhere is going to need to do something opposite to that.

Joel Pobar: Well, you want to go first Brad?

Brad Abrams: Man, I could write a book on what I would do differently. Overall I am super proud of this platform, but I feel so much smarter now than when we started. We could have done more during development to think about the deterministic finalization issue. And the dispose pattern is a good one, but it’s complicated in a couple of respects, especially in complex inheritance hierarchies. So there are some things we could have done, [maybe] added some methods to Object, for example, to handle some of this. I mean, it was too late when Brian sent his mail right before we shipped V1, and it’s certainly too late to do that now.
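For context, the dispose pattern Brad mentions is the standard CLR idiom for deterministic cleanup. A minimal sketch (the class name and resources here are illustrative, not from the show) looks like this; the virtual Dispose(bool) overload is exactly the piece that gets tricky in deep inheritance hierarchies:

```csharp
using System;

// Illustrative sketch of the standard dispose pattern.
public class ResourceHolder : IDisposable
{
    private bool _disposed;

    public bool IsDisposed => _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // cleanup done; skip the finalizer
    }

    // Subclasses override this and must chain to base.Dispose(disposing).
    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // Release managed resources here.
        }
        // Release unmanaged resources here.
        _disposed = true;
    }

    // Safety net: runs non-deterministically if Dispose was never called.
    ~ResourceHolder() => Dispose(false);
}
```

A `using` block calls `Dispose` deterministically at scope exit, which is as close as the CLR gets to the deterministic finalization being discussed here.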

Carl Franklin: So you are saying this mail that Brian sent [said], “Okay this is fine, but here are some things that you need to understand.”

Brad Abrams: Essentially he said, “Here is why we can’t just go add deterministic finalization. Here are the issues that you might not know about.” And Brian was the development manager for the team. So it’s a very technical sort of response to the issue.

Carl Franklin: And Joel, how about you?

Joel Pobar: Well, I kind of follow Brad’s logic there. I mean, I think we could have done a little more in the deterministic finalization space especially for the C++ guys.

Carl Franklin: What exactly does that mean? I mean, do you mean you want to have some sort of hybrid system? Chris Sells actually wants to put reference counting in the CLR. He has petitioned you guys to do that. Is he still saying that?

Joel Pobar: I don’t know if he [is] still saying that.

Brad Abrams: Just a little plug here for Rotor. He actually did a Rotor project. We actually have the source code for the CLR available and anyone can just pick it up, and Chris did. He picked it up and he added reference counting to it.

Richard Campbell: As a weekend project, right?

Brad Abrams: [laughs] Yeah, a weekend project! It actually took him a little bit more than a weekend to go do it, but Joel can tell the story. [Chris] added a new IL instruction, so he can tell you about doing that. But anyway, Chris convinced himself that it is doable but the issues are more complex and subtle and especially impact performance on the overall system just for having it.

Carl Franklin: Would you say that some systems lend themselves better to reference counting and some better to non-deterministic finalization? Or is it pretty much across the board, all would be better with non-deterministic?

Brad Abrams: The way I think about it, if you have a constrained system where you know all the moving parts (you are not going to load any third-party code, you are not going to use anybody else’s component, you are going to write all the code yourself, you’re not going to leverage the OS), then you can write your own memory manager and it can be better than the GC. So, for example, I was talking to a guy and he said, “Look, my data comes in in 32-bit chunks and I process it and push it back out in 32-bit chunks, [and] this is just how it works.” For him, he would be better off with his own custom memory management technique.

Carl Franklin: Well, he could be using C++, right? In mixed mode?

Brad Abrams: Yes, so it’s just a little bit more work to go do that. But yeah, he could.

Joel Pobar: There are other things that we aren’t even really considering in terms of the benefits of the generational GC that we have, and I think that’s kind of a key thing. I mean, reference counting versus the different types of generational garbage collectors and that kind of thing. And I think the big positive with at least our generational garbage collector is that, [of] the three generations, the first generation usually aligns with the processor’s L2 cache. So you can basically collect all your [bits in] memory, stash it in one contiguous chunk, and upon execution, when the processor actually goes to reference that data, bam! It’s right there in L2. You don’t pay the price of going out to the bus, which is really kind of neat. But of course you pay the cost when you do actual generational garbage collection, where we have to actually do the mark, sweep, and compact thing. So when we toss everything out of generation 0 [and survivors get] promoted to the next generation, we have to compact it, and you pay a cost there. On the other side, memory allocation is nearly free; it’s very, very quick, because we have this pointer into memory and the pointer [says], hey, this is where you can allocate your next chunk, and away it goes. So the cost is really low for allocation, and [with] compacting you pay a bit of a cost there, but then you get the L2 benefit. You see where I am going with it?

Carl Franklin: Sure, it’s a trade-off.
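Joel’s allocate-then-promote story can be observed directly from managed code with the GC APIs. This sketch is mine, not from the show, and the exact behavior is runtime-specific, but on the desktop CLR a still-referenced object typically moves up one generation per collection:

```csharp
using System;

class Program
{
    static void Main()
    {
        // A fresh allocation lands in generation 0, the cheap
        // bump-the-pointer region Joel describes.
        var survivor = new object();
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 0

        // The object is still referenced when a collection runs,
        // so the GC promotes it instead of reclaiming it.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 2
    }
}
```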

Joel Pobar: There’s a whole host of things that are really interesting, and to be honest, before I even came here I thought, “Well, this is really cool stuff, I’d love to go and take a look at this.” But after coming here I know that we have got this smart guy, Patrick. He is a senior architect on the team. I mean, he has been doing garbage collectors for about 20-30 years. He has been doing it for the LISP community and all sorts of stuff. He wrote the majority of the code, and I think in terms of the amount of effort it would actually take to implement your own garbage collector, you may as well just use ours. Plenty of man-years and plenty of thought [have] gone into it to make it as generic and as quick as possible.

Brad Abrams: And it just keeps getting better. I was in Pat’s office the other day, and he is working on making the thing run better on multi-core machines, on 32-way machines, and that kind of scale-out. So, it’s just going to keep getting better.

Joel Pobar: You asked, “What could we have done to make it better?” Well, I think the whole value type / reference type thing is a little complex and a little... I hate to use the word “busted” because it’s not that bad. But I think we could have done some work to make that a little bit easier.

Carl Franklin: I like to think there are three classes of types: value type, reference type, and string. [laughs]

Brad Abrams: I like it. Yes, you like to treat string as a value type, don’t you?

Carl Franklin: What’s up with string? It’s a reference type that thinks it’s a value type.

Brad Abrams: It’s an immutable reference type that has special language support.

Carl Franklin: Well sure. It’s hard, [though] if you try to learn the difference between the two you have to leave string for the last appendix. “Okay now we will talk about string.”
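Brad’s point, that string is an immutable reference type with value-like behavior, shows up in a couple of lines. This sketch is mine, not from the show:

```csharp
using System;

class Program
{
    static void Main()
    {
        string a = "hello";
        string b = a;          // a and b reference the same string object

        a += " world";         // does NOT mutate the string; it builds a
                               // new one and repoints a at it

        Console.WriteLine(a);  // "hello world"
        Console.WriteLine(b);  // "hello" - unchanged, because strings are immutable
    }
}
```

Immutability is what lets string safely behave like a value type in assignments and comparisons even though it lives on the GC heap like any other reference type.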

Joel Pobar: Did you hear what we have done in NGEN (the Native Image Generator) around strings?

Carl Franklin: No I haven’t, and let’s define NGEN, and by the way, let’s consider the fact that maybe it doesn’t even work.


Brad Abrams: Okay we’ve got to go into that.

Carl Franklin: We’ve got to go into that because of the empirical evidence for me in testing NGEN, which basically makes native images that are pre-compiled, ahead of the JIT (Just-In-Time) compiler, and then load directly into memory. What I mean by “work” is: does it actually save time? Does it actually make things faster?

Richard Campbell: Well, the real question is, “What scenarios is it going to benefit?” They are not obvious scenarios.

Carl Franklin: In my scenarios it didn’t do anything, didn’t speed up anything.

Joel Pobar: Really? Is this 1.1 or 2.0?

Carl Franklin: This is 1.1.

Joel Pobar: Okay, if you are looking for the fastest start-up time and a reduced working set, then I think NGEN is going to help you out there. Basically NGEN removes the JIT completely. You don’t have to invoke the JIT. The JIT brings with it about 200K worth of working set. Just to define working set, [it’s] the amount of memory an application consumes. So it doesn’t bring that 200K worth of working set with it, nor does it have to spend time actually taking the IL and transforming that to x86 or x64.
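For reference, native images are generated with the ngen.exe tool that ships with the framework. The assembly name and version folder below are illustrative; the exact path depends on the installed runtime:

```shell
# Pre-compile MyApp.exe (and its dependencies) into native images;
# run from an elevated Visual Studio command prompt.
%WINDIR%\Microsoft.NET\Framework\v2.0.50727\ngen.exe install MyApp.exe

# List the native images currently installed
%WINDIR%\Microsoft.NET\Framework\v2.0.50727\ngen.exe display

# Remove the native image again
%WINDIR%\Microsoft.NET\Framework\v2.0.50727\ngen.exe uninstall MyApp.exe
```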

Carl Franklin: It does have to load it and doesn’t it still do the security checks or no?

Joel Pobar: For the JIT compiler?

Carl Franklin: Yeah.

Joel Pobar: I think, at least in 2.0, it does.


Brad Abrams: Yeah, I mean, it still does some of the checks at NGEN time, where you can actually verify at that time, and then there [are] some checks that can’t happen at compile time.

Carl Franklin: And of course other code that depends on it that isn’t NGEN-ed still has to be JIT-ed.

Brad Abrams: The other problem that we saw in 1.1 is that we had parts of the image still in IL and some in the NGEN image, so we would load both images. In fact, if you look, you see two copies of Mscorlib, for example, because it’s NGEN-ed. So one of the big things we did in Whidbey was to make that only one copy, and that really helps you.

Carl Franklin: Okay. I think you just answered this, but are all the framework assemblies NGEN-ed or just some of them?

Brad Abrams: Most of them are. The rationale of when you shouldn’t NGEN is actually pretty interesting to understand. If you are writing a high-throughput app, like a server-type app where what you need is raw throughput, then you shouldn’t NGEN it. You actually probably don’t care so much about working set, and you don’t care so much about start-up time. What you care about is requests per second, and it turns out that to get the magic of NGEN to work we need to put some indirections in. With the JIT we can just emit exactly the instructions we need, with no fix-ups needed. So, for raw throughput, even in .NET Framework 2.0, you will be better off not NGEN-ing.

Carl Franklin: It’s really the load time that we are saving, right?

Brad Abrams: Yeah, it’s load time and working set, that’s right.

Carl Franklin: Interesting.

Richard Campbell: These are fun constraints to deal with in development. It depends on your project as to which things are going to be more important to you. Performance is not just [the] number of instructions executed in a given second; how much memory you ate, how much I/O you executed around it, and so forth, those things matter too.

Brad Abrams: [You’re] absolutely right. One other thing on NGEN I just want to mention is this: we don’t quite have a ship plan yet, but one thing we are really missing in the NGEN scenario is profile-based optimization, so that we can actually reorder the basic blocks of the NGEN image based on actual usage scenarios. So the parts of your classes and methods that you use a lot will be early on in the image and together on pages, and that reduces the number of page misses.

Richard Campbell: Will we see technology like this from the Office team?

Brad Abrams: Office uses this, and in fact we use this internally, and we are working on a plan to get that going.

Richard Campbell: Out to us regular mortals.

Brad Abrams: Yeah exactly.

Richard Campbell: That would be cool.

Carl Franklin: All right, here is another question I’ve got to ask you guys. Code Access Security: is this like the great feature that nobody ever uses? Am I far off base here?

Brad Abrams: No, it is a cool feature. I think it’s one of the cool benefits of the CLR that you can, if your scenario demands, run in a semi-trusted environment. And that’s really important for some scenarios.

Carl Franklin: And by the way, just let me say [that] I agree with you. I think it’s incredibly important. But let’s get back to the issue of adoption.

Joel Pobar: But also the most misunderstood. Right?

Brad Abrams: Okay, go ahead Joel.

Joel Pobar: My personal world is very much command-line-compiler based and language-design sort of stuff, but from my experience when I go visit customers, go to these code camps and things like that, code access security gets ragged on quite a lot. And I have not really heard any justification for it other than, “Hey, it’s pretty complicated,” and, “There [are] way too many dials.”
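To give a flavor of those dials, here is a CLR 1.x/2.0-era sketch of the imperative CAS API. The path and file names are illustrative, and these stack-walk modifiers (Deny/RevertDeny) were deprecated in .NET 4 and are absent from modern .NET, so treat this as a period piece rather than current guidance:

```csharp
using System;
using System.IO;
using System.Security;
using System.Security.Permissions;

class Program
{
    static void Main()
    {
        // Deny file-write permission for everything downstream on this
        // call stack - one of the many CAS "dials" Joel mentions.
        var perm = new FileIOPermission(FileIOPermissionAccess.Write, @"C:\data");
        perm.Deny();
        try
        {
            // The CLR walks the stack, hits the Deny, and refuses the write.
            File.WriteAllText(@"C:\data\out.txt", "hi");
        }
        catch (SecurityException)
        {
            Console.WriteLine("write blocked by CAS");
        }
        finally
        {
            CodeAccessPermission.RevertDeny(); // restore normal permissions
        }
    }
}
```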

Carl Franklin: I have sort of a pet theory about this, and it’s just [this:] the reason that you see that is because it’s completely against the programmer’s nature to put restrictions on himself. Why would any programmer in their right mind want to reduce what they can do with their toolset instead of adding more functionality to it? And that’s essentially what you have to do. It’s an anti-ego thing, right?

Brad Abrams: “Well, I should be able to do this.”

Carl Franklin: It’s “Wear your seatbelt in the Ferrari.” That’s what it is. (laughs)

Richard Campbell: But all developers instinctively want to operate under administrator accounts. They eventually learn that seatbelts in Ferraris [are] useful.

Carl Franklin: Exactly. (laughs)

Brad Abrams: Exactly. You notice that the guys who are professional race drivers, who do it for a living, wear these big five-point seatbelts, right? They are serious about it.

Richard Campbell: And the helmet and the gloves and the Nomex suit, and it takes an army to get them out of the car... there is a reason for this. (laughs)

Brad Abrams: Yeah exactly. The professionals are all for the safety gear.

