

Heard on .NET Rocks!: Pobar and Abrams on the CLR

Carl Franklin interviews two distinguished members of the CLR team, Brad Abrams and Joel Pobar, resulting in a great discussion about memory management, the JIT compiler, NGEN, and more. They answer the question: "If you had it to do over again, what would you do differently?"





I am the host of ".NET Rocks!", an Internet audio talk show for .NET developers, online at www.dotnetrocks.com and msdn.microsoft.com/dotnetrocks. My co-host Richard Campbell and I interview the movers and shakers in the .NET community. We now have over 125 shows archived online, and we publish a new show every Monday morning. For more on the history of the show, check out the May/June 2004 issue of CoDe Magazine, in which the first column appeared. In each issue of CoDe Magazine I'll highlight some of my favorite moments from a recent show. Show #127 was a discussion with two distinguished members of the CLR team, Brad Abrams and Joel Pobar. These guys engaged us in a great discussion about memory management, the JIT compiler, and NGEN, to name a few great topics. The highlight of the show for me was listening to them answer the question, "If you had it to do over again, design the CLR that is, what would you do differently?"

Carl Franklin: Let me ask both of you guys this, and I think this is what you are about to answer, Joel. If you had it to do over again, and maybe you do have to do it over again in CLR 2.0, but maybe you don't, what would you do differently in terms of memory management? [I ask] because obviously you are making some assumptions, and whenever you make assumptions, somebody somewhere is going to need to do the opposite.

Joel Pobar: Well, do you want to go first, Brad?

Brad Abrams: Man, I could write a book on what I would do differently. Overall I am super proud of this platform, but I feel so much smarter now than when we started. We could have done more during development to think through the deterministic finalization issue. And the dispose pattern is a good one, but it's complicated in a couple of respects, especially in complex inheritance hierarchies. So there are some things we could have done, [maybe] added some methods to Object, for example, to handle some of this. I mean, it was too late when Brian sent his mail right before we shipped V1, and it's certainly too late to do that now.

Carl Franklin: So you are saying this mail that Brian sent [said], "Okay, this is fine, but here are some things that you need to understand."
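[Editor's note: the dispose pattern Brad refers to is the canonical IDisposable pattern. A minimal sketch follows; `ResourceHolder` is a hypothetical class, not code from the interview, and the `IsDisposed` property is added only for illustration.]

```csharp
using System;

// A minimal sketch of the dispose pattern: Dispose() for deterministic
// cleanup, a finalizer as a non-deterministic safety net, and a protected
// virtual Dispose(bool) so subclasses in an inheritance hierarchy can
// extend the cleanup.
public class ResourceHolder : IDisposable
{
    private bool disposed;

    // Exposed for illustration only.
    public bool IsDisposed => disposed;

    public void Dispose()
    {
        Dispose(true);
        // Cleanup already ran deterministically; the GC need not finalize.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // Release managed resources here.
        }
        // Release unmanaged resources here.
        disposed = true;
    }

    ~ResourceHolder()
    {
        // Safety net, run non-deterministically on the finalizer thread.
        Dispose(false);
    }
}
```

The complexity Brad alludes to is visible even in this sketch: every disposable class in a hierarchy must cooperate via `Dispose(bool)`, and callers must remember to call `Dispose` (or use `using`) to get deterministic behavior.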

Brad Abrams: Essentially he said, "Here is why we can't just go add deterministic finalization. Here are the issues that you might not know about." And Brian was the development manager for the team. So it's a very technical sort of response to the issue.

Carl Franklin: And Joel, how about you?

Joel Pobar: Well, I kind of follow Brad's logic there. I mean, I think we could have done a little more in the deterministic finalization space, especially for the C++ guys.

Carl Franklin: What exactly does that mean? Do you mean you want to have some sort of hybrid system? Chris Sells actually wants to put reference counting in the CLR. He has petitioned you guys to do that. Is he still saying that?

Joel Pobar: I don't know if he [is] still saying that.

Brad Abrams: Just a little plug here for Rotor: he actually did a Rotor project. We have the source code for the CLR available and anyone can just pick it up, and Chris did. He picked it up and he added reference counting to it.

Richard Campbell: As a weekend project, right?

Brad Abrams: [laughs] Yeah, a weekend project! It actually took him a little more than a weekend to do it, but Joel can tell the story. [Chris] added a new IL instruction, so he can tell you about doing that. But anyway, Chris convinced himself that it is doable, but the issues are complex and subtle, and just having reference counting there impacts performance across the overall system.

Carl Franklin: Would you say that some systems lend themselves better to reference counting and some better to non-deterministic finalization? Or is it pretty much across the board, all would be better with non-deterministic?

Brad Abrams: The way I think about it, if you have a constrained system where you know all the moving parts—you are not going to load any third-party code, you are not going to use anybody else's component, you are going to write all the code yourself, you're not going to leverage the OS—then you can write your own memory manager and it can be better than the GC. So, for example, I was talking to a guy and he said, "Look, my data comes in in 32-bit chunks and I process it and push it back out in 32-bit chunks, [and] this is just how it works." For him, he would be better off with his own custom memory management technique.

Carl Franklin: Well, he could be using C++, right? In mixed mode?

Brad Abrams: Yes, it's just a little bit more work to go do that. But yeah, he could.

Joel Pobar: There are other things that we aren't even really considering in terms of the benefits of the generational GC that we have, and I think that's kind of a key thing. I mean, reference counting versus the different types of generational garbage collectors, and that kind of thing. I think the big positive with at least our generational garbage collector is the three generations: generation 0 is usually sized to align with the processor's L2 cache. So you can basically collect all your [objects in] memory, stash them in one contiguous chunk, and upon execution, when the processor actually goes to reference that data, bam! It's right there in L2. You don't pay the price of going out to the bus, which is really kind of neat. Of course, you pay the cost when you do an actual generational garbage collection, where we have to do the mark, sweep, and compact thing. So everything that survives generation 0 gets promoted, we have to compact it, and you pay a cost there. On the other side, memory allocation is nearly free; it's very, very quick, because we have this pointer into memory, and the pointer says, "Hey, this is where you can allocate your next chunk," and away it goes. So the cost is really low for allocation, and [with] compacting you pay a bit of a cost there, but then you get the L2 benefit. You see where I am going with it?

Carl Franklin: Sure, it's a trade-off.
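[Editor's note: the promotion behavior Joel describes can be observed from managed code. `GC.GetGeneration`, `GC.Collect`, and `GC.KeepAlive` are real BCL APIs; the `GenerationDemo` class below is illustrative, and the exact generation numbers can vary with GC mode.]

```csharp
using System;

static class GenerationDemo
{
    // Records an object's generation when fresh, then after surviving one
    // and two collections; with the standard GC this is typically 0, 1, 2.
    public static int[] Run()
    {
        object o = new object();
        int g0 = GC.GetGeneration(o);   // fresh allocation: a cheap pointer
                                        // bump into generation 0

        GC.Collect();                   // survivors of gen 0 are promoted
        int g1 = GC.GetGeneration(o);

        GC.Collect();                   // survivors are promoted again
        int g2 = GC.GetGeneration(o);

        GC.KeepAlive(o);                // keep o reachable through both collects
        return new[] { g0, g1, g2 };
    }
}
```

Allocation stays cheap precisely because gen 0 is a contiguous region filled by bumping a pointer, as Joel describes; the compaction cost is paid only when a collection actually runs.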

Joel Pobar: There's a whole host of things that are really interesting, and to be honest, before I even came here I thought, "Well, this is really cool stuff, I'd love to go and take a look at this." But after coming here I know that we have got this smart guy, Patrick. He is a senior architect on the team. I mean, he has been doing garbage collectors for 20-30 years. He has been doing it for the LISP community and all sorts of stuff. He wrote the majority of the code, and I think in terms of the amount of effort it would actually take to implement your own garbage collector, you may as well just use ours. Plenty of man-years and plenty of thought have gone into it to make it as generic and as quick as possible.

Brad Abrams: And it just keeps getting better. I was in Pat's office the other day, and he is working on making the thing run better on multi-core machines, on 32-way machines, and that kind of scale-out. So it's just going to keep getting better.

Joel Pobar: You asked, "What could we have done to make it better?" Well, I think the whole value type / reference type thing is a little complex and a little—I hate to use the word "busted," because it's not that bad. But I think we could have done some work to make that a little bit easier.

Carl Franklin: I like to think there are three classes of types: value type, reference type, and string. [laughs]

Brad Abrams: I like it. Yes, you like to treat string as a value type, don't you?

Carl Franklin: What's up with string? It's a reference type that thinks it's a value type.

Brad Abrams: It's an immutable reference type that has special language support.

Carl Franklin: Well, sure. It's hard, [though]; if you try to learn the difference between the two you have to leave string for the last appendix. "Okay, now we will talk about string."

Joel Pobar: Did you hear what we have done in NGEN (the Native Image Generator) around strings?

Carl Franklin: No, I haven't. Let's define NGEN, and by the way, let's consider the fact that maybe it doesn't even work.

(laughs)

Brad Abrams: Okay, we've got to go into that.

Carl Franklin: We've got to go into that because of the empirical evidence from my own testing of NGEN, which basically makes native images that are pre-JITted (JIT: Just-In-Time compiler), or pre-compiled, and then loaded directly into memory. What I mean by "work" is: does it actually save time? Does it actually make things faster?

Richard Campbell: Well, the real question is, "What scenarios is it going to benefit?" They are not obvious scenarios.

Carl Franklin: In my scenarios it didn't do anything, didn't speed up anything.

Joel Pobar: Really? Is this 1.1 or 2.0?

Carl Franklin: This is 1.1.

Joel Pobar: Okay, if you are looking for the fastest start-up time and a reduced working set, then I think NGEN is going to help you out there. Basically, NGEN removes the JIT completely. You don't have to invoke the JIT. The JIT brings with it about 200K worth of working set. Just to define working set, [it's] the amount of memory an application consumes. So it doesn't bring that 200K worth of working set with it, nor does it have to spend time taking the IL and transforming it to x86 or x64.
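[Editor's note: for reference, NGEN is driven from the command line. The verbs below are the .NET Framework 2.0-era syntax, run from a Visual Studio or SDK command prompt; `MyApp.exe` is a placeholder assembly name.]

```shell
# Pre-compile an assembly's IL to a native image and install it
# in the native image cache.
ngen install MyApp.exe

# List the native images currently in the cache.
ngen display

# Remove the native image again.
ngen uninstall MyApp.exe
```

Once the native image is installed, the loader can map it directly instead of invoking the JIT at startup, which is where the start-up time and working-set savings Joel describes come from.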

Carl Franklin: It does have to load it, and doesn't it still do the security checks, or no?

Joel Pobar: For the JIT compiler?

Carl Franklin: Yeah.

Joel Pobar: No, I think at least in 2.0, I believe it does.
