
Optimizations

Postby torial » Wed Feb 06, 2013 11:52 am

Hi Charles,

I noticed you've been doing a lot of optimizations in the code branch, and was wondering whether, when you get to a stopping point, you'd be willing to share some lessons learned.

Thanks,
Sean
torial
 
Posts: 229
Location: IA

Re: Optimizations

Postby Charles » Sat Feb 09, 2013 3:04 pm

I understand your request because I would have liked the same thing from the Mono guys. They wrote "You might want to keep a close eye on the memory consumption and on the method invocation counts. A lot of the performance gains in MCS for example came from reducing its memory usage, as opposed to changes in the execution path." ... but never gave a concrete example of reducing memory usage.

Now that I have optimized on both Mono and .NET, I can say that the experiences are fairly different and the approach you take will split along that divide.

For Mono, reducing memory usage is in fact really important. I presume that's because it spends too many CPU cycles on garbage collection. Your main sources of info for Mono are:

http://www.mono-project.com/Performance_Tips
http://www.mono-project.com/Profiler
http://www.mono-project.com/Profile

I used the standard Mono profiler and examined both the memory section and the calls section to determine what the fattest parts were. Many of the objects created were components of their regex engine (the Cobra tokenizer uses regexes heavily). Previously, I had found that hand-written code was much faster than regexes (in both .NET and Mono) and generally would not allocate many objects. So most of my recent optimizations were replacing RegexTokenDef instances with custom TokenDef subclasses of my own creation.
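To illustrate the idea (this is a minimal sketch in Python, not the actual Cobra compiler code; the class and method names are hypothetical): a regex-backed token definition pays for a Match object on every attempt, while a hand-written matcher for a simple token can scan characters directly and allocate almost nothing.

import re

class RegexTokenDef:
    def __init__(self, name, pattern):
        self.name = name
        self.regex = re.compile(pattern)

    def match(self, text, pos):
        # every successful attempt allocates a Match object plus its groups
        m = self.regex.match(text, pos)
        if m is None:
            return None
        return self.name, m.group(), m.end()

class IdentifierTokenDef:
    name = "ID"

    def match(self, text, pos):
        # scan characters directly; nothing is allocated until a token is produced
        start = pos
        if pos < len(text) and (text[pos].isalpha() or text[pos] == "_"):
            pos += 1
            while pos < len(text) and (text[pos].isalnum() or text[pos] == "_"):
                pos += 1
            return self.name, text[start:pos], pos
        return None

Both match methods return the same (name, text, end) triple, so a tokenizer loop can swap one for the other; the win comes from the hot path no longer feeding the garbage collector.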

The last time I optimized on .NET, I used the excellent ANTS profiler, which clearly showed the slowest parts of the code in a nice interactive GUI. You always want to go after the slowest parts. Optimizing something that is either not slow or not called frequently enough to matter is a total waste of time. On .NET, execution code paths were definitely what mattered for speeding things up; they are not overwhelmed by GC the way Mono's are.

Some other misc notes:

-- Cobra programs run faster with -turbo, which turns off contracts, assertions, null checks, etc. See cobra -help for info.

-- When I upgraded from Mono 2.10.9 to 2.10.10 or 3.0.3 on my Mac, the Cobra compiler slowed down by 35-40%. So I'm sticking with 2.10.9 for now. I filed a bug report at https://bugzilla.xamarin.com/show_bug.cgi?id=9679

-- There are two broad categories of improvements to make for speed. The first are the obvious ones, where the profiler shows that a method is slow or is being called too many times. The second are the non-obvious ones, where the profiler isn't going to directly point out what can be done.

Imagine you were coding decades ago and had a linear search algorithm in your program. As the list of data got longer, the search got slower. A profiler could show you how to fine-tune your existing code, but it's never going to tell you to use a binary search or a hash table instead. Writing radically different code that performs better requires some deep thought about your problem. In this example, we already have libraries for such things, but in a specific problem domain, you'll have to figure out what that different approach should be.
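To make that concrete, here is a toy sketch in Python (not Cobra, and not compiler code): the profiler can tell you the linear lookup is hot, but switching to a binary search or a hash lookup is a design decision you have to make yourself.

from bisect import bisect_left

def linear_contains(items, key):
    # O(n): scans the whole list in the worst case
    for item in items:
        if item == key:
            return True
    return False

def binary_contains(sorted_items, key):
    # O(log n): requires the list to be kept sorted
    i = bisect_left(sorted_items, key)
    return i < len(sorted_items) and sorted_items[i] == key

def hashed_contains(item_set, key):
    # O(1) on average: requires building a set up front
    return key in item_set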

-- And the usual disclaimer: If your code is running fast enough then there are probably some bugs, refinements and features that need your attention.

HTH
Charles
 
Posts: 2515
Location: Los Angeles, CA

Re: Optimizations

Postby hopscc » Tue Feb 12, 2013 9:57 pm

Just out of interest, how much difference did your changes make to compile run times (or your test cases)?
hopscc
 
Posts: 632
Location: New Plymouth, Taranaki, New Zealand

Re: Optimizations

Postby Charles » Wed Feb 13, 2013 2:42 am

This is on Mono 2.10.9 on Mac OS X 10.6.8. The machine is a 2.66 GHz Quad-Core Intel Xeon with 1066 MHz DDR3 RAM.

My test case is the Cobra compiler itself. I copied the older Snapshot/ directory before the optimizations were pushed into it. Then I ran:
cd CobraWorkspace/Source
bin/build -v -timeit


A typical run with the slower Cobra is:

Phase timings:
32.04% 06.06secs Parsing source code
26.80% 05.06secs Generating C# code
17.32% 03.27secs Compiling C# code
14.74% 02.79secs Binding implementation
06.10% 01.15secs Binding interface
01.20% 00.23secs Counting Nodes
00.86% 00.16secs Binding inheritance
00.34% 00.06secs Reading libraries
00.32% 00.06secs Computing matching base members
00.21% 00.04secs Binding Cobra run-time library
00.06% 00.01secs Identifying .main
00.02% 00.00secs Binding use directives
00.00% 00.00secs Checking if a default number type should be suggested
100.00% 18.90secs Total for all phases

Compilation succeeded
timeit compile = 00:00:19.0274273
53322 lines compiled at 2802.4 lines/sec
156927 nodes compiled at 8247.4 nodes/sec
287093 tokens compiled at 15088.4 tokens/sec

And a typical run with the faster Cobra is:

Phase timings:
29.21% 03.91secs Parsing source code
24.26% 03.25secs Compiling C# code
20.18% 02.70secs Binding implementation
13.50% 01.81secs Generating C# code
09.20% 01.23secs Binding interface
01.61% 00.22secs Counting Nodes
00.69% 00.09secs Binding inheritance
00.49% 00.06secs Reading libraries
00.45% 00.06secs Computing matching base members
00.30% 00.04secs Binding Cobra run-time library
00.09% 00.01secs Identifying .main
00.02% 00.00secs Binding use directives
00.00% 00.00secs Checking if a default number type should be suggested
100.00% 13.38secs Total for all phases

Compilation succeeded
timeit compile = 00:00:13.5106138
53322 lines compiled at 3946.7 lines/sec
156927 nodes compiled at 11615.1 nodes/sec
287093 tokens compiled at 21249.4 tokens/sec

The overall improvement is 29.2%.

The parser is 35.5% faster (though really it was the lexer that got faster, and probably by more than that). Generating C# is 64.2% faster.
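For anyone checking the arithmetic, the percentages are just (old - new) / old applied to the phase timings above; for example, in Python:

def speedup_pct(old_secs, new_secs):
    # percentage reduction in wall-clock time relative to the slower run
    return (old_secs - new_secs) / old_secs * 100

print(round(speedup_pct(18.90, 13.38), 1))  # all phases:     29.2
print(round(speedup_pct(6.06, 3.91), 1))    # parsing:        35.5
print(round(speedup_pct(5.06, 1.81), 1))    # generating C#:  64.2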

I left the Mono garbage collector at the default, which is Boehm. In the past, when I set it to "sgen", I got another 15% improvement.

I don't have the numbers handy any more, but memory usage also came down, maybe by something like 25%, with fewer "gc resizes" and "generation collections".

A "Hello, world" program benefits less. Like 3.7% faster.

There is a performance regression in Mono which I reported at https://bugzilla.xamarin.com/show_bug.cgi?id=9679

As usual, everything runs faster on .NET and Windows, but I didn't put numbers together there.
Charles
 
Posts: 2515
Location: Los Angeles, CA

