Cobra compiler speed degradation (?)
Posted: Wed Oct 07, 2009 3:24 am
I was thinking about the cobra compiler the other day (not that it's pertinent, but I was digging a posthole at the time - one of the more pointless activities known to humankind)
and idly started wondering if the features added over the last few months had affected the speed of compilation much.
I tend to watch the timeit value for the compiler compiling itself, and I don't think it's changed since before the 0.8 release - but if it drifted up in small increments with each change, how much would I actually notice over time?
And I thought: as other changes get made in the future, how would you tell if things slowed down (unless it happened in one big noticeable lump)?
Well maybe we could get the compiler to tell us.
What if we added a lines-compiled-per-second calculation to the -timeit output (total lines across all compiled files divided by the timeit time, perhaps corrected for the time spent running the back-end compiler)?
For compiling the same files (the compiler itself) I'd expect the value to drift over a small range, but once a baseline was established it could give you an idea of any significant degradation.
Does anyone else think this would be useful to have as a small performance sanity check on changes/additions?
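Roughly, the calculation I have in mind looks like this (sketched in Python rather than Cobra, just for illustration - the `compile_func` stand-in and the names here are hypothetical, not the actual compiler code):

```python
import time

def count_lines(paths):
    """Total line count across all source files passed to the compiler."""
    total = 0
    for p in paths:
        with open(p) as f:
            total += sum(1 for _ in f)
    return total

def compile_with_rate(paths, compile_func):
    """Time a compile and report lines/sec alongside the elapsed time."""
    start = time.perf_counter()
    compile_func(paths)          # stand-in for invoking the compiler
    elapsed = time.perf_counter() - start
    lines = count_lines(paths)
    rate = lines / elapsed if elapsed > 0 else float("inf")
    print(f"timeit: {elapsed:.2f}s, {lines} lines, {rate:.0f} lines/sec")
    return rate
```

The "corrected for the back-end compiler" part would just mean subtracting the back-end's elapsed time from `elapsed` before dividing, so the rate reflects only the Cobra front end.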
The thought is the deed, so I implemented support for this on my system, but I've only been using it intermittently on my development compiler (compiling tests and little programs), not the snapshot.
The values are all over the place (from tens of lines/sec to thousands) - it looks like small single files compile much slower than many big files, which suggests to me that the compiler is (at least for the cases I tried) bound by its startup overhead... but maybe it's just wobble.
Is it worthwhile to continue down this path? Should I post the changes as a patch?