Well let me kick my posting off here with a bit of a techy one.
Just finished work on my profiler, which looks all transparent and nice.
It works by replacing the trace() command. As I use mtasc for development rather than the Flash IDE, there's no trace command as such ( No IDE, so no IDE-based trace window ).
mtasc gets around this by allowing you to overwrite the existing trace command with your own goodness. I've been using alcon as an alternative trace since moving over to mtasc, and it's pretty sweet. It uses a local connection to spit the data out to its own window, so no nasty textfields in your swf, and it works with movies embedded on a site. Using a local connection means the output is displayed a bit slower than Flash spits it out, but to me that's no big deal; so long as I get to see what's causing the problem I can wait :)
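The idea of the swap is simple: point trace at your own function ( mtasc does this at compile time via its -trace option ). Here's a rough sketch of the concept in JavaScript rather than AS2 — the names are mine, and the array stands in for the LocalConnection that actually ships the text to the viewer window:

```javascript
// Hypothetical sketch of routing trace output through a custom handler.
// In mtasc you'd point the compiler at a static method (e.g. -trace Debug.trace);
// here we just swap a plain function to show the idea.
const output = [];

// Default trace: outside an IDE there's nowhere for the message to go.
let trace = function (msg) { /* no IDE window, message is lost */ };

// Replacement trace: forwards the message to an external viewer.
// (Alcon and this profiler use a LocalConnection; we fake it with an array.)
function customTrace(msg) {
  output.push(String(msg));
}

trace = customTrace;      // overwrite the built-in with our own goodness
trace("hello profiler");
```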
Anyway in the brief moments of downtime I've been working on my own version, tailored to suit my needs. As you can see from the screeny it can output a fair bit of info from a trace command ( Such as the line number, calling package etc. ).
I've also copied the coloured output feature from alcon, so trace("test",2) outputs in a different colour, which is handy.
A colour value of 3 is classed as a fatal error, and if the flag's set then all output stops, so it acts like a breakpoint and saves you having to scroll through a lot of output checking for the line which is badly broken.
It also supports trace("_dump",object), which spits out all the properties in an object ( Also covers the type, eg testFlag:Boolean=true; ), as well as trace("_dumpMC",mc) which displays the most relevant movieclip properties.
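A "_dump" style lister is basically a for..in walk that formats name, type and value. Sketched in JavaScript ( the function name and output format are mine; typeof is capitalised just to mimic AS2's Boolean/Number/String type names ):

```javascript
// Sketch of a "_dump" style property lister: walks an object's
// enumerable properties and formats name:Type=value lines.
function dump(obj) {
  const lines = [];
  for (const key in obj) {
    const val = obj[key];
    // Capitalise typeof to mimic AS2 type names (Boolean, Number, String)
    const type = typeof val;
    const typeName = type.charAt(0).toUpperCase() + type.slice(1);
    lines.push(key + ":" + typeName + "=" + val);
  }
  return lines;
}

dump({ testFlag: true, count: 7 });
// e.g. ["testFlag:Boolean=true", "count:Number=7"]
```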
As it's a profiler and not just a trace replacement, it also handles trace("_profileStart"); trace("_profileStop"); and trace("_profileEnd"); ( Which stops the profiling altogether ). When the end command is called it outputs all the methods which have been profiled, the number of times they've been called, the quickest they ran ( In ms ), the longest time they took, and the average.
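The end-of-run report is just min/max/average bookkeeping per method name. A rough JavaScript sketch of that aggregation ( the function names and table shape are mine, not the profiler's actual internals ):

```javascript
// Sketch of the per-method timing table a profiler can report at the end:
// number of calls, quickest run, longest run, and the average (all in ms).
const stats = {};

function recordSample(method, ms) {
  const s = stats[method] || (stats[method] = {
    calls: 0, min: Infinity, max: -Infinity, total: 0
  });
  s.calls++;
  s.total += ms;
  if (ms < s.min) s.min = ms;
  if (ms > s.max) s.max = ms;
}

function report() {
  const rows = [];
  for (const name in stats) {
    const s = stats[name];
    rows.push({
      method: name,
      calls: s.calls,
      min: s.min,
      max: s.max,
      avg: s.total / s.calls
    });
  }
  return rows;
}

recordSample("render", 12);
recordSample("render", 8);
recordSample("render", 10);
// report() → [{ method: "render", calls: 3, min: 8, max: 12, avg: 10 }]
```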
So hopefully with this I'll be able to find bottlenecks quickly and speed them up before they become a performance issue. It also means testing different approaches can be done quickly to see exactly which way is quicker in a given situation.