As per request by the OP I'll chip in (without making a fool of myself, hopefully :P)
I think we can all agree that recursion is just a more elegant way of coding. If done well, it can make for more maintainable code, which is IMHO just as important as (if not more important than) shaving off 0.0001ms.
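To make that concrete, here's a trivial example (the function names are mine, purely for illustration) of the same task written both ways; the recursive version reads closer to the mathematical definition:

```javascript
// Hypothetical example: the same task written both ways.
function sumIterative(n) {
    var total = 0;
    for (var i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}

function sumRecursive(n) {
    // Base case first, then reduce the problem by one step.
    return n <= 0 ? 0 : n + sumRecursive(n - 1);
}

console.log(sumIterative(100)); // 5050
console.log(sumRecursive(100)); // 5050
```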
As for the argument that JS does not perform tail-call optimization: that's not entirely true anymore. ECMA5's strict mode paves the way for TCO (by outlawing things like arguments.callee), and proper tail calls are specified in ECMAScript 6 (ES2015), which requires strict mode. Strict mode was something I wasn't too happy about a while back, but at least I now know why arguments.callee throws errors in it. I know the link above points to a bug report, but that bug is set to WONTFIX. Besides, standard TCO is coming with ECMAScript 6.
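For illustration, here's a sketch (my own names and accumulator parameter, not from any spec) of what a function written in proper tail-call form looks like; the recursive call is the very last thing the function does, which is what makes the optimization possible:

```javascript
'use strict';

// Hypothetical sketch of a function written in proper tail-call form.
// The recursive call is the last thing the function does (nothing is
// left to compute afterwards), so an engine implementing proper tail
// calls can reuse the current stack frame instead of growing the stack.
// Caveat: engine support for this has remained patchy in practice;
// without it, a large enough n will still overflow the call stack.
function factorial(n, acc) {
    acc = acc || 1; // accumulator carries the running product
    if (n <= 1) {
        return acc;
    }
    return factorial(n - 1, n * acc); // tail position
}

console.log(factorial(5)); // 120

// For contrast: arguments.callee, which anonymous recursive functions
// used to rely on, simply throws a TypeError in strict mode.
```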
Instinctively, and in keeping with the functional nature of JS, I'd say that recursion is the more efficient coding style 99.99% of the time. However, Florian Margaine has a point when he says that the bottleneck is likely to be found elsewhere. If you're manipulating the DOM, you're probably best off focusing on making your code as maintainable as possible. The DOM API is what it is: slow.
I think it's nigh on impossible to offer a definitive answer as to which is the faster option. A lot of jsPerf tests I've seen lately show that Chrome's V8 engine is ridiculously fast at some tasks that run 4x slower on Firefox's SpiderMonkey, and vice versa. Modern JS engines have all sorts of tricks up their sleeves to optimize your code. I'm no expert, but I do feel that V8, for example, is highly optimized for closures (and recursion), whereas MS's JScript engine is not. SpiderMonkey often performs better when the DOM is involved...
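If you do want to compare for yourself, here's the rough shape of the micro-benchmark you'd run on each engine (all names hypothetical); for anything serious, a harness like jsPerf handles warm-up and statistics for you:

```javascript
// Rough sketch only: Date.now() and a loop, no warm-up, no statistics.
// The absolute numbers are meaningless; the point is that the *ratio*
// between the two versions differs from engine to engine.
function timeIt(label, fn, runs) {
    var start = Date.now();
    for (var i = 0; i < runs; i++) {
        fn();
    }
    console.log(label + ': ' + (Date.now() - start) + 'ms');
}

function sumLoop(n) {
    var total = 0;
    for (var i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}

function sumRec(n) {
    return n <= 0 ? 0 : n + sumRec(n - 1);
}

timeIt('iterative', function () { sumLoop(1000); }, 10000);
timeIt('recursive', function () { sumRec(1000); }, 10000);
```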
In short: I'd say that which technique performs better is, as always in JS, nigh on impossible to predict.