Typically it's the meatier functions that are the problem, not the tiny functions called a billion times in a loop.
When you do sample-based profiling (with a tool or by hand), the biggest hotspots often show up in tiny leaf calls that do simple things, like a function that compares two integers.
That function usually won't benefit from much, if any, optimization, and at the very least those granular hotspots are rarely top priority. It's the function calling that leaf that might be the troublemaker, or the function calling the function calling the function, like a sub-optimal sorting algorithm. With good tools you can drill down from callee to caller, and also see who spends the most time calling the callee, as in the sketch below.
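A contrived sketch of the pattern (names like less_than and insertion_sort are made up for illustration, not from any particular codebase): the sampler blames the leaf, but the caller owns the cost.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Tiny leaf function: a sampling profiler will pin a big share of the
// time here, but there's nothing left in it to optimize.
inline bool less_than(int a, int b) { return a < b; }

// The real troublemaker: an O(n^2) sort that calls the leaf roughly
// n^2/4 times on average. The fix lives here, not in less_than.
void insertion_sort(std::vector<int>& v) {
    for (std::size_t i = 1; i < v.size(); ++i)
        for (std::size_t j = i; j > 0 && less_than(v[j], v[j - 1]); --j)
            std::swap(v[j], v[j - 1]);
}

// What drilling from callee up to caller points you toward: an
// O(n log n) algorithm that calls the same leaf far fewer times.
void fast_sort(std::vector<int>& v) {
    std::sort(v.begin(), v.end(), less_than);
}
```

In the profile, less_than looks identical either way; only the caller view reveals that swapping the algorithm, not shaving cycles off the comparison, is the fix.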
In a profiling session it's often a mistake to obsess over callees without walking up to their callers in the call graph, unless you really are doing things inefficiently at the micro level. Otherwise you're sweating the small stuff and losing sight of the big picture. Just having a profiler in hand doesn't protect you from obsessing over trivial things; it's only a first step in the right direction.
Also you have to make sure you are profiling operations that align with things the users actually want to do; otherwise being totally disciplined and scientific in your measurements and benchmarks is worthless, since it doesn't reflect what customers do with the product. I had a colleague who tuned the hell out of a subdivision algorithm to subdivide a cube into a billion facets, and he took a lot of pride in that... except users don't subdivide simple 6-polygon cubes into a billion facets. The whole thing fell apart on a production car model with over 100,000 polygons to subdivide, where it couldn't get through even 2 or 3 levels of subdivision without slowing to a crawl. Put simply, he wrote code that was super optimized for unrealistically small input sizes and didn't scale at all to real-world use cases, partly because, brilliant mathematician though he was, he was unacquainted with how users actually used the product.
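A minimal benchmark sketch of that lesson, assuming a hypothetical Mesh type and subdivide() routine (the stub just quadruples the polygon count so the code compiles; substitute the real thing): measure at the input sizes your users actually have, not at toy sizes.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>

// Hypothetical stand-ins so the sketch compiles; the real mesh type and
// subdivision routine would come from the actual codebase.
struct Mesh { std::size_t polygon_count; };
Mesh subdivide(const Mesh& m) {
    return Mesh{m.polygon_count * 4};  // stub: one level quadruples facets
}

int main() {
    // A 6-polygon cube says nothing about production behavior; the
    // ~100,000-polygon model is the case that has to stay fast.
    const std::size_t sizes[] = {6, 10'000, 100'000};
    for (std::size_t polys : sizes) {
        Mesh m{polys};
        auto start = std::chrono::steady_clock::now();
        for (int level = 0; level < 3; ++level)  // realistic 2-3 levels
            m = subdivide(m);
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("%zu -> %zu polygons in %lld ms\n",
                    polys, m.polygon_count, (long long)ms);
    }
}
```

If the benchmark suite had included the 100,000-polygon row from the start, the scaling problem would have shown up long before a customer hit it.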
You have to optimize real use cases aligned with your users' interests, or else the work is worse than worthless: optimizations tend to degrade the maintainability of the code at least somewhat, so if they bring little user benefit, the codebase is left with only the negatives.