> But my question is for the situation where you have a low volume of users (like an early B2B product), so you can't afford to use them as "guinea pigs" because they are each very important.
Of course this is true, but it cuts both ways: if you don't test, you cannot improve your product, which also hurts your users.
That said, testing does not have to be a gamble; it can be a measured exercise. The bigger companies you refer to are generally very good at measuring every interaction between users and the app.
For example, Facebook measures how many videos you watch in your timeline. When they ship a new release of the timeline software, they do this:
- Before the release: they continuously measure the statistic, so they know the "normal" value, say 5 videos.
- They send the update to a small percentage of users.
- They measure how many videos that group watches; say it drops to 2, a bad update.
After measuring for a while, they decide automatically (with a team overseeing the process) along these lines:
    if (currentVideosWatched < 0.5 * videosWatchedBefore) {
      // The metric dropped by more than half: something is wrong.
      alertDevelopersToFix();
      // FB doesn't seem to revert releases, but I know of systems that
      // automatically roll back to the last stable version, which may
      // work well for you.
    } else {
      increaseAmountOfUsersForThisVersion();
    }
You can make this process as manual or automated as you want.
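If you want to automate the rollout side, a minimal sketch of percentage-based bucketing could look like the TypeScript below. The function names are my own illustration, not from any specific library; the only assumptions are that you have a stable user ID and Node's built-in crypto module.

    // Decide whether a given user gets the new version, based on a
    // rollout percentage you can raise or lower as the metrics come in.
    import { createHash } from "crypto";

    // Hash the user ID so the same user always lands in the same bucket.
    function bucketOf(userId: string): number {
      const hash = createHash("sha256").update(userId).digest();
      return hash.readUInt32BE(0) % 100; // a bucket from 0 to 99
    }

    function isInRollout(userId: string, rolloutPercent: number): boolean {
      return bucketOf(userId) < rolloutPercent;
    }

    // Start with 5% of users and increase once the metrics hold up.
    const showNewTimeline = isInRollout("user-42", 5);

Because the bucket is derived from the user ID, raising the percentage only adds users; nobody flips back and forth between versions.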
That way they prevent really bad issues from reaching everyone. You can do much the same thing in a simpler way, of course. Read more here: http://arstechnica.com/business/2012/04/exclusive-a-behind-the-scenes-look-at-facebook-release-engineering/2/
To translate this to your case:
> The problem I'm running into, is that to get this sweet feedback as soon as possible, I have to show features unpolished.
It depends on your development method, of course, but in general you should be able to deliver features that "look" good. By that I mean: you should not deploy pieces that are simply not done yet; that makes no sense. On the other hand, you also don't have to deliver fully polished versions.
This is a bit of a professional assessment you have to make. In general: build a less complex/complete version of the feature and launch it. When it gets traction, you can improve it further. Most importantly, collect the metrics, so you don't have to guess but can make real decisions based on facts.
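As a sketch of what "metrics instead of guessing" can mean in practice: record usage events and compare the same number before and after a release. The event shape and in-memory store here are placeholder assumptions; in reality you would write to your database or analytics tool.

    // A usage event: which feature, which user, when.
    type UsageEvent = { feature: string; userId: string; at: number };

    const events: UsageEvent[] = [];

    function track(feature: string, userId: string): void {
      events.push({ feature, userId, at: Date.now() });
    }

    // Average uses per active user within a time window, so you can
    // compare e.g. the week before a release with the week after.
    function usesPerUser(feature: string, from: number, to: number): number {
      const inWindow = events.filter(
        (e) => e.feature === feature && e.at >= from && e.at <= to
      );
      const users = new Set(inWindow.map((e) => e.userId));
      return users.size === 0 ? 0 : inWindow.length / users.size;
    }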
> So my question is, how does one balance those 2 concerns? (getting early, uncut feedback via actual use of a feature, vs. exposing unpolished features and thus showing users a lower standard of quality)
Try to deliver small, good-looking features together with their initial measurements; that's your baseline. You could even label them in the interface with a "beta" flag (though that might change user behavior).
For a small B2B product I would personally flag the users who should receive the beta updates, so you can start with the people who like to give feedback.
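For that per-user flagging, something as simple as the sketch below is enough; the `betaTester` field is an assumption about your user model.

    // Explicitly flag beta users instead of rolling out by percentage;
    // with a small B2B user base this list is easy to maintain by hand.
    interface User {
      id: string;
      betaTester: boolean; // opted in to receive early features
    }

    function shouldSeeBetaFeature(user: User): boolean {
      return user.betaTester;
    }

    const alice: User = { id: "u-1", betaTester: true };
    if (shouldSeeBetaFeature(alice)) {
      console.log("Render the new, still-rough feature for this user.");
    }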
The other real alternative is to hire dedicated testers. The problem is that they need experience in the B2B field you are working in; otherwise they will only find the technical bugs, not the business-logic ones.