The basic Six Sigma activities are captured by the acronym DMAIC, which stands for: Define, Measure, Analyze, Improve, Control. You apply these to the process that you want to improve: define the process, measure it, use the measurements to form hypotheses about the causes of any problems, implement improvements, and ensure that the process remains statistically "in control".
As it relates to software, the process is your software development lifecycle (SDLC) or some part of it. You probably wouldn't try to apply Six Sigma principles to the whole SDLC (or at least, not initially). Instead, you'd look for areas where you think you've got a problem (e.g. our defect rate is too high; too many regressions; our schedule slips too often; too many misunderstandings between developers and customers; etc.). Let's say for now that the problem is that too many bugs are being produced (or at least reported) each week. So you'd define the software development/bug creation process. Then you'd start collecting metrics such as the number of lines of code written each day, frequency of requirements changes, number of hours each engineer spends in meetings, and other possibly-relevant facts.
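To make that "measure" step concrete, here's a minimal sketch of what a week of collected metrics might look like. The field names, the numbers, and the bugs-per-KLOC normalization are all hypothetical, chosen just to illustrate turning raw weekly data into a rate you can compare from week to week:

```python
# Hypothetical "Measure" step: one record per week, then a derived
# rate (bugs per thousand lines of code) so weeks are comparable.
from collections import namedtuple

WeeklyMetrics = namedtuple(
    "WeeklyMetrics",
    ["week", "bugs_reported", "loc_written", "meeting_hours", "req_changes"])

log = [
    WeeklyMetrics("2024-W01", 14, 3200, 18, 2),
    WeeklyMetrics("2024-W02", 9, 2800, 22, 5),
]

for m in log:
    rate = 1000 * m.bugs_reported / m.loc_written  # bugs per KLOC
    print(f"{m.week}: {rate:.1f} bugs/KLOC")
```

The point isn't the specific fields; it's that you record the same facts every week so the "analyze" step has consistent data to work with.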
Next, you look at the data and try to discern patterns. Maybe you notice that engineering team A hits every deadline that they're given, and often even finishes tasks early! Initially, team B doesn't seem quite so on the ball -- they miss their deadlines by a day or two at least half the time, and are occasionally late by a week or more. Management sees team B as something of a problem and is looking to shake things up. However, a closer look at the data shows that team B's bug rate is much lower than team A's, and what's more, team B is often asked to fix bugs attributable to team A because management feels that team A is too valuable to spend a lot of time on maintenance.
So, what do you do? Using the data you've collected and the analysis you've performed, you suggest a change: team A and team B will each fix their own bugs. With management's blessing (and against team A's vehement opposition) you implement that change. Then you continue collecting metrics, and you continue to analyze the data to see if your change made a difference. Repeat this measure/analyze/implement cycle until the bug rate is deemed acceptable. But you're not done yet. In fact, you're never done... you need to keep measuring the bug rate and verifying that it stays within the acceptable range, i.e. that it remains statistically "in control".
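One simple way to operationalize "in control" is a control-chart-style test: flag any week whose bug count falls outside some number of standard deviations of the historical mean. This is only a sketch using the common 3-sigma rule with the sample standard deviation (a real Shewhart/XmR chart computes its limits somewhat differently), and the weekly counts are made up:

```python
# Sketch of the "Control" step: flag weeks whose bug count falls
# outside mean +/- 3 standard deviations of the historical data.
from statistics import mean, stdev

def in_control(samples, new_value, sigmas=3.0):
    """True if new_value lies within sigmas standard deviations
    of the mean of the historical samples (simple 3-sigma rule)."""
    m = mean(samples)
    s = stdev(samples)
    return abs(new_value - m) <= sigmas * s

# Hypothetical weekly bug counts after the process change:
history = [12, 9, 11, 10, 13, 8, 11, 10]

print(in_control(history, 11))  # True: within the band, no action needed
print(in_control(history, 30))  # False: out of control, investigate
```

A week that fails this test doesn't automatically mean the process has regressed, but it's the signal to go back to the "analyze" step rather than assuming everything is still fine.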
Notice that nothing here is specific to software development apart from the particular process you're improving, the kinds of metrics you collect, and so on. The activities you use to improve a software development process are the same ones you'd use for a widget manufacturing process, even though software development is quite different from widget manufacturing. All that means is that you need to apply some common sense to the kinds of goals you set for your process.