
In feedback on a (deleted) question I asked here last year, I was told that there is no easy way to do software testing. We may find prepared test cases for protocols, but in most cases the tests are based on the requirements, and for that reason most of them are unique.

Today I was preparing a thesis subject for a student in the field of automated testing (measurement technology, NOT software testing), and I also found the site Standardized Test Information Interchange.

This tells me that there is indeed some intention to standardize software testing.

Therefore I'm interested in which automated testing technologies, standards, and best practices should be applied in software development to stay on a maintainable course in the long run, assuming no previous automated software testing solution constrains the free engineering decision.

The text of the original site mentioned in this question:

Interoperability – Standardized Test Information Interchange

Have you ever tried switching from one automated testing tool to another but decided not to make the move because you already had too much time invested and too many automated test cases that existed using the old solution?

Hundreds of automated software testing tool solutions currently exist and each provides their own development language for automating test cases. One tool might use VBScript; another might use a proprietary scripting language; while yet another might let you choose from a multitude of programming languages to create your automated test cases.

The challenge with this non-standardized way of automating test case development across the automated testing tool community is that there is no interoperability and no data exchange capability among automated testing solutions. If you wanted to switch from tool A to tool B you would have to recreate all your tests in tool B; no standardized approach exists to automate this process.

To address this and other interoperability challenges, we at IDT, along with others, have proposed a standard to the OMG (www.omg.org) called the Test Information Interchange for Automating Software Test Processes for software systems (in short TestIF).

The goal of this standard is to achieve a specification that defines the format for the exchange of test information between tools, applications, and systems that utilize it. The term “test information” is deliberately vague, because it includes the concepts of tests (test cases), test results, test scripts, test procedures, and other items that are normally documented as part of a software test effort.

The long term goal is to standardize the exchange of all test related artifacts produced or consumed as part of the testing process.

SchLx
  • I've removed *lots* of commentary here to try and focus this on the question. I'm not sure it will be enough to save it, but it's a step in the right direction. – Philip Kendall Sep 28 '18 at 16:32
  • Possible duplicate of [I've inherited 200K lines of spaghetti code -- what now?](https://softwareengineering.stackexchange.com/questions/155488/ive-inherited-200k-lines-of-spaghetti-code-what-now) – Doc Brown Sep 28 '18 at 19:32
  • The linked answer indeed covers a wide range of interesting topics about the kind of situation I spend my working days in. However, I'm more interested in the software testing aspect. As in many technology-driven scenarios, choosing the right one defines the further path of development. E.g. no one would start a new project with Flash, while HTML5 has its roots in a standard! Avoid hypes, choose carefully, choose the one that has its roots! – SchLx Sep 28 '18 at 22:15
  • Regarding standards: In my field of expertise (automated measurement, also called automated testing) I would simply say that we should find tool chains or frameworks that are based on standards like ATML or ASAM-ATX. Does software testing have similar standardization? – SchLx Sep 28 '18 at 22:51
  • @SchLx: the canonical answer to your question is "get a copy of Feathers' book *Working Effectively with Legacy Code*" (as given in the first comment below the linked question). Note that Feathers defines "legacy code" as "code without tests", so yes, this **is** all about software testing. – Doc Brown Sep 29 '18 at 07:32
  • @DocBrown: The mentioned book gives many helpful use cases about how to insert barriers in order to get a grip on the code and apply tests. I'm not sure how to put it: I'm missing the step of identifying known data entities "flowing through the lines of code" and how to deal with them. E.g. "hey, that's a temperature, hmm, maybe we know something about that" or "yep, that's an email address, so I can tag it with an 'attribute' and some tests we know so well will be applied semi-automatically". – SchLx Nov 22 '19 at 05:00
  • I hope that if we can share code (e.g. on GitHub; I don't know, "why on earth would someone like to do that"), then we may share tests and test data in a somewhat similar way. – SchLx Nov 22 '19 at 05:11

1 Answer


I think "ramping up testing" is more of a subgoal than a goal. The actual goal of all testing is to mitigate risk.

To mitigate risks, you need to know what those risks are. So the first step is a risk assessment process, which usually results in a risk matrix. I would pull together a group of key stakeholders and experienced technical staff and brainstorm on what those risks may be, in part based on your history with the software and how well releases have gone.

For each risk, the business will decide the impact (e.g. the risk of a "system down" scenario may be financial or reputational loss) and assign a score (usually 1-5). Meanwhile the technical team will decide the probability, also scored from 1-5. You then multiply these two factors and sort by the product to get a forced rank ordering (i.e. prioritization) of the risks.
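
As a rough illustration of that impact × probability ranking (the risk names and scores below are invented examples, not part of the answer), a minimal sketch might look like this:

```python
# Minimal sketch of a risk matrix ranking: the business scores impact 1-5,
# the technical team scores probability 1-5, and risks are sorted by the
# product of the two. All risk names and numbers here are made up.
risks = [
    {"risk": "System down during peak hours", "impact": 5, "probability": 2},
    {"risk": "Data migration corrupts records", "impact": 4, "probability": 3},
    {"risk": "UI regression in a rarely used screen", "impact": 2, "probability": 4},
]

for r in risks:
    r["score"] = r["impact"] * r["probability"]

# Highest score first = highest priority for mitigation.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["risk"]}')
```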

Then for each risk you must have a mitigation strategy. Many of the risks will be mitigated by a certain type of testing-- functional testing, performance testing, failure mode testing, integration testing, data migration testing, penetration testing, etc. The team must then develop a test plan that provides confidence to stakeholders that the risk is mitigated.

One of the key risks is usually "Software is of low quality" or something similar. That specific risk is mitigated by Quality Assurance.

For an orderly QA, you must develop quality metrics. If you are starting from zero, I'd suggest gathering data from the existing code base and defect tracking system, so that you can establish a baseline; for example, you may find that in your last five releases you had an average of one sev1 defect and ten sev2 defects. You might then set a goal of zero sev1 and fewer than five sev2 defects, that sort of thing. The development and QA teams will then need to come up with a plan to reach that goal.
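
Purely to illustrate what such a baseline and goal check could look like (the release names, severity counts, and thresholds are assumptions for the example):

```python
# Sketch of deriving a quality baseline from a defect-tracking export and
# checking the next release against a goal. All numbers are invented.
releases = {
    "1.0": {"sev1": 2, "sev2": 12},
    "1.1": {"sev1": 0, "sev2": 9},
    "1.2": {"sev1": 1, "sev2": 10},
}

baseline = {
    sev: sum(rel[sev] for rel in releases.values()) / len(releases)
    for sev in ("sev1", "sev2")
}
print("baseline per release:", baseline)   # e.g. sev1 = 1.0, sev2 ≈ 10.3

goal = {"sev1": 0, "sev2": 4}               # zero sev1, fewer than five sev2
candidate = {"sev1": 0, "sev2": 3}          # defects found in the next release
meets_goal = all(candidate[sev] <= goal[sev] for sev in goal)
print("release meets quality goal:", meets_goal)
```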

The development team can improve quality with code reviews, automated unit tests, manual unit tests, pair coding, and other techniques. For each of these, goals may be set, e.g. for automated unit testing you may decide that 80% code coverage is required by a certain date. The QA team may use automated or manual smoke and functional tests. You may have specialized resources set up performance and stress tests. The QA and development leads should set up a process for continuous testing, measurement, and reporting.
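
To make the automated-unit-test part concrete, here is a hedged sketch of what a single automated unit test counting toward such a coverage goal might look like, using pytest as one common choice; the function under test is a made-up example, not from the answer:

```python
# test_discount.py -- sketch of an automated unit test that would count
# toward a code coverage goal. The function under test is invented.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; percent must be within 0..100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0


def test_invalid_percent_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

A coverage threshold like the 80% mentioned above could then be enforced in CI with something along the lines of `pytest --cov --cov-fail-under=80` (assuming the pytest-cov plugin is installed).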

Are these test approaches standard? Sort of. The terminology and the general techniques of each kind of testing are going to be more or less consistent. However, the prioritization for each type of testing will vary depending on where your risk areas are and how you ranked them. Some software will skip certain tests completely-- for example, you might not do certain kinds of security testing on an internal application-- while other tests may take higher priority-- testing for system stability on a high-performance 24/7 application, for example. Those choices and decisions will be business- and product-specific, and like all business decisions, are bounded by resource availability, budget, and time constraints. One size does not fit all.

John Wu
  • This covers the methodology remarkably well! Is there some sort of standardization, something similar to the article linked in my question? Does it have an impact on productivity and maintenance? – SchLx Sep 28 '18 at 23:00
  • Based on [this example](https://www.omg.org/spec/TestIF/20140918/Appendix_A_Sample_PSM_Compliant_XML.xml), TestIF appears to be a document standard, nothing more. These days, QA teams seem to use a lot of ad hoc spreadsheets (to hold their test cases, traceability matrix, etc.) and the TestIF format solves that problem, paving the way for (future) software developers to write interoperable tools to work with the documents. That is it, as far as I can tell. TestIF does not appear to define a process at all. – John Wu Sep 29 '18 at 00:16
  • Don't misunderstand me, your answer helps me greatly on the methodology part. On the other side, I'm also interested in the topic you mention in your comment. I would like to go in a direction with a standard in the background, to select a future-proof technology. In our organization there are many different development sites: some work on programming controllers, some write scripts for validation, and we in our group work on automated measurements. There is no chance of using the same tests if there is no standardization. Does TestIF or another standard solve this? – SchLx Sep 29 '18 at 06:40
  • There are standard *categories* of tests. There aren't really standard tests, unless you are talking about something like automated penetration testing. You have to tool your own tests just as developers have to write their own code. – John Wu Sep 29 '18 at 09:11